Media Preferences, Political Knowledge and Attentiveness in the 2016 US Presidential Campaign
Introduction
From the Payne Fund Studies of the 1920s [1] and their focus on the effects of motion pictures on children to Marshall McLuhan's [2] famous admonition, "the medium is the message," scholars and pundits have long recognized the importance of understanding how advancements in media influence the mass communication process and ultimately alter discourse and society. The last three decades in particular have given rise to diverse perspectives of how increasing media options in the form of expanded television offerings and the internet influence media consumption and society [3-8].
Within the larger discussion of the effects of "newer" forms of media on the democratic process, multiple streams of research have emerged. A prominent stream of research steeped in cognitive dissonance theory and selective exposure [9,10] focuses on the potential for increased media options for news and public affairs content to permit consumers to select more ideologically congruent news sources [4,11,12]. Some scholars and pundits within this tradition argue the increased ability to self-select ideologically congruent news programming potentially fosters greater polarization and less informed decision making among other notable effects [8,13].
At the same time, some academics, though they may acknowledge partisan-based news consumption, alternatively focus on the ability of increased media options to permit consumers to effectively opt out of political programming, preferring instead to satisfy other media interests [6,14]. The present study draws on both theoretical perspectives through an examination of the recent 2016 presidential campaign and election.
In Post-Broadcast Democracy, Prior [6] argues the transition from a "low-choice" broadcast television environment to a "high-choice" cable and internet environment has significantly altered our political sphere, as many individuals have increasingly opted out of consuming public affairs programming. This development, Prior argues, has been associated with a marked decline in public affairs knowledge, especially among lower-educated segments of society.
In such a diverse media environment, where increasing numbers of individuals are tuning out politics in favor of entertainment content, political candidates and campaigns become incentivized to incorporate more stylistic messaging. Indeed, such an environment would seem to encourage non-traditional candidates with the ability to exploit their celebrity status, in essence a post-broadcast candidate. As a product in many ways of our modern media environment, the candidacy of Donald Trump thus provides a unique opportunity to examine whether a celebrity-based, post-broadcast candidate appeals to entertainment-driven media consumers and, if so, whether a preference for entertainment programming is associated with lower political knowledge.
Simultaneously, the divisiveness with which Trump campaigned, especially his disdain for members of the news media and news outlets [15,16], offers an opportunity to explore the possible presence of ideological news consumption and its relationship with political knowledge. To that end, this study examines the media preferences of both Hillary Clinton and Donald Trump supporters on dimensions of both entertainment and news consumption consistent with both research traditions.
Literature Review

Expanded news options and selective exposure
New developments in media technology inevitably give rise to discussion of the potential consequences novel forms of media may have on society. Widespread adoption of newer forms of media over the last 30 years (e.g., cable television, internet) has fostered a spirited debate as to their effects on society, and in particular our political process. Some academics and pundits focus on newer media's capabilities in fostering a robust, deliberative democracy [3].
At the same time, others express growing concern over the potential for increased media options to promote greater ideological fragmentation and subsequent polarization [5,8,17,18]. Those advancing this perspective have fostered renewed interest in cognitive dissonance theory and selective exposure, the view that we tend to avoid disagreeable information, as it causes dissonance, and thus seek out congruent information sources that reaffirm our beliefs [9,10,19,20]. Indeed, a growing body of scholarship suggests news viewers in a high-choice media environment increasingly select ideologically consonant programming [4,21-23]. The tendency for news consumers to increasingly opt for ideologically consonant content (and perceive incongruent sources as biased) continues to be borne out over time through experimental research [11,13] as well as public opinion polling [24-26].
Debate exists, however, as to the potential consequences of news fragmentation and partisan news exposure. Left unchecked, Sunstein [8,18] argues, increased fragmentation and consumption along ideological lines has marked ramifications for democracy. Sunstein notes fragmentation inherently results in exposure to less diverse political information, which limits informed decision-making and ultimately compromises freedom, as freedom requires sufficient discrepant information to form one's beliefs and preferences. The work of Stroud [7], however, suggests the impact of ideologically consonant news consumption isn't certain and may not be as dire as Sunstein [8,18] posits. Although Stroud [7] recognizes the potential polarization associated with partisan news consumption [13], her research also suggests consuming consonant news appears to favorably influence political participation and solidify candidate preference.
As a whole, the literature on selective exposure to partisan news sources paints a mixed picture. While scholars [4,17] and pollsters [25,26] alike find consistent evidence of the phenomenon occurring, debate nonetheless exists as to its potential consequences. What seems clear is that, given the divisive tenor of the 2016 US presidential campaign, one would expect to observe the continued presence of partisan news sorting among supporters of the major parties, particularly as candidate Trump was openly hostile on multiple occasions to news organizations and reporters he deemed unfriendly to his candidacy. From referring to some reporters as "the lowest form of humanity" and opining he was running against the "crooked media" [27] to routinely labeling some media members as "dishonest" and "not good people" [15], it is reasonable to conclude such rhetoric may help foster continued partisan news consumption. Consistent with such rhetoric, and in keeping with research suggesting an association between partisan news consumption and polarization, it is likewise reasonable to conclude partisan news consumption will be strongest among more fervent supporters of each candidate. Accordingly, the first two hypotheses propose:

H1: News preferences of candidate supporters will reveal partisan selective exposure.

H2: Partisan news preferences will be more pronounced among stronger supporters of the candidates.
Beyond the divisive rhetoric aimed at media and others, however, the emergence of Trump as a viable candidate in the 2016 campaign gives rise to questions about the association between Trump and his supporters, particularly as he has long cultivated an image as a celebrity at a time when scholars have argued media consumers are increasingly turning away from political programming in favor of entertainment-oriented content.
High-choice offerings and the entertainment consumer
Indeed, just as scholars have voiced concern over the potential for expanded media choice to promote increased ideological fragmentation, others have posited the possibility of interest-based fragmentation along other dimensions [6,11,28,29]. Prior [6], for example, has argued increased media offerings permit viewers to opt out of political programming, preferring instead to satisfy other preferences, frequently entertainment interests.
Comparing the "low-choice" broadcast era with our modern "high-choice" media environment, Prior [6] finds that broadcast television, via the nightly news, fostered learning, both directly and indirectly, among lower educated segments of society. With few media options, the captive nature of the broadcast era promoted political learning while simultaneously mitigating the partisan aspects of elections: "Television made it easier to learn about politics for less educated Americans…. Television changed the composition of the voting public by increasing the proportion of less educated voters" [6]. This compositional change brought with it a notable decline in partisanship in elections, as greater numbers of less politically knowledgeable, and therefore less partisan, voters participated in the political process. In sum, broadcast television, through television news, produced profound political effects by informing and enfranchising less educated, less partisan voters [6].
The transition to an expanded media environment, however, reversed this effect. Prior [6] finds evidence of marked fragmentation based on consumer preferences between news and entertainment. While the transition away from broadcast to cable television provided the opportunity for many captive news viewers to switch to entertainment programming, expanded offerings simultaneously allowed news connoisseurs to watch far more news. As a result, Prior [6] argues the political knowledge gap, which was shrinking in the broadcast era, is expanding in the cable and internet era, a development that has significant implications for political behavior: "An avid news-seeker becomes almost twice as likely to go to the polls as a devoted entertainment fan when both have access to these two media" [6].
Prior's [6] analyses demonstrate, however, that cable television and the internet do not affect everyone equally: "Though political information is abundant and more readily available than ever before, political knowledge has decreased for a substantial portion of the electorate…. Those who prefer entertainment and have access to new media display the lowest levels of political knowledge and turnout" [6]. A widening knowledge gap, brought about as news junkies consume more news while entertainment fans increasingly turn away from public affairs programming, has marked ramifications for the democratic process. Such an environment, however, potentially incentivizes politicians and candidates with the ability to garner the interest and attention of entertainment-centric consumers. Put differently, a post-broadcast media environment encourages a candidate with the requisite skills to exploit such a media environment: enter Donald Trump. With a background steeped in high-profile media coverage, salesmanship and reality television, Trump's skillset seems remarkably well-suited to appeal to relatively politically disinterested, entertainment-focused media consumers, the very type of potential voter fleeing public affairs programming. As Trump himself argued throughout the campaign, he draws media coverage and ratings [30,31], even going so far as to brag about his past ratings on The Apprentice at the National Prayer Breakfast following his inauguration [32]. Given his background, Trump provides a unique opportunity to explore the ability of a celebrity candidate to appeal to politically disinterested, entertainment-centric media consumers. As Trump is an unorthodox presidential candidate with substantial name recognition cultivated, in part, through years as the star of reality television series (i.e., The Apprentice and The Celebrity Apprentice), one would expect Trump supporters to prefer entertainment programming, especially reality television and similar procedural content, as opposed to politically oriented entertainment content (e.g., political satire).
Additionally, because Prior [6] argues preferences for entertainment are associated with a decline in consumption of public affairs programming, and thus a concomitant decline in public affairs knowledge, to the extent Trump supporters demonstrate a preference for non-political entertainment, one would expect to observe lower political knowledge and relative disinterest in politics compared to news consumers. Accordingly, the next two hypotheses proposed for examination are:

H3: Entertainment preferences will differ by supporters of each candidate, with Trump supporters favoring reality-based, non-political content consistent with his entertainment background.
H4: Entertainment versus news preferences will yield significant political knowledge and political attentiveness differentials.
Methods
Proposed hypotheses were tested using the 2016 American National Election Studies (ANES) Time Series Study, which was conducted in pre- and post-election waves using both face-to-face (N=1,181) and web-based (N=3,090) survey methods. Pre-election interviews and internet surveys were conducted September 7 through November 7, 2016, with post-election follow-ups occurring November 9 through January 8, 2017. Face-to-face interviews were typically conducted in the subject's residence with the interviewer using computer-assisted personal interviewing software. The response rate for pre-election interviews was 50% (using AAPOR's RR1 method) with a 90% re-interview rate on the post-election component. Web-based surveys could be completed anywhere the respondent had internet access via computer or mobile device. The pre-election internet response rate was 44% with an 84% post-election follow-up rate. Prior to analyses, data were weighted consistent with ANES guidelines for analyses including both pre- and post-election variables.
Measures
Entertainment and news preferences: Entertainment and news preferences were examined using a battery of pre-election questions asking respondents which television programs they regularly watch from an extensive list of entertainment and news programs. The question was posed as a follow-up to a qualifying question that asked respondents, "From which of the following sources have you heard anything about the Presidential campaign?" If subjects responded they had heard anything about the campaign from response options including a) "television news programs (morning or evening)" or b) "television talk shows, public affairs, or news analysis programs," they were then presented the follow-up question probing which shows they watched regularly. Specifically, the question asked, "Which of the following television programs do you watch regularly? Please check any that you watch at least once a month." Analysis focuses on all 47 English response options spanning broadcast and cable entertainment and news programs (see Appendix 1 for a complete list of programs). For each program, subjects were offered a simple dichotomous yes or no response option. The number of valid responses for any one program ranged from N=2,149 to N=2,151.
Focus was placed on responses to the television probe because television was the only medium for which the survey offered a diverse mix of both news and entertainment content. Questions probing radio and internet usage offered response options drawn almost exclusively from news providers; thus, television offered the only platform on which to examine both news and entertainment preferences.
Based on initial analysis of media preferences for supporters of Hillary Clinton and Donald Trump across all 47 television programs (results presented below), significant differences emerged between supporters of the two candidates on 21 of 25 entertainment-oriented programs and 19 of 22 news-based programs. Programs yielding significant differences among supporters of either candidate were folded into two index news variables, Trump news (α=.66) and Clinton news (α=.74), or two entertainment index variables, Trump-Tainment (α=.56) and Clinton-Tainment (α=.61), depending on content of the program. All Clinton and Trump index variables were scaled to 1.
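As a rough illustration of this index construction, the following sketch computes a summed, 0-1 scaled index and its Cronbach's α from dichotomous viewing items. It assumes a pandas DataFrame of 0/1 responses; the program column names are hypothetical placeholders, not actual ANES variable names.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of dichotomous (0/1) items."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def build_index(df: pd.DataFrame, programs: list[str]) -> pd.Series:
    """Sum the 0/1 viewing indicators and scale the index to the 0-1 range."""
    return df[programs].sum(axis=1) / len(programs)

# Hypothetical usage with made-up column names:
# trump_news = build_index(anes, ["oreilly", "hannity", "kelly_file"])
# alpha = cronbach_alpha(anes[["oreilly", "hannity", "kelly_file"]])
```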
Additionally, three index news variables were created to differentiate partisan and mainstream news providers. A conservative news index variable was created by combining three commonly perceived conservative news-oriented programs on FOX News: The O'Reilly Factor, Hannity and The Kelly File (α=.80). A liberal news index variable was created by summing three commonly perceived liberal news-oriented programs on MSNBC: The Rachel Maddow Show, Hardball with Chris Matthews and All in with Chris Hayes (α=.53). Finally, a mainstream news index variable was created by summing the traditional three nightly broadcast news programs: NBC Nightly News, ABC World News and CBS News (α=.41). All three index variables were normed to 1.

Political attentiveness: Two pre-election questions formed the basis for calculating political attentiveness. The first asked respondents, "How often do you pay attention to what's going on in government and politics?" with five response options: "Always; most of the time; about half the time; some of the time; never." The second asked, "How much attention do you pay to news about national politics on TV, radio, printed newspapers, or the Internet?" with five response options: "A great deal; a lot; a moderate amount; a little; or none at all." Responses were recoded so larger values indicate greater attention, then summed (r=.73) and scaled to 1 (M=.61, SD=.25) to create a political attentiveness index variable.
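The attentiveness index just described might be computed along these lines, assuming the two items are coded 1 ("Always"/"A great deal") through 5 ("Never"/"None at all") as in the question wording; the variable names are illustrative.

```python
import pandas as pd

def attentiveness_index(attn_gov: pd.Series, attn_news: pd.Series) -> pd.Series:
    """Reverse-code both 1-5 items so larger values mean more attention,
    sum them, and scale the result to the 0-1 interval."""
    rev_gov = 5 - attn_gov    # 1 ("Always") -> 4, 5 ("Never") -> 0
    rev_news = 5 - attn_news  # 1 ("A great deal") -> 4, 5 ("None at all") -> 0
    return (rev_gov + rev_news) / 8.0  # summed range 0-8, scaled to 0-1

# The reported inter-item correlation (r = .73) corresponds to:
# attn_gov.corr(attn_news)
```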
Education: As the study examines aspects of political knowledge, education was included as a control variable to more effectively isolate the potential relationship between media consumption and political knowledge. Education was tapped with a single pre-test question asking respondents to identify the highest level of schooling they had completed or the highest degree received. The 16 response options, ranging from first grade to doctorate degree, were folded into seven intuitive hierarchical categories: 8th grade and under; 9th-12th grade without a diploma; high school graduate or GED; some college or associate's degree; bachelor's degree; master's degree; and professional/doctorate degree. The education variable was then normed to 1 (M=.57, SD=.18).
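Collapsing the 16 schooling responses into the seven ordered categories and norming to 1 could look like the following sketch; the numeric codes in the mapping are illustrative, so the actual ANES codebook values would need to be substituted.

```python
import pandas as pd

# Illustrative mapping from 16 raw schooling codes to 7 ordered categories.
EDU_MAP = {
    **dict.fromkeys(range(1, 4), 0),    # 8th grade and under
    **dict.fromkeys(range(4, 9), 1),    # 9th-12th grade, no diploma
    9: 2,                               # high school graduate or GED
    **dict.fromkeys(range(10, 13), 3),  # some college / associate's degree
    13: 4,                              # bachelor's degree
    14: 5,                              # master's degree
    **dict.fromkeys(range(15, 17), 6),  # professional / doctorate degree
}

def education_index(raw: pd.Series) -> pd.Series:
    """Map raw codes to the 0-6 ordinal scale and norm to the 0-1 range."""
    return raw.map(EDU_MAP) / 6.0
```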
Differentiating news preferences
To examine the extent to which supporters of Clinton or Trump demonstrate partisan news consumption consistent with the first hypothesis (H1), a crosstab analysis was conducted across candidate support for all 22 televised news-oriented programs among the 47 English-language programs probed by the ANES. Significant differences emerged for 19 programs between Clinton and Trump supporters. Programs yielding significant results are presented graphically in Figure 1, which depicts the percentage of respondents for each candidate responding they had regularly watched the news program in the last month. Programs preferred by Clinton supporters are depicted by solid bars and ordered left to right from the largest percentage of respondents (60 Minutes) to the lowest percentage of respondents (Out Front with Erin Burnett) still maintaining a statistically significant difference relative to Trump supporters. As noted in the caption below Figure 1, most differences between Clinton and Trump supporters are highly statistically significant (i.e., p<.001). Only three of the 22 news-oriented programs surveyed did not yield statistically significant differences between supporters of Clinton and Trump: CBS This Morning, Nancy Grace and Dateline.
While the news preferences of Clinton supporters are depicted in solid bars, the news preferences of Trump supporters are illustrated in patterned bars ordered right to left based on the percentage of respondents reporting they watch the program. As is evident, with the exception of 20/20, the televised news programs for which Trump supporters demonstrate a significantly stronger preference constitute the former FOX News prime-time lineup. Approximately seven times as many Trump supporters preferred The O'Reilly Factor when compared to Clinton supporters, and nearly 10 times as many Trump supporters reported watching Hannity.
At the same time, although the percentage of viewers is smaller, Clinton supporters demonstrated pronounced preferences for liberal-oriented news programming relative to Trump supporters. Nearly seven times as many Clinton supporters reported watching All in with Chris Hayes and six times as many reported watching The Rachel Maddow Show when compared to Trump supporters. Such stark differences between supporters of the two candidates in terms of partisan news viewing offer persuasive support for H1, namely that supporters of both candidates demonstrate partisan news preferences.
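The per-program comparisons described above amount to 2×2 cross-tabulations of candidate support against regular viewing, each tested for independence. A minimal sketch, assuming hypothetical column names:

```python
import pandas as pd
from scipy.stats import chi2_contingency

def program_difference(df: pd.DataFrame, program: str,
                       support_col: str = "candidate") -> tuple[float, float]:
    """Chi-square test of candidate support (Clinton/Trump) by 0/1 viewing."""
    table = pd.crosstab(df[support_col], df[program])
    chi2, p, dof, expected = chi2_contingency(table)
    return chi2, p

# Programs showing significant differences would then feed the index variables:
# significant = [prog for prog in news_programs
#                if program_difference(anes, prog)[1] < .05]
```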
As noted above, the results of the initial crosstab analysis examining differences in news preferences were used to construct index news variables for each candidate. News programs yielding statistically significant preferences by supporters of each candidate were combined and scaled to 1 for each candidate.
Partisan news viewing and level of candidate support
Consistent with literature suggesting partisan news consumption is associated with polarization [13], a stronger relationship between ideological news consumption and candidate support should emerge among more fervent supporters. Put differently, fervent supporters of the candidates should demonstrate a greater affinity for partisan/ideological news sources relative to weaker supporters of the candidates.
To examine the extent to which more fervent supporters of Clinton or Trump engage in partisan selective exposure relative to weaker supporters, a comparison of means was drawn for strong and weak supporters of each candidate across three distinct news index variables: conservative news, liberal news and mainstream news. The results of these analyses are presented visually in Figures 2 and 3. Figure 2 presents mean values for strong and weak Trump supporters (strong N=757; weak N=269), while Figure 3 presents the same for strong and weak Clinton supporters (strong N=820; weak N=298). As is evident, the level of support for each candidate is associated with greater partisan news consumption. Among Trump supporters, stronger support for the candidate is associated with a significant increase in consumption of conservative news (i.e., watching FOX News programs). Although statistically significant, F(1, 1,024)=28.46, p<.001, the effect size of the increase is relatively modest, η²=.03. Nonetheless, there is clear indication among Trump supporters that more fervent support is associated with more partisan news consumption. At the same time, there is a marginally significant, F(1, 1,024)=2.69, p=.10, decline in mainstream news consumption among stronger Trump supporters. Overall, it appears more fervent support for Trump is associated with significantly greater partisan news consumption coupled with diminished mainstream news viewing (Figures 2 and 3). A caveat here: the conservative news index produced a far more reliable variable than the mainstream news index, so results for the mainstream decline should be interpreted accordingly.
As with supporters for Trump, strong support for Clinton was associated with a notable increase in partisan news viewing. As strength of support for Clinton increased, so too did self-reported viewing of liberal news sources (i.e., MSNBC programming). As above, although the increase for partisan news viewing is highly significant, F(1, 1,116)=20.05, p<.001, the effect size is once again relatively modest, η²=.02. Nonetheless, the results for Clinton supporters lend credence to the contention that more fervent support appears to be associated with greater partisan-based news consumption. In contrast to Trump supporters, however, there is no concomitant drop-off in mainstream news consumption among strong Clinton supporters. If anything, strong Clinton backers appear to consume more mainstream news, not less. However, there is a significant, albeit small, decline in conservative news viewership among stronger Clinton supporters.
Differences in mainstream viewership among Clinton and Trump supporters aside, there does appear to be support for H2: stronger support for both candidates is associated with increased partisan news viewing. Such results speak to the potential for increased polarization, especially among more fervent supporters of liberal and conservative candidates. Having established the presence of partisan news viewing among supporters of both candidates, coupled with evidence of increased ideological news selectivity as a function of candidate support levels, the analysis shifted to more entertaining fare.
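The strong-versus-weak comparisons reported above can be reproduced in outline with a one-way ANOVA and an eta-squared computed from sums of squares; a sketch with hypothetical variable names:

```python
import pandas as pd
from scipy.stats import f_oneway

def compare_support_groups(index: pd.Series, strength: pd.Series):
    """F test of a news index across strong vs. weak supporters, plus eta^2."""
    strong = index[strength == "strong"].dropna()
    weak = index[strength == "weak"].dropna()
    F, p = f_oneway(strong, weak)
    pooled = pd.concat([strong, weak])
    ss_total = ((pooled - pooled.mean()) ** 2).sum()
    ss_between = (len(strong) * (strong.mean() - pooled.mean()) ** 2
                  + len(weak) * (weak.mean() - pooled.mean()) ** 2)
    return F, p, ss_between / ss_total  # eta^2 = SS_between / SS_total
```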
Entertainment preferences, political knowledge and attentiveness
Similar to the news analysis above, examination of entertainment preferences focused on establishing the degree to which supporters of the candidates differ in terms of entertainment consumption and, if so, to what extent these viewing patterns may be associated with levels of political knowledge and interest in politics. As above, a crosstab analysis was conducted across supporters of each candidate for the 25 entertainment shows probed by the ANES. Distinct viewing patterns emerged for 21 of the 25 entertainment-based shows. Programs yielding significant differences between supporters of the candidates are presented visually in Figure 4. Clinton supporters are depicted in solid bars ordered left to right based on the percentage of respondents affirming they viewed the program, while Trump supporters are represented by the patterned bars ordered right to left.
As anticipated, Clinton and Trump supporters demonstrated significantly different entertainment preferences across a host of shows.
In general, Clinton supporters demonstrated a significant preference for late-night entertainment programming relative to Trump supporters, particularly shows frequently incorporating political humor (e.g., Late Show, Larry Wilmore Show). Alternatively, Trump supporters preferred more reality-centric content (e.g., Shark Tank, Judge Judy, Dancing with the Stars) and crime-oriented procedurals (e.g., NCIS, Blue Bloods) relative to Clinton supporters. Such significant differences across a diverse mix of shows lend support for H3: supporters of each candidate demonstrated distinct televised entertainment viewing tendencies, with Trump supporters favoring more reality-based content as compared to Clinton supporters, who opted for late-night entertainment programming.
Based on results of the entertainment crosstab analysis, index variables were created for each candidate. Specifically, preferred programs associated with the supporters of each candidate were summed and scaled to 1 for use in subsequent analyses. Once created, the entertainment index variables were included in a regression model to determine how the entertainment preferences of each candidate's supporters were associated with political knowledge. H4 hypothesized significant knowledge differentials would emerge between supporters of the candidates to the extent Trump supporters demonstrated a preference for more non-political, reality-centric content, which was borne out by the aforementioned analysis. To test H4, political knowledge was regressed on Clinton-Tainment and Trump-Tainment. Clinton News and Trump News were also included in the model to explore the knowledge levels associated with the news preferences of each candidate's supporters. Finally, education was included as a control variable to help isolate the associations of media preferences with political knowledge.
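In outline, the knowledge model is an OLS regression of political knowledge on the four media indices plus the education control. A sketch using statsmodels, with hypothetical column names standing in for the constructed variables:

```python
import pandas as pd
import statsmodels.formula.api as smf

def knowledge_model(df: pd.DataFrame):
    """OLS of political knowledge on media indices with an education control."""
    formula = ("knowledge ~ clinton_tainment + trump_tainment"
               " + clinton_news + trump_news + education")
    return smf.ols(formula, data=df).fit()

# fit = knowledge_model(anes)  # 'anes' is a hypothetical weighted extract
# print(fit.summary())         # z-score the columns first to obtain betas
```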
The results of the OLS regression model, including both unstandardized and standardized coefficients, are presented in Table 1. Of particular interest are the results for the entertainment variables of each candidate. Significant differences emerged in terms of political knowledge associated with the entertainment preferences of the supporters of each candidate. Based on the coding of the political knowledge variable, positive coefficients and t values indicate the corresponding media preferences are associated with an increase in political knowledge, while negative values signify an associated decline. Thus, the negative coefficient and t value for Trump-Tainment reflect a decline in political knowledge associated with consumption of the entertainment preferred by Trump supporters (Table 1). At the same time, the entertainment preferences of Clinton supporters (i.e., Clinton-Tainment) are associated with a modest, yet significant, increase in political knowledge.
Review of the standardized betas (β) reveals the negative effect size for the entertainment preferences of Trump supporters to be approximately twice the size of the positive effect associated with the preferences of Clinton supporters. For context, however, the effect size for the entertainment viewing habits of supporters of either candidate is substantively small when compared to the positive effect for education, which is approximately five times that of Trump-Tainment. Nonetheless, the results demonstrate a non-trivial association between entertainment consumption habits and political knowledge.
Moreover, beyond the differential effects observed for the entertainment viewing habits of supporters of either candidate, the positive effects noted for viewing news are of interest, as they speak to the ability of televised news to promote political learning, especially among consumers who may be less likely to consume public affairs content in an arena of increased media options [6]. As the results point to marked differences in political knowledge associated with televised viewing practices, the potential exists for similar effects on dimensions of political interest and attentiveness. If, as some scholars argue [6,11], individuals uninterested in politics may opt to change the channel from news to entertainment, then the results of the previous analysis suggest the possibility of a similar association between the viewing preferences of Trump supporters and disinterest in politics. Hence, the final analysis examines the relationship between media viewing habits and political attentiveness.
Similar to the approach examining political knowledge and media preferences, a second regression model was undertaken whereby political attentiveness was regressed on the two entertainment index variables (Clinton-Tainment and Trump-Tainment) as well as the two news variables (Clinton News and Trump News). Once again, education was included as a control. The OLS estimates of this model including unstandardized and standardized coefficients are presented in Table 2.
Not surprisingly, the two news variables revealed a positive association with political attentiveness. News consumption for both Clinton and Trump supporters positively predicted the interest and attention one gives to politics. Of particular interest in the current analysis, however, is the relationship of entertainment viewing preferences to political attentiveness, captured by the two entertainment index variables, Clinton-Tainment and Trump-Tainment. As indicated in Table 2, no significant effect emerged for the entertainment preferences of Clinton supporters.
In contrast, however, a significant negative effect emerged for the entertainment viewing choices of Trump supporters. The entertainment viewing preferences of Trump supporters were associated with a significant decrease in political attentiveness. Thus, not only were the televised entertainment choices of Trump supporters associated with reduced political knowledge, as demonstrated above, but they were also associated with diminished interest in politics. Although the negative effect size of entertainment viewing among Trump supporters was relatively modest when compared to the substantively larger positive effect for viewing news, the televised entertainment preferences of Trump supporters were nonetheless consequential for the level of attention one pays to politics.
Taken together, the significant negative effects for entertainment preferences among Trump supporters on measures of political knowledge and political interest lend persuasive support to H4. Multiple analyses revealed distinct differences between supporters of both candidates in terms of entertainment preferences as well as the consequential nature of those preferences on levels of political knowledge and degree of political attentiveness.
Discussion
The present study sought to examine the news and entertainment preferences of individuals in a high-choice media environment in the context of the 2016 US Presidential contest. Drawing from two prominent theoretical perspectives on the nature of news and entertainment consumption in an increasingly diverse media landscape, the current study confirmed the presence of partisan news consumption among supporters of Hillary Clinton and Donald Trump. Consistent with a growing body of scholarship and polling on news exposure in our modern media environment [4,7,13,25], supporters of both candidates demonstrated significant preferences for like-minded political news. Moreover, selective partisan news viewing intensified when strength of candidate support was factored into news preferences. Partisan news consumption significantly increased on both the left and right among more fervent supporters of the candidates.
Although not the focus of the present study, it is additionally worth noting that differences existed between supporters of each candidate in terms of diversity of news consumption. While televised news consumption among Trump supporters tended to be more tightly wed to partisan news sources, news consumption among Clinton supporters was more diversified across both mainstream and partisan news outlets. Consider: three of the top seven news-oriented programs watched by Trump supporters were FOX News shows, with all three partisan programs viewed by more than 20% of respondents, while only one of the top 10 news programs favored by Clinton supporters was a cable news offering, Anderson Cooper 360. Such findings, reflecting greater diversity of news preferences among Clinton supporters and greater concentration of partisan news viewing among Trump supporters, especially for FOX News content, are consistent with recent polling research on the news preferences of both candidates' supporters [33].
Just as supporters of both candidates differed in terms of news preferences, so too did they differ in terms of entertainment preferences. Consistent with hypothesized expectations, distinct viewing patterns emerged between supporters of each candidate, with Clinton backers favoring late-night entertainment fare and Trump supporters opting for more reality-centric and crime-based content. Further analyses revealed the distinct entertainment preferences of Clinton and Trump supporters to be associated with significant differences in political knowledge and political attentiveness. While Clinton-Tainment was shown to be associated with a small but significant increase in political knowledge, consuming Trump-Tainment programming was associated with a significant decrease in political knowledge. Moreover, entertainment preferences of Trump supporters were shown to be associated with diminished attention paid to politics. Such results speak to the thrust of Prior's [6] observation that some forms of entertainment consumption may be associated with declines in political knowledge and interest in politics, especially among lower educated segments of society. To that end, additional research into the nature of Trump supporters and their relationship to both education and interest in politics would be beneficial. Likewise, comparisons of Trump supporters and their media consumption habits relative to the supporters of previous presidential candidates, particularly Republican candidates, would be helpful to glean a better understanding of whether Trump was successful in galvanizing some of the politically disinterested entertainment consumers Prior [6] describes. While the current study suggests a "celebrity" candidate in the vein of Trump would seemingly be better positioned to capitalize on an increasingly apolitical, entertainment-viewing segment of society, additional exploration is warranted. From a normative standpoint, bringing politically disaffected individuals into the political process would be a potential positive; yet, in spite of the results presented above, it is unclear whether the individuals depicted in the entertainment analyses associated with diminished political knowledge and political attentiveness are the same individuals Prior [6] and others [11] acknowledge in their observations of our high-choice media landscape.
Beyond calls for further research into the connection between media preferences and political behaviors, it is important to note the present study only explores media preferences from the vantage of televised content. Thus, a clear limitation of the current research in terms of its focus on only one medium also gives rise to areas of future exploration, namely examination of more diverse media consumption and resulting influences on the political process.
Additionally, the limited focus of the current study on televised consumption, further refined by a single qualifying question coupled with simple dichotomous response options, likely limited the ability to capture the full effects of media consumption on the examined political behaviors. Moreover, as noted above, some index variables demonstrated relatively modest reliability. However, in spite of less-than-ideal reliability levels for some index variables, the fact that the study findings are consistent with other scholarship and with the results of national polling firms [33] provides confidence in the findings.
Economic Efficiency of Cocoa Production in Gashaka Local Government Area, Taraba State, Nigeria
The study was carried out to analyze the economic efficiency of cocoa production in Gashaka Local Government Area of Taraba State, Nigeria. Data for the study were collected from 80 respondents in 2012 using multi-stage sampling techniques and were analyzed using the budgeting technique and a profit function. The gross income per hectare was estimated to be N153,250.00 while the total production cost per hectare was estimated to be N116,470.00, giving a gross margin per hectare of N64,005.00. The net farm income was estimated to be N35,780.00. Purchasing costs accounted for 72.9% of the total production cost with an average cost of N851/kg. The rate of return on investment (RRI) was 0.75. Profit function results revealed that the costs of cocoa seed and herbicide were inversely related to the estimated profit, while labour cost was positively related. Major production constraints identified were inadequate support for research (20%), inadequate farm tools (19%), inadequate credit (17%) and lack of storage facilities (16%). The study recommended, among others, that strengthening extension services and subsidizing farm inputs could improve farmers' profit margins in cocoa production. Keywords: Economic Efficiency, Cocoa Production, Gashaka, Constraints
Introduction
Cocoa (Theobroma cacao) is an important cash crop believed to have originated from several localities in the area between the Andes and the upper reaches of the Amazon in South America (Julius, 2007). In the 19th century, cocoa production began to expand beyond its native base in Amazonia and Meso-America, spurred by an increased demand for chocolate as an item of mass consumption. Cote d'Ivoire, which was placed third in Africa with 143,000 tonnes behind Nigeria's 196,000 tonnes in 1970, is now the largest producer in the world with 1.3 million tonnes, accounting for about 40% of total world production, while Nigeria is currently the fourth largest producer after Cote d'Ivoire, Ghana and Indonesia (International Cocoa Organization [ICCO], 2003). The dramatic growth of cocoa production in Cote d'Ivoire is very interesting in that Nigeria supplied the improved Amazon hybrid seed to Cote d'Ivoire in 1965 for commercial planting to replace the Amelonado variety hitherto grown (Opeke, 2003). There are over 500,000 cocoa farmers engaged in cocoa production in Nigeria, producing more than 200,000 tons of cocoa per year from over 600,000 hectares of land. Over 50% of this quantity is produced in Ondo State alone, with substantial quantities produced in Oyo, Ogun and Osun States.
Most cocoa farms in Nigeria were established over 40 years ago. On average, each farmer has a total of about 1.6 hectares, with holdings ranging from 0.5 to 20 hectares scattered across 2-7 different locations. These farmers either own their farms by establishing the farms themselves or by inheritance from their parents. Recently, more educated people from different sectors have gone into cocoa production (Cocoa Research Institute of Nigeria [CRIN], 2000). Presently, fourteen of the 36 states in Nigeria produce cocoa, and they are grouped into three categories according to their level of production: high-producing states (Ondo, Cross River and Osun); medium-producing states (Edo, Ogun, Oyo, Ekiti, Abia, Delta and Akwa-Ibom); and low-producing states (Taraba and Adamawa). Despite fluctuations in production, western Nigeria remains the predominant cocoa zone, accounting for about 94% of Nigeria's total output (Olayeni in Hamzat et al., 2004; Ojo, 2003). Within western Nigeria itself, most of the crop is produced in a small contiguous area generally referred to as the cocoa belt (Ojo, 2003).
The tree crop sub-sector, of which cocoa is a major component, is very important in African agriculture and contributes significantly to the income of farmers. It plays a critical role in sustaining biodiversity under sound management of natural resources and provides additional pathways for the diversification and intensification of food crop systems. The relevance of cocoa to most developing economies cannot be overstressed, as cocoa is produced by more than 50 developing countries across Asia, Africa and Latin America, all of them in tropical and semi-tropical areas. Cocoa is a high-value cash crop among farmers in the major producing areas in Nigeria. In total, more than 20 million people depend directly on cocoa for their livelihood. Approximately 90% of production is exported in the form of beans or semi-manufactured cocoa products. Cocoa was among Nigeria's leading sources of foreign exchange before the oil boom; it is still Nigeria's largest agricultural foreign trade commodity and has helped to boost the economies of the major producing states. In recent years, the production of this important cash crop for export has declined in the country owing to a number of factors: the advent of the petroleum sector, which led to the neglect of agriculture; the policies and activities of the Nigerian Cocoa Marketing Board (NCMB) of 1978-1986; non-availability and high cost of cocoa production inputs; activities of middlemen; over-aged and low-yielding trees; non-remunerative prices; non-availability of farm labour; old agronomic practices; poor nutrient status of cultivated land; and lack of credit to cocoa farmers. Other factors are poor control of pests and diseases, use of poor planting materials, poor handling of post-harvest processes and inefficient agricultural extension services (Oluyole and Usman, 2006; FGN, 2007). It was also revealed that the country's average production level of 239,000 metric tons recorded between 1970 and 1974 was far above the production level of 150,200 metric tons between 1999 and 2009, probably as a result of the abandonment of cocoa farms. Farmers in Gashaka LGA of Taraba State, Nigeria are engaged in commercial production of cocoa, presumably because of the economic gains from its production. At the same time, farmers in the study area do not appreciate the importance of record keeping and therefore do not take into consideration the costs and returns associated with cocoa production, which contributes to their inability to ascertain the profitability of their production. This study is therefore designed to:

i. describe the socio-economic characteristics of cocoa farmers in Gashaka Local Government Area;

ii. estimate the costs and returns associated with cocoa production in the study area;

iii. determine the profit-cost relationship of cocoa production; and

iv. identify constraints militating against cocoa production in the study area.
Methodology
The study was conducted in Gashaka Local Government Area of Taraba State, Nigeria, located roughly between latitudes 3°20′ and 6°28′ North and longitudes 7°9′ and 9°44′ East. Purposive and multi-stage random sampling techniques were adopted to select respondents for the study. Five (5) of the ten (10) wards in the study area were purposively selected, and seventeen (17) villages were considered in proportion to the size of the wards as the first stage. A list of the names of all cocoa farmers in each of the villages was obtained and numbered; this formed the second stage of the sampling process. At the final stage, a total of 110 farmers were randomly chosen for the study in a ratio proportional to the size of their population in each village. Descriptive statistics, gross margin analysis and a profit function were used as tools of analysis for the study.
The gross margin is given by equation (1):

GM = GFI - TVC (1)

Where: GM = Gross Margin; GFI = Gross Farm Income; TVC = Total Variable Cost.

Net farm income was calculated by equation (2):

NFI = GM - TFC (2)

Where: NFI = Net Farm Income; TFC = Total Fixed Cost.

The rate of return on investment is computed by equation (3):

R.O.I = NFI / TC (3)

Where: R.O.I = Return on Investment; TC = Total Cost of production.
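Equations (1)-(3) amount to the following arithmetic; a sketch under standard budgeting identities, with the ROI expression an assumption since the source does not spell out its denominator:

```python
def budget(gross_farm_income: float, total_variable_cost: float,
           total_fixed_cost: float) -> dict:
    """Gross margin, net farm income and return on investment per hectare."""
    gm = gross_farm_income - total_variable_cost      # equation (1)
    nfi = gm - total_fixed_cost                       # equation (2)
    total_cost = total_variable_cost + total_fixed_cost
    roi = nfi / total_cost                            # equation (3), assumed
    return {"GM": gm, "NFI": nfi, "ROI": roi}

# Example with the study's reported per-hectare values (in naira):
# budget(153_250, 88_245, 28_225)
```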
Production Function Analysis
The profit function relates maximized profit to the prices of product(s) and input(s) (Sankhayan, 1988, as cited by Musa, 2011). The function was used to determine the influence of production costs on the profit from the cocoa enterprise.
Socio-economic characteristics of respondents (n=80)
The socio-economic characteristics of the cocoa farmers showed that 87.5% are males, while 12.5% are females. By age, 48.75% fell within the range of 31-40 years, 36.25% within 21-30, 12.50% within 41-50, and only 2.50% were 51 years and above. The result agreed with that of FAO (1995). Marital status of the respondents indicated that 61.25% are married, 18.75% are single, 8.75% are widowed and 11.25% are divorced, meaning the majority of the cocoa producers are married. This agrees with the findings of Fabiyi et al. (2007) in Gombe State. The level of education results indicated that 36.25% attended secondary school, about 21.25% had non-formal education, 25% attended primary school and only 17.5% had tertiary education. This showed the farmers are sufficiently literate to keep farm records that would help them estimate the costs and returns of cocoa production. The results also revealed that farming (87.5%) is the major occupation in the study area, while 6.25% engaged in other businesses such as fishing and trading. Farming experience showed that 43.75% of the respondents had 11-20 years of experience, 40.5% had 1-10 years, about 13.75% had 21-30 years, and about 2.50% had 31 years and above, indicating that most of the farmers are experienced in cocoa production. The majority of the respondents (75%) had farm sizes of 3 hectares and above and are therefore referred to as small-scale farmers.
Cost and Return Analysis
The costs and returns analysis of cocoa production per hectare, as shown in Table 2, indicated that average variable costs were estimated at N88,245.00 while fixed costs amounted to N28,225.00 per hectare. The returns in naira in terms of gross income, gross margin, net income and return per naira invested per hectare were estimated at N153,250.00, N64,005.00, N35,780.00 and N0.7565 respectively. This result concurs with the findings of Folayan et al. (2006) and Gotsch and Burger (2001).
Estimated Production Function for Cocoa
A profit function was used to determine the influence of costs associated with cocoa production on the profit realized. Four functional forms (linear, exponential, Cobb-Douglas and semi-log) were estimated. The semi-log function had the best fit and was selected as the lead equation, based on the magnitude of the coefficient of multiple determination (R²), a priori expectations and the statistical significance of the estimated regression coefficients. The summary of the estimated relationship is presented in Table 3. The results indicate that the costs of cocoa seed and herbicide were inversely related to farmers' profit at the 5% level, implying that as the costs of cocoa seeds and herbicides decrease, profit increases. This is attributed to the relatively high cost of cocoa seeds during planting periods as well as the high cost of herbicides due to scarcity. However, the coefficient of labour was found to be positive and significant at the 1% level, implying that as labour cost increases, so does profit. This increase in labour cost in cocoa production results from seasonal scarcity and heavy dependence on hired labour during farm operations, as most cocoa operations such as land clearing, weeding and harvesting are done manually and demand much labour.
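The semi-log fit could be reproduced along these lines: profit regressed on the logarithms of the input costs, with R² compared across the candidate functional forms. Column names are hypothetical placeholders for the regressors P1-P6.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def semilog_profit(df: pd.DataFrame, cost_cols: list[str],
                   profit_col: str = "profit"):
    """Semi-log form: profit regressed on the natural logs of input costs."""
    X = sm.add_constant(np.log(df[cost_cols]))
    return sm.OLS(df[profit_col], X).fit()

# fit = semilog_profit(farms, ["seed", "labour", "herbicide",
#                              "transport", "storage", "capital"])
# fit.rsquared  # compare with linear, exponential and Cobb-Douglas fits
```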
Constraints Encountered in Cocoa Production
Inadequate support for research was found to be the most important problem (20%) of cocoa production in the study area; this slows the adoption of innovations. Inadequate farm inputs, cited by 19% of the respondents, was the next most important constraint. These inputs are farm implements and other vital requirements (agro-chemicals, fertilizers, seed, etc.), and their scarcity can be attributed to the government's inability to supply essential inputs in support of agricultural activities. Lack of modern storage facilities also constituted a constraint (16%) on cocoa production in Gashaka LGA; this can lead to increased insect pest attack on cocoa produce and subsequently a decline in farmers' returns in the study area. Lack of improved varieties (11%) also posed a problem, preventing commercial production of cocoa at a scale that meets international market demand, so production remains at a subsistence level. Other problems identified were the high cost of agro-chemicals (9%) and lack of government assistance (8%), which may also impede bumper harvests in the cocoa-producing area(s).
Conclusion and Recommendations
From the findings of the study, it can be concluded that cocoa production in the study area is a profitable business. Marketing of cocoa differs from that of other food crops, as it is prone to price fluctuations. Costs of cocoa seed and herbicides were inversely related to farmers' profit at the 5% probability level, while the coefficient of labour was positively related and statistically significant at the 1% level, indicating that as labour cost increases, profit increases.
The following recommendations are therefore proffered so as to increase the farmers' output in the study area.
1. Extension agents should, as a matter of concern, mount serious campaigns to create awareness among farmers, most especially cocoa farmers, because cocoa is a viable cash crop.

2. Since the marketing of cocoa is left to market forces, government should, as a matter of urgency, set up an agency to determine the market price of the commodity based on the average cost of production every cropping season.

3. Government or NGOs should assist the farmers by providing them with subsidized inputs such as fertilizer and other agrochemicals.

4. Pest- and disease-resistant, high-yielding seed varieties of cocoa should be introduced or made available to cocoa farmers in order to minimize costs.
Notation for the profit function (from the methods section): π = profit (N); Py = unit price of output (N); PiXi = cost of the ith variable input (N); Pi = unit price of the ith variable input; Xi = variable inputs; Z = fixed input. The regressors were: Y = output of cocoa beans (N/ha); P1 = cost of cocoa seeds (N/ha); P2 = cost of labour used (N/ha); P3 = cost of herbicide used (N/ha); P4 = cost of transportation (N); P5 = cost of storage (N); P6 = fixed capital assets (N).
Table 2: Average Costs and Returns Analysis of Cocoa Production

Table 3: Semi-Log Profit Function Result

Table 4: Constraints of Cocoa Production
Spinal-Induced Hypotension in Preeclamptic and Healthy Parturients Undergoing Cesarean Section
BACKGROUND: There is a widespread belief that spinal anaesthesia in patients with preeclampsia might cause severe hypotension and decreased uteroplacental perfusion. This study aimed to evaluate the incidence and severity of spinal-induced hypotension in preeclamptic and healthy parturients. METHODS: A total of 78 patients (40 healthy and 38 preeclamptic) undergoing Cesarean section with spinal anaesthesia were included. Spinal anaesthesia was performed with a mixture of 8-9 mg isobaric 0.5% bupivacaine, 20 mcg fentanyl and 100 mcg morphine (total volume 2.2-2.4 ml). Blood pressures (SBP, DBP, MAP) were recorded non-invasively before performing spinal anaesthesia and at 2.5-minute intervals after the spinal puncture. RESULTS: The BP falls (%) from baseline were significantly greater in the healthy parturients compared to those with preeclampsia (25.8% ± 10.1 vs 18.8% ± 17.0 for SBP, 28.5% ± 8.8 vs 22.5% ± 10.4 for DBP, and 31.2% ± 14.2 vs 18.2% ± 12.6 for MAP, p < 0.05). The incidence of hypotension in the preeclamptics was 25% compared to 53% in healthy parturients (p < 0.001). Higher doses of vasopressors, both ephedrine (16.5 ± 8.6 vs 6.0 ± 2.0 mg) and phenylephrine (105 ± 25 mcg), were required in the healthy women; there was no need for phenylephrine in the preeclamptic group. CONCLUSION: This study showed that the incidence and severity of spinal-induced hypotension in preeclamptic patients are less than in healthy women. The use of low-dose spinal anaesthesia also contributed to this finding.
Introduction
There is a widespread belief that spinal anaesthesia in patients with preeclampsia might cause severe hypotension and decreased uteroplacental perfusion. However, several studies have shown that the risk of hypotension with spinal anaesthesia in preeclampsia is not as high as was once believed, especially when a low dose of spinal anaesthetic is used [1], [2]. In fact, studies show that parturients with severe preeclampsia experience less frequent and less severe hypotension than healthy parturients [3]. The aim of this study was to evaluate the hemodynamic effects of spinal anaesthesia in patients with preeclampsia, as compared to healthy parturients undergoing Cesarean delivery.
Patients and Methods
Seventy-eight (78) parturients, 40 healthy (group SA H) and 38 preeclamptic (group SA PE), recruited over a period of 2 years (2015-2017), were included in this study after informed consent was provided and Ethics committee approval was obtained.
Inclusion criteria defined parturients as preeclamptic on the basis of a systolic blood pressure (SBP) of 160 mmHg or higher, or a diastolic blood pressure (DBP) of 100 mmHg or higher, or both, associated with proteinuria > 3 g/24 hours. All the preeclamptic patients were treated with a 4.0 g loading dose of intravenous magnesium sulfate (MgSO4), followed by a 1.5 g/h infusion for 48 hours as seizure prophylaxis. Methyl-dopa or nifedipine, or both, was given for blood pressure control, but this antihypertensive protocol was not standardised and was left to the choice of the obstetrician or anaesthesiologist. Mg therapy was discontinued just before the operation; antihypertensive drugs were withheld for at least 4 h before the spinal puncture.
Exclusion criteria were severe fetal distress, labour, placental abruption, placenta praevia, cord prolapse, less than 30 weeks' gestation, twin pregnancy, signs of hypovolemia, HELLP syndrome or coagulopathy (platelet count < 85,000), oligoanuria, and cerebral or visual disturbances.
Before the spinal puncture was performed, preoperative IV fluid, a maximum of 500 ml of 0.9% saline for the preeclamptic group and 15 mL/kg of 0.9% saline for the healthy group, was administered over 15-20 minutes with the patient in left lateral tilt. After skin disinfection, a 26-27 G Pencan needle was inserted at the L3-L4 or L2-L3 vertebral interspace. Spinal anaesthesia was performed with a mixture of 8-9 mg isobaric 0.5% bupivacaine, 20 mcg fentanyl and 100 mcg morphine (total volume 2.2-2.4 ml) in the sitting position. Each patient was then placed in the supine position with a left lateral tilt of 15-20 degrees. All of the patients in both groups continued to receive 1,000-1,500 ml of 0.9% saline after the spinal puncture and during the operation. The height of the sensory block was assessed, and after an adequate sensory block (T4 level) was achieved, the procedure was initiated.
Patients were monitored with a non-invasive automated blood pressure cuff, ECG, pulse oximetry and capnography.
Heart rate (HR) and blood pressure (BP) were recorded before performing spinal anaesthesia, at 2.5-minute intervals for 10 minutes after the puncture, and then every 5 minutes until the end of the surgery. Hypotension was defined as a decline of more than 20% in mean arterial blood pressure (MAP) below baseline in both groups, or a decrease in systolic blood pressure (SBP) to less than 100 mmHg in the healthy parturients.
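A minimal sketch of this hypotension criterion, using invented BP readings, might look as follows; the threshold logic mirrors the definition above.

# Minimal sketch of the study's hypotension criterion; readings invented.
def is_hypotensive(baseline_map, current_map, current_sbp, healthy):
    # >20% MAP fall from baseline (both groups), or SBP < 100 mmHg
    # (the SBP criterion applied to healthy parturients only)
    map_fall = (baseline_map - current_map) / baseline_map
    if map_fall > 0.20:
        return True
    return healthy and current_sbp < 100

# One hypothetical healthy parturient, (MAP, SBP) at 2.5-min intervals
baseline_map = 93
readings = [(90, 118), (78, 104), (70, 96)]
flags = [is_hypotensive(baseline_map, m, s, healthy=True) for m, s in readings]
print(flags)  # [False, False, True] -> hypotension flagged at third reading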
Hypotension was treated with boluses of 5 mg IV ephedrine, and if it persisted after 10 mg of ephedrine, IV phenylephrine 50 mcg was given. The total amounts of IV fluid administered and the total doses of ephedrine (and phenylephrine) were recorded as well. The largest falls in maternal BP and HR from baseline were also recorded and compared.
Data are presented as number, median and range, mean ± SD, or percentage as appropriate.
Fisher's exact test was used for intergroup comparisons of the incidence of hypotension, the upper sensory level and the incidence of changes in HR. Student's t-test was used to test for differences of means. A p value of less than 0.05 (p < 0.05) was considered to indicate statistical significance; results were considered highly significant if p < 0.001. Data were compiled in a Microsoft Excel worksheet.
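To make these comparisons concrete, the sketch below runs the same two tests with scipy.stats on hypothetical data chosen to be roughly consistent with the reported incidences and means; it is not the study dataset.

# Minimal sketch of the statistical tests named above; data hypothetical.
import numpy as np
from scipy import stats

# Fisher's exact test on hypotension incidence
# rows: healthy (21 of 40 hypotensive), preeclamptic (9 of 38)
table = [[21, 19], [9, 29]]
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_fisher:.4f}")

# Student's t-test on percentage MAP fall (simulated samples)
rng = np.random.default_rng(1)
healthy_fall = rng.normal(31.2, 14.2, 40)       # mean, SD as reported
preeclamptic_fall = rng.normal(18.2, 12.6, 38)
t_stat, p_t = stats.ttest_ind(healthy_fall, preeclamptic_fall)
print(f"t = {t_stat:.2f}, p = {p_t:.4f}")  # p < 0.05 -> significant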
Results
A total of 78 patients, 40 healthy (group SA H) and 38 preeclamptic parturients (group SA PE), were included in this study. No patient was excluded because of inadequate analgesia or any other reason. Patient characteristics, the dose of 0.5% bupivacaine (mg), the upper sensory level at 5 min, the spinal puncture to uterine-incision interval and the Apgar score at 5 min were similar between groups.
Preeclamptic parturients were older than those in the healthy group, included more nulliparous women, and their neonates had a younger gestational age, which was the likely reason for the lower 1-min Apgar scores of neonates in this group. However, four (4) neonates had a 1-min Apgar score < 5 in the preeclamptic group, compared to two (2) in the healthy group (Table 1). In the preeclamptic patients, SBP and DBP were consistently higher than the corresponding values among the healthy parturients, and MAP showed the same trend, remaining at a constantly higher level in the preeclamptics (Figure 1). BP decreased after the spinal block in both groups, but the BP falls were significantly greater in the healthy parturients than in the preeclamptics: 25.8% ± 10.1 vs 18.8% ± 17.0 for SBP, 28.5% ± 8.8 vs 22.5% ± 10.4 for DBP, and 31.2% ± 14.2 vs 18.2% ± 12.6 for MAP (p < 0.05). The incidence rate of hypotension in the preeclamptics was 25% and was significantly less than that of the healthy parturients (53%), p < 0.001. It should also be taken into account that the preeclamptic parturients were prehydrated with lower volumes of saline (450 versus 740 ml), and that hypotension below 100 mmHg SBP was not seen in any parturient from the preeclamptic group.
Furthermore, higher doses of vasopressors were required to correct hypotension in the healthy group, both ephedrine (16.5 ± 8.6 vs 6.0 ± 2.0 mg, p < 0.05) and phenylephrine. There was no need to use phenylephrine to correct hypotension in the preeclamptic group.
Discussion
The belief that spinal anaesthesia in patients with preeclampsia might produce severe hypotension and decreased uteroplacental perfusion has prevented the widespread use of spinal anaesthesia in these patients. It was traditionally believed that epidural anaesthesia is safer than spinal anaesthesia in preeclamptics because the former was expected to produce a lower risk of clinically significant hypotension, but this preference has now been rejected [4], [5]. Concerns that spinal anaesthesia might produce severe hypotension in the preeclamptic population have dissipated as a result of greater familiarity with this technique and fewer complications than expected following spinal anaesthesia in this population. Nowadays, spinal anaesthesia has become the preferred technique over general and epidural anaesthesia, primarily because of its unique advantages: it is a simple and practical technique with a rapid onset of action, it produces a dense sensory block, and it involves less tissue trauma and a lower risk of spinal-epidural hematoma. If time allows, it can also be used in the setting of acute fetal compromise.
Several studies have now been conducted, and reports of the risk of spinal-induced hypotension in preeclamptics are encouraging. In the most rigorous study on this issue, a multicentre controlled trial involving 100 severely preeclamptic parturients, Visalyaputra et al. concluded that the difference between spinal-induced and epidural-induced hypotension is not clinically significant [6]. A prospective study by Aya et al. found that the risk of hypotension following spinal anaesthesia in preeclamptic patients was significantly lower than the risk among healthy term parturients (17% vs 53%) [7]. Similarly, Nikooseresht M. et al. reported that the incidence of hypotension in severe preeclamptics undergoing spinal anaesthesia for C-Section was significantly lower than among healthy parturients (55% vs 89%). Factors such as the difference in gestational age, the carrying of a smaller fetus, less aortocaval compression, sympathetic hyperactivity and high vascular tone might have led to this finding [8]. Additionally, some other studies show that parturients with preeclampsia might experience less frequent and less severe hypotension than healthy ones [9], [10], [11].
The lower incidence of spinal-induced hypotension in preeclamptic patients compared to healthy ones might have several causes: 1. A preeclamptic pregnancy ends with less gestational maturity, carrying lower-birth-weight neonates (smaller uterine size) compared to a healthy pregnancy; hence the risk of aortocaval obstruction is lower. For the same reason, the epidural venous plexuses in preeclamptics are less engorged, leading to a lower cephalic spread of the local anaesthetic. Aya et al. suggested that the risk of hypotension following a subarachnoid block in preeclampsia was related to other preeclampsia-associated factors rather than to a small uterine size [9].
2. The vasodilator system in preeclampsia (regulated by the endothelial pathway via endothelial-dependent relaxation of small resistance vessels) has an altered response, thus maintaining a constantly high vascular tone independent of the spinal-induced sympathetic blockade and keeping the BP high [6].
3. The circulation of preeclamptic patients contains increased amounts of numerous potent vasopressor factors, which also keep BP at a higher level. There is also an increased sensitivity of small resistance vessels to exogenous vasopressor stimulation; this can explain the lower ephedrine dose needed to correct spinal hypotension in preeclamptics [12].
Results from our study show that the incidence of hypotension is greater in healthy parturients than in preeclamptics (53% vs 25%, p < 0.001). Spinal-induced hypotension was short-lived (1.2 min) and was easily treated with a low dose of vasopressors. The ephedrine requirement for treatment of spinal-induced hypotension in preeclampsia has been reported to be lower than that required by healthy parturients [12], [13]. Preeclamptics have also been reported to require significantly less phenylephrine to treat hypotension [14]. These results were comparable to our findings in that the total doses of IV ephedrine for treating hypotension were significantly lower for the preeclamptics (6.0 ± 2.0 mg) than for the healthy patients (16.5 ± 8.6 mg, p < 0.05). Furthermore, there was no need to treat the preeclamptics with phenylephrine.
Regardless of the previous reasons, we consider that the incidence of spinal anaesthesia-induced hypotension might also be related to the local anaesthetic dose, so a low-dose concept should yield a lower incidence of spinal hypotension, though certainly not at the expense of unsatisfactory surgical analgesia [15], [16]. In a pilot study comparing the hemodynamic consequences of two doses of spinal bupivacaine (7.5 mg vs 10 mg) for a C-Section in severe preeclampsia, predelivery MAP was lower and ephedrine requirements were greater in the 10 mg group [3]. In another study, Roofthoof and Van de Velde showed that when low-dose spinal anaesthesia (6.5 mg bupivacaine) was administered with sufentanil as part of a combined spinal-epidural (CSE) technique in shorter surgeries (less than 60 minutes), the need for epidural supplementation was rare [16].
The originality of this article lies in its concept of a mixture consisting of a low bupivacaine dose (8-9 mg) added to two opioids (lipophilic fentanyl 20 mcg and long-acting hydrophilic morphine 100 mcg), thus providing stable hemodynamics with good surgical anaesthesia and satisfactory postoperative analgesia for the next 24 hours. Adding two opioids to the local anaesthetic (LA) acts synergistically, strengthening the analgesic potential of the LA while reducing the possibility of LA dose-induced spinal hypotension. The rapid intraoperative analgesic onset of lipophilic fentanyl is well known, and some authors believe that hydrophilic, long-lasting intrathecal morphine can reduce intraoperative discomfort and improve intraoperative analgesia [17], [18]. Other researchers have reached a similar conclusion, and a decrease in intraoperative pain with spinal morphine was seen in some studies [19], [20]. In the event of a short interval between the spinal puncture and the start of the C-Section, Weigl W. et al. also suggest a mixture of two opioids, fentanyl and morphine, added to the LA, confirming the previous statements [21].
In conclusion, this study showed that the incidence and severity of spinal-induced hypotension in patients undergoing C-Section are less in preeclamptics than in healthy parturients. Like healthy patients, preeclamptics may also experience some degree of spinal hypotension, but it is short-lived and easily treated with a significantly lower ephedrine dose than in healthy parturients. The concept of low-dose spinal anaesthesia in preeclamptics can contribute successfully to reducing spinal-induced hypotension, thus positively influencing both hemodynamics and neonatal wellbeing. However, more patients and further research are needed to optimise maternal hemodynamics in preeclamptics undergoing spinal anaesthesia for C-Section.
Linking road casualty and clinical data to assess the effectiveness of mobile safety enforcement cameras: a before and after study
Objectives To use police STATS19 road casualty data and accident and emergency and in-patient information to estimate the impact of mobile safety cameras on the cost of treating individuals injured in road traffic collisions. Design A data-matching and costing exercise to link casualty and clinical information in a 'before' and 'after' study of 56 mobile safety cameras. Setting The Northumbria Police Force area of the UK covering six local authority districts. Participants Slight, serious and fatal casualties involved in road traffic collisions at mobile camera sites in the case-study area between April 2001–March 2003 and April 2004–March 2006. Primary and secondary outcome measures Changes in the number and severity of casualties at the mobile camera sites between the 'before' and 'after' period that can be attributed to mobile safety camera activity, and any impacts these changes had on the 'cost of treatment saved' by the secondary healthcare service in the case-study area. Results Using tariff values for accident and emergency and in-patient Healthcare Resource Groups, the impacts of the cameras in terms of the 'cost of treatment saved' are in the range £12 500–£15 000 per annum. However, inconsistencies between databases resulted in approximately one-third of the casualties not being matched successfully in the clinical databases. The number of closed fractures requiring investigations, treatment and follow-up care reduced considerably, although this was offset by an increase in head injury contusions and open fractures that require high-cost investigations and extensive in-patient care. Conclusions Road safety cameras could have a significant impact in terms of 'cost of treatment saved'. However, it is argued that investigating the impacts of road safety measures in the future should be based on Fully Bayesian techniques as they can produce more reliable estimates of the effects of regression to mean and general trends in casualty statistics.
Key messages

▪ On the basis of matched casualty and clinical data, it is estimated that secondary healthcare providers in the case-study area saved between £12 500 and £15 000 per annum in terms of the 'treatment saved' as a result of mobile safety camera deployment during the study period. Savings at the national level could be considerable across all safety camera partnerships.
▪ Inconsistencies between the databases resulted in approximately two-thirds of the road casualties in the study being matched successfully with their clinical information. This was only achieved by supplementing the automatic matching process with resource-intensive manual matching.
▪ Conventional statistical methods can lead to under-estimates of the effects of confounding factors, thus overvaluing the benefits of road safety measures.

Strengths and limitations of this study

▪ The main strength is a more accurate estimate of the actual benefits of mobile speed camera deployment to secondary healthcare providers in terms of 'cost of treatment saved'. The method matches actual casualties with the cost of their clinical treatment, and accounts more realistically for the confounding factors of general casualty trends and regression to mean effects.
▪ Inconsistencies between casualty and clinical databases limited the number of successful matches. Thus, potential problems of bias in the estimates of cost savings cannot be ignored.
INTRODUCTION
There is increasing evidence to suggest that road safety cameras are effective at reducing road traffic casualties, [1][2][3][4][5][6] with clear benefits for healthcare providers in terms of a reduced economic burden of medical treatment. Mobile safety cameras have now been deployed routinely across much of Great Britain for almost a decade. However, their continued use as a road safety measure remains contentious. This is often due to ongoing disputes over how much of any observed reduction in casualties can be attributed directly to the cameras, and how much instead to non-scheme effects such as regression to mean, changes in traffic flows at camera sites and general trends in casualty numbers (eg, due to improved in-vehicle safety devices). This raises the important question of how best to account for these non-scheme effects when trying to measure camera benefits accurately. The cameras' contribution to improved road safety, and any subsequent impact on the healthcare sector through changes in the number and severity of casualties, is overvalued when these confounding factors are either underestimated or ignored altogether. Unfortunately, it is argued here that this is often the case. Further, simply using police data regarding casualty severity can often be inaccurate due to misclassification errors 7 8 and using average 'cost' values for each severity class provides a less comprehensive picture of the real impacts on the healthcare sector than using the available data on the costs of treatment for individual casualties. This paper therefore addresses the following important issues:
▸ Matching casualty and clinical data to estimate the 'cost of treatment saved' to healthcare providers as a result of mobile safety camera deployment;
▸ Whether the conventional approach for accounting for regression to mean effects produces reliable estimates of camera effectiveness;
▸ The effect of general trends in casualty figures.
In reality, medical and ambulance costs represent only a tiny fraction of the estimated overall value of preventing a road casualty in the UK: for example, less than 1% of the £1.8 m for a fatal casualty and approximately 7% (of £200 000) and 5% (of £20 000) for serious and slight casualties, respectively. In comparison, the human cost element (representing pain, grief and suffering for the casualty and their close friends and relatives) accounts for as much as 70% of the value of preventing a serious casualty, and 55% and 50% of the value for fatal and slight casualties, respectively. It is important to consider these different cost elements, for example, when a potential fatal casualty becomes a seriously injured casualty due to a safety camera. Although medical and ambulance costs are much higher for serious casualties (approximately £13 000 compared with £1 000 for a fatal casualty), human costs are considerably different: £1 m for a fatal casualty compared to £140 000 for a serious casualty. Thus, relatively small increases in medical and ambulance costs must be considered in the light of much larger reductions in human costs when reducing casualty severity.
Findings are presented from recent research in the Northumbria Police Force area of the UK to estimate the 'cost of treatment saved' to regional healthcare providers resulting from mobile safety camera deployment. Initial research involving the authors 9 and funded by the (then) Northumbria Safety Camera Partnership (NSCP) established a data collection methodology and reported on the findings from a larger sample of (67) mobile camera sites. However, this research followed the conventional approach of using Empirical Bayes statistical methods to account for regression to mean, omitted the effects of general casualty trends and had no set of control sites to help estimate the expected number of casualties in the 'after' period. This research has now been extended by the current authors to include general trend effects within a different analytical approach for accounting for regression to mean that uses the less widely applied Fully Bayesian framework 10 11 with the aim of providing more reliable estimates of the impact of mobile safety camera enforcement. Importantly, these findings have implications for the assessment of road safety interventions in general, in terms of the appropriate treatment of confounding factors, and add further evidence to the case for preferring Full Bayes to Empirical Bayes methods.
Linking road casualty information from the police and patient data from health authorities to assist road safety research has been the focus of several previous studies worldwide, 12 but has often proved problematic due to incompatibilities between datasets limiting (sometimes significantly) the number of successful casualty:patient matches that can be made for further detailed analysis. The most relevant study here involved a 'before' and 'after' investigation of the epidemiological and economic impacts of 47 (44 fixed and 3 mobile) safety cameras by linking casualty and patient data in the Strathclyde region of the UK. 12 For the period 1997-2005, some 10 000 (of 19 000) road casualty records were linked successfully to approximately 30 000 hospital and death records. Using straightforward 'before' and 'after' comparisons of costs, the study estimates that the cameras contributed to savings in the region of £5 m in the study area, but acknowledges that the potential effects of confounding factors on this estimate cannot be ignored.
METHODS
The NSCP (now known as the Northumbria Road Safety Initiative), established in April 2003, is responsible for operating road safety cameras on the region's road network at sites with a known history of speeding and collisions, in accordance with national government guidelines. A 'before' and 'after' study was conducted to assess the impact of mobile cameras on the secondary healthcare sector in the region, the 'before' period covering April 2001-March 2003 and the 'after' period April 2004-March 2006. The study addressed two questions: (1) how many casualties were prevented at the mobile camera sites; and (2) what would have been the cost of treating these casualties in hospital had they occurred?
In the initial stage of the project, key data (ie, age, gender, date of collision and local authority code of collision location) were extracted from the NSCP's database for every casualty that occurred at mobile camera sites during the 'before' and 'after' periods. 9 Data were also extracted from accident and emergency departments' records at the seven hospitals in the case-study area (and those in the immediate surroundings of Carlisle, Durham and the Scottish Borders). For this, approval was granted by the local Research Ethics committee in 2005, and Research and Development Trust and Caldicott approval was obtained from each NHS hospital involved. The two lists were then matched to access medical records of casualties injured at mobile safety camera sites. Thus, casualties not admitted to hospital (ie, via accident and emergency or as an in-patient), for example those who died at the collision scene, are not included in the analysis. A two-stage data-linking process was designed. The first (automatic) stage involved seeking identical matches between police and hospital databases on three key casualty variables: age, gender and date of collision (on police records)/date of admission (on hospital records). This exercise achieved a 44% matching success rate for the 'before' period data and 48% for the 'after' period data from over 18 000 accident and emergency and over 3000 in-patient records. 9 To boost the disappointing sample size resulting from this automatic stage, a second and significantly more labour-intensive (manual) stage was implemented. Having obtained relevant data protection approvals, this involved obtaining the names of unmatched casualties from the police and interrogating databases at each of the 11 hospitals in the study area and the immediate surroundings. This increased the matching success rate in the 'before' and 'after' periods to 66% and 68%, respectively.
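The automatic stage of this linkage is essentially an exact join on the three key variables. A minimal pandas sketch with hypothetical field names and records is given below; note how a one-day admission delay defeats the automatic match, one of the failure modes listed in the Discussion.

# Minimal sketch of stage-one (automatic) casualty:patient matching;
# all identifiers and records below are hypothetical.
import pandas as pd

police = pd.DataFrame({
    "casualty_id": [1, 2, 3],
    "age": [34, 52, 19],
    "gender": ["F", "M", "M"],
    "date": pd.to_datetime(["2002-05-01", "2002-06-10", "2002-07-03"]),
})
hospital = pd.DataFrame({
    "patient_id": ["a", "b", "c"],
    "age": [34, 52, 23],
    "gender": ["F", "M", "F"],
    "date": pd.to_datetime(["2002-05-01", "2002-06-11", "2002-07-03"]),
})

matched = police.merge(hospital, on=["age", "gender", "date"], how="inner")
unmatched = police[~police["casualty_id"].isin(matched["casualty_id"])]
print(matched)    # casualty 1 matches automatically
print(unmatched)  # casualties 2 and 3 pass to manual matching; casualty 2
                  # fails only because a late-evening collision was admitted
                  # after midnight, shifting the hospital date by one day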
Clinical information was gathered for each matched casualty by grouping them on the basis of their operational demands on the secondary healthcare service. Casualties were clustered into Healthcare Resource Groups (HRGs) based on similar amounts of healthcare resources consumed using the International Classification of Diseases (ICD) and the Office of Population Census and Surveys classification of surgical operations (OPCS). A tariff is allocated to each HRG using data returned annually from National Health Service Trusts reflecting a national average cost of providing healthcare to patients in each HRG (table 1). Using medical records, each casualty was then allocated to one of the eight HRGs.
Patients who were then admitted to hospital from accident and emergency departments were allocated to one of 700 in-patient HRGs that also carry a cost of treatment tariff. Here, tariffs were clustered into £500 bands to reduce the in-patient HRGs to a more manageable number. Each matched casualty in the study was then allocated to a single accident and emergency and in-patient HRG combination. These observed frequencies from the 'before' data can then be used to estimate the probability of a casualty, which did not occur in the 'after' period, falling into a particular HRG combination and hence the cost of the treatment that had been prevented.
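The sketch below illustrates this costing logic end to end: tariffs are clustered into £500 bands, 'before'-period frequencies become probabilities over HRG combinations, and the expected treatment cost of a prevented casualty follows. All tariffs and counts are hypothetical.

# Minimal sketch of HRG banding and 'cost of treatment saved' estimation;
# the tariffs, counts and prevented-casualty figure are hypothetical.
import numpy as np

tariffs = np.array([640.0, 980.0, 1450.0, 2100.0])  # per HRG combination (£)
bands = (tariffs // 500) * 500                      # cluster into £500 bands
print(f"£500 bands: {bands}")

before_counts = np.array([120, 60, 30, 10])         # matched 'before' casualties
p = before_counts / before_counts.sum()             # P(casualty in combination)
expected_cost = float(p @ tariffs)                  # expected cost per casualty

prevented = 20                                      # casualties prevented
print(f"Expected cost per prevented casualty: £{expected_cost:.0f}")
print(f"Estimated treatment saved: £{prevented * expected_cost:,.0f}")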
To account for confounding factors, the conventional approach to quantify regression to mean generally relies on Empirical Bayes techniques (eg, see ref. 1 and 9) and should use casualty data from a representative sample of control sites to predict the expected number of casualties in the 'after' period in the absence of any intervention. The difference between the Empirical Bayes estimate and the observed casualty frequency is then attributed to the road safety camera ( plus any trend effects). A weakness here is that the standard application of the Empirical Bayes method only produces a point estimate of the mean number of casualties expected in the 'after' period. Frequency distributions of casualties are predominantly skewed positively, with the median value being the usually accepted descriptor of such distributions rather than the mean. In positively skewed distributions, the mean is almost always higher than the median, suggesting (misleadingly) a higher number of expected casualties in the 'after' period after accounting for regression to mean, hence a lower regression to mean effect and thus an overestimate of the effectiveness of the road safety measure. Instead, Fully Bayesian methods are increasingly being recommended 10 11 to produce a frequency distribution (rather than a point estimate of the mean) of the expected number of casualties in the 'after' period that can be described by a range of statistical summaries (such as the mean, median and SD or even plausible ranges for the parameter of interest). Both Empirical and Fully Bayesian approaches are implemented through multiple linear regression models to predict the number of casualties at each mobile camera site in the 'after' period using explanatory variables relating to site-specific vehicle speed profiles, daily traffic flow, road type and road classification. To improve the reliability of the regression model outputs, it is crucial that both sets of sites (camera and control) are as comparable as possible in terms of the explanatory variables to control for the effects of all other factors except for the effect of a safety camera. To test the degree of comparability in the explanatory variables between the control and camera sites, a Monte Carlo permutation test 13 was conducted on the site characteristics data which confirmed that our sites are comparable at the 5% significance level. As the control sites are only a relatively small sample of all possible control sites that could have been included, a degree of uncertainty clearly exists over the parameter estimates for the explanatory variables in the regression model. Unfortunately, this uncertainty is ignored in the Empirical Bayes approach as only a point estimate of each parameter is calculated. In the Fully Bayesian approach, however, a statistical distribution is produced for each parameter estimate to reflect, and carry forward into the 'after' period, the inherent uncertainty and variability. The Fully Bayesian approach also has the added advantages of producing much more realistic SEs as all sources of variability are accounted for, being more flexible both in terms of the models that can be tested to improve model fit (eg, the Weibull/lognormal models) and in terms of enabling general trends in casualty numbers to be included relatively easily as a trend statistic in the regression model. 
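The skewness argument can be made concrete with a toy conjugate model. In the sketch below, a Gamma posterior over a site's underlying casualty rate (a standard assumption for Poisson counts, and only a stand-in for the paper's regression models) has a mean above its median, so carrying forward the mean alone, as the Empirical Bayes method does, understates regression to mean.

# Toy Poisson-Gamma model contrasting an Empirical Bayes point estimate
# with the Fully Bayesian posterior; all numbers are hypothetical.
from scipy import stats

alpha0, beta0 = 2.0, 0.5   # Gamma prior, as if fitted to control sites
observed_before = 9        # 'before'-period count at one camera site

# Conjugate update gives the posterior over the site's casualty rate
posterior = stats.gamma(a=alpha0 + observed_before, scale=1.0 / (beta0 + 1.0))

eb_point = posterior.mean()     # EB carries forward only this number
fb_median = posterior.median()  # FB retains the whole (skewed) distribution
print(f"posterior mean = {eb_point:.2f}, median = {fb_median:.2f}")
# mean > median for this positively skewed posterior, so relying on the
# mean predicts more 'after' casualties, attributes less to regression to
# mean, and thus overstates the camera's effect.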
Like-for-like comparisons between Empirical and Fully Bayesian estimates should be made with caution, however, as important differences exist in their statistical approaches.
To account for general trends in casualties, data from the Northumbria road network not covered routinely by mobile camera enforcement suggest a consistent downward trend in total casualties since the start of the 'after' period (approximately a 4.7% reduction per annum). This trend is assumed also to have occurred at the treated sites. The trend statistic built into the regression model reflects the trend in the estimate of the expected number of casualties at camera locations in the 'after' period. To represent the uncertainty about how much the changes in casualty figures at camera sites are related to overall casualty figures at non-camera sites (ie, a drop of 4.7% per year) and casualty figures from the 'before' period only (ie, no annual reduction in the 'after' period due to trend), the statistic is allowed to vary between 1 (no trend effect) and 0.906 (2 years of casualty reduction at 4.7% per year in the 'before' period) with equal probability.
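A minimal sketch of folding this trend statistic into the prediction is shown below, reusing the toy posterior from the previous sketch; the uniform draw over [0.906, 1] is one reading of 'with equal probability', and a two-point prior on {0.906, 1} would be an equally defensible alternative.

# Minimal sketch of combining the casualty-rate posterior with the trend
# statistic; posterior parameters carried over from the toy model above.
import numpy as np

rng = np.random.default_rng(0)
rate_samples = rng.gamma(shape=11.0, scale=1.0 / 1.5, size=100_000)
trend = rng.uniform(0.906, 1.0, size=rate_samples.size)  # trend multiplier

expected_after = rate_samples * trend
print(f"with trend:    mean = {expected_after.mean():.2f}")
print(f"without trend: mean = {rate_samples.mean():.2f}")
# The camera effect is then the observed 'after' count minus this
# (trend-adjusted) expectation, not minus the raw 'before' count.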
RESULTS
The study identified 436 casualties at the 56 mobile camera sites in the 'before' period. In comparison, 287 casualties were identified at the set of 67 'control' sites during the same period. In the 'after' period, casualties fell by 138 (−32%) to 298 at the mobile camera sites. Using the tariff values for each accident and emergency/in-patient HRG combination and the probability of a casualty falling into each combination calculated from the 'before' clinical data, estimates can be made of the effect of the mobile cameras on the cost of treatment saved as a result of the casualties that were prevented. From the Empirical Bayes approach (which ignores the impact of trend), the estimate of treatment saved is approximately £25 600 over the 2-year 'after' period. For the Fully Bayesian analysis, estimates of savings are available using either the mean or median estimate of casualties in the 'after' period, both with and without the effect of trend. Using the mean value, estimates of the cost of treatment saved during the 'after' period are £30 900 (without trend) and £26 500 (with trend); with the median value, £30 900 (without trend) and £26 700 (with trend).
To provide an insight into the possible reasons behind the changes in cost of treatment, table 2 reports changes in the frequency of the most frequent injuries occurring in the 'before' and 'after' periods in the initial study, 9 noting that some casualties could of course sustain more than one type of injury. For example, the number of closed fractures has reduced considerably. These injuries require investigations, treatment and follow-up care. On the other hand, the number of soft tissue inflammations increased, although these injuries are usually referred to general practitioners. The frequency of head injury contusions and open fractures increased and these injuries require high-cost investigations and extensive in-patient care.
DISCUSSION AND CONCLUSIONS
This study's principal findings are that, based on the matched casualty and clinical data, the estimated 'cost of treatment saved' by secondary healthcare providers in the study area is between £12 500 and £15 000 per annum during the study period as a result of the deployment of mobile safety cameras. If these findings are typical, then annual savings at a national level across all the safety camera partnerships that cover the vast majority of the UK could be considerable. The study identified inconsistencies between available casualty and clinical databases that limited the number of successful matches that could be made, and also that conventional statistical methods have the potential for underestimating the effects of regression to mean, thus over-valuing the benefits of road safety interventions.
The main strength of the study, which sets it apart from previous research, lies in the development of a procedure for estimating more accurately the actual benefits of mobile safety camera deployment in terms of the 'cost of treatment saved', with the method accounting more realistically for the confounding factors of regression to mean and general casualty trends. Casualties are matched with the cost of their clinical treatment and the procedure has the potential to be used in the evaluation of a wide range of other road safety measures. Therefore, the results clearly have implications for the cost effectiveness of mobile safety cameras, especially if the benefits in terms of casualty reduction are not as great as currently thought due to the underestimation of regression to mean. Until 2007, camera operations were funded by the fine revenues they generated through a hypothecation scheme. Since then, however, funding has been through a road safety grant where safety cameras have to compete against other road safety initiatives for financial support. This competition has therefore focused attention very sharply on safety cameras' value for money relative to other road casualty reduction measures, and the outcome of this competition for funds will determine whether or not local road authorities in the UK continue with the policy of traffic speed enforcement through speed cameras and at what level.
The main limitation of the study is the disappointingly low rate of successful matches (44% and 48%) from the automatic linking of the casualty and clinical databases used in the initial study, and the problems of bias this can create. Although these rates are consistent with previous matching exercises, 8 14 overall rates have clearly not improved significantly over the past 15-20 years. Possible reasons for failed matches based on similar studies are suggested elsewhere. 15 Here, common issues that had to be resolved in the matching process included:
▸ Casualties of the same age and gender from separate collisions on the same day attending different hospitals;
▸ Incorrect casualty ages and/or dates being recorded, for example for the collision or accident and emergency admittance;
▸ Casualties injured in late-evening collisions (before midnight) arriving at hospital the following day, causing a date mismatch between the date of collision and the date of admittance;
▸ Casualties from the same collision attending different hospitals;
▸ Police data recording casualties' age while hospitals record date of birth.
These issues meant that the automatic linking procedure still had to be supplemented by time-consuming manual methods to identify unmatched casualties and boost the sample size. Approval had to be gained from appropriate data protection officials to allow direct enquiries to Northumbria Police to release further casualty information. As already mentioned, the issue of unmatched casualties introduces the potentially serious problem of bias, in this case into the estimates of the cost savings. Clearly, it is not known how the unmatched casualties in the 'before' period are distributed between the accident and emergency and in-patient HRG combinations. If this distribution of unmatched casualties is weighted more towards the higher-cost combinations (compared to the distribution for matched casualties), then the cost savings will be underestimated, as the casualties that did not occur in the 'after' period will be under-represented in the higher-cost combinations, and vice versa.
An approach involving an integrated casualty and clinical database would overcome many of these data issues, or at least a higher degree of consistency between the two in recording key information such as dates of birth and dates of collision and hospital admittance. Indeed, single databases for injury research have been advocated for some time now. [16][17][18] This would make larger-scale assessments of road safety schemes, for example at the national and international scale, much more feasible from a human resource perspective. The data-linking approach described here also serves as a reminder of the benefits of such an approach to evaluation compared to using simple casualty classifications of fatal, serious and slight as misclassifications can often occur 7 8 and there was evidence of some possible discrepancies revealed in this study.
From the statistical analysis of the effect of confounding factors, it is clear that site selection issues are extremely important in determining the location of camera sites to generate the maximum return in terms of casualty reduction. From the evidence here, regression to mean effects can significantly reduce the apparent impact of mobile safety cameras, which may prompt a re-evaluation of the current belief that road safety schemes generate high value for money. Deployment sites in Great Britain are selected typically on the basis of the casualty history during a 3-year baseline period. Extending this period up to (say) 5 years or longer where data exist would help highlight either a growing casualty problem that a safety camera might help solve, or simply a short-term 'blip' after which annual casualty rates would return to existing levels without the unnecessary expense of an intervention, allowing limited resources to be deployed elsewhere. It is also recommended to avoid aggregate casualty figures for the baseline period, as these can mask downward trends in casualty numbers or atypically 'bad' years.
In conclusion, this paper has presented evidence to suggest that conventional approaches to account for regression to mean effects that rely on Empirical Bayes techniques could lead to overoptimistic assessments of the value of road safety measures, suggesting that value-for-money decisions may not be optimal. This problem becomes more serious when the frequency distribution of predicted casualties at a treatment site shows clear positive skewness, with an increasing difference between the predicted mean and median values. It is recommended here that a Fully Bayesian approach is adopted which, it is argued, is statistically more appropriate for handling casualty data and flexible enough to allow confounding factors to be incorporated more rigorously, with the end result of more reliable estimates of the impacts of road safety measures. This study has demonstrated the value of including the effects of (generally downward) trends in casualty profiles to provide a more accurate picture of scheme-only impacts. Unfortunately, these effects are often omitted. The 'cost of treatment saved' suggested here may seem modest once the confounding factors have been accounted for appropriately. In context, reported casualties in the Northumbria region in 2010 represent only a very small percentage (approximately 2.5%) of the Great Britain total. If the savings suggested here were replicated proportionally elsewhere, then the total savings in terms of treatment could run into many millions of pounds over the lifetime of safety camera partnerships. Further, these calculations do not include the additional 'costs' mentioned in the opening section of this paper that are borne by society in general, such as pain, grief and suffering and loss of output resulting from road traffic collisions, which would increase these estimates considerably.
Dopamine role in learning and action inference
This paper describes a framework for modelling dopamine function in the mammalian brain. It proposes that both learning and action planning involve processes minimizing prediction errors encoded by dopaminergic neurons. In this framework, dopaminergic neurons projecting to different parts of the striatum encode errors in predictions made by the corresponding systems within the basal ganglia. The dopaminergic neurons encode differences between rewards and expectations in the goal-directed system, and differences between the chosen and habitual actions in the habit system. These prediction errors trigger learning about rewards and habit formation, respectively. Additionally, dopaminergic neurons in the goal-directed system play a key role in action planning: They compute the difference between a desired reward and the reward expected from the current motor plan, and they facilitate action planning until this difference diminishes. Presented models account for dopaminergic responses during movements, effects of dopamine depletion on behaviour, and make several experimental predictions.
Introduction
Neurons releasing dopamine send widespread projections to many brain regions, including the basal ganglia and cortex (Björklund and Dunnett, 2007), and substantially modulate information processing in the target areas. Dopaminergic neurons in the ventral tegmental area respond to unexpected rewards (Schultz et al., 1997), and hence it has been proposed that they encode reward prediction error, defined as the difference between obtained and expected reward (Houk et al., 1995; Montague et al., 1996). According to the classical reinforcement learning theory, this prediction error triggers an update of the estimates of expected rewards encoded in the striatum. Indeed, it has been observed that dopaminergic activity modulates synaptic plasticity in the striatum in a way predicted by the theory (Reynolds et al., 2001; Shen et al., 2008). This classical reinforcement learning theory of dopamine has been one of the greatest successes of computational neuroscience, as the predicted patterns of dopaminergic activity have been seen in diverse studies in multiple species (Eshel et al., 2016; Tobler et al., 2005; Zaghloul et al., 2009).
However, this classical theory does not account for the important role of dopamine in action planning. This role is evident from the difficulties in initiating voluntary movements seen after the death of dopaminergic neurons in Parkinson's disease. It is also consistent with the diversity in the activity of dopaminergic neurons, with many of them responding to movements (da Silva et al., 2018; Dodson et al., 2016; Howe and Dombeck, 2016; Jin and Costa, 2010; Lee et al., 2019; Schultz et al., 1983; Syed et al., 2016). The function of dopamine in energizing movements is likely to come from the effects it has on the excitability or gain of the target neurons (Lahiri and Bevan, 2020; Thurley et al., 2008). Understanding the role of dopamine in action planning and movement initiation is important for refining treatments for Parkinson's disease, where the symptoms are caused by dopamine depletion.
A foundation for a framework accounting the role of dopamine in both learning and action planning may be provided by a theory called active inference (Friston, 2010). This theory relies on an assumption that the brain attempts to minimize prediction errors defined as the differences between observed stimuli and expectations. In active inference, these prediction errors can be minimized in two ways: through learning -by updating expectations to match stimuli, and through action -by changing the world to match the expectations. According to the active inference theory, prediction errors may need to be minimized by actions, because the brain maintains prior expectations that are necessary for survival and so cannot be overwritten by learning, e.g. an expectation that food reserves should be at a certain level. When such predictions are not satisfied, the brain plans actions to reduce the corresponding prediction errors, for example by finding food.
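This two-route idea can be shown concretely in a few lines of Python; the scalar quantities, learning rates and step counts below are illustrative assumptions, not part of active inference or of the models developed in this paper.

# Minimal sketch: one prediction error, two ways to shrink it. Values invented.
def minimize_by_learning(expectation, observed, lr=0.2, steps=20):
    for _ in range(steps):
        error = observed - expectation
        expectation += lr * error       # learning: move expectation to data
    return expectation

def minimize_by_action(desired, plan_value, lr=0.2, steps=20):
    for _ in range(steps):
        error = desired - plan_value
        plan_value += lr * error        # action: move the world/plan to goal
    return plan_value

print(minimize_by_learning(expectation=0.0, observed=1.0))  # -> approx. 1.0
print(minimize_by_action(desired=1.0, plan_value=0.0))      # -> approx. 1.0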
This paper suggests that a more complete description of dopamine function can be gained by integrating reinforcement learning with elements of three more recent theories. First, taking inspiration from active inference, we propose that prediction errors represented by dopaminergic neurons are minimized by both learning and action planning, which gives rise to the roles of dopamine in both these processes. Second, we incorporate a recent theory of habit formation, which suggests that the habit and goal-directed systems learn on the basis of distinct prediction errors (Miller et al., 2019), and we propose that these prediction errors are encoded by distinct populations of dopaminergic neurons, giving rise to the observed diversity of their responses. Third, we assume that the most appropriate actions are identified through Bayesian inference (Solway and Botvinick, 2012), and present a mathematical framework describing how this inference can be physically implemented in anatomically identified networks within the basal ganglia. Since the framework extends the description of dopamine function to action planning, we refer to it as the DopAct framework. The DopAct framework accounts for a wide range of experimental data including the diversity of dopaminergic responses, the difficulties in initiation of voluntary movements under dopamine depletion, and it makes several experimentally testable predictions.
eLife digest

In the brain, chemicals such as dopamine allow nerve cells to 'talk' to each other and to relay information from and to the environment. Dopamine, in particular, is released when pleasant surprises are experienced: this helps the organism to learn about the consequences of certain actions. If a new flavour of ice-cream tastes better than expected, for example, the release of dopamine tells the brain that this flavour is worth choosing again.

However, dopamine has an additional role in controlling movement. When the cells that produce dopamine die, for instance in Parkinson's disease, individuals may find it difficult to initiate deliberate movements. Here, Rafal Bogacz aimed to develop a comprehensive framework that could reconcile the two seemingly unrelated roles played by dopamine.

The new theory proposes that dopamine is released when an outcome differs from expectations, which helps the organism to adjust and minimise these differences. In the ice-cream example, the difference is between how good the treat is expected to taste, and how tasty it really is. By learning to select the same flavour repeatedly, the brain aligns expectation and the result of the choice. This ability would also apply when movements are planned. In this case, the brain compares the desired reward with the predicted results of the planned actions. For example, while planning to get a spoonful of ice-cream, the brain compares the pleasure expected from the movement that is currently planned, and the pleasure of eating a full spoon of the treat. If the two differ, for example because no movement has been planned yet, the brain releases dopamine to form a better version of the action plan. The theory was then tested using a computer simulation of nerve cells that release dopamine; this showed that the behaviour of the virtual cells closely matched that of their real-life counterparts.

This work offers a comprehensive description of the fundamental role of dopamine in the brain. The model now needs to be verified through experiments on living nerve cells; ultimately, it could help doctors and researchers to develop better treatments for conditions such as Parkinson's disease or ADHD, which are linked to a lack of dopamine.

Results

To provide an intuition for the DopAct framework, we start by giving its overview. Next, we formalize the framework and show examples of models developed within it for two tasks commonly used in experimental studies of reinforcement learning and habit formation: selection of action intensity (such as frequency of lever pressing) and choice between two actions.
Overview of the framework
This section first gives an overview of the computations taking place during action planning in the DopAct framework, and then summarizes how these computations could be implemented in neural circuits including dopaminergic neurons.
The DopAct framework includes two components contributing to the planning of behaviour. The first component is a valuation system, which finds the value v of reward that the animal should aim at acquiring in a given situation. A situation of an animal can be described by two classes of factors: internal factors connected with levels of reserves such as food, water, etc., to which we refer as 'reserves', and external factors related to the environment, such as stimuli or locations in space, to which we refer as a 'state', following reinforcement learning terminology. The value v depends on both the amount of reward available in state s and the current level of reserves. For example, if the animal is not hungry, the desired value is equal to v = 0 even if food is available. The second component of the DopAct framework is an actor, which selects an action to obtain the desired reward. This paper focusses on describing computations in the actor. Thus, for simplicity, we assume that the valuation system is able to compute the value v, but this paper does not describe how that computation is performed. In simulations we mostly focus on the case of low reserves, and use a simple model similar to a critic in standard reinforcement learning, which just learns the average value v(s) of the resource in state s (Sutton and Barto, 1998). Extending the description of the valuation system will be an important direction for future work, and we come back to it in the Discussion.
The goal of the actor is to select an action to obtain the reward set by the valuation system. This action is selected through inference in a probabilistic model, which describes the relationships between states, actions and rewards, which we denote by s, a and R. Following reinforcement learning convention, we use R to denote the total reward defined in Equation 1.1 of Figure 1A, which includes the current reward r and the future reward value v computed by the valuation system. The DopAct framework assumes that two systems within the actor learn distinct relationships between these variables, shown in Figure 1A. The first system, shown in orange, learns how the reward depends on the action selected in a given state, and we refer to it as 'goal-directed', because it can infer actions that typically lead to the desired reward. The second system, in blue, learns which actions should generally be chosen in a given state, and we refer to it as 'habit', because it suggests actions without considering the value of the reward currently available. Both the goal-directed and habit systems propose an action, and their influence depends on their relative certainty. Figure 1B gives an overview of how these systems contribute to action planning in a typical task. During initial trials, the valuation system (shown in red) evaluates the current state s and computes the value of the desired reward v, and the goal-directed system selects the action a. At this stage the habit system contributes little to the planning process, as its uncertainty is high. As training progresses, the habit system learns to mimic the choices made by the goal-directed system (Miller et al., 2019). On later trials the action is jointly determined by the habit and goal-directed systems (Figure 1B), and their relative contributions depend on their levels of certainty.
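A minimal sketch of such certainty-weighted arbitration, under the illustrative assumption that each system's proposal is Gaussian and proposals combine by precision (inverse-variance) weighting, is given below; all numbers are invented.

# Minimal sketch of precision-weighted combination of action proposals.
def combine(a_goal, prec_goal, a_habit, prec_habit):
    total = prec_goal + prec_habit
    return (prec_goal * a_goal + prec_habit * a_habit) / total

# Early in training: the habit system is very uncertain, goal-directed wins
print(combine(a_goal=2.0, prec_goal=4.0, a_habit=0.5, prec_habit=0.1))
# After extensive training: the habit system is precise and dominates
print(combine(a_goal=2.0, prec_goal=4.0, a_habit=1.9, prec_habit=40.0))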
The details of the above computations in the framework will be described in the next section, and it will be shown later how an algorithm inferring action can be implemented in a network resembling the anatomy of the basal ganglia. But before going through a mathematical description, let us first provide an overview of this implementation (Figure 1C). In this implementation, the valuation, goal-directed and habit systems are mapped onto the spectrum of cortico-basal ganglia loops (Alexander et al., 1986), ranging from valuation in a loop including the ventral striatum, to habit in a loop including the dorsolateral striatum, which has been shown to be critical for habitual behaviour (Burton et al., 2015). In the DopAct framework, the probability distributions learned by the actor are encoded in the strengths of synaptic connections in the corresponding loops, primarily in cortico-striatal connections. As in a standard implementation of the critic (Houk et al., 1995), the parameters of the value function learned by the valuation system are encoded in the cortico-striatal connections of the corresponding loop.
Analogous to classical reinforcement learning theory, dopaminergic neurons play a critical role in learning, and encode errors in the predictions made by the systems in the DopAct framework. However, in contrast to the standard theory, the dopaminergic neurons do not all encode the same signal; instead, dopaminergic populations in different systems compute errors in the predictions made by their corresponding system. Since both the valuation and goal-directed systems learn to predict reward, the dopaminergic neurons in these systems encode reward prediction errors (which differ slightly between the two systems, as will be illustrated in simulations presented later). By contrast, the habit system learns to predict the action on the basis of the state, so its prediction error encodes how the currently chosen action differs from the habitual action in the given state. Thus, these dopaminergic neurons respond to non-habitual actions in the DopAct framework. We denote the prediction errors in the valuation, goal-directed and habit systems by δ_v, δ_g and δ_h, respectively. The dopaminergic neurons send these prediction errors to the striatum, where they trigger plasticity of cortico-striatal connections.
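Schematically, and with invented scalar values (the paper's models define these errors over distributions and learned functions, not single numbers), the three prediction errors can be written as follows:

# Minimal scalar sketch of the three prediction errors; values invented.
reward_obtained = 1.0     # r: reward just received
future_value = 0.0        # v: future reward value from the valuation system
value_estimate = 0.5      # reward expected by the valuation system
plan_expectation = 0.75   # reward expected from the current action plan
chosen_action = 1.0       # action currently selected by the actor
habitual_action = 0.25    # action predicted by the habit system

R = reward_obtained + future_value          # total reward (Equation 1.1)
delta_v = R - value_estimate                # valuation: reward prediction error
delta_g = R - plan_expectation              # goal-directed: error given plan
delta_h = chosen_action - habitual_action   # habit: deviation from habit

print(delta_v, delta_g, delta_h)            # 0.5 0.25 0.75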
In the DopAct framework, habits are formed through a process in which the habit system learns to mimic the goal-directed system. Unlike in a previous model of habit formation (Daw et al., 2005), in the DopAct framework learning in the habit system is not driven by a reward prediction error, but by a signal encoding the difference between the chosen and habitual actions. At the start of training, when an action is selected mostly by the goal-directed system, the dopaminergic neurons in the habit system receive an input encoding the chosen action, but the striatal neurons in the habit system are not yet able to predict this action, resulting in a prediction error encoded in dopaminergic activity (left display in Figure 1D). This prediction error triggers plasticity in the striatal neurons of the habit system, so that they tend to predict this action in the future (right display in Figure 1D). The systems communicate through an 'ascending spiral' structure of striato-dopaminergic projections identified by Haber et al., 2000. These authors observed that dopaminergic neurons within a given loop project to the corresponding striatal neurons, while the striatal neurons project to the dopaminergic neurons in the corresponding and next loops, and they proposed that the projections to the next loop go via interneurons, so they are effectively excitatory (Figure 1C). In the DopAct framework, once the striatal neurons in the valuation system compute the value of the state v, they send it to the dopaminergic neurons in the goal-directed system.
In the DopAct framework, dopamine in the goal-directed system plays a role in both action planning and learning, and we now give an overview of this role. In agreement with classical reinforcement learning theory, the dopaminergic activity $\delta_g$ encodes reward prediction error, namely the difference between the reward R (including both obtained and available reward) and the expected reward (Schultz et al., 1997), but in the DopAct framework the expectation of reward in the goal-directed system is computed on the basis of the current action plan. Therefore, this reward expectation only arises from formulating a plan to achieve it. Consequently, when a reward is available, the prediction error $\delta_g$ can only be reduced to zero once a plan to obtain the reward is formulated.
To gain an intuition for how the goal-directed system operates, let us consider a simple example of a hungry rat in a standard operant conditioning experiment. Assume that the rat has been trained that after pressing a lever a food pellet is delivered (Figure 2A). Consider a situation in which a lever is suddenly made available to the animal. Its sight allows the valuation system to predict that a reward is available, and it sends an estimated value of the reward to the goal-directed system. Such input induces a reward prediction error in the goal-directed system, because this system has received information that a reward is available, but has not yet prepared actions to obtain the reward, and hence does not expect any reward for its action. The resulting prediction error triggers a process of planning actions that can obtain the reward. This facilitation of planning arises in the network because the dopaminergic neurons in the goal-directed system project to striatal neurons (Figure 1C) and increase their excitability. Once an appropriate action has been computed, the animal starts to expect the available reward, and the dopamine level encoding the prediction error decreases. Importantly, in this network dopamine provides crucial feedback to striatal neurons on whether the current action plan is sufficient to obtain the available reward. If it is not, this feedback triggers changes in the action plan until it becomes appropriate. Thus the framework suggests why it is useful for the neurons encoding reward prediction error to be involved in planning: this prediction error provides useful feedback for the action planning system, informing it whether the plan is suitable to obtain the reward.
It is worth explaining why the reward expectation in the goal-directed system arises already once an action is computed and before it is implemented. It happens in the DopAct framework because the striatal neurons in the goal-directed system learn over trials to predict that a particular pattern of activity of neurons encoding action in the basal ganglia (which subsequently triggers a motor response) leads to reward in the future. This mechanism is fully analogous to that in the temporal-difference learning model used to describe classical conditioning, where the reward expectation also arises already after a stimulus, because the striatal neurons learn that the pattern of cortical inputs to the basal ganglia encoding the state (i.e. the stimulus) will lead to a reward (Schultz et al., 1997). In the goal-directed system of DopAct, an analogous reward prediction is made, but not only on the basis of a state, but on the basis of a combination of state and action.

Figure 2. Schematic illustration of changes in dopaminergic activity in the goal-directed system while a hungry rat presses a lever and a food pellet is delivered. (A) Prediction error reduced by action planning. The prediction error encoded in dopamine (bottom trace) is equal to the difference between the reward available (top trace) and the expectation of reward arising from a plan to obtain it (middle trace). (B) Prediction errors reduced by both action planning and learning.
The prediction error in the goal-directed system also allows the animal to learn about the rewards resulting from actions. In the example considered above, such learning would be necessary if the amount of reward changed, for example to two pellets (Figure 2B). On the first trial after such a change, a prediction error will be produced after reward delivery. This prediction error can be reduced by learning, so the animal will expect the increased reward on future trials and no longer produce a prediction error at reward delivery. In summary, the prediction errors in the goal-directed system are reduced by both planning and learning, as in active inference (Friston, 2010). Namely, the prediction errors arising from rewards becoming available are reduced within trials by formulating plans to obtain them, and the prediction errors due to outcomes of actions differing from expectations are reduced across trials by changing the weights of synaptic connections encoding expected reward.
The next three sections will provide the details of the DopAct framework. For clarity, we will follow Marr's levels of description, and discuss computations, an algorithm, and its implementation in the basal ganglia network.
Computations during planning and learning
To illustrate the computations in the framework we will consider a simple task, in which only the intensity of a single action needs to be chosen. Such a choice has to be made by animals in classical experiments investigating habit formation, where the animals are offered a single lever, and need to decide how frequently to press it. Furthermore, action intensity often needs to be chosen by animals in the wild as well (e.g. a tiger deciding how vigorously to pounce on prey, a chimpanzee choosing how strongly to hit a nut with a stone, or a sheep selecting how quickly to eat the grass). Let us denote the action intensity by a. Let us assume that the animal chooses it on the basis of the reward it expects R and the stimulus s (e.g. the size of the prey, nut or grass). Thus the animal needs to infer an action intensity sufficient to obtain the desired reward (but not larger, to avoid unnecessary effort).
Let us consider the computation in the DopAct framework during action planning. During planning, the animal has not yet received any reward ($r = 0$), so according to Equation 1.1 ($R = r + v$), the total reward is equal to the reward available, $R = v$. While planning to obtain this reward, the actor combines information from the goal-directed system (encoding how the reward depends on actions taken in given states) and the habit system (encoding the probability distribution of generally selecting actions in particular states). These two pieces of information are combined according to Bayes' theorem (Equation 3.1 in Figure 3), which states that the posterior probability of selecting a particular action given the available reward is proportional to the product of the likelihood of the reward given the action, which we propose is represented in the goal-directed system, and a prior, which we propose is encoded by the habit system.
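For reference, Bayes' theorem as used here can be written out as follows (a reconstruction of Equation 3.1 from the verbal description above, since the equation itself appears in a figure that is not reproduced here):

$$P(a \mid R, s) = \frac{P(R \mid a, s)\, P(a \mid s)}{P(R \mid s)} \qquad \text{(3.1)}$$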
In the DopAct framework, an action a is selected which maximizes the probability $P(a|R,s)$. An analogous way of selecting actions has been used in models treating planning as inference (Attias, 2003), and it has been nicely summarized by Solway and Botvinick, 2012: 'The decision process takes the occurrence of reward as a premise, and leverages the generative model to determine which course of action best explains the observation of reward.' In this paper, we make explicit the rationale for this approach: The desired amount of resources that should be acquired depends on the level of reserves (and a given state); this value is computed by the valuation system, and the actor needs to find the action corresponding to this reward. Let us provide a further rationale for selecting an action a which maximizes $P(a|R,s)$, by analysing what this probability expresses. Consider the following hypothetical scenario: An animal selected an action without considering the desired reward, that is, by sampling it from its default policy $P(a|s)$ provided by the habit system, and obtained reward R. In this case, $P(a|R,s)$ is the probability that the selected action was a. When an animal knows the amount of resource desired R, then instead of just relying on the prior, the animal should rather choose an action maximizing $P(a|R,s)$, which was the action most likely to yield this reward in the above scenario.
One may ask why it is useful to employ the habit system at all, instead of relying exclusively on the goal-directed system that encodes the relationship between rewards and actions. It is because there may be uncertainty in the action suggested by the goal-directed system, arising, for example, from noise in the computations of the valuation system or inaccurate estimates of the parameters of the goal-directed system. According to Bayesian philosophy, in the face of such uncertainty it is useful to additionally bias the action by a prior, which here is provided by the habit system. This prior encodes an action policy that has overall worked in the situations previously experienced by the animal, so it is a useful policy to consider under uncertainty in the goal-directed system.
To make the above computation more concrete, we need to specify the form of the prior and likelihood distributions. We first provide them for the example of choosing action intensity. They are given in Equations 3.2–3.3 (Figure 3B); both are normal densities, so the prior can be written as $P(a|s) = \mathcal{N}(a; hs, \Sigma_h)$ and the likelihood as $P(R|a,s) = \mathcal{N}(R; qas, \Sigma_g)$, where $\mathcal{N}(x; \mu, \Sigma)$ denotes a normal density with mean $\mu$ and variance $\Sigma$. In the case of the prior, we assume that action intensity is normally distributed around a mean given by the stimulus intensity scaled by parameter h, reflecting an assumption that a typical action intensity often depends on a stimulus (e.g. the larger a nut, the harder a chimpanzee must hit it). On the other hand, in the case of the probability of reward R maintained by the goal-directed system, the mean of the reward is equal to the product of action intensity and stimulus size, scaled by parameter q. We assume that the mean reward depends on the product of a and s for three reasons. First, in many situations reward depends jointly on the size of the stimulus and the intensity with which the action is taken, because if the action is too weak, the reward may not be obtained (e.g. a prey may escape or a nut may not crack), and the product captures this dependence of reward on a conjunction of stimulus and action. Second, in many foraging situations, the reward that can be obtained within a period of time is proportional to the product of a and s (e.g. the amount of grass eaten by a sheep is proportional to both how quickly the sheep eats and how high the grass is). Third, when the framework is generalized to multiple actions later in the paper, the assumption of reward being proportional to the product of a and s will highlight a link with classical reinforcement learning. We denote the variances of the distributions of the goal-directed and habit systems by $\Sigma_g$ and $\Sigma_h$. The variance $\Sigma_g$ quantifies to what extent the obtained rewards have differed from those predicted by the goal-directed system, while the variance $\Sigma_h$ describes by how much the chosen actions have differed from the habitual actions.

Figure 3C shows an example of the probability distributions encoded by the two systems for sample parameters: there, the stimulus intensity is $s = 1$, the valuation system computes desired reward $R = 2$, and the parameters of the probability distributions encoded in the goal-directed and habit systems are listed in the panel. The blue curve shows the distribution of action intensity which the habit system has learned to be generally suitable for this stimulus. The orange curve shows the probability density of obtaining a reward of 2 for a given action intensity, and this probability is estimated by the goal-directed system; for the chosen parameters, it is the probability of obtaining 2 from a normal distribution with mean a. The green curve shows the posterior distribution computed from Equation 3.1. Note that the peak of the posterior lies between the peaks of the distributions of the two systems, but closer to the peak of the system with smaller uncertainty (the orange distribution is narrower). This illustrates how in the DopAct framework the action is inferred by incorporating information from both systems, weighted by their certainty.
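To make this inference concrete, the following minimal sketch computes the posterior in Equation 3.1 on a grid of candidate action intensities. The parameter values (q, h and the variances) are illustrative assumptions chosen to mimic Figure 3C, not values taken from the paper.

```python
import numpy as np

# Gaussian prior (habit system) and likelihood (goal-directed system),
# combined via Bayes' theorem (Equation 3.1) on a grid of actions.
s, R = 1.0, 2.0                  # stimulus intensity and desired reward
q, h = 1.0, 1.0                  # parameters of the two systems (illustrative)
Sigma_g, Sigma_h = 0.5, 2.0      # the goal-directed system is more certain here

def normal(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

a = np.linspace(0, 4, 1001)                  # candidate action intensities
prior = normal(a, h * s, Sigma_h)            # P(a|s), encoded by the habit system
likelihood = normal(R, q * a * s, Sigma_g)   # P(R|a,s), goal-directed system
posterior = prior * likelihood
posterior /= np.trapz(posterior, a)          # normalize by the denominator of Bayes' rule

a_map = a[np.argmax(posterior)]
print(f"inferred action intensity: {a_map:.2f}")
```

As in Figure 3C, the inferred intensity (here about 1.9) lies between the habitual intensity $hs = 1$ and the intensity $R/(qs) = 2$ that would exactly yield the desired reward, closer to the proposal of the more certain system.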
In addition to action planning, the animal needs to learn from the outcomes, to predict rewards more accurately in the future. After observing an outcome, the valuation system no longer predicts future reward ($v = 0$), so according to Equation 1.1 the total reward is equal to the reward actually obtained, $R = r$. The parameters of the distributions should be updated to increase $P(R|s)$, so that in the future the animal is less surprised by the reward obtained in that state (Figure 3A).
Algorithm for planning and learning
Let us describe an algorithm used by the actor to infer the action intensity a that maximizes the posterior probability $P(a|R,s)$. This posterior probability could be computed from Equation 3.1, but note that a does not occur in the denominator of that equation, so we can simply find the action that maximizes the numerator. Hence, we define an objective function F equal to the logarithm of the numerator of Bayes' theorem (Equation 4.1 in Figure 4). Introducing the logarithm will simplify function F, because it will cancel with the exponents present in the definition of the normal density (Equation 3.3), and it does not change the position of the maximum of the numerator, because the logarithm is a monotonic function. For example, the green curve in Figure 4B shows the function F corresponding to the posterior probability in Figure 3C. Both green curves have their maximum at the same point, so instead of searching for a maximum of the posterior probability, we can seek the maximum of the simpler function F.
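Written out, the objective function and the planning dynamics described here take the following form (a reconstruction of Equations 4.1–4.2 from the surrounding text, as the equations themselves appear in Figure 4):

$$F = \ln \left[ P(R \mid a, s)\, P(a \mid s) \right] \qquad \text{(4.1)}$$

$$\dot{a} \propto \frac{\partial F}{\partial a} \qquad \text{(4.2)}$$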
During action planning the total reward is equal to the reward available, so we set $R = v$ in Equation 4.1, and we find the action maximizing F. This can be achieved by initializing a to any value, and then changing it proportionally to the gradient of F (Equation 4.2). Figure 4B illustrates that with such dynamics, the value of a approaches a maximum of F. Once a converges, the animal may select the action with the corresponding intensity. In summary, this method yields a differential equation describing the evolution of a variable a, which converges to a value of a that maximizes $P(a|R,s)$. After obtaining a reward, R is equal to the reward obtained, so we set $R = r$ in Equation 4.1, and the values of the parameters are changed proportionally to the gradient of F (Equations 4.3). Such parameter updates allow the model to be less surprised by the rewards (as aimed for in Figure 3A), because under certain assumptions function F expresses 'negative free energy'. The negative free energy (for the inference problem considered in this paper) is defined as $F = \ln P(R|s) - KL$, where KL is the Kullback-Leibler divergence between $P(a|R,s)$ and an estimate of this distribution (a detailed definition and an explanation of why F given in Equation 4.1 expresses negative free energy for an analogous problem is given by Bogacz, 2017). Importantly, since $KL \geq 0$, the negative free energy provides a lower bound on $\ln P(R|s)$ (Friston, 2005). Thus changing the parameters to increase F raises this lower bound, and so it tends to increase $P(R|s)$.

Let us derive the details of the algorithm (the general form of which is given in Figure 4A) for the problem of choosing action intensity. Let us start by considering a special case in which both variance parameters are fixed to $\Sigma_g = \Sigma_h = 1$, because then the form of the algorithm and its mapping on the network are particularly elegant. Substituting the probability densities of the likelihood and prior distributions (Equations 3.2–3.3) into Equation 4.1 (and ignoring the constants $1/\sqrt{2\pi}$), we obtain the expression for the objective function F in Equation 5.1 (Figure 5A). We see that F consists of two terms, which are the squared prediction errors associated with the goal-directed and habit systems. The prediction error of the goal-directed system describes how the reward differs from the expected mean ($\delta_g = R - qas$), while the prediction error of the habit system expresses how the chosen action differs from that typically chosen in the current state ($\delta_h = a - hs$; Equations 5.2). As described in the previous section, the action intensity can be found by changing its value according to the gradient of F (Equation 4.2). Computing the derivative of F over a, we obtain Equation 5.3 ($\dot{a} = \delta_g qs + hs - a$), where the two colours indicate terms connected with derivatives of the corresponding prediction errors. Finally, when the reward is obtained, we modify the parameters proportionally to the derivatives of F over the parameters, which are equal to the relatively simple expressions in Equations 5.4 ($\Delta q \propto \delta_g as$ and $\Delta h \propto \delta_h s$). Figure 5A illustrates the key feature of the DopAct framework: both action planning and learning can be described by the same process. Namely, in both planning and learning, certain variables (the action intensity and the synaptic weights, respectively) are changed to maximize the same function F (Equations 5.3 and 5.4). Since F is the negative of the sum of squared prediction errors (Equation 5.1), both action planning and learning are aimed at reducing prediction errors.

Figure 5. (B) Mapping of the algorithm on the network. Notation as in Figure 1C, and additionally 'Output' denotes the output nuclei of the basal ganglia. (C) Definition of striatal activity in the goal-directed system.
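The following minimal sketch simulates these planning and learning dynamics (Equations 5.2–5.4) for the special case $\Sigma_g = \Sigma_h = 1$. The learning rate, integration step, number of trials, reward function, and the fixed valuation prediction v are illustrative assumptions (in the full model, v is learned by the valuation system).

```python
import numpy as np

rng = np.random.default_rng(0)
q, h = 0.1, 0.1              # initial parameters of goal-directed and habit systems
alpha, dt = 0.1, 0.05        # learning rate and integration step (illustrative)

for trial in range(200):
    s, v = 1.0, 2.0          # stimulus intensity and reward predicted to be available
    # Planning: evolve action intensity along the gradient of F with R = v (Equation 5.3)
    a = 0.0
    for _ in range(500):
        delta_g = v - q * a * s
        a += dt * (delta_g * q * s + h * s - a)
    # Outcome: R = r, here a noisy saturating reward minus an effort cost
    r = 5 * np.tanh(3 * a / 5) - a + rng.normal(0, 0.5)
    delta_g = r - q * a * s        # prediction errors at the outcome (Equations 5.2)
    delta_h = a - h * s
    q += alpha * delta_g * a * s   # weight updates (Equations 5.4)
    h += alpha * delta_h * s

print(f"q = {q:.2f}, h = {h:.2f}")  # h approaches the typically planned intensity
```

Note that the same two prediction errors drive both loops: within a trial they move the action intensity, and across trials they move the synaptic weights.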
Network selecting action intensity
The key elements of the algorithm in Figure 5A naturally map on the known anatomy of striato-dopaminergic connections. This mapping relies on three assumptions analogous to those typically made in models of the basal ganglia: (i) the information about state s is provided to the striatum by cortical input, (ii) the parameters of the systems q and h are encoded in the cortico-striatal weights, and (iii) the computed action intensity is represented in the thalamus (Figure 5B). Under these assumptions, Equation 5.3 describing the update of action intensity can be mapped on the circuit: The action intensity in the model is jointly determined by the striatal neurons in the goal-directed and habit systems, which compute the corresponding terms of Equation 5.3, and communicate them by projecting to the thalamus via the output nuclei of the basal ganglia. The first term, $\delta_g qs$, can be provided by striatal neurons in the goal-directed system (denoted by G in Figure 5B): They receive cortical input encoding stimulus intensity s, which is scaled by cortico-striatal weights encoding parameter q, so these neurons receive synaptic input $qs$. To compute $\delta_g qs$, the gain of the striatal neurons in the goal-directed system needs to be modulated by dopaminergic neurons encoding prediction error $\delta_g$ (this modulation is represented in Figure 5B by an arrow from dopaminergic to striatal neurons). Hence, these dopaminergic neurons drive an increase in action intensity until the prediction error they represent is reduced (as discussed in Figure 2). The second term, $hs$, in Equation 5.3 can be computed by a population of neurons in the habit system receiving cortical input via a connection with weight h. Finally, the last term, $-a$, simply corresponds to a decay.
In the DopAct framework, dopaminergic neurons within each system compute errors in the predictions about the corresponding variable, that is, reward for the goal-directed system and action for the habit system. Importantly, in the network in Figure 5B this computation can be performed locally, that is, the dopaminergic neurons receive inputs encoding all quantities necessary to compute their corresponding errors. In the habit system, the prediction error is equal to the difference between action a and expectation $hs$ (blue Equation 5.2). Such an error can be easily computed in the network of Figure 5B, where the dopaminergic neurons in the habit system receive effective input from the output nuclei equal to a (as they receive inhibition equal to $-a$), and inhibition $hs$ from the striatal neurons. In the goal-directed system, the expression for the prediction error is more complex (orange Equation 5.2), but importantly, all terms occurring in the equation could be provided to dopaminergic neurons in the goal-directed system via the connections shown in Figure 5B ($qs$ could be provided by the striatum, while a could arrive through an input from the output nuclei, which have been reported to project to dopaminergic neurons [Watabe-Uchida et al., 2012]).
Once the actual reward is obtained, changing the parameters proportionally to the prediction errors (Equations 5.4) can arise due to dopaminergic modulation of the plasticity of cortico-striatal connections (represented in Figure 5B by arrows going from dopamine neurons to parameters). With such a modulation, learning could be achieved through local synaptic plasticity: The update of the weight encoding parameter h (blue Equation 5.4) is simply proportional to the product of presynaptic (s) and dopaminergic ($\delta_h$) activity. In the goal-directed system, orange Equation 5.4 corresponds to local plasticity if, at the time of reward, the striatal neurons encode information about action intensity (see the definition of G in Figure 5C). Such information could be provided from the thalamus during action execution. Then the update of the synaptic weight encoding parameter q will correspond to a standard three-factor rule (Kuśmierz et al., 2017) involving the product of presynaptic (s), postsynaptic (a) and dopaminergic ($\delta_g$) activity.
The model can be extended so that the parameters $\Sigma_g$ and $\Sigma_h$ describing the variances of the distributions are encoded in synaptic connections or internal properties of the neurons (e.g. leak conductance). In such an extended model, the action proposals of the two systems are weighted according to their certainties. Figure 6A shows the general description of the algorithm, which is analogous to that in Figure 5A. The action intensity is driven by both the goal-directed and habit systems, but now their contributions are normalised by the variance parameters. For the habit system this normalization is stated explicitly in Equation 6.2, while for the goal-directed system it comes from a normalization of the prediction error by the variance in orange Equation 6.3 (it is not necessary to normalize the habit prediction error by the variance because the contribution of the habit system is already normalized in Equation 6.2).
Figure 6. Description of a model selecting action intensity. (A) Details of the algorithm. The update rules for the variance parameters can be obtained by computing derivatives of F, giving $\delta_g^2 - 1/\Sigma_g$ and $\delta_h^2/\Sigma_h^2 - 1/\Sigma_h$; but to simplify these expressions, we scale them by $\Sigma_g^2$ and $\Sigma_h^2$, resulting in Equations 6.5. Such scaling does not change the value to which the variance parameters converge, because $\Sigma_g^2$ and $\Sigma_h^2$ are positive. (B) Mapping of the algorithm on the network architecture. Notation as in Figure 5B. This network is very similar to that shown in Figure 5B, but now the projection to the output nuclei from the habit system is weighted by its precision $1/\Sigma_h$ (to reflect the weighting factor in Equation 6.2), and the rate of decay (or relaxation to baseline) in the output nuclei also needs to depend on $\Sigma_h$. One way to ensure that the prediction error in the goal-directed system is scaled by $\Sigma_g$ is to encode $\Sigma_g$ in the rate of decay or leak of these prediction error neurons (Bogacz, 2017). Such a decay is included as the last term in orange Equation 6.7 describing the dynamics of the prediction error neurons. A prediction error evolving according to this equation converges to the value in orange Equation 6.3 (the value in equilibrium can be found by setting the left-hand side of orange Equation 6.7 to 0, and solving for $\delta_g$). In Equation 6.7, the total reward R was replaced according to Equation 1.1 by the sum of the instantaneous reward r and the available reward v computed by the valuation system. (C) Dynamics of the model.
There are several ways of including the variance parameters in the network, and one of them is illustrated in Figure 6B (see caption for details). The updates of the variance parameters (Equations 6.5) depend only on the corresponding prediction errors and the variance parameters themselves, so they could be implemented with local plasticity if the neurons encoding the variance parameters received the corresponding prediction errors. Figure 6C provides a complete description of the dynamics of the simulated model. It parallels that in Figure 6B, but now explicitly includes time constants for the update of neural activity ($\tau$, $\tau_\delta$) and learning rates for the synaptic weights ($\alpha$ with corresponding indices).
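As a minimal sketch of the variance updates (Equations 6.5, as described in the Figure 6 caption), the following assumes the precision-weighted form of $\delta_g$; the noise levels and learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
Sigma_g, Sigma_h = 1.0, 1.0
alpha = 0.05

for trial in range(2000):
    eps_g = rng.normal(0, 0.5)    # stand-in for R - q*a*s (reward variability)
    eps_h = rng.normal(0, 0.8)    # stand-in for a - h*s (action variability)
    delta_g = eps_g / Sigma_g     # precision-weighted reward prediction error
    delta_h = eps_h               # habit prediction error
    Sigma_g += alpha * (delta_g**2 * Sigma_g**2 - Sigma_g)   # Equations 6.5
    Sigma_h += alpha * (delta_h**2 - Sigma_h)

print(f"Sigma_g = {Sigma_g:.2f} (target 0.25), Sigma_h = {Sigma_h:.2f} (target 0.64)")
```

Each variance converges to the mean squared deviation it is meant to track, so a system that has recently made poor predictions is down-weighted during planning.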
As described in the Materials and methods, a simple model of the valuation system based on standard temporal-difference learning was employed in the simulations (because the simulations corresponded to the case of a low level of the animal's reserves). Striatal neurons in the valuation system compute the reward expected in the current state on the basis of parameters $w_t$ denoting estimates of reward at time t after a stimulus, and following standard reinforcement learning we assume that these parameters are encoded in cortico-striatal weights. The dopaminergic neurons in the valuation system encode a prediction error similar to that in the temporal-difference learning model, and after reward delivery, they modulate the plasticity of cortico-striatal connections. The Materials and methods section also provides details of the implementation and simulations of the model.
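For concreteness, here is a minimal tabular sketch of such a temporal-difference valuation system; the trial length, reward timing, and learning rate are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

T, alpha = 5, 0.2
w = np.zeros(T + 1)                     # w[t]: reward expected at time t after the stimulus
for trial in range(200):
    for t in range(T):
        r = 2.0 if t == 2 else 0.0      # reward delivered two steps after the stimulus
        delta_v = r + w[t + 1] - w[t]   # prediction error of the valuation system
        w[t] += alpha * delta_v

print(np.round(w[:T], 2))               # the expectation propagates back toward t = 0
```

After training, the prediction error moves from the time of reward to the time of the stimulus, reproducing the classic pattern of dopaminergic activity (Schultz et al., 1997) seen in Figure 7B.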
Simulations of action intensity selection
To illustrate how the model operates mechanistically and to help relate it to experimental data, we now describe a simulation of the model inferring action intensity. On each simulated trial the model selected an action intensity after observing a stimulus, which was set to $s = 1$. The reward obtained depended on the action intensity as shown in Figure 7A, according to $r = 5\tanh(3a/5) - a$. Thus, the reward was proportional to the action intensity transformed through a saturating function, and a cost proportional to the action intensity was subtracted, which could correspond to the price of making an effort. We also added Gaussian noise to the reward (with standard deviation $\sigma_r = 0.5$) to account for randomness in the environment, and to the action intensity to account for imprecision of the motor system or exploration.

Figure 7A,B shows how the quantities encoded in the valuation system changed throughout the learning process. The pattern of prediction errors in this figure is very similar to that expected from the temporal-difference model, as the valuation system was based on that model. The stimulus was presented at time $t = 1$. On the first trial (left display) the simulated animal received a positive reward at time $t = 2$ (dashed black curve) due to the stochastic nature of the rewards in the simulation. As the expectation of reward was initially low (dashed red curve), the reward triggered a substantial prediction error (solid red curve). The middle and right plots show the same quantities after learning. Now the prediction error was produced after the presentation of the stimulus, because after seeing the stimulus the simulated animal expected more reward than before the stimulus. In the middle display the reward received at time $t = 2$ was very close to the expectation, so the prediction error at the time of the reward was close to 0. In the right display the reward happened to be lower than usual (due to the noise in the reward), which resulted in a negative prediction error. Note that the pattern of prediction errors in the valuation system in Figure 7B resembles the famous figure showing the activity of dopaminergic neurons during conditioning (Schultz et al., 1997).

Figure 7C shows the prediction errors in the actor and the action intensity on the same trials that were visualised in Figure 7B. Prediction errors in the goal-directed system follow a similar pattern as in the valuation system in the left and middle displays in Figure 7C, that is, before the behaviour becomes habitual. The middle display in Figure 7C shows simulated neural activity that was schematically illustrated in Figure 2A: As the valuation system detected that a reward was available (see panel above), it initially resulted in a prediction error in the goal-directed system, visible as an increase in the orange curve. This prediction error triggered a process of action planning, so with time the green curve representing the planned action intensity increased. Once the action plan had been formulated, it provided a reward expectation, so the orange prediction error decreased. When an action became habitual after extensive training (right display in Figure 7C), the prediction error in the goal-directed system started to qualitatively differ from that in the valuation system. At this stage of training, the action was rapidly computed by the habit system, and the goal-directed system was too slow to lead action planning, so the orange prediction error was lower.
This illustrates that in the DopAct framework reward expectations in the goal-directed system can arise even if an action is computed by the habit system.
The prediction error in the habit system follows a very different pattern than in the other systems. Before an action became habitual, the prediction errors in the habit system arose after the action had been computed (middle display in Figure 7C). Since the habit system had not formed significant habits on early trials, it was surprised by the action, and this high value of the blue prediction error drove its learning over trials. Once the habit system was highly trained (right display in Figure 7C), it rapidly drove action planning, so the green curve showing the planned action intensity increased more rapidly. Nevertheless, due to the dynamics in the model, the increase in action intensity was not instant, so there was a transient negative prediction error in the habit system while the action was not yet equal to the intensity predicted by the habit system. The prediction error in the habit system at the time of action execution depended on how the chosen action differed from the habitual one, rather than on the received reward (e.g. in the right display in Figure 7C, $\delta_h > 0$ because the executed action was stronger than the planned one due to motor noise, despite the reward being lower than expected).

Figure 7D shows how the parameters in the model evolved over the trials in the simulation. The left display shows changes in the parameters of the three systems. The parameter of the valuation system correctly converged to the maximum value of the reward available in the task, $w_1 \approx 2$ (i.e. the maximum of the curve in Figure 7A). The parameter of the habit system correctly converged to $h \approx 2$, that is, the typical action intensity chosen over trials (shown by the green curve in the right display of Figure 7D). The parameter of the goal-directed system converged to the vicinity of $q \approx 1$, which allows the goal-directed system to expect a reward of 2 after selecting an action with intensity 2 (according to orange Equation 3.2 the reward expected by the goal-directed system is equal to $aqs \approx 2 \times 1 \times 1 = 2$). The right display in Figure 7D shows how the variance parameters in the goal-directed and habit systems changed during the simulation. The variance of the habit system was initialised to a high value, and it decreased over time, resulting in an increased certainty of the habit system.
Dopaminergic neurons in the model are only required to facilitate planning in the goal-directed system, where they increase the excitability of striatal neurons, but not in the habit system. To illustrate this, Figure 7E shows simulations of a complete dopamine depletion in the model. It shows the action intensity produced by the model in which, following training, all dopaminergic activity was set to 0. After 119 trials of training, on the 120th trial, the model was unable to plan an action. By contrast, after 359 training trials (when the uncertainty of the habit system had decreased; see the blue curve in the right display of Figure 7D), the model was still able to produce a habitual response, because dopaminergic neurons are not required for generating habitual responses in the model. This parallels the experimentally observed robustness of habitual responses to blocking dopaminergic modulation (Choi et al., 2005).
Simulations of effects observed in conditioning experiments
This section shows that the model is able to reproduce two key patterns of behaviour that are thought to arise from interactions between different learning systems, namely the resistance of habitual responses to reward devaluation (Dickinson, 1985), and Pavlovian-instrumental transfer (Estes, 1943).
In experiments investigating devaluation, animals are trained to press a lever (typically multiple times) for a reward, for example food. Following this training the reward is devalued in a subgroup of animals, e.g. the animals in the devaluation group are fed to satiety, so they no longer desire the reward. The top displays in Figure 8A replot experimental data from one such study (Dickinson et al., 1995). The displays show the average number of lever presses made by trained animals during a testing period in which no reward was given for lever pressing. The dashed and solid curves correspond to the devaluation and control groups, and the two displays correspond to groups of animals trained for different periods, that is, trained until they received 120 or 360 rewards, respectively. Figure 8A illustrates two key effects. First, all animals eventually reduced lever pressing with time, thus demonstrating extinction of the previously learned responses. Second, the effect of devaluation on the initial testing trials depended on the amount of training. In particular, in the case of animals that received a moderate amount of training (top left display), the number of responses in the first bin was much lower for the devaluation group than for the control group. By contrast, highly trained animals (top right display) produced more similar numbers of responses in the first bin irrespective of devaluation. Such production of actions despite their consequence being no longer desired is considered a hallmark of habit formation.
The model can also produce insensitivity to devaluation with extensive training. Although the experimental task involving pressing a lever multiple times is not identical to choosing the intensity of a single action, such tasks could be conceptualized as a choice of the frequency of pressing the lever, which could also be described by a single number a. Furthermore, the average reward rate experienced by an animal in the paradigms typically used in studies of habit formation (variable interval schedules, which will be explained in the Discussion) may correspond to a non-monotonic function similar to that in Figure 7A, because in these paradigms the reward per unit of time increases with the frequency of lever pressing only up to a certain point; beyond a certain frequency, there is no benefit of pressing faster.
To simulate the experiment described above, the model was trained either for 120 trials (bottom left display in Figure 8A) or 360 trials (bottom right display). During the training the reward depended on the action as in Figure 7A. Following this training, the model was tested on 180 trials on which no reward was delivered, so in the simulations $r = -a$, reflecting just the cost connected with making an effort. To simulate devaluation, the expectation of reward was set to 0.
The bottom displays in Figure 8A show the average action intensity produced by the model, and they qualitatively reproduce the two key effects in the top displays. First, the action intensity decreased with time, because the valuation and goal-directed systems learned that the reward was no longer available. Second, the action intensity just after devaluation was higher in the highly trained group (bottom right display) than in the moderately trained group (bottom left display). The model produced this effect because after 360 trials of training the variance $\Sigma_h$ in the habit system was much lower than after 120 trials (right display in Figure 7D), so after the extended training, the action intensity was to a larger extent determined by the habit system, which was not affected by devaluation.
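The mechanism can be illustrated with a minimal sketch of the precision-weighted planning dynamics. The learned parameter values and the variances below are illustrative stand-ins for those reached after moderate versus extended training.

```python
def planned_intensity(R, q, h, s, Sigma_g, Sigma_h, steps=2000, dt=0.01):
    # Evolve the planned intensity along the precision-weighted gradient of F
    a = 0.0
    for _ in range(steps):
        delta_g = (R - q * a * s) / Sigma_g
        a += dt * (delta_g * q * s + (h * s - a) / Sigma_h)
    return a

s, q, h = 1.0, 1.0, 2.0   # learned parameters (cf. Figure 7D)
# Devaluation: desired reward R = 0
print(planned_intensity(0.0, q, h, s, Sigma_g=0.5, Sigma_h=2.0))   # moderate training: ~0.4
print(planned_intensity(0.0, q, h, s, Sigma_g=0.5, Sigma_h=0.1))   # extended training: ~1.7
```

With a small $\Sigma_h$, the habit system dominates and the intensity stays close to the habitual value of 2 despite devaluation.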
The model can be easily extended to capture the phenomenon of Pavlovian-instrumental transfer. This phenomenon was observed in an experiment consisting of three stages (Estes, 1943). First, animals were trained to press a lever to obtain a reward. Second, the animals were placed in a cage without levers, and trained that a conditioned stimulus predicted the reward. Third, the animals were placed back in the conditioning apparatus, but no reward was given for lever pressing. The top display in Figure 8B shows the numbers of responses in that third stage, and as expected they gradually decreased as the animals learned that no reward was available. Importantly, in the third and fifth intervals of this testing phase the conditioned stimulus was shown (highlighted with a pink background in Figure 8B), and then the lever pressing increased. Thus the learned association between the conditioned stimulus and reward influenced the intensity of actions produced in the presence of the stimulus.
The bottom display of Figure 8B shows the action intensity produced by the model in simulations of the above paradigm. As described in the Materials and methods, the valuation system learned the rewards associated with two states: the presence of a lever, and the conditioned stimulus. During the first stage (operant conditioning), the reward expectation computed by the valuation system drove action planning, while in the second stage (classical conditioning), no action was available, so the valuation system generated predictions for the reward without triggering action planning. In the third stage (testing), on the highlighted intervals on which the conditioned stimulus was present, the expected reward v was increased, because it was the sum of the rewards associated with both states. Consequently, the actor computed that a higher action intensity was required to obtain the bigger reward, because the goal-directed system assumes that the action intensity is proportional to the mean reward (orange Equation 3.2). In summary, the model explains Pavlovian-instrumental transfer by proposing that the presence of the conditioned stimulus increases the reward expected by the valuation system, which results in the actor selecting a higher action intensity to obtain this anticipated reward.
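This can be seen directly from the equilibrium of the precision-weighted planning dynamics, obtained by setting the gradient of F to zero and solving for a. In the sketch below the parameter values are illustrative, and the two calls differ only in the expected reward R.

```python
def planned_intensity(R, q=1.0, h=1.0, s=1.0, Sigma_g=0.5, Sigma_h=1.0):
    # Solves (R - q*a*s)*q*s/Sigma_g = (a - h*s)/Sigma_h for a
    return (R * q * s / Sigma_g + h * s / Sigma_h) / (q**2 * s**2 / Sigma_g + 1 / Sigma_h)

print(planned_intensity(R=1.0))  # lever alone: 1.0
print(planned_intensity(R=2.0))  # lever + conditioned stimulus: ~1.67
```

A larger expected reward thus translates directly into a more vigorous planned action.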
Extending the model to choice between two actions
This section shows how models developed within the DopAct framework can also describe more complex tasks with multiple actions and multiple dimensions of state. We consider a task involving a choice between two options, often used in experimental studies, as it allows us to illustrate the generalization while resulting in a relatively simple model. This section will also show that models developed in the framework can, under certain assumptions, be closely related to previously proposed models of reinforcement learning and habit formation.
To make the dimensionality of all variables and parameters explicit, we will denote vectors with a bar and matrices with a bold font. Thus $\bar{s}$ is a vector whose entries correspond to the intensities of different stimuli in an environment, and $\bar{a}$ is a vector whose entries correspond to the intensities of different actions. The model is set up such that only one action can be chosen, so following a decision, $a_i = 1$ for the chosen action i, while for the other actions $a_{j \neq i} = 0$. Thus the symbol $\bar{a}$ still denotes action intensity, but the intensity of an action only takes binary values once an action has been chosen.
Equation 9.1 in Figure 9A shows how the definitions of the probability distributions encoded by the goal-directed and habit systems can be generalized to multiple dimensions. Orange Equation 9.1 states that the reward expected by the goal-directed system has mean $\bar{a}^T \mathbf{Q} \bar{s}$, where $\mathbf{Q}$ is now a matrix of parameters. This notation highlights the link with standard reinforcement learning, where the expected reward for selecting action i in state j is denoted by $Q_{i,j}$. Note that if $\bar{a}$ and $\bar{s}$ are both binary vectors with entries i and j equal to 1 in the corresponding vectors, and all other entries equal to 0, then $\bar{a}^T \mathbf{Q} \bar{s}$ is equal to the element $Q_{i,j}$ of matrix $\mathbf{Q}$.
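This identity is easy to verify numerically (the matrix values below are arbitrary):

```python
import numpy as np

Q = np.array([[0.3, 0.7],
              [0.9, 0.1]])
a = np.eye(2)[0]              # action i = 0 chosen
s = np.eye(2)[1]              # state j = 1 active
assert a @ Q @ s == Q[0, 1]   # a^T Q s picks out the single element Q[i, j]
```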
In the model, the prior probability is proportional to the product of three distributions. The first of them is encoded by the habit system and given in blue Equation 9.1. The expected action intensity encoded in the habit system has mean $\mathbf{H} \bar{s}$, and this notation highlights the analogy with a recent model of habit formation (Miller et al., 2019), where the tendency to select action i in state j is also denoted by $H_{i,j}$. Additionally, we introduce another prior given in Equation 9.2, which ensures that only one action has an intensity significantly deviating from 0. Furthermore, to link the framework with classical reinforcement learning, we enforce a third condition ensuring that action intensity remains between 0 and 1 (Equation 9.3). These additional priors will often result in one entry of $\bar{a}$ converging to 1, while all other entries decay towards 0 due to competition. Since in our simulations we also use a binary state vector, the reward expected by the goal-directed system will often be equal to $Q_{i,j}$, as in classical reinforcement learning (see the paragraph above).
Let us now derive the equations describing inference and learning for the above probabilistic model. Substituting the probability densities from Equations 9.1 and 9.2 into the objective function of Equation 4.1, we obtain Equation 9.4 in Figure 9B. To ensure that the action intensity remained between 0 and 1 (Equation 9.3), $a_i$ was set to one of these values if it exceeded the range during numerical integration.
To obtain the equations describing action planning and learning, we need to compute derivatives of F over vectors or matrices. The rules for computing such derivatives are natural generalizations of the standard rules and can be found in a tutorial paper (Bogacz, 2017). During planning, the action intensity should change proportionally to the gradient of F, which is given in Equation 9.5, where the prediction errors are defined in Equations 9.6. These equations have an analogous form to those in Figure 6A, but are generalized to matrices. The only additional element is the last term in Equation 9.5, which ensures competition between different actions, i.e. $a_1$ will be decreased proportionally to $a_2$, and vice versa. During learning, the parameters need to be updated proportionally to the corresponding gradients of F, which are given in Equations 9.7 and 9.8. Again, these equations are fully analogous to those in Figure 6A.

Both action selection and learning in the above model share similarities with standard models of reinforcement learning and a recent model of habit formation (Miller et al., 2019). To see which action is most likely to be selected in the model, it is useful to consider the evolution of action intensity at the start of a trial, when $a_i \approx 0$, because the action with the largest initial input is likely to win the competition and be selected. Substituting orange Equation 9.6 into Equation 9.5 and setting $a_i = 0$, we obtain Equation 9.9 in Figure 9C. This equation suggests that the probabilities of selecting actions depend on the sum of the inputs from the goal-directed and habit systems weighted by their certainty, analogous to the model by Miller et al., 2019. There are also similarities in the update rules: if only single elements of the vectors $\bar{a}$ and $\bar{s}$ have non-zero values $a_i = 1$ and $s_j = 1$, then substituting Equations 9.6 into 9.7 and ignoring constants gives Equations 9.10. These equations suggest that the parameter $Q_{i,j}$ describing the expected reward for action i in state j is modified proportionally to a reward prediction error, as in classical reinforcement learning. Additionally, for every action and the current state j, the parameter describing the tendency to take this action is modified proportionally to a prediction error equal to the difference between the intensity of this action and the intensity expected by the habit system, as in the model of habit formation (Miller et al., 2019).
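A minimal sketch of these choice dynamics is given below; the competition strength (implemented as mutual inhibition between the two action nodes), the noise level, and the example $\mathbf{Q}$ and $\mathbf{H}$ values are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

def choose(R, Q, H, s, Sigma_g, Sigma_h, dt=0.02, steps=1000, noise=0.05, rng=None):
    """Evolve two action intensities along the gradient of F with mutual inhibition."""
    rng = rng or np.random.default_rng()
    a = np.zeros(2)
    for _ in range(steps):
        delta_g = (R - a @ Q @ s) / Sigma_g            # scalar reward prediction error
        drive = delta_g * (Q @ s) + (H @ s - a) / Sigma_h
        a += dt * (drive - a[::-1]) + noise * np.sqrt(dt) * rng.standard_normal(2)
        a = np.clip(a, 0.0, 1.0)                       # Equation 9.3: intensities in [0, 1]
    return int(np.argmax(a))

Q = np.array([[1.0, 0.0],   # goal-directed values: action i rewarded for stimulus i
              [0.0, 1.0]])
H = np.array([[0.8, 0.1],   # habit strengths acquired during training
              [0.1, 0.8]])
s = np.array([1.0, 0.0])    # stimulus 1 presented
print(choose(R=1.0, Q=Q, H=H, s=s, Sigma_g=0.5, Sigma_h=0.5))  # usually selects action 0
```

Because both systems' inputs are divided by their variances, the early evolution of the intensities reproduces the certainty-weighted sum of Equation 9.9.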
The similarity of a model developed in the DopAct framework to classical reinforcement learning, which has been designed to maximize resources, highlights that the model also tends to maximize resources when the animal's reserves are sufficiently low. But the framework is additionally adaptive to the levels of reserves: If the reserves were at the desired level, then $R = 0$ during action planning, so according to Equation 9.9, the goal-directed system would not suggest any action.
Let us now consider how the inference and learning can be implemented in a generalized version of the network described previously, which is shown in Figure 10A. In this network, the striatum, output nuclei and thalamus include neural populations selective for the two alternative actions (shown in vivid and pale colours in Figure 10A), as in standard models of action selection in the basal ganglia (Bogacz and Gurney, 2007; Frank et al., 2007; Gurney et al., 2001). We assume that the connections between these nuclei are within the populations selective for a given action, as in previous models (Bogacz and Gurney, 2007; Frank et al., 2007; Gurney et al., 2001). Additionally, we assume that the sensory cortex includes neurons selective for different states (shown in black and grey in Figure 10A), and the parameters $Q_{i,j}$ and $H_{i,j}$ are encoded in cortico-striatal connections. Then, the orange and blue terms in Equation 9.5 can be computed by the striatal neurons in the goal-directed and habit systems in exactly the same way as in the network inferring action intensity, and these terms can be integrated in the output nuclei and thalamus. The last term in Equation 9.5 corresponds to mutual inhibition between the populations selective for the two actions, and such inhibition could be provided by inhibitory projections that are present in many different regions of this circuit, e.g. by collateral projections of striatal neurons (Preston et al., 1980) or via the subthalamic nucleus, which has been proposed to play a role in inhibiting non-selected actions (Bogacz and Gurney, 2007; Frank et al., 2007; Gurney et al., 2001). The prediction error in the goal-directed system (orange Equation 9.6) could be computed locally, because the orange dopaminergic neurons in Figure 10A receive inputs encoding all terms in the equation. During learning, the prediction error in the goal-directed system modulates the plasticity of the corresponding cortico-striatal connections according to orange Equation 9.7, which describes a standard three-factor Hebbian rule (if, following the movement, the striatal neurons encode the chosen action, as assumed in Figure 5C).
The prediction error in the habit system (blue Equation 9.6) is a vector, so computing it explicitly would also require multiple populations of dopaminergic neurons in the habit system selective for the available actions, but different dopaminergic neurons in the real brain may not be selective for different actions (da Silva et al., 2018). Nevertheless, learning in the habit system can be approximated with a single dopaminergic population, because the prediction error $\bar{\delta}_h$ has a characteristic structure with large redundancy. Namely, if only one entry in the vectors $\bar{a}$ and $\bar{s}$ is equal to 1 and the other entries to 0, then only the one entry in $\bar{\delta}_h$ corresponding to the chosen action is positive, while all other entries are negative (because the parameters $H_{i,j}$ stay in a range between 0 and 1 when initialized within this range and updated according to blue Equation 9.7). Hence, we simulated an approximate model encoding just the prediction error for the chosen action (Equation 10.1). With such a single modulatory signal, the learning rules for the striatal neurons in the habit system have to be adjusted so that the plasticity has opposite directions for the neurons selective for the chosen and the other actions. Such a modified rule is given in Equation 10.2 and corresponds to three-factor Hebbian learning (if the striatal neurons in the habit system have activity proportional to $\bar{a}$ during learning, as we assumed for the goal-directed system). Thanks to this approximation, the prediction error and plasticity in the habit system take a form that is more analogous to that in the goal-directed system. When the prediction error in the habit system is a scalar, the learning rule for the variance parameter (blue Equation 9.8) becomes the same as in the model in the previous section (cf. blue Equation 6.5). The Materials and methods section provides the description of the valuation system in this model, and describes the details of the simulations.
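The sign structure of this approximation can be sketched as follows; since the exact form of Equations 10.1–10.2 appears in Figure 10, the rule below only illustrates the described logic (a scalar error for the chosen action, with opposite-sign updates for the other action), using an illustrative learning rate and training schedule.

```python
import numpy as np

rng = np.random.default_rng(2)
H = np.full((2, 2), 0.5)          # habit strengths, initialized within [0, 1]
alpha = 0.1
for trial in range(300):
    j = rng.integers(2)           # current state
    i = j                         # suppose the goal-directed system picks action i = j
    a = np.eye(2)[i]
    delta_h = 1.0 - H[i, j]       # scalar error for the chosen action (cf. Equation 10.1)
    H[:, j] += alpha * delta_h * (2 * a - 1)   # +delta_h for chosen, -delta_h for the other
    H[:, j] = np.clip(H[:, j], 0.0, 1.0)

print(np.round(H, 2))             # habit strength concentrates on the chosen actions
```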
Simulations of choice between two actions
To illustrate the predictions made by the model, we simulated it in a probabilistic reversal task. On each trial, the model was 'presented' with one of two 'stimuli', that is, one randomly chosen entry of the vector $\bar{s}$ was set to 1, while the other entry was set to 0. On the initial 150 trials, the correct response was to select action 1 for stimulus 1 and action 2 for stimulus 2, while on the subsequent trials, the correct responses were reversed. The mean reward was equal to 1 for a correct response and 0 for an error. In each case, Gaussian noise (with standard deviation $\sigma_r = 0.5$) was added to the reward.

Figure 11A shows the changes in action intensity and the inputs from the goal-directed and habit systems as a function of time during planning on different trials within a simulation. On an early trial (left display) the changes in action intensity were primarily driven by the goal-directed system. The intensity of the correct action converged to 1, while it stayed at 0 for the incorrect one. After substantial training (middle display), the changes in action intensity were primarily driven by the faster habit system. Following a reversal (right display) one can observe a competition between the two systems: Although the goal-directed system had already learned the new contingency (solid orange curve), the habit system still provided a larger input to the incorrect action node (dashed blue curve). Since the habit system was faster, the incorrect action initially had higher intensity, and only with time did the correct action node receive input from the goal-directed system and inhibit the incorrect one.

Figure 11B shows how the parameters in the model changed over trials. The left display illustrates changes in sample cortico-striatal weights in the three systems. The valuation system rapidly learned the reward available, but after the reversal this estimate decreased, as the model persevered in choosing the incorrect option. Once the model discovered the new rule, the estimated value of the stimulus increased. The goal-directed system learned that selecting the first action after the first stimulus gave higher rewards before the reversal, but not after. The changes in the parameters of the habit system followed those in the goal-directed system. The right display shows that the variance estimated by the habit system initially decreased, but then increased several trials after the reversal, when the goal-directed system discovered the new contingency, and thus the selected actions differed from the habitual ones.

Figure 11C shows an analogous pattern in dopaminergic activity, where the neurons in the habit system signalled higher prediction errors following a reversal. This pattern of prediction errors is unique to the habit system, as the prediction errors in the goal-directed system (orange curve) fluctuated throughout the simulation following the fluctuations in reward. The increase in dopaminergic activity in the habit system following a reversal is a key experimental prediction of the model, to which we will return in the Discussion.
Let us consider the mechanisms of reversal in the model. Since the prediction errors in the habit system do not directly depend on rewards, the habit system would not perform the reversal on its own, and the goal-directed system is necessary to initiate it. This feature is visible in the simulations, where just after the reversal the agent was still selecting the same actions as before, so the habits were still being strengthened rather than weakened (the blue curve in the left display of Figure 11B still increased for ~20 trials after the reversal). When the goal-directed system learned that the previously selected actions were no longer rewarded, the tendency to select them decreased, and other actions had higher chances of being selected due to noise (although the amount of noise added to the choice process was constant, there was a higher chance for the noise to affect behaviour, because the old actions were now suggested only by the habit system rather than by both systems). Once the goal-directed system found that the actions selected according to the new contingency gave rewards, the probability of selecting actions according to the old contingency decreased, and only then did the habit system slowly unlearn the old habit. It is worth adding that the reversal was made harder by the fact that the sudden change in reward increased the uncertainty of the goal-directed system (the orange curve in the right display of Figure 11B increased after the reversal), which actually weakened the control by that system. Nevertheless, this increase in uncertainty was brief, because the goal-directed system quickly learned to predict rewards in the new contingency and regained its influence on choices.
Discussion
In this paper, we proposed how an action can be identified through Bayesian inference, where the habit system provides the prior and the goal-directed system represents the reward likelihood. Within the DopAct framework, the goal-directed and habit systems need not be viewed as fundamentally different systems, but rather as analogous segments of neural machinery performing inference in a hierarchical probabilistic model (Figure 1A), which correspond to different levels of the hierarchy.
In this section, we discuss the relationship of the framework to other theories and experimental data, mechanisms of habit formation, and suggest experimental predictions and directions for future work.
Relationship to other theories
The DopAct framework combines elements from four theories: reinforcement learning, active inference, habit formation, and planning as inference. For each of the theories we summarize key similarities, and highlight the ways in which the DopAct framework extends them.
As in classical reinforcement learning (Houk et al., 1995; Montague et al., 1996), in the DopAct framework the dopaminergic neurons in the valuation and goal-directed systems encode reward prediction errors, and these prediction errors drive learning to improve future choices. However, the key conceptual difference of the DopAct framework is that it assumes that animals aim to achieve a desired level of reserves (Buckley et al., 2017; Hull, 1952; Stephan et al., 2016), rather than always maximizing the acquisition of resources. It has been proposed that when a physiological state is considered, the reward an animal aims to maximize can be defined as a reduction of the distance between the current and desired levels of reserves (Juechems and Summerfield, 2019; Keramati and Gutkin, 2014). Under this definition, a resource is equal to such subjective reward only if consuming it would not bring the animal beyond its optimal reserve level. When an animal is close to the desired level, acquiring a resource may even move the animal further from the desired level, resulting in a negative subjective reward. As the standard reinforcement learning algorithms do not consider the physiological state, they do not always maximize the subjective reward defined in this way. By contrast, the DopAct framework offers the flexibility to stop acquiring resources when the reserves reach the desired level.
The DopAct framework relies on a key high-level principle from the active inference theory (Friston, 2010): that prediction errors can be minimized by both learning and action planning. Furthermore, the network implementations of the proposed models share with predictive coding networks the feature that the neurons encoding prediction errors affect both the plasticity and the activity of their target neurons (Friston, 2005; Rao and Ballard, 1999). A novel contribution of this paper is to show how these principles can be realized in anatomically identified networks in the brain.
The DopAct framework shares with a recent model of habit formation (Miller et al., 2019) the feature that learning in the habit system is driven by prediction errors that do not depend on reward, but rather encode the difference between the chosen and habitual actions. The key new contribution of this paper is to propose how such learning can be implemented in the basal ganglia circuit, including multiple populations of dopaminergic neurons encoding different prediction errors.
As in the model describing goal-directed decision making as probabilistic inference (Solway and Botvinick, 2012), the actions selected in the DopAct framework maximize the posterior probability of action given the reward. The new contribution of this paper is to make explicit the rationale for why such probabilistic inference is the right thing for the brain to do: the resource that should be acquired in a given state depends on the level of reserves, so the inferred action should depend on the reward required to restore the reserves. We also proposed a detailed implementation of this probabilistic inference in the basal ganglia circuit.
It is useful to discuss the relationship of the DopAct framework to several other theories. The tonic level of dopamine has been proposed to determine the vigour of movements (Niv et al., 2007). In our model selecting action intensity, the dopaminergic signals in the valuation and goal-directed systems indeed influence the resulting intensity of movement, but in the DopAct framework it is the phasic rather than the tonic dopamine that determines vigour, in agreement with recent data (da Silva et al., 2018). It has also been proposed that dopamine encodes the incentive salience of available rewards (Berridge and Robinson, 1998; McClure et al., 2003). Such encoding is present in the DopAct framework, where the prediction error in the goal-directed system depends on whether the available resource is desired by the animal.
Relationship to experimental data
To relate the DopAct framework to experimental data, we need to assume a particular mapping of the different systems onto anatomically defined brain regions. Thus we assume that the striatal neurons in the valuation, goal-directed, and habit systems can be approximately mapped onto ventral, dorsomedial, and dorsolateral striatum, respectively. This mapping is consistent with the pattern of neural activity in the striatum, which shifts from encoding reward expectation to movement as one progresses from ventral to dorsolateral striatum (Burton et al., 2015), and with increased activity in dorsolateral striatum during habitual movements (Tricomi et al., 2009). It is also consistent with the observation that deactivation of dorsomedial striatum impairs learning which action leads to larger rewards (Yin et al., 2005), while lesion of dorsolateral striatum prevents habit formation (Yin et al., 2004). Furthermore, we assume that dopaminergic neurons in the valuation, goal-directed, and habit systems can be mapped onto a spectrum of dopaminergic neurons ranging from the ventral tegmental area (VTA) to the substantia nigra pars compacta (SNc). VTA is connected with the striatal regions we mapped onto the valuation system, while SNc is connected with those mapped onto the habit system (Haber et al., 2000), so we assume that δ_v and δ_h are represented in VTA and SNc respectively. Such a mapping is consistent with lesions of SNc preventing habit formation (Faure et al., 2005). The mapping of the dopaminergic neurons of the goal-directed system is less clear, so we assume that these neurons may be present in both areas.
The key prediction of the DopAct framework is that the dopaminergic neurons in the valuation and goal-directed systems should encode reward prediction errors, while the dopaminergic neurons in the habit system should respond to non-habitual actions. This prediction can be most directly compared with data from a study in which rewards and movements were dissociated. That study employed a task in which mice could make spontaneous movements and rewards were delivered at random times (Howe and Dombeck, 2016). It was observed that a fraction of dopaminergic neurons had increased responses to rewards, while a separate group of neurons responded to movements. Moreover, the reward-responding neurons were located in VTA while most movement-responding neurons were in SNc (Howe and Dombeck, 2016). In that study the rewards were delivered to animals irrespective of movements, so the movements they generated were most likely not driven by processes aiming at achieving reward (simulated in this paper), but rather by other inputs (modelled by noise in our simulations). To relate this task to the DopAct framework, let us consider the prediction errors likely to occur at the times of reward and movement. At the time of reward the animal was not able to predict it, so δ_v > 0 and δ_g > 0, but it was not necessarily making any movements, so δ_h = 0; while at the time of a movement the animal might not have expected reward, so δ_v = δ_g = 0, but might have made a non-habitual movement, so δ_h > 0. Hence the framework predicts separate groups of dopaminergic neurons producing responses at the times of reward and movements, as experimentally observed (Howe and Dombeck, 2016). Furthermore, the peak of the movement-related response of SNc neurons was observed to occur after movement onset (Howe and Dombeck, 2016), which suggests that most of this dopaminergic activity was a response to a movement rather than activity initiating a movement. This timing is consistent with the role of dopaminergic neurons in the habit system, which compute a movement prediction error rather than initiate movements.
While discussing dopaminergic neurons, one has to mention the influential studies showing that VTA neurons encode reward prediction error (Eshel et al., 2016; Schultz et al., 1997; Tobler et al., 2005). So for completeness, let us reiterate that in the DopAct framework the valuation system is similar to the standard temporal difference learning model, hence it inherits the ability to account for the dopaminergic responses to unexpected rewards previously explained with that model (Figure 7B).
The DopAct framework also makes predictions about dopaminergic responses during movements performed to obtain rewards. In the presented simulations, such responses were present in all systems (Figure 7B-C), and indeed responses to reward-directed movements have been observed experimentally in both VTA and SNc (Engelhard et al., 2019; Schultz, 1986). The framework predicts that the responses to movements should be modulated by the magnitude of the available reward in the valuation and goal-directed systems, but not in the habit system. This prediction can be compared with data from a task in which animals could press one of two levers that differed in the magnitude of the resulting rewards (Jin and Costa, 2010). For this task, the framework predicts that the dopaminergic neurons in the valuation and goal-directed systems should respond differently depending on which lever was pressed, while the dopaminergic response in the habit system should depend only on action intensity but not on reward magnitude. Indeed, a diversity of dopaminergic neurons has been observed in SNc, and the neurons differed in whether their movement-related response depended on the reward available (Figure 4j in the paper by Jin and Costa, 2010).
In the DopAct framework, the activity of dopaminergic neurons in the goal-directed system is normalized by the uncertainty of that system. Analogous scaling of dopaminergic activity by an estimate of reward variance is also present in a model by Gershman, 2017. He demonstrated that such scaling is consistent with an experimental observation that dopaminergic responses adapt to the range of rewards available in a given context (Tobler et al., 2005).
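A hedged sketch of how such scaling produces adaptation to reward range (an illustration of the principle, not Gershman's exact model): if the prediction error is divided by an online estimate of the reward standard deviation, the scaled response to the larger of two rewards becomes similar across contexts whose reward magnitudes differ tenfold.

for range = [1 10]                  % two contexts with different reward ranges
    q = 0; Sigma = 1; alpha = 0.1;
    for trial = 1:1000
        r = range * (rand < 0.5);   % reward of 0 or `range`, equally likely
        delta = r - q;              % reward prediction error
        q = q + alpha * delta;                      % track the mean reward
        Sigma = Sigma + alpha * (delta^2 - Sigma);  % track the reward variance
    end
    scaled_response = (range - q) / sqrt(Sigma)     % similar in both contexts
end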
In the DopAct framework the role of dopamine during action planning is specific to preparing goal-directed but not habitual movements (Figure 7E). Thus the framework is consistent with the observation that blocking dopaminergic transmission slows responses to reward-predicting cues early in training, but not after extensive training, when the responses have presumably become habitual (Choi et al., 2005). Analogously, the DopAct framework is consistent with an impairment in Parkinson's disease for goal-directed but not habitual choices (de Wit et al., 2011) or voluntary but not cue-driven movements (Johnson et al., 2016). The difficulty in movement initiation in Parkinson's disease seems to depend on whether the action is voluntary or a response to a stimulus, so even highly practiced movements like walking may be difficult if performed voluntarily, but easier in response to auditory or visual cues (Rochester et al., 2005). Such movements performed to cues are likely to engage the habit system, because responding to stimuli is a hallmark of habitual behaviour (Dickinson and Balleine, 2002).
Finally, let us discuss a feature of the DopAct framework related to the dynamics of competition between systems during action planning. Such competition is illustrated in the right display of Figure 11A, where after a reversal, the faster habit system initially prepared an incorrect action, but later the slower goal-directed system increased the intensity of the correct action. Analogous behaviour has been shown in a recent study, where human participants were extensively trained to make particular responses to given stimuli (Hardwick et al., 2019). After a reversal, they tended to produce incorrect habitual actions when required to respond rapidly, but were able to produce the correct actions given sufficient time.
Mechanisms of habitual behaviour
Since the mechanisms of habit formation in the DopAct framework fundamentally differ from a theory widely accepted in the computational neuroscience community (Daw et al., 2005), this section is dedicated to comparing the two accounts and discussing the properties of the habit system in the framework.
An influential theory suggests that two anatomically separate systems in the brain underlie goal-directed and habitual behaviour, and that the competition between them is resolved according to the uncertainty of the systems (Daw et al., 2005). The DopAct framework agrees with these general principles but differs from the theory of Daw et al., 2005 in the nature of the computations in these systems and in their mapping onto brain anatomy. Daw et al., 2005 proposed that goal-directed behaviour is controlled by a cortical model-based system that learns the transitions between states resulting from actions, while habitual behaviour arises from a striatal model-free system that learns a policy according to standard reinforcement learning. By contrast, the DopAct framework suggests that goal-directed behaviour in simple lever-pressing experiments does not require learning state transitions, but can also be supported by a striatal goal-directed system that learns the expected rewards from actions in a way similar to standard reinforcement learning models. So in the DopAct framework it is the goal-directed rather than the habit system that learns according to the reward prediction error encoded by dopaminergic neurons. Furthermore, in the DopAct framework (following the model by Miller et al., 2019) habits arise simply from repeating actions, so their acquisition is not directly driven by reward prediction error, unlike in the model of Daw et al., 2005.
The accounts of habit formation in the DopAct framework and in the model of Daw et al., 2005 make different predictions. Since the theory of Daw et al., 2005 assumes that a system underlying habitual behaviour learns with standard reinforcement learning, it predicts that the striatal neurons supporting habitual behaviour should receive a reward prediction error. However, the dopaminergic neurons that have been famously shown to encode reward prediction error (Schultz et al., 1997) are located in VTA, which does not send major projections to the dorsolateral striatum underlying habitual behaviour. These striatal neurons receive dopaminergic input from SNc (Haber et al., 2000), and it is questionable to what extent dopaminergic neurons in SNc encode reward prediction error. Although such encoding has been reported (Zaghloul et al., 2009), studies which directly compared the activity of VTA and SNc neurons demonstrated that neurons encoding reward prediction error are significantly more frequent in VTA than in SNc (Howe and Dombeck, 2016; Matsumoto and Hikosaka, 2009). So the striatal neurons underlying habitual behaviour do not seem to receive much of the teaching signal that would be expected if habit formation arose from the processes of reinforcement learning proposed by Daw et al., 2005. By contrast, the DopAct framework assumes that the habit system learns on the basis of a teaching signal encoding how the chosen action differs from the habitual one, so it predicts that SNc neurons should respond to non-habitual movements. It has indeed been observed that the dopaminergic neurons in SNc respond to movements (Howe and Dombeck, 2016; Schultz et al., 1983), but it has not yet been systematically analysed whether these responses preferentially encode non-habitual movements (we will come back to this key prediction in the next section).
It is worth discussing how habits may be suppressed if previously learnt habitual behaviour is no longer appropriate. In the DopAct framework, old habits die hard. When the habitual behaviour is no longer rewarded, the negative reward prediction errors do not directly suppress the behaviour in the habit system. So, as mentioned at the end of the Results section, in order to reverse behaviour, control cannot be completely taken over by the habit system, but the goal-directed system needs to provide at least some contribution to action planning to initiate the reversal when needed. Nevertheless, the simulations presented in this paper show that for certain parameters the control of the habit system may be released when no longer required, and the model can reproduce the patterns of behaviour observed in extinction experiments (Figure 8). However, simulations by Miller et al., 2019 show that their closely related model can sometimes persist in habitual behaviour even if it is not desired. Therefore, it is possible that there exist other mechanisms that help the goal-directed system to regain control if habitual behaviour ceases to be appropriate. For example, it has been proposed that a sudden increase in prediction errors occurring when the environment changes may attract attention and result in the goal-directed system taking charge of the animal's choices (FitzGerald et al., 2014).
Finally, let us discuss the relationship of the DopAct framework to the observation that habits are more difficult to produce on variable ratio schedules than on variable interval schedules (Dickinson et al., 1983). On a variable ratio schedule a lever press is followed by a reward with a fixed probability p. By contrast, on a variable interval schedule a lever press is followed by a reward only if the reward is 'available': just after consuming a reward, lever pressing has no effect, and another reward becomes 'available' as time goes on with a fixed probability per unit of time. An elegant explanation for why habit formation depends on the schedule has been provided by Miller et al., 2019, and a partially similar explanation can be given within the DopAct framework, as we now summarize. Miller et al., 2019 noticed that reward rate as a function of action frequency follows qualitatively different relationships on the two schedules. In particular, on the variable ratio schedule the expected number of rewards per unit time is directly proportional to the number of lever presses, i.e. E(r) = pa. By contrast, on the variable interval schedule the reward rate initially increases with the number of lever presses, but beyond some frequency there is little benefit of responding more often, so the reward rate is a nonlinear saturating function of action frequency. The model selecting action intensity in the DopAct framework assumes a linear dependence of mean reward on action intensity (orange Equation 3.2), so on the variable ratio schedule it will learn q = p, and then predict the mean reward accurately no matter what action intensity is selected. By contrast, on the variable interval schedule the predictions will be less accurate, because the actual dependence of reward on action frequency differs in form from that assumed by the model. Consequently, the reward uncertainty of the goal-directed system Σ_g is likely to be lower on the variable ratio than on the variable interval schedule. This decreased uncertainty makes the goal-directed system less likely to give in to the habit system, resulting in less habitual behaviour on the variable ratio schedule.
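The difference between the two schedules can be made explicit with a short sketch. Below, the variable ratio reward rate is exactly linear, E(r) = pa, while the variable interval rate is approximated by a standard saturating expression for random responding at rate a with baiting rate μ (this particular functional form is our assumption for illustration; the exact shape depends on the response pattern):

a  = linspace(0.1, 10, 100);  % lever presses per unit time
p  = 0.5;                     % reward probability per press (variable ratio)
mu = 1;                       % rate at which reward becomes available (variable interval)
r_VR = p * a;                 % linear reward rate: E(r) = p*a
r_VI = (mu * a) ./ (mu + a);  % saturating reward rate (approximation)
plot(a, r_VR, a, r_VI); xlabel('response rate'); ylabel('reward rate');
legend('variable ratio', 'variable interval');

Because the model's linear assumption matches the variable ratio curve exactly but not the saturating variable interval curve, the residual uncertainty Σ_g stays lower on the former schedule, which is what keeps behaviour there less habitual.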
Experimental predictions
We start by describing the two most critical predictions of the DopAct framework, the testing of which may validate or falsify its two key assumptions, and then discuss other predictions. The first key prediction of the DopAct framework is that the dopaminergic neurons in the habit system should respond more strongly to movements when these movements are not habitual, e.g. at an initial phase of task acquisition or after a reversal (Figure 11C). This prediction could be tested by monitoring the activity of dopaminergic neurons projecting to dorsolateral striatum in a task where animals are trained to perform a particular response for long enough that it becomes habitual, and then the required response is reversed. The framework predicts that these dopaminergic neurons should have higher activity during initial training and in the period after the reversal than during the period when the action is habitual.
The second key prediction follows from a central feature of the DopAct framework that the expectation of the reward in the goal-directed system arises from forming a motor plan to obtain it. Thus the framework predicts that the dopaminergic responses in the goal-directed system to stimuli predicting a reward should last longer if planning actions to obtain the reward takes more time, or if an animal is prevented from making a response. One way to test this prediction would be to optogenetically block striatal neurons expressing D1 receptors in the goal-directed system for a fixed period after the onset of a stimulus, so the action plan cannot be formed. The framework predicts that such manipulation should prolong the response of dopaminergic neurons in that system. Another way of testing this prediction would be to employ a task where goal-directed planning becomes more efficient and thus shorter with practice. The framework predicts that in such tasks the responses of dopaminergic neurons in the goal-directed system during action planning should get briefer with practice, and their duration should be correlated with reaction time across stages of task acquisition.
The DopAct framework also predicts distinct patterns of activity for different populations of dopaminergic neurons. As already mentioned above, dopaminergic neurons in the habit system should respond more strongly to movements when these are not habitual. When the movements become highly habitual, these neurons should more often produce brief decreases in response (Figure 7C, right). Furthermore, when the choices become mostly driven by the habit system, the dopaminergic neurons in the goal-directed system should no longer signal a reward prediction error after the stimulus (Figure 7C, right). By contrast, the dopaminergic neurons in the valuation system should signal a reward prediction error after the stimulus even once the action becomes habitual (Figure 7B).
Patterns of prediction errors expected from the DopAct framework could also be investigated with fMRI. Models developed within the framework could be fitted to the behaviour of human participants performing choice tasks. Such models could then generate the patterns of the different prediction errors (δ_v, δ_g, δ_h) expected on individual trials. Since prediction errors encoded by dopaminergic neurons are also correlated with the striatal BOLD signal (O'Doherty et al., 2004), one could investigate whether the different prediction errors in the DopAct framework are correlated with the BOLD signal in different striatal regions.
In the DopAct framework dopaminergic neurons increase the gain of striatal neurons during action planning only in the goal-directed but not in the habit system. Therefore, the framework predicts that the dopamine concentration should have a larger effect on the slope of the firing-input curves of striatal neurons in the goal-directed than in the habit system. This prediction may seem surprising, because striatal neurons express dopaminergic receptors throughout the striatum (Huntley et al., 1992). Nevertheless, it is consistent with the reduced effects of dopamine blockade on habitual movements (Choi et al., 2005), which are known to rely on dorsolateral striatum (Yin et al., 2004). Accordingly, the DopAct framework predicts that the dopaminergic modulation in dorsolateral striatum should primarily affect the plasticity rather than the excitability of neurons.
Directions for future work
This paper described a general framework for understanding the function of dopaminergic neurons in the basal ganglia, and presented simple models capturing only a subset of experimental data. To describe responses observed in more complex, realistic tasks, models could be developed following a procedure similar to that used in this paper. Namely, a probabilistic model could be formulated for a task, and a network minimizing the corresponding free-energy derived, simulated and compared with experimental data. This section highlights key experimental observations that the models described in this paper are unable to capture, and suggests directions for developing models consistent with them.
The presented models do not mechanistically explain the dependence of dopamine release in ventral striatum on motivational states such as hunger or thirst (Papageorgiou et al., 2016). To reproduce these activity patterns, it will be important to extend the framework to describe the computations in the valuation system. It will also be important to better understand the interactions between the valuation and goal-directed systems during the choice of action intensity. In the presented model, the selected action intensity depends on the value of the state estimated by the valuation system, and conversely, the produced action intensity influences reward and thus the value learned by the valuation system. In the presented simulations the parameters (e.g. learning rates) were chosen such that the model learned to select the action intensity giving the highest reward, but such behaviour was not present for all parameter values. Hence it remains to be understood how the interactions between the valuation and goal-directed systems must be set up for the model to robustly find the action intensity giving the maximum reward.
The models do not describe how the striatal neurons distinguish whether dopaminergic prediction error should affect their plasticity or excitability, and for simplicity, in the presented simulations we allowed the weights to be modified only when reward was presented. However, the same dopaminergic signal after a stimulus predicting reward may need to trigger plasticity in one group of striatal neurons (selective for a past action that led to this valuable state), and changes in excitability in another group (selective for a future action). It will be important to further understand the mechanisms which can be employed by striatal neurons to appropriately react to dopamine signals (Berke, 2018;Mohebi et al., 2019).
The models presented in this paper described only a part of the basal ganglia circuit, and it will be important to also include other elements of the circuit. In particular, this paper focussed on the subset of striatal neurons expressing D1 receptors, which project directly to the output nuclei and facilitate movements, but another population expressing D2 receptors projects via an indirect pathway and inhibits movements (Kravitz et al., 2010). Computational models suggest that these neurons predominantly learn from negative feedback (Collins and Frank, 2014; Mikhael and Bogacz, 2016; Möller and Bogacz, 2019), and it would be interesting to include their role in preventing unsuitable movements in the DopAct framework.
The basal ganglia circuit also includes a hyperdirect pathway, which contains the subthalamic nucleus. It has been proposed that a function of the subthalamic nucleus is to inhibit non-selected actions (Gurney et al., 2001), and the hyperdirect pathway may support the competition between actions that is present in the framework. The subthalamic nucleus has also been proposed to be involved in determining when the planning process should finish and the action should be initiated (Frank et al., 2007). For simplicity, in this paper the process of action planning has been simulated for a fixed interval (until time t = 2 in Figures 7 and 11). It will be important to extend the framework to describe the mechanisms initiating an action. If actions were executed as soon as a motor plan is formed, the increase in the habit prediction error would be briefer than that depicted in Figure 7C. In such an extended model the valuation and goal-directed systems would also need to be modified to learn to expect reward at a particular time after the action.
The presented models cannot reproduce the ramping of dopaminergic activity, observed as animals approached rewards (Howe et al., 2013). To capture these data, the valuation system could incorporate synaptic decay that has been shown to allow standard reinforcement learning models to reproduce the ramping of prediction error (Kato and Morita, 2016).
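A hedged illustration of this mechanism (following the logic of Kato and Morita, 2016, but not their exact model): adding weight decay to a simple tabular temporal difference learner makes the equilibrium value function convex in time-to-reward, so the prediction error, the difference between successive values, ramps up as the reward approaches.

n = 10; w = zeros(n,1); alpha = 0.2; decay = 0.02;  % states leading to reward
for trial = 1:2000
    v_prev = 0;
    for k = 1:n+1
        v = 0; if k <= n, v = w(k); end
        r = double(k == n+1);    % reward at the end of the state sequence
        delta = r + v - v_prev;  % temporal difference prediction error
        if k > 1, w(k-1) = w(k-1) + alpha * delta; end  % credit the previous state
        v_prev = v;
    end
    w = (1 - decay) * w;         % synaptic decay applied after every trial
end
disp(diff([0; w])')  % after the initial onset entry, values ramp toward reward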
It has also been observed that dopaminergic neurons respond not only to an unexpected magnitude of reward, but also when the type of reward differs from that expected (Takahashi et al., 2017). To capture such prediction errors, the framework could be extended by assuming that each system tries to predict multiple dimensions of reward or movement (cf. Gardner et al., 2018).
Finally, dopaminergic neurons also project to regions beyond the basal ganglia, such as the amygdala, which plays a role in habit formation (Balleine et al., 2003), and the cortex, where they have been proposed to modulate synaptic plasticity (Roelfsema and van Ooyen, 2005). It would be interesting to extend the DopAct framework to capture the role of dopamine in learning and action planning in these regions.
Materials and methods
This section describes the details of the simulations of models developed within the DopAct framework for two tasks: selecting action intensity and choice between two actions. The models were simulated in Matlab (RRID:SCR_001622), and all code is available at the MRC Brain Network Dynamics Unit Data Sharing Platform (https://data.mrc.ox.ac.uk/data-set/simulations-action-inference).
Selecting action intensity
We first describe the valuation system, and then provide details of the model in various simulated scenarios.
The valuation system was based on the standard temporal difference model (Montague et al., 1996). Following that model, we assume that the valuation system can access information on how long ago a stimulus was presented. In particular, we assume that time can be divided into brief intervals of length I. The state of the environment is represented by a column vector s_v with entries corresponding to individual intervals, such that s_{v,1} = 1 if the stimulus has been present in the current interval, s_{v,2} = 1 if the stimulus was present in the previous interval, etc. Although more realistic generalizations of this representation have been proposed (Daw et al., 2006; Ludvig et al., 2008), we use this standard representation for simplicity. Figure 12A lists the equations describing the valuation system, which are based on temporal difference learning but adapted to continuous time. According to Equation 12.1, the estimate of the value of the state converges in equilibrium to v = w · s_v, describing how much reward can be expected after a stimulus appearing in a particular interval. Equation 12.2 describes the dynamics of the prediction error in the valuation system, which converges to the difference between the total reward (r + v) and the expectation of that reward made at the previous interval (v_{t−I}), as in standard temporal difference learning (Sutton and Barto, 1998). The weight parameters are modified proportionally to the prediction error as described by Equation 12.3, where α_v is a learning rate and e are the eligibility traces associated with the weights w, which describe when the weights can be modified. In basic reinforcement learning e = s_v^T, i.e. a weight can only be modified if the corresponding state is present. Equation 12.4 describes the dynamics of the eligibility traces, and if one ignored the first term on the right, it would converge to e = s_v^T. The first term on the right of Equation 12.4 ensures that the eligibility traces persist over time, and the parameter λ describes what fraction of the eligibility traces survives from one interval to the next (Ludvig et al., 2008). Such persistent eligibility traces are known to speed up learning (Sutton and Barto, 1998). The first term on the right of Equation 12.4 includes an eligibility trace from time t − I − 3τ, that is, from a time slightly further than one interval ago, to avoid the influence of transient dynamics occurring at the transition between intervals. It is also ensured in the simulations that the parameters w do not become negative, as the desired reward value v computed by the valuation system should not be negative; thus if any element of w becomes negative, it is set to 0. Finally, Equation 12.5 describes the dynamics of the reward signal r, which follows the actual value of the reward r_0. This dynamics has been introduced so that the reward signal rises at the same rate as the value estimate (the same time constant is used in Equations 12.1 and 12.5), and these quantities can be subtracted to result in no prediction error when the reward obtained is equal to that predicted by the valuation system. In simulations involving selection of action intensity, the time represented by the valuation system was divided into intervals of I = 0.2. The stimulus was presented at time t = 1, while the reward was given at time t = 2, thus the valuation system represented the value of 5 time intervals (i.e. the vectors w, s_v and e had 5 elements each). The parameter controlling retention of the eligibility traces was set to λ = 0.9.
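For readers who prefer a discrete-time picture, the following sketch implements the same ingredients, one tabular state per interval, persistent eligibility traces, and non-negative weights, without the continuous-time dynamics of Equations 12.1-12.5 (so it is a hedged analogue, not the simulated model itself):

n = 5; w = zeros(n,1); lambda = 0.9; alpha_v = 0.5;
for trial = 1:50
    e = zeros(n,1); v_prev = 0;
    for k = 1:n+1
        if k <= n, s = double((1:n)' == k); v = w' * s; else, s = zeros(n,1); v = 0; end
        r = double(k == n+1);                 % reward delivered as the trial ends
        delta = r + v - v_prev;               % prediction error (cf. Equation 12.2)
        w = max(w + alpha_v * delta * e, 0);  % weights kept non-negative
        e = lambda * e + s;                   % persistent eligibility traces
        v_prev = v;
    end
end
disp(w')  % each interval comes to predict the upcoming reward (values near 1)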
The state provided to the actor was equal to s = 1 from time t = 1 onwards. We assumed that the intensity of the action executed by the agent was equal to the inferred action intensity plus motor noise with standard deviation σ_a = 1 (this random number was added to the action intensity at time t = 2). During intervals in which rewards were provided (from t = 2 onwards), the parameters were continuously updated according to Equations 6.8-9. In the simulations the learning rates were set to: α_v = 0.5, α_g = 0.05, α_h = 0.02, α_Σg = 0.05, α_Σh = 0.1. The time constants were set to: τ = 0.05, τ_d = 0.02, and the differential equations were solved numerically using the Euler method with integration step 0.001. The model parameters were initialized to: v_i = q = 0.1, h = 0, Σ_g = 1 and Σ_h = 100.
To simulate devaluation, the expectation of reward was set to 0 by setting v_i = q = 0, as a recent modelling study suggests that such scaling of learned parameters by motivational state is required for reproducing experimentally observed effects of motivational state on dopaminergic responses encoding reward prediction error (van Swieten and Bogacz, 2020).
In the simulations of Pavlovian-instrumental transfer, the valuation system was learning the values of two states corresponding to the presence of the lever and of the conditioned stimulus. Thus the state vector s_v had 10 entries, where the first 5 entries were set to 1 at different intervals after 'lever appearance', while the other 5 entries were set to 1 at different intervals after the conditioned stimulus. Consequently, the vector of parameters of the valuation system w also had 10 entries. The simulations of the first stage (operant conditioning) consisted of 100 trials in which the model was trained analogously to the simulations described in the paragraph above. At this stage only the first 5 entries of the vector s_v could take non-zero values, and hence only the first 5 entries of w were modified. The state provided to the actor was equal to s = 1 when the 'lever appeared', that is, from time t = 1 onwards. The simulations of the second stage (classical conditioning) consisted of 100 trials in which only the valuation system was learning. At this stage, the conditioned stimulus was presented at time t = 1, and the reward r = 1 was given at time t = 2; thus s_{v,6} = 1 for t ∈ [1, 1.2], s_{v,7} = 1 for t ∈ [1.2, 1.4], etc. The simulations of the third stage (testing) consisted of 60 trials in which only a negative reward accounting for effort, r = −a, was given. On trials 21-30 and 41-50, both the 'lever' and the conditioned stimulus were presented, that is, s_{v,1} = s_{v,6} = 1 for t ∈ [1, 1.2], etc., while on the other trials only the 'lever' was presented. The model was simulated with the same parameters as described in the previous paragraph, except for modified values of two learning rates, α_g = 0.015 and α_h = 0.005, to reproduce the dynamics of learning shown by experimental animals.
In all simulations in this paper, a constraint (or a 'hyperprior') on the minimum value of the variance parameters was introduced, such that if Σ_g or Σ_h decreased below 0.2, it was set to 0.2.
Choice between two actions
As in the previous section, we first describe the valuation system, and then provide the details of the simulations.
In the simulations of choice, we used a simplified version of the valuation system, which for each state j learns a single parameter w_j (rather than a vector of parameters encoding the reward predicted at different moments in time). The equations describing this simplified valuation system are shown in Figure 12B. According to Equation 12.6, the estimate of the value of the state converges in equilibrium to v = w · s_v. Following reward delivery, the parameters w_j are modified according to Equation 12.7, where v is taken as the value estimated at the end of the simulation of the planning phase on that trial.
In order to simulate the actor, its description was converted to differential equations in a way analogous to Figure 6C. At the end of the planning phase, Gaussian noise with standard deviation σ_a = 2 was added to all entries of the action vector (to allow exploration), and the action with the highest intensity was 'chosen' by the model. Subsequently, for the chosen action i the intensity was set to a_i = 1, while for the other action it was set to a_{k≠i} = 0. For simplicity we did not explicitly simulate the dynamics of the model after the delivery of the reward r, but instead computed the prediction errors in the goal-directed and habit systems at equilibrium (orange Equation 9.6 and Equation 10.1) and updated the parameters. In the simulations the learning rate in the valuation system was set to α_v = 0.5 on trials with δ_v > 0, and to α_v = 0.1 when δ_v ≤ 0. The other learning rates were set to: α_g = 0.1, α_h = 0.05, α_Σ = 0.01. The remaining parameters of the simulations had the same values as in the previous section.
Socioeconomic Disadvantage Moderates the Association between Peripheral Biomarkers and Childhood Psychopathology
Background
Socioeconomic disadvantage (SED) has been consistently associated with early life mental health problems. SED has been shown to impact multiple biological systems, including the regulation of neurotrophic proteins and immune-inflammatory and oxidative stress markers, which, conversely, have been reported to be relevant to physiological and pathological neurodevelopment. This study investigated the relationship between SED, different domains of psychopathology, and serum levels of interleukin-6 (IL6), thiobarbituric acid-reactive substances (TBARS) and brain-derived neurotrophic factor (BDNF). We hypothesized that a composite of socioeconomic risk would be associated with psychopathology and with altered levels of peripheral biomarkers. In addition, we hypothesized that SED would moderate the associations between mental health problems and IL6, TBARS and BDNF.
Methods and Findings
Using a cross-sectional design, we measured the serum levels of IL6, TBARS and BDNF in 495 children aged 6 to 12. We also investigated socio-demographic characteristics and mental health problems using the Child Behaviour Checklist (CBCL) DSM-oriented scales. SED was evaluated using a cumulative risk model. Generalized linear models were used to assess associations between SED, biomarker levels and psychopathology. SED was significantly associated with serum levels of IL6 (RR = 1.026, 95% CI 1.004; 1.049, p = 0.020) and TBARS (RR = 1.077, 95% CI 1.028; 1.127, p = 0.002). The association between SED and BDNF was not statistically significant (RR = 1.031, 95% CI 0.997; 1.066, p = 0.077). SED was also significantly associated with all CBCL DSM-oriented scales (all p < 0.05), whereas the serum biomarkers (i.e. IL6, TBARS, BDNF) were associated with specific subscales. Moreover, the associations between serum biomarkers and domains of psychopathology were moderated by SED, with stronger correlations between mental health problems and IL6, TBARS, and BDNF being observed in children with high SED.
Conclusions
In children, SED is highly associated with mental health problems. Our findings suggest that this association may be moderated via effects on multiple interacting neurobiological systems.
Introduction
In children and adolescents, mental health problems are highly prevalent, debilitating and one of the main predictors of adult mental disorders [1][2][3][4][5]. It is well established that childhood psychopathology emerges in the context of an intricate relation between genetic and environmental risk factors [6][7][8]. Among these environmental risk factors, socioeconomic disadvantage (SED) has been described as one of the major contributors for the development and persistence of mental health problems [9][10][11][12][13]. Epidemiological and clinical evidence indicates that SED is associated with multiple dimensions of psychopathology, with more robust effects on externalizing problems, such as aggressive and delinquent behaviors, and a less robust, but still significant, association with internalizing symptoms, such as anxiety and depression [10][11][12]14].
Several mechanisms have been proposed to explain the effects of SED on psychopathology. Low socioeconomic position is often associated with material deprivation, as well as with residence in neighborhoods where crime and substance abuse tend to be more prevalent and educational/economic opportunities less available [15][16][17]. Exposure to chronic stress, frequently, but not exclusively, related to the experience of "social defeat", or the experience of being excluded or isolated, has also been conceptualized as a key factor [18,19]. More recently, an association between SED and the neural substrates of psychopathology has been highlighted. Neuroimaging studies have reported an association between SED and alterations in brain structure and function, characterized, for example, by decreased volume in the prefrontal cortex and its subdivisions (e.g. orbitofrontal cortex, anterior cingulate cortex), areas prominently involved in cognitive and emotional processing [13,[20][21][22]. Longitudinal studies have documented that SED is associated with divergent neurodevelopmental trajectories, with children from low socioeconomic backgrounds having slower gray matter growth during childhood [23,24].
From a molecular perspective, neurotrophic proteins and immune-inflammatory and oxidative stress markers have been consistently reported to be associated with brain structure and function and to be relevant to physiological and pathological neurodevelopment [25,26]. Alterations in these systems have been reliably described in children and adults, across disparate mental disorders [27][28][29][30][31][32]. Moreover, convergent evidence indicates that early life SED is independently and strongly associated with inflammation in children [33], as well as prospectively in adults [34][35][36][37]. Conversely, serum levels of brain-derived neurotrophic factor (BDNF) and functional variations of the BDNF gene were also shown to be affected by SED [38,39].
We recently demonstrated that SED is associated with general psychopathology, independently of co-occurring risk factors (i.e. parental mental disorders, perinatal complications) (Mansur et al., unpublished data). Moreover, we also documented that exposure to environmental risk factors moderates the association between IL6 and general psychopathology [40]. However, the association between peripheral biomarkers and mental health problems is not completely understood, especially regarding the factors that mediate and/or moderate this relationship. Considering that SED could potentially impact the neural systems that underlie psychopathology, as well as modulate systemic adaptations (e.g. immune and endocrine changes), it is possible that these biological effects could, at least partially, explain the association between SED and different domains of psychopathology. Herein we sought to extend the results of these previous studies by evaluating the impact of SED on serum IL6, BDNF and the marker of lipid peroxidation thiobarbituric acid-reactive substances (TBARS). We also aimed to assess the impact of SED on the association between biomarkers and dimensions of psychopathology. We hypothesized that (1) SED would be associated with altered levels of IL6, TBARS and BDNF; and (2) SED would moderate the association between biomarkers and psychopathology, wherein the correlation between biomarkers and dimensions of psychopathology would be stronger in children exposed to high SED.
Participants
The sample herein is part of the High Risk Cohort Study for Psychiatric Disorders, which has been reported elsewhere [41]. From the total cohort of 2,512 subjects, 1,004 children were invited to participate in an enriched imaging/biomarker cohort. A total of 741 subjects completed the imaging procedures and 495 children provided valid blood samples for the present study. Primary reasons for missing blood samples were: caregiver refusal, child refusal and technical complications during blood processing procedures. Written informed consent was provided by all parents of participants, and verbal consent was obtained from all children. The study was approved by the Ethics Committee of the Universidade de São Paulo (IORG0004884). All families were invited for an appointment with a trained psychologist and social worker in case they were interested in receiving the results of the study evaluation. All children identified as being in need of care were referred for clinical evaluation. Situations involving serious risk of physical or psychological harm received special attention in accordance with competent authorities' guidelines.
Measurements
Environmental risk factors. Questions about risk factors were determined after a critical review of the extant literature that has primarily reported on risk factors for mental disorders [41] and included inquiries about demographic and social factors (e.g. socio-economic status, parental education). We created a cumulative risk index, conceptualized as each individual's cumulative exposure to a set of indicators of SED, according to previous studies [42][43][44][45][46]. Definitions and descriptive statistics of risk factors indicators are reported in Table 1. Each indicator was weighted equally and summed. For analyses of interaction we created a dichotomous variable for high exposure, defined as exposure to 2 or more indicators of SED.
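As an illustration of the index construction (with hypothetical exposures, not values from the study), each child's indicators are summed with equal weight and then dichotomized at exposure to two or more:

indicators = [1 0 1 0 0;   % rows = children, columns = SED indicators (0/1)
              0 0 0 1 0;
              1 1 1 0 1];
sed_index = sum(indicators, 2);  % cumulative risk score per child
high_sed  = sed_index >= 2       % dichotomous moderator used in the interactions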
Child Behavior Checklist (CBCL). Psychopathology was assessed dimensionally; using the CBCL, which is a parent-report questionnaire that assesses various behavioral and emotional problems. The CBCL is a widely used standardized measure of maladaptive behavior and emotional complications in individuals between ages 4 and 18 [47,48]. For the study herein, we used the DSM-oriented scales (i.e. depressive problems, anxiety problems, somatic problems, attention deficit/hyperactivity problems, oppositional defiant problems and conduct problems), which have good validity and clinical usefulness [49,50].
Blood samples collection and biomarkers assessment. Whole blood samples were obtained from all children. All samples were obtained between 10:00am and 4:00pm. After collection, blood was allowed to clot by leaving it undisturbed at room temperature, and serum was then extracted after centrifugation at 1,000-2,000 × g for 10 minutes in a refrigerated centrifuge. Serum was kept at −80°C until further analysis. As the samples were labeled with numbers, without any group identification, the investigators were blinded during all procedures.
BDNF serum levels were measured with sandwich-ELISA, using a commercial kit according to the manufacturer's instructions (Millipore, USA). For assessment of oxidative stress, serum levels of malondialdehyde (MDA), a product of lipid peroxidation, were measured by the TBARS (thiobarbituric acid reactive substances) method [51]. Serum IL6 levels were measured by flow cytometry using the Cytometric Bead Array (CBA) Flex Set Kit (BD Biosciences, San Jose, CA) (Cat. #558276). Acquisition was performed with a FACSCanto II flow cytometer (BD Biosciences, San Jose, CA). The instrument was checked for sensitivity and overall performance with Cytometer Setup and Tracking beads (BD Biosciences) prior to data acquisition. Quantitative results were generated using FCAP Array v1.0.1 software (Soft Flow Inc., Pecs, Hungary).
Statistical analyses
All statistical analyses were conducted using SPSS software for Windows (version 23.0). For the comparison of demographic and clinical data, the independent samples t-test was used for quantitative variables and the Chi-square test for categorical variables. Generalized linear models were used to assess associations between SED, biomarker levels and psychopathology. We used linear, Poisson (for count data, e.g. CBCL scales) and gamma (for positively skewed distributions, e.g. serum TBARS and IL6 levels) distributions, as appropriate. Interactions between SED and biomarkers were assessed by adding the product term (i.e. SED × IL6) to the tested models. Due to the non-linearity of the models, the estimated β coefficients were transformed into rate ratio (RR) estimates. Post hoc correction to control the false discovery rate was applied according to the Benjamini-Hochberg procedure [52].
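For concreteness, the Benjamini-Hochberg step-up procedure can be sketched in a few lines (the p-values below are illustrative, not the study's results):

p = sort([0.001 0.004 0.019 0.030 0.041 0.210]);  % ordered p-values
m = numel(p); fdr = 0.05;                         % desired false discovery rate
k = find(p <= (1:m) / m * fdr, 1, 'last');        % largest k meeting the criterion
significant = p(1:k)                              % hypotheses retained as significant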
Socioeconomic disadvantage and peripheral biomarkers
There was a positive correlation between the SED index and serum IL6 (r = 0.121, p = 0.007), TBARS (r = 0.145, p = 0.001) and BDNF levels (r = 0.098, p = 0.022). After adjustment for age, gender and ethnicity, the associations between SED and IL6 (RR = 1.026, 95% CI 1.004; 1.049, p = 0.020) and between SED and TBARS (RR = 1.077, 95% CI 1.028; 1.127, p = 0.002) remained significant, whereas there was a trend for BDNF (RR = 1.031, 95% CI 0.997; 1.066, p = 0.077). Table 2 shows that SED was positively associated with all CBCL scales when analyzed separately (Model 1) and together with the biomarkers (Model 2). The biomarkers had somewhat distinct patterns of associations with the different scales, with IL6 being more strongly associated with scores on the depressive and anxiety scales, and TBARS being associated with the anxiety and conduct scales. BDNF was only associated with the anxiety, attention and conduct scales when analyzed separately.
Socioeconomic disadvantage, peripheral biomarkers and psychopathology
Interaction analyses indicated a significant positive interaction between high SED and BDNF on depressive problems, such that the correlation between BDNF and depressive problems was positive only in children from the high SED group (Fig 1). The moderating effect of SED on the association between BDNF and psychopathology was specific to depressive problems, as the results were non-significant for the other scales (Table 3). Moderating effects on IL6 and TBARS were, on the other hand, pleiotropic, being significant for almost all subscales, with the only exceptions being depressive and oppositional defiant problems for IL6, and somatic problems for TBARS (Table 3). All of the significant interactive effects with IL6 and TBARS were positive, indicating a stronger correlation between biomarkers and psychopathology in children from the high SED group compared to children in the low SED group (Fig 1).
Discussion
Our results indicate that SED is robustly associated with multiple domains of psychopathology. The strongest association in our sample was with conduct problems; an association with anxiety symptomatology was also significant [11,53]. Evidence indicates that familial context (i.e. parental educational level or occupational status) and residence in deprived neighborhoods, with more exposure to deviant peer behavior and lower social support, are more related to externalizing problems [54,55]. Internalizing symptoms, in contrast, would be more, but not exclusively, associated with individual temperament and genetic vulnerabilities [12,56].
Serum biomarkers also had differential associations with mental health problems, with the inflammatory cytokine IL6 being more strongly associated with the internalizing dimension (i.e. depressive, anxiety and somatic problems) and the oxidative stress marker TBARS with externalizing symptoms (i.e. attentional, oppositional and conduct problems). The neurotrophin BDNF had, in comparison, weaker associations, especially in the models that analyzed all the variables together. Effect sizes were relatively small, although largely consistent with previous mechanistic studies [29,33]. Considering the complexity of mental illness etiology, which involves multiple interacting genetic, environmental and biological factors, large effect sizes are unlikely to be detected; therefore the magnitude of the associations described in this community-based study is noteworthy and possibly indicative of clinical relevance. Evidence on putative specific effects of different pathophysiological pathways is scarce. There are reports of positive correlations between plasma markers of oxidative stress and attention deficit hyperactivity disorder, as well as aggressive behavior, in adults [57,58]. Oxidative stress induces damage to nucleic acids and lipids, which has the potential to impair basic cellular/neuronal functions [59]. Indeed, there is evidence that lipid peroxidation is associated with white matter damage, which could potentially affect the circuits that regulate aggressive/confrontational behavior [60,61]. Consistent with our results, alterations in inflammatory markers, including an increase in serum IL6, have been observed in children and adolescents with major depressive disorder [32]. The relationship between inflammation and mood has been extensively studied, with potential mechanisms including, but not limited to, effects on monoamine levels through activation of indoleamine 2,3-dioxygenase (IDO), which degrades tryptophan, and pathological microglial cell activation [62,63].
Peripheral biomarkers were positively correlated with SED, with higher levels of IL6, TBARS and BDNF being found in children with low socioeconomic position. As hypothesized, SED moderated the associations between IL6, TBARS, BDNF and domains of psychopathology. All significant interactions were in the same direction, with a stronger association between peripheral markers and mental health problems in children exposed to high SED. Interestingly, we observed an interaction between SED and BDNF on depressive problems even though there was no significant main association between BDNF and depressive symptoms, indicating that BDNF's relationship with this domain of psychopathology may be fully dependent on socioeconomic position. Associations between IL6 and internalizing problems, as well as between TBARS and externalizing symptoms, were also modified by the presence of SED. These results indicate that the differential effects of SED on each domain of psychopathology may be subserved by differential activation of neurobiological pathways.
Conceptually, the accumulation of multiple adverse conditions (e.g. SED) may lead to several different types of emotional or behavioral outcomes, which has been termed multifinality [64]. This model has been empirically supported [53,65,66]; nonetheless, there is evidence that specific developmental trajectories are more likely than others. For example, some psychopathology constructs are more likely to predict themselves (i.e. homotypic continuity), whereas some domains are more likely to predict others (i.e. heterotypic continuity). Evidence indicates that conduct problems are mostly stable over time; oppositional/defiant problems, instead, are stronger predictors of affective and attentional problems [66,67]. These developmental pathways are dynamically influenced by genetic and non-genetic factors [6,68,69]. Interestingly, this transition from oppositional problems toward mood/anxiety symptoms seems to be partially mediated by environmental risk factors [66,70]. Our data suggest that SED's relationship with different neurobiological substrates, likely determined by each individual's genetic vulnerabilities and/or previous or co-occurring exposure to other environmental factors, accounts, at least partially, for its differential associations with disparate domains of psychopathology. Nevertheless, the principle of equifinality, which refers to a diversity of pathways leading to the same phenotype, may also apply [62].
Low socioeconomic position is frequently correlated with other risk factors, including, but not limited to, parental mental disorders and exposure to perinatal complications [11,[71][72][73]. We recently reported, using data from this same sample, that SED is associated with parental mental disorders, but that its association with general psychopathology was independent and did not interact with familial mental illness (Mansur et al., unpublished data), a finding that is consistent with other studies [10]. However, we separately documented an interaction between SED and parental psychopathology on IL6 levels [40], indicating that the results described herein may not represent an isolated effect of SED. Our sample size and study design do not allow the disentangling of the effects of multiple risk factors; therefore, these findings need to be replicated and refined, with consideration of reciprocal and interacting factors, in larger, prospective samples.
This study has limitations that constrain inferences and interpretations of the data. The cross-sectional design precludes conclusions about causality. It is not possible to determine, based on our data, whether exposure to SED or alterations in serum biomarkers precede the onset of psychopathology. We used a cumulative composite of SED; therefore other important determinants of the impact of environmental factors, such as the extent and timing of exposure, were not directly assessed. Our SED index weighted all components equally. The studied factors are not interchangeable and may impact different etiological pathways; it is also possible that different combinations may have divergent effects [74,75]. Moreover, as there are no longitudinal studies evaluating the markers assessed in this study, there are questions about their stability over time. We collected the samples in a relatively narrow period of the day; however, it is possible that the biomarker levels were affected by temporal and contextual factors. Nonetheless, our study also has a number of strengths.
Our study population was derived from a large, community-based sample, enriched for the presence of psychopathology. We used a multi-informant clinical evaluation with validated instruments, thus obtaining data directly from parents and limiting rater and information biases. Finally, we simultaneously assessed a range of dimensional domains of psychopathology, which provides more insight into the diverse effects of SED.
In summary, SED was associated with disparate domains of psychopathology in children, as well as with increased serum levels of IL6, TBARS and BDNF. In addition, SED was also shown to moderate the association between IL6, TBARS and BDNF, and mental health problems, suggesting that SED's different associations with psychopathology are, at least partially, related to its engagement of different neurobiological pathways. Prospective evaluation of this cohort may provide further information about the interaction between SED, serum biomarkers, psychopathology, and the onset of psychiatric disorders.
Ecosystem Services Valuation of Lakeside Wetland Park beside Chaohu Lake in China
Wetland ecosystems are one of the three great ecosystems on Earth. As research on wetland ecosystems has deepened, researchers have paid increasing attention to wetland ecosystem services such as flood mitigation, climate control, pollution prevention, soil-erosion prevention, biodiversity maintenance, and bio-productivity protection. This study focuses on a lakeside wetland ecosystem in Hefei, a city in central China, and estimates the value of ecosystem services such as material production, air purification, water conservation, biodiversity, recreation, species conservation, education and scientific research. We adopted the market value method, carbon tax method, afforestation cost method, shadow engineering method and contingent value method (CVM) using questionnaire survey data collected during the study period. The results show that the total value of the ecosystem services of Lakeside Wetland Park was 144 million CNY in 2015. Among these services, the social service value is the largest at 91.73 million CNY, followed by the ecological service value and the material production service value (42.23 million CNY and 10.43 million CNY in 2015, respectively). When considering wetland ecosystems for economic development, other services must be considered in addition to material production to obtain a longer-term economic value. This research reveals that there is scope for more comprehensive and integrated model development, including multiple wetland ecosystem services and appropriate handling of wetland ecosystem management impacts.
Introduction
The wetland ecosystem is one of the most significant ecosystems on Earth. Its unique ecological system features interaction between water and land. Wetland ecosystems offer animals, plants, and microorganisms a place to live, while also being rich in biodiversity. Wetland ecosystems are known as the "kidney of the Earth" because they purify the environment by processing pollutants. The evaluation of ecosystem services reveals the important contribution of this ecological system to human beings' welfare and provides the basis for the establishment of ecological compensation standards, the participation of stakeholders, and the decisions of management. This is the main reason that a large number of scholars research the evaluation of ecosystem services [1].
At present, the evaluation of the ecosystem services of wetland ecosystems is mainly concentrated on the classification of wetland ecosystem services and the different methods [2][3][4][5][6][7][8] used to calculate the value of services. The Millennium Ecosystem Assessment divides ecosystem services into provision services, regulating services, cultural services, and support services [5]. From the application aspect, value can be divided into "use value" (UV) and "non-use value" (NUV). The UV is divided into "direct use value" (DUV) and "indirect use value" (IUV), including the ecological services value, whereas the NUV mainly contains "option value" (OV), "existing value" (EV), and "heritage value" [9,10].
The option value (OV) is the same as an insurance premium for an uncertain future [11]. The existing value (EV) is considered to be the intrinsic value of the ecological system. It is the evaluation of the capital of the ecological environment, and its value depends mainly on humans' subjective consciousness, which implies that it changes continuously with human understanding of the services of the wetland ecosystem. In addition, the non-use value also includes the heritage value [5]. The relationship among these services is complex, as shown in Figure 1.
Recent research includes such projects as the "Intergovernmental Platform on Biodiversity and Ecosystem Services" [12], the "Ecosystem Services Partnership" [13], the establishment of "Integrating Biodiversity Science for Human Wellbeing" [14], and "The Economics of Ecosystems and Biodiversity" [15]. Domestic research in China has mainly focused on wetland ecosystem service evaluation using such methods as the market value method, shadow engineering method, market price method and contingent value method. In 1999, Ouyang calculated the overall Chinese land ecosystem service value [16]. Chen [17] used the classification by Constanza as a reference and evaluated the benefit of Chinese ecological system services. Cui [18] analyzed the dominant services of the Poyang Lake wetland and evaluated the services of water conservation, flood regulation, carbon oxygen release, pollutant degradation, soil conservation, and biological habitat protection.
Recent research shows significant changes in the methods for evaluating ecosystem services value. Some surveys establish new models to assess the stability and sustainability of ecosystems [19,20]. Continuing research on the complex interactions of wetland ecological services is significant in China, because the interactions of small-scale wetlands are closely related to residence factors that frequently affect human production activities. At the same time, wetland ecosystem service value assessment in China is more focused on direct value. Such research underestimates indirect value, such as the recreation value of wetland ecosystem services, and also lacks data on the interaction effects between humanity and ecology in small-scale wetlands.
A wetland ecosystem service assessment system should fully reflect the direct contribution of wetland ecosystems to human well-being while improving the reliability of evaluation results and avoiding overly complex calculations.At the same time, the evaluation methods and evaluation parameters should provide the basis for the establishment of a wetland ecosystem service index system and improve the repeatability, scalability, and management efficiency of wetland ecosystem services.Based on these goals, this paper focuses on the lakeside wetland ecosystem surrounding Recent researches include such projects as, "Intergovernmental Platform on Biodiversity and Ecosystem Services" [12], "Ecosystem Services Partnership" [13], the establishment of "Integrating Biodiversity Science for Human Wellbeing" [14], and "The Economics of Ecosystems and Biodiversity" [15].Domestic researches in China have mainly focused on wetland ecosystem service evaluation using such methods as the market value method, shadow engineering method, market price method and contingent value method.In 1999, Ouyang calculated the Chinese land ecosystem service value altogether [16].Chen [17] used the classification by Constanza as a reference and evaluated the benefit of Chinese ecological system services.Cui [18] analyzed the dominant services of the Poyang Lake wetland and evaluated the services of water conservation, flood regulation, carbon oxygen release, pollutants degradation, soil conservation, and biological habitat protection.
Recent researches show that there are significant changes in evaluating methods of ecosystem services value.Some surveys establish new models to assess the stability and sustainability of ecosystems [19,20].Continuing research on complex interaction of wetland ecological services is significant in China, because the interaction of small scale wetland is closely related to residence factors which frequently affect human production activities.At the same time, wetland ecosystem service value assessment in China is more focused on direct value.These researches underestimate indirect value such as the recreation value of wetland ecosystem service, and they also lack data on the interaction effects between humanity and ecology in small scale wetland.
A wetland ecosystem service assessment system should fully reflect the direct contribution of wetland ecosystems to human well-being while improving the reliability of evaluation results and avoiding overly complex calculations. At the same time, the evaluation methods and evaluation parameters should provide the basis for the establishment of a wetland ecosystem service index system and improve the repeatability, scalability, and management efficiency of wetland ecosystem services. Based on these goals, this paper focuses on the lakeside wetland ecosystem surrounding Chaohu Lake in Hefei, China. This research includes the value of material production services and society services, as shown in Figure 2 (the first five services shown are the indirect use value, while the others represent direct use value).
Research Area
There are abundant wetland resources surrounding Chaohu Lake, where the ecosystem is protected by the building of a series of wetland parks, and the region was listed in the first group of 66 national-level eco-civilized pioneering demonstration parks in China. The Lakeside Wetland Park in the basin, located in Baohe District of Hefei City, is the first ecologically restored park of grain-for-green at the national level across China and is in a prime geographical position, with Chaohu Lake (China's fifth largest freshwater lake) to the south. The Lakeside Wetland Park covers an area of 1072 hm², with a forest coverage rate of 74.58% and a crown density of 0.70-0.90. The water area is 262.6 hm², accounting for 24.50% of the total protective region, which includes five rivers, 75 ditches and 204 ponds. The combination of a natural water system and the forest environment in the park forms a multi-layer, multi-function forestry eco-network structure and landscape effect. The park, with 2698 negative oxygen ions per cubic centimeter in the air, is up to the standard of class-6 national health resorts. Because of the integration of urban forest, wetland forest and cultural forest, the Lakeside Wetland has diverse services including water purification, water conservation, air purification, biodiversity, recreation, production and education (Figure 3).
Geographic Location
The Lakeside Wetland Park, located in the southeast of Hefei City, is at the junction of North China (of the Palaearctic Realm) and Central China (of the Oriental Realm), near where the Nanfei River flows into Chaohu Lake. It has a total planned area of 1072.00 hm² (31°42′45″~31°45′24″ N, 117°22′32″~117°23′29″ E). Refer to Figure 4 for the exact location. The park, surrounded by flat peripheral terrain, is on the south side of the Jianghuai watershed, with the topography of a plain formed by the Nanfei River, the Paihe River and Chaohu Lake.
Climate Condition
The park is influenced by the subtropical humid monsoon climate, which has an annual average temperature of 15-16 °C (1.5-5.0 °C in January and 27-28 °C in July) and an annual frost-free period of 245 days. The annual average relative humidity is 76% and the annual average precipitation is 1057 mm inside the park; the latter falls mainly from June to September. With abundant sunshine in the park, the mean annual sunshine duration is 2287.9 h and the gross radiation intensity is 110-120 kcal/cm². The maximum radiation is in July, while August has the maximum sunshine duration.
Hydrologic Condition
The Lakeside Wetland Park has five surface rivers: the Nanfei River, Shiwuli River, Jiazi River, Weixi River, and Jiaomu River (Table 1). They all belong to the Chaohu Lake water system of the Yangtze River basin. The total length of surface rivers inside the park is 12.8 km, with 62.18 hm² of water surface area.
Besides those five rivers, there are 75 ditches in the park, with a total length of 119.5 km and a water surface area of 39.75 hm². There are altogether 204 ponds in the park, with a total area of 160.67 hm². Ditches, channels, roads, forests and water have formed many waterside plant associations and a forest landscape with ecological, scenic, and strolling features.
Natural Resources
There are 86 families, 204 genera, and 281 species of vegetation in the Lakeside Wetland Park that need maintenance and management (Table 2). The outstanding environment provides favorable conditions for birds and animals to multiply. Inside the park there are vertebrates of 18 orders, 47 families, and 75 species, including more than 50 kinds of birds. Among these, the Eurasian spoonbill, little swan, duck, peregrine falcon, short-eared owl, and small coucal owl are six animal species that belong to class two of national protected animals, and 26 animal species are protected by Anhui Province. Tens of thousands of migratory birds spend the winter in the park. Reed is one of the most typical plants in the park, with an area of 78.6 hm²; it is an important production material in agriculture, the salt industry, fisheries, aquaculture, and the weaving industry, with high economic and ecological value. There are altogether 175.4 hm² of grapes located in the northern part of the park. The park also has abundant herbal resources: goose grass, dayflower, crabgrass, duchesnea, bidens grass, pennisetum, and ferns. In the wetland muck there is a rich array of mollusks, and there is also an artificial pond with 30 thousand farmed fish.
Society and Environment Condition
The Baohe District, which contains the wetland park, is the "first urban area" of Anhui Province, located in the southeast of Hefei City. It connects to "five rivers" (Bao, South Fei, Shiwuli, Tangxi, and Pai) and leads to "one lake" (Chaohu Lake, one of the five major freshwater lakes in China). The entire district has an area of 340 km² (among which the Chaohu basin has a water area of 70 km²). The permanent residential population is 1.26 million. In 2012, the GDP of Baohe was 55 billion CNY, the financial revenue was 2.93 billion CNY, and the total retail sales of social consumer goods was 20.85 billion CNY. The urban disposable income per capita was 26,583 CNY, and the pure income per capita for farmers was 11,409 CNY. The total forest area of Baohe District is 5213.33 hm², and the total area of forest and wood is 7553.33 hm². The forest coverage rate is approximately 28.2%; the green coverage rate in urban areas is 44.5%; and the green space rate in urban areas is 41.3%. The public green area per capita is 13.6 m².
Data Sources
The data in the current article are mainly from field surveys and the local statistical yearbook. In order to grasp the status of flora and fauna in the wetland, methods such as route surveys and investigations based on observation points and sampling locations were used. An overall exploration of the park was done in 2015-2016. In addition, the monitoring data on water and atmospheric air quality are from the park and the regional environmental quality monitoring report composed by the local environmental protection department of Hefei.
The data involved in this study are divided into two categories: background data and questionnaire data. The background data are divided into two parts: current data and historical data. The current data are mainly from the 2015-2016 sampling survey, which includes wetland water quality data, biological data (including phytoplankton, benthic animals, and fish) and plant data. We selected a 5 m × 5 m woody quadrat in each sample plot, and within each woody quadrat selected five 1 m × 1 m herb samples, giving a total of 150 samples. The survey recorded many kinds of woody plants (noting physical quantity) and herbaceous vegetation types (noting physical quantity, coverage, abundance, average height, etc.). To merge the field investigation results with the previous data, Global Positioning System (GPS) precise positioning was done to establish corresponding interpretation signs in ERDAS 8.6 (Intergraph Corporation, Huntsville, AL, USA) for man-machine interactive interpretation to edit and verify the wetland area data. Historical data were provided by various management departments such as the local forestry bureau and the Bureau of Wetland Nature Reserves. Vegetation identification and classification are based on "the ecological types of Anhui vegetation", with "China's wetland vegetation" as the standard.
Wetland resource status data covered many aspects: the general situation of the wetland, the situation of wetland resources, the wetland management situation (including wetland area and wetland type), wetland evaporation, average temperature, material output, aquatic vegetation, water conservation amount, species numbers (higher plants, zooplankton, benthic animals, birds, and mammals), the basic situation of wetland management agencies, investment in the protection of wetland resources, water quality index data, and the annual number of tourists.
The questionnaire data mainly include a survey questionnaire to determine willingness to pay for the recreational value of the wetland. The questionnaire was conducted in the wetland park to understand the awareness of both the surrounding residents and tourists of the urban wetland. The main contents of the questionnaire are as follows: (1) investigate awareness of wetlands, including the degree of familiarity and protection awareness (e.g., willingness to pay); (2) investigate the basic situation of the survey respondents: education level, economic conditions, etc.
In this study, the data sources for the recreation services valuation are based on the random sampling principle, with questionnaires used in the field survey. A total of 280 questionnaires were handed out and 267 (95.36%) were recovered. Among them, 254 questionnaires were valid, giving a 95.13% effective rate among the recovered questionnaires.
Market Value Method
The market value method is used in this study to evaluate the value of wetland material production services [21]. The market value method measures the economic benefits or losses of changes in environmental quality by using the changes in regional output or profits caused by those changes. The quality of an environment and the size of an ecological effect are reflected in the quality of its related products. Using this method to estimate the value of material production services in the wetland park usually captures the direct benefits but not the indirect benefits: the exchange value of goods or tangible commodities is considered while the ecological service value generated is virtually ignored, so the calculation results may be relatively one-sided. Despite this, the market value method is one of the most direct ways to estimate the value of material production services of the wetland system. The formula is as follows:

U₁ = Σᵢ Sᵢ·Wᵢ·Pᵢ (1)

Here, U₁ is the material production value of wetland resources (CNY·a⁻¹), Sᵢ is the area (hm²) of the material resources of type i, Wᵢ is the per-unit output (kg·hm⁻²) of type i material, and Pᵢ is the average market price (CNY·kg⁻¹) of type i material for the year.
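Formula (1) is a sum of area × yield × price terms, one per resource. The following Python sketch illustrates the computation; the areas and prices match those reported later in the Results, but the per-unit yields are hypothetical placeholders, since the actual figures live in Table 4:

```python
# Market value method (Formula (1)): U1 = sum_i S_i * W_i * P_i.
# Areas (hm^2) and 2015 prices (CNY/kg) follow the text; the yields
# (kg/hm^2) are assumed values for illustration only.
resources = [
    # (name, area S_i, yield W_i [assumed], price P_i)
    ("reeds",  71.5,  9000.0, 0.27),
    ("grapes", 175.4, 15000.0, 5.00),
]

U1 = sum(S * W * P for _, S, W, P in resources)
print(f"Gross material production value: {U1 / 1e4:.2f} x 10^4 CNY")
```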
Carbon Tax Method and Afforestation Cost Method
In this study, the carbon tax and afforestation cost methods were used to estimate the service value of carbon sequestration and oxygen release by the wetland ecosystem; they are the two most-used methods for the evaluation of these service values. For the carbon tax method, we obtain the quantity relationship between fixed CO₂ and released O₂ from the photosynthesis reaction equation, and then multiply by national or international standards on CO₂ emission charges, which converts the material quantity into a value amount and thereby gives the value of fixed CO₂. The afforestation cost method uses the construction cost of a forest that can absorb an equal amount of CO₂ as a proxy for the value of other means of absorbing CO₂ [22]. Here we multiply the amount of CO₂ fixed by the ecosystem by the average unit cost of the afforestation and forest stock needed to sequester the same amount, and use that to estimate the value of the ecosystem's fixed CO₂. According to the photosynthesis equation, 1.63 g of CO₂ is needed and 1.20 g of O₂ is released by a plant to produce 1 g of dry matter, wherein the fixed pure amount of C is 0.44 g; that is, the C element accounts for about 45% of dry matter. Combining this with the biomass of wetland plants, the fixed CO₂ and released O₂ of wetland plants can be calculated separately.
Generally, the Swedish tax rate is used in the carbon tax calculation and then converted to a tax rate for fixed CO₂. For the afforestation cost, the average reforestation cost of fir, pine, and paulownia is used, and then converted into the cost of fixed CO₂ [23]. In practice, when estimating the value of carbon sequestration and oxygen release, the estimation of carbon sequestration normally uses the average of the carbon tax and the afforestation cost; this is closer to the real value and the approach has been widely used [24]. For the evaluation of oxygen release value, the average of afforestation costs and industrial oxygen production costs is applied. The formula is as follows:

U₂ = C·P_c + O·P_o (2)

where U₂ represents the value of climate regulation functions, namely the value of fixed CO₂ and released O₂; C represents carbon sequestration; P_c is the average of the international universal carbon tax rate and the cost of reforestation in China; O is the amount of oxygen released; and P_o is the average price of afforestation and industrial oxygen production costs. We have

C = 1.63 Σᵢ NPPᵢ·Sᵢ (3)

O = 1.20 Σᵢ NPPᵢ·Sᵢ (4)

where NPPᵢ is the net primary productivity of plant type i, and Sᵢ is the area of plant type i. NPP (net primary productivity) is the amount of photosynthetic product fixed in plant photosynthesis minus the respiration-based consumption of the plants themselves, also known as primary productivity. The Chikugo model [25] calculates plant growth by considering adequate soil moisture and vegetation growth conditions, using net radiation and radiation dryness to obtain NPP. The formulas are as follows:

RDI = Rn / (L·r) (5)

NPP = 0.29 exp(−0.216·RDI²) × Rn (6)

where NPP is vegetation's net primary productivity (t DM·hm⁻²·a⁻¹), RDI is radiation dryness, L is the latent heat of evaporation in J·g⁻¹, r is annual precipitation in cm·a⁻¹, and Rn is the net amount of radiation obtained by the land surface (kcal·cm⁻²·a⁻¹). Thereby, the climate regulation value of the main carbon-fixing and oxygen-releasing plants in Lakeside Wetland Park can be calculated. The latent heat of evaporation has the following relation with temperature t:

L = 2507.4 − 2.39t (7)
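Formulas (2)-(7) form a short computational chain from climate inputs to a monetary value. A minimal Python sketch of that chain follows, assuming the equations as reconstructed above; the function names are illustrative, not from the paper:

```python
import math

def latent_heat(t):
    """Latent heat of evaporation L (J/g) at mean temperature t (deg C), Formula (7)."""
    return 2507.4 - 2.39 * t

def radiation_dryness(rn, L, r):
    """RDI = Rn / (L * r), Formula (5); Rn converted from kcal/cm^2 to J/cm^2."""
    return (rn * 4184.0) / (L * r)

def chikugo_npp(rdi, rn):
    """Chikugo model NPP (t DM/hm^2/a), Formula (6), with Rn in kcal/cm^2/a."""
    return 0.29 * math.exp(-0.216 * rdi ** 2) * rn

def climate_value(npp_area_pairs, p_c, p_o):
    """U2 = C*Pc + O*Po, with C = 1.63 * sum(NPP_i * S_i) and O = 1.20 * sum(...)."""
    dry_matter = sum(npp * s for npp, s in npp_area_pairs)  # tons of dry matter
    return 1.63 * dry_matter * p_c + 1.20 * dry_matter * p_o
```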
Shadow Engineering Method
The shadow engineering method can be used to estimate the value of flood control and water storage. The gross storage capacity of Lakeside Wetland Park, as the usable storage, can be used to estimate the service value of flood regulation and water storage. Specifically, we use the building cost of projects required to store a corresponding volume as an estimate for the service value, based on the fact that 0.67 CNY [26] is currently invested to build 1 m³ of reservoir capacity in China. The water area of Lakeside Wetland Park is 262.6 hm². Reed fields, fishponds, canals, and rivers flowing through the park have water storage functions, and the total amount of flood regulation and storage V in the wetland is the sum of these items. When the shadow engineering method is used, the construction cost of building a reservoir with a storage capacity equal to the water yield of this wetland park serves as the estimate. The mathematical expression for the value of flood regulation and water storage is:

U₃ = V·t (9)

where U₃ is the value of flood control and water storage, V represents the total wetland flood storage capacity, and t represents the unit cost of capacity. The specific V calculation formula is:

V = Σᵢ Aᵢ·hᵢ (10)

where V is the volume of water storage, Aᵢ is the area of wetland use of type i, and hᵢ is the water storage balance in wetland use type i.
Results Reference Method
Wetlands not only have the service values of material production, flood regulation and water storage, and carbon fixation and oxygen release; they also have the service values of purifying gas, biodiversity, species conservation, and cultural education. Since the Lakeside Wetland Park only opened in 2014, the results reference method is used for the evaluation of these latter service values. The results reference method uses one or more evaluation methods to estimate the economic value of a similar environmental service function; this estimate is then amended, adjusted, and applied to the regional environment of interest [27]. The basic steps are as follows. First, analyze previous research results to find and evaluate similar cases. Then, obtain the currency value of the environmental service function through basic economic methodology and calculate the value per unit time. Finally, apply the result to the region to be estimated and acquire evaluation results. The cost of the results reference method is very low, since it is convenient and efficient to find research results for reference, analyze the data's rationality, and reconstruct it if necessary. The accuracy of this method can be relatively high if two similar evaluation objects can be found; otherwise, the possible error can be relatively large. Shao [28] employed this method to evaluate the value of biological habitats provided by the Yinchuan lakeside wetland ecosystems. Gi [29] used this method to evaluate the cultural and scientific research values, as well as the values of pollutant degradation and biodiversity conservation, of the east Chongming marsh ecosystem.
Because Lakeside Wetland Park is located by the shore of Chaohu Lake, with rich wetland resources providing the service of pollutant degradation, it helps to maintain a good ecological environment. However, the park's phase I and phase II opened only in 2012 and 2013, respectively, and its phase III has not yet been fully completed, so its processing capability for pollutants such as heavy metals, chemical oxygen demand (COD), and biochemical oxygen demand (BOD) is still unknown. Therefore, the results reference method is used to evaluate the value of the pollutant degradation and purification services of this wetland park:

U₄ = A·W₁ (11)
where U₄ is the value function for pollutant degradation, A is the area of wetland, and W₁ is the value of the purification service per unit area, using as reference the public value from Constanza [30] (refer to Table 3). The results reference method is also used to calculate biodiversity value. According to the results of an investigative report by Xie [31], the unit value of wetland ecosystem biodiversity conservation is 2212.2 CNY/(hm²·a). Substituting this into Formula (12), the biodiversity conservation value of Lakeside Wetland Park is

U₅ = A·W₂ (12)

where U₅ is the value function for biodiversity, A is the area of wetland, and W₂ is the value of the biodiversity conservation service per unit area, with the reference value being the public value from Constanza [30] (refer to Table 3).
The results reference method is also used to calculate species conservation value, taking Constanza's research results into consideration (see Table 3). The per-unit-area value of the species' habitat function of wetland ecosystems is US$304/hm², equivalent to 1939.52 CNY/hm² (exchange rate of US$1 = 6.38 CNY); combined with Xie's [31] figure of 2344 CNY/hm², the average per-unit-area value was used as the basis of the species' habitat functional value. Formula (13) calculates the total value of species' habitats of Lakeside Wetland Park as:

U₆ = A·W₃ (13)

where U₆ is the value function for species habitat, A is the area of wetland, and W₃ is the value of the species habitat service per unit area (reference value from Constanza [30] in Table 3).
The unique land-water interaction topography and abundant natural resources give this lakeside wetland a considerably high scientific research and cultural value, which we calculate with Formula (14):

U₇ = A·W₄ (14)

where U₇ is the value function for scientific research and cultural value, A is the area of wetland, and W₄ is the value of the scientific research and cultural service per unit area (reference value from Constanza [30] in Table 3).
Contingent Value Method (CVM)
In this study, we use the contingent value method to estimate the value of the recreational services of the park. The CVM method uses questionnaires to place non-market environmental resources or services on a virtual market. Estimated market information was provided by the questionnaires, which asked people about their maximum willingness to pay (WTP) for an improvement in environmental quality or their minimum willingness to accept (WTA) for tolerating environmental losses. These data are used to work out the value of the environmental goods [32]. To address the core issue of CVM valuation, and following previous research questionnaire results [33], this study used a payment card questionnaire and asked respondents to select their WTP/WTA from a given set of values.
The applicability of WTA or WTP depends on whether the respondent has clear rights to the environmental goods. If the consumer has clear legal rights to the environmental goods and is being asked to give up these rights, WTA should be used; otherwise, WTP should be used. This paper follows this viewpoint, and WTP was used in this CVM evaluation.
TNUV " E pWTPq ˆN (15) where N is the annual number of tourists, E (WTP) is their willingness to pay per capita, and TNUV is the recreational value.
In order to determine the effectiveness of the CVM method [34], we need to check whether the relationship between WTP and individual socio-economic variables is consistent with the principles of economics. Logistic regression is widely used for qualitative dependent variables. Based on the field research data, we introduced a dummy variable describing whether there is willingness to pay and used it as the dependent variable. Through an independence test, this study identified the social characteristics that have a dependency relationship with the dependent variable and treated them as arguments to examine the validity of WTP by carrying out the logistic regression:

ln(p / (1 − p)) = β₀ + Σₘ βₘ·xₘᵢ + εᵢ (16)

In the above formula, p is the probability of willingness to pay, xₘᵢ is the observed value of variable m for individual i, βₘ is a coefficient, and εᵢ is an error term distributed N(0, 1).
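The sketch below shows how such a validity check might be run, assuming hypothetical questionnaire data in place of the 254 valid responses (the covariate codings and values are made up for illustration); statsmodels is used for the logit fit:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 254                                   # number of valid questionnaires
education = rng.integers(1, 5, size=n)    # ordinal education level (assumed coding)
income = rng.normal(3000, 800, size=n)    # monthly income in CNY (assumed)

# Simulated willingness-to-pay indicator (1 = willing, 0 = not willing).
true_logit = 0.9 - 0.9 * education + 0.0005 * income
willing = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

X = sm.add_constant(np.column_stack([education, income]))
fit = sm.Logit(willing, X).fit(disp=False)
print(fit.params)          # beta_0, beta_education, beta_income
print(np.exp(fit.params))  # odds ratios, as reported in Table 9
```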
Material Production Value
The main products of Lakeside Wetland Park are reeds, grapes, and freshwater fish. Since material goods have exact market prices, the market value method is used to value material production. There is an area of 71.5 hm² of reeds in Lakeside Wetland Park; in 2015, the reed buying price was about 0.27 CNY/kg. The grape section has an area of 175.4 hm², and in 2015 the market price of grapes was about 5 CNY/kg. There are aquatic and wetland plants and other fruit of economic value in the park, but due to their small growing area and scattered planting, their production value is omitted. In addition, there is a fish pond of 120.6 hm² in the park. The production value of these various products can be calculated according to Formula (1) (for details see Table 4). After calculation, the total material production value of Lakeside Wetland Park is 21.18 million CNY. However, since the grapes and fish ponds are semi-artificial ecosystems, considerable material and labor costs are incurred in the production process. Therefore, input costs should be deducted from the total value in the actual calculation; the net output calculated in this way gives the real value of the material production function.
From Table 5, we know the material production service value U₁ of Lakeside Wetland in 2015 was 10.4325 million CNY. Since Lakeside Wetland Park only opened in 2014, only the major products are considered in its production service value assessment, but this still carries significant weight in the park's total value assessment.
Carbon Fixation and Oxygen Releasing Value
According to meteorological statistics for Hefei City, the average annual rainfall within the park is 1057.2 mm, and the total solar radiation amount is 120 kcal·cm⁻²·a⁻¹ (1 kcal = 4.184 kJ). In terms of the global average, 31% of solar radiation is reflected or scattered back into space, 24% is absorbed by the atmosphere directly, and 45% reaches the ground; for wetland, about 10% is reflected by the land. Calculating on this basis, the net radiation in the region was 42 kcal·cm⁻²·a⁻¹. The annual average temperature of the park is 16 °C, so the latent heat of vaporization L in Lakeside Wetland Park equals 2469.16 J·g⁻¹ by substitution into Formula (7). Therefore, RDI is equal to 0.67.
According to [25], when RDI < 4 the Chikugo model is applicable and can be used to estimate net productivity. According to research by Jing Li et al. [35], the primary productivity of subtropical deciduous forest, coniferous forest, mixed deciduous trees, and fruit trees is 11.1 t·hm⁻²·a⁻¹, 12.8 t·hm⁻²·a⁻¹, 10.88 t·hm⁻²·a⁻¹, and 9.41 t·hm⁻²·a⁻¹, respectively. Combined with data for the trees in the park's wetlands, we can calculate annual organic production masses of 5936.28 tons, 32 tons, and 1650.51 tons for these categories. The net primary productivity of the reeds was estimated according to Equation (6), giving NPP = 11.05 t·hm⁻²·a⁻¹.
As of 2015, the current universal international carbon tax rate is US$150/t, equivalent to 957 CNY/t (according to the average exchange rate of the CNY against the US dollar in 2015 of 1:6.38, the same as below). In China, the carbon sequestration afforestation cost is 260.90 CNY/t, so the average sequestration cost P_c is 608.95 CNY/t, which is used as the carbon tax standard. The oxygen production cost P_o is 352.93 CNY/t, which is used to value oxygen release. Inputting these into Formulas (3) and (4), the carbon sequestration and oxygen release of the various kinds of vegetation can be calculated (for details refer to Table 6). The carbon sequestration and oxygen release value of Lakeside Wetland Park is shown in Table 7. According to the photosynthesis equation and the value evaluation formula, the carbon fixation value of the park's vegetation is 8.54 million CNY per year, the value of oxygen released is 3.6419 million CNY, and the total value U₂ is 12.18 million CNY.
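As a consistency check, running the park's inputs through the reconstructed Formulas (5)-(7) reproduces the quoted values:

```python
import math

t = 16.0     # annual mean temperature, deg C
r = 105.72   # annual precipitation, cm (1057.2 mm)
rn = 42.0    # net radiation, kcal/cm^2/a

L = 2507.4 - 2.39 * t                          # 2469.16 J/g
rdi = (rn * 4184.0) / (L * r)                  # ~0.67
npp = 0.29 * math.exp(-0.216 * rdi ** 2) * rn  # ~11.05 t/hm^2/a
print(round(L, 2), round(rdi, 2), round(npp, 2))
```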
Water Conservation Value
Lakeside Wetland Park has a water area of 262.6 hm², including reed fields, ponds, ditches, and rivers flowing through the wetland, all with water storage functions. V is the total amount of flood control and water storage of the wetland. According to the survey, the maximum flood control and water storage capacity at peak of the five rivers in the Lakeside Wetland Park is 150.93 × 10⁴ m³, as shown in Table 1.
Reed fields, ditches, and ponds, which provide flood regulation and water storage in Lakeside Wetland Park, cover areas of 78.6 hm², 39.75 hm², and 120.6 hm², respectively. According to Formula (10), the maximum water storage capacity V at flood peak time can be estimated as 219.08 × 10⁴ m³.
Therefore, the total water storage capacity in the peak period is 370.01 × 10⁴ m³. Using the shadow engineering method, an input cost of 0.67 CNY is needed to build 1 m³ of reservoir capacity per year in China; thus, according to Formula (9), the flood control and water storage value of Lakeside Wetland Park is U₃ = 370.01 × 10⁴ m³ × 0.67 CNY/m³ = 2.48 million CNY.
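The arithmetic of Formula (9) applied to these volumes checks out directly:

```python
rivers = 150.93e4                  # m^3, peak storage of the five rivers (Table 1)
fields_ditches_ponds = 219.08e4    # m^3, reed fields + ditches + ponds
unit_cost = 0.67                   # CNY per m^3 of reservoir capacity

U3 = (rivers + fields_ditches_ponds) * unit_cost
print(f"U3 = {U3 / 1e6:.2f} million CNY")   # 2.48 million CNY
```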
Purification Value
Consider the wetland's purification service value per unit area using Constanza's global wetland ecosystem figures [30] and the pollutant degradation value per unit area using the waste processing function in Xie [31] for Chinese terrestrial wetland ecosystems. We use the average of these two values as the pollutant degradation value per unit area. The unit value of pollutant degradation of global wetland ecosystems is US$4177/(hm²·a), which is 26,649.26 CNY/(hm²·a), and the unit value of pollutant degradation of wetland ecosystems in China is 16,086.60 CNY/(hm²·a), so the average of these two values is 21,367.93 CNY/(hm²·a). The regional area of Lakeside Wetland Park is 1072 hm², so using Formula (11) we calculate the purification function value of Lakeside Wetland Park as 22.91 million CNY.
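The averaging and scaling behind Formula (11) can likewise be verified:

```python
usd_to_cny = 6.38
w_global = 4177 * usd_to_cny        # 26,649.26 CNY/(hm^2*a), Constanza [30]
w_china = 16086.60                  # CNY/(hm^2*a), Xie [31]
w1 = (w_global + w_china) / 2       # 21,367.93 CNY/(hm^2*a)

U4 = 1072.0 * w1                    # park area in hm^2
print(f"U4 = {U4 / 1e6:.2f} million CNY")   # ~22.91 million CNY
```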
Biodiversity Value
According to the field survey, there are 86 families, 204 genera, and 281 species of vegetation in the maintaining or controlling category in Lakeside Wetland Park. Among these, three kinds are national class 1 protected plants and seven kinds are national class 2 protected plants. An incomplete survey recorded vertebrates of 18 orders, 47 families, and 75 species in total. There are more than 50 species of birds, including six kinds of national class 2 protected birds. Water resources are abundant in the park, and amphibians can be seen everywhere; they are an important scenic resource. The results reference method is used to estimate this environmental service function, using the investigation of Xie [31]: the unit value of biodiversity protection of the wetland ecosystem is 2212.2 CNY/(hm²·a), which, according to Formula (12), gives the biodiversity value of the Lakeside Wetland Park wetland as 2212.2 CNY/(hm²·a) × 1072 hm² = 2.37 million CNY.
Species Conservation Value
For the calculation of species conservation value, the results reference method was used. Considering the research results of Constanza [30] (see Table 3), the species' habitat functional value of wetland ecosystems per unit area is US$304/hm², equivalent to 1939.52 CNY/hm² (exchange rate of US$1 = 6.38 CNY); combined with Xie's [31] 2344 CNY/hm² for China's terrestrial wetland ecosystem assessment, this yields an average of 2141.76 CNY/hm² as the basis of the species' habitat functional value. The wetland area of Lakeside Wetland Park is about 1072 hm²; thus, according to Formula (13), the total species habitat value of Lakeside Wetland Park is 2.30 million CNY.
Education Value
There is a wetland biodiversity science demonstration area inside Lakeside Wetland Park, which gives the general public access to understanding wetlands, precious wetland resources, and wetland conservation. Indirect estimation is used for the education value. Usually, investment in scientific research and actual investment by researchers is adopted for such estimations; however, because construction of Lakeside Wetland Park has not been completed, research work has only just started and the investment so far is far below its real scientific value, so we make an ad hoc valuation based on domestic and international scientific research rating standards. In this article, we use the mean of three figures: the average scientific and cultural value per unit area of ecosystems in China [17], which is 382 CNY/hm²; the global estimate, which is US$881/hm²; and Constanza's estimate, which is US$861/hm² (for the latter two, we used the exchange rate US$1 = 6.38 CNY). The mean of these three is 3831.99 CNY/hm². The total area of Lakeside Wetland Park is 1072 hm², with a forest area of 799.5 hm² and a wetland water area of 262.6 hm²; thus, according to Formula (14), the scientific research and education value of Lakeside Wetland Park is 4.11 million CNY.
Recreational Value
At present, the methods for calculating tourism and leisure value are the expenditure method, the travel cost method and the willingness survey method [36]. In this research we use the contingent valuation method to estimate the landscape recreation service value of Lakeside Wetland Park; the calculation follows Formula (15).
From Table 8, we can calculate that the average willingness-to-pay amount per capita equals 194.29 CNY. From local statistics, we know there are on average 451,000 visitors to Lakeside Wetland Park every year. On this basis, the average spending amount over the entire region equals the recreational value of Lakeside Wetland Park, which is 87.62 million CNY. The CVM model uses willingness to pay as the dependent variable and the personal characteristics of respondents as independent variables, and then uses a logistic model to test the validity of willingness to pay. Logistic regression explores influence factors and then predicts the probability of event occurrence; it is a generalized linear regression. Here, we first investigate whether there is a relationship between willingness to pay and social characteristics. A dummy variable was introduced: existence of willingness to pay is 1, nonexistence is 0. An independence test shows that dependency relationships exist among education background, age, income, and willingness to pay. Whether willingness to pay exists is the dependent variable; education background, age, and income are independent variables. The following model was obtained after the F-test and t-test:

odds = exp(0.9327 − 0.9108·Education − 0.00015·Income)

odds = p / (1 − p)

where odds is the occurrence ratio and p is the probability of willingness to pay. The results show that the odds ratio of the education variable is 0.402 (Table 9), indicating that with each increase in educational experience the odds of willingness to pay are multiplied by 0.402; with an increase in income, the odds of willingness to pay are multiplied by 1.141.
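Both the aggregation in Formula (15) and the reported education odds ratio can be checked in a few lines:

```python
import math

e_wtp = 194.29        # mean willingness to pay per capita, CNY (Table 8)
visitors = 451_000    # average annual visitors

TNUV = e_wtp * visitors
print(f"TNUV = {TNUV / 1e6:.2f} million CNY")   # 87.62 million CNY

# exp(beta) is the multiplicative change in the odds per unit covariate;
# the education coefficient of -0.9108 implies an odds ratio of ~0.402.
print(round(math.exp(-0.9108), 3))              # 0.402
```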
The Whole Value and Its Structural Analysis
The above results are summarized in Table 10. The total value of the ecosystem services of Lakeside Wetland Park in 2015 is 144 million CNY. Within this, the ecological service value is 42.23 million CNY, accounting for 29.25% of the total value; the production service value is 10.43 million CNY, accounting for 7.22% of the total value; and the social service value is 91.73 million CNY, accounting for 63.52% of the total value. Within the social service category, the value of the landscape recreation service is highest, accounting for 60.68% of the total value. The order of the ecosystem service values for Lakeside Wetland Park is: landscape recreation service (87.62 million CNY) > purification service (22.90 million CNY) > air purification service (12.17 million CNY) > material production service (10.43 million CNY) > scientific research and education service (4.11 million CNY) > flood control service (2.48 million CNY) > biodiversity service (2.37 million CNY) > species conservation service (2.30 million CNY) (Figure 5). The order of the per-unit values of the wetland ecosystem services is led by the recreational service (81,739.65 CNY) and the purification service (21,367.93 CNY), with the remaining services following in the same order (Figure 6).
From these results, we can see that the proportions of ecological service and social service in the whole value are not reasonable. Due to the limitations of the research data, we cannot calculate the complete value of ecological services, such as nutrient cycling and storage or reducing soil erosion, which results in underestimation of the total value. The recreation value we calculate is quite high, which reflects the fact that Lakeside Wetland Park has some problems in environmental service zoning and construction. In the park, the ecological conservation district is quite a bit smaller than the environmental display district, a situation that offers more recreation service but neglects the conservation of the ecological environment. Construction in this respect needs to be gradually improved in the future to balance these services.
According to relevant research, the total value of the leading services of the Poyang Lake wetland is 36.27 billion CNY, over an area of 36,285 hm² [18,37]. Another wetland park, the Xixi Wetland Park in Hangzhou, has a ranking of values similar to that of Lakeside Wetland Park in Hefei: landscape recreation service (294 million CNY) > purification service (33.98 million CNY) > material production service (13.75 million CNY) > scientific research and public education service (10.17 million CNY) > air conditioning service (5.53 million CNY) > flood control (4.8 million CNY) > biodiversity service (4.27 million CNY) > species conservation service (3.35 million CNY) [26]. Compared with these wetland ecosystem valuation publications, we find that the value of services differs between natural wetlands and urban wetlands.
Comparing Lakeside Wetland Park with Xixi Wetland Park in Hangzhou, the two urban wetland parks have similar total areas, but Xixi has more watershed while Lakeside has more forest. Because forest provides richer gas purification services, the air purification results differ, with Lakeside higher than Xixi. The ranking of the values of wetland ecosystem services also differs between these two wetland parks. The valuation of urban wetland parks can be regarded as a useful tool for achieving sustainable wetland ecosystems in cities. Thus, by comparing the relative significance of these indicators, we can effectively determine the significance of each service in different wetland parks for planning and monitoring sustainable wetland ecosystems, potentially contributing to the management of different types of wetland ecosystems.
The Sustainable Development of the Wetland Surrounding Chaohu Lake
Because of its unique ecological structure and geographical features, Lakeside Wetland Park has its own biological diversity. The park therefore plays a very important role in enriching the lives of the residents of Hefei, improving the ecological environment of the city, and enhancing regional ecological environment quality and ecosystem health. This evaluation of environmental services can provide suggestions for further study on establishing ecological compensation for the development and utilization of wetland ecosystems. Rational utilization of Lakeside Wetland Park is an important foundation for the sustainable development of the economy and society in Hefei. With the further improvement and maturation of the wetland ecosystem, the value of its services will further increase.
The Influence of Different Evaluation Methods
In the evaluation of material production services, we considered only the main products in Lakeside Wetland Park, which underestimates the result relative to including all factors. We used the carbon tax method and the afforestation cost method to calculate the value of air purification; this result is comprehensively estimated and credible. We evaluated the water conservation service using an accurate measure of water, so this facet of the result is accurate. Leisure and entertainment, culture and education, and biodiversity are basically estimates, but using whole-region estimation values gives reasonable accuracy. Wetlands can provide many environmental services, so the provision and efficiency of the services have obvious externalities [38][39][40][41]. A limitation is that this research calculated only the use value of Lakeside Wetland Park, omitting the non-use value. In addition, although we use a variety of methods to estimate the services, we have not established a complete evaluation system, so the evaluation methods need further investigation in order to better guide economic construction.
This study used different methods to effectively obtain the relative values of different services in the urban wetland park. Different research methods affect the results, so it is very important to establish unified evaluation criteria. Calculating costs and benefits from different angles will affect the calculated value of wetland ecosystem services. The literature shows that urban wetland parks have a huge recreational value within urban wetland ecosystems; for instance, the share of this component in the Beijing Olympic Forest Park wetland, the West Lake in Hangzhou, and Wuhan Yuehu is, respectively, 39.87%, 99.43%, and 26.83% [42]. Parks such as the Lakeside Wetland Park, as urban wetlands with complex natural-economic-social ecosystems, are known to provide many valuable ecological benefits to urban ecosystems. Hefei is the capital city of Anhui Province, and Anhui is undergoing a rapid urbanization process similar to that in many parts of the world. There is an urgent need for urban wetland restoration and management as a key element of urban master planning, which implies that it is of great significance to evaluate urban wetland ecosystems correctly. The different structures, resources, and functions of different types of urban wetland ecosystems in different regions should receive more attention to enhance the level of scientific management and the rational utilization of resources.
Figure 1. The classification of wetland ecosystem services.

Figure 2. The classification of ecosystem services of Hefei lakeside wetland.

Figure 3. (a) Lakeside Wetland Park; and (b) aerial view of Lakeside Wetland Park.

Figure 4. The geographic location of Lakeside Wetland Park.

Figure 5. The composition of value of different services of Lakeside Wetland Park (unit: 10⁴ CNY).

Figure 6. The composition of per-unit value of different services of Lakeside Wetland Park (unit: CNY).
Table 1. The main rivers of Lakeside Wetland Park.

Table 2. The forest resource distribution of Lakeside Wetland Park.

Table 3. The value of ecosystem services of global wetlands. (Note, partial: an average of 2344 CNY/hm² for China's terrestrial wetland ecosystem assessment.)

Table 4. The material production value of Lakeside Wetland Park.

Table 5. The net material production value of Lakeside Wetland Park.

Table 6. The plant carbon fixation and oxygen release in Lakeside Wetland Park.

Table 7. The climate regulation service value in Lakeside Wetland Park.

Table 8. Willingness to pay (WTP) in survey.

Table 9. The occurrence estimation.

Table 10. The value of services of Lakeside Wetland Park in Hefei, China.
Expansion of Lysine-rich Repeats in Plasmodium Proteins Generates Novel Localization Sequences That Target the Periphery of the Host Erythrocyte*
Repetitive low complexity sequences, mostly assumed to have no function, are common in proteins that are exported by the malaria parasite into its host erythrocyte. We identify a group of exported proteins containing short lysine-rich tandemly repeated sequences that are sufficient to localize to the erythrocyte periphery, where key virulence-related modifications to the plasma membrane and the underlying cytoskeleton are known to occur. Efficiency of targeting is dependent on repeat number, indicating that novel targeting modules could evolve by expansion of short lysine-rich sequences. Indeed, analysis of fragments of GARP from different species shows that two novel targeting sequences have arisen via the process of repeat expansion in this protein. In the protein Hyp12, the targeting function of a lysine-rich sequence is masked by a neighboring repetitive acidic sequence, further highlighting the importance of repetitive low complexity sequences. We show that sequences capable of targeting the erythrocyte periphery are present in at least nine proteins from Plasmodium falciparum and one from Plasmodium knowlesi. We find these sequences in proteins known to be involved in erythrocyte rigidification and cytoadhesion as well as in previously uncharacterized exported proteins. Together, these data suggest that expansion and contraction of lysine-rich repeats could generate targeting sequences de novo as well as modulate protein targeting efficiency and function in response to selective pressure.
Tandemly repeating protein sequences are common in most eukaryotes but are particularly abundant in protozoan parasites such as Plasmodium falciparum (1,2), the species responsible for the most severe form of malaria in humans. Repetitive sequences can form through slipped strand mispairing during DNA replication or unequal crossover of chromosomes in meiosis (3). This is a dynamic process with repetitive sequences often expanding and contracting at a greater rate than that of single nucleotide mutation (4). Over half of the open reading frames in the parasite genome encode repetitive sequences (1), from modular arrays of folded domains to polyasparagine sequences, which are prone to aggregation during malarial fevers (5,6). Hydrophobic residues are underrepresented in many P. falciparum repetitive sequences (7), and these are therefore predicted to be intrinsically disordered (8). To date, very few repetitive sequences of this variety have been characterized.
The host erythrocyte undergoes drastic changes during the blood stage of the parasite life cycle (9-11). Based on the presence of a conserved Plasmodium export element (PEXEL) or host-targeting (HT) motif, >400 proteins are predicted to be exported by P. falciparum into the infected cell (12,13). These proteins, as well as a group of PEXEL-negative exported proteins (14), mediate erythrocyte modifications necessary for the parasite to survive; the nutrient-permeability of the membrane increases (15), and protrusions referred to as knobs are assembled at the erythrocyte plasma membrane. These spiral-shaped scaffolds present proteins from the PfEMP1 (P. falciparum erythrocyte membrane protein 1) family on the erythrocyte surface, which mediate the adhesion of infected erythrocytes to blood vessel endothelial cells (16-18). The erythrocyte cytoskeleton, which is composed of flexible α- and β-spectrin filaments (19), is also rigidified upon infection (20). Cytoadhesion and the increased rigidity of infected cells contribute to parasite sequestration in specific tissues; sequestered parasites evade clearance in the spleen and are linked to severe disease outcomes, such as cerebral malaria (21).
Many proteins associated with erythrocyte rigidification and cytoadhesion contain tandem repeats (22-29), yet their role in protein function remains unclear. Some repeating sequences appear to be under immune selection (30), and many are highly antigenic (31); it has been proposed that this may allow the parasite to evade the host immune system by diverting B-cell responses toward non-protective epitopes (32) or promoting an inferior T-cell-independent maturation of B-cells (33,34). Such general roles for tandemly repeating sequences may explain their broad distribution in parasite proteins.
Repetitive sequences in some proteins may be removed with no consequence for protein function (35), suggesting that they are encoded by functionally neutral "junk DNA" that has expanded due to errors in DNA replication. However, removal of the repetitive regions of other proteins can affect activity; deletion of repeat regions of the parasite circumsporozoite protein and ring-exported protein 1 (REX1) lead to loss of protein function (36,37). The knob-associated histidine-rich protein (KAHRP) is involved in both rigidifying the host cell (23) and the formation of cytoadherent knob structures (16), and deletion of a C-terminal sequence encompassing two lysine-rich repetitive sequences results in smaller knob structures and reduced cytoadhesion (38). The Lysine-rich membrane-associated PHISTb protein (LYMP) also modulates cytoadhesion (39). PHISTb proteins are a subgroup of the PHIST family of exported proteins that contain a Plasmodium RESA N-terminal (PRESAN) domain (40,41). Several PRESAN domain-containing proteins have been shown to localize to the erythrocyte periphery (42,43), and in the case of LYMP, this domain has been shown to bind to PfEMP1 (44,45). Its C terminus, which includes tandem repeats rich in lysine, has been shown to interact with the cytoskeletal component band 3 (44). The role of tandemly repeating sequences in functionally important regions of both LYMP and KAHRP suggests that these are not erroneous expansions but may be directly involved in modulating the cytoadhesive properties of the infected host cell.
Other known cytoskeleton-binding proteins also contain repetitive sequences, many of which are rich in lysine and glutamate residues (46,47). A role for these highly charged sequences in protein function has yet to be demonstrated, and cytoskeleton-binding sites for the proteins RESA, Pf332 (P. falciparum protein 332), PfEMP3, MESA, and the PHISTa protein PF3D7_0402000 have previously been identified in non-repetitive regions (27, 48-53).
Here we show that lysine-rich repeating sequences constitute targeting modules that direct a number of exported parasite proteins to the periphery of the infected erythrocyte. Based on the observation that targeting efficiency is dependent upon repeat length, we present a model in which repeat expansion and contraction can generate novel targeting modules or modulate the targeting efficiency of exported parasite proteins.
Multiple Lysine-rich Repeating Sequences within Glutamic Acid-rich Protein (GARP) Localize to the Infected Erythrocyte Periphery

GARP is an 80-kDa protein encoded by the P. falciparum gene PF3D7_0113000 (54). It contains an N-terminal signal sequence for targeting to the parasite endoplasmic reticulum and a PEXEL/HT motif sequence, RLLNE, enabling the protein to be exported into the host erythrocyte. GARP is a highly charged protein; it contains 24% glutamic acid, 21% lysine, and 9% aspartic acid residues. These charged residues are concentrated within six tandemly repeated sequences, which each contain a unique repeated motif. The first four repeat sequences are lysine-rich, and the C terminus of the protein contains an acidic stretch composed of two different repeating units (Fig. 1, A and B, and Table 1). Beyond the N-terminal signal sequence, GARP contains very few hydrophobic residues, suggesting that it does not contain stable folded domains. Indeed, protein disorder analysis using the program DISOPRED (55) suggests that the entire sequence of GARP is intrinsically disordered (Fig. 1C).
To determine the localization of GARP, the protein was GFP-tagged and expressed in the blood stage of P. falciparum parasites using the calmodulin promoter. GFP fluorescence was localized at the periphery of the red blood cell, indicating that the protein is recruited either to the plasma membrane or the adjacent spectrin cytoskeleton of the infected erythrocyte (Fig. 1D). Quantification of relative fluorescence intensity indicated a 3.27 ± 0.86-fold increase in fluorescence intensity at the erythrocyte periphery relative to the cytoplasm (see supplemental Fig. 1 and Table 2 for additional images and quantification of all parasite lines).
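The enrichment measure reported above can be illustrated with a short sketch. The quantification in the paper was performed in ImageJ as described in its supplemental material; the following is only a minimal NumPy approximation in which the periphery is modeled as an annulus around a circular cell, with the center and radii supplied by the user.

```python
import numpy as np

def fold_enrichment(image, cx, cy, r_outer, ring_width):
    """Mean GFP intensity in a peripheral ring divided by the mean
    intensity of the enclosed cytoplasmic region (a simplification of
    the ImageJ-based measurement described in the supplemental data)."""
    yy, xx = np.indices(image.shape)
    dist = np.hypot(xx - cx, yy - cy)
    ring = (dist <= r_outer) & (dist > r_outer - ring_width)
    interior = dist <= (r_outer - ring_width)
    return image[ring].mean() / image[interior].mean()
```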
Because GARP is composed mainly of repetitive, low complexity, and intrinsically disordered sequences, it is likely that at least some of these sequences constitute novel modules that can target a protein to the periphery of the infected cell. To test this, fragments encoding the three lysine-rich repeating sequences were GFP-tagged and fused to the N-terminal signal sequence and PEXEL/HT motif of the protein REX3 (residues 1-61), which has been used previously to mediate protein export (Fig. 1, E-G) (41,43). This REX3 fragment alone does not target proteins to the erythrocyte periphery (supplemental Fig. 1B). The first lysine-rich repeat sequence contains a three-residue motif that is repeated 15 times. The consensus sequence of the repeated motif, defined by the program XSTREAM, is EKK. The first residue in this motif varies (represented by E, D, H, or K residues), but the two lysine residues are highly conserved (Fig. 1B). A GFP fusion protein containing the first lysine-rich repeat region (GARP(119-163)) is efficiently exported and localized to the periphery of the infected erythrocyte (Fig. 1E). GARP(253-340), which contains the second lysine-rich repeat comprising seven repeats of the degenerate amino acid sequence E-KE-K-KKQ- (where a hyphen indicates that a gap is most commonly found at a particular position), is also efficiently localized to the erythrocyte periphery (Fig. 1, B and F). Similarly, GARP(372-446), encompassing the third and fourth repeats, which are immediately adjacent and comprise nine repeats of the sequence EEHKE followed by five repeats of the sequence KGKKD, also exhibits a clear localization at the periphery of the infected erythrocyte (Fig. 1, B and G). Conversely, the acidic C terminus of GARP, GARP(535-673), remains in the erythrocyte cytosol (Fig. 1H). GFP accumulation in the food vacuole is also seen in some parasites, probably due to endocytosis of the erythrocyte cytoplasm by the parasite. This is also seen in other parasite lines but is generally less apparent when proteins are localized to the erythrocyte periphery because this probably reduces the efficiency with which these proteins are endocytosed. Likewise, the uncharged N terminus of the protein, GARP(50-118), is not peripherally targeted (Fig. 1I). Expression of all proteins was confirmed by Western blotting (Fig. 1K). The full-length GARP protein appears as a blurred band, and most constructs migrate at a mass higher than expected; this is probably due to the highly charged and repeating nature of the proteins. Taken together, these data indicate that at least three lysine-rich repeating and intrinsically disordered regions within GARP are sufficient to form targeting modules that localize to the erythrocyte periphery.
The Targeting Efficiency of Lysine-rich Repeat Sequences Is Length-dependent-Because each periphery-targeting sequence of GARP is repetitive in character, we tested whether the length of the lysine-rich sequence affects its targeting efficiency. The first lysine-rich repeat sequence of GARP was truncated from 45 residues to 30 and 15 residues, containing 15, 10, and 5 repeats, respectively (Fig. 2). An additional linker sequence of 12 residues was inserted between GFP and the GARP fragments to ensure that proximity to GFP did not compromise potential interactions of the lysine-rich fragments. As expected, GARP(119-163), which encodes all 15 repeats, is localized at the erythrocyte periphery, indicating that the addition of the linker sequence does not alter the targeting function of the first lysine-rich repeat sequence (Fig. 2, A and D). GARP(134-163), which contains only 10 repeats, is also localized to the erythrocyte periphery, but targeting is less efficient (Fig. 2, B and D); fluorescence intensity at the erythrocyte periphery relative to the erythrocyte cytoplasm is reduced. The shortest construct, GARP(149-163), only encodes five repeats and is not efficiently recruited to the periphery; the protein is predominantly localized diffusely in the erythrocyte cytoplasm (Fig. 2, C and D). Expression of each of the proteins was confirmed by Western blotting (Fig. 2E).
These data show that multiple repeats are required for the targeting of lysine-rich sequences to the erythrocyte periphery and that the efficiency of targeting increases as the number of repeats increases. In the context of the first GARP repeat, a sequence of ~30 amino acids in length is necessary for robust peripheral targeting.

FIGURE 1. GARP is targeted to the erythrocyte periphery by three lysine-rich repeat regions. A, representation of the GARP protein. The lysine-rich repeating regions are highlighted in blue and labeled R1-R4, and the number of repeating units in each is indicated. The acidic C terminus is shown in red. The export sequence is colored purple and represents both the signal sequence and PEXEL/HT motif. B, sequence logos for the four lysine-rich repeats; residue position is shown on the x axis, and conservation is indicated on the y axis (bits). C, disorder prediction for GARP (using DISOPRED (55)). Amino acids are considered disordered if they have a confidence score >0.5, represented by a dotted line. D-I, GFP-tagged full-length GARP and truncations expressed using the calmodulin promoter in P. falciparum parasites. J, GFP-tagged full-length GARP expressed using the GARP promoter. GFP fluorescence and phase-contrast images are shown on the left and right, respectively. A representation of each construct is shown below. Scale bar, 2 μm. For quantification of fluorescence, see supplemental Fig. 1 and Table 2. K, anti-GFP Western blot (top), with anti-HAP used to confirm equal loading (bottom).

TABLE 1. P. falciparum proteins with charged repeat sequences predicted to target to the erythrocyte periphery. Proteins selected to be GFP-tagged and expressed in P. falciparum are highlighted in gray. Asterisks indicate tested sequences that did not localize to the erythrocyte periphery. The consensus sequence, position within the protein, repeat unit length, number of repeat units, and the error from consensus were defined by XSTREAM (110). Non-integer numbers of repeat units indicate degeneration at the ends of repetitive sequences. Theoretical pI calculated by PROTPARAM is shown for each fragment (112).

TABLE 2. Quantification and statistical analysis of GFP fluorescence at the periphery of infected erythrocytes. The fold difference in fluorescence intensity at the erythrocyte membrane relative to the cytosol was calculated as described in the legend to supplemental Fig. 1. Statistical analysis using one-way analysis of variance was performed, and multiple comparisons were made between each parasite line and a line expressing GFP-tagged REX3 only. Images of 20 parasites were quantified per parasite line (n = 20). p values and levels of significance are indicated, from not significant (ns) to extremely significant (*** and ****). FL, full-length.
Expansion of Repeating Lysine-rich Sequences Can Generate Sequences with a Targeting Function in Exported Parasite Proteins

Repetitive DNA sequences are highly mutable and are prone to expansion and contraction (4). Given the preceding data, this suggests that sequences with a peripheral targeting function may arise de novo simply by expansion of short non-functional lysine-rich motifs.
To test whether this phenomenon can be observed over evolutionary time, we compared the GARP sequences of P. falciparum with those of closely related Plasmodium species (56, 57). The P. falciparum and Plasmodium reichenowi genes encoding GARP are syntenic; the latter also encodes an exported protein that contains four lysine-rich repeats and a C-terminal acidic sequence. Whereas the first lysine-rich repeat of the P. falciparum protein corresponds to 15 copies of the (E/D/K/H)KK motif, the first lysine-rich repeat of PrGARP contains only five repeats conforming to this consensus (Fig. 3A). Instead, in the P. reichenowi protein, a more acidic DE(T/K) repeat has expanded in this region (Fig. 3A). Analysis of the equivalent Plasmodium gaboni GARP sequence indicates that yet another repeat motif, (H/D/N)KN, has expanded in addition to four repeats of the (E/D/K/H)KK motif (57).
Although the second GARP repeat sequence is similar in all three parasite species, the third and fourth sequences in the P. gaboni protein have not expanded and comprise only one or two highly degenerate repeats (Fig. 3A).
To test whether the expansion of the first, third, and fourth repeat sequences has led to the formation of functional targeting sequences in P. falciparum GARP, the localization of GFP fusion proteins derived from these sequences from different species was compared. Consistent with this model, the GFP-tagged first repeat from P. falciparum GARP (PfGARP(119-163)) is localized to the erythrocyte periphery (Fig. 3, B and F), but the equivalent GFP-tagged P. reichenowi GARP fragment (PrGARP(71-130)), which contains fewer lysine-rich repeats, is diffusely localized in the erythrocyte cytoplasm (Fig. 3, C and F). Likewise, the region of PfGARP comprising the third and fourth repeats (PfGARP(372-446)) is localized to the red cell periphery (Fig. 3, D and F), but the equivalent region from the P. gaboni protein (PgGARP(381-412)) is not (Fig. 3, E and F). Anti-GFP Western blotting confirmed the expression of proteins at the expected size (Fig. 3G).
Although the sequence of the common ancestor of these proteins is not known, these experiments suggest that expansion of non-functional, short lysine-rich repeats can lead to the formation of novel protein modules that can direct the localization of exported parasite proteins within the infected erythrocyte.
Lysine-rich Repeat Regions from Multiple Exported P. falciparum Proteins Confer Peripheral Localization in the Infected Erythrocyte

Many Plasmodium proteins contain repetitive sequences enriched in charged residues. To investigate whether sequences similar to those in GARP are capable of targeting to the erythrocyte periphery, we identified putative exported proteins, characterized by an N-terminal signal sequence or transmembrane domain and an RXL motif, that also contain repeating sequences ≥30 residues in length with a lysine content ≥20%.
Lysine-rich and repetitive sequences were identified in exported protein sequences using a sliding window algorithm and the program XSTREAM, respectively. Thirty-five sequences, including those within GARP, were found to conform to the above criteria, with some proteins containing multiple repeating lysine-rich sequences (Table 1).
Sequences encoding lysine-rich repeat sequences from 10 proteins (Table 1, highlighted) were expressed as GFP fusion proteins, and their localization was assessed by fluorescence microscopy. PF3D7_1102300 protein, like GARP, is predicted to be entirely intrinsically disordered; the majority of the sequence is lysine-rich and repeating (Fig. 4A). A fusion protein that included the N terminus of REX3, GFP, and the lysine-rich sequence of PF3D7_1102300 comprising residues 121-415 (PF3D7_1102300(121-415)) was expressed in parasites. Within the infected erythrocyte, GFP fluorescence was localized at the periphery of the infected cell (Fig. 4A). GEXP12 (gametocyte-exported protein 12) contains an N-terminal PRESAN domain belonging to the PHISTc family and a C-terminal lysine-rich sequence. A similar pattern of peripheral GFP fluorescence is seen in erythrocytes infected with parasites expressing an exported GFP protein that includes this fragment (GEXP12(231-370)) (Fig. 4B); some brighter foci of fluorescence are also seen in some cells.
A number of proteins known to target the erythrocyte cytoskeleton via defined motifs in non-repeating sequences also contain lysine-rich repeating sequences that have not previously been shown to function as independent targeting domains in vivo. The lysine-rich repeat regions of the PHISTb proteins LYMP (LYMP(419-528)) and PF3D7_1476200 (PF3D7_1476200(443-512)) and the PHISTa protein PF3D7_0402000 (PF3D7_0402000(305-428)) were expressed as GFP fusion proteins; peripheral GFP fluorescence was seen in erythrocytes infected with all three parasite lines (Fig. 4, C-E, respectively). A GFP fusion protein encompassing the lysine-rich region of the PHISTb/c protein PF3D7_1201000 (PF3D7_1201000(292-397)) exhibited a weak localization at the periphery that was visible in only a fraction (50-80%) of infected cells (Fig. 4F).
The N terminus of MESA contains a 20-residue cytoskeleton-binding MEC motif (51). The remainder of the MESA sequence consists of various charged repetitive sequences, three of which have a lysine content of >20%. The second lysine-rich repeat sequence and its flanking sequence have duplicated to form the third repeat. A GFP fusion protein that contains both of these sequences (MESA(850-1147)) also localizes to the erythrocyte periphery (Fig. 4G). Similarly, KAHRP contains an N-terminal histidine-rich sequence that is sufficient to target to the erythrocyte periphery (58) but also contains two lysine-rich repeat regions that are important for protein function (38). A GFP fusion protein encompassing the first of the lysine-rich repeats (5′ repeats) is also targeted to the periphery of the infected erythrocyte (Fig. 4H).
Hyp12 contains a lysine-rich C-terminal sequence; the repeats in the sequence are highly degenerate. When fused to GFP in the absence of other sequences, the repeat sequence localizes to the erythrocyte periphery (Fig. 4I).
Protein PF3D7_0114200 is predicted to contain a C-terminal transmembrane domain as well as a lysine-rich sequence. The lysine-rich sequence was fused to REX3:GFP and expressed in parasites (PF3D7_0114200(97-420)). In this case, the fluorescence remained localized within the cytosol of the erythrocyte, with no peripheral targeting (Fig. 4J). PF3D7_1149100.1 contains six repetitions of a 40-residue motif, but the lysine content of this sequence is only 17%, and this fragment also remained in the erythrocyte cytosol (Fig. 4K). Additionally, the C-terminal lysine-rich repeat sequence (3′ repeats) from KAHRP (KAHRP(540-600)) does not localize to the cell periphery despite having a lysine content of 20% (supplemental Fig. 1W). Although it is difficult to interpret a negative result, this suggests that a certain threshold of lysine residues is required for peripheral localization within the erythrocyte and that the distribution of residues within repeats may also be important. In the case of the KAHRP 3′ repeats, the lack of peripheral targeting could also be due to partial degradation of the protein because several bands are seen on Western blots of parasites expressing this protein (Fig. 4L).
These data indicate that many diverse repetitive lysine-rich sequences, in which the size of the repeating unit can vary from 3 to 30 residues in length, have a propensity to localize to the periphery of the infected erythrocyte. Of the 11 repetitive sequences with a lysine content >20% that were tested, nine were localized to the erythrocyte periphery. Although many of the repeating sequences contain both acidic and basic residues, most sequences capable of targeting the erythrocyte periphery had a theoretical isoelectric point value of >9 (Table 1). The two exceptions, MESA (pI of fragment, 4.90) and the PHISTb/c protein PF3D7_1201000 (pI of fragment, 4.71), both display the least prominent peripheral targeting, and the aspartate-rich repeats of PF3D7_0114200 (pI 5.12) remained entirely cytosolic. Acidic residues may therefore interfere with the peripheral localization of some lysine-rich repeating sequences.
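The isoelectric point screen can be reproduced approximately with Biopython's ProtParam module, a programmatic stand-in for the PROTPARAM tool cited under "Experimental Procedures"; the fragment below is a hypothetical (E/D/K/H)KK-style repeat, not one of the sequences in Table 1.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

fragment = "EKK" * 15  # hypothetical lysine-rich repeat, for illustration only
pi = ProteinAnalysis(fragment).isoelectric_point()
lys_frac = fragment.count("K") / len(fragment)
print(f"pI = {pi:.2f}, lysine content = {lys_frac:.0%}")
# In this data set, fragments with pI > 9 were those most often
# targeted to the erythrocyte periphery.
```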
Having determined that lysine-rich repetitive sequences, when fused to GFP, can localize to the periphery of the infected erythrocyte, we next tested whether these sequences function similarly in the context of the corresponding full-length proteins. GFP-tagged PF3D7_1102300, GEXP12, PF3D7_0402000, and PF3D7_1201000 were expressed, and all showed peripheral localization (Fig. 5, A-D, respectively). PF3D7_1201000 showed a very weak localization to the cell periphery, which is similar to the localization of the isolated lysine-rich fragment; GFP fluorescence was also accumulated in the parasitophorous vacuole in this case. LYMP, MESA, and KAHRP have previously been localized to the periphery of infected cells by immunofluorescence (39,45,59,60). GFP-tagged PF3D7_1476200, when expressed from the calmodulin promoter, has previously been localized to the periphery of the infected erythrocyte (43). When expressed from its own promoter, the protein localization is similar (Fig. 5E). Similarly, GARP, when expressed from its own promoter, is also peripherally localized (Fig. 1J). Detection by Western blotting of this protein is variable; smeared bands and prominent fragments of the protein are often detected (Fig. 1K). Transcripts of GEXP12 and PF3D7_1102300 are enriched in gametocyte stage parasites relative to the asexual stage (61). To test whether lysine-rich sequences can also target proteins to the erythrocyte periphery in this life cycle stage, we expressed GFP-tagged PF3D7_1102300 from its own promoter. Within a mixed culture, most brightly GFP-expressing parasites were gametocytes (Fig. 5F and supplemental Fig. 1, AJ). The GFP localization is consistent with the protein targeting to the periphery of the gametocyte-infected cell. Expression of proteins was confirmed by Western blotting (Fig. 5J). Gametocyte-enriched parasites were purified for Western blots of PF3D7_1102300, which was detected as a smeared band at a molecular weight higher than expected. Other proteins were detected at approximately the expected sizes.
The Targeting Function of the Lysine-rich Sequence in Hyp12 Is Masked by an Acidic Sequence-We also localized GFP-tagged full-length Hyp12 protein. The lysine-rich fragment of Hyp12 is efficiently recruited to the periphery of the red cell (Fig. 4I). By comparison, the full-length protein with either a C- or N-terminal GFP tag is not efficiently recruited to the cell periphery (Fig. 5, G and H, respectively). This localization for the full-length protein has also been described previously (62).
Hyp12 contains a C-terminal lysine-rich sequence but also a highly acidic N-terminal sequence. The acidic sequence is also repetitive and is predicted to be intrinsically disordered. To test the possibility that this sequence is able to inhibit the targeting function of the lysine-rich sequence, the C terminus of Hyp12 protein lacking the acidic sequence was expressed. This protein is robustly recruited to the cell periphery, suggesting that the acidic sequence masks the targeting function of the lysine-rich sequence within this protein (Fig. 5, I, K, and L).
Variation in Length between Lysine-rich Repeat Regions in Different P. falciparum Strains-The length of repeat sequences often varies between different parasite strains (63), and the preceding experiments suggest that variation in length of lysine-rich repeats may influence the efficiency with which these sequences can target proteins to the erythrocyte periphery. To determine the extent of repeat length variation seen in lysine-rich repeat sequences, we analyzed sequences from the genomes of several laboratory strains of parasites (3D7, DD2, HB3, IT, and 7G8) as well as 11 parasites isolated from infected people from diverse geographic locations ("long read" sequence data generated by the Pf3k consortium was used for these analyses to ensure the correct assembly of repetitive regions).
Significant variation in repeat number is seen in many lysine-rich targeting sequences. The C-terminal repeating sequence of LYMP, the first repeat of GARP, and the PHISTa protein PF3D7_0402000 contain 5-7, 12-17, and 9-14 copies of repeating motifs, respectively (Fig. 6, A-C). Although unlikely to lead to a complete loss or gain of peripheral localization, these changes may modulate the targeting efficiency of these protein sequences. The C-terminal repeat region of PF3D7_1102300 contains either 13 or 14 copies of the repeat motif EREKREKKEKE, but the repeat sequences of PHISTb protein PF3D7_1476200, PHISTc protein GEXP12, and Hyp12 are invariant (Fig. 6, D-G). The 5′ repeats of KAHRP do not vary, but variations in repeat number are observed for the 3′ repeats (Fig. 6H), as has been reported previously (64,65). In 3D7 parasites, the protein PF3D7_1201000 contains two PRESAN domains, which are separated by 18 units of the sequence DEKEK. In all other parasites, the repeat unit number has increased, in some cases by as much as 2-fold (Fig. 6I).
In 3D7 parasites, MESA contains five repeat sequences; all except for the second repeat sequence vary significantly in length. In the 3D7 genome, the sequence encoding the third repeat region, which is itself variable in length (Fig. 6J), is duplicated to form the fourth repeat. In other genomes, the sequence is further duplicated, resulting in three or four copies of this repeat sequence and its flanking regions. GFP-tagged lysine-rich sequences from both MESA and PF3D7_1201000 display a weak fluorescence signal at the erythrocyte periphery, and duplication and extension of the repeat regions may increase the targeting efficiency of these sequences.
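One crude way to compare repeat numbers between strains is to count the longest uninterrupted run of a consensus motif with a regular expression; the sketch below is our own illustration (the paper's repeat detection used XSTREAM) and encodes the degenerate (E/D/K/H)KK motif of the first GARP repeat applied to a hypothetical sequence.

```python
import re

def count_tandem_copies(seq: str, motif_pattern: str) -> int:
    """Number of motif copies in the longest uninterrupted tandem run."""
    runs = re.findall(rf"(?:{motif_pattern})+", seq)
    if not runs:
        return 0
    longest = max(runs, key=len)
    return len(re.findall(motif_pattern, longest))

seq = "MNA" + "EKK" * 6 + "DKK" * 4 + "HKK" * 5 + "SSA"  # hypothetical sequence
print(count_tandem_copies(seq, "[EDKH]KK"))  # -> 15
```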
Peripheral Targeting of Lysine-rich Repeating Sequences Is Conserved between Plasmodium Species-To investigate whether the targeting of lysine-rich repeat regions to the erythrocyte periphery is conserved, we also searched other parasite genomes for putative exported proteins that contain lysine-rich repeating sequences. The proteins predicted to contain sequences with a targeting function are shown in supplemental Table 1. The largest numbers of potential periphery-targeting sequences were found in the P. reichenowi genome, with 20 proteins containing lysine-rich repeats, most of which are syntenic to those identified in P. falciparum. The genomes of three closely related species that infect primates, Plasmodium knowlesi, Plasmodium vivax, and Plasmodium cynomolgi, contained 19, 15, and 6 proteins containing lysine-rich repetitive regions, respectively, whereas fewer sequences were predicted for Plasmodium species infecting rodents: Plasmodium yoelii, Plasmodium chabaudi, and Plasmodium berghei (supplemental Table 1).
To test whether lysine-rich sequences from parasites other than P. falciparum have targeting functions, we tested the localization of the P. knowlesi protein PKNH_1325700 in P. falciparum-infected erythrocytes. This protein contains a PEXEL/HT sequence, RSLSV, and two repetitive lysine-rich stretches at its C terminus (Fig. 7A). Full-length PKNH_1325700 was efficiently exported to the erythrocyte, where the GFP signal appears as a partially punctate distribution around the periphery of the red blood cell (Fig. 7B). In younger parasites, fewer of these puncta were present, and a continuous line of fluorescence was apparent around the periphery of the cell (Fig. 7C). To test whether the lysine-rich sequence alone is able to target to the erythrocyte periphery, REX3 and GFP were fused to residues 303-445 of PKNH_1325700; this includes the first and second lysine-rich repeat regions, which have lysine contents of 12.5 and 40%, respectively. This GFP-tagged protein formed a continuous ring at the erythrocyte periphery (Fig. 7D), indicating that lysine-rich sequences from multiple parasite species can form modules with a targeting function. Anti-GFP Western blotting confirmed the expression of proteins at the expected size (Fig. 7E).
A Conserved Protein Family Containing an EMP3-KAHRP-like Domain and Expanded Repeated Sequences-Notably, PKNH_1325700 also contains an N-terminal 70-residue sequence, which is predicted to form a folded domain (Fig. 8A) and is homologous to the N terminus of P. falciparum KAHRP (41). Although the repeating motifs found in the C-terminal sequences of PKNH_1325700 and KAHRP are not related, they are similar in that they are lysine-rich, and both sequences target to the erythrocyte periphery. The presence of an N-terminal conserved domain and C-terminal lysine-rich repeating sequences in both KAHRP and PKNH_1325700 suggests that these proteins may to some extent be functionally related.
Given that KAHRP is a key cytoskeleton-associated protein involved in sequestration of P. falciparum-infected erythrocytes, we searched for proteins that have similar domain architecture in other species. In P. falciparum, the conserved N-terminal domain is also found at the N terminus of the erythrocyte cytoskeleton-associated PfEMP3 protein; the remainder of this protein is also formed of repeating sequences, including a central lysine-rich region (Fig. 8C). Because the domain is present in both EMP3 and KAHRP, we refer to it as the EMP3-KAHRP-like (EKAL) domain. KAHRP-like proteins have previously been identified in some species (41); we identify additional EKAL domain-containing proteins in the genomes of the primate-infecting parasites P. reichenowi, P. knowlesi, P. vivax, P. cynomolgi, Plasmodium fragile, Plasmodium ovale, and Plasmodium inui (Figs. 8 (B and C) and 9). These proteins can be grouped into seven branches; five branches are closely related to PfKAHRP, whereas two represent homologs of the EMP3 protein (Fig. 8C). Remarkably, each parasite genome encodes at least one protein with a KAHRP-like EKAL domain that is followed by a C-terminal lysine-rich repeating sequence that may target the protein to the periphery of the infected host cell (Fig. 8C). Although sequence homology in PfEMP3- and KAHRP-like proteins is largely restricted to the EKAL domain, it is likely that in many cases, the expansion of divergent repetitive lysine-rich sequences has generated protein modules that contribute to the peripheral localization of this protein family in the infected erythrocyte.
Discussion
Repetitive sequences in many organisms are crucial for protein function (66-73) (reviewed in Ref. 4), yet there are currently few functions assigned to repeats in Plasmodium. We show that several proteins from P. falciparum contain lysine-rich tandemly repeating sequences that confer a peripheral localization in the infected erythrocyte. Four of the nine proteins identified were previously uncharacterized, including GARP, which contains three distinct lysine-rich repeat sequences with a targeting function.
The rapid expansion and contraction of repeating sequences suggests that they can contribute significantly to protein evolution and the generation of novel functional modules (4,63,74,75). Within PfGARP, decreasing the number of repeating units within the N-terminal lysine-rich sequence proportionally decreases the efficiency of targeting. Given this, it is likely that exported parasite proteins can rapidly evolve novel localization domains by expanding short low affinity lysine-rich motifs to create high avidity targeting sequences. Comparison of the repeating sequences of P. falciparum GARP with those found in GARP from P. reichenowi and P. gaboni provides two examples of such repeat expansion occurring. In the first repeating sequence of P. falciparum GARP, the repeat EKK has expanded to generate a periphery-targeting sequence, whereas in P. reichenowi, a more acidic repeat has expanded, which does not efficiently localize to the periphery.
Smaller changes in repeat number may also subtly modulate the targeting efficiency of lysine-rich repeating sequences (Fig. 10). Within proteins that modulate key properties of the host cell, such as rigidity, cytoadhesion, and nutrient import, such changes could confer a selective advantage. Indeed, correlation between repeat sequence length and phenotype has been observed in other organisms (68-70), and the number of repeat motifs within the functionally important C-terminal domain of P. falciparum RNA polymerase II also varies between isolates (76). Analysis of lysine-rich targeting sequences from laboratory and field strains of P. falciparum parasites confirms that repeat units can be both lost and gained from these sequences. There is a high level of conservation between repeat motifs within targeting sequences; this is a common feature of disordered repetitive sequences and suggests that the repeats were recently expanded and may be particularly dynamic (77,78). This may allow rapid adaptation of parasites under selective pressure.
Although Hyp12 contains a lysine-rich sequence that targets the cell periphery, the targeting function is masked by an acidic repetitive sequence. Expansion or contraction of either sequence in Hyp12 could lead to a change in protein localization. Contraction of the acidic sequence might reduce the inhibitory propensity of this sequence, whereas expansion of the lysine-rich sequence might allow it to overcome the inhibition by the acidic sequence. Over evolutionary time, the localization of this protein may be determined by two "competing" repetitive, low complexity, disordered sequences. It remains unclear whether there is a physiological stimulus that might unmask the lysine-rich sequence in Hyp12; proteolytic cleavage or changes in ionic composition or temperature could potentially regulate this process. Notably, deletion of the gene encoding Hyp12 leads to a change in infected cell rigidity (79).

FIGURE 7. The P. knowlesi protein PKNH_1325700 contains a C-terminal periphery-targeting repetitive sequence and an N-terminal domain also found in PfKAHRP. A, representation of P. falciparum KAHRP (top) and P. knowlesi protein PKNH_1325700 (bottom), with lysine-rich repeat regions shown in blue and their consensus motifs shown above. The first repeat of PKNH_1325700 contains 12.5% lysine residues and is colored light blue. The conserved region found in both proteins is shown in yellow, the histidine-rich region in orange, and the export sequence in purple. B and C, P. falciparum parasites expressing the GFP-tagged full-length PKNH_1325700 in late parasites and early parasites, respectively. D, GFP-tagged C-terminal repeat region of PKNH_1325700. A schematic of the protein, a GFP fluorescence image, and a phase-contrast image are shown from left to right. Scale bar, 2 μm. E, Western blots with anti-GFP (top). Anti-HAP was used to confirm equal loading (bottom).
Targeting of proteins by lysine-rich repeating sequences is not restricted to P. falciparum proteins. The protein encoded by the P. knowlesi gene PKNH_1325700 contains an N-terminal EKAL domain, with homology to the N terminus of P. falciparum KAHRP (41), and two adjacent lysine-rich repeat sequences at its C terminus. Although the repeated motifs differ from those in P. falciparum KAHRP, the lysine-rich repeats of both proteins localize uniformly to the erythrocyte periphery. The full-length PKNH_1325700 protein, however, appears as a number of peripherally located disperse dots, suggesting that the N terminus is prone to self-association. KAHRP is a key component of the electron-dense cytoadherence-related knob structures that are seen in P. falciparum-infected cells and that are also observed in P. fragile-infected rhesus monkey erythrocytes (80). However, although P. vivax- and P. knowlesi-infected cells adhere to specific ligands (81-83), knob-like structures are not seen on erythrocytes infected with these parasites. In addition to PKNH_1325700, we find at least one KAHRP-like gene characterized by an EKAL domain and a repetitive lysine-rich sequence in the genomes of P. reichenowi, P. vivax, P. ovale, P. cynomolgi, P. fragile, and P. inui. Knob structures cluster PfEMP1 proteins in P. falciparum-infected cells. Although parasites other than P. falciparum and P. reichenowi do not express PfEMP1 proteins, other variant surface antigens have been identified in other species (84); it is possible that the KAHRP homologues in these species play a role in clustering of these proteins on the surface of infected cells in structures that are not morphologically distinctive or electron-dense. Notably, EKAL domains and repeating sequences are also found in PfEMP3 and its homologues. Like KAHRP, PfEMP3 is involved in PfEMP1 trafficking, localizes to the Maurer's clefts and cytoskeleton of infected cells, and affects infected cell rigidity (23,25,26,48). Expansion of different repeat sequences may represent a means of diversifying the function of EKAL domain-containing proteins.

FIGURE 8 legend (partial). P. ovale proteins were assembled de novo and have been named EKAL1-4. P. fragile and P. inui proteins are named according to their assigned gene names preceded by PFR or PI, respectively. Numbers at each node represent quartet puzzling (QP) support values predicted by TREEPUZZLE, where values represent the reliability of groupings (118). Right, diagrams representing each protein sequence, with EKAL domains in yellow. Export sequences are shown in purple (signal sequence and PEXEL/HT motif). Many proteins contain lysine-rich tandemly repeated sequences (blue) as well as repeating sequences that do not contain >20% lysine (green). The first repeating sequence of PKNH_1325700 is shown in light blue because only 12.5% of residues are lysine. The histidine-rich regions of P. falciparum and P. reichenowi KAHRP are shown in orange. Schematics are approximately to scale, with PVX_003525, Pf3D7_0201900, and PO_EKAL2 scaled down by half. PCYB_001100 and PFR A0A0D9QJA3 sequences are truncated due to gaps in the assembled sequences.
Although several of the identified lysine-rich targeting sequences are found in proteins with known interacting partners, the identity of the binding partner of the lysine-rich sequences remains unclear. We show that a fragment of KAHRP encompassing the 5′ lysine-rich repeats is sufficient to target to the erythrocyte periphery in vivo. This region is important for the cytoadhesion-modulating function of the protein (38); however, the binding partners of the KAHRP repeating sequences remain controversial. It has been suggested that the 5′ lysine-rich repeat region interacts with PfEMP1 (86, 87), but this interaction was not observed in other studies (88). Fragments of KAHRP that include the 5′ lysine-rich repeat sequence also bind to spectrin in vitro (89). Although the repeat sequence alone was not sufficient for this interaction under previous experimental conditions (89, 90), recent work indicates that the 5′ repeats are sufficient for spectrin binding. In vitro, the C terminus of LYMP interacts with inside-out erythrocyte vesicles (39) and with purified band 3 (44). It is unclear whether the lysine-rich repeats, which are located in the final 100 residues of this fragment, contribute to this interaction, but a fragment comprising only the lysine-rich repeats of LYMP does not bind to inside-out erythrocyte vesicles in vitro (39). In MESA, the lysine-rich sequence shown here to localize to the erythrocyte periphery was also shown to be insufficient for binding inside-out erythrocyte membranes (51). This may indicate that these lysine-rich repeats interact with Plasmodium proteins or cytoskeletal components that are post-translationally modified during infection (91,92). Given the diversity of lysine-rich repeat sequences that can target to the erythrocyte periphery, it is possible that they interact with different host or parasite proteins.

FIGURE 9 legend (partial). Alignment of EKAL domains from P. falciparum, P. reichenowi, P. knowlesi, P. vivax, P. cynomolgi, P. fragile, P. inui, and P. ovale. Proteins were aligned using T-COFFEE (113). Residues with >70% identity or similarity are shaded in dark gray and light gray, respectively, using Multiple Align Show (114). A black line above the alignment represents the highly conserved EKAL domain, and a dotted line represents an extended conserved domain used for assembling phylogenetic trees.
Several proteins that contain lysine-rich targeting sequences also contain other well characterized cytoskeleton-targeting domains, suggesting that they cross-link multiple components of the erythrocyte cytoskeleton or membrane. Indeed, LYMP functions by linking PfEMP1 and band 3 via its PRESAN domain and C terminus, respectively (44,45). A lysine-rich repeating C terminus is also seen in other proteins with PRESAN domains capable of targeting the periphery, including PF3D7_0936800 (45) and PF3D7_1476200 (43). Two other uncharacterized proteins with peripherally localized lysine-rich repeating sequences also contain PRESAN domains: the PHISTc protein GEXP12 and PF3D7_1201000, which contains N- and C-terminal PRESAN domains from the PHISTb and -c families, respectively. It is possible these proteins play roles similar to that of LYMP at the erythrocyte periphery. However, not all PRESAN domains interact with PfEMP1; the PHISTa protein PF3D7_0402000 binds to band 4.1 (52). Both PF3D7_0402000 and MESA contain lysine-rich repeat sequences capable of associating with the erythrocyte periphery in addition to band 4.1-binding domains. Although previous immunofluorescence experiments suggest that PF3D7_0402000 co-localizes with band 4.1, a significant fraction of the protein was localized in the parasitophorous vacuole. This is not consistent with the localization that we observe for the GFP-tagged protein; it is possible that the antibody epitope is hidden when the protein is bound to the erythrocyte cytoskeleton (52).
The proteins GARP and PF3D7_1102300 are predicted to be entirely intrinsically disordered, and repeating sequences make up 44 and 66% of the mature proteins, respectively. It is therefore possible that the interaction of the lysine-rich sequences with their target fulfills the function of the protein. Interestingly, expression of GARP is up-regulated in parasites isolated from children with severe malaria (93), and GARP is differentially expressed in parasites selected for adherence to different ligands (94). PF3D7_1102300 is up-regulated during heat shock (40) and also in parasites selected for cytoadhesion (95). Deletion of the genes encoding GARP and PF3D7_1102300 as well as the PHISTa protein PF3D7_0402000 and PHISTb/c protein PF3D7_1201000 does not result in a striking phenotype; however, some decrease in infected cell rigidity is observed (79). Given the similarity between many of the lysine-rich proteins that we have characterized, it is likely that individual genes may be functionally redundant and that deletion of single genes may not be sufficient to reveal a phenotype (79).
Some proteins may also function in the gametocyte stage, during which the rigidity of the infected cell changes (96). GEXP12 transcripts and peptides are detected in both asexual stage parasites and gametocytes (97,98). Because we have used the calmodulin promoter to express GFP-tagged GEXP12, we are only able to assess its localization in asexual stages; this shows that the protein has a propensity to localize to the erythrocyte periphery. Notably, when GFP-tagged PF3D7_1102300 is expressed from its own promoter, the protein is localized to the periphery of gametocyte-infected cells, indicating that proteins containing lysine-rich sequences can also be similarly targeted during this life cycle stage. Given this, it might be expected that GEXP12 would also localize to the cell periphery in the gametocyte stage.
Electrostatic interactions between the basic lysine residues and a negatively charged surface, either protein or lipid, are probably responsible for the peripheral localization of the repeating sequences. Other basic residues may confer a similar localization. A polyhistidine sequence in KAHRP also targets the erythrocyte periphery (58); however, arginine residues are underrepresented in the AT-rich parasite genome (3). Interestingly, despite the high predicted isoelectric points of most of the sequences, many peripherally localized repeats also contain acidic residues, and targeting does not appear to require a strict sequence consensus or repeat length. This makes accurate prediction of sequences with a targeting function difficult. Two lysine-rich proteins tested did not associate with the erythrocyte periphery, and whereas some untested proteins, such as (79,101). The observation that repetitive lysine-rich sequences in Plasmodium can target proteins to the periphery of the infected erythrocyte suggests that such proteins will perform key functions at the host parasite interface. Moreover, the potential for expansion and contraction of these sequences to modulate targeting efficiency or to generate novel targeting sequences suggests that they play important roles in evolution of proteins targeted into the host erythrocyte.
Experimental Procedures
Plasmids and Parasite Transfection-Gene sequences were amplified from P. falciparum (3D7), P. knowlesi (A1H.1), or P. reichenowi genomic DNA and inserted into P. falciparum expression plasmids containing an attP site. Gene expression was controlled by the P. falciparum calmodulin promoter and P. berghei dihydrofolate reductase-thymidylate synthase 3′-untranslated region. Gene sequences encoding full-length proteins were cloned in frame with 3′ GFP and STREPII tags. Constructs with 5′ truncations were fused to a sequence encoding the N-terminal 61 residues of PFI1755c (REX3); this sequence contains the N-terminal signal sequence and PEXEL/HT motif of REX3. These plasmids contained REX3(1-61), GFP, a linker sequence (LESGSGTGASDV), and the lysine-rich sequence-encoding fragment, followed by a STREPII tag. The linker was not included in the following constructs shown in Fig. 1: GARP(50-118), GARP(119-163), GARP(253-340), GARP(372-446), and GARP(535-673). All cloned P. falciparum sequences matched the 3D7 genome sequence; however, two silent base pair mutations were made in the sequences of GARP(134-163) and GARP(149-163) to facilitate cloning through overlap PCR. The P. knowlesi gene PKNH_1325700 contained an insertion corresponding to one repeat of the KKEQA motif in both the full-length and truncated constructs. The P. gaboni GARP fragment was constructed de novo using multiple primers based on a DNA sequence assembled from multiple short sequencing reads (see below for details). Full-length GARP, PF3D7_1102300, and PF3D7_1476200 were also expressed under their own promoters; the PfCAM promoter was replaced with sequences starting 932, 967, and 1084 bp upstream of the start codon for each gene, respectively.
Microscopy-Parasites were fed 1 day before imaging. A drop of culture material in RPMI was placed between a microscope slide and coverslip. Phase-contrast and GFP fluorescence images were acquired at room temperature with a Zeiss Axiovert 200M microscope equipped with an HBO100 lamp and a ×100 oil lens with a numerical aperture of 1.30. Images were taken with an AxioCam MR camera using AxioVision software release 4.8.2. Z-stacks of images were collected and deconvolved by iterative restoration (confidence limit, 95%; iteration limit, 10) using Volocity; a single image from the Z-stack is presented. Images were cropped, and automatic brightness and contrast settings were applied using ImageJ.
Statistical Analysis-The average fluorescence intensity at the periphery relative to the cytoplasm of infected cells was quantified in ImageJ as described in supplemental Fig. 1. Statistical analysis was performed in GraphPad Prism version 7 using ordinary one-way analysis of variance with each parasite line compared with the GFP-tagged REX3(1-61) fragment to establish whether proteins were significantly enriched at the erythrocyte periphery. Fisher's uncorrected least significant difference test was used for multiple comparisons. p values are reported in Table 2. Labels represent significance. ns, *, **, ***, and **** indicate not significant (p > 0.05), p ≤ 0.05, p ≤ 0.01, p ≤ 0.001, and p ≤ 0.0001, respectively.
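For readers reproducing this analysis outside Prism, the comparison can be approximated in SciPy as below; note that Fisher's uncorrected LSD uses the pooled ANOVA error term for its pairwise comparisons, whereas the plain t-tests here are a simplification, and all of the values are randomly generated stand-ins rather than the measured fold enrichments.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical fold-enrichment values, n = 20 per parasite line.
rex3_control = rng.normal(1.0, 0.2, 20)
garp_full = rng.normal(3.27, 0.86, 20)   # mean/SD taken from the text
garp_r1 = rng.normal(2.5, 0.7, 20)       # hypothetical

f_stat, p_omnibus = stats.f_oneway(rex3_control, garp_full, garp_r1)
print(f"omnibus ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.2g}")
for name, line in [("GARP FL", garp_full), ("GARP R1", garp_r1)]:
    t, p = stats.ttest_ind(line, rex3_control)
    print(f"{name} vs. REX3 control: t = {t:.2f}, p = {p:.2g}")
```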
Comparison of GARP Genes from Closely Related Parasite Species-A GARP homologue from P. gaboni was assembled from two incomplete protein coding sequences deposited in the NCBI (GenBank accession numbers KYN95113.1 and KYN95116.1); the connecting region was assembled from sequence reads (GenBank biosamples SAMN04053641 and SAMN04053639) (57). The P. reichenowi GARP gene sequence (PRCDC_0111200) was from PlasmoDB (version 26).
Identification of Putative, Exported, Lysine-rich, Repeating Protein Sequences-Protein coding sequences from P. falciparum, P. vivax, P. knowlesi, P. cynomolgi, P. reichenowi, P. berghei, P. chabaudi, and P. yoelii (17X) were downloaded from PlasmoDB (version 26). Putative exported proteins were identified by the presence of either a signal sequence (defined by SignalP (108)) or a transmembrane domain within the first 100 residues (defined using MPEX translocon TM analysis (109)) and an RXL motif in the 50 residues following the signal sequence/transmembrane segment. Proteins containing more than four transmembrane segments within the coding sequence are unlikely to be exported and were excluded from further analysis.
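The export-candidate filter can be sketched as follows. SignalP and the MPEX transmembrane analysis are external tools; the sketch assumes their outputs are already available as the residue index at which the signal sequence or first transmembrane segment ends (sig_end, a hypothetical input of our own naming), and it scans the following 50 residues for an R-x-L tripeptide.

```python
import re

def is_export_candidate(seq: str, sig_end, n_tm_segments: int) -> bool:
    """Apply the heuristics described above: keep proteins with a signal
    sequence or early TM segment, at most four TM segments overall, and
    an RxL motif within 50 residues of the signal/TM anchor."""
    if sig_end is None or n_tm_segments > 4:
        return False
    window = seq[sig_end:sig_end + 50]
    return re.search(r"R.L", window) is not None
```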
A custom perl script utilizing a sliding window algorithm was used to identify proteins containing stretches of amino acids of ≥30 residues in length with a lysine content of ≥20%. Within the set of lysine-rich sequence fragments, repeating protein sequences were identified using the tandem repeat predictor XSTREAM (110). Parameters for XSTREAM were as follows: minimum word match = 0.6, minimum consensus match = 0.6, maximum period = 30, miss penalty = -3, and gap penalty = -3 (111). Another custom perl script was used to interpret the output of XSTREAM and select proteins in which the sequence region composed of repeats was >30 residues in length. Multiple lysine-rich repeat sequences were found in some proteins. XSTREAM was used to define the consensus sequence of each repeated array, the consensus error value for each repeat array, and the position of the repeated array within each protein. More stringent parameters were used to reduce the number of gaps in the consensus sequence, with minimum word match = 0.6, minimum consensus match = 0.65, miss penalty = -3, and gap penalty = -5. Maximum period value was set to 30 residues unless shorter repeats were apparent within the predicted consensus sequence; other parameters were set to default values. The consensus sequences of the degenerate repeats of Hyp12 and PF3D7_0106600 were defined using less stringent criteria. Theoretical isoelectric point values were predicted by PROTPARAM (112).
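The sliding-window scan itself is simple to reimplement; the authors used a custom perl script, and the following Python sketch applies the same criteria (windows of ≥30 residues with ≥20% lysine, merged into maximal stretches). Repeat detection within the flagged stretches, done with XSTREAM, is not reproduced here.

```python
def lysine_rich_stretches(seq: str, window: int = 30, min_lys_frac: float = 0.20):
    """Return merged (start, end) intervals covering every window-length
    stretch whose lysine content meets the threshold."""
    hits = []
    for i in range(len(seq) - window + 1):
        if seq[i:i + window].count("K") / window >= min_lys_frac:
            if hits and i <= hits[-1][1]:
                hits[-1][1] = i + window   # extend the current stretch
            else:
                hits.append([i, i + window])
    return [tuple(h) for h in hits]

# Hypothetical usage on a GARP-like repeat region:
print(lysine_rich_stretches("A" * 20 + "EKK" * 15 + "A" * 20))
```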
Sequence Analysis of Proteins from Different Parasite Isolates-Protein sequences of lysine-rich proteins from different P. falciparum parasite strains were extracted from unassembled long-read PacBio genome sequencing data obtained from the Pf3k consortium. Five laboratory isolates were included (3D7, DD2, IT, 7G8, and HB3) as well as 11 field isolates from Gabon, Guinea, United Kingdom, Kenya, Mali, Sudan, Senegal, Democratic Republic of the Congo, Togo, and Cambodia. No KAHRP genes were found in the DD2 or Kenyan isolates. LYMP was not found in one of the two Cambodian isolates. All alignments were created with T-COFFEE (113) and represented with "Multiple Align Show" (114). 10 of 141 gene sequences, indicated in supplemental Table 2A, contain frameshift point mutations. It is unclear whether these represent genuine mutations or sequencing errors in database sequences; for the purpose of sequence alignment, the reading frames were restored (see supplemental Table 2A).
Sequence Analysis of the KAHRP Conserved Domain-Proteins with homology to the conserved domain of KAHRP and PfEMP3 were identified by HMMer (115). Additionally, homologous sequences within the P. ovale genome were identified from unassembled sequence reads acquired from the Sanger Institute through the use of the in-built BLAST server. Sequence reads from P. ovale containing EKAL domains were assembled using the SeqMan NGen software (116). Introns were manually annotated within genes from P. ovale, P. fragile, P. inui, and P. cynomolgi where necessary. Potential sequencing errors resulting in frameshift mutations were corrected, and introns were annotated based on known Plasmodium splice sites. These modifications were made in five proteins from P. inui and P. cynomolgi (see supplemental Table 2B for details). Sequences were aligned with T-COFFEE (113) in Jalview (117). Maximum likelihood estimation with TREE-PUZZLE (118) was used to create phylogenetic trees based on an extended conserved domain (see Fig. 9 for details), which were assembled with FigTree version 1.2.4 (119). Secondary structure predictions and disorder predictions were made by PSIPRED (85) and DISOPRED (55), respectively.
The Investigation of the Effect and Mechanism of Sophora moorcroftiana Alkaloids in Combination with Albendazole on Echinococcosis in an Experimental Rats Model
Echinococcosis is a worldwide anthropozoonosis which is highly endemic over large animal husbandry areas in northwestern China. The current clinical therapeutic medicine against echinococcosis is albendazole, although it causes serious side effects in patients. A component of a traditional Chinese herbal medicine, Sophora moorcroftiana alkaloids (SA), is thought to be a potential drug to treat echinococcosis. In order to explore the effect and mechanism of SA treatment against echinococcosis, we established an animal echinococcosis model and treated rats with albendazole alone, alkaloids alone, and combined therapy. The combined treatment showed effective inhibition of parasite infection due to induction of the host response and alleviated liver injury, whereas albendazole caused serious liver problems. The proteomics study revealed that the combined therapy might induce complement activation through the C3, C4, C5, SERPINA1, and SERPINC1 proteins and cell adhesion through the ANXA2, EZR, YWHAB, HSP90AB1, and PRKAR2A proteins, while albendazole treatment could induce liver injury through the CRYAB, YWHAZ, SLC25A24, and HSPA1B proteins that are involved in cell death. In all, we consider that the combinational treatment displayed better therapeutic effects against liver echinococcosis as well as alleviated liver injury, and it could be considered an effective strategy to treat echinococcosis clinically.
Introduction
Echinococcosis is a worldwide anthropozoonosis which is caused by Echinococcus granulosus [1]. In China, it is highly endemic over large animal husbandry areas in the northwestern provinces. An estimated one percent of the farming population in these areas is infected by Echinococcus granulosus. In humans, ingested eggs are mainly distributed to the liver and lung, leading to cystic echinococcosis (CE) and alveolar echinococcosis (AE). CE infection is the leading consequence, responsible for over 98 percent of all echinococcosis cases [2]. The current clinical treatment strategies against echinococcosis are surgery and chemotherapy; other approaches, including gamma-ray treatment, are still limited to the bench level [2,3]. However, surgery is prone to cause residual parasite lesions or unfortunate parasite dissemination through inappropriate operation, leading to disease relapse. Meanwhile, chemotherapy also does not achieve the desired effect. Albendazole is the most common clinical drug used to treat echinococcosis [4]. But it shows poor solubility in the gastrointestinal (GI) tract, resulting in low drug concentrations in the liver. Albendazole also causes serious adverse side effects in patients such as encephalitis syndrome, influenza-like syndrome, allergic purpura, and drug rash. Furthermore, it has been reported that Echinococcus granulosus protoscolices have developed resistance to albendazole [4][5][6]. Thus, it is urgent to develop new therapeutic strategies against echinococcosis.
Sophora moorcroftiana, also known as Tibet S. viciifolia or thorn firewood, is an endemic leguminous shrub widespread in valleys of the Tibet plateau in China. The decoction of its seeds has been commonly used in folk medicine to treat parasitosis by local people for years. The main alkaloid composition of the seed decoction includes oxymatrine, sophora, sophorine, and matrine, which have also been used as emetics, detoxicants, and antiphlogistics and against verminosis in traditional Chinese medicine [7,8]. Clinically, its seed decoction is combined with albendazole to treat echinococcosis [9]. It has been reported that the alkaloids from Sophora moorcroftiana are the potential active ingredients in this folk medicine [7,10]. In the present study, we not only investigated the therapeutic effect of the combinational treatment of Sophora moorcroftiana alkaloids and albendazole against echinococcosis in an experimental rat model, but also explored the underlying molecular mechanism of this strategy by proteomics. First, we evaluated the effect of the combination therapy by measuring several blood biochemical indicators and by histological observation; then, we employed quantitative proteomic assays using isobaric tags for relative and absolute quantitation (iTRAQ), combined with high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS), to detect proteome alterations under the different treatments. Additional bioinformatics analyses were used to analyze the differential proteins (DPs) to investigate the key pathways underlying the mechanism of the combinational treatment. The results showed that the combination therapy was effective in treating echinococcosis in the animal model. More importantly, it was found that the combination therapy leads to complement activation and elevated cell adhesion, while treatment with albendazole alone induced cell death, which might cause hepatic injury.
Materials and Methods
2.1. Materials. Sophora moorcroftiana used in this study was purchased from Linzhi, Tibet. Alkaloids (purity > 90%) were extracted from S. moorcroftiana seeds in our laboratory and prepared for use as described previously [9]. Albendazole was purchased from Zhejiang Wanma Pharma Ltd. Co., Hangzhou, China. The RPMI medium and the IL-2, IL-6, IL-10, IgE, and TNF-α ELISA detection kits were purchased from Invitrogen, USA. The aspartate aminotransferase (AST) activity assay kit and the alanine transaminase (ALT) activity assay kit were obtained from Sigma-Aldrich, USA.
2.2. Protoscolex Collection. Echinococcus granulosus protoscolices were kindly provided by Qinghai Institute for Endemic Disease Prevention and Control, China. The protoscolices were aseptically removed from liver hydatid cysts obtained from cattle and washed several times with saline containing 1500 U/mL penicillin and 1000 U/mL streptomycin [11]. The survival rate of the protoscolices exceeded 95% after these procedures.
2.3. Animal Study. The experimental animal protocols were approved by the Experimental Animal Care and Ethics Committees of Qinghai University. 54 female Sprague-Dawley rats were purchased from the Research Laboratory Center of Gansu University of Traditional Chinese Medicine (Gansu, China). All rats were 10 weeks old with a body weight between 180 g and 200 g (certification number: SCXK (gan) 2011-0001). All rats were randomly divided into two groups: 44 rats in the experiment group and 10 rats in the normal group. The rats in the experiment group were inoculated intraperitoneally with 4,500 viable protoscolices in 0.3 mL RPMI medium, while the rats in the normal group were injected intraperitoneally with 0.3 mL normal saline. The rats were housed under standard conditions (temperature: 18-22 °C, humidity: 50-60%) with free access to food and water. After 30 days [12], four rats from the experiment group were randomly sacrificed for histological observation, in order to confirm successful establishment of the echinococcosis animal model.
The 40 infected rats were divided into four groups (10 rats per group). Rats were administered Sophora moorcroftiana alkaloids (SA) alone (SAT group, 8 mg/kg per day, once a day), albendazole (A) alone (AT group, 20 mg/kg per day, once a day), or the combined treatment (SAT + AT group, 8 mg/kg SA + 10 mg/kg A per day, once a day) by gavage. The rats in the model group (M group) were given an equivalent volume of normal saline. The normal group (N group) of 10 uninfected rats was also treated with normal saline.
All rats were anesthetized and sacrificed under the experimental protocols described above and all efforts were made to minimize animal suffering.
2.4. Blood Indicators Examination. Thirty days after treatment, rats were sacrificed and blood was collected. Serum was obtained by centrifugation. The levels of IL-2, IL-6, IL-10, IgE, and TNF-α were measured with a microplate reader (Bio-Rad, xMark-10483) using ELISA detection kits (Invitrogen, USA). The AST and ALT levels in serum were also detected with the Sigma-Aldrich kits (USA).
2.5. Pathologic Histology Analysis. For pathological analysis, rats were sacrificed and the hydatid cysts were harvested from the peritoneal cavity and liver. The thymus and spleen were also collected. The weights of the hydatid cysts, thymus, and spleen were measured. The thymus index, spleen index, and inhibition rate of cysts were calculated as follows: thymus index = (thymus weight/body weight) × 10; spleen index = (spleen weight/body weight) × 10; inhibition rate of cysts = (weight of cysts in model group − weight of cysts in experiment group)/weight of cysts in model group × 100%.
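Transcribed into code, these three quantities are simple ratios; a minimal sketch follows (the example weights are illustrative, not measurements from the study):

```python
# Direct transcription of the index formulas above; weights in grams.
def organ_index(organ_weight_g, body_weight_g):
    return organ_weight_g / body_weight_g * 10

def cyst_inhibition_rate(model_cyst_g, treated_cyst_g):
    return (model_cyst_g - treated_cyst_g) / model_cyst_g * 100  # percent

print(organ_index(0.45, 190.0))          # thymus or spleen index
print(cyst_inhibition_rate(12.0, 2.4))   # 80.0, i.e. 80% inhibition
```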
To observe histological changes after treatment, liver and spleen were collected, sectioned, and stained with hematoxylin-eosin (H&E). Observation was performed under a microscope.
2.6. Proteomic Analysis. In order to study the molecular mechanism of the SA plus albendazole combination therapy in treating liver echinococcosis, we performed proteomic analysis on animal samples from five experiment groups (SAT group, AT group, SAT + AT group, model group, and normal group). One gram of rat liver from each group (0.1 g per rat) was collected for protein extraction. The protein (100 μg) was then digested with trypsin for 12 h at 37 °C (protein/enzyme = 100/3.3). After iTRAQ (AB Sciex) labeling, equal amounts of labeled peptides from each group were mixed and resolved into 15 fractions by high performance liquid chromatography (HPLC), followed by Q Exactive mass spectrometry (Thermo Fisher Scientific). The resulting MS/MS data were qualitatively and quantitatively analyzed with Mascot 2.3.01 using the following parameters: protein identification against the nonredundant International Protein Index rat protein database (version 3.72); full trypsin digest with a maximum of 1 missed cleavage; peptide tolerance and MS/MS tolerance of 0.05 Da. Scaffold software was used to identify the differential proteins (DPs). Proteins with p < 0.05 and a fold change higher than 1.2 or lower than 0.833 were considered DPs.

[Table 1, "The level of immunological factors in rat serum from different groups," reports IL-2, IL-6, IL-10, IgE, and TNF-α (all in pg·ml−1) for each group; its values were not recoverable from the extracted text.]
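The DP cutoff defined above is a simple predicate; a minimal sketch (with hypothetical example values, not data from the study) follows:

```python
def is_differential_protein(p_value, fold_change,
                            p_cut=0.05, up=1.2, down=0.833):
    """DP rule from the text: p < 0.05 and fold change > 1.2 or < 0.833."""
    return p_value < p_cut and (fold_change > up or fold_change < down)

print(is_differential_protein(0.01, 1.35))  # True: significant and up
print(is_differential_protein(0.01, 1.05))  # False: change too small
print(is_differential_protein(0.20, 0.50))  # False: not significant
```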
2.7. Statistical Analysis and Data Preprocessing. The data are presented as mean ± standard deviation (SD). Statistical comparisons among experimental groups were made by Student's t-tests using SPSS 22.0 software. Differences were considered significant when p < 0.05. GO and KEGG pathway enrichment analyses of the DPs were performed using the Database for Annotation, Visualization, and Integrated Discovery (DAVID) [12].

Results

3.1. Blood Indicators Examination. As shown in Table 1, the level of IL-2 increased when SA was given to rats. The IL-2 level in the SAT + AT group was significantly higher than that in the model group, while no obvious difference was observed between each treatment group and the normal group. For IL-6, the level in the SAT group showed a significant difference compared with the model group, but there was no significant difference among the other groups. The level of IL-10 was also increased when albendazole was administered to rats. When combined with SA treatment, the combinational treatment induced the highest IL-10 expression, which was significantly higher than that in the model group yet not significantly different from that in the normal group. The IgE expression in the treatment groups was obviously lower than that in the model group (p < 0.05) but showed no difference compared with the normal group. In addition, the level of TNF-α among all groups displayed no significant difference.

3.2. Pathologic Histology Analysis. As shown in Table 2, the thymus index was increased when SA was administered to rats, but there was no significant difference compared with the model group. Similarly, the spleen index was also elevated when SA was given. However, the combinational therapy group displayed a significantly higher spleen index value than the model group, while no obvious difference was observed between each treatment group and the normal group. Besides, we detected that the cyst weight in the treatment groups was significantly lower than that in the model group. Moreover, the combined treatment group showed significantly lower cyst weight and a higher inhibition rate of cysts than the SA alone-treated group.
We also investigated histological changes in hydatid cysts, liver, and spleen tissues after treatment. As shown in Figure 1, the tissues of the hydatid cysts in the model group were well developed, and the protoscolex and the intact brood capsule could be found. In the SAT group, the structure of the brood capsule was shrunken and collapsed, indicating the therapeutic effect of SAT. In the AT and SAT + AT groups, we observed that the nuclear germinal layer cells shrank, dissolved, and even disappeared; no protoscolex structure was detected. Meanwhile, necrosis was observed in the surrounding tissues.
As shown in Figure 2, obvious deposition could be observed in both liver and spleen tissue from the AT group; such deposition was alleviated in the other treatment groups. However, a fair number of monocytes had infiltrated the spleen tissue, and a few polykaryocytes could also be detected there, in all treatment groups.
3.3. Liver Function Evaluation. With the administration of SA or albendazole, the AST level of rats increased (Table 3). Among the treatment groups, the SAT + AT group showed the lowest AST level, though it was still obviously higher than that in the normal group. It is clear that all treatments caused some hepatic injury. As for the ALT level, the difference among all groups was not significant; however, the ALT level in the combinational therapy group was the lowest. (Table 3 footnote: data are expressed as mean ± SD; *p < 0.05 and **p < 0.01, compared with the normal group.)
[Figure: schematic summary comparing the SAT + AT, SAT, AT, and model groups with respect to improvement of hepatic echinococcosis and hepatic injury; only these labels were recoverable.]
3.4. Proteomics Analysis. To explore the underlying molecular mechanism of the combination treatment, the liver tissues from the five groups were collected for proteomics analysis using an iTRAQ approach. A total of 711 proteins were identified. There were 156 DPs between the model and normal groups, 126 DPs between the SAT and model groups, 123 DPs between the AT and model groups, and 138 DPs between the SAT + AT and model groups. As shown in Figure 3 and Table 4, a set of proteins was abnormally expressed in the model group and normalized in the SAT group (SAT-normalized DPs). Further investigation of these DPs' biological functions revealed that they were enriched in the complement activation process (Table 5). There were 32 proteins abnormally expressed in the model group and normalized in the AT group (AT-normalized DPs), of which 12 DPs were upregulated in the model group and downregulated in the AT group, and 20 DPs were downregulated in the model group and upregulated in the AT group (Table 4). These DPs were found to be associated with cell adhesion and cell death (Table 5). There were 34 proteins abnormally expressed in the model group and normalized in the SAT + AT group (SAT + AT-normalized DPs), of which 18 DPs were upregulated in the model group and downregulated in the SAT + AT group, and 16 DPs were downregulated in the model group and upregulated in the SAT + AT group (Table 4). These DPs were found to be involved in complement activation and cell adhesion (Table 5).
We investigated the associated functions and potential relationships of the enriched normalized DPs. As shown in Figure 3(b), there were three groups of DPs. C3, C4, C5, SERPINA1, and SERPINC1 and their interactions were found to be involved in the complement activation procedure; all of them were downregulated in the model group and upregulated in the SAT and SAT + AT groups. ANXA2, EZR, YWHAB, HSP90AB1, and PRKAR2A and their interactions were found to be involved in cell adhesion; they were downregulated in the model group and upregulated in the AT and/or SAT + AT groups. Meanwhile, CRYAB, YWHAZ, SLC25A24, and HSPA1B were found to be associated with cell death; they were downregulated in the model group and upregulated in the AT group.
Discussion
Echinococcosis is a widespread zoonosis caused by Echinococcus granulosus, and it is highly prevalent across the large western region of China [1,7]. Clinically, albendazole is normally used to treat echinococcosis. However, its poor solubility and severe side effects limit its application. In animal husbandry areas of Tibet and some other western provinces of China, people have used the decoction of Sophora moorcroftiana to treat echinococcosis patients for years. But the mechanism of this traditional Chinese medicine has not been investigated so far. A previous study indicated that alkaloids extracted from Sophora moorcroftiana were the most effective active ingredients [7,9]. Thus, treatment with Sophora moorcroftiana alkaloids might be a potential way to treat echinococcosis.
In the present study, we treated an experimental echinococcosis animal model with Sophora moorcroftiana alkaloids alone and with Sophora moorcroftiana alkaloids combined with the clinical medicine albendazole. Compared with albendazole-alone treatment, Sophora moorcroftiana alkaloids alone or the combined treatment with albendazole showed obvious therapeutic effects against echinococcosis in infected rats. In the in vivo study, SAT-treated animals showed inhibited cyst development compared with model rats. The cyst weight was significantly reduced by SAT, and the inhibition rate was between 30% and 40%. The clinical medicine albendazole had greater inhibitory efficacy against echinococcosis infection: its inhibition rate of cysts reached 80%. However, the combined treatment was the most potent therapeutic strategy. The rats treated with SAT plus AT showed the lowest cyst weight and the highest inhibition of echinococcosis. In the histological observation, we found that echinococcosis infection induced deposition in liver cells, resulting in cell swelling and alveolar wall thickening. When treated with AT alone, this situation was not changed obviously; in contrast, it was attenuated by SAT or SAT + AT treatment. The same was observed in spleen tissue. A large number of lymphocytes infiltrated the liver and spleen tissue due to the immune response to infection, causing swelling of the infected tissues and increased tissue volume. This result coincided with the comparison of spleen weights among all groups and revealed that such histological changes could be alleviated by SAT + AT therapy. Cytokines play important roles in the host response to infections; thus we examined IL-2, IL-6, IL-10, IgE, and TNF-α levels in rat serum. We did not detect significant differences in IL-6 and TNF-α expression among the groups. However, the expression of IL-2 in the AT group and the SAT + AT group was significantly higher than that in the model group, while the SAT-alone group did not show a significant difference compared with the model group. As an important mediator in the inflammatory and immune responses of several infectious diseases, IL-2 is able to enhance host immunity and inhibit the growth of tumors and parasites [13][14][15]. Once rats were infected by Echinococcus granulosus, the IL-2 receptor mIL-2R in target cells could be overexpressed. The IL-2 level in serum was reduced due to the binding of IL-2 to overexpressed mIL-2R, thus leading to suppression of the immune activity of T cells and favoring the parasites' survival. Therefore, the expression of IL-2 was increased after treatment, especially in the SAT + AT group, which enhanced host immunity against infection, accelerated clearance of the parasites, and inhibited parasite growth. IL-10 is a multifunctional cytokine synthesized by the Th2 cell subpopulation, which is associated with humoral immunity regulation and host susceptibility to certain diseases [16]. Overexpression of IL-10 reflects elevated humoral immunity and suppressed T cell activity, challenging the survival of parasites. The IL-10 level in the AT group and SAT + AT group was obviously greater than that in the model group, but not significantly different compared with the normal group. The result indicated that IL-10 played an important part in the immune response to echinococcosis infection, although the regulatory mechanism is not clear.
It has been suggested that IL-10 is associated with Tc cell function and with antibody-dependent and complement-mediated autoimmune responses, although the detailed mechanism needs further investigation. Hydatid cysts secrete multiple antigens into the host body during their growth and development, which stimulate different varieties of antibodies such as IgG, IgM, IgA, and IgE. Our results showed that the level of the specific antibody IgE in the model group was apparently higher, while the IgE level in treated animals decreased to the normal level. This indicated that these treatment strategies exert a therapeutic effect against hydatid cysts, and IgE could be considered an index for evaluating treatment efficacy and prognosis.
We also examined the liver function of each group by testing AST and ALT concentrations in serum. High levels of AST and ALT indicate hepatic injury and liver dysfunction. The model group showed higher AST and ALT levels compared with the normal group. After treatment, the AST and ALT levels in the SAT + AT group were decreased, while only the ALT level was attenuated in the other treatment groups. This suggested that echinococcosis infection induced hepatic dysfunction and that SAT or AT treatment alone might aggravate hepatic injury (AT alone even induced the highest AST and ALT concentrations), while the combinational therapy ameliorated liver damage [17,18].
The underlying mechanism was also explored by proteomics analysis. To characterize the molecular mechanism of the SAT, AT, and SAT + AT treatments, a proteomics analysis was performed to investigate their multitarget characteristics. The normalized DPs from each comparison were analyzed. As shown in Figure 3(b), functional enrichment analysis indicated that these DPs play important parts in complement activation, cell adhesion, and cell death. Thus, we explored how these DPs link to the physiological alterations in serum indicators and the histological changes in animal tissues. First, C3, C4, C5, SERPINA1, and SERPINC1 were downregulated in the model group and upregulated in the SAT and SAT + AT groups, and these were found to be involved in the complement activation procedure. Among them, the three proteins C3, C4, and C5 are key components of the complement system, whose activation enhances the ability of antibodies and phagocytic cells to clear antigens, promotes inflammation, and attacks the pathogen's plasma membrane [19]. Alkaloid treatment (SAT group and SAT + AT group) significantly stimulated the expression of these complement activation-associated proteins and enhanced the host immune response against parasite infection by increasing IL-2 and IL-10 levels in serum. Second, albendazole treatment (AT group and SAT + AT group) could upregulate the expression of ANXA2, EZR, YWHAB, HSP90AB1, and PRKAR2A, all of which are involved in cell adhesion [20,21]. The production of cell-adhesion molecules, as well as inflammatory cytokines, is activated by the host's macrophages [22]. It is presumed that albendazole treatment may be able to induce activation of macrophages, although the molecular mechanism needs further investigation. These results indicated that the combined treatment with SAT and AT could activate the complement system and induce macrophage activation to produce cell-adhesion molecules, leading to improvement of hepatic echinococcosis. On the other hand, albendazole treatment alone (AT group) also induced the expression of CRYAB, YWHAZ, SLC25A24, and HSPA1B, which were found to be involved in cell death; meanwhile, the expression of these proteins was not upregulated in the combinational therapy group [23]. This might explain why the liver injury in the AT group was obviously more serious than that in the SAT + AT group. Albendazole treatment upregulated the expression of these cell death-associated proteins, leading to hepatic injury and liver dysfunction (significantly high levels of AST and ALT in the AT group), while SAT was able to attenuate the tissue damage and loss of liver function [18]. In this scenario, the combinational treatment displayed better therapeutic effects against liver echinococcosis as well as alleviated liver injury, and it could be considered an effective strategy to treat echinococcosis clinically.
Sweeter and stronger: enhancing sweetness and stability of the single chain monellin MNEI through molecular design
Sweet proteins are a family of proteins with no structural or sequence homology, able to elicit a sweet sensation in humans through their interaction with the dimeric T1R2-T1R3 sweet receptor. In particular, monellin and its single chain derivative (MNEI) are among the sweetest proteins known. Starting from a careful analysis of the surface electrostatic potentials, we have designed new mutants of MNEI with enhanced sweetness. Then, we have included in the most promising variant the stabilising mutation E23Q, obtaining a construct with enhanced performance, which combines extreme sweetness with high, pH-independent thermal stability. The resulting mutant, with a sweetness threshold of only 0.28 mg/L (25 nM), is the strongest sweetener known to date. All the new proteins have been produced and purified, and the structures of the most powerful mutants have been solved by X-ray crystallography. Docking studies have then confirmed the rationale of their interaction with the human sweet receptor, hinting at a previously unpredicted role of plasticity in said interaction.
Several studies have reported the effects of point mutations affecting the potency of monellin, brazzein and thaumatin [23][24][25]. The widely accepted idea is that both a proper surface charge distribution and the three-dimensional shape have to be maintained in order to trigger the sweet sensation 23,[25][26][27][28]. We have focused our attention on MNEI, a single chain derivative of monellin, a small (~11 kDa), globular protein. Wild type monellin has a cystatin-like fold, composed of two non-covalently linked chains [29][30][31], which dissociate when heated above ~50 °C. This is accompanied by taste loss and prevents the use of the protein as a sweetener above this temperature. To circumvent this inconvenience, single chain derivatives with higher thermostability, among which MNEI, have been designed 31,32. MNEI has the same sweetness as native monellin, with a recognition threshold of only 1.43 mg/L (127 nM) 33 and a melting temperature of about 80 °C 34,35. Nonetheless, even this protein can lose its sweetness if slight deformations of the three dimensional shape occur. For instance, mutation G16A, involving a buried residue of MNEI, only modifies the protein flexibility, but induces nearly complete loss of the sweet taste [36][37][38].
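As a side note, the paired mg/L and nM figures quoted here and below are related simply through the protein's molar mass. The snippet below reproduces the quoted MNEI threshold assuming a molar mass of roughly 11.3 kDa, an approximate value not stated explicitly in the text:

```python
# Unit cross-check for the quoted thresholds; 11.3 kDa is an assumed
# approximate molar mass for MNEI and its variants.
def mg_per_L_to_nM(mg_per_L, mw_g_per_mol=11_300):
    return mg_per_L * 1e-3 / mw_g_per_mol * 1e9  # mg/L -> mol/L -> nmol/L

print(round(mg_per_L_to_nM(1.43)))  # ~127 nM, matching the quoted threshold
print(round(mg_per_L_to_nM(0.28)))  # ~25 nM for the strongest mutant below
```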
The other factor that most significantly correlates with sweetness is surface charge: in fact, the surface of the T1R2-T1R3 complex that is described to bind sweet proteins is characterised by the presence of a large amount of acidic amino acids 17,21 . Studies on single and double chain monellins 23,28,39,40 , thaumatin 24,[41][42][43] , brazzein [44][45][46] and lysozyme 47 have shown that, in general, mutations increasing the acidic character would consistently decrease or even cancel sweetness, whereas the outcome of the introduction of a positive charge is not immediately predictable. For instance, among four surface mutations, namely M42R, Y63R, Y65R and D68R, only Y65R would increase sweetness, whereas the other mutations, despite introducing a positive charge, would abate the taste intensity 23 . This is a consequence of the non-homogeneous charge distribution on the receptor surface, which implies that, in order to potentiate the effect of sweet proteins, positive charges have to be located in specific positions on their surface. Docking studies can only provide limited indications, as the structure of the receptor, built by homology, allows only low resolution predictions 17,21 . Recently, more advanced models have been built in terms of topological refinement 25 , by taking into account the information deriving from previous mutagenesis studies. These models have been able to account for many of the experimental outcomes of mutations of charged surface residues, and proved the possibility to predict at the atomic detail the complexes of mutants of MNEI and brazzein with the sweet receptor. More recently, a similar approach has been used to design and validate a new super-sweet mutant of thaumatin 24 . In the present study, we have started from the sweeter variant Y65R-MNEI 23 and have introduced new, rationally predicted mutations to further potentiate sweetness. We then incorporated in the sequence of the sweetest construct mutation E23Q, which has been recently shown to increase stability at neutral to alkaline pH 35 . This latter mutation was not expected to affect MNEI sweetness, since the side chain of residue 23 is buried in a hydrophobic pocket and not exposed to interactions with the receptor. Surprisingly instead, it produced an additional gain in sweet taste, as proven by sensory evaluation, which ranked this construct as the sweetest protein ever produced. To elucidate details of their mode of action, the structure of the new mutants has been solved by X-ray crystallography and docked onto the sweet taste receptor. The results confirm the predictive value of the wedge model to design MNEI mutations and offer a new picture of the interaction between sweet proteins and the receptor.
Results
Design and characterisation of MNEI charge mutants. In order to select the best possible mutations for new MNEI constructs, we analysed the electrostatic surface potentials of MNEI in comparison to a model of its sweeter and well characterised mutant Y65R-MNEI 23, which we used as the starting point for the present mutagenesis experiments. Since residue 65 had previously been linked to sweetness enhancement, the surrounding region is likely involved in the interaction with the receptor. A comparison of the electrostatic potential maps for MNEI and Y65R-MNEI is presented in Fig. 1. We designed mutations localised in the same surface area of the protein, adding positive charge density on either the same or the opposite side of the surface with respect to R65. In the first case, we tested the additional mutations S67K and D68N, i.e. we added a basic residue and removed an acidic one, respectively (Fig. 1A). Residues S67 and D68 are both located on the loop connecting the β3 and β4 strands (L34) and were chosen since their mutation is not expected to significantly affect structural stability. Similar considerations were made when selecting mutation Q28K. In this case, with the introduction of a lysine residue, we aimed at reducing the negative charge density at the C-terminus of the helix, as evidenced in Fig. 1B. Mutation C41S was also introduced in both constructs. C41 is the only cysteine in the sequence and it is not involved in the formation of disulphide bridges, although it has been identified as the source of destabilisation of MNEI at extremely high pHs 48. Mutation of C41 to serine was introduced, although previous studies on synthetic single chain monellin had correlated it with a minor decrease in sweetness 40, to avoid undesired dimeric artefacts, which could arise from partial protein denaturation and oxidation. It is in fact known that both MNEI and Y65R-MNEI display a tendency to aggregate and form dimers and multimers, in particular at neutral to alkaline pHs 34. The resulting constructs prepared for this study were therefore C41S, Y65R, S67K, D68N-MNEI (Mut1) and Q28K, C41S, Y65R-MNEI (Mut2). Both proteins were produced recombinantly in Escherichia coli BL21 and purified by ion exchange chromatography with slight modifications of previously published protocols 49. Mut1 showed a marked tendency to precipitate during the purification, leading to significantly lower yields compared to Mut2. Nonetheless, both proteins could be obtained at high purity, and CD spectra were recorded. Comparison with Y65R-MNEI showed a nearly identical global fold and the persistence of the β-sheet rich structure typical of monellins (Supplementary Fig. S1). CD spectroscopy was also used to record thermal denaturation profiles (Fig. 2a). The experiments were performed at neutral pH, at which native monellin displays lower stability and a higher propensity to aggregation 34,48,50. Mut2 exhibited a stability comparable to that of Y65R-MNEI (Tm 70.1 and 71.4 °C, respectively), whereas Mut1 appeared much less stable, with a Tm of 52.1 °C, about 20 °C lower than Mut2 and Y65R-MNEI (Fig. 2b). This destabilisation is probably the result of excessive positive charge density within a small surface area and could explain the solubility problems encountered in the purification of this mutant. Irrespective of its sweetness, the poor performance of Mut1 in terms of stability makes it an unlikely candidate for food and beverage applications, where high temperatures are often encountered.
Mut2, on the other hand, appears more promising, to the point that we decided to incorporate in its sequence the stabilising mutation E23Q, which has been shown to remove the dependency of stability on pH 35. Mut3 (E23Q, Q28K, C41S, Y65R-MNEI) was expressed and purified and its thermal denaturation profile was evaluated. As shown in Fig. 2, Mut3 is more resistant to thermal denaturation (Tm 77.8 °C) than Y65R-MNEI and Mut2, making it a more appealing candidate for the development of industrial applications.
Sensory evaluation of the sweet proteins. In order to assess the validity of the design in terms of sweetness improvement, all proteins were subjected to taste assessment. Relative sweetness was compared to that of Y65R-MNEI and is reported in Fig. 3. Both Mut2 and Mut3 proved sweeter than Y65R-MNEI. Sweetness thresholds were evaluated by triangle tests by a panel of tasters and were 2.50, 0.40 and 0.28 mg/L (223, 36 and 25 nM) for Mut1, Mut2 and Mut3, respectively. In comparison, Y65R-MNEI exhibited a threshold of 0.62 mg/L (55 nM), in agreement with previous results 34. Surprisingly, although containing the same surface mutations, Mut3 and Mut2 showed different sweetness thresholds, suggesting that the stabilisation of the structure at neutral pH might play a role in the interaction of the mutants with the receptor.

Structural characterisation of Mut2 and Mut3. In the attempt to rationalise the differences in taste potency between Mut2 and Mut3, we solved the structures of the two proteins by X-ray crystallography. Mut2 and Mut3 were crystallised under the same experimental conditions, resulting in two different space groups (Supplementary Table S1). For both mutants, the asymmetric unit (a.u.) contains two protein molecules strongly interacting with each other and resulting in a dimer. It is worth noting that MNEI can form crystals containing either a monomer (PDB code 2O9U) 51 or a dimer (PDB code 1IV7) in the a.u., depending on the crystallisation conditions. Both the electron-density maps of the Mut2 and Mut3 structures are very well defined, with the only exception of loop L23 (residues 47-56) connecting strands β2 and β3. This loop is usually highly flexible or disordered in the structures of MNEI and its derivatives. Therefore, these residues were excluded from any comparative analyses and from root mean square deviation (RMSD) calculations among structures. In both Mut2 and Mut3, the mutations do not alter the overall protein fold: the structure is very similar to that of MNEI. RMSDs between main chain atoms of Mut2 and Mut3 in comparison to the reference structure for MNEI (PDB code 2O9U) are reported in Table 1. The mutation sites of Mut2 and Mut3 were analysed by visual inspection of the structures. In MNEI, C41 is located in a hydrophobic region lined by the side chains of residues I5, I6, T12 and L62. In the X-ray structure of MNEI solved at atomic resolution (PDB code 2O9U), the side chain of C41 adopts two different conformations with occupancy 0.3 and 0.7, hereafter referred to as 1 (χ = −82°) and 2 (χ = −171°), respectively (Fig. 4a). In structure 1IV7, instead, the side chain of C41 adopts only conformation 2 (Fig. 4b).
On the contrary, in both molecules present in the a.u. of Mut2 and Mut3, the S41 side chain adopts conformation 1 and forms a hydrogen bond with a water molecule that connects S41 to main chain atoms of residues P40, I38 and Y63 (Fig. 4c and Supplementary Fig. S2). Y65 is located on the surface of MNEI. The mutation Y65R introduces at this site a charged residue, whose side chain is highly flexible and explores different conformations in the structures of Mut2 and Mut3, forming several interactions with solvent molecules, sometimes involved in packing contacts (Supplementary Fig. S3). In the wild-type protein, Q28 is a solvent exposed residue; its side chain adopts two distinct conformations, one of which is in direct contact with the side chain of residue E23, which is buried in a hydrophobic cavity formed by residues I26, Y29, L86 and F89 (Fig. 5a). In Mut2, containing the mutation Q28K, the lysine side chain is pushed away toward the solvent, forming a stabilising interaction with the hydroxyl group of Y47 from a symmetry related mate in one of the two molecules present in the a.u. and remaining disordered in the other one (Fig. 5b). The introduction of the additional mutation E23Q significantly alters the structure of the surrounding residues: in Mut3, in fact, Q23 assumes a different conformation compared to E23 in both Mut2 and MNEI and establishes new hydrogen bonds with the main chain atoms of Y29 and G30. The conformational variation of the side chain of Q23, when compared to E23, allows the rearrangement of the side chain of K28, which forms stabilising hydrogen bonds with main chain and side chain atoms of N90 (Fig. 5c).
Interaction with the sweet taste receptor. The interaction of MNEI with T1R2-T1R3 has been interpreted in the framework of the first mechanism proposed for the interaction of the three sweetest natural proteins, the so-called wedge model 17,21. Although this model is still the only general model for the interpretation of the interaction of sweet proteins with the receptor, it has been the subject of some criticism. For instance, Assadi-Porter et al. claimed that receptor mutations based on the wedge model did not suppress the interaction of brazzein with the receptor 44. However, the failure to predict correct mutations in the receptor was due to the blind use of the model 25. In its simplest formulation, in fact, the wedge model only yields an ensemble of protein molecules that bind with different orientations and even with slightly different parts of their surface 25, and unsuccessful predictions based on the model were obtained when a single orientation was arbitrarily chosen 44. By using a tethered docking approach, we were able to show that topologically correct models of complexes of monellin and brazzein with the sweet receptor are indeed consistent with the distribution of charged residues and explain data not used in their initial derivation 25. Recent experimental work from Assadi-Porter et al. indeed confirmed the coherence of the revised model with their observations on new brazzein mutants 52. Accordingly, we decided to use the wedge model to interpret the huge increase in sweetness observed in Mut3. When checking the consistency of the new mutations, which lead to a protein even sweeter than Y65R-MNEI, and particularly the crucial Q28K mutation, it was natural to try to align the X-ray structure of Mut3 with the structure of MNEI in the topological complex 25. Although the conformations of side chains are inevitably different, most of the charged residues previously selected for the tethered docking are still at distances compatible with good electrostatic interactions with receptor residues of opposite charge. However, the side chain of K28 is far from the interface between MNEI and the receptor. This result is apparently inconsistent with the wedge model: it can be explained by accepting either that the complex generated by the mentioned tethered docking is inaccurate, or by hypothesising that it is possible to have multiple interaction surfaces. We checked this possibility by first trying to bring the side chain of K28 closer to the receptor and then mapping the new interaction interface. It was soon clear that a simple rotation of ca. 30° along the long axis of the molecule of Mut3, in the orientation consistent with the model complex of MNEI 21, was all that was needed. What came as a big surprise was the permanence of several interactions, notably those involving D7, R39, R88 and R65. In other words, it appears that these crucial residues are in a pivotal position with respect to the mentioned rotation. This result was double-checked using the tethered docking approach previously used to refine the complex of MNEI 21. After minor adjustments we found that the main contacts between Mut3 and receptor residues can be summarised as follows: the Cγ atom of D7 of Mut3 is at 3.3 Å from the Cζ atom of R247 of T1R3; likewise, Cζ of R39 is 7.1 Å from Cβ of D169 of T1R2, Cζ of R88 is 3.46 Å from Cγ of E47 of T1R3, Cε of K28 is 5.0 Å from Cγ of E48 of T1R3, and Cζ of R65 is 4.23 Å from Cβ of D456 of T1R2. The relationship between the two interacting surfaces is illustrated in Fig. 6. The resolution of the docking model allows detecting the contact points between the sweet protein and T1R2-T1R3, but is unfortunately not sufficient to individuate the subtle differences between closely related proteins such as Mut2 and Mut3, which translate into their different biological activity.

Figure 5. E23-Q28 mutation sites. Residue Q28 in the structure of MNEI deposited under PDB code 2O9U adopts two alternative conformations, one of which is in direct contact with the side chain of residue E23, which is buried in a hydrophobic cavity formed by residues I26, Y29, L86 and F89 (a). In Mut2, upon Q28K mutation, this interaction is lost (b). In Mut3, the E23Q mutation allows a rearrangement of K28, whose side chain forms an additional H-bond with N90 (c). The 2Fo-Fc electron density maps are contoured at 1.0 σ.
Discussion
Changes in dietary habits have led to an increase in pathologies related to carbohydrate metabolism, such as obesity, diabetes, hyperlipidaemia and caries, with repercussions on life style and health care costs. Food and beverage industries are in constant search of new sweetening compounds, whose ideal characteristics would be safety and palatability. Sweet proteins represent a potential resource in this respect: their proteinaceous nature hints at safety, their amazing sweetening power allows for the use of minimum quantities, and the possibility of obtaining them through recombinant technologies opens the way to large scale production 10,53. Moreover, protein design can help to improve their characteristics, tuning their performance in view of real life applications. We have used this approach to enhance the sweetness and the resistance to thermal denaturation and pH variations of MNEI, a single chain monellin, as these features are of primary importance for applications to food and beverages. Starting from the well characterised mutant Y65R-MNEI 23,34 and based on the prediction of the surface of interaction with the T1R2-T1R3 sweet receptor, we designed two different charge mutants. One of them, Mut1 (C41S, Y65R, S67K, D68N-MNEI), exhibited a drop in sweetness, despite presenting an increased positive surface charge. The other construct, Mut2 (Q28K, C41S, Y65R-MNEI), displayed instead amazing potency, with a recognition threshold of only 36 nM, roughly 30% lower than Y65R-MNEI and 3.5 times sweeter than the parent protein, according to literature data for MNEI 34. These results underline the importance, for sweet proteins, of presenting the correct pattern of positive charges on the surface of interaction with the receptor and are in line with the outcome of previous mutagenesis studies 23,28,41,42. Although recombinant production of the sweet taste receptor has been achieved 14,44 and experimental mapping of the interactions between sweet proteins and their receptor would at this point be feasible, such an approach is very time and resource consuming. Instead, our results support the validity of the wedge model in predicting and explaining the physiological effects of mutated sweet proteins, confirming its applicability to drive in silico sweet protein design. Among the designed constructs, Mut1 was less thermally stable than the reference protein Y65R-MNEI. Since stability towards pH and temperature variations is indeed a desirable attribute for a protein with potential applications to large scale processes, we designed Mut3, with the same sequence as Mut2 and the additional stabilising mutation E23Q 35. The increased stability introduced by this mutation had been linked to the formation of hydrogen bonds between the side chain of Q23 and the backbone atoms of Y29 and G30 35. The crystal structures confirmed the existence of these contacts and highlighted additional stabilising interactions between the side chain of K28 and neighbouring residues, triggered by the conformational change of the side chain of K28 induced by mutation E23Q, which further clarifies the gain in thermal stability. In previous studies, residue E23 had been the target of several mutations, since this amino acid, located in a hydrophobic pocket of the MNEI structure, has a crucial role in the stability of the protein. Mutations introducing hydrophobic residues at this position consistently increased the thermal stability of MNEI 39,48,54.
In terms of biological activity, alanine replacement had no effect on sweetness 39, whereas replacement with other hydrophobic amino acids, such as leucine, phenylalanine or tryptophan, was accompanied by a slight flavour decrease compared to MNEI, despite helping to retain sweetness even after prolonged treatments at elevated temperatures 54. These effects could be the result of minor modifications of the protein structure or flexibility, undetectable by the spectroscopic techniques (i.e., CD) employed to characterise the constructs 39,54. Mutation E23Q, which improves MNEI thermal stability to the same extent, has the opposite effect, resulting in a further decrease of the sweetness threshold, down to 25 nM, which makes Mut3 the sweetest protein designed to date. Such exceptional sweetness, compared to Mut2, could be ascribed to various causes: the above described subtle structural differences around K28 may play a role also in defining the interaction with the T1R2-T1R3 receptor. Moreover, small differences in flexibility between the two proteins could affect the binding to the receptor. The model based on tethered docking suggests a new interacting surface for Mut3. Within the complex, all the pivotal interactions previously detected in the topologically refined complex between MNEI and the receptor are retained but, in addition, the new interacting side can be obtained from that of MNEI 25 by a rotation of ca. 30° along the longest protein axis. While further supporting the validity of the wedge model, these results present us for the first time with the idea that the interaction between sweet proteins and the receptor might be endowed with a certain plasticity: the possibility of multiple mutual orientations of the sweet protein and the active form of the receptor suggests that entropic factors might also be involved and play a determinant role in providing sweet proteins with their extreme potency.
Methods

Expression and purification of the mutants. Synthetic genes encoding the sequences of Mut1, Mut2 and Mut3 were purchased from Eurofins Genomics. The genes were cloned in the pET22b(+) expression vector between the NdeI and SacI sites. Proteins were expressed in Escherichia coli BL21(DE3) and purified from the cell lysate by a coupled anion/cation exchange procedure as previously described 49.
Circular Dichroism Spectroscopy. Circular dichroism (CD) spectra were recorded on a Jasco J-715 spectropolarimeter equipped with a Peltier temperature control system (PTC-348WI). Molar ellipticity per mean residue, [θ] in deg·cm²·dmol⁻¹, was calculated from the equation [θ] = [θ]obs × mrw/(10 × l × C), where [θ]obs is the ellipticity measured in degrees, mrw is the mean residue molecular weight of the protein (Da), C is the protein concentration in g/mL and l is the optical path length of the cell in cm. Cells of 0.1 cm path length were used. CD spectra were recorded with a time constant of 4 s, a 2 nm band width and a scan rate of 20 nm/min, and the signal was averaged over three scans and baseline corrected by subtracting the buffer spectrum. Spectra were recorded in 20 mM phosphate buffer at pH 6.8 at a protein concentration of 0.2 mg/mL, as determined by UV absorbance at 280 nm.
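For concreteness, the conversion above is a one-line function; the sketch below uses illustrative numbers, not measurements from this work:

```python
def mean_residue_ellipticity(theta_obs_deg, mrw_da, conc_g_per_ml, path_cm):
    """[theta] in deg*cm^2*dmol^-1, as defined in the text."""
    return theta_obs_deg * mrw_da / (10 * path_cm * conc_g_per_ml)

# e.g. an observed ellipticity of -0.012 deg, ~110 Da mean residue weight,
# 0.0002 g/mL protein, and a 0.1 cm cell:
print(mean_residue_ellipticity(-0.012, 110.0, 0.0002, 0.1))  # -6600.0
```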
Thermal denaturation experiments were recorded by following the signal at 215 nm while varying the temperature from 30 to 95 °C at a rate of 1 °C/min. For each condition, three independent measurements were performed. Experimental points were fitted to a Boltzmann curve, and the fraction of unfolded protein (f_u) was calculated according to formula (1):

f_u = (θ_f − θ)/(θ_f − θ_u)    (1)

where θ_f and θ_u are the CD signals of the folded and unfolded states from the fitted curve, respectively, and θ is the CD signal at each temperature.
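A sketch of how such a denaturation curve could be processed, assuming the Boltzmann sigmoid form commonly used for two-state transitions (synthetic data stand in for real measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(T, theta_f, theta_u, Tm, slope):
    # CD signal interpolating between folded and unfolded baselines.
    return theta_u + (theta_f - theta_u) / (1 + np.exp((T - Tm) / slope))

rng = np.random.default_rng(0)
T = np.linspace(30, 95, 66)                       # scan at 215 nm
theta = boltzmann(T, -6000, -2000, 77.8, 2.5) + rng.normal(0, 50, T.size)

popt, _ = curve_fit(boltzmann, T, theta, p0=(-6000, -2000, 70, 3))
theta_f, theta_u, Tm, _ = popt
f_u = (theta_f - boltzmann(T, *popt)) / (theta_f - theta_u)  # formula (1)
print(f"fitted Tm = {Tm:.1f} degC")
```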
Crystallisation and structure determination. Mut2 and Mut3 were dissolved in 10 mM HCl up to a concentration of 5.0 mg/mL. Crystals of both mutants were obtained at 20 °C using the hanging-drop vapour-diffusion method, mixing an equal volume of protein and of a reservoir solution containing 30% PEG4K, 0.1 M sodium acetate at pH 4.6 and 0.2 M ammonium sulphate. X-ray diffraction data were collected at the XRD1 beamline of Elettra Synchrotron (Trieste, Italy), using a Pilatus-6M detector (Dectris) and a wavelength of 1.065 Å. Before being exposed to the X-ray beam, the crystals were soaked in a cryo-solution consisting of mother liquor supplemented with 30% glycerol and flash cooled in liquid nitrogen. Data sets were indexed, integrated, reduced and scaled using XDS and SCALA 58. Data collection statistics are reported in Supplementary Table S1. The structures of Mut2 and Mut3 were solved by molecular replacement using the program Phaser 59 and the structure of MNEI, without water and ligands, as the search model (PDB code 2O9U) 51. The structures of the mutants were improved by iterative cycles of manual fitting using Coot 60 and were refined with REFMAC5 61 and Phenix 62. 5% of the data was used for calculation of the R-free value. The structure of Mut2 was refined at 1.70 Å resolution to an R-factor of 18.8% (R-free 22.9%); the structure of Mut3 was refined at 1.55 Å resolution to an R-factor of 19.8% (R-free 23.9%). 98.2% and 1.8% of residues in Mut2 and 97.7% and 2.3% of residues in Mut3 are located in the most favourable and allowed regions of the Ramachandran plot, respectively. Refinement statistics are reported in Supplementary Table S1. Final coordinates and structure factors were deposited in the Protein Data Bank under the accession codes 5LC6 for Mut2 and 5LC7 for Mut3.
Complex refinement. The first ensembles of complexes of MNEI with different models of the T1R2-T1R3 receptor were built using the GRAMM software in low resolution mode 63. A single topological model of the complex was later obtained 25 using GRAMM-X, a version of GRAMM accessible on the Internet (http://vakser.bioinformatics.ku.edu).
In the web version of GRAMM it is possible to add residues suggested by the low-resolution ensemble that may belong to the interface of the complex and hypothetical residues suggested by mutagenesis data. Altogether, we favoured charged residues, mainly because of the mentioned importance of electrostatic interactions in the wedge model. The following residues were selected: (T1R2) D169, E170, R172, D173, K174, R176, D213, R217, D218, D456, R457; (T1R3) R177, D190, R191, D216; and the key charged residues of Mut3, i.e. D7, K28, R39, R65 and R88. Other parameters were maximised as described before 25.
Ossifying fibroma in the mandibular angle mimicking metastatic clear cell renal cell carcinoma
Abstract Rationale: Ossifying fibroma is a benign fibro-osseous neoplasm. The authors report a case of ossifying fibroma in the mandibular angle suspected of being a metastasis of clear cell renal cell carcinoma. Patient concerns: A 74-year-old man presented to the primary hospital complaining of frequent urination. A tumor in the left kidney was detected via an abdominal computed tomography scan. The patient then visited the Department of Urology at our hospital. Diagnoses: According to whole-body imaging examinations, the patient was suspected of having renal cancer with mandibular metastasis. A cystic lesion of the maxilla was also revealed. Interventions: Left nephrectomy was performed by urologists, and the patient was diagnosed with clear cell renal cell carcinoma of the left kidney. Approximately 1 month later, resection of the mandibular lesion with a safety margin and removal of the maxillary lesion were performed by oral and maxillofacial surgeons. Outcomes: The patient was diagnosed with ossifying fibroma of the mandible and an odontogenic keratocyst of the maxilla via a histopathological examination. Eighteen months have passed since the operation without clinical or imaging findings associated with recurrence. Lessons: Ossifying fibroma in the mandibular angle of elderly patients is extremely rare. Surgeons should consider the possibility of metastasis when osteolytic lesions of the jaw are found in patients with cancer.
Introduction
Ossifying fibroma (OF) is an uncommon benign fibro-osseous neoplasm affecting the jaws and the craniofacial skeleton. [1][2][3] It is included among fibro-osseous and osteochondromatous lesions in the World Health Organization's classification of head and neck tumors. [1] OF is mainly composed of fibrous stroma and bone elements with various degrees of maturation. [4] It commonly occurs in the mandibular premolar-molar region, typically in the second to fourth decades of life. [2][3][4][5][6][7] Additionally, it has a female predilection, with a male to female ratio of 1:5. [2,3,6,7] It is usually initially asymptomatic, and pain and paresthesia are rare. [2] Most cases are small and incidentally detected by routine dental radiographs, and there are a few cases of multiple occurrences associated with familial predisposition. [2] However, it can cause facial deformity, displacement of the teeth, and pathological fracture, and can extend into the intracranial and intraorbital regions due to progressive and destructive growth. [2][3][4][5] Previous studies suggested that OF arises from the periodontal ligament. [2,4,7,8] Radiographically, OF is usually a well-defined unilocular lesion with or without a sclerotic margin, overlapping the roots with or without root resorption, and the internal structure is often a mixture of radiolucent and radiopaque densities. [2][3][4][6] However, these radiographic findings are inconclusive. [6] The differential diagnosis of such lesions might include benign maxillofacial bone and cartilage tumors, benign epithelial odontogenic tumors, odontogenic and non-odontogenic developmental cysts, benign mesenchymal odontogenic tumors, chronic sclerosing osteomyelitis, primary and metastatic malignant tumors, and so on. [3,6,7] In jaw lesions without significant expansion or destruction of the cortical bone and without displacement of the inferior mandibular canal, it is especially difficult to differentiate between benign and malignant lesions via radiological examinations. [6] In such cases, the lesions are often not easily accessible for biopsies due to overlying soft tissues and thick cortical bone. Therefore, image assessments such as magnetic resonance imaging (MRI) and nuclear medical examination may play an important role in the preoperative differential diagnosis.
The authors report a case of ossifying fibroma in the mandibular angle suspected as metastasis of clear cell renal cell carcinoma in an elderly man.
Consent
Written informed consent was obtained from the patient for publication of the case and any accompanying images.
Case report
A 74-year-old man presented to the primary hospital complaining of frequent urination. A tumor in the left kidney was revealed via an abdominal computed tomography (CT) scan (Fig. 1). The patient then visited the Department of Urology at our hospital. Whole-body bone scintigraphy using technetium-99m methylene diphosphonate (Tc-99m MDP WBBS) demonstrated an abnormally increased uptake in the left mandibular angle (Fig. 2). The patient was referred to the Department of Dentistry and Oral Surgery at our hospital for further evaluation. A panoramic radiograph and CT scan of the maxillofacial region revealed an osteolytic lesion accompanied by a slight expansion of the cortical bone in the mandibular angle and a cystic lesion accompanied by expansion of the cortical bone in the maxillary anterior region (Fig. 3). He had no subjective symptoms in the maxillofacial region, and a physical examination revealed no abnormal findings. An intraoral examination demonstrated swelling of the maxillary anterior region. MRI of the mandibular lesion showed low signal intensity on a T1 weighted image and high signal intensity on a T2 weighted image in the central area (Fig. 4). The maxillary lesion showed high signal intensity on the T2 weighted image. Left nephrectomy was performed by urologists, and the patient was diagnosed with clear cell renal cell carcinoma (RCC) of the left kidney. Approximately 1 month later, resection of the mandibular lesion with a safety margin and removal of the maxillary lesion were performed under general anesthesia by oral and maxillofacial surgeons. The findings of an intraoperative rapid-frozen pathological assessment of the mandibular lesion suggested that it was not a metastatic lesion from RCC. Concomitantly, mandibular reconstruction using a titanium plate and screws was performed to prevent mandibular fracture. The patient was diagnosed with ossifying fibroma of the mandible and an odontogenic keratocyst (OKC) of the maxilla via a postoperative histopathological examination (Figs. 5 and 6). His postoperative course has been uneventful (Fig. 7).
There was no evidence of recurrence of RCC and jaw lesions 18 months after the second surgery.
Discussion
OF is a benign neoplasm, and the surgical approach remains controversial. Treatment methods for OF have been reported, including curettage, enucleation, and radical resection. [2,4] Previous studies reported that the rate of recurrence after curettage was 28%, and that after partial or incomplete resection ranged from 30% to 56%. [5,8] The residual outer lamella of the OF is considered a factor for recurrence. [5] Therefore, complete removal of the lesion is widely supported considering the possibility of recurrence, although the size and location of the lesion often affect the choice of surgical approach. [2][3][4][5] However, previous studies suggested that for asymptomatic OF, a wait-and-scan strategy can be applied in selected cases considering the anatomical location. [5] It is difficult to differentiate between benign and malignant lesions of the jaw using only imaging findings, and biopsy usually plays an important role in differential diagnosis. In the present case, OF occurred in the mandibular angle away from the root-apex area in an elderly patient, and there was a problem of differential diagnosis between benign and metastatic malignant lesions because the patient was diagnosed with clear cell RCC. Therefore, the patient was treated via radical resection with a safety margin combined with an intraoperative rapid-frozen pathological assessment, because there was the possibility of shelling out the tumor during surgical procedures, and mandibular reconstruction using a titanium plate and screws was performed to prevent mandibular fracture. Recurrence of OF did not occur during the follow-up period in this case.
Metastases to the oral and maxillofacial region from gastrointestinal or respiratory cancer are uncommon and represent <1% of all malignancies in that region. [9][10][11] The lung is the most common primary site metastasizing to that region, and metastatic RCC is extremely rare. [10,11] There are reports showing that metastatic RCC can be the only presenting sign of RCC. [12] The most common site is the mandibular angle (molar region), as the blood supply there is abundant. Some studies reported metastatic clear cell RCC to the head and neck region affecting the parotid and submandibular glands, mandible, maxilla, paranasal sinuses, intraoral soft tissues, and so on. [11] Although head and neck metastatic clear cell RCC is usually detected 1 to 7 years after the diagnosis of the primary tumor, some studies reported that metastases to the oral and maxillofacial region were detected synchronously or before the detection of the distant primary tumor. [10,11] Therefore, clinicians should consider the possibility that the symptoms or results of imaging examinations in those regions may be the first clinical signs of an undiscovered distant primary tumor; the previous history of malignant disease is also very important. [10,11] The differential diagnosis of metastatic clear cell RCC to the head and neck region may vary depending on the location of the metastasis. [11] If metastatic clear cell RCC to the jaws is suspected, clear cell odontogenic carcinoma and other odontogenic tumors containing clear cells should be considered using histopathological and immunohistochemical findings. [11] Although OF is best imaged by CT, additional imaging examinations might play an important role in the differential diagnosis of osteolytic lesions in patients with malignant diseases. OF usually has low to intermediate signal intensity on T1 weighted images and variable signal intensity on T2 weighted images on MRI. [13] On T2 weighted images, low signal intensities may be predominant depending on the degree of calcification. [13] Furthermore, low signal intensity is typically observed in the ossified peripheral areas, and high signal intensity is observed in the non-ossified central areas on T2 weighted images in cases of OF. [13] Although the MRI findings of the mandibular lesion in this case were consistent with the previous report of OF, they could not exclude the possibility of metastasis from clear cell RCC to the mandible because they were not specific findings. Therefore, this case suggested that MRI may play a limited role in the differential diagnosis of benign and metastatic lesions of the mandible in patients with malignant tumors.
Bone lesions related to RCC are typically osteolytic. [14] Wu et al [15] evaluated the diagnostic utility of FDG-PET and Tc-99m MDP WBBS for detecting bone metastases in 18 patients with RCC. They reported that the diagnostic sensitivity and accuracy of FDG-PET were 100% and 100%, and those of Tc-99m MDP WBBS were 77.5% and 59.6%, respectively. [15] A subsequent study evaluated the role of FDG-PET for the detection of distant metastases in 24 patients with clear cell RCC and reported that the sensitivity, specificity, and positive predictive values were 63.6%, 100%, and 100%, respectively. [16] It also reported that the mean lesion size of distant metastases with false-negative FDG-PET images was 1.0 cm. [16] Tc-99m MDP WBBS can show an abnormally increased uptake in benign lesions such as OF, fibrous dysplasia, and enchondroma. [17] Therefore, it is difficult to differentiate these benign osteolytic lesions from bone metastases using Tc-99m MDP WBBS. [17] Although the imaging examination for detecting bone metastases in patients with RCC remains controversial, FDG-PET/CT may be more useful for detecting bone metastases than Tc-99m MDP WBBS. [14,15] In the present case, Tc-99m MDP WBBS demonstrated abnormally increased uptake in the left mandibular angle, and FDG-PET/CT showed no abnormal FDG uptake in that region. As a result, the rare occurrence of OF in the mandibular angle of an elderly patient with RCC presented a preoperative diagnostic challenge.
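As a reminder of how the diagnostic figures quoted above are defined (with TP, TN, FP, and FN denoting true and false positives and negatives):

```latex
\mathrm{sensitivity}=\frac{TP}{TP+FN},\quad
\mathrm{specificity}=\frac{TN}{TN+FP},\quad
\mathrm{PPV}=\frac{TP}{TP+FP},\quad
\mathrm{accuracy}=\frac{TP+TN}{TP+TN+FP+FN}
```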
The maxillary lesion with cortical bone expansion showed high signal intensity on a T2 weighted image. Additionally, Tc-99m MDP WBBS and FDG-PET/CT did not show an abnormal uptake. These findings suggested that the lesion was a benign cystic lesion, and it was diagnosed as OKC via a histopathological examination. Recurrence of OKC did not occur during the follow-up period in this case.
In conclusion, OF in the mandibular angle of elderly patients is extremely rare. Surgeons should consider the possibility of metastasis when an osteolytic lesion of the jaw is revealed in patients with cancer.
Cost-effectiveness of Triple Therapy vs. Biologic Treatment Sequence as First-line Therapy for Rheumatoid Arthritis Patients after Methotrexate Failure
Introduction A clinical trial (RACAT) reported the noninferiority of triple therapy compared to biologic agents (etanercept + methotrexate), and previous studies confirmed that biologic disease-modifying antirheumatic drugs (bDMARDs) are more expensive but less beneficial than triple therapy for patients with rheumatoid arthritis (RA) in whom methotrexate (MTX) fails. However, from the perspective of the Chinese healthcare system, the cost-effectiveness of triple therapy versus bDMARD treatment sequences as a first-line therapy for patients with RA is still unclear. Methods An individual patient simulation model was used to extrapolate the lifetime cost and health outcomes by tracing patients from initial treatment through switches to further treatment lines in a sequence. Therapeutic efficacy and physical function were evaluated using the American College of Rheumatology (ACR) response, 28-Joint Disease Activity Score (DAS28), and Health Assessment Questionnaire score. All input parameters in the model were derived from published studies, national databases, local hospitals, and experts' opinions. Both direct costs and indirect costs were taken into consideration. Probabilistic and one-way sensitivity analyses were performed to test the uncertainty of the model, as were multiple scenario analyses. Results The lifetime analysis demonstrated that triple therapy was associated with lower costs and fewer quality-adjusted life years (QALYs) than bDMARD sequences. These resulted in incremental cost-effectiveness ratios (ICERs) ranging from $87,090/QALY to $104,032/QALY, higher than the willingness-to-pay (WTP) threshold in China ($30,950/QALY). The baseline DAS28 impacted the model outcomes the most. Scenario analyses indicated that adding triple therapy to bDMARD sequences as a first-, second-, third-, or fourth-line therapy is very cost-effective, at a WTP of $10,316/QALY. Conclusions From a Chinese payer perspective, triple therapy as first-line treatment in a treatment sequence could be regarded as a cost-effective option for patients in whom MTX has failed, compared to bDMARDs as first-line treatment; moreover, instead of prescribing triple therapy as a substitute for bDMARDs as a first-line treatment, adding triple therapy to the bDMARD treatment sequence is likely to be very cost-effective for patients with active RA compared to bDMARD sequences. Supplementary Information The online version contains supplementary material available at 10.1007/s40744-021-00300-4.
From the perspective of the Chinese healthcare system, the cost-effectiveness of triple therapy versus biologic disease-modifying antirheumatic drug (bDMARD) treatment sequences as a first-line therapy for patients with RA is still unclear.
We hypothesize that triple therapy could likely be cost-effective compared to bDMARD sequences as a first-line treatment for patients with RA unresponsive to MTX.
What was learned from the study?
From a Chinese payer perspective, triple therapy as first-line treatment in a treatment sequence is likely to be a cost-effective option compared to bDMARDs as first-line treatment for RA patients in whom MTX has failed.
Instead of prescribing triple therapy as a substitute for bDMARDs as a first-line treatment, adding triple therapy to the bDMARD treatment sequence is likely to be very cost-effective for patients with active RA compared to bDMARD sequences.
INTRODUCTION
Rheumatoid arthritis (RA) is a chronic autoimmune disease that can occur at any age, with a high incidence in patients ranging from 30 to 50 years old [1,2]. The incidence of RA is 0.5-1% worldwide, while the prevalence is 0.28% in China, indicating that the total number of Chinese patients is approximately 4 million; additionally, the ratio of affected males to females is approximately 1:4 [2][3][4]. In China, the disability rates of RA patients with disease durations of 1-5 years, 5-10 years, 10-15 years, and ≥ 15 years are 18.6, 43.5, 48.1, and 61.3%, respectively [5]. With increasing disease duration, the incidence of disability and functional limitation increases [5]. The average direct cost per RA patient is $1917.21 ± $2559.06/year, with drug costs accounting for more than 50% of the total cost ($1283.89 ± $1898.15) [6]. Therefore, RA not only causes a decline in patients' physical function, quality of life (QoL), and social participation, but also places a major economic burden on patients' families and society [7,8].
For patients with active RA, although methotrexate (MTX) is the most commonly prescribed conventional disease-modifying antirheumatic drug (cDMARD), its use is limited because of poor tolerability and inadequate efficacy [9,10]. A combination of cDMARDs, such as triple therapy with MTX, sulfasalazine, and hydroxychloroquine, is therefore considered for RA patients who have a suboptimal response to MTX [11]. After the failure of monotherapy or a combination of cDMARDs, biologic disease-modifying antirheumatic drugs (bDMARDs), including tumor necrosis factor (TNF) and non-TNF inhibitors, are recommended for patients with active RA on the basis of the guidelines of the American College of Rheumatology (ACR), European League Against Rheumatism, and Chinese Rheumatology Association [9,12,13]. Consequently, TNF inhibitors (etanercept, adalimumab, infliximab, certolizumab, and golimumab) and non-TNF inhibitors (abatacept, rituximab, and tocilizumab) have been approved by the Chinese National Medical Products Administration and have become widely used [14]. Although the use of biologic agents has contributed significantly to the effective control and early treatment of active RA to prevent permanent disability, the use of biologics in early RA remains limited due to cost considerations [15][16][17][18][19].
According to previous studies, triple therapy is not only noninferior to but also as safe as adding a biologic to MTX [19,20]. A systematic review and network meta-analysis demonstrated that triple therapy and the combination of most bDMARDs with MTX had clinical efficacy in controlling disease progression [21]. Moreover, a multicenter, phase III, randomized controlled trial (RCT) (RACAT) compared triple therapy with biologic treatment (etanercept + MTX) as a first-line treatment for patients with active RA in whom MTX monotherapy failed [22]. The results of this study confirmed the noninferiority of triple therapy and suggested that patients with active RA achieved neither significant clinical improvement nor favorable responses after 24 weeks of treatment with etanercept + MTX compared to triple therapy [22]. According to the evidence on triple therapy noted above, the objective of this study was to assess the cost-effectiveness of implementing triple therapy compared to bDMARD sequences as a first-line treatment for patients with RA unresponsive to MTX.
Model Structure Overview
To best simulate the heterogeneity of RA patients and reflect clinical practice, an individual patient-level iviRA model in which 20,000 patients transitioned through a predefined treatment sequence was implemented using R software (version 4.0.3, https://www.r-project.org/) (Fig. 1). The iviRA model (version 2.0) is an open-source project for value assessments in RA; the model was developed by the Innovation and Value Initiative (IVI) and simulates the health outcomes and costs related to all DMARDs [23]. Based on this model, we evaluated the cost-effectiveness of strategy initiation with triple therapy compared to strategy initiation with bDMARDs over a lifetime horizon (50 years). In the treatment sequence, patients were evaluated in every model cycle (6 months). Patients were able to remain on the current treatment if they achieved a favorable response to the current treatment and did not experience adverse events (AEs); otherwise, the patients were switched to the next-line therapy. The main outcomes of this cost-effectiveness analysis were the total cost, life year (LY), quality-adjusted life year (QALY), and the incremental cost-effectiveness ratio (ICER). According to the economic evaluation guidelines in China, both costs and outcomes were discounted at 3% per year.
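A minimal sketch of the two bookkeeping steps this paragraph describes — discounting per-cycle outcomes at 3% per year over 6-month cycles and forming an ICER. This is not the authors' iviRA code, and all numbers are invented for illustration:

```python
def discounted_total(per_cycle_values, annual_rate=0.03, cycle_years=0.5):
    """Sum per-cycle costs or QALYs, discounting each cycle to present value."""
    total = 0.0
    for i, v in enumerate(per_cycle_values):
        t = i * cycle_years                      # years elapsed at cycle start
        total += v / ((1.0 + annual_rate) ** t)  # discount to time zero
    return total

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Illustrative two-strategy comparison over four 6-month cycles:
costs_b = [9000, 9000, 9000, 9000]   # hypothetical bDMARD sequence costs
costs_t = [1500, 1500, 1500, 1500]   # hypothetical triple-therapy costs
qalys_b = [0.40, 0.40, 0.39, 0.39]
qalys_t = [0.39, 0.39, 0.38, 0.38]

print(icer(discounted_total(costs_b), discounted_total(qalys_b),
           discounted_total(costs_t), discounted_total(qalys_t)))
```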
Model Inputs
All information about model input parameters, including patient characteristics and distributions of variables, is listed in Table 1. The characteristics of the patients, including age, sex, baseline Health Assessment Questionnaire (HAQ) score, and baseline 28-Joint Disease Activity Score (DAS28), were obtained from a phase III RCT (ORAL Sync) that recruited Chinese patients with RA [24,25]. On the basis of the report on Chinese nutrition and chronic disease in 2015, the sex-specific weights for the Chinese population were also taken into consideration [26].
Treatment Sequence
The model simulates up to four lines of treatment, defined on the basis of the 2018 RA treatment guidelines in China and current clinical practice. In the comparator group, TNF inhibitor bDMARDs (e.g., etanercept) were used as a first-line strategy when patients were inadequately responsive to MTX [27]. After failure of the first-line treatment, patients who had experienced unresponsiveness or intolerance to one TNF inhibitor would not use another brand-name TNF inhibitor due to unfavorable Chinese reimbursement policies [27]. Then, patients were switched to a non-TNF inhibitor bDMARD (e.g., abatacept, rituximab, or tocilizumab) as second-line treatment; non-TNF inhibitor bDMARDs are another class of biologic agents with different mechanisms of action [28,29]. After second-line treatment failure, patients were treated with a Janus kinase (JAK) or signal transducer and activator of transcription (STAT) inhibitor (e.g., tofacitinib) as a third-line treatment. Finally, after third-line treatment failure, patients were eventually switched to the nonbiologic therapy (NBT) phase, which mainly comprised cDMARDs, such as MTX, hydroxychloroquine, cyclosporine, and leflunomide, until death [27].
In the study group, aside from the administration of triple therapy instead of TNF inhibitors as a first-line treatment, the treatment sequence was the same as that in the comparator group. Three baseline analyses were simulated; the difference among them was that three non-TNF inhibitors (abatacept, rituximab, and tocilizumab), which have been approved for the market in China, were used as the second-line treatment. All target DMARDs (tDMARDs, including bDMARDs and JAK and STAT inhibitors) in the treatment sequences were administered in combination with MTX.
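The four-line sequences just described are straightforward to encode; a hypothetical sketch (drug labels follow the text, while the data structure and function are our own illustration):

```python
# One comparator arm and the corresponding study arm (tocilizumab shown as
# the example non-TNF inhibitor; abatacept or rituximab are the alternatives).
COMPARATOR_ARM = ["etanercept + MTX", "tocilizumab + MTX", "tofacitinib + MTX", "NBT"]
STUDY_ARM      = ["triple therapy",   "tocilizumab + MTX", "tofacitinib + MTX", "NBT"]

def next_line(sequence, current_index):
    """Advance one treatment line; patients remain on NBT until death."""
    return min(current_index + 1, len(sequence) - 1)
```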
The response level was estimated after the first 6 months of every treatment in the sequence except for the NBT phase, because NBT does not have an associated initial ACR response level [23]. The therapeutic efficacy of DMARDs was obtained from a network meta-analysis that included 96 unique RCTs of RA treatment performed by IVI [23]. Another core parameter in the model was the RA activity measured by the DAS28, which was associated with treatment switching after the first 6 months of every treatment line. The DAS28 could be classified as severe (> 5.1), moderate (3.3-5.1), mild (2.6-3.2), or absent (< 2.6) [12]. All patients commenced with severe activity, and the relationship between the ACR response level and change in RA activity is listed in Table 1, as evaluated by Aletaha et al. [30,31]. We supposed that the ACR response rate in patients who had received bDMARDs before dropped to 84% of that in bDMARD-naïve patients (treatment effect factor), consistent with a study by Carlson et al. [32]. According to the DAS28, discontinuation probability, and AEs, patients passed through the treatment sequence and finally progressed to the NBT phase. During every model cycle, patients continued the current treatment or switched to the next-line treatment according to a previous study and international recommendations: treatment failure was diagnosed if the DAS28 was > 3.2 or the improvement in the DAS28 was less than 1.2 [9,12,33]. As mentioned above, there was a certain probability of treatment discontinuation among patients who continued the current treatment, covering all-cause discontinuation and the time to discontinuation. The discontinuation probability was evaluated on the basis of the survival curve obtained from the CORRONA database using a generalized gamma distribution model [34] (Supplementary Material Table 1). According to a study conducted by Zhang et al., patients with mild or no disease activity had approximately 0.52 times the odds of treatment discontinuation of patients with moderate RA activity [35]. We adjusted the curve from the CORRONA database using an odds ratio (OR = 0.52) and estimated the treatment duration for patients with mild or no disease activity, since patients in the CORRONA database have, on average, moderate RA activity [35]. Apart from the lack of response to the current treatment and the discontinuation of treatment, AEs may also result in treatment switching. Based on a study by Stevenson et al., we only considered serious infection (i.e., pneumonia) in the model, since only severe infection significantly impacted the cost and QoL [36,37].
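One reading of the 6-month stay/switch rule described above, written out as code. This is a sketch of our interpretation, not the published model; the thresholds are those cited in the text:

```python
def treatment_failed(das28_now, das28_at_line_start):
    """Switch rule: DAS28 still above 3.2, or an improvement of less than 1.2."""
    improvement = das28_at_line_start - das28_now
    return das28_now > 3.2 or improvement < 1.2

def classify_das28(das28):
    """Activity bands used in the model."""
    if das28 > 5.1:
        return "severe"
    if das28 > 3.2:
        return "moderate"
    if das28 >= 2.6:
        return "mild"
    return "absent"

print(treatment_failed(das28_now=4.0, das28_at_line_start=5.5))  # True
print(classify_das28(3.0))                                       # "mild"
```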
Disease Progression and HAQ Score Change
The HAQ score, an instrument for measuring the physical function and disease progression of patients, ranges from 0 to 3 in multiples of 0.125 (a higher score indicates greater disease progression). The HAQ score depends on the disease activity, and both of these factors impact the utility value, mortality risk, and hospitalization cost in the model. The HAQ score changed over time in the model; however, the change was not related to treatment but was associated with the ACR response and the time spent in the NBT phase. The relationship between the ACR response level and the change in the HAQ score after the first 6 months was reported by Carlson et al. and is displayed in Table 1 [32]. After the first 6 months of each treatment line, the response-dependent change was subtracted from the patient's baseline HAQ score to simulate improvement with treatment. After the initial 6 months, apart from the NBT phase, a constant annual rate (no disease progression according to the HAQ score) was applied for long-term treatment if patients continued the current therapy [38,39]. In the NBT phase, an annual rate and an age-specific rate were used to model HAQ score progression, obtained from an observational study conducted by Wolfe et al. and a longitudinal study performed by Michaud et al., respectively [40,41]. At the time patients switched to the next-line treatment, the HAQ score rebounded, meaning that any improvement in the HAQ score obtained from the last treatment line was lost at the time of initiation of a new treatment [42]. It was assumed that the HAQ score of patients would increase back to their baseline score at the beginning of the initial 6 months of each treatment line. All parameters are listed in Table 1.
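A toy sketch of the HAQ trajectory rules just described (improvement on response, no progression while on therapy, full rebound at switch); the function names and magnitudes are illustrative, not taken from the paper:

```python
def haq_on_treatment(baseline_haq, acr_improvement, n_cycles):
    """HAQ over one treatment line: initial drop, then flat while on therapy."""
    haq = max(0.0, baseline_haq - acr_improvement)
    return [haq] * n_cycles

def haq_at_switch(baseline_haq):
    """Rebound: gains from the failed line are lost when a new line starts."""
    return baseline_haq

print(haq_on_treatment(1.5, 0.375, 4))  # [1.125, 1.125, 1.125, 1.125]
print(haq_at_switch(1.5))               # 1.5
```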
Mortality
Death could occur at any point in time in the model. Based on the probability of death from the Chinese life table and the HAQ score, a function of age- and sex-specific mortality was simulated [43]. We applied an OR (2.22) for the effect of the HAQ score on the mortality rate from the life table, which was estimated by Wolfe et al. [44]. Moreover, we also considered the impact of the change in the HAQ score on mortality; with every 0.25-unit HAQ score increase, the mortality rate of the subsequent 6 months increased to a certain degree, according to the hazard ratio reported by Michaud et al. [45] (Table 1).
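Applying an odds ratio to a life-table probability uses the standard odds transformation; with a baseline death probability p and the HAQ-related odds ratio OR, the adjusted probability is:

```latex
\mathrm{odds}' = \mathrm{OR} \times \frac{p}{1-p}, \qquad p' = \frac{\mathrm{odds}'}{1+\mathrm{odds}'}
```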
Cost and Utility Estimates
In this study, both direct medical costs (including the costs of drug acquisition, AE management, administration, monitoring, and hospitalization) and indirect costs (such as the costs of productivity loss) were considered in the model. All unit costs were derived from national databases, local hospitals, previously published studies, and the consensus of experts. Drug acquisition costs were obtained from the website of China Medical Bidding [46]. General management costs, including those of routine clinical tests, X-ray examinations, and outpatient follow-up visits, were based on a cost-effectiveness analysis performed by Wu et al. [47]. The annual days of hospitalization were associated with the HAQ score, so we estimated the cost of hospitalization according to a study by Carlson et al. due to the paucity of relevant information about this relationship in China [32]. The average expense of patients in the hospital per day was derived from a study by Wu et al. [47]. We assumed that the cost of AE management was the same across different tDMARDs and equal to the cost of treating pneumonia ($1761.4), according to Tian et al. [48]. All costs in this study were converted into 2019 US dollars (1 USD = 6.83 RMB), and the Chinese consumer price index was also used to adjust the costs from past sources to 2019 USD [49].
Based on the following algorithm, which was obtained from a previous cost-effectiveness study of Chinese RA patients conducted by Tian et al., health-related QoL was estimated by mapping HAQ-DI scores to EuroQol five-dimensional three-level (EQ-5D-3L) utility values [48].
Utility = 0.74 − 0.17 × HAQ.

In this study, we only included serious infection (i.e., pneumonia) as an AE in the model because the safety profiles among tDMARDs are similar and would not impact the results significantly [23]. The impact of pneumonia on QoL was measured by the health disutility weight, a drop of 0.156 units of utility during the month of infection [36,50].
Sensitivity and Scenario Analyses
We performed multiple analyses, including one-way sensitivity analyses, probability sensitivity analyses (PSAs), and a series of scenario analyses, to explore the uncertainty and robustness of the model. We changed the variables over a reasonable range and plausible distribution to determine crucial drivers in the model; for example, we varied the upper and lower limits of the drug price by 20%. PSAs were conducted for 2000 sets of 5000 patients by Monte Carlo simulation. The willingness-to-pay (WTP) threshold was set at $30,950/QALY or $10,316/QALY in China by using three times or one time the per-capita gross domestic product, as recommended by WHO guidelines as a "cost-effective" or "very cost-effective" threshold, respectively [51,52].
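The PSA decision rule can be summarised with net monetary benefit: at a WTP threshold λ, a strategy counts as cost-effective in a Monte Carlo draw when λ·ΔQALY − Δcost > 0. A hedged Python sketch with synthetic draws (not study outputs):

```python
import random

def prob_cost_effective(draws, wtp=30950):
    """draws: list of (delta_cost, delta_qaly) pairs from Monte Carlo PSA runs."""
    wins = sum(1 for dc, dq in draws if wtp * dq - dc > 0)
    return wins / len(draws)

# Illustrative synthetic draws (invented means and spreads):
random.seed(0)
draws = [(random.gauss(60000, 15000), random.gauss(0.7, 0.2)) for _ in range(2000)]
print(prob_cost_effective(draws))          # at the $30,950/QALY threshold
print(prob_cost_effective(draws, 10316))   # at the "very cost-effective" threshold
```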
To understand the comprehensive cost-effectiveness of setting triple therapy as a first-line treatment for patients with RA who are unresponsive to MTX in China, six scenario analyses were performed: (1) Based on the original baseline treatment sequence, we replaced the first-line bDMARD (ETN) in the comparator group with TNF inhibitors of other brands, such as adalimumab, infliximab, certolizumab, and golimumab (Supplementary Material Table 3). (2) We varied the price of all tDMARDs to 75, 50, and 25% of their original price, considering the potential implications of drug tapering and biosimilars (Supplementary Material Table 4). (3) On the basis of the rule of clinical practice mentioned in the treatment sequence section, 36 possible strategies were formulated (Supplementary Material Table 5); all bDMARD sequences were compared to triple therapy followed by NBT. (4) According to the results of scenario 3, we selected the treatment sequence (TT-IFX-RTX-TOF-NBT) that had the lowest ICER, and, in addition to the first-line triple therapy, the location of the other drugs in the sequence was adjusted for comparison with the sequence without triple therapy (IFX-RTX-TOF-NBT) (Supplementary Material Table 6). (5) Triple therapy was inserted at different positions in the treatment sequence as a first-, second-, third-, or fourth-line therapy to assess its impact on the cost-effectiveness analysis (Supplementary Material Table 7).
Base Case Results
The results of the base case analyses, obtained by running 50,000 Monte Carlo simulations, are presented in Table 2. Three base case analyses showed that triple therapy was associated with lower costs but also fewer LYs and QALYs than bDMARD sequences. These produced ICERs ranging from $87,090/QALY to $104,032/QALY, above the WTP threshold of $30,950/QALY.
Sensitivity and Scenario Analyses
Because there were three pairs of baseline results with small differences in health outcomes, a tornado diagram was prepared to compare the TT-TCZ-TOF-NBT and ETN-TCZ-TOF-NBT strategies, which had the lowest ICERs among the base case analyses (Fig. 2). The model was highly sensitive to the baseline DAS28, the ACR response rate of tofacitinib, and the therapeutic efficacy. Other parameters, such as the baseline HAQ score, drug costs, sex proportion, and initial age, had a moderate or minor impact on the results. The PSA results showed that the three bDMARD strategies had a 0% likelihood of being cost-effective compared to the triple therapy strategy at a WTP threshold of $30,950/QALY (Fig. 2). The PSA results of scenario 3 showed that half of the strategies had probabilities of 80% or more of being considered cost-effective treatments compared to triple therapy. The results of the analyses of the first scenario were similar to those of the base case analyses, which revealed that bDMARDs as first-line treatment in treatment sequences are unlikely to be cost-effective compared with triple therapy (Table 2, online Supplementary). In scenario 2, except for the TT-RTX-TOF-NBT and TT-TCZ-TOF-NBT treatment strategies, all strategies could be regarded as cost-effective compared to bDMARDs, and when we varied the price of all tDMARDs to 25% of the original, TT-TCZ-TOF-NBT versus ETN-TCZ-TOF-NBT had the lowest ICER, at $19,501/QALY. In scenario 3, we only compared the 36 bDMARD strategies with triple therapy because there were slight differences in the QALYs among the bDMARD strategies, which may cause misunderstanding in considering the ICERs and cost-effectiveness. The 36 strategies produced ICERs ranging from $20,560/QALY to $45,018/QALY, and over half of the bDMARD sequences yielded values lower than the WTP in China and could be considered cost-effective strategies compared to triple therapy alone. Although the TT-IFX-RTX-TOF-NBT strategy had the lowest ICER ($20,150.6/QALY) compared to the other treatment sequences, it still could not be regarded as a "very cost-effective" strategy, since the ICER exceeded the "very cost-effective" threshold of $10,316/QALY. The results of scenario 4 showed that, compared to bDMARD treatment, adding triple therapy before bDMARD treatment as a first-line therapy, instead of using triple therapy to replace first-line bDMARD treatment, can be recognized as a very cost-effective treatment option. The results of scenario 5 showed that inserting triple therapy into the bDMARD sequence as a first-, second-, third-, or fourth-line treatment could be considered very cost-effective compared to no triple therapy. Among them, the RTX-TOF-TT-IFX-NBT sequence had the lowest ICER, at $894.8/QALY, compared to the comparator group.
DISCUSSION
Compared to bDMARDs, triple therapy is less commonly used in clinical practice as a first-line treatment after MTX failure, although triple therapy has been promoted for many years. Current guidelines, clinical practice, and reimbursement policies permit the initiation of bDMARDs after an inadequate response to MTX, but this could cause the inefficient use of medical resources. Therefore, we conducted the first study to evaluate the cost-effectiveness of triple therapy versus bDMARD treatment sequences with or without triple therapy in patients with RA who were unresponsive to MTX in China. This topic is relevant to patients, rheumatology immunologists, and policymakers. The results indicate that using triple therapy as a first-line treatment is likely to be cost-effective compared to bDMARDs. However, the implication of this study is not that triple therapy should be substituted for bDMARDs or that bDMARDs should be withheld from patients with RA in whom MTX failed. Rather, this study suggests that triple therapy should be prescribed within the bDMARD treatment sequence. The results of the scenario analyses indicate that when triple therapy is inserted as a first-, second-, third-, or fourth-line therapy in the bDMARD sequence, all sequences could be regarded as "very cost-effective" compared to sequences involving bDMARDs only. Although prescribing triple therapy after bDMARD failure is seldom seen in clinical practice, our study suggests that such sequences are nonetheless likely to be very cost-effective. The results of our study are consistent with those of other economic analyses studying triple therapy compared to biologics, although they only compared triple therapy with single biologic agents instead of bDMARD treatment sequences. A study in the United States calculated the cost-effectiveness of triple therapy versus etanercept plus MTX as a first-line treatment and found that commencing bDMARDs without trying triple therapy first yielded a minimal incremental benefit but an increase in cost. A study in Sweden showed that the use of infliximab cost €20,916 more than triple therapy and only gained a 0.01 increase in QALYs over a duration of 21 months, leading to an ICER of €2,404,197/QALY. Our study was based on the perspective of the Chinese health care system, so the conclusions might not be applicable to other countries because of differences in costs, clinical guidelines, policies, and health systems among countries. To our knowledge, this is the first modeling study from the Chinese health care system perspective to explore the cost-effectiveness of treatment sequences for RA patients in whom MTX treatment failed.
As with any model, there are some limitations to this study. First, we derived the therapeutic efficacy from a network meta-analysis in which most RCTs were conducted over the short term (approximately 12-24 months). This could introduce some bias into the results, since the model extrapolates lifetime efficacy and costs. However, most RCTs have this unavoidable limitation due to the resources and major expenditures required for long-term follow-up. Thus, we not only constructed a decision analytic model that reflects the heterogeneity of patients but also made plausible assumptions and conducted multiple sensitivity analyses and scenario analyses to improve the robustness of the model. Second, in the absence of reliable data, we hypothesized that the ACR response rate after bDMARD failure would decrease to 84% of that in patients who had not previously received a biologic. However, the sensitivity analyses showed that the impact of the treatment effect factor on the model outcomes was marginal. Third, in this study, we assumed that patients will finally switch to the NBT phase, which is commonly and justifiably used in economic evaluations in RA [23,27]; however, the treatment sequences in clinical practice will have more permutations and combinations than the limited treatment sequences in this study. Finally, our model considered neither other AEs nor the possibility that the risk of AEs might differ among different treatment sequences, which might underestimate the direct costs and overestimate the health outcomes, such as the efficacy of bDMARDs. However, a published study suggested that since the safety profiles of bDMARDs are similar, the outcomes of the model would not change when adding other AEs into the model [32].
CONCLUSIONS
From the perspective of the Chinese healthcare system, compared to bDMARD treatment sequences, triple therapy is estimated to be cost-effective for patients with active RA, at a WTP threshold of $30,950/QALY. Furthermore, a very cost-effective level was obtained regardless of the position of triple therapy within the bDMARD treatment sequence, compared to sequences not including triple therapy.
Author Contributions. Sini Li constructed the model, collected and analyzed data, and drafted the manuscript. Xiaomin Wan conceptualized the study and provided the model framework. Yamin Li contributed to the revision of the manuscript. Liubao Peng and Jianhe Li were the guarantors of the study and provided technical and material support. Thanks to Mr. Zhu Wenjie for his love and encouragement to the first author. All authors gave final approval for this version to be published.
Compliance with Ethics Guidelines. This is a model-based economic evaluation for which the patient data were all obtained from previously published studies. It does not contain any studies with animal or human participants.
Data Availability. The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Open Access. This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/bync/4.0/.
Sex differences in pain catastrophizing and its relation to the transition from acute pain to chronic pain
Background and importance Differences exist between sexes in pain and pain-related outcomes, such as development of chronic pain. Previous studies suggested a higher risk for pain chronification in female patients. Furthermore, pain catastrophizing is an important risk factor for chronification of pain. However, it is unclear whether sex differences in catastrophic thinking could explain the sex differences in pain chronification. Objectives The aim of this study was to examine sex differences in pain catastrophizing. Additionally, we investigated pain catastrophizing as a potential mediator of sex differences in the transition of acute to chronic pain. Design, settings and participants Adults visiting one of the 15 participating emergency departments in the Netherlands with acute pain-related complaints. Subjects had to meet inclusion criteria and complete questionnaires about their health and pain. Outcomes measure and analysis The outcomes in this prospective cohort study were pain catastrophizing (the short-form Pain Catastrophizing Scale) and pain chronification at 90 days (Numeric Rating Scale ≥ 1). Data were analysed using univariate and multivariable logistic regression models. Finally, stratified regression analyses were conducted to assess whether differences in pain catastrophizing accounted for observed differences in pain chronification between sexes. Main results In total 1,906 patients were included. Females catastrophized pain significantly more than males (p < 0.001). Multiple regression analyses suggested that pain catastrophizing is associated with pain chronification in both sexes. Conclusions This study reported differences between sexes in catastrophic cognitions in the development of chronic pain. This is possibly of clinical importance to identify high-risk patients and ensure an early intervention to prevent the transition from acute to chronic pain. Supplementary Information The online version contains supplementary material available at 10.1186/s12871-024-02496-8.
Introduction
Pain is one of the most common complaints in emergency departments (ED) [9]. Even though 70-90% of patients visiting the ED complain of pain [2], undertreatment remains a problem [4]. Undertreatment increases the risk of developing chronic pain [19]. Many definitions of chronic pain have been proposed, one of which is pain persisting beyond three months [30]. Chronic pain forms an enormous burden on health care in the Netherlands, with an overall prevalence of 18% for moderate to severe pain in 2010 [1]. Chronification of pain has many consequences, such as decreased quality of life, overutilization of healthcare, loss of productivity and possibly opioid dependency [18].
Multiple pre-hospital risk factors for the development of chronic pain have been identified. These include older age, female sex, pain catastrophizing, high-intensity acute pain, less than college education, low socio-economic status, anxiety, and depression [3]. Pain catastrophizing is an emotional and cognitive response to pain and is comprised of a tendency to ruminate, magnify, or feel helpless [16]. Previous studies found that pain catastrophizing contributed to a higher probability of developing chronic pain [3; 8; 10; 12; 19]. Besides an individual association with pain chronification, interactions between these risk factors exist as well. For example, studies have shown that pain catastrophizing interacts with depression, pain intensity, age, level of education, employment status, alcohol dependency, smoking, satisfaction with care received and marital status/relationship [6-8; 10; 24].
As of yet, the direct relationship between pain catastrophizing, sex, and pain chronification is unknown. Previous studies showed that females are more at risk for developing chronic pain [3; 17; 19]. Differences between sexes also exist in pain intensity [8; 12; 13; 16; 17]. This could imply that there are sex differences in the way pain is catastrophized. Previous clinical and experimental studies have been inconsistent about this [7; 8; 12; 13; 16; 22; 24]. Some studies suggested that catastrophizing cognitions or coping strategies were more frequent in females [8; 12; 13; 24]. This may suggest that pain catastrophizing is a potential intermediate in sex differences in the occurrence of pain and its chronification. However, other studies concluded there were no significant differences between sexes in pain catastrophizing [7; 16; 22]. Furthermore, previous studies only determined the association between pain catastrophizing and sex in specific patient groups. They only investigated certain pain causes, such as osteoarthritis, musculoskeletal injury, or motor vehicle accidents with acute whiplash injury [3; 12; 13; 19; 22]. Also, they only included patients with specific locations of pain, such as neck, shoulder, lower back, or knee pain [6; 10; 12; 13; 22]. An intervention on pain catastrophizing could be used to prevent the transition from acute to chronic pain. The four-item short form of the pain catastrophizing scale (PCS) could be considered as a screening variable to identify high-risk patients, since it is brief and accessible. In the 13-item PCS, a score of 30 or more indicates a high level of catastrophizing, which is clinically relevant [25]. As far as we know, no cut-off score for the four-item short form of the PCS has been determined yet. Pain catastrophizing could also be applied as a target for intervention and treatment in an early stage of pain. Earlier studies with cognitive behavioural interventions have found improvements in pain and disability with reduction in pain catastrophizing [20].
To our knowledge, the relationship between pain catastrophizing, sex, and the development of chronic pain within all patients presenting with pain in the ED has not been studied yet. The primary aim is to study the potential differences in pain catastrophizing between sexes. Our second aim is to study the relationship between pain catastrophizing, sex, and pain chronification in all patients presenting in the ED with a pain-related problem who are discharged the same day. Our hypothesis is that sex differences in the risk of developing chronic pain are (partly) explained by sex differences in pain catastrophizing.
Design and subjects
This article is a substudy of the PRACTICE study, with the aim to study the relationship between sex, pain catastrophizing and pain chronification. For this study, data from the PRACTICE study was used. The PRACTICE study is a prospective, multicentre, longitudinal study aimed at developing a prediction model for patients at risk of developing chronic pain. The full description of the design, subjects, and procedure of this study can be found in the study of Ten Doesschate, et al. [26]. This study was conducted between August 2018 and April 2020 in 15 EDs in the Netherlands, including hospitals of all types. The study population was representative of the Dutch population regarding injury, age and sex. Data were collected with questionnaires about health, quality of life, and pain, with a total follow-up of 180 days. Patients of 18 years and older were included when visiting the ED for an acute pain-related cause and discharged without admission. Only patients without admission were included because we were interested in studying these patients exclusively.
Exclusion criteria were cognitive impairment, illiteracy, a language barrier, a current diagnosis of chronic pain located at or near the location of the current complaint, a hospital admission, or acute pain within seven days after surgery.
Ethics approval and consent to participate
The Medical Research Ethics Committee (METC, Protocol 2018-39) approved the study. Local approval was obtained from all participating centres, and the study was conducted in accordance with the principles of the Declaration of Helsinki. Patients provided written informed consent according to the procedure approved by the METC.
Procedure
All consecutive patients presenting at the emergency department with an acute pain-related complaint were asked to participate if they met the inclusion and exclusion criteria. Patients were recruited consecutively as they presented to the ED. During the first month of the study, patients received questionnaires on paper. During the rest of the study, patients received questionnaires in a web-based electronic application. Paper questionnaires were collected in the first month to validate the electronic application. The study protocol for both groups was identical. In the emergency department, patients received usual care without additional interventions.
Outcome measures
Baseline characteristics were collected from electronic patient records. These include age, sex, date and time of arrival and discharge, treatment time, triage priority, numeric rating scale (NRS) of pain on arrival at the ED (NRS0), location and cause of pain, type of injury, pain management and follow-up. Other variables (e.g. pain catastrophizing, pain lasting more than 90 days (NRS90)) were collected from questionnaires.
During the seven consecutive days after discharge, patients were asked daily for their NRS, their use of painkillers (and a specification of the painkillers used) and extra visits. Furthermore, patients were queried during these days about depression and its treatment, whether or not they were in a relationship, pre-existing chronic pain, alcohol consumption, education, employment and sick leave, smoking, and satisfaction with emergency department care (supplemental Table 1). Education level was categorised as low, intermediate or high. Low level of education: primary school, pre-vocational secondary education, secondary vocational education level 1, or completion of the first three years of senior general secondary education or pre-university education. Intermediate level of education: graduation from senior general secondary education, pre-university education, or secondary vocational education level 2-4. High level of education: graduation from at least a university of applied sciences.
On the fifth and sixth days, patients received the four-item short form of the PCS, which measures the level of pain catastrophizing [5; 15]. It is a five-point self-report scale indicating the degree to which participants experience certain thoughts or feelings when having pain (0 = not at all, 4 = all the time). A higher score indicates more catastrophic thinking. On the seventh day after discharge, they also received the EuroQol five-dimension five-level (EQ-5D-5L) questionnaire. The NRS, the EQ-5D-5L questionnaire, questions 7 and 8 of the 36-item Short Form Survey (SF-36) and the Brief Pain Inventory (BPI) were administered at days 90 and 180.
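Since each of the four items is rated 0-4, the total short-form PCS score runs from 0 to 16; a minimal scoring sketch (the plain item sum shown here is our assumption of how totals are formed):

```python
def pcs4_total(items):
    """Sum the four 0-4 item ratings; higher totals mean more catastrophizing."""
    assert len(items) == 4 and all(0 <= i <= 4 for i in items)
    return sum(items)

print(pcs4_total([2, 3, 1, 4]))  # -> 10 (out of a possible 16)
```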
Outcomes were the PCS and pain chronification. The development of chronic pain was based on dichotomisation of the reported severity of pain on day 90, in which NRS = 0 was defined as no chronic pain, and NRS ≥ 1 as chronic pain.
Statistical analysis
Descriptive statistics were reported as frequency (%) for categorical data and mean ± standard deviation (SD) or median with interquartile range (IQR, 25th-75th percentile) for continuous data.
All relevant questionnaires were examined for missing data. Missing data was imputed using multiple imputation by chained equations (MICE) [29], with outcome and baseline variables (sex, age, NRS0, pain location, trauma, fracture, satisfaction with care received, depression and treatment, relationship, pre-existing chronic pain, alcohol consumption, education, employment and sick leave, smoking, PCS, NRS90, and pain chronification) in the imputation model, to create 100 imputed data sets. Imputation was only done after testing that data was missing at random. Supplemental Table 2 gives a complete overview of the imputed variables.
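For illustration of the chained-equations idea: the study used the MICE procedure in R, while this hedged sketch uses scikit-learn's IterativeImputer, a related chained-equations implementation, on a tiny invented matrix:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy data: columns could stand for age, NRS0 and PCS, with missing entries.
X = np.array([[63, 7.0, 2.0],
              [47, np.nan, 1.0],
              [55, 5.0, np.nan],
              [71, 8.0, 3.0]])

imputer = IterativeImputer(max_iter=10, random_state=0, sample_posterior=True)
X_completed = imputer.fit_transform(X)  # one completed data set per call
# Repeating with different random states approximates multiple imputation
# (the study created 100 imputed data sets).
print(X_completed)
```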
The PCS of patients with and without pain chronification were compared. Univariate and multivariable logistic regression models were performed on the imputed datasets to correct for possible confounders while studying sex differences in pain catastrophizing and the risk of developing chronic pain. Baseline variables were tested as possible confounders. A chi-squared test was conducted for categorical data. The Wilcoxon-Mann-Whitney test was conducted for numerical, non-normally distributed data. Normally distributed numerical data was compared using Student's t-tests.
A regression analysis studying the relation between the PCS and chronic pain was performed, corrected for sex and other confounders. Tested variables were chosen based on previous literature, clinical reasoning, or identification through regression analysis. Logistic regression analyses with interaction terms were performed to identify effect modifiers. Potential confounders were chosen based on a model built on previous literature, clinical reasoning, clinical experience and by drawing a causal directed acyclic graph (DAG) (Supplemental Fig. 1) [27]. Based on the DAG, the algorithm selects variables that need to be corrected for to allow an estimation of the causal effect of the exposure. Data were presented as odds ratios (OR) with 95% confidence intervals (95% CI). Finally, stratified regression analyses were conducted to assess whether differences in pain catastrophizing accounted for observed differences in pain chronification between sexes, in order to exclude effect modification by sex.
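A hedged sketch of the interaction and stratified models described above. The study used R; this Python version with statsmodels uses invented column names and synthetic data purely for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data standing in for the cohort (not study data).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "sex": rng.integers(0, 2, n),        # 0 = male, 1 = female
    "pcs": rng.integers(0, 17, n),       # short-form PCS total
    "age": rng.integers(18, 90, n),
    "nrs0": rng.integers(0, 11, n),      # NRS on arrival
})
logit_p = -2 + 0.1 * df.pcs + 0.02 * df.age
df["chronic_pain"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Interaction model: does the PCS effect on chronic pain differ by sex?
m_int = smf.logit("chronic_pain ~ pcs * sex + age + nrs0", data=df).fit()
print(m_int.params["pcs:sex"])           # interaction term (effect modification)

# Stratified models, one per sex, with OR and 95% CI per PCS step:
for s, sub in df.groupby("sex"):
    m = smf.logit("chronic_pain ~ pcs + age + nrs0", data=sub).fit(disp=0)
    print(s, np.exp(m.params["pcs"]), np.exp(m.conf_int().loc["pcs"]).values)
```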
Sensitivity analyses were conducted with different definitions of pain chronification. These include any pain (NRS ≥ 1) and moderate to severe pain (NRS ≥ 4) lasting more than 90 days [1; 19]. These cut-offs represent different definitions of chronic pain used in the literature. Baseline characteristics were compared between responders and non-responders (missing data) to check whether data was missing at random. In all analyses, a p-value < 0.05 was considered statistically significant. The data were analyzed using R version 4.0.2 [21].
Results
In total, 1,965 patients were included, of which 1,906 patients remained after the inclusion and exclusion criteria were applied. Of the 1,906 analysed patients, 6 participants (0.3%) had missing sex data. 1,009 participants (52.9%) responded to the four-item short form of the PCS, and 825 participants (43.3%) returned the questionnaires on pain after 90 days (Fig. 1). Missing data for other variables ranged from 0.3 to 46.7%. Baseline patient characteristics differed significantly between responders and non-responders on several variables. Patients in the responder group were older (47 vs. 45) and more often female (53.1% vs. 45.9%). Responders visited the emergency department more often with fractures (57.9% vs. 48.4%) and were more often non-smokers (12.7% vs. 24.3%). Patients in the non-responder group had more comorbidities (55.3% vs. 48.0%).
A description of baseline characteristics is provided in Table 1. Significantly more males had pain in the upper extremities (27.2% vs. 21.9%, p < 0.001), while females had more pain in the lower extremities (15.6% vs. 18.9%, p < 0.001). There were no significant differences found between males and females in employment, level of education, and relationship status. Univariately, females had on average a higher PCS compared to males, although the median PCS was similar (Fig. 2). Regardless of the definition used, significantly more females developed chronic pain than males. The incidence of chronic pain did not differ between participating centres (p = 0.339).
The role of pain catastrophizing in pain chronification was examined using both a regression analysis corrected for sex and a stratified regression analysis by sex (Tables 2 and 3). The odds ratios reported here are per step on the pain catastrophizing scale (4-step scale; 0 = no pain catastrophizing = reference category). A significant association was found, after correction for confounders, between pain chronification and pain catastrophizing (OR 1.17; 95% CI 1.05-1.29; p < 0.01). Using the alternative definition for pain chronification (NRS ≥ 4), we only found a significant association in the female group (OR 1.11; 95% CI 1.03-1.20; p < 0.01). Several confounders, such as age, were entered in these stratified regression analyses (Table 3). Age was significantly associated with chronic pain development irrespective of sex or the outcome definition used. Education was significantly associated with pain chronification when NRS ≥ 1 was used as the outcome. Stratified analyses showed no indication of effect modification by sex (Table 3).
Discussion
In this study, we examined potential differences in pain catastrophizing between sexes. We showed that females catastrophize pain more often than males. Furthermore, our data showed that pain catastrophizing increased the risk of chronic pain in both males and females when chronic pain was defined as an NRS ≥ 1 at 90 days. When chronic pain was defined as an NRS ≥ 4 at 90 days, pain catastrophizing was associated with chronic pain in females; in males, the association was not statistically significant. To our knowledge, this was the first study investigating the relationship between pain catastrophizing, sex, and pain chronification in patients presenting to an emergency department with any cause of pain. Pain catastrophizing increased the risk of pain chronification, irrespective of sex. An intervention to reduce pain catastrophizing might thus reduce the risk of pain chronification, although this cannot be concluded from our data.
Our finding that females tend to catastrophize more is consistent with previous studies, which showed that females reported higher levels of catastrophic thinking among both healthy people and chronic pain patients [7, 8, 12, 13]. Females reported more pain, a higher pain intensity, more frequent and longer episodes of pain, poorer pain-related outcomes, and lower pain tolerance [7, 8, 28]. They used more emotion-based coping strategies, whereas males used more problem-focused ones [13]. Sex differences in pain have been attributed to socialization, social and cultural norms, and expectations regarding social roles [28]. For example, males are expected to be stoic and to minimize and endure pain, which could lead to underreporting of pain and catastrophizing by males [28]. Females reported pain sooner to reduce its impact, as they often fulfil more roles, such as taking care of children or elderly people, the household, and work [28].
We have shown a statistically significant relationship between pain catastrophizing and pain chronification (NRS ≥ 1) irrespective of sex. If chronic pain was defined as an NRS ≥ 4 at 90 days, the relationship was only statistically significant in females. Our results were partly consistent with earlier experimental and clinical studies in specific populations [7, 12, 13, 24]. Pierik et al. stated that patients who catastrophized their pain were three times more prone to transition to chronic pain [19]. Multiple studies have also shown that catastrophizing (partly) mediated other pain-related outcomes, such as pain intensity, pain tolerance, and pain disability [6, 7, 12, 16, 24]. Although our data suggest a differential relationship between pain catastrophizing, chronic pain and sex depending on the cut-off point for chronic pain (NRS ≥ 1 vs. NRS ≥ 4), this might be due to a lack of power. Several limitations should be considered. Firstly, despite reminders, 48.3% of participants did not respond to all four questions about pain catastrophizing and 56.8% did not report their NRS after 90 days. This could limit the strength and representativeness of the results. This low response rate might explain the lack of association of depression and smoking with chronic pain that previous studies have reported [12, 24]. We imputed data to reduce the chance of a type II error.
Our results showed that 28.4% of males and 39.3% of females had chronic pain, which is high compared with the 18% prevalence previously reported in other studies. This could be explained by different definitions of chronic pain. Chronic pain in our study was defined as any pain lasting more than 90 days (NRS ≥ 1), whereas other studies defined chronic pain as moderate to severe pain (NRS ≥ 4) [1, 19]. Using the latter definition, 8.5% of males and 15.6% of females developed chronic pain. Our analysis showed similar results for the different definitions of pain chronification, which strengthens our findings.
In addition, participation was voluntary. Participants who completed the questionnaires might differ from patients who refused participation, were lost to follow-up or withdrew from the study. Unfortunately, we could not compare baseline characteristics between patients who provided informed consent and those who did not, since no consent was given for collecting data from patient registries. Baseline characteristics of patients with and without missing data were mostly comparable (Supplemental Table 3). Furthermore, patients free from pain might not feel the need to report their NRS90, while a catastrophic mindset could lead to more willingness to participate and report, which could have led to overestimation. This selection bias is a common problem in studies requiring volunteers [12].
Finally, the relationships between variables such as pain intensity, satisfaction with the care received, and catastrophizing could be confounded by the cause and treatment of the underlying condition. In this study, cause and pain characteristics (for example, nociceptive or neuropathic) were not included in the analysis because of the large number of variables already examined. Many variables would need to be considered, such as cause, comorbidity, type and timing of interventions and/or medications, and whether patients followed the advice given. Confounding by attending physician or location is possible but unlikely, given the absence of differences in the incidence of chronic pain between locations. Despite these limitations, our study has several strengths. It yielded results with important clinical implications for pain treatment in the acute setting. Furthermore, we conducted the study in patients with any cause or severity of pain, which makes it more likely that the findings can be generalized.
In conclusion, this study confirms the sex differences in pain catastrophizing in patients visiting the ED for pain-related complaints. Our data suggest that pain catastrophizing increases the risk of pain chronification, irrespective of sex. This could be relevant for the assessment and management of acute pain in the ED to prevent the transition into chronic pain: high-risk patients, namely those who catastrophize their pain, could be detected, and pain chronification might be prevented. Whether reducing pain catastrophizing indeed leads to less pain chronification is a topic in need of further study.

Table note: * inclusion sites. A significant association was found between pain chronification and pain catastrophizing for both sexes. A similar effect of pain catastrophizing on pain chronification between sexes was shown. The odds ratios reported are per step on the pain catastrophizing scale.
Fig. 2
Pain scores at 90 days and pain catastrophizing per sex (4-step scale, 0 = no pain catastrophizing = reference category). Education level was self-reported by the patient. Low level of education: primary school, pre-vocational secondary education, secondary vocational education level 1, or completion of the first three years of senior general secondary education or pre-university education. Intermediate level of education: graduation from senior general secondary education, pre-university education, or secondary vocational education level 2-4. High level of education: graduation from at least a university of applied sciences. A chi-squared test was conducted for categorical data; the Wilcoxon-Mann-Whitney test was conducted for numerical, non-normally distributed data. NRS: Numeric Rating Scale, PCS: Pain Catastrophizing Scale, CI: confidence interval
Table 1
Baseline characteristics for male and female participants. Significant differences were found between sexes in age, NRS0, fractures, satisfaction with treatment, depression, chronic pain in other locations, alcohol consumption, and smoking. NRS: Verbal Numeric Rating Scale, NRS90: (Verbal) Numeric Rating Scale at day 90, PCS: Pain Catastrophizing Scale, n: number of samples, IQR: interquartile range, SD: standard deviation. * Wilcoxon-Mann-Whitney tests, ** Student's t-tests
Table 2
Regression analysis of the association between pain catastrophizing and pain chronification, corrected for sex. Pain catastrophizing was significantly associated with chronification of pain; we corrected for multiple possible confounders. Education level was self-reported by the patient. Low level of education: primary school, pre-vocational secondary education, secondary vocational education level 1, or completion of the first three years of senior general secondary education or pre-university education
Table 3
Multiple logistic regression analysis on the association between PCS and pain chronification, stratified by sex and corrected for potential confounders
Retroperitoneal Schwannoma: A Rare Case
Introduction. Schwannomas are quite rare in the retroperitoneal region. Here, we describe an incidentally detected retroperitoneal schwannoma in the abdominal computerized tomography (CT) of a patient with acute appendicitis. Case Presentation. A 38-year-old woman was admitted to the emergency service with complaints of progressive abdominal pain and nausea over the last 24 hours. Abdominal examination was compatible with acute abdomen. Acute appendicitis was diagnosed by CT. During the CT evaluation, a round-shaped soft-tissue mass was detected in the retroperitoneal area inferior to the right kidney. The mass was resected and histology revealed schwannoma. Conclusion. Rare tumoral lesions with a benign course, such as schwannoma, can be detected incidentally.
Introduction
Primary tumors of the retroperitoneal region are quite rare, and schwannomas comprise only 1-10% of them. Schwannomas originate from Schwann cells of the peripheral nerve fibers and are usually located in the head, the neck, and the flexor surfaces of the extremities. Schwannomas are quite rare in the retroperitoneal region: among all schwannomas, only 0.7% of benign ones and 1.7% of malignant ones are reported to be located there. The majority of retroperitoneal schwannomas are benign in nature, although malignant ones have also been reported [1][2][3].
Here, we describe an incidentally detected retroperitoneal schwannoma in the abdominal computerized tomography (CT) of a patient with acute appendicitis.
Case
A 38-year-old woman was admitted to the emergency service with complaints of progressive abdominal pain and nausea over the last 24 hours. Physical examination revealed rebound abdominal tenderness in the right lower quadrant. Laboratory tests showed an increased white blood cell count (WBC) and a mildly elevated erythrocyte sedimentation rate (ESR). Pelvic ultrasound (US) was not successful owing to abundant bowel gas. Therefore, CT (with a 64-slice scanner; Philips Brilliance 64, Best, NL) was performed after intravenous administration of nonionic contrast material to confirm acute appendicitis.
CT revealed an increased diameter of the appendix (13 mm) with contrast enhancement and periappendiceal fat stranding consistent with inflammation, confirming the diagnosis of acute appendicitis. During the CT evaluation, a round-shaped soft-tissue mass, 4.5 × 3.5 cm in diameter, with minimal heterogeneous contrast enhancement was detected in the retroperitoneal area inferior to the right kidney (Figure 1). There were a few punctate calcifications inside the lesion on precontrast images. To further characterize the lesion, abdominal magnetic resonance imaging (MRI) was performed with a 3 Tesla scanner (Philips Intera Achieva, Best, NL) with a Torso coil. The lesion, located just anterior to the iliopsoas muscle and inferior to the right kidney, was hypointense on T1 and heterogeneously hyperintense on T2 weighted images, with moderate heterogeneous contrast enhancement (Figures 2(a) and 2(b)). Owing to the location and signal characteristics, the presumptive diagnosis was a neurogenic or a fibrous tumor. Considering the possibility of a schwannoma, the presence of multiple associated schwannomas was excluded by MRI examination of the whole spinal cord.
After a midline abdominal incision, the abdomen was explored and acute appendicitis was confirmed. The right colon was mobilized and a mass measuring 5 × 6 × 5 cm was found. The mass was localized just above the right psoas muscle, lateral to the inferior vena cava and inferior to the right kidney, in close proximity to the ilioinguinalis and femorolateralis nerves, and was resected (Figure 3).
Histopathologic examination of the lesion revealed a tumor composed of two different patterns, characterized by cellular compact areas and loosely textured, less cellular areas (Figure 4). Immunohistochemically, tumor cells strongly and diffusely expressed S-100 protein, whereas CD117 (C-Kit), smooth muscle actin (SMA), and desmin were negative.
Discussion
Schwannomas are nerve sheath tumors that are mostly benign in nature. These neoplasms are usually seen in the adult population between the ages of 20 and 50. The symptomatology of benign schwannomas is highly nonspecific and depends on the location and size of the lesion.
The retroperitoneal region is a rare location for schwannomas except in patients with Von Recklinghausen's disease. It is also noteworthy that malignant degeneration particularly takes place in association with Von Recklinghausen's disease. In general, since the retroperitoneal space is rather large and flexible, the diagnosis of retroperitoneal schwannomas is often delayed, and the lesion reaches a significant size by the time of diagnosis. The most common symptoms are abdominal pain and distention. Depending on the location of the lesion, a variety of symptoms such as secondary hypertension, hematuria, and renal colic have also been reported. Schwannomas are typically located eccentrically in relation to the nerve of origin. This finding could clearly be seen both macroscopically and microscopically in our patient. They often have a true capsule composed of epineurium. In general, schwannomas appear hypointense on T1 and hyperintense on T2 weighted MR images. Calcification has been reported in only 23% of retroperitoneal schwannomas and was observed in our patient on precontrast CT images. Cystic degeneration has been reported more frequently in retroperitoneal schwannomas, with an incidence of up to 66%.
In general, MRI is regarded as the diagnostic modality of choice in the evaluation of retroperitoneal tumors, as it allows better evaluation of the origin, extent, and internal composition of these lesions. There are a few well-known imaging characteristics for schwannomas, mainly the target sign and the fascicular sign. However, these typical signs are not seen frequently in retroperitoneal schwannomas. The "fascicular sign" refers to the appearance of bundles, a general property of neurogenic tumors. The "target sign" is the presence of a hypointense center and a hyperintense periphery on T2 weighted MRI. The lesion presented in this paper does not exhibit any of the above-mentioned typical diagnostic signs [4,5]. Hence, we believe that for the preoperative diagnosis of retroperitoneal schwannomas, a high index of suspicion is mandatory, especially in the absence of characteristic imaging features, as in our patient.
The differential diagnosis for retroperitoneal schwannomas includes other neurogenic tumors such as paraganglioma and pheochromocytoma, as well as liposarcoma and malignant fibrous histiocytoma. In addition, if the retroperitoneal schwannoma contains a considerable amount of cystic degeneration, retroperitoneal cystic masses such as hematoma and lymphangioma should also be included in the diagnostic checklist.
Although rare, malignant counterparts of schwannomas also exist. Detection of a malignant schwannoma is highly important, since it will affect the treatment strategy. From the radiologist's point of view, malignant schwannomas have irregular contours and tend to invade adjacent structures. The retroperitoneal lesion in our patient had regular borders without any sign of adjacent organ invasion, which was highly suggestive of a benign lesion radiologically.
For the surgical treatment of retroperitoneal schwannomas, the current approach is endoscopic-assisted minilaparotomy. Aggressive surgery is not indicated for benign retroperitoneal schwannomas. Although local resection is generally sufficient, metastatic cases have been reported after resection [1]. Therefore, follow-up is important for this patient.
Conclusion
We presented a rare type of retroperitoneal tumor that was detected incidentally in a patient diagnosed with acute appendicitis. Rare tumoral lesions with a benign course, such as schwannoma, can be detected incidentally.
A new mechanism of mass protection for fermions
We present a way of protecting a Dirac fermion interacting with a scalar (Higgs) field from getting a mass from the vacuum. It is obtained through an implementation of translational symmetry when the theory is formulated with a momentum cutoff, which forbids the usual Yukawa term. We consider that this mechanism can help to understand the smallness of neutrino masses without a tuning of the Yukawa coupling. The prohibition of the Yukawa term for the neutrino forbids at the same time a gauge coupling between the right-handed electron and neutrino. We prove that this mechanism can be implemented on the lattice.
HIGGS MECHANISM (SM)
The Higgs mechanism is the mechanism that gives mass to fermions and gauge bosons in the Standard Model (SM). However, in the SM there are massless fermions: the neutrinos. In fact, a right-handed neutrino $\nu_R$ is not introduced, so that the neutrino remains massless. However, most extensions of the SM imply the existence of a $\nu_R$. With the introduction of a $\nu_R$, the neutrino can be coupled to the Higgs field and acquire a Dirac mass term $m_{\nu_e}(\bar{\nu}_{eL}\nu_{eR} + \mathrm{h.c.})$. A fundamental problem is then to understand why $m_{\nu_e}/m_e$ is such a small number ($< 10^{-5}$). In the following we give a possible answer to this problem by means of a mechanism that protects a fermion coupled to the Higgs field from acquiring a mass from the Higgs vacuum.
MASS PROTECTION MECHANISM
We present the following mechanism, based on two characteristics of the SM: first, the freedom in the choice of the representations of the symmetries of the theory in which the elementary particles appear (we will consider specifically the translational symmetry), and second, the fact that it is a low-energy effective theory. This last fact implies the presence of a momentum cutoff scale $\Lambda$. We will identify new representations of the translational symmetry, considering that the momentum cutoff $-\Lambda \le p_\mu \le \Lambda$ naturally reduces the Poincaré symmetry group to a discrete subgroup: in Euclidean space, the one generated by rotations of $\pi/2$ in each plane and by translations of $\pi/\Lambda$ in each direction.
To illustrate how the mass protection mechanism works, we consider a chiral model with a left and a right fermion coupled to a complex scalar field, with a different representation under translations for each chirality of the fermion field. The physical interpretation is that the two chiralities are coupled differently to the physics beyond the cutoff. Then, the usual Yukawa term in momentum space (Eq. (5)) is forbidden by translational invariance. In Eq. (5), $[p + k]$ is the momentum compatible with the cutoff, obtained by adding or subtracting $2\Lambda$, if necessary, to the components of $p + k$. The interaction term compatible with the new implementation of translations is given in Eq. (6), where the tilde symbol was already introduced in Eq. (2). At leading order for the fermion propagator, one finds, in the case of the term (5), a free fermion with mass $m = y\langle\phi\rangle$, and in the case of the term (6), a massless fermion up to corrections proportional to inverse powers of the cutoff $\Lambda$. However, as the term (6) couples momentum modes that differ by $\Lambda$, a nonperturbative implementation of this mechanism could be problematic owing to the well-known fermion doubling phenomenon. Let us see that this is not the case.
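As a small numerical illustration of the bracket operation $[p+k]$ described above (the function names and the choice $\Lambda = \pi$ below are ours, purely for illustration):

```python
# Hedged sketch of [p + k]: components of p + k are shifted by +/- 2*Lambda
# when they fall outside the cutoff box [-Lambda, Lambda].
import numpy as np

def wrap_momentum(p, k, Lam=np.pi):
    """Return [p + k]: each component folded back into [-Lam, Lam]."""
    q = np.asarray(p) + np.asarray(k)
    return (q + Lam) % (2 * Lam) - Lam

# Example: two modes near the same edge of the cutoff box.
p = np.array([0.9 * np.pi, 0.0, 0.0, 0.0])
k = np.array([0.8 * np.pi, 0.0, 0.0, 0.0])
print(wrap_momentum(p, k))  # first component 1.7*pi is wrapped to -0.3*pi
```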
LATTICE IMPLEMENTATION
On the lattice, we take a representation of translations under which the fermion field acquires a phase $\alpha_{L,R}$ upon a translation of one lattice spacing in the $\hat\mu$ direction. As in the previous discussion, we take $\alpha_L = 0$ and $\alpha_R = \pi$ in order to have compatibility with the usual representations of rotations. The translationally invariant lattice action is given in Eq. (8), where $F(p)$ is a form factor required to be 1 for $p = 0$ and to vanish when $p$ equals any of the doubler momenta. With this method, we have a theory with 16 fermions, 15 of which do not interact with physical particles and decouple from the real world [1]. In order to do perturbation theory, we consider a scalar field $\phi$ with a VEV $\phi_{1x} = v$, $\phi_{2x} = 0$, and write the field in terms of small perturbations $\eta_{1,2}$ around this vacuum. Let us first note that the presence in the action of the term $S_{FB}$ with such an unusual coupling does not modify the vacuum $\phi_{1x} = v$. This is a consequence of both analytical and numerical studies of the antiferromagnetic (AFM) phase of the chiral Yukawa model [2]. Under the change of variables $\phi'_x = \varepsilon_x \phi_x$, where $\varepsilon_x = (-1)^{\sum_\nu x_\nu}$, the action is invariant under a corresponding mapping of the couplings. With the mapped couplings, a stable AFM phase exists where the scalar gets a staggered mean value $\phi'_{1x} = \varepsilon_x v_{st}$. We can then conclude that the original vacuum $\phi_{1x} = v$ is also a stable vacuum for the action (8).
In momentum space, the inverse of the fermion propagator at tree level is built from $\slashed{s}(p) = \sum_\mu \gamma_\mu \sin p_\mu$ and $F_\pi(p) \equiv F(p + \pi)$, with $\pi \equiv (\pi, \pi, \pi, \pi)$; note that $F_\pi(0) = 0$. This matrix is not diagonal in momentum space, as it connects $p$ with $p + \pi$. It can be diagonalized to give a propagator with mass function $m(p) = y\,v\,F(p)F_\pi(p)$ (Eq. (18)). This propagator has 16 poles, at momenta $(0, 0, 0, 0)$, $(\pi, \pi, \pi, \pi)$, $(\pi, 0, 0, 0)$, $(\pi, \pi, 0, 0)$, etc., which implies zero mass at tree level for the physical fermion and all the doublers. One can see through perturbative calculations and nonperturbative arguments that this masslessness is maintained at every loop order [3]. We are interested in the continuum limit of the theory because we want to apply it to energy scales $E \ll \Lambda$. In the limit $\Lambda \to \infty$, the propagator (18) becomes $-i/\slashed{p}$, that is, a massless fermion propagator. This limit is well defined because it corresponds to the second-order phase transition of the AFM phase in the chiral Yukawa model, where we have restoration of rotational invariance and renormalizability of the theory. In summary, we have obtained a massless fermion in the low-energy theory by using transformation laws under the symmetries of the theory related to the presence of the scale $\Lambda$, that is, related to the properties of the theory at the next level $E > \Lambda$.
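To make the doubling-safe mass function concrete, the following sketch (ours, not from the paper) picks one admissible form factor, $F(p)=\prod_\mu \cos(p_\mu/2)$, which equals 1 at $p = 0$ and vanishes at every doubler momentum, and checks numerically that $m(p) = y\,v\,F(p)F(p+\pi)$ vanishes at the physical pole and at all 15 doublers.

```python
# Illustrative check of m(p) = y * v * F(p) * F(p + pi) at the 16 poles.
import itertools
import numpy as np

def F(p):
    """One admissible form factor: 1 at p = 0, zero at every doubler momentum."""
    return np.prod(np.cos(np.asarray(p) / 2.0))

def mass(p, y=1.0, v=1.0):
    """Tree-level mass function m(p) = y * v * F(p) * F(p + pi)."""
    return y * v * F(p) * F(np.asarray(p) + np.pi)

# The 16 poles: corners of the Brillouin zone with components 0 or pi.
for corner in itertools.product([0.0, np.pi], repeat=4):
    print(corner, round(mass(corner), 12))  # all print 0.0

# Away from the poles the mass function is generically nonzero.
print(mass([0.3, 0.1, 0.4, 0.2]))
```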
APPLICATION TO THE SM
The present mechanism could be applied in the framework of the SM with $\nu_R$ to understand the absence of a neutrino Dirac mass, by simply choosing a different representation for the L and R chiralities of this fermion under translational symmetry. Since in the SM $e_L$ and $\nu_L$ are coupled by the gauge field, they should appear in the same representation, together with $e_R$ (in order to have the usual Higgs mechanism for the electron). Then the right-handed electron and the right-handed neutrino are in different representations and cannot be in the same weak isospin multiplet. This situation is in fact assumed in the SM.
Recent oscillation results [4] suggest that the neutrino could have a small mass. The usual explanation for this requires the introduction of Majorana terms, which violate lepton number conservation. In the framework of the minimal SM as an effective theory, a Majorana mass can be generated for the neutrino by a dimension-five operator. In the SM with $\nu_R$, the see-saw mechanism [5] balances the Dirac term $\bar\psi_L(x)\psi_R(x)$ and the Majorana term $\psi_R^T(x)C\psi_R(x)$ in order to explain a small mass for the neutrino.
The mass protection mechanism proposed in this work allows a $\nu_R$ in the theory without the generation of a Dirac mass. Also, a scenario of almost-degenerate neutrinos (the relevant one in cosmology) could be explained more easily once the hierarchy of the Dirac mass matrix has been eliminated. Besides that, if no Majorana terms are allowed in the model, the neutrino oscillations could be due to effects of order $v/\Lambda$, compatible with lepton number conservation in the framework of this mass protection mechanism.
Hyponatraemia in patients with crush syndrome during the Wenchuan earthquake
Background Although sodium disturbances are common in hospitalised patients, no study has specifically investigated the epidemiology of hyponatraemia in patients with crush syndrome. Objectives To describe the incidence of hyponatraemia and assess its effect on outcome in patients with crush syndrome during the Wenchuan earthquake. Methods A retrospective study was conducted in 17 reference hospitals during the Wenchuan earthquake. We excluded patients younger than 15 years and those with missing sodium values within 3 days after being rescued from the ruins. Results Hyponatraemia (serum sodium concentration <135 mmol/l) was seen in 91/180 (50.6%) patients on admission. Compared with patients with normonatraemia, those with hyponatraemia were younger, had more severe traumatic injury and renal failure, underwent more fasciotomies, received more blood transfusion and renal replacement therapy. In the multivariable-adjusted model, the number of extremity injuries (OR=1.59, 95% CI 1.08 to 2.33) and serum creatinine (OR=1.30, 95% CI 1.07 to 1.59) were independently associated with the occurrence of hyponatraemia. Covariate adjusted multiple logistic regression analysis showed an independent mortality risk rising with hyponatraemia (OR=5.74, 95% CI 1.18 to 28.00). Conclusions Hyponatraemia was common in the patients with crush syndrome during the Wenchuan earthquake and associated with poor prognosis. Water, commercial drinks and hypotonic intravenous fluids should be supplied carefully to patients with crush syndrome.
INTRODUCTION
Hyponatraemia is the most common electrolyte disorder in adult patients admitted to the intensive care unit (ICU). The prevalence of hyponatraemia on ICU admission is between 13.7% and 17.7%. 1 2 The risk of death during hospitalisation is increased in patients admitted to hospital with hyponatraemia compared with normonatraemia. 3 Hyponatraemia present on admission to the ICU is an independent risk factor for poor prognosis. 1 Earthquake disasters result in a vast number of instant deaths owing to injuries to vital organs and are also associated with clusters of heavily wounded people, in whom crush injuries and prolonged compression of limbs are commonly found. Crush syndrome is the systemic manifestation of muscle cell damage resulting from crushing and affects many organs, resulting in hypovolaemic shock, acute kidney injury (AKI), arrhythmias, acute respiratory distress syndrome, sepsis and electrolyte disturbances. 4 Crush injury-related electrolyte abnormalities that occur as a result of the release of cellular components include hyperkalaemia, hyperphosphataemia, high anion-gap metabolic acidosis and hypermagnesaemia. 5 Although hyponatraemia is one of the most common electrolyte disorders in hospitalised patients, it is seldom reported in patients with crush syndrome.
On 12 May 2008, western Sichuan in China was devastated by a deadly earthquake measuring 8.0 on the Richter scale, named the Wenchuan earthquake. The earthquake left 69 227 people dead, 17 923 missing and 96 544 injured. The disaster also resulted in hundreds of patients with crush syndrome. 6 The primary aim of this study is to describe the incidence of hyponatraemia and assess its effect on outcome in patients with crush syndrome. The study protocol was approved by the ethics committee of the Chinese PLA General Hospital.
MATERIALS AND METHODS

Data collection
The Wenchuan earthquake-related AKI study group designed a questionnaire in accordance with the recommendations of the International Society of Nephrology's Renal Disaster Relief Task Force. The questionnaire was sent to 17 hospitals in which casualties were accepted and dialysis was available. Among a total of 286 feedback questionnaires, 242 from 10 centres met the criteria for crush syndrome. Double registrations were found in 14 patients; to avoid repetition, duplicate records were combined as one. 6

Definitions

Crush syndrome was defined as crush injury with one of the following characteristics: urine output <400 ml/day, blood urea nitrogen >14.3 mmol/l, serum creatinine >176.8 μmol/l, serum uric acid >475.8 μmol/l, serum potassium >6 mmol/l, phosphorus >2.6 mmol/l or calcium <2 mmol/l. 7 The initial serum sodium concentration in this analysis was adjusted according to the concomitantly measured serum glucose level: if the glucose level was >5.55 mmol/l, the serum sodium level was adjusted upward by 2 mmol/l for each 5.55 mmol/l increment in serum glucose. 8 Normal serum sodium was defined as 135-145 mmol/l.
Hyponatraemia was defined as a serum sodium concentration <135 mmol/l, and hypernatraemia as a serum sodium concentration >145 mmol/l. Based on the diagnoses in the hospitals, injury severity was measured by the Injury Severity Score (ISS). 9 10
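A hedged sketch of the glucose-adjusted sodium classification just described; the function and variable names are ours, and reading "2 mmol/l per 5.55 mmol/l increment" as increments above the 5.55 mmol/l threshold is our assumption:

```python
# Illustrative only; thresholds follow the definitions above.
def adjusted_sodium(na_mmol_l: float, glucose_mmol_l: float) -> float:
    """Adjust sodium upward by 2 mmol/l per 5.55 mmol/l of glucose above 5.55.
    Assumes increments are counted above the 5.55 mmol/l threshold."""
    if glucose_mmol_l > 5.55:
        na_mmol_l += 2.0 * (glucose_mmol_l - 5.55) / 5.55
    return na_mmol_l

def natraemia_status(na_mmol_l: float) -> str:
    """Classify using the study's cut-offs: normal = 135-145 mmol/l."""
    if na_mmol_l < 135:
        return "hyponatraemia"
    if na_mmol_l > 145:
        return "hypernatraemia"
    return "normonatraemia"

print(natraemia_status(adjusted_sodium(132.0, 12.0)))  # -> hyponatraemia
```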
Statistical analysis
Descriptive statistics for all numerical variables, including means and SDs, together with the proportions of all categorical variables, were calculated. Measurement data were compared between groups by Student's t test or the Mann-Whitney test, according to whether or not they conformed to a normal distribution. Differences between group proportions were examined with a χ2 test. Multivariate logistic regression analysis was performed to assess possible predictors of hyponatraemia and mortality. Data were analysed with standard statistical software (SPSS V.13.0, Chicago, Illinois, USA).
RESULTS
A total of 228 patients with crush syndrome were admitted to 10 reference hospitals after the earthquake. Seventeen paediatric patients were excluded and 31 patients were not analysed owing to the absence of serum sodium values within 3 days after being rescued from the ruins. One hundred and eighty patients were included in this analysis. Of these 180 patients, 11 (6.1%) were hypernatraemic and 91 (50.6%) were hyponatraemic. After excluding the patients with hypernatraemia, the final analytical dataset comprised 169 patients with normonatraemia or hyponatraemia.
The characteristics of the study population at admission are summarised in table 1. The hyponatraemic patients were significantly younger than those with normonatraemia. The distribution of gender was similar in both groups. No significant differences were found in body temperature, heart rate, blood pressure and time spent under the ruins between the two groups, but a significantly lower urine output in the first 24 h was recorded in the hyponatraemic group.
For the laboratory data at admission, there were no significant differences in haemoglobin, white blood cell (WBC) count, platelet and serum albumin between the hyponatraemic and normonatraemic patients. Compared with normonatraemic patients, serum creatinine, blood urea nitrogen, potassium, phosphorus, uric acid and creatinine kinase were significantly higher, while serum calcium was significantly lower in hyponatraemic patients (table 1).
There were no significant differences in ISS, the incidence of chest and abdominal trauma between the two groups. Traumatic brain injury was seen in six patients in the hyponatraemic group, while none was found in the normonatraemic group (table 2). The incidence of medical complications, including traumatic shock, wound infection, sepsis, pneumonia, respiratory failure, acute respiratory distress syndrome and disseminated intravascular coagulation, did not differ significantly between the two groups. The hyponatraemic patients had a greater number of multiple extremity injuries and underwent more fasciotomies than the normonatraemic patients (table 2). The number of multiple extremity injuries showed positive correlations with serum creatinine (r=0.218, p=0.005) and potassium (r=0.362, p<0.001), while negative correlations were noted with urine output (r=−0.175, p=0.034), serum albumin (r=−0.243, p=0.003) and calcium (r=−0.187, p=0.031).
The quantity of fluids administered within the first 24 h of hospitalisation was recorded in 89 patients; the mean total volume was 3229±2328 ml (range 300-10 360 ml). Only 17/169 patients (10.1%) received fluid infusion >6000 ml/24 h. There was no significant difference in the volume of fluids administered between the hyponatraemic and normonatraemic patients. The hyponatraemic group received more blood transfusions, while no significant difference was found in plasma transfusions between the two groups. Renal replacement therapy was performed more often in patients with hyponatraemia than in those with a normal serum sodium level (table 3).
Multivariate logistic regression analysis adjusted by age including the variables of number of extremity injuries, serum creatinine, potassium, phosphorus and calcium indicated that number of extremity injuries (OR=1.59, 95% CI 1.08 to 2.33) and serum creatinine (OR=1.30, 95% CI 1.07 to 1.59) were independently associated with the occurrence of hyponatraemia.
DISCUSSION
In this study we describe for the first time the prevalence of hyponatraemia in patients with crush syndrome after an earthquake. It demonstrates that hyponatraemia is common in patients with crush syndrome and associated with poor prognosis. Although many reports have examined the prevalence, causes and outcomes of hyponatraemia, few have focused on hyponatraemia associated with crush syndrome. Oda et al retrospectively analysed eight patients with crush syndrome who were treated in the ICU of a university hospital; reduced serum sodium concentrations, ranging from 119 to 133 mmol/l, were present in six patients. 11 Dönmez et al reported on 20 paediatric patients with crush syndrome, with serum sodium levels of 135.4 and 133 mmol/l in children with one extremity and multiple extremity injuries, respectively, meaning that nearly half of the patients developed hyponatraemia. 12 Adams et al recently conducted a prospective, observational study showing that AKI was present in 32% of patients with hyponatraemia. 13 However, these three studies all included small numbers of patients. In our study, hyponatraemia was detected in 50.6% of 180 adult patients with crush syndrome on admission, which is higher than in unselected patients. In a prospective cohort study of 98 411 adults, hyponatraemia was seen in 14.5% of patients on initial measurement. 3 Another retrospective study including 151 486 adults in 77 ICUs showed that the frequency of hyponatraemia in critically ill patients was 17.7%. 1 Although the causes of hyponatraemia are varied, from a pathophysiological point of view, hypotonic hyponatraemia is the most common type, and it is commonly caused by non-osmotic release of vasopressin. 14 This is especially true among patients with crush syndrome. After being crushed and trapped by debris during the earthquake, the victims had severe pain and extreme fear, which stimulated the release of vasopressin. Prolonged compression caused muscle ischaemia, and reperfusion contributed additionally to the injury. The sarcolemma loses its functional integrity, creating intracellular oedema and third-space loss, resulting in intravascular volume depletion, 15 which promotes homoeostatic activation of the renin-angiotensin system, vasopressin and the sympathetic nervous system. 16 Westermann et al found that vasopressin was significantly increased in patients with multiple trauma. 17 In our series, the patients with hyponatraemia had more severe traumatic injury, and hyponatraemia was independently associated with the number of crushed extremities, which reflects greater third-space loss and the severity of the hypovolaemic condition. The decrease in urine output in the first 24 h also reflects the impaired ability of the kidney to excrete water, at least partly owing to increased vasopressin.
Although the time spent under the ruins did not differ between the two groups, we could not exclude the possibility that, after being rescued following prolonged periods in the rubble, the victims tended to drink large amounts of water to relieve thirst due to dehydration. It is possible for the muscle compartments of a 75 kg adult to lose up to 12 litres of fluid in the first 48 h. 18 Therefore, vigorous fluid replacement is imperative to prevent hypovolaemia and acute renal failure. 19 Unfortunately, during a large-scale disaster, provision of fluids is difficult to implement. Although for some victims of the Wenchuan earthquake fluid administration started before extrication from beneath the rubble, the fluid resuscitation was not as vigorous as recommended. 4 19 Only 10.1% of patients with AKI received a fluid infusion of >6000 ml within the first 24 h of their hospitalisation. The reasons for this include a shortage of medical supplies and a lack of experience in dealing with crush-related AKI.
The same patterns were also reported in the Kobe earthquake 20 and the Marmara earthquake. 21 In the Kobe earthquake, most of the victims with crush syndrome received only 2000-3000 ml/day of infused fluids during the initial 3 days, and the mean volume of administered fluids was 5109 ml/day in the Marmara earthquake. In this setting, the victims might drink more water or commercial drinks. However, even the commercial sports drinks are hypotonic, with a sodium concentration of only about 18 mmol/l. 22 23 As a result, victims are prone to develop hyponatraemia owing to a relative excess of hypotonic fluid in conjunction with an underlying condition that impairs the kidney's ability to excrete water.
Although a number of reviews have mentioned renal failure as an important contributor to impaired renal water excretion, few cohorts with AKI and hyponatraemia have been reported. Adams et al recently conducted a prospective, observational study showing that 32% of the patients with hyponatraemia had AKI, most of which were prerenal AKI. 13 A characteristic feature of crush syndrome-related AKI is the presence of a low fractional excretion of sodium (<1%), reflecting the primacy of preglomerular vasoconstriction and tubular occlusion rather than tubular necrosis. 16 24 Adams et al suggested that AKI and hyponatraemia should be regarded as two different manifestations of one underlying cause. 13 In our study, although we could not determine whether the patients at hospital admission presented with prerenal failure, we found that the serum creatinine level was another predictor of hyponatraemia. Hyponatraemia has been shown to be a powerful risk factor for both morbidity and mortality. Hyponatraemia present on admission to the ICU is an independent risk factor for poor prognosis. 1 Recently, a large single-centre study showed that both community and hospital-acquired hyponatraemia were associated with increased mortality even when hyponatraemia was mild. 3 It remains unclear whether the relationship between hyponatraemia and adverse outcomes is causative or only associative. Chawla et al reported that the nature of underlying illness rather than the severity of hyponatraemia best explained mortality associated with hyponatraemia. 25 In our study, patients with hyponatraemia had more severe traumatic injury and renal failure, underwent more fasciotomies, received more blood transfusion and renal replacement therapy. Although it is hard to reach a definite conclusion that a direct relationship exists between hyponatraemia and mortality, the OR was still highly significant after adjustment for comorbidity, emphasising the association between hyponatraemia and a poor clinical outcome.
The potential morbidity and mortality from hyponatraemia provide the rationale for trying to maintain normonatraemia in all patients. 25 To prevent renal failure 4 19 and hyponatraemia in patients with crush syndrome, early, aggressive volume repletion before evacuating the patients is crucial. Adams et al reported that isotonic fluid replacement could correct hyponatraemia without overcorrection and led to a good outcome. 13 Because patients with crush syndrome have a greater tendency to develop hyponatraemia, these data emphasise that isotonic saline is the preferred repletion fluid. Water, commercial drinks and hypotonic intravenous fluids should be given cautiously to patients with crush injury.
This study has several limitations. First, our data included a number of missing values owing to the chaotic disaster conditions. This study did not obtain data on prehospital oral or intravenous fluids, which might be an important reason for the development of hyponatraemia in patients with crush syndrome during the earthquake. Lack of information about urine osmolality and sodium excretion rate made it difficult to assess the fluid balance and its cause. However, owing to delay in extricating people from the ruins and the long distance of transportation as well as the severe rhabdomyolysis, most patients in this study had already developed acute renal failure on admission, and thus assessment of the above values might not be as important as in the general population.
CONCLUSIONS
Hyponatraemia was common in the patients with crush syndrome during the Wenchuan earthquake and associated with poor prognosis. Water, commercial drinks and hypotonic intravenous fluids should be supplied carefully to patients with crush syndrome.
Slow and steady saves the race: molecular and morphological analysis of three new cryptic species of Iberus land snails from the Iberian Peninsula
The Iberian Peninsula constitutes a diversity hotspot with a high number of endemisms, of which the land snail genus Iberus is likely the best example. Despite this, its species diversity is still debated, as the genus holds several cryptic species. In the present paper, we use molecular evidence (mitochondrial DNA cytochrome oxidase subunit I) to clarify the position of I. ortizi and of three new cryptic species that are described herein: I. giennensis, I. axarciensis and I. antikarianus spp. nov. For this, we sampled 281 sampling points to produce a comprehensive geographic mapping of these species. Moreover, we carried out a comprehensive morphometric analysis based on 3205 shells. Our findings show that the three described species overlap morphologically in shell form, their morphologies being very similar to other close species with nearby distributions (I. ortizi, I. angustatus and I. marmoratus loxanus). Still, all these species are well defined by genetic distances and display allopatric distributions, suggesting that they evolved by allopatric speciation as a consequence of biogeographic isolation. Hence, our findings provide insights into the evolution of land snails in southeastern Spain, with implications for their conservation, given that our exhaustive sampling shows that the three species described here have very limited distribution ranges, especially I. antikarianus sp. nov. Our study, moreover, applies an integrated approach to the study of the evolution of land snails, including sampling of the complete geographic area occupied by the genus, genetic analysis to delimit the actual species ranges, and morphometric analyses to understand the phenotypic differentiation and adaptations of the three new species.
Introduction
The Iberian Peninsula constitutes a diversity hotspot with many endemic species, largely fostered by its geographical location and turbulent geological history. The Iberian Peninsula serves as a bridge between African and Eurasian faunas (Husemann et al., 2014) and was a glacial refuge favouring subsequent speciation processes (Abellán & Svenning, 2014). Moreover, the mountainous topography of the Iberian Peninsula, acting like biogeographic islands and barriers, has also contributed to the emergence and development of endemic species (López-Villalta, 2011).
Among the endemic species of the Iberian Peninsula, the land snails stand out for their high number of endemics (Cadevall & Orozco, 2016), with the genus Iberus Monfort, 1810 being the most representative Iberian endemic land snail. However, the species diversity of the genus Iberus is still debated. The type species for the genus is Helix gualtierana Linnaeus, 1758, currently designated as Iberus gualtieranus (Linnaeus, 1758). The taxonomy of the genus was revised by García San Nicolás (1957) based on morphological characters. Nevertheless, subsequent studies based on molecular techniques (sequencing of the cytochrome oxidase subunit I [COI] and ribosomal RNA 16S [16S rRNA]) showed that morphology is of limited use to delimit species (Elejalde et al., 2005, 2008a, b). The genus Iberus includes cryptic species such as I. ortizi García San Nicolás, 1957 or I. marmoratus (A. Férussac, 1821), genetically distant enough to be considered well-differentiated species (Elejalde et al., 2008a), but with shells so similar that non-experts have difficulty distinguishing between them (Ruiz Ruiz et al., 2006). At the same time, supposed species such as I. rositai de Fez, 1950 and I. cobosi Ibáñez et Alonso, 1978, whose shells (pale brown, flattened, keeled and very ornamented) are well differentiated from the typical shells of the genus (more or less globose, frequently banded and with brown tones), turned out to be morphs of the same species (I. marmoratus; Elejalde et al., 2008a). This leads to a paradoxical situation: the genus Iberus displays a high conchological diversity, with contrasting flattened-keeled and globose-smooth shells (Cadevall & Orozco, 2016; Liétor, 2014; Ruiz Ruiz et al., 2006), but shell morphology alone does not help to differentiate species (Elejalde et al., 2005, 2008a, b). Similar findings have been reported for other land snails, in which among-population morphological data do not match genetic distances (e.g. Haase & Bisenberger, 2003; Pfenninger & Magnin, 2001; Teshima et al., 2003). Similarly, the traditional use of genitalia morphology has also been proven ineffective for species delimitation in snails (Nantarat et al., 2019; Wilke et al., 2002). Therefore, the classification of snails exclusively based on morphological features can be misleading, and molecular techniques are necessary to properly delimit species (Pfenninger et al., 2006).
Hence, it is unclear how many species are included within the genus Iberus, as well as their distributions and type morphologies. For this reason, to clarify the species diversity of this genus, we have performed a long-term study embracing the complete distribution area of the genus Iberus with the aim of unequivocally clarifying the diversity of species involved. Given that the genus includes cryptic species, and foreseeing the occurrence of new ones, we have tried to sample all possible populations (more than 1100 sampling points to date, of which 281 are involved in the current study). Systematic genetic sequencing, together with comprehensive geographic mapping, allows us to delimit the actual species of the genus Iberus.
We consider that the branch of the evolutionary tree of the genus Iberus including the species I. ortizi and related species is resolved in the present work. Here, we describe three new cryptic species of the genus Iberus: I. giennensis, I. axarciensis and I. antikarianus spp. nov., which, together with I. ortizi, constitute a consistent clade. One of the new species we describe here was already reported as a new species by Elejalde et al. (2008a) based on the genetic analysis of four specimens. Here, we confirm its phylogenetic position by adding new specimens and describe the species, delimiting its distribution. Elejalde et al. (2008a) detected a second new species in the phylogenetic tree, supposed to be I. loxanus (A. Schmidt, 1853) according to its shell morphology. However, this alleged I. loxanus did not cluster with the rest of I. loxanus and showed a divergence of 6% and 12% (16S and COI, respectively) from I. ortizi, the closest species. Given that this clade was originally defined by only one individual, we carried out intensive field prospections to find new populations with similar haplotypes. Finally, a population matching the haplotype was found, which led us to describe the new species. Lastly, the third species we describe here was fortuitously found during the field sampling routine, when we sequenced individuals of a population suspected to be a new species close to I. ortizi.
To clarify the taxonomic identity of these three new species, besides the classical conchological description, we present a comprehensive dataset of morphometric analyses of hundreds of shells sampled in a large number of locations. Morphometric analyses allow us to make comparisons with other species of the genus Iberus that inhabit nearby geographic areas and present similar shells. Thus, the phylogenetic clarification of a large clade of the evolutionary tree of Iberus is addressed from an integrated perspective, using an intensive biogeographic characterisation, a phylogenetic study based on a significant number of samples, and a morphometric study of a large number of shells that covers the broad phenotypic spectrum of variability of these sister species.
Field sampling
For two decades, we have performed systematic field sampling consisting of more than 1100 sampling points throughout Spain to determine the distribution of the species within the genus Iberus. Sampling points were determined according to (i) previous citations in the specialised literature, (ii) the presence of karstic habitats or sedimentary lithology that provide adequate levels of calcium to form the shells (Fournié & Chétail, 1984) and (iii) the prior knowledge and field experience of the researchers. For each sampling point, we recorded the geographic coordinates and representative photographs of the habitat. For each Iberus species, a set of shells was collected, cleaned, photographed (with a Sony RX100 camera) in lateral, ventral and dorsal positions, and measured to obtain a number of morphometric variables. Of these sampling points, 82 correspond to the three species described here (I. giennensis sp. nov.: 48; I. axarciensis sp. nov.: 28; I. antikarianus sp. nov.: 6), plus 199 for the nearest species.
Morphometric measurements
Shell morphometric parameters were obtained following López-Alcántara et al. (1985). We measured with a digital calliper (accuracy 0.01 mm): the largest and the smallest diameter (Ø) of the shell, the shell height, and the major and minor external Ø of the peristome. From these data, we estimated the shell and peristome areas by considering that both the shell and the peristome may resemble an ellipse, applying the formula area = π × [(major Ø)/2] × [(minor Ø)/2]. On the basis of these measurements, we estimated the following set of morphological ratios: shell height/minor Ø of the shell (H/W ratio, an indicator of shell globosity, more globose shells having a higher ratio); major Ø of the shell/minor Ø of the shell (an indicator of shell circularity, such that the closer this ratio is to unity, the greater the degree of circularity of the shell); major external Ø of the peristome/minor external Ø of the peristome (an indicator of peristome circularity); and the percentage of the total surface of the shell occupied by the peristome (calculated as (peristome area × 100)/shell area). All measurements were carried out by the same researcher (JL). The repeatability of all measurements (estimated according to Senar, 1999) was always > 0.99. We checked the morphometric measurements for outliers using Cleveland plots (following Zuur et al., 2010). No outliers were detected, but we found eight individuals (of 1374) with extreme values for some variable, representing only 0.6% of measured shells. These extreme values, however, were not necessarily outliers, but only extreme values within the distribution of the data (Quinn & Keough, 2002). We also found individuals with odd shells (less than 0.5% of shells; Supplementary Fig. S1), which were not included in the morphometric analysis.
Statistical comparisons between morphometric measurements were carried out with ANOVA tests when the variables were homoscedastic and normally distributed, and otherwise with the Kruskal-Wallis test. In addition, a principal components analysis was carried out to determine the overlap between the described species in morphospace.
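As a hedged illustration of these morphometric computations and of the principal components analysis (in Python rather than the authors' tools; the input file and column names are hypothetical placeholders):

```python
# Illustrative sketch, not the authors' code: elliptical areas, shape ratios,
# and a PCA on the resulting variables.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

shells = pd.read_csv("iberus_shells.csv")  # hypothetical measurement table

# Elliptical areas: area = pi * (major/2) * (minor/2)
shells["shell_area"] = np.pi * (shells.shell_major / 2) * (shells.shell_minor / 2)
shells["peri_area"] = np.pi * (shells.peri_major / 2) * (shells.peri_minor / 2)

# Shape ratios used in the study
shells["globosity"] = shells.height / shells.shell_minor          # H/W ratio
shells["circularity"] = shells.shell_major / shells.shell_minor   # ~1 = circular
shells["peri_pct"] = 100 * shells.peri_area / shells.shell_area   # peristome %

# PCA on standardized ratios to visualise overlap in morphospace
feats = ["globosity", "circularity", "peri_pct"]
std = (shells[feats] - shells[feats].mean()) / shells[feats].std()
pca = PCA(n_components=2)
scores = pca.fit_transform(std)
print(pca.explained_variance_ratio_)
```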
Phylogenetic analysis
Among all the specimens collected alive in the field, those from key locations were selected for genetic analysis. We consider as key locations for each species those that, being as far apart as possible within the distribution of the species, make it possible to cover the entire distribution area in a representative manner, trying to embrace the maximal intraspecific genetic diversity. Once in the laboratory, the specimens were sacrificed by drowning and a tissue sample was extracted for molecular analyses. For this study, samples belonging to 14 individuals were stored in absolute ethanol and maintained at −20 °C. Genomic DNA was extracted using the QIAGEN DNeasy Blood & Tissue Kit (Qiagen, Hilden, Germany) according to the manufacturer's protocol. A fragment (~670 bp) of the mitochondrial cytochrome c oxidase subunit I (COI, the standard barcoding fragment, with primers LCO and HCO; Folmer et al., 1994) was amplified by polymerase chain reaction (PCR). Following standard protocols, negative controls were used in all PCRs to detect possible contamination. The obtained sequences (GenBank accession numbers: OR800623-36) were edited with Sequencher v5.4.6 (Gene Codes Corporation, Ann Arbor, MI, USA) and checked for potential contamination using GenBank's BLASTn search (Altschul et al., 1990). Sequences were aligned in Seaview v4.2.11 (Gouy et al., 2010) under ClustalW2 (Larkin et al., 2007) default settings. The final alignment comprised 634 base pairs (bp) from 159 individuals including the outgroups (Otala lactea, Helicella sp. and Eobania vermiculata) (Supplementary Table S1). Uncorrected p-distances with partial deletion were computed in MEGA (Kumar et al., 2018). Phylogenetic relations of Iberus sequences were analysed by Bayesian inference (BI) using MrBayes v3.2.6 (Ronquist & Huelsenbeck, 2003). The best model of sequence evolution (TPM2uf+I+G) was selected according to the AIC using jModelTest v2.1.6 (Darriba et al., 2012). Two independent runs (each with four Markov chains for 5 × 10^7 generations) were performed. Trees and parameters were sampled every 1000 generations. Maximum likelihood (ML) searches were conducted in RAxML v7.0.4 (Silvestro & Michalak, 2012) using default settings, and support was assessed using 1000 bootstrap replicates. The majority-rule consensus tree was estimated by combining results from the duplicated analyses, after discarding 25% of the total samples as burn-in. All phylogenetic analyses were performed on the CIPRES platform (Miller et al., 2010). The consensus tree was visualised and rooted using FigTree v1.4.4 (Rambaut, 2018), and later prepared as a graphic with the software Inkscape v1.0.1 (http://www.inkscape.org).
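As a hedged sketch of the uncorrected p-distance computation performed in MEGA (pure Python, with pairwise deletion of gaps and ambiguous sites; the alignment file name is a hypothetical placeholder):

```python
# Illustrative p-distance computation from a FASTA alignment.
from itertools import combinations

def read_fasta(path):
    seqs, name = {}, None
    for line in open(path):
        line = line.strip()
        if line.startswith(">"):
            name = line[1:]
            seqs[name] = []
        elif name:
            seqs[name].append(line.upper())
    return {k: "".join(v) for k, v in seqs.items()}

def p_distance(a, b):
    """Proportion of differing sites, ignoring positions with gaps or Ns."""
    pairs = [(x, y) for x, y in zip(a, b) if x in "ACGT" and y in "ACGT"]
    return sum(x != y for x, y in pairs) / len(pairs) if pairs else float("nan")

aln = read_fasta("iberus_coi_aligned.fasta")
for n1, n2 in combinations(aln, 2):
    print(n1, n2, round(p_distance(aln[n1], aln[n2]), 4))
```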
Phylogenetic position and genetic divergence
The results of the Bayesian inference and ML analyses recovered the same three strongly supported (BI BPP = 1.00, ML BS = 100) monophyletic clades for the 14 new Iberus samples, but the positions of the clades differed between the two analyses. Samples B1, B5, B6, B7 and B8 (named Clade 1) grouped with 4 sequences named Iberus sp. in Elejalde et al. (2008a) (Fig. 1). Sample U1 matched the sample I. loxanus01 in Elejalde et al. (2008a; GenBank accession EF440255) (named Clade 2). Lastly, samples U3, U4, U5, U7, AM1, AM3, AJ1 and AJ2 formed the third clade (named Clade 3), a new clade formed only by our samples, with no close matches from GenBank. The BI recovered all clades as sister to Iberus ortizi, again strongly supported. Clades 1 and 2 are sister clades with a BPP of 0.95, while Clade 3 is sister to the former with weaker node support (BPP = 0.84). The ML analyses recovered Clade 1 and Clade 2 as sister clades (BS = 60), these being sister to I. ortizi, although weakly supported (BS = 27). This group (Clade 1 + Clade 2 + I. ortizi) is sister to Clade 3 (BS = 86) (ML analyses are only reported and not shown). Future sequencing of additional markers might slightly change the phylogenetic positions of the three lineages described here.
Etymology
Twenty-seven of the 35 localities where I. giennensis sp. nov. was recorded belong to the southern area of Jaen Province, most of them within the Sierra Sur mountain range (Fig. 2). It therefore seems appropriate to derive the name of this new species from the Latin name of the province that it mainly inhabits.
Holotypes and paratypes
Figure 3 shows photographs of the holotype and paratype shells assigned to I. giennensis sp. nov. Morphological measurements of the holotype and paratype shells of I. giennensis sp. nov. are available in Table 1. The type locality for I. giennensis sp. nov. consists of calcareous slopes surrounding the Valdearazo River Canyon, Sierra Sur, Valdepeñas de Jaen (Jaen Province, Spain), with the following coordinates: 37° 36′ 57′′ N, 3° 41′ 35′′ W. Table S3 in the Supplementary Material shows the average morphometric values for this species.
Type shell description
Iberus giennensis sp. nov. has a globose, non-umbilicated shell with 4-5 whorls of regular growth. The last whorl is convex, slightly compressed and more dilated than the others. The suture is simple and visible in all whorls. The protoshell has 1-1.5 turns with a smooth surface and a uniform light brown colour. The shell surface is irregularly striated, giving a reticulum (except in the protoshell), with prominent radial ribs distributed in a non-regular pattern between less pronounced transverse striations. The shell aperture is large, oval-semilunar and wider than high, with a fine, non-reflected peristome (somewhat sharp in the palatal wall). The peristome shows a slight thickening in the area of the columellar wall, close to the umbilicus. There is no callus either on the parietal edge or in the umbilicus area. The thickening and whitish tone typical of the umbilicus area of other species of the genus Iberus are absent in I. giennensis sp. nov. Sometimes, the umbilicus area exhibits a slight depression.
The colour of the shell in the first three whorls is light brown and off-white (bone colour) in the rest. The body whorl of the shell is longitudinally crossed by five dark brown bands, of which the top three are frequently discontinuous. A minor percentage of shells may have continuous upper bands. The two lower bands are wider, continuous or discontinuous, the upper one being between two and four times wider than the one at the bottom. The area over the two principal bands of the body whorl may exhibit a slightly spotted pattern of white/light cream tones that turns dense in some cases. The colour of the lip is off-white, although in some specimens it is pink, or even intense fuchsia.
Three predominant morphotypes are distinguished in I. giennensis sp. nov. based on their band patterns (Fig. 5). (i) Morphotype 1: two continuous dark brown lateral bands in the body whorl, the upper one between two and four times wider than the lower one (only in very few cases are both bands similar in width). (ii) Morphotype 2: the lateral bands in the body whorl turn intermittent or diffuse, sometimes with lighter or even pale tones. (iii) Morphotype 3 (the rarest one): the same as morphotype 1 but with bands that continue in the upper whorls until reaching the protoconch. Morphotypes 1, 2 and 3 represent 60%, 36% and 4% of the shells sampled, respectively. Further research will determine which pedoclimatic and ecological factors influence the relative abundance of each of these morphotypes, to establish whether they might be considered ecotypes.
Habitat and distribution
The ecological niche of I. giennensis sp. nov. consists of rock formations on a calcareous-based lithology between 574 and 1443 m in altitude in southern Jaen and northern Granada Provinces (south Spain; Figs. 2 and 6). Although it is most common to find I. giennensis sp. nov. inside the cracks and cavities and under the stones of limestone pavements and rocky ridges, it also inhabits natural or planted Mediterranean scrublands and forests, where it is found under the leaf litter layer, even pine needles.
Conservation status
Iberus giennensis sp. nov. seems to present healthy populations, based on the medium-high density of specimens found in the large number of locations sampled. In fact, I. giennensis sp. nov. has a wide potential distribution area, estimated at approximately 2000 km², as it occupies calcareous mountain massifs that provide a continuity of potential habitats. However, I. giennensis sp. nov. presents a singular population in the Sierra Arana (Granada Province), separated from the main population nucleus to the north by a zone occupied by I. angustatus (Rossmässler, 1854). This population is probably isolated and deserves additional conservation effort.
Etymology
The name of I. axarciensis sp. nov. refers to the Axarquía, a region of the province of Malaga (South Spain), where most of the localities of I. axarciensis sp. nov. are located (Fig. 2).
Holotypes and paratypes
Figure 7 shows photographs of the holotype and paratype shells assigned to I. axarciensis sp. nov. Morphological measurements of the holotype and paratype shells of I. axarciensis sp. nov. are available in Table 2. The type locality for I. axarciensis sp. nov. is assigned to the surroundings of Alfarnate, Malaga Province (Spain), with the following coordinates: 37° 00′ 15′′ N, 4° 16′ 31′′ W.
Type shell description
Figure S3 in the Supplementary Material shows a representative series of the conchological variability in I. axarciensis sp. nov. Live specimens of I. axarciensis sp. nov. are represented in Fig. 8. Table S4 in the Supplementary Material shows the average morphometric values for I. axarciensis sp. nov. Iberus axarciensis sp. nov. has a globose, non-umbilicated shell with 4-5 whorls of regular growth, the last of which is convex and slightly compressed, being more dilated than the others. Shells have simple and visible sutures in all whorls. The protoshell shows 1-2 smooth whorls of uniform light brown colour. In contrast, a fine and homogeneous transverse striation can be seen in the rest of the whorls. As a result of the radial striation mixing with the longitudinal one, a fine and regular reticulation appears. The shell aperture is large, oval-semilunar and wider than high, with a fine, variable peristome that is sometimes sharp but, in some localities, becomes slightly expanded. The peristome shows a slight thickening in the columellar wall, close to the umbilicus. There is no callus either on the parietal edge or in the umbilicus area. The thickening and whitish tones typical of the umbilicus area of other species of the genus Iberus are absent in I. axarciensis sp. nov. Sometimes, the umbilicus area exhibits a slight depression.
The colour of the shell in the first three whorls is light or pale brown, whilst the rest is off-white. Nevertheless, some populations show a uniform off-white colour in all whorls. The shell body whorl is crossed by five dark brown bands, the two upper ones usually discontinuous. Nevertheless, a minor percentage of shells may have continuous upper bands. Regarding the three lower bands, the two at the bottom are usually wide, whilst the upper one may be very fine in some cases. All three of these bands can be continuous or discontinuous, even diffuse at times. Considering the two lower main bands, the top one may be between 0.5 and 3 times wider than the one at the bottom. The area over the three principal bands of the body whorl may exhibit a slightly spotted pattern of white/light cream tones that turns dense in some cases. The lips of all the specimens sampled were off-white (never light or dark pink).

Fig. 6 Some habitats of I. giennensis sp. nov. 1. Sierra Pelada, Íllora, Granada; 2. Hoya del Salobral, Noalejo, Jaen; 3. Río Susana, Valdepeñas de Jaen; 4. Sierra del Trigo, Noalejo, Jaen; 5. Cogollos de la Vega, Granada; 6. Sierra de Ahillos, Alcaudete, Jaen; 7. Arroyo del Rigüelo, Fuensanta de Martos, Jaen; 8. Barranco de los Correlones, Fuensanta de Martos, Jaen; 9. Cerro Santa Merced, Montillana, Granada; 10. Cerro del Hoyo, Valdepeñas de Jaen; 11. Puerto de Navaleón, Noalejo, Jaen; 12. Cañada de las Hazadillas, Parque Periurbano Monte la Sierra, Jaen
Four morphotypes can be assigned to I. axarciensis sp. nov. (Fig. 9), of which the first two are clearly predominant. (i) Morphotype 1: flattened specimens with two continuous dark brown lateral bands, the upper one between two and four times wider than the lower one, although shells of some populations may show fine bands of similar width. (ii) Morphotype 2: the same as morphotype 1 but with the lateral bands turning intermittent or diffuse, sometimes with lighter or even pale tones. (iii) Morphotype 3: globose specimens with two continuous dark brown lateral bands (the upper one, at most, twice as wide as the bottom one), although on some occasions both bands may have the same width. (iv) Morphotype 4: the same as morphotype 3 but with the lateral bands turning intermittent or diffuse, sometimes with lighter or even pale tones. Morphometric parameters showed significant differences between the flattened and globose morphotypes (Table 3). Flattened morphotypes are larger and less tall, and have a less circular shell and peristome (ratios between major and minor diameters farther from 1), than the globose morphotypes. Thus, the flattened morphotypes turned out to be significantly less conical. Although the peristome of the flattened morphotype is larger, there are no significant differences between the two morphotypes in the proportion of the total shell surface occupied by the peristome. Of the whole set of shells of I. axarciensis sp. nov. examined, 30% correspond to the globose morphotype and the remaining 70% to the flattened one. Specimens with continuous bands accounted for 99% of the globose morphotype but decreased to 57.8% in the flattened morphotype.
Habitat and distribution
The ecological niche of I. axarciensis sp. nov. consists of karstic limestone areas between 705 and 1318 m in altitude in the northeast of Malaga and the western end of Granada Provinces (south Spain; Figs. 2 and 10). Although it is most common to find I. axarciensis sp. nov. inside the cracks and cavities and under the stones of limestone pavements and rocky ridges, it also inhabits natural Mediterranean scrublands, even marginal and degraded areas near crops and ditches.
Etymology
The name of I. antikarianus sp. nov. refers to the town of Antequera (Malaga Province), whose name during Roman times was "Antikaria". Only three localities for I. antikarianus sp. nov. have been found (Fig. 2), one of them, at Peña de los Enamorados (Antequera), being the most relevant in terms of abundance and diversity of specimens.
Type shell description
Figure S4 in the Supplementary Material shows a representative series of the conchological variability in I. antikarianus sp. nov. Live specimens of this species are represented in Fig. 8. Table S5 in the Supplementary Material shows the average morphometric values for this species.
No remarkable conchological differences can be established between I. axarciensis and I. antikarianus spp. nov. Shape, structure and ornamentation are considerably similar in both species, although differences may be found at the population level. Like I. axarciensis sp. nov., I. antikarianus sp. nov. has a globose, non-umbilicated shell with 4-5 whorls of regular growth, the last of which is convex and slightly compressed, being more dilated than the others. Shells have simple and visible sutures in all whorls. The protoshell shows 1-2 smooth whorls of uniform light brown colour. In contrast, a fine and homogeneous transverse striation can be seen in the rest of the whorls. As a result of the radial striation mixing with the longitudinal one, a fine and regular reticulation appears. The shell aperture is large, oval-semilunar and wider than high, with a fine, variable peristome that is sometimes sharp but, in some localities, becomes slightly expanded. The peristome shows a slight thickening in the columellar wall, close to the umbilicus. There is no callus either on the parietal edge or in the umbilicus area. The thickening and whitish tones typical of the umbilicus area of other species of the genus Iberus are absent in I. antikarianus sp. nov. Sometimes, the umbilicus area exhibits a slight depression. The colour of the shell in the first three whorls is light or pale brown, whilst the rest is off-white. Nevertheless, some populations show a uniform off-white colour in all whorls. The shell body whorl is crossed by five dark brown bands, the two upper ones usually discontinuous. Nevertheless, a minor percentage of shells may have continuous upper bands. Regarding the three lower bands, the two at the bottom are usually wide, whilst the upper one may be very fine in some cases. All three of these bands can be continuous or discontinuous, even diffuse at times. Considering the two lower main bands, the top one may be between 0.5 and 3 times wider than the one at the bottom. The area over the three principal bands of the body whorl may exhibit a slightly spotted pattern of white/light cream tones that turns dense in some cases. The lips of all the specimens sampled were off-white (never light or dark pink).
In I. antikarianus sp. nov., two morphotypes can be distinguished based on the pattern of bands. Morphotype 1: specimens with two continuous dark brown lateral bands, the upper one between two and four times wider than the lower one, although shells of some populations may show fine bands of similar width. Morphotype 2: the same as morphotype 1 but with the lateral bands turning intermittent or diffuse, sometimes with lighter or even pale tones.
Habitat and distribution
Iberus antikarianus sp. nov. has been found only in karstic areas between 481 and 905 m in altitude in a restricted area of a few mountains in Malaga Province (Figs. 2 and 10). Again, live snails were always associated with limestone (in cracks and cavities, under stones). The potential distribution area of this species should be considered small and fragmented due to the natural (rivers) and human barriers (orchard plantations and roads) surrounding it.
Conservation status
Unlike the previous two taxa, I. antikarianus sp. nov. presents a very small potential distribution, barely 50 km², considering that the only three populations found inhabit isolated mountains separated from each other by cereal crops and olive groves as well as by a highway. We therefore consider that I. antikarianus sp. nov. should be one of the species of the genus Iberus that is a candidate for an eventual conservation category.
Differences between shells of I. giennensis, I. axarciensis and I. antikarianus spp. nov.
Shells of I. giennensis sp. nov. and those of the pool constituted by I. axarciensis and I. antikarianus spp. nov. are certainly similar. Still, some distinguishing features may be listed: (i) the peristome edge is slightly less projected (on average) in I. giennensis sp. nov. than in the other two species, which include some localities where the peristome edge tends to expand; (ii) I. giennensis sp. nov. typically presents two main lateral bands in the body whorl, while the other two species typically present three bands in most of the specimens; (iii) shell mottling is more intense in I. giennensis sp. nov., whereas it is sparser in the other two species, being absent in the individuals of some populations; (iv) a pinkish, even fuchsia, lip is frequent in live and fresh specimens of I. giennensis sp. nov., but this feature does not occur in I. axarciensis and I. antikarianus spp. nov.; (v) the shell surface of I. giennensis sp. nov. shows non-periodic radial cords resulting in an irregular radial striation, whereas the shells of I. axarciensis and I. antikarianus spp. nov. show a fine and homogeneous radial striation mixed with the longitudinal one, producing a fine regular reticulation, much finer than that found in I. giennensis sp. nov.
In addition, the shells of I. axarciensis sp. nov. exceeded on average in size (in width, height and total area) those of I. antikarianus sp. nov., which, in turn, were on average larger than those of I. giennensis sp. nov. (Table 5). However, the shape of the shells differed according to the species. The highest average H/W ratio was measured in I. giennensis sp. nov., being significantly higher than in I. axarciensis and I. antikarianus spp. nov. (which showed statistically similar average H/W ratios; Table 5). This indicates that shells of I. giennensis sp. nov. tended to be less flattened than shells of the other two species. The highest average ratio between the maximal and the minimal diameters (circularity) was found in I. antikarianus sp. nov., being significantly higher than in I. axarciensis sp. nov., which, in turn, had a mean circularity ratio significantly higher than I. giennensis sp. nov. (Table 5). Considering that the closer this ratio is to 1, the more circular the shell, I. giennensis sp. nov. had, on average, the most circular shell, while shells of I. antikarianus sp. nov. were the most oval-shaped, with those of I. axarciensis sp. nov. occupying an intermediate position.
The form of the peristome also differed, on average, between the three species. The peristome was, on average, largest in I. axarciensis sp. nov., followed by I. antikarianus sp. nov., and smallest in I. giennensis sp. nov. (Table 5). This coincides with the differences in shell size. However, the mean peristome area relative to the total shell area was highest in I. antikarianus sp. nov., showing that this species possesses a peristome proportionally larger than that of the other two species. In addition, the peristome of I. antikarianus sp. nov. was more oval-shaped than those of I. axarciensis and I. giennensis spp. nov., which presented more rounded peristomes.
A PCA provided a first factor (PC1) accounting for 55.94% of the variance in shell morphology, which included shell area and the percentage of the shell surface occupied by the peristome. PC1 may be interpreted as a gradient of shell size and relative peristome size. The second factor (PC2) explained 32.29% of the variance and included the ratio between shell height and major shell diameter, that is, the index of globosity. On the one hand, PC1 grouped populations of I. axarciensis and I. antikarianus spp. nov. as the largest, with I. giennensis sp. nov. and I. ortizi having intermediate and small shells, respectively (Fig. 12). On the other hand, according to PC2, I. axarciensis, I. antikarianus spp. nov. and I. ortizi presented flatter shells than I. giennensis sp. nov., whose average shell was the most globose of the clade. However, the four species still showed considerable overlap in their morphology (Fig. 12). This variation did not show any apparent geographical pattern, except for I. axarciensis sp. nov., whose eastern populations were flatter (less globose) than the western ones (Fig. 13).
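A sketch of such an ordination, assuming standardised morphometric variables as input (the values below are hypothetical locality means, not the published data), could be run as follows.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows: sampling localities; columns: shell area (mm^2),
# % shell area occupied by the peristome, H/W globosity ratio.
X = np.array([
    [540.0, 31.2, 0.64],
    [612.5, 34.8, 0.55],
    [498.3, 29.9, 0.66],
    [655.1, 35.4, 0.53],
    [571.7, 33.0, 0.58],
])

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print("explained variance ratios:", pca.explained_variance_ratio_)
print("locality coordinates in PC1/PC2 morphospace:\n", scores)
```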
Geographically close Iberus taxa with similar shells
We statistically analysed the shell differences between the three new species described here and I. ortizi, I. angustatus and I. marmoratus loxanus, three taxa belonging to the genus Iberus that are both geographically and conchologically close (Fig. 14). The comparison shows statistical differences in average shell morphology between them. Shells of I. giennensis sp. nov. were, on average, wider, taller and more globose than those of I. ortizi, I. angustatus and I. marmoratus loxanus (Table 6). However, the shells of I. ortizi and I. angustatus were slightly more circular than those of I. giennensis sp. nov. (ratio between major and minor shell diameters closer to 1 in the first two), while I. marmoratus loxanus tended to be less globose than I. giennensis sp. nov. (Table 6). Regarding the peristome, that of I. giennensis sp. nov. had, on average, a greater relative surface area with respect to the total surface area of the shell than the peristomes of I. ortizi and I. angustatus, but no differences with I. marmoratus loxanus were found (Table 6). Moreover, I. giennensis sp. nov. showed the most circular peristome in comparison with the other three close species (Table 6).
The shells of I. axarciensis and I. antikarianus spp. nov. were on average wider and taller than those of I. ortizi, I. angustatus and I. marmoratus loxanus (Table 6). However, shell shape varied among species. Shells of I. axarciensis and I. antikarianus spp. nov. were less globose than those of I. ortizi but showed a significantly higher degree of globosity than shells of I. angustatus and I. marmoratus loxanus (Table 6). Regarding circularity, shells of I. axarciensis and I. antikarianus spp. nov. were on average less circular than those of I. ortizi and I. angustatus but showed a circularity similar to that of I. marmoratus loxanus (Table 6). Peristomes of I. axarciensis and I. antikarianus spp. nov. had a greater mean relative surface area with respect to the total surface area of the shell, and were more circular, than those of I. ortizi, I. angustatus and I. marmoratus loxanus (Table 6).
Two other shell features showed clear differences between the six species compared: the proportion of shells showing contrasting banding patterns (continuous versus discontinuous bands) and differing degrees of umbilicus opening (closed versus somewhat open) (Table 7). Continuous bands are frequent in populations of I. giennensis sp. nov., I. axarciensis sp. nov., I. ortizi and I. angustatus, but less than 50% of individuals in the populations of I. antikarianus sp. nov. and I. marmoratus loxanus exhibited continuous bands. Besides, a closed umbilicus was the norm in most of the species, except for I. angustatus, in which only 37% of individuals presented a completely closed umbilicus (Table 7).
Discussion
In this study, we describe three new species of the genus Iberus: I. giennensis, I. axarciensis and I. antikarianus spp. nov. Morphologically, the three described species show overlap in the form of their shells, although with slight differences at the population level. Therefore, they should be considered cryptic species. Furthermore, these three species show morphologies very similar to other close species with nearby distributions: I. ortizi, I. angustatus and I. marmoratus loxanus. All these species have to be identified by genetic distances rather than by morphological features. Most of them present allopatric distributions, suggesting that they evolved by allopatric speciation as a consequence of geographic isolation. Although I. alcarazanus was considered to be distributed in the Sierra de Alcaraz (Albacete Province, Spain), it is not present in this mountain range, its type locality being sited elsewhere (García San Nicolás, 1957; Arrébola, 1995; Ruiz Ruiz et al., 2006; Liétor et al., 2014). Consequently, owing to its high conchological variability (Liétor et al., 2014), which makes it difficult to differentiate from other taxa (Ruiz Ruiz et al., 2006), and to its treatment as an invalid taxon or a junior synonym of Iberus alonensis (Férussac, 1821) (Martínez-Ortí & Robles, 2012), I. alcarazanus has not been taken into account in publications on the taxonomy of the genus Iberus (Cadevall & Orozco, 2016; Elejalde et al., 2008a). Given such a degree of uncertainty, we consider it pertinent to describe a new species, I. giennensis sp. nov., to designate the species of the genus Iberus with its own well-differentiated characters that inhabits the southwest of the province of Jaen and the northwest of the province of Granada, and which matches the species that was considered I. alcarazanus in previous publications (García San Nicolás, 1957; Arrébola, 1995; Ruiz Ruiz et al., 2006; Liétor et al., 2014). Another species described here, I. antikarianus sp. nov., was detected in the phylogenetic analysis by Elejalde et al. (2008a) based on the sequencing of a single individual initially considered to be I. loxanus according to its morphology. The genetic distance between this individual and I. ortizi (the nearest taxon) was sufficient to consider it a new species, and Elejalde et al. (2008a, p. 196) claimed that "[m]ore intensive sampling in the east of Malaga could provide more information about the phylogenetic relationships of this unique population". Following their recommendation, we carried out an intensive field sampling programme to obtain more individuals with a similar haplotype and to define the distribution range of this possible species. A new specimen was sequenced, confirming the existence of this species, whose distribution was thus defined. In addition, the morphology of this species has been characterised through the morphological analysis of 301 individuals. The restricted distribution of this species, limited to no more than 300 km² (much less if cultivated areas and population centres interspersed in its potential area are ruled out), could imply a concern in terms of conservation.
The third species described here, I. axarciensis sp. nov., was identified on the basis of genetic analysis. This species remained unnoticed until now as a consequence of its limited distribution and the similarity of its shell to those of other close and nearby Iberus species. No specimens of this species were considered in the study by Elejalde et al. (2008a), highlighting the importance of developing a research plan to achieve a comprehensive phylogeny and assess the systematics of the genus Iberus. The main criterion that has allowed researchers to describe new Iberus species in recent decades has been based on conchological data. However, it is likely that the genus Iberus hides a number of cryptic species that have not been identified on the basis of shell characteristics. Therefore, it seems necessary to conduct a systematic prospection and subsequent genetic sequencing of every population of Iberus sp. to assess the presence of cryptic species.
With this phylogenetic clade and its distribution at hand, we can postulate some evolutionary processes for the species comprising this clade. The two more ancestral species, I. angustatus and I. guiraoanus (L. Pfeiffer, 1853), inhabiting Sierra Mágina and Sierra de Cazorla, Segura y las Villas, respectively, are located further north than all other species within the clade. This distribution thus suggests a possible expansion of species towards the southwest. The sister relationship of Iberus ortizi to all three newly described species suggests conditions in which isolated ancestral I. ortizi populations eventually favoured speciation processes in the region. However, the lack of a time-calibrated phylogeny limits the accuracy with which such processes can be understood. Today, I. giennensis sp. nov. coexists with I. angustatus in the eastern area of its distribution, without any apparent geographical barrier separating them. This sympatric distribution, as well as the presence of possible hybrids between the two species (personal observations), merits further investigation.
Lastly, our findings have implications beyond evolution and taxonomy. The three species considered here show very limited distributions, especially I. antikarianus sp. nov., which will probably entail conservation concerns. Although we have delimited the distributions of these species, their population sizes and trends, as well as their threats, remain unknown. Iberus gualtieranus is the most studied Iberus species (Moreno-Rueda, 2011) and the only one currently included in the Spanish red list of invertebrates (Verdú et al., 2011). Iberus gualtieranus is considered "Endangered" by the IUCN (Arrébola, 2011a). Still, the distribution of I. gualtieranus, with several isolated populations, is similar to or even larger than those of I. axarciensis and I. antikarianus spp. nov. In fact, I. ortizi, with a very small distribution range, similar to those of I. axarciensis and I. antikarianus spp. nov., is considered "Vulnerable" by the IUCN (Arrébola, 2011b). These discrepancies probably relate to the scarce information available for I. ortizi in comparison with I. gualtieranus. Detailed studies of population size, structure and trends (such as those done for I. gualtieranus; Moreno-Rueda & Pizarro, 2007) should be carried out for I. ortizi and the three new species described in this study to understand the conservation concerns of these endemics. The description of new endemic cryptic species and their distribution ranges will likely reveal a geographic mosaic of several morphologically similar species with restricted distributions and important conservation concerns. Many of these species, given their reduced distribution ranges, will probably meet the conditions needed to be catalogued as "Vulnerable" or even "Endangered".
Figure S2 in the Supplementary Material shows a representative series of the conchological variability in I. giennensis sp. nov. Live specimens of I. giennensis sp. nov. are represented in Fig. 4.
Fig. 1
Fig. 1 At the left, a Bayesian tree inferred in MrBayes based on Iberus species COI sequence data. The branch examined in the present study is indicated in the tree in a different colour. At the right, an amplified view of the branch analysed in the present study. In red, sequences of Clade 1 (I. giennensis sp. nov.); in blue, Clade 2 (I. antikarianus sp. nov.); and in green, Clade 3 (I. axarciensis sp. nov.)
Figure 11
Figure 11 shows photographs of the holotype and paratype shells assigned to I. antikarianus sp. nov. Morphological measurements of the holotype and paratype shells of I. antikarianus sp. nov. are available in Table 4. The type locality for I. antikarianus sp. nov. consists of calcareous rocks of limestone areas with Mediterranean scrublands.
Fig. 12
Fig. 12 Distribution of I. giennensis sp. nov. (23 localities), I. axarciensis sp. nov. (18 localities), I. antikarianus sp. nov. (2 localities) and I. ortizi (21 localities) in the bi-dimensional space generated by the first two principal components of a PCA. Each point in the graph represents a single sampling locality. The coordinates of the centroids for each species were calculated as the average X and Y coordinates of the points included in the corresponding clouds
Table 1
Location and basic morphometrics of the holotype and paratypes assigned to I. giennensis sp. nov. ID codes of the Zoology Collections of the University of Granada are added
Table 2
Location and basic morphometrics of the holotype and paratypes assigned to I. axarciensis sp. nov. ID codes of the Zoology Collections of the University of Granada are added
Table 4
Location and basic morphometrics of the holotype and paratypes assigned to I. antikarianus sp. nov. ID codes of the Zoology Collections of the University of Granada are added
Table 5
Morphometric comparisons between the three new Iberus species presented in this work. All comparisons were significant. K: Kruskal-Wallis; A: ANOVA (tests used according to the normality of the variables). Data are means with standard deviations; sample sizes are given in brackets. Superscript letters indicate significant differences when they differ. Pairwise comparisons were carried out with the Tukey (HSD) test when variables were normally distributed, or with the H test otherwise
Table 7
The proportion of individuals with different band patterns and with an open or closed umbilicus shown by small-sized Iberus species from Eastern Andalusia (Spain)
Two-dimensional phos-tag zymograms for tracing phosphoproteins by activity in-gel staining
Protein phosphorylation is one of the most common post-translational modifications regulating many cellular processes. The phos-tag technology was combined with two-dimensional zymograms, which consisted of non-reducing IEF PAGE or NEPHGE in the first dimension and high resolution clear native electrophoresis (hrCNE) in the second dimension. The combination of these electrophoresis methods was mild enough to accomplish in-gel activity staining for Fe(III)-reductases by NADH/Fe(III)-citrate/ferrozine, 3,3′-Diaminobenzidine/H2O2 or TMB/H2O2 in the second dimension. The phos-tag zymograms can be used to investigate phosphorylation-dependent changes in enzyme activity. Phos-tag zymograms can be combined with further downstream analysis like mass spectrometry. Non-reducing IEF will resolve proteins with a pI of 3–10, whereas non-reducing NEPHGE finds application for alkaline proteins with a pI higher than eight. Advantages and disadvantages of these new methods will be discussed in detail.
Introduction
Protein phosphorylation, one of the most common post-translational modifications, can alter enzyme activity and subcellular localization, target proteins for degradation, and effect changes in protein-protein interactions (Cousin et al., 2013; Gerbeth et al., 2013; Uhrig et al., 2013). Monitoring the phosphorylation status of proteins is, thus, very important for the evaluation of diverse biological processes. Methods to quantify particular phosphorylation events include radioactive labeling, immunodetection of site-specific phosphorylations, phospho-specific site mapping in peptide mass fingerprinting, and chemical labeling, but also in-gel phospho stainings (e.g., Pro-Q Diamond®, all blue and quercetin staining) (Ferrão et al., 2012; Wang et al., 2014).
Complex protein samples are often separated by polyacrylamide gel electrophoresis (PAGE), before mass spectrometry (MS) analysis. After PAGE, immunodetection or phospho staining are the most commonly applied techniques to detect phosphorylated proteins.
The fluorescent stain Pro-Q Diamond® by Life Technologies gives the opportunity to detect phosphoserine-, phosphothreonine-, and phosphotyrosine-containing proteins without sequence or context specificity (Miller et al., 2006). Currently, Pro-Q Diamond® is a standard staining for SDS-PAGE. In contrast, it is not often combined with native PAGE (Tsunaka et al., 2009), and no literature can be found on its combination with zymograms. An alternative to phospho staining is phos-tag PAGE, a phosphate affinity electrophoresis for the mobility shift of phosphoproteins (Kinoshita et al., 2006; Kinoshita-Kikuta et al., 2007; Kinoshita and Kinoshita-Kikuta, 2012). A dinuclear metal [Mn(II) or Zn(II)] complex of 1,3-bis[bis(pyridin-2-ylmethyl)-amino]propan-2-olato acts as a phosphate-binding tag molecule, phos-tag, in aqueous solution under physiological conditions. Recently, Mn(II)-phos-tag Blue Native PAGE (BNE) was accomplished in the first dimension (Deswal et al., 2010).
Native PAGE methods in combination with phosphorylation analysis are mainly needed for the characterization of protein-protein interactions, complex assembly and activity regulation, which are prerequisites for the understanding of cellular processes. A variety of native PAGE methods exist (BNE, CNE, native Tris-PAGE, native Acetate-PAGE), and the most suitable can be chosen depending on the sample and the scientific question to be answered. Native PAGEs, with more or less modified protocols, are often used for zymograms because of reduced denaturing conditions, e.g., high salt concentrations, reducing agents, and strong detergents (Schägger, 2005, 2008; Wittig et al., 2006, 2007; Burré et al., 2009; Führs et al., 2009, 2010), which can affect the activity of a protein. It is likely that strong detergents (e.g., SDS), reductants [dithiothreitol (DTT), 2-mercaptoethanol] or heating could influence not only the protein activity but also the phosphorylation.
The standard zymograms (non-reducing SDS-PAGE without heating of the protein sample) are commonly used for proteolytic enzymes (Vandooren et al., 2013), but the combination of different electrophoresis methods and various in-gel activity stainings makes the approach applicable to different enzyme activities (Manchenko, 2002). In the past, activity in-gel stainings after isoelectric focusing (IEF) slab gels were reported for different enzymes, e.g., malate dehydrogenase, peroxidase, quinone reductase, Fe(III)-reductase, superoxide dismutase, catalase and others (Mika et al., 2010; Meisrimler et al., 2011; Kukavica et al., 2012; Lüthje et al., 2014). In standard IEF-PAGE, protein separation is based on the pI and oriented from basic to acidic pH. A related method, non-equilibrium pH gel electrophoresis (NEPHGE), also separates proteins by their pI, but the separation is reversed in comparison to IEF-PAGE. NEPHGE was developed to resolve proteins with extremely basic pI (pH 8.5-12.0) (Lopez, 2002). During NEPHGE, proteins are not focused to their pI as in standard IEF-PAGE. Instead, proteins move through the gel based on their charge. For this reason, the accumulated volt-hours (Vh) determine the protein pattern across the gel and have to be kept constant to ensure reproducibility.
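Since reproducibility depends on accumulated Vh rather than run time alone, a simple way to track it is to integrate voltage over the logged run steps, as in the sketch below (the stepped programme shown is hypothetical, not the programme used in this study).

```python
def accumulated_vh(steps):
    """Accumulated volt-hours from (voltage_V, duration_min) run steps."""
    return sum(volts * minutes / 60.0 for volts, minutes in steps)

# Hypothetical stepped NEPHGE run reaching 450 Vh:
# 100 V for 60 min (100 Vh) + 200 V for 105 min (350 Vh)
print(accumulated_vh([(100, 60), (200, 105)]))  # -> 450.0
```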
To date, native PAGE methods such as those described above are well-established systems, but none of them is usually considered a standard method. In particular, the combination of non-reducing IEF/NEPHGE with one or another native PAGE in the second dimension has rarely been performed and is scarcely found in the literature, but it has been important for two-dimensional zymograms. After multiple modifications and trials, we have now developed the protocol reported in the present paper. It offers good resolution for the combination of non-reducing IEF or NEPHGE in the first dimension with hrCNE in the second dimension. The hrCNE was combined with the phos-tag to separate proteins depending on their phosphorylation. Various activity in-gel stainings can be accomplished in the first dimension and in the second-dimension hrCNE or phos-tag hrCNE. For the first time, we attempted to directly link the phosphorylation status of an enzyme to its activity using 2D zymograms by combining several gel electrophoresis methods based on size, charge and affinity.
Plant Material
Proteins were obtained from leaves of 4-week-old corn plants (Zea mays L. cv. Goldener Badischer Landmais, Saatenunion, Hannover, Germany) and roots of 19-day-old pea (Pisum sativum L.) plants (Sperli cv. vroege, Lüneburg, Germany). Soluble proteins of corn and pea were separated from the microsomal fraction by differential centrifugation as described elsewhere (Meisrimler et al., 2011; Lüthje et al., 2014) and stored at −76 °C until use. Total protein extracts from corn roots (12 days old) were acquired by grinding with liquid nitrogen, followed by extraction in Tris-HCl buffer pH 7.6 (50 mM NaCl, 1 mM DTT, 1% Triton X-100) for 1 h at 4 °C. Extraction was followed by centrifugation at 10,000 g for 10 min (Beckman, Avanti, Germany). All extraction buffers contained protease inhibitors (Sigma Aldrich, France) and phosphatase inhibitors (Sigma Aldrich, France). Protein amounts were quantified as described by Bradford (1976) in the presence of 0.01% Triton X-100 using bovine serum albumin as the standard.
First Dimension — Non-Reducing IEF and NEPHGE
Similar gels were used for IEF and NEPHGE. Gels consisted of 4.5% acrylamide, 2% ampholytes pH 3-10 (Serva, Heidelberg, Germany), 4 M urea and 2% CHAPS. Gels were always prepared at most 24 h before use. The minimum polymerization time was 1.5 h at 34 °C. Polymerization was triggered by 0.1% ammonium persulfate (APS) and 0.01% N,N,N′,N′-tetramethylethylenediamine (TEMED). Sample buffer was prepared as a 4× buffer for both separation methods (IEF, NEPHGE). Samples loaded on the gel contained 1 M urea, 10% glycerol, 0.5% CHAPS and 2% ampholytes. Before samples were applied, a pre-run of the gels was accomplished for 45 min at 30 V with no further restrictions. Electrophoresis conditions were as described by Lüthje et al. (2014). For NEPHGE, the polarity and the IEF buffer system were reversed (Figure 1).

FIGURE 1 | Separation model of IEF/phos-tag hrCNE and NEPHGE/phos-tag hrCNE. (A) The buffer system in the IEF consisted of NaOH (upper chamber) and H3PO4 (lower chamber). Both the buffer system and the polarity of the IEF were reversed for separation by NEPHGE. Proteins moved through the IEF gel until they reached their pI within the pH gradient of the gel [alkaline (blue) to acidic (red)]. During NEPHGE, proteins moved in the direction of the cathode according to their pI. NEPHGE was stopped before the pH equilibrium was reached to keep proteins with an alkaline pI in the gel. After separation by pI, gel lanes were sliced out, equilibrated and transferred to the second dimension. (B) In the second dimension, proteins were separated by hrCNE according to their molecular weight. Phos-tag hrCNE separated proteins based on their affinity to the phos-tag. Cathode and anode are indicated as (−) and (+), respectively. An arrow on the left of the gels indicates the direction of separation. Low (L) and high (H) pH are labeled on the right of the first dimensions. Three steps are indicated for the first and second dimensions: (i) loading the sample, (ii) separation of proteins by electrophoresis, and (iii) final position of the proteins after stopping the electrophoresis.
Pro-Q Diamond® staining for phosphoproteins was accomplished after in-gel activity staining and fixation, using the fast staining protocol as recommended by the provider. IEF or NEPHGE gels were fixed in 20% TCA, and hrCNE or phos-tag hrCNE gels were fixed in 40% MeOH and 10% acetic acid overnight. All gels were washed once for 30 min and twice for 10 min in ultrapure water before phospho staining. After destaining, gels were washed three times with ultrapure water, followed by detection using a CCD camera at 560 nm (Bio-Rad, ChemiDoc, Germany).
At least three independent technical replicates were performed per staining in the second dimension to show the specificity of the spots in relation to their phosphorylation (Supplemental Data 1, 2). Student's t-test was used to statistically test the protein separation shift between hrCNE and phos-tag hrCNE for significance.
Protein Digestion and Mass Spectrometry
Gel spots were cut out and the proteins were reduced with DTT, alkylated with iodoacetamide and digested with trypsin following the standard protocol described in Meisrimler et al. (2014). After digestion, the gel pieces were repeatedly extracted (50% acetonitrile/5% formic acid) and the combined extracts dried down in a vacuum concentrator.
For QTOF Premier tandem MS analysis, peptide extracts were dried in a vacuum concentrator and resuspended in 20 µL 0.1% formic acid. The samples were centrifuged at 16,000 rpm, and 2-4 µL of the digest were used for LC-MS runs, which were done on a QTOF Premier tandem mass spectrometer (Waters-Micromass, Eschborn, Germany) equipped with an Acquity UPLC (Waters, Eschborn, Germany). Samples were applied onto a trapping column (Waters nanoAcquity UPLC column, C18, 180 µm × 20 mm), washed for 10 min with 5% acetonitrile, 0.1% formic acid (5 µL/min) and then eluted onto the separation column (Waters nanoAcquity UPLC column, C18, 1.7 µm BEH130, 75 µm × 200 mm, 200 nL/min) with a gradient (A, 0.1% formic acid; B, 0.1% formic acid in acetonitrile, 5-50% B in either 60 or 120 min). The spray was done from a silica emitter with a 10 µm tip (PicoTip FS360-20-10, New Objective) at a capillary voltage of 1.5 kV. For data acquisition, the MSE technique was applied: alternating scans (0.95 s, 0.05 s interscan delay) with low (4 eV) and high (ramp from 20 to 35 eV) collision energy were recorded (Silva et al., 2005; Li et al., 2009). The data were evaluated with the software package ProteinLynx Global Server version 2.5.2 (Waters, Eschborn, Germany), searching the UniProt database and UniProt TrEMBL (Jan 2014 update). At intervals of 10 s, a lockspray spectrum (1 pmol/µL [Glu1]-Fibrinopeptide B (Sigma)) was recorded. Using lockspray correction, a mass accuracy of <7 ppm was achieved in MS mode.
The LC-ESI-OT-MS data were processed with Proteome Discoverer v1.4.1.14 (Thermo Scientific) using the following parameters: precursor mass tolerance 10 ppm, fragment mass tolerance 0.2 Da, 1 missed cleavage, carbamidomethylation on Cys as fixed and oxidation on Met and phosphorylation on Ser, Thr and Tyr as variable modifications. All peptide assignments were verified by manual inspection.
Two-Dimensional Zymograms
For the separation in the first dimension, non-reducing IEF and NEPHGE were performed, separating proteins based on their pI. For NEPHGE, the pH gradient was directed in the opposite direction (acidic to alkaline) to that for IEF (Figure 1). Protein separation by NEPHGE was stopped before the pH equilibrium was reached. Therefore, NEPHGE could not be used to calculate the pI of a protein. NEPHGE is normally used for highly alkaline proteins (e.g., membrane proteins) that would otherwise be lost to any analysis by PAGE and the subsequent MS identification. To ensure comparability of NEPHGE replicates, the Vh were kept constant between different gel runs (Lopez, 2002). IEF and NEPHGE could be used to separate differently phosphorylated forms of the same enzyme based on their pI shift (Zhu et al., 2005). The shift is introduced by the extra negative charge of the phosphorylation and has also been used for the separation of phosphoproteins in IPG-strip/SDS-PAGE (Larsen et al., 2001).
The non-reducing IEF sample buffer contained 1 M urea and only CHAPS as detergent, resulting in a clear resolution of soluble proteins and microsomes in the first dimension (Figure 2). The pre-run before IEF and NEPHGE increased the resolution and activity of the bands. Similar effects have been shown for native Tris-PAGEs in the past (Weydert and Cullen, 2010).
Urea can denature proteins because it diminishes the hydrophobic effect by displacing water in the solvation shell and because it specifically binds to amide units. It has been shown that urea interacts differently with different functional groups, resulting in heterogeneous effects on protein activity. Also, the effects of urea have been shown to be reversible if urea is not used directly in an assay. Therefore, the inhibitory concentration of urea on protein activity strongly depends on the type of protein (Rajagopalan et al., 1961; Kim and Woodward, 1993; Zou et al., 1998; Garfin, 2003; Choi et al., 2004). For proteins more sensitive to urea (or denaturing compounds in general), the optimal urea concentrations of gels and sample buffers have to be adjusted based on the level of enzyme activity assayed in the urea-containing enzyme reaction buffer. After separation by pI, gel lanes were sliced out, equilibrated and transferred to the second dimension as described earlier by Lüthje et al. (2014).
One of the most critical points for two-dimensional PAGE was the transfer of the proteins from the first to the second dimension. For the equilibration of the first-dimension IEF/NEPHGE gels, the second-dimension hrCNE gel buffer was supplemented with 0.1% Triton X-100 and 0.07% DOC, and gels were equilibrated by continuous shaking at room temperature. This equilibration buffer was applicable to all soluble samples, whereas microsomal fractions showed inferior separation (Supplemental Data 3) due to the increased hydrophobicity often observed with membrane protein samples (Meisrimler and Lüthje, 2012). Higher concentrations of detergent had a negative influence on the separation of the proteins and produced irregularities in the separation pattern (data not shown). Sample-dependent adaptations of the presented method are possibly needed for strongly hydrophobic proteins, e.g., testing different detergent combinations, concentrations and solubilization times.
The standard second-dimension hrCNE separates proteins based on their size. In phos-tag hrCNE, phosphoproteins were separated by their affinity to the phos-tag under native conditions, which has been shown to be highly specific by Kinoshita et al. (2006) and Kinoshita-Kikuta et al. (2007).
In comparison to standard two-dimensional gel electrophoresis (e.g., IPG-strip/SDS-PAGE), the combination of non-reducing IEF/NEPHGE with phos-tag hrCNE excludes the effects of DTT, precipitation and heating. These treatments can affect the activity of a protein and its phosphorylations. In-gel stainings like Pro-Q Diamond®, all blue and quercetin, most commonly applied after IPG-strip/SDS-PAGE, only show the current form of the phosphoproteins in the gels (Orsatti et al., 2009; Wang et al., 2014). In case of phosphorylation loss before staining, the information would be lost for further analysis. Also, multiple post-translational modifications per protein could affect the pI shift in the first dimension, making analysis difficult. Phos-tag hrCNE focuses only on phosphoproteins, comparable to affinity chromatography, e.g., IMAC (Machida et al., 2007). Other post-translational modifications were excluded as effectors in the second dimension, and the results were therefore easier to interpret.
The binding abilities and optimal concentrations of the phos-tag in the second-dimension hrCNE were tested using phosvitin as a standard for protein phosphorylation (Samaraweera et al., 2011). Alongside, partially dephosphorylated phosvitin was used as a control. First-dimension non-reducing IEF confirmed the theoretical pI of 4.5 for phosvitin, showing a pI of 4.4-4.6 for the phosphorylated phosvitin. The partially dephosphorylated protein showed bands with pI of 5.2 and higher (Figure 2A). The compatibility of non-reducing IEF with Pro-Q Diamond® was first tested with phosvitin, followed by the combination of native in-gel staining and subsequent Pro-Q Diamond® staining (Figure 2).
In the second dimension, phosvitin was observable in the 0.5 µM, 1 µM and 10 µM phos-tag hrCNE (Figure 3). Phosvitin was not visible at 0.1 µM, similar to the standard hrCNE without phos-tag or the dephosphorylated protein (Figure 3). The concentration of 0.1 µM phos-tag was below the limit of the binding ability for phosphoproteins. Overall, the best resolution of phosvitin was achieved in the 0.5 µM phos-tag hrCNE.
The fact that phosvitin was only detectable in its phosphorylated form in the second dimension was caused by the resolution of the hrCNE. hrCNE is normally used for the separation of native protein complexes, which have fairly high molecular masses. Proteins with molecular masses lower than 45 kDa were not found after separation in the hrCNE, or moved very close to the separation front. Based on this fact, the combination of non-reducing IEF/NEPHGE with hrCNE was most useful for proteins with a size above 50 kDa. This was one of the major constraints of the presented method. The restriction could possibly be overcome by exchanging the hrCNE for a native Tris-PAGE (Weydert and Cullen, 2010). This has to be investigated further in the future.
The optimal concentration of 0.5-1 µM phos-tag used in the presented protocol was found to be in the range reported for first-dimension BNE (Deswal et al., 2010). However, the required phos-tag concentration is much lower than in phos-tag SDS-PAGE. Deswal et al. (2010) discussed the difference in the required phos-tag concentration between first-dimension phos-tag BNE and phos-tag SDS-PAGE, speculating that it might be related to the difference in the bisacrylamide-to-acrylamide ratio used in the two methods. For the presented hrCNE protocol, we used a bisacrylamide-to-acrylamide ratio similar to that used for standard SDS-PAGE, showing that the decreased need for phos-tag was not related to this ratio, but rather to the fact that the proteins were closer to native conditions. It is highly possible that the treatment of samples with SDS, reducing agents like DTT and heating in the standard protocol affects the phosphorylation sites, or the accessibility of the phosphorylation sites that bind to the phos-tag, similar to the treatment before standard IPG-strip/SDS-PAGE.
Phosvitin was not detectable in non-reducing NEPHGE (450 Vh) optimized for highly alkaline proteins. Therefore, phosvitin cannot be used as a standard for the pre-separation by non-reducing NEPHGE in the first dimension. An ideal standard for separation in the alkaline range has still to be found. NEPHGE protocols found in the literature normally use higher Vh than the present protocol (Lopez, 2002). Preliminary work with plant samples showed that strongly alkaline bands already migrated out of the gel at higher Vh (data not shown). Based on these results, separation in NEPHGE was performed constantly at 450 Vh to make replicates comparable (Supplemental Data 4).
Colorimetric Staining and Identification of Proteins
The functionality of the two-dimensional zymograms was tested with soluble proteins from corn leaves, soluble proteins of pea roots and total protein extracts of corn roots (Figure 4). This sample variety demonstrated that the method is independent of the origin and age of a sample.
TMB, DAB and ferrozine stainings were chosen to test the properties of the conventional two-dimensional zymograms and the phos-tag zymograms. Before performing the staining procedures in the second dimension, compatibility with the first-dimension non-reducing IEF was tested (Figure 2). Different samples were separated on non-reducing IEF and stained with TMB, ferrozine (Figures 2B, 4), DAB (Figure 4) and NBT (Supplemental Data 5). Detectable bands in the TMB and ferrozine stainings are indicated with their corresponding pI (Figure 2). All stainings have been reported to be specific for different protein groups. TMB/H2O2 staining has commonly been used for the detection of Fe- and Cu-containing proteins, e.g., flavocytochromes, peroxidases and blue copper proteins. DAB/H2O2 has been shown to be specific for oxygen-radical-producing enzymes, mostly heme-containing proteins, e.g., peroxidases, but not for Cu-containing enzymes. The Fe(III)-reductase staining with ferrozine and NADH is highly specific for enzymes that are able to reduce Fe(III) to Fe(II) at the given pH using NADH as a co-substrate (Holden et al., 1991; Meisrimler et al., 2011). After reduction, the Fe(II) is bound in a stable complex with ferrozine (Viollier et al., 2000). NBT/NADH staining was also accomplished in the first dimension. This staining has been reported to be specific for NADH-using reductases like quinone reductases (Yan and Forster, 2009; Meisrimler et al., 2011). The formazan salt formed in the reaction with NBT was too stable to be removed from the gel, and these stainings were not compatible with Pro-Q Diamond® staining (Supplemental Data 5).
The TMB, DAB and Fe(III)-reductase staining procedures were accomplished in the second dimension with and without phos-tag, proving the functionality of non-reducing IEF/NEPHGE with the second-dimension hrCNE and phos-tag hrCNE as zymograms. However, separation in the second dimension appeared to be the most problematic step. In particular, the highly sensitive TMB staining for the relatively extensive group of heme and Cu proteins showed higher backgrounds (Figure 4B). The phos-tag hrCNE exhibited the highest background, making it difficult to analyze the gels.
The migration of proteins was compared between hrCNE and phos-tag hrCNE. The phosphorylation of a protein causes slower migration in phos-tag hrCNE due to its affinity to the phos-tag, leading to a measurable shift between hrCNE and phos-tag hrCNE. However, six spots (9, 13-16) showed no significant shift on the phos-tag hrCNE when compared to the hrCNE. These proteins had no affinity for the phos-tag and were not phosphorylated. For spots 1 and 2, the shift was not computable due to the high background in the top of the phos-tag hrCNE gel. All other spots (4-8, 10-12) had a significant migration shift of more than 10% of the total migration distance (gel length).
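A minimal sketch of how such a shift can be quantified and tested, assuming spot positions are expressed as Rf values (migration distance divided by gel length, so a difference of 0.1 corresponds to 10% of the gel); the replicate values are hypothetical.

```python
from scipy import stats

# Hypothetical Rf values for one spot in three technical replicates:
rf_hrcne   = [0.72, 0.70, 0.73]  # standard hrCNE
rf_phostag = [0.55, 0.57, 0.54]  # phos-tag hrCNE (retarded migration)

shifts = [a - b for a, b in zip(rf_hrcne, rf_phostag)]
t = stats.ttest_ind(rf_hrcne, rf_phostag)
print(f"mean shift: {sum(shifts) / len(shifts):.2f} of gel length, "
      f"t = {t.statistic:.2f}, p = {t.pvalue:.4f}")
```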
The main spots showing a significant shift on the phos-tag hrCNE compared to the hrCNE, with a clear appearance on both gels, were picked and identified by LC-MS (Table 1). Spots 4-6 were identified as fructose bisphosphate aldolase on both gels. Spot 8 was identified as a fruit protein (B4FRC8) and as an uncharacterized protein on the phos-tag hrCNE (Table 1; Supplemental Table 1). Based on the small number of detectable peptides, it was not possible to identify specific phosphopeptides in the analyzed spots. The fruit protein was also identified in a former phosphoproteome study available at http://www.ebi.ac.uk/pride/archive/projects/PRD000721 (Bonhomme et al., 2012). Spot 8 was additionally analyzed using LC-MS Orbitrap. Further proteins were significantly identified, but not all were related to the TMB staining (Supplemental Table 2).
Phosphorylation sites were verified by in-silico analysis for all proteins identified (Table 1). Overall, MS-based identification after zymography is often the biggest challenge. Proteins stained in zymograms can be of low abundance because these staining methods [e.g., Fe(III)-reductase or TMB staining] are highly sensitive, often more sensitive than silver staining. If the primary MS results enable good protein identification, phosphopeptide enrichment is recommended in a second MS analysis to verify the results from the phos-tag zymograms (Dunn et al., 2010). In contrast to the TMB staining, specific protein activities such as the NADH-dependent Fe(III) reduction and the DAB staining led to a clear separation of proteins (Figure 4) but were more problematic for protein identification. The protocol presented for non-reducing IEF or NEPHGE/hrCNE is also the first functional protocol for Fe(III)-reductase detection in the second dimension; to date, this staining had only been published for first-dimension IEF (Holden et al., 1991; Meisrimler et al., 2011). Fe(III)-reductase activity was highly sensitive, and samples had to be treated carefully, avoiding multiple freeze-thaw cycles. For spot 11 on the ferrozine-stained phos-tag hrCNE, only the peptide ISEYVTQLR was identified, belonging to ferritin, which is possibly regulated by phosphorylation under different physiological conditions (Beazley et al., 2009). Ferritin also has a Fe-oxidoreductase function and is therefore potentially stainable by the ferrozine method. Overall, spots 10-12 were only detectable in the phos-tag hrCNE but not in the hrCNE; the calculated shift of these spots was therefore 100%. The detected proteins might have been of small size and migrated close to the front in the hrCNE; in phos-tag hrCNE, they have a strong affinity to the phos-tag and migrate more slowly. Further investigations are needed to understand the migration of the ferrozine spots. Interestingly, ferrozine activity detected in the soluble fraction was exclusively detectable in the alkaline pH range using NEPHGE (Supplemental Data 4), whereas in microsomal fractions it was only detectable at more acidic pH, with pI values of 5.6, 6.7 and 7.2 (Figure 2). The band with a pI of 5.6 was identified as the quinone reductase family protein NP_194457 with the peptide AFLDATGGLWR (sequence coverage 5%, score of 26) by manual sequencing.
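The idea behind such an in-silico check can be approximated by a simple consensus-motif scan. The sketch below is only an illustration, not the study's actual pipeline: it scans a hypothetical sequence for two well-known kinase consensus motifs (proline-directed [S/T]P and basophilic R-x-x-[S/T]), whereas dedicated predictors use far richer models.

```python
import re

# Illustrative in-silico scan for candidate phosphorylation sites.
# Only two common consensus motifs are checked; the sequence is a
# hypothetical example, not a protein identified in this study.

MOTIFS = {
    "proline-directed ([S/T]P)": r"(?=([ST]P))",
    "basophilic (R-x-x-[S/T])": r"(?=(R..[ST]))",
}

sequence = "MKAVRSSSPLLRTQSPAFLDATGGLWRISEYVTQLR"

for label, pattern in MOTIFS.items():
    offset = 3 if label.startswith("basophilic") else 0
    for m in re.finditer(pattern, sequence):
        site = m.start(1) + offset  # index of the S/T acceptor residue
        print(f"{label}: candidate {sequence[site]}{site + 1}")
```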
Spot 9 was identified as 6,7-dimethyl-8-ribityllumazine synthase with two peptides (FNEIITRPLLEGAVATFK and GAEAALTAIEMASLFEHHLK), which has no activity related to the Fe reduction stained in the gel. In both cases, for spots 5 and 6, the peptides found were verified manually, but the final scores were too low for significant identification. Overall, multiple proteins per spot can be a problem for the identification of the low-abundance proteins responsible for an activity detected in zymograms, and MS data have to be handled critically.
Spots 15 and 16 were identified on the DAB-stained hrCNE as peroxidases. For all DAB-stained spots, no significant shift appeared when the sample was separated by phos-tag hrCNE compared to hrCNE. The identified peroxidases belong to the class III peroxidases, which have not been shown to be targets of phosphorylation events. In particular, peroxidases of the secretory pathway appear not to be regulated by phosphorylation, possibly due to the lack of secreted kinases and phosphatases.
If a specific protein with known activity has to be analyzed, proteins can be pre-separated by chromatography (e.g., affinity or ion exchange). Phosphoprotein enrichment is another option, and different variations of the technique are available (different IMACs, phosphoprotein enrichment kits). For plant samples, IMAC has been applied successfully (Tang et al., 2008), but the protocol might need adaptation, as its application did not rely on activity preservation. Furthermore, non-reducing IEF/NEPHGE gels were stained with Pro-Q Diamond® directly after in-gel activity staining for ferrozine, TMB and NBT, resulting in a low number of phosphoproteins detected in the IEF. No phosphoproteins were detected in NEPHGE, which is possibly due to the highly alkaline pH (ampholytes) (Figure 2). Pro-Q Diamond® was also applied in the second dimension after in-gel staining (ferrozine). A few spots were detectable in the standard hrCNE, but in phos-tag hrCNE no signal could be found at all (Supplemental Data 3). The reason for this incompatibility is not clear, but the phos-tag possibly blocks the phosphorylation site for the staining.
In any case, to identify the detected proteins, spots should be picked and analyzed by MS and/or Western blot. Other specific staining methods are available for different enzyme activities (e.g., malate dehydrogenase, lipoxygenase, superoxide dismutase and others) (Manchenko, 2002). Combining native two-dimensional gel electrophoresis with the phos-tag technology and specific activity stainings has the benefit that changes in these activities caused by phosphorylation can be monitored directly. Applications of the method include the observation of the regulation of specific activities by phosphorylation under differential stress conditions (Supplemental Data 6). The method itself should not be used as a stand-alone technique; together with Western blot, MS and specific point mutation of phosphorylation sites, it can be used for a dynamic analysis of responses to stress factors.
Concluding Remarks
In recent years, various MS-based approaches have been developed to identify phosphorylated peptides and proteins. In several techniques, phos-tag-related molecules were used for the enrichment of phosphorylated peptides. In contrast to MS methods, phos-tag gels can easily be run using general gel electrophoresis equipment, and radioactivity is avoided. Furthermore, all phosphorylations can be detected, and different phosphorylated forms of the same protein can be distinguished. The combination of phos-tag with zymograms allows estimation of the effects of phosphorylation on protein activity, making it possible to follow the activation of proteins by phosphorylation and dephosphorylation. The combination with native IEF for weakly alkaline to acidic proteins and NEPHGE for highly alkaline proteins is helpful to separate proteins by pI, resulting in a higher resolution of different iso-enzymes. Phos-tag gels were not compatible with Pro-Q Diamond®. Protein identification is possible by MS, and results can be confirmed by Western blot. In some cases, phosphoprotein enrichment by IMAC or alternative methods might be needed before phos-tag zymograms to obtain better identifications by MS.
Electrical Conduction Properties of Hydrogenated Amorphous Carbon Films with Different Structures
Hydrogenated amorphous carbon (a-C:H) films have optical and electrical properties that vary widely depending on deposition conditions; however, the electrical conduction mechanism, which is dependent on the film structure, has not yet been fully revealed. To understand the relationship between the film structure and electrical conduction mechanism, three types of a-C:H films were prepared and their film structures and electrical properties were evaluated. The sp2/(sp2 + sp3) ratios were measured by a near-edge X-ray absorption fine structure technique. From the conductivity–temperature relationship, variable-range hopping (VRH) conduction was shown to be the dominant conduction mechanism at low temperatures, and the electrical conduction mechanism changed at a transition temperature from VRH conduction to thermally activated band conduction. On the basis of structural analyses, a model of the microstructure of a-C:H that consists of sp2 and sp3-bonded carbon clusters, hydrogen atoms and dangling bonds was built. Furthermore, it is explained how several electrical conduction parameters are affected by the carrier transportation path among the clusters.
Introduction
Hydrogenated amorphous carbon (a-C:H) films are a non-crystalline material [1]. They consist of sp2- and sp3-hybridized carbon atoms and hydrogen atoms. The most popular process to synthesize a-C:H films is plasma-enhanced chemical vapor deposition (PECVD) [2]. The mechanical and tribological properties of a-C:H films depend on their atomic structure [3]. These films, and particularly what are called diamond-like carbon (DLC) films, have outstanding mechanical properties, such as high hardness, low coefficients of friction and excellent corrosion and wear resistance. Fiaschi et al. reported that the hardness of DLC films depends on their sp3 content and that the hardness affects tribological properties [3]. Many researchers have noted that the friction coefficient and specific wear rate of the films are affected by their hydrogen content [4-6]. These studies clarified the relationship between the mechanical properties of a-C:H films and the sp2/sp3 carbon-bonding ratio, as well as the relationship between the mechanical properties and the hydrogen content. As a result, a-C:H films with mechanical properties optimized for a given environment have been applied to the surfaces of mechanical parts [1,7-9]. Another notable feature of a-C:H films is that they have a wide range of physical properties resulting from their structural flexibility. The sp2/sp3 ratio and hydrogen content can be controlled in the ranges of 0.3-0.9 and 0.1-0.6, respectively [8,10]. These structural factors depend on the deposition conditions [1]. Their electrical and electronic properties also change: the optical band gap and electrical resistivity are in the ranges of 0.5-4.5 eV [11-13] and 10^2-10^16 Ω·cm [14], respectively. These films therefore have the potential to be used as semiconductor materials that offer widely variable optical and electrical properties.
The most commonly used amorphous semiconductor material is hydrogenated amorphous silicon (a-Si:H). The properties of a-Si:H have been studied intensively for electronic applications in industry [15][16][17]. Spear and Le Comber found that a-Si:H could be substitutionally doped by boron and phosphorus by adding diborane or phosphine to the source silane gas stream [18]. Hydrogen incorporated in a-Si:H terminates the dangling bonds, which causes structural relaxation. As a result, localized states in the band gap significantly decrease and doping is made possible.
Unlike silicon, carbon can form three types of covalent bonds because of its ability to form sp, sp2 and sp3 hybrid orbitals. In a-C:H, mainly sp2- and sp3-hybridized carbon atoms coexist [8,19]. For sp2-hybridized carbon, the 2s orbital mixes with two 2p orbitals to form three equivalent sp2 orbitals and one remaining p orbital. The overlap of p orbitals on adjacent carbon atoms forms π bonds. These π bonds accompany sp2 hybridization, creating electronic states near the Fermi level, and thus determine the electronic properties [20-23]. The π states are strongly affected by the interatomic distance and atomic coordination. It is therefore expected that the mechanism of electrical conduction in a-C:H differs considerably from that of a-Si:H. While many a-C:H films have been applied in the mechanical field, scientific reports on the electrical and electronic properties of the films are very limited, and the relationship between the basic structural factors and these properties is still not clear.
Hydrogenated amorphous carbon began to attract attention as a semiconductor material after a-Si:H had already become the most commonly used amorphous semiconductor material [13]. As with a-Si:H, a-C:H can be synthesized over large areas and grown to have graded properties by controlling deposition conditions during the film growth. Meyerson and Smith found that a-C:H could be doped by adding B2H6 or PH3 in a similar manner to a-Si:H [24]. In addition, a-C:H films can be produced from relatively safer and lower-cost source materials such as hydrocarbons and graphite. Hence, a-C:H films are expected to be applied to devices such as solar cells and thin-film transistors (TFT) [13]. Fundamental research focused on the electrical conduction mechanism in a-C:H films was conducted by Fabisiak et al. [25]. They showed that the temperature dependence of the electrical conductivity of a-C:H films indicates thermally activated band conduction at high temperatures and variable-range hopping (VRH) conduction at low temperatures [25]. The functional relationship for thermally activated band conduction in the extended states is given by the following expression [26]:

σb(T) = σ0 exp(−Ea/kBT)    (1)

and the functional relationship for VRH conduction in the localized states near the Fermi level is given by the following expression [27]:

σh(T) = σ00 exp[−(T0/T)^(1/4)]    (2)

where σ0 and σ00 are pre-exponential factors, Ea is the activation energy, kB the Boltzmann constant, T0 the Mott's characteristic temperature and T the absolute temperature.
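As a rough numerical illustration of the two regimes (not part of the original study; all parameter values below are hypothetical), the two expressions can be evaluated to locate the crossover temperature at which the dominant mechanism changes:

```python
import numpy as np
from scipy.optimize import brentq

KB = 8.617333262e-5  # Boltzmann constant [eV/K]

# Illustrative parameters (not measured values from this study)
SIGMA_0 = 1.0e2    # band-conduction prefactor [S/cm]
E_A = 0.30         # activation energy [eV]
SIGMA_00 = 1.0e-1  # VRH prefactor [S/cm]
T_0 = 1.0e7        # Mott's characteristic temperature [K]

def sigma_band(t):
    """Thermally activated band conduction, Eq. (1)."""
    return SIGMA_0 * np.exp(-E_A / (KB * t))

def sigma_vrh(t):
    """Mott variable-range hopping, Eq. (2)."""
    return SIGMA_00 * np.exp(-(T_0 / t) ** 0.25)

# Crossover where both mechanisms contribute equally (VRH dominates below it)
t_c = brentq(lambda t: np.log(sigma_band(t)) - np.log(sigma_vrh(t)), 30.0, 300.0)
print(f"crossover temperature T_c ≈ {t_c:.1f} K")
```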
There are, however, few practical applications of a-C:H films in the field of electrical and electronic engineering because their electrical-conduction mechanism has not yet been fully revealed. There has been only a limited number of studies on a-C:H films seeking to explain the relationship between their electrical properties and film structure and, in particular, to explain the effect of sp 2 /sp 3 ratio and hydrogen content on this relationship. This is thought to be because it is difficult to evaluate the sp 2 /sp 3 ratio quantitatively.
The effect of the presence of both sp 3 and sp 2 bonding on the properties of the films is still unclear. For electrical and electronic applications using a-C:H films to progress, it is important to investigate the relationship between film structure and electrical properties and it is necessary to determine which structures are better for such applications.
In this study, three types of a-C:H films were deposited from three different hydrocarbons, and the relationship between the film structure and the conduction characteristics of the films was investigated. To determine the film structure and hydrogen content, the sp2/(sp2 + sp3) ratio and the sp2 cluster size in the films were evaluated. The temperature dependence of the electrical conductivity is characterized by several parameters that indirectly reflect the atomic structure: the activation energy (Ea), the transition temperature between conduction mechanisms (Tc) and Mott's characteristic temperature (T0). On the basis of this dependence, a model explaining how the electrical conduction mechanism changes with the structural variation is proposed.
Materials and Methods
The a-C:H films were prepared by pulsed plasma-enhanced chemical vapor deposition (CVD) [28]. We deposited three types of a-C:H films on (100) p-Si substrates (thickness of 625 µm) from acetylene (C2H2, purity of 98%), ethylene (C2H4, purity of 99.5%) and methane (CH4, purity of 99.999%). The Si substrates were ultrasonically cleaned in distilled water (10 min), then ethanol (10 min) and finally acetone (10 min); this cleaning sequence was performed twice. The substrates were then placed on a negative electrode in a vacuum chamber, as illustrated in Figure 1. Prior to the deposition, the native oxide layer on the Si substrates was removed by argon (Ar, purity of 99.9999%) plasma irradiation for 30 min. Argon gas was introduced at 20 cm3/min, and the pressure was maintained at 2 Pa. The applied voltage was −3.5 kV at a frequency of 14.4 kHz. The deposition conditions of the a-C:H films are shown in Table 1. Although the temperature during film deposition cannot be measured because the voltage is applied directly to the substrate, it was estimated to be 200 °C or less using a temperature indicator label for vacuum use; the fact that the apparatus could be held in the hand immediately after deposition suggests that this estimate was correct. The composition of the films was determined using glow-discharge optical emission spectroscopy (GDOES, JY-5000RF, HORIBA, Ltd.; Kyoto, Japan) and a near-edge X-ray absorption fine structure (NEXAFS) technique. In the GDOES measurements, a calibration curve was used for the evaluation of the hydrogen content of the a-C:H films. The calibration curve was prepared from the relationship between the hydrogen content and the optical-emission intensity of standard samples: a pure Si wafer and two types of a-C:H films whose hydrogen contents had previously been measured by Rutherford backscattering spectrometry (RBS) with elastic recoil detection analysis (ERDA) [28]. The NEXAFS measurements were performed at beamline 3.2 Ub of the Synchrotron Light Research Institute (SLRI) in Thailand. The NEXAFS spectra were measured in the total electron yield (TEY) mode. The sp2/(sp2 + sp3) ratios of the a-C:H films were determined by comparison with the NEXAFS spectrum of highly oriented pyrolytic graphite (HOPG). The sp2 cluster size of the a-C:H films was evaluated using Raman spectroscopy (NRS-4100; JASCO Corp.; Tokyo, Japan; wavelength of 532 nm). In the Raman spectra of a-C:H films, the area ratio of the D and G bands (ID/IG) is inversely proportional to the average grain size of the sp2 clusters [29]. In this study, multi-Gaussian fitting was used to deconvolute the Raman spectra, and the sp2 cluster size of the a-C:H films was evaluated from the ID/IG ratio.
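As a simplified illustration of the ID/IG evaluation (the study used multi-Gaussian deconvolution; the sketch below merely integrates fixed D- and G-band windows of a synthetic spectrum, and all values are hypothetical):

```python
import numpy as np

# Simplified I_D/I_G estimate by integrating fixed band windows.
# The study deconvoluted the spectra with multi-Gaussian fits; this
# window integration is only a rough stand-in. Data are synthetic.

shift = np.linspace(1000.0, 1800.0, 801)  # Raman shift [1/cm]
d_band = 0.8 * np.exp(-((shift - 1350.0) / 90.0) ** 2)  # synthetic D band
g_band = 1.0 * np.exp(-((shift - 1550.0) / 70.0) ** 2)  # synthetic G band
intensity = d_band + g_band

d_win = (shift > 1250.0) & (shift < 1450.0)
g_win = (shift > 1450.0) & (shift < 1650.0)

i_d = np.trapz(intensity[d_win], shift[d_win])
i_g = np.trapz(intensity[g_win], shift[g_win])
print(f"I_D/I_G ≈ {i_d / i_g:.2f}")  # larger ratio -> smaller sp2 clusters
```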
To investigate the temperature dependence of the electrical conductivity of the a-C:H films, Ti/a-C:H/Ti devices were fabricated on glass substrates (S9224, Matsunami Glass Ind., Ltd.; Osaka, Japan). The structure of the device is illustrated in Figure 2. The glass substrates were cut into 15 × 25 × 1.3 mm3 pieces and cleaned with pure water, ethanol and acetone in the same way as the Si substrates. Upper and bottom Au/Ti electrodes were deposited on the glass using DC magnetron sputtering. An Au plate (99.99%, JEOL Ltd.; Tokyo, Japan) with Φ 46.2 mm and Cu and Ti plates (99.5%, Kojundo Chemical Lab. Co., Ltd.; Sakado, Japan) with Φ 100 mm were used as targets. First, the lower titanium electrode was deposited on the glass substrate. Argon was introduced at a rate of 20 cm3/min into the chamber and the pressure was adjusted to 2 Pa. A DC of 0.4 A and 0.3 kV was applied to the Ti target to generate Ar plasma, and the Ti film was deposited on the glass substrate to form the lower electrode. The deposition duration was 50 min. After that, a stainless steel mask with a hole (Φ 18 mm in diameter) was attached, and an a-C:H film with Φ 18 mm was prepared on the Ti electrode under the conditions in Table 1. Then, a stainless steel mask with holes (2 mm diameter) was attached, and Ti and Cu films with Φ 2 mm were sequentially deposited above the a-C:H layer for 30 min, under the same conditions as the Ti layer fabrication, to form Cu/Ti electrodes. Finally, Au was deposited as an oxidation-protective film on the bottom Ti electrode and the upper Cu/Ti electrode by magnetron sputtering. The sputtering gas was Ar, the voltage was 1.2 kV, the current was 10 mA and the deposition time was 5 min. The sample shown in Figure 2 was thus obtained. The film thickness was measured by cross-sectional observation using a field emission scanning electron microscope (FE-SEM, JSM-7500F; JEOL Ltd.; Tokyo, Japan).
The device was mounted on the cold head of a 4 K Gifford-McMahon cryocooler (RDK-101D/CAN-11B; Sumitomo Heavy Industries, Ltd.; Tokyo, Japan) and was connected to an ultra-high resistance meter (R8340A, ADVANTEST Corp.; Tokyo, Japan). The I-V characteristic was measured at room temperature in the voltage range of 0 ± 0.5 V to confirm the formation of Ohmic contact between the a-C:H and the electrode system. The temperature dependence of the electrical conductivity of the a-C:H films was measured in the range of 30-300 K. The applied voltage was 0.1 V, and the temperature steps were 5 K per step (30-100 K) and 10 K per step (100-300 K). The current measurements were performed after the temperature became stable at the set points.
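The measurement grid described above can be generated directly from the stated step sizes (a minimal Python sketch):

```python
import numpy as np

# Temperature setpoints for the conductivity measurement: 5 K steps from
# 30-100 K and 10 K steps from 100-300 K, as described above.
setpoints = np.concatenate([np.arange(30, 100, 5), np.arange(100, 301, 10)])
print(setpoints)  # 30, 35, ..., 95, 100, 110, ..., 300
```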
Results
Figure 3 shows the C K-edge NEXAFS spectra of the a-C:H films. The analyses of the NEXAFS spectra showed that the sp2/(sp2 + sp3) ratio of carbon in the films changed from 68.6% to 69.8% depending on the deposition condition. The spectra were deconvoluted into multiple peaks, and these peaks are assigned to each structure as shown in Figure 3. The pre-edge resonance at a photon energy of 284.6 eV was assigned to the transition from the 1s orbital to the unoccupied π* orbitals that principally originate from the sp2 sites (C=C), and the value includes the contribution of sp sites (C≡C) if present [30]. The edge jump from 288.0 to 330.0 eV is related to direct ionization from the 1s orbital [31]. The other peaks at 286.6, 287.5, 288.8, 293.0 and 303.8 eV were attributed to the σ* (C-H), π* (C≡C), σ* (C-C), σ* (C=C) and σ* (C≡C) states, respectively [32]. The hydrogen content obtained from the GDOES analysis changed from 15.3 to 22.9 at.%. The hydrogen content of the films tended to increase as the concentration of hydrogen atoms in the source material increased.

Figure 4 shows the Raman spectra of the a-C:H films. The ID/IG ratio was calculated after deconvolution into the D and G bands by multi-Gaussian fitting. The structures of the a-C:H films are summarized in Table 2. These structures depended on the hydrogen content: the sp2/(sp2 + sp3) ratio decreased, and the ID/IG ratio increased, with increasing hydrogen content. These results indicate that hydrogen atoms in the films terminated dangling bonds on the surface of the sp2 clusters, and consequently the sp2 cluster size was reduced with increasing hydrogen content.

The temperature dependences of the electrical conductivity were measured for all a-C:H films. Figure 5 shows the dependences in the form of an Arrhenius plot. The temperature dependence of the conductivity changed at a temperature Tc. Above Tc, there was a linear relationship between log(σ) and 1/T; below Tc, the relationship was non-linear. As mentioned in the Introduction, a-C:H films exhibit two different conduction mechanisms. In this study, it was assumed that the conduction at temperatures above Tc is band conduction and the conduction at temperatures below Tc is VRH conduction.
The curve fitting of the measured Arrhenius plots was performed using the mixed form of Equations (1) and (2),

σ(T) = σb(T) + σh(T) = σ0 exp(−Ea/kBT) + σ00 exp[−(T0/T)^(1/4)]    (3)

and the electrical conduction parameters Ea, T0 and Tc for each a-C:H film were obtained. The values of Ea, T0, Tc and the electrical conductivity at 300 K, denoted σ300, are summarized in Table 3.
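A fit of this mixed model to conductivity-temperature data can be sketched as follows (synthetic data and starting values are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

KB = 8.617333262e-5  # Boltzmann constant [eV/K]

def mixed_model(t, sigma_0, e_a, sigma_00, t_0):
    """Band conduction plus Mott VRH: the mixed form of Eqs. (1) and (2)."""
    return sigma_0 * np.exp(-e_a / (KB * t)) + sigma_00 * np.exp(-(t_0 / t) ** 0.25)

def log_model(t, sigma_0, e_a, sigma_00, t_0):
    # Fit in log space so the tiny low-temperature values are not swamped
    return np.log(mixed_model(t, sigma_0, e_a, sigma_00, t_0))

# Synthetic "measured" Arrhenius data set over 30-300 K (hypothetical)
t_data = np.linspace(30.0, 300.0, 40)
true_params = (1e2, 0.30, 1e-1, 1e7)
noise = 1 + 0.02 * np.random.default_rng(0).standard_normal(t_data.size)
sigma_data = mixed_model(t_data, *true_params) * noise

popt, _ = curve_fit(log_model, t_data, np.log(sigma_data),
                    p0=(1e1, 0.2, 1e-2, 1e6), maxfev=20000)
print(dict(zip(("sigma_0", "E_a [eV]", "sigma_00", "T_0 [K]"), popt)))
```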
Discussion

Figure 6 shows the relationship between the film structure factors (sp2/(sp2 + sp3) ratio, ID/IG ratio and hydrogen content) and the electrical conduction characteristics (σ300, Ea, T0 and Tc). As shown in Figure 6a,b, with an increasing sp2/(sp2 + sp3) ratio, Ea and T0 show an increasing trend, and σ300 and Tc show a decreasing trend. In Figure 6c,d, with an increasing ID/IG ratio, Ea and T0 show a decreasing trend, and σ300 and Tc show an increasing trend. In Figure 6e,f, Ea tends to decrease as the hydrogen content increases, while σ300, T0 and Tc do not depend on the hydrogen content. It is necessary to consider that Ea is related to the band conduction and that T0 and Tc are mainly related to the VRH conduction. For VRH conduction at low temperatures, Equation (2) can be written as

σh(T) = σ00 exp[−(kBT0/kBT)^(1/4)]

In this equation, the dimension of kBT0 is energy, and kBT0 can be regarded as an energy barrier for carrier transport in VRH conduction. The height and thickness of the energy barrier between the hopping sites simultaneously affect the probability of the occurrence of carrier transport by the phonon-assisted tunneling process. In other words, an increase in T0 in the low-temperature region corresponds to an increase in the energy barrier and a decrease in the probability of carrier transport occurring, both of which are caused by an increase in the distance between the hopping sites.
Figure 6. Relationship between the structural factor (horizontal axis) and electrical conduction characteristics (vertical axes). (a) sp2/(sp2 + sp3) to σ300 or Ea, (b) sp2/(sp2 + sp3) to T0 or Tc, (c) ID/IG ratio to σ300 or Ea, (d) ID/IG ratio to T0 or Tc, (e) hydrogen content to σ300 or Ea, and (f) hydrogen content to T0 or Tc.
To explain the relationship between film structure and electrical properties, the carrier transport path in the microstructure of the a-C:H films was considered. On the basis of the results of the structural analyses, the structure model of an a-C:H film illustrated in Figure 7 was proposed. This model describes how, with increasing hydrogen content, the sp2 cluster size is reduced and the sp2/(sp2 + sp3) ratio becomes low. In this model, the carrier moves among the sp2 and sp3 clusters by band conduction or VRH conduction. The electrical conductivity of sp2 clusters in an a-C:H film is expected to increase with decreasing temperature, as happens with multi-layer graphene [33]. Thermally activated band conduction does not readily occur inside the sp3 clusters at low temperatures. The mechanism of decreasing electrical conductivity in the low-temperature region is hence assumed to be related to the carrier transport path at the interface between the sp2 clusters and the sp3 clusters. The carriers that have passed through the sp2 clusters are transported across dangling bonds (DBs) at the cluster interface by hopping.
In Figure 6b, the Mott's characteristic temperature T0 increases with the increase in the sp2/(sp2 + sp3) ratio. When the sp2/(sp2 + sp3) ratio is small and the sp2 cluster size is also small, the clusters are close to each other (Figure 7b). The DBs at the cluster interface are hence also close to each other, and thus the energy barrier between the hopping sites is thin. In contrast, when the sp2/(sp2 + sp3) ratio and the sp2 cluster size become large, the distance between the clusters increases and the hopping distance also increases (Figure 7a), which increases the energy barrier. Therefore, T0 increases as the sp2/(sp2 + sp3) ratio increases. The transition temperature Tc tends to decrease as the sp2/(sp2 + sp3) ratio increases, as shown in Figure 6b. The temperature Tc is at the intersection point of the σb and σh curves, and Tc is hence the temperature at which the main mechanism of carrier transport changes from band conduction to VRH conduction. In Figure 5, the σh curves shift downward as the sp2/(sp2 + sp3) ratio increases, and consequently the intersection with the σb curves moves to the right. In this case, the decreasing tendency of Tc results from the decrease in the pre-exponential factor σ00, i.e., in the VRH conduction itself. If T0 is taken into consideration, this suggests that the carrier transport at low temperatures is affected by the size and/or number density of the sp2 clusters and by the DB distribution at the cluster interface. Hence, it is reasonable to suppose that the increase in the sp2 cluster size reduces the surface area of the clusters, which results in a reduction of the total number of DBs responsible for VRH conduction. As a result, Tc decreases because σ00 decreases as the sp2/(sp2 + sp3) ratio increases. In Figure 6c,d, the changes in the electrical conduction characteristics with an increasing ID/IG ratio are opposite to those with an increasing sp2/(sp2 + sp3) ratio in Figure 6a,b. This is because the ID/IG ratio and the sp2/(sp2 + sp3) ratio are closely related to each other through the sp2 cluster size, with a negative correlation between them. In Figure 6e, the activation energy Ea decreases with increasing hydrogen content. When the hydrogen content is high, the sp2 clusters are smaller and there are many DBs and hydrogen atoms at the cluster edges. In a-C:H films, hole transport dominates, and the Fermi level lies nearer the valence band.
Additionally, the slope of the valence band tail is sharper than that of the conduction band tail [19]. As a result of the increase in the number of DBs, the localized states in the band gap become apparent, and the Fermi level approaches the valence band to satisfy charge neutrality. Thus, Ea decreases with the increasing termination of the sp2 clusters by hydrogen atoms. The conductivity at room temperature, σ300, does not show a clear dependence on the sp2/(sp2 + sp3) ratio or hydrogen content. Since the DB density in the films is large, the electrical conduction mechanism near room temperature is considered to be a superposition of band conduction and VRH conduction. Furthermore, the electrical conductivity of the a-C:H film is strongly affected by the state of the sp2 cluster interfaces in the film. Thus, σ300 may show a complicated tendency.
Conclusions
The structures of the a-C:H films depend on the hydrogen content, and the electrical conduction properties of the films change accordingly. The electrical conduction mechanism of all of the a-C:H films changed, at a transition temperature Tc, from VRH conduction at low temperatures to band conduction at high temperatures. The changes of T0 and Tc, which relate to VRH conduction, depend on the size of the sp2 clusters in the films. When the average size of an sp2 cluster is large, the distance between hopping sites is large and the number density of hopping sites is low. This results in an increase of T0 and a decrease of Tc. The changes in Ea, which relates to band conduction, depend on the hydrogen content. When the hydrogen content is high, the hydrogen atoms terminate dangling bonds at the edges of the sp2 clusters and within the sp3 domains. As a result, the Fermi level becomes closer to the valence band, and Ea decreases.
Effect of population level of various hybrid corn strains on growth and yield
The success of plant breeding efforts depends on the availability of genetic diversity in the population, so that breeders can select preferred genotypes; given the limited availability of optimal land, sub-optimal land must be utilized. Plant population levels were tested to obtain information about prospective maize varieties with high productivity on land with a limited/minimal level of sunlight. The higher the plant population, the lower the light received per plant; a shade-resistant variety is therefore needed. The purpose of this study was to determine the effect of the population level per unit area on various shade-resistant lines with high productivity, which is expected to be applicable to intercropping land (plantations/forests) with high shade levels. The study was conducted at the Indonesian Cereals Research Institute, Maros, in August-November 2016. The study used a randomized block design in the form of a split plot with 3 replications. The main plot was the plant population: a medium population at 70 cm x 20 cm spacing (71,428 plants/ha) and a high population at 70 cm x 15 cm spacing (95,238 plants/ha). The subplots were 10 prospective hybrid corn strains. The results showed that at 70 x 15 cm spacing the shade-resistant strain with the highest yield was 1044-9 x 1027-11 (7.75 t/ha), and at 70 x 20 cm spacing the highest yield was obtained by CY 15 x MAL 03 (10.07 t/ha).
Introduction
Agricultural extensification for maize is urgently required as fertile land shifts to non-agricultural purposes; maize cropping is therefore being directed to problem areas such as drought-prone land and land under hardwood stands [1]. The mass conversion of fertile land to non-agricultural uses thus calls for a breakthrough: developing shade-resistant maize genotypes for cultivation under stands of woody plants (e.g., teak and coconut) and increasing the plant population [2-4]. A requirement for successful breeding efforts is the availability of genetic diversity in the population, and how large that genetic diversity is [5,6]. If a population shows no genetic diversity, the diversity that is seen is phenotypic diversity caused by environmental factors [7].
Maize productivity is influenced by, among other factors, variety and environment. Production can be increased, among other ways, through spacing (population level per unit area) and by using types whose leaves above the cob are upright [8]. Upright leaf types intercept more sunlight than flat/drooping leaf types. Regulating the plant population by adjusting the spacing is one intensification program to increase crop production, but spacing indirectly affects the intensity of sunlight, which is the energy source for plant photosynthesis [9].
Increasing the plant population increases density per unit area of land. Several experiments have shown that increasing density affects the morphological and physiological characteristics of maize, including delaying anthesis and increasing the number of empty cobs. The increase in empty cobs results from poor synchronization of pollination between male and female flowers caused by plant spacing (population), and can also occur due to drought stress in that phase [10-12]. Plant population (spacing) is one of the factors that can influence yield; therefore, maize yield can be increased through planting density until the optimal population is reached [13]. Efforts to increase plant populations can be pursued by finding lines that are resistant to shade. Shade-resistant strains are expected to produce optimally with low photosynthesis under limited light. Photosynthesis is a basic process by which plants produce food, providing the energy for plant growth and development, and some of the products of photosynthesis are translocated into the seeds [14]. The purpose of this study was to determine the effect of the population level per unit area on various shade-resistant lines with high productivity, which is expected to be applicable to intercropping land (plantations/forests) with high shade levels.
Materials and Methods
The study was conducted from August to November 2016 at the Maros experimental station, Indonesian Cereals Research Institute. The study used a randomized block design in the form of a split plot with 3 replications. The main plot was the plant population: a medium population at 70 cm x 20 cm spacing (71,428 plants/ha) and a high population at 70 cm x 15 cm spacing (95,238 plants/ha). The subplots were 10 prospective hybrid corn strains. Two seeds were planted per hole, and at 10 days the seedlings were thinned to one plant per hill. The plot size for each treatment was 2.8 m x 6 m.
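As a quick check, the stated populations follow directly from the plant spacing (a minimal Python sketch; 1 ha = 10,000 m2):

```python
# Plant population per hectare from row spacing and in-row spacing.
# Values are truncated to whole plants, matching the reported figures.
for row_cm, plant_cm in [(70, 20), (70, 15)]:
    plants_per_ha = int(10_000 / ((row_cm / 100) * (plant_cm / 100)))
    print(f"{row_cm} cm x {plant_cm} cm -> {plants_per_ha:,} plants/ha")
```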
The fertilizer dosage used was 400 kg/ha Phonska (NPK) and 400 kg/ha urea. Fertilization was done twice: all of the Phonska was given at 7-10 days after planting, and the urea at 40 days after planting. For maintenance, weeds were controlled mainly by spraying the herbicide Calaris/Chonvey at a dose of 2.0 l/ha before the first fertilization. Weeding and earthing up before the second fertilization were done manually along each row of plants.
The observed data were: plant height (30 and 75 days after planting), height of the cob placement (75 days after planting), leaf angle (75 days after planting), leaf chlorophyll value (30 and 75 days after planting), leaf length, width and number (75 days after planting), flowering age (male and female), yield (t/ha) and yield components (seed yield, cob length and diameter, and number of rows and of seeds per row).
Vegetative and Generative Characters
The analysis showed that, at a spacing of 70 x 15 cm, plant height at 30 days after planting differed between lines, but not significantly, indicating that the lines still had a similar capacity for early height growth. The tallest plants were of strain AMB 07 x CML 161 (54.72 cm) and the shortest of strain MAL 3 x CY 4 (36.66 cm) (Table 1).
Plant height and the height of cob placement at 75 days after planting differed significantly between lines at 70 x 20 cm spacing. Across the various lines and spacings, the values showed significant differences in plant morphology for certain lines. This shows that the lines responded strongly to plant spacing, in addition to being influenced by the genetic factors of each genotype. The tallest plants at 75 days after planting were of line G 02 x 5 (211.66 cm) and the shortest of MAL 3 x CY 4 (169.11 cm), while the highest cob placement was for G 02 x 5 (91.44 cm) and the lowest for MR 12 x MAL 04 (66.33 cm) (Table 1).
The analysis showed that the leaf angle differed significantly among the various lines at a spacing of 70 x 20 cm. This shows that the lines have different morphological characteristics, influenced by the genetic nature of the plants and the density of the plant population. Plants with a small leaf angle will intercept more solar radiation than plants with a large leaf angle. A low leaf angle gives weeds a greater opportunity to grow, but the population per hectare can be increased. Regulating plant density aims to minimize competition between plants so that the canopy and roots can use the environment optimally; however, overly dense planting will reduce yields because of competition for nutrients, water, solar radiation and growing space, reducing the number of seeds per plant [15,16]. The largest leaf angle was for G 02 x 7 (62.50°) and the smallest for AMB 07 x CML 161 (42.88°) (Table 1).
The leaf chlorophyll values at 30 days after planting ranged from 43.71 to 50.32 units across the lines, and the differences between lines were small, indicating that the plants' ability to absorb nutrients was still similar (Table 1). The chlorophyll values at 75 days after planting were higher than at 30 days for every line. The highest value at 75 days after planting was for strain MAL 01 x 4 (57.89 units) and the lowest for MAL 3 x CY 4 (52.24 units) (Table 1). The analysis showed that the leaf chlorophyll value at 75 days after planting differed very significantly, showing that the plants' ability to absorb the nutrients available in the soil greatly affected the leaf chlorophyll value. The level of ability and need differs depending on the genotype and strain, thus producing differences in chlorophyll value.
The analysis showed that leaf length and width at 75 days after planting differed significantly. This shows that each strain has different morphological characteristics, which are affected by maturity age and planting distance. Yulisma (2011) reported that, in general, the morphological differences between late- and early-maturing varieties include plant height and leaf length and width. The longest leaves were of G 02 x 5 (83.77 cm) and the shortest of MAL 8 x MAL 01 (67.44 cm), while the widest leaves were of MAL 01 x 4 (9.48 cm) and the narrowest of MR 12 x MAL 04 (7.23 cm) (Table 1).
The number of leaves at 75 days after planting differed significantly among the lines, because the number of leaves is affected by the morphological nature of the plant. The largest number of leaves was for 1044-9 x 1027-11 (13.66) and the smallest for MR 12 x MAL 04 (11.88) (Table 1). In general, the number of leaves correlates with the number of internodes and with plant height, which affect seed yield. The number of leaves in the C3 population varied between 10 and 14 and correlated closely and positively with yield and with the broad-sense heritability value; therefore, the number of leaves can be used in selection to improve yield [17-19].
The male and female flowering ages differed significantly among the lines, showing that flowering age is influenced by the genetic traits of the strain and by population density per hectare. The latest male flowering was for B 11 x 11 (55 days after planting) and the earliest for MR 12 x MAL 04 (49.33 days after planting), while for female flowering the latest was MAL 3 x CY 4 (58.66 days after planting) and the earliest MAL 8 x MAL 01 (52 days after planting) (Table 1).
Criteria for assessing the level of population diversity based on the coefficient of genetic diversity are: low (<25%), rather low (25% to <50%), fairly high (50% to <75%) and high (≥75%) (Moedjiono and Mejaya, 1994, in [20]). Table 1 shows that the coefficients of diversity ranged from 2.09% to 18.70% for the vegetative and generative characters, which falls into the low diversity class. The coefficient of diversity is a measure of the diversity of the characters observed in a population [21]. The analysis showed that plant height at 30 days after planting with 70 x 20 cm spacing differed between lines, but the difference was not significant. This shows that at this spacing the lines still had almost the same ability, depending on the genetic nature of each line. In contrast, plant height at 75 days after planting differed significantly between lines, showing that the genetic and morphological differences between the lines had become apparent. Densely planted maize receives less radiation, which triggers cell elongation through several metabolic pathways, so that plant stems grow taller than those of plants receiving sufficient light [22-25]. The tallest plants at 30 days after planting were of line G 02 x 7 (50.55 cm) and the shortest of MAL 3 x CY 4 (25.16 cm), while at 75 days after planting the tallest were of G 02 x 7 (195.33 cm) and the shortest of MAL 3 x CY 4 (25.16 cm) (Table 2).
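These classification thresholds can be captured in a small helper function (a minimal Python sketch using the thresholds stated above):

```python
def classify_diversity(cv_percent: float) -> str:
    """Classify a coefficient of genetic diversity (%) per the stated criteria."""
    if cv_percent < 25:
        return "low"
    if cv_percent < 50:
        return "rather low"
    if cv_percent < 75:
        return "fairly high"
    return "high"

# The reported coefficients (2.09-18.70%) all fall in the lowest class
for cv in (2.09, 18.70):
    print(f"{cv:.2f}% -> {classify_diversity(cv)}")
```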
The highest cob placement at 75 days after planting with 70 x 20 cm spacing was for G 02 x 5 (93.50 cm) and the lowest for MR 12 x MAL 04 (65.77 cm) (Table 2). The analysis showed that the height of cob placement differed between lines, but not significantly. In general, the height of the cob placement correlates with plant height: the greater the plant height, the higher the position of the cob.
The analysis showed that the leaf angles of the various strains differed significantly. The largest leaf angle was for G 02 x 7 (60°) and the smallest for AMB 07 x CML 161 (41°) (Table 2). The leaf angle affects the plants' interception of sunlight and their level of nutrient absorption. A planting distance that is too wide not only reduces the plant population per unit area but also allows more direct sunlight to reach the soil surface, so that nutrients are lost through evaporation and leaching [13]. The analysis showed that leaf chlorophyll at 30 and 75 days after planting with 70 x 20 cm spacing differed, but not significantly among the lines, showing that each line still had almost the same ability. The highest chlorophyll value at 30 days after planting was for G 02 x 5 (52.55 units) and the smallest for MR 12 x MAL 04 (46.26 units); at 75 days after planting the highest was for MAL 8 x MAL 01 (58.73 units) and the smallest for MAL 01 x 4 (53.13 units) (Table 2).
The analysis showed that leaf length and width differed very significantly, showing that the plant characters of the various lines responded optimally to plant spacing in accordance with each plant's own capacity. The longest leaves were of G 02 x 5 (86.83 cm) and the shortest of MR 12 x MAL 04 (66.83 cm); the widest leaves were of MAL 01 x 4 (10.50 cm) and the narrowest of MAL 8 x MAL 01 (7.30 cm) (Table 2). The number of leaves at 75 days after planting with 70 x 20 cm spacing differed significantly, showing that the number of leaves is influenced by the genetic nature of the plant, which ultimately affects seed yield. Sudika et al. (1998) reported that the number of leaves in the C3 population varied from 10 to 14 and was closely and positively correlated with yield, with a broad-sense heritability of 77.75%, so the number of leaves can be used as a selection criterion to improve yield.
The analysis showed that with 70 x 20 cm spacing the male and female flowering ages differed significantly, showing that each genotype has different traits according to the plant's own response to spacing. Plant spacing affects flowering age, depending on the level of population density, owing to competition for sunlight and for nutrient availability in the soil. Similarly, increasing density per unit area can change the morphological and physiological characteristics of maize, including delays in anthesis and an increase in the number of non-seeded cobs, which is positively correlated with increasing plant population density [16]. The latest male flowering was for MAL 3 x CY 4 (55.33 days) and the earliest for MR 12 x MAL 04 (48.66 days); for female flowering the latest was MAL 3 x CY 4 (57.66 days) and the earliest MR 12 x MAL 04 (51.00 days) (Table 2). The differences in these parameters are caused by environmental factors and by strain type. The effect of the varieties on the observed variables was due to differences in the genetic factors of each maize variety and in their ability to adapt to the environment [26].
Yield and Yield Components
The analysis of the various lines showed that the yield of dried seeds per hectare and the seed yield differed, but not significantly between lines. This shows that, at a spacing of 70 x 15 cm, several lines obtained almost the same yield per hectare and seed yield, owing to a similar ability of the plants at this spacing to use sunlight and soil nutrients, in addition to genetic factors. The highest yield was for 1044-9 x 1027-11 (7.75 t/ha) and the smallest for B 11 x 11 (5.04 t/ha), while the highest seed yields were for 1044-9 x 1027-11 and MAL 01 x 4 (0.75%) (Table 3).
The analysis showed that the 100-seed weight, cob length, cob diameter, number of rows per cob and number of seeds per row differed significantly. This shows that, across the various lines, the 70 x 15 cm spacing had a significant effect on these parameters. Plant density affects the physiological properties of plants, including seed weight, cob length, cob diameter, number of rows and number of seeds per row. The highest 100-seed weight was for MAL 01 x 4 (37.85 g) and the smallest for MAL 3 x CY 4 (25.19 g); the longest cob was of MR 12 x MAL 04 (15.42 cm) and the shortest of B 11 x 11 (12.96 cm); the largest cob diameter was for 1044-9 x 1027-11 (4.59 cm) and the smallest for B 11 x 11 (4.20 cm); the largest number of rows was for 1044-9 x 1027-11 (15.22); and the largest number of seeds per row was for 1044-9 x 1027-11 (31.61) and the smallest for B 11 x 11 (24.66) (Table 3). Numbers followed by the same letter are not significantly different at the 5% level according to Duncan's test.
Yield in tons per hectare differed significantly, showing that at a spacing of 70 x 20 cm each line gave optimal yield according to its capacity. At this spacing each plant can achieve optimal individual growth, so competition for sunlight and soil nutrients is low and seed yield is optimal for each line. Sparse spacing (low population) improves individual plant growth, but overly wide spacing not only reduces the plant population but also reduces the utilization of sunlight and nutrients, because some of the light falls directly on the soil surface and nutrients are lost through evaporation and leaching (Yulisma, 2011). The highest yield was AMB 07 x CML 161 (9.74 t/ha) and the lowest MR 12 x MAL 04 (4.99 t/ha) (Table 4).
Seed yield percentage did not differ significantly, showing that at a spacing of 70 x 20 cm each line gave a high seed yield percentage according to its capacity; values ranged from 0.73 to 0.76% (Table 4). The 100-seed weight, cob length and cob diameter differed significantly, showing that the 70 x 20 cm spacing significantly affected these parameters, but it did not significantly affect the number of rows per cob or the number of seeds per row. The 100-seed weight is also influenced by genotype/line and by environmental factors: at high plant populations, competition between plants is greater, which affects seed size [27][28][29][30]. The highest 100-seed weight was G 02 x 7 (38.46 g) and the lowest MR 12 x MAL 04 (24.42 g); the longest cob was MAL 01 x 4 (16.62 cm) and the shortest MR 12 x MAL 04 (13.85 cm); the largest cob diameter was G 02 x 7 (4.42 cm) and the smallest MR 12 x MAL 04 (4.00 cm); the largest number of rows was 1044-9 x 1027-11 (14.44 rows) and the smallest G 02 x 7 (12.89 rows); and the largest number of seeds per row was AMB 07 x CML 161 (33.33 seeds) and the smallest MR 12 x MAL 04 (28.50 seeds) (Table 4). Numbers followed by the same letter are not significantly different at the 5% level according to the Duncan test.
Conclusion
Shade-resistant genotypes were significantly affected by plant spacing in the growth-phase parameters: the higher the plant population, the higher the parameter values tended to be, owing to the level of competition for sunlight and soil nutrients. Plant spacing also significantly affected the yield components (t/ha) of shade-resistant genotypes: in general, the higher the population, the lower the yield, and the percentage decrease in yield (through smaller cobs and seeds) depended on the line. The highest yield at 70 x 15 cm plant spacing was obtained by the tolerant genotype 1044-9 x 1027-11 (7.75 t/ha), and the highest yield at 70 x 20 cm plant spacing by CY 15 x MAL 03 (10.07 t/ha).
|
2020-06-25T09:07:34.650Z
|
2020-06-20T00:00:00.000
|
{
"year": 2020,
"sha1": "f7911ee661678c224e3e9ac21afe16c8af82f587",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/484/1/012067",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "bb621008e9c3f2cb93b7ffa66ba9c9a976815ae3",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
}
|
210615994
|
pes2o/s2orc
|
v3-fos-license
|
Experimental Investigation of a Downwind Coned Wind Turbine Rotor under Yawed Conditions: Preliminary Results
The growing number of Floating Offshore Wind Turbine (FOWT) concepts that utilize a single-point mooring and therefore rely on the self-alignment capability of the wind turbine (e.g. SCD nezzy or SelfAligner by CRUSE Offshore) demands an extension of the simulation methods used for their development. A crucial issue for these concepts is the accurate prediction of the forces and moments that contribute to self-alignment. In contrast to the well-studied behaviour of torque and thrust, the yaw moment and lateral forces on a rotor under yawed conditions have not been the focus of previous experimental tests for the validation of aerodynamic simulation tools. In the present work, a model turbine equipped with a 6-axis force/moment sensor to capture the complete load on the rotor is presented. A detailed study of the two-bladed model turbine's aerodynamic behaviour under yawed conditions was carried out over a range of yaw angles between -55° and +55° in steps of 1 - 2.5°.
Introduction
The large number of recently erected prototypes and funded demonstration projects in the field of floating offshore wind turbines worldwide shows the extent to which governments and industry are interested in this technology. Two key aspects are the reason for this interest: the low dependency of costs on water depth and the potentially easier installation procedure in comparison to bottom-fixed offshore wind turbines. Most current designs utilize a conventional offshore wind turbine mounted on an individually designed floating platform, like the Hywind Spar 1 or the Floatgen Demonstrator 2 . Another approach makes use of the floating platform's manoeuvrability instead of a yaw bearing at the tower top to align the rotor with the wind. Several concepts of these self-aligning floating wind turbines have been presented in the past (e.g. SCD nezzy 3 , HEXICON 4 , EOLINK 5 or SelfAligner 6 by CRUSE Offshore). The main engineering challenge in this case is to maintain proper alignment of rotor and wind direction even in unfavourable environmental conditions like wind-wave or wind-current misalignment. Therefore, in addition to power and thrust, the yaw moment and lateral force of the wind turbine rotor need to be predicted accurately over a wide range of yaw angles in the design process. In contrast to this need, conventionally used blade element momentum theory based methods, and also more complex methods, have not been validated with respect to these requirements, as a detailed set of validation data has not been published until now. In order to provide such validation data, an experimental measurement campaign with close attention to the rotor yaw moment was conducted in the wind tunnel of the Hamburg University of Technology (TUHH). The forces and moments acting on a two-bladed rotor with a downwind cone angle of 5° were studied in a range of yaw angles between -55° and 55°. This paper presents an overview of the setup of the model test, the design and manufacturing of the model turbine, as well as an analysis of the results of the measurement campaign. The scientific context of previous experiments is given in the following section.
Previous investigations
A considerable number of wind turbine experiments under yawed conditions have been conducted in wind tunnel environments over the past 40 years. However, only a few of them paid attention to the yaw moment. Micallef and Sant give an overview of the research activities and most of the relevant experimental studies in this field going back to 1982 in [1]. Most of the earlier studies analysed the flow field in the wake in order to understand the physics of yawed inflow. Consequently, only a few investigations monitored the loads on the rotor. Two of them are the well-known NASA-Ames and MEXICO experiments, which are unique in terms of their small scaling ratio and elaborate testing methodology. The blade roots of both turbines were equipped with strain gauges [2][3], which were used to measure the blade loads as well as torque and yaw moment. Alternatively, torque, thrust and yaw moment could be computed based on the surface pressure at different blade sections, recorded using a large number of pressure sensors on the blade surface. Furthermore, Maeda et al. [4] conducted an experimental study using a model turbine with a 2.4 m rotor diameter that focused on measurements of the blade pressure but additionally recorded the torque using an unspecified 'torque meter'. Thrust and yaw moment could therefore be obtained from the local blade pressure measurements but were not computed or published for this experiment. Krogstad and Adaramola [5] also utilized a torque sensor integrated in the shaft of their model turbine with a 0.9 m rotor diameter. Additionally, a six-component balance located below the wind tunnel floor was used to measure the thrust force. An additional test was carried out without the blades to investigate the contribution of the drag force acting on the comparatively large tower and nacelle to the thrust force, and the results were used to correct the thrust. The yaw moment can also be measured using the six-component balance, but strong deviations due to the tower and nacelle drag forces need to be taken into account.
Considering the above-mentioned studies, only the NASA-Ames and MEXICO experiments deliver reliable data for the absolute yaw moment. During the tests at NASA-Ames, a moderate downwind cone angle of 3.4° was applied to the rotor. Measurements at 0°, 10°, 20°, 45° and 90° yaw angle were conducted, and therefore a coarse picture of the power, thrust and yaw moment of a downwind coned rotor could be drawn from the measurements. However, to the authors' best knowledge, an investigation focussing on the yaw moment has not been published in the past. Two different cone angles in upwind and downwind configuration were considered in water tank experiments by Kress et al. [6]. Similar to the present work, special attention was paid to the yaw moment, which was investigated at -10°, 0° and 10°. In addition to the cone angle, a tilt angle of 8° was applied in order to account for its effect on the yaw moment as well.
Downwind coned rotors are expected to deliver a higher stabilising yaw moment in comparison to non-coned or upwind coned rotors. Fundamental research on downwind turbines has been published in the past [7], but as described above, experimental investigations with regard to the yaw moment of downwind coned rotors are sparse. Therefore, the present investigation aims at providing validation data to allow a deeper understanding of the underlying effects in the future.
Experimental setup
The model tests were conducted in the wind tunnel of the Hamburg University of Technology, which has a test section 2 m in height and 3 m in width. In order to keep blockage effects on the measurements small, a rotor diameter of 0.925 m and a comparatively low blockage ratio of 11.2 % were chosen. Extreme values for the blockage are 17.6 % for the MEXICO rotor, where blockage effects were present but limited [8], and 8.8 % for the NASA-Ames experiment, where blockage effects were found to be negligible [9][10]. Following Krogstad and Lund [11], it is assumed that a blockage ratio slightly higher than in the NASA-Ames experiment will still have very limited influence on the measured quantities at moderate thrust coefficients. The downwind cone angle of 5° was applied to achieve a high yaw moment while staying within the limitations of a realistic commercial design. A two-bladed configuration was used instead of a three-bladed one, which leads to a higher local chord length and therefore to a higher local Reynolds number at the blade sections.
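As a quick arithmetic check, the quoted blockage ratio follows directly from the rotor diameter and test-section dimensions given above; a minimal sketch:

```python
# Blockage ratio: rotor swept area over the 2 m x 3 m test-section area.
import math

rotor_area = math.pi * (0.925 / 2) ** 2  # ~0.672 m^2
section_area = 2.0 * 3.0                 # test section: 2 m high, 3 m wide
print(f"blockage = {rotor_area / section_area:.1%}")  # ~11.2 %
```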
The blade design was driven by a low sensitivity to changes in the Reynolds number and a high peak power coefficient, in order to mitigate fluctuations due to viscous effects and achieve realistic operating conditions. A high rotational speed ensures a high Reynolds number and therefore a low sensitivity to changes in the local flow conditions, while the high centrifugal loads due to the rotational speed lead to a flap-wise blade bending moment when considering a coned rotor. Therefore, the selection of the airfoil and the rotational speed of the rotor is driven by a compromise between airfoil thickness, which strongly influences the dependency of the airfoil characteristics on Reynolds number changes, and the Reynolds number itself. The SD7062 airfoil meets these requirements, as its thickness is 14 % and its drag coefficient, as well as its sensitivity to changes in the Reynolds number, is comparatively low. In order to realise a finite thickness at the trailing edge, the airfoil was cut at 96 % of the chord length and then scaled to the original chord length again. To avoid undefined airfoil shapes due to blending between two different airfoil geometries, only one airfoil shape is used over the blade radius. Similarly, the distributions of chord length and twist (see table A1) form a compromise between a possibly high chord length, which directly contributes to the Reynolds number, and a high lift-to-drag ratio, which is necessary to achieve a high power coefficient. The considerations above lead to a rated rotational speed of 1200 RPM, a rated tip speed ratio of 6.25 and a local Reynolds number of 1.5 x 10^6 at the tip. Figure 1 shows a photograph of the blades mounted on the turbine. The local Reynolds number, which is calculated from the free-stream velocity and the speed of the blade section, is largely maintained over the span, decreasing only slightly down to 30 % of the rotor radius and falling to 1.0 x 10^6 in the root region (see Figure 2). The rotation of the blades in combination with the cone angle induces extreme loads on the structure: at 0.5 R (radius), approximately 30 times the gravitational acceleration acts on the blade in the flap-wise direction. To withstand these accelerations, the blades were manufactured from a carbon fiber prepreg material, as an extremely lightweight and rigid material was demanded. A CNC-milled hard foam core and a prepreg shear web were inserted into the blade and tempered together with the hull in an aluminium mould. Additionally, an aluminium part with threads at the blade root was inserted and aligned in the mould. Due to the high risk of undesired twisting, which may occur from the heating of the anisotropic material, it was necessary to conduct a 3D scan of the blades. Both blades showed a twist deviation below 0.2° from the original model. Additionally, a bending in the flap-wise direction occurred with a magnitude below 0.3 % of the blade length at the blade tip. A second scan under operation-equivalent loading showed a higher flap-wise bending of approximately 0.6 % of the blade length but no significant bend-twist coupling.
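Several of these figures follow from one another. A minimal sketch that backs out the tip speed, the implied rated wind speed and the flap-wise load factor at 0.5 R from the quoted rotational speed, tip speed ratio, diameter and cone angle (standard gravity is assumed for the conversion to g):

```python
# Sketch: rated operating point implied by the quoted figures
# (1200 RPM, tip speed ratio 6.25, 0.925 m diameter, 5 deg downwind cone).
import math

omega = 1200 * 2 * math.pi / 60  # rotor speed, rad/s (~125.7)
R = 0.925 / 2                    # rotor radius, m
tip_speed = omega * R            # ~58.1 m/s
v_inflow = tip_speed / 6.25      # implied rated wind speed, ~9.3 m/s

# Flap-wise acceleration at 0.5 R: the component of the centrifugal
# acceleration introduced by the 5 deg cone angle, as a multiple of g.
a_flap = omega**2 * (0.5 * R) * math.sin(math.radians(5))
print(f"flap-wise load factor at 0.5 R: {a_flap / 9.81:.0f} g")  # ~32 g
```

The value of roughly 32 g obtained this way is consistent with the "approximately 30 times the gravitational acceleration" quoted above.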
The calibration of the sensor was carried out with 12 different load vectors in the expected field of operation in order to capture cross-talk effects, as not only the aerodynamic loads but also high loads due to a small rotor imbalance are expected. The analysis of the calibration results shows that the expected measurement uncertainty is 0.128 N in thrust, 0.024 Nm in torque, 0.008 Nm in yaw moment and 0.024 N in lateral force (at a confidence level of 95 %). Relative to the loads at rated conditions, this corresponds to a measurement uncertainty of 0.5 % in thrust and 2.3 % in torque and power. Yaw angles from -55° to 55° were adjusted by an underfloor turntable with an uncertainty below 0.25°. Two steps were conducted for the initial alignment of the 0° position: first, the rotor axis was aligned parallel to the wind tunnel floor using a digital level; second, the blade tips were aligned with a line laser that projected a plane perpendicular to the wind tunnel floor and the inflow direction. A maximum uncertainty of 0.5° is expected, as the wind tunnel floor cannot be assumed to be perfectly even.
All signals from the load cell were recorded over time and a low-pass filter with a corner frequency of 40 Hz was applied. Extreme outliers were removed from the signal using a standard-deviation-based filter. As the native coordinate system of the sensor is positioned on its top (see Figure 3), a coordinate transformation was applied. The system was translated along the z-axis (perpendicular to the wind tunnel floor) such that the x-axis coincides with the rotor axis, pointing in the downwind direction. This transformation is based on the assumption that the lateral force on nacelle and rotor can be described as a single vector whose point of application lies on the rotor axis, which seems reasonable as the nacelle is rotationally symmetric. A further translation of the coordinate system into the rotor centre is not applicable, as the exact point of application of the lateral force in the x-direction is unknown. Therefore, the origin of the coordinate system to which the presented loads refer is located at a distance of 80.6 mm from the rotor centre in the downwind direction, while the x-axis is identical to the rotor axis. Finally, a mean value was calculated for all signals in a window of 1 s, which corresponds to 20 rotor revolutions.

Figure 3: Sketch of the model turbine.

Figure 4 illustrates the measured power, thrust and yaw moment coefficients as well as the lateral force of the model turbine at rated conditions (see Table 1) and different yaw angles. A selection of the measured values can be found in table A2. The yaw angles range from -55° to 55° and are distributed symmetrically (apart from 3 points missing in the positive region). In sum, the measurements contain 54 different yaw angles. In the diagrams, black crosses connected with lines indicate the measurement points, whereas blue crosses form the reflection of the measurement points at the y-axis. In the case of the yaw moment, the blue crosses are additionally reflected at the x-axis in order to account for the change of sign at the origin. Apart from the lateral force, all curves show a very smooth behaviour. When comparing the measurement points of the power and thrust coefficients with their reflections, the differences are barely visible.
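The post-processing chain described above can be sketched as follows; the Butterworth type and order, the 3-sigma outlier threshold and the sample rate are assumptions, as none of these is specified here:

```python
# Sketch of the described post-processing: 40 Hz low-pass, standard-deviation-
# based outlier removal, 1 s averaging, and the translation of the sensor
# moments along the z-axis. Filter type/order, threshold and sample rate are
# assumptions, not specified in the paper.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # assumed sample rate, Hz

def mean_load(signal: np.ndarray) -> float:
    b, a = butter(4, 40.0, btype="low", fs=FS)   # 40 Hz corner frequency
    x = filtfilt(b, a, signal)                   # zero-phase filtering
    keep = np.abs(x - x.mean()) < 3 * x.std()    # drop extreme outliers
    return float(x[keep][: int(FS)].mean())      # mean over a 1 s window

def translate_moment(M: np.ndarray, F: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Moment about a new origin: M' = M + r x F, with r the vector from the
    new origin to the sensor origin."""
    return M + np.cross(r, F)
```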
Results
In Figure 4 (c), the yaw moment coefficient and the absolute yaw moment are shown on the primary and the secondary y-axis. The yaw moment coefficient follows the idea of the thrust coefficient and is defined as C_Mz = M_z / (1/2 · ρ · v² · A · R), where M_z denotes the yaw moment with respect to the z-axis, and ρ, v, A and R denote the air density, wind speed, rotor swept area and rotor radius, respectively. Analogous to the thrust coefficient, the denominator forms an ideal reference value, defined as the yaw moment arising from the maximum thrust force (corresponding to a thrust coefficient of 1) acting on a single blade tip when the blade is in a horizontal position. With this definition, the authors do not follow the suggestion of Kress et al. [6], as their definition depends on the blade surface rather than on the rotor swept area, which does not allow two different rotor designs to be compared directly with respect to the yaw moment. The measured yaw moment can be described by a linear function with no offset and a slope of approximately 0.011 Nm per degree from -15° to 15°. With increasing yaw angle, the slope decreases until a maximum is reached between 37.5° and 40°. Comparing the measured values with their reflection at the y-axis, larger deviations than those observed in torque and thrust occur. The lateral force shown in Figure 4 (d) essentially behaves like a linear function, even though the course of the curve is less smooth in comparison to the other quantities. A notable offset of approximately -0.3 N from the origin can be observed at 0° yaw angle. In comparison to the thrust force, the lateral force is small, which shows that the force acting on the rotor is aligned with the rotor axis but not with the wind direction. As the measurements contain forces on the rotor as well as on the nacelle, it is assumed that the main contribution to the lateral force arises from the nacelle drag force. However, the offset at 0° still cannot be explained in this way, as the drag force on the nacelle acting in the direction of the lateral force should be zero at this angular position.
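A minimal sketch of this coefficient; the air density and the rated wind speed below are assumed placeholder values (the actual rated conditions are listed in Table 1, which is not reproduced here):

```python
# Sketch: yaw moment coefficient as defined above,
# C_Mz = M_z / (0.5 * rho * v^2 * A * R). RHO and V are assumptions.
import math

RHO = 1.225            # assumed air density, kg/m^3
V = 9.3                # assumed rated wind speed, m/s
R = 0.925 / 2          # rotor radius, m
A = math.pi * R**2     # rotor swept area, m^2

def c_mz(M_z: float) -> float:
    return M_z / (0.5 * RHO * V**2 * A * R)

# e.g. using the measured small-angle slope of ~0.011 Nm per degree:
print(c_mz(0.011 * 10))  # yaw moment coefficient at ~10 deg yaw
```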
When interpreting the results of the presented measurement campaign, it should be noted that although the experiments were carried out under controlled conditions in the wind tunnel, some uncertainties cannot be avoided and must therefore be taken into account. A systematic influence on the measurements is the contribution of the nacelle drag force to the yaw moment. As it is unknown at which position the resulting vector of the drag force on the nacelle acts, the rotor yaw moment is superposed by a moment with a known force but an unknown lever arm. The mounting position of the nacelle on the sensor in the x-direction is quite near its centre, so the lever arm can be assumed to be short in comparison to the nacelle length. Therefore, a deviation of the measured total yaw moment from the rotor yaw moment in the single-digit percentage range must be considered when interpreting the results.
Summary and Conclusions
A description and analysis of an experimental study on a downwind coned, two-bladed rotor under yawed conditions was presented. The model description showed that a comparatively high and nearly constant Reynolds number over the blade span and an accurate manufacturing of the blades could be achieved. Results for the thrust, power and yaw moment coefficients showed symmetric and smooth behaviour over the yaw angle. In the course of the lateral force, an offset was observed. The measured yaw moment is not fully equivalent to the rotor yaw moment; the reason is that the coordinate system used for the measurements is not located exactly in the centre of the rotor, but is displaced in the axial direction. Finally, no other significant uncertainties were observed in the measurement data.
A number of findings regarding the quality and results of the conducted experimental investigation, based on the observations described in this work, are elaborated below. The symmetry in the measured power and thrust coefficients, as well as the zero crossing of the yaw moment at approximately 0° yaw angle, confirms that the applied adjustment procedures were sufficient. The measurements clearly show that the direction of the rotor thrust force is aligned with the rotor axis but not with the wind direction, which sometimes leads to misunderstandings. When comparing the measured yaw moment to simulation results, a careful analysis needs to be undertaken: on the one hand, the general shape and the characteristic point of maximum yaw moment can be compared with high accuracy; on the other hand, the absolute values may contain an uncertainty in the single-digit percentage range.
As all relevant information needed for numerical simulations is given, this work offers a set of validation data open to all interested researchers. The special focus of this data set is the consideration of the yaw moment, the lateral force and a downwind coned rotor at different yaw angles. The authors therefore hope to contribute to the validation of current numerical models under yawed conditions.
|
2019-06-06T22:42:57.396Z
|
2019-10-01T00:00:00.000
|
{
"year": 2019,
"sha1": "1e7b419c347346d3f3e89f51d73eec3f167d3f52",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1356/1/012018",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "eb007b4ce9cb81dc93d8f66e007f1fdf960b9db6",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
235377647
|
pes2o/s2orc
|
v3-fos-license
|
Blended teaching versus traditional teaching for undergraduate physiotherapy students at the University of the Witwatersrand
Background Shifting from face-to-face teaching to incorporating technology may prepare students better for future work as health professionals. Evidence of blended teaching's effect on the academic performance of undergraduate physiotherapy students is scarce. Objective The purpose of our study was to determine students' theoretical and clinical performance in a blended teaching module compared to their own performance in two knowledge areas taught face to face, and student perceptions of blended teaching in the third-year physiotherapy curriculum. Methods The cross-sectional study design included 47 third-year physiotherapy students. The orthopaedic module was delivered using a blended teaching approach in two consecutive semesters, whilst two other physiotherapy knowledge areas, neuromusculoskeletal and cardiopulmonary, in the same semesters were delivered face to face. Theoretical and clinical performances of students were compared for significance and effect. Students were assessed on their theoretical and clinical knowledge in all areas using the same assessment methods. The students (n = 43) also completed a survey on their blended teaching experience. Results Significantly higher theoretical marks for orthopaedics were found compared to neuromusculoskeletal and cardiopulmonary for both semesters, with a large positive effect (average Cohen d = 4.44) of blended teaching on theoretical examination performance; there was no statistically significant difference in clinical performance. Students felt engaged in the blended teaching process, and 72% preferred blended teaching over face-to-face teaching or online delivery. Conclusion Blended teaching improved the theoretical marks, demonstrating improved knowledge acquisition, but not clinical performance. Clinical implications The study contributes to the knowledge base of blended learning in Health Science Education in South Africa. The authors identified a gap: future studies should investigate the effect of blended learning on clinical performance outcomes as a continuation of this one.
Introduction
The coronavirus disease 2019 (COVID-19) pandemic caused an instant shift in teaching across the world: a shift from traditional face-to-face teaching to e-learning or a blended teaching approach. Even prior to COVID-19, blended learning had grown rapidly in education (Vallée et al. 2020).
To adequately prepare students for a changing workforce, educators need to reflect on their teaching strategies to incorporate the 21st-century learning skills of critical thinking, collaboration, communication, innovation and creativity, independent and self-directed learning, and using technology to learn (Kennedy & Heineke 2014;Kereluik et al. 2013;Little 2013). Traditional face-to-face teaching approaches have been, and are still being, used in health sciences education to prepare undergraduate students with the necessary theoretical knowledge as well as clinical skills to enter a clinical work environment. However, technology has changed the teaching and learning culture (Department of Education 2004;Hämäläinen, Kiili & Smith 2017).
Since the emergence of technology-based teaching platforms, electronic learning (e-learning) has increased in popularity, and traditional teaching approaches have been augmented with an e-learning component (Department of Education 2004;Liu et al. 2016). Higher educational institutions are increasingly incorporating e-learning into health education in a blended teaching format (Means et al. 2013), and authors have even called the blended mode of teaching the 'new normal' (Norberg, Dziuban & Moskal 2011).
Blended teaching incorporates the traditional face-to-face lecturing style with a synchronous or asynchronous e-learning component (Garrison & Kanuka 2004; Liu et al. 2016). It is distinctly different from online teaching, where there is no face-to-face component. The strength of a blended teaching approach lies in the collective advantages of both face-to-face and e-learning approaches (Wu, Tennyson & Hsia 2010). In traditional face-to-face lectures, although there is the cost of transport and the time of each student participating in the lecture, a sense of community is fostered (Kemp & Grieve 2014). E-learning has the advantages of saving transport costs, the convenience of learning remotely and up-to-date information being available at the touch of a button (Liu et al. 2016). The strengths of blended learning, underpinned by Siemens' theory of connectivism, lie in the creation of an extended community, whereby students can engage in dialogue and debate and have open lines of communication with experts and the global community (McDonald et al. 2014; Siemens 2006). The extended community can be built into blended teaching by means of reflection, discussion groups, debates and the seeking of information in small groups. This fosters critical thinking and reflection; supports flexibility, independence and collaborative learning; and enhances positive motivation amongst students (McDonald et al. 2014; López-Pérez, Pérez-López & Rodríguez-Ariza 2011). Liu et al. (2016) conducted a systematic review on the effectiveness of a blended teaching approach compared to no intervention, traditional face-to-face and e-learning approaches. Although high article heterogeneity and publication bias were concerns, the pooled effect size of 0.81 indicated that a blended learning approach may be more effective than traditional lecture-based or e-learning-only approaches for acquiring knowledge amongst healthcare students (Liu et al. 2016). Stander, Grimmer and Brink (2019) conducted a systematic scoping review exploring learning styles amongst physiotherapy undergraduate students (n = 910), postgraduate students (n = 361) and qualified physiotherapists (n = 23) over a 26-year period. Students in the earlier part of the review period sourced information from traditional sources, which has since shifted to learning electronically. There was inconsistent evidence on how learning amongst physiotherapy students occurs, especially in developing countries. In conclusion, Stander, Grimmer and Brink stated that active learning with a clear understanding of theoretical concepts through a blended learning approach may guide physiotherapy students' learning.
Knowledge acquisition is a crucial part of students' learning in health education. Vallée et al. (2020) evaluated the effectiveness of blended teaching on knowledge acquisition in a systematic review and meta-analysis. Although a wide variety of blended teaching variants were included, consistently improved knowledge outcomes were seen for students receiving blended teaching compared to traditional teaching. Their recommendation of further studies to confirm the improved knowledge outcomes of blended teaching is answered by our study. Students in the third year of their undergraduate physiotherapy training at our university enter the clinical field of orthopaedics for the first time. This may be a challenging experience, in which they must integrate theoretical knowledge into practical skills, and blended teaching could ease this transition. This premise is supported by Motsumi, Bedada and Ayane (2019), who found that blending their traditional lecture-based surgical skills training with Moodle, which hosted 3D animations, resulted in significantly higher pre-post-test knowledge impact scores and high learning satisfaction compared to the traditionally taught group. Barnard-Ashton, Koch and Rothberg (2014) investigated the influence of blended teaching on student performance in the undergraduate occupational therapy curriculum at the University of the Witwatersrand. They showed that when students had a significantly higher access footprint to the e-learning content of their course, there was a small but relevant positive effect (average d = 0.31) on student performance (Barnard-Ashton et al. 2014). In their systematic literature review aimed at determining the role of blended teaching approaches in healthcare students' clinical education, Rowe, Frantz and Bozalek (2012) found that the gap between theoretical knowledge and clinical practice can be bridged through blended teaching. However, they concluded that there is a need for future research to establish how a blended teaching approach impacts students' clinical practice, further supporting the need for our study.
Our study aimed to determine the effect of blended teaching compared to traditional face-to-face teaching approaches and gauge the perceptions of students regarding learning through a blended teaching approach. What makes our study more pertinent is the change that has occurred since the start of the COVID-19 pandemic. Lecturers face the challenge of reimagining their teaching, where blended teaching may become the 'new normal'. Understanding the impact and feasibility of a blended teaching approach may be useful for lecturers to inform future teaching styles, as well as enhance learning for their students.
Methods
A cross-sectional study included a convenience sample of third-year undergraduate physiotherapy students at the University of the Witwatersrand. Physiotherapy students in their third year are divided into two groups for the year and change over the knowledge areas taught between the first and second semester in order to manage the class size and clinical placement burden. One group (half of the class) (n = 24) was taught orthopaedic physiotherapy in January and February, and the other (n = 23) was taught orthopaedic physiotherapy in July. Both groups were taught the same orthopaedics content using a blended teaching approach.
Third-year physiotherapy students who were repeating their year of study were excluded.
The blended learning programme consisted of a revision quiz, online activities and face-to-face lectures. The online activities consisted of videos sourced from the internet, podcasts, group case studies and online quizzes in lesson plans on Moodle, the learning management system used in the School of Therapeutic Sciences at the University of the Witwatersrand. The content was constructed by the orthopaedics lecturer with the help of a blended learning expert. The concepts were broken down into components, and the online content was constructed based on the level of understanding required for each concept. In addition to the face-to-face lectures, there was a face-to-face debate task. In total there were six online components, eight face-to-face lectures and the face-to-face debate task. The face-to-face teaching covered four general orthopaedic lectures (complications of fractures, principles of fracture management, orthopaedic radiology and amputations), one lower limb lecture (distal femur, tibia, fibula and ankle fractures), two upper limb lectures (pathologies, fractures and dislocations of the shoulder, distal forearm, wrist and hand, and hand injuries) and an arthroplasty lecture. At the end of the blended orthopaedic teaching period, the students were taken to one of the academic hospitals, where they were orientated to the clinical setting and had the opportunity to assess and treat patients under supervision. They were divided into groups of two and tasked to compile a video of the management of their patients (with permission), which was shown to the rest of the class.
For the neuromusculoskeletal (NMS) and cardiopulmonary (CP) knowledge areas, which were covered in the same semester as the orthopaedic area, only traditional face-to-face lectures were used. The CP knowledge area consisted of 13 face-to-face lectures and three practical sessions. The NMS knowledge area consisted of two lectures and two practical sessions. At the end of the teaching period in both CP and NMS, and similar to orthopaedics, the students were taken to a clinical placement area, where they were orientated to the clinical area and assessed and treated patients under supervision.
The performance of the students on all knowledge areas was evaluated through a knowledge test at the end of the teaching block, and a clinical performance mark was given at the end of the clinical placement.
Study instrument and procedure
After giving informed consent, students completed a questionnaire based on the work of Owston, York and Murtha (2013) that was developed using REDCap (Research Electronic Data Capture), a secure, web-based software platform designed to support data capture for research (Harris et al. 2009, 2019). This questionnaire was used in a study by Owston et al. (2013), which similarly assessed the perceptions and performance of students doing blended learning in a university environment. The questionnaire was compiled from other blended learning questionnaires and scored a high reliability of 0.908 (Cronbach's alpha coefficient). The questionnaire assessed student perspectives regarding blended teaching compared to a traditional teaching approach during the preclinical teaching block. A five-point Likert scale allowed students to select between 1 (strongly agree), 2 (agree), 3 (neutral), 4 (disagree) and 5 (strongly disagree). The blended teaching section covered aspects of engagement with content, interaction, understanding, access to resources, reflection opportunities, usage of technology and course factors. The questionnaire ended with questions investigating which teaching style was favoured by the students.
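The reported reliability can be reproduced from an item-response matrix with the standard formula for Cronbach's alpha; a minimal sketch with hypothetical Likert responses (the study's raw responses are not available here, so random data will not reproduce the reported 0.908):

```python
# Sketch: Cronbach's alpha,
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
# The response matrix is a hypothetical placeholder.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores (1-5)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(43, 20))  # 43 students, 20 items
print(round(cronbach_alpha(responses), 3))
```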
The results of one theoretical examination and a summative clinical mark were obtained for undergraduate students (n = 47) in the field of orthopaedics. These results were entered onto an Excel spreadsheet, and statistical analysis was performed. The marks for the NMS and CP physiotherapy knowledge areas that were taught by traditional face-to-face lectures within the same semester were recorded and acted as paired comparison data (of the students' own marks) to their orthopaedic performance mark.
Statistical analysis
Descriptive statistics were undertaken to reduce the data, and a two-tailed Student's t-test was used to determine if there was a significant difference (α = 0.05) between the orthopaedic marks and the marks obtained in the NMS and CP areas for both the theoretical examination and the summative clinical assessment. Cohen's d was applied to determine direction of difference and effect size, where 0.2 is considered a small effect, 0.5 a medium effect and > 0.8 a large effect (Ellis 2009). Likert scale responses to the survey were descriptively analysed.
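A minimal sketch of this analysis: a two-tailed t-test on each student's paired marks plus Cohen's d. The marks below are placeholders, and the Cohen's d variant (mean difference over the pooled standard deviation) is an assumption, as the exact formula used is not stated:

```python
# Sketch of the described analysis. Marks are hypothetical placeholders; the
# pooled-SD form of Cohen's d is an assumption.
import numpy as np
from scipy.stats import ttest_rel

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

rng = np.random.default_rng(2)
ortho = rng.normal(70, 6, 47)  # hypothetical orthopaedics theory marks
nms = rng.normal(60, 6, 47)    # hypothetical NMS theory marks

t, p = ttest_rel(ortho, nms)   # paired, two-tailed by default
print(f"t = {t:.2f}, p = {p:.4f}, d = {cohens_d(ortho, nms):.2f}")
```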
Ethical considerations
This study was approved by the Medical Human Research Ethics Committee of the University of the Witwatersrand, the Dean of Student Affairs and the Head of the Department of Physiotherapy (ethical clearance number M170571). All participants gave informed consent prior to taking part in our study.
Theoretical and clinical performance
The theoretical examination marks of the two groups of students in the orthopaedics area were similar between the two semesters (Table 1). When comparing the students' performance in orthopaedics in both semesters, where the blended teaching approach was used, to their own performance in the NMS and CP areas, the students performed significantly better in the theoretical examination on the orthopaedics content. This is further evidenced by the large positive effect of blended teaching on theoretical examination performance relative to performance on content delivered by conventional teaching methods (average Cohen's d = 4.44).
In the clinical summative assessment marks (Table 2), there was no statistically significant difference between students' orthopaedics clinical performance assessment in both semesters when compared with their performance in the NMS and CP areas. An average small negative effect (average Cohen's d = -0.26) was evident, indicating that the students performed slightly worse on their clinical orthopaedics assessments than on their other two specialities.
Student perceptions of blended teaching
In the blended teaching questionnaires, the Likert scale ranged from 1 (strongly agree) to 5 (strongly disagree). Of the students who consented to participate in our study, 43 (91.5%) completed the survey on their perception of blended teaching. The survey covered three aspects: engagement and affect, knowledge and learning, and the blended teaching process.
Thirty-two students (74.4%) felt that they were more engaged; 67.4% felt that the amount of interaction with other students increased, whilst 46.5% felt that interaction with the lecturer increased through the blended teaching approach (Figure 1). Whilst 32.6% of students felt overwhelmed by the resources in the course, only 16.3% felt that blended teaching made them more anxious.
Questions relating to knowledge and learning can be seen in Figure 2. The students predominantly agreed or strongly agreed that the web resources were helpful, that a blended approach to teaching provided more opportunities to use and access information, that resources were easy to access, that they understood concepts better and that their understanding of the course material was improved. Only one student disagreed (strongly) that the blended course content was well organised and easy to understand.
The majority of students indicated that using their devices for learning was useful (83.7%) and that they were able to use the technology and software needed to complete the course (79.1%). The students (83.7%) also believed that the online and face-to-face components of the course enhanced each other, and 86% agreed that they would take another blended teaching course if given the opportunity (Figure 3). Seventy-two per cent of students indicated that they would prefer a blended teaching approach as indicated in Figure 4.
When asked to explain their preference of teaching style, one student responded, 'I enjoy being able to learn on my own but still have the opportunity to have things explained or confirmed by the lecturer and the opportunity to ask questions' (Participant number [PN] 3, female, physiotherapy student). Further students voiced their perspectives: 'I feel that not every lecture has to be given face to face, but some things require explanation in person' (PN 11, male, physiotherapy student); 'More convenient in terms of not having to sit in 1.5 hours of traffic daily! Also, more importantly I am able to grasp concepts better watching podcasts as I can rewind, whereas in a lecture it is difficult to always ask the lecturer to repeat herself' (PN 18, male, physiotherapy student). Students preferring face-to-face teaching said it was 'easier to ask questions and discuss the content' (PN 14, female, physiotherapy student), and that 'online does not emphasise interaction, participation, attention, it does not allow us to ask' (PN 24, female, physiotherapy student).
Perceptions on discussion sessions
The students were asked whether they preferred a classroom or an online discussion. Six students preferred classroom discussions, stating that they had 'more interaction and [found it] easier to focus and learn' (PN 14, female, physiotherapy student). Eight students preferred online discussions, stating that 'podcasts are amazing; it was really helpful, and I can always go back when revising and when studying lecture material; I can study while listening' (PN 25, male, physiotherapy student). The remaining respondents preferred a combination of online and classroom discussion, stating that 'both have great benefits and enhance my learning' (PN 28, female, physiotherapy student).
Perceptions of blended teaching and the least useful aspects
The students were asked which aspects of the blended course they found least useful. At times the students experienced poor Wi-Fi and internet connection, posing challenges, made apparent by a student responding, 'being dependent solely on technology, when the server crashed, we were unable to …'.
Perceptions of blended teaching and the most useful aspects
When the students were asked which aspect of the blended course they found most useful, they answered: 'I only enjoyed working on my own as I was able to start past paper questions in the same time rather than sitting in a classroom' (PN 45, female, physiotherapy student); '[T]he ability to go through the work at a pace that suited me, as well as having the ability to ask the lecturer a question when having a lecture face-to-face' (PN 43, female, physiotherapy student); 'Hospital visit and practical on amputation' (PN 30, female, physiotherapy student); 'online and group discussions' (PN 25, male, physiotherapy student); and 'podcasts' (PN 26, female, physiotherapy student). The students mentioned that they preferred the lecturer podcasts to YouTube videos.
Discussion
Our study compared a blended teaching approach in an orthopaedics module to two physiotherapy modules taught face to face. All three modules were offered in the same semester, over two consecutive semesters.
The students scored significantly higher theoretical marks in both semesters for the orthopaedics module, showing a large average effect (average d = 4.44) of blended teaching over the face-to-face approach. This is similar to the findings of a systematic review conducted by Liu et al. (2016), in which blended teaching approaches were found to be more effective than, or as effective as, a face-to-face approach or purely e-learning teaching. The study by Vallée et al. (2020) supports the finding that blended teaching and learning have consistently superior effects on health education outcomes across different blended design variants. When comparing the summative clinical marks between the three different physiotherapy knowledge areas, however, there was no significant difference and little effect, indicating that the transfer of knowledge to the clinical setting was not improved by the higher theoretical marks achieved in the blended teaching module. Whilst these results contribute to our understanding of the impact of blended teaching on clinical performance, as suggested by Rowe et al. (2012), we need further research on the factors that contribute to this outcome.
Regarding the students' perceptions of blended teaching, predominantly positive responses were seen for engagement, interaction and helpful resources, and the students indicated that blended teaching provided them with more opportunity to access and use information. The students also felt that their understanding of key concepts was improved with the blended teaching approach, which is evident from the higher theoretical examination marks obtained for orthopaedics. They agreed that the blended teaching process was clear and organised to support their learning. Not adopting a purely online teaching approach but including face-to-face lectures was, in retrospect, a good decision, as 72% of students indicated that they preferred the blended teaching module. Consideration should, however, be given to the 14 students (32.6%) who indicated that they were overwhelmed by the resources; a possible reason may be that this was the first time orthopaedic physiotherapy content was delivered in a blended format. Connectivity and Wi-Fi issues interrupted the students' teaching, but the students agreed that what made blended learning useful was that the material could be revisited at any time, making connectivity a problem that can be overcome with lesson plans and podcasts. In the blended orthopaedic teaching module, the known benefits of e-learning, namely convenience and transcending the boundaries of space and time (Liu et al. 2016:e2), provided students with the latest information at their fingertips, and the authors observed the same 21st-century skills of collaborative interaction as found in other studies (Wu et al. 2010:155-164; Peng et al. 2014:16).
Other 21st-century skills included the teacher acting as a facilitator and the students engaging in self-directed learning, whereby they were responsible for completing the online activities themselves. At the end of the activities there were quizzes to test their knowledge of the lesson. This was a substantial change for the students, as they were accustomed to simply presenting themselves for a lecture. Self-directed learning is an important skill for students, as once they are qualified health professionals they will be responsible for their own continuous professional development. The group case discussions also fostered collaboration and communication, along with the critical thinking and creativity needed to find solutions in an online group setting. However, the authors will make certain changes to the blended teaching that was provided. In future, the blended teaching material will be improved: podcasts made by the lecturers will replace YouTube videos. To answer any questions the students have on the online content, an additional online discussion session will be introduced, so that students do not have to wait for face-to-face interactions for their questions to be answered. During the hospital visit students will no longer take a video of their patient management; it was their first time treating patients, and they were not comfortable with their peers seeing these videos. Discussion of their patients and experiences will be face to face. Future research investigating knowledge acquisition and the carryover of theoretical knowledge into the clinical setting is suggested.
The limitations of our study include the omission of determining the student perceptions of face-to-face teaching. A retrospective analysis looking at comparisons of students' theoretical and clinical marks for orthopaedic, NMS and CP knowledge areas could not be performed because of incomplete orthopaedic, CP and NMS theoretical and clinical student marks.
Conclusion
A blended teaching approach significantly improved the students' theoretical marks in a physiotherapy orthopaedics module when compared to traditional face-to-face teaching in other areas, but not their clinical performance. The students were supportive of the use of blended teaching when surveyed regarding their experience. With regard to fostering the 21st-century learning skills of critical thinking, communication, collaboration, flexibility, creativity and self-directed learning amongst undergraduate physiotherapy students going into clinical practice, the results from our study appear promising. Caution should, however, be taken not to assume that clinical skills are enhanced with a blended teaching approach.
|
2021-06-09T13:15:05.692Z
|
2021-05-17T00:00:00.000
|
{
"year": 2021,
"sha1": "707e068d577c40a5e1a4fbf19c3242b4428a33d0",
"oa_license": "CCBY",
"oa_url": "https://sajp.co.za/index.php/sajp/article/download/1544/2415",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f2af642a1cd75308830c17ed13118bc63408a8ef",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
86260041
|
pes2o/s2orc
|
v3-fos-license
|
Efficacy of Exogenous Calcium Applications for Reducing Upper Leaf Necrosis in Lilium `Star Gazer'
Additional index words. lily, calcium deficiency disorder, foliar calcium sprays, bulb calcium dipping, calcium nutrition, leaf scorch, tipburn Abstract. Upper leaf necrosis (ULN) on Lilium 'Star Gazer' is a calcium deficiency disorder. In this study, we evaluated the efficacy of foliar Ca sprays and bulb Ca dipping on reducing ULN. Necrosis severity of a single leaf was determined by an index from 0 (healthy) to 5, based on symptom progression and necrosed leaf area. Single-leaf severity was then summed for all leaves to yield a whole-plant severity rating. Single daily applications of 25 mM calcium chloride or calcium nitrate sprays for 14 days significantly suppressed the degree of symptom expression; whole-plant severity was reduced from 18 (severely necrosed) to below 3 (essentially unnoticeable). Five single applications at 3.5-day intervals were not effective, even at concentrations up to 150 mM. At concentrations of 100 and 150 mM, 14 daily sprays of calcium chloride or calcium nitrate were toxic and caused leaf tip yellowing; calcium chloride caused more severe phytotoxicity than did calcium nitrate. For foliar Ca sprays to be effective, it was necessary for the Ca solution to reach the enclosed, young, expanding leaves. Preplant bulb immersion in calcium chloride was not effective even at concentrations as high as 400 mM for up to 16 hours.
Upper leaf necrosis (ULN) on Lilium 'Star Gazer' is a calcium deficiency disorder (Chang, 2002), as is bitter pit in apple (Malus ×domestica Borkh.) (Ferguson and Watkins, 1989) and tipburn in lettuce (Lactuca sativa L.) (Collier and Tibbitts, 1982). It has been shown that there are two primary mechanisms leading to ULN. The first is a very low bulb calcium content that cannot meet Ca demand when the upper leaves are expanding (Chang and Miller, 2003). The second is that young expanding leaves of Lilium 'Star Gazer' are highly overlapped before flower buds are visible. This leaf "enclosure" reduces transpiration of young leaves and encourages the development of ULN (Chang and Miller, 2004). As a result of these factors, necrosis commonly occurs on the upper leaves (Chang, 2002). On average, a 'Star Gazer' plant grown from a 16- to 18-cm bulb has 44 leaves. Only ≈15 leaves at the top of the plant are susceptible to ULN (Chang, 2002). The susceptible period for ULN is 25-50 d after planting (Chang and Miller, 2004). ULN decreases the market value of the plant and reduces its appeal to consumers. The current industry practice is the manual removal of necrosed leaves before plants are marketed, a very labor-intensive proposition.
Plants are able to absorb mineral nutrients rapidly through the foliage. Foliar sprays thus have great practical utility for overcoming nutrition deficiencies caused by micronutrients. However, due to the limited phloem mobility of calcium, foliar Ca sprays are not always effective (Marschner, 1995). Therefore, the effectiveness of foliar Ca sprays in controlling Ca deficiency disorders has been somewhat controversial, for example in overcoming tipburn on lettuce (Collier and Tibbitts, 1982; Kruger, 1966). Nevertheless, there are reports that foliar Ca sprays are effective in reducing Ca-related disorders in several crops, including tipburn in Asiatic hybrid lily (Lilium L.) 'Pirate' (Berghoef, 1986), marginal bract necrosis in poinsettia (Euphorbia pulcherrima Willd. ex Klotzsch) (Wissemeier, 1993), and bitter pit in apple fruit (Ferguson and Watkins, 1989). The effectiveness of foliar Ca sprays depends on environmental conditions. Spraying with 0.7% calcium nitrate was effective in preventing tipburn in Chinese cabbage at normal relative humidity (RH), but was not effective at higher RH (van Berkel, 1988). The most effective Ca salt concentration also varied by crop. For Asiatic lily 'Pirate', 68 mM (1%) calcium chloride was able to reduce tipburn to an acceptable level in most cases, but in others a concentration up to 204 mM (3%) was required (Berghoef, 1986). Calcium chloride at 0.35% or 0.7% calcium hydroxide reduced rain splitting in sweet cherries (Meheriuk et al., 1991). Only 5 mM (0.07%) calcium chloride was required to reduce marginal bract necrosis on poinsettia (Wissemeier, 1993).
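The molar and percentage (w/v) concentrations quoted interchangeably above are linked through the molar mass of the hydrated salt; a minimal sketch of the conversion, which reproduces the paired values in the text when the dihydrate molar mass is used:

```python
# Sketch: % (w/v) to mM conversion for the hydrated calcium salts.
MW_CACL2_2H2O = 147.01   # g/mol, CaCl2.2H2O
MW_CANO3_4H2O = 236.15   # g/mol, Ca(NO3)2.4H2O

def percent_wv_to_mM(percent: float, mw: float) -> float:
    grams_per_litre = percent * 10.0       # 1% w/v = 10 g/L
    return grams_per_litre / mw * 1000.0   # mol/L -> mM

print(percent_wv_to_mM(1.0, MW_CACL2_2H2O))   # ~68 mM, matching the text
print(percent_wv_to_mM(3.0, MW_CACL2_2H2O))   # ~204 mM
print(percent_wv_to_mM(0.07, MW_CACL2_2H2O))  # ~5 mM (poinsettia rate)
```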
It has been shown that lily bulbs are able to absorb plant growth regulators (PGR). Dipping bulbs in PGR solutions for only 1 min was effective for lily height control (Ranwala et al., 2002). In apple, postharvest dips in 1% to 4% Ca salt solutions are effective for reducing bitter pit (Ferguson and Watkins, 1989). In contrast, soaking lily bulbs for 24 h in 136 mM (2%) or 272 mM (4%) calcium chloride did not reduce tipburn on Asiatic hybrid lily 'Pirate' (Berghoef, 1986).
Given the paucity of data on calcium absorption and distribution in bulbous crops, and on the role of Ca in this important disorder, we aimed in this study to increase Ca content in the Ca sink (young leaves, by foliar Ca sprays) as well as in the Ca source (bulb scales, by preplant bulb dips into calcium solution) in order to reduce ULN.
Materials and Methods

The incidence of ULN was defined as the percentage of plants that showed any level of symptom expression. When the environment is not conducive to ULN, incidence is a sufficient parameter to distinguish differences between treatments. However, in ULN-favorable environments, ULN occurrence is widespread but plants exhibit a large variation in severity; a more detailed parameter, "ULN severity," is therefore needed to further refine differences among treatments. An index from 0 to 5, based on symptom progression and necrosed leaf area, was used to describe the severity of necrosis on individual leaves: 0 = no visible necrosis symptoms; 1 = chlorotic spots; 2 = curled leaf margin; 3 = marginal necrosis; 4 = dead leaf tip; and 5 = >50% of the leaf area necrotic. We have previously demonstrated that leaf Ca concentration is negatively correlated with necrosed leaf area (i.e., the single-leaf severity index) (Chang, 2002). The severities of the individual leaves were then summed to determine whole-plant severity. Since a ULN-affected plant may have only one leaf with very slight symptoms or many leaves with severe necrosis, in this study whole-plant severity is a better descriptor than necrosis incidence when the environment is favorable to ULN. When whole-plant severity was <5, the symptoms were very light and would not draw consumersʼ attention. This index system was also adapted to describe the phytotoxicity symptoms of leaf tip yellowing caused by high concentrations of Ca salts.
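The scoring and aggregation rules above are simple enough to express in a few lines of code. The following Python sketch is purely illustrative; the function names and example values are ours, not part of the original study.

```python
# Severity index per leaf, as defined in the text (0-5 scale).
SEVERITY_INDEX = {
    0: "no visible necrosis",
    1: "chlorotic spots",
    2: "curled leaf margin",
    3: "marginal necrosis",
    4: "dead leaf tip",
    5: ">50% of leaf area necrotic",
}

def whole_plant_severity(leaf_scores):
    """Sum the 0-5 severity indices over all leaves of one plant."""
    if any(s not in SEVERITY_INDEX for s in leaf_scores):
        raise ValueError("each leaf score must be an integer from 0 to 5")
    return sum(leaf_scores)

def incidence(plants):
    """Fraction of plants with any symptom (whole-plant severity > 0)."""
    return sum(whole_plant_severity(p) > 0 for p in plants) / len(plants)

# Example: a plant with two mildly affected leaves among ~15 susceptible ones.
scores = [0] * 13 + [1, 2]
print(whole_plant_severity(scores))  # 3, below the "noticeable" threshold of 5
```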
Foliar sprays of calcium salts. Three experiments were conducted to determine the effect of foliar Ca sprays. Two (reagent grade) Ca salts, calcium chloride (CaCl2·2H2O) and calcium nitrate (Ca(NO3)2·4H2O), were used. Salts were dissolved in distilled water and 0.1% surfactant (Tween 20, Sigma Chemical Co., St. Louis) was added. Plants were sprayed with Ca solution to runoff, and controls received water plus surfactant. Each plant received ≈20 mL of solution. Experiment 1 began on 4 Dec.; the average bulb fresh weight was 66.3 ± 0.4 g. Five concentrations (0, 10, 50, 100, and 150 mM) of calcium chloride or calcium nitrate were used, with a total of five sprays applied at 15, 18, 22, 25, and 29 d after planting (DAP). Each treatment had 18 single-plant replicates in a completely randomized design (CRD).
Experiment 2 began 27 Jan., using bulbs with an average fresh weight of 69.2 ± 0.5 g. Calcium chloride and calcium nitrate were used at concentrations of 0, 25, 50, 100, and 150 mM, and the application frequency was once per day. Besides spraying to runoff, extra Ca solution, ≈5 mL, was sprayed directly toward the shoot apex in order to have the solution reach the young, folded leaves. Each treatment had 18 single-plant replicates in a CRD. A total of 14 sprays was applied daily from 22 to 35 DAP. Experiment 3 began 30 May with bulbs weighing 65.5 ± 0.5 g. Treatments included 0, 12.5, 25, and 50 mM calcium chloride and calcium nitrate with extra spraying into the apex (as described above), and 50 mM without the extra directed spray. Application frequency was once a day for 14 d (30-43 DAP). Each treatment had 18 single-plant replicates in a CRD.
Calcium chloride bulb dips. Two experiments were conducted to evaluate the efficacy of bulb Ca dips for preventing upper leaf necrosis. On 18 Oct., uniform bulbs (61.8 ± 0.4 g) were randomly selected and weighed (a process taking ≈4 h at room temperature). Bulbs were then dipped in calcium chloride solution for 15 min at concentrations of 0, 25, 50, 100, 200, and 400 mM. The dipped bulbs were allowed to dry at 3 °C overnight and planted on 19 Oct. In the second experiment, three Ca concentrations (0, 200, and 400 mM, from calcium chloride) were tested, with dipping times of 0, 1, 4, or 16 h. Bulbs (weighing 68.7 ± 0.4 g) were randomly selected and divided into seven groups to receive treatments, a process taking ≈7 h and causing a water loss of 3 g per bulb (on average, based on 56 bulbs). After dipping, excess solution was allowed to run off for 30 min; the bulbs were then reweighed. Solution absorption per bulb was calculated as: fresh weight after dipping − initial fresh weight + 3 g (to account for water loss during handling). Dipping was conducted 9 July and bulbs were planted 10 July.
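For clarity, the uptake calculation can be written out explicitly. This is a minimal sketch, assuming the 3-g correction applies uniformly to every bulb, as the averaged figure in the text suggests:

```python
HANDLING_WATER_LOSS_G = 3.0  # average water loss per bulb during handling

def solution_uptake(initial_fw_g, dipped_fw_g, water_loss_g=HANDLING_WATER_LOSS_G):
    """Solution absorbed per bulb in grams (~mL for these dilute solutions)."""
    return dipped_fw_g - initial_fw_g + water_loss_g

# A bulb weighing 68.7 g before and 70.7 g after dipping absorbed ~5 g,
# comparable to the ~5 mL reported for the 16-h water control.
print(solution_uptake(68.7, 70.7))  # 5.0
```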
In both experiments, a CRD was used. There were 32 single-plant replicates per treatment for the first experiment, and 28 for the second. Distilled water was used to make the solutions and 0.1% Tween 20 was added as a wetting agent.
Statistical analysis. All statistical tests were conducted using SAS version 8.01 (SAS Institute, Cary, N.C.). Incidence of ULN, calcium phytotoxicity, and lethal calcium damage were tested using the chi-square test of independence, and ULN severity was tested using one-way analysis of variance (ANOVA). Duncanʼs multiple range test was used to test for differences among treatment means. Because the levels of the independent variables (calcium concentration and dipping time) are in fact numerical, we additionally tested for trends in ULN severity and phytotoxicity severity using linear regression. Results are presented only for phytotoxicity, as no trends were evident for ULN severity (all R² ≤ 0.20).
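The original analysis was run in SAS; the sketch below shows how a comparable battery of tests could be approximated in Python with SciPy. The arrays are placeholders, not the study's data, and Duncan's multiple range test has no SciPy implementation, so Tukey's HSD is used as a (more conservative) stand-in.

```python
import numpy as np
from scipy import stats

# Chi-square test of independence on an incidence contingency table:
# rows = treatments, columns = (affected, unaffected) plant counts.
table = np.array([[18, 0], [12, 6], [9, 9]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)

# One-way ANOVA on whole-plant ULN severity across treatment groups.
control = np.array([15.5, 14.0, 16.2, 13.8])
treated = np.array([10.7, 11.2, 9.9, 12.0])
f_stat, p_anova = stats.f_oneway(control, treated)

# Pairwise mean comparisons (Tukey's HSD, SciPy >= 1.8).
hsd = stats.tukey_hsd(control, treated)

# Linear trend of phytotoxicity severity on Ca concentration (mM).
conc = np.array([0, 25, 50, 100, 150], dtype=float)
tox = np.array([0.0, 0.2, 0.5, 1.4, 2.1])
fit = stats.linregress(conc, tox)
print(f"R^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.3f}")
```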
Results
Effect of foliar calcium sprays. In the first foliar Ca spray experiment, with a spray frequency of twice a week, no effects on ULN were observed, even when the Ca concentration was as high as 150 mM (Table 1). In the second and third experiments, with 14 daily sprays, both calcium salts were effective in reducing the degree of symptom expression (Tables 2 and 3). Experiment 2 was done in a drier greenhouse and the plants were only lightly affected by ULN. When ULN was light, both calcium chloride and calcium nitrate were able to reduce ULN incidence in a concentration-dependent manner (Table 2). When ULN was severe, 14 daily foliar Ca sprays significantly reduced ULN severity to an acceptable level (whole-plant severity <5, which would not be noticed by consumers) (Fig. 1 and Table 3).
Both calcium chloride and calcium nitrate were effective, and there was no notable difference in the effectiveness of the two salts (Tables 2 and 3). However, increased concentrations of both salts caused more severe yellowing and browning injury on leaf tips (Table 2). Mean comparisons indicated that at concentrations of 100 and 150 mM, calcium chloride was more phytotoxic than calcium nitrate (Tables 2 and 3). The marginal effectiveness of the additional directed spray depended on the calcium salt used. With calcium nitrate, the extra spray had little effect. With calcium chloride, the 50-mM directed spray gave a further reduction in ULN severity as compared to 50 mM Ca without the directed spray (Table 3). Without the extra directed spray, the effect of 50 mM calcium chloride was even less than that of 12.5 mM with the directed spray. Ensuring that the Ca solution reached the enclosed leaves was thus imperative for foliar Ca sprays to be effective (Table 3).
Effect of bulb calcium dipping. Solution uptake by bulbs depended on the concentration and the dipping time (Table 5). Bulbs in the control group absorbed significantly more solution than bulbs in the other treatments. After dipping in water for 16 h (control group), each bulb absorbed ≈5 mL of solution. As the concentration increased, less solution was absorbed by the bulb. In the 200-mM treatments, a longer dipping time resulted in greater solution uptake (Table 5).
In both experiments, dipping bulbs in calcium chloride had no effect on ULN incidence. All dipping treatments had a ULN incidence greater than 81%, compared with control incidences of 100% and 93% in the two experiments (Tables 4 and 5). In the first experiment, in which bulbs were dipped in calcium chloride solution for 15 min, ULN severity was slightly reduced at the higher concentrations (200 and 400 mM); controls had an average severity of 15.5, vs. 10.7 in the 400-mM treatment (Table 4). In the second experiment, however, there was no significant effect, even with much longer dipping times (Table 5).
Phytotoxicity was seen on young shoots due to the high concentrations of calcium chloride in the 4- or 16-h dips. These treatments proved fatal to up to 11% of the plants, which died at early developmental stages (Table 5).
Discussion
The effectiveness of foliar Ca sprays in reducing the risk of Ca deficiency disorders has been controversial. Calcium sprays were effective in reducing marginal bract necrosis on poinsettia (Wissemeier, 1993) and tipburn on the Asiatic hybrid lily ʻPirateʼ (Berghoef, 1986). With lettuce, researchers have reached differing conclusions on the utility of foliar sprays in reducing leaf tipburn: some report positive results of Ca sprays (Kruger, 1966; Thibodeau and Minotti, 1969), while others report no effect (Collier and Tibbitts, 1982; Misaghi et al., 1981). The contrasting results may be attributed to genetic variation, Ca salt, Ca concentration, application timing, and frequency.
In this study, we demonstrated that 14 daily foliar sprays of 25 mM calcium chloride or calcium nitrate are effective in reducing the risk of ULN, and efficacy was improved by directing the Ca spray to the enclosed leaves (Table 3). Calcium is an immobile nutrient element that is translocated mainly in the xylem. It is well established that Ca does not move from old leaves to young ones (Kirkby and Pilbeam, 1984; Marschner, 1995). Symptoms of ULN develop only on young expanding upper leaves (Chang, 2002), and expanding leaves are known to have a high calcium demand (Collier and Tibbitts, 1982; Kirkby and Pilbeam, 1984). It is therefore understandable that applying Ca to the foliage twice a week did not reduce ULN (Table 1), since it could not meet the high demands of rapidly growing leaves. For tipburn of Asiatic lily ʻPirate,ʼ it was reported that a single Ca spray was not effective (van Nes, 1978), but daily sprays were (Berghoef, 1986; Berghoef et al., 1981). When calcium hydroxide was applied to reduce rain splitting in sweet cherries, multiple sprays gave better protection than a single spray (Meheriuk et al., 1991). Since the lower leaves are not susceptible to ULN (Chang, 2002), it is not necessary to spray calcium onto them.
In the foliar Ca spray experiments, there was no difference in the effectiveness of the two calcium salts used (Tables 2 and 3). Similar results were seen with calcium applications to control bitter pit in apple (Sharples and Little, 1970). However, it has been reported that calcium nitrate seemed to be less effective than calcium chloride for reducing tipburn on ʻPirateʼ lily (Berghoef et al., 1981). With daily foliar sprays at Ca concentrations of 100 and 150 mM, phytotoxicity, in the form of leaf tip yellowing, was observed. Similar toxicity from foliar Ca sprays was also observed on Asiatic hybrid lily ʻPirateʼ (Berghoef, 1986) and apple (Sharples and Little, 1970).
Dipping bulbs in calcium chloride failed to control ULN. In the first experiment, dipping bulbs in 400 mM CaCl2·2H2O for 15 min reduced ULN severity from 15.5 to 10.7 (Table 4). However, the same trend was not observed in the second experiment (Table 5). Since bulbs were immersed in calcium chloride for a longer time in the second experiment, we concluded that bulb Ca dipping is not a feasible method to solve the problem. Similarly, 24-h bulb soaks in 136 or 272 mM calcium chloride showed no positive effects on Asiatic hybrid lily ʻPirateʼ (Berghoef, 1986).
The effects of applying Ca to young leaves and to bulbs in order to reduce ULN were completely different. Bulb Ca dipping was not effective, but some foliar Ca treatments were. It is understandable that spraying Ca directly to young expanding leaves was effective to reduce ULN and other Ca deficiency disorders, since mineral nutrient entry could occur through cuticular pores (Marschner, 1995) or stomatal openings (Levy and Horesh, 1984).
As a result of this research, growers interested in using foliar calcium sprays to reduce this problem could be advised to spray calcium nitrate or calcium chloride at no more than 25 mM daily, for 14 d starting 30 DAP. Furthermore, an effort should be made to direct the spray into the congested leaves. Whether or not this is an economically viable treatment would need to be determined by the individual grower.
Stability and Hierarchy of Quasi-Stationary States: Financial Markets as an Example
We combine geometric data analysis and stochastic modeling to describe the collective dynamics of complex systems. As an example we apply this approach to financial data and focus on the non-stationarity of the market correlation structure. We identify the dominating variable and extract its explicit stochastic model. This allows us to establish a connection between its time evolution and known historical events on the market. We discuss the dynamics, the stability and the hierarchy of the recently proposed quasi-stationary market states.
Introduction
Big data is the buzzword of recent years, reflecting an ever increasing amount of electronically available data that demands analysis and interpretation. Our focus is on complex dynamical systems such as financial markets, where huge data sets exist in the form of multivariate time series. The dynamical behavior of such systems may reduce their complexity by self-organization [1]. System variables, which are measured as single time series, couple together into a few dominating variables, which accurately describe the system dynamics and allow for predictions. The self-organization may produce patterns in observed data which are generally difficult to uncover. A wide range of data analysis techniques is available and widely used, including graph-theoretical information filtering [2,3,4,5,6,7], data clustering [8,9,10,11,12,13] and geometric approaches [14,15,16,17,18]. All these techniques are based on a similarity measure between the data points. There is a major disadvantage in this approach: the time information of the measured data is neglected. Thus, the system dynamics is not explicitly taken into account. On the other hand, dynamical variables of complex systems have been successfully described by stochastic processes [19,1,20]. In this description the variables evolve in time according to deterministic dynamics, which gives access to system stability and fixed points, and are exposed to generally non-trivial stochastic fluctuations. Here, we combine the data set analysis with stochastic methods in order to capture the full dynamics of the system. We apply our approach to stock market data. Similar techniques have proven successful in the description of complex dynamical systems [21,22,23]. The paper is organized as follows: We present the data set and perform a geometric data analysis to uncover the dominating variable in Sec. 2. In Section 3 we identify the quasi-stationary states of the financial market following Ref. [10]. We draw connections to known historical events. We present the stochastic analysis in Sec. 4 and discuss our results in Sec. 5.
Analyzed Data
In Sec. 2.1 we introduce our data set and the analyzed quantities. We perform a geometric analysis of the data in Sec. 2.2.
Observed Quantities
We analyze daily adjusted closing stock prices S_i(t), i = 1, ..., K, of the K companies in the S&P500 Index over the period of 21 years ranging from early 1992 to the end of 2012. The data is freely available at finance.yahoo.com. To measure the correlations, we use the daily returns and normalize them locally [24] to smooth out trends on very short times, yielding the normalized returns r̃_i(t). We measure the time t in trading days. We then calculate the K × K correlation matrices C(t) by averaging over a time window of T = 42 which is moved in one-day steps through the data. The elements of C(t) are the Pearson correlation coefficients

$$ C_{ij}(t) = \frac{\langle \tilde{r}_i \tilde{r}_j \rangle_T - \langle \tilde{r}_i \rangle_T \langle \tilde{r}_j \rangle_T}{\sigma_i^{(T)}(t)\, \sigma_j^{(T)}(t)} . $$

Here σ_i^{(T)}(t) is the time-dependent volatility of stock i, and the sample average ⟨f(t)⟩_T of a quantity f(t) is evaluated over the T data points before t. We note that in contrast to the stock prices S_i and price returns r_i, the correlation coefficients C_ij(t) are bounded quantities. All together we obtain N = 5169 correlation matrices. The correlation matrices calculated on the short intervals T are noisy. We reduce the noise by averaging over the correlation coefficients, which yields the mean correlation coefficient

$$ \bar{c}(t) = \langle C_{ij}(t) \rangle_{ij} . $$

Here ⟨...⟩_{ij} denotes the average over all d = (K² − K)/2 = 46971 independent correlation coefficients of every correlation matrix C(t).
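As a sketch of this construction, the following Python snippet computes c̄(t) from a matrix of (already locally normalized) daily returns; the return preprocessing itself is not reproduced here, so the input array is an assumption.

```python
import numpy as np

T = 42  # correlation window in trading days, as in the text

def rolling_mean_correlation(returns, T=T):
    """Mean off-diagonal correlation for each T-day window, moved daily.

    returns: (n_days x K) array of locally normalized daily returns.
    """
    n_days, K = returns.shape
    iu = np.triu_indices(K, k=1)  # the d = (K^2 - K)/2 independent pairs
    cbar = []
    for t in range(T, n_days + 1):
        C = np.corrcoef(returns[t - T:t].T)  # K x K Pearson matrix C(t)
        cbar.append(C[iu].mean())            # average over the upper triangle
    return np.array(cbar)

# Demo with synthetic data: 300 days, 50 stocks -> 259 window positions.
rng = np.random.default_rng(0)
print(rolling_mean_correlation(rng.standard_normal((300, 50))).shape)
```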
We recall the spectral decomposition of the K × K correlation matrix C(t) [17,18],

$$ C(t) = \sum_{a=1}^{K} \lambda_a(t)\, u_a(t)\, u_a^\dagger(t) . $$

Here λ_a(t) denotes the a-th eigenvalue of C(t), u_a(t) the corresponding normalized eigenvector and u_a^†(t) its transpose. The rank of C(t) is T and therefore only the first T eigenvalues are non-zero. For our data the first and largest eigenvalue λ_1(t) = λ_max(t) is sufficiently larger than the other eigenvalues. All components of u_1(t) are approximately equal to 0.05, while the components of the other T − 1 eigenvectors spread around zero for every time t. Therefore u_1(t) corresponds to the dynamics of the whole market, as in Refs. [17,18]. Hence, averaging over the correlation coefficients, we recover the largest eigenvalue,

$$ \lambda_{\max}(t) \approx \alpha K \bar{c}(t) , $$

where α is an empirical factor which appears due to the noise in the data. The time evolution of the largest eigenvalue is strongly correlated with the mean correlation coefficient c̄(t); the Pearson correlation is 0.998. The quantities λ_max(t) and c̄(t) therefore share the same dynamics. We will show in Sec. 2.2 that c̄(t) captures as much variability in the data values as possible for our data. Figure 1 (a) shows the time evolution of c̄(t). We also present the time evolution of the S&P500 Index in Fig. 1 (b).
Geometric Approach: Principal Component Analysis
We identify each correlation matrix C(t) with a correlation vector

$$ c(t) = (c_1(t), \ldots, c_d(t)) $$

in the real d-dimensional Euclidean space R^d, collecting the d independent off-diagonal elements of C(t); here c_i(t) is the i-th component of c(t). We then apply the principal component analysis (Pearson [15], Hotelling [14]) to quantify orthogonal, and therefore uncorrelated, one-dimensional subspaces in our time series c_i(t), i = 1, ..., d.
The first principal component is defined as the line in R^d with the largest possible variance of the data values. The other principal components are those with the largest data variance while being orthogonal to the preceding components. The number of principal components is smaller than or equal to d. The principal components are spanned by the orthogonal eigenvectors v̂_i, i = 1, ..., d, of the symmetric d × d covariance matrix

$$ W = \frac{1}{T} A A^\dagger . $$

Here A is the d × T data matrix with the d empirical time series c_i(t) as rows and A^† denotes its transpose.
The rank of W is min(d, T), so we cannot apply the PCA to our full data set; we therefore applied the principal component analysis (PCA) to 100 randomly chosen stocks, ending up with d = (100² − 100)/2 = 4950 time series of length T = 5169. Figure 2 (a) shows the distribution of eigenvector components for the first ten principal components. The components of the first normalized eigenvector are concentrated around a constant value of 0.014, while the values of the other nine are symmetrically distributed around zero. Therefore the direction with the largest variance in data values is the subspace spanned by the vector

$$ \hat{v}_1 \approx \frac{1}{\sqrt{d}} (1, \ldots, 1) . $$

The variances of the data values for the first ten principal components are shown in Fig. 2 (b). The variance of the first principal component is much larger than the others. The correlation matrices C(t) from our data set, seen as vectors c(t) ∈ R^d, are thus distributed along v̂_1. Figure 3 shows the projection of our data onto the first three principal components in a scatter plot. The distribution of the data points along the first principal component is dominating. The contribution of the correlation matrix C(t) to the first principal component at time t is given by the scalar product

$$ \langle c(t), \hat{v}_1 \rangle = \sqrt{d}\, \bar{c}(t) , $$

i.e., the mean correlation coefficient times the fixed number √d. The dynamics of the market is therefore dominated by the movement along v̂_1, which is given by c̄(t). This confirms the spectral analysis results discussed in Sec. 2.1. We note that the spectral analysis of the correlation matrix C(t) is the principal component analysis of the standardized returns r̃(t), treated as elements of R^K. Therefore the projection of r̃(t) on the first principal component in R^K at time t is equal to the non-weighted average of the r̃_i(t).
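A compact numerical sketch of this PCA, written against the conventions above (correlation vectors as columns of a d × T matrix), could look as follows; the exact normalization of W is our assumption.

```python
import numpy as np

def principal_components(A):
    """Eigenvalues/eigenvectors of W = A A^T / T for a d x T data matrix A."""
    d, T = A.shape
    W = A @ A.T / T
    evals, evecs = np.linalg.eigh(W)   # eigh returns ascending order
    order = np.argsort(evals)[::-1]    # sort descending by explained variance
    return evals[order], evecs[:, order]

# With v1 ~ (1, ..., 1)/sqrt(d), the projection of c(t) on the first
# principal component reduces to sqrt(d) times the mean of c(t)'s entries.
def first_pc_projection(c_t, v1):
    return c_t @ v1
```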
The projections ⟨c(t), v̂_2⟩ and ⟨c(t), v̂_3⟩ describe the system dynamics along the second and third principal components and are shown in Fig. 4.
Market States: Distinct Periods of the Market
We cluster the data following Ref. [10] and identify the quasi-stationary states of the financial market, which we present in Sec. 3.1. We connect the characteristic states of the market to known historical events in Sec. 3.2. In the previous section we showed that our data is spread along a few dominating subspaces in R^d. To quantify the similarity between any two correlation matrices C(t_a) and C(t_b) we calculate the Euclidean distance between the corresponding correlation vectors,

$$ \| c(t_a) - c(t_b) \| . $$

[Figure 6: Clustering tree of the market state clustering; the mean value of c̄(t) within each state is given in parentheses.]
Market States
As the next step we use the bisecting k-means clustering algorithm [12]. At the beginning of the clustering procedure all of the correlation matrices are considered as one cluster, which is then divided into two subclusters using the k-means algorithm with k = 2. For each cluster α we then calculate its cluster center, which is the mean correlation matrix in this cluster,

$$ \langle C \rangle_\alpha = \frac{1}{N_\alpha} \sum_{t \in \alpha} C(t) . $$

Here N_α denotes the number of cluster elements and t ∈ α symbolically denotes all times t for which c(t) is in the cluster α. The separation procedure is repeated until the cluster size is smaller than a given threshold for every cluster α. We choose the mean distance to the cluster center to be smaller than 0.164, which yields 8 clusters as in Ref. [10]. The market is said to be in a market state α at time t if the corresponding correlation matrix C(t), and hence the correlation vector c(t), is in the cluster α. The time evolution of the market states is shown in Fig. 5. In Figure 6 the corresponding clustering tree is shown. The state occupied on the first day of our data is labeled one; the remaining states are labeled according to the mean value of c̄(t) within the states, as shown in Fig. 6. We group the states into three main classes: the market states one, two and three represent calm states; the states four, five and six are intermediate states; and the states seven and eight are turbulent states. The financial market evolves between these different states. New states form and existing states vanish in the course of time. For example, the first four years are dominated by the states 1 and 2, while in the last four years mainly the states 8 and 7 are occupied.
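The following minimal bisecting k-means sketch mirrors the procedure described above, with scikit-learn's standard k-means used for each binary split; this is our re-creation, not the authors' code, and the stopping criterion is the mean distance to the cluster center.

```python
import numpy as np
from sklearn.cluster import KMeans

def bisecting_kmeans(X, max_mean_dist=0.164):
    """X: (N x d) array of correlation vectors c(t), one row per day."""
    pending = [np.arange(len(X))]
    states = []
    while pending:
        idx = pending.pop()
        center = X[idx].mean(axis=0)  # cluster center: mean correlation vector
        mean_dist = np.linalg.norm(X[idx] - center, axis=1).mean()
        if mean_dist < max_mean_dist or len(idx) < 2:
            states.append(idx)        # cluster is compact enough: keep it
            continue
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(X[idx])
        pending += [idx[labels == 0], idx[labels == 1]]
    return states  # each entry: the times t belonging to one market state
```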
Distinct Time Periods
We divide the entire time period into six dynamically and economically distinct intervals.
(i) Early 1992 to spring of 1996: in this rather calm period c̄(t) varies between 0 and 0.2. The S&P500 Index continuously grows with moderate volatility. The market mainly occupies the first and the second state.
(ii) From spring 1996 until spring 2000: the range that c̄(t) explores, as well as the S&P500 Index, drastically increases. The volatility also becomes larger. The increase of c̄(t) is explained by the appearance of strongly correlated industrial sectors during this period, especially the technology sector. The market state two almost disappears and the market jumps mainly between states five and one. We note that the fifth state appears only during this period.
(iii) Spring 2000 to the second half of 2003: this period fully covers the dot-com bubble and is known as a very turbulent time in financial markets due to the crisis. The S&P500 Index drops continuously, losing about half of its value. The mean correlation coefficient reaches its maximum at 0.48. At the beginning of the crisis, state 3 appears for about one year; this state appeared only once during the entire time period. In the second half, the market switches between states four and six and occupies state seven by the end of 2002. This period includes the market response to the 9/11 attacks.
(iv) From the second half of 2003 until fall of 2007: this period covers the four years before the recent global financial crisis, up to one year before the collapse of Lehman Brothers. As seen from the S&P500 Index in Fig. 1, the market seems to recover after the dot-com crisis, but c̄(t) does not calm down and strongly fluctuates around a mean value of 0.28. The market jumps between states four, six and seven. State six is occupied mainly during this interval. (vi) March 2009 to end of 2012: the market seems to slowly recover as the S&P500 Index grows again. The growth is interrupted by drastic drops, which is reflected in high peaks of c̄(t), which reaches its maximum value of 0.77 within the analyzed 21 years. The mean correlation coefficient does not relax to the values it had before the crisis. The market switches between states seven and eight and decays for short times into the states four and six.
Stochastic Analysis
We describe the stochastic process used to model c̄(t) in Sec. 4.1. In Sec. 4.2 we explain how the explicit model is extracted from the time series. We describe the stochastic analysis of the market states in Sec. 5.4.
Stochastic Processes
We model c̄(t) as a stochastic process described by a Langevin equation, i.e., a stochastic differential equation (SDE) for the variable c̄(t) ∈ R,

$$ \frac{d}{dt}\bar{c}(t) = f(\bar{c}, t) + g(\bar{c}, t)\,\Gamma(t) . \qquad (16) $$

Here f is the deterministic part of (16), the drift function, and g is the diffusion function, which defines the stochastic part of (16). Γ(t) is δ-correlated Gaussian white noise with ⟨Γ(t)⟩ = 0 and ⟨Γ(t_1)Γ(t_2)⟩ = δ(t_1 − t_2). We note that for the dimensionless variable c̄(t) the drift function has the dimension of inverse time and the diffusion function has the dimension of inverse square root of time.
The solution of (16) is defined in terms of stochastic integrals, which depend on the choice of discretization [25,26,27]. Throughout this paper we use Itô's choice (see Itô's interpretation of SDEs [25,28]). The advantage of Itô's definition is that the diffusion term g is uncorrelated with the Gaussian white noise, ⟨g(c̄, t)Γ(t)⟩ = 0 [25]. The drift and diffusion terms can therefore be obtained as conditional moments [25,29],

$$ f(c, t) = \lim_{\tau \to 0} \frac{1}{\tau} \left\langle \bar{c}(t+\tau) - \bar{c}(t) \right\rangle \Big|_{\bar{c}(t) = c} , \qquad (17) $$

$$ g^2(c, t) = \lim_{\tau \to 0} \frac{1}{\tau} \left\langle \left( \bar{c}(t+\tau) - \bar{c}(t) \right)^2 \right\rangle \Big|_{\bar{c}(t) = c} . \qquad (18) $$

Here c denotes the value of the stochastic variable c̄(t) at which the drift or the diffusion is evaluated; at this one instant we distinguish between c̄(t) and a particular numerical value c. The average in Eqs. (17) and (18) is performed over all realizations of c̄(t) for which the condition c̄(t) = c holds. These equations therefore express the time derivative of the mean displacement of c̄(t) at c, and of its square.
Expressions (17) and (18) allow one to estimate the drift and the diffusion directly from the empirical data, as shown in Refs. [30,19] and sketched below; see Refs. [20,31,1,32] for applications. In the present work we model c̄(t) by an Itô stochastic process and estimate the deterministic as well as the stochastic part of the corresponding SDE from the empirical time series.
Estimation of the Conditional Moments
For the estimation of the drift and the diffusion directly from the data set we mainly follow Refs. [30,20,19,31]. Here we briefly sketch the estimation procedure for the drift function, i.e., the first conditional moment (17); the estimation of the diffusion function (18) works accordingly. We first introduce the function

$$ M_c(\tau) = \frac{1}{\tau} \left\langle \bar{c}(t+\tau) - \bar{c}(t) \right\rangle \Big|_{\bar{c}(t) = c} , \qquad (19) $$

for which the drift function is obtained at τ = 0. We note that we dropped the time variable t in the argument of M in Eq. (19) for brevity. For the estimation of M_c(τ) at fixed c as a function of τ, we divide the time series c̄(t) into bins with equal numbers of data points. For every bin I the function M_c(τ) is then estimated as

$$ M_{\bar{c}_I}(\tau) = \frac{1}{\tau} \left\langle \bar{c}(t+\tau) - \bar{c}(t) \right\rangle_{t \in I} . \qquad (21) $$

Here c̄_I is the mean value of c̄(t) in bin I and the average is performed over all data in this bin. We note that for the empirical data this estimation can only be done for discrete values of τ = 1, 2, 3, .... We then fit a second-order polynomial in τ to the empirically estimated values of (21), extracting the desired value of the drift at c̄_I as the constant coefficient of the fitted function. The estimation of (19) is only possible for the realized values of the empirical time series c̄(t). Instead of analyzing the drift function (17) itself, it is more convenient to consider the potential function

$$ V(c, t) = -\int f(c, t)\, dc , \qquad (22) $$

defined as the negative primitive integral of f; the minus sign is a convention. The dynamics of the system is encoded in the shape of V(c, t): the local minima of the potential function correspond to quasi-stable equilibria, or quasi-stable fixed points, around which the system oscillates. In contrast, local maxima correspond to unstable fixed points. We note that potential functions are defined up to an additive constant. For the dimensionless variable c̄(t) the dimension of the potential function is inverse time.
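A direct numerical transcription of this estimator might look as follows. This is a hedged sketch: the bin count, the maximum τ, and the integration rule for the potential are our choices, not taken from the paper.

```python
import numpy as np

def estimate_drift_diffusion(x, n_bins=50, max_tau=5):
    """Bin-wise Kramers-Moyal estimate of drift f(c) and diffusion g(c).

    x: 1-d array, the time series sampled at unit time steps.
    """
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))  # equal-count bins
    centers, drift, diff = [], [], []
    for i in range(n_bins):
        in_bin = (x[:-max_tau] >= edges[i]) & (x[:-max_tau] < edges[i + 1])
        t_idx = np.where(in_bin)[0]
        if len(t_idx) < 10:
            continue  # too few points for a stable conditional average
        taus = np.arange(1, max_tau + 1)
        m1 = [np.mean(x[t_idx + s] - x[t_idx]) / s for s in taus]
        m2 = [np.mean((x[t_idx + s] - x[t_idx]) ** 2) / s for s in taus]
        centers.append(x[t_idx].mean())
        drift.append(np.polyfit(taus, m1, 2)[-1])  # constant term: tau -> 0
        diff.append(np.sqrt(max(np.polyfit(taus, m2, 2)[-1], 0.0)))
    return np.array(centers), np.array(drift), np.array(diff)

def potential(centers, drift):
    """V(c) = -integral of f dc via cumulative trapezoidal integration."""
    steps = 0.5 * (drift[1:] + drift[:-1]) * np.diff(centers)
    return -np.concatenate(([0.0], np.cumsum(steps)))
```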
Market States Dynamics
To quantify the market dynamics while it is in a fixed market state α, we restrict the estimation of (21) and evaluate only the data points

$$ \{\, \bar{c}(t),\ \bar{c}(t+\tau) \; : \; t \in \alpha,\ t+\tau \in \alpha \,\} . \qquad (23) $$

We therefore consider only displacements along the first principal component within the market states; no state transitions are allowed. Potential functions estimated this way provide information about the stability of the market states and reveal the fixed points. As mentioned in Sec. 3.1, we group the states into three main classes according to the hierarchical structure shown in Fig. 6. We estimate the potential functions for each class A evaluating only the data points

$$ \{\, \bar{c}(t),\ \bar{c}(t+\tau) \; : \; t \in A,\ t+\tau \in A \,\} . \qquad (24) $$

Here t ∈ A symbolically denotes all time points at which the market is in a state of the class A. For example, the market might be in state 1 at time t and in state 2 at time t + τ, as these two clusters belong to the same class. We therefore consider only displacements within A, allowing for state transitions between states of the same class.
Results
We show the estimated diffusion function (18) in Sec. 5.1 and discuss the estimated potential function (22) in Sec. 5.2. In Section 5.3 we take a closer look at the dot-com bubble. A detailed study of the market states dynamics is presented in Sec. 5.4.
Diffusion Term
To quantify the time dependence of the diffusion function g(c, t) we estimated the second conditional moment (18) on a time window of four trading years (1008 trading days), which is moved in steps of two trading months (42 trading days). All together we obtain 100 estimates for g(c, t), which we present in Fig. 7. As explained in Sec. 4.2, the estimation is only possible for the realized values of c̄(t). We therefore put all estimated values in a single diagram. We then fit the estimated values by the time-independent function

$$ g(c) = a \sqrt{(c - c_{\min})(c_{\max} - c)} , \qquad (25) $$

which fits our data well, see Fig. 7. The diffusion function (25) is widely used to model stochastic correlations [33,34,35,36,37], as it limits the values of the correlation to the range [c_min, c_max]. From the estimated parameters we obtain the characteristic time scale of the system, which turns out to be approximately one third of the analyzed period. For consistency we estimate (18) for the entire time series c̄(t) at once, as shown in Fig. 7. We note that we fitted (25) only to the data obtained on the sliding window.
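A fit of this bounded square-root form to bin-wise diffusion estimates can be done in a few lines; the synthetic data below stand in for the estimates of the previous section, and the clipping inside the model is our safeguard, not part of Eq. (25).

```python
import numpy as np
from scipy.optimize import curve_fit

def g_model(c, a, c_min, c_max):
    # clip keeps the argument of sqrt non-negative during the fit
    return a * np.sqrt(np.clip((c - c_min) * (c_max - c), 0.0, None))

# Synthetic stand-in for the (centers, diffusion) estimates.
c = np.linspace(0.05, 0.70, 30)
g_est = g_model(c, 0.12, 0.0, 0.8) + 0.001

params, _ = curve_fit(g_model, c, g_est, p0=(0.1, -0.05, 0.9))
print(params)  # recovered (a, c_min, c_max), approximately (0.12, 0.0, 0.8)
```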
Time Evolution of the Potential Functions in the Entire Time Period
To quantify the time dependence of the drift function f (c, t) we estimate the first conditional moment (17) on a time window of four trading years (1008 trading days) which is moved in steps of two trading months (42 trading days). All together we obtain 100 estimates for f (c, t). We then calculate the potential functions (22) which are presented in Fig. 8 (a)-(b). The dates mark the time points in the middle of the estimation time windows. In contrast to the diffusion function, the drift function turns out to be time-dependent. Therefore it is difficult to graphically present many curves in a single diagram, as the potential function (22) is defined up to an additive constant.
To work around this problem we set

$$ V(\bar{c}_0, t) = 0 , $$

where c̄_0 denotes the value at which V(c, t) has its minimum in the first half of its values. In this representation, the deeper a potential function is, the higher are its boundaries. We showed that c̄(t) is described by a stochastic process (16) with a time-independent diffusion term and a time-dependent drift function. In Sec. 2 we showed that the mean correlation coefficient is the dominating variable of the collective market dynamics. The non-stationarity of the potential function is therefore explained by deterministic changes in the collective correlation structure of the market.
Zooming into the Dot-com Bubble
In the previous section we showed that the market evolves in time, switching back and forth between different market states. As an example of a state transition we estimate V(c, t) in the period from early 1999 to early 2006. The interval covers the dot-com bubble. To achieve a higher time resolution we perform the estimation on a time window of two trading years (512 trading days), sliding it in steps of one trading month (21 trading days). Figure 9 shows the time evolution of the estimated potential function. It is flat at the beginning, where the market is mainly in the states 1 and 2, see Fig. 5. During the crisis the values of c̄(t) increase, and the minimum of the estimated potential function shifts accordingly. By the end of 2003 the market settles into state 6 with only short jumps into the states 4 and 1. The potential function becomes constant but has changed its shape compared to the pre-crisis period. The market therefore jumps from a stable state to a turbulent state and then down to another stable state.
Market States Dynamics: Stability, Hierarchy and State Transition
In the previous sections we showed that the mean correlation coefficient is described by a stochastic process (16) with the time-independent diffusion function (25) and a time-dependent drift function. In particular, calm and turbulent periods can be distinguished by the shape of V(c, t). To quantify the market dynamics in a given market state we estimate the potential function for the data points (23). Thus, we only account for displacements within a fixed market state.
The time series for the states 3, 4 and 5 are too short for the estimation of (17), so we combined the time series of the states 2 and 3 together as well as 4 and 5. We denote the resulting states by 2 + 3 and 4 + 5 respectively. As shown in Fig. 6, these pairs consist of states of the same class. Figure 10 (a)-(c) shows the resulting potential functions for each market state.
Potential functions provide information about the stability of the market states. This notion of stability is not based on the time which the market spends occupying a certain state, but is given by the dynamics of the market. States 1, 2+3, 6 and 8 are stable states, as their potential functions have a single deep minimum and therefore a clearly defined fixed point. State 8 mainly appears during the latest financial crisis and represents strong collective correlation on the market. In contrast, state 7 is very unstable: not only does its potential function have two local minima, but it is also the deepest one. The correlation structure is non-stationary within the market state 7. The combined state 4+5 has a half-open potential function; states 4 and 5 are intermediate states between calm and turbulent periods, see Fig. 5. We note that within stable states c̄(t) is described by the SDE (16) with the diffusion function (25) and a linear drift function. In Section 3.1 we grouped the market states into three classes according to the clustering tree, see Fig. 6. Not all of the market states appear simultaneously in a given time interval, as shown in Fig. 5. The first four years of the analyzed time period are dominated by the states 1 and 2, which belong to the first class. In the last four years basically only the states 7 and 8 appear, which build the third class. To quantify the hierarchical structure of the states we estimate V(c, t) for the points (24). We thereby account for displacements within the classes, including state transitions. The resulting potential functions for the three classes are shown in Fig. 10 (a)-(c). These curves envelope the potential functions of the market states of the corresponding class.
Similar to the envelopes, we estimated V(c) on the entire time period at once, as shown in Fig. 10 (d). The potential function of each market state has a distinct position along the first principal component, i.e., a distinct value of c̄(t). We therefore conclude that, while the market is in a given (stable) state, the mean correlation coefficient fluctuates around a mean value, which is defined by the minimum of the potential function, see Figs. 10 and 6. As we showed in Sec. 2.2, the movement along the first principal component is given by the time evolution of c̄(t). Hence the market dynamics within a fixed state is given by the movement along the second and higher principal components, see Figs. 3 and 4. Large changes of the mean correlation coefficient yield state transitions. The market is therefore "hopping" from state to state in the potential landscape shown in Fig. 10 (d). For consistency we calculate the daily steps of the market,

$$ s(t) = \| c(t+1) - c(t) \| , \qquad (31) $$

and the absolute increments

$$ \Delta(t) = \left| \bar{c}(t+1) - \bar{c}(t) \right| , \qquad (32) $$

of c̄(t). Figure 11 (a)-(b) shows the distributions of the steps (31) and the increments (32) within market states compared to the jumps during a state transition. Both the steps and especially the increments are, on average, larger during state transitions than within states, as claimed.
Conclusion
The combination of geometric data analysis and stochastic methods sheds new light on the collective dynamics of complex systems. We applied these techniques to stock market data and evaluated the correlation structure on a sliding time window for a period of 21 years. The collective market dynamics in terms of the principal components is given by the average correlation coefficient. We extracted the underlying stochastic process, which turns out to have a time-independent stochastic term and a time-dependent deterministic term. The latter is represented graphically as a potential landscape and provides information on stability and system fixed points. We established the connection between distinct historical periods on the market and the time evolution of the potential function. The non-stationary market dynamics can be attributed to changes in the deterministic part of the collective market dynamics. We identified quasi-stationary states of the market following Ref. [10] and distinguished three main classes of market dynamics: calm, intermediate and turbulent states. To quantify the market state dynamics we estimated the potential functions, accounting only for displacements within a fixed state. In a given state the average correlation fluctuates around a distinct mean value, which defines a fixed point. The market dynamics within a market state is given by the movement along higher principal components. State transitions are reflected in large changes of the average correlation and correspond to hopping in the potential landscape. Our results are consistent with the random matrix approach of Ref. [38] and contribute to a better overall understanding of market dynamics. While we highlighted the application to financial data in this paper, our approach should prove useful for the study of any quasi-stationary complex system.
Insect and spider biodiversity: A dataset of mountainous wetland sites in Aspromonte National Park (Calabria, southern Italy)
Wetland areas encompass a range of natural habitats characterized by high animal and plant biodiversity. Understanding the impacts of environmental decline in such areas requires in-depth knowledge of the overall biodiversity. This study dataset provides a first evaluation of important sites of insect and arachnid biodiversity in peat bogs, marshes, and streams in Aspromonte National Park in Calabria, southern Italy. It is a basic faunal survey that aids understanding of the importance of these large faunal groups at sites mainly within this national park. The data obtained highlight a rich insect and spider diversity in this region and provide useful information for outlining strategies for the conservation and management of inland aquatic environments at risk from climate change. Moreover, as baseline data, these will be useful for future monitoring and management of other inland aquatic environments similar to those of the study sites reported herein.
The data are freely accessible to scholars, and their use must be agreed with the authors. For ethical reasons, the data will not be made available if they are to be used for commercial purposes.
Value of the Data
• Freshwater ecosystems are biodiversity hotspots [1]. These data highlight the insect and spider diversity of the southernmost inland wetland sites of the Italian Apennines;
• The specimen data obtained provide useful information necessary to outline strategies for the conservation and management of these important wetland areas;
• Data relating to the presence and distribution of endangered species with high conservation value can be a catalyst for further research and act as a starting point for modeling the presence of such species and identifying areas where they are at risk, as reported elsewhere [2];
• The upland bogs and other wetlands of the Aspromonte National Park could be recognized by governmental and conservation agencies as specific habitats for distinctive species not found elsewhere;
• For some species detected in our study, this was the first report of their occurrence not only in this study area, but also in Italy as a whole. Such an awareness of these species in wetland areas could enable assessment of their future risk following the environmental decline of such habitats;
• Four hundred species of insects and 71 species of arachnids were identified, including species of Trichoptera (Kirby, 1813), Hemiptera (Linnaeus, 1758), Diptera (Linnaeus, 1758), Coleoptera (Linnaeus, 1758), and Lepidoptera (Linnaeus, 1758), and members of the Araneae; in addition, species of Opiliones and Scorpiones were also identified. There were differences between sample sites in terms of the most relevant orders of aquatic arthropods, which are important members of complex trophic chains within these wetland ecosystems.
Background
The first objective in creating this dataset on insects and spiders of freshwater ecosystems was to provide useful information for outlining strategies for the conservation and management of inland aquatic environments at risk from climate change. Second, the upland bogs and other wetlands of the study area could be recognized by governmental and conservation agencies as specific habitats for distinctive species not found elsewhere. Moreover, as baseline data for specific habitats, these will be useful for future monitoring and management of other inland aquatic environments that are similar to those of the study sites reported herein.
Data Description
This report describes datasets linked to a repository of insects and spiders collected from March to November 2018 and March to November 2019 at different wetland sites of the inland Aspromonte National Park, Italy (Fig. 1). The study comprised 35 survey sites sampled using direct and indirect collecting methods. The survey sites and taxa were categorized according to the type of environment, altitude, and endemic plant species at each site (Table 1). Of these sites, 29 occurred within the different zoning designations of Aspromonte National Park and six occurred outside the park. From direct observation, some sites were subject to overgrazing, with no action to protect against excessive growth of vegetation and afforestation. A natural or planted wooded phase too close to humid sites is, in fact, unfavorable for the maintenance of this type of habitat, especially in dry climate zones [3,4]. The data for the insect and spider specimens collected at each sample point include geographical coordinates, class, order, family, genus and species, and the number of specimens (abundance). An overview of these sheets is provided in Table 2.
Study region
Aspromonte National Park is located south of the Italian Apennines. The name derives from the Latin meaning 'rugged', or from the Greek 'aspròs', meaning white [10]. According to Blasi [6], the area falls within the Mediterranean ecoregion and lies in the southern province of Calabria. Its vegetation is varied, with widespread mesophilous and deciduous forests of beech, oaks and hornbeam (Fagus sylvatica, Quercus robur, Quercus petraea, Quercus cerris, Quercus pubescens, and Carpinus betulus). Key physiognomic vegetation types include coniferous forests of Abies alba and Pinus nigra, areas of Juniperus shrubs and Dianthus rupicola, Vaccinium heaths, as well as Carex, Sesleria, Nardus, and Festuca grasslands. Studies of the arthropod biodiversity of this area are relatively sparse, because the area has been studied only sporadically; for example, the interesting entomological endemism of the area was reported only recently [11,12]. This lack of knowledge is also partially related to the difficulty of accessing sites because of the complex orography of the area. Thus, there is a need for further work to fully characterize the specific features of this region, in terms of not only its arthropod fauna but also its habitat types and geo-orography, given that other studies have highlighted features specific to this area, reinforcing the importance of this national park [13].
Climate of the area
Aspromonte National Park is located in the central Mediterranean Basin, surrounded from east to west by the sea. It is characterized by a heterogeneous topography and altitudes up to 1,956 m above sea level (asl). Precipitation shows strong seasonal variability as a consequence of the Mediterranean climate of the region. Maximum precipitation occurs in winter (550 mm), followed by autumn (450 mm) and spring (320 mm), and is very low during the summer (100 mm). According to [14], the seasonal precipitation pattern in this national park is strongly related to its orography and the surrounding sea, reflecting the synoptic influence of its geographical features. The southeast region is the most drought-prone, whereas the western section of the park experiences higher yearly precipitation and the eastern region is affected by more intense rainstorms.
Habitat types in Aspromonte National Park
Aspromonte National Park is characterized by a variety of habitats related to the climate of the area (see above) and to various anthropological interventions. It has also been subject to increased afforestation over the past few decades. Table 1 describes the habitats that occur in Aspromonte National Park. Despite this remarkable heterogeneity, sites hosting S. calabrella and W. radicans were used as reference sites for sampling because both species are representative of wetlands of considerable community importance. These streams and rivulets of the mountainous southern Apennines are characterized by weakly flowing, well-oxygenated waters and macrophytic herbaceous communities that host various endemic species. The locations of the 35 study sites are provided in Fig. 1. All sites were located at altitudes above 400 m asl.
Species collection and identification
At each study site, arthropod samples were collected by using different types of trap, such as pitfall traps with attractant and light traps with liquid, together with visual collection with a mowing net and/or an entomological aspirator. Collected samples were preserved immediately in 75% ethanol and transferred to the LEEA Laboratory, Dipartimento PAU, Università Mediterranea di Reggio Calabria, where they were then cleaned and stored in 75% ethanol. All samples were identified to either the morphotype or species level by using the most recently published keys. In some cases, the samples were compared with insect collections and photos of holotypes. Specimens were labeled and preserved in the LEEA Collection, the Civic Museum of Natural Sciences 'E. Caffi', and the Museum of Natural History of Verona. All data were organized alphabetically by class, family, genus and species in a checklist following scientific nomenclature and registered as a dataset in the Mendeley Data Repository (DOI: 10.17632/hcxvkfncgc.2).
Insect and spider species composition and distribution
Of the 471 terrestrial insect species identified, there were 163 species of Coleoptera, 77 species of Lepidoptera (mainly butterflies), 51 species of Trichoptera, 50 species of Hemiptera, and 44 species of Diptera, among others. Of the 73 species of Arachnida identified, there were 71 species of spider, two species of Opiliones, and one species of Scorpiones. The highest number of species of insects and spiders was recorded at peat bog sites (372 species) (Fig. 2). The caddisfly Allogamus silanus (Trichoptera, Limnephilidae) was reported for only the second time, to our knowledge [16]. A hemipteran species new to this region of Calabria, Psammotettix aspromontanus n. sp., was recorded; it is morphologically similar to other Psammotettix spp., from which it is distinguished by the shape of the aedeagus, and it was collected only from the study site at Montalto marsh at ∼1,800 m asl (Fig. 1, site 4), although its host plants were not determined [15]. The funnel weaver Aterigena aspromontensis, recently described by Bolzern et al. (2010) [16], was among the spiders recorded and reported in the present study [17]. The high number of endemic species recorded in this study could be because of the geological age of Aspromonte [10,18] and the fact that, similar to the other mountainous areas of the southern Apennines, it has been isolated from other European and Mediterranean territories for a significant length of time, enabling its fauna to evolve in isolation.
This dataset provides a first insight into the insects and spiders of freshwater and wetland sites in the Calabrian Apennines mountain massif located in southern Calabria, and it can serve as a model of a Mediterranean insular freshwater ecosystem.In addition to providing specific information on the distribution of the species identified in the different habitats investigated, this species dataset provides information on the freshwater community models necessary to create a detailed picture that is fundamental to understanding the effects of climate change.The information on taxonomic units represents data that could easily be repurposed, both through the addition of new data on regional biodiversity and by adding to the completeness of local reference databases.The public availability of these and other data for fragile areas around the world means that such information could assist in future conservation planning, as well as additional data interpretation.
Ethics Statement
The authors have read and followed the ethical requirements for publication in Data in Brief and confirm that the current work does not involve human subjects, animal experiments, or any data collected from social media platforms.
The dataset contains six sheets in a single Excel file: (a) the complete list of arthropod species; (b) peat bogs; (c) mountain stream (riverine) environments characterized by Soldanella calabrella; (d) mountain stream environments; (e) environments characterized by Woodwardia radicans; and (f) stream environments.
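For readers who want to repurpose the deposited data, a minimal loading sketch is given below. The file name and column labels are assumptions based on the description above and should be checked against the actual Mendeley deposit (DOI: 10.17632/hcxvkfncgc.2).

```python
import pandas as pd

# Load all six habitat sheets from the single Excel file at once.
sheets = pd.read_excel("aspromonte_arthropods.xlsx", sheet_name=None)

# Species richness and specimen counts per habitat sheet.
for name, df in sheets.items():
    richness = df["Species"].nunique()
    specimens = df["Abundance"].sum() if "Abundance" in df.columns else len(df)
    print(f"{name}: {richness} species, {specimens} specimens")
```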
The first stage of data collection involved identifying wetland sites within the Aspromonte National Park, including marshes, peat bogs, and streams. Samples of adult insects and spiders were collected by using different types of trap, such as pitfall traps with attractant and light traps with liquid, together with visual collection with a mowing net and/or an entomological aspirator. Specimens were identified to species level by the authors using a stereoscope, and by specialist entomology and spider taxonomists around the world using taxonomic keys.
Table 1
Types of inland wetland monitored and vegetational species group according to the annex to the Habitats Directive (Council Directive 92/43/EEC).
NMDA Receptor Antagonists: Emerging Insights into Molecular Mechanisms and Clinical Applications in Neurological Disorders
Neurodegenerative disorders (NDs) include a range of chronic conditions characterized by progressive neuronal loss, leading to cognitive, motor, and behavioral impairments. Common examples include Alzheimer’s disease (AD) and Parkinson’s disease (PD). The global prevalence of NDs is on the rise, imposing significant economic and social burdens. Despite extensive research, the mechanisms underlying NDs remain incompletely understood, hampering the development of effective treatments. Excitotoxicity, particularly glutamate-mediated excitotoxicity, is a key pathological process implicated in NDs. Targeting the N-methyl-D-aspartate (NMDA) receptor, which plays a central role in excitotoxicity, holds therapeutic promise. However, challenges, such as blood–brain barrier penetration and adverse effects, such as extrapyramidal effects, have hindered the success of many NMDA receptor antagonists in clinical trials. This review explores the molecular mechanisms of NMDA receptor antagonists, emphasizing their structure, function, types, challenges, and future prospects in treating NDs. Despite extensive research on competitive and noncompetitive NMDA receptor antagonists, the quest for effective treatments still faces significant hurdles. This is partly because the same NMDA receptor that necessitates blockage under pathological conditions is also responsible for the normal physiological function of NMDA receptors. Allosteric modulation of NMDA receptors presents a potential alternative, with the GluN2B subunit emerging as a particularly attractive target due to its enrichment in presynaptic and extrasynaptic NMDA receptors, which are major contributors to excitotoxic-induced neuronal cell death. Despite their low side-effect profiles, selective GluN2B antagonists like ifenprodil and radiprodil have encountered obstacles such as poor bioavailability in clinical trials. Moreover, the selectivity of these antagonists is often relative, as they have been shown to bind to other GluN2 subunits, albeit minimally. Recent advancements in developing phenanthroic and naphthoic acid derivatives offer promise for enhanced GluN2B, GluN2A or GluN2C/GluN2D selectivity and improved pharmacodynamic properties. Additional challenges in NMDA receptor antagonist development include conflicting preclinical and clinical results, as well as the complexity of neurodegenerative disorders and poorly defined NMDA receptor subtypes. Although multifunctional agents targeting multiple degenerative processes are also being explored, clinical data are limited. Designing and developing selective GluN2B antagonists/modulators with polycyclic moieties and multitarget properties would be significant in addressing neurodegenerative disorders. However, advancements in understanding NMDA receptor structure and function, coupled with collaborative efforts in drug design, are imperative for realizing the therapeutic potential of these NMDA receptor antagonists/modulators.
Introduction
Neurodegenerative disorders (NDs) are chronic disorders characterized by the progressive loss of neuronal cells, leading to neuronal dysfunction. These disorders are associated with a wide range of cognitive, behavioral and motor dysfunctions, including memory loss, dyskinesia, paralysis, lack of coordination, and dysphasia. Widely studied NDs include Alzheimer's disease (AD), Parkinson's disease (PD), amyotrophic lateral sclerosis (ALS), Huntington's disease (HD), Lewy body dementia, depression, and multiple sclerosis. Although they share similarities in pathology and molecular mechanisms, each ND exhibits distinct clinical, neurobiological, and pathological characteristics influenced by risk factors such as geographical variation, age, race, sex, pre-existing pathological conditions, and xenobiotics. Nevertheless, the mechanism of degeneration in each disease remains inadequately defined [1][2][3][4]. The most common forms of NDs are AD and PD. Currently, an estimated 55 million people worldwide suffer from dementia, a figure projected to rise to 78 million by 2030 and 139 million by 2050. Approximately 3% of the global population aged over 65 years is affected by PD [3,5]. At this rate, a substantial increase in the economic, financial, and social burden is expected, with serious consequences for overall quality of life, especially in developing countries [2,3]. In 2021, the USA spent an estimated USD 355 billion and USD 52 billion on dementia and PD, respectively. Globally, the cost associated with AD is approximately USD 1 trillion annually, and this amount is projected to increase in the future as the population ages [6,7].
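As a rough sanity check on these projections, the implied compound annual growth rates can be computed directly; the Python sketch below assumes a 2021 baseline year, which is our assumption for illustration rather than a figure stated in the cited sources.

```python
# Implied compound annual growth rate (CAGR) of the cited dementia
# projections: 55 M today, 78 M by 2030, 139 M by 2050.
# The 2021 baseline year is an assumption for illustration only.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

print(f"2021-2030: {cagr(55, 78, 9):.1%} per year")    # ~4.0% per year
print(f"2030-2050: {cagr(78, 139, 20):.1%} per year")  # ~2.9% per year
```

The implied growth rate slows after 2030, consistent with the projections reflecting a gradually ageing, rather than exponentially expanding, at-risk population.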
Over the years, numerous studies have identified molecular and cellular mechanisms, giving rise to several etiological hypotheses. However, the mechanism of degeneration is still poorly defined. Despite many therapeutic trials derived from these hypotheses, none has been successful, as current treatments only offer symptomatic relief without halting the degenerative process or regenerating the neurons [8]. The challenge could be attributed to the multifactorial nature of these NDs, as each disorder results from interrelated processes that include oxidative stress, excitotoxicity, neuroinflammation, genetic mutation, endoplasmic reticulum dysfunction, protein aggregation and mitochondrial dysfunction [9,10]. Since the majority of NDs are sporadic, excitotoxicity is prominent among the proposed degenerative mechanisms. Despite the development of several molecules to address some of these mechanisms of degeneration, many have failed primarily because they cannot cross the blood-brain barrier (BBB), a tightly junctioned network of blood vessels and endothelial cells that makes ND treatment extremely complex and challenging [11,12].
Glutamate, the most vital excitatory neurotransmitter in the central nervous system (CNS), plays a crucial role in regulating various metabolic pathways. Under physiological conditions, the concentration of glutamate within the synapse is carefully regulated and maintained through neuron-astrocyte interaction, ensuring a physiological concentration in the extracellular space [13,14]. This glutamate homeostasis, together with ion homeostasis, is essential for preserving normal glutamatergic brain functions, including synaptic formation and signaling, neuronal plasticity, neurotransmission, learning, memory, and ageing. However, in a diseased state, this homeostatic balance is compromised, leading to increased glutamatergic neurotransmission and dysfunction that results in excitotoxicity. In excitotoxicity, excessive extracellular glutamate overactivates the N-methyl-D-aspartate (NMDA) receptor, causing a significant intracellular calcium overload. This overload triggers a cascade of events that eventually leads to neuronal cell death by either apoptosis or necrosis (Figure 1) [15][16][17][18][19][20]. This form of neuronal death occurs gradually over a long period and has been implicated in the pathophysiology of the most common NDs, including AD, PD, ALS and HD, especially in their early phases [21]. As such, minimizing glutamate activity, either through synaptic clearing of excess glutamate or by modulating NMDA receptors, could be therapeutically beneficial in addressing excitotoxicity-mediated neuronal cell death. While the former occurs under physiological conditions through astrocyte-neuron interactions, it becomes compromised in the diseased state, as observed in most NDs. This makes antagonizing the NMDA receptor a compelling strategy for slowing or halting the degenerative process and relieving symptoms associated with NDs, especially where excitotoxicity-mediated death is concerned. Several NMDA receptor antagonists have been explored, but only a few, including amantadine for PD, memantine for AD, and riluzole for ALS, have been successful in clinical trials. Even these successes are not without drawbacks, as they are associated with several side effects that hinder adherence. Moreover, many NMDA receptor antagonists have failed in clinical trials due to undesirable extrapyramidal effects and pharmacokinetic challenges. This review aims to comprehensively explore the molecular mechanisms underlying NMDA receptor antagonists at their respective binding sites. Before delving into this, we offer insight into the structure and functions of NMDA receptors. We also categorize these antagonists/modulators based on their binding sites and highlight the associated side-effect profiles, particularly those that have impeded their development. Furthermore, we propose new directions for investigating NMDA receptor antagonists as potential treatments for NDs. Understanding these concepts could pave the way for innovative strategies in the treatment of neurodegenerative disorders. Additionally, current challenges and future perspectives in the field of NMDA receptor antagonists are discussed.
Structure and Functions of NMDA Receptors
The NMDA receptor is one of the ionotropic glutamate receptors that carry out excitatory neurotransmission in the CNS [22]. Under resting conditions, the NMDA receptor, primarily located at the postsynaptic site of neurons, is blocked by Mg2+. However, upon activation by glutamate or postsynaptic depolarization, it becomes highly permeable to cations, predominantly calcium ions. NMDA receptor subunits fall into three families: GluN1, GluN2 and GluN3. The GluN2 subunits are further divided into four subtypes (GluN2A-D), while GluN3 is subdivided into two (GluN3A-B). Despite the distinct biochemical and biophysical properties exhibited by GluN1 and GluN2 subunits, their combination forms the traditional heterotetrameric NMDA receptor; the resulting channel is composed of one or more of the GluN2 subtypes coupled with the GluN1 subtype. Binding of glycine or D-serine (co-agonists) together with glutamate (agonist) is necessary for optimal NMDA receptor activation. This requirement for dual agonism, a distinct feature that sets the NMDA receptor apart from other neurotransmitter receptors, remains a topic of debate; the consensus, however, is that glutamate triggers NMDA receptor activation, while glycine or D-serine controls the level of receptor activity. While the presynaptic function of NMDA receptors mediates neurotransmitter release and long-term plasticity, activity at the postsynaptic part of neurons is responsible for the receptor's slow current and synaptic plasticity. The binding affinity of glycine or D-serine for the GluN1 subunit is contingent on the specific brain region [7,16,[23][24][25][26][27][28][29][30]. In contrast, the NMDA receptor channel formed by GluN1/GluN3 subunits is less sensitive to Ca2+ influx, not readily influenced by Mg2+ block, and can be activated by glycine alone [28]. Thus, this type of NMDA receptor complex is less involved in Ca2+-mediated responses and is likely to have minimal impact on excitotoxicity.
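To make the combinatorics of subunit assembly concrete, the short Python sketch below enumerates the possible GluN2 pairings of a conventional GluN1-containing tetramer. It is a simplified illustration under stated assumptions: two obligatory GluN1 subunits, with GluN1 splice variants and GluN3-containing assemblies ignored.

```python
from itertools import combinations_with_replacement

# Conventional NMDA receptors: two obligatory GluN1 subunits plus two
# GluN2 subunits. Identical GluN2 subtypes give a di-heteromeric
# receptor; two different subtypes give a tri-heteromeric receptor.
# (Simplified: GluN1 splice variants and GluN3 assemblies are ignored.)
GLUN2_SUBTYPES = ["GluN2A", "GluN2B", "GluN2C", "GluN2D"]

for s1, s2 in combinations_with_replacement(GLUN2_SUBTYPES, 2):
    kind = "di-heteromeric" if s1 == s2 else "tri-heteromeric"
    print(f"GluN1/GluN1/{s1}/{s2}  ({kind})")
# 10 compositions in total: 4 di-heteromeric + 6 tri-heteromeric
```

Even this simplified count of ten possible compositions hints at why resolving the contribution of individual subtypes to physiology and pathology is difficult, a point the review returns to below.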
Interestingly, GluN1 and GluN2 subunits share a fundamental structural similarity with other glutamate-gated ion channels or ionotropic glutamate receptors. All ionotropic glutamate receptor structures are organized into domains: each subunit polypeptide chain consists of an amino-terminal domain (ATD), a ligand-binding domain (LBD), a transmembrane domain (TMD), and a carboxy-terminal domain (CTD). While the ATD is mainly responsible for the assembly and regulation of subunits, the CTD plays a major role in receptor transport and in anchoring the receptor to other intracellular molecules, enabling optimal interaction. However, a distinction lies in the presence of an asparagine residue within the second transmembrane domain (M2 loop) of the GluN1 and GluN2 subunits. This region is suggested to serve as the pore-forming part of the NMDA receptor subunit and may be responsible for the ion permeability of the channel. The other transmembrane elements of the ionotropic structure are the three membrane-spanning helices M1, M3, and M4 [23,24,28,[30][31][32][33][34]. Another distinctive feature of the NMDA receptor relative to other ionotropic receptors is the proximity of the GluN2 ATD to the LBD in a GluN1/GluN2C receptor complex, enabling the binding of a positive allosteric modulator [34]. In terms of localization in the CNS, the GluN1 subunit is expressed ubiquitously at every developmental stage. This is not the case for the GluN2 subunits, some of which exhibit uneven distribution, especially in the adult stage. While GluN2A is expressed widely, GluN2B, GluN2C and GluN2D are predominantly expressed in the forebrain (cortex, striatum, and hippocampus), cerebellum and midbrain, respectively. Given this differential distribution, it may be possible to target specific GluN2 subtypes with scaffolds that address a particular neurodegenerative disease while exhibiting a good side-effect profile [7,31,33,35]. However, GluN2A and GluN2B form the main functional ion channels in the CNS, owing to the low channel-opening probability of GluN2C- and GluN2D-containing receptors [36]. Therefore, selectively targeting GluN2A or GluN2B subunits could be significantly impactful in the development of potential therapeutic agents, although one needs to be mindful of the potential adverse effects posed by such agents.
As illustrated in Figure 2, the binding of glutamate and/or glycine to the LBD of the GluN2 or GluN1 subunits, respectively, which occurs during the transition of the NMDA receptor from a resting state to an active state, causes the LBD bi-lobe to close and subsequently pulls on the LBD-TMD linkers. This opens the ion channel pore, leading to an influx of Ca2+ [28,30]. This NMDA receptor-mediated Ca2+ response mediates long-term potentiation and synaptic plasticity, the cellular basis of learning and memory, and maintains neuronal health. The physiological functions of NMDA receptors are determined by their subunit composition, the location of subunits within the CNS, and the developmental stage of the brain (from embryo to adult) [26,34]. Notably, recent studies in mice have observed a reduction in the expression and function of the NMDA receptor, along with diffusion of the NMDA receptor, especially the GluN2B subtype, to the dendritic spine, leading to the formation of extrasynaptic receptor pools. This phenomenon is associated with advanced ageing, and the activation of these extrasynaptic NMDA receptors is implicated in neuronal cell death and accelerated age-related cognitive decline [37]. As such, targeting the NMDA receptor has been suggested to be therapeutically relevant and useful in addressing various neurodegenerative disorders, and a great number of agents directed at these targets have been developed and explored [28].
Types and Molecular Mechanisms of NMDA Receptor Antagonists
Several studies have implicated excitotoxicity as a prominent factor in the pathogenesis of most neurodegenerative disorders. This makes exploring NMDA receptor antagonists a potentially viable therapeutic approach to addressing these disorders, and a number of such antagonists have been investigated as potential neuroprotective agents. Despite extensive research in this area, developing these antagonists into effective therapeutic tools has been an uphill challenge, because blockade also compromises the physiological function of NMDA receptors, resulting in extrapyramidal side effects such as cognitive impairment, hallucination, and psychosis. Nevertheless, the distinct distribution of subunits across different parts of the brain, particularly the GluN2 subtypes, offers hope for the design and development of subunit-selective antagonists with acceptable side-effect profiles [38]. These antagonists are categorized based on their binding sites into competitive, non-competitive, and negative allosteric antagonists; their NMDA receptor subunits/subtypes and their pharmacological and side-effect profiles are highlighted in Table 1.
Competitive NMDA Receptor Antagonists
Understanding the molecular mechanisms underlying neurodegenerative disorders led to the development of competitive NMDA receptor antagonists (Figure 3). These antagonists bind directly to the binding sites of glycine or glutamate at the GluN1 or GluN2 subunits, respectively. Regarding their binding interactions, analysis of the cryo-EM structure of an intact NMDA receptor complex bound to antagonists, and FRET analysis of the crystal structure of the LBD, indicate that the GluN1 and/or GluN2 clamshell opens by various degrees (13-28°) compared with glycine or glutamate binding at the same active site. These clamshell openings relax the tension in the LBD-TMD linker, resulting in closure of the ion channel pore (Figure 4) [28,[39][40][41]. Functional NMDA receptor antagonists have been shown to exhibit anticonvulsant, anti-ischemic, antidepressant-like and anxiolytic-like properties [39,42,43]. Despite their strong activity in attenuating glutamate-mediated excitotoxicity, these antagonists have been marked by unfavorable side effects such as hallucination, agitation, confusion, paranoia, delirium, drowsiness, and coma. These adverse effects render them unsafe for human use, leading to their failure in clinical trials [44,45]. A notable example is D-CPP-ene, initially touted as a promising antiepileptic agent due to the absence of phencyclidine-like adverse effects at therapeutic doses in pre-clinical studies. It was also well tolerated in phase I clinical trials, with healthy volunteers tolerating doses of up to 2000 mg/day. However, D-CPP-ene was terminated at phase II due to severe adverse effects, including confusion, disorientation, gait ataxia and sedation, or worsened seizures, noted at daily doses of 500-1000 mg/day [42]. Moreover, the majority of competitive NMDA antagonists, except SDZ 220-581, permeate the BBB poorly due to their hydrophilic nature [44,46,47]. Table 1 illustrates the pharmacological actions and adverse effects associated with a few competitive NMDA antagonists investigated for neurological disorders. Despite a decades-long search for a competitive antagonist with a good safety profile and minimal side effects, none has completed clinical trials, owing to the associated psychotomimetic or dopaminergic transmission side effects. Generally, antagonists targeting GluN2 (A-D) are more prone to these unwanted adverse effects than those targeting GluN1 subunits. While several non-selective NMDA or GluN2 antagonists have displayed psychotomimetic and/or dopaminergic side effects similar to MK-801, GluN1 antagonists have shown more favorable outcomes [48][49][50][51]. Therefore, it has been suggested that GluN1 antagonists could potentially address various neurological disorders. However, available information is derived only from preclinical studies on disorders such as anxiety, depression, and epilepsy. Moreover, the clinical relevance of a GluN1 antagonist is highly debatable because of the wide distribution of GluN1 subunits in the central nervous system and the important functions of co-agonists in NMDA receptor physiology. Clinical studies for this class of antagonists are needed to confirm their therapeutic relevance. Also noteworthy is the limited preclinical data for complex neurodegenerative disorders like AD and PD [28,39,41,42,46,[52][53][54][55][56][57][58][59][60]. Preclinical and clinical studies are essential to bridge this knowledge gap and determine the potential efficacy of GluN1 antagonists in treating these neurodegenerative disorders.
Uncompetitive or Non-Competitive NMDA Receptor Antagonists
Decades ago, the failure of competitive NMDA receptor antagonists to effectively address neurological disorders clinically redirected research towards non-competitive NMDA receptor antagonists. During this period, the focus was on targeting this binding site to tackle disorders such as depression [61], PD, and AD, with the aim of alleviating the undesirable adverse effects associated with competitive NMDA receptor antagonists. These non-competitive NMDA receptor antagonists, also known as channel blockers, include phencyclidine (PCP), dizocilpine maleate (MK-801), ketamine, and tiletamine (Figure 5). They act by binding with high affinity to the PCP binding site at the entrance of the channel gate, as shown in Figure 6, to block calcium-mediated responses. Consequently, they have demonstrated neuroprotective effects in conditions such as stroke, cardiac arrest, and neurodegenerative disorders. Moreover, they have been shown to display anti-dyskinetic and anticonvulsant effects, although outcomes may vary depending on the rodent strains or models employed [62][63][64][65][66][67]. These variations have produced conflicting reports in the literature on some of these open-channel blockers. For instance, one group found no anticonvulsant effect of ketamine in a 4-aminopyridine (4-AP)-induced epileptic model of hippocampal slices, while another group demonstrated its anticonvulsant properties in a 4-AP-induced seizure model in male Wistar rats [62]. The differences could be attributed to variation in the NMDA receptor subunit complexes expressed by the cells or animals, which is influenced by age. Despite their efficacy, their high affinity means they bind rapidly and dissociate slowly, prolonging channel blockade and resulting in unfavorable clinical outcomes [68]. Similar to most competitive NMDA receptor antagonists, they are known to induce adverse effects, such as neuropsychological, psychotomimetic, and dopaminergic transmission effects, which limit their clinical use. Some of these antagonists can induce schizophrenia-like symptoms, even in healthy volunteers. The dopaminergic transmission effects of non-competitive NMDA receptor antagonists stem from their ability to activate the dopaminergic system, subsequently increasing dopamine synthesis, release and metabolism in various parts of the brain [64,68,69]. Compounding the problem is the influence of these antagonists on the turnover and release of serotonin, which is known to exacerbate schizophrenia-like symptoms [70].
The rediscovery of the clinically tolerated memantine and amantadine as non-competitive NMDA receptor antagonists marked a significant breakthrough in the treatment of neurodegenerative disorders. Much like MK-801, both drugs bind to the PCP binding site of the ion channel complex. However, the efficacy and clinical use of amantadine and memantine as an anti-dyskinesia agent (in PD) and a neuroprotective agent (in AD), respectively, are partially dependent on their weak NMDA receptor antagonism [66,71]. This clinical tolerability is attributed to a considerably shorter residence time within the open channel compared to MK-801. Interestingly, a study demonstrated the ineffectiveness of memantine at low-level NMDA receptor activation but found it to be highly efficacious in the overactivated state. These favorable kinetics make such blockers better neuroprotective agents with minimal side-effect profiles when compared to MK-801 [71][72][73][74][75]. Additionally, both amantadine and memantine are known to attenuate 4-AP-induced epileptiform activity in a rat model. Compared to amantadine, memantine has displayed better therapeutic indices in the management of epilepsy and other neurological or psychological disorders [62].
Similar to competitive NMDA receptor antagonists, some non-competitive antagonists have been associated with undesirable adverse effects. These effects are attributed not only to their strong binding affinity for the PCP binding site but also to their promiscuity. For example, antagonists like ketamine are known to bind to other receptors, enhancing the activity or transmission of other neurotransmitter systems such as dopamine, serotonin, noradrenaline, α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA), acetylcholine, opioid, and gamma-aminobutyric acid (GABA) systems [76][77][78]. This can lead to unwanted side effects, including addiction, dependency, and tolerance, particularly with prolonged use.
Allosteric NMDA Receptor Antagonists
Despite numerous studies on competitive and uncompetitive NMDA receptor antagonists, addressing neurodegenerative disorders still poses enormous challenges, as current treatments only offer symptomatic relief. To date, no drug has successfully halted or slowed the degenerative process. An alternative to competitive and non-competitive antagonists is the allosteric modulation of NMDA receptors. The influence of GluN2 on the biophysical characteristics of NMDA channels provides a potential allosteric target [79][80][81]. For instance, the GluN2B subunit presents an attractive site for therapeutic intervention in chronic neurodegenerative diseases like PD, AD, ALS, HD, and multiple sclerosis, as well as in acute neuronal diseases like traumatic brain injury, epilepsy, and stroke. What makes this target particularly intriguing is its enrichment in perisynaptic and extrasynaptic NMDA receptors, major contributors to excitotoxic-induced neuronal cell death. However, antagonists targeting this subunit have failed in clinical trials due to their lack of benefit in PD patients. Like non-competitive antagonists, the binding of allosteric NMDA antagonists to the receptor is independent of the presence or absence of agonists (glutamate, glycine or D-serine) at the binding site [27,[82][83][84][85][86][87][88][89][90][91][92][93][94]. Allosteric modulators (Figure 7) of NMDA receptor channels can be classified into positive and negative modulators, and several have been explored extensively. Both positive and negative modulators bind to the ATD part of the NMDA receptor channel to potentiate or block, respectively, Ca2+-mediated responses. As shown in Figure 8, modulators like ifenprodil, a selective GluN2B inhibitor, act by binding to the interface between the GluN1 and GluN2B ATDs, and the mobility of the GluN2 lower lobe is vital for NMDA receptor inhibition [34,80,88,89,[95][96][97][98]. This was confirmed by a cross-linking study that indicated a decreased distance between the lobes of the GluN2 ATD in the ifenprodil-bound NMDA receptor and immobile GluN1 and GluN2 ATD upper lobes, despite conformational changes within the ATDs [80]. Similar to ifenprodil are radiprodil and Ro 25-6981, which have been shown to selectively inhibit GluN2B subunits to antagonize NMDA receptors. These selective GluN2B antagonists are efficacious, with reduced side effects, against a few neurodegenerative disorders. However, clinical trials of ifenprodil and radiprodil were terminated early due to poor bioavailability and lack of recruitment, respectively. For instance, radiprodil was initially investigated for infantile spasm syndrome due to its stronger anticonvulsant effects in younger rat pups compared to adult animals, but it was terminated in the early stage of phase 2 clinical trials because of challenges in recruiting infant patients within the prescribed timeframe [89,99,100]. Ifenprodil was investigated as an adjunct therapy in PD patients with waning efficacy of levodopa but failed in phase 2 clinical trials, as the drug did not reduce tremor, rigidity and bradykinesia, owing to poor BBB permeability [101]. However, it found success in treating cerebral ischemic disease and is currently marketed in Japan and France as Cerocral®, a cerebral vasodilator [102]. Other negative allosteric modulators include DQP-1105, which has been shown to selectively block the GluN2D subtype to regulate synaptic transmission in the subthalamic nucleus, substantia nigra, striatum and spinal cord, and the GluN2C subtype to presynaptically modulate GABAergic synaptic transmission in the suprachiasmatic nucleus [103]. TCN-201 (a sulfonamide derivative) was also identified as a promising selective GluN2A allosteric modulator but could not proceed to biological studies due to its poor solubility [27,82,96,104]. However, analogues of TCN-201, like MPX-004 and MPX-007, with enhanced solubility and increased efficacy have been designed and explored. These analogues could provide opportunities to further understand the mechanism of GluN2A NMDA receptor allosteric modulators [27,30]. Regrettably, the majority of active drugs targeting allosteric sites have not produced the desired therapeutic success in clinical trials. However, recent developments have identified a series of phenanthroic and naphthoic acid derivatives with enhanced GluN2B, GluN2A or GluN2C/GluN2D selectivity, depending on their functional moieties, and improved pharmacodynamic properties [92,96,[104][105][106]. Additionally, these derivatives are amphipathic, possessing both hydrophobic and charged moieties. This characteristic promotes BBB permeation via the neurosteroid transporter, suggesting good bioavailability for this group of compounds, particularly in addressing NDs [92,96].
Current Challenges and Future Perspectives
Over the years, translating preclinical studies of NMDA receptor antagonists into successful clinical drugs has been an uphill task. This is mainly due to the complexity of the NMDA receptors, as their optimum function is needed for several brain physiological functions; deviation in the form of hypofunction or hyperfunction is detrimental to general well-being [30]. Despite decades of studies, the use of NMDA receptor antagonists to address these defects in many neurodegenerative disorders while still maintaining the optimum physiological function of the NMDA receptor has yet to produce the desired outcome. Not only are there conflicting results in preclinical studies, but therapeutic effects observed in animal studies have failed to translate into human studies. This suggests a limited understanding of the NMDA receptor's physiological functions in the human brain. Compounding this issue is the presence of di-heteromeric (GluN1/GluN2B or GluN1/GluN2D) and tri-heteromeric (GluN1/GluN2B/GluN2D) NMDA receptors in certain neurons, which significantly increases the difficulty of resolving the function of different NMDA receptor subtypes [96,150]. Moreover, several of these antagonists are marked by undesirable adverse effects like psychotomimetic, dopaminergic transmission, or schizophrenia-like effects. These adverse effects are linked to the competitive inhibition of endogenous excitatory neurotransmitters, the strong binding affinities of these antagonists, activity at other neurotransmitter receptors, or the inability of NMDA receptors to metamodulate other neurotransmitter receptors [25]. Even though clinically tolerated NMDA antagonists (memantine and amantadine) are available, they are not without side effects. Interestingly, a few selective allosteric modulators (such as ifenprodil) targeting NMDA receptors have shown promising results with good side-effect profiles compared to competitive and non-competitive NMDA receptor antagonists. However, they are often poorly bioavailable, and their selectivity for a particular NMDA receptor subtype or subunit is relative. Not only are they difficult to study, despite knowledge of their binding sites [88], but they are also known to interact with other neurotransmitter receptors. For instance, ifenprodil, a selective GluN2B negative allosteric modulator, has been shown to also act at α1 and α2 adrenoceptors and at sigma and serotonin (5-HT1A and 5-HT2) receptors [85,90,152]. Similar GluN2B antagonists, like Ro 25-6981, with low cross-reactivity with adrenoceptors, are known to bind to 5-HT, histamine-1 (H1) and sigma receptors [152]. Thus, they can cause unwanted side effects. Nevertheless, the significance of the GluN2B subunit, particularly at extrasynaptic sites, in excitotoxic-induced cell death makes it a favorable therapeutic target for potential modulators. Modulators with enhanced solubility and bioavailability must be specific and selective for the GluN2B subtype. To achieve this, a better understanding of the mechanism of allosteric binding of antagonists or modulators to NMDA subunits is needed, but this remains poorly defined to date.
To improve the solubility and bioavailability of these GluN2B antagonists, researchers could explore nano-drug formulations, particularly solid lipid nanoparticles. This form of drug delivery system would involve incorporating the drug molecules into stearic acid/poloxamer 188 nanoparticles and coating them with chitosan. These nanoparticles, with an average size of 300 nm, have been shown, via intranasal administration, to effectively cross the BBB in brain endothelial cell permeation and uptake studies. Moreover, the cationic nature of chitosan may aid adhesion to brain endothelial cells, improving the transport of the nanoparticles. Additionally, the nose-to-brain delivery route exploits a richly vascularized mucosa that enhances the bioavailability of loaded drugs. Similar drug delivery systems have been used to improve the BBB permeability of dopamine and riluzole, with the potential to treat PD and ALS, respectively [153][154][155]. Surprisingly, to date, no studies have explored or reported this form of drug delivery system for ifenprodil and its derivatives, which were once touted as promising agents for the treatment of NDs. Exploring such an avenue could provide the much-needed breakthrough for this class of antagonists in halting or slowing the degenerative processes mediated by glutamate-induced toxicity.
Recently, fluoroethylnormemantine (FNM), a novel NMDA receptor antagonist derived from memantine, has been developed for the treatment of stress-induced maladaptive behavior associated with depression. Compared with (R,S)-ketamine, FNM has been shown to exert rapid antidepressant actions with a low side-effect profile in mice by selectively antagonizing NMDA receptors [156]. Similar to memantine, FNM binds in a non-competitive manner, and in the open active state, to the PCP site of the NMDA receptor, as observed in radioligand binding studies ([18F]-FNM) [157]. FNM is currently in phase 1 clinical trials, offering hope for the treatment of neurodegenerative and psychiatric disorders, including post-traumatic stress disorder, AD, major depressive disorder and treatment-resistant depression [158]. Another promising rapid antidepressant agent is esmethadone (REL-1017), a dextro isomer of methadone with little or no activity towards the opioid receptor. Like memantine, esmethadone is a low-affinity non-competitive NMDA receptor antagonist, and it is currently in a phase 3 clinical trial for the treatment of major depressive disorder [159]. The development of these antagonists (Figure 9) emphasizes the prominent role of excitotoxicity in neurodegenerative disorders and renews interest in polycyclic cage structures with NMDA receptor selectivity and fast kinetic interactions. Moreover, these polycyclic cages are permeable to the BBB with a minimal side-effect profile. This offers the opportunity to explore more polycyclic cages acting at the PCP binding site in the open active NMDA receptor state. Several structurally related polycyclic cages have been shown to display neuroprotective effects against glutamate-induced toxicity and other degenerative processes [160]. However, the reported findings are based solely on experimental data, and further exploration through clinical studies on these groups of NMDA receptor antagonists is warranted. The multifaceted nature of many neurodegenerative disorders makes designing and developing potential treatments complex and highly challenging. Factors contributing to the degenerative process are interrelated, including excitotoxicity, oxidative stress, neuroinflammation, protein aggregation, and mitochondrial dysfunction. The majority of NMDA receptor antagonists are designed to target excitotoxicity, corresponding to a single-target approach. Moreover, a few of these antagonists fail to cross the BBB due to their hydrophilic nature. In recent years, the focus has been on designing and developing multifunctional agents with the potential to address glutamate-induced toxicity alongside other therapeutic targets, adopting a multi-target approach to drug development. These antagonists, sharing functional and structural similarities with amantadine and memantine, exhibit polycyclic cage structures that confer lipophilicity and enhanced permeation across the BBB. For example, a series of triazole-bridged aryl adamantane derivatives have been explored as multifunctional agents for the potential treatment of AD. These derivatives demonstrated potent inhibition of acetylcholinesterase, Aβ aggregation, and the NMDA receptor, as well as good BBB permeability and a good safety profile in neuronal cell lines such as SH-SY5Y neuroblastoma cells, making them promising candidates for the treatment of AD [161]. Similarly, a series of polycyclic propargylamine and acetylene derivatives were investigated, revealing multifunctional activities, including neuroprotection, monoamine oxidase (MAO) inhibition, anti-apoptotic activity, and inhibition of NMDA receptors and voltage-gated calcium channels [162]. Another study explored carbamate-based cholinesterase inhibitors, with structural similarities to acetylcholine, as potential multifunctional agents for AD treatment. These inhibitors exhibit diverse scaffolds, such as physostigmine, isosorbide, quinazoline, quinoline, xanthone, chalcone, flavonoid, indole-like, resveratrol and coumarin derivatives. In addition to their anti-cholinesterase activity, preclinical findings suggest multiple activities, including antioxidant properties, anti-neuroinflammation, metal chelation, neuroprotection, monoamine oxidase inhibition, neurotrophic effects and/or reduction of Aβ aggregation. Hence, they represent promising multifunctional candidates for the treatment of AD [163,164]. However, the majority of the available data for these inhibitors or antagonists stems from preclinical or experimental studies. There is a pressing need for in vivo and clinical studies to establish the clinical efficacy of these groups of compounds.
Currently, only two FDA-approved multifunctional drugs (Namzaric® and Auvelity®) are available, each containing an NMDA receptor antagonist and another therapeutic agent, for the treatment of neurodegenerative disorders. Namzaric® is marketed for moderate to severe AD, while Auvelity® is designated for the treatment of agitation associated with AD. Despite the enhanced activity and improved adherence of each combination over its components, they have demonstrated side-effect profiles similar to their parent drugs [165][166][167]. Therefore, there is a need for multifunctional hybrids capable of antagonizing NMDA receptors, providing symptomatic relief, and targeting other degenerative processes such as neuroinflammation, oxidative stress and mitochondrial dysfunction. Designing and developing selective GluN2B antagonists/modulators with polycyclic moieties and multitarget properties would be highly desirable. Such a multifaceted approach, with polycyclic scaffolds that confer good bioavailability, holds significant promise for addressing neurodegenerative disorders [168].
Conclusions
The significance of glutamate-induced excitotoxic death in the pathogenesis of neurodegenerative disorders is well established. With this knowledge, the ideal approach would be to use NMDA receptor antagonists to halt the degenerative process. Despite years of research, developing such agents has yielded little success, as current treatments, such as amantadine and memantine, only offer symptomatic relief. Many competitive and non-competitive NMDA receptor antagonists have been explored but are marked by undesirable psychotomimetic side effects. These adverse effects are linked to the strong NMDA receptor-binding affinity of these antagonists, or to their metamodulation of NMDA receptors, which negatively influences physiological function. Interestingly, a more detailed exploration of the structure and function of NMDA receptors has led to the development of some selective negative allosteric modulators with good side-effect profiles. These modulators offer promise in addressing neurodegenerative disorders.
The distinctive biophysical features and localization of NMDA receptor subunits provide a great opportunity for developing clinically effective drugs with optimum safety profiles. For instance, extrasynaptic membranes are rich in the GluN2B subunit, which serves as a key mediator of excitotoxic neuronal cell death. Therefore, selectively blocking this subunit would be therapeutically beneficial in addressing glutamate-induced cell death, especially in conditions such as AD and PD, where neurons in the brain cortex, hypothalamus, or striatum are predominantly affected. However, some of the developed antagonists fail in clinical trials due to poor bioavailability or lack of recruitment. Nano-drug formulations, such as solid lipid nanoparticles and nanostructured lipid carriers, could be explored to improve the BBB permeability of these antagonists. Moreover, their selectivity toward the different GluN2 subtypes is often relative, largely because of the amino acid sequence similarities among GluN2 subunits. For example, the overall amino acid sequences of the GluN2A and GluN2B subunits are nearly identical, making it difficult to establish strong subunit specificity for certain ligands [148,169]. Designing and developing agents that specifically and selectively target only GluN2B subunits is crucial for overcoming these challenges and improving the success rate in clinical trials [84]. This could be achieved by utilizing a receptor-based virtual screening method to identify amino acid residues that are unique to GluN2B subunits, followed by a structure-based virtual screening technique to identify small molecules with optimal binding interactions with these targets. This targeted approach holds promise for the development of novel NMDA receptor antagonists that effectively address glutamate-induced excitotoxicity with minimal side effects.
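As a minimal sketch of the first step of such a workflow, the Python snippet below flags alignment columns where two sequences differ. The fragments shown are hypothetical toy stand-ins, not real GluN2A/GluN2B sequences; an actual screen would start from aligned full-length entries (e.g., human GRIN2A/GRIN2B sequences from UniProt).

```python
# Toy, pre-aligned fragments: hypothetical stand-ins for GluN2A/GluN2B
# stretches ("-" marks an alignment gap). Real work would first align
# full-length sequences before this comparison step.
glun2a = "NIAVLLSTTQ-GERAFREAV"
glun2b = "GIAVILVGTSSGEKSLKEAV"

# Collect alignment columns where the residues differ; these are the
# candidate selectivity-determining positions to prioritize in a
# subsequent structure-based (docking) screen.
divergent = [
    (i + 1, a, b)
    for i, (a, b) in enumerate(zip(glun2a, glun2b))
    if a != b and "-" not in (a, b)
]
for pos, a, b in divergent:
    print(f"column {pos:2d}: GluN2A={a}  GluN2B={b}")
print(f"{len(divergent)} divergent columns flagged")
```

The point of the sketch is only the logic: given the high GluN2A/GluN2B sequence similarity noted above, the short list of divergent residues is exactly where subtype-selective binding interactions would have to come from.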
However, a remaining challenge is the subunit diversity of the NMDA receptor channel complex, which is highly complex and not completely understood despite decades of X-ray crystallography studies on these subunits [148,170]. One of the most intriguing and challenging aspects of studies involving certain NMDA receptor antagonists is the discrepancy between preclinical and clinical findings, ultimately resulting in clinical trial failures. This may, in part, be linked to differences in LBD residues between human and animal (rodent) NMDA receptor subunits. Additionally, the expression of these subunits may differ at each developmental stage of the animal, as observed in rodents, leading to translational failure [148]. This highlights our limited understanding of NMDA receptor structure and function, particularly in humans, which remains poorly defined to date. Further research and refinement in drug design, together with a better understanding of the molecular structure and functions of the NMDA receptor, are crucial to fully unlock the therapeutic potential of this strategy in treating neurodegenerative disorders. Elucidating the structure of human NMDA receptor subunits will be a step in the right direction, but it requires the collaborative efforts of medicinal chemists, physicists, bioinformaticians, computational chemists, and structural biologists.
Figure 3. Structures of selected competitive NMDA receptor antagonists.
Figure 4. Schematic representation of competitive antagonism at the NMDA receptor.
Figure 5. Structures of channel blockers or non-competitive NMDA receptor antagonists.
Figure 6. Schematic representation of NMDA receptor antagonism by channel blockers or non-competitive antagonists.
Figure 8. Illustration of negative allosteric modulators at the NMDA receptor binding site.
Figure 9. Non-competitive NMDA receptor antagonists presently in clinical trials.
Table 1. Pharmacological and side-effect profiles of developed NMDA receptor antagonists.
Role of Leading Programs in Doctoral Education: A New Type of Leadership Education in the Sciences at University of Hyogo, Japan
Fostering global leaders for the next generation is an important mission of universities. In Japan, Leading Programs in Doctoral Education (LP) have been implemented in many graduate schools. The main goal of this program is to foster PhDs with deep specialization and peer leadership who will be able to compete well internationally. The Graduate School of Life Science, University of Hyogo is implementing an LP to foster global leaders using cutting-edge technology. It is also trying to create new evaluation criteria for human resource development with its corporate sponsors. The success of LP depends not only on how many graduates can play leading roles globally, but also on how university staff can create superior new evaluation criteria for human resource development and how widely these can be shared with universities and industry. Development of students and graduates with a high level of ability takes time, thus it is important to consider the continuous development of LP.
Introduction
Universities are faced with the challenge of educating students to become global leaders for the next generation. Many programs have been implemented in various fields to foster global leaders all over the world [1][2][3][4]. However, most of them have either been academically specialized or comprised only vocational training, such as laboratory-based technical training. These programs have indeed produced many academic researchers and trained workers, but have not generated "global leaders" who could play leading roles in industrial and governmental sectors. In this study, we discuss a new type of leadership education and create its evaluation criteria by introducing the case of science and leadership education at University of Hyogo.
The Leading Programs in Doctoral Education (LP) were started by the Ministry of Education, Culture, Sports, Science and Technology, Japan (MEXT) in 2011 to advance the establishment of university graduate schools of the highest caliber by supporting dramatic reform of their education curricula, so that their degree programs will be recognized as of top quality around the world [5]. To foster excellent students who are both highly creative and internationally attuned and who will play leading roles in global academic, industrial, and governmental sectors, the program brings top-ranking faculty and students together from both within and outside Japan and enlists participation from other sectors in its planning and execution. It also creates continuity between master and doctoral programs and implements curricula that cross over various fields of specialization.
LP sounds very much like the US IGERT (Integrative Graduate Education and Research Traineeship), the National Science Foundation's flagship interdisciplinary training program, which educates PhD scientists and engineers by building on the foundations of their disciplinary knowledge with interdisciplinary training. Collaborative research that transcends traditional disciplinary boundaries and requires teamwork provides students with the tools to become leaders in the science and engineering of the future [6]. However, LP is a novel concept in that it is intended not only for science and engineering but also for fields such as law and literature.
LP is divided into three categories: all-around, composite, and only-one (Figure 1). The all-around category involves constructing a degree program that uses an integrated arts and sciences model and pools the university's collective wisdom to nurture future political, financial, administrative, and academic leaders who will become active in Japan and abroad and lead global societies. The composite category entails a degree program that cuts across multiple fields to train leaders who can supervise industry-academia-government projects and drive innovation in solving the problems that society encounters. The only-one category comprises a degree program to develop leaders who will open up new fields utilizing the exceptional resources that are unique to Japan. By the end of March 2014, 62 programs had been adopted (Table 1).
The Necessity of Strengthening Leadership Education
Representatives of industry, especially in Japan, have indicated that more employees well educated in leadership are needed in order to compete successfully in global markets. The conventional wisdom has been that leadership stems primarily from authority, higher position, and a charismatic personality. However, during the last two decades, industry and the general public have called for new models of leadership without these features, namely peer leadership, whose most distinctive feature is that the skill of leadership can be mastered with training rather than deriving from inborn personality and environment [7,8]. Individuals with peer leadership skills are always prepared to face difficulties with the understanding that their mission is to resolve issues by offering constructive suggestions or cooperating with others. The following are the primary skills that Japanese industry (Keidanren) expects of new employees [9]:

- Ability to tackle challenges
- Expertise/skills to take charge of tasks
- Problem-solving ability
- Sense of responsibility
- Understanding of various cultures and values
- Communication skills

Therefore, MEXT decided to start the LP to foster competent PhD students with deep specialization and peer leadership for the sustainable development of Japan.
Ability to tackle challenges Expertise/skills to take charge of tasks Problem solving ability Sense of responsibility Understanding various cultures and values Communication skills Therefore, MEXT decided to start the LP to foster competent PhD students with deep specialization and peer leadership for the sustainable development of Japan.
Outline
The University of Hyogo established a new department, the Department of Picobiology, in April 2013 to implement the LP entitled "Next Generation Picobiology: Focused on Photon Sciences". This 5-year PhD program offers students a monthly scholarship of JPY 200,000 to support their studies and research, and comprises a combination of the most advanced sciences, liberal arts, and leadership education. It also allows students to access some of the world's most advanced analytical equipment, such as Raman spectrophotometers, the Super Photon Ring 8 GeV (SPring-8, a third-generation synchrotron radiation facility, RIKEN), neutron diffractometers (J-PARC, JAEA), X-ray free-electron lasers (SACLA, RIKEN), electron microscopes, and the K computer (a Fujitsu supercomputer with a Linux-based operating system, RIKEN). Through cooperation with the RIKEN SPring-8 Center, students have the flexibility to discover issues that interest them along with the challenge of creatively developing new insights through the rigors of the sciences developed for this program. Students work closely with world-class researchers recruited for the program and benefit from many opportunities to foster their technical, analytical, and expressive abilities, as well as the IT, language, and problem-solving skills that are essential for future global leaders (Figure 2).
Quality Assurance and Industry-University Collaboration System
Quality assurance is defined as a mechanism through which higher education institutions secure the quality of their educational and research offerings to build stakeholder confidence. Such approaches will include achieving intended outcomes and fulfilling stakeholder needs, as well as conforming to evaluation standards and the basic requirements stipulated by law [10][11][12].
The University of Hyogo has invited supporters (stakeholders) from not only life science corporations but also publishers, news agencies, and journalists to implement the LP. Many active leaders in industry have been invited as lecturers for common subjects such as the Advanced Course of Global Leadership and the Advanced Course of Career Paths for PhDs. Students participate in internship programs provided by corporate stakeholders for one to three months. This industrial-academic collaboration has worked very well, since guest speakers bring real issues and challenges to the table that they have actually experienced [6]. This increases interest and a desire to learn among students because they are not dealing with fictional issues invented by academics.
Corporate sponsors also benefit from this collaboration system: students, as their future consumers, can offer them ideas for their products; sponsors can learn about the actual conditions under which the students learn and live, data that are useful for designing training plans for new employees; and they can educate employees using the collaboration system. However, new corporate sponsors are truly difficult to attract when only university staff are involved, and other marketing ideas are needed to sustain this collaboration.
In Japan, the recruitment process for new employees tends to prioritize academic cliques over applicants' ability, and PhD graduates have much more difficulty than Bachelor's graduates in finding jobs suited to their background. Therefore, such collaboration activities may change the recruitment of PhD students in Japanese industry and government in the future.
Creating New Evaluation Criteria for Required Leadership and Science Education
The University of Hyogo is now trying to create new evaluation criteria for required leadership and science education in its LP from three standpoints: (1) growth of the student as a scientist; (2) growth of the student as a global leader; and (3) trends and needs in society. It is noteworthy that the criteria are being created in collaboration with corporate sponsors. The university recognizes that past failures in leadership education, especially in Japan, have been caused by a mismatch between the expectations held by universities and industry: placing too much emphasis on academic studies while neglecting technical, human, and conceptual skills. This is why it decided to create the new evaluation criteria with its corporate sponsors.
Graduates with leadership skills are unique and varied according to their specialized backgrounds. Therefore, human resources must be developed carefully, considering such diverse types of individuals. On the other hand, the program must also produce PhDs who consistently meet the standards expected of global leaders.
To solve these problems, it is effective to create new evaluation criteria that meet the expectations of leadership held by many stakeholders. If these evaluation criteria come to be adopted as an agreed standard of human resource development shared with industry and all universities, the LP will be praised for generations to come.
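One concrete way to operationalize such multi-stakeholder criteria is a weighted rubric. The Python sketch below is a hypothetical illustration; the criteria names and weights are our placeholders, not the University of Hyogo's actual scheme, and it shows only how per-criterion scores could be aggregated under different stakeholder weightings.

```python
# Hypothetical stakeholder-weighted rubric: criteria and weights are
# illustrative placeholders, not an actual evaluation scheme.
criteria = {
    "scientific depth":         {"university": 0.5, "industry": 0.2},
    "peer-leadership behavior": {"university": 0.2, "industry": 0.4},
    "communication/teamwork":   {"university": 0.3, "industry": 0.4},
}

def composite_score(scores: dict[str, float], stakeholder: str) -> float:
    """Weight a student's per-criterion scores (0-5 scale) by one
    stakeholder group's priorities and return a 0-5 composite."""
    return sum(scores[c] * w[stakeholder] for c, w in criteria.items())

student = {"scientific depth": 4.5,
           "peer-leadership behavior": 3.0,
           "communication/teamwork": 4.0}
for group in ("university", "industry"):
    print(group, round(composite_score(student, group), 2))
# university 4.05, industry 3.7: the same student scores differently
# under university-weighted vs. industry-weighted criteria.
```

Making the weights explicit in this way is one mechanism for surfacing, and then negotiating, the university-industry expectation mismatch described above.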
Concluding Remarks
Peer leadership does not require authority, high position, or a charismatic personality; rather, it is a skill that can be acquired through training. The LP designed by MEXT is fostering competent PhD students to become global leaders with deep specialization and peer leadership.
The University of Hyogo is implementing the program to foster global leaders with excellence in science by providing access to the world's most advanced analytical equipment through cooperation with the RIKEN SPring-8 Center. This is a completely innovative education program to foster PhDs with both knowledge of cutting-edge science and peer leadership. The university is now trying to create new evaluation criteria for required leadership and science education in its LP with its corporate sponsors from three standpoints: growth of the student as a scientist, growth of the student as a global leader, and trends and needs in society.
The success of these special programs depends not only on how many graduates can play an active part in the future, but also on how university staff can create superior new evaluation criteria for human resource development and how widely these can be shared with universities and industry. This, in turn, is directly linked to the success of global academic, industrial, and governmental sectors. However, the development of graduates and employees with a high level of special ability takes time. Therefore, the LP should be stable and continued over the long term.
Figure 1. Categories of the Leading Program in Doctoral Education (LP).
Figure 2. Various subjects for the Leading Program (LP) at University of Hyogo.
Table 1. Sixty-two Leading Programs in Doctoral Education (LP) by the Ministry of Education, Culture, Sports, Science and Technology, Japan (MEXT).
Graduate Course for system-inspired leaders in material science
Osaka University: Interactive materials science cadet program
Kyushu University: Graduate school for molecular system & device science
Agroforestry to Achieve Global Climate Adaptation and Mitigation Targets: Are South Asian Countries Sufficiently Prepared?
Traditional agroforestry systems across South Asia have historically supported millions of smallholding farmers. Since 2007, agroforestry has received attention in global climate discussions for its carbon sink potential. Agroforestry plays a defining role in offsetting greenhouse gases, providing sustainable livelihoods, localizing Sustainable Development Goals and achieving biodiversity targets. This review explores evidence of agroforestry systems for human well-being along with their climate adaptation and mitigation potential for South Asia. In particular, we explore key enabling and constraining conditions for mainstreaming agroforestry systems in order to use them to fulfill global climate mitigation targets. Nationally determined contributions submitted by South Asian countries to the United Nations Framework Convention on Climate Change acknowledge agroforestry systems. In 2016, the South Asian Association for Regional Cooperation's Resolution on Agroforestry brought consensus on developing national agroforestry policies by all regional countries and became a strong enabling condition for ensuring the effectiveness of using agroforestry for climate targets. A lack of uniform methodologies for creating databases to monitor tree and soil carbon stocks was found to be a key limitation for this purpose. Water scarcity, lack of interactive governance, farmers' rights and ownership issues, and insufficient financial support to rural farmers for agroforestry were other constraining conditions that should be appropriately addressed by the regional countries to develop their preparedness for achieving national climate ambitions. Our review indicates the need to shift from planning to the implementation phase, following strong examples shared from India and Nepal, including carbon neutrality scenarios, incentives and sustainable local livelihoods, to enhance preparedness.
Introduction
Climate change is a reality and it is well established that the planet is facing a climate emergency [1]. The agriculture sector alone emits 6 billion metric tons of greenhouse gases (GHG) into the environment per annum [2]. Climate change impacts in certain regions have been more damaging and devastating because of enhanced exposure to climatic hazards, already prevailing vulnerabilities and lower adaptive capacity [3,4]. Climate change mitigation, food security, conservation of biodiversity, restoration of ecosystems and localizing the sustainable development goals (SDGs) are the fundamental global challenges of present times [5]. With increasing natural disasters and climate variability, there is growing urgency for recognizing and supporting efforts for climate adaptation and mitigation [6]. Of these, adaptation efforts to improve land and water management practices have been identified as central to boosting capacity for overall resilience to climate vulnerability [7].
The South Asia region includes the countries of Bangladesh, Bhutan, India, the Maldives, Nepal, Pakistan, and Sri Lanka. S. Asia has a huge range of human, cultural, and ecosystem diversity [8]. S. Asia's rapid population growth, widespread poverty, large dependence on natural resources and inadequate adaptive capacity have made the region highly vulnerable to climate change. The region is home to more than one fifth of the world's population, and is one of the most climate disaster-prone areas on earth [9][10][11]. Agriculture and pasture land in the region accounts for one third of the total land cover [2]. Fulfilling the food requirements of a fast-growing population without affecting land use is a primary challenge for subsistence agriculture, and this has resulted in widespread food shortages [12,13]. Agriculture expansion and intensification are drivers of deforestation and biodiversity loss in the region. Due to the low per capita land available for agriculture, production of food with a marginal ecological footprint becomes essential [12]. There are growing expectations of multifunctional land use systems to fulfill mounting regional land and food demands while addressing emerging climate hazards, as they support the sustenance of productive landscapes, habitats, and social, economic, and regulatory aspirations [14].
Adaptation is an urgent requirement under the present climate change scenario, particularly in developing and underdeveloped countries, which are anticipated to be severely impacted by climate extremes [15]. The contribution made by agriculture to achieving the SDGs will require climate adaptation followed by cropland advances that are affordable and profitable to the poor [16]. The Intergovernmental Panel on Climate Change (IPCC), in its first, second, and third assessment reports (1990, 1996 and 2001), has acknowledged the South Asian region for its capacity to incorporate adaptation and mitigation approaches that can also facilitate pro-poor development through carbon-offset arrangements such as farmer managed natural regeneration, agroforestry, and adaptive agriculture practices [17]. While synergies in adaptation and mitigation approaches need to be addressed, they should not be limited to income diversification from tree- or forest-based products. Adaptation and mitigation approaches should ideally include approaches for improving soil health and biodiversity, and reducing fire risks, through restoration of natural ecosystems [18]. Intended Nationally Determined Contributions (INDCs) have emerged as the principal tool for benchmarking and reporting under the Paris Agreement. Removing atmospheric carbon and storing it in terrestrial vegetation is a feasible adaptation and mitigation option that contributes to the NDCs. Researchers have identified agroforestry among critical landscapes as an approach that can fulfill NDC commitments, particularly in developing countries [19,20].
Trees outside forests (TOFs) substantively contribute to livelihood improvement, while also enhancing biomass and carbon stocks. In the last few decades, policy makers have recognized the significance of TOFs and included them in national forest inventories [21]. Indigenous and traditional resource management through agroforestry is proven to deliver livelihood benefits in terms of provisioning, regulating, and supporting ecosystem services [22]. Trees on arable land have the potential to support carbon sinks under Nature-based Solutions (NbS), contributing to climate change adaptation and mitigation through carbon sequestration [23][24][25][26]. Understanding the regional agroforestry status, creating opportunities for further promotion to fulfill climate promises, and ensuring successful acceptance of agroforestry practices are all crucial and pertinent in light of climate change [27]. For this paper, we performed an initial bibliometric analysis to understand the existing published literature on regional agroforestry practices and their importance in addressing global climate adaptation and mitigation targets. Based on the limitations of that analysis, we then conducted a detailed review of the literature available on Scopus, Web of Science and Google Scholar to obtain a detailed overview of the potential for agroforestry systems (AFS) to support country-specific mitigation targets as well as the NDCs proposed by countries in S. Asia. Additionally, this paper discusses the need for integrating AFS into MRV (Monitoring, Reporting and Verification) while providing a critical understanding of key gap areas, existing policies and concerns that need specific attention for adoption and promotion of agroforestry to be scaled up in the region. The review critically tries to address the following questions:
1. What is the substantial evidence that AFS and its practices deliver diverse ecosystem services, thereby ensuring human well-being in S. Asia?
2. What are the important climate discussions that include agroforestry for climate adaptation and mitigation?
3. What are the key capabilities and constraints when looking to include agroforestry in climate adaptation and mitigation?
Traditional Agroforestry Systems in South Asia
Agroforestry systems are dynamic, sustainable food production and natural resource management systems with high prevalence and acceptance in developing countries in the tropics of South-East Asia, South Asia, and Central and South America. These systems occupy more than 50% of the land coverage [28][29][30]. Despite the global recognition and presence of AFS, it is still a challenge to find reliable and accurate information on their extent in S. Asia. A list of land areas under agroforestry in different countries of the world, including S. Asia, was prepared by the International Assessment of Agricultural Knowledge, Science and Technology for Development (IAASTD) [31]. Nair et al. [32] estimated global agroforestry cover to be 1023 million hectares, followed by Zomer et al. [33]; Zomer [29] projected global agroforestry cover to be 1020 million hectares [22], thereby agreeing with Nair et al. [32] (Table 1). South Asia is recognized for its AFS and its long history of acceptance and adoption of traditional practices across diverse agro-ecological conditions and agro-climatic zones. The diverse AFS in the region showcase the accumulated knowledge related to climate adaptation and mitigation approaches developed by millions of smallholding farmers and marginalized communities over centuries [34]. Approximately 60% of the research on AFS in the Asia-Pacific region has been carried out in India, China, Indonesia, and Australia, with a clear focus on silvi-pastoral systems. Shin et al. [35] provided details on the extensive research on AFS in India from 1970-2018. Nair et al. [36] provided a detailed overview of traditional AFS in S. Asia, along with other regions of the world.
Home gardens are the dominant AFS across S. Asian countries. Traditional AFS in S. Asia are trusted for the diverse benefits they provide from small land holdings (Table 2). In India, Nepal, Bhutan, Bangladesh, the Maldives and Sri Lanka, growing fuelwood, fodder and fruit trees on cropland bunds is a common practice among local people to fulfill energy and food demands, and these practices constitute important livelihood options for the region's rural poor [37,38]. However, in Pakistan, local farmers are hesitant to plant trees on cropland bunds to avoid competition between trees and crops. Hence, their fuelwood and fodder needs are mostly met from natural forests or wasteland vegetation. The magnitude of agroforestry in the region is at present highly underestimated, because of technical constraints in recognizing the low-density tree cover common on the small landholdings of local farmers [20]. Agroforestry cover reported from different parts of Asia shows that there are fewer areas with trees in the S. Asia region compared to other regions in Asia (Table 3). The Central Agroforestry Research Institute (CAFRI), based in Jhansi, India, estimated agroforests to span 13.75 million hectares in the country [41]. In the biennial India State of Forest Report (ISFR) for 2019, AFS fall under the trees outside forests (TOF) category, spanning an area of 293,840 km², or about 8.94% of the geographical area of the country. More than 65% of the country's timber and more than 50% of its fuelwood requirements are supported by AFS. Oli et al. [42] reported higher tree species richness in agroforests of Nepal compared to natural forests. Chakraborty et al. [43] stressed the value of agroforests in Bangladesh. Agroforests in Bangladesh support household fuelwood needs and thus help in reducing household expenses and dependence on wood from natural forests. The National Research Centre for Agroforestry projected a livelihood potential of 943 million person-days/annum from 25.4 million ha of agroforests in India [44]. Agroforests with species such as teak (Tectona grandis L.f.) or silver oak (Grevillea robusta A. Cunn. ex R.Br.) are an investment option for the region, providing significant economic and ecological returns and ensuring long- and short-term ecological and social benefits for local communities [39]. Fast growing, high biomass yielding species like poplar (Populus spp.) and eucalyptus (Eucalyptus spp.) have gained wide acceptance and recognition in industrial plantations of Pakistan and India. Fast growing trees (Eucalyptus spp., Populus spp., Tectona grandis, Casuarina equisetifolia L., etc.) are preferred in industrial agroforestry plantations and shelterbelts because of their economic and ecological values and fast growth rates [45]. Agroforestry trees that have market value are preferred by farmers in the region, as they are less susceptible to failure than annual crops. Moringa oleifera trees are preferred in India because of the medicinal properties and market value of all their plant parts. Similarly, many traditional fodder trees, such as Grewia optiva J. R. Drumm. ex Burret and Carpinus viminea Wall. ex Lindl., can be harvested multiple times a year [22,46].
Noticeable examples of AFS include multifunctional landscapes such as home gardens that secure food and support conservation of lesser known, underutilized biodiversity in Sri Lanka, the Maldives, Bangladesh and India [47]. These tree-based land management practices (spice gardens in Kerala, India, and in Sri Lanka) have proven their potential to provide livelihood opportunities for rural industrialization. Integrated agri-silvi-horti production systems that favor resource conservation and support conservation of traditional agro-biodiversity also ensure climate adaptation and mitigation in the region [34].
Agroforestry Systems and Human Well-Being
Ecosystem services from natural (or semi-natural) ecosystems largely support and contribute various benefits for human well-being (environmental, material as well as psychological benefits) [48][49][50]. Agroforests on croplands or pasture lands are an important traditional land management practice and thus provide diverse socio-economic and ecological benefits, including NbS for climate change adaptation [35,51]. Agroforestry delivers diverse provisioning, regulating and supporting ecosystem services, and climate adaptation is an important one for addressing global climate change [5]. Historically, AFS across S. Asian countries have been designed to capitalize on and harness diverse benefits for human well-being [52]. The presence of multifunctional landscapes ensures the conservation of lesser known wild species, encourages traditional agrobiodiversity and also improves pollinator benefits [53]. These well-managed, multifunctional, sustainable AFS provide considerable livelihood benefits as well as safeguarding diverse ecological functions [42]. It is important to mention here that a farmer's decision to adopt a land use does not depend on a benefit-cost ratio, but essentially rests on how much net income will be earned. Hence, horticulture-based agroforestry is preferred by farmers in Bangladesh over cropland and homestead agroforestry [54].
AFS have the potential to serve in the restoration and rehabilitation of degraded ecosystems, and could help to reinstate ecosystem services [55]. Food security, land tenure security, enhanced farm-based incomes, management of terrestrial and soil biodiversity, carbon sinks, hydrological functions, wildlife corridors, reduced soil erosion, biodiversity conservation, microclimate improvement, and increased nutrient retention via root capture and cycling are some of the diverse benefits of AFS reported from the region [20,38,[56][57][58]]. In Nepal, agroforestry interventions supporting food security include high biomass of fodder, meat, and production of non-timber forest products (NTFPs) [59]. Areas under agroforestry are reported to show reduced soil erosion and improved nitrogen fixation in Bhutan [60]. In Bangladesh, there was comparatively less nutrient depletion from soil erosion in AFS than in jhum/slash and burn agriculture [61]. There is considerable evidence that AFS support sustainable production, providing subsidiary household provisions with diversified products, conservation of natural resources, aquifer recharge, etc. [35,62]. According to Muschler [63], agroforests support "sustainable intensification" within a land use archetype that is based more on ecology than on chemistry and climate science. Article 2 of the Paris Agreement proposed to strengthen global efforts to reduce climate impacts with reference to sustainable development and poverty alleviation. Hence, it is vital to recognize and acknowledge the role of agroforestry and to mainstream it at the country level to address global climate targets. Leveraging the mitigation potential of land use sectors is crucial for meeting emission reduction targets [64]. By endorsing the benefits of the diverse AFS practiced across S. Asia, less fertile marginal croplands with low productivity can be included for income diversification. This can be achieved by restoring soil health, improving irrigation efficiency and creating carbon sinks [52,[65][66][67]], thereby also strengthening adaptive rainfed dryland agriculture [68].
Bibliometric Analysis of Agroforestry Systems in the S. Asia Region
Bibliometric analysis was carried out to take stock of existing information on AFS in S. Asia. A total of 52 published works were retrieved from the Web of Science (WoS) database according to the keywords "Agroforestry" and "South Asia". The retrieved literature spans the period 1991 to 2019, covering 30 journal articles, 7 review papers, and 5 proceedings papers. The metadata of the retrieved literature contain information about the author names, journal, title, abstract, author-defined keywords, machine learning generated keywords (known as keywords-plus), local and global citations, referred articles, year of publication, etc. Analysis of the metadata associated with articles provides useful insights into the research structure and themes. In this study we used the bibliometrix library of the R programming language for the analysis (https://www.bibliometrix.org, accessed on 25 June 2019). The annual scientific production pertaining to the study followed an average growth rate of 6.21%. The most relevant sources (and their h-index) in terms of journals from which the most papers originated are Agroforestry Systems (5), New Forests (3), and Society and Natural Resources (2).

A word tree-map of keywords is a simple method to visualize the overall spread of a research field. The word tree-map for author keywords is shown in Figure 1, in which the area of the rectangle labelled with each keyword is proportional to the frequency of its occurrence in the retrieved literature. Frequency analysis of author keywords indicates that the keywords conservation, agroforestry, biodiversity, and management appear most frequently. Conservation and biodiversity, agricultural management, biomass, carbon sequestration, and climate change topics are also associated with the overall theme of agroforestry in South Asia. Topics related to socio-cultural aspects, such as the livelihoods of local people and shifting cultivation, also appear in the literature. The temporal evolution of the research topics can be understood by plotting the most frequent author keywords or keywords-plus against the year of appearance. The trend of author keywords is shown in Figure 2, containing the keywords that appeared at least twice in any year between 2004 and 2019. Results indicate that there was a shift in topics from the physical aspects of agroforestry, such as soil and water conservation, land productivity, and forestry, to land use change, forest disturbance, and socio-economic development over 2004-2011. Studies in the last decade were related to shifting cultivation, livelihoods of people, rubber plantations, and oil palm farming, along with carbon sequestration. The trend in keywords does not reflect aspects related to climate change adaptation and mitigation strategies, and research momentum here has not yet gained traction as expected.

Co-word analysis was performed to capture the conceptual structure of research themes by analyzing the co-occurrence of author keywords in the bibliometric collection. A bipartite matrix between author keywords and documents was constructed by the bibliometrix library for the analysis. Groups of keywords that appear together can be identified and formed into clusters using the k-means clustering algorithm in R. In order to plot the clusters in a 2D plane, the multiple correspondence analysis (MCA) dimensionality reduction method was used.
The author keywords are grouped into clusters based on proximity in the 2D space, and keywords that appear in the same cluster share the same substance of research. Keywords that are placed apart appeared together only sparsely in the collection. Based on our review, the clusters formed by the analysis are shown in Figure 3. The clusters can be identified as Cluster-1 (land use, forest disturbances, and carbon sequestration), Cluster-2 (AFS), Cluster-3 (land use change, shifting agriculture, rubber plantation, oil palms, livelihood, land use productivity, S. Asia, etc.), and Cluster-4 (bioengineering technology, soil and water conservation, and socio-economic aspects). As the centroid of Cluster-3 is positioned at positive values of X and Y in the 2D space, its themes are known as motor themes: central and highly developed themes in agroforestry research. The Cluster-1 and Cluster-2 centroids have negative X values and positive Y values, marking them as niche (or isolated) themes in the research landscape, focusing on land use change and AFS, respectively. Conversely, Cluster-4 contains themes that are central to the research area but are less dense, or transversal, in nature. Overall, Clusters 1 to 3 are close to each other, and their themes agree with the literature discussed: AFS, carbon sequestration, climate change, land use change, forest disturbance, livelihoods, and biodiversity. The bibliometric analysis was not able to capture the increasing concern for, and interest in, AFS in the climate dialogue. In general, most of the available information was fragmentary and isolated in a few case studies. There is a need to further explore the literature to capture and synthesize the available information. The effort to consolidate that information and present it in this paper will be of significant interest to academicians, policymakers, and researchers working on AFS and on mainstreaming AFS in climate dialogues.
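For readers wishing to reproduce this kind of analysis, the whole pipeline, from import of WoS records through descriptive statistics, keyword trends and the co-word/MCA clustering, can be run with a few calls to the bibliometrix package in R. The sketch below is illustrative only: the input file name is hypothetical, and parameter values such as the minimum keyword degree and the number of clusters are our assumptions, not necessarily the exact settings behind Figures 1-3.

```r
# Illustrative bibliometrix (R) pipeline for the analysis described above.
# "wos_agroforestry_sasia.txt" is a hypothetical plain-text export of the
# 52 Web of Science records retrieved with the stated keywords.
library(bibliometrix)

M <- convert2df(file = "wos_agroforestry_sasia.txt",
                dbsource = "wos", format = "plaintext")

# Descriptive statistics: annual scientific production, most relevant
# sources and citation measures (cf. the figures reported in the text).
results <- biblioAnalysis(M, sep = ";")
summary(results, k = 10)

# Year-by-year growth of the top author keywords (cf. Figure 2).
kw_trend <- KeywordGrowth(M, Tag = "DE", sep = ";", top = 10, cdf = TRUE)

# Co-word analysis: MCA projects author keywords onto a 2D plane and
# k-means groups them into clusters (cf. Figure 3). minDegree, clust and
# k.max are assumed, illustrative values.
cs <- conceptualStructure(M,
                          field     = "DE",   # author keywords
                          method    = "MCA",  # dimensionality reduction
                          minDegree = 2,      # keywords occurring >= 2 times
                          clust     = 4,      # four clusters, as in the text
                          k.max     = 8)
```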
Global Climate Dialogue around Agroforestry Systems
The United Nations Framework Convention on Climate Change (UNFCCC), along with other prominent international environmental and scientific organizations, has stressed the growing need for mainstreaming and implementation of sustainable land management approaches that specifically include AFS [69][70][71]. AFS have received substantial recognition from international organizations such as the UNFCCC, the Food and Agriculture Organization (FAO), the Convention on Biological Diversity (CBD), and the World Bank [72] (https://agroforestrynetwork.org/, accessed on 25 June 2019). Figure 4 presents an overview of major conventions and reports that have brought AFS into global focus. The Kyoto Protocol was the first international arrangement to acknowledge the importance of AFS in climate mitigation. Since then, global attention to enhancing carbon sequestration using AFS has increased [30,70]. Although the Kyoto Protocol included the Clean Development Mechanism (CDM), the addition of AFS into the CDM was hindered by a lack of uniform protocols to estimate carbon sinks and by associated land rights concerns [73]. However, REDD+ (Reduced Emissions from Deforestation and Forest Degradation) brought AFS back into focus in 2007, and several countries have made considerable progress in improving their national planning by understanding the importance of agriculture, forestry, and other land-use (AFOLU) sectors in climate change adaptation and mitigation [74]. AFS are known for their potential to contribute to nine of the 17 SDGs, including SDG 15 (life on land), 13 (climate action), 12 (responsible production and consumption), 2 (zero hunger), 1 (no poverty), 3 (good health and well-being), 8 (decent work and economic growth), 5 (gender equality) and 10 (reduced inequalities) [75][76][77]. AFS are an important climate mitigation tool, and can help both developing and underdeveloped countries to achieve policy synergy amongst technologies, landscapes, rights and markets [78] while also improving localization of SDGs (especially 2.4; 13.
Agroforestry: Role in Climate Change Mitigation and Adaptation
Despite agroforestry being acknowledged for its carbon sequestration potential among all land uses considered by the IPCC (2000), the understanding of carbon sinks in different AFS in the region is still very elementary because of insufficient authentic data on carbon stocks of AF interventions in comparison to agriculture and forestry [85]. While agriculture along with forestry results in large amounts of emissions, accounting for nearly 21% of the total [86], AFS have significant mitigation potential that has not been scientifically evaluated in global carbon financial plans or national carbon accounts [30]. Limited studies at the global, national and zonal scales have reported carbon stocks in AFS (Table 4). However, for S. Asia, these studies and reports are mostly at the local level. In most of the studies, there is a lack of comprehensive information on trends in both tree and soil organic carbon (SOC) stocks [82,87,88]. It has been very challenging to gain an understanding of how diverse agroforestry practices can become potential carbon sinks [14,85,89,90].
In terms of farmland biodiversity, the scattered trees in agroforests are 'keystone species' that expedite and support the movement of wildlife through the landscape [91]. This role of AFS as wildlife corridors is significant under projected climate change, as it allows species to adapt in response to unstable climatic conditions by providing the necessary migration paths [90]. In order to optimize the use of AFS in climate adaptation and mitigation, strategic integrated efforts to enhance benefits and reduce negative impacts on climate are needed. Mbow et al. [90] provided an overview of both positive and negative impacts of AFS on adaptation and mitigation potential. Since most countries in the region are predominantly agrarian, the S. Asia region has tremendous potential to promote agroforestry as a tool for climate adaptation and mitigation. A recent study claimed that 69% of the total geographical area of S. Asia retains 55% or higher suitability for agroforestry [92].
Nationally Determined Contributions and Agroforestry
Under the Paris Agreement, countries submitted their Intended Nationally Determined Contributions (INDCs). INDCs, once submitted to the UNFCCC, are known as Nationally Determined Contributions (NDCs), and they are the key mechanism for reducing emissions as per national urgencies, competencies and accountabilities. According to Duguma et al. [98], within the purview of NDCs, agroforestry can provide multi-dimensional benefits by supporting climate adaptation and mitigation actions [98,99]. Nearly 40% of the Non-Annex I countries (developing countries recognized by the UNFCCC as vulnerable to adverse climate impacts, including areas threatened by sea level rise, desertification and drought) have explicitly proposed agroforestry in their NDCs. A total of 21% of Asian countries have proposed AFS in their NDCs, a ratio that is lower than in Africa (71%) and the Americas (34%) but higher than in Oceania (7%) [20,58].
The S. Asian countries list adaptation actions at both the farm and landscape level. Bangladesh, Nepal, Sri Lanka and Bhutan have proposed "ecosystem-based adaptation" [100], which includes landscape-level actions spanning management of water resources, crop management by crop rotation, agroforestry and management of natural vegetation. As the sum of carbon flux fundamentally depends on the composition of trees, more understanding of this is needed during the implementation phase [101][102][103].
It is evident from Table 5 that, although some countries have not explicitly included agroforestry in their NDCs (Bhutan and Nepal), the existing traditional systems and supporting policies in these countries indicate the potential inclusion of AFS as part of a larger mitigation strategy. For example, in Bangladesh, the need to reduce emissions from agriculture and further develop the forestry sector is indicated. In line with this, TOF (cropland, homestead and horticulture based agroforestry) provides significant opportunities in Bangladesh, as it already spreads over 4.1 million hectares or 27.7% of the total land area [20]. Table 5. Nationally determined contributions committed by S. Asian countries and the role of agroforestry.
Bangladesh *
- Emissions reduction from agriculture and development of the forest sector. Unconditional contribution to reduce GHG emissions by 5% by 2030 in the power, transport and industry sectors, based on existing resources; conditional 15% reduction in GHG emissions by 2030 in the same sectors, subject to appropriate international support.
- No mention of agroforestry in the NDC.
- Ecosystem-based adaptation (incl. forestry co-management); community-based conservation of wetlands and coastal areas; green-belt afforestation and reforestation of mangroves.
Bhutan (no NDC available)
- Potential of climate-smart agriculture, particularly the development of agroforestry and agri-silvi-pastoral systems for fodder production, organic agriculture and conservation agriculture, included as mitigation measures [104].
India
- Decrease emissions by 33-35% from 2005 levels by 2030, to be achieved through an increase in the share of non-fossil fuel to 40%, along with sequestering an additional 2.5-3 billion tonnes of carbon through added tree cover by 2030 [105,106].
- Although India's INDC does not mention agroforestry specifically, it is believed to play a critical, if not pivotal, role in national carbon mitigation targets, given that agroforestry is one of the sub-missions of the Green India Mission, one of the eight missions under the National Action Plan on Climate Change (NAPCC) [107].
Nepal
- Decrease the dependency on fossil fuels by 2050 and aim to bring at least 40% of the area of the country under forest cover.
- Ameliorative forest practices, including agroforestry, included as a means to achieve the NDC targets [107].
Pakistan *
- Mitigation target of 20% of the projected 2030 emissions, subject to international financial support.
- Agroforestry implementation included among mitigation strategies.
Sri Lanka *
- Increase forest cover from 29% to 32% by 2030; reduce emissions by 20% in the energy sector and by 10% in other sectors including forest, transport, industry, etc.
- No mention of agroforestry in the NDC.
South Asian Association for Regional Cooperation (SAARC) Member States (Afghanistan, Bangladesh, Bhutan, India, the Maldives, Nepal, Pakistan and Sri Lanka) developed the SAARC Regional Coordinated Programme on Agroforestry (SARCOPA) in 2016, which has received active facilitation and technical support from the World Agroforestry Center (ICRAF) and the SAARC Agriculture Centre (SAC). The programme is divided into two phases: the first 6-year phase focuses on establishing the mechanism and delivery systems, and the second 6-year phase on upscaling and outscaling AFS benefits to larger numbers of beneficiaries. SARCOPA's first phase is focusing on generating awareness and developing guidelines, policy, and databases of existing information on AFS. India and Nepal already have National Agroforestry Policies in place, clearly showing their intent to promote AFS, while Bhutan and Bangladesh are working to develop national policies to endorse and recognize the benefits of AFS. In fact, a mere 30% increase in area under AFS is projected to significantly reduce India's total emissions by 2050 [110]. Under SARCOPA there has been support for institutional and individual level capacity building, for identifying and re-designing specific AFS, and for sharing information on successful AFS. The Government of Nepal is implementing a Local Adaptation Plan of Actions through 90 Village Development Committees and seven municipalities. Additionally, about 375 local adaptation plans and approximately 2200 Community Adaptation Plans of Action for community forests have been enacted that will also include the benefits of natural forests, community conservation efforts and traditional AFS [97]. The agroforestry policy put in place by India in 2014 was the first in the region and was seen as low hanging fruit, not only ensuring the benefits of a successful land-use system but also harnessing its economic potential for locals as well as for the country [111].
Sri Lanka has also committed to supporting climate-resilient human settlements, minimizing climate change impacts by ensuring food security, improving climate resilience for key economic sectors, and protecting natural resources and biodiversity. Here again, although agroforestry is not explicitly mentioned, the country has a significant area of land under home gardens (13% of its current land area) that has historically helped in addressing drought and storm disasters by supporting climate adaptation, and so this, by default, will be part of the programme. The Government of Pakistan has initiated a 5-year plantation programme of 100 million trees under the Green Pakistan Programme, or Plantation Tsunami, to achieve the Bonn targets [108]. Here again, AFS are not explicitly part of the NDC, but could be included.
The review and synthesis of existing information makes it clear that, in S. Asia, processes and approaches are already in place to harness the benefits of AFS in all countries of the region, and that these countries are collaborating to share experience and technical support to make implementation a reality across the region.
Agroforestry in REDD+ and Nationally Appropriate Mitigation Actions (NAMAs)
Trading carbon sinks could be a potential livelihood opportunity for marginalized communities of underdeveloped and developing countries who practice agroforestry [86]. In S. Asian countries, the demand for firewood and timber results in rapid loss and fragmentation of forests, and AFS can help conserve natural forests. REDD+ has been a key feature of climate negotiations in the UNFCCC since 2007. Through REDD+, countries have made considerable progress in national planning to include AFOLU sectors for mitigating extreme climate impacts [74]. The REDD+ policies propose to economically reward countries for improving forest health through conservation and management that reduces GHG emissions [73]. The REDD+ initiative has supported eco-agricultural practices that help produce surplus food while safeguarding native biodiversity, and this includes AFS [109]. Co-benefits from AFS are significant to the Koronivia Joint Work on Agriculture (KJWA) of the UNFCCC, which addresses resilience building and the enhancement of soil carbon stocks, soil health, biodiversity and fertility by supporting sustainable livestock management as well as providing varied nutritional benefits and livelihood diversification [20,58]. However, AFS are not explicitly mentioned in the KJWA. There is also encouraging and substantial evidence showcasing the successful support of AFS by indigenous and local communities [110]. Under the premise of REDD+, activities that improve the capacity of forests to sequester carbon, reduce pressure on forests, and advance diversified livelihood approaches are included. A review of REDD+ strategies in S. Asia shows that REDD+ strategies in S. Asian countries are at different stages of development (Table 6).
India
- Covers forests and TOFs, which potentially includes AFS. The activities of REDD+ contribute to the objective of improving forest and tree cover, thereby ensuring alignment with the National Forest Policy.
Nepal
- First draft of the REDD+ strategy prepared in 2014, facilitating further consultations and the drafting of Version 2 of the REDD+ strategy.
- The REDD+ strategy statement is established in line with the principles of sustainable development objectives, including the national forestry sector vision of forests for people's prosperity. The scope of the policy is limited to various forest classes, including forests under Protected Areas as per the Forest Act (1993), the National Parks and Wildlife Conservation Act (1973), and the Forest Policy (2015).
- Inclusion of leasehold forests, sacred forests, forests on public lands and private forests is likely at an advanced stage, to broaden the defined scope of REDD+.
Pakistan
- REDD+ initiated in 2010; envisages forest ecosystems as public goods, a source of multiple benefits required for development and with potential to mitigate climate change, while building community and ecosystem resilience.
- Has key policies that support conservation of forests and ecosystems.
- 13 policies identified to address the identified drivers of forest cover change.
- A policy measure covering other forested lands supports agroforestry models for addressing forest degradation, with the objective "to create enabling conditions for making existing agroforestry arrangements financially viable for adoption and implementation".
Constraints in Using Agroforestry for Meeting Global Climate Targets
There is a noteworthy gap between country-specific targets and the technical capabilities to measure agroforestry carbon stocks and report them to the UNFCCC. SARCOPA will be a great support in bridging this gap in the coming years, but it will take time to develop capacities with reference to carbon stocks stored in AFS. Insufficient data on carbon stocks before land use change, along with non-existent reporting on soil carbon stocks, is one of the crucial limitations of the AFS database existing in the region [5]. Monitoring, Reporting and Verification (MRV) is a prerequisite for achieving countries' climate adaptation and economic growth aims [112]. Developing robust MRV for AFS in S. Asia is a crucial first stage in facilitating access to national and international funding sources and further backing. Despite the mounting importance of AFS and TOF in global climate change dialogues, it has been difficult to integrate agroforestry into MRV systems as proposed by the UNFCCC. MRV protocols developed by one country may not always work for another. For example, Nepal has a comparatively low forest threshold (0.5 ha, 10% tree cover) that supports the inclusion of AFS in MRV, whereas in Bangladesh, TOF (and thus AFS) are omitted from the forest definition in the policies [20]. Local carbon stock change factors are mainly used, which is a limitation. Lack of continued financial support and deviations in government directives, along with concerns about the capacity for data gathering and analysis, are projected as other potential constraints in realizing the benefits of AFS in the region. Limited investment in the agroforestry sector compared to intensive agriculture adds a key structural restriction to the adoption of AFS [18,90].
Institutional constraints have been the most common limiting factor in the majority of countries in S. Asia. Expectations of high agricultural production per hectare, together with non-existent markets, unresolved land rights, and limited technical support, are other challenges that impede realization of the benefits of AFS in climate policies and implementation. Small landholdings are a key limitation for AFS adoption in the region. Livestock size, distance of forests from villages, and a lack of awareness among farmers, meanwhile, are other local reasons that limit adoption of AFS. Nevertheless, poor and marginalized farmers show interest in adopting AFS [25]. Shortage of water is another major constraint on the promotion and adoption of AFS [108]. In India, the Forest Conservation Amendment Act of 1988 banned wood felling in state forests, amplifying wood prices and providing a financial motivation to adopt AFS [113].
Despite widespread environmental and economic benefits, adoption of AF remains low, largely because of legal and policy constraints including insecure land tenure, complex transit rules, taxes on agriculture-based commodities, and the socio-economic marginalization of local farmers [61]. Certainly, some key requirements for adoption include a growing need in the regional countries to fulfill market requirements, and the formulation of policies that provide clear information on land and tree rights and ownership to enable REDD+ and NAMA contributions. However, farmers in the region are hesitant to plant trees because they do not have the right to fell trees for economic benefit. Further, harvesting and transporting tree wood from cropland to market is not permissible without prior approval from the forest department, which again deters adoption and promotion of AFS [108]. Farmers in Nepal stress their inability to obtain financial benefits from AFS because of unsupportive regulations surrounding the harvesting and marketing of trees [59]. Farmers and experts in Bangladesh support the need for regulations and guidelines for effective implementation of AFS to harness its ecological, economic and climate benefits. In Pakistan, too few trained forest personnel, lack of technical support to farmers, insufficient understanding of tree species, and poor market access along with wood prices emerged as major limiting factors [108]. The failure of agroforestry-related extension services across S. Asian countries has severely limited the opportunity for AFS to improve land use systems and has hindered its adoption in addressing global climate dialogues.
Policy Concerns
The advantage of promoting AFS is the familiarity of small and medium holder farmers with the practice, making it potential low hanging fruit for achieving the NDCs and contributing to climate mitigation and adaptation. However, promotion of AFS alone will not be enough to address the larger concern of using the practice to provide a solution to global climate change. Promotion of AFS in the region needs to be backed by an enabling and effective legal policy environment and strategic implementation to achieve the NDCs. Such policy backing would guarantee rights and ownership to communities, and bring incentives and investments, thereby creating a market-based infrastructure. Given the multiple benefits of AFS, countries should consider giving AFS a special place in REDD+ and NAMAs. However, the multiple challenges stressed in the previous sections should be appropriately discussed and addressed for agroforestry to reach its full potential. The following approaches are recommended:
- National and state policies should encourage ways to identify, classify and report on AFS, and expand the flow of finance to AFS by increasing knowledge and cooperation among key stakeholders (Table 7).
- National policies addressing agriculture, forest conservation and management practices are required to take stock of both efficient mitigation and adaptation approaches, positioning agriculture and forestry practices for worldwide sharing of pioneering technologies and improving the efficient use of land resources (Table 7).
- Financial incentives and regulatory approaches are presently being used; however, effective enactment requires recognition of how land-use choices and emerging social-political and economic powers have the capacity to guide this practice in the future [89].
- Policy frameworks to address climate risks need to be comprehensive enough to internalize the negative impacts of climate change, while promoting income from AFS [5].
While AFS in India, through the Agroforestry Policy, aim to contribute to the goal of enhancing forest cover from the existing 23% of geographical area to 33%, the REDD+ strategy aims to slow down forest degradation and halt deforestation. The Green India Mission is another programme working in this direction that supports AFS in rural parts of the country [45]. The National Agroforestry Policy of Nepal follows up on its Nationally Determined Contributions (2016) and the Climate Change Policy (2011), which recognize forests and trees, including AFS, for promoting climate adaptation and mitigation. A study in Bhutan initiated in June 2020 is facilitated by an EU-funded project on Technical Assistance for Renewable Natural Resources and Climate Change Response and Local Governments and Decentralization-Bhutan (EU-TACS). Such agroforestry-relevant policies are already being drafted and developed in other smaller countries like Bangladesh and Bhutan, and more effort will be required under the larger umbrella of SARCOPA for Pakistan, Sri Lanka and the Maldives to draft agroforestry policies relevant to these countries and their agro-climatic zones.
Recommendations to Improve Mainstreaming of AFS in Climate Change Dialogues
SARCOPA, with support from the World Agroforestry Center (ICRAF), the SAARC Agriculture Centre (SAC) and all national governments, is a landmark effort in the region to acknowledge and mainstream the benefits of AFS with a special focus on country-specific climate action. The UNFCCC encourages countries to produce data from field-based local investigations and carry out reporting under MRV to help create country-specific factors for robust assessment of biomass and SOC stocks [114,115]. Two-phase sampling approaches using laser scanning followed by field-based surveys are an effective method for assessing TOF resources. The region requires more country-specific research on improving TOF models for biomass calculation that are adapted to AFS tree resources [21]. As a first step, it is important to standardize protocols for carbon stock estimation following national REDD+ strategies. India is one of the few countries in the region to pioneer regular satellite-based surveys involving RS-GIS tools, and has been doing this since the 1980s to assess forest cover changes. India's NDC target could be met by TOF, so its National Agroforestry Policy formulated in 2014 and its National REDD+ Strategy of 2018 will benefit the entire process. Incentives for AFS across the region will need more external financial support to strengthen the existing systems. Developing agroforestry pilots for REDD+ can be the next step in building the capacity of foresters and local communities, and in generating awareness on mainstreaming AFS for increased benefits. Conflicts with reference to AFS could be avoided by adopting a cautious, site-specific, and participatory approach to project development [18,116]. Skill development and capacity building, as per the first phase plan of action of SARCOPA, through the creation of model agroforestry farms are already underway across the SAARC region. Discussions on similar issues are becoming common at national and subnational levels, especially in India, Nepal, Bhutan and Bangladesh. Forthcoming research in the region on AFS will require more mechanistic and process-based surveys, followed by models linking AFS and crop development with soil water, carbon and biogeochemical cycles [117].
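As a concrete illustration of what a standardized field-to-carbon protocol involves at the level of a single tree, the sketch below applies a widely used pantropical allometric model (Chave et al., 2014) and the IPCC default carbon fraction of 0.47. It is a generic example under stated assumptions, not a protocol endorsed by SARCOPA or prescribed by any national MRV system, which may require locally calibrated equations.

```r
# Illustrative tree-level carbon estimate from standard field measurements,
# using the pantropical allometric model of Chave et al. (2014):
#   AGB (kg) = 0.0673 * (rho * D^2 * H)^0.976
# where rho is wood density (g/cm^3), D is diameter at breast height (cm)
# and H is tree height (m). Carbon is taken as 0.47 * AGB (IPCC default).
# Generic example only; national MRV protocols may differ.
tree_carbon_kg <- function(dbh_cm, height_m, wood_density) {
  agb_kg <- 0.0673 * (wood_density * dbh_cm^2 * height_m)^0.976
  0.47 * agb_kg                     # carbon = 0.47 * above-ground biomass
}

# Example: a boundary tree with DBH 30 cm, height 18 m, density 0.60 g/cm^3
tree_carbon_kg(dbh_cm = 30, height_m = 18, wood_density = 0.60)  # ~247 kg C
```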
Conclusions
The synthesis presented in this paper clearly supports the importance and potential of AFS in securing human well-being for marginalized and impoverished people, which can also help the countries in S. Asia to meet their NDCs and contribute to the mitigation of climate change. Although the benefits from AFS are considerable, they have not been sufficiently harnessed at the local or national level. One key enabling condition for mainstreaming AFS is a regional consensus at the country level, and this has already begun as countries work on facilitating and extending support to each other under the larger umbrella of SARCOPA. It is important to mention here that national commitments to acknowledge benefits from AFS and recognize them under national agroforestry policies are the next important step. The phase-wise implementation as per the SAARC Resolution on Agroforestry has been initiated and will continue for the next 12 years. These are promising commitments by regional countries and their governments. Countries like India and Nepal have proactively developed agroforestry policies, considering AFS to be low hanging fruit that should be appropriately used. Recently, Bhutan, Bangladesh and the Maldives have also initiated efforts to develop national agroforestry policies. It is certainly relevant for the mountain country of Bhutan, the coastal nation of Bangladesh and the island countries of Sri Lanka and the Maldives to proactively work in this direction to promote synergy for climate change mitigation and adaptation in the region. Around 21% of the agricultural land area in S. Asia is under trees, which is less than in other parts of Asia, except for Central Asia (Table 3). Countries across the region need to set an achievable target to restore degraded AFS and improve the systems by at least 50% in the coming five years as a first step. With years of experience and a traditional knowledge base of AFS across the region, this knowledge could be used to improve conditions and address the NDCs. Moving beyond awareness and technical cooperation to realize the benefits, fulfill local livelihood demands and create more opportunities is urgently needed to strengthen the ongoing momentum on AFS in the region. Important mechanisms to enhance the agricultural productivity of forest-dependent marginalized communities and farmers, by using enhanced inputs, innovative technologies, and incentives to improve agricultural intensification and livelihood diversification, can help in achieving NDC targets and making headway on several SDGs.
The WISDOM Study: breaking the deadlock in the breast cancer screening debate
There are few medical issues that have generated as much controversy as screening for breast cancer. In science, controversy often stimulates innovation; however, the intensely divisive debate over mammographic screening has had the opposite effect and has stifled progress. The same two questions—whether it is better to screen annually or bi-annually, and whether women are best served by beginning screening at 40 or some later age—have been debated for 20 years, based on data generated three to four decades ago. The controversy has continued largely because our current approach to screening assumes all women have the same risk for the same type of breast cancer. In fact, we now know that cancers vary tremendously in terms of timing of onset, rate of growth, and probability of metastasis. In an era of personalized medicine, we have the opportunity to investigate tailored screening based on a woman’s specific risk for a specific tumor type, generating new data that can inform best practices rather than to continue the rancorous debate. It is time to move from debate to wisdom by asking new questions and generating new knowledge. The WISDOM Study (Women Informed to Screen Depending On Measures of risk) is a pragmatic, adaptive, randomized clinical trial comparing a comprehensive risk-based, or personalized approach to traditional annual breast cancer screening. The multicenter trial will enroll 100,000 women, powered for a primary endpoint of non-inferiority with respect to the number of late stage cancers detected. The trial will determine whether screening based on personalized risk is as safe, less morbid, preferred by women, will facilitate prevention for those most likely to benefit, and adapt as we learn who is at risk for what kind of cancer. Funded by the Patient Centered Outcomes Research Institute, WISDOM is the product of a multi-year stakeholder engagement process that has brought together consumers, advocates, primary care physicians, specialists, policy makers, technology companies and payers to help break the deadlock in this debate and advance towards a new, dynamic approach to breast cancer screening.
INTRODUCTION
Annual screening mammography, the most common approach in the US today, has its roots in the large, randomized screening trials of the 1980s. 1 The first trial of annual screening, the U.S. Health Insurance Plan of Greater New York, began in 1963 and included 31,000 women in each arm. 2 At 18 years of follow-up, it showed a 25% reduction in mortality, although benefit to women in their forties accrued after they were 50. The overview of the Swedish trials of bi- or triennial screening showed a relative reduction in breast cancer mortality of 21%, with maximum benefit for women in their sixties. 3 The degree and timing of benefit to younger women in particular has generated a great deal of controversy. 4 Even a decade later, there remains a continuing debate over the methodologic flaws of each of these studies, the net effect of which has impeded consensus on public recommendations for breast screening. [5][6][7] From the outset, translating these studies into population-based screening recommendations stirred controversy, with debate focused on the frequency and most appropriate age to begin screening. The January 1997 Consensus Development Panel convened by the National Institutes of Health recommended that women aged 40-49 be informed of the benefits and risks of screening and decide for themselves. 8 The National Cancer Institute (NCI) and American Cancer Society (ACS), however, recommended regular screening for women in their forties while disagreeing on screening frequency, with the former recommending every 1-2 years and the latter annually. Partly because of the controversy generated, the NCI later stopped issuing screening guidelines. Now, 20 years later, we find ourselves in a familiar place: still reviewing and reanalyzing data from the same trials, debating the optimum starting age and interval, with professional societies that set guidelines compelled to "take a side" in the debate. The controversy following the 2009 JAMA commentary "Rethinking Screening" 9 and updates to USPSTF guidelines 10 illustrates how entrenched both sides have become. Consensus on recommendations remains distant.
The US Preventive Services Task Force (USPSTF) systematic review concluded in 2015, 11 much as it had in 2009, 10 that mammographic screening benefits women over 50 and that biennial, not annual, screening is recommended for women ages 50-74. After weighing the balance of harms and benefits for women aged 40-49, screening was not recommended routinely for women in their forties. Instead, the USPSTF suggested an individualized approach taking patient risk and personal preference into account. In contrast, 2017 guidelines from the American College of Radiology and the Society of Breast Imaging recommend annual screening starting at age 40. 12 The American Cancer Society has revised its guidelines and recommends annual mammograms for women over 45 at average risk, with women between the ages of 40-44 given the opportunity to begin annual screening. Women over the age of 55 are recommended to receive biennial screening, although annual screening may be considered. 13
Whether one believes these figures or not, the takeaway is that we are stuck in an endless cycle of academic debate, arguing over data that have little context in the modern treatment setting. Breast cancer treatment continues to rapidly evolve towards a patient-centered, precision medicine approach that recognizes what is perhaps the most important lesson we have learned over the past two decades of research: that breast cancer is not a monolithic entity, but a spectrum of disease. From indolent lesions of epithelial origin (IDLE) 9 requiring no treatment, to aggressive disease requiring equally aggressive treatment, it has resisted all our attempts to lump it into a single bin.
Yet we continue our one-size-fits-all approach to breast cancer screening. It is contrary to the very nature of the disease. We cannot continue to focus the entirety of our efforts on a screening approach that is based on an outdated understanding of breast cancer biology, expecting that the uncertainties and debate will finally be resolved. Instead, we must be willing to innovate and to entertain new paradigms of screening that incorporate our current understanding of breast cancer, its treatment and risk susceptibility by putting them to the test.
We may have little choice, because the consequences of failing to do so may be to further alienate the very women screening is supposed to help.
WHAT WOMEN WANT: BETTER, NOT MORE SCREENING
Even though generations of women educated in the benefits of screening mammography generally regard it positively, experience shows it is a fragile trust. A single false positive can cause psychological distress for up to 3 years and reduce adherence to subsequent screening by 37%. 30-34 Considering that the specificity of mammography is generally accepted to be ~90% (i.e., roughly 1 in 10 screens of women without cancer is read as abnormal), whereas the real breast cancer rate is ~5 in 1,000 women, the majority of abnormal mammograms are, in fact, false positives. After 10 years of annual screening, over half of all women receive a false-positive recall and 7-9% have a false-positive biopsy. 35

Furthermore, in the wake of the CNBCSS, information concerning overdiagnosis is increasingly available to women, 36 undermining their confidence in screening. Women given controlled, qualitative, and quantitative education on the risks of overdiagnosis have less positive attitudes about screening and demonstrate reduced intent to screen. 37 Similarly, primary care physicians, key influencers in a woman's screening decisions, are far less willing to refer patients aged 40-49 for screening when fully educated about the potential risks and benefits of screening. 38 Further, our conflicting recommendations have made this divisive debate a public one, sowing distrust and deepening confusion for women over how to prevent the disease that scares them the most. 39,40

The question we need to be asking, therefore, is not whether we should screen more or less, earlier or later. It is how we can make screening better for women, reduce false-positive recalls, and improve our ability to more accurately prevent and detect clinically significant cancers sufficiently early. This, after all, is what women tell us they want 41 and what we have observed to date in the WISDOM trial (described below).
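The arithmetic behind this claim is worth making explicit. Taking the figures above of ~90% specificity and a ~0.5% underlying cancer rate, and assuming for illustration a sensitivity of ~85% (our assumption, not a figure from the text), the positive predictive value of an abnormal mammogram is roughly

PPV = (0.85 × 0.005) / (0.85 × 0.005 + 0.10 × 0.995) ≈ 0.04,

i.e., only about 1 in 25 abnormal results reflects a true cancer; the rest are false positives, consistent with the statement above.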
The answer is simply that we must move on. We must begin developing and testing new and better approaches that respond to women's needs. Fortunately, in this respect, there is one thing upon which we all agree: women must have the opportunity to make informed screening choices.
INDIVIDUALIZED, INFORMED CHOICE
Overwhelmingly, women want information about their personal risk of breast cancer. 42,43 Currently, only 10% have accurate perceptions of their personal risk, and 40% have never discussed their personal breast cancer risk with a doctor. 44 Yet a realistic view of their risk is a prerequisite to making informed screening decisions.
We have the tools to better inform women of their personal risk, through well-characterized models that incorporate family history, breast density, endocrine exposures, gene mutations, and atypia, 45-49 along with a number of common gene variants. 50,51 These models teach us that not all women whom we classify as "average risk" for screening purposes actually have the same lifetime risk of breast cancer. Armed with a better understanding of their individual risk, such women will expect, and demand, screening recommendations commensurate with their personal risk.
Unless we are prepared to ignore the modern tools available to us, we are therefore compelled to shepherd breast cancer screening into the era of precision medicine. Now is the time to begin evaluating a patient-centric model, focusing on individually tailored recommendations on when to start, when to stop, and how often to screen, depending upon a woman's personal risk. Only through clinical testing can we establish the evidence that tells us how best to apply risk. The idea of risk-based screening is not revolutionary; in fact, we already do it, although in a crude fashion. It is standard practice for high-risk women with mutations in the BRCA genes, and for first-degree relatives from high-risk families, to begin screening at a much earlier age and to do so more frequently (annual mammogram alternating with annual MRI). 52 But our understanding of breast cancer risk goes much further than our current screening recommendations reflect. Our failure to incorporate our current understanding of personal risk into our screening recommendations means we may be asking some women to accept risk/benefit ratios they might not be comfortable with if they were fully informed.
Within the context of well-designed, randomized, controlled clinical trials, we have the ability to investigate new screening models in a safe, systematic manner, beginning with conservative estimates that minimize the chances of misclassification of risk and avoid underscreening. If we are successful, it could help establish a new baseline for cancer screening, reduce confusion and anxiety for women over conflicting recommendations, improve women's perception of their true risk of breast cancer, improve adherence, 53 and reinforce confidence in providers. Perhaps most importantly, it allows us to learn who is at risk for what kind of cancer, and establish a cycle of continuous improvement in breast cancer screening.
Risk-based screening may or may not be the answer to all of screening's shortcomings, but it is perhaps an answer to the current deadlock in which we find ourselves. In the words of philosopher David Hume, it is time "we start spilling our sweat, and not our blood."
THE WISDOM STUDY: OVERCOMING CHALLENGES
We have recently been awarded a grant from the Patient-Centered Outcomes Research Institute (PCORI) to evaluate a risk-based screening approach within a pragmatic, controlled trial. The WISDOM study (Women Informed to Screen Depending On Measures of risk) is a multicenter trial comparing risk-based screening to annual screening in 100,000 women aged 40-74, initially opening in the Athena Breast Health Network in California and the Midwest (Table 1). The study has a "preference-tolerant" design (Fig. 1) that encourages women to be randomized (n = 65,000) but also allows self-assignment for those with a strong personal preference for either annual or risk-based screening (a pilot study conducted in 2015 found that 74% of women agreed to randomization). Importantly, WISDOM is an adaptive design, allowing us to learn and adjust, continuing to improve the risk-assessment and screening recommendation models over the course of the trial.
An essential aspect of developing WISDOM has been the engagement of all stakeholders, including consumers, policy makers and guideline organizations, multiple medical specialties, and payers, to agree upfront on metrics for success. This ensures the trial remains relevant to the needs of the end-user and sets the stage for rapid adoption should it prove successful.
Patients and advocates in particular, through the Athena Consumer and Community Advisory Committee, have been key partners in WISDOM since its conception. The preference-tolerant design, which allows all women to participate regardless of whether they have strong personal reservations about being randomized, grew from vigorous discussions with this group. The consumer voice is deeply embedded in WISDOM, with influence in all aspects of study design and planning, including enrollment strategies, consent processes, primary care physician outreach and education, risk notification, and participant retention.
The buy-in of health care payers is essential to enable rapid dissemination once results are presented. Modeling shows a risk-based strategy will be more cost-effective in terms of screening, but it requires an initial outlay of resources for one-time genetic testing and comprehensive risk assessments. After almost 2 years of discussion and negotiation, WISDOM's Payer Working Group, led by Blue Shield of California and including all insurers in California, has reached an agreement to implement a "Coverage with Evidence Development" model to cover clinical costs not funded through PCORI. 54 This model allows innovative treatment approaches to be tested transparently. Using a coverage model that fosters the development of evidence, with coverage tied to trial participation, allows agreement on metrics for adoption and should shorten the timeline to adoption. By engaging payers early, we will have laid the foundations to address future implementation challenges related to standard coverage should the study prove successful. We are in the process of engaging other payers.
The most formidable stakeholder challenge in developing a trial of risk-based screening, given the ferocity of the academic debate, lies within the academic community. Among the highest priorities has been to define acceptable parameters of risk assessment, stratification, and screening recommendations. We have also invested considerable effort in reaching consensus on what constitutes success. WISDOM's Risk Thresholds Group and Primary Care Physician Working Group, consisting of primary care teams, representatives of the radiology community, and others, have shared these tasks.
RISK ASSESSMENTS AND RECOMMENDATIONS
The Breast Cancer Surveillance Consortium (BCSC) model was selected as the foundation of individual risk assessments for WISDOM, based on its accuracy, ease of implementation, large (>1 million women) multiethnic target population, and incorporation of ethnicity and breast density as risk factors. 55,56 Additional assessments include polygenic risk based on nearly 200 SNPs, as well as a panel of 9 high- and moderate-penetrance gene mutations.
In translating individual risk to screening recommendations, the primary consideration of the working groups was to develop guidelines that were sufficiently conservative to minimize the risk of potential harm from underscreening, yet progressive enough to minimize potential harm from overdiagnosis, while permitting outcome measures with sufficient study power. The consensus risk stratification and related screening recommendations to be employed within WISDOM are shown in Table 2, and include more frequent screening for those at highest risk or those at risk for faster-growing (e.g., hormone-negative) cancer. In the risk-based assessment arm, no woman will receive a recommendation for less screening than current USPSTF guidelines: individual risk ≥ 1.3% over 5 years initiates screening. Because the uptake of risk-reducing interventions has been very poor despite level 1 evidence of benefit, we will use a stringent threshold (top 2.5 percentile of risk for breast cancer by age group, or lifetime risk in the range of 30% or higher) for identifying participants to target and counsel about endocrine risk-reducing therapy. The gene-based tests also inform the risk for hormone-positive or hormone-negative breast cancer and impact screening and prevention recommendations. Additional details on the rationale and evidence used to develop this model are published elsewhere. 57,58 A shared decision-making (e-prognosis) tool based on recent modeling of comorbidity and the impact of screening 59 will be used to identify women unlikely to benefit from screening due to limited life expectancy. These rules will inform our risk assignments, age to start, age to stop, frequency, and appropriate modality of screening. The trial is designed to adapt over time, refining categorization and screening frequency based on the actual cancer rate and the biology of tumors that develop.

Table 1 (excerpt). Rationale:
• In >30 years of screening, little change in approach
• No clear evidence that annual mammograms reduce breast cancer mortality rates compared to biennial mammograms
• Morbidity associated with annual screening (false positives, overdiagnosis/overtreatment of indolent disease) could safely be reduced
• Conflicting screening recommendations for women in their 40s have resulted in confusion for patients, who want more personalized advice
Hypotheses: Personalized breast cancer screening recommendations based on individual risk assessments will: (1) be at least as safe and less morbid than annual screening; (2) result in improved breast cancer prevention; and (3) be readily accepted by women and preferred over standard annual screening.
Table 1 (excerpt, continued). Primary endpoint(s):
(i) Safety: comparative rate of stage IIB or higher cancers diagnosed in annual vs. risk-based screening arms (non-inferiority)
Risk assessment elements include:
• Proliferative breast condition (atypia)
• BI-RADS breast density score
• Genomic tests for rare high/moderate-penetrance mutations in a number of genes, including BRCA1, BRCA2, ATM, CDH1, CHEK2, PALB2, PTEN, STK11 and TP53
• Polygenic risk score from 96 lower-risk common genetic variants (SNPs) with known association to breast cancer (updated as data emerge)
Eligibility includes:
• Willing to sign informed consent and provide follow-up data
(a) As risk models improve over time, the optimal risk model will be updated and used for risk assignments, as we are testing the concept of risk-based screening, not simply a specific risk model
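To make the stratification logic concrete, here is a purely illustrative sketch of how a risk-to-recommendation rule might be encoded. The function and its structure are our invention; it captures only the two thresholds quoted in the text (5-year risk ≥ 1.3% to initiate screening; top 2.5 percentile or ~30% lifetime risk to trigger prevention counseling) and does not reproduce the full rules of Table 2:

```python
def screening_recommendation(five_year_risk: float,
                             lifetime_risk: float,
                             top_2_5_percentile: bool) -> dict:
    """Hypothetical sketch of a risk-to-recommendation mapping.

    Encodes only the two thresholds quoted in the text; the actual
    WISDOM rules (Table 2) also use age, mutation status, breast
    density, and predicted tumor biology.
    """
    return {
        "screen": five_year_risk >= 0.013,  # USPSTF-level floor from the text
        "counsel_prevention": top_2_5_percentile or lifetime_risk >= 0.30,
    }

# Example: a woman with 2% 5-year risk and 12% lifetime risk
print(screening_recommendation(0.02, 0.12, False))
# -> {'screen': True, 'counsel_prevention': False}
```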
DEFINING SUCCESS
If, after completing WISDOM, we are to avoid simply adding fuel to the fire of the screening debate, the scientific questions we ask must be well defined and the answers definitive. This is particularly challenging given the nature of the current debate, and is further complicated by statistical requirements, population size limitations, and the 5-year follow-up limitation of the funding. Such deliberations within the WISDOM working groups strengthened the study significantly, emphasized safety as the overriding priority, and established a series of outcomes with achievable and highly relevant goals.

WISDOM's primary endpoints are, first, to determine whether risk-based screening is non-inferior to annual screening with respect to late-stage cancers detected. The outcome is the number of Stage IIB or higher cancers found using personalized vs. annual screening. The study has been powered assuming annual incidence rates of 95 Stage IIB or higher cancers per 100,000 women in each arm. 60 With over 65,000 randomized patients, this provides 90% power to detect a difference lower than 0.05% in the risk of being diagnosed with Stage IIB or higher cancer in the personalized vs. annual arm in a given year (83% for a difference <0.035%). 58 Second, we will compare the morbidity of personalized vs. annual screening on the basis of the number of biopsies performed. Assuming 16% of first-time mammograms and 8% of subsequent screens lead to false-positive recalls, 61 65,000 patients equally randomized between annual and personalized screening offer 90% power to detect a difference as small as 1.1% (22 vs. 20.9%).
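For readers who want to sanity-check numbers of this kind, the sketch below estimates power for a one-sided non-inferiority comparison of two proportions by Monte Carlo simulation. It is a deliberate simplification under stated assumptions (a single year of follow-up, equal true rates in both arms, one-sided alpha = 0.025), not a reproduction of the published WISDOM power calculation:

```python
import numpy as np
from scipy.stats import norm

def noninferiority_power(n_per_arm, p_true, margin, alpha=0.025, n_sim=20_000):
    """Monte Carlo power for a one-sided non-inferiority z-test on the
    difference of two binomial proportions (e.g., Stage IIB+ cancer rates)."""
    rng = np.random.default_rng(42)
    x_annual = rng.binomial(n_per_arm, p_true, n_sim)
    x_risk = rng.binomial(n_per_arm, p_true, n_sim)   # equal true rates under H1
    p1, p2 = x_annual / n_per_arm, x_risk / n_per_arm
    se = np.sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    z = (p2 - p1 - margin) / np.maximum(se, 1e-12)    # guard against zero events
    return float(np.mean(z < -norm.ppf(1 - alpha)))   # reject "inferior by margin"

# Inputs quoted in the text: 95 Stage IIB+ cancers per 100,000 per year,
# ~32,500 women per randomized arm, non-inferiority margin of 0.05%.
print(noninferiority_power(32_500, 95 / 100_000, 0.0005))
```

Because this sketch collapses follow-up to a single year, the printed power (roughly 50-60% with these inputs) falls well short of the trial's quoted 90%; accumulating events over the multi-year follow-up is what closes that gap in the actual design.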
Additional secondary objectives will further our understanding of the impacts of personalized screening and include measures of morbidity (e.g., rates of systemic therapy, rates of DCIS, chemoprevention) and the comparative attitudes and acceptance of each screening modality by women enrolled in the trial (e.g., adherence, measures of anxiety, decision regret). Finally, we will determine whether an understanding of personalized risk, especially the ability to predict hormone positive breast cancer, will provide better motivation for and uptake of endocrine risk reducing therapies and lifestyle changes.
Since opening in September 2016, over 4,000 women have enrolled in WISDOM. About two-thirds have agreed to randomization; the other one-third opted to self-select their screening approach in the observation arm, of whom 85% elected personalized screening. Although preliminary, our experience to date provides critical insight into the comfort women feel with the concept of individualized, risk-based screening.
CONCLUSIONS
The United States is the only country where annual screening starting at age 40 is standard practice, yet our breast cancer mortality rate is no better than that of countries that screen less. 62 Clearly, there is room for improvement, and progress will only come by investigating other possibilities. The WISDOM study will evaluate one such possibility, screening based on a woman's individual risk, opening its first site in August 2016 and expanding to other sites nationally in 2017. It is certainly unlikely that all women benefit equally from screening. Investing in pragmatic studies like WISDOM allows us to learn who is at risk for what kind of breast cancer, tailor screening accordingly, and build a new framework for continuous improvement.
MODEL TO ESTIMATE NUTRITIONAL AND NON-NUTRITIONAL LIMITATIONS OF ‘PRATA-ANÃ’ BANANA CROPS GROWN IN DIFFERENT ENVIRONMENTS
Obtaining a high banana yield requires that nutrients are present in adequate quantities and proportions in the plant. Therefore, the use of methods that encompass nutritional balance and equilibrium is required for a good nutritional evaluation. The objective of this work was to model and determine nutritional and non-nutritional limitations of 'Prata-Anã' banana grown in the states of Ceará (CE) and Bahia (BA), Brazil, based on nutritional balance and equilibrium. The study was developed using the databank of leaf nutrient contents and banana yields of two farms of the Sítio Barreiras company, in Missão Velha (CE) and Ponto Novo (BA), Brazil. Parcels with banana yield above the mean plus 0.5 standard deviation were defined as high-yield areas and used as reference populations; parcels with banana yield below that limit were defined as low-yield areas and used for nutritional diagnosis. The databank was divided into four: the first with 253 samples, a reference population with banana yield above 39.81 Mg ha⁻¹ year⁻¹; the second with 553 samples, a low-yield population (Ceará); the third with 147 samples, a reference population with banana yield above 41.69 Mg ha⁻¹ year⁻¹; and the fourth with 334 samples, a low-yield population (Bahia). Yield limitations in the 'Prata-Anã' banana crops due to nutritional causes reached 13.37% in Ceará and 12.17% in Bahia. Non-nutritional factors, such as climate and biotic factors, limited the banana crop yields by up to 28.23% in Ceará and 50.49% in Bahia.
INTRODUCTION
Brazil is the fourth largest banana-producing country, after India, China, and Indonesia, with 6.67 million Mg produced over an area of 465,400 ha and a mean yield of 14.34 Mg ha⁻¹ (FAO, 2019). Despite this high production and large producing area, banana yield in Brazil is well below those of other countries, such as Costa Rica, Indonesia, Guatemala, Ecuador, India, and China. Understanding processes related to fruit nutrition and identifying factors limiting banana yield require diagnostic methods and the ability to isolate nutritional and non-nutritional factors. Environmental and biological factors may affect banana yield even when there are no nutritional factors involved.
Obtaining a high banana yield requires that the nutrients in plants are in adequate amounts and proportions. Therefore, the use of methods that encompass nutritional balance and equilibrium is required for a good nutritional evaluation. The use of two or more methods of nutritional diagnosis enables a better diagnosis by complementarity (BLANCO-MACÍAS et al., 2010; ALMEIDA et al., 2016).
In this context, the Balance Index Method of Kenworthy (1961) (BIMK) and the Diagnosis and Recommendation Integrated System (DRIS) (BEAUFILS, 1973) are recommended for evaluations of nutritional balance and equilibrium, respectively.
The objective of this work was to model and determine nutritional and non-nutritional limitations of 'Prata-Anã' banana grown in the states of Ceará (CE) and Bahia (BA), Brazil, based on nutritional balance and equilibrium.
MATERIAL AND METHODS
The study was developed using the databank of leaf nutrient contents and banana yields of two farms of the Sítio Barreiras company, one in the municipality of Missão Velha, state of Ceará (7.3590°S, 39.2117°W, altitude of 442 m), and the other in the municipality of Ponto Novo, state of Bahia (10.5146°S, 40.0801°W, altitude of 362 m), Brazil.
The climate of the region of Missão Velha, CE, is Aw, tropical, with a dry season in the winter and rainfall concentrated in the summer, according to the Köppen-Geiger classification, with a mean annual rainfall depth of 942 mm and a mean annual temperature of 25.8 °C. The soil of the area was classified as an Oxisol (Latossolo Vermelho-Amarelo distrófico) with a weak A horizon and sandy texture. The area comprised 57 parcels with fertigated 'Prata-Anã' banana (AAB), with a mean parcel area of 3.26 ha.
The climate of the region of Ponto Novo, BA, is also Aw, according to the Köppen-Geiger classification, with a mean annual rainfall depth of 696 mm and a mean annual temperature of 24.1 °C. The soil of the area was classified as an Oxisol (Latossolo Amarelo distrófico) with a weak A horizon and sandy texture. The area comprised 100 parcels with fertigated 'Prata-Anã' banana, with a mean parcel area of 4.53 ha.
The chemical characteristics of the soils are shown in Table 1. The data were based on the databank of soil analyses of the evaluated farms in Missão Velha (CE) and Ponto Novo (BA). The soil pH was evaluated in water at a ratio of 1:2.5; P and K⁺ were extracted by Mehlich-1; Ca²⁺ and Mg²⁺ were extracted by 1 mol L⁻¹ KCl; soil organic matter contents were estimated from organic carbon using a standard conversion factor; and the soil cation exchange capacity was evaluated at pH 7.0.

The meteorological data of the areas, according to the databanks of automatic weather stations installed on the farms, are shown in Table 2. Results of leaf tissue analyses from the databank of the Sítio Barreiras company were used. These data were from analyses done over several years and included the banana yield of each parcel.
The leaf tissues were sampled according to recommendations of Rodrigues et al. (2010) and Costa et al. (2019). The sampling consisted of collecting the central part of the blade of the third leaf from the apex of plants at inflorescence stage, presenting two to three opened male bunches. The samples were processed and analyzed for macronutrients (N, P, K, Ca, Mg, and S) and micronutrients (B, Cu, Fe, Mn, and Zn), according to Sofi et al. (2017).
The banana yields were estimated in Mg ha⁻¹ year⁻¹ by weighing the bunches at harvest. The leaf analyses were done twice a year. Parcels with banana yield above the mean plus 0.5 standard deviation were defined as high-yield areas, and their plants were used as a reference population to develop standards for the Balance Index Method of Kenworthy (1961) (BIMK) and the Diagnosis and Recommendation Integrated System (DRIS) (BEAUFILS, 1973); parcels with banana yield below this limit were defined as low-yield areas and used for nutritional diagnosis.
The databank was divided into four groups, considering the environments and banana yields. The first and second databanks were from Missão Velha, CE, with results of leaf tissue analyses collected twice a year and annual banana yields from 2010 to 2017. The yield databank had 806 records, with mean ± standard deviation of 35.91 ± 7.8 Mg ha⁻¹ year⁻¹, and was divided into low- and high-yield populations: the high-yield population, with banana yield above 39.81 Mg ha⁻¹ year⁻¹ (72.72% of the maximum yield) and 253 samples, and the low-yield population, with 553 samples. The third and fourth databanks were from Ponto Novo, BA, with results of leaf tissue analyses collected twice a year and annual banana yields from 2014 to 2016. The yield databank had 481 records, with mean ± standard deviation of 34.89 ± 13.59 Mg ha⁻¹ year⁻¹, and was divided into low- and high-yield populations: the high-yield population, with banana yield above 41.69 Mg ha⁻¹ year⁻¹ (57.00% of the maximum yield) and 147 samples, and the low-yield population, with 334 samples.
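A minimal sketch of this population split, with variable names of our choosing; note that the paper's cutoffs are reproduced exactly from the reported means and standard deviations:

```python
import numpy as np

def split_populations(yields):
    """Split yield records into high-yield (reference) and low-yield
    populations at mean + 0.5 * standard deviation."""
    yields = np.asarray(yields, dtype=float)
    threshold = yields.mean() + 0.5 * yields.std()
    return threshold, yields[yields > threshold], yields[yields <= threshold]

# The reported summary statistics reproduce the paper's cutoffs:
print(35.91 + 0.5 * 7.8)    # Ceará:  39.81 Mg/ha/year
print(34.89 + 0.5 * 13.59)  # Bahia:  41.69 Mg/ha/year
```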
The mean and variability of leaf nutrient contents in the sampled population were evaluated, and the nutritional indexes were calculated by BIMK and DRIS according to Rodrigues Filho (2018), whose standards for the reference population, established for the same site and banana variety, were used as parameters for the nutritional diagnosis.
The indexes found by BIMK and DRIS for each nutrient in the nutritional diagnosis were substituted into the potential response curves, and the nutritional limitation was obtained using Equation 1:

NL = 100% − ERY (1)

where NL is the nutritional limitation (%); ERY is the Estimated Relative Yield obtained using the potential nutrient-response curve (%); and 100% is the ideal value of each nutrient for plants under full nutritional balance and equilibrium. Thus, the banana yield losses associated with nutritional factors were obtained.
The banana yield losses associated with non-nutritional factors were obtained using Equation 2:

NNL = ERY − ARY (2)

where NNL is the non-nutritional limitation (%); ERY is the Estimated Relative Yield (%); and ARY is the Actual Relative Yield, calculated based on the highest yield (%).
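A small sketch that reproduces the paper's headline figures from these two equations (ERY and ARY values are taken from the Results section below):

```python
def yield_limitations(ery, ary):
    """Nutritional (NL), non-nutritional (NNL), and total yield
    limitations, in %, from Equations 1 and 2."""
    nl = 100.0 - ery          # Equation 1
    nnl = ery - ary           # Equation 2
    return nl, nnl, nl + nnl  # total loss = 100 - ARY

print(yield_limitations(86.63, 58.40))  # Missão Velha, CE -> ~(13.37, 28.23, 41.60)
print(yield_limitations(87.83, 37.34))  # Ponto Novo, BA  -> ~(12.17, 50.49, 62.66)
```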
RESULTS AND DISCUSSION
The potential response curves for levels of nutritional balance and nutritional equilibrium, developed by Rodrigues Filho (2018) and used to determine the limitation of banana yields caused by each nutrient, are shown in Figures 1, 2, 3 and 4.
The quantitative contributions of the factors limiting banana yields are shown in Table 3. Mn was the most limiting nutrient for banana yield on the farm in Missão Velha, CE, considering the nutritional balance level, with an estimated relative yield of 86.63% (Table 3); S was the most limiting nutrient considering the nutritional equilibrium level, with an estimated relative yield of 87.92%. Thus, the maximum banana yield that could be reached would be 86.63% under conditions of 100% nutritional balance and equilibrium. Therefore, the farm in Missão Velha, CE, presumably had a banana yield loss of 13.37% caused by inappropriate nutrition.
S was the most limiting nutrient for banana yield on the farm in Ponto Novo, BA, considering the nutritional balance level, with an estimated relative yield of 88.04%; P was the most limiting nutrient considering the equilibrium level, with an estimated relative yield of 87.83%. Thus, the maximum banana yield that could be reached would be 87.83% under 100% nutritional balance and equilibrium. Therefore, the farm in Ponto Novo, BA, presumably had a banana yield loss of 12.17% due to inappropriate nutrition.

The farm in Missão Velha, CE, presented an actual relative yield of 58.40%, which would be higher, approximately 86.63%, if only the limitations caused by the plant nutritional status were considered. The actual relative yield was lower than the estimated yield, indicating that 28.23% of the banana yield was limited by other (non-nutritional) factors, such as the local climate. The high temperatures from August to December (above 34 °C), low relative air humidity (below 50%), and high vapor pressure deficit (Table 2) in the region can cause thermal stress in banana, decreasing photosynthesis rates and, consequently, banana yields (ARANTES et al., 2016; RAMOS et al., 2018).
The farm in Ponto Novo, BA, presented an actual relative yield of 37.34%, which would be higher, approximately 87.83%, if only the limitations caused by the plant nutritional status were considered. The actual relative yield was lower than the estimated yield, indicating that 50.49% of the banana yield was limited by other (non-nutritional) factors, such as the climate. Although the region of the farm in Ponto Novo, BA (Table 2), presents mild maximum temperatures, except from February to April, and relative air humidity above 60% throughout the year, the maximum wind speed is above 5 m s⁻¹ (except from November to December), which can damage the leaf blade, reducing the leaf area and, consequently, photosynthetic rates and banana yields (DONATO et al., 2016).
In addition, biotic factors, including the incidence of pests and diseases, such as the wilt caused by the fungus Fusarium oxysporum f. sp. cubense, may have limited the banana yield on both farms, considering that this pathogen is well disseminated in these areas.
Therefore, a proposal for more accurate interpretive diagnostics and cultural management in the context of this discussion (DONATO et al., 2017) requires considering the interactions between the different factors (nutrient contents, solar radiation, water availability, temperature, and soil aeration) that affect nutrient flow in the soil-plant system. This is required because accounting for the soil and its relation with plants and the atmosphere is indispensable for predicting nutrient availability to plants, which is not possible through chemical analyses of soils and plant tissues alone (RESENDE; CURI; LANI, 2002).

The overall loss of banana yield, which is the difference between the maximum achievable yield (100%) and the actual relative yield and denotes the total loss of banana yield considering nutritional and non-nutritional factors, was 41.6% for the farm in Missão Velha, CE, and 62.66% for the farm in Ponto Novo, BA.
The information presented in the present study may contribute to minimizing misleading extrapolations by considering specificities, including the different environments and managements, and not only overall standards for diagnosis, regardless of how accurate and refined the available diagnostic tools are.
CONCLUSIONS
Yield limitations in 'Prata-Anã' banana crops due to nutritional causes reached 13.37% in the farm in Missão Velha, CE, and 12.17% in the farm in Ponto Novo, BA, Brazil.
Non-nutritional factors, such as climate and biotic factors, limited the yield of banana crops by 28.23% in the farm in Missão Velha, CE, and 50.49% in the farm in Ponto Novo, BA.
Induced pluripotent stem cell derived cardiomyocytes as models for cardiac arrhythmias
Cardiac arrhythmias are a major cause of morbidity and mortality. In younger patients, the majority of sudden cardiac deaths have an underlying Mendelian genetic cause. Over the last 15 years, enormous progress has been made in identifying the distinct clinical phenotypes and in studying the basic cellular and genetic mechanisms associated with the primary Mendelian (monogenic) arrhythmia syndromes. Investigation of the electrophysiological consequences of an ion channel mutation is ideally done in the native cardiomyocyte (CM) environment. However, the majority of such studies so far have relied on heterologous expression systems in which single ion channel genes are expressed in non-cardiac cells. In some cases, transgenic mouse models have been generated, but these also have significant shortcomings, primarily related to species differences. The discovery that somatic cells can be reprogrammed to pluripotency as induced pluripotent stem cells (iPSC) has generated much interest since it presents an opportunity to generate patient- and disease-specific cell lines from which normal and diseased human CMs can be obtained These genetically diverse human model systems can be studied in vitro and used to decipher mechanisms of disease and identify strategies and reagents for new therapies. Here, we review the present state of the art with respect to cardiac disease models already generated using IPSC technology and which have been (partially) characterized. Human iPSC (hiPSC) models have been described for the cardiac arrhythmia syndromes, including LQT1, LQT2, LQT3-Brugada Syndrome, LQT8/Timothy syndrome and catecholaminergic polymorphic ventricular tachycardia (CPVT). In most cases, the hiPSC-derived cardiomyoctes recapitulate the disease phenotype and have already provided opportunities for novel insight into cardiac pathophysiology. It is expected that the lines will be useful in the development of pharmacological agents for the management of these disorders.
INTRODUCTION
Cardiac arrhythmias can be life threatening and are a major cause of morbidity and mortality in developed nations (Wolf and Berul, 2008). In older patients, most arrhythmic sudden deaths occur in the setting of acute ischemia or coronary artery diseases (Zipes and Wellens, 1998). In younger patients, the great majority of sudden arrhythmic deaths have an underlying genetic cause (Wilde and Bezzina, 2005). These can be broadly subdivided into those associated with structural heart disease (such as hypertrophic cardiomyopathy) and those associated with electrical disease in the structurally normal heart (Wolf and Berul, 2008).
Over the last 15 years, much progress has been made in identifying the clinical phenotypes and cellular and genetic mechanisms underlying the various primary Mendelian arrhythmia syndromes, including the Long QT syndrome (LQTS), Brugada Syndrome (BrS), and catecholaminergic polymorphic ventricular tachycardia (CPVT) (Wilde and Bezzina, 2005). This has provided important insights into these disorders and, as a consequence, improved the management of affected patients. The availability of genetic tests has added an important diagnostic tool, permitting early (presymptomatic) identification of patients at risk and allowing for the timely implementation of preventive strategies (Hofman et al., 2010). Studies into genotype-phenotype relationships have uncovered important gene-specific aspects of disease and indicated that patient management must take the nature of the gene affected into consideration (Priori, 2004). However, there is considerable variation in phenotypic expression of arrhythmia syndromes even within families carrying the same mutation (Scicluna et al., 2008).
Studying the electrophysiological and molecular consequences of a mutation associated with cardiac arrhythmia is ideally done in the native cardiomyocyte (CM) environment. However, obtaining ventricular cardiac biopsies from patients is a highly invasive procedure and not without significant risk. Consequently, the majority of functional studies on specific mutations associated with the Mendelian rhythm disorders have relied on heterologous expression systems, primarily Xenopus oocytes, human embryonic kidney (HEK) cells, and Chinese Hamster Ovary (CHO) cells (Watanabe et al., 2008), in which the mutated ion channel of interest is expressed. Such cellular models have significant shortcomings, since they lack important constituents of cardiac ion channel macromolecular complexes that might be required to reproduce the exact molecular and electrophysiological phenotype of the mutation. For example, the behavior of the Na+ channel in cell expression systems seems to be different from that in CMs (Remme et al., 2008). One way of overcoming this has been to generate transgenic mice carrying specific mutations (Sabir et al., 2008). However, the generation of such mouse models is costly and time-consuming and not practical for high-throughput screening of rare inherited arrhythmia mutations. Moreover, there remain crucial differences between mouse and human cardiac electrophysiological characteristics, such as the high basal heart rate (>500 bpm), the very negative action potential (AP) plateau phase, and the short AP of the mouse compared to human (Watanabe et al., 2011). These differences are due, among other things, to the different biophysical properties of the transient outward currents (I to) of human and mouse (for review, see Nerbonne and Kass, 2005).
The discovery of somatic cell reprogramming to generate induced pluripotent stem cells (iPSC) (Takahashi and Yamanaka, 2006) has created much excitement because of the possibility of producing unique patient- and disease-specific human iPSC (hiPSC) lines (Takahashi and Yamanaka, 2006; Takahashi et al., 2007; Yu et al., 2007). With this technique, somatic cells can be turned into embryonic stem cell-like cells that can differentiate into all cells of the human body and be propagated indefinitely in culture. Thus, hiPSC can provide investigators with genetically diverse human model systems to study mechanisms of disease and identify strategies for potential new therapies. Zhang et al. (2009) were the first to show that hiPSC can differentiate to functional CMs, making it possible to generate patient-specific human CMs, which by definition carry the patient's genetic background. hiPSC-derived CMs (hiPSC-CMs) therefore represent a new model system for studying Mendelian arrhythmia syndromes.
Here we provide a short overview of hiPSC generation, culturing, and differentiation methods. Further we will discuss in detail the electrophysiological characteristics of hiPSC-CMs and review hiPSC models for cardiovascular diseases, including LQT1, LQT2, LQT3/BrS, LQT8/Timothy syndrome, and CPVT.
DERIVATION OF hiPSC MODELS

CELL ORIGIN
Although the first hiPSC lines were derived from dermal fibroblasts (Takahashi et al., 2007), hiPSC can now be generated from a wide variety of somatic cells. It is important to consider easily accessible sources that are efficient to reprogram and impose minimal burden on the patient. Easily accessible sources used successfully for reprogramming include keratinocytes from skin or plucked hair (Aasen et al., 2008), peripheral blood (Loh et al., 2009), mesenchymal cells in fat (Sun et al., 2009), dental pulp (Tamaoki et al., 2010), and oral mucosa (Miyoshi et al., 2010).
CELL REPROGRAMMING
Somatic cells can be reprogrammed to a pluripotent state by introducing pluripotency-associated genes. The first iPSC reported were generated by transducing mouse fibroblasts with four retroviral vectors encoding OCT4, SOX2, KLF4, and C-MYC (Takahashi and Yamanaka, 2006). The first hiPSC were generated using the same four retroviral vectors (Takahashi et al., 2007) or OCT4, SOX2, LIN28, and NANOG (Yu et al., 2007). Later studies reported reprogramming with other combinations and numbers of pluripotency factors. The hiPSC thus generated can be kept in culture indefinitely using a variety of undefined fibroblast feeder cell and fetal calf serum-based methods (Takahashi et al., 2007; Yu et al., 2007) or defined mTeSR/Matrigel-based protocols. Transplantation of hiPSC into immune-compromised mice leads to the formation of teratomas with derivatives of the three embryonic germ layers, demonstrating the pluripotent potential of these cells (Takahashi et al., 2007; Yu et al., 2007). In addition, differentiation of hiPSC in vitro also results in derivatives of the three germ layers. The review by Narsinh et al. (2011) provides a detailed overview of the methods to reprogram somatic cells to iPSC and discusses the advantages and disadvantages of the different techniques.
GENERATION OF iPSC-DERIVED CARDIOMYOCYTES
When iPSC are removed from differentiation suppression conditions and/or grown in suspension aggregates [called embryoid bodies (EBs)], spontaneous differentiation to cells of the three germ layers occurs. CMs originate from the mesodermal germ layer, so CM differentiation first requires efficient differentiation toward the mesodermal lineage. Directed differentiation toward the cardiac lineage is mainly achieved by one of the following strategies: (1) the first involves the formation of EBs in the presence of growth factors and repressors known to influence heart development (Kehat et al., 2001); (2) the second relies on the influence of endoderm on cardiac differentiation during embryogenesis (Mummery et al., 2003); here, co-culture of iPSC with mouse END-2 cells is used to produce CMs; (3) the third involves monolayer culture of iPSC at high density, seeded on Matrigel, with sequential treatment with activin A and BMP4 (Laflamme et al., 2007). This method was developed using human embryonic stem cells (hESC) but has been transferred to hiPSC. Beating areas in differentiated EBs usually appear in 7-10 days. These EBs can be microscopically dissected and dissociated into single cells. For electrophysiological and immunofluorescence analysis, the dissociated cells can be seeded onto glass coverslips.
MOLECULAR AND STRUCTURAL CHARACTERISTICS
The first hiPSC-CMs were generated by Zhang et al. (2009). In these cells the investigators examined the gene expression of the transcription factor Nkx2.5, the myofilament proteins cardiac troponin T, α-myosin heavy chain, α-actinin, the atrial and ventricular isoforms of myosin light chain 2, atrial natriuretic factor, and phospholamban (PLN). Low levels of cardiac troponin T and the atrial isoform of myosin light chain 2 were found in undifferentiated hiPSC, whereas high expression of all the cardiac genes was found in the hiPSC-CMs, comparable to the expression of these genes in adult ventricular myocardium. Immunohistochemistry showed a typically striated pattern for α-actinin and myosin light chain. However, these cells had multiangular morphologies and relatively disorganized sarcomeres (Dick et al., 2010). Novak and co-workers demonstrated by transmission electron microscopy that hiPSC-CMs had an immature ultrastructure without t-tubuli (Novak et al., 2012).
ACTION POTENTIALS
Using the patch clamp technique, Zhang et al. (2009) were the first to measure the APs in spontaneously contracting cells isolated from hiPSC-EBs. The majority of the cells showed ventricular-like APs (70-74% of cells for two distinct hiPSC lines), but atrial-like and nodal-like APs were also observed. The distinction was made on AP phenotype, with a negative diastolic membrane potential, a rapid AP upstroke, and a long plateau phase for ventricular-like APs. The absence of a prominent plateau phase was a characteristic of atrial-like APs, resulting in shorter AP duration compared to ventricular-like APs. Nodal-like APs showed a more positive maximum diastolic potential (MDP), a slower AP upstroke, and a prominent phase 4 depolarization. Other studies also described ventricular-like, atrial-like, and sometimes nodal-like APs (Moretti et al., 2010; Fatima et al., 2011; Itzhaki et al., 2011a; Ma et al., 2011; Matsa et al., 2011; Jung et al., 2012; Lahti et al., 2012), with the ventricular-like phenotype being the most prominent AP form (48-76%) (Zhang et al., 2009; Moretti et al., 2010; Itzhaki et al., 2011a; Ma et al., 2011; Lahti et al., 2012).
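The qualitative criteria above lend themselves to a simple feature-based heuristic. The sketch below is our illustration only; the thresholds are rough assumptions, not values from the cited studies:

```python
def classify_ap(mdp_mv, dvdt_max_v_per_s, apd90_ms, apd50_ms):
    """Rough AP phenotype heuristic based on the qualitative criteria
    described in the text. Numeric thresholds are illustrative assumptions."""
    plateau_fraction = apd50_ms / apd90_ms  # prominent plateau -> close to 1
    if mdp_mv > -60 and dvdt_max_v_per_s < 10:
        return "nodal-like"     # depolarized MDP, slow upstroke
    if plateau_fraction < 0.5:
        return "atrial-like"    # no prominent plateau, shorter APD
    return "ventricular-like"   # negative MDP, rapid upstroke, long plateau

print(classify_ap(-70, 25, 400, 320))  # -> ventricular-like
```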
Comparison of hiPSC-CMs to hESC-CMs seems valuable, since hESC-CMs are more established. In hESC-CMs, ventricular-like APs are also observed most frequently (50-60%; Zhang et al., 2009, 2011). Moretti et al. (2010) employed single-cell reverse-transcriptase PCR in combination with patch-clamp in the same cell to show that the designation of ventricular-like, atrial-like, and nodal-like APs based on cellular electrophysiological features correlated with gene expression of specific myocyte-lineage markers.

Table 1 summarizes the reported AP characteristics of hiPSC-CMs, hESC-CMs, and native ventricular CMs. The APs measured in hiPSC-CMs differ from APs measured in freshly isolated native CMs (Table 1). The first remarkable difference is that most of the hiPSC-CMs studied, including the ventricular-like and atrial-like cells, are spontaneously active, with beating rates between 28 and 108 bpm (Table 1). Whether, in these studies, spontaneous activity was used as a tool for CM selection while non-beating CMs were also present, or whether it is a typical feature of hiPSC-CMs, is unknown. We recently (Davis et al., 2012) performed experiments on non-spontaneously beating hiPSC-CMs, selecting quiescent cells that were able to contract upon field stimulation. In these non-spontaneously beating hiPSC-CMs, the resting membrane potential (RMP) was more negative than the MDP in most studies reporting on spontaneously active hiPSC-CMs (Table 1).

Compared with native human ventricular CMs, where the reported RMP varies from -81.8 to -87 mV (Table 1), the MDP of ventricular-like hiPSC-CM APs is less negative, with values ranging from -57 to -75 mV (Table 1). In spontaneously beating hiPSC-CMs, the ventricular-like AP has a maximal upstroke velocity (dV/dt_max) ranging from 9 to 40 V/s, which is slow compared to non-spontaneously beating hiPSC-CMs, with a dV/dt_max of 115 V/s, and native ventricular CMs, with a dV/dt_max of 215-234 V/s (Magyar et al., 2000). The duration of ventricular-like hiPSC-CM APs, for example at 90% of repolarization (APD90), is longer in spontaneously active cells (313-495 ms) compared to non-spontaneously active hiPSC-CMs (173 ms) and native freshly isolated CMs (213-351 ms) (Table 1). The AP amplitude (APA) of most ventricular-like hiPSC-CM APs (87-113 mV) is comparable to native ventricular CMs (104-106 mV), which, given the depolarized MDP of hiPSC-CMs, results in a higher overshoot of the hiPSC-CM AP.

Direct comparison between the APs is complicated by differences in experimental techniques. Most hiPSC-CM studies used a temperature between 35 and 37 °C; only in the study of Itzhaki was a temperature of 32 °C used, which could explain the depolarized MDP, slow dV/dt_max, and long APD90 in that study (Itzhaki et al., 2011a). The majority of the studies used the ruptured whole-cell patch-clamp technique, while three studies (Ma et al., 2011; Davis et al., 2012; Lahti et al., 2012) used the perforated patch-clamp technique. With the latter technique it is possible to perform experiments closer to physiological conditions, since cell dialysis is minimal and EGTA, a buffer for Ca2+ ions, is absent.
MEMBRANE CURRENTS
The shape of the AP is the result of the various inwardly and outwardly directed ion currents present in the CM. A schematic overview of the different ionic membrane currents underlying the ventricular AP and their time course is depicted in Figure 1. Because of the clear differences in AP shape between native CMs and hiPSC-CMs, one can assume that differences exist in the content and function of the various cardiac ion channels between the two cell types. Thus, before hiPSC-CMs can be used as a cell model in the study of cardiac arrhythmia syndromes, it is important to carry out a detailed comparison of the cardiac ion currents in hiPSC-CMs with those in native CMs. In the description of the cardiac ion currents below, a comparison between hiPSC-CMs displaying ventricular-like APs and healthy native human ventricular CMs is made, unless stated otherwise.
Sodium current
The cardiac Na+ current (I Na) is responsible for the AP upstroke in ventricular CMs [see Berecki et al. (2010) and primary references cited therein]. Mutations in the genes encoding the α- and β-subunits of the cardiac Na+ channel can alter the kinetics and availability of the cardiac Na+ current (Remme et al., 2008). As stated before, the upstroke velocity of hiPSC-CM APs is extremely low compared to the AP upstroke of freshly isolated human ventricular CMs. In hiPSC-CMs, I Na was studied in detail in two reports (Ma et al., 2011; Davis et al., 2012). Ma et al. (2011) report a half-maximal potential (V1/2) of activation and inactivation of -34.1 and -96.1 mV, respectively. Davis et al. (2012) report a V1/2 of activation of approximately -42 mV. The findings of these studies are consistent with values reported for native human ventricular CMs (Sakakibara et al., 1993) (Table 2). The low temperature and reduced Na+ concentration used to study the maximal peak I Na in native human ventricular CMs (Sakakibara et al., 1993) prevent comparison with the maximal peak I Na measured in hiPSC-CMs (Ma et al., 2011; Davis et al., 2012). Other I Na characteristics, such as recovery from inactivation and slow inactivation, have not been reported to date. In the presence of the Na+ channel blocker tetrodotoxin (TTX), the upstroke of the AP in hiPSC-CMs is delayed and the dV/dt_max is reduced (Ma et al., 2011).

[Figure 1 legend: delayed afterdepolarization (DAD) and its underlying mechanism. I Na, Na+ current; I Ca,L, L-type Ca2+ current; I Ca,T, T-type Ca2+ current; I to1, transient outward current type 1; I Cl(Ca), Ca2+-activated Cl- current, also called I to2; I Kur, ultrarapid component of the delayed rectifier K+ current; I Kr, rapid component of the delayed rectifier K+ current; I Ks, slow component of the delayed rectifier K+ current; I K1, inward rectifier K+ current; I f, funny current; I NCX, Na+/Ca2+ exchange current.]
In hESC-derived CMs (hESC-CMs), the Na+ channel blocker lidocaine also reduced the spontaneous beating rate (Kuzmenkin et al., 2009). Whether I Na plays a role in spontaneous activity in hiPSC-CMs is unknown. However, hiPSC-CMs have prominent Na+ currents with characteristics close to those of native human ventricular CMs. Although information to compare maximal peak I Na is lacking, the low dV/dt_max of spontaneously active ventricular-like hiPSC-CM APs thus seems due to lower functional availability of Na+ channels (related to the relatively positive diastolic membrane potential) rather than to differences in I Na density.
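The availability argument can be made concrete with a steady-state inactivation (Boltzmann) curve. The sketch below uses the inactivation V1/2 of -96.1 mV reported by Ma et al. (2011); the slope factor of 6 mV is our assumption for illustration:

```python
import math

def na_availability(v_m, v_half=-96.1, k=6.0):
    """Steady-state Na+ channel availability from a Boltzmann fit:
    h_inf = 1 / (1 + exp((Vm - V1/2) / k)).
    V1/2 from Ma et al. (2011); slope factor k is an assumed value."""
    return 1.0 / (1.0 + math.exp((v_m - v_half) / k))

for v in (-85, -70, -60):  # native-like RMP vs. depolarized hiPSC-CM MDP
    print(f"{v} mV: {na_availability(v):.2f}")
# Availability falls steeply as the diastolic potential depolarizes,
# consistent with the slow upstrokes observed in hiPSC-CMs.
```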
Calcium current
Two types of Ca2+ current exist in the mammalian heart, i.e., the L-type (I Ca,L) and T-type (I Ca,T) Ca2+ current [for review, see Nerbonne and Kass (2005)]. Patch-clamp studies have demonstrated the presence of I Ca,L in hiPSC-CMs, with a V1/2 of activation and inactivation of -15 and -29 mV, respectively (Ma et al., 2011). These values are more comparable to those found in native atrial CMs (-12 and -27 mV for the V1/2 of activation and inactivation, respectively) (Mewes and Ravens, 1994) than in native ventricular myocytes (-4.2 to -4.7 mV and -23.5 to -19.3 mV) (Mewes and Ravens, 1994; Magyar et al., 2000). The maximal peak I Ca,L in hiPSC-CMs reported by Ma et al. (2011) is 16.4 pA/pF, which is much higher than the 3.3 pA/pF reported in hiPSC-CMs by Yazawa et al. (2011). In native ventricular CMs, the maximal current density varies between 2.2 and 10.2 pA/pF (Mewes and Ravens, 1994; Magyar et al., 2000). The higher maximal peak I Ca,L in the study of Ma et al. (2011) may be explained by the higher extracellular Ca2+ and higher temperatures used in their experiments. Blocking I Ca,L with nifedipine results in shortening of the AP duration and field potential duration (FPD), with minimal effects on dV/dt_max (Itzhaki et al., 2011a; Ma et al., 2011). Long-term application of nifedipine resulted in cessation of beating in some EBs (Itzhaki et al., 2011a). Functional presence of I Ca,T has not been reported in hiPSC-CMs; Ma et al. (2011) did not find clear evidence for its presence. I Ca,T is present in the human heart conduction system, where it facilitates pacemaker depolarization, but it is not functionally present in healthy native human ventricular CMs (Ono and Iijima, 2010). T-type Ca2+ channels are re-expressed in atrial and ventricular CMs under pathological conditions such as cardiac hypertrophy and heart failure (Ono and Iijima, 2010).
Transient outward current
Two transient outward current components are found in native mammalian cardiac cells, one carried by K + (I to1 ), the other by Cl − ions (I to2 ) [for review, see (Nerbonne and Kass, 2005)].
While it is not yet known whether I to2 is present in hiPSC-CMs, native human ventricular CMs are known to lack I to2 (Verkerk et al., 2001). On the other hand, I to1 has been found in hiPSC-CMs (Moretti et al., 2010; Ma et al., 2011), but its gating properties have not been reported. Reported peak current densities of I to1 in hiPSC-CMs display large variation, between 2.4 (Ma et al., 2011) and 30 pA/pF (Moretti et al., 2010), both at +60 mV. Values reported for native ventricular CMs vary between 2.3 and 16 pA/pF (Wettwer et al., 1994; Nabäuer et al., 1996). Although an exact comparison of I to1 density between hiPSC-CMs and native human CMs is hampered by differences in experimental conditions, I to1 current density also depends on the site from which the native ventricular CMs are isolated, with larger current densities reported in human subepicardial ventricular myocytes compared to endocardial ventricular CMs (Beuckelmann et al., 1993; Wettwer et al., 1994).
Studies of I to1 block on hiPSC-CM APs have not yet been performed. However, due to the depolarized MDP values in hiPSC-CM, I to1 function may be limited because most channels will be inactivated (Varro and Papp, 1992). Further studies are required to address the function of I to1 in hiPSC-CM in detail.
The delayed rectifier potassium current
In the mammalian heart, the delayed rectifier K + current (I K ) is composed of three different components: the ultrarapid (I Kur ), the rapid (I Kr ), and the slow (I Ks ) components [for review, see (Nerbonne and Kass, 2005)]. To our knowledge, studies to elucidate the presence and function of I Kur in hiPSC-CMs are lacking.
The presence of I Ks in hiPSC-CMs has been reported in two studies. Ma et al. (2011) found I Ks in 5 out of 16 cells studied, and, when present, the average I Ks density was 0.31 pA/pF. In contrast, Moretti et al. (2010) measured I Ks in all studied cells, and the average density was around 2.5 pA/pF [estimated from Figure 4A of Moretti et al. (2010)]. In native left ventricular human CMs, Virag et al. (2001) identified I Ks in 31 out of 58 cells, with a maximal current density of approximately 0.18 pA/pF. In hiPSC-CMs, blockade of I Ks by chromanol 293B results in only minimal prolongation of the AP (Ma et al., 2011). This is consistent with the relatively small number of cells exhibiting I Ks and the small I Ks densities found in that study, but contrasts with the effects of a loss-of-function I Ks mutation, which results in prominent AP prolongation (see paragraph "LQT1"). A study by Wang et al. (2011) in hESC-CMs suggests altered expression of the β-subunit minK, encoded by the KCNE1 gene, as a mechanism for variable I Ks function in the developing heart and in disease.
Inward rectifier current
In atrial and ventricular CMs, the inward rectifier K+ current (I K1) is an important contributor to the maintenance of the RMP and contributes to the terminal phase of repolarization (Dhamoon and Jalife, 2005). I K1 was found to be present in hiPSC-CMs (Ma et al., 2011). I K1 density in hiPSC-CMs is four times smaller than that reported in native ventricular CMs: 0.9 (Ma et al., 2011) vs. 3.6 pA/pF (Magyar et al., 2000), respectively. In hESC-CMs, I K1 is significantly increased after longer culture periods, and these longer-cultured cells also displayed a flattened diastolic depolarization and decreased spontaneous activity (Sartiani et al., 2007).
The small I K1 densities in hiPSC-CMs may explain the frequently observed spontaneous activity in these cells. However, whether all hiPSC-CMs have a low I K1 density or whether a bias is introduced by the selection of spontaneously active cells for patch-clamp needs to be elucidated.
The acetylcholine-activated K + current
The acetylcholine-activated K+ current (I K,ACh) is involved in parasympathetic regulation of heart rate (Tamargo et al., 2004). To our knowledge, I K,ACh has not yet been studied in hiPSC-CMs. Studies addressing the presence of I K,ACh in hiPSC-CMs might be particularly important for modeling atrial arrhythmias, as blockers of I K,ACh, which leave ventricular repolarization intact, are effective in the treatment of atrial fibrillation (Hashimoto et al., 2006).
The ATP-sensitive K + current
The ATP-sensitive K + current (I K,ATP ) has not been studied in detail in hiPSC-CMs. However, the I K,ATP channel openers nicorandil and pinacidil shorten the AP in hiPSC-CMs (Itzhaki et al., 2011a;Matsa et al., 2011), suggesting that I K,ATP channels are present in these cells. Further studies are required to address the presence and function of I K,ATP in hiPSC-CM in detail.
The hyperpolarization-activated "funny" current
The funny current (I f) is an inward current activating at hyperpolarized membrane potentials [for review, see Verkerk et al. (2007)]. In human sinoatrial node cells, the current density of I f at a membrane potential of -130 mV is reported to be 8 pA/pF (Verkerk et al., 2007). I f has also been described in human atrial CMs (El Chemaly et al., 2007) and in human ventricular CMs during heart failure (Hoppe et al., 1998). The current densities reported, however, are much smaller than in human sinoatrial node cells. hiPSC-CMs also exhibit I f (Ma et al., 2011), with a reported current density of 4.1 pA/pF (Ma et al., 2011). The relatively high I f density in hiPSC-CMs compared to human ventricular CMs might be attributed to the fact that these cells express higher levels of the HCN isoforms (HCN1, 2, 4) than adult human CMs (Synnergren et al., 2012). In hiPSC-CMs, I f starts to activate at potentials negative to -60 mV and has a V1/2 of activation of -84 mV (Ma et al., 2011), and may therefore play a role in the spontaneous activity of these cells. hESC-CMs have comparable I f characteristics, and in these cells blockade of I f with zatebradine resulted in slowing of spontaneous activity due to a reduced diastolic depolarization rate (Sartiani et al., 2007).
Na + -Ca 2+ exchange current
The Na + -Ca 2+ exchange current (I NCX ) is crucial for Ca 2+ extrusion from the cell and plays a role in the electrical activity of mammalian CMs (Sipido et al., 2007). While the functional properties of I NCX have not been studied in hiPSC-CMs, the presence of the Na + -Ca 2+ exchanger in hiPSC-CMs has been demonstrated at the protein level (Lee et al., 2011). I NCX is present in hESC-CMs and its density increases with maturation (Fu et al., 2010).
In human CMs, Ca 2+ extrusion by the Na + /Ca 2+ exchanger is the major mechanism to balance the Ca 2+ influx through I Ca,L (Sipido et al., 2007). In addition, the Na + /Ca 2+ exchanger has a function during depolarization, where it contributes in its reverse mode (Ca 2+ influx) to the total amount of Ca 2+ influx. The amplitude of I NCX depends on the membrane potential and the intracellular levels of Na + ([Na + ] i ) and Ca 2+ ([Ca 2+ ] i ) (Sipido et al., 2007). Altered [Na + ] i and/or [Ca 2+ ] i will lead to altered I NCX and can cause cardiac arrhythmias due to spontaneous Ca 2+ releases from the SR. Studying I NCX in hiPSC-CMs might be of particular interest in cardiac arrhythmia models of CPVT and LQT3. CPVT is associated with mutations in RyR2, which can lead to increased [Ca 2+ ] i due to altered gating properties of the channel. In LQT3 syndrome there is an increased persistent I Na (Remme et al., 2006), which might lead to elevated levels of [Na + ] i .
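The dependence of I NCX on membrane potential, [Na + ] i and [Ca 2+ ] i can be illustrated through the exchanger's reversal potential: for the 3Na + :1Ca 2+ stoichiometry, E NCX = 3E Na - 2E Ca . The sketch below uses assumed, textbook-style ion concentrations (not values from the studies cited above) to show how elevated [Na + ] i , as might occur in LQT3, shifts E NCX and thereby the balance between forward and reverse mode.

import math

R, T, F = 8.314, 310.0, 96485.0  # gas constant, body temperature (K), Faraday

def nernst(z, conc_out, conc_in):
    """Nernst equilibrium potential (V) for an ion of valence z."""
    return (R * T) / (z * F) * math.log(conc_out / conc_in)

def e_ncx(na_o, na_i, ca_o, ca_i):
    """Reversal potential of the 3Na+:1Ca2+ exchanger: E_NCX = 3*E_Na - 2*E_Ca."""
    return 3 * nernst(1, na_o, na_i) - 2 * nernst(2, ca_o, ca_i)

# Illustrative (assumed) concentrations in mmol/l.
base = e_ncx(na_o=140.0, na_i=10.0, ca_o=1.8, ca_i=0.0001)
high_na = e_ncx(na_o=140.0, na_i=14.0, ca_o=1.8, ca_i=0.0001)  # e.g., elevated [Na+]i as in LQT3

print(f"E_NCX (baseline)      = {base * 1000:6.1f} mV")
print(f"E_NCX (elevated Na+i) = {high_na * 1000:6.1f} mV")
# A more negative E_NCX favors reverse-mode (Ca2+ influx) operation
# over a larger part of the action potential.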
EXCITATION-CONTRACTION COUPLING
hiPSC-CMs display clearly visible contractions. In native adult CMs, a small influx of Ca 2+ through the L-type Ca 2+ channels triggers a several-fold multiplied Ca 2+ release from the sarcoplasmic reticulum (SR) via ryanodine receptors (RyRs). This phenomenon is referred to as "Ca 2+ -induced Ca 2+ release" (CICR) (Lee et al., 2011). CICR is the key mechanism underlying excitation-contraction coupling. The key Ca 2+ handling proteins RyR2, SR Ca 2+ -ATPase (SERCA), junctin (Jun), triadin (TRDN), Na + /Ca 2+ exchanger (NCX), calsequestrin (CASQ2), L-type Ca 2+ channel (Ca v 1.2), inositol-1,4,5-trisphosphate receptor (IP3R2) and PLN are expressed in hiPSC-CMs (Itzhaki et al., 2011b;Lee et al., 2011). Spontaneous rhythmic Ca 2+ transients are present in hiPSC-CMs, and blocking of I Ca,L by nifedipine abolishes these Ca 2+ transients (Itzhaki et al., 2011b). The presence of functional SR and RyRs was proven by application of caffeine, which induced a large Ca 2+ transient (Itzhaki et al., 2011b;Lee et al., 2011), consistent with findings in human ventricular CMs (Piacentino et al., 2003). In addition, ryanodine caused a reduction in the amplitude of the Ca 2+ transient (Itzhaki et al., 2011b;Lee et al., 2011). The pattern of the Ca 2+ transient in hiPSC-CMs was studied by transverse line-scan images and revealed a U-shaped Ca 2+ wavefront (the rise of Ca 2+ in the periphery is faster than in the center of the cell), which is typical for t-tubule deficient cells (Lee et al., 2011). This suggests that hiPSC-CMs lack t-tubuli, an observation in line with that of Novak and co-workers (Novak et al., 2012), who did not find t-tubuli with transmission electron microscopy. This would mean that hiPSC-CMs likely have poor coupling between Ca 2+ influx through L-type Ca 2+ channels and Ca 2+ release from the SR through RyRs.
LQT1
Moretti and co-workers (Moretti et al., 2010) were the first to publish on a hiPSC-CM model for a primarily electrical disease, namely LQT1. LQT1 is a repolarization disorder identified by a prolongation of the QT interval on the ECG due to mutations in the KCNQ1 gene, encoding the α subunit of the K + channel responsible for I Ks (Wilde and Bezzina, 2005). The investigators obtained fibroblasts from two thus far asymptomatic patients with the KCNQ1-G569A mutation and from two healthy controls (Moretti et al., 2010). These fibroblasts were infected with retroviruses encoding OCT3/4, SOX2, KLF4, and c-MYC; hiPSC-CMs were differentiated as EBs. In this study, AP characteristics and K + currents were investigated in spontaneously beating cells. Three different types of APs were distinguished, designated as ventricular-, atrial-, and nodal-like. The investigators also correlated these characteristics with gene-expression analysis of specific myocyte-lineage markers (MLC2v, MLC2a, and HCN4 for ventricular-, atrial- and nodal-like cells, respectively). The delayed rectifier currents were studied in ventricular-like myocytes. In hiPSC-CMs derived from the LQT1 patients (LQT1-iPSC-CMs), I Ks peak and tail current densities were reduced by approximately 75%, whereas I Kr conductance was unaffected. APs of atrial-like and ventricular-like hiPSC-CMs were significantly prolonged in LQT1-iPSC-CMs compared to control (WT-iPSC-CMs). Adaptation of the AP duration to higher pacing frequencies and the response to isoproterenol were impaired in LQT1-iPSC-CMs. EADs were elicited in response to isoproterenol (a β-adrenergic agonist) in 6 out of 9 LQT1-iPSC-CMs and never in WT-iPSC-CMs. When propranolol (a non-selective β-blocker) was applied, the effect of isoproterenol was blunted. These data are in line with observations in LQT1 patients, as these patients suffer from arrhythmias during increased heart rates caused by emotional stress or exercise. The data are also in line with the beneficial effects of β-blockers in suppressing arrhythmias in these patients (Ruan et al., 2008). Immunocytochemistry revealed that the KCNQ1-G569A mutation leads to impaired trafficking and localization of the mutant channels.
LQT2
Three groups have published on hiPSC-CM models of LQT2 (Itzhaki et al., 2011a,b;Matsa et al., 2011;Lahti et al., 2012). LQT2 is a repolarization disorder caused by mutations in KCNH2, encoding I Kr channels (Wilde and Bezzina, 2005). Itzhaki et al. (2011a) reported on a hiPSC-CM model generated from dermal fibroblasts obtained from a 28-year-old woman with a diagnosis of familial LQT2 due to the KCNH2-A614V mutation. Clinical data of the patient were not shown. Fibroblasts were reprogrammed by retroviral infection with vectors encoding SOX2, KLF4, and OCT4; EB formation was used for differentiation of the hiPSCs into CMs. APs were measured from spontaneously contracting clusters, and the hiPSC-CMs were classified as nodal-, atrial-, and ventricular-like. Prolongation of repolarization and a predisposition to the development of EADs were shown in cells with atrial- and ventricular-like APs. For voltage clamp experiments, the spontaneously beating clusters were dissociated into single cells. Peak amplitudes of I Kr were significantly smaller in LQT2-iPSC-CMs compared to WT-iPSC-CMs. Field potential durations (FPDs) corrected for variations in beating frequency were longer in LQT2-iPSC-CMs compared to WT-iPSC-CMs. When I Kr was blocked by the hERG blocker E-4021, the AP prolonged and EADs were seen in 66% of the cells studied. Furthermore, the effects of agents that may have a therapeutic effect in preventing arrhythmias were also studied, including nifedipine, pinacidil, and ryanodine. Because Ca 2+ influx through L-type Ca 2+ channels contributes to AP duration and has a role in EAD formation, inhibition of I Ca,L by nifedipine was proposed to be anti-arrhythmic. Another proposed anti-arrhythmic strategy was to augment the repolarizing currents with the I K,ATP channel opener pinacidil. Both interventions resulted in AP shortening and abolished the propensity to EADs. This study also demonstrated that ranolazine, a blocker of the persistent I Na , did not shorten the AP duration but prevented EADs. Matsa et al. (2011) generated hiPSC-CMs from a symptomatic and an asymptomatic carrier of the G1681A mutation in KCNH2. The symptomatic patient, a female with a QTc interval of up to 571 ms, experienced 11 episodes of syncope in 12 months. As is typical for LQT2, episodes occurred at arousal from sleep and not during competitive sports. Her mother was the asymptomatic individual studied; although her QTc interval was prolonged, she had not experienced any symptoms. The hiPSCs were derived from skin punch biopsies, which were reprogrammed by lentiviral delivery of OCT4, SOX2, NANOG, and LIN28; EB formation was used for differentiation of cells into CMs. I Kr current characteristics were not studied. The derived hiPSC-CMs showed APs that were categorized as ventricular-, atrial- and pacemaker-like. Ventricular- and atrial-like APs from the symptomatic patient and her mother showed increased durations compared to APs of the genetically unrelated control; AP duration was shorter in the maternal hiPSC-CMs than in those of the patient. Application of isoprenaline resulted in electrophysiological abnormalities, including EADs, in 25% of LQT2-iPSC-CMs. Isoprenaline-induced arrhythmias were ameliorated by nadolol or propranolol, non-selective β-blockers. This is in line with the clinical picture in LQT2, as these patients experience arrhythmias due to increased heart rates caused by emotional and similar stress, mainly auditory stimulation and arousal from sleep.
As for LQT1, LQT2 patients are often treated with β-blockers for prevention of cardiac events (Ruan et al., 2008). I Kr blockade by E-4031 resulted in prolongation of the AP duration and EADs in 30% of the LQT2-iPSC-CMs, but never in control cells. Nicorandil, an I K,ATP channel opener, and PD-118057, an I Kr channel enhancer, shortened the AP in LQT2-iPSC-CMs, showing that potassium channel activators can normalize the prolonged repolarization in LQT2. Lahti et al. (2012) derived hiPSC-CMs from an asymptomatic carrier of the R176W mutation in KCNH2. This mutation is one of the four founder mutations of LQTS cases in Finland and is present in one in 400 Finns (Marjamaa et al., 2009). The QTc intervals of patients carrying the R176W mutation range from 386 to 569 ms, with a mean of 448 ms (Fodstad et al., 2006). In the study of Lahti et al. (2012), fibroblasts were infected with lentivirus followed by retroviruses encoding OCT4, SOX2, KLF4, and MYC to generate hiPSCs. CM differentiation was achieved by co-culturing hiPSCs with END-2 cells. APs were divided into two types, atrial- and ventricular-like. Only the ventricular-like APs showed a significantly increased APD 90 . The AP frequency tended toward slower rates in LQT2-iPSC-CMs. EADs were present in 1 of 20 LQT2-iPSC-CMs and were never observed in WT-iPSC-CMs. I Kr step and tail current densities were reduced by 40-46%. The APs of LQT2-iPSC-CMs had a significantly prolonged duration compared to WT APs, especially at low frequencies. The I Kr blocker E-4031 provoked EADs in WT-iPSC-CMs and LQT2-iPSC-CMs, with the effect on LQT2-iPSC-CMs being more pronounced. Sotalol, a non-selective β-blocker, elicited EADs only in LQT2-iPSC-CMs.
These three studies on LQT2 show that LQT2-iPSC-CMs of symptomatic patients display a more severe cellular phenotype than those obtained from asymptomatic patients with the same mutation. However, assessing severity in the hiPSC-CM system is challenging. For instance, blocking I Kr by E-4031 in WT-iPSC-CMs leads to different findings in different studies. In the study of Matsa et al. (2011) no EADs were provoked by the application of E-4031, whereas in the studies of Ma et al. (2011) and Lahti et al. (2012) EADs could be provoked in >50% of WT-iPSC-CMs. This might reflect diversity of hiPSC-CM lines. However, the differences between the outcomes of these studies might also be caused by the use of different concentrations of E-4031: Ma et al. (2011) and Lahti et al. (2012) used concentrations of 100 nmol/l and 500 nmol/l, respectively, whereas the concentration used by Matsa et al. (2011) is not known.
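The point about concentration can be made explicit with a standard Hill (concentration-response) relation. The IC 50 and Hill coefficient below are assumed values chosen purely for illustration, not parameters reported in the studies compared above; the sketch only shows how a five-fold difference in E-4031 concentration can translate into a substantially different degree of I Kr block.

def fraction_blocked(conc_nm, ic50_nm=100.0, hill=1.0):
    """Fraction of I_Kr blocked at drug concentration conc_nm (nmol/l).

    Standard Hill equation; ic50_nm and hill are assumed illustrative
    parameters, not values reported in the studies above.
    """
    return conc_nm ** hill / (conc_nm ** hill + ic50_nm ** hill)

# The two E-4031 concentrations used in the studies compared above.
for c in (100.0, 500.0):
    print(f"{c:5.0f} nmol/l E-4031 -> {fraction_blocked(c):.0%} of I_Kr blocked "
          f"(assumed IC50 = 100 nmol/l)")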
LQT3/Conduction disease/BrS
LQT3 is a repolarization disorder caused by gain-of-function mutations in SCN5A, encoding the cardiac Na + channel. These mutations cause an increased persistent Na + current, which acts to prolong CM repolarization and increase AP duration (Wilde and Bezzina, 2005). On the other hand, SCN5A mutations associated with loss of channel function cause conduction disease and Brugada syndrome (BrS). The latter is an arrhythmia syndrome characterized by ST segment elevation in the right precordial leads of the ECG; SCN5A mutations account for around 20% of BrS cases (Wilde and Bezzina, 2005). Loss of Na + channel function leads to a decreased peak I Na , which causes slowing of the upstroke velocity of the AP.
Recently, we generated an hiPSC-CM model of a patient carrying the SCN5A-1795insD mutation (Davis et al., 2012). This mutation gives rise to a phenotype of LQT3 as well as BrS and conduction defects, caused by gain- and loss-of-function effects on the cardiac Na + channel, respectively (Remme et al., 2006, 2009). In this study we generated hiPSC-CMs by transducing fibroblasts with lentiviral vectors encoding OCT4, SOX2, KLF4 and C-MYC (Davis et al., 2012). Cardiac differentiation was induced by co-culture with END-2 cells. In line with the known effects of the mutation in a knock-in mouse model, and with the clinical presentation in mutation carriers, we observed a decrease in peak I Na and an increase in persistent I Na in the hiPSC-CMs with SCN5A-1795insD compared to a genetically unrelated control. APs measured in non-spontaneously active hiPSC-CMs displayed a reduced upstroke velocity and a prolonged duration compared to those derived from the genetically unrelated control.
LQT8/Timothy syndrome
LQT8 and Timothy syndrome are caused by mutations in the CACNA1C gene, encoding the L-type Ca 2+ channel. Repolarization disease is only one facet of LQT8, as CACNA1C mutations also give rise to other features including syndactyly, heart malformations, and autism spectrum disorders (Yazawa et al., 2011). Yazawa et al. (2011) studied hiPSC-CMs of two patients with LQT8. Fibroblasts were isolated from skin biopsies and reprogrammed using four retroviruses containing SOX2, OCT3/4, KLF4, and MYC. EBs were used in the generation of hiPSC-CMs. EBs from LQT8/Timothy syndrome hiPSC lines contracted at 30 bpm, whereas control hiPSC-line EBs contracted at a rate of 60 bpm. The LQT8-iPSC-CMs showed delayed inactivation of I Ca,L and abnormalities in intracellular Ca 2+ handling, with larger and prolonged Ca 2+ transients. Importantly, such aspects of I Ca,L mutations cannot be revealed when studying the mutation in a heterologous cell system. The APs of LQT8/Timothy syndrome ventricular-like hiPSC-CMs were three times longer than those of WT hiPSC-CMs. Roscovitine, a compound that increases the voltage-dependent inactivation of the voltage-dependent Ca 2+ channel, reverted the delayed inactivation and restored the irregular Ca 2+ transients associated with LQT8/Timothy syndrome.
CATECHOLAMINERGIC POLYMORPHIC VENTRICULAR TACHYCARDIA
CPVT is characterized by catecholamine/stress-induced ventricular arrhythmias that can lead to sudden cardiac death in young individuals (Priori and Chen, 2011). CPVT is linked to mutations in the RYR2 gene, encoding an intracellular Ca 2+ release channel, and mutations in CASQ2, encoding a calcium-binding protein in the SR which stores Ca 2+ . RyR2 and CASQ2 play a role in Ca 2+ cycling and contractile activity of the CM (Priori and Chen, 2011). To date, three groups have published a CPVT hiPSC model (Fatima et al., 2011;Jung et al., 2012;Novak et al., 2012). Fatima et al. (2011) studied the F243I mutation in the RYR2 gene. A skin biopsy of a patient with CPVT carrying the F243I mutation was taken, and fibroblasts derived from this biopsy were infected with retroviruses encoding OCT3/4, SOX2, KLF4, and c-MYC. Cardiac differentiation was achieved by co-culturing with END-2 cells. APs were measured in spontaneously beating single hiPSC-CMs and were categorized as ventricular-, atrial-, and nodal-like. Isoproterenol was used to evoke the phenotype. In 22 out of 38 CPVT-iPSC-CMs, isoprenaline resulted in a negative chronotropic response, and 13 cells exhibited delayed afterdepolarizations (DADs), which are afterdepolarizations occurring after an AP ( Figure 1C) due to spontaneous SR Ca 2+ release that activates I NCX (Verkerk et al., 2000a). All control hiPSC-CMs showed a normal positive chronotropic response. Confocal fluorescence imaging revealed spontaneous local Ca 2+ release events of higher amplitude and longer duration in CPVT-iPSC-CMs. In addition, the CPVT-iPSC-CMs showed a decrease in I Ca,L and Ca 2+ transients in the presence of forskolin, an adenylyl cyclase activator. As the authors state, this is likely due to the large and sustained rise of the intracellular Ca 2+ concentration. Jung et al. (2012) studied the RYR2 mutation S406L. Fibroblasts of the patient were transduced with a retroviral vector encoding SOX2, OCT4, KLF4, and c-MYC. To direct the hiPSCs to the cardiac lineage, EB differentiation was used. The CPVT-iPSC-CMs showed elevated Ca 2+ concentrations, a reduced SR Ca 2+ content, and increased susceptibility to DADs under catecholaminergic stress induced by isoproterenol. Furthermore, the authors investigated the ability of dantrolene to rescue the disease phenotype. Dantrolene is a hydantoin derivative and muscle relaxant, currently used as therapy in cases of malignant hyperthermia, a disorder caused by mutations in the skeletal ryanodine receptor (RYR1) (Kobayashi et al., 2009). Dantrolene restored normal Ca 2+ spark properties and rescued the arrhythmogenic phenotype. Novak et al. (2012) studied the effect of the autosomal recessive missense mutation D307H in the CASQ2 gene. Dermal fibroblasts of two mutation carriers were transduced with a single lentiviral vector containing OCT4, SOX2, KLF4, and c-MYC. Differentiation toward hiPSC-CMs was achieved by EB formation. The spontaneous beating rate of differentiated EBs was significantly lower in CPVT-iPSC-CMs (∼26 bpm) compared to control iPSC-CMs (∼39 bpm). Isoproterenol induced DADs, oscillatory arrhythmic prepotentials (diastolic voltage oscillations, which appear during the late diastolic depolarization) and increased [Ca 2+ ] i .
CONCLUSIONS AND FUTURE PERSPECTIVES
The hiPSC-CM models described in this review show that it is possible to recapitulate in vitro, in the hiPSC-CM system, the disease phenotype of patients with Mendelian cardiac rhythm disorders. Furthermore, different studies have shown that LQT-iPSC-CMs of symptomatic patients show a more severe cellular phenotype than those obtained from asymptomatic patients with the same mutation (Itzhaki et al., 2011a;Matsa et al., 2011;Lahti et al., 2012). Moreover, hiPSC-CMs can recapitulate a phenotype that cannot be shown in a heterologous expression system (Lahti et al., 2012). For example, Lahti et al. (2012) reported a decrease of ∼43% in I Kr density in LQT2-iPSC-CMs with the R176W mutation, which was not revealed in a heterologous expression system. This might reflect, amongst other factors, differences in cellular environment between the two cell systems, or may be due to high transgene expression as a consequence of the use of a strong promoter in the heterologous expression system. A significant advantage of the hiPSC-CM system is that, in contrast to heterologous expression systems, it is also possible to study the effects on the AP and Ca 2+ cycling.
The responses to several pharmacological agents have been studied in hiPSC-CMs. The results are in line with what is seen in patients, healthy human beings, and adult CMs. For example, β-adrenergic stimulation with isoproterenol leads to a positive chronotropic effect and AP shortening (Zhang et al., 2009;Moretti et al., 2010), and application of β-blockers blunts the effect of isoproterenol (Moretti et al., 2010). The AP shortening effects of the I K,ATP openers pinacidil and nicorandil and of the Ca 2+ channel blocker nifedipine were also captured in hiPSC-CMs.
Functional I Na (Ma et al., 2011;Davis et al., 2012), I Ca,L (Itzhaki et al., 2011a;Ma et al., 2011;Yazawa et al., 2011), I Kr (Itzhaki et al., 2011a;Ma et al., 2011;Matsa et al., 2011;Lahti et al., 2012) and I Ks (Ma et al., 2011) have been demonstrated in hiPSC-CMs, and mutations affecting these channels as well as pharmacological ion channel blockade were shown to impact the AP. While the functional presence of the SR, RyRs and the Ca 2+ -binding protein CASQ2 has been demonstrated (Itzhaki et al., 2011b;Lee et al., 2011;Novak et al., 2012), studies have shown that the coupling between Ca 2+ influx through L-type Ca 2+ channels and Ca 2+ release from the SR through RyRs is poor as a consequence of the lack of t-tubuli in hiPSC-CMs. Thus, the use of hiPSC-CMs to study certain cardiac arrhythmia syndromes, such as CPVT and LQT8, caused by mutations in one of the Ca 2+ handling proteins, is limited to the study of the biophysical properties of the affected protein. While I to1 , I K1 , and I f are present in hiPSC-CMs (Ma et al., 2011), their contribution to electrical activity in these cells has not been proven by pharmacological blockade or through the effect of mutations in the respective genes. Openers of I K,ATP shorten the AP (Itzhaki et al., 2011a;Matsa et al., 2011), so its functional presence can be presumed but needs to be studied in more detail. The presence of I NCX has also not been studied in detail, but its functional presence can be presumed since intact Ca 2+ handling has been demonstrated (Itzhaki et al., 2011b;Lee et al., 2011). To date there is no evidence for the functional presence or absence of I K,ACh .
A difficulty in the use of hiPSC-CM models is their immature electrophysiological phenotype, with a depolarized MDP or RMP and slow AP upstroke velocities compared to native CMs. The MDP and upstroke velocity of hiPSC-CMs more closely resemble those of fetal CMs than of adult CMs (Davis et al., 2011). Furthermore, most studies report that hiPSC-CMs beat spontaneously, which is also characteristic of fetal CMs (Mummery et al., 2003). Of note, in our study on quiescent hiPSC-CMs we recorded a more negative RMP and a faster AP upstroke velocity (Davis et al., 2012). Considering the importance of I K1 for setting the RMP, it is likely that quiescent hiPSC-CMs have a larger I K1 than spontaneously beating hiPSC-CMs. It is not known whether quiescent cells were also present among the hiPSC-CMs generated in studies in which spontaneously beating hiPSC-CMs were studied; possibly, in these studies, spontaneous activity was used as a tool to recognize CMs. A further consideration is that cultured adult CMs show a depolarized MDP and a slower AP upstroke velocity compared to freshly isolated adult CMs. The depolarized MDP in cultured CMs is known to be caused by a progressive decline in I K1 (Mitcheson et al., 1998). Another similarity between cultured ventricular myocytes and hiPSC-CMs is that different AP phenotypes are observed. In many hiPSC-CM studies, different AP characteristics are observed and classified as ventricular-like, atrial-like and nodal-like APs. Similarly, when ventricular CMs are cultured, different AP phenotypes are observed after one day in culture, and more pronounced variability is evident after four days in culture (Mitcheson et al., 1998).
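The link drawn above between I K1 and the RMP can be illustrated with the Goldman-Hodgkin-Katz voltage equation: lowering the relative K + permeability, as a crude stand-in for a small I K1 , depolarizes the predicted resting potential. The permeability ratios and ion concentrations below are assumed textbook-style values, not measurements from the cited studies.

import math

R, T, F = 8.314, 310.0, 96485.0

def ghk_voltage(p_k, p_na, k_o=5.4, k_i=140.0, na_o=140.0, na_i=10.0):
    """GHK voltage equation (mV) for K+ and Na+ only (Cl- omitted for brevity).

    p_k and p_na are relative permeabilities; the concentrations (mmol/l)
    are assumed textbook values.
    """
    num = p_k * k_o + p_na * na_o
    den = p_k * k_i + p_na * na_i
    return 1000.0 * (R * T / F) * math.log(num / den)

print(f"adult-like  (P_K:P_Na = 100:1): {ghk_voltage(100.0, 1.0):6.1f} mV")
print(f"hiPSC-like  (P_K:P_Na =  25:1): {ghk_voltage(25.0, 1.0):6.1f} mV")
# A roughly four-fold smaller K+ permeability, mimicking the four-fold
# smaller I_K1 of hiPSC-CMs noted earlier, shifts the predicted resting
# potential to more depolarized values.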
Another difficulty is the purity of the population of hiPSC-CMs acquired. With current techniques it is not possible to acquire a pure population of CMs; the fraction of CMs obtained may vary from 1% to ∼50% of the total cells (Dambrot et al., 2011). Moreover, as discussed above, the CMs that are generated are in fact a mixed population displaying different AP characteristics. One way in which this issue might be addressed is through the use of selectable markers driven by CM lineage-specific promoter elements. However, while this might be useful in selecting CMs as opposed to other cell types, more research is still required to identify promoter elements that may be used in selecting for specific CM types (e.g., ventricular versus atrial). Another solution might be found in chemical enhancement of cardiac differentiation. For instance, ascorbic acid enhances cardiac differentiation, minimizes interline variance and facilitates the structural and functional maturation of hiPSC-CMs (Cao et al., 2012). From hESC-CM research we know that bone morphogenetic protein (BMP) signaling inhibition after mesodermal formation facilitates cardiac development; that study also demonstrated a possibility for ventricular- or atrial-specific differentiation (Zhang et al., 2011). Inhibition of retinoic acid (RA) signaling by noggin leads to CM specification into ventricular cells, whereas RA treatment leads to atrial specification (Zhang et al., 2011). Direction of differentiation toward nodal-like CMs can be achieved by activation of the Ca 2+ -activated potassium channels of small and intermediate conductance (SKCa channels) by 1-ethyl-2-benzimidazolinone (Kleger et al., 2010). Ventricular specification and electrophysiological maturation of hESC-CMs may also be promoted by microRNAs (miRs): miR-499 was shown to promote ventricular specification, and miR-1 facilitates electrophysiological maturation (Fu et al., 2011).
In summary, hiPSC-CM models recapitulate the phenotype of patients with cardiac arrhythmia syndromes. However, the interpretation of electrophysiological data derived from these cells should be done with caution, since hiPSC-CMs present immature phenotypes and do not recapitulate all the electrical characteristics of an adult CM. Thus, studies addressing the maturity and purity of the hiPSC-CMs acquired are needed, as well as studies characterizing their electrophysiological and pharmacological properties in more detail. Because of this, there are still limitations to the use of this model system in studies on cardiac rhythm disorders, especially if the disease-causing mutation is not known. So far, hiPSC-CMs have only been applied as models to the study of disorders for which mutations in particular genes have been identified. Future studies are likely to demonstrate the potential of these cell systems in pointing us to pathophysiological mechanisms for those cases for which no gene mutations are yet known. Another future application of hiPSC-CMs is their use in cardiac safety pharmacology and the development of new drugs. hiPSC-CMs provide researchers with CMs of human origin, which are better suited than CMs of animal origin or heterologous cell systems. Furthermore, hiPSC-CMs provide the opportunity to test drugs in disease-specific CMs instead of healthy CMs. Thus, in the future, studying hiPSC models might lead to novel insights into pathophysiology, improve understanding of genotype-phenotype relationships, and could be used in the development and testing of pharmacological agents to treat human cardiac disease.
The molecular basis of phenylketonuria in Latvia
Characterization of the molecular basis of phenylketonuria (PKU) in Latvia has been accomplished through the analysis of 96 unrelated chromosomes from 50 Latvian PKU patients. Phenylalanine hydroxylase (PAH) gene mutations were analyzed through a combined approach in which the R158Q, R252W, R261Q, G272X, IVS10-11G>A and R408W mutations were first screened for by PCR or restriction site-generating PCR amplification of PAH gene exons 5, 7, 11 and 12, followed by digestion with the appropriate diagnostic enzyme. Subsequently, 'broad range' denaturing gradient gel electrophoresis analysis of the 13 PAH gene exons was used to study uncharacterized PKU chromosomes. A mutation detection rate of 98% was achieved. Twelve different mutations were found, with the most frequent mutation, R408W, accounting for 76% of Latvian PKU alleles. Six mutations (R408W, E280K, R158Q, A104D, R261Q and P281L) represent 92% of PKU chromosomes. PAH VNTR and STR alleles were also identified, and minihaplotype associations with PKU mutations were determined.
INTRODUCTION
Phenylketonuria (PKU, MIM# 261600) is an autosomal recessive disease caused by deficiency of the hepatic enzyme phenylalanine hydroxylase (PAH), a non-heme iron mono-oxygenase that catalyzes the conversion of phenylalanine to tyrosine. It is characterized by hyperphenylalaninemia leading to impaired cognitive development and function. PKU diagnosis through newborn screening programs (Guthrie and Susi, 1963) allows early introduction of a low-phenylalanine diet therapy, which depends on the severity of the disease and can prevent the neurotoxic effects of phenylalanine and its metabolites. PKU is caused by mutations in the PAH gene (GenBank AF404777), which spans about 90 kbp on chromosome 12q22-q24.1 and contains 13 exons. More than 400 PKU mutations in different populations have been identified to date and are available in the public-domain PAH gene mutation database (Nowacki et al., 1998; see http://data.mch.mcgill.ca/pahdb_new). The heterogeneity of PAH gene mutations seems to be the major determinant of phenotypic variability in PKU, and a clear genotype-phenotype correlation has recently been established (Guldberg et al., 1998). Thus, definition of the PKU-causing PAH mutation profile in a given population seems worthwhile in order to anticipate dietary requirements through mutation analysis (Güttler and Guldberg, 2000). Mild PAH deficiency causes only mild hyperphenylalaninemia that does not require dietary treatment (MHP).
Fifty-eight PKU cases were identified in Latvia from 1980 to 2001: 45 through the neonatal screening program and 13, aged from 7 months to 4 years, during genetic counseling. These 13 PKU patients were born either outside of Latvia or before PKU screening had begun. This corresponds to an incidence of the disease of 1:8170 live births. Treatment was undertaken for fifty probands. Here the molecular characterization of the PAH gene from these Latvian PKU patients is reported. The PAH mutation has been characterized for 98% of Latvian PKU chromosomes through the combination of a mutation screening method, which takes advantage of the presence of diagnostic restriction sites in the PAH gene, and a mutation scanning method, denaturing gradient gel electrophoresis (DGGE).
SUBJECTS AND METHODS
A fluorimetric method was used for newborn PKU screening, followed by HPLC amino-acid analysis. Patients were diagnosed on the basis of data on either dietary phenylalanine tolerance (in patients with PKU) or pretreatment blood-phenylalanine levels (in individuals with MHP) (Guldberg et al., 1998). Fifty patients, corresponding to 96 unrelated PKU chromosomes, and their parents when available, were enrolled in this study. In 19 families (20 probands) the grandparents were Latvian, in 5 families one parent was Latvian, in 2 families the parents were of mixed East Slavic/Ugro-Finnic origin, and in the remaining families the grandparents were Balto-Slavonian or Slavonian, including Russian, Belorussian, Ukrainian and Polish.
Blood for DNA extraction was obtained from 50 patients, as well as from both parents in 30 families and from one parent in 16 families. Mutations R158Q, R252W, R261Q, G272X, IVS10-11G>A and R408W were analysed by PCR or restriction site-generating PCR amplification of PAH gene exons 5, 7, 11 and 12, followed by digestion with the appropriate enzyme (Eiken et al., 1991). The remaining uncharacterized PKU chromosomes were analysed by 'broad range' denaturing gradient gel electrophoresis (DGGE) of the 13 PAH gene exons and intron/exon junctions (Guldberg and Güttler, 1994). DNA fragments which displayed an abnormal DGGE pattern were analyzed using an ABI PRISM 310 Sequence Analyser (Perkin Elmer).
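The logic of the restriction-based screen, in which a mutation creates or destroys a diagnostic recognition site in the amplified exon so that digestion yields allele-specific fragment patterns, can be sketched as a simple motif search. The amplicon sequences and recognition site below are hypothetical placeholders, not the actual PAH amplicons or the enzymes used in this study.

def digest_fragments(amplicon, site):
    """Return fragment lengths after cutting amplicon at each occurrence of site.

    A cut immediately before each site occurrence is assumed for simplicity;
    real enzymes cut at a defined position within or near their site.
    """
    fragments, start = [], 0
    pos = amplicon.find(site)
    while pos != -1:
        fragments.append(pos - start)
        start = pos
        pos = amplicon.find(site, pos + 1)
    fragments.append(len(amplicon) - start)
    return [f for f in fragments if f > 0]

# Hypothetical short amplicons: only the variant allele carries a GAATTC site,
# so its digest shows two bands instead of one.
wild_type = "ATGGCCTTAGGCATCGATCGGTACCGATCGATCGGATCCA"
variant = "ATGGCCTTAGGCATCGAATTCGTACCGATCGATCGGATCCA"

for name, seq in (("wild-type", wild_type), ("variant", variant)):
    print(name, "->", digest_fragments(seq, "GAATTC"))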
When parents were available, PAH minihaplotypes (combinations of VNTR and STR alleles) were analysed as previously described (Giannattasio et al., 1996).
RESULTS AND DISCUSSION
Screening of PAH gene exons for the presence of the R158Q, R252W, R261Q, G272X, IVS10-11G>A and R408W mutations allowed the characterization of 82 (85%) of the Latvian PKU chromosomes. The remaining 14 PKU chromosomes were scanned for PAH gene mutations by DGGE analysis, which provides simultaneous amplification and electrophoretic analysis of the 13 PAH gene exons and intron/exon junctions. Results are summarized in Table 1.
The combined approach allowed the identification of disease-causing mutations in 94 (98%) of the 96 Latvian PKU chromosomes analysed. Twelve different mutations were found, including 8 missense, 2 nonsense and 2 splice mutations. The most frequent mutation, R408W, accounts for 76% of Latvian PKU alleles, whereas six mutations (R408W, E280K, R158Q, A104D, R261Q and P281L) represent about 92% of PKU chromosomes. For two patients, compound heterozygous for R408W, DGGE failed to reveal the presence of the second PAH mutation.
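The headline figures of this section follow directly from allele counts over the 96 chromosomes. In the sketch below, only the R408W count is fixed by the stated 76%; the split of the minor alleles is an assumed reconstruction chosen so that the published totals (98% detection, 92% for the six most frequent mutations) are reproduced.

# Approximate allele counts reconstructed from the percentages in the text
# (only R408W = 76% is explicit; the split of the minor alleles is assumed).
allele_counts = {
    "R408W": 73, "E280K": 5, "R158Q": 4, "A104D": 3, "R261Q": 2, "P281L": 1,
    # six further mutations, one allele each (assumed split):
    "other_1": 1, "other_2": 1, "other_3": 1,
    "other_4": 1, "other_5": 1, "other_6": 1,
}
total_chromosomes = 96

identified = sum(allele_counts.values())
print(f"detection rate: {identified / total_chromosomes:.0%}")  # 98%
print(f"R408W frequency: {allele_counts['R408W'] / total_chromosomes:.0%}")  # 76%

top_six = sum(sorted(allele_counts.values(), reverse=True)[:6])
print(f"six most frequent mutations: {top_six / total_chromosomes:.0%}")  # 92%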
Twenty-eight (58%) of the 48 completely characterized PKU patients are homozygous for R408W; the remaining 20 patients are compound heterozygous. This is consistent with the fact that most Latvian PKU patients have the classical form of the disease. These results show a high degree of homogeneity in the molecular basis of PKU in Latvia, in agreement with what has been found in other populations from Eastern and Central Europe (Charikova et al., 1993;Jaruzelska et al., 1993;Lillevali et al., 1996;Giannattasio et al., 1997;Kozak et al., 1997).
PAH minihaplotypes (combinations of PAH gene VNTR and STR alleles) for the mutant chromosomes were identified when parents were available, and their association with PKU mutations is reported in Table 2. Eight different PAH gene minihaplotypes were identified in association with Latvian PKU chromosomes. The association of the most frequent R408W mutation with the 3/238 minihaplotype supports the Balto-Slavic origin of the R408W mutation, which probably spread across Europe from the northeast to the southwest (Giannattasio et al., 1997). The data obtained, together with the family ancestry information available for this study, show an overall homogeneity of the PKU mutation spectrum in a heterogeneous population such as Latvians. The two unidentified PKU chromosomes may harbor large deletions or mutations in PAH gene regions not scanned by DGGE and thus deserve further investigation.
*Ethnicity codes used in Table 2: Armenian (A), Latvian (Lv), Polish (P), Russian (R), Ukrainian (U) and unknown (u) origin.
PAH Mutations in Latvia
Since most mutations at the PAH locus are substitutions, small deletions or insertions, the PKU mutation detection rate will be high in this population. Restriction enzyme digestion of amplified exon 12 to detect R408W is highly efficient because of the high relative frequency of this mutation in Latvians. This assay should always be undertaken as the first step of a PKU molecular diagnostic protocol whenever a new sample is received in the laboratory. Once the R408W mutation has been screened for, DGGE is the method of choice to detect the remaining PAH mutations causing PKU in Latvia.
STUDY ON SEDIMENT DISASTERS IN TENNOU, KURE CITY, HIROSHIMA PREFECTURE AND SEDIMENT AND FLOOD DAMAGE IN DOWNSTREAM AREA
6) Nakatani, K., Kosugi, M., Satofuka, Y. and Mizuyama, T.: Influence of housing and roads on debris flow flooding and deposition in alluvial fan areas: Case study on debris flows in Hiroshima, Japan, in August 2014, Journal of the Japan Society of Erosion Control Engineering, Vol. 69, No. 5, pp. 3-10, 2017 (in Japanese with English abstract).
7) Takahashi, T. and Nakagawa, H.: Prediction of stony debris flow induced by severe rainfall, Journal of the Japan Society of Erosion Control Engineering, Vol. 44, No. 3, pp. 47-52, 1991 (in Japanese with English abstract).
INTRODUCTION
Typhoon Prapiroon, which passed from western Kyushu to the Sea of Japan on July 3, 2018, supplied large amounts of moisture to the rainy season front. This caused continuous rainfall throughout western Japan from July 5, which triggered sediment disasters such as landslides and debris flows in many areas. The number of deaths and missing persons totaled 245 in 13 prefectures, and 120 (including 5 missing) in Hiroshima Prefecture. The highest number of victims in Hiroshima Prefecture was in Kure City. Hiroshima Prefecture had experienced a number of major sediment disasters caused by heavy rains, such as the Kake Town (now Akiota Town) disaster in 1988, the June 29th Hiroshima Sediment Disaster in 1999, and the August 20th Hiroshima Sediment Disaster in 2014, and the damage from sediment disasters has been increasing 1), 2) .
Regarding the sediment disasters caused by the heavy rain event of July 2018, there were reports of sediment and flood damage caused by sediment flowing into residential areas several kilometers downstream from the collapse sites in Tennou (Kure City), Koyaura (Saka-cho) and Saka (Saka-cho) 3) . The upstream part of the basin consists of a valley plain, and both sides of the river are surrounded by steep slopes. Therefore, when sediment deposition occurs, the riverbed only rises and the deposits do not spread widely. If the longitudinal slope of the river becomes steep and sediment runoff continues for a long time, sediment may move downstream. The slope of a river usually becomes milder on the downstream side, so sediment deposition tends to occur in the downstream part of the channel. Due to the riverbed rise caused by deposition, sediment and flood damage, with flooding and deposits overflowing from the river, is likely to occur.
The disaster situations differed from debris flows in which sediment moves downstream from the designated sediment disaster-prone areas. In a designated high-risk area, building damage is generally assumed to result from the direct impact of debris flow. Events are assumed to happen within a short time, i.e., from several minutes up to ten minutes, with several meters of flow depth and deposition. However, the phenomena of this event were different. Sediment was gradually deposited in the river during the flood, and flooding and deposition also occurred outside the river, causing several meters of deposition on the surrounding roads and burying the lower floors of buildings. It is necessary to examine the factors and the process of the phenomenon because the influence of sediment and flood damage spread widely in the downstream area. In this study, we aimed to clarify the disaster situation by field survey and to understand the time series of the sediment and flood damage phenomena using numerical simulation, targeting the Tennou area, where the human damage was the highest.

Figure 1 shows the rainfall data of the rain gauge station at Tennou, Kure City. The rain started from 9:00 on July 5 and continued until 9:00 on July 7. It then stopped for a while, and rain continued again for 15 hours from 21:00 on July 7. The event lasted 82 hours from beginning to end. Total rainfall was 459 mm, with a maximum hourly rainfall of 55 mm/h and a maximum 24-hour rainfall of 305 mm. Calculated from the probability rainfall intensity equation for Hiroshima Prefecture, the hourly rainfall corresponds to an approx. 12-year return period and the 24-hour rainfall to an approx. 168-year return period (see the sketch below). Compared with the rainfall data of past sediment disasters in Hiroshima Prefecture, the hourly rainfall is small, but the total rainfall is large and the rainfall duration is long.

Figure 2 shows a longitudinal section of the main river Oya-Ohkawa and the field investigation results. In the river where sediment and flood damage occurred in the downstream part of the basin, the flow characteristics differed between the upstream and midstream parts. Hereafter, we refer to these areas as upstream, midstream, and downstream. The collapse site in the upstream was 14 m wide, 16 m long, and 1.0 m deep. The bed downstream of the collapse was bare and covered with relatively fresh boulders of 1.5-2.0 m diameter. There were many exposed rocks on the riverbed, and sediment and woody debris had accumulated. We assume that not only did the collapse occur, but the afforestation on the riverbank was also eroded and moved as woody debris. At the existing sabo dam shown in Fig.2, gravel of 0.5-2.0 m diameter and woody debris were deposited in the upstream surface layer, and traces of sediment runoff were left upstream of the dam. Boulders and woody debris had accumulated in the series of ground sills downstream, but no boulders or woody debris were found in the residential areas further downstream. Focusing on the longitudinal slope, the slope becomes milder from upstream to downstream, so selective transport by grain size occurred in this section. The structures may have contributed to the trapping of large gravels and woody debris, judging from the sediment conditions at the sabo dam and ground sills.
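The return periods quoted for the Figure 1 rainfall can be illustrated with a generic annual-maximum (Gumbel) model; the actual analysis used the probability rainfall intensity equation for Hiroshima Prefecture, which is not reproduced here, so the location and scale parameters below are assumed values tuned only to give return periods of the quoted magnitude.

import math

def gumbel_return_period(x, mu, beta):
    """Return period (years) of an annual-maximum value x under a Gumbel
    distribution with location mu and scale beta: T = 1 / (1 - F(x))."""
    f = math.exp(-math.exp(-(x - mu) / beta))
    return 1.0 / (1.0 - f)

# Assumed Gumbel parameters (mm), chosen only to reproduce return periods
# of the magnitude quoted in the text.
print(f"55 mm/h    -> ~{gumbel_return_period(55.0, mu=36.0, beta=7.8):.0f}-year return period")
print(f"305 mm/24h -> ~{gumbel_return_period(305.0, mu=138.0, beta=32.5):.0f}-year return period")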
Figure 3 shows the results of the trench survey on sedimentation upstream of the sabo dam located in the midstream of the Oya-Ohkawa. We used a backhoe to take one sample of 2 m width, 5 m length, and 2 m depth, and conducted a grain size test on five layers from the surface layer to the lower layer in the vertical direction. Figure 4 shows the resulting grain size distributions. It was confirmed that each layer had a different grain size and that there were five layers of sediment deposited on the sabo dam. The bottom layer was an old sediment deposit containing deciduous leaves. Although the grain size of the front part and the subsequent flow may differ even within one debris flow, forming several layers, we considered in this paper that the upper layers, except for the bottom old sediment, were deposited by four sediment runoffs. Focusing on the grain size distribution of each layer, the second and fourth layers showed the same grain size distribution and were smaller in size than the first and third layers. This was probably because sediment production occurred at each flood, forming a deposit layer, and also because the runoff scale differed (a sketch of how such distributions are summarized by a median diameter is given at the end of this section).

Figure 5 shows the results of trace surveys of the sediment deposition area and the inundation damage area. The sediment thickness and inundation depth were measured by comparing traces just after the sediment disaster and after sediment removal, and also by using aerial photographs. The widest area of sediment deposition extended from the area with a longitudinal slope of less than 2 deg. to the Hiroshima Kure Road. A box culvert with a longitudinal cross-section of approx. 50 m existed in the river where the Hiroshima-Kure Road is located. It appeared to have been blocked by sediment deposition during the heavy rain. As a result, flooding and sedimentation occurred upstream of the box culvert. The flooding spread much wider on the right bank side of the river than on the left bank side, and was partly influenced by the Setono River branch. The sediment deposition extended approx. 350 m in the longitudinal direction and a maximum of 230 m in the transverse direction upstream from the Hiroshima Kure Road. The maximum sediment height was 1.7 m and was most prominent around the river 120 m upstream from the box culvert. At each site shown in the photograph, the sediment height was approx. 1.5 m. According to the Hiroshima Prefecture report 4) , the estimated time of the sediment disasters' occurrence in Tennou was 19:00-21:00 on July 6, based on information from the police and fire departments. The evacuation order was canceled in the morning of July 9. After the release, it was confirmed that the river and houses were buried under several meters of sediment deposition. From the situations of the houses shown on the upper right and lower left of Fig.5, it is estimated that the sediment deposition progressed over a long period, taking dozens of hours to deposit, without the destruction or significant damage usually seen in debris flows.
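Grain-size curves such as those in Figure 4 are commonly summarized by characteristic diameters such as the median D50, interpolated on a logarithmic diameter axis. The sieve data below are hypothetical, since the measured distributions are shown only graphically in Figure 4.

import math

def d50(diams_mm, percent_finer):
    """Interpolate the median grain size D50 (mm) on a log-diameter axis
    from sieve diameters (ascending) and cumulative percent finer."""
    for i in range(1, len(diams_mm)):
        if percent_finer[i] >= 50.0:
            p0, p1 = percent_finer[i - 1], percent_finer[i]
            d0, d1 = diams_mm[i - 1], diams_mm[i]
            t = (50.0 - p0) / (p1 - p0)
            return 10 ** (math.log10(d0) + t * (math.log10(d1) - math.log10(d0)))
    raise ValueError("50% not reached within the sieve range")

# Hypothetical sieve results for two deposit layers (diameter mm : % finer),
# mimicking a coarser and a finer layer as described for the trench survey.
coarse_layer = d50([0.1, 1, 10, 100, 300], [5, 15, 35, 80, 100])
finer_layer = d50([0.1, 1, 10, 100, 300], [10, 30, 60, 95, 100])
print(f"D50 coarse layer: {coarse_layer:.1f} mm, finer layer: {finer_layer:.1f} mm")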
DEBRIS FLOW SIMULATIONS
In applying sediment runoff simulations to Tennou, Kure City, we used the geographic information system (GIS)-related Hyper KANAKO system 5), 6) . In the Hyper KANAKO system, the applied sediment runoff simulation model is that proposed by Takahashi 7) , covering debris flow, sediment sheet flow, and bed load. The model considers erosion/deposition based on equilibrium concentrations (see the sketch following this paragraph). It includes equations of momentum considering riverbed shearing stress, continuity, riverbed deformation, and erosion/deposition. The authors have applied Hyper KANAKO to other sediment runoff cases and confirmed the validity of the simulated flow behaviors and deposition distributions. In the simulations, we assumed the culverts located on the Hiroshima Kure Road to be blocked and the flows to move down the roads from the culverts.
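The erosion/deposition treatment in Takahashi-type models hinges on a slope-dependent equilibrium sediment concentration, commonly written as C = ρ tanθ / ((σ - ρ)(tanφ - tanθ)). A minimal sketch with assumed densities and internal friction (not parameters quoted from this study) shows why deposition concentrates where the channel flattens:

import math

def equilibrium_concentration(theta_deg, rho=1000.0, sigma=2650.0, tan_phi=0.7):
    """Takahashi-type equilibrium sediment concentration by volume.

    theta_deg: bed slope (degrees); rho: fluid density (kg/m^3);
    sigma: grain density (kg/m^3); tan_phi: internal friction.
    All values other than the slope are assumed illustrative parameters.
    """
    tan_theta = math.tan(math.radians(theta_deg))
    return rho * tan_theta / ((sigma - rho) * (tan_phi - tan_theta))

# The equilibrium concentration drops sharply on mild slopes, which is why
# deposition concentrates where the channel flattens below ~2 degrees.
for slope in (10.0, 5.0, 2.0):
    print(f"slope {slope:4.1f} deg -> C_eq = {equilibrium_concentration(slope):.3f}")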
(1) Landform data
We applied digital elevation model (DEM) data, i.e., the ground elevation with the heights of buildings and trees excluded, at 1 m mesh resolution, taken before the 2018 disaster with airborne LiDAR. In this study, we set a 1D simulation area and a 2D simulation area. Usually, the 1D area is set where the valley is deep and both sides are surrounded by steep slopes. In the 1D area, it is suitable that the flow direction is controlled only in the longitudinal direction and does not spread in transverse directions. However, the phenomenon that occurred in Tennou did not show typical debris flow behavior, characterized by large discharge and high sediment concentration within a short time. It is assumed that smaller discharge flows occurred after the debris flow. Riverbed erosion and sediment runoff of lower concentration continued for a long period and caused sediment runoff to the downstream side. Therefore, erosion of the deposits in the upstream region of the sabo dam occurred due to the subsequent flow. In this study, we aimed to consider the erosion of the deposits from the debris flow, the cross-sectional river variation from erosion/deposition, and the river shape. Thus, we set a wide 2D area from the steep valley area to the downstream side.
As shown by the red square in Fig.6 designating the 2D area, we set a 1D area, 50 m long, upstream of the inflow point. The 1D area was set as the supplying section of the water and sediment. In the upstream area, it was assumed from the survey of the deposits upstream of the sabo dam that four sediment runoffs had occurred. Each event might have happened with a different discharge, moved sediment volume, and other conditions, but in this study we set all four events at the same scale and conditions. So that the former event would not be affected by the latter one, we set an interval after each runoff equal to the duration of the runoff itself. From the simulation start point, we supplied water and sediment as described in paragraph (2) below. In the disaster, the moved sediment volume was larger in the right branch than in the high-risk torrent designated before the disaster. Therefore, we used the moved sediment volume from the right branch and also assumed the sediment runoff from it. The interval of the 1D simulation was set at 5 m, and the 2D simulation mesh was set at 2 m. There were some check dams for forest conservation and ground sills in the Oya-Ohkawa river, and we set these constructions as fixed-bed landforms based on the details of the DEM data.
(2) Simulation conditions
The moved sediment volume was estimated at approx. 50,000 m 3 including voids, from the difference between the DEMs taken with airborne LiDAR before and after the disaster. We supplied water and sediment by specifying the mixture discharge and sediment concentration from the upstream. The sediment concentration of the flow during the event was not known. From the field surveys, it was reported that damaged houses located near the Hiroshima Kure Road were buried by sediment deposition, but their window glasses and walls were not broken or seriously destroyed. Therefore, we assumed that the phenomenon was bed load continuing over a long period rather than a destructive debris flow. In the Oya-Ohkawa River, some sections had mild slopes of less than 2 deg., and the equilibrium concentration in the mild area was calculated as 0.038 7) . We set the sediment concentration at 0.03 for the sediment runoff moving to the downstream mild slope area.
We considered the water supply from the rainfall to the 3.1 km 2 target basin upstream of the 2D area. The rainfall we applied was the 459 mm accumulated precipitation at the Tennou rain observation station in Kure City. Using this rainfall and a runoff rate of 0.7 for the mountainous area, the total runoff was calculated. Considering the long period of sediment runoff, we supplied constant discharge with constant sediment concentration over different durations: 50 m 3 /s for 20,000 s (5.6 hrs.), 25 m 3 /s for 40,000 s (11.1 hrs.), and 10 m 3 /s for 100,000 s (27.8 hrs.), as checked in the sketch below. We also considered a high-concentration, short-duration debris flow scenario for comparison.
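The three constant-discharge scenarios can be checked against the runoff volume implied by the rainfall: a simple runoff-coefficient calculation with the values quoted above reproduces roughly one million cubic metres, which each discharge/duration pair supplies.

rainfall_m = 0.459          # total event rainfall, 459 mm
area_m2 = 3.1e6             # target basin, 3.1 km^2
runoff_coefficient = 0.7    # value used in the study for the mountainous area

total_runoff_m3 = rainfall_m * area_m2 * runoff_coefficient
print(f"total runoff ~ {total_runoff_m3:,.0f} m^3")  # ~996,000 m^3

# Each constant-discharge scenario supplies approximately the same volume.
for q, t in ((50.0, 20_000), (25.0, 40_000), (10.0, 100_000)):
    print(f"{q:4.0f} m^3/s for {t:6d} s ({t / 3600:4.1f} h) -> {q * t:,.0f} m^3")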
To set the debris flow scenario, the maximum trace height from the survey was 3.1 m. This height might include riverbed change, but it was difficult to estimate where the riverbed was at peak time, so we took it as the flow depth. The river width was 16.8 m. The cross-section was shallow on the right bank and deep on the left bank, with a cross-sectional area of approx. 30 m 2 . Considering the mountainous torrent, we set Manning's coefficient at 0.04 m -1/3 s, a medium value 8) . Applying a hydraulic mean depth of 1.5 m and a slope of 0.064, the debris flow peak was calculated as 250 m 3 /s (the arithmetic is reproduced in the sketch below). We set the duration of the debris flow at 300 s based on reports of debris flows observed in Japan 9) . From the duration, and considering four debris flows, we set the sediment concentration at 0.1 for the debris flows. We also supplied water flows in the intervals and as the subsequent flow. From trial simulations, sediment moved down to the downstream side when 10 m 3 /s was set. Therefore, after supplying each debris flow, we supplied a 10 m 3 /s water flow for 300 s in the interval. We repeated the supply of debris flow and water flow four times. Then we supplied water as a subsequent flow at 10 m 3 /s for 70,000 s (19.4 hrs.), to make the total water supply similar to the other cases. From the field survey of the sediment deposition in the box culvert, and also from recent studies considering the phase shift due to fine-sediment-containing debris flows 10) , we set the representative grain diameter at 0.01 m and the fluid density at 1,400 kg/m 3 . Other parameters were set with reference to recent studies of sediment runoff simulations. In this study, we did not consider the height of the buildings; only the foundations of the buildings were described in the DEM landform data. The influence of the branches was not considered in the simulation. Table 1 shows the simulation cases.
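The 250 m 3 /s debris-flow peak follows from Manning's equation with the surveyed cross-section; the sketch below reproduces the arithmetic with the parameter values quoted above.

def manning_velocity(n, r_hydraulic, slope):
    """Cross-section-average velocity (m/s) from Manning's equation:
    v = (1/n) * R^(2/3) * S^(1/2)."""
    return (1.0 / n) * r_hydraulic ** (2.0 / 3.0) * slope ** 0.5

n = 0.04      # Manning's coefficient for a mountainous torrent (m^-1/3 s)
r = 1.5       # hydraulic mean depth (m), from the trace survey
s = 0.064     # channel slope
area = 30.0   # cross-sectional area (m^2)

v = manning_velocity(n, r, s)
q_peak = area * v
print(f"velocity ~ {v:.1f} m/s, peak discharge ~ {q_peak:.0f} m^3/s")
# ~249 m^3/s, matching the 250 m^3/s quoted in the text.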
Figure 7 shows the sediment deposition after the simulation. In all cases, sediment deposition is remarkable at the bight and at the slope change point (steep to mild section) with a narrow section in the upstream river. Comparing Run1, 2, and 3, with different constant flow rates at low concentration over a long time, Run1 showed a wide sedimentation width in the upstream part (the area indicated by the red arrow in Fig.7), which has valley topography. In Run2 and 3, the flow rate was smaller and the width in the upstream part was narrower than in Run1. Generally, the flow rate and the water depth are proportional. When large amounts of sediment deposit in a river, the flood will spread from the river to the roads, and deposition is likely to spread widely. Furthermore, in the area indicated by the red arrow, the sediment deposition in Run4, the debris flow scenario, was larger than in Run1-3. Where larger deposition occurred, it was primarily because the sediment concentration was higher than the equilibrium concentration. In Run4, with its combination of debris flow and subsequent water flow, the debris flow causes sedimentation first. Since only part of the sediment was eroded by the small flow rate, the amount of sediment runoff to the downstream side was less than in the other cases. On the other hand, in the area shown by the black arrow in the downstream, flooding occurred outside the river in Run1 and 4, but not in Run2 and 3. In the actual disaster, it was reported that flooding did not occur in this section.

Focusing on the Hiroshima Kure Road in the downstream area, Run4 with its 250 m 3 /s large discharge showed a smaller sediment deposition area than Run1. Because sediment deposition occurred upstream and the duration of the peak discharge was short, the discharge became smaller by the time the flow moved down to this section. The terrain downstream from this section extends in the transverse direction. In the actual disaster, the boundary between the valley topography and the alluvial fan was the initiation point of the flooding. Upstream from the box culvert, the sediment deposition was remarkable in the channel, and deposits also overflowed into the residential area around the river. In the large-discharge Run1, deposition occurred from the alluvial fan area and spread due to the riverbed rising from deposition, while in Run4, the debris flow case with a larger discharge than Run1, the deposition in the upstream affected the sediment runoff to the downstream. The amount of runoff was smaller in Run4, and the deposition range downstream was also smaller than in the other cases. Regarding the sediment deposition on the upstream side of the culvert, the deposition length along the channel was longest in Run3, followed by Run2 and Run1; the shortest was Run4. When sediment deposits in the river, flooding tends to occur in the areas surrounding the rising riverbed. Therefore, in Run3 with small discharge, the flooding extended upstream, and eventually the sediment deposition thickness was affected. Figure 8 shows the time series of deposition height in Run3, in which the sediment deposition best represented the disaster. As to the sediment and flooding phenomena, Fig.9 shows the time series of the flow depth in Run3.
RESULTS AND DISCUSSION
From 20,000 s (5.6 hrs.) after the simulation start, the sediment moved down to the box culvert around the Hiroshima Kure Road, and deposition started upstream of the culvert. Deposition overflow from the river started on the left bank side, the flooding area expanded until 50,000 s (13.9 hrs.), and the sediment thickness increased thereafter, to a maximum of 2.0 m. On the other hand, on the right bank side, the inundation area expanded until 80,000 s (22.2 hrs.), and then the deposition height gradually increased. The extent of the sediment inundation range developed as follows: first, sediment deposition started in the river, the river section became small and flooding occurred, and water gradually overflowed outside the river (from the flow depth result at 20,000 s (5.6 hrs.)).
A step existed on the right bank side along the Hiroshima Kure Road; sediment was gradually supplied and the deposition expanded. The sediment spread to the right bank side, then spread to the upstream side. On the other hand, when the sediment deposition in the river extended to the upstream side, water and sediment began to overflow there (from the flow depth result at 40,000 s (11.1 hrs.)). Sediment runoff then no longer moved down to the box culvert, and the sediment deposition on the right bank side of the Hiroshima Kure Road became smaller. On the right bank side, overflowed water and sediment from the upstream moved into this section, and the deposition height gradually increased.
In Fig.10, the difference in ground elevation before and after the disaster from the DEMs and the simulated deposition of Run3 are shown. The contour range in Fig.10 differs from that of the results shown in Fig.7 and Fig.8. From Fig.10, the deposition area upstream of the culvert is larger in the right bank section in the simulation than in the disaster, while the area in the left bank section corresponds well to the disaster. In the left bank section, the ground elevation becomes higher with distance from the river, and deposition was influenced by the landform conditions. In the right bank section, the ground from the main river to the branch river had almost the same elevation. Deposition spread to the branch in the simulation result, but it did not spread to the branch during the disaster due to the effect of the buildings. Furthermore, in the area shown by the square in Fig.10, deposition occurred outside the river in the simulation, but this did not appear in the ground-elevation difference shown in Fig.10. In the orthophoto taken after the disaster and the results shown in Fig.5, some areas showed flood overflow and deposition in the disaster, but did not show overflow in the ground-elevation difference in Fig.10. This seems to have happened because the minimum contour range of the deposition was larger in Fig.10. The average deposition at the four points shown in Fig.5 was approx. 1.5 m, and the points were located outside the river. The average height of the revetment wall of the river was approx. 1.6 m. The simulation results around these areas showed a maximum of 2.9 m in the river and a maximum of 1.5 m outside the river. Therefore, the simulation results correspond to the disaster situation. In the difference shown in Fig.10, the red color area spreads widely, but deposition higher than 5 m did not appear in the field survey results.
Fig. 11 shows the designated high-risk area for debris flows in Tennou, Kure City. Around the Oya-Ohkawa River, the designation fails to consider sediment runoff that travels along the river and reaches the downstream area; it covers only areas prone to direct damage from debris flows, such as destruction due to collisions, and it does not account for long-duration sediment runoff. The map of the designated area may therefore lead residents to believe that the downstream area will not be affected by overflow caused by downstream deposition. Under the Sediment Disaster Prevention Act in Japan, the high-risk designated area is determined from landform conditions: where the slope is smaller than two degrees, the area is not designated. However, in a valley plain, sediment may deposit on the riverbed and the topography may change during a rainfall event. In a valley plain, the landform changes only in the longitudinal profile; because the cross-sectional profile cannot change, riverbed rise leads to flooding and deposition outside the river. As the 2018 disaster in Tennou showed, sediment runoff continued for a long time and caused large deposition in areas with slopes smaller than two degrees. For future work, we will extract similar landform areas that may face sediment and flood damage and also consider effective structural measures to mitigate disaster.
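The two-degree criterion and the stated future work of extracting similar landform areas lend themselves to a simple DEM screening step. The following is a minimal sketch, not taken from the paper, that flags cells whose local slope falls below two degrees using NumPy; the synthetic elevation grid, cell size, and threshold handling are illustrative assumptions.

```python
import numpy as np

def slope_degrees(dem: np.ndarray, cell_size: float) -> np.ndarray:
    """Approximate local slope (degrees) from a DEM via central differences."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Synthetic 1 km x 1 km DEM with a gentle down-valley gradient plus a ridge
# (illustrative only; a real screening would load survey DEM tiles).
cell = 10.0  # grid spacing in metres
x = np.arange(0.0, 1000.0, cell)
y = np.arange(0.0, 1000.0, cell)
X, Y = np.meshgrid(x, y)
dem = 0.01 * X + 5.0 * np.exp(-((Y - 500.0) ** 2) / (2.0 * 150.0**2))

slope = slope_degrees(dem, cell)
# Cells gentler than two degrees fall outside the Act's designation criterion,
# yet, as in Tennou, they may still face long-duration sediment deposition.
below_threshold = slope < 2.0
print(f"{below_threshold.mean():.0%} of cells are below the two-degree criterion")
```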
CONCLUSIONS
The sediment disasters caused by the 2018 heavy rains included not only direct damage from debris flows, such as building destruction and large deposits on alluvial fans, but also sediment and flood damage in downstream areas several kilometers from the mountains. In Tennou, Kure City, debris flows occurred and large rocks moved in the upstream area. Sediment ran off but did not overflow in the midstream area, whereas overflows occurred in the downstream area. In this study, we conducted field surveys and ran simulations to examine the disaster phenomena and how the overflows developed over time. The field surveys showed that landslides 1-2 m thick and sediment erosion in the river caused runoff to the downstream side. Furthermore, woody debris and sediment with diameters of approx. 1 m moved in the upstream area, but most of it deposited upstream of the existing dams and did not move downstream. On the downstream side, there was no sediment deposition downstream of the culvert located in the river; therefore, the culvert appears to have been blocked. In the simulations, we assumed a scenario in which the culvert was blocked and also considered long-duration sediment runoff at low sediment concentration, such as bed load. The simulations confirmed that overflow started from the blocked culvert, but as sediment deposition expanded, the flow cross-section in the river narrowed and overflow also occurred upstream of the culvert. We maintain that simulation can describe this type of sediment and flood damage and can resolve the detailed risk distribution in the affected area. For future work, we will extract similar landform areas that may face sediment and flood damage and also consider effective measures to mitigate disaster.
|
2021-05-11T00:07:29.659Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "2f637ff34923159dd4f7fed1c2dadc7c4be09c94",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/journalofjsce/9/1/9_103/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "372407d374d5f7ed6cf306de52fb45ec7a51206a",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
30319573
|
pes2o/s2orc
|
v3-fos-license
|
Accelerating drug development for Alzheimer's disease through the use of data standards
Introduction The exceedingly high rate of failed trials in Alzheimer's disease (AD) calls for immediate attention to improve efficiencies and learning from past, ongoing, and future trials. Accurate, highly rigorous standardized data are at the core of meaningful scientific research. Data standards allow for proper integration of clinical data sets and represent the essential foundation for regulatory endorsement of drug development tools. Such tools increase the potential for success and accuracy of trial results. Methods The development of the Clinical Data Interchange Standards Consortium (CDISC) AD therapeutic area data standard was a comprehensive collaborative effort by CDISC and Coalition Against Major Diseases, a consortium of the Critical Path Institute. Clinical concepts for AD and mild cognitive impairment were defined and a data standards user guide was created from various sources of input, including data dictionaries used in AD clinical trials and observational studies. Results A comprehensive collection of AD-specific clinical data standards consisting of clinical outcome measures, leading candidate genes, and cerebrospinal fluid and imaging biomarkers was developed. The AD version 2.0 (V2.0) Therapeutic Area User Guide was developed by diverse experts working with data scientists across multiple consortia through a comprehensive review and revision process. The AD CDISC standard is a publicly available resource to facilitate widespread use and implementation. Discussion The AD CDISC V2.0 data standard serves as a platform to catalyze reproducible research, data integration, and efficiencies in clinical trials. It allows for the mapping and integration of available data and provides a foundation for future studies, data sharing, and long-term registries in AD. The availability of consensus data standards for AD has the potential to facilitate clinical trial initiation and increase sharing and aggregation of data across observational studies and among clinical trials, thereby improving our understanding of disease progression and treatment.
Data standards and the current landscape of Alzheimer's disease drug development
Drug development in Alzheimer's disease (AD) is increasingly being aimed at early intervention, with the recognition that such strategies hold the most promise to slow or halt disease progression [1]. New drug development tools such as disease progression models, biomarkers, and outcome measures that can easily and rapidly incorporate new and existing sources of information are urgently needed to accelerate drug development at all stages of the AD disease spectrum. The development and regulatory endorsement of these tools has been hampered by the lack of consensus data standards that cover both clinical and biomarker assessments allowing for rapid integrated analyses derived from multiple data sources.
The inability to compare data across different clinical trials arises in part from differences between them in how data are collected and formatted.
Data standards enable the integration and analysis of data from multiple sources. This, in turn, allows for development of common open-source tools [2,3]. Data standards provide the framework for consistent structure and understanding of data. Use of data standards results in an increase in efficiency of studies by maximizing data utility, minimizing reprocessing of data, and expediting regulatory review of new drug applications (NDAs). Standards also enable integrated analyses across different studies by allowing integration of data and reusability of programming statements within analysis software.
Research organizations have responded to the need for data standards by creating many different sets of standards [4]. Pharmaceutical companies have also created their own internal data standards, and government agencies have recommended, and in some cases required, the use of specific standards by the studies they fund [5].
Given the rapid increase in global data availability [6] and an increasing number of experimental treatment modalities, an efficient way to compare effects on clinically meaningful outcomes is critical for selecting the most promising therapeutics to advance to the clinic. To maximize the knowledge gained from the growing number of costly and high-risk AD intervention studies, it is imperative that the field attend to the importance of data standardization, beginning at study start-up.
Clinical Data Interchange Standards Consortium data standards
The development and widespread dissemination of universally accepted global clinical data standards is the mission of the Clinical Data Interchange Standards Consortium (CDISC), which has been developing global, platform-independent standards to streamline medical research since 1997 [7]. CDISC is a global nonprofit organization that catalyzes productive collaboration to develop freely available, industry-wide clinical research data standards. The primary CDISC standard governing the structure of data collected in clinical studies is the Study Data Tabulation Model (SDTM), which defines the variables and rules associated with specific observation classes including events, interventions, and findings. SDTM is one of the required standards that sponsors must use for NDAs submitted for the U.S. Food and Drug Administration (FDA) review [8].
The implementation of consensus-based CDISC clinical data standards serves to improve medical research and health care [9]. Such standards support the acquisition, exchange, archiving, and reporting of electronic clinical research data. Notably, CDISC standards are recognized by the FDA and Japan's Pharmaceuticals and Medical Devices Agency as the preferred standards for submission of clinical trial data and enable regulatory reviewers to use sophisticated review tools and conduct more efficient reviews.
Public-private partnerships and precompetitive consortia have emerged as a common strategy to share the cost and risk of development of consensus data standards. The Alzheimer's Disease Neuroimaging Initiative (ADNI), formed in 2004, catalyzed awareness and external recognition of the importance of data standardization in the AD research community [10]. In a parallel effort, two nonprofit organizations, CDISC and the Critical Path Institute (C-Path), created the Coalition for the Acceleration of Standards and Therapies in 2012 to develop Therapeutic Area User Guides (TAUGs) for specific disease areas. The focus of the first CDISC therapeutic specific standard was AD, which used elements from ADNI. AD version 1.0 (V1.0) was completed in 2011. As of January 2017, a total of 27 TAUGs spanning a variety of different disease conditions have been developed by CDISC, most of them under the umbrella of Coalition for the Acceleration of Standards and Therapies.
There are a growing number of public-private partnerships focused on AD [11]. The Coalition Against Major Diseases (CAMD), whose mission is to accelerate the path of drug development, is one of many consortia of C-Path [12]. CAMD is a coalition of stakeholders including industry, government agencies, nonprofit organizations, advocacy organizations, academic experts, and regulatory agencies collaborating to improve the efficiency of drug development for memory disorders [13,14]. CAMD, in close partnership with CDISC and ADNI, represented the key groups that formed the collaborative framework for stakeholders working across consortia to successfully develop CDISC standards specific for AD.
This study discusses the development of the first therapeutic area-specific CDISC standard, how the CDISC standards are used, the need for additional standards, and, most importantly, the need to implement these standards across clinical studies to maximize knowledge gained from past, current, and future clinical trials.
Methods
The primary foundational CDISC data standards are the SDTM, the Analysis Data Model (ADaM), and the Clinical Data Acquisition Standards Harmonization (CDASH) model. SDTM is a standard specification for structuring and organizing data, whereas ADaM is used for the analysis of data sets. CDASH provides traceability from SDTM data sets back to data collection instruments. A complete guide for CDISC SDTM implementation is available [15]. Table 1 identifies these foundational CDISC standards and their descriptions.
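To make the SDTM structure concrete, here is a minimal sketch, assuming pandas, of findings-class records laid out as one row per subject, test, and visit. The study identifiers, item codes, and values are invented placeholders, not official CDISC controlled terminology.

```python
import pandas as pd

# Minimal SDTM-style QS (questionnaire) findings records: one row per subject,
# test, and visit. All identifiers, codes, and values are invented placeholders.
qs = pd.DataFrame(
    [
        ("AD-001", "AD-001-0001", "ITEM01", "Word Recall", "4", "BASELINE"),
        ("AD-001", "AD-001-0001", "ITEM02", "Commands", "1", "BASELINE"),
        ("AD-001", "AD-001-0002", "ITEM01", "Word Recall", "6", "BASELINE"),
    ],
    columns=["STUDYID", "USUBJID", "QSTESTCD", "QSTEST", "QSORRES", "VISIT"],
)

# The standard fixes the structure, not the content; identical layouts are
# what make pooling and reusable analysis programs across trials possible.
print(qs.to_string(index=False))
```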
The development of the AD CDISC data standard occurred in a series of stages that gathered input from a diverse set of experts including CAMD members, subjectmatter experts within ADNI, and CDISC experts. Clinical scientists contributed to an understanding of the clinical concepts and interrelationships that affect the usability and analysis of the data that results from the application of these concepts in a well-controlled study. CDISC experts, whose expertise includes the ability to fit these concepts within the confines of the standard data model, in turn worked with the broader standards community to ensure the resulting specifications were accurate, consistent, and appropriate within the context of the full body of existing CDISC standards and their associated rules.
A TAUG is a compilation of concepts, including concept maps (defined subsequently), brief narratives explaining the concept in the context of a disease area, and implementation examples illustrating the implementation of these concepts across the various CDISC standards ( Table 1).
The primary sources of input to the AD TAUG were (1) inventories of clinical concepts identified by consensus with CAMD scientists and (2) ADNI data dictionaries. Subject-matter experts from CAMD and ADNI provided clinical expert input into the development of these standards, whereas working groups of CDISC experts mapped concepts relevant to AD to CDISC SDTM and developed controlled terminology to support the use of these standards in clinical trials. CDASH and ADaM were out of scope for both versions 1.0 and 2.0 of the AD TAUGs. Fig. 1 illustrates the process used by developers of standards for vetting the inputs and for compiling and publishing the content in the CDISC AD TAUG V2.0. The development and final public release of AD V2.0 outlined in Fig. 1 were carried out over a period of more than 12 months.
The foremost tool available to developers of standards for building understanding of clinical concepts within these multidisciplinary teams is the "concept map" [16]. Concept maps are the result of concept modeling, whereby knowledge imparted to data modelers from clinical scientists is represented in visual models that describe the process of how a concept is applied and how it results in data elements in a database. These maps enable the experts to ensure that they have reached a common understanding of the concept illustrated. The maps also serve as the first step in fitting the concept into the data model. The interdisciplinary approach ensures that these resulting data models accurately capture the concept and are usable by implementers and analysts. Examples of concept maps developed in the TAUG AD V2.0 are illustrated in Fig. 2.
The AD CDISC V2.0 standard represents broad consensus from the external scientific community. Once the draft therapeutic guide for AD was compiled, it was sent out for a focused review to ensure that the concepts were represented accurately and conformed to rules of the standard. The CDISC standards development team responded to all reviewer comments received and made any necessary changes in an iterative way. This comment resolution process culminated in the first of two reviews by the CDISC Standards Review Council, which is tasked with ensuring the quality of all CDISC TAUGs.
In the later stages of the CDISC standards development process, the draft TAUG was released for a broader public review, during which the global user community was invited to make comments and request changes. The Standards Review Council reviewed all comments and addressed each one to achieve the best standard. Once approved, the new domain was made available for public use. The consensus process that was followed and public availability of the CDISC standard is aimed at encouraging widespread agreement and future implementation.
Results
The creation of AD CDISC standards in SDTM format served as the basis for the AD-specific data standards user guide. A number of the included concepts are applicable across different diseases, such as the approach to handling imaging data. In addition to defining a standard format for representing data from common assessments (i.e., medical history), the standards describe how to record a number of factors that are specific for AD trials and may influence the outcome of analyses.
Data standards are critical to interpreting integrated study data
On completion of V1.0 of the AD TAUG, the AD CDISC standards were prioritized to effectively integrate the AD placebo data from multiple distinct AD trials in the unified AD C-Path Online Data Repository database [17]. The CAMD AD database, which serves as a single integrated database, consists of item-level, patient-level anonymized data from the placebo arms of 24 clinical trials contributed by nine industry sponsors and the Alzheimer's Disease Cooperative Study Group. Development of the integrated database required the remapping of legacy clinical trial data to the CDISC AD standard. The common outcome measure across the different AD trials was the Alzheimer's Disease Assessment Scale-Cognitive subscale (ADAS-Cog). Importantly, it was discovered when integrating the data to the AD CDISC standard that the ADAS-Cog instrument was not represented the same way in each study. Table 2 illustrates the unique aspects of each sponsor's trial with respect to the ADAS-Cog measures. The AD CDISC standard served to highlight the differences in ADAS-Cog item-level measures across different studies and was used to align the outcome measures across distinct trials in the development of the AD CAMD database.

[Fig. 1 caption] These were compared with data dictionaries from ADNI, and the pooled items were reduced to 49 tables of data elements by removing duplicates, concepts already covered by published CDISC standards, and items that were irrelevant or out of scope. The remaining elements were categorized as imaging biomarkers, CSF biomarkers, or COAs. A development team consisting of clinical SMEs and standards experts worked together to capture the relevant details of the scoped concepts and assemble them into a user guide showing how to represent them and relate records in CDISC SDTM. Abbreviations: AD, Alzheimer's disease; ADNI, Alzheimer's Disease Neuroimaging Initiative; CAMD, Coalition Against Major Diseases; CDISC, Clinical Data Interchange Standards Consortium; COA, clinical outcome assessment; CSF, cerebrospinal fluid; MCI, mild cognitive impairment; SDTM, Study Data Tabulation Model; SME, subject-matter expert; TAUG, Therapeutic Area User Guide.

[Fig. 2 caption] Concept modeling involves iterative discussions between clinical subject-matter experts and data modeling experts to parse the various concepts and relevant qualifiers that describe the pertinent information and data generated within a given research topic. The resulting "concept maps" (examples shown in panels A, B, and C) are the first stage in the development of data models that describe how individual data elements relate to each other, so that the resulting data model accounts for and preserves these relationships. (A) A concept map depicting CSF sample processing and the parameters that can impact the results. The color coding on the perimeter of the gray boxes defines observation classes within the CDISC BRIDG model (not discussed); yellow boxes correspond to the CDISC SDTM domain (data set) where the concepts described reside in SDTM. (B) A concept map depicting MRI for the acquisition of volumetric biomarkers. To fully represent an MRI scan, data should be collected regarding subject characteristics, whether contrast enhancement was used, scanner-specific properties, and software properties that determine the anatomic location scanned, pulse-sequence data, and the analysis algorithm, to name a few.

V1.0 focused on mild-to-moderate AD dementia, the population for most therapeutic trials that had been carried out historically [18].
Concepts covered in V1.0 included items from the ADAS-Cog, apolipoprotein E (APOE) genotype, and laboratory tests for cerebrospinal fluid (CSF) amyloid β (Aβ) and tau biofluid protein biomarkers. The process required the development of multiple new SDTM domains as part of the standards development project, the definition of new controlled terminology, such as for the various APOE haplotypes, and the rules associated with organizing and relating the data in SDTM.
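As an illustration of what controlled terminology for APOE haplotypes enables, the sketch below normalizes heterogeneous legacy genotype spellings to one canonical term before pooling. The synonym handling and canonical labels are assumptions for illustration; they are not the published CDISC codelist.

```python
# Canonical labels are illustrative, not the official CDISC controlled terminology.
APOE_CANONICAL = {
    "e2/e2": "APOE e2/e2", "e2/e3": "APOE e2/e3", "e2/e4": "APOE e2/e4",
    "e3/e3": "APOE e3/e3", "e3/e4": "APOE e3/e4", "e4/e4": "APOE e4/e4",
}

def normalize_apoe(raw: str) -> str:
    """Normalize legacy spellings like 'E3E4', '3/4', or 'apoe e3/e4'."""
    s = raw.lower().replace("apoe", "").replace(" ", "").replace("e", "")
    s = s.replace("/", "")          # leaves just the two allele digits, e.g. '34'
    if len(s) == 2 and s.isdigit():
        a, b = sorted(s)            # order-insensitive: '43' == '34'
        return APOE_CANONICAL[f"e{a}/e{b}"]
    raise ValueError(f"Unmapped APOE genotype: {raw!r}")

assert normalize_apoe("E3E4") == normalize_apoe("apoe 4/3") == "APOE e3/e4"
```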
With the advent of diagnostic criteria for early AD [19,20], the scope of the standard was broadened to include mild cognitive impairment, with additional clinical outcome assessments covering the Disability Assessment for Dementia, Alzheimer's Disease Cooperative Study-Activities of Daily Living-MCI, Neuropsychiatric Inventory, Clinical Global Impression, and Geriatric Depression Scale. Table 3 highlights the concepts and domains included in the AD TAUG V2.0.

[Table 2 note] Seven Clinical Report Forms (CRFs) from seven individual sponsors were provided to the Coalition Against Major Diseases team for analysis. Each study originates from a distinct sponsor. The points of contact for the different data sources could not confirm with certainty that the order of the ADAS-Cog items in the CRFs reflected the order in which they were administered in the clinical trials. The order in which ADAS-Cog items were reported across the seven CRFs varied for a total number of items ranging from two to 10. Four CRFs reported the trials as having used the 13-item ADAS-Cog scale, two CRFs reported using 12 items of the 13-item ADAS-Cog, and one CRF reported the use of the 11-item ADAS-Cog scale. All CRFs reported the administration of word recall first, and no CRF reported administering word recognition last (which is recommended in the ADAS-Cog administration instructions); three CRFs reported administering word recognition as the seventh item/cognitive test, and four CRFs reported administering word recognition as the eighth item/cognitive test.
Addition of specific biomarker standards
Protein biomarkers measured in CSF have been a key focus for AD researchers and are used frequently in AD randomized controlled clinical trials [24]. Aβ, total tau, and phospho-tau show the most promise as prognostic biomarkers [25,26] and have been qualified by the European Medicines Agency for use in clinical trials at the predementia stage [27,28]. A large number of factors contribute to the measure of each protein, and there has been increasing recognition of the importance of a multitude of parameters involved in collection and sample handling [29]. Such parameters are relevant across multiple central nervous system disease states, which has led to the development of consensus guidelines [30].
CSF biomarker values can vary according to a multitude of acquisition and processing parameters, including the site of the lumbar puncture, the tube type used for sample storage, and the analytical measurement technique [31]. Such biomarker variables are oftentimes not reported in peer-reviewed publications or clinical protocols. The parameters included in the CSF concept map are known to impact the predictive accuracy of AD CSF analytes in identifying early AD subjects who are more likely to progress to AD dementia [32,33]. Fig. 2A shows which parameters contribute to the final biomarker analyte value and should therefore be controlled for and documented at each step along the way.
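To show why documenting these parameters matters for pooling, the following sketch models a CSF sample record whose fields mirror the acquisition parameters listed above; the field names and the comparability rule are illustrative assumptions, not SDTM variable names.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CsfSample:
    """Key preanalytical parameters known to shift CSF analyte values."""
    analyte: str          # e.g. "Abeta42", "total tau", "p-tau"
    puncture_site: str    # lumbar puncture site
    tube_type: str        # e.g. "polypropylene"
    assay: str            # analytical measurement technique / platform
    value_pg_ml: float

def comparable(a: CsfSample, b: CsfSample) -> bool:
    """Values are only directly poolable if acquired and assayed alike."""
    return (a.analyte, a.tube_type, a.assay) == (b.analyte, b.tube_type, b.assay)

s1 = CsfSample("Abeta42", "L3/L4", "polypropylene", "ELISA", 610.0)
s2 = CsfSample("Abeta42", "L4/L5", "polystyrene", "ELISA", 540.0)
print(comparable(s1, s2))  # False: a different tube type confounds pooling
```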
Standardization of neuroimaging biomarkers is also a focus of the AD CDISC standards. Neuroimaging concepts were defined to cover multiple modalities, including fluorodeoxyglucose-positron emission tomography (PET), structural magnetic resonance imaging, and amyloid PET neuroimaging. Fig. 2B and C show the concept maps used to drive the CDISC standards development for neuroimaging in AD. Table 3 outlines the concepts and parameters that are included in the AD V2.0 TAUG. The developers identified ways to represent a variety of imaging parameters, including PET scan tracer administration, scanner type, radiolabeled tracers, software type, scanner-specific features, and reference region, in addition to fundamental parameters at collection time (time of day, fasting level, and so forth). Many of the factors captured in the concepts are defined in protocols such as ADNI's, yet are not predefined, or even identified as important to capture, in other studies.
Discussion
Clinical trials in AD have been conducted with a diversity of approaches in the way in which data are acquired and reported. This confounds cross-comparison between studies and makes it difficult to pool and share data for integrated analyses of multiple trials. Efficiency can be gained through the use of consensus data standards. The current V2.0 AD CDISC standards encompass outcome measures and biomarkers that are relevant to AD clinical trials of drugs targeting AD dementia and predementia stages including mild AD and MCI.
ADNI set out early ambitious goals to meet one of its primary objectives of improving the detection of AD at the earliest stages through the use of biomarkers. The success of ADNI can be attributed to the early agreement that defined data collection standards would be implemented at all sites and all data would be shared with researchers around the world [10,34,35]. ADNI protocols served as the foundation for the development of the AD CDISC standards.
The CDISC TAUG for AD V1.0 enabled CAMD to develop the first standardized database of AD clinical trials [14]. This database, available to qualified researchers, is being used to provide novel insights into AD and served as the foundation for the development of the first-ever regulatory-endorsed clinical trial simulation tool (for mild and moderate AD) [36]. This tool, now being used by academics and industry, could not have been developed based on meta-analyses of disparate data.
Although it was assumed that the ADAS-Cog represents a standardized clinical outcome, particularly given its widespread use, it was clear to the CAMD developers that the item-level data varied significantly across the different trials. This included variations in the total number of items (11-, 12-, and 13-item versions), item order, and word lists. Such differences posed significant challenges for analysis. By remapping the data to CDISC standards, all 11 common ADAS-Cog items are aligned across studies while still retaining an indication of their original implementation.
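A minimal sketch of the remapping task described here: given a sponsor's item layout that differs in count and order, keep the 11 common items, emit them in one canonical order, and retain the original position as provenance. The item names, canonical order, and sponsor layout are illustrative assumptions, not the actual CAMD mapping.

```python
import pandas as pd

# Canonical order for the 11 common ADAS-Cog items (names illustrative).
CANONICAL = ["word_recall", "commands", "constructional_praxis", "naming",
             "ideational_praxis", "orientation", "word_recognition",
             "remembering_instructions", "spoken_language", "word_finding",
             "comprehension"]

def harmonize(sponsor_items: list[str]) -> pd.DataFrame:
    """Align one sponsor's item list to the canonical 11-item layout."""
    rows = [
        {"item": item, "canonical_pos": CANONICAL.index(item),
         "original_pos": pos}
        for pos, item in enumerate(sponsor_items, start=1)
        if item in CANONICAL  # drop 12th/13th items not shared by all trials
    ]
    return pd.DataFrame(rows).sort_values("canonical_pos")

# A hypothetical 13-item sponsor layout with word recognition administered 7th.
sponsor_a = (CANONICAL[:6] + ["word_recognition", "delayed_word_recall",
                              "number_cancellation"] + CANONICAL[7:])
print(harmonize(sponsor_a).to_string(index=False))
```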
CDISC standards are evolving entities. Revisions of the standards are a mechanism to integrate new scientific advances. The evolution of V1.0 to V2.0 of the AD CDISC standard provides a prime example of this. It is anticipated that future versions of the AD CDISC standard (V3.0) will focus on prevention trials, particularly given that treatment at presymptomatic disease states represent a focus of ongoing and prospective clinical trials [1]. Future revisions may include elements related to novel outcome measures [37,38] and biomarkers such as functional magnetic resonance imaging used to assess connectivity and cognitive reserve [39].
Precompetitive initiatives
Collaborative networks have become a mainstay in AD based on the success of flagship public-private partnerships such as ADNI and CAMD. Presently there are more than 30 consortia with a focus on AD [11] and many new initiatives being launched in other disease areas. Precompetitive forums like the Collaboration for Alzheimer's Prevention provide an effective platform to champion the implementation of AD CDISC standards [40,41]. The success of these consortia depends on the ability to easily analyze data available from AD trials. International initiatives such as the European Medical Information Framework, the European Prevention of Alzheimer's Dementia, the Alzheimer's Prevention Initiative, and the Global Alzheimer Platform have all recognized the critical value of data standards and have committed to their implementation so that data from these efforts can be integrated. In addition, the Global Alzheimer's Association Interactive Network is "advancing research into the causes, prevention and treatment of Alzheimer's and other neurodegenerative diseases through a global cooperative of sharing, investigation and discovery" (http://www.gaain.org) [6,42].
Focus on biomarkers
AD biomarkers that have been a focus of standardization include CSF analytes [22,31,43], plasma proteins [44,45], and neuroimaging parameters [46,47]. Given that more than 70% of the variability in biofluid measurements in blood is attributable to preanalytical factors [44], it is critical to standardize sample collection procedures. CDISC data standards should not be confused with protocol standards for biomarker acquisition. However, having a set of consensus-based data standards that capture the concepts relevant to these protocols serves to highlight the importance of acquisition parameters and provides a standardized way of representing data collected according to a given protocol. This allows analysts to quickly filter subsets of aggregated data that were collected in a similar fashion.
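The filtering step in the last sentence becomes a one-liner once acquisition metadata travel alongside the values. A small sketch, assuming pandas and invented records:

```python
import pandas as pd

# Pooled biomarker records carrying acquisition metadata (illustrative values).
pooled = pd.DataFrame({
    "study":   ["A", "A", "B", "B", "C"],
    "analyte": ["Abeta42"] * 5,
    "tube":    ["polypropylene", "polypropylene", "polystyrene",
                "polypropylene", "polypropylene"],
    "assay":   ["ELISA", "ELISA", "ELISA", "xMAP", "ELISA"],
    "value":   [612.0, 598.0, 541.0, 630.0, 605.0],
})

# One line pulls the subset collected and assayed in a comparable fashion.
subset = pooled.query("tube == 'polypropylene' and assay == 'ELISA'")
print(subset.groupby("study")["value"].mean())
```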
Finally, there is a need to consider developing CDISC standards for nascent promising biomarkers, including novel imaging methodologies, metabolomics, proteomics, electroencephalogram, and even digital health platform technologies. This will reduce the time for validation and encourage data integration across studies. It is anticipated that prospective use of CDISC standards will expedite regulatory endorsement of biomarkers in the future for related central nervous system conditions and provide a path for the development of future standards [43].

Table 4. Future recommendations to enable efficient execution of novel AD treatments
• Prospective use of AD CDISC biomarker and clinical data standards in ongoing and prospective clinical trials of subjects with AD
• Continued engagement with submission of data and methodology to regulatory agencies in alignment with AD CDISC standards
• Expanded alliances of all stakeholder groups in implementing the use of AD CDISC standards, particularly precompetitive consortia working on discovery, validation, and regulatory endorsement of AD biomarkers
• Enhanced focus on preanalytical factors in all biomarker studies
• Initiation of data standards development for the aggregation of biosensor performance measures (both wearable devices and remote monitoring) that are increasingly being integrated into both observational and clinical trial designs
• Full engagement and active participation of all stakeholders and sponsors conducting AD clinical trials and biomarker discovery research, such as diagnostic companies and manufacturers, in embracing the use of AD CDISC standards and providing input on future versions of the standards
• Development of open-source data handling and analysis tools (based on the standards) that provide incentive and added value to users and that facilitate the use of the common data through capture, analysis, submission, regulatory review, and approvals
• Increased incentives to comply with CDISC standards (RFAs, funding, industry incentives; e.g., TBI seed grant RFA)
• Development of translational standards in the area of biomarkers (biomarkers that enable decision making from animals to clinic; i.e., preclinical common data elements [CDEs])
Regulatory implications of data standards
Regulatory agencies have encouraged consistent data collection and aggregation across multiple disease areas [48,49]. Since December 2016, FDA and Japan's Pharmaceuticals and Medical Devices Agency require the use of CDISC data standards for all NDA and certain investigational new drug submissions [50].
In recognition of the challenges posed by the sharing and aggregation of large data sets, regulatory agencies have launched the Letter of Support Initiative to highlight promising biomarkers, encourage data sharing, and stimulate additional studies [51,52]. Two of the first three letters of support issued for clinical biomarkers were issued to CAMD and focused on AD: one for exploratory prognostic CSF biomarkers (Aβ, tau, and phospho-tau) and one for the use of low baseline hippocampal volume as an enrichment biomarker in trials at early stages of AD [53]. Notably, these letters, signed by FDA leadership, clearly recommended the use of AD CDISC standards in future AD trials.
Conclusions
The AD CDISC data standard holds the promise of implementing a reproducible research framework that spans from first studies in man through the approval of new medicines. The use of CDISC standards aids in the understanding of the course of disease progression, improves the ability to detect statistically significant signals, and maximizes our ability to learn from both successes and failures. CDISC standards can be leveraged to enable the extraction of knowledge from ongoing and future AD trials. Table 4 highlights the key recommendations for the efficient execution and the development of drug development tools that accelerate the delivery of novel AD treatments. The use of the AD data standard will permit complex data modeling of disease progression from asymptomatic to dementia stages of this devastating condition in urgent need of effective intervention.
Acknowledgments
The authors acknowledge Dr Kewei Chen, Dr Patricia Cole, Dr Mark Forrest Gordon, Dr Susan DeSanti, Dr Adam Fleisher, Dr Andreas Jeromin, Dr Gerald Novak, Roberta Rosenberg, Dr Erin Muhlbradt, and Dr Jessica Langbaum who were critical in providing input to the development of the AD Therapeutic Area User Guide. The aforementioned colleagues served as subject-matter experts providing input on clinical science concepts and controlled terminology to support the use of standards in clinical trials. We acknowledge the leadership of Dr Rebecca Kush at Clinical Data Interchange Standards Consortium (CDISC) for her support of this initiative and Dr Amy Porter, Lisa Bain, and Dr Volker Kern for their role as medical writers and editors. This work was partially funded by the U.S. Food and Drug Administration's Critical Path Public Private-Partnerships Grant Program (grant number 1U18FD005320).
RESEARCH IN CONTEXT
1. Systematic review: The first therapeutic area user guide for AD CDISC standards had not previously integrated biomarkers. Given the growing importance of biomarker assays to understand disease progression, and the requirement of the FDA to have trials submitted using CDISC standards, we initiated an effort towards developing global consensus CDISC data standards for key AD biomarkers.
2. Interpretation: AD data standards promote the acceleration of our understanding of AD. They provide a reproducible research framework that spans from first studies in man through the launch of new medicines. CDISC standards improve our ability to detect signals in new compounds and maximize our ability to share learnings from both successes and failures.
3. Future directions: Future use of the AD data standards (v2.0) will permit complex data modeling of disease progression from asymptomatic to dementia stages of this devastating disease, and improve the efficiency of future regulatory reviews.
|
2018-04-03T01:38:41.621Z
|
2017-04-15T00:00:00.000
|
{
"year": 2017,
"sha1": "550e2f2481fa5b1fcc392842bfdb9d2c2c0d4c84",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.trci.2017.03.006",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "3d347a6932dc6d55f64692b012458b983af9d381",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
245449718
|
pes2o/s2orc
|
v3-fos-license
|
The Role of Home Economics Education in the 21st Century: The Covid-19 Pandemic as a Disruptor, Accelerator, and Future Shaper
This paper explores the role of home economics education in the 21st century. It commences with an explanation of the disruption to the five predicted future global megatrends – globalisation, urbanisation, digitisation, cybersecurity, sustainability – as a consequence of the global Covid-19 pandemic. The place of megatrends framing home economics is explored by presenting a textual analysis of a literacy publication created as an acceleration point for framing the next one hundred years of home economics and underpinned by global megatrends, published prior to the pandemic. Using the Voyant Tool, visualisations of the book Creating Home Economics Futures: The Next 100 Years are presented and compared to other key literary documents informing the field. The paper then turns to the ways in which education and learning have led to the repositioning of home economics as a field and home economics literacy as the key strategy for ensuring the field continues to remain relevant into the future. Priority areas for education include food literacy; individual, family and community well-being; and the reconstitution of the place of the home.
Introduction
On March 11 2020, the World Health Organization (WHO) (2020) officially declared a global pandemic. Since then, the Covid-19 pandemic has had a dramatic effect, not least of which is the clear demonstration of the fragility of human life, with more than 170 million infections and 3.5 million deaths in just over a year (Worldometer, 2021) and with no end in sight. The advent of this global pandemic is not without precedent, with many pandemics changing the course of human history over centuries, including leprosy, the Black Death, plagues, cholera, measles, the Russian, Spanish and Asian flus, HIV/AIDS and, in the 21st century, SARS (History.com, 2020). One of the key strategies for reducing the spread of the virus has been to maintain a safe distance from others to avoid transmission; to that end, since the pandemic was declared, most people around the world have been directed to isolate at home for a period, alongside employing personal protective behaviours such as wearing masks, washing hands frequently, and avoiding crowds.
Global megatrends
When events like a global pandemic occur, they change the course of history, dismantling predictions by futurists and analysts (Godfrey Team, 2020). These predictions are known as global megatrends, defined as 'a long-term process of societal, economic, and political change with a significant impact on a larger number of areas of life, including the spheres of work, consumer and leisure behavior, health, education, cultural identity, and political participation' (Petersen & Bluth, 2020, p. 1). The Covid-19 pandemic is no exception, having a disruptive effect on the predicted megatrends, and will continue to do so until the future containment of the pandemic is better known.
The Godfrey Team (2020) points to the pandemic as a catalyst for the following megatrend shifts: a deceleration from globalisation towards anti-globalism, resulting from the need for local self-sufficiency; a change to urbanisation led by working from home and the need for better-designed living spaces; an even greater acceleration of digitisation to solve problems and remove manual processes; the need for more sophisticated cybersecurity, especially with working-from-home patterns; and a greater focus on sustainability inspired by the visibility of the benefits derived during lockdown periods and the possibility for achieving greater outcomes than expected. Much of this change has resulted from what has been coined 'pandenomics' (Petersen & Bluth, 2020, p. 1), which is the effect of the coronavirus pandemic on the global economy: a massive, wide-ranging global economic crisis, with economies expected to experience major collapse.
The importance of understanding global megatrends has been part of the home economics literature for more than a decade. It was a key feature of the International Federation for Home Economics (IFHE) Position Statement - Home Economics in the 21st Century (IFHE, 2008), launched to coincide with the centennial celebrations of the establishment of IFHE as a professional organisation. The statement explicitly points to the need to future proof the profession, stating this as a clear objective for the decade ahead:

[T]he focus on the decade ahead is on future proofing, which describes the elusive process of trying to anticipate future developments, so that action can be taken to minimise possible negative consequences and to seize opportunities. Future-proofing the home economics profession and the federation is a challenging task but one which is necessary to ensure a sustainable vision both for the profession and for individual members. The International Federation of Home Economics has commenced its future-proofing strategy by focussing on questions of sustainability, advocacy and the active creation of preferred futures for Home Economics, relevant disciplinary fields, and the profession itself, while critically reflecting upon and being informed by its historical roots. (IFHE, 2008, p. 2)

In response, the book Creating Home Economics Futures: The Next 100 Years (hereafter referred to as the Book) (Pendergast et al., 2012a) brought together key leaders in home economics to consider how to future proof the profession. More than a decade ago, the ten global megatrends formulated by the Copenhagen Institute for Futures Study were used as the basis for the publication. The trends predicted to shape society were: ageing, globalisation, technological development, prosperity, individualisation, commercialisation, health and environment, acceleration, network organising, and urbanisation. The editors framed the Book to examine the global megatrends as contributing to probable futures and highlighted these as the impetus for future-proofing the profession (Pendergast et al., 2012b).
The collection of published works in the Book included a deep dive into the 'intention' of home economics education, arguing that while home economics curricula differ around the world, they share a common philosophical base. Furthermore, the intention of engaging in home economics education is to provide the individual with 'the learning opportunity to develop capabilities to enhance personal empowerment to act in daily contexts' (Pendergast, 2012, p. 13). This educational intention is reiterated in the IFHE Position Statement (IFHE, 2008), which states that, as a curriculum area, Home Economics:

[…] facilitates students to discover and further develop their own resources and capabilities to be used in their personal life, by directing their professional decisions and actions or preparing them for life. (p. 1)

A decade has passed, and we are in the midst of a global pandemic that has disrupted the global megatrends. It is an opportune time to reflect on the role of home economics, and especially home economics education, looking to the future.
Convergent moment
It could be argued that this moment constitutes a new 'convergent moment' for the profession. More than a decade and a half ago, in 2006, Pendergast (2006) introduced the concept of the 'convergent moment' to the home economics profession as a way of 'highlighting the alignment of a range of key factors impacting on the profession which, taken together, provide a climate of opportunity for reflection and renewal, thereby ensuring the relevance and sustainability of the profession' (Pendergast, 2013, p. 57). The potential for these convergent factors to act as a catalyst for generative action was advocated. The convergent factors in 2006 were identified as: (a) the past century of invention, development and changes in roles for men and women; (b) consumption and globalisation patterns; (c) generational characteristics and the emergence of the digital native as the Y generation; (d) features of 'New Times' and the need to be 'expert novices' (good at learning new things); and, (e) significant changes in individual and family structures impacting globally on demographic patterns and on the family's ability to fulfil its main functions as a fundamental social institution.
While these convergent factors remain largely relevant today and have been instrumental in the call for future-proofing the profession made public in the IFHE Position Paper (IFHE, 2008), the disruption to global megatrends by the pandemic means it is important to recast this thinking and to ensure home economics remains relevant in what has come to be known as the 'new normal' (Anderson et al., 2021).
The Book
In order to inform the future role of home economics education in the 21st century, an analysis of the Book, launched at the 2012 World Congress of the IFHE with global megatrends as its framing, serves as an important starting point. The foreword of Creating Home Economics Futures: The Next 100 Years (Pendergast et al., 2012a) describes the Book as follows:

This book offers an exciting opportunity to contribute to the thinking associated with the future of the Home Economics profession. Home Economists around the world, and those with an interest in Home Economics, were invited to contribute a chapter to the book. A stimulus chapter, by the same name as the book, was written by the editors for authors to use as a starting point from which to develop or stimulate their ideas on any aspect related to home economics in the next 100 years. A number of abstracts were submitted for consideration, and in this book, the final selection of chapters is presented. As editors of the book, we have been deeply impressed by the range and scope of chapters, presenting diverse and challenging ideas, and by the unexpected but welcomed synergy amongst ideas from practitioners all around the world; this synergy gives us hope for a powerful and sustainable future. This book will make an invaluable contribution to the profession of Home Economics, and will stimulate creative, deeply intellectual and philosophical thinking about possible and preferred futures. (p. iii)

The stimulus chapter explained the relevance of global megatrends and their key role in informing the predicted future. It then explained each of the global megatrends and set out the agenda for the need to future proof the profession as a way of taking an agentic role in creating a preferred future for the profession. Twenty chapters were published, with 34 authors from 14 countries (Australia, Botswana, Brazil, Canada, China, Finland, Germany, Japan, Malta, Netherlands, Nigeria, South Africa, Sweden, United States of America). The Book is 258 pages and has 105,025 words.
The analysis
An innovative method was employed to analyse the Book's contents and to present the findings as visualisations of the text. Voyant Tools (available at: https://voyant-tools.org/) was selected because it is a free, web-based text reading and analysis tool that has been used effectively by scholars and researchers for the digital scholarship of text mining since its first version was released in 2003 (Miller, 2018). The tool provides the opportunity to quantitatively explore qualitative data (text) with confidence and replicability; furthermore, it produces attractive visualisation outputs that are easy to analyse and interpret (Hetenyi et al., 2019). This approach also mirrors other published research (Pendergast, 2010, 2013) that investigated the textual properties of home economics materials, enabling comparison of the findings.
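Because Voyant Tools is web-based, the core frequency count is easy to reproduce locally for replication purposes. The sketch below is a rough local stand-in rather than Voyant itself: it counts terms after collapsing 'home economics' into a single token, mirroring the aggregation described in the findings; the sample text and trimmed stopword list are placeholders.

```python
import re
from collections import Counter

STOPWORDS = {"the", "and", "of", "to", "a", "in", "is", "for", "as"}  # trimmed

def term_frequencies(text: str) -> Counter:
    """Count terms, treating 'home economics' as one aggregated token."""
    text = text.lower()
    text = text.replace("home economics", "home_economics")
    tokens = re.findall(r"[a-z_]+", text)
    return Counter(t for t in tokens if t not in STOPWORDS)

sample = ("Home economics education shapes the future of food education; "
          "home economics connects food, family and life.")
for term, n in term_frequencies(sample).most_common(5):
    print(f"{term:15s} {n}")
```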
Findings
The word cloud presented in Figure 1 displays the terms scaled in proportionate size in the visualisation according to their frequency in the Book.
Figure 1. Word cloud visualising the frequency of terms in Creating Home Economics Futures: The Next 100 Years
For this analysis, 'home' and 'economics' are aggregated as one term: 'home economics'. Hence, the top 10 words appearing most frequently in the Book are: home economics, food, education, future, life, family, development, new, world, and sustainable. The one hundred most frequently occurring words are presented in rank order, along with their frequency, in Table 1. In addition to frequency counts, the Voyant Tool used for this analysis enables a range of text-driven visualisations, including the visualisation of links between major terms. Figure 2 presents the most frequent links between terms appearing in the Book. These are: home economics and food; home economics and creating; home economics and futures; home economics and education; home economics and years; home economics and economists; food and security; and food and vendors.
Discussion
These findings provide a means of quantifying the qualitative data in the form of the text in the Book. This research builds on previous work, which utilised the same analytic base and presents similar data; however, the previous analyses were conducted manually, using Excel databases. The ten most frequently published words in these documents are presented in Table 2. The first study, conducted by Pendergast (2010), produced word clouds from two key artefacts related to the profession at that time: the IFHE Position Statement and the IFHE Congress Proceedings, 2008. A high degree of alignment of the five most frequently used words was reported in this study, these being: home economics (1st and 2nd, respectively), profession (2nd and 1st), social (6th and 3rd), life (7th and 5th), and future (8th and 9th). In a further study by Pendergast (2013) using the same methodology to analyse the International Journal of Home Economics (IJHE), exploring all 11 issues of the journal published to that time, the word 'home economics' again emerges as the most frequently used word, with 'profession' (6th) also appearing in the top ten words used frequently throughout the journal. 'Food' is used frequently in the Congress Proceedings (4th) and the IJHE analysis (2nd). In this analysis of the Book, 'home economics' is again first and 'food' second. The words 'education' (3rd) and 'future' (4th) also reappear. When the ten most frequently occurring words from all four sources are entered into the Voyant Tool, the word map displayed in Figure 3 results.
Figure 3. Word Cloud Visualising the Frequency of Terms in Four Sources
All four analyses have 'home economics' as the most common term, with three of the sources having 'food', 'future', 'profession' and 'life' in the top ten, and with 'food' appearing at the highest rank after 'home economics'. 'Education', 'family', 'social' and 'world' also appear in two of the top ten lists.
The consistency of frequently used terms across these analyses creates a powerful visual representation of the formal discourse in the published literature in the field of home economics. There is a valid and reliable evidence base that the home economics literature is strongly focused on the profession, the future, food and life, along with education, family, social and the world. This finding also aligns with the global megatrends, especially the Book, which was framed around these trends. Food is very visible as a context for home economics work and is clearly established as the most common context, according to this literature analysis.
The unique connection to food education is dominant not only in these analysed artefacts but also in the way home economics is popularly viewed and understood. In the prestigious Journal of the American Medical Association, Lichtenstein and Ludwig (2010) called for the community to 'bring back home economics' in response to escalating rates of obesity. They argue that education about food is essential to address the knowledge gap underlying the obesity health crisis, which costs billions annually. Indeed, by 2016 Smith had located and analysed 40 articles that had the phrase 'bring back home economics' in the title. This call is part of a burgeoning focus on the need to better understand education for food literacy, with a systematic literature review of 44 studies confirming that adolescents with greater nutritional knowledge and food skills showed healthier dietary practices (Bailey et al., 2019). Of these 44 papers, seven specifically reported research about home economics and food literacy in schools (Dewhurst & Pendergast, 2008, 2011; Pendergast & Dewhurst, 2012; Ronto et al., 2016a, 2016b, 2016c, 2017), indicative of the contribution of home economics to this field by building a firm evidentiary base.
Reconstituting the field
The IFHE Position Paper (2008) defines home economics as a '[…] field of study and a profession, situated in the human sciences that draws from a range of disciplines to achieve optimal and sustainable living for individuals, families and communities' (p. 1). The paper stipulates that all home economics courses of study, and all professionals identifying as home economists, must exhibit the following three essential dimensions:
• a focus on fundamental needs and practical concerns of individuals and family in everyday life and their importance both at the individual and near community levels, and also at societal and global levels, so that well-being can be enhanced in an ever-changing and ever-challenging environment;
• the integration of knowledge, processes and practical skills from multiple disciplines synthesised through interdisciplinary and transdisciplinary inquiry and pertinent paradigms; and
• demonstrated capacity to take critical/transformative/emancipatory action to enhance well-being and to advocate for individuals, families and communities at all levels and sectors of society (IFHE, 2008, p. 2).
Further, it defined four dimensions of practice, as presented in Figure 4.
Figure 4. Four Dimensions of Home Economics Practice. Note: adapted from Pendergast et al., 2012b, p. 13.

Drawing upon the literature analysis and connecting these four dimensions with the global megatrends that have now been disrupted by the pandemic, the role of home economics education in the 21st century can be considered. As explained at the outset of this paper, the pandemic has catalysed the following megatrend shifts:
• slowing down globalisation;
• changes to urbanisation;
• greater acceleration of digitisation;
• more sophisticated cybersecurity; and
• greater focus on sustainability (Godfrey Team, 2020).
The four dimensions of home economics practice (IFHE, 2008) remain as pertinent as when they were conceived. In addition, the recognition of home economics as a key player in the global food literacy agenda connects to a major aspect of the disruption to normal practices and the rapid response to the global pandemic. The need for greater food security (heightened by the memory of empty grocery shelves and fights in aisles over disappearing stacks of pasta and rice); for food preparation skills (when restaurants and fast food outlets were closed and individuals and families had to prepare food at home more often than ever before, with limited resources); for food safety and hygiene practices (when personal protection and practices became a key part of preventing the spread of the virus); and for food production as a creative outlet (when people sought engaging activities with newfound time and rediscovered their joy of cooking) are just some of the aspects that have been reconstituted in response to the crisis. Ironically, the pandemic is likely to have intensified interest in food literacy, creating the legacies of appreciating, activating and strengthening food safety and hygiene practices, food as a creative practice, and other aspects of food literacy (Pendergast, 2021).
Alongside this, home economics can be expected to play an increasingly important role in addressing the emerging challenges associated with mental health and diminished individual, family and societal well-being. Data are increasingly becoming available on the effects of the pandemic and the resulting economic recession and changed ways of living: school and workplace closures, the demands of home-schooling and working from home, isolation and deprivation, and poor health outcomes and the deaths of friends and relatives are negatively impacting mental health and well-being on a global scale. One study reveals that 4 in 10 adults report symptoms of anxiety or depressive disorder, compared with 1 in 10 prior to the pandemic; well-being is impacted, with difficulty sleeping (36%) and eating (32%) and increased substance abuse (12%) (Panchal et al., 2021). This picture is the tip of the iceberg, with evidence of the impact only now emerging as the research is gathered. There is no question that home economics education has a crucial role in this space.
The home has become the new epicentre of survival for individuals and families since the world closed its doors in March 2020 and directed people to find shelter in their own homes as a public health imperative (Barnes & Sax, 2020). The home has been reconstituted as a safe space for work, schooling, exercise and recreation, and creativity and entertainment. Homes are regarded as safe, secure and familiar, and hence hold safe-space status, where social and personal experience and belongingness have evolved beyond viewing the home as merely a domestic space to include this range of functions (Gezici Yalcin & Duzen, 2021). This has been a positive experience for many, so much so that anxiety about, and resistance to, returning to workplaces have become an issue for some employers keen to repopulate office spaces safely (Barnes & Sax, 2020). The rapid provision of digital solutions has seen the ascendancy of online learning and industry tools at a pace never before experienced or expected, paving the way to genuinely effective working-from-home possibilities.
The mechanism for ensuring 21st-century home economics continues to make a worthwhile contribution is underpinned by a commitment to what has been described elsewhere as the Home Economics Literacy Model (HELM), presented in Figure 5 (Pendergast, 2015). This highlights the need to intersect the areas of practice and the essential dimensions to ensure home economics practice meets the intention of home economics literacy, meaning to move beyond the 'what' and 'how' to achieve its transformative potential.
Figure 5 Home Economics Literacy Model (HELM)
Examples of how this model operates are presented by Pendergast and Deagon (2021). Table 3 is a further elaborated example demonstrating how this model can be operationalised, in this instance with a focus on promoting resilience in the context of unpredictable change, as is relevant to the pandemic situation. It is important to highlight the four dimensions of practice and the three essential elements forming the underpinning framework structuring this comprehensive home economics approach.
Summary and Conclusion
As the 'new normal' continues to evolve in the coming years, the role of home economics education has never been more significant. The study shared in this paper utilised the Voyant Tool to quantitatively explore qualitative data in the book Creating Home Economics Futures: The Next 100 Years (Pendergast et al., 2012a). The tool enables analysis with confidence and replicability and produces visualisation outputs that are easy to analyse and interpret. The findings reveal a strong connection to the agenda of the book: to shape the future informed by the global megatrends. The disruptive force of the Covid-19 pandemic on these predicted futures reveals a series of pivots and, in many cases, an acceleration combined with a redirection of future trends. In this space, the potential for home economics education to play a key role in reconstituting the future is abundantly clear. Spaces for intentional education focus include:
• the utilisation of the HELM model, which activates the areas of practice and the essential dimensions to ensure home economics education is inclusive of the knowledges, processes, and contexts for transformative action;
• food literacy action to mobilise the potential of education to achieve positive outcomes in increasingly challenging food-related health crises, especially those associated with obesity;
• enhancing the well-being of individuals, families and communities as a greater understanding of the effects of the pandemic emerges and points to a crisis of massive proportions globally;
• a reinvention of the place of the home, with new functions likely to be embedded as cultural norms.
Biographical note
Donna Pendergast, PhD, is a full professor in the field of teacher education in the School of Education and Professional Studies at Griffith University. Her research interests include: student engagement and wellbeing, especially of young adolescent learners; school reform; teacher education and professional learning; and home economics and family and consumer studies philosophy and practice.
Strominger–Yau–Zaslow Geometry, Affine Spheres and Painlevé III
We give a gauge invariant characterisation of the elliptic affine sphere equation and the closely related Tzitzéica equation as reductions of real forms of the SL(3, C) anti-self-dual Yang-Mills equations by two translations, or equivalently as a special case of the Hitchin equations. We use the Loftin–Yau–Zaslow construction to give an explicit expression for a six real-dimensional semi-flat Calabi-Yau metric in terms of a solution to the affine sphere equation, and show how a subclass of such metrics arises from third Painlevé transcendents.
Introduction
Let X be a six real dimensional Calabi-Yau (CY) manifold, that is, a complex Kähler three-fold with covariantly constant holomorphic three-form Ω. Any such manifold admits a Ricci flat Kähler metric with holonomy contained in SU(3).
We shall consider a subclass of CY manifolds which are fibred over a real three dimensional manifold B, with fibres that are special Lagrangian tori $T^3$. This means that there exists a projection π : X → B such that the restrictions of the Kähler form ω and the real part of the holomorphic three-form Re(Ω) vanish on any fibre $\pi^{-1}(p) \cong T^3$ over a point p ∈ B.
The corresponding CY metric is called semi-flat if it is flat along the fibres. Consider the Kähler form $\omega = i\partial\bar\partial\phi$, where φ is the Kähler potential. A natural class of semi-flat CY manifolds are the $T^3$ invariant manifolds. In this case the potential φ can be chosen not to depend on the coordinates of the fibres of π. The Ricci-flat condition $\det\left(\frac{\partial^2\phi}{\partial z^j\partial\bar z^k}\right) = 1$ then reduces to the real Monge-Ampère equation
$$\det\left(\frac{\partial^2\phi}{\partial x^j\partial x^k}\right) = 1, \qquad (1.1)$$
where $x^j$, j = 1, 2, 3, are local coordinates on B. The work of Cheng and Yau [6] shows that semi-flat CY metrics on a compact complex three-fold are flat, so in what follows we allow CY manifolds to be non-compact, and some fibres of π to be singular. The conjecture of Strominger, Yau and Zaslow (SYZ) [28] states that near the large complex structure limit both X and its mirror should be fibrations over the moduli space of special Lagrangian tori. More precisely, SYZ consider the moduli space of special Lagrangian submanifolds admitting a unitary flat connection. They write down a metric on X and compute the metric on the moduli space. In the tree level contribution this metric is derived from the Born-Infeld action for the brane, assuming that the moduli parameters slowly vary in time and expanding the action up to second order in time derivatives. The metric on the moduli space Y arises from the kinetic term in the Born-Infeld action. This method is based on Manton's moduli space approximation [21] and was originally used by SYZ. The metric resulting on Y admits the $T^3$ action even if the original metric on X does not. The full agreement between Y and the mirror of X is therefore expected when instanton contributions from minimal area holomorphic discs whose boundaries wrap the tori are taken into account. These corrections are suppressed in the large complex structure limit.
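To see why $T^3$ invariance reduces the complex equation to the real one, note that for a potential φ independent of the fibre coordinates $y^j$ one has (a short check; the constant factor is absorbed by a rescaling of the coordinates $x^j$):
$$\frac{\partial}{\partial z^j} = \frac{1}{2}\left(\frac{\partial}{\partial x^j} - i\,\frac{\partial}{\partial y^j}\right) \quad\Longrightarrow\quad \frac{\partial^2\phi}{\partial z^j\partial\bar z^k} = \frac{1}{4}\,\frac{\partial^2\phi}{\partial x^j\partial x^k},$$
so $\det(\partial^2\phi/\partial z^j\partial\bar z^k) = 1$ becomes $\det(\partial^2\phi/\partial x^j\partial x^k) = \mathrm{const}$, which is (1.1) after the rescaling.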
One approach to a proof of the Strominger-Yau-Zaslow conjecture [28] would be to describe Ricci-flat metrics on Calabi-Yau manifolds near large complex structure limits. It is expected that in the large complex structure limit the base of the fibration π : X → B admits an affine structure and a special metric of Hessian form. To test this conjecture, Loftin, Yau and Zaslow (LYZ) [20] aimed to prove the existence of the metric of Hessian form
$$g_B = \frac{\partial^2\phi}{\partial x^j\partial x^k}\,dx^j\,dx^k, \qquad (1.2)$$
where φ is homogeneous of degree 2 in $x^j$ and satisfies (1.1). Given such a Hessian metric on B, the semi-flat Calabi-Yau metric g on TB and the corresponding Kähler form are given by
$$g = \phi_{jk}\,(dx^j dx^k + dy^j dy^k), \qquad \omega = \phi_{jk}\,dx^j\wedge dy^k, \qquad (1.3)$$
where $y^j$ are coordinates on the fibres of TB and $z^j = x^j + iy^j$. LYZ constructed a candidate for such a metric as a cone over the elliptic affine sphere metric with three singular points. One consequence of the Mirror Conjecture is that the base metric $g_B$ should have singularities in codimension two, and LYZ were interested in a local metric model near the trivalent vertex of a Y-shaped singularity. The monodromy of the resulting affine structure has not been calculated, so it is not yet clear that the metric coincides with the one predicted by Gross-Siebert [10] and Haase-Zharkov [12].
The LYZ construction of the metric comes down to looking for solutions of the definite affine sphere equation [27]
$$\psi_{z\bar z} + \frac{1}{2}e^{\psi} + |U|^2 e^{-2\psi} = 0, \qquad U_{\bar z} = 0, \qquad (1.4)$$
where ψ and U are real and complex valued functions respectively on an open set in C. LYZ set $U = z^{-2}$ to account for the singularity of the metric they considered. They then proved the existence of the radially symmetric solution ψ of (1.4) with a prescribed behaviour near the singularity z = 0, and established the existence of the global solution to the coordinate-independent version of (1.4) on $S^2$ minus three points.

Footnote 1. It follows from the work of Hitchin [13] that the natural Weil-Petersson metric on the space of special Lagrangian submanifolds has this form. More precisely, it is shown in [13] that the Kähler potentials of X and its mirror Y both satisfy the Monge-Ampère equation (1.1) and are related by a Legendre transform on the base. The fibres of the special Lagrangian fibration of Y are dual (by a Fourier transform) tori to the fibres of π : X → B.
In this paper, we study the integrability of equation (1.4). We show that the affine sphere equation and a closely related equation called the Tzitzéica equation arise as reductions of the anti-self-dual Yang-Mills (ASDYM) system by two translations, and hence admit a twistor interpretation. Moreover, the ODE characterising the radial solutions of (1.4) gives rise to an isomonodromy problem described by the Painlevé III ODE. The two-dimensional group of translations reduces the Euclidean ASDYM equations to the Hitchin equations [14], and Theorem 1.1 below gives an invariant characterisation of (1.4) as a special case of the SU(2, 1) Hitchin equations.
Let A be an su(2, 1) valued connection on a rank 3 complex vector bundle E → C with curvature $F_A = dA + A\wedge A$, and let Φ be a one-form with values in adj(E). Choose a local trivialisation of E and set $\Phi = Q\,dz$, where $m^* := -\eta^{-1}\bar m^{\,t}\eta$ with η = diag(1, 1, −1), so that $\Phi^* = Q^*\,d\bar z$.
Theorem 1.1. The Hitchin equations (1.5) hold for the ansatz (1.6) if the functions (ψ, U) satisfy the affine sphere equation (1.4).
Conversely, any solution to the SU(2, 1) Hitchin equations such that 1. Q has minimal polynomial $t^2$ and $\mathrm{Tr}(QQ^*) \neq 0$, is equivalent to (1.6) by gauge and coordinate transformations.
The connection between solutions to the affine sphere equation (1.4) and the Calabi-Yau metric (1.3) in six dimensions was not made explicit in [20]. The Lax representation of (1.4) will be used to prove the following: for the cone metrics considered above, where c is a non-zero constant, there exist complex coordinates {z, w, ξ} such that the metric g and the Kähler form ω can be written in the form (1.7)-(1.8), where ψ(z, z̄) and U(z) are real and complex valued functions respectively, defined on an open set in C, which satisfy the affine sphere equation (1.4).
The Hitchin equations (1.5) are integrable as they arise from ASDYM, and their solutions can be described by holomorphic twistor data. Therefore any ODE arising as a reduction of (1.4) by another symmetry must be of Painlevé type, in agreement with the integrable dogma [1, 22, 8].
In the next section we follow Leung [18] and review the semi-flat Calabi-Yau manifolds. Then, in section 3 we summarise the results about affine spheres which are used in the LYZ construction [20]. In section 4 we prove Theorem 1.1 and give a gauge invariant characterisation of the definite affine sphere equation and the closely related Tzitzéica equation as symmetry reductions of the anti-self-dual Yang-Mills equations. As a byproduct, in section 5 we shall obtain a characterisation of a reduction of the Hitchin equations to the $Z_3$ two dimensional Toda chain. In section 6 we discuss other possible gauge inequivalent reductions of the ASDYM equations to the affine sphere equation and the Tzitzéica equation. In section 7 we give a proof of Proposition 1.2 and recover the toric Calabi-Yau metric in terms of the solutions of the affine sphere equation. Finally, in section 8 we establish Proposition 1.3 and demonstrate that the existence theorem for Hessian metrics with prescribed monodromy comes down to the study of the Painlevé III equation with special values of parameters, and obtain the corresponding 3 × 3 isomonodromic Lax pair.
Semi-Flat Calabi-Yau manifolds and the SYZ conjecture
Let $z^j = x^j + iy^j$ be holomorphic coordinates on a Calabi-Yau three-fold X, and let $\phi(z^j, \bar z^j)$ be the Kähler potential such that $\omega = i\partial\bar\partial\phi$. The Ricci-flat condition for the corresponding Riemannian metric is the Monge-Ampère equation $\det(\partial^2\phi/\partial z^j\partial\bar z^k) = 1$, where $\Omega = dz^1\wedge dz^2\wedge dz^3$ is the holomorphic three-form on X. We then compactify the fibres by quotienting them by a lattice, thus producing a $T^3$ invariant Calabi-Yau structure on the total space of a toric fibration π : X → B.
We are now ready to formulate the SYZ conjecture. If X, Y are mirror Calabi-Yau manifolds (see [11] for a discussion of what this means), then there exists a compact real three-manifold B such that
• π : X → B, ρ : Y → B are special Lagrangian fibrations by tori (the fibres can be singular at some points of B);
• The fibres of π and ρ are dual tori.
The second condition only makes sense for flat tori; therefore the conjecture holds in the large complex structure limit, where the volume of the fibres is small in comparison to the volume of the base space and the metric on the fibres is approximately flat. To understand the large complex structure limit, consider a one parameter family of complex structures J(t) given by the holomorphic coordinates $z^j(t) = t^{-1}x^j + iy^j$, and the corresponding Calabi-Yau metrics rescaled by $t^2$:
$$g(t) = \phi_{jk}\,(dx^j dx^k + t^2\, dy^j dy^k).$$
Thus we get a one parameter family of special Lagrangian fibrations. In the limit t → 0, the Gromov-Hausdorff limit of the metrics g(t) is the Hessian metric (1.2) on B, and the size of the fibres shrinks to zero. The SYZ conjecture predicts that such a limit exists for any Calabi-Yau metric on a (not necessarily $T^3$ symmetric) toric special Lagrangian fibration.
Affine geometry and Hessian metrics
The Hessian equation (1.1) is known not to be integrable, at least in the sense of hydrodynamic reductions [9]. Its homogeneous solutions are however characterised by an integrable PDE. We shall carry out the homogeneity analysis for a general Hessian metric in (n + 1) dimensions, and then restrict our attention to n = 2, where there is a direct connection with the semi-flat CY manifolds on one side and integrability on the other.
The following proposition follows from combining results of Calabi [5] and Baues-Cortés [2] about parabolic and elliptic affine spheres. Here, we give a direct elementary proof not based on affine differential geometry. It has certain advantages as it exhibits explicit coordinate transformations between solutions to various forms of homogeneous Hessian equations.
Proof. Consider the Hessian metric (1.2) with φ homogeneous of degree 2, so that the Euler vector field $V = x^j\partial/\partial x^j$ is a homothety of $g_B$. Locally there exists a function r : B → R such that V = r∂/∂r and $g_B = \gamma\,(dr + r\alpha)^2 + r^2 h$, where h, α, γ are a metric, a one-form and a function respectively on the space of orbits of V. The homothety relation implies $d(\gamma(dr + r\alpha)) = 0$, and we can redefine r to set α = 0 and γ = 1. We also note that $|V|^2 = x^i x^j\phi_{ij} = 2\phi$, and recognise $g_B$ as a cone over h,
$$g_B = dr^2 + r^2 h.$$
Now let us consider the surface r = 1, given by a graph $x^{n+1} = v(\hat x^\alpha)$ in $\mathbb{R}^{n+1}$, where $\hat x^\alpha$, α = 1, . . . , n, parametrise the surface. We shall show that its induced metric h is given by (3.4), where $\partial_\alpha := \partial/\partial\hat x^\alpha$. To prove it, restrict the function φ to the surface r = 1. This gives an identity $\phi(\hat x^\alpha, v(\hat x^\alpha)) = 1/2$. We differentiate this identity implicitly with respect to $\hat x^\alpha$ and express the first and second derivatives of φ in terms of the derivatives of v; the last relation is just the homogeneity condition restricted to the hypersurface φ = 1/2. Substituting all of this into $g_B$ gives (3.4).
Now if the function φ in the Hessian metric $g_B$ satisfies the Hessian condition (1.1), then v satisfies (3.5). To see it, let us write the coordinates $x^i$ on $\mathbb{R}^{n+1}$ as $(x^1, \dots, x^n, x^{n+1}) = (r\hat x^1, \dots, r\hat x^n, r\,v(\hat x^\alpha))$, that is, regard $\mathbb{R}^{n+1}$ as the cone over the r = 1 surface. Now consider the invariant volume element (3.6), where $|g_B|$ is the absolute value of the determinant of the Hessian metric (1.2) written in the coordinates $x^i$, and $\tilde g_B$ is the same metric expressed in the basis $\{d\hat x^\alpha, dr\}$. We contract both sides of (3.6) with V. On the LHS of (3.6) we use the form $V = x^i\partial/\partial x^i$ and on the RHS use V = r∂/∂r. We now set r = 1 and impose the Hessian equation (1.1). On the surface r = 1 one has $\det\tilde g_B = \det h$, where h is given by (3.4). Substituting this in the above formula and taking squares of both sides yields (3.5). Note that we have taken det h > 0, from the assumption that $\det g_B = \det\phi_{jk} = 1$.
To obtain the statement in the proposition, perform a Legendre transform, which implies (3.1) and (3.2). □
Now let us consider a hypersurface Σ immersed in $\mathbb{R}^{n+1}$ with the flat metric $\delta_{jk}dx^j dx^k$, given by a graph (3.8). The first and second fundamental forms on Σ are defined in the standard way, with n the unit normal to Σ. Tzitzéica [29, 30] studied surfaces Σ in $\mathbb{R}^3$ for which the ratio of the Gaussian curvature K to the fourth power of the distance from a tangent plane to some fixed point is a constant. If K ≠ 0, we can always rescale the coordinates to set this constant to +1 or −1, depending on the sign of the Gaussian curvature. We shall call this the Tzitzéica condition. The generalisation of the Tzitzéica condition to hypersurfaces in $\mathbb{R}^{n+1}$ is given by (3.9), where D = r · n is the same as the distance, up to sign. In the adapted coordinates, D and the Gaussian curvature K can be computed explicitly in terms of v.
It follows that the Tzitzéica condition holds if and only if equation (3.5) holds,
where the plus and minus signs correspond to positive and negative Gaussian curvature respectively. It is well known in affine differential geometry that an immersed hypersurface Σ in $\mathbb{R}^{n+1}$ is an affine hypersphere with the origin as its centre if and only if the Tzitzéica condition (3.9) holds [25]. It turns out that the metric (3.4), with v satisfying (3.5), is the same as the Blaschke metric (or affine metric) of a proper affine hypersphere. The Blaschke metric is conformally related to the second fundamental form, and is defined as follows. Let N denote the transversal vector field of the surface Σ in terms of which the unit normal n is expressed; the Blaschke metric is then defined by (3.12). Therefore, for the surface Σ given by the graph (3.8), the resulting Blaschke metric coincides with the metric (3.4) if equation (3.5) holds.
In affine differential geometry it is also known [5] that a Hessian metric (1.2) which satisfies $\det\phi_{ij} = 1$ is a parabolic (improper) affine hypersphere metric. We have demonstrated that the Hessian equation (1.1) on φ implies (3.5) on v. Therefore, this is in agreement with a result of Baues and Cortés [2] that a parabolic affine hypersphere metric which admits a homothety $\mathcal{L}_V g_B = 2g_B$ is the metric cone over a proper affine hypersphere.
Let us now restrict our attention to n = 2, and consider the metric h of (3.4). For n = 2, det h > 0 implies that h is a definite metric. In the context of Calabi-Yau manifolds the metric $g_B$ is Riemannian, hence one is interested in positive-definite h. Baues and Cortés [2] have shown that in this case h is the Blaschke metric of a definite elliptic affine sphere, with affine mean curvature 1. Since h is positive definite, we can adopt isothermal coordinates for the affine metric (which are asymptotic coordinates for the second fundamental form $h_{II}$) and write it as
$$h = e^{\psi}\,dz\,d\bar z \qquad (3.11)$$
for some real valued function ψ = ψ(z, z̄). In this form, Simon and Wang [27] proved that the structure equations of a definite affine sphere imply that ψ satisfies (1.4), where $U\,dz^3$ is the holomorphic cubic differential. Conversely, given a solution of (1.4) one can construct an affine sphere with $h = e^{\psi}dz\,d\bar z$ as its Blaschke metric. We should note here that if the holomorphic cubic differential $U(z)\,dz^3$ is non-zero, we can choose the isothermal coordinates such that U = 1, for example by defining ξ = ξ(z) with $d\xi = U(z)^{1/3}dz$. We will make use of such a coordinate transformation in section 4. Loftin, Yau and Zaslow [20] proved the existence of a semi-flat Calabi-Yau metric (1.3), with the base metric $g_B$ the metric cone over an elliptic affine sphere with the prescribed singularity, by proving the existence of a radially symmetric solution ψ of (1.4) for $U(z) = z^{-2}$ and the corresponding global solution on $S^2$ minus three points. Motivated by this work, we are interested in the integrability of the definite affine sphere equation (1.4). The affine sphere equation is closely related to a well known integrable equation, namely the Tzitzéica equation (3.16).

Footnote 3. The usual affine immersion in $\mathbb{R}^{n+1}$ only assumes a flat connection D and a parallel volume element on $\mathbb{R}^{n+1}$, but not an ambient metric. In particular, the structure equations of a Blaschke hypersurface immersion f : (Σ, ∇) → ($\mathbb{R}^{n+1}$, D) are the Gauss and Weingarten equations, where ∇ is an affine connection on Σ, X, Y ∈ TΣ, ξ is a transversal vector field chosen uniquely up to sign to satisfy certain properties, called the affine normal field, and h is the Blaschke metric defined by (3.12). This definition turns out to be equivalent to (3.10) if one were to use the Euclidean metric on $\mathbb{R}^{n+1}$. The operator S : TΣ → TΣ is called the affine shape operator and $H = \frac{1}{n}\mathrm{Tr}(S)$ the affine mean curvature. A proper affine sphere is defined to be a Blaschke hypersurface with S = HI, I being the identity. Another affine invariant quantity is a totally symmetric tensor called the cubic form $\hat C$, defined in terms of the difference tensor $C = \hat\nabla - \nabla$, where $\hat\nabla$ is the Levi-Civita connection of h. Consider h as in (3.11) and let $C^i_{jk}$, i, j, k ∈ {1, 1̄}, be the components of C in the basis $e^1 = dz$, $e^{\bar 1} = d\bar z$. Then it can be shown that the only non-vanishing components of C are $C^{\bar 1}_{11}$ and its complex conjugate, and the function U in (1.4) is given by $U = C^{\bar 1}_{11}e^{\psi}$. It follows that the cubic form is $\hat C = U\,dz^3 + \bar U\,d\bar z^3$. See [5, 25, 27, 19] for details.

Footnote 4. We note that the analytic continuation $\psi_{\xi\bar\xi} + e^{\psi} - e^{-2\psi} = 0$ of equation (3.14) was used by McIntosh [23] to describe minimal Lagrangian immersions in $\mathbb{CP}^2$ and special Lagrangian cones in $\mathbb{C}^3$.
In the context of affine spheres, the Tzitzéica equation arises if det h < 0. By writing the metric in isothermal coordinates as $h = 2e^u\,dx\,dy$ and considering the structure equations, Simon and Wang [27] also show that h is the Blaschke metric of an indefinite affine sphere (with negative affine mean curvature) if and only if u satisfies
$$u_{xy} = e^{u} - r(x)b(y)\,e^{-2u},$$
where r(x), b(y) are arbitrary non-vanishing functions of one variable, which can be normalised by rescaling the isothermal coordinates. Thus we obtain
$$u_{xy} = e^{u} - \epsilon\, e^{-2u}, \qquad (3.17)$$
where ε = ±1. The equation with ε = 1, i.e. (3.16), was first derived in [29, 30] for the Tzitzéica surface in $\mathbb{R}^3$ with negative Gaussian curvature $K = -D^4$, where the indefinite second fundamental form is written in asymptotic coordinates as $h_{II} = \frac{2e^u}{D}\,dx\,dy$.
The difference between the two equations (3.16) and (1.4) lies in the relative sign of the two exponential terms on the RHS. For the Tzitzéica equation u = 0 is a solution, and other solutions may be constructed using Darboux and Bäcklund transformations; see for example [4]. The definite affine sphere equation does not seem to have such obvious solutions. However, Calabi [5] has shown that an elliptic affine hypersphere with complete Blaschke metric is an ellipsoid. This is in agreement with the fact that (1.4) admits solutions in terms of elliptic functions, which can be found by making the ansatz $\psi(z, \bar z) = f(z + \bar z)$ in (3.14).
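To illustrate this (a small check, written directly for equation (1.4) with the normalisation U = 1 rather than for the rescaled form (3.14)), set $x = z + \bar z$ and $\psi = f(x)$, so that $\psi_{z\bar z} = f''(x)$ and (1.4) becomes the autonomous ODE
$$f'' + \frac{1}{2}e^{f} + e^{-2f} = 0,$$
with first integral
$$\frac{1}{2}(f')^2 + \frac{1}{2}e^{f} - \frac{1}{2}e^{-2f} = E.$$
Substituting $p = e^{f}$ turns the quadrature into $(p')^2 = 2Ep^2 - p^3 + 1$, so p, and hence ψ, is expressible in terms of elliptic functions.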
Reduction of ASDYM
It was shown in [7] that the Tzitzéica equation (3.16) can be obtained from a special ansatz for the anti-self-dual Yang-Mills equations in $\mathbb{R}^{2,2}$ with gauge group SL(3, R). In this section we shall give a gauge and coordinate invariant characterisation of the Tzitzéica equation and the definite affine sphere equation as different real forms of a reduction of ASDYM on $\mathbb{C}^4$ with gauge group SL(3, C), via the holomorphic Hitchin equations on $\mathbb{C}^2$.
Holomorphic Tzitzéica equation
Consider a holomorphic metric and volume element on $\mathbb{C}^4$ in double null coordinates $(z, \tilde z, w, \tilde w)$. Let $A = A_z dz + A_w dw + A_{\tilde z}d\tilde z + A_{\tilde w}d\tilde w$ be a Lie algebra valued connection on a vector bundle $E \to \mathbb{C}^4$. The anti-self-dual Yang-Mills equations (4.1) are
$$F_{zw} = 0, \qquad F_{\tilde z\tilde w} = 0, \qquad F_{z\tilde z} - F_{w\tilde w} = 0.$$
These equations arise from the Lax pair of covariant derivatives $[D_z - \lambda D_{\tilde w},\ D_w - \lambda D_{\tilde z}] = 0$, and (4.1) is required to hold for any value of the spectral parameter λ. Choose the gauge group to be SL(3, C) and assume that A is invariant under the action of a two dimensional group of translations $\mathbb{C}^2$ such that the metric restricted to the planes spanned by the generators of the group is non-degenerate. Let $X_1, X_2$ be the generators of the group; then the Higgs fields belong to the adjoint representation. We can always choose the coordinates so that the group is generated by the two null vectors $X_1 = \partial/\partial w$ and $X_2 = \partial/\partial\tilde w$. The ASDYM system reduces to the holomorphic form of the Hitchin equations [14]
$$D_z Q = 0, \quad (4.2a) \qquad D_{\tilde z}P = 0, \quad (4.2b) \qquad F_{z\tilde z} = [Q, P], \quad (4.2c)$$
where $F_{z\tilde z}$ is the curvature of a holomorphic connection $A = A_z dz + A_{\tilde z}d\tilde z$ on $\mathbb{C}^2$. The Hitchin equations are invariant under the gauge transformations (4.3), and later we shall also make use of the following coordinate freedom:
$$z \longrightarrow \hat z(z), \qquad \tilde z \longrightarrow \hat{\tilde z}(\tilde z). \qquad (4.4)$$
Consider the ansatz (4.6), (4.7) for $(Q, P, A_z, A_{\tilde z})$, where u(z, z̃) is a complex valued function, holomorphic in (z, z̃). With this ansatz the Hitchin equations yield the holomorphic Tzitzéica equation
$$u_{z\tilde z} = e^{u} - e^{-2u}. \qquad (4.8)$$
Now we shall establish a gauge invariant characterisation of the ansatz (4.6), (4.7) in terms of the gauge and Higgs fields of the Hitchin equations. We will make use of the following lemma.
Lemma 4.1. Consider 3 by 3 complex matrices P, Q satisfying the conditions (4.10).
There exists a gauge transformation such that P, Q are in the form (4.6) for some u.
Proof. The conditions (4.10) are invariant under the gauge transformations (4.3). These conditions imply that the nullities (dimensions of the kernels of the associated linear maps) satisfy n(QP) < 3 and n(P) = 2. Thus Ker(QP) = Ker(P).
Also rank(QP ) = 1 and Im(QP ) is contained in the one-dimensional image of Q, therefore Im(QP ) = Im(Q).
where ω ≠ 0, as Tr(PQ) = ω ≠ 0. There is still a residual freedom in (4.12). Thus, using condition (i), one computes how the square of the covariant derivative $(D_zP)^2$ transforms, and similarly for $(D_{\tilde z}Q)^2$; therefore, conditions (i) and (ii) are invariant under the coordinate transformation. A similar calculation shows that (iii) is also invariant under (4.4). Conversely, we shall now show that any solution to (4.2a, b, c) such that all the conditions in Proposition 4.2 hold can be gauge and coordinate transformed into the form (4.6), (4.7).
Firstly, by Lemma 4.1, condition (i) implies that we can use the gauge symmetry to put the Higgs fields (Q, P) in the form (4.6). The equations (4.2a) and (4.2b) imply that $A_z$, $A_{\tilde z}$ are of the form (4.13), where n, r, m, t, p, s, h, k are some functions of (z, z̃). Note that we have also used the assumption that the fields are sl(3, C) valued, hence traceless. Next, to set the diagonal elements of $(A_z, A_{\tilde z})$ to be as in (4.7), we consider the residual gauge freedom. Lemma 4.1 implies that the gauges preserving (Q, P) are given by (4.14) for an arbitrary function a(z, z̃) ≠ 0. Thus, using (4.3), we choose a(z, z̃) such that
$$(\ln a)_z = u_z - n, \qquad (\ln a)_{\tilde z} = -p. \qquad (4.15)$$
This is allowed because the compatibility condition for (4.15) holds automatically as a consequence of condition (iii): equation (4.2c) expresses the mixed second derivatives of u in terms of the fields, and hence condition (4.15) is equivalent to a relation which holds by (iii). Note that at this point the elements of $(A_z, A_{\tilde z})$ will be transformed; however, for convenience we will label them with the same letters as in (4.13). Thus we have set $n = u_z$ and p = 0. We now proceed to deal with r, m, t, s, h, k. The condition $\mathrm{Tr}((D_zP)^2(D_{\tilde z}Q)^2) \neq 0$ in condition (ii) implies that m, h = 0 and r, t, s, k ≠ 0. Hence (4.2c) becomes a system of first order equations for these functions, and since r, t, s, k ≠ 0 we can solve them. The last three equations imply that t is a constant, which can thus be set to 1 by a constant gauge transformation of the form (4.14) with $a = t^{-1/3}$, while s is determined to be of the form $b(\tilde z)e^{-2u}$. This results in the form (4.16). Note that the gauge is now fixed. To get to the ansatz (4.6), (4.7) we will now use the coordinate symmetry. Define $\hat z$, $\hat{\tilde z}$ such that $d\hat z = e^{j(z)}dz$, $d\hat{\tilde z} = e^{l(\tilde z)}d\tilde z$, and set $\hat u := u - j(z) - l(\tilde z)$.
By choosing j(z), l(z̃) such that $e^{3j(z)} = r(z)$ and $e^{3l(\tilde z)} = b(\tilde z)$, (4.16) becomes gauge equivalent to (4.6), (4.7) in the new variables $(\hat z, \hat{\tilde z}, \hat u)$. The gauge transformation we need in the final step is given by (4.3) with a suitable constant diagonal g. We note that substituting (4.16) into the Hitchin equations yields (4.17). Therefore, the change of coordinates can, roughly speaking, be regarded as setting r(z) and b(z̃) to constants such that $r(z)b(\tilde z) = 1$.
We shall now choose the Euclidean reality condition $\tilde z = \bar z$, $\tilde w = -\bar w$, and select the real form SU(2, 1) of SL(3, C) to deduce Theorem 1.1 from the last proposition. Proof of Theorem 1.1. Consider the ansatz (4.16) and equation (4.17). Changing the dependent variable from u to ψ, with $e^{u} = -\tfrac{1}{2}e^{\psi}$ for any branch of $\log(-\tfrac{1}{2})$, equation (4.17) becomes the affine sphere equation with $U(z) = 2r(z)$, $\tilde U(\tilde z) = 2b(\tilde z)$. Then, after an SL(3, C) gauge transformation, the ansatz (4.16) becomes (4.19). Impose the Euclidean reality conditions $\tilde z = \bar z$, $\tilde w = -\bar w$, resulting in a positive-definite metric on $\mathbb{R}^4$, and take the gauge group to be SU(2, 1). A matrix M is in the Lie algebra su(2, 1) if it is trace-free and satisfies $M^{\dagger}\eta + \eta M = 0$, where $\eta = \eta^{-1} = \mathrm{diag}(1, 1, -1)$.
Let z = p + iq, w = r + is, so that (p, q, r, s) are standard flat coordinates on $\mathbb{R}^4$. The gauge fields $A_p, A_q, A_r, A_s$ are su(2, 1) valued. The relations $A_z = (A_p - iA_q)/2$, $A_{\bar z} = (A_p + iA_q)/2$, together with (4.22), imply a reality relation between $A_z$ and $A_{\bar z}$, with a similar relation between $A_w$ and $A_{\bar w}$. Concretely, this means that the matrix entries are constrained so that a + e + k = 0 (and of course $A_w$ and $A_{\bar w}$ are related in the same way). Choosing the real form SU(2, 1) of SL(3, C) on restriction to the Euclidean slice imposes the constraint $\tilde U = \bar U$ and yields the affine sphere equation (1.4).
To sum up, one can achieve a characterisation of the ansatz (4.19), with $\tilde z = \bar z$, $\tilde U = \bar U$, analogous to Proposition 4.2. Let us again choose the double null coordinates such that the generators of the symmetry group of the ASDYM equations are given by $\partial_{\tilde w}$, $\partial_w$. With the chosen reality condition the ASDYM equations reduce to the SU(2, 1) Hitchin equations (4.23), (4.24), and Theorem 1.1 arises as a corollary of Proposition 4.2.
Tzitzéica equation
The Tzitzéica equation (3.16) is a different real form of (4.8). It arises from the ASDYM equations with the gauge group SL(3, R) on restriction to the ultrahyperbolic real slice $\mathbb{R}^{2,2}$ in $\mathbb{C}^4$, with (w, w̃, x = z, y = z̃) real. The Higgs fields are given by $P = A_{\tilde w}$, $Q = A_w$, and the metric on the space of orbits of $X_1 = \partial_{\tilde w}$ and $X_2 = \partial_w$ has signature (1, 1). The real version of the ansatz (4.6), (4.7) can be characterised analogously to the holomorphic case treated in Proposition 4.2. However, one needs to take care of the fact that $e^{u(x,y)} > 0$ for a real valued function u(x, y). There are two places where this needs to be considered. The first is where we use condition (i) in Proposition 4.2 to put (Q, P) in the form (4.6). To write $\mathrm{Tr}(PQ) = e^{u(x,y)}$, we require that Tr(PQ) > 0. Assume that this can be done at a point $(x_0, y_0)$ (if not, change coordinates y → −y) and restrict the domain of u to a neighbourhood of this point where the positivity still holds.
The second place where the problem of the sign arises is when we use the coordinate symmetry to transform to the Tzitzéica equation (3.16). This can only be done for r(x)b(y) > 0. The sign of r(x)b(y) is governed by the quantity $\mathrm{Tr}((D_xP)^2(D_yQ)^2)$ in condition (ii). To see it, note that, in the notation of (4.16), after we set t = 1 the condition (iii) implies that $k = e^u > 0$. Hence the sign of sr, and thus the sign of r(x)b(y), is the same as the sign of $\mathrm{Tr}((D_xP)^2(D_yQ)^2)$. However, this cannot be changed by a real coordinate transformation $x \to \hat x(x)$, $y \to \hat y(y)$, as follows from the transformation properties of these quantities, where we have used $Q^2 = 0 = P^2$. Therefore, condition (ii) in Proposition 4.2 needs to be replaced by $\mathrm{Tr}((D_zP)^2) = 0 = \mathrm{Tr}((D_{\tilde z}Q)^2)$ and $\mathrm{Tr}((D_zP)^2(D_{\tilde z}Q)^2) > 0$ in the domain of u.
We remark that $\mathrm{Tr}((D_xP)^2(D_yQ)^2) < 0$ corresponds to the equation with the opposite relative sign of the exponential terms, whereas $\mathrm{Tr}((D_xP)^2(D_yQ)^2) = 0$ yields the Liouville equation. Therefore, the sign of $\mathrm{Tr}((D_xP)^2(D_yQ)^2)$ corresponds to the sign of ε in (3.17).
Z_3 two dimensional Toda chain
As a byproduct of the proof of Proposition 4.2, we find that, dropping condition (iii) in this proposition, the Hitchin equations can be reduced to a coupled system which includes the $Z_3$ two dimensional Toda chain [24] as a special case. Recall that a two dimensional Toda chain is given by
$$(u_\alpha)_{xy} = e^{u_{\alpha+1}-u_\alpha} - e^{u_\alpha - u_{\alpha-1}}, \qquad (5.1)$$
where α ∈ Z. In this paper (5.1) is called the $Z_3$ two dimensional Toda chain when i) α ∈ Z/3Z and ii) $u_1 + u_2 + u_3 = 0$. We summarise the result in the following proposition.
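Concretely (a direct substitution, using the form of (5.1) written above), imposing the $Z_3$ periodicity $u_{\alpha+3} = u_\alpha$ and eliminating $u_3 = -u_1 - u_2$ leaves two coupled equations for $(u_1, u_2)$:
$$(u_1)_{xy} = e^{u_2-u_1} - e^{2u_1+u_2}, \qquad (u_2)_{xy} = e^{-u_1-2u_2} - e^{u_2-u_1},$$
which is the shape of the system obtained from the Hitchin equations in the proof below, up to the functions r(x), b(y), c(y) that the coordinate freedom can normalise.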
Proposition 5.1. Let $u_1$, $u_2$ be functions of (x, y). The coupled system of equations (5.2) arises from the Hitchin equations subject to the first two conditions of Proposition 4.2.

Proof. These conditions are the first two conditions in Proposition 4.2.
Following the proof and assuming condition (i) gives (4.13). However, now it is not possible to use the gauge symmetry to set the diagonal elements of both $A_x$ and $A_y$ to be the same as in (4.7) without the compatibility condition. Instead, let us use only the gauge transformation (4.14) to eliminate the diagonal elements of $A_y$, by choosing $(\ln a)_y = -p$.
As before, condition (ii) implies that m = h = 0 and sktr ≠ 0. The Hitchin equations (4.2a, b, c) imply that t is a function of x only. Hence, we can use the residual gauge freedom (4.14) with a = a(x) to set t = 1. Equation (4.2c) then gives the system (5.7). Set $u_1 = \alpha$, $u_2 = -2\alpha + u$, and change the coordinate y → −y. The system (5.7) becomes
$$(u_1)_{xy} - r(x)b(y)e^{u_2-u_1} + e^{2u_1+u_2} = 0, \qquad (u_2)_{xy} + r(x)b(y)e^{u_2-u_1} - c(y)b^{-1}(y)e^{-2u_2-u_1} = 0,$$
which can be transformed into (5.2) by a change of dependent variables and coordinates. There are four distinct cases depending on the signs of $\epsilon_1$, $\epsilon_2$. Since the coordinates are real, the signs of $\epsilon_1$, $\epsilon_2$ are the same as those of r(x)b(y) and $c(y)b^{-1}(y)$, respectively. Similarly to the real version of Proposition 4.2 for the Tzitzéica equation, r(x)b(y) and $c(y)b^{-1}(y)$ can be related to some gauge invariant quantities. It can be shown that at a given point $(x_0, y_0)$ the signs of r(x)b(y) and $c(y)b^{-1}(y)$ are determined by the signs of $(a) := \mathrm{Tr}((D_xP)^2(D_yQ)^2)$ and of a second gauge invariant quantity (b). We shall analyse these signs and then restrict the domains of $(u_1, u_2)$ to a neighbourhood of $(x_0, y_0)$ where the signs remain constant. If (a) > 0, setting t = 1 gives skr > 0, which gives r(x)c(y) > 0. This implies that r(x)b(y) and $c(y)b^{-1}(y)$ have the same signs. Now if (b) > 0, then k > 0, meaning $c(y)b^{-1}(y) > 0$, hence r(x)b(y) > 0. Similarly, if (b) < 0 then $c(y)b^{-1}(y) < 0$ and r(x)b(y) < 0. On the other hand, (a) < 0 implies that r(x)b(y) and $c(y)b^{-1}(y)$ have opposite signs. Then the sign of (b) determines the sign of $c(y)b^{-1}(y)$. The important point is that the signs of (a) and (b) cannot be changed by real coordinate transformations. This completes the proof.
6 Other gauges
There are several gauge inequivalent ways to reduce the ASDYM equations to the Tzitzéica equation or to the definite affine sphere equation. The reductions are relatively easy to obtain, but their gauge invariant characterisation requires much more work. Here we shall mention one other possibility which is not gauge equivalent to (4.6), (4.7).
It can be shown that the holomorphic Tzitzéica equation (4.8) also arises from the ansatz (6.1). The real version of this ansatz was implicitly used by E. Wang [31]. Let us comment on how this formulation is related to (4.6), (4.7). First note that the Lax pairs (4.5) with (4.6), (4.7) and with (6.1) are equal for λ = 1. Now consider the ansatz (4.6), (4.7) and set λ = 1 in the Lax pair (4.5). Introduce a new spectral parameter $\hat\lambda$ by exploiting the Lorentz symmetry and rescaling the coordinates, and read off the new $A_z$, $A_{\tilde z}$, P, Q from (4.5) with λ replaced by $\hat\lambda$. This yields the ansatz (6.1).
Choosing the Euclidean reality conditions and reducing the gauge group to SU(2, 1), we find another reduction of ASDYM to the affine sphere equation. Take the following ansatz, in which the gauge fields are independent of w and $\bar w$, ψ = ψ(z, z̄) is a real function, and U(z, z̄) is a complex function. Recall that $A_w = Q$ and $A_{\bar w} = -P$.
Semi-Flat Calabi-Yau metric
In this section we consider the semi-flat Calabi-Yau metric constructed by Loftin, Yau and Zaslow, and obtain a local expression for the metric explicitly in terms of a solution of the definite affine sphere equation. Let us first recall the Simon-Wang approach to affine spheres [27]. Consider the parametrisation f of an elliptic affine sphere. The structure equations defining the affine sphere can be written as a linear first order system (7.1) of PDEs in f, $f_z$ and $f_{\bar z}$. From this system one calculates $\phi_{jk}$, and thus the metric on the fibre is
$$\phi_{jk}\,dy^j dy^k = (p_j p_k + e^{\psi} q_j\bar q_k)\,dy^j dy^k.$$
Now let us introduce new coordinates $\tau := p_i y^i$, $\xi := q_i y^i$, $\bar\xi := \bar q_i y^i$, and write $p_i\,dy^i = d\tau - y^i\,dp_i$ etc. Denote the two matrices of coefficients in the linear system (7.1) by $-\mathcal{A}^{(z)}$ and $-\mathcal{A}^{(\bar z)}$ respectively. Then, by considering the corresponding equation for $N^{-1}$, the one-forms $y^i dp_i$, $y^i dq_i$, $y^i d\bar q_i$ can be written in terms of the coordinates τ, ξ, $\bar\xi$ and the components of $\mathcal{A}^{(z)}$ and $\mathcal{A}^{(\bar z)}$, which are known in terms of ψ. Finally, we can write the metric (1.3) as
$$g = dr^2 + r^2 e^{\psi}|dz|^2 + |d\tau + \alpha|^2 + e^{\psi}|d\xi + \beta|^2, \qquad (1.7)$$
where
$$\alpha = -\frac{1}{2}e^{\psi}(\bar\xi\,dz + \xi\,d\bar z), \qquad \beta = (\tau + \xi\psi_z)\,dz + e^{-\psi}\bar U\bar\xi\,d\bar z.$$
Using the relation between the metric, the Kähler form and the complex structure, we find the holomorphic basis $\{e^1, e^2, e^3\}$ of (1.8) and write g and ω as in Proposition 1.2, where we have introduced a complex coordinate w = r + iτ.
This can be understood geometrically, as $e^{\psi}dz\,d\bar z$ and $U\,dz^3$ are the affine metric and the cubic differential respectively of the affine sphere. The metric (1.7) is invariant under the above transformations, together with $\xi \to \hat\xi = e^{j(z)}\xi$.
Remark 2. One expects the linear system associated with the structure equations of affine spheres (7.1) to be equivalent to the Hitchin Lax pair (4.5) giving rise to the affine sphere equation. The matrices $\mathcal{A}^{(z)}$ and $\mathcal{A}^{(\bar z)}$ in (7.1) are unique up to gauge transformations. If they agree with the Hitchin Lax pair for some value of λ, then it follows that $(A_z, A_{\bar z}, Q, P)$ will satisfy the Hitchin equations (4.2a, b, c) with reality condition $\tilde z = \bar z$. Conversely, given a solution $(A_z, A_{\bar z}, Q, P)$ to the Hitchin equations, we should be able to find a value of the spectral parameter λ such that $(A_z + \lambda P)$ and $(A_{\bar z} + \lambda^{-1}Q)$ can be gauge transformed to $\mathcal{A}^{(z)}$ and $\mathcal{A}^{(\bar z)}$ respectively. For example, we can obtain $\mathcal{A}^{(z)}$ and $\mathcal{A}^{(\bar z)}$ in (7.1) from the ansatz (4.19), with $\tilde z = \bar z$ and $\tilde U = \bar U$, by a gauge transformation, choosing the value of the spectral parameter in (7.3) to be λ = 1. Note that we need not have det g = 1, since $\mathcal{A}^{(z)}$ and $\mathcal{A}^{(\bar z)}$ are not traceless.
Painlevé III
One of the main results of Loftin, Yau and Zaslow [20] is the existence of radially symmetric solutions of the affine sphere equation (1.4) for $U(z) = z^{-2}$, with prescribed behaviour near the singularity z = 0. In this section we shall show that the radially symmetric solutions of (1.4) are Painlevé III transcendents.
In the classification of Okamoto [26] it falls into the type D7.
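For orientation, the generic Painlevé III equation for w(s) reads (standard form; the D7 case of [26] corresponds to special, degenerate values of the constants α, β, γ, δ, as in (8.1)):
$$\frac{d^2w}{ds^2} = \frac{1}{w}\left(\frac{dw}{ds}\right)^{2} - \frac{1}{s}\frac{dw}{ds} + \frac{\alpha w^2 + \beta}{s} + \gamma w^3 + \frac{\delta}{w}.$$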
• One can consider the radial symmetry reduction of the affine sphere equation (1.4) with $U = z^{-n}$ for general n ∈ Z; the resulting ODE is again related to Painlevé III, but with a choice of parameters which is not real. There are Bäcklund transformations leading to new solutions, but they change the values of the parameters. This shows that the desired radial solution to the affine sphere equation (1.4) is transcendental. In [17, 3] it has been shown that the radial solutions of the Tzitzéica equation (3.16) also satisfy Painlevé III of type D7.
The calculation leading to Painlevé III (8.1) implies that if we gauge transform the ansatz (4.19) with $U(z) = z^{-2}$, $\tilde U(\tilde z) = \tilde z^{-2}$ into an invariant gauge and substitute it into (8.5), then in the new coordinate $s = \rho^{1/2}$ the system (8.5) becomes the Lax pair of Painlevé III with the special values of parameters (8.1). We shall now present this calculation. An invariant gauge of (4.19) can be obtained using the gauge transformation with
$$g = \begin{pmatrix} e^{i\theta/3} & 0 & 0 \\ 0 & e^{-2i\theta/3} & 0 \\ 0 & 0 & e^{i\theta/3} \end{pmatrix}.$$

Footnote 6. The spectral parameter λ is not constant along the lift of the generators (8.4) to $\mathbb{C}^4 \times \mathbb{CP}^1 \ni (w, \tilde w, z, \tilde z, \lambda)$, where Ψ is defined. However, the invariant spectral parameter ζ is constant along the lift, and hence we are allowed to express Ψ as a function of ρ and ζ only.
Role of α-Tocopherol Acetate on Nasal Respiratory Functions: Mucociliary Clearance and Rhinomanometric Evaluations in Primary Atrophic Rhinitis
Primary atrophic rhinitis is a disease of the nose and of the paranasal sinuses characterized by a progressive loss of function of the nasal and paranasal mucosa, caused by a gradual destruction of the ciliary mucosal epithelium with atrophy of the serous-mucous glands and loss of bone structures. The aim of this study was to evaluate the therapeutic effects of topical α-tocopherol acetate (vitamin E) in patients with primary atrophic rhinitis, based on subjective and objective data. We analyzed 44 patients with dry nose sensation and endoscopic evidence of atrophic nasal mucosa. We analyzed endoscopic mucosa score, anterior rhinomanometry, and nasal mucociliary clearance before and after 6 months of topical treatment with α-tocopherol acetate. For statistical analysis, we used the paired samples t test (95% confidence interval [CI], P < .05) for rhinomanometric and mucociliary transit time evaluations and the one-way analysis of variance test (95% CI, P < .05) for the endoscopic evaluation. All patients showed an improvement in "dry nose" sensation and in perception of nasal airflow. Rhinomanometric examination showed an increase of nasal airflow at follow-up (P < .05); nasal mucociliary clearance showed a reduction in mean transit time (P < .05); and endoscopic evaluation showed significant improvement of hydration of the nasal mucosa and significantly decreased nasal crusts and mucus accumulation (P < .05). Medical treatment for primary atrophic rhinitis is not clearly documented in the literature; in this research, it was demonstrated that α-tocopherol acetate could be a possible treatment for atrophic rhinitis.
Introduction
The definition of dry nose involves several clinical conditions such as anterior dry rhinitis, primary atrophic rhinitis (PAR) and secondary atrophic rhinitis (SAR), and their complications like ozena and empty nose syndrome. 1 Atrophic rhinitis (AR) is a disease of the nose and paranasal sinuses of considerable clinical interest in otolaryngology. Fraenkel described it for the first time in 1876, but its etiopathogenesis is still debated nowadays. 1,2 According to its etiology, it is classified into PAR and SAR, and it is characterized by a progressive loss of function of the nasal and paranasal mucosa caused by the gradual destruction of the ciliary mucosal (respiratory) epithelium, by the atrophy of the exocrine serous-mucous glands, and by the loss of underlying bone structures. 1 Moreover, the disease involves metaplastic replacement by squamous epithelium and subsequent loss of mucociliary clearance. 1 This morphostructural damage of the nasal mucosa leads to clinical manifestations such as nasal congestion and paradoxical nasal respiratory obstruction, despite an increase in nasal spaces (paradoxical stuffy nose), and persistence of secretions; these conditions are mainly determined by the loss of nasal nerve sensitivity due to submucosal atrophy. 1,2 The diagnosis is clinical, based on subjective and objective findings: increased mucociliary clearance transit time, alterations in rhinomanometric values, nasal symptoms (congestion, dry nose, nasal respiratory obstruction, nasal crusts, mucus secretion, and hypo/anosmia), and epistaxis. 1,2 Several authors have investigated the role of α-tocopherol acetate, whose anti-inflammatory, immune, and antioxidant functions have been widely documented in the literature, with restoration of epithelium in skin, in oral and vulvovaginal mucosa, in gastric mucosa and in nasal mucosa. 3-14 Vitamin E acts as a cofactor for the binding of different enzymes in the oxidative cascade reaction: it prevents oxidation and destruction of membrane lipids, and it interacts with different cellular proteins that regulate the transcription and the expression of genes that code for cytokines and chemokines. 3,9,13 The aim of this study was to evaluate the effects of a therapeutic protocol with α-tocopherol acetate in patients with PAR without infection, based on subjective and objective measures.
Patients and Methods
From October 2017 to September 2018, we enrolled 44 patients (29 female and 15 male) aged between 34 and 70 years (mean age 57.2 years), with a clinical history and objective findings of PAR. Most of the patients reported hyposmia/anosmia associated with a sensation of dry nose. Informed consent was obtained from all individual participants included in the study. The research protocol was approved by the University Control Group; this study was conducted according to the World Medical Association Declaration of Helsinki. This is a retrospective observational study.
The inclusion criteria were clinical evidence of paradoxical nasal stuffiness sensation, hyposmia/anosmia, sensation of "dry nose," and endoscopic and computed tomography (CT) scan evidence of mucosal epithelium atrophy with abnormal expansion of the paranasal sinuses and nasal spaces. Patients with a history of previous nasal surgery, allergic chronic vasomotor rhinitis, chronic granulomatous disease, use of topical nasal drugs, diagnosis of Sjögren syndrome, prior radiotherapy of the head and neck, or complications of the disease such as septal perforation and crust infections were excluded. After a general ear, nose and throat examination, patients underwent a CT scan of the nose and paranasal sinuses, endoscopic rhinologic evaluation, rhinomanometry, and a nasal mucociliary clearance (NMC) test with charcoal and saccharine powder.
The endoscopic evaluation was always performed by the same specialist, since this score is a subjective measure.
Bilateral anterior rhinomanometry (evaluated at 150 Pascal drop pressure) was conducted in basal condition and 5 minutes after nasal decongestion with naphazoline 0.1% nasal spray (1 puff each nostril) in order to analyze nasal flow rates (cm³/s) and nasal resistances (Pa/cm³/s) using the ATMOS Rhino 31 (anterior measurements using an olive measuring probe). We obtained 8 groups of results before and after topical treatment with α-tocopherol acetate:
- nasal airflow basal before topical treatment (AFbasalT0);
- nasal airflow basal after topical treatment (AFbasalT1);
- nasal resistance basal before topical treatment (RbasalT0);
- nasal resistance basal after topical treatment (RbasalT1);
- nasal airflow after decongestant before topical treatment (AFdecongT0);
- nasal airflow after decongestant after topical treatment (AFdecongT1);
- nasal resistance after decongestant before topical treatment (RdecongT0);
- nasal resistance after decongestant after topical treatment (RdecongT1).
The NMC test was performed 2 hours before rhinomanometry using a mixture of charcoal and 3% saccharine powder, between 1 and 3 PM in order to eliminate the influence of circadian nasal rhythms. 15 Patients waited in a chair for 15 to 30 minutes to acclimatize to room temperature and humidity and to control for effects of mucosal decongestion due to exercise. The endpoint of this examination was detected by the perception of a sweet taste and the appearance of the dye at pharyngeal inspection (subjective and objective methods).
The treatment scheme used for this study was the nasal administration of pure α-tocopherol acetate, 2 puffs in each nostril, 3 times a day, for 6 months; the follow-up was performed at the end of medical treatment.
For statistical analysis, we used descriptive data, means, and standard deviations of each group of results for both rhinomanometric and NMC transit time values before and after topical treatment; we then compared means between different groups by means of the paired samples t test (95% CI, P < .05). For the endoscopic scores, we used the one-way analysis of variance test (95% CI, P < .05).
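A minimal sketch of these two comparisons in Python is given below, using simulated values in place of the study data (the per-patient measurements are not public, so the numbers here are placeholders matching the reported group means):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Paired comparison: NMC transit time (minutes) before vs after treatment,
# simulated around the means reported in the Results section.
nmc_before = rng.normal(31.5, 5.0, size=44)
nmc_after = rng.normal(21.6, 5.0, size=44)
t_stat, p_paired = stats.ttest_rel(nmc_before, nmc_after)
print(f"paired t test: t = {t_stat:.2f}, p = {p_paired:.4f}")

# One-way ANOVA across endoscopic score groups (placeholder groups).
crusting = rng.normal(2.4, 0.5, size=44)
hydration = rng.normal(1.1, 0.5, size=44)
mucus = rng.normal(0.7, 0.5, size=44)
f_stat, p_anova = stats.f_oneway(crusting, hydration, mucus)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
```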
Results
In all patients selected for treatment, we analyzed the endoscopic evaluation, nasal airflow and nasal resistance rates at basal and after-decongestant rhinomanometry, and NMC transit time (Tables 1-3).
At endoscopic examination we observed a greater hydration of the nasal mucosa than before topical treatment and a decrease of crusts and mucus accumulation (P < .05; Table 1). Before topical treatment with α-tocopherol acetate, we observed dry mucosa with severe crusting and moderate mucus accumulation; after topical treatment, the mucosa was wet with poor or absent crusting and mucus accumulation (Figures 1 and 2).
At rhinomanometric examination, the analysis of nasal airflows before and after medical treatment, both at basal and after-decongestant evaluation, demonstrated increased airflows at follow-up with statistical significance (P < .05), while nasal resistances did not show significant differences before and after topical treatment with α-tocopherol acetate (P > .05; Table 4).
The NMC test before topical nasal treatment showed a severely prolonged time (mean 31.52 minutes), whereas after topical medical treatment with α-tocopherol acetate the mean transit time was 21.55 minutes (prolonged transit time), with statistical significance (P < .05; Table 5).
All patients showed an improvement in "dry nose" sensation and in the perception of nasal airflow (apparent remission of the paradoxical stuffy nose sensation), with an improvement of hyposmia/anosmia.
Discussion
The walls of the nose and paranasal sinuses are lined by nasal mucosa providing several functions, such as heating, humidifying and purifying inspired air, and nonspecific and specific defence against environmental pathogens, among others. Nasal mucosa functions are regulated by several factors, like the nervous system and sex hormones, acting in different ways during life. 16 Dysregulation or dysfunction of these mechanisms leads to several clinical conditions characterized by dry nose, such as PAR, SAR, and their complications. 1,2 Primary chronic AR is a clinical condition with a higher prevalence in women after puberty, associated with hereditary factors, endocrine imbalances, racial factors, nutritional deficiencies such as lack of vitamin A or D or iron, and autoimmune disorders. 16 The diagnosis is clinical, while CT scan is indicated when signs of chronic rhinosinusitis are found or to obtain additional evidence of PAR. 1,2 Treatment for PAR is not well defined, and it is often empirical. 17,18 Saline washes are considered the first choice by several authors, to promote the cleaning of the nasal cavity and to remove secretions and crusts, which could provoke secondary infections. 17,18 Other therapeutic approaches propose the use of bicarbonate antiseptic solutions, in which sodium biborate acts as an antiseptic and antibacterial substance, sodium bicarbonate helps to dissolve the crusts, and sodium chloride makes the solution isotonic. 17,18 Glycerin drops or spray associated with glucose can be used because they allow lubrication of the nasal mucosa. 17,18 The glucose fermentation acidifies the pH and hinders bacterial growth. 17,18 Bacterial superinfections are treated with specific antibiotics, such as rifampicin 600 mg daily for 12 weeks, or ciprofloxacin 500 to 750 mg for 8 weeks. 19 Surgical treatments, instead, provide for the partial or complete closure of the nostrils with autologous or synthetic implants. 19 Other alternative treatments described in the literature propose the use of liposuctioned fat with autologous platelet-rich plasma, subcutaneous fat, cancellous bone, autologous bone marrow grafts, or grafts of placenta or adipose tissue. 20 In our previous study, we found a decreased healing time in elderly patients affected by chronic rhinosinusitis after endoscopic sinus surgery treated with topical nasal α-tocopherol acetate for 3 months. 14 In consideration of our previous research, 14 the treatment scheme used for the present study was the nasal administration of α-tocopherol acetate, 2 puffs in each nostril, 3 times a day, for 6 months. The follow-up was performed after 6 months of medical treatment and consisted of the endoscopic rhinologic examination, basal and after-decongestant rhinomanometry, and the NMC transit time test.
All patients showed an improvement in nasal respiratory function, with a better response to rhinomanometric tests after treatment (increased nasal airflow). Nasal resistances, in line with the literature, did not show significant differences before and after the medical treatment proposed in this research, nor after decongestant, possibly due to vascular depletion in AR and to the duration of treatment. 18 Furthermore, the NMC test after treatment showed a reduction of the mean transit time to near the normal transit time (above 20 minutes but less than 31 minutes). The duration of NMC in normal individuals is up to 20 minutes; it is prolonged if it is 21 to 31 minutes, and it is considered severely or grossly prolonged if it is 31 to 60 minutes or over 60 minutes, respectively. 21 The results obtained suggest the use of α-tocopherol acetate in PAR; this study investigated an aspect poorly explored scientifically: what is the most correct strategy in the pharmacological treatment of PAR? Moreover, we also want to give an impulse to scientific studies on the effect of vitamin E, and specifically of α-tocopherol acetate, on nasal trophism.
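The NMC grading just described can be written as a small helper (a sketch; the threshold labels follow reference 21 as quoted above, with boundary values assigned to the milder grade):

```python
def grade_nmc(minutes: float) -> str:
    """Grade a nasal mucociliary clearance transit time in minutes."""
    if minutes <= 20:
        return "normal"
    if minutes <= 31:
        return "prolonged"
    if minutes <= 60:
        return "severely prolonged"
    return "grossly prolonged"

# Mean transit times reported in this study:
print(grade_nmc(31.52))  # pre-treatment  -> "severely prolonged"
print(grade_nmc(21.55))  # post-treatment -> "prolonged"
```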
PAR and ozena are often used to indicate the same clinical condition, even though it is important to distinguish them, because in the latter a bacterial infection is present. Medical treatment for PAR is not clearly documented in the literature in terms of follow-up, clinical evaluation with subjective (symptoms) and objective methods (endoscopic evaluation, rhinomanometry, and NMC transit time), and medical treatment. In the present study, we showed relevant results in nasal functions after topical medical treatment with α-tocopherol acetate (vitamin E), and these results lay the foundation for further application of this molecule in sinonasal pathology.

Table note. Abbreviations: AFbasalT0, nasal airflow basal before topical treatment; AFbasalT1, nasal airflow basal after topical treatment; AFdecongT0, nasal airflow after decongestant before topical treatment; AFdecongT1, nasal airflow after decongestant after topical treatment; RdecongT0, nasal resistance after decongestant before topical treatment; RdecongT1, nasal resistance after decongestant after topical treatment; RbasalT0, nasal resistance basal before topical treatment; RbasalT1, nasal resistance basal after topical treatment; sig, significance. (a) AF in cm³/s; R in Pa/cm³/s. (b) P < .05.
Low rate of subsequent surgery and serious complications following intra-articular steroid injection for base of thumb osteoarthritis: national cohort analysis
Abstract. Objectives: Intra-articular steroid injection is commonly used to treat base of thumb osteoarthritis (BTOA), despite a lack of large-scale data on safety and effectiveness. We estimate the incidence of serious complications and further procedures following BTOA injection, including the risk of post-operative serious surgical site infection for subsequent operative intervention. Methods: Hospital Episode Statistics data linked to mortality records from 1 April 1998 to 31 March 2017 were used to identify all BTOA injections undertaken in adults in National Health Service secondary care in England. Patients were followed up longitudinally until death or 31 March 2017. A multivariable regression with a Fine and Gray model, adjusting for the competing risk of mortality in addition to age, sex and socioeconomic deprivation, was used to identify factors associated with progression to a further procedure. Secondary outcomes included serious complications after injection and subsequent surgical site infection. Results: A total of 19 120 primary injections were performed during the 19-year period in 18 356 patients. Of these, 76.5% were female; mean age was 62 years (s.d. 10.6); 50.48% underwent a further procedure and 22.40% underwent surgery. Median time to further intervention was 412 days (IQR 110–1945). Female sex was associated with an increased risk of proceeding to surgery. The serious complication rate following injection was 0.04% (0.01–0.08) within 90 days. Of those proceeding to surgery, 0.16% (0.06–0.34) presented with a wound infection within 30 days and 90 days, compared with an overall post-operative wound infection rate of 0.03% (0.02–0.05). Conclusions: Very low rates of serious complications were identified following BTOA injections performed in secondary care; only one in five patients proceeded to subsequent surgery. Clinical trial registration: clinicaltrials.gov, https://www.clinicaltrials.gov, NCT03573765.
Introduction
Base of thumb osteoarthritis (BTOA) is a common hand condition presenting to primary and secondary care physicians, characterized by pain and reduced function [1][2][3]. Early treatment options for BTOA include intraarticular steroid injection in addition to splinting and hand therapy [4,5]. Developing best evidence for hand arthritis is a research priority for patients with hand conditions in the UK [6].
Systematic reviews of available randomized control trials and case series noted that evidence of efficacy of intra-articular steroid injections for BTOA was limited and heterogeneous [7][8][9][10]. Smaller single-centre studies have estimated that following BTOA intra-articular steroid injection, only around one-third proceed to surgery [11].
Efficacy aside, research from a recent large US insurance dataset raised concerns that BTOA steroid injections predispose patients to a higher risk of postoperative complications [12]. However, previous studies in other areas of the body have found no evidence to support this finding [13,14].
Study into the long-term course of treatment and risk of complications within routine clinical care is therefore an important addition to the literature in order to better counsel patients. Observational research offers the opportunity to follow patients for longer after an intervention than clinical trials, and enables rare complications and complications that do not present within a short time frame to be better identified [15].
Objectives
Our primary aim was to estimate the incidence of further procedures after intra-articular steroid injection for BTOA in adults in the NHS in England. Secondary aims were to identify factors associated with proceeding to further intervention, especially surgery, serious complications and whether having a BTOA injection prior to surgery affected the risk of serious surgical site infection.
Data source
A bespoke pseudonymized extract of individual-level patient data from the NHS Digital Hospital Episode Statistics for Admitted Patient Care (HES APC) dataset was made (1 April 1998 to 31 March 2017). This extract contained all episodes of NHS care associated with BTOA, defined by a validated list of codes [16]. HES APC contains all admissions, including day-case care, for all individuals, and the extract contained all episodes before and after the 'index' BTOA episode. The extract contained all episodes of care remunerated by the NHS in England, including independent providers (i.e. private hospitals undertaking procedures on behalf of the NHS on NHS patients). All patient episodes of care within the NHS England system are linked via a patient's individual NHS number. This enabled linkage of all NHS-funded treatments undertaken and longitudinal follow-up of each patient. The HES APC extract was also linked to the ONS national mortality dataset prior to pseudonymization to identify cause and date of death [17]. The NHS covers the vast majority of health-care provision in England, with only 11% of the population estimated to hold private health-care insurance, and only 13% of all elective surgery being privately funded outside the NHS [18].
Ethical approval
This study was approved by the University of Oxford Research Services Clinical Trials Research Group (project ID 12787), and the NHS Data Access Advisory Group (DAAG). It was carried out in accordance with the NHS Digital Data Sharing Agreement (DARS-NIC-29827-Q8Z7Q) and registered at clinicaltrials.gov (NCT03573765). Studies using non-identifiable records from Hospital Episode Statistics are exempt from research ethics committee approval. Patients have the right to request that their data are not released by NHS Digital for use by researchers (register a 'Type 2 opt-out').
Population
Patients identified as having a BTOA injection were followed up until death, or censored at the end of the study (31 March 2017) in order to maximize the longitudinal follow-up possible within the dataset. Minimum follow-up was 1 day, to capture all complications including those occurring within the first 24 h post-operatively. Duplicate episodes can occur over the change of financial year, and these were removed.
Exposures and outcomes were defined using previously validated OPCS-4.7 classification for interventions and International Classification of Disease (ICD) version 10 codes for disease (Supplementary Table S1, available at Rheumatology online), defined in an initial validation study for identification of all cases of BTOA in secondary care [19][20][21].
Two further clinical validation studies were undertaken within our institution to look at the patient population defined within the injection cohort, and the validity of identifying surgical subtypes in HES APC. Discussions were undertaken with clinical coders and NHS Digital, and a sample of over 300 patients undergoing injection or surgery within 1 year was reviewed. The injection cohort was confirmed to include patients undergoing injection in theatre, in specialist outpatient injection clinics run by rheumatologists and hand surgeons, and those undergoing injection in the radiology department as an outpatient procedure. The injection validation study showed we were able to identify patients who had undergone a BTOA intra-articular injection with a positive predictive value of 85.8% using our previously validated code list (Supplementary Table S1, available at Rheumatology online). In the second clinical validation study, the coding for BTOA surgical subtypes had a positive predictive value of 99% in our Trust within a year's sample of 104 patients undergoing BTOA surgery, and therefore our code list was considered appropriate.
In order to further characterize the population included, factors associated with the development of BTOA were identified using OPCS and ICD-10 codes (Supplementary Table S2, available at Rheumatology online). A past medical history of carpal tunnel syndrome, generalized osteoarthritis, knee osteoarthritis, rheumatoid arthritis, hand or wrist fracture, and oophorectomy was identified if the patient had an episode including the relevant code at any time prior to or within the hospital episode for BTOA injection. To determine socio-economic status, the Index of Multiple Deprivation (a Government-generated score of relative deprivation based on geographical location within England) was used, and the Charlson Comorbidity Index was used to determine the overall combined comorbidity level of each patient at the time of injection or surgery. Ethnicity was included as defined by NHS Digital [22][23][24].
A further procedure undertaken in secondary care was defined as a code for surgery after injection or a second injection in the same hand when calculating incidence rates, survival and regression analysis. In order to identify the 'worst-case scenario' of possible patients who may go on to a procedure but have missing laterality codes, a further definition of three or more procedures per person was also included when calculating an estimate of the number of people requiring a further procedure.
To estimate the number of cases proceeding to surgery after injection, surgery was defined as any episode containing the OPCS and ICD codes in Supplementary Table S1, available at Rheumatology online, that occurred after injection. Laterality linked injection and surgery was again used in survival analysis and surgical intervention rates.
Serious complications after primary intra-articular injection, as identified in hospital admission records, were defined as severe infection (septic arthritis, wound infection leading to wound dehiscence or wound debridement) and tendon injury. As these complications were identified from HES APC, they required an episode of hospital admission (including day-case admission) or surgery, occurring in the same hand as the injection within 30 or 90 days of injection (see Supplementary Table S3, available at Rheumatology online). Surgical site infection after BTOA surgery for those who had a pre-operative intra-articular steroid injection was defined in the same manner, and compared with the rates seen in all post-operative BTOA surgery within the HES extract. The NHS framework of complications within 30 and 90 days was used to determine the comparative incidence rate [25]. All results with a count of <7 were redacted to reduce the risk of secondary disclosure of data according to the NHS Digital analysis guide [26]. Cox proportional hazard analysis of the factors associated with post-operative complications was planned a priori.
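The small-number suppression rule is mechanical enough to automate. Below is a minimal Python sketch of the idea (illustrative only, not the study's analysis code; the column names are hypothetical):

import pandas as pd

# Hypothetical complication counts per outcome category
results = pd.DataFrame({
    "outcome": ["septic arthritis", "tendon injury", "wound infection"],
    "count": [3, 5, 12],
})

def redact_small_counts(df, count_col="count", threshold=7):
    """Return a copy in which counts below `threshold` are masked as '<7',
    mirroring the NHS Digital small-number suppression rule."""
    out = df.copy()
    out[count_col] = out[count_col].apply(
        lambda n: f"<{threshold}" if n < threshold else str(n)
    )
    return out

print(redact_small_counts(results))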
Statistical methods
We calculated age- and sex-specific incidence rates of surgery using ONS mid-year population estimates [27]. All complications were calculated as a proportion of the sample with 95% confidence intervals (CI). Incomplete records comprised only 0.74% of cases for age, sex, ethnicity and Index of Multiple Deprivation deciles, and were assumed to be missing at random. We therefore did not employ any imputation, but undertook complete case analysis. A laterality code was present in 93.8%; comparison of the demographics of those with and without laterality present within their records demonstrated that the patients were comparable (Supplementary Table S4, available at Rheumatology online).
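As a rough illustration of these rate calculations, the following Python sketch computes an incidence rate per 1000 person-years with a delta-method confidence interval under the Poisson assumption. The inputs echo the surgery figures reported in the Results, with person-years back-calculated from the reported rate; the exact variance formula used in Stata may differ slightly.

import math

def incidence_rate_ci(events, person_years, per=1000, z=1.96):
    """Incidence rate per `per` person-years with a delta-method CI,
    assuming the event count is Poisson distributed (var = events)."""
    rate = events / person_years * per
    se = math.sqrt(events) / person_years * per  # delta method on the count
    return rate, rate - z * se, rate + z * se

# Illustrative: 4282 surgeries after injection (see Results);
# person-years back-calculated from the reported 22.3/1000 rate.
rate, lo, hi = incidence_rate_ci(4282, 4282 / 22.3 * 1000)
print(f"{rate:.1f} per 1000 person-years (95% CI {lo:.2f}-{hi:.2f})")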
Kaplan-Meier analysis was undertaken to identify the trend in time to further intervention or surgery. We identified factors associated with further intervention using a Fine and Gray model to produce both crude and adjusted sub-hazard ratios (sHR) accounting for the competing risk of mortality [28]. The proportional hazards assumption was tested using Schoenfeld residuals. Age was categorized, and the category containing the median age (60-69 years) was used as the baseline category due to the non-linear relationship of age with adverse outcome, which did not meet the proportional hazards assumption. Statistical analysis was undertaken using Stata version 15.1. A Poisson distribution was assumed and the delta method was used to calculate confidence intervals for complications.
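Fine and Gray regression itself is most commonly fitted with the crr function of the R package cmprsk; a non-parametric Python counterpart, the Aalen-Johansen cumulative incidence under competing risks, can be sketched with the lifelines library. The snippet below uses synthetic data purely for illustration (event code 1 = further intervention, 2 = death, 0 = censored), not the study data:

import numpy as np
from lifelines import AalenJohansenFitter, KaplanMeierFitter

rng = np.random.default_rng(0)
n = 1000
durations = rng.exponential(1000, size=n)                   # days to event/censoring
events = rng.choice([0, 1, 2], size=n, p=[0.4, 0.5, 0.1])   # 0=censored, 1=intervention, 2=death

# Naive Kaplan-Meier treats death as censoring and overstates incidence...
kmf = KaplanMeierFitter().fit(durations, event_observed=(events == 1))
print(kmf.median_survival_time_)

# ...whereas Aalen-Johansen estimates the cumulative incidence of the
# event of interest while accounting for the competing risk of death.
ajf = AalenJohansenFitter()
ajf.fit(durations, events, event_of_interest=1)
print(ajf.cumulative_density_.tail())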
Patient demographics
A total of 19 120 primary injections were performed during the 19-year period in 18 356 patients. Of these patients, 128 (0.67%) had <30 days of follow-up without an event due to having their primary injection during March 2017; 351 (1.8%) patients had <90 days of follow-up without an event, i.e. due to having their primary injection between January and March 2017. A total of 76.5% of patients were female, and the mean age at injection was 62 years (s.d. 10.6). A peak of intervention was observed in women around the peri-menopausal age that was not as prominent in men (Supplementary Fig. S1, available at Rheumatology online). In all, 64.7% of patients had a low level of overall comorbidity, with a Charlson Comorbidity Index of zero or one. A total of 83% of patients identified themselves as being of a white background, and the socio-demographic distribution of patients undergoing primary BTOA injection was roughly even across the strata. The full demographic profile of patients undergoing primary BTOA injection, further intervention and surgical intervention is shown in Table 1.
Trends in further intervention
In total, 9651 further interventions were identified after primary BTOA injection in 6461 individuals. The median time to second procedure was 412 days (IQR 110-1945), with an incidence rate of 66.7 per 1000 person-years (95% CI: 65.06, 68.41). A total of 4282 surgeries were undertaken after an injection at any point giving an incidence rate of 22.3 per 1000 person-years (95% CI: 21.51, 23.19). Kaplan-Meier analysis of time to further intervention and surgery is given in Figs. 2a and 2b. The sunburst plot in Fig. 3 illustrates the treatment paths taken in secondary care following primary BTOA injection. The central ring represents all primary intra-articular injections in the cohort, and the outer ring represents the number and type of subsequent interventions undertaken during the follow-up period. The central ring shows that 49.5% of primary injections had no subsequent intervention observed. Of the 50.5% of patients with primary injections who were observed to undergo a further intervention, 28.1% underwent a second intra-articular injection, with simple trapeziectomy being the most common surgical procedure undertaken following BTOA injection.
Factors associated with further intervention after injection
Crude univariable analysis suggested an association of increased incidence of further intervention with female sex. This was not confirmed in multivariable analysis (Fig. 4) when adjusting for age, comorbidity and socio-economic status. Compared with those in the median age category, patients who were at the extremes of the age range at the time of primary injection had a reduced risk of further intervention that persisted in adjusted analysis [adjusted sHR 0.30 (0.13-0.68) for those age 18-29 years, adjusted sHR 0.44 (0.33-0.59) for those age 30-39 years; Supplementary Table S5, available at Rheumatology online]. Increasing levels of comorbidity were associated with reduced incidence of further intervention and there was no association seen between further intervention and socio-economic status.
When considering the factors associated with proceeding to surgery after injection, female sex was associated with a 12% increased relative risk within multivariable analysis [adjusted sHR 1.12 (1.02-1.23); Fig. 5; Supplementary Table S6, available at Rheumatology online]. As was seen with all further intervention, there was a reduced likelihood of progressing to surgery at the extremes of age, and with increasing comorbidity. No association was found with socio-economic status.
Complications after injection
There were a very small number of cases identified as complicated by septic arthritis, neurovascular injury, need for wound debridement or tendon repair after primary injection in secondary care. As all absolute numbers were under 7, the exact counts must be redacted; this gives a maximum serious complication rate of 0.04% (95% CI: 0.01, 0.08) within 90 days of injection.
Complications after subsequent surgery
In the 4282 thumbs that underwent intra-articular injection in secondary care prior to undergoing surgery, <7 cases presented with serious surgical site infection within 30 or 90 days. The exact count must therefore be redacted, but this gives a maximum rate of surgical site infection of 0.16% (95% CI: 0.06, 0.34) within 30 and 90 days, compared with an overall post-operative wound infection rate of 0.03% (0.02, 0.05).
Key findings
This large national cohort study observed 50% of primary intra-articular BTOA injections in secondary care proceeded to further intervention. The most common further procedure was a repeat injection. One in five patients in the cohort went on to have surgery at a median time of 412 days following injection. Patients at the extremes of age and with greater levels of comorbidity were observed to be less likely to undergo further injection or progress to surgery. When adjusted for age, social deprivation and comorbidity, female sex was observed to be associated with increased risk of progression to surgery. A very low rate of complications was seen in secondary care following injection, with <4 in 10 000 patients needing hospital treatment for severe infection, neurovascular injury or tendon injury. Although a higher incidence of surgical site infection was seen if patients underwent a pre-operative intra-articular injection at any time prior to surgery, the incidence of serious infection remained below 2 in 1000 for complications within 90 days of surgery.
This study adds to the literature surrounding the incidence of serious complications following intra-articular steroid injections in the hand. Our data are in stark contrast to the rate of post-surgical complications found in US data containing insured and Medicare (state-assisted health care) patients, where 21% of patients sustained any form of complication after BTOA surgery [12]. In their study, undergoing steroid injection prior to surgery increased the odds of a complication by 20%, although no absolute rate of complications was reported for those patients who had pre-operative intra-articular injections. Giladi et al. used a wider definition for infection (including any diagnosis of infection or prescription of antibiotics within 6 weeks of surgery), which may explain some of the disparity, since only the most serious complications seen in secondary care are included in our study. A difference in studied populations within a different health-care system may also have an impact.
Strengths and limitations
Our study contains data from a national source with longitudinal follow-up, enabling patients to be followed within a nationalized health system even if they move to a different health-care provider for subsequent procedures. It includes all patients presenting to a public health-care system, covering a wide age range and a range of associated comorbidities and levels of social deprivation. As England in general has a low rate of intervention undertaken in the private sector, these NHS data will capture the majority of health-care activity, producing more generalizable results for the role of interventions for BTOA in a general population. These data identify trends in patients who may be excluded from clinical trials, and observe patients in health care outside trial centres and health-care settings engaged in research. Results found in our study align with previous work identifying positive responses to intra-articular steroid injection for hand osteoarthritis without serious complications [29,30].
This study is limited to interactions within secondary care only, where patients have an injection under radiological guidance, in an injection clinic, or in theatre. The patients included are therefore only those undergoing primary injection that is registered within this system. Our validation studies showed that the HES APC dataset includes patients undergoing intra-articular injections under radiological guidance or on a specific injection list, but will not include those being undertaken in traditional secondary care outpatient clinic settings or that had occurred previously in primary care or in interface services. This produces a selection bias in the patients included, but also indicates that patients included here are those who have been referred to secondary care. As the data does not link to primary care, we cannot fully record treatment that has gone before, and this is acknowledged as a limitation.
Similarly, our study only reports complications that are sufficiently severe to present in secondary care, and will not include, for example, minor infections treated with oral antibiotics in primary care. Our data define the risk of serious surgical site infection or significant tendon injury requiring intervention, and can inform the consent process regarding the most serious and most clinically important complications. It must be recognized that as the study is based within secondary care alone, only complications that require an inpatient admission or intervention will therefore be included. Whilst patients could present to any secondary care provider within the NHS in England and this would be detected by linkage through their individual identifier, presentations to primary care are not included. We believe that the low rates identified here are not due to misclassification bias or underreporting, but more that they only include the most serious events. Whilst this study adds to the literature by identifying the rates of the most serious complications within a national cohort, further work is needed to identify other complications that would not require admission or further intervention, for example within primary care observational datasets.
Information regarding a patient's comorbidities in this study is only collected from HES and therefore may not be as rich as in primary care datasets, but it is likely to capture the most pertinent comorbidities affecting outcome from secondary care intervention. HES APC also does not collect data on the use of orthoses, thus we cannot compare the use of adjuvant splints in this population alongside intra-articular injection, which should be recognized as a limitation. However, because HES APC is an administrative dataset repurposed to enable research, we have undertaken validation studies in order to minimize misclassification of cases. HES APC data have the significant advantage of preventing inclusion bias, as data are collected outside the main research team.
Future work
Further work is needed to describe the rate of minor infective complications and side effects that would not produce an admitted patient care episode, identifying the rate of complications seen in primary and intermediate care in routinely collected data. Similarly, a large cohort of patients providing additional data on complications such as steroid flare and skin depigmentation following injection, and on the use of orthoses in secondary care, would provide a comprehensive picture of the role of intra-articular injection for BTOA. Replication in other countries would determine whether similar secondary care trends are also seen outside a national health-care system. This study only investigates intra-articular injections overall; as the NHS only routinely undertakes intra-articular steroid injections, it does not compare with other agents such as hyaluronic acid that are not routinely used in the NHS. A great deal of prior scientific work has focused on comparing the efficacy of the two injections within clinical trials, and future work could compare their efficacy within routine clinical care if both agents are used in one health-care system [31][32][33][34]. This study also does not compare between radiologically guided or blind injections, or compare patient-reported outcomes following injection or surgery, and this could be further investigated. Finally, this study found that progression to surgery was more common in women, and further investigation into the reasons for the difference in disease progression to surgery between the sexes would enable greater understanding of the factors associated with BTOA disease progression.

The views expressed are those of the authors and do not necessarily reflect those of the Clinician Scientist Award programme, NIHR, NHS or the Department of Health.
Disclosure statement: All authors have completed an ICMJE conflict of interest form that is uploaded with the study (http://www.icmje.org/conflicts-of-interest/) and declare: no support from any organization for the submitted work; D.P.-A. has received research grants from Amgen, Servier, UCB; departmental fees for speaker services from Amgen, departmental fees for consultancy from UCB. J.L. reports grants from the Medical Research Council (MR/K501256/1) and Versus Arthritis (21605), during the submitted work. B.F.D. reports grants from a BMA research grant, during the conduct of the study. The other authors have declared no conflicts of interest.
Data availability statement
The data underlying this article were provided by NHS Digital in accordance with the NHS Digital Data Sharing Agreement (DARS-NIC-29827-Q8Z7Q). No further data can be made available from the authors due to NHS Digital restrictions. Data extracts can be applied for directly via the NHS Digital data access request service (https://digital.nhs.uk/services/data-access-request-ser vice-dars).
Probing the hidden atomic gas in Class I jets with SOFIA
We present SOFIA/FIFI-LS observations of five prototypical, low-mass Class I outflows (HH111, SVS13, HH26, HH34, HH30) in the far-infrared [O I] 63 µm and [O I] 145 µm transitions. The obtained spectroscopic [O I] 63 µm and [O I] 145 µm maps enable us to study the spatial extent of warm, low-excitation atomic gas within outflows driven by Class I protostars. These [O I] maps may potentially allow us to measure the mass-loss rates (Ṁ_jet) of this warm component of the atomic jet.
Introduction
Jets powered by young stellar objects (YSOs) are an integral part of star formation and can extend up to parsec distances from the driving source (e.g. Eislöffel et al. 2000a;Ray et al. 2007;Frank et al. 2014;Bally 2016). These outflows play an important role in transporting the angular momentum accumulated in the accretion disc away from the forming star, and therefore offer a unique opportunity to investigate the accretion properties of protostars.
Based on their infrared spectral energy distribution (SED), the evolutionary sequence of protostars is broadly divided into the three Classes 0, I, and II (e.g. Lada 1987;Andre et al. 1993;Greene et al. 1994). In the earliest evolutionary phase, the newly formed protostar (Class 0, lifetime: τ_life ∼ 10⁴ yr) lies deeply embedded in its natal cloud, accreting the main part of its final mass and showing the strongest outflow activity (Bally 2016). Due to the high extinction, Class 0 objects and their associated molecular outflows (e.g. detected in CO, SiO) are studied at submillimetre and far-infrared (FIR) wavelengths (e.g. Gueth & Guilloteau 1999;Codella et al. 2007). In the subsequent Class I phase (τ_life ∼ 10⁵ yr), the central object, though still embedded in a dusty envelope, becomes visible in the near-infrared (NIR). Outflows from Class I sources are detected in various collisionally excited optical and NIR atomic lines, indicating an increasing atomic jet component. The more evolved Class II objects (referred to as classical T Tauri stars (CTTSs), τ_life ∼ 10⁶ yr) have accreted and blown away so much of the surrounding material that they become visible in the optical, but are still far from reaching the main sequence (the following stages, i.e. Class III and Post T Tauri stars, have lifetimes of the order of τ_life ∼ 10⁷ yr).
The efficiency of the accretion-ejection process during star formation can be estimated from observations that allow reliable conclusions on the mass-accretion rate Ṁ_acc and the mass-ejection rate Ṁ_loss. Such studies of YSOs demonstrate a correlation between both quantities, indicating a physical mechanism behind it (e.g. Hartigan et al. 1995;White & Hillenbrand 2004).
Theoretical models are consistent with that finding (e.g. Shu et al. 1994;Königl & Pudritz 2000), and it is expected that as YSOs go through their stages of evolution, that is, as they evolve from a Class 0 to a Class II object, their mass-loss and mass-accretion rates decrease (Bontemps et al. 1996;Saraceno et al. 1996;Caratti o Garatti et al. 2012). Furthermore, the ratio of both quantities (f = Ṁ_loss/Ṁ_acc) provides decisive information on proposed jet acceleration mechanisms. Ratios of the order of f ∼ 0.3 would justify an X-wind scenario (Shu et al. 1988, 1994), whereas ratios of f ∼ 0.01−0.5 suggest that magnetohydrodynamical (MHD) disc wind models (Casse & Ferreira 2000;Ferreira 1997) are more suitable for describing the jet launching process. In this context, measurements of mass-loss rates can provide useful insights into the energy budget of young stars, the jet launching mechanism, and the evolution of protostellar outflows.

Notes to Table 1: (a) taken from the Two Micron All Sky Survey (2MASS); (b) rounded from Zucker et al. (2019) based on GAIA DR2, except SVS13, taken from Hirota et al. (2008, 2011) and based on VLBI observations of the associated maser; (c) we correct the luminosities taken from the cited papers for our assumed distances: L_bol = (D_adopted/D_paper)² L_bol,paper; (d) Reipurth (1989a), L_bol/L☉ = 25 at 460 pc; (e) Cohen et al. (1985) measures L_bol/L☉ = 66, whereas Reipurth et al. (1993) and Harvey et al. (1998) give L_bol/L☉ = 80 at 350 pc in each case. More recently, Tobin et al. (2016) measured L_bol/L☉ = 32.5 at 230 pc; (f) Antoniucci et al. (2008), L_bol/L☉ = 4.6−9.2 at 450 pc; (g) Antoniucci et al. (2008), L_bol/L☉ = 12.4−19.9 at 460 pc; (h) Wood et al. (2002), Cotera et al. (2001), Molinari et al. (1993).
However, large extinction in the earliest stages of star formation prevents an accurate determination of essential physical properties of YSOs, such as the mass-flux rate or the particle densities close to the star. Consequently, detailed studies of extended jets from these objects are usually performed only far from the central source (i.e. θ > 10″). In these regions, however, the jet has already interacted with the ambient medium through multiple shocks, losing the pristine information about its acceleration mechanism and its connection with the accretion events.
The far-infrared [O I] 63 µm emission line (here [O I] 63) offers a unique opportunity to study the above-mentioned dynamical properties of the atomic jet, since this line is a) directly connected to the occurring shock, b) less affected by extinction, and c) expected to be comparably bright amongst other shock tracers. With the traditional approach of using CO rotational lines, one can only indirectly estimate the time-averaged mass-loss rate from young embedded protostars, whereas the [O I] 63 line potentially allows a direct determination of the instantaneous mass-loss rate (Hollenbach & McKee 1989).
In this paper, we present extensive SOFIA FIFI-LS observations (Sect. 2) of five prototypical low-mass Class I outflows (HH111, SVS13 (also referred to as HH7-11 or SSV13), HH26, HH34, HH30; see Table 1), starting close to their respective driving sources. A brief description of the observed Herbig-Haro (HH) objects can be found in Appendix A. All targets have been mapped along their outflows in the atomic fine-structure [O I] 63 (³P₁−³P₂) and [O I] 145 (³P₀−³P₁) transitions ([O I] 145 abbreviating the [O I] 145 µm emission line). Since the excitation energies of the involved states ³P₁ and ³P₀ are ∆E(³P₁−³P₂)/k_B = 228 K and ∆E(³P₀−³P₁)/k_B = 99 K, they can easily be excited via collisions with atomic or molecular hydrogen, tracing the presence of warm (i.e. T ∼ 500−1500 K), dense, low-excitation atomic gas. In comparison, optical lines such as [S II]λλ6731,6716, Hα, [N II]λλ6548,6583, or [O I]λ6300 trace the hot atomic gas (T ∼ 10⁴ K) and are commonly used to investigate extended protostellar outflows (e.g. Hartigan et al. 1994;Bacciotti & Eislöffel 1999;Hartigan et al. 2011). On the other hand, NIR emission lines (e.g. [Fe II], H2) have widely been used to derive the physical conditions of warm ionised gas in Class 0/I sources (e.g. Eislöffel et al. 2000b;Davis et al. 2003;Giannini et al. 2004;Nisini et al. 2005;Takami et al. 2006;Garcia Lopez et al. 2013;Giannini et al. 2013). However, NIR lines fail to probe the warm, low-excitation atomic gas, which can indeed play a central role in the energetics of embedded jets. NIR lines from singly ionised iron (e.g. [Fe II] 1.644 µm) trace shock-excited, partially ionised gas at T_ex ∼ 2000−15000 K in dense regions (n_cr > 10⁴ cm⁻³; e.g. Nisini et al. 2002;Pesenti et al. 2003), whereas ro-vibrational H2 lines, such as the one at 2.122 µm, are connected to the warm (T ∼ 2000−3000 K), dense (n_H ≥ 10³ cm⁻³), molecular component of the outflow (e.g. Garcia Lopez et al. 2010;Davis et al. 2011). To complete the picture, observations of low-J pure rotational CO lines in the millimetre range (e.g. J = 1−0 at 2.6 mm, J = 2−1 at 1.3 mm, J = 3−2 at 0.86 mm) have been used to investigate large-scale morphologies of outflows tracing the cold swept-up gas (T ∼ 10−100 K; e.g. Fukui et al. 1993;Raga & Cabrit 1993). In the end, all those observations at different wavelengths probe different physical conditions and thus complement each other, providing a robust picture of outflow dynamics in protostellar systems.
The FIR [O I] 63 line is predicted to be the main coolant of dense dissociative J-shocks over a wide range of shock velocities and gas densities (Hollenbach & McKee 1989). As such, this line is the best tracer of the interactions between the high-velocity primary jet and the dense ambient medium. In addition to dense shocks, [O I] 63 emission is recognised to be strong in photo-dissociation regions (PDRs) due to illumination by a UV field (Hollenbach 1985). The [O I] 63 line has previously been observed towards protostellar sources with ISO (Ceccarelli et al. 1997;Giannini et al. 2001;Nisini et al. 2002;Liseau et al. 2006) and Herschel/PACS (Green et al. 2013;Karska et al. 2013;Podio et al. 2012;Benedettini et al. 2012;Santangelo et al.).
Observations
All five prototypical Class I objects (HH111, SVS13, HH26, HH34, HH30; see Table 1) were observed with the FIFI-LS instrument on board SOFIA (Young et al. 2012). SOFIA is a modified Boeing 747SP aircraft with a 2.5 m telescope (effective aperture) and has a nominal pointing accuracy of 0.″5.
A key feature of FIFI-LS is that both [O I] 63,145 emission lines were observed simultaneously with two independent grating spectrometers, with wavelength ranges of 51-120 µm (blue channel) and 115-200 µm (red channel) (Looney et al. 2000;Fischer et al. 2018;Colditz et al. 2018). FIFI-LS provides an array of 5 × 5 spatial pixels (spaxels) in each channel, covering a field of view of 30″ × 30″ in the blue (pixel size: 6″ × 6″) and 1′ × 1′ (pixel size: 12″ × 12″) in the red.

The diffraction-limited FWHM beam size at 63 µm (145 µm) is ∼5.″4 (12.″4). The spectral resolutions R = λ/∆λ are 1300 at 63 µm and 1000 at 145 µm, which correspond to a medium velocity resolution of 231 km s⁻¹ and 300 km s⁻¹, respectively. The data cubes of the blue (red) channel feature a sampling of 34 km s⁻¹ (42 km s⁻¹) per spectral element. The spatial sampling is specified by 1″/spaxel in the blue channel and 2″/spaxel in the red channel. The data were acquired during three SOFIA flights in Cycles 3 and 5 (program IDs: 03_0073, 05_0200) in two-point symmetric chop FIFI-LS mode. Notes to Table 2: (a) total effective on-source integration time.
Data reduction
Although SOFIA operates 12-14 kilometres above the ground, the interfering impact of the Earth's atmosphere has to be mitigated (see Fig. 1). We applied the SOFIA/FIFI-LS data reduction pipeline (REDUX) to our data cubes excluding the telluric correction step, since the atmospheric transmission at about λ = 63 µm is below 0.6 (see Fig. 2a), causing difficulties in the telluric correction (Vacca 2016). Instead, we used our own Python script JENA.py, which firstly cuts off irregularities on the edges of the FIFI-LS data cubes that arise from the mosaic mapping. Furthermore, JENA.py applies an optimal spectrum extraction procedure to each data cube spaxel (portrayed in Fig. 3) to increase the signal-to-noise ratio (SNR; Horne 1986). Finally, JENA.py mitigates the impact of the atmosphere as described in the following. Considering that various synthetic spectra of atmospheric transmission (ATRAN models; Lord 1992) are accessible, we chose one specific ATRAN model for each observed object out of all relevant three-parametric ATRAN models τ(λ; a) according to the flight parameters a := {H, θ, wvp} during their observations (Table 2). Here we denote H, θ, and wvp as flight altitude, zenith angle, and water vapour overburden, respectively (Fig. 1 and Table 2).
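Selecting the ATRAN model closest to the recorded flight parameters is straightforward to sketch. The following Python fragment is illustrative only (JENA.py is not public, and the grid values shown here are assumed): it picks, from a set of precomputed models indexed by (H, θ, wvp), the one nearest to the observation's parameters after normalising each axis by its spread.

import numpy as np

# Assumed grid of precomputed ATRAN models, indexed by
# (flight altitude [kft], zenith angle [deg], water vapour [um]).
grid = np.array([
    (38.0, 40.0, 7.0),
    (39.0, 45.0, 6.0),
    (41.0, 50.0, 5.0),
])

def nearest_atran(flight_params, grid):
    """Return the index of the ATRAN model whose parameters are closest
    to the observed flight parameters, normalising each axis by its spread."""
    scale = np.ptp(grid, axis=0)
    scale[scale == 0] = 1.0
    d = np.linalg.norm((grid - np.asarray(flight_params)) / scale, axis=1)
    return int(np.argmin(d))

idx = nearest_atran((39.5, 47.0, 5.5), grid)
print("chosen ATRAN model:", grid[idx])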
Assuming now that the observed object radiates an emission line in the form of a 1D Gaussian with the four parameters b := {A, σ, µ, B}, and that the FIFI-LS spectral instrument function SIF(λ; R) is given by a 1D Gaussian depending on the spectral resolution R, we can model the discrete signal expected in each spaxel of a given data cube. The function S in Eq. 2 samples the modelled signal onto the equidistant wavelength grid points λ_k predetermined by the individual data cubes.
To extract the emission line parameters b in each spaxel of our data cubes, we used the Levenberg-Marquardt algorithm (Newville et al. 2016). Given that the atmospheric transmission causes difficulties in the telluric correction, we chose to weight the χ² of the non-linear least-squares fit in each spaxel with the atmospheric transmission,

χ² = Σ_k τ(λ_k; a) · [data(λ_k) − S(λ_k; b)]² / ε(λ_k)²,

whereby data(λ_k) are the flux measurements at λ_k, and ε(λ_k) are the corresponding error values, which are given by the standard flux errors σ(λ_k) multiplied by the number of spaxels in the spatial beam.

Fig. 3: An illustration of the optimal spectrum extraction procedure implemented in JENA.py. The dotted circle around spaxel (i, j) in the middle of the 5 × 5 spaxel field shows, as an example, the FWHM of the spatial beam. For each surrounding spaxel that is (partly) covered by the beam, a factor (e.g. v_{i+1,j} for spaxel (i+1, j)) is calculated, representing the enclosed volume under the normalised 2D Gaussian beam. Spaxel (i, j) is replaced by the sum of all beam-covered spaxels, weighted non-uniformly by their specific volume factors (shown on the right side of the figure), while simultaneously preserving its photometric accuracy.
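The volume factors described in Fig. 3 can be computed analytically for a Gaussian beam, since a 2D Gaussian is separable: the beam "volume" over a square spaxel is a product of two differences of the normal CDF. The snippet below is a simplified Python reconstruction of the idea (JENA.py itself is not public); spaxel size and beam FWHM follow the blue-channel values quoted above, and edge spaxels would need clipped loop bounds.

import numpy as np
from scipy.stats import norm

PIX = 6.0                      # blue-channel spaxel size in arcsec
FWHM = 5.4                     # blue-channel beam FWHM in arcsec
SIGMA = FWHM / (2 * np.sqrt(2 * np.log(2)))

def beam_weight(di, dj):
    """Fraction of a unit-normalised 2D Gaussian beam, centred on spaxel
    (0, 0), enclosed by the spaxel offset by (di, dj) spaxels."""
    def frac_1d(offset):
        lo, hi = (offset - 0.5) * PIX, (offset + 0.5) * PIX
        return norm.cdf(hi, scale=SIGMA) - norm.cdf(lo, scale=SIGMA)
    return frac_1d(di) * frac_1d(dj)

def combine_spaxel(cube, i, j):
    """Replace spectrum (i, j) by the beam-weighted sum of its neighbours,
    renormalised so that the total flux is conserved."""
    spec, wsum = np.zeros(cube.shape[2]), 0.0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            w = beam_weight(di, dj)
            spec += w * cube[i + di, j + dj]
            wsum += w
    return spec / wsum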
The continuum-subtracted line flux f in one spaxel is then determined only by the parameter A, since A parametrises the integrated area of the Gaussian line profile. Accordingly, the atmospherically adjusted continuum in one specific spaxel is given by the parameter B.
Errors in A and B were estimated from the covariance matrix. However, this method led to unreliable error values ∆A in a few spaxels with low signal-to-noise values. Therefore we applied the method of Avni (1976) to estimate the 1σ confidence interval for ∆A only in these cases.
On account of the medium spectral resolution of FIFI-LS, we did not extract any velocity information from b. The observed [O I] 63 linewidths ∆V_obs are of the order of 180-220 km s⁻¹, indicating that this line is spectrally unresolved in all our targets (∆V_obs² = ∆V_line² + ∆V_FIFI-LS²). We therefore constrained the intrinsic linewidth in the fitting procedure to be in the range of ∆V_line = 30−150 km s⁻¹. The total uncertainty in the absolute flux calibration for the integrated line fluxes amounts to approximately 20%.
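The fitting step can be illustrated with the lmfit package (Newville et al. 2016), which the text cites for the Levenberg-Marquardt minimisation. The sketch below fits a Gaussian line times the atmospheric transmission plus a continuum to one spaxel, weights the residuals by the transmission as described above, and constrains the line width to the quoted 30-150 km s⁻¹ range converted to µm at 63.18 µm. It is a schematic reconstruction, not the actual JENA.py code: for brevity the convolution with the spectral instrument function is omitted, and the synthetic data are placeholders.

import numpy as np
from lmfit import Model

C_KMS = 2.99792458e5
LAM0 = 63.18  # [O I] 63 rest wavelength in um

def spaxel_model(lam, trans, A, mu, sigma, B):
    """Gaussian line (area A) plus continuum B, attenuated by transmission."""
    line = A / (np.sqrt(2 * np.pi) * sigma) * np.exp(-(lam - mu) ** 2 / (2 * sigma ** 2))
    return trans * (line + B)

model = Model(spaxel_model, independent_vars=["lam", "trans"])
params = model.make_params(A=1e-13, mu=LAM0, sigma=0.02, B=1e-14)
# Constrain the FWHM to 30-150 km/s, converted to a Gaussian sigma in um
fwhm_to_sigma = 1 / (2 * np.sqrt(2 * np.log(2)))
params["sigma"].set(min=30 / C_KMS * LAM0 * fwhm_to_sigma,
                    max=150 / C_KMS * LAM0 * fwhm_to_sigma)

# lam, flux, err, trans would come from the data cube and the ATRAN model
lam = np.linspace(62.9, 63.5, 60)
trans = np.clip(1 - 0.5 * np.exp(-(lam - 63.3) ** 2 / 0.002), 0, 1)
flux = spaxel_model(lam, trans, 2e-13, LAM0, 0.02, 1e-14) + np.random.normal(0, 2e-15, lam.size)
err = np.full_like(lam, 2e-15)

# weights = sqrt(tau)/err reproduces the transmission-weighted chi-square above
result = model.fit(flux, params, lam=lam, trans=trans, weights=np.sqrt(trans) / err)
print(result.params["A"].value, result.params["A"].stderr)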
To evaluate the significance of the [O I] 63,145 line detection, we estimated the signal-to-noise ratio in each spaxel using the rms-method. Since atmospheric features at both [O I] 63,145 lines heavily corrupt our spectra (see Fig. 2), we determined the rms on the continuum around the line where the atmospheric transmission is above 0.6.
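One plausible implementation of this rms method, sketched in Python under the same assumptions (a Gaussian line of centre µ and width σ, and a transmission array τ from the ATRAN model), estimates the noise of the integrated flux from line-free channels where the atmosphere is reliable; the ±3σ window and channel bookkeeping here are illustrative choices, not the paper's exact recipe.

import numpy as np

def line_snr(lam, flux, line_flux, mu, sigma, trans, min_trans=0.6):
    """SNR of an integrated line flux, with the rms measured on continuum
    channels away from the line where transmission exceeds min_trans."""
    off_line = (np.abs(lam - mu) > 3 * sigma) & (trans > min_trans)
    rms = np.std(flux[off_line])
    dlam = np.median(np.diff(lam))
    n_chan = max(1, int(round(6 * sigma / dlam)))  # channels across +-3 sigma
    return line_flux / (rms * dlam * np.sqrt(n_chan))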
Continuum sources at 63 µm and 145 µm
Here, we briefly describe the obtained continuum maps (presented in the Appendix C) of our five objects.
For three objects in our sample (HH111 IRS, SVS13, HH34 IRS), a bright continuum source was detected in both channels. The locations of these three continuum sources match the coordinates taken from 2MASS within a ∼2″ offset tolerance. For HH26, a bright continuum source was detected only at 145 µm, coinciding with HH26 IRS (Table 1). Another bright region is located at the position of HH26A in the 145 µm continuum map. Towards HH26B, a possible faint continuum source is detected at (α, δ)_J2000 = (5h 46m 01.9s, −0°…). No continuum point sources were detected in either channel for HH30. However, for HH30 the continuum at 63 µm is very faintly elongated along the jet axis at P.A. 30°.
We fitted a 2D Gaussian function to the detected continuum sources (with r the radial distance from the source peak) to extract the continuum flux F_λ, here defined as the background-corrected continuum flux within an aperture of radius 1.5σ_s of the fitted 2D Gaussian (Mighell 1999). The quantified continuum fluxes of our sample are listed in Table 3. These values are consistent with expected values from SED curves in the literature (see e.g. Antoniucci et al. 2008;Benedettini et al. 2000, for HH34 IRS and HH26 IRS) or in the SIMBAD catalogue.
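A 2D Gaussian fit of this kind can be sketched with astropy.modeling: fit a Gaussian plus a constant background to the continuum image, then sum the background-corrected flux within the r < 1.5σ aperture, mirroring the definition above. Shapes and start values below are arbitrary placeholders, not the paper's numbers.

import numpy as np
from astropy.modeling import models, fitting

image = np.random.normal(0.1, 0.01, (25, 25))          # placeholder continuum map
yy, xx = np.mgrid[:25, :25]
image += 5.0 * np.exp(-((xx - 12) ** 2 + (yy - 12) ** 2) / (2 * 2.0 ** 2))

init = models.Gaussian2D(amplitude=4, x_mean=12, y_mean=12,
                         x_stddev=2, y_stddev=2) + models.Const2D(amplitude=0.1)
fit = fitting.LevMarLSQFitter()(init, xx, yy, image)

gauss, const = fit[0], fit[1]                          # unpack compound model
sigma = 0.5 * (gauss.x_stddev.value + gauss.y_stddev.value)
r = np.hypot(xx - gauss.x_mean.value, yy - gauss.y_mean.value)
aperture = r < 1.5 * sigma
flux = (image - const.amplitude.value)[aperture].sum() # background-corrected
print(flux)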
In the red channel, the measured FWHM are of the order of 20″-22″ for all the detected sources, whereas in the blue channel it is ∼10″ for HH111 and SVS13, and ∼15″ for HH34. Since all measured FWHM are significantly greater than the corresponding beam sizes, we conclude that all detected continuum sources are extended. After subtracting the specific 2D Gaussian from the continuum maps, yet another potential (also extended) continuum source in the red channel of HH34 became apparent at (α, δ)_J2000 = (5h 35m 30.3s, −6° 27′ 35.″1).
[O I] 63 morphology
The smoothed, continuum-subtracted [O I] 63 maps for each target of our sample are shown in Figs. 5, 6, and 7. Green stars in the maps indicate the position of the individual driving source (see Table 1). Yellow boxes enclose the carefully selected regions along the expected protostellar outflow where flux measurements were taken (Table 4).
In the following paragraphs, we briefly describe the morphology of the [O I] 63,145 maps for each target individually.

HH111: [O I] 63 emission is detected on the continuum source HH111 IRS itself (box B1). The [O I] 63 emission at HH111 IRS is highly concentrated and slightly extended along the outflow axis. The far-infrared counterpart of the optical jet is firmly apparent only in the [O I] 63 map (inside B2) and reveals a potential clumpy structure within its bright emitting region of ∼45″ projected length. At 420 pc, this corresponds to ∼0.09 pc or ∼18900 AU. For better orientation, the positions in right ascension of a few known optical knots are marked as blue lines (notation from Reipurth 1989b).

HH34: Three potentially interesting emission regions (here labelled K1, K2, K3) are seen in the obtained [O I] 63 map. Serving as orientation, we mark with blue crosses the positions of knots E and L, in between which the well-known optical jet, prominently seen in [S II] towards HH34S, is located. The position of the jet driving source HH34 IRS coincides with the brightest emission within K1. The detected [O I] 63 line at HH34 IRS is slightly blue-shifted (Fig. 4), whereas towards HH34N a red-shifted component within K1 is apparent. Looking at the obtained spectra at the locations of K2 and K3 (Fig. 4), we notice that the line fit suggests a red-shifted outflow towards HH34S. Physically, this is puzzling, since the jet towards HH34S is blue-shifted. The morphology of this [O I] 63 emission at K2 and K3 would be difficult to explain if the emission were connected to the outflow. The most obvious explanation then could be that this emission is part of a backflow along the cocoon of material surrounding the jet (Norman 1990;Cabrit 1995). Alternatively, noise at 63 µm towards longer wavelengths mimics an emission line, so that the line fit procedure falsely identifies this noise feature as a red-shifted [O I] 63 line.
HH26: We find almost no [O I] 63,145 emission on the driving source HH26 IRS itself (box B1). However, three knots (here labelled K1, K2, K3) of significant emission are arranged along the outflow axis in the [O I] 63 map. At 145 µm, K1 and K2 seem to be one emitting region. The location of K1 and K2 coincides with knot C of the HH26A/B/C chain (see nomenclature in Chrysostomou et al. 2000), with extended [O I] 63 emission at HH26A (here K1 and K2). The very faint blue-shifted [O I] 63 emission at K3 appears to be rather non-physical, since it lies in the red-shifted outflow lobe (Davis et al. 1997;Dunham et al. 2014). As in the case of HH34, we therefore interpret this emission seen at K3 as a noise feature.

Fig. 5: [O I] 63 maps of HH111 and SVS13 (coordinates from Table 1). The blue circle shows the FWHM spatial beam size in the blue channel of the FIFI-LS instrument. The golden boxes are the boundaries of the rectangular apertures in which the fluxes F_63µm and F_145µm are measured (Table 4). Top panel: blue lines at the top label the right ascension of the knots F-P associated with the optical jet (Reipurth 1989b). Contour lines are drawn in magenta in logarithmic scale at four levels between (0.0420-0.3200)×10⁻¹³ erg s⁻¹ cm⁻². Bottom panel: blue crosses label the positions of HH 7-11 associated with the jet (coordinates taken from Bally et al. 1996). Contour lines are drawn in magenta in logarithmic scale at three levels between (0.068-0.400)×10⁻¹³ erg s⁻¹ cm⁻².
Due to the medium quality of our obtained [O I] 63 maps of HH111, SVS13, HH34, and HH26, we chose to place several aperture boxes of interest in their corresponding maps for flux measurements. The position and size of the aperture boxes were chosen arbitrarily with three constraints: first, the isolated region has to be larger than the blue-channel spatial beam size to ensure negligible flux losses due to diffraction; secondly, it must encompass the jet region with relatively high signal-to-noise ratios; and thirdly, a box must enclose a physically meaningful region, for example the jet, the driving source, or a region with bright line emission.
We determined the flux within each aperture box by averaging all enclosed spaxels to a representative spaxel on which the model function (Sect. 3) is fitted. The parameter A (and its error) from this fit is then scaled with the box dimensions to get the reported flux values in Table 4.
Atomic mass-flux rates
Mass-flux rates Ṁ_jet can be derived from direct jet observations via different methods (see e.g. Cabrit 2002;Dougados et al. 2010;Dionatos et al. 2020). It is tempting to derive mass-loss rates of our targets from the obtained [O I] 63 maps, as has become common practice in the field of star formation. Basically, two different approaches are worth considering here (Sects. 4.3.1 and 4.3.2). Both methods have their specific limitations and caveats, which are discussed in Sect. 5.2.
Assuming that the observed [O I] 63 line luminosity L([O I] 63 )
is connected with a dissociative J-shock cooling region coming from one decelerated wind shock, we could utilise the results of the Hollenbach (1985) and Hollenbach & McKee (1989) papers, namely that the mass-loss rate is predicted to be proportional to L([O I] 63) (Eq. 6). This proportionality is expected to be valid over a wide range of shock parameters, provided that n_0 × v_shock ≲ 10¹² cm⁻² s⁻¹ (n_0: pre-shock density, v_shock: shock velocity). This method of measuring the mass-loss rate could potentially be quite powerful, because only one fairly easy-to-measure quantity enters Eq. 6. If the HM89 model is not applicable, the mass-loss rates calculated via Eq. 6 are unusable and errors cannot be quantified. It would be interesting to test the validity of Eq. 6 once again, since, after the Hollenbach & McKee (1989) paper, improved collisional strengths and element abundances have become available, and new chemical networks could be included.
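Applying Eq. 6 is a one-line computation once the line luminosity is known. The Python sketch below assumes the commonly quoted form of the HM89 relation, Ṁ ≈ 10⁻⁴ (L([O I] 63)/L☉) M☉ yr⁻¹; since the equation itself is not reproduced in this excerpt, the prefactor should be treated as an assumption to be checked against Hollenbach & McKee (1989), and the example flux is a placeholder.

import math

L_SUN = 3.828e33          # erg/s
PC = 3.0857e18            # cm

def mdot_hm89(flux_oi63, distance_pc, prefactor=1e-4):
    """Mass-loss rate in Msun/yr from the [O I]63 flux (erg s^-1 cm^-2),
    assuming Mdot = prefactor * L([O I]63)/Lsun (HM89-type scaling)."""
    lum = 4 * math.pi * (distance_pc * PC) ** 2 * flux_oi63   # erg/s
    return prefactor * lum / L_SUN

# e.g. a flux of 1e-13 erg/s/cm2 at 420 pc (the HH111 distance quoted above)
print(f"{mdot_hm89(1e-13, 420):.2e} Msun/yr")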
4.3.2. Ṁ_jet from jet geometry and the [O I] 63 line luminosity
Without any assumptions about the origin of the observed [O I] 63 emission, we could follow an analysis similar to the one performed by Hartigan et al. (1995), estimating the mass-loss rate from the [O I] 63 line luminosity and other jet parameters such as the flow velocity or the physical length of the jet. The forbidden [O I] 63 line tracing the warm jet component (T ∼ 300−5000 K) is mainly excited via atomic hydrogen collisions. Since we cannot infer the gas density from the j_63/j_145 line ratio, we follow Nisini et al. (2015) in the assumption that the collider density is close to the critical density. Detailed calculations found in Appendix B lead to an estimate of the mass-loss rate from the [O I] 63 line luminosity, the tangential jet velocity, and the angular extent of the emitting region (Eq. 7). Here, v_t is the component of the jet velocity in the plane of the sky and θ is the angular size of the jet. We point out that Eq. 7 comes with some uncertainties, such as the unknown level population, often poorly constrained jet velocities, and propagating errors from the distance measurements. These uncertainties may add up to a total uncertainty of one order of magnitude.
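Since Eq. 7 itself is not reproduced in this excerpt, the sketch below implements only the generic Hartigan-style bookkeeping it describes: convert the line luminosity into a number of emitting O atoms via the Einstein coefficient and an assumed upper-level fraction f_u, scale to total gas mass with an assumed oxygen abundance, and divide by the jet crossing time set by v_t and θ·d. Every quantity flagged in the comments (f_u, x_oxygen, µ, and the example inputs) is an illustrative assumption, not a value from the paper.

import math

H = 6.626e-27             # erg s
C = 2.998e10              # cm/s
M_H = 1.6735e-24          # g
AU = 1.496e13             # cm
MSUN = 1.989e33           # g
YR = 3.156e7              # s
A63 = 8.91e-5             # s^-1, Einstein A of [O I] 63 um
NU63 = C / 63.18e-4       # Hz

def mdot_geometry(lum_oi63, v_t_kms, theta_arcsec, dist_pc,
                  f_u=0.2, x_oxygen=5e-4, mu=1.4):
    """Mass-loss rate (Msun/yr) from the [O I]63 luminosity (erg/s) and the
    jet geometry. f_u (upper-level fraction, motivated here by the
    near-critical-density assumption), x_oxygen, and mu are assumed values."""
    n_upper = lum_oi63 / (A63 * H * NU63)          # O atoms in the 3P1 level
    n_oxygen = n_upper / f_u                       # all O atoms
    mass = n_oxygen / x_oxygen * mu * M_H          # total gas mass, g
    l_t = theta_arcsec * dist_pc * AU              # jet length, cm (1" at 1 pc = 1 AU)
    t_cross = l_t / (v_t_kms * 1e5)                # s
    return mass / t_cross * YR / MSUN

# Illustrative inputs: L = 2e30 erg/s, v_t = 150 km/s, theta = 45", d = 420 pc
print(f"{mdot_geometry(2e30, 150, 45, 420):.2e} Msun/yr")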
Discussion
In the context of protostellar outflows, the HM89 formula (Eq. 6) has been commonly used to derive mass-loss rates from the observed [O I] 63 line luminosity.
Schematic views on the observed [O I] emission
HH111: We interpret the extended on-source emission as shock-excited gas in the interaction region of a quasi-spherical wind/outflow coming from HH111 IRS (and the disc) with the ambient cloud (Fig. 9). Presumably, several spatially unresolved, internal shocks within the jet body are the driving agents of this bright [O I] 63 emission. This argumentation is supported by optical and near-infrared observations (e.g. Reipurth et al. 1997;Davis et al. 2001), which reveal the presence of multiple knots with bow-shock morphologies within the jet body. Adopting the knot notation introduced by Reipurth (1989b), we deduce that the knots F−O appear as one emitting region in our obtained [O I] 63 maps due to their low spatial resolution. Several different physical mechanisms have been proposed to explain the origin of the knots (see e.g. Raga et al. 1990;Micono et al. 1998). The gap of very little [O I] 63 emission between the source emission and the jet periphery can be attributed to very high obscuration at the jet base (Reipurth et al. 1997) or to a quiet interaction region, where the highly collimated jet expands almost freely into interstellar space. This has been proposed for the HH34 jet, which shows a similar morphology in optical forbidden lines such as [S II] or Hα. The observed [O I] 63 emission along the jet body is shown in Fig. 8.

SVS13: We interpret the [O I] 63 emission as mostly coming from shock-excited gas connected to the outflow (Fig. 9). The overall [O I] 63 morphology matches the near-infrared H2 map surprisingly well (Chrysostomou et al. 2000;Khanzadyan et al. 2003). Fortunately, the region associated with HH7-11 has also been mapped in great detail, with HST revealing a consistent schematic view of the HH7-11 outflow (see Fig. 12 in Hartigan et al. 2019). The nature of HH11 is discussed in Nisini et al. (2002) and Podio et al. (2006), with it being a high-excitation region mostly seen in Hα. In this context, it is noteworthy that for HH7, HH8, and HH10, [Fe II] and H2 peak at roughly the same locations, whereas for HH11, [Fe II] peaks further downstream of the outflow (see Fig. 6 in Khanzadyan et al. 2003). This may suggest the presence of shocked, thin, hot gas located at HH11 that is able to dissociate molecular hydrogen at the apex of the bow shock. The detection of bright [O I] 63 emission at the location of HH7 strongly supports the notion of it being the terminal bow-shock region of the HH7-11 outflow (e.g. Smith et al. 2003;Hartigan et al. 2019). HH7 has a remarkably complex internal substructure (HH7A, 7B, 7C, spatially unresolved in our maps) from which the detected [O I] 63 emission can in principle arise. Furthermore, a potentially present Mach disc in HH7 was reported by Noriega-Crespo et al. (2002). Based on near-infrared H2 observations, a dissociative J-type paraboloidal bow-shock model has been proposed for HH7 (Smith et al. 2003). The observed [O I] 63 emission, which is an intense compact knot together with a slightly blue-shifted line profile at HH7, is consistent with the dissociative J-type paraboloidal bow-shock model. Following this line of reasoning, a significant amount of [O I] 63 emission may be produced in situ, that is, in the dissociative J-shock region where CO or H2O molecules are broken apart (Flower & Pineau Des Forêts 2010). However, recent spectroscopic observations of pure rotational H2 lines at HH7 are more in agreement with a non-dissociative C-type molecular shock (Neufeld et al. 2019). Molinari et al. (2000) reported signatures of both C-type and J-type shocks in the HH7-11 region, illustrating the complex shock structure of the HH7-11 outflow.
The innermost region of SVS13 is particularly interesting, since it exhibits several astrophysical features, for example H2O maser emission (Haschick et al. 1980), multiple continuum sources forming a complex hierarchical system (VLA3, VLA4, VLA4B; Rodríguez et al. 1999;Anglada et al. 2000), multiple detected outflows (e.g. Noriega-Crespo et al. 2002;Lefèvre et al. 2017), and outburst events (Eislöffel et al. 1991). Hodapp & Chini (2014) revealed the presence of a micro-jet traced by shock-excited [Fe II] and a series of expanding bubble fragments seen in H2. It has been speculated that an observed outburst event in 1990 (Eislöffel et al. 1991) may be the origin of these shell-like structures (e.g. Hodapp & Chini 2014;Gardner et al. 2016). Since we detect most of the [O I] 63 emission from that inner region, we interpret it as originating from bow-shock fronts of the bubble and the interaction zone where the micro-jet potentially pierces the bubble. However, strong wind shocks from one or more continuum sources could also be responsible for the [O I] 63 emission at SVS13. The diffuse and extended [O I] 63 emission seen in our obtained maps at HH8 and HH10 can be interpreted as a jet deflection region (Hartigan et al. 2019), that is, a location where the outflow strikes the ambient medium, leading to a substantial change in direction. In this scenario, HH7 appears to be off the HH11-HH10-HH8 chain due to that deflection. The HH9 knot features no [Fe II] and only very faint H2 emission. We detect some [O I] emission vaguely around HH9. Due to its location at the cavity wall around the HH7-11 outflow (Hartigan et al. 2019), we suspect that some entrained or deflected material turbulently shocks the ambient medium.
HH34: We suspect that the detected [O I] 63 emission close to HH34 IRS is linked to shock-excited gas in a jet/counter-jet outflow region (Fig. 10). This conclusion is supported by near-infrared observations showing strong on-source emission in [Fe II] and H2 (e.g. Garcia Lopez et al. 2010;Davis et al. 2011). A potential disc around HH34 IRS (Rodríguez et al. 2014) might contribute to the detected [O I] 63 emission. Compared with HH111, no far-infrared counterpart of the optical jet is seen between knots E and L. This is surprising, since there are several demonstrable similarities between the HH111 jet and the HH34 jet (e.g. Reipurth et al. 2002). The HH34 jet is most prominently seen in [S II]λ6716 and [O I]λ6300, is less bright in Hα, and features numerous knots within the jet body (Bacciotti & Eislöffel 1999;Reipurth et al. 2002;Podio et al. 2006). The relatively strong optical [S II]λ6716 emission in the HH34 jet indicates low shock velocities in the emitting gas (Hartigan et al. 1994). Since the [O I]λ6300 line is prominently detected in the jet, we conclude that atomic oxygen is copiously present in the flow region and could in principle give rise to the far-infrared [O I] 63 line. In the near-infrared, the jet is prominently seen in [Fe II] and H2 (e.g. Podio et al. 2006;Garcia Lopez et al. 2008;Antoniucci et al. 2014), and [Fe II] peaks where [S II]λ6716 peaks. Both lines ([Fe II], [S II]λ6716) are thus likely to be excited in J-shocks at the apices of the internal bow shocks (Podio et al. 2006;Antoniucci et al. 2014). Podio et al. (2006) measured the ionisation fraction (x_e ∼ 0.05−0.17), electron density (n_e ∼ 10³ cm⁻³), temperature (T_e ∼ 1.3 × 10⁴ K), and total density (n_H ∼ 10³−10⁴ cm⁻³) along the jet. These values are consistent with the assumed shock conditions of Hollenbach & McKee (1989). The non-detection of an [O I] 63 jet can therefore either be a result of too-low shock velocities within the HH34 jet, or the [O I] 63 jet is indeed present but too faint to be detected. We suspect that the HH34 jet would be detectable in [O I] 63 in deeper exposures.
HH26:
The non-detection of [O I] 63 at HH26 IRS is consistent with the interpretation given by Antoniucci et al. (2008) that the jet driven by HH26 IRS is mainly molecular, meaning that it is mostly seen in H2 (e.g. Davis et al. 2002) or CO (Dunham et al. 2014). Interestingly, no [Fe II] but strong H2 emission was detected at HH26 IRS in the near-infrared (Antoniucci et al. 2008), hinting at a low jet density (Davis et al. 2011).
Following this line of reasoning, the extended [O I] 63 emission at HH26A (here K1 and K2) supports the conclusion that this region is shock-excited (Fig. 11): the jet has struck the ambient medium here, and the region is thus indeed a deflection region as proposed by Chrysostomou et al. (2002). In this scenario, HH26C is interpreted as the deflected, terminal bow shock, which was not mapped here. Spectroscopic observations at different locations within the HH26 region (Benedettini et al. 2000;Giannini et al. 2004) support the assumption that the observed [O I] 63 emission in the blue lobe towards HH26A is mainly due to shock excitation, that is, not due to the presence of a strong FUV field (Benedettini et al. 2000).

Fig. 11: Schematic of the HH26 outflow.
Caveats on the derived mass-loss rates
The crucial assumption of the HM89 model is that all the observed [O I] 63 emission comes from one decelerated wind shock. This explicitly excludes emission from possible multiple shocks, slow shocks, bow shocks, or deflection regions. Observations at high spatial resolution, together with comparable observations of other shock tracers (e.g. Hα, H 2 , [Fe II]), may provide deeper insights into the number of shocks, the presence of a Mach disc (the deceleration shock), or the overall geometry of the shock region. Without these potentially valuable observations, the HM89 formula has to be applied with strong reservations, since the global shock structure is undetermined. In the case of HH26 and of HH8/HH10 in the SVS13 region, there are indeed indications that the detected [O I] 63 emission comes from a deflection region (see Sect. 4.2) and not from a wind shock. Thus, as already emphasised by Hartigan et al. (2019), the HM89 formula will probably give inaccurate mass-loss rates in these cases. On the other hand, if the jet material passes through several spatially unresolved shocks, an unknown amount of [O I] 63 luminosity emerges from these multiple shocks, and the HM89 formula potentially overestimates the mass-loss rate. This is certainly true in the case of the HH111 jet observed in this study, as discussed in Sect. 5.1. Theoretically, the HM89 formula could be adjusted to take into account the unknown number of shocks (Dougados et al. 2010; Nisini et al. 2015), and this correction factor may be on the order of unity (Nisini et al. 2015). If, on the other hand, parts of the wind flow without any interaction with the ambient medium, HM89 underestimates the mass-loss rate (Cohen et al. 1988). This interaction-free component of the jet is inevitably untraceable. In the best case, both effects cancel each other out by chance. Furthermore, [O I] 63 emission can have various origins, for example a disc or a PDR region. In order to disentangle and quantify all possible contributions, specific line ratios in the mid- and far-infrared, for example [O I] 63/[O I] 145, would be needed (Nisini et al. 1996; Kaufman et al. 1999; Flower & Pineau Des Forêts 2010). For the SVS13 region, [O I] 63 and [O I] 145 line fluxes have been measured at the driving source and at HH7. According to these measurements, the [O I] line ratio is j 63 / j 145 ∼ 22.5 at the driving source SVS13 and j 63 / j 145 ∼ 28.5 at HH7. These values are fully consistent with predictions from shock models, in which, depending on pre-shock densities and shock velocities, values between 10 and 35 are expected (Hollenbach & McKee 1989).
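To make the scale of these corrections concrete, the following minimal sketch applies the commonly quoted HM89 proportionality, Ṁ ≈ 10⁻⁴ (L_[O I]63 / L⊙) M⊙ yr⁻¹, together with a crude, assumed division of the luminosity among N unresolved shocks. The function name, the example luminosity (chosen so that N = 1 lands near the ∼2 × 10⁻⁶ M⊙ yr⁻¹ rates quoted below), and the shock-splitting scheme are illustrative assumptions, not part of the original analysis.

```python
# Minimal sketch: mass-loss rate from the [O I] 63 um luminosity using the
# commonly quoted HM89 proportionality, Mdot ~ 1e-4 (L_[OI]63 / L_sun) M_sun/yr,
# with an optional correction for N spatially unresolved shocks (assumed here
# to dilute the luminosity of the single decelerated wind shock).

def mdot_hm89(l_oi63_lsun: float, n_shocks: int = 1) -> float:
    """Mass-loss rate in M_sun/yr from the [O I] 63 luminosity (in L_sun).

    n_shocks > 1 crudely divides the luminosity among several unresolved
    shocks, illustrating how HM89 can overestimate Mdot in that case.
    """
    if l_oi63_lsun < 0 or n_shocks < 1:
        raise ValueError("luminosity must be >= 0 and n_shocks >= 1")
    return 1e-4 * l_oi63_lsun / n_shocks

# Example: a hypothetical [O I] 63 luminosity of 0.02 L_sun
for n in (1, 2, 3):
    print(f"N={n} shocks: Mdot = {mdot_hm89(0.02, n):.1e} M_sun/yr")
```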
The assumption of the shock origin of the [O I] 63 line is also strongly supported by several lines of observational evidence. Herschel observations of similar protostellar outflow sources demonstrated that most of the [O I] 63 emission emerges from dissociative J-shock regions located at the apex of bow shocks (van Kempen et al. 2010;Benedettini et al. 2012;Podio et al. 2012;Karska et al. 2013). Additionally, there are several observational studies that estimate the total [O I] 63 emission coming from the disc in such sources to be only a few percent (Podio et al. 2012;Watson et al. 2016).
In conclusion, having detected a substantial amount of [O I] 63 at the jet driving sources HH111 IRS, SVS13, and HH34 IRS as extended blob-like emission, we suggest that the bulk of this [O I] 63 emission is connected to a wind shock, supporting the applicability of the HM89 formula in these three cases only (see Table 5). For those three cases, both methods lead to very similar mass-loss rates, that is, they differ by a factor of 2 at most and are of the order of Ṁ ∼ 2 × 10⁻⁶ M⊙ yr⁻¹. In comparison, the dubious regions show larger differences, by a factor of 2-3, in the derived mass-loss rates. In the case of HH34, Hartigan et al. (1994) used optical tracers to derive mass-loss rates of the order of Ṁ ∼ 10⁻⁷ M⊙ yr⁻¹. Similar mass-loss rates are found in the HH111 jet (Hartigan et al. 1994; Lefloch et al. 2007). We derive significantly higher mass-loss rates in both cases. This points to the conclusion that the bulk of the jet material resides in the warm, neutral component of the jet. Molinari et al. (2000) used far-infrared spectra of the SVS13 region to derive mass-loss rates with the HM89 method. Their mass-loss rates at the source are substantially higher than our measurements. However, Molinari et al. (2000) took spectra from a region that includes HH10 and HH11. Since we detected some [O I] 63 emission at HH10, we conclude that they overestimated the [O I] 63 emission at the driving source, leading to too high a mass-loss rate.
Conclusions
We have presented SOFIA FIFI-LS observations of five protostellar Class I objects and their outflows (HH111, SVS13, HH26, HH34, HH30) in the [O I] 63,145 transitions. Our maps were used to detect shock-excited regions that are connected to protostellar outflows (e.g. a low-excitation atomic jet component, bow shocks, or wind shocks). Our main findings can be summarised as follows. Strong [O I] 63 emission was detected at the driving sources HH111 IRS, SVS13, and HH34 IRS, and almost none at HH26 IRS. The bright on-source detection of [O I] 63 in these cases may arise from the protostellar outflow interacting with the ambient medium and leading to shock excitation, that is, a wind shock. Thus, in these three cases (HH111 IRS, SVS13, HH34 IRS) the Hollenbach & McKee (1989) shock model assumptions most likely prevail, justifying the use of the HM89 relation to derive mass-loss rates.
The optical jet at about 15″ west of HH111 IRS (e.g. Reipurth 1989c) is detected in [O I] 63 (Table 3). The observed outflow rates of our low-mass Class I sample are of the order of Ṁ_jet ∼ 10⁻⁶ M⊙ yr⁻¹, which is considerably higher than typical outflow rates found in jets from low-mass classical T Tauri stars (Ṁ_jet ∼ 10⁻⁷-10⁻⁹ M⊙ yr⁻¹, Frank et al. 2014). This finding is consistent with Caratti o Garatti et al. (2012), who found lower mass-loss rates for more evolved sources. We find that both methods applied to determine mass-loss rates (Sects. 4.3.1 and 4.3.2) lead to similar values in most of the cases, that is, to the same order of magnitude, even though both methods have dissimilar deficiencies. However, considering the discussed caveats (Sect. 5.2), this result might be fortuitous and physically reliable only in the cases of HH34 IRS, HH111 IRS, and SVS13. The obtained mass-loss rates can be compared with estimates from the literature (Mundt et al. 1990; Bacciotti & Eislöffel 1999).

Appendix B: Level populations of neutral oxygen

Due to LS-coupling, neutral oxygen can be approximated energetically as a five-level system associated with atomic terms in ascending order of energy: ³P₂, ³P₁, ³P₀, ¹D₂, ¹S₀. However, non-LTE calculations performed in Nisini et al. (2015) show that for temperatures below T ∼ 5000 K it is sufficient to take into account only the three lowest-lying levels (Fig. B.1), since the higher levels (¹D₂, ¹S₀) are barely populated in this case. The level population n_i with i = 1, 2, 3 stands for the number density of oxygen atoms in the i-th state ([n_i] = 1 cm⁻³). The total number density of oxygen atoms is then n(O) ≈ n₁ + n₂ + n₃, and it follows that

\frac{n_2}{n(\mathrm{O})} = \left( 1 + \frac{n_1}{n_2} + \frac{n_3}{n_2} \right)^{-1} . \quad (B.1)

Assuming statistical equilibrium and neglecting radiation fields, we can solve the three rate equations (e.g. Liseau et al. 2006):

\frac{n_2}{n_3} = \frac{C_{13}\,(C_{32} + A_{32}) + C_{12}\,(C_{31} + C_{32} + A_{31} + A_{32})}{C_{12}\,C_{23} + C_{13}\,(C_{21} + C_{23} + A_{21})} , \quad (B.2)

\frac{n_1}{n_2} = \frac{(C_{31} + A_{31})\,(C_{21} + C_{23} + A_{21}) + (C_{32} + A_{32})\,(A_{21} + C_{21})}{(C_{32} + A_{32})\,(C_{12} + C_{13}) + C_{12}\,(A_{31} + C_{31})} . \quad (B.3)
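As a numerical illustration of Eqs. (B.1)-(B.3), the following minimal sketch evaluates the relative level populations for a given set of collisional rates. The C_ij values are hypothetical placeholders (in practice they depend on density and temperature), and the Einstein A coefficients are approximate literature values, so the output is illustrative only.

```python
# Minimal sketch of Eqs. (B.1)-(B.3): relative populations of the three
# lowest O I fine-structure levels (1 = 3P2, 2 = 3P1, 3 = 3P0) in
# statistical equilibrium, neglecting radiation fields.
# Einstein A values are approximate literature numbers
# (A21: 63 um line, A32: 145 um line, A31: 44 um line).
A21, A31, A32 = 8.9e-5, 1.3e-10, 1.8e-5   # s^-1, approximate

def level_populations(C):
    """C is a dict of collisional rates, e.g. C[(1, 2)] = C_12 in s^-1.
    Returns the fractional populations (n1/n(O), n2/n(O), n3/n(O))."""
    C12, C13, C21 = C[(1, 2)], C[(1, 3)], C[(2, 1)]
    C23, C31, C32 = C[(2, 3)], C[(3, 1)], C[(3, 2)]
    n2_n3 = (C13*(C32 + A32) + C12*(C31 + C32 + A31 + A32)) \
          / (C12*C23 + C13*(C21 + C23 + A21))                    # Eq. (B.2)
    n1_n2 = ((C31 + A31)*(C21 + C23 + A21) + (C32 + A32)*(A21 + C21)) \
          / ((C32 + A32)*(C12 + C13) + C12*(A31 + C31))          # Eq. (B.3)
    n2 = 1.0 / (1.0 + n1_n2 + 1.0/n2_n3)                         # Eq. (B.1)
    return n1_n2 * n2, n2, n2 / n2_n3

# Hypothetical collisional rates, for illustration only:
C = {(1, 2): 1e-5, (1, 3): 3e-6, (2, 1): 2e-5,
     (2, 3): 4e-6, (3, 1): 1e-5, (3, 2): 2e-5}
print(level_populations(C))
```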
Exploring the Effect of Operational Factors and Characterization Imperative to the Synthesis of Silver Nanoparticles
Abstract: The synthesis and application of silver nanoparticles are becoming increasingly attractive. Hence, a critical examination of the various factors needed for the synthesis of silver nanoparticles, as well as of their characterization, is imperative. In light of this, we address in this chapter the essential operational parameters (factors) and characterization techniques relevant to the synthesis of silver nanoparticles. The following characterization protocols are discussed in the context of silver nanoparticle synthesis: ultraviolet-visible spectroscopy (UV-Vis), Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), transmission electron microscopy (TEM), energy-dispersive X-ray spectroscopy (EDX), X-ray fluorescence (XRF), X-ray diffraction (XRD), thermogravimetric analysis (TGA), and X-ray photoelectron spectroscopy (XPS).
Introduction
The field of nanotechnology is gaining more attention daily from researchers because of its vast applications and efficacy. Silver nanoparticles are metallic nanoparticles 1-100 nm in size, existing either as zerovalent silver (Ag⁰) or as silver oxide, with a large ratio of surface to bulk silver atoms. Of all the metallic nanoparticles, silver nanoparticles are exceptional and the most explored by researchers globally because of their versatility, simplicity of synthesis, adaptability, morphology, and extremely large surface area, which paves the way for the coordination of a vast number of ligands [1-9]. The following methods have been identified for the synthesis of silver nanoparticles: wet chemistry, ion implantation, biological synthesis, and product functionalization. Wet chemistry involves nucleation of the nanoparticles within solution by the action of a reducing agent on the silver ion complex, forming colloidal silver. A number of wet chemistry methods have been identified, including the use of reducing sugars, citrate reduction, reduction via sodium borohydride, the silver mirror reaction, the polyol process, seed-mediated growth, and light-mediated growth [10-14]. However, reduction by borohydride is gradually being phased out because of its toxicity, which is the major reason why the biological method of synthesis has become preferable. Biological synthesis of silver nanoparticles may involve the use of bacteria, fungi, or plant extracts via a green synthesis route. This method is eco-friendly and low cost, and the silver nanoparticles formed are stable and well dispersed, with limited aggregation and good size control [15,16]. Applications of silver nanoparticles range from catalysis [5,17], water treatment [6], antimicrobial uses [8], chemotherapeutic agents and drug delivery [18], optical sensing [19], and food packaging [20] to adsorption [21]. Although there have been reports on the synthetic routes and applications of silver nanoparticles, the operational parameters imperative to the synthesis have not been reported in detail, and the cogent factors to consider in characterization have not been thoroughly explored. Therefore, this chapter surveys the operational parameters (factors) and the characterization imperative to the synthesis of silver nanoparticles.
Operational parameters for synthesis of silver nanoparticles
The synthesis of silver nanoparticles depends on several important operational parameters. Irrespective of the technique used, operational factors such as the concentration and volume ratio of the reacting substances, reaction time, temperature, and pH influence the synthesis rate and the size and shape of the nanoparticles. These parameters can be varied to control particle size, shape, and general morphology, as well as synthesis efficiency and applicability. A survey of these operational parameters is presented in this section.
Effects of concentration
The silver ion concentration strongly affects the synthesis of silver nanoparticles. This parameter is investigated to identify the amount of silver ion most suitable for the generation of the silver nanostructure. To investigate the effect of initial silver ion concentration, a range of concentrations is prepared while the other parameters are kept constant. The common practice is to vary the concentration of Ag⁺ ion from 10⁻³ to 10⁻² M. Reports from the literature have established 10⁻³ M as the most appropriate and suitable concentration, at which better surface plasmon resonance is obtained. In most wet chemistry and biological synthetic methods, an increase in silver ion concentration increases the rate at which the surface plasmon resonance is attained. Silver nanoparticles absorb within the wavelength range of 400-490 nm, forming the ideal bell-shaped band characteristic of Ag⁰ nanoparticles [19].
Studies have shown that a variation in the concentration of the metal salt used in the synthesis of nanoparticles influences the product. Ibrahim [21] synthesized silver nanoparticles using silver nitrate as the metallic salt and banana peel extract as the reductant and capping agent, and reported a variation in color from yellowish brown to light reddish brown and darker shades of reddish brown with increasing silver nitrate concentration. The surface plasmon resonance (SPR) also became more distinct with increasing concentrations of silver nitrate. These findings were corroborated by other reports in the literature [22,23]. A typical result of the effect of concentration is shown in Figure 1A.
Effect of volume ratio
The volume ratio of silver ion solution to the extract (which serves as the reducing and stabilizing agent) or to sodium borohydride plays a substantive role in the synthesis of silver nanoparticles. Reports in the literature show that in the biological/green synthesis route, excess silver ion is needed for better formation of the silver nanoparticles. In some instances a ratio of 9:1 (silver ion solution : plant extract/broth) was used, while in other reports a ratio of 4:1 was used. A typical instance is the synthesis of silver nanoparticles using T. peruviana (Figure 1B). Oluwaniyi et al. (2016) [22] investigated the influence of the volume ratio of silver nitrate to T. peruviana aqueous leaf extract while other parameters were kept constant. Volume ratios of 4:1, 3:2, 2:3, and 1:4 of 1 mM silver nitrate to T. peruviana aqueous leaf extract were used. Excellent surface plasmon resonance (SPR) was recorded by UV-Vis at the 4:1 ratio. At 4 parts of 1 mM silver nitrate solution to 1 part of T. peruviana aqueous leaf extract (4:1), the leaf extract bioreduced and stabilized the nanoparticles, with the plasmon resonance at 460 nm. The other volume ratios (3:2, 2:3, and 1:4) did not give the distinct characteristic SPR of silver nanoparticles in the visible region of the UV-Vis spectrum. In the case of the wet chemistry method using sodium borohydride (NaBH₄) as the reducing agent, however, an excess volume of borohydride is needed for better formation of silver nanoparticles, with better dispersion and low agglomeration. Typically, the ratio of NaBH₄ to silver ion solution is 4:1 or 5:1 [25,26].
Effect of contact time and temperature
Another important factor influencing the growth of silver nanoparticles is the contact time, also known as the reaction time (Figure 1C). Its effect is studied by varying the time allowed for the formation of silver nanoparticles. Generally, the change in color to yellow or brown is evidence of the growth of silver nanoparticles. This is monitored with a UV-Vis spectrophotometer until the maximum absorption wavelength is reached with excellent surface plasmon resonance (SPR). The intensity of the peak is a function of the contact time and therefore increases with time. Contact time is one of the parameters that controls the size of silver nanoparticles, as reflected in the blue shift of the absorption peaks. Between 0 and 20 minutes (the early stage), the SPR band is broadened because of the slow conversion of silver ion (Ag⁺) to zerovalent silver (Ag⁰) nanoparticles. Increasing the contact time enhances the formation of an excellent plasmon band because a large amount of Ag⁺ has been converted to Ag⁰. However, a further increase in contact time leads to a noticeable decrease in the absorption intensity and wavelength, indicating some aggregation of silver nanoparticles and a decrease in particle size [17, 19-23, 25, 26].
Temperature is another essential factor to consider in the synthesis of silver nanoparticles because it controls the reaction kinetics of the synthetic process. An increase in temperature is known to increase the rate of reaction because it increases the effective collisions and the frequency factor of the reacting species. Literature reports show that an increase in temperature leads to an increase in the intensity of the plasmon band as a result of a bathochromic shift, accompanied by a decrease in the mean diameter of the silver nanoparticles. At the beginning of the reaction, the synthesis of AgNPs may be rapid, but this does not indicate the optimum temperature of the system, because low temperature readily limits the ability of the reducing and stabilizing agent [27,28].
Effect of pH
Many factors influence the reduction of silver ion to AgNPs. The pH, as one of the operational parameters, plays a major role because it influences the chemistry of silver nanoparticle synthesis (Figure 1D). Its effect is studied by pH adjustment using phosphoric or hydrochloric acid and sodium hydroxide. In practice, during green synthesis, the extract pH is adjusted from pH 2 to 11 and the reduction process is monitored by UV-Vis spectrophotometry. This change in the chemical nature of the extract affects its performance as well as the rate of reduction. In a study carried out by Heydari and Rashidipour on the green synthesis of silver nanoparticles using extract of oak fruit hull, the rate of AgNP synthesis increased with increasing pH up to pH = 9 and then decreased [29]. Moreover, an investigation by Kokila et al. on the biosynthesis of silver nanoparticles from Cavendish banana peel extract and their antibacterial and free radical scavenging assay showed that the formation of AgNPs depends strongly on the pH of the reaction medium. The results confirmed that the formation of silver nanoparticles is more favorable in basic than in acidic medium, because the absorbance values increase with increasing pH. This can be attributed to the ionization of the functional groups at higher pH, while the slow rate of reduction observed in acidic medium can be attributed to electrostatic repulsion of anions present in the reaction mixture. This is in accordance with findings in the literature [30-34].
Characterization
One of the main problems confronting scientists is understanding the properties a novel material displays. This can only be achieved by knowing and determining the structure of the new material through characterization. There is now an established and well-accepted concept that properties are driven by structure. This is acknowledged in chemistry and in all fields where chemistry plays a primary role, such as biochemistry, biology, environmental science, engineering, medicine, polymer science, and nutrition. The properties of a nano/biomaterial fall into three groups: chemical (e.g., equilibrium position, reaction rates), physical (e.g., melting/boiling points, solubility, spectra, symmetry), and biological (e.g., color, drug action, odor, taste, toxicity). These properties arise from structural features that strongly affect the macroscopic character of the material. In this structure-driven-properties concept, the structure of the novel material signifies its composition at each level of complexity, ranging from the simple molecular formula (giving the ratio the elements present bear to each other) to the exact positions and locations of all atoms in the molecules, that is, the three-dimensional electron density distribution [35]. This section of the chapter therefore succinctly states the relevance of the various characterization techniques to the synthesis of silver nanoparticles.
UV-Vis spectroscopy
Ultraviolet-visible spectroscopy (UV-Vis) remains the most useful characterization technique for the synthesis of silver nanoparticles [25-28]. In principle, the absorption of light occurs in the ultraviolet-visible region of the electromagnetic spectrum, where atoms and molecules undergo the electronic transitions π-π*, n-π*, σ-σ*, and n-σ*. Energy in the form of ultraviolet or visible light is absorbed by molecules containing π-electrons or non-bonding electrons (n-electrons), exciting these electrons to higher anti-bonding molecular orbitals. The wavelength absorbed depends on how easily the electrons are excited: the more easily excited the electrons, the longer the wavelength of light that can be absorbed. Absorption in the visible range directly affects the perceived color of the chemicals involved. In silver nanoparticle synthesis, UV-Vis provides vivid information on the surface plasmon resonance (SPR) at the wavelength of maximum absorption. The surface plasmon resonance arises from the free electrons of the conduction and valence bands, which lie close to each other in metal nanoparticles; it results from the collective oscillation of the free electrons of the silver nanoparticles in resonance with the light wave [36,37]. All the experimental operational parameters, namely the effects of initial concentration, contact time, temperature, pH, and volume ratio, are monitored using the UV-Vis spectrophotometric technique. The information obtained from the SPR absorption spectrum gives a clue to the shape of the silver nanoparticles. It is important that the interpretation of the UV-Vis measurement corroborates the TEM measurement [38].
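As a minimal sketch of how such UV-Vis monitoring can be automated, the snippet below locates the SPR maximum in a spectrum and checks whether it falls in the 400-490 nm window cited above. The wavelength grid and the synthetic absorbance band are hypothetical stand-ins for real spectrophotometer output.

```python
import numpy as np

# Minimal sketch: locating the surface plasmon resonance (SPR) maximum in a
# UV-Vis spectrum and checking it falls in the ~400-490 nm window typical of
# Ag(0) nanoparticles. 'wavelength' and 'absorbance' are hypothetical arrays
# standing in for data exported by a spectrophotometer.
wavelength = np.arange(300, 701, 1.0)                       # nm
absorbance = 0.8 * np.exp(-((wavelength - 430) / 40.0)**2)  # synthetic band

i_max = int(np.argmax(absorbance))
lam_max, a_max = wavelength[i_max], absorbance[i_max]
print(f"SPR peak: {lam_max:.0f} nm (A = {a_max:.2f})")
if 400 <= lam_max <= 490:
    print("Peak lies in the range commonly reported for Ag(0) nanoparticles.")
```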
Fourier transform infrared spectroscopy (FTIR)
The nature, structure, and physicochemical properties of silver nanoparticles (AgNPs) are central to their activity, behavior, biodistribution, and safety. Therefore, characterization of AgNPs is essential for the assessment of the functional features and characteristics of the synthesized nanoparticles.
FTIR measurements are usually carried out to identify the possible biomolecules involved in the synthesis of nanoparticles and to determine their roles in reducing and stabilizing the nanoparticles. This spectroscopic method is employed to detect and distinguish small absorption bands (changes on the order of 10⁻³) of functional groups covalently grafted onto silver, or functionally active sites characteristic of AgNPs. The method is precise, easily reproducible, and offers a favorable signal-to-noise ratio [39-41]. A major advantage of FTIR spectrometers over other methods for the characterization of AgNPs is that the technique is non-invasive, data are collected rapidly, signals are strong, the signal-to-noise ratio is large, and very little sample heating occurs [42].
More recently, attenuated total reflection (ATR)-FTIR spectroscopy, which offers more advanced measurement than the conventional FTIR method, has been developed [43]. Using ATR-FTIR, the chemical properties of polymer and nanoparticle surfaces can readily be established, and sample preparation is very simple compared with conventional FTIR [44]. FTIR is therefore an appropriate, indispensable, non-invasive, affordable, and hands-on technique for determining the role of biological molecules in the reduction of silver nitrate to silver.
Identification of the functional groups or biomolecules responsible for the reduction of silver ions in silver nanoparticles can be achieved by Fourier transform infrared (FTIR) spectroscopy. This is done by comparing the intense bands with standard values. A proportionate shift in a band after treatment with silver nitrate is a likely indication of the participation of the corresponding functional group in the process of nanoparticle synthesis [45].
Scanning electron microscopy (SEM) and transmission electron microscopy (TEM)
The significant attributes of synthesized silver nanoparticles documented to have the greatest consequence for their behavior and toxicity encompass particle size, shape, surface properties, aggregation state, solubility, structure, and chemical make-up. The characterization of silver nanoparticles is necessary for proper insight into their formation, synthesis, and utilization in various fields, including agriculture, medicine, industry, and the environment [46,47]. The validation and confirmation of synthesized nanoparticles have been carried out using various techniques, but transmission electron microscopy (TEM) and scanning electron microscopy (SEM) are the most important methods for this purpose. The significance of microscopic techniques in the characterization of silver nanoparticles cannot be overemphasized, because they give clear insight into size, size distribution, and other quantifiable properties. The value of electron microscopy in the analysis of synthesized silver nanoparticles rests on its ability to reveal the real structure of the particles at the nanometer scale, from conventional bright-field images and intermediate-resolution dark-field techniques to high-resolution atomic images [48].
Scanning electron microscopy (SEM)
The SEM produces images by scanning an electron beam over the surface of a sample, confirming its structure as well as the topography and elemental composition of the material [48]. During SEM analysis, the electrons possess a large amount of kinetic energy, which is dissipated as numerous signals when they interact with atoms at the sample surface. The generated signals are secondary electrons, backscattered electrons, characteristic X-rays, cathodoluminescence, specimen current, and transmitted electrons, which can generate high-resolution, magnified images of a synthesized silver nanoparticle, revealing features that vary from 1 to 5 nm in size. The appropriate signals are collected depending upon the mode of operation of the instrument. The wide field of view in SEM is linked to the fact that it produces a large depth of field. Many researchers have utilized SEM for the characterization of various synthesized silver nanoparticles, including polyhedral [49], flake-flower [50], hexagonal [51], isotropic [52], irregular [53], triangular [54], anisotropic [55], rod-like [56], and pentagonal [57] structures.
Transmission electron microscopy (TEM)
TEM is a very high resolution microscopy method that generates an image as well as a diffraction pattern reflecting the atomic-scale size and shape of a material, by focusing an electron beam that penetrates the given material and interacts with its microstructure. A major difference between TEM and SEM is that TEM can detect crystallographic defects, line defects, and planar defects in the microstructure of synthesized silver nanoparticles. Another major difference is that TEM can determine the elemental composition at the nano level [58,59]. There are different forms of TEM, including high-resolution transmission electron microscopy (HRTEM), scanning transmission electron microscopy (STEM), and analytical transmission electron microscopy (ATEM). TEM also offers better imaging, diffraction, and chemical analysis capabilities than SEM. TEM can detect sizes down to 0.2 nm, and it produces better resolved images because it utilizes shorter-wavelength electrons than SEM. Finally, TEM can shift from diffraction to imaging by changing the excitation of the lenses following the objective lens. TEM can be used to capture the image of synthesized silver particles in the plane of the fluorescent screen as well as the diffraction pattern from the particles. The nanoparticle size and particle size distribution of the synthesized nanoparticles can be determined and evaluated by TEM and high-resolution microscopy; moreover, ImageJ software can be used to plot a histogram of the measured sizes of the different nanoparticles (see the sketch below). Demerits of TEM include the required high vacuum, thin sample sections, and time-consuming sample preparation [60]. Further insight into, and details of, the morphology of AgNPs are provided by TEM. The most common shape of silver nanoparticles in TEM images is spherical [61].
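A minimal sketch of the size-distribution step is given below: it computes summary statistics and a simple text histogram from a list of particle diameters, as one might export from ImageJ measurements. The diameter values are hypothetical examples.

```python
import numpy as np

# Minimal sketch: summary statistics and a text histogram for particle
# diameters (nm) measured from TEM images (e.g. with ImageJ). The diameters
# below are hypothetical example values.
diameters = np.array([12.1, 14.3, 9.8, 15.6, 13.0,
                      11.4, 16.2, 10.7, 13.9, 12.8])

print(f"n = {diameters.size}, mean = {diameters.mean():.1f} nm, "
      f"std = {diameters.std(ddof=1):.1f} nm")

# Simple text histogram with 4 equal-width bins
counts, edges = np.histogram(diameters, bins=4)
for c, lo, hi in zip(counts, edges[:-1], edges[1:]):
    print(f"{lo:5.1f}-{hi:5.1f} nm | {'#' * int(c)}")
```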
Energy-dispersive X-ray spectroscopy (EDX) and X-ray fluorescence (XRF)
The elemental constituents and composition of nanomaterials can be determined by EDX and XRF. This section explores the principles and relevance of these analytical techniques in nano-research and, most especially, in silver nanoparticle studies.
Energy-dispersive X-ray spectroscopy (EDX)
Energy-dispersive X-ray spectroscopy (EDX) is an analytical technique that gives information on the surface atomic distribution and the chemical elemental composition [62-65]. In most cases, EDX is coupled with SEM. EDX is used to determine the elemental composition of silver nanoparticles.
In practice, EDX relies on the interaction of a source of X-ray excitation with a sample. Its characterization capability is due in large part to the fundamental principle that each element has a unique atomic structure, giving a unique set of peaks in its electromagnetic emission spectrum. To elicit the peaks of an element, a high-energy beam of electrons or X-rays is directed at the sample to be analyzed. The incident beam excites electrons in an inner shell (lower energy level), creating electron holes that electrons from an outer shell (higher energy level) fill. The difference between the higher and lower energy levels is released in the form of an X-ray. The number and energy of the X-rays emitted from the silver nanoparticles can be measured by an energy-dispersive spectrometer. Electron beam excitation is used in electron microscopes, scanning electron microscopes (SEM), and scanning transmission electron microscopes (STEM); X-ray beam excitation is used in X-ray fluorescence (XRF) spectrometers. A detector converts the X-ray energy into voltage signals; this information is sent to a pulse processor, which measures the signals and passes them on to an analyzer for data display and analysis [66,67]. Most researchers utilize EDX rather than XRF for the characterization of silver nanoparticles. Reports in the literature clearly show that the AgNP signal is detected at 3.0 keV [19,22,23,61,68-70].
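As a minimal sketch of this identification step, the snippet below finds the strongest peak in an EDX spectrum and checks whether it lies near the 3.0 keV value reported for AgNPs. The energy grid and counts are synthetic placeholders for real detector output.

```python
import numpy as np

# Minimal sketch: checking for the characteristic Ag signal near 3.0 keV
# (the value reported for AgNPs in the cited studies) in an EDX spectrum.
# 'energy' (keV) and 'counts' are hypothetical stand-ins for detector output.
energy = np.linspace(0, 10, 1001)                        # keV
counts = 500 * np.exp(-((energy - 2.98) / 0.06)**2) + 40  # synthetic peak + bg

peak_keV = energy[np.argmax(counts)]
print(f"Strongest peak at {peak_keV:.2f} keV")
if abs(peak_keV - 3.0) < 0.1:
    print("Consistent with the Ag signal reported at ~3.0 keV.")
```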
X-ray fluorescence (XRF)
X-ray fluorescence is the emission of characteristic "secondary" (or fluorescent) X-rays from a material that has been excited by bombardment with high-energy X-rays or gamma rays. XRF technology provides one of the simplest, most accurate, and most economical analytical methods for determining the chemical composition of many types of materials, particularly in the investigation of metals, glass, ceramics, and building materials, and for research in geochemistry, forensic science, and archeology. It is non-destructive and reliable, requires no, or very little, sample preparation, and is suitable for solid, liquid, and powdered samples. It can be used for a wide range of elements, provides detection limits at the sub-ppm level, and can also measure concentrations of up to 100% easily and simultaneously [71].
In principle, an inner-shell electron is excited by an incident photon in the X-ray region. During the de-excitation process, an electron moves from a higher energy level to fill the vacancy. The energy difference between the two shells appears as an X-ray emitted by the atom. The X-ray spectrum acquired during this process reveals a number of characteristic peaks. The energies of the peaks identify the elements present in the sample (qualitative analysis), while the peak intensities provide the relative or absolute elemental concentrations (semi-quantitative or quantitative analysis) [72]. A typical XRF spectroscopy arrangement includes a source of primary radiation (usually a radioisotope or an X-ray tube) and equipment for detecting the secondary X-rays. When materials are exposed to short-wavelength X-rays or gamma rays, ionization of their component atoms may take place. If an X-ray beam is used to excite atoms in a sample, secondary fluorescent X-rays are emitted as electrons near the nucleus revert to their original states [73].
In silver nanoparticle studies, XRF can be employed for elemental determination of the composition of the nanoparticles, although it is used less frequently than EDX. The X-ray fluorescence technique is of special interest for the analysis of silver nanoparticles because it is not only fast, sensitive, and capable of simultaneous multi-element analysis, but also allows the sample to be quantitatively analyzed without damage. It is therefore mostly used to determine the presence of silver and other elements in the compound. Specifically, the silver nanoparticle signal is detected at 3.0 keV, the characteristic peak reported by different researchers [74].
X-ray diffraction (XRD)
X-ray diffraction (XRD), among other techniques (such as FTIR, UV-Vis, TEM, SEM, and EDX), is widely used for structural characterization and plays a main part in identifying the structure of a (nano/bio)material or particle. XRD is a popular analytical technique, employed in the analysis of both molecular and crystal structures, qualitative detection of elements and their compounds, quantitative resolution of chemical species, quantification of the degree of crystallinity, isomorphous substitutions, stacking faults, polymorphism, particle sizes, in situ studies at process temperatures and in reactive atmospheres, and phase identification and quantification [75,76].
XRD is a handy and popular technique for characterizing silver nanoparticles and has grown into a common method for evaluating these nanoparticles. Some of the main structural characteristics associated with it are the measurement of the degree of crystallinity, phase identification, super-lattice generation, impurity detection, characterization of vacancies in a material, and the development of novel materials [77]. The crystalline structure or nature of biosynthesized silver nanoparticles is determined by XRD analysis and patterns, which are also used to confirm structural information. Many authors have reported a similar diffraction profile for most AgNPs, with XRD peaks at 2θ of 38.18°, 44.25°, 64.72°, and 77.40°, indexed to the (111), (200), (220), and (311) crystallographic planes of Bragg reflections of the face-centered cubic structure of silver crystals; these suitably match the standard diffraction data reported for silver by the Joint Committee on Powder Diffraction Standards. The average crystallite size of the silver nanoparticles is estimated using the Debye-Scherrer equation (Eq. 1) [45,78]:

d = Kλ / (β cos θ),  (1)

where d is the particle size, K ≈ 0.9 is the shape factor, λ is the wavelength of the X-ray radiation (1.5406 Å), β is the full width at half maximum (FWHM) of the peak (in radians), and θ is half the Bragg angle 2θ. The precision, significance, sensitivity, and ease of use of XRD underpin its importance in AgNP studies. However, there are some limitations to this analysis: it can only identify an unknown material that is homogeneous and single phase; a standard reference file is needed, especially for inorganic compounds (d-spacings, hkl indices); peak overlap often occurs in XRD and worsens for high-angle reflections; and, for unit cell determination by XRD, indexing of patterns for non-isometric crystal systems is complicated.
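A minimal sketch of the Debye-Scherrer estimate in Eq. (1) is given below. The shape factor K = 0.9 and the example FWHM are assumptions, while the 38.18° peak position and the 1.5406 Å wavelength come from the text above.

```python
import numpy as np

# Minimal sketch of the Debye-Scherrer estimate (Eq. 1):
#     d = K * lambda / (beta * cos(theta)),
# with Cu K-alpha radiation (lambda = 1.5406 Å) and shape factor K ~ 0.9
# (assumed). beta is entered as the FWHM in degrees and converted to radians;
# two_theta is the diffraction angle in degrees, so theta = two_theta / 2.

def scherrer_size(two_theta_deg: float, fwhm_deg: float,
                  wavelength_A: float = 1.5406, K: float = 0.9) -> float:
    """Crystallite size in Å estimated from one diffraction peak."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return K * wavelength_A / (beta * np.cos(theta))

# Example: the (111) silver reflection at 2-theta = 38.18 deg with a
# hypothetical FWHM of 0.45 deg:
d = scherrer_size(38.18, 0.45)
print(f"Crystallite size ~ {d:.0f} Å ({d / 10:.1f} nm)")
```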
3.6. Thermogravimetric analysis (TGA) and X-ray photoelectron spectroscopy (XPS)

The advancement of nanotechnology is rapid and holds the potential to completely redefine applications of materials science in the near future. In order to maximize the prospects of nanotechnology for diverse applications, the characterization of nanomaterials and/or nanoparticles has become imperative. Among the several techniques available for the characterization of nanomaterials are thermogravimetric analysis (TGA) and X-ray photoelectron spectroscopy (XPS).
Thermogravimetric analysis
Thermogravimetric analysis (TGA) is an analytical technique for measuring changes in the mass of a material that occur in response to programmed temperature changes [79]. TGA represents a branch of thermal analysis examining the mass changes of a sample as a function of temperature (in the scanning mode) or as a function of time (in the isothermal mode). In TGA, changes in the physical and chemical properties of materials are measured as a function of increasing temperature (with constant heating rate), or as a function of time (with constant temperature and/or constant mass loss). The changes in the mass of a sample due to various thermal events (desorption, absorption, sublimation, vaporization, oxidation, reduction, and decomposition) can be studied while the sample is subjected to a program of temperature change. TGA has found applications in the analysis of volatile products and gaseous products lost during reactions in thermoplastics, thermosets, elastomers, composites, films, fibers, coatings, and paints, among others. Further practical applications include determining the composition and thermal stability of materials, evaluating the kinetics of thermally stimulated processes, predicting lifetimes, and studying reactions of materials with gases. There are different types of TGA, ranging from isothermal to dynamic TGA.
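As a minimal sketch of a routine TGA calculation, the snippet below evaluates the percent mass loss over a chosen temperature window from a synthetic TGA trace. The trace and the 200-300 °C window are illustrative assumptions, anticipating the AgNP results discussed next.

```python
import numpy as np

# Minimal sketch: percent mass loss over a temperature window from a TGA
# trace. 'temp' (deg C) and 'mass' (mg) are hypothetical instrument outputs;
# the 200-300 C window mirrors the region discussed below for AgNPs.
temp = np.linspace(25, 600, 576)
mass = 10.0 - 1.46 / (1.0 + np.exp(-(temp - 250) / 15.0))  # synthetic trace

def mass_loss_percent(temp, mass, t_lo, t_hi):
    """Mass loss between t_lo and t_hi as a percent of the initial mass."""
    m_lo = np.interp(t_lo, temp, mass)
    m_hi = np.interp(t_hi, temp, mass)
    return 100.0 * (m_lo - m_hi) / mass[0]

print(f"Loss 200-300 C: {mass_loss_percent(temp, mass, 200, 300):.1f} %")
```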
3.6.1.1. Thermal properties of silver nanoparticles

In a recent investigation, the thermal behavior of silver nanoparticles was monitored by TGA (Khan et al. [80]); the authors reported that the dominant weight loss in silver nanoparticles occurred in the temperature region between 200 and 300 °C, with almost no weight loss below 200 °C or above 300 °C. The weight loss was attributed to the evaporation of water and organic components. Overall, the TGA results showed a loss of 14.58% up to 300 °C. In the same study, the differential thermal analysis (DTA) plot displayed an intense exothermic peak between 200 and 300 °C, which could mainly be attributed to crystallization of the silver nanoparticles. The DTA profiles suggest that complete thermal decomposition and crystallization of the sample occur simultaneously. Taken together, the TGA/DTA study shows that the dominant weight loss occurs between 200 and 300 °C and that the reaction is exothermic [80].
In a separate study, the low-temperature sintering behavior of Ag nanoparticles was investigated. The silver nanoparticles were shown to exhibit obvious sintering behavior at temperatures (∼150 °C) significantly lower than the melting point of silver (T_m = 960 °C), while coalescence of the silver nanoparticles was observed upon sintering the particles at 150, 200, and 250 °C. The thermal profile of the nanoparticles was examined by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). Shrinkage of the silver nanoparticle compacts during the sintering process was observed by thermomechanical analysis (TMA). Sintering of the nanoparticle pellet led to a significant increase in density and electrical conductivity. The size of the sintered particles and the crystallite size of the particles increased with increasing sintering temperature [81].
X-ray photoelectron spectroscopy (XPS)
As the demand for high performance materials increases, so does the importance of surface engineering. Typically, the surface of a material represents the platform of interaction with the external environment and other materials. In the case of nanotechnology, surface chemistry of nanomaterials and/or nanoparticles is key to exploring the prospects of these particles for diverse applications. Surface modification can be used to alter or improve the properties of nanomaterials and/or nanoparticles, and so surface analysis becomes a technique for probing the surface chemistry of these particles. More so, nanotechnology approaches include surface modification of nanomaterials in order to suit specific purposes. Therefore, it becomes expedient to understand the physical and chemical interactions occurring at the surface, or at the interfaces of the nanomaterial's layers.
X-ray photoelectron spectroscopy (XPS), also known as electron spectroscopy for chemical analysis (ESCA), is a widely accepted technique for surface analysis. This is probably because XPS can be applied to a broad range of materials and provides valuable quantitative and chemical-state information from the surface of the material being studied. The average depth of analysis for an XPS measurement is approximately 5 nm. XPS measurement involves irradiating the surface of the sample material with monochromatic Al Kα X-rays. This excitation causes photoelectrons to be emitted from the sample surface, and an electron energy analyzer is used to measure the energy of the emitted photoelectrons. From the binding energy and intensity of a photoelectron peak, the elemental identity, chemical state, and quantity of a detected element can be determined.
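A minimal sketch of this conversion is shown below: the measured photoelectron kinetic energy is turned into a binding energy via BE = hν − KE − φ for an Al Kα source (hν = 1486.6 eV). The spectrometer work function φ used here is a hypothetical example value; it is instrument specific in practice.

```python
# Minimal sketch: converting a measured photoelectron kinetic energy to a
# binding energy, BE = h*nu - KE - phi, for a monochromatic Al K-alpha source
# (h*nu = 1486.6 eV). The spectrometer work function phi is instrument
# specific; the value below is a hypothetical example.
H_NU_AL_KALPHA = 1486.6   # eV, Al K-alpha photon energy
PHI_SPECTROMETER = 4.5    # eV, hypothetical work function

def binding_energy(kinetic_energy_eV: float) -> float:
    """Binding energy (eV) from the measured kinetic energy (eV)."""
    return H_NU_AL_KALPHA - kinetic_energy_eV - PHI_SPECTROMETER

# Example: a photoelectron detected at KE = 1113.8 eV
be = binding_energy(1113.8)
print(f"Binding energy = {be:.1f} eV")  # ~368.3 eV, the Ag 3d5/2 region
```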
3.6.2.1. X-ray photoelectron spectroscopy (XPS) of silver nanoparticles

Several investigations have reported the use of the XPS technique to characterize the surface chemistry of silver nanoparticles. Larrude et al. [82] characterized a silver-multiwalled carbon nanotube (Ag-MWCNT) nanocomposite using XPS. Their spectrum revealed the dominance of silver and carbon, with small amounts of sodium and sulfur in the sample. According to the authors, the presence of Na and S was attributable to the use of sodium dodecyl sulfate (SDS) for the MWCNT dispersion. The study also demonstrated an increased oxygen content compared with a pure MWCNT sample. However, there was no evident relationship between the oxygen and silver contents, because the O/C atomic ratio did not change significantly between the different silver concentrations. Furthermore, the spectrum of the Ag 3d core level of the Ag-decorated MWCNTs confirmed the presence of metallic silver, because the 3d5/2 component occurred at a binding energy of 368.3 eV, which is characteristic of the metallic Ag(0) oxidation state [82].
In a separate study involving surface chemical characterization of a silver nanoparticle thin film using an XPS instrument equipped with a monochromatic Al Kα X-ray source [83], the XPS spectrum and the high-resolution XPS windows of the core-level atoms of the silver nanoparticles capped with carboxylate/1-dodecylamine revealed the presence of Ag, C, O, and N atoms according to their binding energies. The most prominent signal in the XPS spectrum was the Ag 3d doublet, consisting of two spin-orbit components at 368.8 eV (Ag 3d5/2) and 374.8 eV (Ag 3d3/2), separated by 6.0 eV. Moreover, the deconvolution of the Ag 3d doublet revealed an asymmetric peak shape. These two characteristics indicated the existence of Ag in metallic form.
Furthermore, another investigation reported that the XPS analysis of silver behenate was consistent with the theoretical C:O:Ag atomic composition. The report noted brown discoloration of the silver behenate powder within a few seconds of exposure to monochromatic X-rays, which increased significantly with time. Noticeable changes to the XPS spectra and the observed surface composition began to occur after about 30 minutes of X-ray exposure, while prolonged exposure to monochromatic X-rays resulted in significant changes in the C 1s, O 1s, and Ag 3d peak shapes and positions. The changes in the XPS spectra indicated that exposure to Al Kα X-rays resulted in the formation of silver metal particles and decomposition of the carboxylic acid portion of the molecule to hydrocarbon species. Thermal reduction of silver behenate powder produced similar changes in the XPS spectra [84].
Conclusion
This chapter has examined the operational parameters imperative to the synthesis of silver nanoparticles. The effects of concentration, volume ratio, contact time, temperature, and pH all influence the synthesis, and the conditions attached to each have been identified. Chief among these factors is pH, which affects the chemistry of silver nanoparticle synthesis. Irrespective of the synthetic route and conditions, the characterization techniques germane to the study of silver nanoparticles have also been critically examined. UV-Vis spectroscopy helps in determining the surface plasmon resonance absorption band, which is vital in nanoparticle studies. Functional groups are determined by FTIR; morphology and size by SEM and TEM; atomic distributions and relative abundances by EDX and XRF, respectively; crystallinity by XRD; surface chemistry by X-ray photoelectron spectroscopy (XPS); and silver content by thermogravimetric analysis (TGA). It can be concluded that relevant research in nanoparticle studies relies on both the operational conditions and excellent characterization.
CENP-A nucleosomes localize to transcription factor hotspots and subtelomeric sites in human cancer cells
The histone H3 variant CENP-A is normally tightly regulated to ensure only one centromere exists per chromosome. Native CENP-A is often found overexpressed in human cancer cells and a range of human tumors. Consequently, CENP-A misregulation is thought to contribute to genome instability in human cancers. However, the consequences of such overexpression have not been directly elucidated in human cancer cells. To investigate native CENP-A overexpression, we sought to uncover CENP-A-associated defects in human cells. We confirm that CENP-A is innately overexpressed in several colorectal cancer cell lines. In such cells, we report that a subset of structurally distinct CENP-A-containing nucleosomes associate with canonical histone H3, and with the transcription-coupled chaperones ATRX and DAXX. Furthermore, such hybrid CENP-A nucleosomes localize to DNase I hypersensitive and transcription factor binding sites, including at promoters of genes across the human genome. A distinct class of CENP-A hotspots also accumulates at subtelomeric chromosomal locations, including at the 8q24/Myc region long-associated with genomic instability. We show this 8q24 accumulation of CENP-A can also be seen in early stage primary colorectal tumors. Our data demonstrate that excess CENP-A accumulates at noncentromeric locations in the human cancer genome. These findings suggest that ectopic CENP-A nucleosomes could alter the state of the chromatin fiber, potentially impacting gene regulation and chromosome fragility.
Background
Hallmarks of the cancer state include large-scale gene expression changes [1], chromosomal rearrangement, and aneuploidy [2-6]. While the mechanistic basis for these events remains under investigation, such events have been attributed to DNA methylation changes [1], telomere disruption [7], repair and DNA damage pathway protein defects [8], replication distress [9], and misregulation of the centromere-specific histone H3 variant, CENP-A [10-13]. CENP-A's normal function is to serve as the sole structural marker for centromeric chromatin identity [14], by directly associating with a triad of inner kinetochore proteins, CENP-C, CENP-N, and CENP-B [15], which in turn recruit the rest of the kinetochore and microtubules to ensure faithful genome segregation during mitosis [16]. Consequently, mislocalization of CENP-A to noncentromere regions is believed to be a prognostic marker for aneuploidies driven by chromosomal breakage and rearrangements emanating from dicentric chromosomes [10,11,13,17,18]. Indeed, artificial overexpression studies in flies demonstrate that under certain conditions, CENP-A can seed neocentromeres [17,19]. However, when moderately overexpressed to levels similar to those previously seen in cancer cells [10,11], CENP-A does not easily seed neocentromeres [20], but rather expands centromere domains [21]. In related studies, overexpressed yeast or Drosophila CENP-A accumulates in the euchromatic arms, where it is continually targeted for proteolysis and subsequently degraded [22,23]. Indeed, a recent study confirms this occurs also in human HeLa cells, wherein forced artificial overexpression of tagged CENP-A results in accumulation at ectopic locations [24]. However, although CENP-A mRNA is innately overexpressed several fold in a number of human solid tumors, including colorectal tumors [10,11,18,25-27], its behavior in cancer cells has not been investigated.
To elucidate consequences associated with CENP-A misregulation, we examined CENP-A mRNA and protein levels, partners, structure, and global nucleosome occupancy in human primary normal and colorectal cancer cells, as well as in primary tumors. We report that CENP-A is overexpressed at the mRNA and protein level in some human colorectal cancers. This excess CENP-A partners with histone H3 and associates with the transcriptionally coupled chaperones ATRX and DAXX in colorectal cancer cell lines. This distinct class of noncentromeric CENP-A nucleosomes forms a stable octameric nucleosomal species, as detected by atomic force microscopy (AFM) and confirmed by high-resolution DNA analysis, which demonstrates binding of 150 to 170 bp of DNA. These distinctive CENP-A nucleosomes localize to open regions of the genome as mapped by DNase I hypersensitivity (DHS), such as promoters of genes, and contain transcription factor binding motifs. In addition, we observe a correlation between large clusters of CENP-A and subtelomeric locations, including the fragile region at 8q24. In this 8q24 region, we show that CENP-A is bound to CENP-C, a phenomenon that also occurs in early human colorectal tumors but not in normal human colon cells. Taken together, our data uncover a new role for a classical histone variant in human cancer cell lines.
Results
CENP-A is overexpressed, and ectopic CENP-A nucleosomes associate with H3, ATRX, and DAXX in colorectal cancer cells

Early reports of innate overexpression of CENP-A in colorectal tumors date back well over a decade [10]. Thus, we focused on well-characterized colorectal cancer cell lines derived from different stages of tumor progression, such as SW480, HT29, DLD-1, and HCT116, comparing them to normal colon cells. We also included HeLa cells, since they have long been used as a model for human centromere biology [28,29]. We first examined total nuclear CENP-A protein across all the cell lines, using a sensitive fluorescence-based quantitative western blotting system (Figure 1A). Relative to normal colon cells, and standardized against internal amounts of the core histone H4, CENP-A protein levels were slightly elevated in HeLa cells, lower in DLD-1, 1.35-fold overexpressed in HT29, and almost twofold overexpressed in the cell line SW480 (Figure 1A, lower graph; Table 1 lists fold-values of all proteins tested in Figure 1A). To test whether the excess CENP-A protein in SW480 derived from excess mRNA, we next examined total CENP-A mRNA levels. Indeed, semi-quantitative PCR analysis indicated that CENP-A mRNA is almost fourfold overexpressed in SW480 cells compared to normal colon cells (Figure 1B; the lower panel depicts a graphical representation of 4 replicates). We next examined levels of the CENP-A chaperone Holliday junction recognition protein (HJURP), which is required for accurate loading of CENP-A at centromeres [30-32]. Surprisingly, HJURP levels do not follow those of CENP-A; the cell line possessing the most CENP-A (SW480) has normal amounts of HJURP (Table 1). This finding intrigued us because, under normal conditions, HJURP restricts CENP-A loading to centromeric nucleosomes. We wondered whether histone H3 variant chaperones were also misregulated. We assessed the transcription-coupled histone chaperones ATRX and DAXX, and observed that both are overexpressed in most cancer cell lines relative to normal colon, ranging from three- to twentyfold excess protein (Figure 1A, Table 1). Thus, these data demonstrate that CENP-A gene expression is innately misregulated in some colorectal cancer cells. To examine the consequences of variable amounts of this key histone variant, we chose to focus the rest of the study on three cell lines, spanning normal (normal colon), moderate (HeLa), and high (SW480) levels of CENP-A protein.
Histone variants such as H3.3/H2A.Z, which use chaperones like ATRX/DAXX, are generally excluded from centromeric CENP-A nucleosomes, and are found either at pericentric regions [33] or at promoters of genes [34]. We wanted to assess potential co-occupancy of H3/H2A.Z and CENP-A when CENP-A and ATRX are misregulated, as well as potential sites in the genome where such co-associations might occur. To enrich for potential ectopic CENP-A nucleosomes, which might be at low abundance across the genome, we first devised a scheme to enrich noncentromeric CENP-A (Figure 1C shows a brief outline of the method). We used moderately micrococcal nuclease (MNase)-digested nucleosomal arrays (Figure 1D) from normal colon, HeLa, and SW480 colorectal cells. From these inputs, we sought to enrich centromere-specific CENP-A nucleosomes (henceforth referred to as 'centromeric CENP-A') using native immunoprecipitation (IP) for the inner kinetochore protein CENP-B. CENP-B specifically binds a motif found in most centromeric alpha satellite DNA at every alternate CENP-A nucleosome in active centromeres [35]. Gentle sequential native CENP-A IP was applied to nucleosomes left unbound (UB) from this first step in order to enrich for centromere-depleted CENP-A nucleosomes (henceforth referred to as 'ectopic CENP-A'). While this scheme does not allow for absolute elimination of centromeric CENP-A nucleosomes (because CENP-A can localize to autosomal alpha satellite regions lacking the CENP-B box), the ectopic enrichment was sufficient to examine potential differences in composition and structure using biochemical and nanomolecular tools [36]. We also performed mock IPs to ensure that background levels of histones sticking to immuno-beads could be factored in for each experiment that followed. The resultant sets of IPs were then resolved on high-resolution protein gels and probed for CENP-B, CENP-A, H2A.Z, and H3 by quantitative two-color fluorescent WB. We observed a significant fraction of ectopic CENP-A present in normal colon cells, but relatively inefficient CENP-B pre-clearing (most likely due to the low abundance of extracted protein from that cell line) made the interpretation difficult (Figure 1E and Table 2). In contrast, CENP-B pre-clearing of nuclear extracts from HeLa and SW480 cancer cells was robust. Six- to seven-fold enrichment of CENP-B was observed in the CENP-B IP compared to the sequential CENP-A IP, indicating efficient centromeric chromatin depletion, leaving behind a pool of non-CENP-B-associated CENP-A nucleosomes (Figure 1E and Table 2). The sequential CENP-A IP demonstrated a 3-fold enrichment of ectopic CENP-A nucleosomes compared to the centromeric fraction in SW480 cells, which constituted a 10-fold increase in comparison to HeLa, wherein ectopic CENP-A is depleted relative to the centromeric fraction (3.02 versus 0.32 enrichment for SW480 and HeLa, respectively). Although no appreciable increase in H2A.Z was seen in the ectopic fraction of CENP-A, a threefold enrichment of canonical histone H3 in the ectopic CENP-A IP was observed in SW480 compared to HeLa or normal colon cells. These data suggested co-occupancy or increased proximity of H3 and CENP-A in colorectal cancer cells (2.91 versus 0.98 H3 enrichment for SW480 and HeLa, respectively).
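As a minimal sketch of how such enrichment values can be computed, the snippet below background-corrects IP band intensities against a mock IP and normalizes to input before taking the fold-ratio. All intensity values are hypothetical stand-ins for quantitative WB readings, not the measurements reported here.

```python
# Minimal sketch: background-corrected, input-adjusted enrichment of a
# protein between two immunoprecipitations (IPs), as used in the text to
# compare centromeric (CENP-B) and ectopic (sequential CENP-A) fractions.
# All intensity values are hypothetical fluorescence WB readings.

def corrected(ip_signal: float, mock_signal: float, input_signal: float) -> float:
    """Background-correct against the mock IP, then normalize to input."""
    return max(ip_signal - mock_signal, 0.0) / input_signal

cenpa_centromeric = corrected(ip_signal=420.0, mock_signal=20.0, input_signal=100.0)
cenpa_ectopic = corrected(ip_signal=1260.0, mock_signal=20.0, input_signal=100.0)

fold = cenpa_ectopic / cenpa_centromeric
print(f"Ectopic/centromeric CENP-A enrichment: {fold:.2f}-fold")  # ~3, cf. SW480
```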
A recent study reported that artificially overexpressed tagged CENP-A associates with the H3.3 chaperone DAXX in HeLa cells [24]. Since both ATRX and DAXX were overexpressed in HeLa and SW480 cells relative to normal cells (Figure 1A), we next investigated the association of these chaperones with centromeric and ectopic CENP-A IPs from normal colon, HeLa, and SW480 cells. We noted a strong association between these chaperones and CENP-A in colorectal cancer cells (Figure 1F, Table 2). In contrast to a recent report demonstrating that artificially overexpressed CENP-A relies on DAXX/ATRX to associate at ectopic locations, we were unable to conclude that there was specific enrichment exclusive to the ectopic CENP-A fraction, but rather noted that both centromeric and ectopic CENP-A fractions associated with these transcription-coupled chaperones. These results outline three distinguishing characteristics of the 'high' CENP-A state in human cells: increased association of CENP-A with the H3.3 chaperones ATRX and DAXX, increased interaction of canonical H3 with ectopic CENP-A, and an abundance of the ectopic CENP-A fraction.

[Figure 1 legend (excerpt): (A) WB analysis of total nuclear CENP-A, HJURP, ATRX, and DAXX relative to core histone H4 across cell lines; lower panel, quantification of CENP-A protein expression standardized to normal colon (replicate data in Table 1). (E) Blots were quantified on the Odyssey Li-Cor, background-corrected, and input-adjusted fold-ratios between immunoprecipitations (IPs) were calculated as indicated; the gray CENP-A arrow in the third row indicates a band already present prior to H2A.Z probing; data quantification is provided in Table 2. (F) WB for ATRX and DAXX in normal, HeLa, and SW480 CENP-B IPs versus ectopic CENP-A IPs (data summarized in Table 2).]
Ectopic CENP-A nucleosomes have altered conformations
In vivo, CENP-A and H3 do not mix within single nucleosomes [37]. Given the association of ectopic CENP-A and H3 above, we were curious whether such nucleosomes, or their chromatin fibers, might present altered nucleosomal features. To this end, we turned to high-resolution microscopy. In an extensive series of studies using atomic force microscopy (AFM) coupled to other biochemical assays, we have previously shown that, in contrast to in vitro reconstituted recombinant CENP-A nucleosomes, which are octameric and generally indistinguishable from H3 nucleosomes [38-41], CENP-A nucleosomes purified from native human centromeres of HeLa or HEK cells display smaller dimensions [42,43], and attain a stable octameric height only at specific points of the human cell cycle [44]. Therefore, we next used AFM to measure native nucleosomal dimensions of ectopic versus centromeric and recombinant CENP-A nucleosomes.
In agreement with previously published work, native bulk nucleosomes observed on extracted chromatin arrays are exclusively octameric, averaging 2.5 nm in height (Figure 2, lowest panel, gray; AFM data summarized in Table 3). Furthermore, in vitro reconstituted H3 nucleosomes (Figure 2, second panel from bottom, yellow) and CENP-A nucleosomes (Figure 2, third panel from bottom, yellow), which are octameric [38-41,45,46], both possess dimensions essentially identical to bulk nucleosomes (dotted red line denotes mean octameric values). In contrast, the majority of total CENP-A nucleosomes in SW480 possess diminutive dimensions, averaging 2.1 nm in height (Figure 2, second panel from top, blue). However, upon closer examination, we noted that total CENP-A nucleosomes from SW480 have a distinct second population, with sizes reminiscent of the larger, stable octameric state (Figure 2, second panel from top, right-hand tail highlighted in red). Indeed, upon depleting centromeric CENP-A nucleosomes using the CENP-B depletion strategy above (Figure 1C), ectopic CENP-A nucleosomal arrays derived from SW480 cells display a broad height distribution with an overall average slightly smaller than bulk octamers (2.46 nm; Figure 2, top panel, red; Table 3). This broader height distribution is most likely due to partial contamination of the ectopic fraction with centromeric CENP-A nucleosomes originating from alpha satellite arrays lacking CENP-B boxes, as mentioned above.
These data indicate that two distinct populations of CENP-A nucleosomes co-exist in colorectal cancer cells: one with diminutive features similar to those previously reported from native centromeres, and another that closely mimics the stable H3 or CENP-A octameric nucleosome in vitro.
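One way to formalize the two co-existing populations is to fit the AFM height distribution with a two-component Gaussian mixture and read off the component means and weights. The sketch below uses simulated heights centered near the reported 2.1 nm and 2.5 nm values; it illustrates the idea only and is not the analysis used in the paper.

```python
# Sketch: separating two nucleosome height populations with a Gaussian
# mixture. Heights are simulated; in practice they would come from
# particle analysis of AFM images.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
heights = np.concatenate([
    rng.normal(2.1, 0.15, 300),   # hypothetical "diminutive" particles
    rng.normal(2.5, 0.15, 120),   # hypothetical octameric particles
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(heights)
for mean, weight in sorted(zip(gmm.means_.ravel(), gmm.weights_)):
    print(f"population mean {mean:.2f} nm, fraction {weight:.2f}")
```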
[Figure 2. Ectopic CENP-A forms a structurally distinct type of nucleosome in colorectal cancer cells. AFM analysis of nucleosomal heights from SW480 total input chromatin, in vitro reconstituted chromatin containing either H3 or CENP-A octameric nucleosomes, SW480 total CENP-A IP, and SW480 ectopic CENP-A IP. Gray indicates bulk octameric input, yellow indicates recombinant nucleosomes, and blue and red indicate tetrameric and octameric IP nucleosomes, respectively. The dashed red line indicates average octameric height. Insets contain mean ± standard deviation and nucleosome count for each sample; P values are provided to the right of each compared dataset. Images show representative chromatin arrays, with arrowheads indicating single nucleosomes (scale bar 100 nm; data summarized in Table 3).]

Ectopic CENP-A hotspots localize to DNase I hotspots and transcription factor binding sites

We were curious to understand where ectopic CENP-A nucleosomes such as those above (Figures 1E, F and 2) might reside in the genome. Therefore, we amplified the nucleosomal DNA contained within SW480 CENP-B-associated centromeric and ectopic CENP-A nucleosomes, and used these two types of DNA in a co-immunofluorescence in situ hybridization (co-FISH) experiment against human metaphase chromosomes. As expected, CENP-B-associated nucleosomal DNA (in green) hybridizes almost exclusively to centromeres (Figure 3A). In contrast, ectopic CENP-A nucleosomal DNA (in red) hybridizes to chromosome arms (Figure 3A), illustrating the effectiveness of the CENP-B depletion strategy. This interesting distribution prompted us to generate a genome-wide map of ectopic CENP-A nucleosome residency. To achieve this, we performed high-throughput genome-wide sequencing using exclusively the gel-purified mononucleosomal fraction from thoroughly MNase-digested chromatin from mock IP DNA, or ectopic CENP-A nucleosomal DNA, from normal colon, HeLa, and SW480 lines. The mononucleosomal fraction was first assessed using high-resolution Bio-Analyzer chips (Figure 3B and C). Whereas no detectable DNA could be seen in the mock IP, centromeric CENP-A IPs from either HeLa or SW480 cells, using the classical anti-centromere antibody (ACA, first used to identify CENP-A in human cells by the Earnshaw lab [39]), yield two species: one at approximately 120 bp, and the other at approximately 170 bp (Figure 3C-E). The smaller species is consistent with previously published data for centromeric CENP-A nucleosomes [24,38,40,43-50]. In contrast, ectopic CENP-A mononucleosomes contain DNA fragments ranging from 125 to 164 bp (Figure 3B and D), greater than the 120 bp present in the CENP-A octameric crystal structure [39], or the 100-120 bp wrapping previously demonstrated in vivo for native centromeres of yeast [47-49], Drosophila [42,50] or human cells [24,43,44]. As suggested by the AFM data above (Figure 2), these DNA data support the possibility that ectopic CENP-A nucleosomes contain distinctive structural features.
Sequencing of the mononucleosomal fraction obtained from chromatin input samples from each cell line confirmed equal and robust genomic representation in the extracts, comparable to other ENCODE data sets (Table 4). Reassuringly, a mock IP ChIP-seq performed to rule out potential background signal identified only a very small number of weak background-related hotspots (approximately 200). Furthermore, correlation analyses of replicates for normal colon, HeLa, and SW480 ectopic CENP-A ChIP-seq each demonstrated excellent concordance, with r² > 0.9 for each set of replicates (Figure 4A). From the pooled replicate-concordant data, we next determined statistically significant, input-adjusted tags representing true ectopic CENP-A 'hotspots' in the genome, at a stringent false discovery rate (FDR) of 0.1%. This method yields a robust view of CENP-A occupancy after accounting for the copy number variation often found in cancer genomes. Contrary to our expectation that CENP-A would be found exclusively at centromeres or heterochromatin, ectopic CENP-A hotspots localize to noncentromeric loci in normal colon, HeLa, and SW480 cells (Figure 4B, left panel). Indeed, the main difference was the number of hotspots found in each cell line, which generally corresponded to the overall level of CENP-A expression: whereas in normal colon cells there are approximately 450 ectopic CENP-A hotspots, in HeLa cells there is a twofold increase to approximately 950, and in SW480 cells there is an almost sixfold increase over normal colon to approximately 2,850 hotspots (Figure 4B, left panel). These hotspots do not arise from background signal, as only a tiny fraction of the mock IP hotspots correlated with any of the ectopic CENP-A hotspots above (Figure 4B, left panel).
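Replicate concordance of this kind is typically the squared Pearson correlation of per-bin tag counts between replicates. A minimal sketch, with short hypothetical arrays standing in for genome-wide bins:

```python
# Sketch: replicate concordance for ChIP-seq from per-bin tag counts.
import numpy as np

rep1 = np.array([12, 40, 7, 55, 23, 9, 31, 18], dtype=float)
rep2 = np.array([10, 44, 8, 52, 25, 7, 29, 20], dtype=float)

# Log-transform stabilizes variance before correlating counts.
r = np.corrcoef(np.log1p(rep1), np.log1p(rep2))[0, 1]
print(f"r^2 = {r**2:.3f}")   # replicates were pooled when r^2 > 0.9
```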
To investigate the nature of ectopic CENP-A hotspots, we next classified them with respect to known genomic and epigenetic features. Irrespective of the difference in the total number of hotspots, a sizeable portion of ectopic CENP-A was found at gene loci, with 23%, 38%, and 44% of ectopic hotspots at genes in HeLa, SW480, and normal colon cells, respectively (Figure 4B, right panel for histogram; Additional file 1 contains the dataset of all hotspots discovered). Thus, CENP-A presence at genes seems to be a common feature, found in all cell lines examined, with a significant fraction of those sites at promoters of genes (7%, 15%, and 34% in HeLa, SW480, and normal colon cells, respectively). Indeed, CENP-A enrichment at promoters is statistically significant in SW480 compared to HeLa cells (Fisher's exact test P value: 0.0174), suggesting that colon cells tend to accumulate CENP-A at open chromatin regions (specific examples are shown in Figure 4C).
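The promoter-enrichment comparison is a 2x2 Fisher's exact test on promoter versus non-promoter hotspot counts. The sketch below approximates the counts from the reported totals and percentages, so its P value will not reproduce the published 0.0174, which was computed on the authors' exact annotation:

```python
# Sketch: Fisher's exact test on approximate hotspot counts.
from scipy.stats import fisher_exact

# Rows: SW480, HeLa; columns: promoter, non-promoter hotspots.
# Counts derived from ~2,850 x 15% and ~950 x 7% (illustrative only).
table = [[428, 2422],
         [67, 883]]
odds, p = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds:.2f}, P = {p:.3g}")
```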
[Table 3. AFM measurements of (a) height and (b) diameter for ectopic versus centromeric or recombinant CENP-A nucleosomes, provided as average ± standard deviation, with the number of particles measured indicated in parentheses. On average, each type of experiment has three replicates.]

In the experiments above, we noted that the transcription-coupled chaperones ATRX and DAXX are overexpressed in SW480 cells (Figure 1A), whereas levels of the CENP-A chaperone HJURP, which normally restricts CENP-A to centromeres [23,31,32,51,52], generally did not correlate with increased CENP-A levels. We wondered whether ectopic CENP-A accumulation at promoters is linked to HJURP presence. Therefore, we performed HJURP IPs from cross-linked chromatin, using the CENP-B depletion strategy as above (Figure 1C), followed by high-throughput sequencing analysis to unveil potential sites of ectopic HJURP localization. We were unable to obtain robust ectopic HJURP enrichment: fewer than 300 HJURP hotspots were detected in SW480 cells (Figure 4D; Additional file 1 for the list of HJURP hotspots). Although 36% of the 942 HeLa CENP-A hotspots correlate with HeLa HJURP sites, only 5% of SW480 CENP-A hotspots colocalize with SW480 HJURP sites (Figure 4D). Such paucity of noncentromeric HJURP sites overlapping with ectopic CENP-A sites in SW480 is consistent with HJURP's primary documented role as a centromere-targeted chaperone, and supports the hypothesis that overexpressed CENP-A can co-opt alternative chaperone pathways to accumulate at genes, as has recently been shown for forced overexpression of CENP-A in human cells [24]. If CENP-A is indeed co-opting accessibility pathways to accumulate at genes, we reasoned that chromatin accessibility might play a role in ectopic CENP-A localization. To test this, we turned to the classical DNase I hypersensitivity assay [53,54] combined with high-throughput deep sequencing [55] to pinpoint, with base-pair accuracy, the locations of transcription-factor-bound chromatin in SW480 and HeLa cells. (Normal colon cells were present at too low a density for us to reliably assess DHS sites in those cells.) To release hypersensitive chromatin, we performed very light DNase I digestion of either SW480 or HeLa nuclei following established protocols [56]. DNase I hypersensitive site (DHS) fragments (ranging from 50 to 350 bp) float on top of sucrose gradients, separating them from the rest of the longer DNAs, which originate from the chromatin-bound fraction. Purified DHS fragments were then subjected to deep sequencing (Figure 5A), generating a genome-wide distribution map of DHS.
As expected, the vast majority of DHS are enriched primarily at promoters in HeLa and SW480 cells (Figure 5B, right panel shows histograms; Additional file 1 for a list of DHS), and overlap completely with the compendium of aggregated DHS clusters identified by the ENCODE project across 129 human cell lines (Figure 5C). DHS identified in our data sets included promoters of housekeeping genes, oncogenes, and tumor suppressor genes (Additional file 1 for the list of all DHS; examples in Figure 5D). For example, the Myc gene, a known regulator atop a cascade of tumor effector proteins [57], has a large DHS site astride its promoter in SW480 cells (Figure 5D). Indeed, the gene encoding CENP-A itself has a strong DHS site upstream of its promoter specifically in SW480 cells but not in HeLa cells, providing a satisfying correlation between increased accessibility of the CENP-A gene promoter and the excess CENP-A mRNA (and, subsequently, protein) present in SW480 cells (Figure 5D).
When comparing DHS hotspots to ectopic CENP-A sites, we observed that a large fraction of DHS tracks with ectopic CENP-A locations (Figure 5B, left and middle panels). Globally, approximately 380 CENP-A sites overlap with DHS sites in HeLa (Figure 5B, left and middle panels), whereas twice that number, approximately 740 SW480 CENP-A hotspots, align with SW480 DHS.
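Counting hotspots that coincide with DHS reduces to interval overlap on matching chromosomes. A minimal sketch with hypothetical coordinates:

```python
# Sketch: count CENP-A hotspots overlapping any DHS interval.
def overlaps(a, b):
    """Half-open (chrom, start, end) intervals overlap on the same chromosome."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

cenpa_hotspots = [("chr8", 128_740_000, 128_742_000),
                  ("chr8", 130_100_000, 130_101_500),
                  ("chr2", 25_000_000, 25_001_000)]
dhs_sites = [("chr8", 128_741_200, 128_741_800),
             ("chr2", 60_000_000, 60_000_500)]

shared = sum(any(overlaps(h, d) for d in dhs_sites) for h in cenpa_hotspots)
print(f"{shared}/{len(cenpa_hotspots)} hotspots overlap a DHS site")
```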
[Figure 3 legend (excerpt): (B) Ectopic CENP-A nucleosome-associated DNA is octameric in size in normal colon, HeLa, and SW480 cells, ranging from 124 to 164 bp in length. (C) As above, indicating the size of nucleosome-associated DNA isolated from HeLa and SW480 cells by IP with anti-centromere antibody (ACA) serum. (D) High-resolution Bio-Analyzer analysis of mononucleosomal DNA values for input, ectopic CENP-A nucleosomes (from Figure 1E) and centromere-specific CENP-A nucleosomes purified with ACA serum (from Panel E); DNA lengths below 350 bp were binned into four categories and plotted by percentage of total fluorescence (pg/μl). (E) Two-color WB analysis of CENP-A and H3 protein levels from HeLa and SW480 isolated by IP using ACA serum.]

A mechanistic question arising from the correlation between ectopic CENP-A and DHS was whether ectopic CENP-A creates DNase I sites once it binds to chromatin, or whether such sites precede CENP-A occupancy. To this end, we compared CENP-A hotspots to aggregated genome-wide locations of DHS and transcription factor binding sites from 129 and 94 cell lines, respectively (ENCODE project). From these comparisons it was apparent that pre-existing DNase I and transcription factor binding sites are striking determinants of ectopic CENP-A localization (Table 5). A majority of normal colon and nearly half of SW480 CENP-A hotspots (61% and 45%, respectively) overlap with ENCODE DNase I clusters, and a majority of normal colon and SW480 CENP-A hotspots (63% and 48%, respectively) overlap with transcription factor binding clusters found in a variety of cells (Table 5). The increase in overlap of SW480 CENP-A hotspots with the ENCODE DHS compendium compared to our own DHS analysis (from 26% to 45%) indicates that CENP-A can also localize to transient hypersensitive sites, which were not detected in our experiments but were captured in the vast compendium of DHS sites in the ENCODE data. We were also curious whether ectopic CENP-A locations had sequence-specific DNA features that might yield insights into what attracts CENP-A to them. Using the DNA consensus detection algorithm TOMTOM to detect motifs common among CENP-A hotspots, we discovered that CENP-A-enriched sequences are not AT-rich, nor do they contain centromere-like repetitive DNA. Indeed, fewer than 20% of the hotspots contain Alu, LINE or SINE elements, and fewer than 0.01% of the hotspots contained centromeric consensus alpha satellite sequences (Table 6), suggesting the CENP-B depletion strategy was effective. Whether from normal colon, HeLa, or SW480, ectopic CENP-A hotspots contain CpG motifs (Figure 6A-B), and motifs associated with transcription factors, especially those of the zinc-finger and helix-turn-helix classes (Figure 6B-C). Together, the DHS and DNA motif data suggest that ectopic CENP-A accumulates at regions of high nucleosome turnover in the genome.
Ectopic CENP-A nucleosomes cluster at subtelomeric sites, including 8q24/Myc, in colorectal cancer cells and tumors

In the genome-wide map of all CENP-A hotspots identified in this study, we noted a qualitative clustering of CENP-A hotspots in subtelomeric and pericentromeric regions (Figure 7 shows all chromosomes; Figure 8A focuses on one example; grey boxes denote clusters). Such regions have previously been associated with chromosomal breakpoints and translocations [58]. We chose one of these domains, involving the cytoband 8q24, for further analysis, as it represents one of the most frequently rearranged regions in the cancer genome of many carcinomas and hematological malignancies [59,60]. Furthermore, this region has long been associated with tumorigenesis [61,62] and with chromosome instability [63]. From previously published cytogenetic SKY/CGH maps, it is known that the 8q24/Myc locus is amplified and translocates to multiple chromosome partners in SW480 cells but not in normal colon cells [64].
The deep sequencing data uncovered a 30-Mb region of CENP-A and DHS co-enrichment at the 8q24/Myc locus in SW480, but not HeLa or normal cells. This enrichment was apparent even after correcting for copy number amplification of this locus (Figure 8B, see input-adjusted hotspots below the tag density tracks). This result was surprising because, although this region has been extensively studied, there are no extant reports of it containing unusual histone variants. Furthermore, large domains of CENP-A usually exist only in active centromeres, wherein they attract inner kinetochore proteins such as CENP-C, which connect CENP-A to the outer kinetochore during mitosis [65]. We therefore tested whether CENP-C was enriched in the 8q24 region. Using CENP-A and CENP-C ChIP followed by quantitative PCR (qtPCR) with probes spanning this 30-Mb locus (primer locations indicated in Figure 8A), we observed robust enrichment of both CENP-A and CENP-C within the domain spanning the 8q24 locus (Figure 8C, qtPCR graph). We reasoned that a CENP-A/CENP-C domain spanning 30 Mb should be visible by immunofluorescence. Therefore, we used a combination of 8q24 FISH and CENP-A IF to visualize this region. To ensure the accuracy of detection, we first tested the 8q24 FISH probe on metaphase spreads from normal human lymphocytes. As expected, we observed two discrete subtelomeric signals per chromosome, for a total of 4N per mitotic cell (Figure 8D, upper left panel). We next tested whether 8q24 was amplified and translocated in the cell lines used in this study. Using a combination of either 8q24 FISH and CENP-A IF, or Myc FISH and CENP-A IF, we next tested whether CENP-A co-localizes to any of the 8q24 signals in normal colon, HeLa, and SW480 cells. No correlation between 8q24 and CENP-A signals can be seen in normal colon, and very little can be seen in HeLa cells (Figure 8C-D, three lower left panels). In contrast, 8q24 co-localizes to a distinct CENP-A domain in 38-66% of SW480 cells, when using the 8q24 or Myc probes, respectively (Figure 8C, middle left panel, white arrows point to co-localized signals; Figure 8D, lower right panels; data quantified in Table 7). Thus, in the colorectal cancer cell line possessing the highest amount of CENP-A protein (Figure 1A), CENP-A localizes to 8q24 in a large fraction of cells. Consistent with the qtPCR data (Figure 8C, first panel), CENP-C IF combined with 8q24 FISH demonstrates enrichment of CENP-C on one 8q24 locus (Figure 8C, middle right panel).
We were intrigued by the presence of CENP-A/CENP-C at the 8q24 locus in the SW480 colorectal cancer cell line, which was derived from a late-stage colorectal tumor nearly 30 years ago [64]. We sought to understand how early in tumorigenesis CENP-A might mislocalize to 8q24. We acquired primary early- and late-stage colorectal tumors, as well as matched normal tissue from the same patients, and performed FISH/IF to test co-localization of CENP-A to 8q24. The co-IF/FISH data show that the 8q24/Myc locus is amplified in all four tumors, and that CENP-A domains are enriched on one of these 8q24 loci in 33 to 78% of tumor cells, depending on the donor (Figure 8C, lowest set of panels for representative images of normal versus tumor; white arrow points to co-localization; quantification in Table 7). Thus, CENP-A occupancy of this locus is robust and occurs even in early-stage tumors.
Discussion
In this report, we present a comprehensive examination of the histone variant CENP-A in normal and cancerous colorectal cells, finding that ectopic CENP-A exists outside centromeres in human cells. Ectopic CENP-A tracks to two distinct types of domains: small regions found at promoters and accessible chromatin, and large domains found at sites of common chromosomal rearrangements. Our report yields a number of specific findings. First, CENP-A, which is innately overexpressed in cancer cells (Figure 1A-B, Table 1), associates with histone H3 (Figure 1E, Table 2), and shows increased association with the transcription-coupled chaperones DAXX and ATRX (Figure 1F, Table 2). Second, ectopic CENP-A nucleosomes are stable octamers in configuration (Figure 2, Table 3), containing 125 to 165 bp of DNA (Figure 3B-D). Third, ectopic CENP-A nucleosomal tags are depleted of centromeric consensus satellite sequences (Table 6), and localize instead to unique noncentromeric locations in normal and cancer cell lines (Figure 3A). These nucleosomes occupy genes and promoters (Figure 4B-C), are HJURP-free (Figure 4D), and correlate primarily with hyper-accessible (DHS) chromatin (Figure 5, Table 5). Fourth, CENP-A/DHS ectopic sites co-occupy regions containing known transcription factor binding motifs (Figure 6, Table 5). Lastly, large clusters of CENP-A hotspots exist in regions spanning pericentric and subtelomeric regions specifically in colorectal cancer cells (Figures 7 and 8, Table 7). An example of such a cluster is at a segment of the 8q24 locus spanning the Myc oncogene, which, even in the relatively early-stage tumors tested in this study, associates with CENP-A and CENP-C (Figure 8B-D, Table 7).

[Table 5. Comparative analysis of (a) CENP-A hotspots derived from normal colon, HeLa and SW480 cells against ENCODE data aggregates reveals that a significant fraction of sites overlap with (b) ENCODE DNase I clusters and (c) transcription factor binding sites in the genome. Each column shows overlap in terms of number of sites or % of total CENP-A sites.]

A number of avenues of investigation arise from our observations. Regardless of the absolute amount of ectopic CENP-A, in normal colon cells and in the cancer cell lines examined, there is a connection between DHS/transcription factor binding sites and ectopic CENP-A (Figures 5 and 6). That CENP-A can compete for regions linked to transcription was initially demonstrated in budding yeast, wherein CENP-A is reported to exist at barely detectable levels in a handful of genic promoters [49], which increases when CENP-A is artificially overexpressed [48]. Such CENP-A is continually targeted for subsequent proteolysis [23,52]. Earlier work has also demonstrated that artificial constitutive overexpression of CENP-A in Drosophila cells results in a gradual accumulation and slow removal of CENP-A from chromosome arms [22], possibly via association with the common histone chaperone RbAp48/p55 [66]. In vitro, common chaperones such as p55 and NAP-1 assemble CENP-A nucleosomes efficiently [41,66]. However, it has generally not been thought that such phenomena could occur in human cells, with many laboratories publishing studies using tagged/overexpressed CENP-A as a marker for human centromeres. A recent report tracking artificially overexpressed human CENP-A has now demonstrated that it can occupy ectopic sites, binds histone H3.3, contains octameric-size DNA fragments, and is potentially chaperoned by ATRX and DAXX [24]. Indeed, in worms, which form holocentric centromeres that line the edges of chromosomes, normal amounts of CENP-A seed centromeric domains using regions of low nucleosome turnover [67]. Our report demonstrates that a subset of native human CENP-A binds H3 and forms octameric-height nucleosomes, which localize to accessible chromatin domains at promoters and transcription factor sites at low levels even in normal human colon cells. This process appears to be magnified in amplitude in colorectal cancer cell lines, where a significant fraction of ectopic CENP-A nucleosomes overlap with DHS and transcription factor binding sites (Figures 5 and 6). It is feasible that a default transcription-linked pathway exists to use trace amounts of CENP-A either promiscuously expressed at the wrong time (that is, not at the end of G2 [68]), or remnant after HJURP-dependent incorporation at centromeres is complete at mid-G1 [69]. Not mutually exclusive with this explanation is the interesting possibility that defects in the timing of CENP-A expression, or promiscuous binding of CENP-A to other chaperones, coupled to defects in proteolysis, might cumulatively conspire to permit increased CENP-A accumulation at transcription factor binding sites in cells.
Conclusions
A functional implication of stable CENP-A occupancy of promoters/DHS, and its correlation with transcription factor binding sites, is a potential link to the gene expression changes reported in cancer cells. It is currently unknown whether CENP-A is recruited to, or competes for, transcription factor binding sites, either of which would be predicted to impact gene expression. Indeed, the DHS data demonstrate that many of the sites that attract CENP-A are already DHS and transcription factor binding sites, that is, high nucleosome turnover regions in a number of human cell lines. At the vast majority of genes in vivo, octameric H3 nucleosomes, with specific N-terminal tail modifications, dominate the epigenetic regulatory landscape [70]. Ectopic CENP-A nucleosomes would lack known H3 N-terminal tail modifications, and could potentially circumvent traditional epigenetic regulatory cascades. Thus, the functional impact of CENP-A nucleosomes on pre-existing DHS sites, or on promoter architecture, remains an exciting avenue of research. Ongoing studies are focused on whether recruitment of transcriptional activator or repressor complexes is altered in the presence of ectopic CENP-A nucleosomes, and whether such events influence gene expression patterns specifically in the cancer context.
Our study also provides support for a potential role for CENP-A in chromosomal instability. Whereas various artificial overexpression studies over the past decade have clearly established CENP-A's ability to seed neocentromeres [17], this study provides a correlation between CENP-A and a defined chromosomal rearrangement at 8q24 in human cancer cells, which is absent in normal colon cells (Figure 8, Table 7). When normal human ES cells are challenged with induced DNA breaks, excess native CENP-A is rapidly mobilized, but does not localize to immediate break sites indicated by gamma-H2A.X staining [71]. However, a recent study using osteosarcoma-derived U2OS cancer cells showed that an artificially induced break efficiently recruits overexpressed CENP-A:GFP [72]. Thus, depending on the timing of the break and the availability of free histones, CENP-A might enrich during subsequent steps of chromatin re-establishment following repair or translocation of amplified regions in cancer cells. An avenue of research that arises from these findings is elucidating the timing of CENP-A enrichment at breakpoints during tumorigenesis, and investigating its potential role in structural rearrangements of chromosomes at subtelomeric sites such as 8q24.

[Figure 6 legend (excerpt): motif overlaps as in Table 5; Additional file 1 contains the full list of hotspots, and a genome-wide overview is in Figure 7. (C) A list of other TOMTOM consensus motifs that correlate with the motifs identified in CENP-A hotspots includes chromatin effector proteins.]
Increased levels of CENP-A expression have been reported in metastatic prostate, breast, lymphoma, lung and colorectal tumors. Consequently, our observations, combined with other recently published studies on artificially induced hybrid CENP-A/H3 nucleosomes [24,73], have implications for accumulation of downstream epigenetic defects that arise during tumorigenesis.
Methods

Cell culture
All cell culture media except epithelial cell medium were supplemented with 10% fetal bovine serum and 1X penicillin/streptomycin. DMEM was used for HeLa cells, RPMI for SW480 and DLD-1, and McCoy's medium for HCT116 and HT29. Epithelial cell medium was used to culture normal human colon epithelial cells (HcoEpiC). HcoEpiC cells are very slow growing, with cell cycle lengths ranging from 36 to 90 hours depending on passage number.
Total nuclear protein extraction
Nuclei were purified from the cell lines HcoEpiC, HeLa, HCT116, DLD-1, HT29 and SW480 following published procedures [43,44]. Total nuclear protein extracts were prepared in RIPA buffer. Equal amounts of nuclear protein were fractionated on SDS-PAGE gels, stained, analyzed on the Odyssey system, and amounts were adjusted to equal amounts of histone H4 for further analysis. Samples containing equal amounts of histone H4 were then fractionated on SDS-PAGE gels, and the amounts of CENP-A, HJURP, ATRX, DAXX and histone H4 were determined by quantitative western blot analysis. The relative concentration of CENP-A in each cell line was calculated as the CENP-A/H4 ratio in that cell line divided by the CENP-A/H4 ratio in normal colon cells. Relative concentrations of ATRX, DAXX, and HJURP were calculated similarly.

[Legend excerpt (Figure 8/Table 7): merge of DAPI (blue or gray), 8q24 or Myc (green) and CENP-A (red) is indicated for each cell line at the top of each image. Automated co-localization analysis was performed using ImageJ; white indicates co-localization, shown as insets. Quantification of CENP-A and 8q24/Myc co-localization in cancer cells and tumors after co-immunofluorescence demonstrates a statistically significant enrichment of CENP-A on one of the translocated 8q24/Myc loci. White spots denoting co-localization were detected using ImageJ's automated co-localization algorithm.]
Quantitative western blot analysis
Quantitative infrared western blotting was performed using the Odyssey Li-Cor CLx system (Lincoln, NE, USA). Briefly, the infrared WB signal was acquired with high dynamic range and analyzed using Image Studio software. Bands of interest were manually selected and their total intensity quantified with subtraction of the median background signal from areas 3 pixels wide above and below the band in the same lane. The resulting total infrared signal values (arbitrary units) were used for subsequent calculations as indicated.
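The band quantification just described can be sketched as array arithmetic: sum the pixels in the band box and subtract the median of the 3-pixel strips above and below it, scaled to the box size. The sketch below operates on a hypothetical intensity array; it mimics the described procedure rather than Image Studio itself. The final comment also restates the relative-concentration ratio used in the nuclear extract analysis above.

```python
# Sketch: total band intensity minus median local background, mimicking the
# quantification described above. `image` is a hypothetical 2D array.
import numpy as np

def band_intensity(image, rows, cols, pad=3):
    r0, r1 = rows
    c0, c1 = cols
    band = image[r0:r1, c0:c1]
    above = image[max(r0 - pad, 0):r0, c0:c1]   # strip above the band
    below = image[r1:r1 + pad, c0:c1]           # strip below the band
    background = np.median(np.concatenate([above, below], axis=0))
    return band.sum() - background * band.size

img = np.ones((50, 20))
img[20:25, 5:15] += 4.0                          # a hypothetical band
print(band_intensity(img, rows=(20, 25), cols=(5, 15)))

# Relative CENP-A level, standardized to H4 and then to normal colon:
# rel = (cenpa_signal / h4_signal) / (cenpa_normal / h4_normal)
```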
Chromatin immunoprecipitation

CENP-A and CENP-B chromatin immunoprecipitation (ChIP) for WB analysis was performed following published protocols [43,44]. ChIP for ChIP-seq was performed similarly, except for the following modifications: MNase concentration was 0.6 U/ml, digestion time was 10 min for cancer cells and 8 min for normal HcoEpiC cells, and nuclei were treated with 0.05 to 0.1% formaldehyde for gentle in situ crosslinking within intact nuclei for 30 min at RT, as indicated in ENCODE protocols, before extraction of chromatin in low salt buffer. IP-enriched chromatin was eluted with 100 mM NaHCO3 and 1% SDS at 65°C for 2 h. To reverse cross-linking, NaCl was added to a final concentration of 200 mM and samples were incubated at 67°C for an additional 4 h, followed by RNase A (150 μg/ml) treatment for 1 h; proteins were then digested with proteinase K (100 μg/ml) for 3 h. The DNA was purified by phenol extraction and ethanol precipitation, repaired using the PreCR Repair mix (New England Biolabs, Ipswich, MA, USA) following the manufacturer's instructions, and purified using Chroma Spin columns (Clontech, Mountain View, CA, USA).
Nucleosome reconstitution in vitro
Lyophilized recombinant histones (a gift from Jennifer Ottesen) were unfolded in 7 M guanidinium HCl, mixed in equimolar amounts (either H3 or CENP-A, plus one each of H2A, H2B, and H4), and refolded into 2 M NaCl according to the protocol of Luger et al. [74]. Refolded octamers were reconstituted onto a plasmid containing a 'Widom 601' positioning sequence (a gift from Carl Wu) using sequential salt dialysis adapted for low volumes. Briefly, histone octamers were mixed with plasmid DNA at a 0.9:1 ratio in 2 M NaCl, 10 mM Tris-Cl pH 8.0, 1 mM EDTA (0.18 mg/ml histones; 0.2 mg/ml DNA) and incubated on ice for 30 min. Next, 40 μl of the histone/DNA mix was layered onto a dialysis disc (Millipore, 0.025 μm, Billerica, MA, USA) covered with a dialysis membrane (Thermo Scientific, 7,000 MWCO, Waltham, MA, USA) and floated on the surface of 50 ml of pre-chilled 1 M NaCl, 10 mM Tris-Cl pH 8.0, 1 mM EDTA buffer. Sequential dialysis steps against 1 M, 0.8 M, 0.6 M, and 0.15 M NaCl (each with 10 mM Tris-Cl pH 8.0, 1 mM EDTA) were carried out for 2 hours at 4°C (the 0.6 M dialysis was done overnight). Reconstituted chromatin was diluted one hundredfold in 1X PBS, 2 mM MgCl2 buffer and imaged on AP-mica [75].
Atomic force microscopy imaging and analysis

AFM imaging of bulk and immunoprecipitated CENP-A chromatin was performed essentially as described previously [43,44], with some adaptations (see manual analysis below). Extracted or IP-eluted chromatin was deposited on APS-mica (prepared as described by Dimitriadis et al., 2010 [43]) in the presence of divalent magnesium ions. The sample was incubated for 10 minutes, briefly rinsed with MilliQ water, and dried in a vacuum chamber. The sample was imaged using an AFM 5500 instrument (Agilent Technologies, now Keysight Technologies, Santa Clara, CA, USA) operating in AC mode (noncontact/tapping), equipped with either an OTESPA or TESP silicon tip (Bruker Nano, Santa Barbara, CA, USA) with a nominal radius of 3 to 7 nm. Images were captured at 4096 x 4096 resolution with the instrument operating at a setpoint equivalent to 65% to 75% of free amplitude (typically 1.5 to 2.5 V). Acquired images were processed using Gwyddion software (gwyddion.net; flattening, line correction, and polynomial background subtraction) and analyzed either manually (see below) or, for bulk controls, using the NIH ImageJ (imagej.nih.gov/ij/) Particle Analysis function. Briefly, the images were limited by threshold (to remove tip convolution) and filtered to include only round or elliptical shapes. Maximum height, total area, and volume information was collected. SigmaPlot software was used to statistically analyze the data and generate graphs. For ectopic and recombinant CENP-A and H3 nucleosomes, manual measurements were performed in Gwyddion to ensure that only DNA-associated particles were included (diameter cutoff <20 nm).
BioAnalyzer analysis of DNA fragments obtained from chromatin immunoprecipitation

DNA samples were prepared according to the manufacturer's recommendations and run on High Sensitivity DNA Chips (Agilent Cat #5067-4626, Wilmington, DE, USA) on the Agilent 2100 BioAnalyzer system. Data within the control lower and upper limits were automatically called or manually aligned (see figure) with the Agilent 2100 Expert software.
DNase I digestion of chromatin from HeLa and SW480
HeLa and SW480 cells were harvested with trypsin, washed twice with ice-cold PBS containing 0.1% Tween 20, and resuspended in low sucrose buffer (15 mM Tris-HCl, pH 8.0, 15 mM NaCl, 60 mM KCl, 1 mM EDTA, 0.5 mM EGTA, 1 mM spermidine, EDTA-free protease inhibitors). Cells were mixed (1:1) with the same buffer containing 0.04% NP-40 and nuclei were released at 4°C. Nuclei were harvested by centrifugation and washed with low sucrose buffer, and DNase I digestion was performed with 20 million nuclei as previously described [76,77]. DNA fragments of 100 to 500 bp from a chromatin digestion with 60 U/ml DNase I (Sigma, St. Louis, MO, USA) were purified on a sucrose gradient [77] and the DNA was precipitated with 0.1 volume of sodium acetate and 0.7 volume of isopropanol.
Bioinformatic analysis of ChIP-seq, DNase-seq, and TOMTOM DNA motif enrichment

Purified DNA from ChIP or DNase I-digested chromatin was used to prepare libraries for Illumina high-throughput sequencing as described in the manufacturer's protocol (Illumina Sequencing, San Diego, CA, USA). Libraries were sequenced to generate 35-bp single-end reads using an Illumina GAII sequencer at the Advanced Technology Center, NCI (Frederick, MD). Sequence reads were mapped to the reference genome hg19 by the CASAVA 1.8.2 pipeline.
Hotspot detection for DNase-seq
We identified regions of local enrichment of sequence tags using a hotspot detection algorithm essentially as previously described [55,77] with a false discovery rate (FDR) of 0.1%.
Hotspot detection and input adjustment for ChIP-seq
The hotspot detection algorithm was similarly applied to ChIP-seq data with the following modification. The sequencing data from matching input samples are used for the processing of ChIP data, as a measure of background signal that might be significant. After normalizing the input data to match the number of tags in the ChIP data, the number of input tags is subtracted from the number of ChIP tags in the target window before calculating its z-score.
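A compact sketch of the input adjustment described above: scale the input tag count to the ChIP library depth, subtract it from the ChIP count in the target window, and compute a z-score. The expected count and standard deviation would normally come from the algorithm's local background window; all numbers here are hypothetical.

```python
# Sketch: input-adjusted z-score for one target window (hypothetical values).
def adjusted_zscore(chip_tags, input_tags, total_chip, total_input,
                    expected, sd):
    # Scale the input to the ChIP library depth, then subtract.
    scaled_input = input_tags * (total_chip / total_input)
    adjusted = chip_tags - scaled_input
    return (adjusted - expected) / sd

z = adjusted_zscore(chip_tags=85, input_tags=40, total_chip=2e7,
                    total_input=2.5e7, expected=12.0, sd=4.0)
print(f"z = {z:.2f}")
```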
DNA Motif discovery analysis
Motif discovery on selected DNA sequences was performed using MEME [78] on a parallel cluster at the NIH Biowulf supercomputing facility (meme.nbcr.net/). DNA sequences for MEME input came from the top 2,000 hotspots (by tag density). To limit the computational load, where hotspots spanned more than 200 bp, only the 200-bp regions with the highest tag density were used instead of the entire width of the hotspot. The motif width for searching was set to a minimum of 6 and a maximum of 20. To identify binding motifs for known transcription factors, we queried individual position-specific matrices against the Transfac database using the Tomtom software (http://meme.nbcr.net/meme/cgi-bin/tomtom.cgi). We retrieved statistically significant matches that share the majority of specific nucleotides in the sequence motifs.
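Preparing MEME input as described is a small filtering step: keep the top 2,000 hotspots by tag density and trim any hotspot wider than 200 bp to the 200-bp window of highest density. A sketch with hypothetical hotspot records (the 'peak' field, marking the position of maximal density, is an assumed representation):

```python
# Sketch: select MEME input regions from hotspot records (hypothetical).
def meme_regions(hotspots, n_top=2000, width=200):
    top = sorted(hotspots, key=lambda h: h["density"], reverse=True)[:n_top]
    regions = []
    for h in top:
        if h["end"] - h["start"] <= width:
            regions.append((h["chrom"], h["start"], h["end"]))
        else:
            center = h["peak"]          # assumed: position of highest density
            regions.append((h["chrom"], center - width // 2,
                            center + width // 2))
    return regions

demo = [{"chrom": "chr8", "start": 100, "end": 600, "peak": 380,
         "density": 9.1}]
print(meme_regions(demo, n_top=1))
```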
Quantitative PCR analysis
Quantitative (real-time) PCR was performed using the iQ SYBR Green Supermix kit from BioRad (#170-8880, Hercules, CA, USA) in a 25 μl reaction according to the manufacturer's protocol, and samples were amplified using an I-cycler fitted with the MyIQ single-color real-time PCR detection system (BioRad, Hercules, CA, USA). In all experiments, no-template and mock IP (normal IgG IP; negative control) controls, input chromatin DNA, and IP samples (CENP-A and CENP-C) from the same experiment were included. The qtPCR reactions were set up in triplicate, giving three threshold cycle numbers (Ct) for each sample. The experiment was repeated three separate times. Enrichments and fold changes were calculated as follows:

Ct.i = average Ct of input
Ct.m = average Ct of mock IP
Ct.IP = average Ct of IP samples (CENP-A and CENP-C)
STDV.i = standard deviation of input
STDV.m = standard deviation of mock
STDV.IP = standard deviation of IP

Step 1. Calculate ΔCt and STDV ΔCt for CENP-A, CENP-C and mock IP with respect to input: ΔCt = Ct.IP − Ct.i (or Ct.m − Ct.i for the mock IP), with STDV ΔCt = √(STDV.IP² + STDV.i²).

Step 2. Transform ΔCt and STDV ΔCt (with respect to input) to linear-scale fold change (FC) and fold change error (E): FC = 2^(−ΔCt) and E = FC × ln(2) × STDV ΔCt.

Step 3. Express enrichment of each IP as the ratio of its fold change to that of the mock IP (FC.IP/FC.m).

Fluorescence in situ hybridization probes

BAC clones spanning the 8q24 region (Figure 8B) were selected and obtained from a commercial source (Invitrogen, Grand Island, NY, USA). DNA was isolated from each BAC, labeled with biotin-dUTP, and hybridized to metaphase-spread slides from normal blood lymphocytes. Each BAC was evaluated for intensity and specificity of hybridization at the target region. The BAC RP11-150N13 was selected as the probe for 8q24 (chr8:126,377,028 to 126,556,325), and a previously published Myc probe was used to confirm the results [79]. For probes, 2 μg of BAC DNA was labeled with biotin-dUTP by nick translation in the presence of 4 nmol/L labeled nucleotide. Approximately 100 to 200 ng of labeled BAC probe was ethanol-precipitated in the presence of 20 μg each of salmon sperm DNA and human Cot1 DNA. The dry pellet was dissolved in 5 to 6 μl of hybridization buffer (50% deionized formamide, 20% dextran sulfate, 4X SSC). The probe was denatured for 5 min at 80°C and then pre-annealed for 1 h at 37°C before being added to the slides for hybridization.
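The ΔCt arithmetic above translates directly into a few lines of code. The Ct values below are hypothetical, and the error propagation follows the linear-scale formula given in Step 2:

```python
# Sketch: ΔCt-based fold change and enrichment over mock (hypothetical Cts).
import math

ct_input, sd_input = 22.0, 0.15
ct_ip, sd_ip = 24.5, 0.20       # CENP-A (or CENP-C) IP
ct_mock, sd_mock = 29.0, 0.25   # normal IgG IP

def fold_change(ct, sd, ct_ref, sd_ref):
    d_ct = ct - ct_ref
    sd_d = math.sqrt(sd**2 + sd_ref**2)
    fc = 2 ** (-d_ct)
    return fc, fc * math.log(2) * sd_d   # FC and linear-scale error E

fc_ip, e_ip = fold_change(ct_ip, sd_ip, ct_input, sd_input)
fc_mock, e_mock = fold_change(ct_mock, sd_mock, ct_input, sd_input)
print(f"enrichment over mock: {fc_ip / fc_mock:.1f}-fold")
```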
Co-immunofluorescence and fluorescent in situ hybridization experiments
IF on metaphase chromosomes and on interphase cells from cell lines and tumor/normal patient samples was performed on unfixed cells following a published protocol with some modifications [80,81]. Enrichment of mitotic cells was achieved by a double thymidine block, which arrests cells at the G1/S boundary of the cell cycle. Actively growing cultures were treated with 5 mM thymidine for 18 to 20 h, released from the first block and grown in fresh medium for 10 h, blocked again with 5 mM thymidine for 12 h, then released and cultured in fresh medium for a further 9 h. These cells were either harvested to make slides or treated with 100 μg/ml colcemid (Roche, Indianapolis, IN, USA) for 1 h to obtain metaphase chromosomes and then harvested to make slides. The cells were harvested with trypsin, washed with PBS, resuspended in 75 mM KCl, incubated at 37°C for 13 min, and then placed on ice. Cells were cytospun onto glass slides for 5 min at 600 rpm. After air drying, the slides were incubated in freshly prepared KCM buffer (120 mM KCl, 20 mM NaCl, 10 mM Tris-HCl, pH 8.0, 0.5 mM EDTA) containing 0.1% Triton X-100 and protease inhibitors (1 μg/ml each of aprotinin, pepstatin A, leupeptin and antipain) for 15 min at room temperature (RT), followed by blocking (KCM buffer containing 3% BSA, protease inhibitors and 1:100 normal IgG) for 30 min and primary antibody (KCM buffer containing 1% BSA, protease inhibitors and 1:100 normal IgG) for 1 h. The slides were washed with KCM three times for 5 min each at RT, followed by secondary antibody staining for 1 h at RT. The slides were washed with KCM buffer four times for 5 min each at RT, fixed with 10% buffered formalin for 10 min at RT, washed with H2O three times for 5 min each at RT, incubated in Carnoy's fixative for 30 min at RT, dehydrated through an ethanol series (70, 95 and 100% ethanol) for 5 min each, and air dried. For FISH, slides were equilibrated in 2X SSC for 5 min and digested with pepsin (10 μg/ml) for 3 min at 37°C; pepsin digestion time varied for different samples based on the amount of cytoplasm left after spinning cells onto slides, or on the age of the slides. The slides were washed three times in 2X SSC and dehydrated through an ethanol series. The DNA on the slides was denatured in 70% formamide/2X SSC at 80°C for 5 min. The slides were incubated in ice-cold 70% and 95% ethanol for 3 min each, followed by 100% ethanol for 5 min at RT. Denatured slides were then hybridized with the pre-annealed probe for 20 to 24 h at 37°C. At the end of hybridization, the slides were washed in 50% formamide/2X SSC three times for 5 min each at 45°C, in 0.2X SSC four times for 5 min each at 65°C, and once in 2X SSC at room temperature for 5 min. After washing, slides were incubated with blocking buffer (4X SSC/0.1% Tween-20, 3% bovine albumin) containing normal sheep or goat IgG (1:100) for 1 h at 37°C. The slides were then incubated with 1:1000 streptavidin-Alexa 488 (Invitrogen, Grand Island, NY, USA) in developing buffer (4X SSC/0.1% Tween 20, 1% BSA) containing normal IgG for 1 h. The slides were washed in 4X SSC/0.1% Tween 20 four times at 45°C, followed by two washes in 2X SSC at room temperature. Slides were air dried and mounted with aqueous mounting medium containing DAPI (Vector Labs, Burlingame, CA, USA).
The slides were observed with a DeltaVision RT system (Applied Precision, GE Healthcare, Issaquah, WA, USA) controlling an interline charge-coupled device camera (CoolSNAP; Roper) mounted on an inverted microscope (IX-70; Olympus America, Center Valley, PA, USA). Images were captured using the 100X objective at 0.06 μm z-sections, deconvolved, and 2D-projected using softWoRx (api.gehealthcare.com/api/softworx-suite.asp). One hundred interphase cells were analyzed for CENP-A and 8q24 for all cell lines. For tumors, the number of cells analyzed ranged from 70 to 85, except for one tumor in which fifty cells were analyzed due to insufficient material.
Tumor and matched normal tissue
Tumor and matched normal tissues were obtained from the Cooperative Human Tissue Network (CHTN). The pathology reports indicated that Tumor 1, Tumor 2, and Tumor 3 were, respectively, a moderately differentiated stage three tumor with no metastasis, a high-grade poorly differentiated stage three tumor with metastasis to one lymph node, and a low-grade well-differentiated stage three tumor with no metastasis. Tumor cells were minced in buffer containing 250 mM sucrose, 15 mM Tris-HCl pH 7.5, 15 mM NaCl, 60 mM KCl, 1 mM EDTA, 0.5 mM EGTA, 0.15 mM spermine, 0.5 mM spermidine and protease inhibitors (adapted from Dalal et al., 2005 [82]). Cells were collected by centrifugation at 600 g (1,500 rpm) for 10 min at 4°C. The cell pellet was washed twice with the same buffer, resuspended in the same buffer containing 2 M sucrose instead of 250 mM, and spun at 16,000 g for 30 min at 4°C. The cells were washed with buffer containing no sucrose and cytospun onto glass slides for 5 min at 600 rpm. The slides were air dried and processed for IF and FISH as above.
Pharmacokinetic Drug-Drug Interactions between Concomitantly Used Metformin with Pravastatin
The present study aimed to investigate the safety and reliability of the antidiabetic drug Metformin and possible drug interactions with Pravastatin when the two were administered as a combination treatment. The study was conducted in healthy Wistar rats and streptozotocin-induced diabetic rats. A simple and sensitive high performance liquid chromatographic method was developed for the simultaneous estimation of Metformin and Pravastatin in rat plasma, and was used to estimate the pharmacokinetic parameters of these drugs after oral administration. There was no significant difference in the tmax of Metformin alone versus in combination with Pravastatin on day 1 and day 8, respectively. There was no significant increase in either AUC(0-24 h) or AUC(0-∞) of Metformin alone versus in combination with Pravastatin on day 1 and day 8, respectively. Similarly, there was no significant enhancement in Cmax between Metformin alone and in combination with Pravastatin on day 1 and day 8, and no significant difference in Cmax or t1/2 values. Likewise, there was no significant difference in the tmax of Pravastatin alone versus in combination with Metformin on day 1 and day 8, and no significant enhancement in Cmax, tmax, or t1/2 values between Pravastatin alone and in combination with Metformin on day 1 and day 8, respectively. Based on these results, it can be concluded that the concurrent administration of these two drugs has potential benefit in the treatment of diabetes and hyperlipidemia. In addition, given their insignificant pharmacokinetic interaction, the combination therapy can be safe and highly advantageous in hyperlipidemic patients with diabetes.
INTRODUCTION
Diabetes mellitus (DM) is a chronic metabolic disorder characterized by hyperglycemia caused by insulin deficiency, often combined with insulin resistance. In diabetes, the homeostasis of carbohydrate and lipid metabolism is improperly regulated by the pancreatic hormone insulin, resulting in an increased blood glucose level. [1] Hyperglycemia occurs because of uncontrolled hepatic glucose output and reduced uptake of glucose by skeletal muscle, with reduced glycogen synthesis. Diabetes mellitus is classified on the basis of the pathogenic process that leads to the hyperglycemia; the broad categories of DM are designated type 1 and type 2. [2] Metformin lowers blood glucose concentration and improves insulin sensitivity by reducing hepatic gluconeogenesis and enhancing insulin-stimulated peripheral glucose uptake. It also inhibits adipose tissue lipolysis, thereby reducing circulating levels of free fatty acids (FFA). [3] Metformin, an oral antidiabetic drug, is being considered increasingly for the treatment and prevention of cancer and obesity, as well as for the extension of healthy life span. [4] Metformin is not metabolized at all but is completely excreted in urine; it may therefore accumulate and cause lactic acidosis if other medications have induced renal failure. [5] When patients are diagnosed with diabetes, a large number of medications become appropriate therapy. These include medications for dyslipidemia, hypertension, anti-platelet therapy, and glycemic control, which may lead to drug interactions with antidiabetic drugs. [6] Metformin has many drug-disease interactions that can increase the risk of metformin-associated lactic acidosis (MALA). [6] Drug interactions are often categorized as pharmacodynamic or pharmacokinetic in nature. [6] A pharmacodynamic drug interaction is related to the drug's effect on the body, and such interactions can be either beneficial or detrimental to patients. [6] Any drug that has the potential to raise blood glucose may produce apparent inefficacy of an oral hypoglycemic drug, and stopping a drug which causes hyperglycemia may produce a significant fall in blood glucose; this may require a parallel reduction in the dose of the hypoglycemic drug. [5] Some drugs can lower blood glucose, but the mechanisms of action are not well understood. Taking one of these drugs with a hypoglycemic drug might cause clinically significant hypoglycemia, and the patient may need a lower dose or even have to cease the oral hypoglycemic drug. Conversely, stopping a drug with the potential to lower blood glucose might produce relative inefficacy of a hypoglycemic drug and create a need for an increased dose. [5]
Materials

Drugs and chemicals
Metformin and Pravastatin were received as gift samples from Aurobindo Laboratories. All HPLC-grade solvents (methanol and water) were procured from Finar Chemicals Ltd., Ahmedabad. All other chemicals used were of analytical grade.
Animal study
Male Wistar rats (weighing 200-220 g) were procured from the animal house of CMR College of Pharmacy, Hyderabad. Animals were randomly divided into four groups of six animals each. Rats were maintained under a controlled laboratory environment (relative humidity 50%) and fed a standard pellet diet with water ad libitum. The protocol of the animal study was approved by the institutional animal ethics committee (No. IAEC/1292/VCP/Y6/Ph D-16/61).

Study Design [7]

The rats were grouped as follows:
Group I: Metformin alone as a single dose/day in diabetic rats.
Group II: Pravastatin alone as a single dose/day in diabetic rats.
Group III: Pravastatin alone as a single dose/day in normal healthy rats.
Group IV: Metformin and Pravastatin administered concomitantly as a single dose/day in diabetic rats.
Collection of Blood Samples
After administration of the drugs, blood samples of 0.5 ml were drawn from each anesthetized (isoflurane) rat at pre-determined time intervals from the retro-orbital plexus, using a capillary tube, into pre-labelled Eppendorf tubes containing 10% K2EDTA anticoagulant (20 μL). The time intervals for sample collection were 0 (pre-dose), 0.5, 1, 2, 4, 6, 8, 10, 12, 16, 18 and 24 hours (post-dose). An equal amount of saline was administered to replace blood volume at every withdrawal. Plasma was obtained by centrifuging blood samples in a cooling centrifuge (REMI ULTRA) at 3,000 rpm for 5 minutes. The obtained plasma samples were transferred into pre-labelled microcentrifuge tubes and stored at −30°C until bioanalysis of pharmacokinetic and pharmacodynamic parameters. All of the above procedures were repeated on day 8. Pharmacokinetic parameters were calculated by noncompartmental analysis using WinNonlin® 5.1 software, from the concentrations obtained with the bioanalytical method below.
Method of Analysis

Preparation of Plasma Samples for HPLC Analysis
Rat plasma (0.5 ml) samples were prepared for chromatography by precipitating proteins with 2.5 ml of ice-cold absolute ethanol per 0.5 ml of plasma. After centrifugation, the ethanol was transferred into a clean tube. The precipitate was resuspended in 1 ml of acetonitrile by vortexing for 1 min. After centrifugation (5,000-6,000 rpm for 10 min), the acetonitrile was added to the ethanol and the organic mixture was taken to near dryness under a stream of nitrogen at room temperature. Samples were reconstituted in 200 μl of mobile phase and injected for HPLC analysis. For HPLC, an Inertsil ODS 3V C18 column (250 × 4.6 mm, 5 μm particle size) was used, with a mobile phase consisting of a mixture of phosphate buffer and methanol in the ratio 60:40 v/v; the flow rate was maintained at 1 ml/min and the eluent was monitored at 215 nm. Phenformin was used as the internal standard. The retention times of Metformin, Pravastatin and Phenformin were 7.2, 4.6 and 3.2 min, respectively.
Standard calibration curve of Metformin and Pravastatin in rat plasma
Different concentrations (0.05, 0.1, 0.5, 1, 5, 10, 20, 40 ng/ml) of Metformin and Pravastatin in plasma were prepared for the calibration curve. The samples were treated by the protein precipitation method described above, and the peak areas of Metformin and Pravastatin were recorded. The peak area ratios obtained at the different concentrations of Metformin and Pravastatin were plotted, using UV-Vis detection at 220 nm.
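The calibration step amounts to a straight-line fit of peak-area ratio against nominal concentration, then inverting the line for unknowns. A sketch with the stated standard concentrations and hypothetical area ratios:

```python
# Sketch: calibration curve from peak-area ratios (analyte/internal standard).
import numpy as np

conc = np.array([0.05, 0.1, 0.5, 1, 5, 10, 20, 40])          # ng/ml
ratio = np.array([0.011, 0.021, 0.10, 0.21, 1.02, 2.05, 4.1, 8.2])  # hypothetical

slope, intercept = np.polyfit(conc, ratio, 1)
print(f"ratio = {slope:.4f} x conc + {intercept:.4f}")

# Back-calculate an unknown plasma sample from its measured ratio:
unknown_ratio = 1.5
print(f"estimated conc: {(unknown_ratio - intercept) / slope:.2f} ng/ml")
```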
Pharmacokinetic Analysis
The pharmacokinetic parameters peak plasma concentration (Cmax) and time to reach peak concentration (tmax) were obtained directly from the concentration-time data. In the present study, AUC0-t refers to the AUC from 0 to 24 hours, determined by the linear trapezoidal rule, and AUC0-∞ refers to the AUC from time zero to infinity. AUC0-∞ was calculated using the formula AUC0-t + [Clast/K], where Clast is the concentration in μg/ml at the last time point and K is the elimination rate constant. Various pharmacokinetic parameters, including the area under the curve (AUC), elimination half-life (t½), volume of distribution (V/f), total clearance (Cl/f) and mean residence time, were obtained for each subject using noncompartmental analysis in WinNonlin® 5.1 software.
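The AUC arithmetic described above is easily sketched: trapezoidal integration to the last sampling time, then extrapolation with Clast/K. The concentration-time values and rate constant below are hypothetical, and K would normally be estimated from the terminal log-linear phase:

```python
# Sketch: non-compartmental AUC as described, with hypothetical data.
import numpy as np

t = np.array([0, 0.5, 1, 2, 4, 6, 8, 12, 24])       # h
c = np.array([0, 8, 15, 24, 18, 12, 8, 4, 1.0])     # ug/ml

auc_0_t = np.trapz(c, t)            # AUC(0-24 h), linear trapezoidal rule
k = 0.12                            # elimination rate constant (1/h), assumed
auc_inf = auc_0_t + c[-1] / k       # AUC(0-inf) = AUC(0-t) + Clast/K
t_half = np.log(2) / k              # elimination half-life

print(f"AUC(0-24) = {auc_0_t:.1f} ug*h/ml, AUC(0-inf) = {auc_inf:.1f}, "
      f"t1/2 = {t_half:.1f} h")
```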
Statistical Analysis
Statistical comparisons of the pharmacokinetic-pharmacodynamic results among the Metformin, Pravastatin, and combination groups, and of the plasma concentration-response data across concentrations and time, were carried out with Student's paired t-test; a value of P<0.05 was considered statistically significant. Data are reported as mean ± S.E.M. Linear regressions were used to determine the relationship between total plasma concentrations and pharmacokinetic and pharmacodynamic parameters. The mean concentration versus time profiles of Metformin and Pravastatin in rat plasma are shown in Figures 1, 2, 3, 4, 5 and 6.
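The paired comparison described above can be sketched with scipy; the per-rat Cmax values are hypothetical and chosen to illustrate a non-significant alone-versus-combination difference:

```python
# Sketch: paired t-test on per-rat Cmax, alone versus combination.
from scipy.stats import ttest_rel

cmax_alone = [24.1, 24.5, 23.9, 24.6, 24.3, 24.7]   # hypothetical, ug/ml
cmax_combo = [24.6, 24.0, 24.8, 24.2, 24.9, 24.1]   # hypothetical, ug/ml

t_stat, p = ttest_rel(cmax_alone, cmax_combo)
print(f"t = {t_stat:.2f}, P = {p:.4f}")   # P < 0.05 would indicate significance
```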
RESULTS AND DISCUSSION
In the present study, Metformin was completely absorbed after oral administration, with a peak plasma concentration of 24.34 ± 0.3 μg/ml at 2 hours after dosing on day 1. With the combination of Metformin and Pravastatin on day 1, the peak plasma concentration of Metformin was 26.03 ± 0.12 μg/ml at 2 hours after dosing; there was no significant increase in peak plasma concentration. Similarly, Pravastatin was completely absorbed after oral administration, with a peak plasma concentration of 3.02 ± 0.03 μg/ml at 2 hours after dosing on day 1; in combination with Metformin on day 1, the peak plasma concentration of Pravastatin was 4.80 ± 0.04 μg/ml at 2 hours after dosing, again with no significant increase. On day 8, the peak plasma concentrations of Metformin alone and in combination with Pravastatin were 31.92 ± 0.22 μg/ml and 32.41 ± 0.10 μg/ml respectively, and those of Pravastatin alone and in combination with Metformin were 4.80 ± 0.04 μg/ml and 4.615 ± 0.04 μg/ml respectively; there was no significant difference in peak plasma concentration on day 8 (P>0.05). Significant differences were observed between diabetic and healthy Pravastatin-treated rats on day 1 and day 8 (P<0.05) on oral administration of Pravastatin alone and in combination with Metformin. Pravastatin on day 1 showed a 2% increase in the AUC0-24 of Metformin compared to the combination treatment; similarly, Pravastatin alone on day 1 showed an increase in AUC0-24 compared with the combination of Metformin and Pravastatin. On day 8, Metformin and Pravastatin in the combination treatment showed 1.65% and 2.8% increases in AUC0-24 respectively. The mean AUC0-24 of Pravastatin in diabetic (HL) rats was 33.49 ± 0.20 μg·h/ml and 44.11 ± 0.22 μg·h/ml on day 1 and day 8, which was reduced to 21.9 ± 0.11 μg·h/ml and 38.22 ± 0.09 μg·h/ml respectively in healthy rats (P<0.05). The half-life was similar for the alone and combination treatments on day 1 and day 8 (6.56 ± 0.28 vs. 6.64 ± 0.09 h); these changes were not statistically significant (P>0.05). All results are shown in Tables 1-6.
Based on the results of the kinetic study, it is evident that a single dose of Metformin and Pravastatin, administered individually or concomitantly to diabetic rats, did not show any statistically significant interactions in the pharmacokinetic parameters. It can therefore be concluded that concurrent administration of these two drugs has potential benefit in the management of diabetic patients with hyperlipidemia. In addition, given the insignificant pharmacokinetic interaction, the combination therapy should be safe and highly advantageous in patients with diabetes and hyperlipidemia.
Defining movement instabilities in yips golfers using motion capture and muscle synergies.
'Yips' is an involuntary movement disorder seen in some professional golfers. The diagnostic challenge in yips is to distinguish symptoms of task-specific dystonia from psychological 'choking'. We evaluated 15 professional golfers with mild symptomatic yips via anxiety tests, motion capture and surface electromyography during a putting task. Movement instabilities were analyzed via temporal statistical methodologies (one-dimensional statistical parametric mapping). In a subset of golfers, we found significant differences in the angular velocities of putter-club rotation and altered synergy neural coefficients during the downswing phase. Our results showed that golfers with mild yips require sensitive motion-capture evaluations wherein movement instabilities become evident. The downswing in particular is affected, and the ensuing perturbations in phasic muscle activity share dystonic features that are consistently identified as abnormal muscle synergy patterns. Despite the strong subjective feeling of yips that golfers complain of, movement analysis can reliably distinguish those with 'choking' from those with task-specific dystonia.
Introduction
Some golfers experience an acute involuntary loss of performance in competitive environments. Instinctively, it is attributed to competition stress or anxiety, as seen in any high-precision, high-pressure sport. This performance deficit is classically known as "choking" (Beilock & Carr, 2001; Hill et al., 2010), and among golfers as "the yips". However, yips has also been suggested to be a movement disorder. As a motor problem, it is characterized by abnormal involuntary twitching, jerks, spasms or freezing of planned motor movement (Torres-Russotto & Perlmutter, 2008). Consequently, many authors regard yips as a task-specific dystonia (Adler et al., 2011; Clarke et al., 2015; Smith et al., 2003) or an occupational cramp resulting from intensive over-use of specific musculature for long periods of time, which affects fine motor control (Altenmüller, 2003).
A key challenge in yips is to disentangle features of dystonia from "choking" (Clarke et al., 2015; Smith et al., 2003). Like all movement disorders, diagnosing task-specific dystonia is clinical. Neurologists have to "see it to diagnose it" via physical examination or video evidence (Logroscino et al., 2003), or risk categorizing yips as a psychological "choking" phenomenon. To identify dystonic features in yips, it is crucial to observe or evoke patterns such as abnormal co-contractions during the task (Adler et al., 2005), e.g. in putting or drive shots. Unfortunately, dystonic symptoms are rarely observed in golfers until their performance worsens (Sachdev, 1992). When symptoms of yips are mild, the movement variability is often inconsistent (Marquardt, 2009) and motor deficits may be difficult to capture videographically. Furthermore, dystonic patterns such as mild tremoric representations are dampened by a two-handed club grip, and single-handed shots to evoke yips may appear contrived (Adler et al., 2018). Therefore, our aim in this study was to objectively identify features of task-specific dystonia to rule out "choking" in golfers with mild yips. The fallout of "choking" or dystonia is singular: a measurable kinematic outcome with demonstrable performance loss due to an abnormal motor or stress response. We speculate that despite the strong psychological subjective priors associated with yips (Hill et al., 2010), precise movement analysis would allow us to capture true dystonic features consistently in the form of abnormal stereotyped muscle activity. To that end, we designed our study around golfers suffering from putter's yips, since putting, by virtue of its importance in competitive golf, is one of the strokes most affected by yips (Kim et al., 2004).
We evaluated "choking" characteristics using trait and situational anxiety tests. To address the kinematics of putting shots, we used sensitive motion-capture systems to visualize movement characteristics and study putting trajectories. Motion capture provides a direct image of the orientation and position of the golf club during a putting stroke, making it an ideal tool to quantify movement instabilities in golfers (Evans et al., 2008; Evans & Tuttle, 2015). Within the framework of dynamic motor control, we applied the concept of muscle synergies to surface electromyographic recordings to represent the coordinated activation of muscle groups working as a specialized functional unit (d'Avella et al., 2003). Prior studies have used synergy estimation as a tool to evaluate functional abnormalities between healthy subjects and neurological patients (Giszter & Hart, 2013; Gizzi et al., 2011; Lunardini et al., 2015). Typically, muscle synergies function to access the best subset from a vast library of motor tasks to accomplish a smooth coordinated movement (d'Avella et al., 2006). However, in diseases of the nervous system such as stroke, dystonia or spinal cord injury, these physiological synergies are affected (Giszter & Hart, 2013; Gizzi et al., 2011; Lunardini et al., 2017). In functional movement disorders like task-specific dystonia, it is suggested that subjects may have fixed and normal synergy structures but abnormal neural coefficients, which may indicate an inability to access or modulate a well-defined motor behavior (Santello & Lang, 2015).
To summarize, we hypothesized that during putting, within each golfer, the co-contraction balance maintained by the upper-arm and forearm muscles is altered in yips shots compared to normal shot patterns. This difference would be observable via motion tracking of the putter club together with muscle synergies, revealing features of movement instabilities in high-precision putting shots. These findings would allow us to characterize yips golfers with task-specific dystonia, thereby ruling out those with choking.
General information
15 professional golfers [14M, 1F, age 52.87 ± 12.56 years (mean, SD), all right-handed] volunteered for the study and were prospectively enrolled at a single center. All golfers self-reported yips symptoms for putting shots. Clinically, we considered these symptoms as mild yips, defined as inconsistencies during putting that were not strictly confined to problems only in putting (Klämpfl et al., 2013; Marquardt, 2009). Inclusion criteria were (i) professional or high-ranked golfers with a current or past history of the yips, (ii) golfing experience of >15 years, (iii) a handicap score of <14 before onset of yips, and (iv) symptoms of yips severe enough to change grip style or alter training conditions. Exclusion criteria were subjects with (i) apparent physical injuries to the upper arm or forearm, or (ii) diagnosed neurological, neuromuscular or psychological symptomatology. The research protocol was approved by the local ethics committee and written informed consent was obtained from the golfers in accordance with the Helsinki Declaration. All golfers were examined by a study neurologist on their arrival at the testing center.
Anxiety tests and Putting task
Subjects were asked to complete two separate anxiety questionnaires at the start of the test session: (i) the Trait Anxiety Inventory in Sports (TAIS), which provides a comprehensive measure of anxiety in competitive sports (Hashimoto et al., 1993); and (ii) the Sports Competition Anxiety (SCA) test, an index of situational anxiety that analyzes how athletes feel before and during a competitive situation (Hamidi & Besharat, 2010; Martens, 1977). The TAIS uses a 4-point Likert scale for a set of 25 questions, with a minimum score of 25 (low anxiety) and a maximum of 100 (high anxiety proneness). The SCA test consists of 15 items, 10 of which are scored, with a score of less than 17 indicating a low level of anxiety, 17 to 24 an average level, and more than 24 a high level of anxiety.
The putting task was performed on an artificial putting surface in a room equipped with 12 motion-tracking cameras, with the distance from starting position to hole set to 2.2 meters. 40 trials were performed, and after every shot the golfers were requested to verbally communicate their impression of their performance.
The golfers were specifically instructed to try to putt all the shots without any explicit critique provided by the experimenter. Trials were then sorted and classified as normal shots and yips shots based on the golfers' subjective experience, irrespective of their success in putting the shots. All trials were videotaped and qualitatively assessed for dystonic movements. The supplementary data provide additional details regarding evaluation of the putting task.
Movement analysis using sEMG and motion capture
Fig. 1 provides an overview of the surface electromyography (sEMG) and motion-capture sensor evaluation. sEMG was recorded using the Trigno wireless EMG system (Delsys, Inc., US) with 16 sEMG sensors sampled at 2 kHz, band-pass filtered between 20 and 480 Hz, rectified and low-pass filtered at 10 Hz. Signals from the biceps, triceps, pronator, supinator, flexor digitorum superficialis (FDS), extensor digitorum communis (EDC), extensor carpi radialis (ECR) and flexor carpi ulnaris (FCU) of each arm were recorded with LabChart-7 software (ADInstruments, New South Wales, Australia). A 12-camera OptiTrack Prime 17W system (NaturalPoint, Inc., US) was used for motion capture; the system was optimized for recording in small spaces with a high resolution of 1.7 megapixels at a frame rate of 360 frames per second. The axes of the club coordinate system were defined as the unit vector in the direction of the forward swing (Y axis), the unit vector parallel to the club shaft (X axis), and the vector perpendicular to both (Z axis). The angular velocity of the putter head about the Z axis was used, since this was the most sensitive parameter for recording twitches, jerks or freezing of movements.
For each putting trial, motion-tracking and sEMG data were epoched for 2 seconds (-1 to +1 second, with 0 being the time of ball impact). This epoch included a time window immediately prior to the start of the backswing, the backswing, the downswing until ball impact, and the follow-through phase. The club coordinate system was defined to monitor the angular velocities of the club during the putting stroke; calculations of angular velocities are described in the supplementary data. The putter downswing is a fast movement occurring at a critical time point between the backswing (governed by anti-gravity muscles) and follow-through (governed by muscles defining the postural stability necessary to end the swing phase). Since the downswing phase depended on the putter head direction and speed, we focused our analysis on the phasic component of the putter swing (d'Avella et al., 2006). Given that a constant tone was involved in maintaining the stability of the putting movement, the phasic component was extracted by subtracting the tonic component, represented as a linear ramp between 400 ms before movement onset (backswing) and 400 ms after the follow-through phase (d'Avella et al., 2006).
For muscle synergy analysis, trials were downsampled from 2 kHz to 500 Hz and synergies were extracted for the downswing phase from each arm separately. The phasic sEMG data described above were, for every trial, factorized using non-negative matrix factorization (NNMF) into a synergy matrix of weights (W) and a neural recruitment coefficient matrix (C), represented mathematically as M = WC + E, where M is an m×n matrix of sEMG data (with m = number of muscles and n = number of samples), W is an m×k matrix containing the muscle synergies used to reduce the m muscles into a k-dimensional space, C is a k×n matrix containing the k patterns used to temporally control the full set of m muscles, k denotes the number of synergies, and E the residuals. Intuitively, for a particular muscle activation pattern M, W specifies the relative contributions of the muscles involved in the synergy and C is the coefficient that changes over time and across conditions. To avoid local minima, the algorithm was iterated 1000 times and the final synergy vectors were normalised by their maximum values (Lee & Seung, 1999). The methodology for the calculation and validation of synergies and their reconstructions was performed as described in previous studies (Safavynia & Ting, 2012; Torres-Oviedo et al., 2006). The NNMF algorithm received each trial's data as input and synergies were calculated trial by trial without fixing the synergy number. The least synergy number adequate to reconstruct the sEMG data was quantified by two parameters: the centered Pearson's correlation coefficient (R²), calculated for every muscle from the trial dataset with respect to the reconstructed synergies, and the variance accounted for (VAF), used to obtain a goodness of fit between the actual and reconstructed EMG. To obtain consistent features from the data,
synergies that crossed thresholds of a mean R² (EMG reconstruction R²) of 80%, a VAF of 90% and a per-muscle VAF of 80% were considered sufficient to represent the input EMG dataset. To match the resulting synergies, we sorted the trial-based synergies by their degree of alignment, represented by their Pearson's correlation coefficients, and matched them pair by pair. As a first step, synergies from one trial within the group (normal or yips) were sorted according to their power contribution to the filtered sEMG data; sEMG power was computed as the root mean square (RMS) of the recorded sEMG signals. In the second step, the synergies from the remaining trials were ordered with respect to the correlation of their weights with those obtained in the previous step.
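A sketch of trial-wise synergy extraction is shown below. It assumes a muscles × samples array of phasic sEMG and uses scikit-learn's NMF in place of the authors' Matlab pipeline; the variable names, the random placeholder data and the search over k are illustrative only:

```python
# Trial-wise synergy extraction via NNMF; placeholder data, illustrative names.
import numpy as np
from sklearn.decomposition import NMF

def extract_synergies(emg, k):
    # sklearn expects samples in rows, so factorize the transpose:
    # emg.T (n x m) ~ C.T (n x k) @ W.T (k x m)
    model = NMF(n_components=k, init="nndsvd", max_iter=1000)
    C = model.fit_transform(emg.T).T        # k x n neural coefficients
    W = model.components_.T                 # m x k synergy weights
    return W, C

def vaf(emg, W, C):
    # Variance accounted for by the reconstruction W @ C
    resid = emg - W @ C
    return 1.0 - np.sum(resid ** 2) / np.sum(emg ** 2)

emg = np.abs(np.random.randn(16, 250))      # 16 muscles x downswing samples
for k in range(1, 6):                       # keep the smallest adequate k
    W, C = extract_synergies(emg, k)
    if vaf(emg, W, C) >= 0.90:              # VAF threshold used in the text
        break
```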
Statistical analysis
Descriptive and correlational statistics for relevant demographic variables were computed. We identified the total number of yips trials, and a random subset of normal-shot trials was matched to keep the trial number the same under all response conditions. To evaluate the motion-tracking differences between normal and yips shots, a two-tailed paired t-test using one-dimensional Statistical Parametric Mapping (SPM) was used. The procedure involved calculating a t-statistic threshold (t*) and the temporal smoothness at each time point using the residuals of the time-series data (Pataky et al., 2013). For each subject, thresholds were calculated based on the number of matched trials, which differed for each participant, making the SPM t-test ideal for subject-level analysis. The alpha value was set to 0.05, and if the SPM t-trajectory crossed the threshold at any time point in the time series, the values were deemed significant (Fig. 2A, Comparison) (Robinson et al., 2014).
The advantage of this method is that the results are reproducible and do not depend on standard deviations for interpretation, where sEMG often shows variability across multiple trials. For the resulting synergy weights, paired t-tests with Bonferroni correction were applied, and for the temporal neural coefficients a two-tailed paired SPM t-test was used (Fig. 2B, Comparison). For both tests, statistical significance was set to p<0.05. All offline analyses were done using Matlab 2017b. SPM analyses were performed using the open-source spm1d code available at www.spm1d.org.
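The subject-level 1D-SPM paired t-test can be sketched with the spm1d package cited above; the trial arrays here are random placeholders standing in for matched Ns/Ys angular-velocity trajectories:

```python
# Subject-level 1D-SPM paired t-test; trial arrays are random placeholders.
import numpy as np
import spm1d

n_trials, n_time = 10, 720                  # matched trials, 2 s at 360 fps
ys = np.random.randn(n_trials, n_time)      # yips-shot angular velocities
ns = np.random.randn(n_trials, n_time)      # matched normal-shot trials

t = spm1d.stats.ttest_paired(ys, ns)
ti = t.inference(alpha=0.05, two_tailed=True)
print(ti.h0reject)                          # True if the t-trajectory crosses t*
# ti.clusters lists the supra-threshold time segments
```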
Data sharing statement:
Data that support the findings of this study are available from the corresponding author upon reasonable request.Custom scripts will be available before publication and deposited in a community repository.
(A) Analysis of angular velocity of the club: For each subject, trials were sorted to perform comparisons using one-dimensional statistical parametric mapping (1D-SPM). For normal hits and yips shots, the mean angular velocity curves, trial by trial, from backswing to impact and follow-through were mostly similar (overlapping mean ± SD curves) except for certain time segments during the downswing (shown by the solid black line in the mean ± SD curve). This represents the time where the values for yips shots were higher than for normal hits, crossing the critical threshold (t*) in the paired SPM t-test. The probability of finding such significant time segments by random sampling is given by the respective p-values < 0.05 for each subject, thus rejecting the null hypothesis.
Demographic details:
Demographic details are described in Table 1.
Motion tracking results:
For the putting task, shots were classified as normal shots (Ns) and yips shots (Ys) according to the golfers' subjective responses as described in Table 1. About 25% of the 40 trials were Ys (10.0 ± 3.99).
Quantitatively, all participants showed some degree of overlap in swing trajectories (mean ± SD curves) between Ns and Ys, suggesting that high-precision shots requiring fine control do not deviate much from the mean (Fig. 3). During the swing phase, the SPM paired t-test showed specific time segments of the t-curves that crossed the critical threshold value at p < 0.05 in 9 out of 15 golfers, as shown in Table 2. This time window of significant change in angular velocity of the putter club between Ns and Ys was characteristically observed in the downswing phase. However, the downswing time itself remained similar between Ns and Ys within subjects [Ns = 315.26 ms ± 54.5, Ys = 315.87 ms ± 55.9, paired t-test t(14) = -0.49, p = 0.631] (Supplementary Table 1).
Fig. 3
Sections of backswing, downswing and follow-through are shown for Subject 01 in grey dotted lines, with the black dotted line representing the time of ball impact. For normal hits and yips shots, the mean angular velocity curves from backswing to impact and follow-through were mostly similar throughout, except for certain time segments during the downswing (shown by the horizontal solid black line). This represents the time where the values for yips shots were higher than for normal hits, crossing the critical threshold (t*) in the two-tailed paired SPM t-test.
Synergy analysis results:
Averaged sEMG between Ns and Ys failed to show any significant difference using mean and SD plots for each subject (Supplementary Fig. 2) when evaluated for stereotyped burst patterns. Based on the changes observed in angular velocity, we pre-selected only the downswing time for muscle synergy analysis. Synergy constructions for short time periods have the advantage of revealing minor perturbations in the movement that affect stability (Wojtara et al., 2014). Muscle synergies calculated iteratively for the downswing phase were represented by 3 synergies for each arm, with reconstruction scores of approximately 80% and above.
Reconstruction scores are shown in Supplementary Table-2.
The patterns of the 3 spatial synergies (W1, W2 and W3) extracted from each arm showed broad similarities between Ns and Ys for the downswing task (Supplementary Fig. 3). For the left arm, W1 showed a synchronized burst with an extension component mainly involving the supinator, EDC and ECR. W2 had flexors active, with strong activations in the pronators, FDS and FCU. W3 showed non-specific activation in all muscle groups. In contrast, the right-arm synergies showed an opposite spatial structure. W1 mostly had a flexor component, with the biceps, pronators, FDS and FCU showing strong activations. W2 showed higher activations in the supinator, FDS and ECR compared to other muscles. Finally, W3 had both higher pronator and supinator activation. Subject-wise paired t-tests showed no significant changes in W's between Ns and Ys.
For the evaluation of neural coefficients (C), SPM paired t-tests showed significant differences in 11 of the 15 golfers (Table 2). The change in neural coefficients with respect to synergy weight is shown as an example for 2 golfers in Fig. 4. Since the downswing phase is a fast movement action, we construe this change in C's to have affected the entire downswing time of interest (and not a specific time segment within the downswing phase).
Compiling the findings on movement analysis, we observed that Subjects 02, 03, 06, 07 and 15 did not show any overt differences between normal and yips shots on motion capture or muscle synergy analysis. The remaining golfers showed significant differences in either motion capture or muscle synergy analysis, or frequently in both (Table 2).
Discussion
Our study is the first comprehensive report on golfers with mild yips in which sensitive movement-related measurements were utilized to reveal features of a movement disorder. Specifically, we found that in mild yips (i) golfers experience reasonable amounts of stress that may contribute to a state of underperformance overlapping with their movement instabilities; (ii) for putting shots, whereas motion tracking readily captures fine motor changes in movement trajectories, features of co-contraction imbalance on sEMG recordings may not be particularly evident; and (iii) the downswing is particularly affected, and the ensuing perturbations in muscle activity share dystonic features that are consistently identified as abnormal muscle synergy patterns.
Stress and competitive sports
Models that explain sports-related anxiety conceptualize that the cognitive self-evaluation and stress response, if left unchecked, result in increased muscle tension, loss of focus and a range of other physiological and behavioral changes (Ford et al., 2017). As a consequence, depending on the individual's own threshold for the sense of anxiety, the performance-anxiety loop can either streamline the quality of shots or potentially debilitate the task (Apter, 1984). The golfers in our study were not of an anxious type, as revealed by TAIS scores, but experienced a certain degree of competition stress, as seen from the SCA test. Though we believe these to be normal stress responses during gameplay, the subjective feedback given during the experiment suggests otherwise (Table 1). Qualitatively, the golfers largely agreed that anxiety was perhaps not the only factor contributing to their performance deficit. Prior studies have reported similar observations, namely that higher muscle activations and grip force can impact stroke-play kinematics irrespective of the level of situation-induced anxiety (Adler et al., 2011; Smith et al., 2000; Stinear et al., 2006).
Downswing putting accuracy in yips
Professional golfers frequently spend considerable time perfecting the putting stroke (Alexander & Kern, 2005). To perform a smooth shot, expert golfers recommend that the start of the downswing phase of the club be dictated by gravity, with the hand torque eventually adjusted at ball impact. Of significance are the angular velocity of the putter club-head and the hand-torque model, which advocates minimizing hand torque from the start of the downswing to allow a less variable velocity at ball impact, making putting shots more consistent and accurate (Hume et al., 2005). As an outcome measure of motion patterns, we chose the putter-club angular velocity during the entire swing and found that it was largely inconsistent during the downswing phase for yips shots. This result reflects the temporal difference between normal and yips shots and hence indicates a change in the uniformity or regularity of the shots. We therefore interpret the inconsistencies seen in yips shots as a miscommunication in the co-contracting forearm muscles during such "fine adjustments", which may have prevented the ideal trajectory anticipated by the golfers.
Utility of muscle synergies
Our initial screening of sEMG differences in multiple muscle pairs between normal and yips shots was mostly inconclusive (Supplementary Fig. 2). Adler et al. reported abnormal co-contraction patterns in the wrist flexors and extensors in the downswing phase of yips-affected golfers (Adler et al., 2005). Co-contractions are essential to maintain joint position balance, and in high-precision shots like putting, a discrepancy that results in an erratic trajectory does not necessarily imply muscle dysfunction (Gribble et al., 2003). Therefore, in low-force tasks like putting, given the trial-by-trial sEMG variability, we abstained from over-reporting the effect of co-contractions as manifestations of yips in our participants.
This leads us to the next point: using synergy analysis to identify features of focal dystonia. To study how biomechanically constrained joint balance is maintained, we used muscle synergies to identify patterns of muscle activity that achieve multi-joint coordination. We observed that the muscles of the elbow-wrist joint required 3 synergies to provide the necessary balance, direction and speed to perform the putting stroke. The apparently high number of synergies for a putting stroke documented here is a response to a low-force isometric task which necessitates precise movement control (Santello et al., 2013).
The spatial synergy weights (W) represent the muscle activations during a specific time of interest, here the entire course of the downswing. In maintaining downswing balance, the variability in W's was generally constrained to similar spatial patterns for normal and yips shots. This appears to be an expected outcome, since expert golfers minimize movement at the wrists by locking them in position, control positional parameters by spatially scaling downswing times, and orient the club head to avoid changes in trajectory (Coello et al., 2000). Furthermore, the extracted W's illustrate functional groupings, and due to the anatomical proximity of the muscles or the effect of crosstalk, we speculate that this may have contributed to the similarity in spatial synergies.
Neural coefficients (C) are believed to represent neural commands from specific synergies that influence the W's, modulating them over time (Safavynia & Ting, 2012). The observed differences in C's in a subset of golfers signify altered phasic muscle synergy activity in yips shots relative to normal conditions. These were uniquely defined for each golfer, suggesting an individual-specific relationship in muscle activations from higher centers. Muscle activations, which occur in a multi-dimensional space, require a coordinative input in the form of neural information to exclude and select appropriate motor patterns to harmonize movement.
This harmony is achieved by spinal pre-motor neurons, which dynamically adjust activations from inhibitory and excitatory pre-motor neurons in conjunction with higher centers such as the sensorimotor cortex, basal ganglia and cerebellum (Giszter & Hart, 2013; Overduin et al., 2015; Takei et al., 2017). With long years of practice and repeated use, these pre-motor neurons evolve to reduce variability and strengthen access to a specific synergy necessary for motor control (Bizzi & Cheung, 2013). Yips shots are an extreme example of this creation of a "specific synergy" arising from a highly sensitive pool of pre-motor neurons, eventually leading to abnormal sensory integration (Alnajjar et al., 2015), impaired cortico-motor information processing, or maladaptive plasticity (Santello & Lang, 2014). In actively adjusting putting trajectory, these golfers were unable to maintain their co-contraction stability due to an abnormal synergy representation. The manifestations of yips shots seen here are therefore an amplification of the altered dynamic phasic activity of which dystonia is a part.
Limitations
There are some limitations to this study. Each golfer plays with a certain degree of uniqueness, and this subtle but diverse behavior in motion capture and sEMG led us to focus on a case-by-case basis. Testing in laboratory environments never brings out the same level of anxiety experienced by players when "sinking the putt" on the green. Our goal was not to create a high-stress environment for the golfer but rather to identify features of muscle and kinematic imbalance under any possible yips-like condition. We were careful in interpreting our findings on muscle synergies, which were based on changes in unidirectional downswing movement. Detailed modeling using joint kinematics along with truncal muscle synergy estimation for putting shots would be beneficial to address in the future. Furthermore, it would be advantageous if a standardized anxiety test were specifically tailored to yips, since it is the first-line assessment for any yips-affected athlete.
In our formulation, we focused on identifying features of dystonia via movement analysis, though other crucial variables may also be at play. Using demographic variables such as golfing experience, duration of yips symptoms and practice rounds per year, along with results from anxiety scores, motion capture and synergy analysis, we categorized the participants into 2 types using an unsupervised cluster analysis algorithm (Fig. 5 and Supplementary Table 3). The basis for this classification comes from a frequently documented "continuum" model distinguishing Type-1 (dystonia) and Type-2 (choking) yips (Smith et al., 2003). Non-movement-associated variables could help classify the golfers better, although our focus rested mainly on motion capture and muscle synergies to identify the problem. Future studies will need a systematic evaluation of these effects.
Conclusions
Diagnosis of yips is fraught with difficulty, mainly due to limited research, scant literature and incongruity within the target populations. Still, we were able to highlight abnormal kinematics and synergy patterns that influence motor behavior among golfers irrespective of their subjective feeling of yips. Future work will need to address the link between spinal and central causes of yips, their mechanisms, and how interventions could rehabilitate these golfers using behavioral therapy, swing dynamics, or by "normalizing" the faulty synergies, leading to an improvement in their performance.
Fig. 1
Fig. 1 legend: (A) Graphical representation of the wireless sEMG electrodes used, along with the sensors on the putter club for the motion-capture system. (B) The club coordinate system consisted of three orthogonal unit vectors for the X, Y and Z axes, calculated using acryl plates attached to the shaft and putter head. (C) (top) Snapshot of the putting swing for an individual participant; time "0" = time of ball impact. (bottom) The club coordinate system.
Fig. 5
Fig. 5 legend: (A) Classification of yips golfers shown using a two-dimensional scatterplot. The number of clusters was set to k = 2 to cluster the input data into Type-1 (dystonia) and Type-2 (choking), as suggested by Smith et al. The best total sum of distances was 5 for these two clusters.
Table 1 legend: TAIS score - Trait Anxiety Inventory in Sports score; SCA test - Sports Competition Anxiety test.
Comparison of phylogenetic trees through alignment of embedded evolutionary distances
Background: The understanding of evolutionary relationships is a fundamental aspect of modern biology, with the phylogenetic tree being a primary tool for describing these associations. However, comparison of trees for the purpose of assessing similarity and the quantification of various biological processes remains a significant challenge. Results: We describe a novel approach for the comparison of phylogenetic distance information based on the alignment of representative high-dimensional embeddings (xCEED: Comparison of Embedded Evolutionary Distances). The xCEED methodology, which utilizes multidimensional scaling and Procrustes-related superimposition approaches, provides the ability to measure the global similarity between trees as well as incongruities between them. We demonstrate the application of this approach to the prediction of coevolving protein interactions and demonstrate its improved performance over the mirrortree, tol-mirrortree, phylogenetic vector projection, and partial correlation approaches. Furthermore, we show its applicability to both the detection of horizontal gene transfer events and the prediction of interaction specificity between a pair of multigene families. Conclusions: These approaches provide additional tools for the study of phylogenetic trees and associated evolutionary processes. Source code is available at http://gomezlab.bme.unc.edu/tools.
Background
Understanding historical relationships between genes, proteins and species is a core aspect of evolutionary biology, with the phylogenetic tree playing a fundamental role in analysis and visualization. However, major challenges still exist in the representation and analysis of the information encoded within phylogenetic trees. For instance, inferring the "true" tree is fundamentally a difficult problem, leading to continuous refinement of reconstruction methods [1]. Similarly, methodologies for tree comparison are also undergoing significant development [2]. In this instance, the typical goal is to compare trees in order to determine their degree of similarity, providing one mechanism to test a variety of hypotheses regarding evolutionary associations. For example, comparison of gene trees with organismal trees allows the detection of non-standard events such as horizontal gene transfer [3,4]. Comparison of species trees can be used to give a picture of host-parasite symbiosis as is seen, for example, in the case of attine ants, their fungal cultivars, and the Escovopsis parasite [5]. Another example is the prediction of protein-protein interactions, as it has been shown that interacting proteins often appear to coevolve with one another [6][7][8]. Such instances of coevolution are largely based on the premise that in order to maintain their interaction (and thus their broader functionality), changes in one gene/protein will be coordinated with changes in the other, and this process of coevolution or correlated evolution can be observed through the similarity of their phylogenetic trees [9,10].
While there are a variety of methods available for the comparison of trees, two general categories of approaches are clearly distinguishable. The first class of approaches focuses on comparing trees through topological features, for example quantifying the number of shared/non-shared substructures (e.g. subtrees of four leaf nodes) between a pair of trees [11,12] or finding the minimum number of operations (e.g. nearest neighbor interchange) to transform one tree into another [13][14][15]. The second class of approaches compares the distance or path length information directly. Specifically, in these approaches assessing the similarity between two trees is reduced to a problem of finding the degree of correlation (most commonly the Pearson correlation) between the elements within the respective distance matrices. The "mirrortree" method is based on such an approach and was developed for the prediction of protein-protein interactions [16]. Continued work in this area has led to multiple modifications of the basic mirrortree approach including the use of patristic distances obtained from the corresponding neighbor-joining tree instead of the observed inter-protein distances [17], the correction of patristic distance matrices for their inherent similarity due to background "tree of life" evolution [17][18][19], and the incorporation of ancestor node information into the distance matrices [20].
While methods based on distance matrix similarities have proven to be of particular value, several substantial disadvantages exist. For instance, these methods assume that each value in a distance matrix is independent of the other distance values. This is generally not the case as, if a distance (path length) between two leaf nodes changes, lengths of all other paths involving the modified edge(s) also change. Therefore, any method in which the distance matrices are directly manipulated without considering this dependency may bias the reported correlations. It is also difficult to extend these existing approaches, for example, to incorporate robust estimation into the identification of outlying lineages between compared trees. Furthermore, by definition, it is not possible to handle trees of different size or to align multiple trees simultaneously. Finally, prior knowledge cannot be readily incorporated so as to help guide comparisons.
Here, we report a novel method for the comparison of evolutionary distance matrices (and hence trees) based on the superimposition of Euclidean embeddings that best realize the given distance relationships. Specifically, we start from a set of aligned sequences and generate distance matrices based on either distance information calculated directly from the alignment, or distances derived from a corresponding neighbor-joining tree. From these distance matrices we then map each sequence to a Euclidean space via metric multidimensional scaling (MDS). This operation produces a multidimensional structure or point pattern, where each point represents a taxon, and the distance relationships between all points are maintained from the original distance matrix. For the purpose of comparing two trees, the same operation is applied to the second distance matrix, generating the second Euclidean embedding. Finally, we superimpose one embedded point pattern onto the other, with the degree of fit determined by the least squares sum of deviations between corresponding point pairs or by some other measure as described below.
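A minimal sketch of this embed-and-superimpose idea, assuming the taxa of the two matrices are already in correspondence; the distance matrices here are synthetic and the embedding dimensionality is an arbitrary choice:

```python
# Embed two distance matrices with metric MDS, then align with Procrustes.
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial import procrustes

def embed(dist, dim=3):
    mds = MDS(n_components=dim, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(dist)

n = 20                                       # taxa, assumed in correspondence
d1 = np.abs(np.random.randn(n, n)); d1 = (d1 + d1.T) / 2; np.fill_diagonal(d1, 0)
d2 = d1 + 0.05 * np.abs(np.random.randn(n, n)); d2 = (d2 + d2.T) / 2; np.fill_diagonal(d2, 0)

x1, x2 = embed(d1), embed(d2)
m1, m2, disparity = procrustes(x1, x2)       # lower disparity = more similar trees
residuals = np.linalg.norm(m1 - m2, axis=1)  # per-taxon deviation after alignment
```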
In this paper, we refer to the general comparative approach of Euclidean embedding creation and alignment as "xCEED", the Comparison of Embedded Evolutionary Distances. However, this general approach actually contains three different superimposition methods, differing with regard to the question being asked or the data available (see Figure 1). Briefly, the first approach is an indirect superimposition of target structures (trees) guided by a low-noise reference structure, the 16S ribosomal RNA phylogeny. While similar to the tol-mirrortree and vector-projection methods [17,18], this approach, rCEED, provides a new way to remove the background correlation caused by tree-of-life evolution and thus helps provide an accurate measure of coevolution (see Figure 2). Like the tol-mirrortree and vector-projection methods, rCEED requires both a reference structure and correspondence information for proper alignment (e.g. protein A in tree 1 maps to protein B in tree 2). We describe the application of rCEED to the prediction of coevolving protein interactions and demonstrate its improved performance over the mirrortree, tol-mirrortree [16,17], phylogenetic vector projection [18], and partial correlation methods [19].
In cases where the identification of incongruent regions between trees is desired, robust structure alignment (vCEED) can be performed using "Verboonian" Procrustes [21], which penalizes less for the existence of outliers compared to rCEED. As a result, one can detect local regions of similarity even in the presence of outliers and/or identify outliers relative to a common shared structure. The identification of horizontal gene transfer (HGT) events is an area where outlier detection within a phylogenetic tree is needed, and we provide an example of the applicability of vCEED to this problem.
As with rCEED, we can also use vCEED to detect coevolving protein interactions, especially in cases where a reference structure is not available and/or the target structures (trees) contain outlying taxa, and we show its utility in this setting. We also compare the performance of vCEED with that of rCEED and other existing methods.
Finally, alignment without either a reference structure or mapping information can be performed with a Gaussian mixture model superimposition approach (gCEED). As a proof-of-concept for the potential broader utility of this approach, we describe its application to the prediction of protein interaction specificity between multigene families. As a whole, the xCEED methodology provides a novel approach to the tree comparison problem and the study of related evolutionary processes.
Prediction of protein interactions
We first applied both rCEED and vCEED to the prediction of protein interactions through the detection of a coevolutionary signal between orthologous protein families. While analogous to the approaches of [17,18], rCEED attempts to address some of their weaknesses. Specifically, in the tol-mirrortree approach, Pazos and colleagues subtracted the distance matrix of 16S rRNA from that of each protein, and then measured the correlation between these "difference of distance" matrices [17]. However, direct subtraction of rRNA distances from protein distances is problematic, as their evolutionary rates differ and it is not clear how to properly scale such differencing procedures. In phylogenetic vector projection, Sato and colleagues formed a vector from the lower triangular region of each distance matrix [18] and computed a difference vector between a gene vector and the same gene vector projected onto that of 16S rRNA. Again, the correlation between distance matrices is measured with these difference (normalized) vectors. While avoiding direct subtraction of amino acid and rRNA distances, this approach (as does the tol-mirrortree approach) still assumes that all pairwise distances are independent. Not accounting for non-independence between distances can potentially bias the evaluation of correlation between two distance matrices [22].
The rCEED approach addresses these issues by viewing the leaf nodes in an embedded structure as independent variables. To measure the degree of coevolution, we estimate how similar the deviations from the reference structure are for each embedded structure. Doing this makes it possible to remove the background tree-of-life correlation without directly subtracting rRNA distances from amino acid distances or assuming independence between distances. Specifically, we fit the reference structure onto the first embedded structure and then onto the second structure separately (see Figure 2). Afterwards, we superimpose these two reference structures onto each other while carrying along their associated structures, which are the actual targets of interest. After this superimposition we can remove the reference structures and then measure the degree of similarity between the remaining two target structures. As a single outlier can make the estimation of correlation coefficients unreliable [23], we also evaluated the use of vCEED in this application, as it is specifically tailored for dealing with outliers (see the following section as well as Methods for more details).
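The rotational bookkeeping of this reference-guided alignment can be sketched as follows; translation and scaling are simplified away (everything is centered, no scale step), so this illustrates only the chaining of the two reference fits, not the full method:

```python
# Chain two reference fits so both targets land in a common frame.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def center(x):
    return x - x.mean(axis=0)

# xr: reference embedding (e.g. 16S rRNA); x1, x2: protein-family embeddings
xr, x1, x2 = (center(np.random.randn(20, 3)) for _ in range(3))

r1, _ = orthogonal_procrustes(xr, x1)    # xr @ r1 approximates x1's frame
r2, _ = orthogonal_procrustes(xr, x2)    # xr @ r2 approximates x2's frame

x2_in_frame1 = x2 @ r2.T @ r1            # carry target 2 into target 1's frame
similarity = np.corrcoef(x1.ravel(), x2_in_frame1.ravel())[0, 1]
```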
We compared the predictions of rCEED and vCEED to those of the mirrortree, tol-mirrortree, phylogenetic vector projection, and partial correlation methods using the data of Pazos and colleagues [17]. These data consisted of 388 protein interactions (true positives) out of a total of 19,972 possible interactions between 188 E. coli proteins. Results are shown in Table 1.
Figure 1
The three different types of embedded structure alignment described in this work. (a) rCEED aligns two target structures indirectly using a reference structure. This alignment is based on classical Procrustes superimposition. (b) For the detection of outliers and/or common substructures, we use vCEED to perform a local alignment (rather than global in the case of rCEED). (c) If neither a reference structure nor correspondence information is available, we can align the structures using gCEED which adapts a Gaussian mixture model approach for the accurate superimposition.
We benchmarked the performance of all methods by computing the area under the receiver operating characteristic curve (AUC) and estimated significance using the method of DeLong et al. [24]. We also provide the area under the precision-recall curve, with the full precision-recall curves provided in additional file 1. As shown in Table 1, the AUC for the precision-recall curve was greatest for vCEED with a value of 0.091, followed by rCEED using either patristic (0.083) or observed (0.069) distances. The worst performer was the mirrortree method, with a PR-AUC of 0.048. Similar trends are observed for the ROC score, with rCEED scoring 0.763, and mirrortree and tol-mirrortree scoring 0.687 and 0.722 respectively. The phylogenetic vector projection and partial correlation approaches had ROC scores of 0.704 and 0.687 respectively. In all cases, the difference in AUC between rCEED and the other methods was statistically significant (p-values ≈ 10^-6). We also found that the ROC AUC of vCEED was 0.763, nearly that of rCEED using patristic distances.
Figure 2
Schematic overview of the rCEED approach. (a) Genetic distances obtained from sequence alignment, or patristic distances obtained from a phylogenetic tree, are mapped into Euclidean space by multidimensional scaling. Orthologous protein families X1 and X2, along with two identical reference structures (16S rRNA orthologs), Xr, are embedded in a Euclidean space. (b) Next, each reference structure is superimposed onto its respective protein family. (c) All four structures are now superimposed based on the estimated transformations between each set of references. Since both reference structures were orthogonally transformed in (b), they will match exactly at this step. (d) The final superimposition result after removal of the reference structures.

Table 1 footnotes: (1) Area under precision-recall curve. (2) Area under receiver operating characteristic curve. (3) Significance was computed using rCEED (observed distances) as reference according to [24]. (4) Based on observed distances. (5) Based on patristic distances after reconstruction of neighbor-joining trees. (6) August 2009 version of DIP.
Detection of horizontal gene transfer
With the basic xCEED approach, we are able to estimate how well two trees match in a global sense through a least squares model. Specifically, if there exists an incongruent region between two trees, the least squares approach will tend to smooth away large local errors by allowing greater errors in other, otherwise well-aligning regions. However, in some cases we would prefer to maintain the best alignment of a substructure and/or be able to identify outliers that are not consistent with a comparison structure. To address this need, we adapted a robust Procrustes method previously proposed by Verboon and Heiser [21], with the difference between this and globally optimal superimposition diagrammed in Figure 3.
In Figure 3(a) it can be seen that errors are distributed across all pairs, as would occur with the basic xCEED method using least squares (e.g. rCEED with a reference structure). However, in this example there is a substructure that is in fact identical between the two trees but is lost as a result of the spreading of errors throughout the alignment. In contrast, Figure 3(b) shows the case where we have used Verboonian robust Procrustes (vCEED) for the alignment. In this case we have found and aligned the identical substructures, allowing identification of both this region of high similarity and the outliers which deviate significantly between the two distance matrices.
This ability to detect local similarity and/or outliers is of particular utility in the identification of horizontal gene transfer (HGT) events. In HGT, a gene or group of genes is transferred laterally from another species, rather than inherited vertically from the parent(s). There are a variety of approaches to predict the occurrence of HGT based on, for example, codon usage, patterns of sequence homology, and patterns of gene distribution [25,26]. However, the most robust method for detecting HGT is through the comparison of phylogenetic trees of different genes. When a species accepts a gene laterally from another species, the location of the recipient species in the phylogenetic tree will be unusually close to the location of the donor species, which can be detected through manual analysis of the tree. Using vCEED, we can detect possible HGT by comparing a tree that potentially harbors one or more HGT events with a reference tree that does not, and then identifying the associated outliers as likely HGT candidates.
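A hedged sketch of such a robust alignment follows: an ordinary Procrustes fit is iteratively reweighted so that large residuals, the candidate HGT lineages, are down-weighted. The Huber-style weight update and the fixed iteration count are our simplifications of the Verboon and Heiser procedure, not the authors' exact implementation:

```python
# Iteratively reweighted Procrustes; residuals above c flag outlying taxa.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def robust_align(x, y, c=0.01, iters=20):
    w = np.ones(len(x))
    for _ in range(iters):
        xm = x - np.average(x, axis=0, weights=w)
        ym = y - np.average(y, axis=0, weights=w)
        r, _ = orthogonal_procrustes(xm * w[:, None], ym)   # weighted rotation fit
        resid = np.linalg.norm(xm @ r - ym, axis=1)
        w = np.where(resid <= c, 1.0, c / np.maximum(resid, 1e-12))  # Huber-type
    return resid, w

# Example: taxa whose residual exceeds c are candidate HGT lineages
x = np.random.randn(41, 3)                 # e.g. 16S rRNA embedding
y = x.copy(); y[:4] += 0.5                 # a few perturbed (outlying) taxa
resid, w = robust_align(x, y)
outliers = np.where(resid > 0.01)[0]
```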
As a proof-of-concept, we applied vCEED to the case of the RuvB (COG2255) gene family described in [27]. In E. coli, the RuvA and RuvB proteins catalyze branch migration of Holliday junctions during genetic recombination and form an operon conserved in the majority of sequenced bacterial genomes. In contrast to the RuvA family, the RuvB gene is believed to have undergone HGT [27]. We compared the tree (as an MDS-constructed embedding) of RuvB orthologous proteins collected from 41 bacterial species (see Methods) to that of 16S rRNA, with errors in the superimposition plotted in Figure 4. In this example, we expect that lineages that underwent HGT will show up as outliers in the superimposition of the reference structure (16S rRNA) onto that of RuvB. As can be observed, genes with errors larger than the threshold of 0.01 for c (Equation (6)) were flagged as outliers; these four were the same species identified by Omelchenko and colleagues as being related to the HGT of the RuvB gene. In addition, vCEED was also able to identify sll0613, a Cyanobacterial gene from Synechocystis which, as can be observed in the phylogenetic tree of RuvB, is closer to the Firmicutes rather than the Proteobacteria or Actinobacteria, in contrast to RuvA. We also tested our approach with the more complicated case of the UppS gene family (COG0020) which, as also described in [27], is believed to harbor multiple HGT events. Figure 5 shows the outlying genes according to vCEED, using 16S rRNA as the reference and the same threshold value of 0.01 for c as in the previous example. As can be observed, APE1385 from A. pernix, an archaeal gene, has the greatest divergence in the comparison to the 16S rRNA tree. We also see in the phylogenetic tree that it has an atypical affinity to bacterial genes from C. jejuni (Cj0824) and B. burgdorferi (BB0120), both of which are also identified as weak outliers with errors just above threshold. Both Cj0824 and BB0120 would generally be expected to appear in the tree under their proper phyla, Proteobacteria (orange) and Spirochaetes (light green), respectively. Further examination of the identified outlier genes within the phylogenetic tree shows a bacterial branch (green) of D. radiodurans (DR2447), C. glutamicum (Cgl0966), M. tuberculosis H37Rv (Rv1086) and M. leprae (ML2467) embedded within an archaeal phylum, the Euryarchaeota. We also see in the archaeal branch that a Crenarchaeota gene, SSO0163, stands out in its grouping with genes from the Euryarchaeota phylum.
The Rickettsiales (blue) identified by Omelchenko and colleagues were also included in our outlier list, although they were not the most deviating. Note that being an outlier does not certify that the gene was horizontally transferred; other mechanisms for this deviation can also occur, including large differences in evolutionary rate or poor quality of the sequence alignment. Therefore, while this approach can potentially aid in the automatic prediction of potential HGT events, manual inspection of the phylogenetic tree may still be required. For example, the Firmicutes genes L183602 and SA1103, while being slight outliers, are in a monophyletic subtree of Firmicutes (purple) and can thus be excluded from further consideration.

Figure 4 legend: HGT detection via vCEED for RuvB. The phylogenetic tree of the RuvB (COG2255) family is shown on the left (redrawn from [27]). Shown on the right are the vCEED alignment errors between COG2255 and 16S rRNA. The vertical line at 0.01 is the threshold c used in this analysis (see Equation (6)).
Interaction specificity between multigene families
As demonstrated earlier, we can use either rCEED or vCEED to compare trees so as to predict the potential interaction of a pair of protein families. Again, these approaches require the use of mapping information to link the leaves of the two trees. There are applications, however, where one would like to compare trees that lack mapping information, or where the recovery of mapping information is the primary goal. An important example of this type is in trying to determine likely interaction specificity between a pair of protein families.

Figure 5 legend: HGT detection via vCEED for UppS. The phylogenetic tree of the UppS (COG0020) family is shown on the left (redrawn from [27]). In addition to RP425 and RC0590, which were previously identified, an archaeal gene, APE1385, is clustered within a group of bacterial genes. Also observable is a bacterial branch consisting of DR2447, Cgl0966, Rv1086 and ML2467, with abnormal affinity to archaeal species. Both examples appear as outliers with vCEED (right) and indicate possible horizontal gene transfer. See Results for further details.
Two primary methods for specificity prediction, MATRIX [28] and MORPH [29], currently exist and, like all methods, have their own inherent strengths and weaknesses. With MATRIX, a significant weakness is that the tree structure is completely ignored throughout the specificity search. MATRIX also requires multiple simulated annealing runs (≥ 100 runs with trees of 15 leaves or more) to determine which pairings are most frequent. Perhaps most important, both MATRIX and MORPH assume a one-to-one correspondence between members of the two protein families; i.e. protein A from family 1 interacts solely with protein B from family 2. It is thus not possible to generalize to the more realistic situation where we are looking at specificities between protein families of different size. In addition, this assumption precludes the possibility of many-to-many or multiple interaction partners for a given protein.
Here we adapt a registration algorithm based upon Gaussian mixture models to our basic embedding and alignment approach [31]. In this case, we regard each vertex in the embedded structure (i.e., each leaf in the phylogenetic tree) as the mean of a Gaussian component, such that the entire embedding is represented as a mixture model (see Methods). The central idea is that if we have two structures that are highly similar, then as we align one structure closer to the other, their corresponding mixture models become accordingly similar. By minimizing the divergence between the two mixture models, we can eventually find the best superimposition. We refer to this method of alignment as Gaussian CEED, or gCEED for short. Using gCEED, we attempted to determine the specificity information between the protein families provided in Ramani et al. [28].
The first example is the case of the interacting protein families GyrA and GyrB. Each protein family is known to have a single paralog, ParC and ParE respectively, and these paralogs are also known to interact. Figure 6(a) shows the trees and interaction specificity (a leaf on one tree interacts with the corresponding leaf on the other tree) between these two multigene families. Results of the initial superimposition are shown in Figure 6(b), Step 1. The probability matrix is shown after converting probabilities to grayscale values, such that darker elements at [i, j] denote a higher probability of correspondence between the i-th protein of family 1 and the j-th protein of family 2. Proteins are arranged such that correct individual binding partners lie along the diagonal. In this first step we see that the initial alignment appears to have found the correct broader interaction specificity of GyrA with GyrB (region "a" in the upper left of the matrix) and ParC with ParE (region "b" in the lower right), as observed by the segmentation of the probability matrix into two distinct regions. For ParC/ParE, correct correspondence for three individual interactions was also found in the initial alignment (CC_1566 ⇔ CC_1974, NMA1802 ⇔ NMA1941, and RSc0978 ⇔ RSc0976). Both regions a and b, being indeterminate, are separately superimposed in an iterative manner, with results after each superimposition shown in the submatrices of Figure 6(b).
The final result after complete alignment is shown in Figure 6(c). Here we can see that gCEED successfully predicted the interaction specificity for 12 out of 20 individual interactions. The other 8 misassigned pairs were degenerate cases whose interaction specificity could not be further resolved due to a lack of structural information. The reason for this can in part be observed within Figure 6(a), where four proteins from each family (marked with arrows) can be seen to be very close to each other (short branch lengths from their common ancestor). In such instances it is difficult for the algorithm to find a correct high-probability mapping, as multiple alignments are equally viable. Nevertheless, the interaction specificity at the protein-family level was correctly predicted. In addition, over half of the specific interactions could be recovered solely from the alignment of these structures.
We performed the same specificity analysis using gCEED on a total of 34 protein family pairs used in previous studies and compared results to those of MATRIX and MORPH in terms of stringent accuracy (Table 2). As can be observed, there is no significantly superior approach (Wilcoxon's signed rank test; data not shown), as all methods show instances where they have the greatest accuracy of specificity prediction. However, we emphasize the extra functionality of gCEED suited to realistic situations where (1) the sizes of the protein families at hand are unlikely to be identical, and/or (2) there exists some a priori knowledge of validated protein interactions.
As a demonstration of this functionality within gCEED, we again used the case of GyrA and GyrB interactions. We first made the GyrA tree progressively smaller by sampling from nineteen down to ten sequences from the total of twenty GyrA orthologs, with 100 different combinations for each size. We then performed specificity prediction by aligning each sampled GyrA tree with the complete 20-node GyrB tree. To evaluate our performance, we introduce the vicinity hit rate as a means to estimate how close each node's true interacting partner is in relation to others within the aligned structures. Specifically, we define the vicinity hit rate as the ratio of nodes that have their true interacting partner within the top three highest predicted probability partners. Thus the vicinity hit rate allows for situations where the true interacting partner is very close to (but not the closest of) the predicted interaction partners as determined through the alignment. Results of this analysis are shown in Figure 7(a). Again, each histogram along the x-axis was generated from 100 samples of the GyrA tree of corresponding size, and the dark line shows how the average hit rate changes as the size of this tree decreases. In this instance, the ability of gCEED to determine binding specificities with a vicinity hit rate of approximately 65% (the hit rate generated in the original 20 vs. 20 superimposition) is relatively well maintained out to approximately 15 leaves, or a 25% difference in tree sizes. As the difference between tree sizes increases, we also begin to observe greater numbers of very poor predictions along with lesser numbers of very good predictions. The poor predictions arise in situations where the smaller tree fits very well, but in the wrong position within the larger tree, resulting in a very poor vicinity hit rate (shaded box in Figure 7(a)). The situation is analogous, but far less common, for the high vicinity hit rate predictions (e.g., above 80%).
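The vicinity hit rate is straightforward to compute from the correspondence-probability matrix produced by the alignment. A minimal NumPy sketch, with illustrative names and a toy matrix:

```python
import numpy as np

def vicinity_hit_rate(prob, true_partner, k=3):
    """Fraction of family-1 proteins whose true partner (a column index into
    family 2) is among their k highest-probability predicted partners.

    prob: (m, n) matrix of correspondence probabilities from the alignment.
    true_partner: length-m sequence, true_partner[i] = index of i's partner.
    """
    prob = np.asarray(prob)
    hits = 0
    for i, j_true in enumerate(true_partner):
        top_k = np.argsort(prob[i])[::-1][:k]  # k most probable partners of i
        hits += j_true in top_k
    return hits / len(true_partner)

# Toy 4x4 probability matrix with true partners on the diagonal.
P = np.array([[0.6, 0.2, 0.1, 0.1],
              [0.1, 0.3, 0.4, 0.2],
              [0.2, 0.2, 0.3, 0.3],
              [0.1, 0.1, 0.1, 0.7]])
print(vicinity_hit_rate(P, [0, 1, 2, 3]))  # 1.0: every true partner is in the top 3
```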
We would expect that additional information in the form of prior knowledge of an existing protein interaction pair would help to improve predictive performance. Such knowledge can be readily introduced into the gCEED alignment scheme, and results of knowing just a single pair a priori are shown in Figure 7(b). Here we picked a random, but correct, pair of interacting proteins between the two trees to serve as the a priori known information. As these proteins interact, we assume that they must be near each other in the final superimposition. We thus impose a constraint in the optimization of Equation (12), where the two proteins are kept within a pre-specified distance range (0.05 in this work).
Results show that the use of prior knowledge provides a significant improvement in the stability of the vicinity hit rate, with a mean hit rate of approximately 60% even when the tree size is reduced to nearly half of its original value. In addition, using the structural information provided by the known interaction pair, we were able to avoid degenerate cases (shaded box in Figure 7(b)). In the comparisons between trees with the greatest difference in size, the average vicinity hit rate of ten-node sample trees was 32.0% without prior knowledge versus 53.2% when using a single known protein pair. Together, these results suggest the potential for using gCEED in realistic situations where differences in tree sizes exist and/or prior information is available.
Conclusions
In this work, we have described a novel approach for the comparison of phylogenetic trees, represented as embedded structures, and shown several examples of its application. First, when applied to the prediction of protein interactions, we see an improvement in prediction accuracy using the rCEED/vCEED approach when compared to other available approaches. We note that high similarity between two embedded structures does not require a physical interaction between members, but is only suggestive of the possibility. Similarly, the physical interaction between two proteins does not necessitate coevolution. Thus coevolutionary approaches such as those presented here can only identify a portion of the complete interactome within a given species. For the enhanced prediction of protein interactions, approaches such as rCEED/vCEED may show their greatest efficacy when combined with other computational approaches (e.g. [32-34]).
With vCEED, we were also able to perform a local alignment between structures, providing the opportunity to detect outliers that often indicate unusual evolutionary events, including the horizontal gene transfer described here. While phylogenetic methods that detect incongruity between trees are generally considered the gold standard for HGT detection, these methods are not readily automatable and require extensive manual analysis. Our results suggest that vCEED has significant potential in aiding such identifications.
By using the information inherent in the representation of a tree as an embedded structure, we were able to demonstrate the ability to align and measure the similarity between trees even when correspondence information is not available or when their sizes differ. While our example is a basic one, the need to establish interaction specificity between interacting protein families motivates the development of new approaches, and in this regard gCEED shows significant promise.
While the embedding and superimposition of taxa within a Euclidean space in no way supersedes the use of a phylogenetic tree, it does provide several useful capabilities. For instance, embedding generates a deterministic structure that bypasses ambiguities associated with direct tree comparisons by transforming a specific distance matrix into a single specific shape, enabling consistent comparison between trees. Similarly, use of a representative embedding also makes it possible to take into account the entire point-pattern structure all at once when determining correlation, rather than examining pair-by-pair correlation as in mirrortree and related approaches. Finally, the representation of trees as embedded structures provides the capability to compare trees of different size, a capability that correlation-based methods inherently lack. In this case, it becomes a matter of comparing two structures using procedures based on registration approaches such as the gCEED approach proposed in this work. As a whole, the xCEED approach provides an additional set of tools for the study of phylogenetic trees and associated evolutionary processes.
Figure 7. Comparison of trees of different size. The large tree is a 20-node GyrB tree. The smaller is a GyrA tree, formed from random sampling of nodes, with sizes ranging from nineteen to ten nodes (x-axis). For each size of the smaller tree, a histogram of the vicinity hit rate is shown on the y-axis, based on 100 randomly formed trees of the given size. The dark line shows the average hit rate. (a) Accuracy of comparison without using any known interaction information. (b) Accuracy of comparison when using a single correct protein interaction pair as prior information.
Data
For the prediction of protein interactions, we tested our method using data identical to that used by Pazos and colleagues [17]. This data set consists of experimentally characterized interactions among Escherichia coli proteins deposited in the February 2004 version of the DIP database [35]. For each protein in the interaction data, orthologs from 43 other prokaryotic species were collected to form each protein family. Among all the possible pairs of protein families, those with fewer than ten common matching species (or taxa) were removed, leaving 19,972 suitable test protein interaction pairs (118 different proteins in total). From this complete set of protein interaction data, there were 115 experimentally characterized true-positive interaction pairs. We updated this set of interactions by checking all 19,972 test interactions against the July 2007 version of DIP, and found that 388 of them were experimentally validated (an increase of 273 true-positive interactions over the 2004 version of DIP). We used this updated data set when measuring the discrimination power of our method. Along with this set of true interactions, a set of negative interactions was formed from the complement of this data, i.e., protein pairs not experimentally shown to interact. A total of 19,584 negative interactions were formed in this way. For specificity prediction we used the data from [28].
Each protein family was aligned with clustalw [36], and distance matrices were calculated with the protdist routine from phylip [37]. These distance matrices are different from those used in [17] in that our data are created directly from the sequence alignments rather than from neighbor-joined trees. However, for comparison we also performed the same test with those used in [17]. The sequences and distance matrices of 16S rRNA were downloaded from the Ribosomal Database Project II [38].
The basic xCEED approach: Classical MDS and superimposition with Procrustes
The approach we have developed is based upon extensions to the methods of multidimensional scaling (MDS) and Procrustes analysis, and we discuss these two fundamental approaches now. First, classical MDS attempts to find a Euclidean embedding of the data while preserving their interpoint distances [39]. Given a distance matrix D = [d_ij], we first compute the contrast matrix M = C D̄ C, where C is the centering matrix I − (1/n) 1'1 (1 is a row vector of ones and n is the number of nodes) and D̄ = [−(1/2) d_ij²]. After performing an eigenvalue decomposition on M, which gives M = QΛQ', we obtain X = QΛ^(1/2), the coordinates of the points embedded in a, potentially high-dimensional, Euclidean space. Note that we truncate the negative eigenvalues in Λ, since D is a Euclidean matrix if and only if M is positive semi-definite; this also defines the maximum dimensionality. Distances between points in this new structure representation are those provided by the original distance matrix for the tree.

Procrustes analysis then superimposes one embedded structure Z onto another W by finding the scale s, rotation R, and translation t that minimize the residual between W and sZR + 1t'. The optimal rotation is R = UV', where U and V are the left and right singular matrices coming from the singular value decomposition of Z'CW (= UΣV'), and Σ is the matrix of singular values.
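A compact NumPy sketch of both building blocks, assuming the conventions above (rows as coordinate vectors, fits of the form sZR + 1t'); the function names are ours, not the authors':

```python
import numpy as np

def classical_mds(D):
    """Embed a (tree-derived) distance matrix in Euclidean space.

    Implements the double-centering described above: M = C (-0.5 * D**2) C,
    then X = Q Lambda^{1/2}, truncating negative eigenvalues.
    """
    n = D.shape[0]
    C = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    M = C @ (-0.5 * D**2) @ C
    evals, Q = np.linalg.eigh(M)
    keep = evals > 1e-10                         # drop negative/zero eigenvalues
    return Q[:, keep] * np.sqrt(evals[keep])

def procrustes(Z, W):
    """Least-squares superimposition of Z onto W (same shape, matched rows).

    Returns scale s, rotation R, translation t with fitted Z ~= s Z R + 1 t'.
    """
    zc, wc = Z.mean(0), W.mean(0)
    Zc, Wc = Z - zc, W - wc
    U, S, Vt = np.linalg.svd(Zc.T @ Wc)          # SVD of Z'CW as in the text
    R = U @ Vt
    s = S.sum() / np.trace(Zc.T @ Zc)
    t = wc - s * zc @ R
    return s, R, t
```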
Reference-based comparison of embedded evolutionary distances (rCEED): application to the quantification of protein coevolution

We first collect two sets of orthologous sequences from two potentially interacting protein families, designated F_1 and F_2 respectively. In addition, we assemble F_r, a set of orthologous 16S rRNA sequences. Distance matrices D_1, D_2, and D_r are then derived with respect to the species common to F_1, F_2, and F_r. The coordinates X_1, X_2, and X_r, where each row represents the coordinate vector of a species embedded in Euclidean space, are produced from D_1, D_2, and D_r by MDS. In cases where the dimensionalities of the coordinate matrices differ, we zero-fill until the sizes of X_1, X_2, and X_r are all minimally equivalent. We then find a robust superimposition between X_1 and X_2 by first superimposing X_r onto both X_1 and X_2 independently,

X̂_1 = ŝ_1 X_r R̂_1 + 1 t̂_1',   X̂_2 = ŝ_2 X_r R̂_2 + 1 t̂_2',   (2)

such that tr((X_1 − X̂_1)(X_1 − X̂_1)') and tr((X_2 − X̂_2)(X_2 − X̂_2)') are minimized. Here X̂_i denotes the reference structure X_r fitted to X_i. We then compute transformation parameters ŝ_r, R̂_r, and t̂_r by superimposing X̂_2 onto X̂_1,

X̂_1 = ŝ_r X̂_2 R̂_r + 1 t̂_r'.   (3)

Since both X̂_1 and X̂_2 represent different orthogonal transformations of the same reference structure X_r, this superimposition is an exact match. The final superimposition of X_2 onto X_1 is computed by simply applying to X_2 the same parameters ŝ_r, R̂_r, and t̂_r obtained in (3),

X̃_1 = ŝ_r X_2 R̂_r + 1 t̂_r',   (4)

where X̃_1 denotes X_2 indirectly fitted onto X_1. A schematic of our rCEED approach is given in Figure 2. Notice that we obtain a robust analytical solution for the superimposition parameters by always putting the reference structure (X_r in (2) and X̂_2 in (3)) on the right-hand side of the fitting equations. The standard root-mean-square deviation (std. rmsd), as a measure of structure similarity, is then given by

std. rmsd = sqrt( tr((X_1 − X̃_1)(X_1 − X̃_1)') / tr((X_r − 1 x̄_r')(X_r − 1 x̄_r')') ),   (5)

where x̄_r is the centroid of the reference structure. Because the number of common species differs from one pair of protein families to another, their distributions in the space will have different variances. As a result, they are all normalized in (5), so that we can compare the strength of the coevolutionary signal among differently sized pairs of protein families.
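Chaining the two routines from the previous sketch gives a minimal rCEED pipeline; this is an illustrative reading of Equations (2)-(5) under our own naming, not the authors' code:

```python
import numpy as np

def rceed_rmsd(D1, D2, Dr):
    """Normalized rmsd between two family embeddings after reference-guided fit.

    Reuses classical_mds() and procrustes() from the previous sketch;
    all three distance matrices must share the same species ordering.
    """
    X1, X2, Xr = (classical_mds(D) for D in (D1, D2, Dr))
    d = max(X.shape[1] for X in (X1, X2, Xr))
    X1, X2, Xr = (np.pad(X, ((0, 0), (0, d - X.shape[1]))) for X in (X1, X2, Xr))

    # Fit the reference onto each family structure (Equation (2)).
    fits = []
    for X in (X1, X2):
        s, R, t = procrustes(Xr, X)
        fits.append(s * Xr @ R + t)
    Xr1, Xr2 = fits

    # Parameters taking reference-fit-2 onto reference-fit-1 (Equation (3)) ...
    s, R, t = procrustes(Xr2, Xr1)
    X2_on_X1 = s * X2 @ R + t                    # ... applied to X2 (Equation (4))

    # std. rmsd normalized by the spread of the reference (Equation (5)).
    num = np.sum((X1 - X2_on_X1) ** 2)
    den = np.sum((Xr - Xr.mean(0)) ** 2)
    return np.sqrt(num / den)
```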
Verboonian robust superimposition (vCEED): application to the detection of horizontal gene transfer

Verboon [21] proposed a robust method (Verboonian Procrustes) that adopts an alternative objective function which puts less penalty on errors over some threshold boundary. The direct consequence of this approach is a better local alignment at the expense of allowing some outliers. Formally speaking, the transformation parameters are estimated by minimizing the loss function L(s, R, t) = Σ_i f(ε_i), where ε_i is the residual distance between two corresponding points and f(·) is a robust version of the error function. We adopted the Huber kernel [40] in this work, although other functions such as the Lorentzian kernel or the biweight function [41] are available. Following Verboon, we can minimize this loss function via a weighted least squares model,

(ŝ, R̂, t̂) = argmin_{s,R,t} Σ_i p_i ε_i²,   (6)

where the weights p_i are collected in the weight matrix P. Since both the transformation parameters (s, R, and t) and the weight matrix P are unknown, we estimate them using Expectation-Maximization, alternating between the computation of transformation parameters using a fixed weight matrix P and the updating of P based upon the current estimate of the transformation. Through this iterative process, the weight value in P gets smaller if an error term is larger than the pre-specified threshold, c. In the work described here, we used an empirically chosen value of 0.01 for c.
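A sketch of this robust superimposition as iteratively reweighted least squares with Huber-style weights; the alternation mirrors the EM scheme described above, though the paper's exact update rules may differ in detail:

```python
import numpy as np

def robust_procrustes(Z, W, c=0.01, iters=50):
    """Verboon-style robust superimposition via iteratively reweighted Procrustes.

    Huber weights: w_i = 1 if the residual eps_i <= c, else c / eps_i, so points
    with errors above the threshold c contribute less to the next fit.
    """
    n = Z.shape[0]
    w = np.ones(n)
    for _ in range(iters):
        # Weighted Procrustes step (the M-step analogue).
        zc = (w[:, None] * Z).sum(0) / w.sum()
        wc = (w[:, None] * W).sum(0) / w.sum()
        Zc, Wc = Z - zc, W - wc
        U, S, Vt = np.linalg.svd(Zc.T @ (w[:, None] * Wc))
        R = U @ Vt
        s = S.sum() / np.trace(Zc.T @ (w[:, None] * Zc))
        t = wc - s * zc @ R
        # Weight update (the E-step analogue): down-weight large residuals.
        eps = np.linalg.norm(W - (s * Z @ R + t), axis=1)
        w = np.where(eps <= c, 1.0, c / np.maximum(eps, 1e-12))
    return s, R, t, eps  # residuals with eps > c mark the outlier candidates
```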
Gaussian mixture superimposition (gCEED): application to interaction specificity prediction

Given two embedded structures W and Z, we regard each point as the mean of a Gaussian component, so that W is represented by the mixture model P_w(x) = Σ_i a_i N(x; w_i, Σ_i), and likewise for Z. The central idea is that as we transform one point set closer to the other, the corresponding mixture models become correspondingly closer. We translate (t), rotate, and project (R) the point set Z as before; its mixture model then takes the following form:

P_z^new(x) = Σ_j b_j N(x; R z_j + t, Σ_j).

Our goal then is to find the optimal R and t that minimize the dissimilarity between the two models P_w and P_z^new under the divergence D.
For the derivation of (12), see [42]. We assumed isotropy, so Σ_i = Σ_j = σ²I for all i and j. We further assumed that the weights of all Gaussian components are equal, such that a_i = 1/m and b_j = 1/n.
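To make the mixture comparison concrete, the sketch below evaluates one common choice of divergence for Gaussian-mixture registration, the L2 distance between the two equal-weight isotropic mixtures, which has a closed form. The paper specifies only a divergence D, so treating D as the L2 distance is our assumption:

```python
import numpy as np

def gmm_l2_divergence(W, Z, sigma2):
    """L2 distance between two equal-weight isotropic GMMs whose component
    means are the rows of W and Z (one Gaussian per embedded leaf).

    Uses the identity  integral N(x; a, s2 I) N(x; b, s2 I) dx
                       = (4*pi*s2)^(-d/2) * exp(-|a-b|^2 / (4*s2)),
    so the divergence reduces to sums of Gaussian kernels between means.
    """
    d = W.shape[1]

    def kernel_sum(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # pairwise |a-b|^2
        norm = (4 * np.pi * sigma2) ** (d / 2)
        return np.exp(-sq / (4 * sigma2)).sum() / norm

    m, n = len(W), len(Z)
    return (kernel_sum(W, W) / m**2
            - 2 * kernel_sum(W, Z) / (m * n)
            + kernel_sum(Z, Z) / n**2)
```

Minimizing this quantity over R and t (e.g., by gradient descent), optionally with a penalty keeping a known interacting pair within the 0.05 distance range mentioned earlier, yields the gCEED superimposition.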
Nosocomial infantile gastroenteritis associated with minirotavirus and calicivirus
A prospective study was carried out to determine the epidemiology and etiology of acute gastroenteritis on the general infant ward of The Montreal Children's Hospital in the late fall of 1976. Diarrhea occurred in 41 of 165 infants (25%), with probable nosocomial acquisition in 26 patients. Two infants each had two episodes of diarrhea, and one had three. A putative pathogen was found in 31 of 45 case episodes (69%). Virus-like particles were present in 28 of 45 patients, and in 24 of 74 asymptomatic room contacts. Particles belonging to six morphologic classes were identified: adenovirus, rotavirus, minirotavirus, calicivirus, picorna-parvovirus, and coronavirus. More than one agent was identified in 12 infants with diarrhea and in five asymptomatic room contacts. No wardwide etiologic pattern was evident, but minirotavirus or calicivirus or both were associated with diarrhea in 20 patients, accompanied by vomiting in 15 of these infants. Moreover, spread of individual agents was almost entirely limited to minirotavirus and calicivirus, with diarrhea in six of ten, and four of seven, virus positive room contacts, respectively. These viruses were also identified in stools from 12 infants without diarrhea, seven of whom had repeated vomiting. Data support the etiologic role of minirotavirus and calicivirus in diarrhea or vomiting or both in hospitalized infants.
laboratory culture; the standing of some of the small virus-like particles, both as viruses and pathogens, is unclear. Careful examination of some of these newer particles shows consistent morphologic characteristics, permitting a preliminary classification based on electronmicroscopic appearance. Thus far only two reports have made a morphologic distinction amongst the small viruses. We report an investigation of nosocomial infantile gastroenteritis carried out on the general infant ward of The Montreal Children's Hospital in the late fall of 1976. Our data support the validity of the morphologic classification of enteritis viruses proposed by others, and add information on the clinical features and epidemiology (transmissibility) of infantile gastroenteritis associated with minirotavirus and calicivirus, and their potential to cause clinical enteritis in close contacts.
MATERIALS AND METHODS
All ward patients were studied during the period October 8 to November 30, 1976. Accommodation comprised six three-crib rooms and one four-crib room, situated either side of a walk-through corridor. Diarrhea was defined as an increase in the usual daily stool frequency by two or more, and excessive water loss in stool. When symptoms began more than 24 hours after admission to hospital, the infections were considered to be nosocomial.

Stool specimens were obtained from resident infants on the first or second day of the study, and from newly admitted patients as soon as available after admission to the study ward. When possible, specimens were obtained from infants with diarrhea at daily intervals for the first three days of their illness, and at variable intervals for up to four days after diarrhea had ceased. Specimens were also obtained from asymptomatic room contacts of infants with diarrhea, usually on the first and fourth days following the onset of symptoms in the index case. Specimens were not obtained from medical or nursing staff.

Virus identification. Stool was suspended in phosphate-buffered saline (at 10% w/v) and stored at -20°C. Specimens were subsequently thawed, clarified by low-speed centrifugation (3,000 g) for 10 minutes, and 2 ml aliquots concentrated by ultracentrifugation (100,000 g) for 1 hour. The pellet was resuspended in a drop of 1% ammonium acetate solution, and examined using a Philips 201 or 300 electronmicroscope. Virus-like particles were sought in at least 5 grid squares, and classified as described by Flewett and by Middleton et al. Specimens containing adenovirus particles were inoculated into tissue cultures of HEp-2 and human embryonic lung cells. Cytopathogenic effect was confirmed by a second passage, and adenovirus isolates were typed by neutralization tests. Specimens from room contacts of adenovirus-positive infants with diarrhea were cultured in the same way.

Specimens containing picorna-parvovirus particles were inoculated onto primary Rhesus monkey kidney cells, and isolates identified by cytopathogenic effect on second passage and electronmicroscopic appearance. Particles were classified as polioviruses or non-polio enteroviruses by neutralization tests.

Bacterial identification. All specimens were examined for Salmonella and Shigella by conventional methods, and for enteropathogenic serotypes of Escherichia coli by slide agglutination using a battery of 16 commercially prepared antisera (Difco). The predominant E. coli colony type from an early specimen culture of each infant with diarrhea was sent to The Laboratory Centre for Disease Control, Ottawa, for complete serotyping. Incubation at 4°C was used for enhanced recovery of Yersinia enterocolitica. Lactose-fermenting isolates from an early specimen culture of each patient with diarrhea were tested for enterotoxin production, using the infant mouse assay and the Y1 adrenal cell assay methods. Control strains of E. coli were kindly provided by Dr. D.A. Sack. Assays were carried out in triplicate. The predominant lactose-fermenting colony type from each infant with diarrhea was tested for invasiveness by the guinea pig eye inoculation (Sereny) test. An inoculum of 1 drop of a very heavy suspension of the organism in Mueller-Hinton broth (at 5 × 10… cells per ml) was used.

RESULTS

One hundred and sixty-five infants aged 9 days to 24 months (median 4 months) were studied; 41 had diarrhea, comprising 8 of 16 infants resident at outset and 33 of 149 newly admitted infants. Two to eight new cases occurred per week; 26 cases were considered nosocomial. Two infants each had two periods of diarrhea, and one had three, for a total of 45 diarrhea episodes. Eighty-one room contacts of infants with diarrhea remained asymptomatic, and specimens from 74 were available for examination.

A putative stool pathogen was identified in 31 of 45 case episodes (69%), and in 31 of 74 (42%) asymptomatic room contacts (Table I: bacterial pathogens and virus-like particles identified in stool from infants with diarrhea and asymptomatic room contacts). Bacterial pathogens were isolated from seven infants, four of whom coincidentally carried a virus. Multiple pathogens were present in 12 index patients and five asymptomatic room contacts: concurrent infection with two agents occurred in 10 patients, and with three agents in two, without evidence of an agent:agent pattern.

Viruses. Six morphologic classes of virus-like particles were present in stools from 28 of 45 episodes of diarrhea (62%), and in 24 of 74 (32%) asymptomatic room contacts (Table I). These included adenovirus, rotavirus, minirotavirus (Fig. 1), calicivirus (Fig. 2), picorna-parvovirus, and coronavirus. Minirotavirus is the name we have chosen for a 32 nm particle which resembles particles identified in Toronto (minireovirus) and in Glasgow. It is distinguished from other small round viruses by its slightly larger size and its irregular margin, at times resembling a palisade of very small capsomeres. Caliciviruses tend to be smaller than minirotaviruses, and are distinguished by their scalloped or coarsely indented surface appearance. Both viruses have variations in surface detail, depending on their lie on the grid and their state of preservation. Picorna-parvoviruses are small dense particles, with an entire margin and no detectable surface structure. Ten of 26 nosocomial cases showed virus particle concordance with their presumed index source (Table II). Eight of 24 virus-positive asymptomatic room contacts showed similar concordance. With the exception of one asymptomatic acquisition of rotavirus, viral concordance between infants with diarrhea and their room contacts was limited to minirotavirus and calicivirus. Concordance between asymptomatic infants excreting virus and their respective room contacts was also limited to minirotavirus, which was found in three of 15 contacts, none of whom developed diarrhea. Correlation of symptoms with virus acquisition was variable: 10 of 17 infants with either minirotavirus or calicivirus developed diarrhea (Table II).
Twelve infants with diarrhea had no identifiable pathogen in the stool, of whom nine exposed 26 room contacts. Twenty-five contacts remained asymptomatic and had negative stool examinations. One had diarrhea associated with picorna-parvovirus.
Bacteria. Two serotypes of enteropathogenic E. coli were identified in specimens from four infants with diarrhea, three of whom coincidentally carried a virus. These and two other enteropathogenic serotypes were also identified as the only pathogen in seven of 74 asymptomatic contacts. There was no evidence of spread of these enteropathogenic serotypes in the infants studied, and complete serotyping of 55 isolates of E. coli from 35 infants with diarrhea also failed to show an epidemiologic pattern in this symptomatic population. None of the commonly recognized enterotoxigenic serotypes was identified. Also, none of these same 55 isolates of E. coli was invasive, as evidenced by negative Sereny tests.
Stools from one infant with diarrhea were consistently positive for a strain of E. coli (O?:K?:H4) producing heat-stable and heat-labile enterotoxins. The same specimens contained adenovirus (tissue culture negative). Isolates of E. coli obtained from parents and two siblings were enterotoxin negative, and none was found to carry E. coli (O?:K?:H4).
Salmonella typhimurium and Y. enterocolitica were the only pathogens identified in one index patient each. Neither case was nosocomial, and transmission to room contacts did not occur.
DISCUSSION
Most studies of nosocomial gastroenteritis have concerned epidemics of diarrhea in closed patient populations, often attributed to single agents. This study differs in the open nature of the population: the ward continued to accept new patients throughout, while attempting to discharge, transfer, or cohort those with diarrhea. No infant with a history of recent loose stools was accepted into the ward; about half of the new patients had acute respiratory disease. Respiratory isolation priorities led to frequent room changes and may, in part, explain the high incidence of nosocomial diarrhea and the acquisition of viruses by one third of the asymptomatic infants studied. It does not, however, explain the variety of virus particles identified. It seems likely that once the large and varied reservoir of pathogens had been established, the continued admission of new patients and their rapid turnover permitted survival of individual agents at a low level of endemicity.
Infants were not followed after discharge from hospital, and our data are probably a low estimate of the prevalence of nosocomial diarrhea and the communicability of viruses in the study population.
Almost half of the instances of diarrhea were associated with either minirotavirus or calicivirus in stool. Their claim as true viruses and potential pathogens is supported by the temporal association of fecal carriage and the presence of diarrhea, and by evidence of communicability from infant to infant. In one sequence, lateral transmission of calicivirus could be traced through a chain of seven infants. In another exceptional instance, an infant had intractable diarrhea for several weeks, the course of which was punctuated by two periods of vomiting associated with fecal shedding of minirotavirus and calicivirus, respectively. Two other infants had separate attacks of diarrhea and vomiting associated with minirotavirus and calicivirus.
Seven of 12 infants without diarrhea but with minirotavirus or calicivirus (Table I) had repeated vomiting coincident with the presence of virus in the stool, and two others had loose stools without an increase in daily frequency. Vomiting was as frequent an association as diarrhea in patients found to carry these viruses (Table III). Vomiting is also a prominent feature of rotavirus gastroenteritis, but low-grade fever was present in only three of 20 infants with diarrhea associated with minirotavirus and calicivirus. We have been unsuccessful in attempts to cultivate minirotavirus and calicivirus using human fetal intestinal organ culture, human embryonic kidney cells, monkey kidney cells, human embryonic lung, and HEp-2 cell lines. Adenovirus was a relatively common finding in stool examined by electronmicroscopy, but poor correlation with symptoms and its occurrence with other agents obscure its significance. Communicability was not demonstrated by either electronmicroscopy or tissue culture methods. Culture using HEp-2 cells was successful in six of eight specimens from infants with diarrhea, and has been sustained with five isolates identified as adenovirus types 2 (two isolates) and 7 (three isolates).
Rotavirus played a smaller role than expected, and transmission could only be presumed in one asymptomatic room contact. This may reflect the bias of small numbers, for there is no doubt that rotavirus can assert its presence in an open infant ward. Seasonal variations seem to affect identification rates of enteritis-associated viruses in a similar manner.
Picorna-parvovirus particles appeared to be a random finding in stool and were not shown to be communicable. Enteroviruses were identified in five of eight specimens examined by tissue culture, two of which were identified as polioviruses. The yield of bacterial pathogens in this study was meagre, and the data are largely negative. Classical enteropathogenic serotypes of E. coli occurred sporadically, and three of four affected infants with diarrhea carried a virus in the same specimen. Sequential specimens from one infant with diarrhea contained an enterotoxin-producing strain of E. coli, but the same specimens contained an adenovirus. Screening of five lactose-fermenting organisms from more than 400 infants with diarrhea seen at this hospital (1975-1976) has yielded only one other enterotoxin-producing strain of E. coli (O6:H16), in an infant who acquired diarrhea in Pakistan (unpublished data).
We found that multiple putative pathogens were operating concurrently and independently in the study population. Approximately half of the cases of diarrhea were associated with either minirotavirus or calicivirus, whose standing as etiologic agents is strengthened by the data presented. Community-based epidemiologic studies, serology, and virus culture will be necessary to substantiate the pathogenic role of these viruses.
Is ambulatory blood pressure measurement a new indicator for survival among advanced heart failure cases?
Background: Ambulatory blood pressure monitoring (ABPM) in heart failure is not well defined. However, from the limited studies available, ABPM may be used to optimize heart failure therapy and as a prognostic marker in this patient group. We analyzed ABPM values against survival in patients with advanced heart failure with reduced ejection fraction (HFrEF) who were on optimal guideline-directed medical therapy (GDMT). Methods and results: One hundred patients with advanced HFrEF were followed up for one year. Baseline left ventricular ejection fraction (LVEF), left ventricular end-diastolic dimension (LVEDD) and ABPM values were measured and analyzed against survival. Deceased patients (n = 36) had lower ABPM values and were dippers as compared to living patients (n = 64) [24-hr systolic blood pressure (SBP24hr) = 97.6 ± 12.5 mmHg, 24-hr diastolic BP (DBP24hr) = 64.6 ± 10.2 mmHg, decrement in systolic BP (dipSBP) = 9.9 ± 5.2 mmHg and decrement in diastolic BP (dipDBP) = 11.1 ± 6.5 mmHg vs. SBP24hr = 109.4 ± 16.9 mmHg, DBP24hr = 71.7 ± 17 mmHg, dipSBP = 1.6 ± 5.9 mmHg and dipDBP = 2.7 ± 6.3 mmHg]; the differences were statistically significant, with p values < 0.001, 0.025, < 0.001 and < 0.001, respectively. A logistic regression analysis was done to predict one-year survival using age, sex, LVEF, LVEDD, SBP24hr, DBP24hr, dipSBP, dipDBP and dipMAP as independent predictors. When SBP24hr is raised by one unit, the odds of survival are 1.145 times greater (Exp(B) = 1.145). A one-unit dip in SBP and in DBP multiplies the odds of survival by 0.697 and 0.586, respectively. Conclusion: Among advanced HFrEF patients, those with lower SBP and DBP and dippers have poorer survival compared to those with higher SBP and DBP and non-dippers.
Introduction
Heart failure (HF) is a syndrome complex with varied clinical features, etiology and pathophysiology. With such heterogeneity, it is difficult to assess severity and prognosis. The two most commonly used prognostic indices are left ventricular ejection fraction (LVEF) 1 and New York Heart Association (NYHA) functional class. 2 While NYHA functional class is subjective, LVEF is evaluated once and so does not detect dynamic changes. In some studies, LVEF did not correlate with survival time in advanced heart failure patients. Several dynamic indices, such as stress testing, maximum myocardial oxygen consumption and maximum heart rate at effort, are used. The European Society of Cardiology defines advanced HF 3 as NYHA class III or IV symptoms, objective evidence of severe cardiac dysfunction (EF < 30%), severely impaired functional capacity and HF hospitalization more than once in the past 6 months despite optimal guideline-directed medical therapy. HF is associated with alterations in the sympathetic and parasympathetic nervous systems, the renin-angiotensin system and vasopressin/atrial natriuretic peptide 4 secretion. Indeed, patients with severe congestive heart failure have increased sympathetic nervous system activity and impaired baroreceptor function, which directly influence the diurnal blood pressure profile.
Ambulatory blood pressure monitoring (ABPM) is capable of evaluating multiple aspects of blood pressure (BP), including 24-hr BP, nocturnal BP, dipping patterns, morning surge BP, postprandial hypotension and BP variability. There is an abundance of data on ABPM in hypertension, stroke, diabetes and chronic kidney disease, but relatively little data on ABPM in heart failure. Several studies have correlated ABPM variables with lesions in target organs in hypertensive patients, using left ventricular hypertrophy, 5 microalbuminuria, 6,7 retinal alterations and cerebrovascular diseases 8 as variables. However, few studies have used ABPM to investigate heart failure. Some small studies have suggested that ABPM, specifically nocturnal blood pressure, may be superior to office blood pressure measurement in predicting hospitalisation for heart failure. 9 During the night, retained fluid redistributes, increasing central venous pressure, which in turn activates the cardiopulmonary baroreflex; hence the nighttime decrement in BP. In heart failure this response is blunted, resulting in a non-dipping pattern. 10-13
Aims and objectives
To determine the difference in mean baseline ambulatory BP measurements between advanced HF with reduced ejection fraction (HFrEF) cases who died and those who survived during one year of follow-up.
To determine the correlation of baseline ambulatory BP values with baseline LVEF and left ventricular end diastolic dimension (LVEDD).
Inclusion criteria
Patients presenting with advanced HF with reduced ejection fraction (EF) (NYHA IV) and on medical therapy were included. Exclusion criteria were applied before selection of patients for the study. Evidence-based treatment was optimized as per the ACC/AHA guidelines and at maximal tolerable doses.
Exclusion criteria
Patients with acute HF syndrome, hemodynamically unstable terminally ill patients, irregular heart rhythm, HF with normal EF, congenital heart disease, acute coronary syndrome, revascularization within past six months, endocarditis, pericarditis, myocarditis, peripheral arterial disease and patients on cardiac resynchronization therapy were excluded.
Material and methods
One hundred eligible patients with HF with reduced EF (NYHA IV) admitted to the department of cardiology, SMS Medical College, Jaipur, were enrolled. After stabilization and decongestion, 24-hr ambulatory BP monitoring and measurement of LVEF (Simpson's method) and LVEDD were done. To exclude peripheral arterial disease, palpation of all peripheral pulses, recording of BP in all four limbs and auscultation for any bruit were done.
A Medtech ambulatory BP instrument was used for 24-hr ABPM recording, and EasyABPM software was used to analyze the ABPM values. ABPM was done by putting the cuff on the non-dominant arm in the morning; it was removed at the same time the next morning. Patients were given a diary to record any unexpected events and were instructed to relax the cuffed arm during cuff inflation. The monitor was programmed to record BP every 30 min during the daytime and hourly at night. The following ABPM variables were obtained: mean 24-hr systolic BP (SBP24hr), mean 24-hr diastolic BP (DBP24hr), mean 24-hr mean arterial pressure (MAP24hr), mean wake systolic BP (SBP_W), mean wake diastolic BP (DBP_W), mean wake mean arterial pressure (MAP_W), mean sleep systolic BP (SBP_S), mean sleep diastolic BP (DBP_S), mean sleep MAP (MAP_S), decrement in systolic BP (dipSBP), decrement in diastolic BP (dipDBP) and decrement in MAP (dipMAP).
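Given the raw readings and a wake/sleep annotation, the variables listed above are simple aggregates. A minimal Python sketch (the MAP formula is the conventional approximation, assumed here rather than stated in the paper):

```python
import numpy as np

def abpm_summary(sbp, dbp, awake):
    """Derive the ABPM variables used in this study from a 24-h recording.

    sbp, dbp: arrays of systolic/diastolic readings (mmHg);
    awake: boolean array marking wake-period readings.
    MAP is approximated by DBP + (SBP - DBP) / 3.
    """
    sbp, dbp, awake = map(np.asarray, (sbp, dbp, awake))
    mapr = dbp + (sbp - dbp) / 3
    out = {}
    for name, x in (("SBP", sbp), ("DBP", dbp), ("MAP", mapr)):
        out[f"{name}24hr"] = x.mean()                       # 24-hr mean
        out[f"{name}_W"] = x[awake].mean()                  # wake mean
        out[f"{name}_S"] = x[~awake].mean()                 # sleep mean
        out[f"dip{name}"] = out[f"{name}_W"] - out[f"{name}_S"]  # decrement
    return out
```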
Ischemic etiology was established based on a history of myocardial infarction or prior revascularization (coronary artery bypass graft or percutaneous coronary intervention). In patients with risk factors for coronary artery disease, coronary angiography was done to rule out an ischemic etiology. The first patient was enrolled in January 2015 and the last in February 2016. ABPM was done within one to two weeks of enrolment; no patient died before ABPM measurement. As patients were already on HF medications, only optimization was done during follow-up. Patients were followed up for one year with regard to mortality, and follow-up was completed in February 2017. Death, if it occurred, was confirmed by death certificate or by information from first-degree relatives. Correlation between ABPM values and LVEF and LVEDD was assessed.
Statistical analysis
Categorical variables were expressed as percentages and continuous data as mean ± standard deviation (SD). Student's t-test was used to analyze the difference in baseline ABPM values (SBP24hr, SBP_W, SBP_S, DBP24hr, DBP_W, DBP_S, MAP24hr, MAP_W, MAP_S, dipSBP, dipDBP and dipMAP) between the two groups. Correlation of baseline ABPM values (SBP24hr, DBP24hr, dipSBP and dipDBP) with LVEF and LVEDD was assessed using the Pearson correlation coefficient. Logistic regression was done for prediction of survival on the basis of independent predictors (age, sex, SBP24hr, DBP24hr, dipSBP, dipDBP, dipMAP, LVEF and LVEDD).
Results
One hundred patients with advanced heart failure were enrolled and followed up for one year. At the time of enrolment, 2D echocardiography and 24-hr ambulatory BP monitoring were done. Characteristics of the participants are shown in Table 1. During the one-year follow-up, 36 (36%) deaths occurred: 7 (20%) in hospital (4 due to worsening heart failure and 3 sudden deaths) and 29 at home (25 due to worsening HF and 4 sudden deaths). There was no statistically significant difference between living and deceased patients' baseline characteristics with regard to age, sex, diabetes, hypertension, smoking, electrocardiographic abnormalities, serum creatinine and medications. The baseline ABPM values of the two groups are compared in Table 2.
Kaplan-Meier and log-rank tests (nonparametric analysis) showed that SBP24hr, DBP24hr, dipSBP and dipDBP are significant for prediction of survival (Figs. 1-4). The groups of patients with SBP24hr > 105 mmHg (n = 45), DBP24hr > 69 mmHg (n = 38), non-dipper systolic BP (n = 68) and non-dipper diastolic BP (n = 67) had longer survival times. In the Kaplan-Meier survival analysis, we took an arbitrary cut-off value of 105 mmHg for SBP24hr, the mean SBP24hr of the study population; similarly, we took an arbitrary cut-off of 69 mmHg for DBP24hr, the mean DBP24hr of the study population.
Logistic regression for prediction of survival of advanced heart failure patients
A logistic regression analysis, shown in Table 4, was done to predict one-year survival using age, sex, LVEF, LVEDD, SBP24hr, DBP24hr, dipSBP, dipDBP and dipMAP as independent predictors. A test of the full model was statistically significant, indicating that the predictors as a set reliably distinguish between the patients who survived and those who died.
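For readers who want to reproduce the form of this analysis, the sketch below fits the same kind of model on synthetic stand-in data (the study data are not public, and only two of the nine predictors are included for brevity):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in data (n = 100, as in the study); values are illustrative only.
rng = np.random.default_rng(0)
df = pd.DataFrame({"SBP24hr": rng.normal(105, 15, 100),
                   "dipSBP": rng.normal(4, 7, 100)})
# Toy outcome loosely mimicking the reported direction of the effects.
logit = 0.1 * (df["SBP24hr"] - 105) - 0.3 * df["dipSBP"]
df["survived"] = (rng.random(100) < 1 / (1 + np.exp(-logit))).astype(int)

model = sm.Logit(df["survived"], sm.add_constant(df[["SBP24hr", "dipSBP"]])).fit(disp=0)
print(np.exp(model.params))  # Exp(B): odds multiplier per one-unit increase
```

In the paper's fit, Exp(B) = 1.145 for SBP24hr means that each additional mmHg of 24-hr systolic BP multiplies the odds of one-year survival by roughly 1.145.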
Discussion
In healthy subjects, BP is highest in the early morning hours and declines to its lowest level at night. Normal circadian rhythm is dictated by various mechanisms including the sympathetic nervous system, postural position, baroreflexes, physical activity, tobacco use, sodium intake, alcohol use and neurohormones. 14 The superiority of circadian BP, specifically nocturnal BP, has been repeatedly demonstrated for cardiovascular outcomes in many disease states including hypertension, diabetes, stroke and kidney diseases. In heart failure up-regulated neurohormones, increased sympathetic activity, salt and water retention, and impaired baroreceptor reflex may impact the normal circadian rhythm. HF pharmacologic therapies that modulate the neurohormonal milieu, such as beta-blockers and ACEIs, may also play a role in the altered circadian rhythm.
Several small studies have reported different average daytime blood pressures, ranging from 108/72 mmHg 15 to 131/77 mmHg. 16 The data obtained by Borne et al 17 demonstrated even lower ambulatory daytime and nocturnal blood pressures in NYHA class III-IV patients. These conflicting data highlight the need for large studies to assess circadian BP patterns in the heart failure population, especially in the current era of evidence-based medicine.
In healthy controls, the normal circadian pattern is one of nocturnal dipping, a decline in BP from the ambulatory daytime BP. Typically this decline is approximately 20% compared with the awake reading; however, the general consensus is that a decline or "dip" of <10% from day to night BP readings is considered abnormal and is associated with poor cardiovascular outcomes. 18 Current literature classifies patients based on their nocturnal dipping profile: (1) dippers, 10%-20% decline in BP from day to night; (2) non-dippers, 0%-10% decline in nocturnal BP; (3) extreme dippers, those with >20% decline in BP; and (4) risers, an increase in nighttime BP from the daytime reading. Several reports from independent centers showed that the prevalence of LV hypertrophy, 19 cerebrovascular disease, 18,20 and microalbuminuria 21 was higher among hypertensive subjects with a blunted or abolished fall in BP from day to night than in individuals with a normal day-night BP difference. Furthermore, day-night BP changes significantly refined cardiovascular risk stratification above office BP and other traditional risk markers. Yamamoto et al 22 demonstrated that the degree of ambulatory BP reduction from day to night at the baseline assessment was significantly (p < 0.01) smaller in the group with subsequent cerebrovascular events than in the group with no future events. In older patients with isolated systolic hypertension, the Syst-Eur study found that cardiovascular risk increased with a higher night:day ratio of systolic BP (i.e., in patients more likely to be non-dippers) independent of the average 24-h BP. 23 Similarly, Ohkubo et al 24 showed increased cardiovascular mortality in non-dippers (relative risk [RR]: 2.56, p = 0.02) and reverse-dippers (RR: 3.69, p = 0.004) in comparison with dippers. While these definitions have been applied to heart failure patients, there is no consensus on what constitutes a normal dipping profile in heart failure. In a large cohort of NYHA class II-III heart failure patients, the majority had an abnormal dipping profile by the standard definitions. In the same cohort, the presence of an abnormal dipping profile was an independent predictor of HF outcome: non-dippers and risers had 1.6 and 2.7 times increased risk of death or hospitalization compared to dippers, respectively. 25 In fact, there are some data suggesting that a normal dipping profile may be detrimental in HF. Canesin MF et al 26 studied the effect of the dipping profile on survival in 38 NYHA IV HF patients; patients with a decline of less than 6 mmHg in nighttime mean BP had a better prognosis at 6 months. While these findings can't be extrapolated to patients with less severe heart failure (NYHA I-III), they do raise considerable questions regarding the normal dipping profile in HF patients. Large-scale assessment of dipping profiles in HF patients with varying severity is required. Establishing values to define dippers and non-dippers in HF is essential; it is possible that a dip of <10% is beneficial in HF. Establishing these definitions is important in ultimately determining whether pharmacotherapy can be used to normalize the dipping profile and improve outcome. In our study we found that deceased patients were dippers (dipSBP = 9.9 ± 5.2 mmHg and dipDBP = 11.1 ± 6.5 mmHg) as compared to surviving patients, who were non-dippers (dipSBP = 1.6 ± 5.9 mmHg and dipDBP = 2.7 ± 6.3 mmHg).
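The standard dipping categories quoted above amount to a simple percentage rule; a minimal sketch:

```python
def dipping_status(day_bp, night_bp):
    """Classify the nocturnal dipping profile using the standard cut-offs
    quoted in the text: dipper 10-20%, non-dipper 0-10%, extreme dipper >20%,
    riser <0% (nocturnal rise)."""
    dip = (day_bp - night_bp) / day_bp * 100
    if dip > 20:
        return "extreme dipper"
    if dip >= 10:
        return "dipper"
    if dip >= 0:
        return "non-dipper"
    return "riser"

print(dipping_status(120, 100))  # ~16.7% fall -> "dipper"
print(dipping_status(110, 112))  # nocturnal rise -> "riser"
```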
The link between blood pressure and outcome in heart failure can be made at a variety of levels. In a subset of 181 chronic HF (CHF) patients from the Rotterdam Heart Study, Mosterd et al 27 found that community CHF patients with a higher BP had a better outcome. Canesin et al 26 studied 24-h ambulatory BP in 38 patients with advanced HF (NYHA IV) and also assessed their LVEF and LVEDD. These patients were then followed up for a minimum period of at least 6 months, during which 12 deaths occurred. The mean 24-h, waking and sleeping systolic pressures of the living patients were higher than those of the deceased patients and were significant for predicting survival. Patients with a nocturnal dipDBP of less than 6 mmHg had longer survival. Conversely, patients with a mean nocturnal systolic BP of <105 mmHg were 7.6 times more likely to die than those with SBP >105 mmHg. In this study, LVEF (35.2 ± 7.3%) and LVEDD (72.2 ± 7.8 mm) were not correlated with survival.
In our study, analysis of LVEF showed moderate positive correlation with dipSBP (r = 0.33) and dipDBP (r = 0.32), whereas SBP24hr (r = 0.11) and DBP24hr (r = 0.18) showed slight positive correlation. LVEDD showed negative correlation with dipSBP (r = −0.18) and dipDBP (r = −0.35), whereas SBP24hr (r = 0.18) and DBP24hr (r = 0.09) showed slight positive correlation. Canesin et al 26 showed positive correlation of LVEF with SBP24hr, SBP_W and SBP_S, whereas LVEDD was negatively correlated with the same ABPM variables. Caruana et al 28 did not observe correlation of LVEF with these parameters, but did observe positive correlation of LVEF with dipSBP and dipDBP, similar to our study. The differing findings on the correlation of measures of BP and its variability with LVEF are probably due to heterogeneous patient characteristics in terms of disease stage, etiology and the presence of associated diseases in patients with HF.
Franciosa et al 29 … Caruana et al 28 showed, in their study of 20 patients of NYHA III/IV, that non-dippers were more common in HF than in controls, but they did not analyse ABPM values against mortality. Portaluppi et al 32 also showed that the circadian variability of BP is altered in HF. Contrary to this, Moroni et al 33 showed that there was no loss of circadian variability in advanced HF.
Conclusion
LVEF and NYHA functional class are the most frequently used prognostic indices in HF. In our study, lower SBP and dipping status, evaluated by 24-h ambulatory BP monitoring, were predictors of higher mortality. Comparative analysis of the survival plots suggests that SBP24hr, DBP24hr and the nocturnal decrements in systolic/diastolic pressure obtained by ambulatory monitoring are predictors of mortality. ABPM should therefore be considered a new predictor of mortality alongside the established ones.
Conflict of interest
None.
Funding
This study was not funded by anyone.
Ethical approval
All procedures performed in the study involving human participants were in accordance with the ethical standards of the institutional and national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Informed consent
Informed consent was obtained from all individual participants included in study.
Effectiveness of successive application of the herbicides 'Chemical Glyphosate' and 'Himstop 330' against annual and perennial weeds in cotton fields of Uzbekistan
This article provides information on the effectiveness of the successive application of the herbicides Chemical Glyphosate (CG, 3 l/ha) and Himstop 330 (1.5 l/ha) against annual and perennial weeds in cotton fields. When Himstop 330 was used at a rate of 1.5 l/ha, the number of annual weeds was reduced by 84.6-90.4%, while perennials were reduced by only 14.5-17.9%. When the CG 54% herbicide was applied at a rate of 3.0 l/ha, annuals were reduced by 16.5-19.4%, while perennials were reduced by 86.4-91.5%. CG applied in the fall at a rate of 3.0 l/ha followed by Himstop 330 at a rate of 1.5 l/ha (in series) provides a loss of 89.6-94.5% of annual weeds and 88.2-92.6% of perennials. When CG was used separately at a rate of 3.0 l/ha and Himstop 330 at a rate of 1.5 l/ha, 3.25 q/ha more cotton was obtained than in the control variant. CG (3.0 l/ha) and Himstop 330 (1.5 l/ha), when applied in series, provide a cotton yield 5.2 q/ha higher per hectare.
If cotton, carrot, onion and many other crops remain weedy at the beginning of the growing season, the damage from weeds is enormous, and the negative effects of weeds carry over into the later phases of the growing season [4,6-8].
Weeds belonging to different families are adapted to grow under particular environmental conditions. For example, weeds such as tulips, asterisks, wild oats and ostrichs grow in wheat fields, while weeds such as Bermuda grass, wild rose hips, purslane, black nightshade, goosefoot, Aleppo grass, field bindweed, beauv and purple nutsedge grow among cotton. The use of a wheat-cotton crop rotation makes it possible to reduce these weeds by drastically changing the growing conditions [3,8,11]. A single herbicide has different effects on different weeds; therefore, chronic application of one herbicide leads to an increase in the number of weeds that are resistant to it. In experiments conducted at UzPITI [1], after four years of kotoran use the share of resistant weeds in the phytocenosis reached 78%, and with prometryn it reached 48%. Similar patterns have been observed in other regions; for example, chronic use of the herbicide simazine in maize fields increased the number of resistant weeds from year to year [8].
Other scientists have noted that the effects of chronically applied herbicides on weeds decline from year to year. This negative process can be stopped by alternating herbicides with different spectra of action, using mixtures, or applying herbicides sequentially, because preparations that kill annual weeds well have a weak effect on perennials, while herbicides that effectively kill perennials have a weak effect on annuals [5,9]. Based on this, we conducted experiments to determine the effectiveness of the sequential application of the Uzbek-produced herbicides Chemical Glyphosate (CG, an analogue of the Roundup herbicide) and Himstop 330 (an analogue of the herbicide Stomp) against annual and perennial weeds in cotton fields. Clearing the fields of weeds by increasing the effectiveness of chemical control measures will increase the quantity and quality of the cotton crop [4,10].
The aim of the research is to develop methods to increase the effectiveness of chemical weed control through the sequential application of the herbicides CG and Himstop 330 against annual and perennial weeds on the typical gray soils of Tashkent province.
Materials and Methods
Scientific research was carried out in 2019-2020 under the conditions of the typical gray soils of Tashkent province. Established methods were used in designing and conducting the experiment [5,10]. The experiment was performed in 4 replications of 4 variants (Table 1). The S-6524 variety of cotton was planted. The soils of the experimental area are moderately sandy, typical gray soils in terms of mechanical composition. Groundwater is located at a depth of 4 m.
In selecting and preparing land for the experiment, the typicality of the soil and its degree of supply with humus and nutrients were studied. The Amir Gayrat farm in the Boka district of Tashkent province was selected for the study. The variants were placed in a single tier. The furrow length was 40 m. Each plot was 8 rows wide, i.e., one pass of the seed drill (8 × 0.90 m = 7.2 m; 7.2 m × 40 m = 288 m²). The total area of one plot was 288 m², and the calculated area 144 m². The total area of the experiment was 1,120 m², and the calculated area 560 m².
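As a quick sanity check of the plot geometry above, the short sketch below recomputes the per-plot dimensions from the row count, row spacing, and furrow length; the variable names are illustrative, and the assumption that the calculated area is half of the plot area is inferred from the reported 288 m² and 144 m² values.

```python
# Minimal sketch: recompute the experimental plot areas from the stated geometry.
rows_per_plot = 8        # one pass of the seed drill
row_spacing_m = 0.90     # 90 cm row spacing
furrow_length_m = 40.0   # furrow (plot) length

plot_width_m = rows_per_plot * row_spacing_m         # 8 x 0.90 = 7.2 m
plot_area_m2 = plot_width_m * furrow_length_m        # 7.2 x 40 = 288 m^2
calculated_area_m2 = plot_area_m2 / 2                # assumed: half of each plot (144 m^2)

print(f"plot width:      {plot_width_m} m")          # 7.2 m
print(f"plot area:       {plot_area_m2} m^2")        # 288.0 m^2
print(f"calculated area: {calculated_area_m2} m^2")  # 144.0 m^2
```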
CG is a 54% aqueous solution produced by Chimreaktivtaminot LLC in Uzbekistan; the active ingredient is glyphosate. It was sprayed en masse in late September to early October against actively growing perennial weeds and late-emerging annual weeds.
Himstop 330 is manufactured in Uzbekistan by Khimreaktivtaminot LLC. The herbicide acts selectively and internally (systemically). It acts through the roots on annual monocotyledonous and dicotyledonous weeds.
The following phenological observations were made in the experiment:
• the date of sowing of the cotton, the beginning of seedling emergence, and the date of full emergence;
• budding, flowering, fruiting, and boll opening;
• the number of days elapsed from emergence to full ripening.
The 1000-seed weight was determined. In determining the yield of cotton, each plot was harvested from the field. The density of cotton seedlings was counted twice during the growing season:
• the first count after the cotton was fully thinned to single plants (at the end of May);
• the second count at the end of the growing season, during the cotton harvest.
In the experiment, 4 points were selected in each of the 4 plots to determine the density of cotton seedlings (the length of each point was 11.1 m).
The following agrochemical properties of the soil were studied in the experimental field. To determine the agrochemical parameters of the soil, mixed soil samples were taken in spring from 5 points of the field, in an envelope pattern, from the 0-30 and 30-50 cm soil layers. In these samples, total humus was determined by the I.M. Tyurin method; total nitrogen and phosphorus by the I.M. Maltseva and L.N. Gritsenko method; nitrate nitrogen with an ionometric instrument; mobile phosphorus by the B.P. Machigin method; and exchangeable potassium by the P.V. Protasov method [3].
Samples were taken to the laboratory to determine the amounts of the total and mobile forms of NPK and of humus in the 0-30 cm (plough) and 30-50 cm (sub-plough) soil layers before planting.
Results and discussion
In the experimental fields, the annual weeds found were mainly 'Bermuda grass', 'redroot pigweed', 'goosefoot', 'black nightshade', and 'purslane', and the perennials were 'field bindweed' and 'beauv'. The number of weeds was counted before cultivation after each watering. In the herbicide-free control variant, 39.2 annual and 4.75 perennial weeds were recorded per 1 m² of land during the 1st counting period (Table 2). In the count taken before cultivation after the first irrigation, 'Bermuda grass' numbered 16.4 plants/m² in the control variant, 'goosefoot' 7.5 plants/m², 'redroot pigweed' 5.75, 'black nightshade' 4.25, and 'purslane' 5.25.
When Himstop was sprayed at a rate of 1.5 l/ha, 'Bermuda grass' numbered 1.50 plants/m², and the total number of annual weeds in this variant was 3.75 plants/m². CG 54% had a weak effect on annual weeds when applied separately at a rate of 3.0 l/ha: in this variant, 12.7 plants/m² of 'Bermuda grass', 6.25 of 'goosefoot', 4.75 of 'redroot pigweed', 3.35 of 'black nightshade', and 4.50 of 'purslane' were counted, for a total of 31.6 plants/m² of annual weeds.
Himstop 330 affected only those perennial weeds that emerged from seed. In the control variant, 'field bindweed' numbered 2.25 plants/m² and 'beauv' 2.50 plants/m². In the variant where the herbicide Himstop 330 was applied at a rate of 1.5 l/ha, 'field bindweed' numbered 2.30 plants/m² and 'beauv' 2.50 plants/m², for a total of 4.80 plants/m². When CG 54% was applied at a rate of 3.0 l/ha, there were 0.35 plants/m² of 'field bindweed' and 0.40 plants/m² of 'beauv', for a total of 0.75 plants/m² of perennial weeds.
With the consecutive use of CG (3 l/ha) and Himstop (1.5 l/ha), the number of monocotyledonous and dicotyledonous annual weeds was reduced more effectively than in the variants in which these herbicides were used separately. The number of 'Bermuda grass' plants in this variant averaged 1.25 plants/m², 'goosefoot' 0.25, 'redroot pigweed' 0.25, and 'black nightshade' and 'purslane' 0.20-0.25 plants/m².
The total number of annual weeds was 2.0 plants/m². The Himstop 330 herbicide partially affected the emergence of perennial weeds from seed. The efficacy of the herbicides against weeds is given in Table 3. When Himstop 330 was used at a dose of 1.5 l/ha, annual weeds were reduced by 84.6-90.4%. When the herbicide CG was applied at a rate of 3.0 l/ha, annuals were reduced by 16.5-19.4%, while perennials were reduced by 86.4-91.5%.
When Himstop 330 was applied at a rate of 1.5 l/ha, the dry mass of annual weeds was reduced by 85.5-90.2%. In this variant, the dry mass of perennial weeds decreased by 13.2-15.5%. When Himstop 330 was used at a rate of 2.0 l/ha, the dry mass of annual weeds decreased by 88.3-92.5%, while the mass of perennial weeds decreased by 15.7-18.7%. When CG 54% was applied at a rate of 3.0 l/ha, the dry mass of annual weeds decreased by 17.5-21.7% and the dry mass of perennial weeds by 87.0-90.5%. When CG at 3.0 l/ha and Himstop 330 at 1.5 l/ha were used consecutively, the dry mass of annuals decreased by 90.2-94.2% and the dry mass of perennials by 90.4-92.0%.
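For clarity, the percentage reductions quoted throughout follow the standard efficacy formula, reduction = (control − treated) / control × 100. A minimal sketch, using counts patterned on the values reported above rather than the exact table data:

```python
# Minimal sketch: weed-control efficacy relative to the untreated control.
def efficacy_percent(control: float, treated: float) -> float:
    """Percent reduction in weed count (or dry mass) versus the control."""
    return (control - treated) / control * 100.0

# Illustrative values (plants/m^2) patterned on the counts reported above.
annual_control, annual_treated = 39.2, 3.75        # Himstop 330 at 1.5 l/ha
perennial_control, perennial_treated = 4.75, 0.75  # CG 54% at 3.0 l/ha

print(f"annuals:    {efficacy_percent(annual_control, annual_treated):.1f}% reduction")
print(f"perennials: {efficacy_percent(perennial_control, perennial_treated):.1f}% reduction")
```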
This means that, to effectively reduce the number and dry mass of annual and perennial weeds in the cotton field, it is necessary to apply CG at 3.0 l/ha and Himstop 330 at 1.5 l/ha consecutively. In agriculture, all agrotechnical measures are aimed at creating favorable conditions for the growth and development of crops, and the timely eradication of weeds plays an important role among these measures. Fields heavily infested with weeds cannot be cleared of them in a short period of time, which delays the growth and development of the cotton at the beginning of the growing season. Keeping the fields clean from the beginning of the growing season can only be achieved with the help of herbicides. As can be seen from Table 4, in the variants in which herbicides were used, the height of the cotton and the number of live leaves were significantly greater than in the control variant. On June 1, the height of the cotton was 21.1 cm in the control variant, 24.5 cm in the variant where the Himstop herbicide was used at a rate of 1.5 l/ha, and 23.2 cm in the variant where CG 54% was applied at a rate of 3.0 l/ha.
This difference was even greater in the variant where CG was used in series with Himstop (2.4 cm). In this variant, the height of the cotton on September 1 was 97.5 cm, whereas in the control variant it was 89.0 cm during this period. The number of fruiting branches was also lower in the control variant than in the experimental variants, at 14.2 (Table 5); in the experimental variants, this figure averaged 15.2-16.0 per plant. Yield elements were likewise higher in the variants where herbicides were used than in the control. In the variants where the CG and Himstop herbicides were used separately, there were 1.0 and 0.75 more bolls per plant than in the control, respectively. In the variant where the herbicides CG and Himstop were applied in series, the number of bolls was 1.50 greater than in the control variant.
Consecutive application of herbicides with different spectra of action ensures that the fields are freed of weeds in a timely manner, creates favorable conditions for cotton growth and development, and produces a cotton yield higher than the herbicide-free option. In the control variant, the cotton yield was 25.3 q/ha. The Himstop 330 herbicide at a rate of 1.5 l/ha and CG 54% applied in the fall at a rate of 3.0 l/ha each yielded 3.25 q/ha more cotton than the control. When Himstop 330 was used in series with CG, the cotton yield was 5.2 q/ha higher than the control.
Conclusion
In order to effectively eradicate weeds with different biological properties, it is necessary to apply herbicides with different spectra of action alternately or sequentially. The number of annual weeds decreased by 84.6-90.4% when Himstop 330 was used at a dose of 1.5 l/ha. In this variant, the herbicide affected perennial weeds only weakly: its effectiveness was just 12.5-15.8% and 14.5-17.9%, respectively. The CG 54% herbicide applied at a rate of 3.0 l/ha reduced annual weeds by 16.5-19.4% and perennials by 86.4-91.5%. CG at 3.0 l/ha in autumn combined with Himstop 330 at 1.5 l/ha at sowing (in series) reduced annual weeds by 89.6-94.5% and perennials by 88.2-92.6%. When CG 3.0 l/ha and Himstop 330 1.5 l/ha were used separately, 3.25 q/ha more cotton was obtained than in the control variant. Applied in series, the herbicides CG (3.0 l/ha) and Himstop 330 (1.5 l/ha) provided a cotton yield 5.2 q/ha higher than the control.
Spectroscopic investigation of tau protein conformational changes by static magnetic field exposure
Electromagnetic fields alter the molecular environment of proteins and induce changes in the central nervous system. This research applied Fourier transform infrared (FTIR) spectroscopic analysis to investigate the effects of static magnetic fields on tau protein, which is implicated in neurological disorders. It explores the conformational changes of tau protein and highlights its potential application as a pathological biomarker for early detection and therapeutic intervention. The results indicate that tau protein is susceptible to magnetic field exposure in the amide B, fingerprint, and amide (IV-VI) regions. Changes in peak positions and band intensities were identified and attributed to the effect of magnetic forces on molecular vibrations. Magnetic forces may affect the microtubule structure of the tau protein, leading to protein aggregation. These results indicate the potential of FTIR spectroscopy for the early detection and classification of degenerative diseases through spectrum analysis. Different magnetic fields could also be used as noninvasive therapeutic procedures to induce changes in the molecular environment of proteins.
In the last decade, several studies have indicated that moderate TSMF in the range between 1 mT and 1 T can affect human cortical excitability [4]. TSMF has been used as a noninvasive brain stimulation technique, but more investigations are needed to understand its mechanisms [5-7]. Another area of scientific concern is whether exposure to static magnetic fields causes DNA damage. Past studies have confirmed that a static magnetic field alone has no such effect on the fundamental properties of cell growth and survival under standard culture conditions. However, there is an indication that the frequency of micronucleus formation changes substantially when specific treatments such as X-irradiation and mitomycin C are applied during exposure to a strong static magnetic field. Several studies have suggested that a strong static magnetic field may influence ion transport and gene expression, and others have found that a strong magnetic field can cause spatial orientation phenomena in cell culture [8]. Moreover, protein misfolding and aggregation are becoming fingerprints of neurodegenerative diseases, including Alzheimer's, Huntington's, Parkinson's, prion diseases, and amyotrophic lateral sclerosis [9]. Protein misfolding and aggregation are natural phenomena that disrupt cell function through many mechanisms that are yet to be fully explained [10,11].
In 2002, the International Agency for Research on Cancer (IARC) declared that extremely low frequency (ELF) magnetic fields are 'possibly carcinogenic to humans' [12-15]. Most neural activities in the brain are processed through electrical impulses. Consequently, it is possible that exposure to a static magnetic field (SMF) can yield physiological and biochemical alterations. However, there is a lack of unanimity regarding the effects of SMF on different proteins; it appears that SMF induces physiological changes in exposed proteins. These changes can either improve biological tasks or cause dysfunction, depending on the amount of energy absorbed by the proteins [16-20].
Tau is a microtubule-associated protein that has an essential role in axonal stabilization, neuronal development, and neuronal polarity [21]. It is a component of the cytoskeleton, which is crucial in maintaining spine development and morphology, cognitive behavior, consciousness, and memory storage [22]. These skeletons form tube-shaped structures through which nutrients and other substances are carried into the neurons. Tau proteins are intrinsically disordered owing to their unstable conformation and high pliability [23]. The role of tau protein is to maintain healthy neurons through reversible polymerization of microtubules, sustaining neural growth and neural polarity [24]. Figure 1 illustrates the function of tau protein in keeping neurons healthy [25].
Tau protein determination and quantification are essential to the diagnosis of most degenerative diseases. Early detection of tau protein changes based on biomarkers may allow earlier diagnosis of most infectious and autoimmune diseases [26]. Researchers are becoming more convinced that Alzheimer's disease (AD) may result from a combination of abnormal tau and beta-amyloid proteins associated with other age-related conditions such as cerebrovascular disease and Lewy bodies [27]. Analysis of several brain tissues from AD patients confirmed two different abnormal formations: extraneuronal amyloid plaques and intraneuronal tangles containing a fibrillary structure mainly composed of microtubule-associated tau protein [27-31].
It seems that the neuropathologic diagnosis of AD cases correlates with the formation of both filamentous tau protein and cored neuritic plaques [32]. The tau filaments are called 'paired helical filaments' (PHFs). They reveal repetitive patterns when viewed through electron microscopy, showing two tiny filaments twisted around one another, forming periodic structures with a 65-80 nm crossover distance. On the other hand, neuritic plaques are composed of activated microglia and reactive astrocytes entangled with neuritic elements in the plaque periphery [27]. In other words, the abnormal aggregation of tau protein into insoluble paired helical filaments (PHFs) is one of the indications of AD [33]. It appears that intracellular inclusions of fibrillar forms of tau protein with β-sheet structure assemble in specific brain regions, causing memory loss. In addition, pathological studies in animals and humans have shown that tau oligomer formation contributes to neuronal loss [23]. Microtubule-associated protein tau belongs to the expanding group of natively unfolded proteins, or intrinsically disordered proteins (IDPs), which can display novel features in protein chemistry.
Furthermore, histological analysis and tau positron emission tomography (PET) imaging studies have disclosed that cognitive impairment correlates with tau presence and neuronal loss in the human brain, and with their associations with aging and AD [34]. This suggests that tau oligomer formation and grey matter loss lead to cognitive deficits by different mechanisms, with implications for future therapeutic trials targeting tau pathology [35]. Another analysis suggests that the aggregation of hyperphosphorylated forms of tau in the neural soma, forming neurofibrillary tangles, may constitute the leading cause of AD [36]. Heiko and Eva Braak have shown that tau deposits of neurofibrillary tangles follow a particular pathway in their proliferation from the transentorhinal cortex to neocortical association areas and finally to secondary and primary cortical areas. They suggested six disease stages, where the first two stages are asymptomatic, the following two stages show some loss of cognitive ability, and the last two stages are associated with dementia [37].
Previous studies have shown that the vibration modes of the amide bands exhibit high sensitivity when exposed to ELF electromagnetic fields [38,39]. Protein conformational changes are observed through their effects on the vibrational modes of the amide bands of proteins in the IR region of the spectrum [40]. Fourier-transform infrared (FTIR) spectroscopy is a noninvasive technique to detect changes in the secondary structure of proteins and has been widely used to investigate protein oligomerization, aggregation, and fibril formation [10,11]. A considerable amount of research has linked protein conformational changes to the molecular vibrations of the absorption bands of the secondary structure of the studied proteins. These changes can be detected through spectrum variations in the mid-infrared range (400-4000 cm −1 ). FTIR is a sensitive diagnostic tool for differentiating abnormalities in biological cells. The biochemical changes in cells often correspond to changes in the fingerprint regions of the proteins.
The present study investigates the impact of exposing tau protein to SMF by FTIR spectroscopy. The experimental setup aims to imitate the exposure of neuronal proteins to SMF, similar to what happens in the brain through the passage of neuronal signals. A considerable amount of scientific research has revealed a strong connection between the band intensities and peak positions of IR spectra and the secondary structure changes of the studied proteins [41-43].
Studies of the effects of SMF on different proteins could open new perspectives in health research, especially in the applicability of immunosensor development to clinical diagnosis. Several studies have used biological samples obtained from patients with neurodegenerative conditions to confirm the application of immunosensors through the time-dependent phase angle shift technique Single Frequency Analysis (SFA), which is based on the phase angle shift from antibody-antigen interactions [44].
Early detection of tau oligomer formation through absorbed infrared energy can be used as a sensor to accurately detect the early stages of certain diseases. The absorbance spectrum in the mid-infrared region for a tissue or blood sample is considered a unique representation of the contents of the sample. An unknown sample is easily identified by comparing the obtained fingerprint spectrum to a library of known spectra. The analysis of the measured spectra can provide insights into physiological and pathological changes that have taken place at the molecular level.
Preparation of stock solutions
The tau protein ladder contains six recombinant tau proteins expressed in E. coli, with molecular weights of 36.8, 39.7, 40.0, 42.6, 42.9, and 45.9 kDa. The proteins are expressed without histidine tags. The purity of each protein is >90% (SDS-PAGE). The six tau isoforms differ from one another by the number (3 or 4) of microtubule-binding repeats (R) of 31-32 amino acids each and the number (0, 1, or 2) of amino-terminal inserts (N) of 29 amino acids each [45]. Tau protein is supplied as a solution in 125 mM Tris-HCl, pH 6.8, with 4% SDS, 10% 2-mercaptoethanol, 20% glycerol, and 0.004% bromophenol blue. Each 50 µl sample of the tau protein ladder contains 0.25 mg of each of the six isoforms.
The tau protein samples were obtained from the Sigma Aldrich Chemical Company. The samples were kept at −20 °C without any further purification prior to their use. Sample solutions were incubated for one hour at room temperature, and then the samples were prepared for measurement. 50 µl of each aqueous sample was placed on a silicon window plate and left to dry completely at room temperature prior to taking spectroscopic measurements.
FTIR spectroscopic measurements
A Bruker IFS 66/S spectrophotometer equipped with a liquid nitrogen-cooled Mercury-Cadmium-Telluride (MCT) detector and a KBr beam splitter was used to scan the FTIR measurements. The spectrophotometer was continuously purged with dry air to reduce the noise signal. Each spectrum was obtained by averaging 60 repeated scans to maintain measurement accuracy. The FTIR measurements were obtained at a spectral resolution of 4 cm −1 , and the aperture setting was held at 8 mm, which gives the best signal-to-noise ratio. OPUS software was used to make the baseline corrections, compute the second derivatives of the spectra, and identify peak positions. The Fourier self-deconvolution (FSD) process decomposes the major absorption bands into their original constituent peaks, which correspond to the vibrational modes of molecules in the protein's secondary structure. All FSD processes were consistently repeated six times. The FTIR spectra of tau protein were scanned over the region of (4000-400) cm −1 . The background absorption spectrum was subtracted from each of the sample FTIR spectra. The net effect of the static magnetic field on the tau protein, represented by the difference spectra, was calculated by subtracting the sample's spectrum before exposure to the magnetic field from the sample's spectrum after exposure. As an accuracy check, the resulting difference spectra in the featureless region of the protein spectra, (1800-2200) cm −1 , must always give zero difference. In addition, difference spectra of control samples with the same protein concentration resulted in a flat line, as expected.
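The difference-spectrum and second-derivative steps described above are straightforward to reproduce numerically. A minimal sketch, assuming two absorbance arrays sampled on a common wavenumber grid (the random placeholder spectra and the Savitzky-Golay smoothing parameters are illustrative, not values from the study):

```python
import numpy as np
from scipy.signal import savgol_filter

# Minimal sketch: difference spectrum and second derivative of FTIR absorbance.
# Assumes 'before' and 'after' are absorbance values on the same wavenumber grid.
wavenumbers = np.arange(400.0, 4000.0, 4.0)          # 4 cm^-1 resolution grid
before = np.random.rand(wavenumbers.size)            # placeholder: pre-exposure spectrum
after = np.random.rand(wavenumbers.size)             # placeholder: post-exposure spectrum

# Net effect of the magnetic field: spectrum after exposure minus before.
difference = after - before

# Second derivative (Savitzky-Golay) highlights overlapping band positions.
second_derivative = savgol_filter(after, window_length=9, polyorder=3, deriv=2)

# Sanity check mirroring the paper: the featureless 1800-2200 cm^-1 region
# of the difference spectrum should be approximately zero for real spectra.
mask = (wavenumbers >= 1800) & (wavenumbers <= 2200)
print("Mean difference in featureless region:", difference[mask].mean())
```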
Static magnetic field measurements
A DC voltage regulated at 220 volts and connected to an electromagnetic coil was used to produce the static magnetic field. The prepared sample was placed as close as possible to the center of the coil in a fixed holder. This sample positioning ensures that the magnetic field is uniform and perpendicular to the sample's surface. The tau sample was exposed to the following magnetic fields: 0.0, 0.24, 0.45, 0.48, 0.6, 0.9, 1.2, 1.8, and 2.2 mT. Each sample was exposed to each magnetic field setting for 2.0 min. The value of each magnetic field was measured using a PHYWE Gauss meter with an axial magnetic field probe.
Results
The experimental results were obtained by analysis of the FTIR absorption spectra of tau protein over the entire mid-infrared region (4000-400) cm −1 , as shown in figure 2(A), while the second derivative of the absorption spectrum is shown in figure 2(B). The FTIR spectra were obtained for tau protein before and after exposure to different static magnetic fields while keeping the sample fixed in its position at all times. This approach allows precise comparison of band intensities and peak positions before and after exposing the protein to the different static magnetic fields.
The FSD absorbance spectra of the tau protein showed detailed structure for the significant absorption bands, including all amide bands and the fingerprint region. The absorption peaks in the amide I region (1600-1700) cm −1 correspond to the stretching vibrations of the C=O coupled with C-N stretching and the C-C-N deformation mode [46].
Several peaks appear in the amide II region (1480-1600) cm −1 due to the combination of N-H in-plane bending and C-N stretching vibrations [47]. The absorption bands in the amide III region (1220-1320) cm −1 are composed of an in-plane mixture of N-H bending and C-N stretching, with extra contributions from C-H and N-H deformation vibrations [47,48]. The absorption bands in the amide IV region (625-770) cm −1 correspond to a mixture of OCN bending vibrations coupled with out-of-plane N-H bending [49]. In the amide V region (640-800) cm −1 , the absorption bands are due to out-of-plane N-H bending, while the absorption bands in the amide VI region (537-606) cm −1 are due to out-of-plane C=O bending [50]. The absorption bands in the region (900-1220) cm −1 are assigned to C-O bending vibrations of saccharides (glucose, lactose, and glycerol) [51]. Furthermore, the absorption bands in the region (1360-1430) cm −1 are due to the vibrations of specific amino acid chains, while the absorption bands in the range of (1430-1480) cm −1 are caused by fatty acids, phospholipids, and triglycerides [52].
All the prominent peaks of the absorption bands before and after exposure to magnetic fields in the mid-infrared spectral range (400-4000) cm −1 are assigned according to the second derivative and the FSD computations and are listed in table 1. The FSD absorption spectra of tau protein show the main absorption bands, including the amide bands and the fingerprint regions.
The spectra in figure 3 for the range of (500-1000) cm −1 show a remarkable increase in most bands' intensities due to the exposure to the static magnetic field.
Moreover, peak shifts with minor changes in the band structures were observed in some cases, indicating high sensitivity to the magnetic field. The peak at 583 cm −1 merged with the neighboring peak at 594 cm −1 after the magnetic field exposure. The bands at (595 and 612) cm −1 tripled in intensity, and the peak at 612 cm −1 shifted to 620 cm −1 . The peak at 632 cm −1 decreased in intensity and appeared as a shoulder of the neighboring band at 650 cm −1 as the magnetic field strength increased. The peak at 669 cm −1 did not change, while the peaks at 683, 704, and 719 cm −1 showed a small increase in their intensities. The peak at 737 cm −1 reduced its intensity sharply and disappeared, while the peak at 800 cm −1 increased its intensity suddenly after exposure to the magnetic field. The remaining peak changes are listed in table 1. The FSD spectra in the ranges of (1000-1700) cm −1 and (1700-4000) cm −1 are shown in figure 4 and figure 5, respectively. All changes in band intensities and peak positions are listed in table 1 and table 2. Consistently, the resultant effect of the magnetic field on the tau protein is determined by subtracting the tau protein's spectrum before exposure to the magnetic field from the spectrum after the exposure. The difference spectrum for the range (500-1700) cm −1 is shown in figure 6(C), which reveals all the absorption bands affected by exposure to the SMF.
Discussion
The experimental results of exposing tau protein to SMF have shown changes in the measured absorption spectra. These changes are related to changes in the secondary structure of tau protein and are indicated by band position, band intensity, and bandwidth variations for most of the observed bands. The peak changes are usually linked to a shift in the absorbed frequency due to external effects on the vibrational bonds. The frequency of vibration of the normal mode excited by the absorbed light is expressed by equation (1):

$\nu = \frac{1}{2\pi c}\sqrt{\frac{k}{\mu}}$ (1)

where ν is the frequency in cm −1 , c is the speed of light, k is the force constant in N cm −1 , and μ is the reduced mass in kg. The only changing variables in the equation are k and μ. These two molecular variables determine the infrared frequency absorbed by any molecule. No two different molecules have the same force constants and atomic masses, so each chemical structure's infrared spectrum must differ. It seems that any protein conformation can alter the frequency of the absorbed energy due to modifications in the original bond vibrations of the absorption band. Therefore, it is reasonable to expect that protein aggregation instigates a decrease in the frequency of the secondary structure vibrations [9]. The intensity of the absorption bands in the infrared spectra depends on the concentration of molecules in the sample, as shown by Beer's law in equation (2), which relates the concentration to the absorbance:

$A = \varepsilon \ell c$ (2)
where A is absorbance, ε is absorptivity, ℓ is the path length, and c is the concentration. Any variation in the number of bonds is directly connected to conformational changes in the protein's structure and can affect absorbance. Furthermore, the intensities of IR bands can be enhanced by increasing the dipole moments of the macromolecules through their alignment with the applied field. The increase in intensity is proportional to the square of the variation of the dipole moment of the molecule induced by vibrations, as shown by equation (3) [56]; the intensity $I_k$ between the vibrational state $E_k$ and the ground state $E_0$ is

$I_k = \frac{8\pi^3 N}{3hc}\,(E_k - E_0)\sum_{\alpha}|R_{\alpha,k}|^2$ (3)

where N is Avogadro's number, $R_{\alpha,k}$ is the transition dipole moment between the states 0 and k in the α-direction, and $E_0$ and $E_k$ are the first and the (k+1)th eigenvalues. The dipole moment depends on the distance separating the charges; for example, the C-H stretch is more intense than the C-C rock vibration. The frequencies of stretching vibrations are higher than their corresponding bending frequencies, and bonds to hydrogen have higher stretching frequencies than bonds between heavier atoms.
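As a numerical illustration of equations (1) and (2), the sketch below computes the harmonic wavenumber of a C=O-like oscillator and an absorbance from Beer's law; the force constant, absorptivity, path length, and concentration are illustrative assumptions, not measurements from this study.

```python
import math

# Equation (1): harmonic oscillator wavenumber, nu = (1 / 2*pi*c) * sqrt(k / mu).
c_cm_s = 2.998e10            # speed of light in cm/s
amu = 1.6605e-27             # atomic mass unit in kg

k = 1170.0                   # illustrative force constant in N/m for a C=O bond
mu = (12.0 * 16.0) / (12.0 + 16.0) * amu   # reduced mass of a C-O pair in kg

nu = math.sqrt(k / mu) / (2.0 * math.pi * c_cm_s)
print(f"Wavenumber: {nu:.0f} cm^-1")       # ~1700 cm^-1, i.e., the amide I region

# Equation (2): Beer's law, A = epsilon * l * c.
epsilon = 2.0e3              # illustrative molar absorptivity, L mol^-1 cm^-1
path_cm = 0.01               # illustrative path length, cm
conc = 1.0e-3                # illustrative concentration, mol/L
print(f"Absorbance: {epsilon * path_cm * conc:.3f}")
```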
The intensity is related to the substance's temperature and the quantum mechanical interaction between the radiation and the absorber. Therefore, determining absolute intensities is a complicated task. Most researchers calculate integrated intensities by integrating the area under the absorption bands to indicate relative changes in the intensities. Any changes in the absorption bands of proteins indicate conformational changes in those proteins due to charge or vibrational changes. The most substantial effect occurs when the magnetic field is perpendicular to the plane of vibration. The most notable variations in the absorbance levels were observed for the symmetric and asymmetric stretching modes of the phosphodiester groups of cellular nucleic acids [57]. The overlapping of absorption bands makes it difficult to determine band intensities accurately. A simple approach is to regard the spectra as a characteristic fingerprint of the conformational change. Therefore, a conformational change in the spectrum may be used to recognize and define transient conformational states of a protein [46].
The absorption spectra revealed several noticeable changes in the amides (IV-VI) regions due to exposure to SMF, as summarized in table 1 and shown in the relevant figures. Most of these bands have shown an increase in the intensity due to exposure to the magnetic field, which implies an increase in the transition dipole moment for these absorption bands. The peak at 512 cm −1 has shown a slight increase in its intensity due to the SMF effect on the NH out-of-plane bending vibrations. The most substantial rise in intensities happened to the bands that involve CO out-of-plane bending with some out-of-plane displacement of the NH group at 595 and 612 cm −1 , respectively. A decrease in intensity occurred in the bands 632 cm −1 and 737 cm −1 due to the magnetic field effect on the CO in-plane bending, CC stretch, and CNC deformation [50].
Tau protein is classified as a natively unfolded protein, and it is expected to have a very low content of secondary structural elements [58,59]. Therefore, the absorption bands in the amide III region are relatively weaker than the other bands, as shown in figure 4. The band at 1430 cm −1 increased in intensity and shifted to 1427 cm −1 , a new weak band appeared at 1441 cm −1 , and the band at 1450 cm −1 showed a drop in its intensity after tau exposure to SMF.
The absorption bands in amide II and amide I were not affected by the tau exposure to SMF, where little or no changes were observed regarding both intensities and peak positions. The lack of changes can be connected to the low content of secondary structure elements or the small effect on the involved vibrational bonds. It is worth noting that the increase and decrease of intensities should correspond respectively to the peaks and the inverted peaks in the difference spectra in figure 6 and figure 7.
The absorption bands in the range (1700-4000) cm −1 are listed in table 2; they maintained their positions with little or no intensity increase after exposing tau protein to SMF. Only the bands at 2340, 2358, 2829, 2850, 2872, 2917, 2934, and 2960 cm −1 showed a noticeable increase in their intensities, of which the bands at (2872 and 2960) cm −1 involve the symmetric and asymmetric stretch of the C-H methyl group, respectively. Also, the bands at (2917 and 2934) cm −1 involve the asymmetric stretch of the C-H methylene group, while the band at 2850 cm −1 involves the symmetric stretch of the C-H methylene group. These peaks are associated with the presence of saturated lipids, and any intensity increase, mainly in the CH 2 vibrations, can indicate a higher content of saturated lipids. An increase in saturated lipids is related to vascular perturbations, which may lead to the development of dementia [60-62].
The absorption bands at (2340 and 2358) cm −1 have shown a moderate increase in their intensities while maintaining their positions. This is also attributed to changes imposed by the magnetic field on molecular vibrations involving C-C or C-N triple bonds.
The increase in the intensity of the absorption bands upon tau protein exposure to SMF should be proportional to the square of the dipole moment magnitudes of those bands. The transition dipole moment matrix ($R_{\alpha,k}$ in equation (3)) can increase in magnitude through an increase in the charge of the dipole moment or in the distance separating the charges, all other variables being constant. The exerted magnetic force can induce such changes in the vibrating atoms of the active molecules.
$F = qvB\sin\theta$ (4)

where q and v are the charge and velocity of the atoms of the active molecule, B is the magnetic flux density, and θ is the angle between the velocity and the magnetic field. The magnetic force can exert torque on molecular vibration to align the molecules in the direction of the magnetic field [63]. Moreover, the magnetic force may change the amplitude of vibrations and therefore increase or decrease the intensity of the absorption band, depending on the molecular orientations with respect to the magnetic field direction. The magnetic field imposes changes on the molecular structure of the protein by aligning and twisting the molecular vibrations, thereby affecting the bands' intensities. The magnetic force tends to affect asymmetric and out-of-plane motions more than symmetric and in-plane vibrations.
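A quick numerical sense-check of equation (4) for the scale of the forces involved; the effective charge and vibrational speed below are assumed round numbers, with only the 2.2 mT field taken from the experiment.

```python
import math

# Minimal sketch: magnitude of the magnetic (Lorentz) force, F = q*v*B*sin(theta).
q = 1.602e-19        # elementary charge, C (illustrative effective charge)
v = 1.0e3            # illustrative vibrational atomic speed, m/s
B = 2.2e-3           # strongest field used in the study, T (2.2 mT)
theta = math.pi / 2  # field perpendicular to the plane of vibration

F = q * v * B * math.sin(theta)
print(f"Lorentz force: {F:.2e} N")   # ~3.5e-19 N at these assumed values
```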
The absorbed molecular frequency may shift because of changes imposed by the magnetic force on molecular vibrations. In other words, any changes in the molecular environment may lead to changes in the spring constant or the displacement of the molecular vibrations affecting the absorbed energy of the molecule. Therefore, exposure of tau protein to SMF can cause changes in the absorbed energy of molecular vibrations and allow some bands to become susceptible to shifting their peak positions.
Our experimental results showed that tau protein exposure to SMF strongly influences the vibrations that involve C-O out-of-plane bending with out-of-plane displacements of the NH group. One may speculate that the effect of the magnetic force on the bending vibrations comes in the form of an additional torque twisting the vibrating molecules, thereby enhancing the potential and kinetic energy of the moving atoms, increasing the frequency of vibration, and causing the shift in peak position and the increase in intensity.
In the case of amide B, the C-H stretching vibrations are the result of Fermi resonance of the amide II band, which means the frequency has been doubled while the stretching distance is half of that for the fundamental vibrational mode. The magnetic force acts on the vibrating atoms inducing a bending vibration to interfere with the stretching vibration. In other words, if a magnetic force is applied to a vibrating molecule, the magnetic force tends to align the molecules' vibration in its direction. The resulting torque induces an alternating bending effect on the vibrating molecules, which produces changes in the potential energy and the kinetic energy of the vibrating bonds. The magnetic force on moving charges in a magnetic field is expressed by equation (4) and is depicted in figure 8, which shows the induced magnetic forces on each moving charged atom.
The induced magnetic forces impose an out-of-plane bending on the stretching vibration. The resultant force increases slightly during stretching out and compressing in, with a slight bend in the vibration direction. The increased forces in both directions of the vibrating molecule increase the amplitude of vibration and yield an intensity increase.
The magnetic force is expected to be more effective on lighter vibrating groups such as CH 2 and CH 3 , which can be verified from equation (1), where ν² is proportional to kμ⁻¹; that is, ν² is proportional to the inverse of the reduced mass of the vibrating molecule. In addition, the induced magnetic force has a proportionally greater impact on vibrations with smaller vibrational amplitudes.
The experimental results have shown that tau protein is sensitive to SMF, as reflected by intensity changes and shifts in peak positions, to varying degrees, for most absorption bands, as listed in tables 1 and 2. Tau protein exposure to SMF resembles an induced change in the molecular environment, leading to protein conformational changes through vibrational changes and other magnetic interactions such as the surface tension generated by the magnetic pressure [64]. It seems that SMF induces molecular polarization in the exposed protein by aligning the molecular vibrations in the plane of the magnetic field direction. The magnetic force interferes by folding the normal protein functions into a two-dimensional surface. These momentary changes in the secondary and tertiary structures of the tau protein leave open the possibility of altering the shape of this sensitive microtubule protein and turning it into an insoluble misfolded protein. On the other hand, it is also possible to use SMF as a noninvasive technique to induce the changes needed to eliminate unwanted protein accumulations.
The decrease in intensities for some bands after exposure to the magnetic field is caused by a suppressing effect on the amplitude of oscillation, which modifies the molecular vibrations in these absorption bands. The inverted peaks in the difference spectra in figures 6 and 7 should coincide with these bands' positions if there are no shifts. The same is true for the increase of intensities, where the difference spectra peaks should correspond to the final frequency of the involved bands.
It seems that once tau protein is exposed to SMF, a genuine interaction is induced, yielding momentary changes in the protein's secondary and tertiary structures. The shifts in peak positions correlate with variations in the internal energy of the molecular structure of the protein, which may lead to instantaneous changes in the protein's structure. A magnetic field can affect molecular vibration and induce molecular distortion within the exposed protein. In other words, the magnetic field causes changes to the molecular environment; these induced changes can be helpful in some cases and harmful in others. For example, exposure to a certain strength of magnetic field leading to the unfolding of tau protein into a proper orientation could delay the progression of degenerative diseases. In contrast, exposure to a different type or strength of magnetic field could expedite the aggregation of tau protein and facilitate the progression of such diseases. Moreover, protein exposure to different strengths of magnetic fields and a wide range of electromagnetic radiation may reveal some clues behind protein misfolding or provide a biomarker measurement indicating disease progression.

Figure 8. (A) The vibrating molecule without exposure to the magnetic field. (B) The molecule vibrating while exposed to an inward magnetic field perpendicular to the plane of vibration. The green arrows indicate the direction of motion and represent the force on each atom, and the blue arrows indicate the direction of the force exerted on each atom by the magnetic field. The resultant force on the atoms increases, with a small shift in each direction.
The importance of understanding the molecular mechanisms behind protein conformation stems from the need to understand amyloid aggregates, which are known for their high heterogeneity and increasing association with neurotoxicity. Brain pathogenesis leads to oligomer formation, which is the leading cause of most degenerative neurological diseases, where early detection using noninvasive techniques could be a lifesaver [37,65-68]. FTIR spectroscopy is a simple technique for acquiring spectra that can be compared with biomarker spectra of different neurological diseases to identify each disease. Brain exposure to TSMF can be used as a therapeutic procedure to alter protein conformations and dissolve amyloid oligomers [69]. However, such techniques require a complete understanding of the effects of magnetic fields on protein structure. In addition, more investigations are still needed to understand the mechanisms behind protein misfolding, fibril formation, and aggregation, which lead to most neurological diseases. In general, continued progress in FTIR spectroscopy and imaging, combined with other spectroscopic methods, can lead to improved understanding, identification, and treatment of different protein-folding diseases [11].
Conclusion
The experimental results can be summarized as follows: (1) Tau protein is susceptible to SMF exposure; a significant number of the absorption bands showed changes in their intensities and peak positions. (2) The magnetic force can affect and twist the molecular vibrations, thereby affecting the tau protein's microtubule structure.
(3) The shifts in peak positions correspond to changes in the potential energy of the molecular vibrations; these changes are caused by the magnetic field forces on the vibrating atoms. A shift to higher potential energy implies an increase in the stretching amplitude or an increase in the protein force constant, leading to a higher frequency in a more rigid structure. (4) The variations in band intensities relate to changes in the transition dipole moment, or in the rate of change of charge in the molecular dipole moment. (5) FTIR spectroscopy has the potential to be used as a sensor for the early detection and classification of dementia diseases through spectrum analysis of a single drop of plasma. (6) SMF has the potential to be used as a noninvasive therapeutic procedure to impose changes on the molecular environment of proteins.
Let-7i enhances anti-tumour immunity and suppresses ovarian tumour growth
Cancer immunotherapy has seen significant success in the last decade for cancer management by enhancing endogenous cancer immunity. However, immunotherapies developed thus far have seen limited success in the majority of high-grade serous carcinoma (HGSC) ovarian cancer patients. This is largely due to the highly immunosuppressive tumour microenvironment of HGSC and late-stage identification. Thus, novel treatment interventions are needed to overcome this immunosuppression and complement existing immunotherapies. Here, through analysis of > 600 human HGSC tumours, we have identified a critical role for Let-7i in modulating the tumoural immune network. Tumoural expression of Let-7i had a high positive correlation with anti-cancer immune signatures in HGSC patients. Confirming this role, enforced Let-7i expression in murine HGSC tumours resulted in a significant decrease in tumour burden with a significant increase in T cell numbers in tumours. In concert with the improved tumoural immunity, Let-7i treatment also significantly increased CD86 expression on antigen presenting cells (APCs) in the draining lymph nodes, indicating enhanced APC activity. Collectively, our findings highlight an important role of Let-7i in anti-tumour immunity and its potential use for inducing an anti-tumour effect in HGSC. Supplementary Information: The online version contains supplementary material available at 10.1007/s00262-024-03674-w.
Introduction
High-grade serous carcinoma (HGSC) is the most fatal gynaecological cancer, with a 5 year survival rate of 47.4% and a median survival time of 52 months [1-3]. Currently, the standard treatment for advanced-stage ovarian cancer is primary cytoreductive surgery and platinum-based adjuvant chemotherapy [4,5]. Cancer recurrence is frequent after initial treatment, which leads to poor survival outcomes [6,7]. Immune therapy represents a promising strategy to improve patient survival by utilising the body's immune system to eliminate cancer cells [8]. Current immunotherapies for solid tumours include immune checkpoint blockers (ICBs), cancer vaccines, and adoptive cell therapy, with many new therapies currently in early phase trials [9-11]. In particular, several checkpoint inhibitors such as anti-PD-L1 (e.g. avelumab), anti-PD-1 (e.g. pembrolizumab, nivolumab), and anti-CTLA-4 (e.g. ipilimumab) have been tested clinically and have shown significant benefits for many cancers. However, only a 9.7-15% objective response rate has been observed in HGSC (NCT01772004, NCT02054806) [6,12-15]. This low response rate has largely been attributed to the highly immunosuppressive nature of the tumour microenvironment (TME) and the low level of T lymphocyte infiltration found in HGSC [16,17].
The presence of functional cytotoxic T lymphocytes in tumours dictates tumour responsiveness to immunotherapies [18,19], and tumour-infiltrating lymphocytes (TILs) are additionally a known independent predictor of improved clinical outcomes for ovarian cancer [9,20]. However, approximately 70% of ovarian tumours lack a sufficient number of functional TILs to combat tumour growth due to the immune suppressive nature of the TME [6,7,9,20,21]. In ovarian cancer, many immune suppression mechanisms have been identified: CD8+ T cell suppression by regulatory T cells (Tregs), IL-10 and IL-6 mediated upregulation of the inhibitory receptor PD-1 on tumour infiltrating CD8+ T cells, as well as the presence of immunosuppressive myeloid-derived suppressor cells (MDSCs), tumour-associated macrophages (TAMs), and cancer-associated fibroblasts [6,22-24]. These result in a decrease in CD8+ T cell infiltration and function in ovarian tumours that inhibits the efficacy of immunotherapies. Therefore, a lack of functional TILs is a critical deficit in the required cancer immunity cycle [25] and thus remains a prime target for novel immunotherapies to complement ICB therapies. Strategies being investigated to overcome barriers to CD8+ T cell infiltration include targeting molecules expressed by tumour cells, such as CDK4/6, CXCL13, and PD-L2 [26-28], where encouraging results have been observed and further trials have been recommended. While promising, the targeting of individual molecules potentially allows evasion of treatment via pathway redundancy, where other immunosuppressive pathways in the TME can compensate for the functions of targeted molecules [11,29]. A novel treatment to overcome system redundancy is the use of microRNAs (miRNAs), as these endogenous molecules can simultaneously target multiple genes to regulate multiple targets [30].
miRNAs are non-coding, single-stranded RNAs that are 21-23 nucleotides in length [31,32]. They are partially complementary to the 3'-end of the untranslated region (UTR) of mRNA and can recruit the RNA-induced silencing complex (RISC) to induce translation suppression, degradation, decapping, or deadenylation of target mRNA [31,32]. The use of miRNA for cancer therapy is an emerging field, with several drugs including targomiRs, MRG-201, MRG-106, and RG012 currently in clinical testing [33]. Immune processes such as the differentiation and activation of tumour-associated immune cells including macrophages (e.g. miR-19a-3p), natural killer (NK) cells (e.g. miR-181), and T cells (e.g. miR-29a-30 and miR-21-5p) are dependent on the expression of certain miRNAs [34-37]. Within tumour cells, miRNAs can regulate antigen processing and presentation by targeting one or more components of the antigen processing machinery and MHC-I molecules [38,39]. Previous research has shown the ability of miR-326 and miR-340 to enhance T cell infiltration in lung adenocarcinoma and large B cell lymphoma, respectively [40,41]. Specifically in ovarian cancer cells, miR-20a and miR-92 have been shown to impact MICA/B and PD-L1 expression, respectively, affecting NK and T cell activity [42,43]. Furthermore, miR-199a negatively regulates IKKβ mRNA in epithelial ovarian cancer cells, which is needed to induce the NF-κB pathway to secrete pro-inflammatory and pro-tumour cytokines [35]. Although these findings aid understanding of how tumoural miRNA expression is implicated in carcinogenesis, there is currently a lack of comprehensive systematic approaches that directly identify miRNAs important for T cell infiltration and anti-tumour immunity in ovarian tumours. In this study, we performed a systematic analysis integrating patient data and in vitro and in vivo experiments to identify miRNAs important for anti-tumour immunity in HGSC. Let-7i was identified to be an important mediator of this process, and its impact on the tumoural immune network is examined in this study.
The cancer genome atlas (TCGA) analysis
LinkedOmics was used to identify miRNAs important for regulating anti-tumour immunity in HGSC [44]. The platform was developed using the TCGA dataset and contains tumour genomic information from 602 ovarian serous adenocarcinoma patients. First, the symbols of all miRNAs in the human genome were collected. This was done by developing a Python script to parse all gene symbols beginning with 'MIR' from the NCBI Gene Info file (https://www.ncbi.nlm.nih.gov/gene/); 2002 symbols were found. All 2002 miRNAs were then entered into LinkedOmics, each one being subjected to the following search options: Cancer Cohort: TCGA_OV; Search Dataset: miRNA Seq; Target Dataset: RNAseq; Statistical Method: Pearson Correlation. Of the 2002 genes entered into LinkedOmics, 521 were found to have miRNA sequencing data in the ovarian TCGA database. The LinkedInterpreter module of LinkedOmics was subsequently used to perform the enrichment analysis with the following parameters: Tool: Gene Set Enrichment Analysis (GSEA); Rank Criteria: FDR; Minimum Number of Genes: 3; Simulations: 500.
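The symbol-collection step described above can be reproduced with a short script. A minimal sketch, assuming a locally downloaded, tab-delimited copy of the NCBI Gene Info file (the file name and the column position of the 'Symbol' field follow the standard gene_info layout and are assumptions, not details given in the text):

```python
import csv

# Minimal sketch: collect human gene symbols beginning with 'MIR' from a
# tab-delimited NCBI gene_info file (column 3, 'Symbol', in the standard layout).
mir_symbols = set()
with open("Homo_sapiens.gene_info", newline="") as handle:   # assumed local file
    reader = csv.reader(handle, delimiter="\t")
    for row in reader:
        if row and row[0].startswith("#"):                   # skip the header line
            continue
        symbol = row[2]                                      # 'Symbol' column
        if symbol.upper().startswith("MIR"):
            mir_symbols.add(symbol)

print(f"Found {len(mir_symbols)} candidate miRNA symbols")   # authors report 2002
```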
miRNAs were then prioritised based on immune-related GO terms. These terms were determined by gathering the list of all GO terms and their descriptions and searching for immunology-related words of interest using Python. There were 3877 immunology-related terms out of all 51,281 GO terms. The filtered data were output to a csv file, and each significant immune-related GO term was counted for each miRNA and tabulated using Pandas. The data were plotted using the Seaborn and Matplotlib bar, heatmap, and custom scatter (bubble) plot modules. Code for the plots is available at this link: https://github.com/secretx51/Let7i-Data-Figure-Generation.
For the correlation with anti-cancer immune signatures, the enrichment score of 68 immune signatures previously reported [45] was calculated by single-sample gene set enrichment analysis (ssGSEA). Clinically annotated data from TCGA obtained from the Open-Access and Controlled-Access tiers of the TCGA Data Portal (http://tcga-data.nci.nih.gov/tcga/findArchives.htm) were used with NIH approval. A total of 347 HGSC patients were included in the analysis. miRNA expression data were obtained from Agilent miRNA microarrays and Illumina miRNA-Seq datasets. For the miRNA-Seq data, we derived the 'reads_per_million_miRNA_mapped' values for the mature forms of the miRNA examined from the 'isoform_quantification' files. The correlation analyses were carried out in Python (version 3.8.0) (http://www.python.org/).
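A minimal sketch of the correlation step, assuming a per-patient vector of Let-7i expression and a matrix of ssGSEA enrichment scores for the 68 immune signatures; the random placeholder arrays stand in for the real TCGA data:

```python
import numpy as np
from scipy.stats import spearmanr

# Minimal sketch: correlate tumoural miRNA expression with immune signature
# enrichment scores across patients, as in the TCGA analysis described above.
rng = np.random.default_rng(0)
n_patients, n_signatures = 347, 68

let7i = rng.normal(size=n_patients)                   # placeholder expression values
scores = rng.normal(size=(n_patients, n_signatures))  # placeholder ssGSEA scores

results = []
for j in range(n_signatures):
    rho, p = spearmanr(let7i, scores[:, j])
    results.append((j, rho, p))

# Signatures most positively correlated with Let-7i expression.
for j, rho, p in sorted(results, key=lambda t: -t[1])[:5]:
    print(f"signature {j}: rho={rho:+.2f}, p={p:.3g}")
```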
Cell culture
ID8 murine HGSC cells were generated and kindly provided by Prof Roby from the University of Kansas, and ID8-ip1-Luc cells were generated by isolating tumour cells after ID8 tumour engraftment in a female C57BL/6 mouse, followed by luciferase labelling. Cells were grown in high-glucose Dulbecco's Modified Eagle's Medium (DMEM, Sigma-Aldrich) supplemented with 7% foetal bovine serum (FBS, Sigma-Aldrich), insulin-transferrin-selenium (1X ITS, Lonza), and 1% penicillin-streptomycin (Sigma-Aldrich). Cells were authenticated to ensure no cross contamination with other cell lines, and all cells tested negative for Mycoplasma contamination.
Nanoparticle preparation
miRNA-containing liposomal formulations were prepared as previously described [46,47], using the hydration of freeze-dried matrix method. Dioleoyl trimethylammonium propane (DOTAP, 18:1), cholesterol, and polyethylene glycol (PEG)2000-C16 Ceramide were purchased from Sigma. For all formulations, a nitrogen/phosphate (N/P) ratio of 4:1 was used, and formulations were designed to reach a concentration of 20 µg of miRNA per 200 µL volume once hydrated. miRNA was diluted in sucrose solution (0.925 mg of sucrose used per 1 µg of miRNA) and mixed with an equal volume of cholesterol and PEG2000-C16 Ceramide dissolved in tert-butanol. This formulation was snap-frozen and then freeze-dried overnight (BenchTop Pro, Omnitronics) at a condensing temperature of −80 °C and a pressure of less than 0.1 mbar. The lyophilised product was dissolved in nuclease-free water with gentle shaking and sonication prior to injection. When hydrated at a concentration of 20 µg miRNA/200 µL, the sucrose in solution makes the formulation isotonic and ready for in vivo use.
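The formulation arithmetic above lends itself to a small helper. The sketch below is a hypothetical convenience function, with only the per-µg sucrose ratio and the 20 µg/200 µL dose taken from the text:

```python
# Minimal sketch: amounts needed for hydrated liposomal miRNA doses.
SUCROSE_MG_PER_UG_MIRNA = 0.925   # 0.925 mg sucrose per 1 ug miRNA (from the text)
DOSE_UG = 20.0                    # 20 ug miRNA per dose
DOSE_VOLUME_UL = 200.0            # hydrated to 200 uL per dose

def formulation_amounts(n_doses: int) -> dict:
    """Return miRNA, sucrose, and water amounts for n_doses (illustrative helper)."""
    mirna_ug = n_doses * DOSE_UG
    return {
        "miRNA_ug": mirna_ug,
        "sucrose_mg": mirna_ug * SUCROSE_MG_PER_UG_MIRNA,
        "water_uL": n_doses * DOSE_VOLUME_UL,
    }

print(formulation_amounts(n_doses=6))
# {'miRNA_ug': 120.0, 'sucrose_mg': 111.0, 'water_uL': 1200.0}
```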
Mice
All mouse experiments were approved by the University of Queensland (UQ) Animal Ethics Committee. Female C57BL/6J mice (6-8 weeks old) were purchased from ARC and housed in the UQ Centre of Advanced Imaging animal facility. Luciferase-labelled ID8-ip1 cells, ID8-ip1-miR-Ctrl, or ID8-ip1-Let-7i (1.5 × 10⁶ cells/mouse) were implanted into mice via intraperitoneal (i.p.) injection. Tumour growth and establishment were monitored every week via i.p. injection of luciferin, with bioluminescence imaging performed 7-9 min post-injection. Luciferin bioluminescence images were acquired using the IVIS Lumina X5 imaging system and analysed using in vivo imaging software. For the nanoparticle study, mice received PEGylated DOTAP NPs containing either negative control miRNA (miR-Ctrl) or Let-7i intravenously, given twice weekly (20 µg/dose) [46] starting from day 6 post tumour inoculation. Mice received 3 weeks of treatment. For all mouse experiments, ascites fluid was collected from mice at the experimental endpoint. Tumours in the omentum and other organ sites, alongside inguinal and mesenteric lymph nodes (LNs), were dissected from mice in a double-blinded manner. Tissues were kept in FACS buffer (2% FBS and 5 mM EDTA in PBS) for flow cytometry analysis or snap frozen in liquid nitrogen for RNA analysis.
Flow cytometry
Omental tumour and LN tissues were mashed through a 70 µm cell strainer to acquire a single cell suspension. Cells were centrifuged at 500 rcf for 5 min at 4 °C, washed with FACS buffer twice, then resuspended in 85 µL FACS buffer. Cells were then incubated with anti-mouse CD16/CD32 monoclonal antibody (1:200, BD Biosciences, Cat# 553142) for 15 min at 4 °C. Antibodies or the respective isotype controls listed in Supplementary Table 1 were diluted in FACS buffer and used to surface stain cells for 20 min at 4 °C. Precision Count Beads (Biolegend) were additionally added to allow quantification of the total number of immune cells in each sample. A BD Fortessa X-20 flow cytometer and FlowJo software were used to analyse samples. Immune cell populations were defined as listed in Supplementary Table 2.
Cell growth assessment in vitro
Growth of ID8-ip1-Luc-Let-7i and ID8-ip1-Luc-miR-Ctrl cells was monitored by seeding 5000 cells/well in a 6-well plate and counting the total number of cells daily for 5 consecutive days.
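From such daily counts, a doubling time can be estimated with a log-linear fit, as in the minimal sketch below (the counts shown are invented; this is not the authors' analysis script).

```python
# Doubling-time estimate from daily counts via log-linear least squares.
import numpy as np

days = np.array([1, 2, 3, 4, 5], dtype=float)
counts = np.array([6e3, 11e3, 22e3, 40e3, 85e3])  # hypothetical counts/well

slope, intercept = np.polyfit(days, np.log2(counts), 1)  # slope = doublings/day
doubling_time_h = 24.0 / slope
print(f"Estimated doubling time: {doubling_time_h:.1f} h")
```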
RNA extraction, cDNA synthesis, and miRNA quantitation
RNA was extracted from cells at 80-90% confluency using TRIzol reagent (Life Technologies) according to the manufacturer's protocol. A NanoDrop One (Thermo Fisher) was used to assess RNA quality and quantify concentration. For detection of Let-7i in the in vitro cell lines, 100 ng of RNA was reverse-transcribed using the TaqMan MicroRNA Reverse Transcription Kit (Thermo Fisher, Cat# 4366596) and quantified using the TaqMan MicroRNA Assay Kit (Thermo Fisher, Cat# 4427975) according to the manufacturer's protocol. The 2^−ΔΔCt method was used to calculate the relative quantity of Let-7i in each sample, with the expression of SNO135 used to normalise the data. For absolute quantitation of microRNAs, 4 ng of RNA was reverse-transcribed using the TaqMan MicroRNA Reverse Transcription Kit, followed by quantitation using the QIAcuity digital PCR system (Qiagen) according to the manufacturer's protocol. Results were analysed using the QIAcuity software suite.
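The 2^−ΔΔCt calculation is simple enough to write out explicitly. The sketch below is a generic implementation with hypothetical Ct values, using a reference gene (SNO135 in this study) and a control condition for normalisation.

```python
# The 2^(-ddCt) relative-quantitation calculation, written out explicitly.

def relative_quantity(ct_target_sample: float, ct_ref_sample: float,
                      ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Fold change of target vs control, normalised to a reference gene."""
    d_ct_sample = ct_target_sample - ct_ref_sample   # dCt, sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl         # dCt, control
    dd_ct = d_ct_sample - d_ct_ctrl                  # ddCt
    return 2 ** (-dd_ct)

# e.g. Let-7i Ct vs reference Ct in transduced vs control cells (made-up values):
print(relative_quantity(22.1, 18.0, 25.0, 18.1))  # ~6.9-fold
```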
Statistical analysis
Correlation analyses were performed using the Pearson correlation test in the LinkedOmics platform. Spearman's correlation was used to assess miRNA/gene signature correlation using the TCGA dataset. In vitro and in vivo experimental data were analysed with GraphPad Prism version 8. Unpaired two-tailed Student's t-tests were used for statistical analysis of in vitro and in vivo experiments. Two-way ANOVA with Sidak's multiple comparisons test was used when assessing the impact of treatment on different immune cell populations. Statistical significance was defined as p < 0.05. Standard error of the mean is shown in all figures.
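For readers who prefer scriptable equivalents of the GraphPad analyses, the sketch below reproduces the two named tests with SciPy and statsmodels on invented data; Sidak's post-hoc comparisons are not shown.

```python
# Minimal sketches of the statistical tests named above (hypothetical data).
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
import statsmodels.api as sm

# Unpaired two-tailed Student's t-test (e.g. tumour weight, two groups):
ctrl = [1.10, 0.95, 1.30, 1.05]
let7i = [0.60, 0.72, 0.55, 0.80]
t, p = stats.ttest_ind(ctrl, let7i)  # two-sided by default
print(f"t = {t:.2f}, p = {p:.4f}")

# Two-way ANOVA (treatment x immune cell population):
df = pd.DataFrame({
    "count": [120, 140, 90, 60, 300, 280, 420, 460],
    "treatment": ["ctrl", "ctrl", "let7i", "let7i"] * 2,
    "population": ["neutrophil"] * 4 + ["cd8"] * 4,
})
model = smf.ols("count ~ C(treatment) * C(population)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```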
Let-7i positively correlated with immune activity in human ovarian tumours
To determine which miRNAs correlate highly and positively with anti-tumour immunity in HGSC, enrichment analysis based on Gene Ontology (GO) was first performed on the TCGA database using the LinkedOmics platform [44]. The LinkedOmics platform was chosen specifically for its efficacy in determining the downstream pathways of miRNAs using human samples. The LinkInterpreter module of LinkedOmics translates identified associations into biological insight through pathway and network analysis. The module was set to search the ovarian cancer miRNAseq dataset, which uses bulk RNA-seq data. This was done for all 521 miRNAs in the ovarian miRNAseq TCGA dataset. The data were then filtered to include only GO terms relevant to immune or cytotoxic T cell functions, which amounted to 3877 of the total 51,821 GO terms. The miRNAs were then ranked based on the number of significant immunological pathways identified (p < 0.05).
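The ranking procedure reduces to a filter-count-sort over the per-miRNA enrichment results. The sketch below outlines it in pandas; the file names and column layout are hypothetical, since LinkedOmics provides the actual enrichment output.

```python
# Schematic version of the miRNA ranking described above (file names and
# column names are hypothetical; LinkedOmics supplies the real tables).
import pandas as pd

enrich = pd.read_csv("linkedomics_go_enrichment.csv")  # columns: miRNA, go_term, p_value
immune_terms = set(pd.read_csv("immune_go_terms.csv")["go_term"])  # the 3877 terms

ranked = (
    enrich[enrich["go_term"].isin(immune_terms)]   # keep immune-related GO terms
    .query("p_value < 0.05")                       # keep significant pathways
    .groupby("miRNA")["go_term"].nunique()         # count pathways per miRNA
    .sort_values(ascending=False)                  # rank miRNAs
)
print(ranked.head())  # Let-7i ranked first in the paper's analysis
```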
From this analysis, Let-7i was found to impact the greatest number of immunological pathways of all tested miRNAs in the ovarian miRNAseq dataset (Supplementary Fig. S1). Out of the 7 categories examined (Fig. 1), Let-7i had a more significant impact than other miRNAs on two categories: immune response regulation and inflammatory response. These two categories included GO pathways related to lymphocyte regulation and antigen processing/presentation, which are highly relevant for the generation of an effective anti-tumour immune response. GO terms included in all categories are listed in Supplementary Table 3. When looking at the individual GO immunological pathways that Let-7i acts on, it shows the greatest significance by log2 p value of all miRNAs for the leukocyte and myeloid dendritic cell activation pathways (highlighted in red, Supplementary Fig. S2). This evidence points towards a potential role for Let-7i in regulating immune responses in HGSC. Indeed, in the TCGA ovarian cancer dataset, correlation analysis between tumoural Let-7i expression and 68 previously reported anti-cancer immune signatures [45] revealed that Let-7i had an overall positive correlation with these gene signatures important for anti-cancer immunity (Fig. 2).
Increased Let-7i expression significantly reduces ovarian tumour growth in vivo
Given that the impact of Let-7i on ovarian tumour biology has been largely unexplored to date, we first investigated whether tumoural Let-7i expression can reduce ovarian tumour progression in vivo. C57BL/6 J mice were inoculated with transduced murine ID8-ip1 cells with constitutive forced expression of Let-7i or miR-Ctrl (control). ID8-ip1 was chosen as the model murine HGSC line for this study as it is widely used to study ovarian tumour biology in immune-competent mouse models. Validation of Let-7i expression in transduced ID8-ip1 cells prior to transplantation showed a 6.9-fold increase in Let-7i in ID8-ip1-Luc-Let-7i cells compared to ID8-ip1-Luc-miR-Ctrl cells (p < 0.0001, Supplementary Fig. S3). Tumour growth was monitored via luminescence imaging of luciferase-tagged ID8-ip1-Luc-miR-Ctrl and ID8-ip1-Luc-Let-7i tumours (Supplementary Fig. S4). A significant decrease in tumour signal was seen in ID8-ip1-Luc-Let-7i-bearing mice at weeks 2, 3, and 4 post tumour inoculation (Fig. 3A). Consistently, at endpoint, mice bearing ID8-ip1-Luc-Let-7i tumours had a 94.4% reduction in total tumour weight compared to mice bearing ID8-ip1-Luc-miR-Ctrl tumours (p < 0.0001, Fig. 3B). This effect was mirrored in ascites volume, which was significantly reduced in ID8-ip1-Luc-Let-7i tumour-bearing mice (p < 0.0001, Fig. 3C). Altogether, these data indicate that an increase in tumoural Let-7i expression drives an anti-tumour effect.
To assess whether this anti-tumour effect could be caused by Let-7i intrinsically impacting cell growth pathways within the tumour cells, we monitored the growth of the ID8-ip1-Luc-Let-7i and ID8-ip1-Luc-miR-Ctrl cells in vitro. No significant difference in growth rate was observed between these two transduced cell lines (Supplementary Fig. S5). This suggests that Let-7i does not have a major impact on the intrinsic growth mechanisms of these cancer cells, but instead acts on the in vivo tumour microenvironment to reduce tumour growth. We hypothesise that the anti-tumour effect observed following Let-7i treatment is likely due to its regulation of the tumoural immune network, given the strong association of Let-7i with immune pathways in human ovarian tumours (Figs. 1 and 2).
Let-7i enhances T cell tumour infiltrates and activity of antigen presenting cells
We next investigated the impact of Let-7i on immune cell networks within tumours. Given that the ID8-ip1-Luc-Let-7i model produced tumours of extremely low weight, we conducted an additional experiment designed to procure tumours of adequate size for such analysis and to more closely resemble how Let-7i could be therapeutically delivered to tumours in human patients. For this experiment, we inoculated C57BL/6 J mice with ID8-ip1 cells and, after allowing tumours to establish for 1 week, intravenously injected the mice with nanoparticles (NPs) containing Let-7i or a non-targeting negative control miRNA (miR-Ctrl) (Fig. 4A). These NPs take advantage of the enhanced permeability and retention effect in solid tumours to passively target tumours [46]. The experiment was terminated at 28 days post tumour inoculation, as this was when luminescence imaging suggested that Let-7i NP treatment had begun to affect tumour growth rate compared to the control group, and sizeable tumours were needed for the immunological analyses (Fig. 4B, Supplementary Fig. S6). Consistent with the luminescence imaging results, average tumour weight in the Let-7i NP treatment group was slightly reduced compared to miR-Ctrl-treated tumours but still of sufficient size for detailed immune assessment (Fig. 4C). Ascites volume was notably reduced in mice treated with Let-7i NPs compared to control (Fig. 4D). Let-7i expression was indeed higher in the Let-7i NP treatment group, indicating successful delivery of Let-7i mimics by the nanoparticles (Supplementary Fig. S7A). While this level of Let-7i increase is much lower than that observed in the transduced cell lines (Supplementary Fig. S3), absolute quantification by digital PCR indicates that it corresponds to an average of 5,487,118 copies of Let-7i delivered to each tumour (Supplementary Fig. S7B). Full immune profiling was performed on tumours obtained from the omentum, a major ovarian cancer metastatic site in human patients and in the ID8-ip1 mouse model, immediately after dissection. For lymphoid populations, there was a trend towards an increased number of tumour-infiltrating lymphocytes (CD3+ T, CD4+ T, CD8+ T, and B cells) per gram of tumour (Fig. 5A, Supplementary Fig. S8A), although the CD8+ T cells had a comparable frequency of memory cells in both treatment groups (Supplementary Fig. S9). These trends were also observed when considering the percentage of these cells out of all CD45+ cells (Supplementary Fig. S10A). Minimal changes were seen in the numbers of NK and NKT cells in tumours following Let-7i treatment (Fig. 5A, Supplementary Fig. S10A).

Fig. 4 Impact of Let-7i NP treatment on tumour burden in the ID8-ip1-Luc tumour model. A C57BL/6 J mice were i.p. injected with 1.5 × 10⁶ luciferase-labelled ID8-ip1-Luc cells and treated with i.v. injection of NPs containing either non-targeting miRNA control (miR-Ctrl) or Let-7i, at 20 µg/dose. NP treatment started at 6 days post tumour inoculation and doses were given twice per week for 3 weeks. Ascites was drained and measured, and tumours in the omentum (primary site of tumour growth), as well as tumours growing in other parts of the peritoneal cavity, were dissected. B Tumour growth was monitored throughout the experiment by bioluminescence imaging of luciferase signal. Total radiance (photons/sec) was quantified using the IVIS imaging system. C Total tumour weight of mice at experiment endpoint. D Volume of ascites in mice at endpoint. All bars and error bars represent mean ± SEM (*, p < 0.05; miR-Ctrl group, n = 6; Let-7i group, n = 8). Tumours did not develop in four and two mice in the miR-Ctrl and Let-7i treatment groups, respectively. Statistical analyses were performed by unpaired Student's t test
For myeloid populations, a strong trend towards a decrease in the number of neutrophils per gram of tumour was observed in Let-7i NP-treated mice (Fig. 5B, Supplementary Fig. S8B), with this trend also seen in neutrophil proportions out of all CD45+ leukocytes (Supplementary Fig. S10B). As tumour-associated neutrophils are considered part of the MDSC population and often have immunosuppressive qualities [48], this indicates that Let-7i delivery to tumours could reverse immunosuppression. Minimal changes in macrophage numbers were observed. For DCs, a trend towards an increase in cell numbers per gram of tumour was seen in mice that received Let-7i NP treatment (Fig. 5B). This observation was, however, less pronounced when considering their percentage out of CD45+ cells within tumours (Supplementary Fig. S10B).
As we had observed trends towards increased APC numbers (e.g. B cells, DCs) in tumours after Let-7i NP delivery, we assessed whether the activity of these APCs was also impacted by Let-7i treatment. Within the draining lymph nodes, the usual site of T cell co-stimulation by APCs, some minor changes in the numbers of T cells, DCs, and monocytes were observed with Let-7i NP treatment (Fig. 6A-B, Supplementary Fig. S11); most notably, however, a significant change in CD86 expression on DCs was observed with Let-7i treatment (Fig. 6C). Overall, the mean fluorescence intensity (MFI) of CD86 was increased by 3.33-, 3.03-, and 2.47-fold for B cells, DCs, and monocytes, respectively, in the Let-7i treatment group compared to the control group. CD86 is a co-stimulatory marker on APCs that is upregulated upon activation and in turn activates T cells through co-stimulation of CD28 [49, 50]; its upregulation therefore indicates an enhanced ability to induce T cell immunity. Within tumours, there was also a trend towards an increase in CD86 MFI on APCs (Supplementary Fig. S12). These results suggest an improved ability of these APCs to activate T cells, a crucial step in initiating cancer immunity for an anti-tumour response. Altogether, these in vivo immune data support the major pathways identified as closely associated with Let-7i expression in human HGSC tumours (Fig. 1, Supplementary Fig. S2).
Discussion
Despite the promise of immunotherapies such as immune checkpoint inhibitors (ICIs) for the treatment of many cancer types, ovarian cancer lacks a significant response rate to ICIs due to its highly immunosuppressive tumour microenvironment [6-9]. miRNAs represent a promising strategy to overcome immunosuppression within the tumour and thus improve treatment outcomes for ovarian cancer patients. Through multiple lines of evidence, including analyses of 602 human HGSC tumours using the LinkedOmics platform, we have identified a critical role for Let-7i in modulating the tumoural immune network. We found that enhancing Let-7i expression in ovarian tumours significantly decreases tumour burden, increases the activity of APCs in lymph nodes, and increases T cell presence within tumours. This study highlights the potential of utilising Let-7i to enhance anti-tumour immunity in HGSC tumours and represents the first study to systematically characterise the therapeutic and immunological effects of Let-7i in tumours. The Let-7 family has previously been shown to comprise important regulators of the immune response in various pathologies, including cancer [51, 52]. However, Let-7i specifically has been little investigated. The Let-7 family appears to exert both pro- and anti-tumour effects within cancer cells, highlighting a potential benefit to dissecting and individually targeting its important members. Members of the Let-7 family can inhibit Fas expression to desensitise cells to Fas-related apoptosis, while also inhibiting immune evasion in head and neck squamous cell carcinoma (HNSCC) via increased degradation of PD-L1 [53-56]. For Let-7i specifically, previous studies have found that it reduces cancer cell proliferation and migration by downregulating ERK3 expression in head and neck cancer and HGMCA1 expression in bladder cancer [57, 58]. In ovarian cancer specifically, Let-7i upregulation decreased stemness and self-renewal, reduced anchorage-independent growth, decreased functional phenotypes associated with metastasis, and increased sensitivity to PARPi and platinum-based therapies [59, 60]. To our knowledge, we are the first to assess the in vivo effects of therapeutic Let-7i delivery in any tumour model, and we found that Let-7i has a significant therapeutic effect on ovarian tumours. Furthermore, our data indicate that Let-7i acts specifically on tumoural immune networks, rather than intrinsically on cancer cell growth mechanisms, providing further evidence for the mechanism of action of Let-7i in ovarian cancer treatment.
Our data are consistent with other studies that have shown a role for the Let-7 family in regulating adaptive immune responses [52]. In activated CD8 T cells, reduced Let-7g enhances clonal expansion and effector function [52]. A recent study demonstrated that the Let-7 family can promote memory and antagonise terminal differentiation in CD8 T cells [61]. Other studies have observed that Let-7 family expression affects the differentiation of effector CD8 T cells, with high Let-7 expression needed to maintain a naïve phenotype [35, 62, 63]. We did not observe any negative impact on CD8 T cells in our study using Let-7i, again highlighting the potential advantage of targeting specific members of the Let-7 family. While Let-7i NP treatment is not expected to directly influence lymphocytes, as it is well established that CD8 T cells do not take up NPs well in vivo [64], it is possible that other types of immune cells take up these NPs and mediate the immune effects observed in this study. When Let-7i was introduced only into tumour cells, a significant inhibition of tumour growth was observed in vivo (Fig. 3) but not in vitro in the absence of any other immune cells (Supplementary Fig. S5). Together, these data suggest that Let-7i expression in tumour cells has a significant indirect effect on the tumour microenvironment that enhances anti-tumour immunity, although whether the immune effects observed following Let-7i-loaded NP treatment are driven primarily by its impact on tumour cells remains to be further investigated. Nevertheless, the observed trend towards increased T cells in tumours following Let-7i treatment highlights the potential of Let-7i as an immunotherapy, as the presence of TILs is significantly associated with improved outcomes and longer overall survival [65-67].
Interestingly, we found that introducing Let-7i to tumours resulted in enhanced CD86 expression on APCs in the draining LNs; CD86 is an important costimulatory molecule that activates and differentiates T cells through interaction with CD28 and is associated with improved APC function [49, 50]. These data further support the correlation between Let-7i in HGSC and immunological signatures for activation and differentiation (Fig. 1, Supplementary Fig. S2). However, due to the broad impact of Let-7i on the immune system, the exact mechanism remains unclear, including whether the Let-7i-loaded NPs are taken up by the tumour cells, the APCs, or both to induce anti-tumour immunity. As the NPs used in this study accumulate at the tumour site through a passive targeting approach that takes advantage of the enhanced permeability phenomenon in tumours [46], our data suggest that Let-7i likely requires a close interaction between the tumour microenvironment and key APCs to induce this anti-tumour immunity. Future studies may focus on further exploring the mechanism of Let-7i-induced immunity by dissecting the roles of the individual cell types in Let-7i NP uptake and function.
Poor APC function is a major hurdle in overcoming immune suppression, as mature dendritic cells are essential for activating T cells, presenting tumour neoantigens, and tumour clearance. Supporting this role, studies have found that an increase in APC maturation in the LN causes significant killing of target cancer cells [68, 69]. The effect of Let-7i on APC phenotype is complementary to the previously reported impact of other Let-7 family members (Let-7a, Let-7b) in HNSCC tumours, where their expression resulted in decreased PD-L1 expression in cancer cells [56]; this would overall contribute to enhanced anti-tumour T cell effects. Importantly, compared to all other Let-7 family members, Let-7i had the strongest correlation with tumoural immune pathways in the > 600 human tumours examined in this study, highlighting its utility in enhancing anti-tumour immunity in HGSC. This complements the previously reported role of Let-7i in sensitising cancer cells to PARPi and platinum-based therapies in HGSC [59, 60]. Future work should focus on validating the impact of Let-7i on tumour immunity in other murine models of ovarian cancer, as well as its ability to generate a tumour antigen-specific immune response. Further improvement of the nanoparticle system used to deliver Let-7i to tumours is also needed to potentiate its impact on anti-tumour immunity. The use of Let-7i alongside other immune therapies or other microRNAs should also be investigated. For instance, based on the cellular mechanism of Let-7i described here, Let-7i could combine well with miR-155, which has been shown to regulate MHC-II and costimulatory markers on DCs in lymph nodes [70]. The ability of both Let-7i and miR-155 to positively modulate APC function in LNs could produce a synergistic effect when used in combination to promote T cell priming and tumour clearance. Finally, an increase in functional T cells in the tumour microenvironment is a critical determinant of anti-cancer immunity and a requirement for an effective response to immune checkpoint blockade. Collectively, our findings highlight the impact of Let-7i on tumoural immune networks and its potential use for inducing an anti-tumour effect in HGSC.
Fig. 1 Correlation of miRNA expression with HGSC immunological pathways. miRNAs are ranked according to the number of impacted immunological pathways (p < 0.05), depicted across categories

Fig. 2 Correlation between tumoural Let-7i expression and anti-cancer immune signatures in the TCGA ovarian cancer dataset. Spearman's correlation was used to assess miRNA/gene signature correlation using the TCGA dataset (n = 347)

Fig. 5 Impact of Let-7i NP treatment on immune cell populations in the ID8-ip1-Luc tumour model. Omental tumours from ID8-ip1 tumour-bearing mice treated with Let-7i or miR-Ctrl NPs were profiled by flow cytometry to examine the frequencies of A lymphoid and B myeloid immune cells

Fig. 6 Impact of Let-7i NP treatment on immune cell populations in lymph nodes of ID8-ip1-Luc tumour-bearing mice. Inguinal and mesenteric lymph nodes (LNs) from ID8-ip1-Luc tumour-bearing mice treated with Let-7i or miR-Ctrl NPs were profiled by flow cytometry to examine the percentage of A lymphoid and B myeloid immune cells out of all CD45+ leukocytes within LNs at experiment endpoint. C CD86 mean fluorescence intensity (MFI), a co-stimulatory receptor
Chariotry and Prone Burials: Reassessing Late Shang China’s Relationship with Its Northern Neighbours
In place of the traditional view that raids and invasion from the north introduced new weapons and chariots to the Shang (c. 1200 BC), we argue that archaeological evidence illustrates the presence of several regional groups at or near the late Shang centre, Anyang. Here we review burial practices at Anyang dating to the late second millennium BC, and describe a substantial group of prone burials that reflect a ritual practice contrasting with that of the predominant Shang elite. Such burials occur at all social levels, from victims of sacrifice to death attendants, and include members of lower and higher elites. Particularly conspicuous are chariot drivers in some chariot pits. An elite-level link with chariots is confirmed by the burial of a military leader in tomb M54 at Huayuanzhuang at Anyang, with tools that match exactly those of chariot drivers. Given that prone burial is known to the north, in the Mongolian region that provided chariots and horses to the Shang, a route can be traced eastwards and southwards, down the Yellow River, and then through mountain basins to Anyang. Our inference is that a group originally from outside the Central Plains can be identified in these distinctive burials. This marks a first step towards understanding the heterogeneity in the central population of the late Shang.
Introduction
China's first two dynasties, the Shang (c. 1500-1046 BC) and the Zhou (1046-256 BC), are central to all accounts of early China. They mark the beginning of Chinese history in the proper sense as, from c. 1200 BC, they have left us contemporary inscriptions. Both dynasties occupied the principal agricultural region known as the Central Plains (Fig. 1), with a network of connections that extended their contacts north in a search for horses, and south for metals. Both dynasties are renowned for extraordinary cast bronze vessels used for offering food and alcohol to their ancestors (Bagley 1999). Metallurgy, in the form of knives, tools, and personal ornaments, had first been introduced to the high mountains and plateaux north of the Central Plains, termed here 'the Arc', during the early second millennium BC (Rawson 2017). Prior to the Shang, bronze was employed in a completely innovative way, from about 1600 BC at Erlitou, to cast the first vessels. Enormous urban sites at Zhengzhou (1500-1300 BC) and Anyang (1200-1046 BC), as well as other centres, have revealed large vessel sets within tombs and hoards (Campbell 2018, pp. 51-99).
As these vessels were employed in sets for ceremonial banquets for the ancestors, they can be taken to indicate the regions that were part of the cultural domains of the Shang and the Zhou. Other regions, in which some vessels have been found but were not used in sets, lay outside the political and cultural reach of these dynasties. We are primarily concerned here with the late Shang kings, who had their capital at Anyang. They viewed their ancestors in a generational sequence, and arranged their banquets accordingly; they also engaged with these ancestors through other ritual activities, particularly divinations (Keightley 1999). Records of these divinations were carved into the ox scapulae and turtle plastrons employed and stored in archives. Among the topics recorded was the need for the ritual killing of animals and human victims, who were often identified by the name Qiang 羌. These Qiang are often understood as having been human sacrifices (a term which will be examined below). Ritual deposits in pits are major features of the Shang royal cemetery at Xibeigang at Anyang, where the eastern tomb group in particular was surrounded by hundreds, even thousands, of sacrifices of animals and humans (Fig. 2).

Fig. 1 Map of the eastern Steppe, the Arc (with its three zones) and the Central Plains, with major sites mentioned in the article
The late Shang dynasty was renowned for its engagement in war. Like sacrifice, war is extensively mentioned in the inscriptions on the oracle bones. Because the Shang saw themselves at the centre of their world, their enemies lay around them in various regions named fang 方 ('direction' or 'land'). In this period, the chariot, a war machine, was introduced from the north, as were northern weapon types. We examine here the connections between this dramatic shift in warfare and the occurrence of burials where the individual was interred in a prone position, a practice that became particularly widespread in this period. Among the people buried in this position were chariot drivers, many of the human victims, and some members of the elite.
As the oracle bone inscriptions make clear, at Anyang the Shang employed specialists and followed rituals as essential aspects of their rulership (Keightley 2012, pp. 232-235). We can be sure that careful attention was paid to the ordering of tombs and of sacrificial pits at the royal cemetery, as well as to the deposition of bodies and the provision of offerings. Body position was especially important, and it follows that a prone position in death would undoubtedly have been an explicitly chosen feature. Here we explore the tomb orientation and body position that characterise both the chariot drivers and one major military leader. We suggest that these particular individuals, found among the burials at Anyang and at other Shang centres, reflect the presence of a regional sub-group who were critical in the introduction of chariots from the north.
We have known for some decades that chariots originated in western Siberia and reached northern China across the steppe, with early observations by Hayashi (1959), Watson (1971) and Piggott (1974). Fuller accounts were given by Shaughnessy (1988), Wu'en Yuesitu (2007), and Wang (1998); for a new and expanded discussion of the steppe origins see Chechushkov and Epimakhov (2018).
Chariot rein holders and northern knives and tools were recognised by Loehr (1956) and Watson (1971) as following prototypes known from South Siberia and Mongolia. These observations were developed for a wider audience in a path-breaking essay by Lin (1986), followed by extensive studies by Wu'en Yuesitu (2007) and Yang (2016), with the Russian perspective presented by Kuzmina (2004). An important, recent discussion of newly excavated material is set out by Zhu (2013). What has been lacking, with a few exceptions (Wu 2013), is an account of the ways in which chariots and the northern weapons and artefacts, notably the chariot tools, have been found together. Further, to gain a fuller understanding of the role of these northern technologies in the hands of the late Shang, we here emphasise the roles of minority groups within late Shang society, living at Anyang, in bringing them to the Central Plains. Northerners have, from the days of the great Chinese historian Sima Qian (active c. 100 BC), received a bad press (Watson 1961, p. 154). In his vivid presentation of the lifestyle of the Xiongnu and their tendency to raid and indeed engage in warfare, Sima Qian writes that 'in periods of crisis they take up arms and go off in plundering and marauding expeditions'. Historians today continue to set out the social and cultural differences between the northerners and the inhabitants of the Central Plains, seeing the dark side of raiding and war, rather than any benefits that the two sides might have derived from each other (Li 2006). It has been difficult to explore the relations between the steppe and the Central Plains as contact was not direct. People, animals and technologies, exchanged in both directions, had to pass through the Arc (Rawson 2017). This broad area of land around the large Yellow and Yangtze drainage basins extends from the Bohai Bay in the east, across present-day northern China and south along the eastern edge of the Tibetan Plateau, to present-day Yunnan. It was first identified in the 1980s (Tong 1987) as a region with a strong dependence on pastoral economies and a large variety of material cultures very different from those traditionally ascribed to the Central Plains. In the present discussion, the northern area of the Arc is divided into three sections, as shown in Fig. 1. In each of these, communication from north to south is determined by the south-flowing rivers. Recent excavation and research have made it possible to track the movement of people, tools, horses and chariots during the Shang period down the Yellow River where it flows south from Inner Mongolia.
Burial, Death Attendants, and Sacrifices
Continuous excavations at Anyang from the 1920s to the present (2020) have revealed the preoccupation of the late Shang rulers with elaborate burials, in which they were accompanied by death attendants in their tombs and by extraordinarily high numbers of human victims placed in pits around those tombs in ritual offerings. As several authors have pointed out (Reinhart 2015; Campbell 2018), there has been considerable reluctance to discuss the purposes and procedures that led to the burial of such large numbers of people. It is essential to accept that late Shang ritual practice was led by deep religious beliefs about the universe and the distribution of power within it. Both the tombs of the late Shang kings and the many divination records demonstrate the power of the ancestors in support of their descendants. In the context of beliefs about the continuing demands of dead royal ancestors, we need to recognise that people buried within or near the major tombs with their own coffins and what may be recognised as their own grave goods were understood to have continuing roles in the afterlife as attendants. Although they are often described as victims, their implied social position in this afterlife was clearly very different from that of those interred in the ramps of the royal tombs or in the pits around them. In the Chinese literature, the first type is termed 'companions' (renxun 人殉), while the second is termed 'human sacrifices' (rensheng 人牲).
To help us interpret the intended functions of people buried as attendants in such major tombs, we can look at the nine men buried at the base of tomb M1001 at Xibeigang, possibly the tomb of King Wu Ding 武丁 (Fig. 3). One attendant was buried under the floor of the coffin chamber and the other eight were at the same depth in the corners of the shaft. Each is crouched in a small pit, armed with a weapon, and accompanied by a dog; at least five appear to be prone, a burial position we discuss below (Liang and Gao 1962, pp. 28-30). The eight have bronze ge 戈 blades, but the individual at the centre has a blade made of stone. The weapons and the dogs suggest that these attendants were chosen to serve as guards in the afterlife.
Further individuals interred on secondary ledges and ramps would have been understood as fulfilling other functions in their attendance on the deceased. These were not victims within the Shang construction of the universe, as their deaths were considered in the light of being thereby guaranteed a future life in the service of the Shang kings and other elites. The practice preserved in a large tomb, M1 at Wuguancun, within the royal cemetery at Xibeigang, can be interpreted in this way: the double ramped tomb had a large rectangular coffin chamber surrounded by the customary ledge containing numerous burials. On the west side, where male burials were concentrated, one individual was interred with ritual vessels. Jade ornaments and weapons were found in this and other graves. On the eastern side, a number of burials, mainly of females, were documented (Guo 1951, pp. 15-19).
In so far as we can interpret the motives and concepts of the Shang, numerous inscriptions on the oracle bones inform us about the other group, the rensheng, or human sacrifices, and indicate the necessity of distinguishing those who were to accompany the rulers (and various other members of the elite) in the afterlife from those human victims who were offered as sacrifices as part of the Shang effort to obtain the support of the High God, Di 帝, the Natural Powers, and their ancestors in their enterprises, particularly in war:

Making cracks on gengchen 庚辰 (day 17) divined: (we) offer to Yang (the Mountain Power) three Qiang, three young penned sheep (lao 牢), and split open (mao 卯) three young cows (Keightley 2012, p. 66).
It is a general, but reasonable, assumption that the people called Qiang who were mentioned in such inscriptions were buried in the numerous well-organised pits around the royal tombs. On many occasions, mass groups of decapitated bodies were thrust into single pits, with their skulls interred separately (Campbell 2012). Pits might hold a single individual but could hold up to thirty or even forty, piled on top of each other (Huang 2004, p. 53). An assessment of body positions shows that many of these victims were buried prone (Zhongguo 1977, pp. 33-36). Along with these burials in pits, we can also document rows of bodies beheaded and placed prone on the four ramps of the royal tomb M1001, sometimes identified as the tomb of Wu Ding, thought to be the first late Shang king to have ruled from Anyang (Liang and Gao 1962, pp. 38, 40). The extreme violence that these burials record may have been exactly what the Shang saw as necessary in their search for contact with and support from the High God, the Natural Powers, and the ancestors.

Fig. 3 a After Liang and Gao (1962, Figs. 14, 16). b Plan of the base of the tomb with an example of a sacrifice buried prone (dotted box). After Liang and Gao (1962, Figs. 10, 11)
The greatest numbers of human sacrifices date from the early Anyang period, under King Wu Ding (c. 1200 BC), as estimated by Huang (2004, pp. 79-80). While such sacrifices also took place outside Anyang, the majority of the evidence, described in detail by Chang (1980, pp. 119-124), is found at the major royal cemetery at Xibeigang, especially around the eastern tomb group, and at the precinct of the temple-palace sites at Xiaotun.
Prone Burial
Any survey of the burials of both death attendants and of human sacrificial victims at Anyang indicates that significant numbers are described as having been buried prone. These prone burials are one element in our discussion of the introduction of the chariot to the Central Plains. To date there has been little general understanding, either from a general theoretical perspective or from an appreciation of the specific historical situation in late Shang China, of why these prone burials occurred. Before the Anyang period, just a few prone burials are known, from the Erlitou (c. 1750-1500 BC) and Zhengzhou periods; these rare examples include a tomb orientated east-west in which three bodies lay prone, buried with bronze vessels (Huang 2004, pp. 44, 47, 106).
In death, it was standard for Shang elite individuals to be placed in a supine extended position (Fig. 4a) and interred in a rectangular pit, orientated north-south, often with a wooden coffin chamber. The chamber might contain one or two coffins (Fig. 4c). Often constructed level with the lid of the coffin chamber was a secondary ledge, on or within which grave goods and even death attendants were interred. Many of the tombs at Anyang had a dog buried below the coffin. High-level elite tombs in the royal cemetery had up to four ramps (one, from the south; two, from south and north; or four, from all four cardinal points). Elite tombs outside the cemetery generally did not have ramps, but they often had abundant grave goods. People below this level who were still accorded formal burials in lineage cemeteries, as at Dasikong, had many fewer grave goods (Zhongguo 2014, pp. 244-261). These we will consider as lower elites (Fig. 4d).
A striking divergence from this predominant supine interment position can be observed among some of the people excavated in what are termed the sacrificial pits, as well as among some of the death attendants in elite tombs and some of the actual central burials. The most immediate feature of such burials is that the bodies were placed in a prone rather than supine position, so that their backs would have been visible when looking from above, prior to the bodies being covered over (Fig. 4b). Other deviations from Shang ritual are also evident in these central prone burials, as the body orientation is often east-west rather than north-south (Fig. 4d). We may thus speculate that the interred individuals were unlikely to have belonged to the main populations of either low or higher Shang elites, but rather to one of several different cultural or regional groups. Following this logic, Shang society at Anyang must have been composed of several different populations.

Fig. 4 a After Yang and Yang (1979, Fig. 21). b Prone position adopted by the regional group with northern connections. From late Shang tombs in the western sector of Yinxu. After Yang and Yang (1979, Fig. 22). c Tomb M121 at Qianzhangda, Shandong Province, showing the tomb occupant in supine extended position with a dog pit below the waist. The tomb is oriented north-south. After Zhongguo (2005, Fig. 86). d Tomb M446 at Anyang Dasikongcun, with the tomb occupant in prone position. The tomb is oriented east-west. After Zhongguo (2014, Fig. 231)
The prone burials have been assessed by several Chinese scholars, who have concluded that, while they are numerous at Anyang, they are rare on the Central Plains both before and after the late Shang period (Hu 2016; Meng 1992; Zhang 2016). With much published research and analysis of Shang burial practices available, we know that this unusual burial position was used at many cemeteries at Anyang, with up to thirty per cent of the people buried prone. Important examples are also found at other Shang-related tombs in Shandong, Hebei and Shaanxi. The practice of prone burial declined steeply soon after the Zhou conquest of the Shang (Zhang 2016, pp. 149-153).
The royal burials and the sacrifices at Anyang are contemporary with the arrival of the chariot, which is recognised as having spread across Eurasia, entering China from the north. It may be that the sudden increase in prone burials was in some way connected with this military innovation: importantly, chariot drivers interred in pits with their chariots were also often (though not universally) buried prone. In this discussion, chariot drivers are a significant further category of death attendant. The proportion of charioteers buried prone with their horses and chariots was high, both at Anyang sites such as Xiaotun M20 and M164 (Shi 1970a, p. 16), Dasikong M175 and M226 (Ma et al. 1955; Zhongguo 2014, pp. 466-471), Guojiazhuang M52 (Zhongguo 1998, p. 128), and Meiyuanzhuang M40 and M41 (Yang and Liu 1998), and also at Qianzhangda, in Shandong (Zhongguo 2005, pp. 126, 135).
Chariot drivers must have had a range of skills, managing the horses as well as the wooden chariot with its large and seemingly fragile wheels (Chechushkov and Epimakhov 2018, p. 436). We have some hints of the range of skills required from a set of very distinctive tools, shown in Fig. 5a, found in the box of chariot burial M41 at Meiyuanzhuang at Anyang (Wu 2013, p. 50;Yang and Liu 1998). As we shall see below, almost all of these tools had prototypes in the Arc to the north, or further northwest in the steppe. Shown in Fig. 5a at top (not to scale) is a bow-shaped rein holder (gongxingqi 弓形器), consisting of a broad band with two loops carrying jingles. The reins could, it is assumed, have been twisted around the loops. These were important for all those we recognise as chariot drivers or chariot owners/users, both those found buried in chariot pits and those of the elite associated in death with chariots. Some of the rein holders were decorated with horse heads, which, along with the jingles, are features of northern bronzes from the Arc and not typical of standard Shang weapons or vessels developed at Anyang. Nevertheless the rein holders were almost certainly cast at Anyang.
Below the rein holder in Fig. 5a is a single-edged curved knife with an oval ring terminal, similar to some we see in sacrificial pits; on the far right is a whip end. The artefacts in Fig. 5a below the rein holder and knife are socketed axes accompanied by a small spade-like bronze, all tools for repairing the woodwork of the chariot. As socketed axes were such a standard tool in other parts of Eurasia, their specific use with chariots in this context has very rarely been considered. Although these tool sets are occasionally identified as agricultural tools, they are almost universally found with other chariot fittings, and we should therefore reject this identification. It is also clear that they were introduced from the north: careful typological work has shown them to originate in Siberia with the Seima-Turbino phenomenon and Andronovo complex, and subsequently to have been taken east to Mongolia (Yang et al. 2020, pp. 85, 102, 107, 110).
The high proportion of prone burials among the guards under the tomb chamber in tomb M1001 and among the charioteers accompanying the elite indicates that their skills in fighting and defence were valued in the afterlife. The crouching position of the guards under the coffin of tomb M1001, and their weapons, suggest that some attack from below, from the netherworld, was anticipated.
While we can easily imagine the roles envisaged for the dead guards and the charioteers, the intended roles of the victims in 80 sacrificial pits (among a group of a thousand pits in the northeastern section of the royal cemetery) are much more challenging. As many as ten individuals, often without skulls, were placed in a single pit. Those in charge of the ritual decided to bury all these victims in a prone position and to provide each with a curved knife, some without grips, some having grips with round holes, and a few with animal heads. The 80 pits held more than seven hundred such knives (Fig. 6). The knives were often accompanied by a single shaft-hole axe and a sharpening stone, underlining the functional purpose of the knives as tools and defining the identity of the victims (Gao 1967). In many cases, the weapons were unfinished local replicas, but they are all recognisable as copies of knives and axes associated with the Arc (Cao 2014; Zhu 2013). These knives are often curved, with a single cutting edge defining their purposes, which may have included cutting loose ropes or leather harnesses and skinning animals. The seven hundred knives and other weapons might also indicate that these people were a fighting force, as well as concerned with the management of animals, or they may have been a conquered and captured group of northerners.

Fig. 5 a After Yang and Liu (1998, Fig. 17). b Plan of the Anyang Xiaotun M20 chariot pit. After Shi (1970a, Fig. 8)
It is very unlikely that such knives signalled membership of any elite. Ritual specialists at Anyang must have understood these distinctive features, the prone position and the northern-type weapons, as marking the identity of these people in terms of a cultural or regional group. These burials in pits pose a question. Were the interred individuals understood to be like the charioteers and the guards, that is, intended as a defensive force in the afterlife? Or were those buried in such pits captives, put to death to concretely symbolize a Shang victory over enemy forces, presumably from the Arc? While we can recognise some form of planning by the ritual specialists, we have great difficulty in interpreting the roles of the human victims in such pits and on the ramps of the royal tombs, where prone burials are also found.

Fig. 6 Three knives from the group of more than seven hundred found in 80 pits containing prone sacrifices with knives, axes and sharpening stones, from Anyang Xibeigang. After Gao (1967, pls. 1, 2, 7)
The Burial of Ya Chang and Its Context at Xiaotun
We now examine the issues of the prone burials, northern tool sets, and ideas about an afterlife in greater depth in relation to an impressive burial at Anyang Huayuanzhuang: tomb M54 (Zhongguo 2007a) (Fig. 7). The principal burial is that of a man, buried prone, and named on his weapons and ritual vessels as Ya Chang 亚長. The term Ya is sometimes taken to indicate a military official (Yan 2013, pp. 174-175), and what remains of Ya Chang's skeleton shows that he suffered major wounds and probably died in battle. He was thus clearly a warrior or leader in warfare. Nearby is a major chariot pit, M20 (Fig. 5b) (Shi 1970a, b). More significantly, Ya Chang's tomb is near the central temple-palace area at Xiaotun, and southeast of the famous tomb of Fu Hao 婦好, one of the principal consorts of King Wu Ding 武丁 (Zhongguo 1980) and the only female head of an army. This location alone shows that Ya Chang was an important member of the Shang elite. In addition, the area covered by his tomb, which can be taken as having some kind of proportional relationship to status, is similar to that covered by the tomb of Fu Hao. Ya Chang was accompanied by 15 attendants, also buried prone, and 15 dogs, whereas Fu Hao had 16 attendants and six dogs (her skeleton did not survive, although one of her attendants, whose skeleton can still be examined, was also buried prone).

Fig. 7 a After Zhongguo (2007a, Fig. 62). b Artefacts in tomb M54. After Zhongguo (2007a, Fig. 77)
While Ya Chang was undoubtedly recognised as a major member of the Shang elite, if we look more closely at the tomb contents, we find several unusual features in addition to the prone position. Important for this discussion is the exceptional quality of the chariot fittings in his tomb, which included six very fine rein holders, all elaborately inlaid with turquoise. This is a feature often seen on chariot fittings and some weapons, but very rarely on other late Shang bronzes. As the only other chariot fittings with such carefully executed turquoise inlay were found in pit M20 (Fig. 5b) and in the chariot pits at the royal cemetery at Xibeigang, it is likely that Ya Chang's came from a similar workshop (Li 2009, pls. 85-89, 91-106). In addition, Ya Chang was supplied with the standard tools, such as socketed axes, chisels and spades, for repairing the woodwork of chariots. In the light of the high quality of the bow-shaped fittings and the complete set of tools in his grave, we may infer that Ya Chang was almost certainly a major leader of chariots in warfare.
Ya Chang's curved knives, which are part of his chariot tool set, are especially relevant for understanding his background and thus his prone burial. He had one knife with an oval ring as a terminal and a bowed grip, along which were geometric patterns (Fig. 8a); these patterns are typical of knives from the Arc rather than from Anyang, and similar items have been found at Suide and Shilou, to either side of the northern Yellow River where it flows south, a region we will discuss below (Zhu 2013, pp. 7-8). A small pointed and curved bronze with a jingle (Fig. 8b) also has a parallel at Shilou (Zhu 2013, p. 7). Two other knives, both with animal heads, are also highly significant: the upper one (Fig. 8c) has a horse head that imitates a northern type, while the oval eye is a characteristic of objects made at Anyang (we can compare the eye with those on a jade and a bronze made in the late Shang: Zhongguo 2007a, Fig. 144; Zhongguo 2005, Fig. 232). The animal head on the other knife (Fig. 8d) is that of a stag and is more rounded in form than that of the horse, having part of an antler rising directly above the eye. Rather than a curved oval shape, this eye is completely circular and formed with an outer ridge, so that it resembles a tiny tube. It thus belongs to a totally different tradition. These knives enable us to track connections from their origins in the north down a route to the Shang capital.
While the military leader in tomb M54 has sometimes been identified as coming from the south (He 2013), his chariot fittings, tools, and especially his knives suggest that we should look to the north, the source of the chariot and horses. In addition, the tomb held some other very unusual items that confirm his northern connections. First of all, the man's head and part of his body were wrapped in a textile, now decayed, to which were attached 150 jade beads and more than a thousand cowries. Ornament attached to clothing was more typical of northern Eurasia than of the Central Plains. He also had a very unusual solid stone artefact with three sunken holes (Fig. 9a), in which traces of colour remain (Zhongguo 2007a, p. 214). Although this artefact has later counterparts in small bronzes with four tubes (Yang and Yang 1979, p. 97), this Shang tradition did not survive. The object can be interpreted as a palette, and this again indicates steppe connections, as later stone palettes, filled with colour, were found together with tattoo kits at the cemetery at Filippovka (Fig. 9b) (Yablonsky 2011, table XI:7.9). In addition to this stone palette, Ya Chang was accompanied by some gold items, and almost all the gold items found at Anyang have connections with the Arc (Rawson 2018, pp. 111-112). In this grave were two exceedingly thin sheets of gold in the form of discs. One displays circles made by small indented points, with a six-pointed star at the centre (Fig. 9c), a form of decoration found primarily on chariot fittings. Figure 9e shows the facings of a chariot yoke with decoration in the form of six-pointed stars from tomb M20 at Anyang Xiaotun (Shi 1970b, pls. 64, 66), while Fig. 9d shows an arched rein holder, with jingles at the two ends, from a chariot burial at Qianzhangda, east of Anyang (Zhongguo 2005, p. 331).

Fig. 9 a After Zhongguo (2007a, Fig. 100). b Two palettes for colour excavated at Filippovka. After Yablonsky (2011, pl. XI:7.9). c Circular gold foil appliqué decorated with a star from tomb M54. After Zhongguo (2007a, pl. 57). d Bow-shaped item with jingles decorated with a star to compare with Fig. 9b, from tomb M132 at Qianzhangda, Shandong Province. After Zhongguo (2005, p. 331). e Yoke linings also with stars, from tomb M20 at Anyang Xiaotun. After Shi (1970b, pls. 64, 66)
Fu Hao, King Wu Ding's consort, was also buried with distinctively northern bronzes. Her four mirrors represent rare artefact types at Anyang and are generally associated with the Arc. Her grave also contained a number of curved knives with ring terminals, one with an ibex head, all of which we should probably now see as tools for a chariot driver; pins with jingles accompanied these (Linduff 2006). Her six bow-shaped rein holders are conspicuous for their elaborate decoration, as is her complement of tools for repairing chariots (Zhongguo 1980, pl. 75). These chariot tools, associated with northern knives, are highly unusual grave goods for a woman: they indicate that Fu Hao, like Ya Chang, also buried with his attendants in the same general area of Xiaotun, was a leader with numerous chariots. These were roles in life, and there was an expectation that they would be carried on in the afterlife, in defence of a central ritual site threatened by spirit enemies.
Other burials in the same area reinforce the perceived importance of chariot warfare to supernaturally defend religious centres. The famous chariot burial, M20, was unearthed in the early period of excavation before the Second World War. A figure in the excavation report reproduced a drawing by one of the excavators, Shi (1970a, Fig. 8) (Fig. 5b). In the centre, we see the oval outline of one chariot box; another which survives is less clearly discernible. Near the bottom of the drawing we can see the two yokes for the horses, whose heads appear along the lower border. At the top are two charioteers, laid east-west and buried prone. Horse-headed knives, made at Anyang, also appear in the pit, and a dagger axe and spear show that the chariot was for warfare. There are also two rein holders. Early excavations at Anyang revealed several other chariot burials in the same area. The archaeologists concluded that they were arranged for a 'specific purpose' (Li 1977, p. 111). A pit with a single horse and groom was found in the same area. Nearby, in tomb M10, were further prone sacrifices, with northern knives and axes and whetstones (Zhu 2013, p. 16).
Southern Mongolian Tombs and Ornament of Animal Heads
The people buried prone at Anyang and the frequent simultaneous interest in single-edged curved knives and chariots draw us to the only currently known significant regions where prone burials have consistently been found and where such weapons were used. These are located in the steppe of southern and eastern Mongolia (Kovalev and Erdenebaatar 2010, pp. 104-105). The authors of the paper in which the features of this culture were first fully formulated suggest that the area includes both southern Mongolia and part of Inner Mongolia, south of the Yinshan mountains (Ma 2015, pp. 278-285). These graves are defined as belonging to the Tevsh regional group (Amartuvshin 2016) (Fig. 10). Only about two dozen of these monuments have been investigated (Amartuvshin 2016; Kovalev and Erdenebaatar 2010; Miyamoto and Obata 2016; Volkov 1972, p. 556).
A further variant, designated as belonging to the Ulaanzuukh culture, has been located in eastern Mongolia (Tumen et al. 2013). Radiocarbon analysis of bones in the graves gives dates between 1300 and 1000 BC (Kovalev and Erdenebaatar 2010, p. 105; Miyamoto and Obata 2016, p. 64).
The position of the stones marking the several groups of Ulaanzuukh-Tevsh graves is diverse. Some of the Tevsh tombs are best recognised before excavation from a characteristic stone outline known as the 'hourglass shape' (Amartuvshin 2016, p. 72) (Fig. 11a). This resembles the shape of an animal skin spread taut over the ground, which may have been employed as part of the burial ritual. The same 'hourglass' shape is also seen at the Inner Mongolian tombs (Fig. 11c). However, other tomb shapes were also adopted for the prone burials, including those in the Ulaanzuukh group (Kovalev and Erdenebaatar 2009, p. 164; Tumen et al. 2013). They are united by a shared funeral practice in which the deceased was placed in a narrow pit, in a prone position, with the head towards the east (Fig. 11b). It seems very likely that we should trace the origins, perhaps the very distant origins, of some of the people buried in a prone position in Shang period tombs at Anyang among these regional groups, coming from present-day southern and eastern Mongolia and Inner Mongolia.
Almost all the Ulaanzuukh-Tevsh tombs have been robbed. Finds usually include a few stone beads and animal bones, with, perhaps, other objects of stone. Surviving metal is very rare. But one example contributes directly to our account of contacts between Shang-period Anyang and the north. A pair of U-shaped gold hairpins with ram's heads brings into the discussion the decoration of metalwork with animal heads (Fig. 12a). These correspond directly with a specific knife type with animal heads, such as the one with a deer head from Huayuanzhuang tomb M54 (Fig. 8d). Both the rams on the gold hairpins and the deer head on the knife share an unusual and highly distinctive form of absolutely circular, tubular-shaped eyes, and horns that rise directly from above the eyes. The hairpins were found by V.V. Volkov at Tevsh Uul in an undisturbed tomb in 1971 (Volkov 1972). The gold hairpins (Fig. 12a) are an extremely rare find; as a single example of the animal style with tubular eyes, they cannot at this stage, without further evidence, secure the origins of animal heads with tubular eyes in this region of Mongolia. However, a group of chance finds of animal-headed ornaments in the same style, along with a large bronze knife with a ram (or ibex) head, has come from the Ömnögovi and Övörkhangai regions, which overlap with the Tevsh area (Fig. 13a). These strongly formed heads all have eyes shaped as short tubes (Erdenechuluun and Erdenebaatar 2011). Many other knives from the Mongolian region (Fig. 13c) have parallels in the Arc (Fig. 14d). Thus, it seems reasonable to suggest that some knives, such as the one with a stag head at Huayuanzhuang (Fig. 8d), may be copies of or have associations with metalwork from southern Mongolia.

Fig. 12 Three U-shaped hairpins from Mongolia. a Golden hairpins with ram heads from a Tevsh grave. After Kovalev and Erdenebaatar (2009, Fig. 5). b Damaged hairpin from Chandmani Khar uul. After Amartuvshin (2016, 86, Fig. 85)
Another reason to consider the role of people from Mongolia relates to the Shang-period chariot. While the chariot, or light wooden vehicle drawn by two horses, was developed first at sites in the eastern Urals, for East Asia, sites in Mongolia provide important information on the transition to the Shang. Petroglyphs in the Altai reaching into Mongolia show spoke-wheeled chariots (Jacobson-Tepfer 2008). While horses were domesticated further west, it has recently been discovered that people on the present-day Mongolian plateau moved directly from hunting to herding, including herding of horses (Jeong et al. 2018). Horses were especially valued, and their heads were ritually buried around the numerous stone monuments known as khirigsuurs, dating between 1300 and 700 BC (Allard and Erdenebaatar 2005). Some of the male horse heads have recently been examined and show signs of wear through traction, possibly while drawing chariots (Taylor 2017). Large standing stelae, the deer stones, are renowned for lively images not only of stags, but also of knives, daggers, shaft-hole axes and rein holders hanging from belts, which are carved around many of them (Kovalev 2007; Volkov 2002). These different monuments, petroglyphs, khirigsuurs and deer stones have illuminated the key role of the Mongolian plateau as a major region of origin for chariot and horse use in East Asia (and their associated weapons and tools), and also the likely source for the chariots and horses employed at Anyang.

Fig. 12 Three U-shaped hairpins from Mongolia. a Golden hairpins with ram heads from a Tevsh grave. After Kovalev and Erdenebaatar (2009, Fig. 5). b Damaged hairpin from Chandmani Khar uul. After Amartuvshin (2016, 86, Fig. 85)
The Route from Southern Mongolia to Central China
Diverse people in the Arc acted as a bridge between the Mongolian steppe and Anyang, and here we review the archaeological evidence along the route that traverses this complex territory, by examining three characteristic features: prone burials, certain types of metalwork, especially knives (Fig. 14) and other chariot tools, and distinctive ceramics. All of these appear in tombs south of the Great Bend of the Yellow River, especially at the site of Zhukaigou, in Inner Mongolia. A direct link with the Ulaanzuukh-Tevsh burials is offered by two chance finds of hair ornaments from the area of the site. These are oval-shaped, as are the ones in Mongolia, but plain, without animal heads (Neimenggu and E'erduosi 2000, p. 122).

Fig. 13 Animal-style ornaments and dagger decoration. a Group of bronze ornaments in the shape of horned animal heads. From Ömnögovi and Övörkhangai Provinces, Tevsh culture. Redrawn from Erdenechuluun and Erdenebaatar (2011, Figs. 77-79). b Dagger with an animal head, Tevsh culture, from Bayankhongor Province, Mongolia. After Erdenechuluun and Erdenebaatar (2011, Fig. 292). c Jingle-head knife from Zavkhan Province, Mongolia. After Erdenechuluun and Erdenebaatar (2011, Fig. 308)

Fig. 14 A series of curved knives to illustrate three principal contexts and categories. Top row: high quality knives from Anyang and the Yellow and Fen Rivers; middle row: a group of knives reported in 1962 from a hoard at Chaodaogou, Hebei Province; bottom row: three knives from horse or chariot burials at Anyang and one from Qianzhangda. a Stag-head knife from Anyang Huayuanzhuang. After Zhu (2013, Fig. 16). b Knife from Jingjiecun, Shanxi Province. After Li (2011, Fig. 4.1-10). c Knife from Suide, Shaanxi Province. After Shaanxisheng (2009a, p. 515). d Jingle-head knife from Ganquan, Shaanxi Province. After Shaanxisheng (2009b, p. 608). e-h Knives from Chaodaogou. After Zhu (2013, Fig. 6). i Knife with horse head from a chariot burial at the northern area of Anyang Xiaotun. After Shi (1970b, pl. 136). j Knife from Anyang Xiaotun. After Zhu (2013, Fig. 11). k Knife from Qianzhangda. After Zhongguo (2005, Fig. 246). l Knife from the tomb of Fu Hao, Anyang Yinxu. Redrawn from Zhongguo (1980, pl. 66)
The cemetery at Zhukaigou has revealed several prone burials, including one of the two skeletons in tomb M1044 (Neimenggu and E'erduosi 2000, p. 186) (Fig. 15a). Another tomb, M1040, is well known for its weapons (Neimenggu and E'erduosi 2000, p. 224): the dagger shown in Fig. 15b, left, appears to be the antecedent of later versions in the steppe and in the northeast Arc. Figure 15b, right, shows a knife with a ring at the end of the grip of a generic type, found at many sites in the Arc, as well as in Mongolia (Wu'en Yuesitu 2007, p. 166; Yang 2016, pp. 168-169); this type is ancestral to the curved knives at Anyang. The Zhukaigou knife is ultimately descended from curved knives with a hole at the end of the grip, prevalent in the Seima-Turbino phenomenon, originating, it is thought, in the Altai (Chernykh 1992, p. 221). A Shang style ge or halberd blade in this tomb, and early or middle Shang-period vessel fragments found at the site, must be an outcome of contact between these areas of Inner Mongolia, where prone burial was practised, and the Shang on the Central Plains.
Within the Arc, distinctive ceramics enable us to make further links between Zhukaigou and other sites in the Loess Plateau region. Three recurrent ceramic forms are found at Zhukaigou and at Shimao and other sites further south. They comprise a large jar, often rounded, on three short hollow legs (known as a sanzuweng 三足甕); a wide-mouthed tubular container (dakouzun 大口尊); and a cooking vessel (li 鬲) with three separate, bulging lobes (Tian and Han 2003) (Fig. 16). The site at Shimao was only recently discovered (Sun et al. 2018). Somewhat earlier in date than Zhukaigou, Shimao (c. 2300-1800 BC) displays impressive and extensive stone construction, with a key element being the ritual burial of human skulls. This seems to prefigure the depositions of victims at Anyang (Sun et al. 2018).

Fig. 15 Examples of a prone burial at Zhukaigou, and one with northern-type knives, Inner Mongolia. a M1044 with its artefacts. After Neimenggu and E'erduosi (2000, Fig. 144). b M1040 with its artefacts. After Neimenggu and E'erduosi (2000, Fig. 189)
As we go south from Shimao, the same ceramics recur in the Lijiaya area, a region with very clear links with the Shang. Lijiaya is a major fortified site, with remains of occupation and tombs dating from the late Shang into the Western Zhou, some containing prone burials, and some in an east-west orientation (Cao 2019). A small spatula with a snake or alligator head has led archaeologists to name a wider region with similar finds as belonging to the Lijiaya sphere or culture (Shaanxisheng Kaogu Yanjiuyuan 2013, pl. 25:3). Cao Dazhi, who has examined the bronzes of the region in great detail, provides a fruitful discussion of animal-headed knives, daggers and spatulas with snake or alligator heads found in this area, and illustrates numerous examples of socketed axes and chisels of the types found with chariots (Cao 2014, pp. 418, 485-487, 510-513; see also Linduff et al. 2017, pp. 133-145). He compares local, excavated weapons with the carvings on deer stones (already mentioned) in present-day northern and central Mongolia (Kovalev 2007; Volkov 1995), arguing that the weapons found in the region along the Yellow River came from Mongolia, recognising important traits of similarity (Cao 2014, pp. 284-296). Although the weapons may have actually been local copies, Cao's suggestion of a contact with Mongolia is very far-sighted and highly pertinent. The deer stone region is to the north of and distinct from the Ulaanzuukh-Tevsh zones, but people in the two areas must have shared a taste for similar if slightly different knives, daggers and axes.

Fig. 16 Characteristic ceramics of the Shimao-Zhukaigou tradition that were also found in Western Zhou tombs, most especially those of consorts of the Jin and Peng lords. a Dakouzun at Zhukaigou. After Neimenggu and E'erduosi (2000, Fig. 83). b Sanzuweng at Zhukaigou. After Neimenggu and E'erduosi (2000, Fig. 74). c Li at Zhukaigou. After Neimenggu and E'erduosi (2000, Fig. 193). d Dakouzun at Hengshui. After Song et al. (2006, Fig. 17). e Sanzuweng at Hengshui. After Song et al. (2006, Fig. 18). f Li at Nianzipo. After Zhongguo (2007b, Fig. 206)
Cao has also published the large number of middle and late Shang bronze vessels and weapons from the Central Plains recovered from minor sites in the Lijiaya area, mainly from burials (Fig. 10). Most of these tombs have not been properly reported. One, at Linzheyu, is said to be in an east-west orientation (Wu 1972). Cao argues that, rather than being used in the ritual sets typical of the Central Plains, the Shang vessels may have been awarded to northerners in exchange for horses, perhaps originating as far away as Mongolia, that were taken down the banks of the Yellow River and then southeast to Anyang. He thus argues that the Shang vessels indicate a route along which horses were brought from the north to Shang centres.
We can follow this communication south of Lijiaya if we move to the east side of the Yellow River and its tributary, the Fen River in Shanxi Province (Shanxisheng 2006, p. 134). Three burials are reported at Jingjie in Lingshi; one of the graves, orientated east-west, contained an interment accompanied by a knife crowned by a ram's head with tubular eyes. Two of the tombs had death attendants and held valuable Anyang vessels. A multitude of weapons was accompanied by chariot fittings decorated with stars, and bronze whip ends. Here, therefore, the Shang had greater influence than further north in the Lijiaya centres. One of the Shang vessels has a small image of a horse on the base in thread relief. Images of horses are extremely rare: two small bronze models of horses have been found at Ganquan County slightly further south, on the west side of the Yellow River (Wang et al. 2007); and Fu Hao's tomb is famous for tiny jade silhouettes of horses. Taken together, these suggest that in the late Shang, horses had gained particular importance (Zhongguo 1980, pl. 30:2).
The role of chariots, horses, and chariot drivers buried prone in the inferred movement south is confirmed by a group of tombs with chariot burials from Qiaobei in Fushan (Fig. 17). The Qiaobei chariot is probably an early type, with a box smaller than the more advanced chariots from Anyang and Meiyuanzhuang. At the very front is a charioteer buried prone. In one of the larger tombs at the same site, M9, four attendants are buried prone. At both Jingjie and Qiaobei, chariot ownership and driving were central. Some distance southwest of the Shilou and Suide region and also from Jingjie, on the southern edge of the Loess Plateau, is the site of Nianzipo. Excavations revealed 136 burials, of which 64 were in the prone position, the rest in other postures (Zhongguo 2007b). The dating of the Nianzipo cemetery to Shang times is based on both the discovery of a Shang bronze of the late Anyang period and some radiocarbon evidence (Linduff et al. 2017, p. 153).
An interesting additional feature is that the shape of some burials at Nianzipo distantly resembles the hourglass shape of the Tevsh tombs (Zhongguo 2007b, pp. 254-264) (Fig. 18). The Nianzipo tombs held single skeletons laid prone, with their heads towards the east or northeast. While some tombs were lined with stone slabs (a general northern Eurasian tradition), others retained just individual slabs at the head and foot. As Alexei Kovalev has shown, this feature is also found in Mongolia (Kovalev and Erdenebaatar 2010, p. 104). It seems that the occupants of this cemetery originally had connections with Inner Mongolia or further north. Lobed vessels in such tombs can be related to the li found at Lijiaya, belonging to the Shimao-Zhukaigou tradition (Neimenggu and E'erduosi 2000, p. 237; Shaanxisheng 2013, colour pl. 20).

Fig. 18 After Zhongguo (2007b, Figs. 190, 196, 201, 206)
The burials at Nianzipo can be compared with tombs in the Bin County area of Shaanxi Province, where a few prone burials and many li vessels, similar to those at Nianzipo in the Lijiaya tradition, have been found (Liang 1999, pp. 79, 83;Dou et al. 2019, pp. 22, 24). While the abundance of Shang bronze vessels at Lijiaya sites and at Jingjie suggests strong contact from Anyang going north, the sites of Zhukaigou, Lijiaya, Qiaobei and Nianzipo indicate a southward movement of northerners favouring prone burial.
The conjunction of the Lijiaya ceramic tradition and prone burial reappears, considerably later, at the major Western Zhou Peng state cemetery, with 1299 burials (Xie et al. 2019), at Hengshui, near Houma. A recently excavated major tomb, M2158, held the tenth century BC lord of the Peng state: he was buried prone, as were his six attendants. He and his elites belonged to a regional group with distinctive northern features, taking the name of Gui 鬼. The west-east tomb plan is also relevant, as with its slightly bowed outlines and four angular corners it vaguely resembles (perhaps fortuitously) the hourglass shape, derived from an animal skin, characteristic of the tombs in the north (Fig. 19). Like Ya Chang, the central individual in this tomb wore a large number of shells, which had possibly decorated a cloth wrapped around the body. The tomb was overflowing with bronze ritual vessels, perhaps a consequence of numerous gifts from the Zhou king and his elite.

Fig. 19 Plan of tomb M2158 at Hengshui, Shanxi Province, of a lord of Peng, buried west-east in a prone position. After Xie (2019, Fig. 12)
Not only the prone burial of this lord, but also ceramics in the tomb of a prominent consort of another Peng lord, buried in the same cemetery (Song et al. 2006), present strong evidence of relations with regional groups along the northern Yellow River. Although this woman was a member of the royal Zhou Ji lineage, she had a prominent display of 13 three-legged jars (sanzuweng) and three tubular containers (dakouzun). These are ceramics which descend from the Shimao-Zhukaigou tradition and survived at Lijiaya; they are not found in Zhou male tombs, but are quite frequent in tombs of consorts of rulers of the Jin state in the same Houma region, as described by Chen (2002) and Khayutina (2017) (see also Rawson 2013). Due to the geographic position of their domain, the Jin state had a porous frontier with the north. Marriage alliances with regional groups with northern connections were, it seems, a feature of the diplomatic strategies of the Jin. In the southern Fen River region, we see the continuation of a tradition that originated in the north of the Arc and was cemented along the Yellow River and in the southern Loess Plateau, most probably as people gradually moved south. The Peng cemetery provides strong evidence that we can associate prone burials with specific regional groups in the Shang period, with several examples also dating to the subsequent Zhou period.
In other areas of the Arc, prone burials are very rare, and the recorded animal-headed knives are often chance finds (Cao 2014, pp. 418-419). The one fully-reported example of a Bronze Age prone burial in the northwest of the Arc was found at Gamatai, in Guinan County of Qinghai Province, on the northern edge of the Tibetan Plateau (Qinghaisheng and Beijing 2016). In the northeast of the Arc, contact with the Ulaanzuukh people in eastern Mongolia was a clear possibility (Tumen et al. 2013). Three prone burials have been located in a small group of east-west oriented tombs at Zhangjiayuan (Ji and Zhang 1993, p. 321). Individuals buried prone, occasionally in east-west orientated tombs containing some animal-head decorated metalwork, have also been found at Luan County in the same region near Tianjin (Zhang and Zhai 2016). Gold personal ornaments, a northern taste, were popular in such tombs. The coincidence of prone burials and gold personal ornaments, favoured in the steppe and not among the Shang, once more confirms the link of prone burials with northerners. Some northern knives and weapons (Zhu 2013, p. 11) are also present in a well-known hoard from Chaodaogou (Fig. 14e-h), south of the Yan Mountains (Zheng 1962; Varenov 1999).
Discussion
Prone burials are significant in the history of the late Shang on account of their association with chariots at Anyang. They are one of several features that appear to be innovations in late Shang rituals, which also include the introduction of ramps for large royal and elite tombs; widespread burial of dogs, not only below the coffins but in other positions; the presence of numerous attendants buried with the elite; and the chariot pits and thousands of sacrifices in pits that characterize the royal cemetery and the burials at Xiaotun. Since during all periods of the late Shang and Zhou careful attention was paid to all rituals including burial, it is not likely that tomb orientation or body position were accidental. Up to thirty per cent of the excavated individuals, elites, death attendants and sacrifices have been revealed as buried prone. This is a relatively large percentage of the known population, and requires an extensive discussion. Such an investigation pertains to the much more difficult questions of the sources of these various ritual mortuary practices, which appear on present evidence to have arrived in the late Shang period. These are very large topics, so discussion here has centred on prone burial associated with the introduction of chariots, the management of horses and other issues of warfare and defence in the afterlife, marked by burial of northern tools and weapons.
In this paper, we have also concentrated on communication with southern and eastern Mongolia, and Inner Mongolia, where prone burial has also been found. The significance of a northern connection and contact with Anyang are supported by the introduction of chariots from the north and particular artefacts at sites along a route south at Zhukaigou, Lijiaya, Jingjie, Qiaobei and Nianzipo, as well as by some examples in the northeastern Arc, where gold ornaments typical of northerners were favoured. This connection is also matched with a shared use of typical ceramics at several sites.
In the context of the late Shang, we have paid particular attention to chariots and the tools employed with them, including the bow-shaped rein holder and curved knives with jingles and animal heads or oval loops as terminals. This set of artefacts, comprising rein holder, knives, and tools, has hitherto not achieved the recognition it deserves as indicating both the active use of chariots in war and the identity of those who drove them. All of these artefacts relate to northern prototypes, found in Siberia or Mongolia (Kuzmina 2004). While, as mentioned, chariots and their horses have long been recognised as coming from the north, arriving in the late Shang period, the coincidence of the arrival at Anyang of prone burial, the chariots, and the set of chariot tools marks a significant change in late Shang ritual practice and belief.
Constructing chariots, breaking and training horses, and managing them, both in driving pairs and day-to-day, are highly specialised skills, as is well known from the cuneiform text in Hittite, c. 1350 BC, on how to train pairs of horses and ensure their well-being (Kelekna 2009, pp. 98-99). Cao Dazhi reinforces this information with evidence from oracle bones that references a search for a well-trained horse to create a pair (Cao 2014, pp. 222-225). Fighting from chariots was a further specialised skill, and northerners who had mastered this may have been valued as members of Shang military forces. Once the chariot had been introduced to Anyang, the vehicles were probably built there locally. Horses, on the other hand, must have been repeatedly sought from the north, as Cao Dazhi has suggested. Moreover, if chariot fighting was a significant aspect of warfare, it is likely that it arose because northerners had first come south to attack the Shang using chariots. As we have seen, quite a number of the people managing chariots were buried prone, as was Ya Chang, who is likely to have been a military leader with a chariot force. A large number of prone burials are also found among the sacrificial pits, including the more than seven hundred individuals with northern knives, who may have been valued or feared for their fighting skills. As already discussed, we cannot readily explain their role in the death rituals; they do, however, reinforce the connection between northerners, recognisable by their weapon set, and prone burial.
While at first sight the large numbers and the very different social positions of the people buried prone are puzzling, we have comparable cases, known not from burials but from oracle bone inscriptions. These mention the Qiang in several contexts (Luo 1991; Shelach 1996; Campbell 2018, pp. 115-116, 203-208). Qiang are often described as human victims to be sacrificed. But the word Qiang 羌 also occurs with the term fang 方, suggesting that Qiang were outsiders from a particular region (fang). The term Duo Qiang 多羌 could refer to 'managers of the Qiang'. The term Ma 馬 for horse was used in similar ways. There are references to Ma fang 馬方, a region of the Ma, and indeed to attacking the Ma. When the term Duo Ma 多馬 occurs, David Keightley notes that, as horses were not used for riding within central China at this date, it is likely that Duo Ma referred to leaders of chariots (Keightley 2012, p. 325). As both the Qiang and the Ma peoples appear in several social or political roles, it is possible that people buried prone may also have fulfilled several roles, or have been a mixed community of several different groups. Some may have been leaders of the chariot forces, others may have been drivers, and yet others, those sacrificed with their knives and shaft-hole axes, may have supported the chariot forces, or may have challenged the Shang.
Further, while some of the people buried prone, especially among the sacrificed, may have recently come southwards and have been part of enemy forces, others may have come from elsewhere and have lived at Anyang for a period. Indeed, the limited isotopic analysis work relating to diet and geographical origins suggests that some people at Anyang had come from beyond the Shang centre, but had then spent part of their lives there (Cheung et al. 2017). It is thus possible that several different groups, with northerners among them, made up the population who were buried prone.
This paper, therefore, offers the following conclusions. The people buried prone at Anyang were among several distinctive groups making up wider Shang society. This heterogeneity was inevitable as the Shang defended their territory and engaged with neighbouring peoples, but also because they sought resources from the north, such as horses, chariots and their drivers, as well as ores from the south. While some weapons originating in the north came with these innovations, others, notably the rein holders and other tools, including copies of northern socketed axes, shaft-hole axes and knives, were made at Anyang but preserved customary techniques of northern chariot management and northern warfare. From the evidence of their weapons, we suggest that among the people buried prone there were some who had contacts with the north, having come south as chariot drivers and/or to join the Shang army.
In the many Shang battles and campaigns, a few of these individuals may have played important roles and have been awarded high elite burial in death. Ya Chang, the occupant of M54 at Anyang Huayuanzhuang, died in battle, and his tomb may have been a reward for his military achievement. He was accompanied by exceptional late Shang bronze vessels, but they appear to have been assembled in an unusual way, perhaps in haste and following a battlefield death. His set of flasks (gu 觚) and cups (jue 爵), for example, was made up from two different groups. He had only one square wine vessel on pointed legs (fangjia 方斝) and one square wine flask (fangzun 方尊), whereas a pair of both would have been more typical. In addition, the location of some of the inscriptions on the bronzes in the tomb is surprising: for example, a pair of inscriptions has, most unusually, been placed inside the neck of the fangzun (Zhongguo 2007a, pp. 117-118), and this may be a sign of a hasty addition or a change to attribute the vessel to Ya Chang after his sudden death. At the same time, other items, such as the colour palette and gold ornaments, retained references either to Ya Chang's possible northern origins or to the northern origins of his predecessors.
Thus, we argue that late Shang prone burials illustrate the diversity of the communities at Anyang. Among these were northerners who acted as chariot drivers, soldiers and leaders employed by the Shang to defeat other northerners. Some of the northern weapons found in Shang burials were brought from the north. Others were copies or Shang versions made and developed at Anyang to complement the identity and roles of their owners. The use of chariots and of a distinctive tool set in northern style indicates the close connections that the late Shang had with their northern neighbours. At the same time, we must recognise that although many outsiders lived within late Shang territory and the elites exploited the chariot-a northern machine-both were rapidly assimilated within what is today recognised as Shang material and ritual culture. A signal of that assimilation is the elaborate bronze decoration on both vehicles and horse harnesses that must have made the chariots glitter in the sun as they moved across the ground.
Multi-Dimensional Astrophysical Structural and Dynamical Analysis I. Development of a Nonlinear Finite Element Approach
A new field of numerical astrophysics is introduced which addresses the solution of large, multidimensional structural or slowly-evolving problems (rotating stars, interacting binaries, thick advective accretion disks, four dimensional spacetimes, etc.). The technique employed is the Finite Element Method (FEM), commonly used to solve engineering structural problems. The approach developed herein has the following key features: 1. The computational mesh can extend into the time dimension, as well as space, perhaps only a few cells, or throughout spacetime. 2. Virtually all equations describing the astrophysics of continuous media, including the field equations, can be written in a compact form similar to that routinely solved by most engineering finite element codes. 3. The transformations that occur naturally in the four-dimensional FEM possess both coordinate and boost features, such that (a) although the computational mesh may have a complex, non-analytic, curvilinear structure, the physical equations still can be written in a simple coordinate system independent of the mesh geometry. (b) if the mesh has a complex flow velocity with respect to coordinate space, the transformations will form the proper arbitrary Lagrangian-Eulerian advective derivatives automatically. 4. The complex difference equations on the arbitrary curvilinear grid are generated automatically from encoded differential equations. This first paper concentrates on developing a robust and widely-applicable set of techniques using the nonlinear FEM and presents some examples.
Introduction
The first problem to be solved with the techniques of numerical astrophysics was the structure and evolution of stars - an "implicit" problem that involves a static or slowly-evolving structure (Chandrasekhar 1957; Aller & McLaughlin 1965). Its solution consists of determining the state variables of the fluid (density, temperature, pressure, flux of radiation, composition) at each radius in the stellar interior, and is obtained by relaxing a large set of coupled nonlinear difference equations (derived from the differential equations of stellar structure) along with boundary conditions at the stellar center and surface. Radial stellar structure is only a one-dimensional problem and, while once considered difficult and CPU-intensive, now is solved easily on personal computers (PCs). Since then, many other fields of numerical astrophysics have been developed: "explicit" hydrodynamic simulations of explosive and jet phenomena (Norman 1997); N-body and smooth particle hydrodynamics (SPH) of discrete or semi-discrete systems of particles (Dubinski & Hernquist 1997; Monaghan 1992); Monte Carlo simulations of radiation flow (Leahy 1997; Park & Hong 1998); etc. All have matured to the point where the solution of three-dimensional, time-dependent problems is not uncommon.
Ironically, however, only modest progress has been made in extending the original implicit problems into several dimensions: rapidly rotating stars, evolving and interacting binaries, detailed accretion disk structure and evolution, etc. There are several important reasons for this. Firstly, the geometry of these systems is unknown until the problem is solved. For example, the shape of the outer surface of a rotating star or (possibly thick and advective) accretion disk will be part of the solution, and the shape of a rapidly rotating stellar core may have a different oblateness (or even prolateness) from that of the outer envelope. No numerical method capable of operating in nearly-arbitrary geometries has been applied extensively in astrophysics. Instead, one either has assumed spherical symmetry and treated only slowly-rotating, perturbation problems (Kippenhahn & Thomas 1970), or has assumed that the isosurfaces of the state variables are coincident (which implies rotation on cylinders via von Zeipel's theorem) and again solved an essentially one-dimensional, or limited two-dimensional, problem (Eriguchi & Müller 1991; Clement 1994). Recently, some progress for two- and three-dimensional stellar models has been made using a multi-domain approach and treating the stellar surface as a discontinuity (Bonazzola, Gourgoulhon, & Marck 1998).
Secondly, even if a general geometrical method applicable to large numbers of two-dimensional and three-dimensional problems could be developed, the current stellar structure methods for solving the immense system of simultaneous nonlinear equations would take a prohibitively long amount of CPU time and memory. For example, a relatively modest problem with $256^3$ grid points and ten state variables at each point would generate a banded matrix $10^{8.2} \times 10^{8.2}$ in size, taking up at least $10^{14.9}$ bytes (0.9 PB) for the non-zero elements. Direct inversion techniques, similar to the Henyey method commonly used in stellar structure (which take the bandedness into account), would take about a thousand years to invert this matrix once on a large parallel supercomputer like the Cray T3D, with perhaps $10^4$ or more such inversions necessary for a complete stellar evolution model.
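These figures can be checked directly. The short sketch below (ours) assumes 8-byte reals, a detail the text does not specify, and uses the bandwidth scaling $\sim VI^{(1-1/D)}$ quoted later in the paper:

```python
from math import log10

# Back-of-envelope check of the quoted matrix sizes. Assumes 8-byte reals
# (an assumption; the text gives no word size).
n, V, D = 256, 10, 3              # nodes per dimension, state variables, dimensions
I = n**D                          # total grid points
side = V * I                      # the matrix is (V*I) x (V*I)
bandwidth = V * I**(1.0 - 1.0/D)  # ~ V I^(1-1/D), the scaling quoted later
band_bytes = 8.0 * side * bandwidth

print(f"matrix side : 10^{log10(side):.1f}")                  # -> 10^8.2
print(f"band storage: 10^{log10(band_bytes):.1f} bytes "
      f"(~{band_bytes/1e15:.1f} PB)")                         # -> 10^14.9, ~0.9 PB
```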
Fortunately, there exist techniques for solving both of these problems that are well developed and have been in use in the engineering field for many years (although it is still rare to see both used at the same time). The Finite Element Method (FEM), introduced more than four decades ago and the preferred method of treating multidimensional structural engineering problems since the late 1960s (Zienkiewicz 1977), approximates objects as distorted lattices of small structural members called elements. For solving large systems of coupled equations generated by such grid problems, the multigrid method was introduced in the 1970s (Brandt 1977) and is now beginning to be used in astrophysics as well (Truelove et al. 1998;Norman 1998). This approach dispenses with the large matrix, cleverly reaching a solution after a few sweeps of the mesh. Together, these techniques promise to make multidimensional astrophysical structural problems possible, and bring the time to solve them within an order of magnitude or so of that for explicit problems. When coupled with the continued expected increase in speed of computers over the next few decades (which has averaged about a factor of 2 every 2 years for the past 20 years), it is not inconceivable that three-dimensional structure problems will soon be solved routinely on future models of PCs and that four-dimensional problems will become commonplace on supercomputers.
This first paper deals with the development of the general geometrical method for solving multidimensional structure and evolution problems. At this stage the speed and efficiency of execution of the method will not be a concern; the focus will be only on producing a robust and widely-applicable set of useful techniques. Our goal will be to determine the essential features of most astrophysical systems of equations and the geometrical demands they place on the numerical method. These properties will then be encoded at the outset, ensuring some measure of generality. The second section describes the set of equations that can be addressed with nonlinear astrophysical finite element analysis and develops a method for solution on fixed (non-moving) grids. In section 3, this then is generalized to include situations where the positions of the grid points are part of the solution and the grid can change with time. Finally, tests and examples are given, using the author's code, including rotating polytropic star models.
The Basic Four-Dimensional, Nonlinear Finite Element Method on Fixed Grids
This section describes the techniques used in the author's computer code, entitled GENRAL, for solving general astrophysical problems. It utilizes the techniques of finite element analysis (FEA) -in use in the field of engineering for some time -but generalizes them to nonlinear equations in four dimensions, instead of linear equations in two or three dimensions. For this initial development, it is assumed that the coordinates of the grid points (or "nodes") at which the variables are evaluated do not change while the solution is being computed.
General Form for Equations of Continuum Astrophysics
Appendix A shows that the differential equations of continuum astrophysics in curved spacetime can be cast into the generic form

$$\Re_q\!\left([w], \frac{\partial [w]}{\partial x}, x\right) \equiv \frac{1}{\sqrt{-g}}\,\frac{\partial}{\partial x^{\beta}}\!\left(\sqrt{-g}\;T_q^{\;\beta}\right) - F_q = 0 \qquad (1)$$

where $[w]^v = w^v$ is a generalized solution vector holding all of the $v = 1, ..., V$ unknown state variables; $\Re_q$ is a generalized residual for each of the $q = 1, ..., V$ equations (which will be forced to zero through numerical relaxation techniques); $T_q$ and $F_q$ are, respectively, the generalized stress tensor and force vector for these equations; and $g$ is the determinant of the metric tensor $\mathbf{g}$ ([-+++]-signature). In a similar manner, the boundary conditions on these equations can be cast in an analogous three-term form, equation (2), for each of the $r = 1, ..., R$ boundary conditions. In general, $R \neq V$ since, depending on the highest order derivative in $\Re_r$, there may be 0, 1, or 2 associated boundary conditions. The last term in (2) is adequate for handling Dirichlet, Neumann, and mixed boundary conditions, such as the radiative condition at a stellar or accretion disk surface. The first two terms are necessary for including constraints on field equations. $\mathbf{S}$ is the projection tensor along the boundary $\partial\Omega$ and orthogonal to the boundary normal $\mathbf{n}$,

$$\mathbf{S} \equiv \mathbf{n} \otimes \mathbf{n} + \mathbf{g} \qquad (3)$$

where $\mathbf{n} \cdot \mathbf{n} = -1$, $\mathbf{n} \cdot \mathbf{S} = 0$, and $\mathbf{S} \cdot \mathbf{S} = \mathbf{S}$; "$\otimes$" is the outer (dyadic) product, and "$\cdot$" is the inner (scalar) product. $\mathbf{S}$ causes the divergence in equation (2) to be performed on the boundary only and the derivative normal to the boundary to be only first order in $n^{\mu}\,\partial/\partial x^{\mu}$. An alternative form for (2) can be written which also has no second derivatives in $n^{\mu}\,\partial/\partial x^{\mu}$. It is important to note that equation (1) includes not only structural and steady problems, but also evolving ones as well. For these cases, in addition to having three spatial coordinates, the computational grid can extend into the fourth (time) dimension, possibly from the initial time step or hypersurface to the final one. While certainly increasing the computational and memory load on the computer, this approach will have distinct advantages over conventional approaches to initial-value problems.

Footnote 2: Throughout the paper the notation of Misner, Thorne, & Wheeler (1973) is used, with Greek letters indicating coordinate indices in four-dimensional spacetime ($\alpha = 0, 1, 2, 3$) and Latin "integer" letters indicating three-space indices only ($i = 1, 2, 3$). The comma denotes ordinary differentiation with respect to the coordinates, while a semicolon will denote covariant differentiation. Repeated indices indicate summation over the entire range of those indices (the Einstein summation convention), so that $g^{\alpha\mu} g_{\mu\beta,\gamma} \equiv \sum_{\mu=0}^{3} g^{\alpha\mu}\,\partial g_{\mu\beta}/\partial x^{\gamma}$. A raised index indicates contravariant properties of the tensor and a lowered index indicates covariant properties. Note that $w^v$ is written as a contravariant vector with a raised index, like the coordinates $x^{\alpha}$; this is partly for convenience (to facilitate the summation convention) and partly to draw attention to the variables as generalized coordinates of the system.
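As a simple concrete illustration, consider the Newtonian Poisson equation in flat space with Cartesian coordinates, where $\sqrt{-g} = 1$; this worked example is ours rather than one of the reductions from Appendix A:

$$\nabla^{2}\Phi - 4\pi G\rho = 0 \quad\Longleftrightarrow\quad T^{\beta} = g^{\alpha\beta}\,\frac{\partial\Phi}{\partial x^{\alpha}}, \qquad F = 4\pi G\rho$$

so that the generic residual $\Re = \Phi_{,\beta}{}^{,\beta} - 4\pi G\rho$ of equation (1) vanishes exactly when Poisson's equation holds.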
The boundary conditions are not necessarily completely described by equation (2). While it is well known that, physically, the boundary of a boundary is zero ($\partial\partial\Omega = 0$), computationally one often introduces sub-boundaries by truncating the mesh or imposing symmetries on the problem. These conditions produce right-angle kinks in the boundary, where $\mathbf{n}$ suddenly rotates by 90° and the boundary conditions abruptly change. Such boundary corners occur, e.g., where the $t = 0$ initial hypersurface intersects the world line of a stellar surface or (in the case of axisymmetry or plane symmetry) where the stellar surface intersects the symmetry axis or plane. For example, when solving Maxwell's evolutionary equations on the four-dimensional domain $\Omega$ under such conditions, they will be bounded on the $t = 0$ portion of $\partial\Omega$ by the initial value (solenoidal and Coulomb) constraints; these will be bounded further at an external stellar 2-surface $\partial\partial\Omega$; and these may be bounded still further by the rotation axis or equatorial plane at edges $\partial\partial\partial\Omega$. Therefore, additional equations, similar to (2), with successive projection of the first two terms into the sub-boundaries of lower dimension, may be needed until one reaches the zero-dimensional $\partial\partial\partial\partial\Omega$ (endpoints of line segments), where the conditions become simply

$$\Re_r \equiv f_r([w], x) = 0 \qquad (4)$$

In the examples in this paper all boundary conditions are of the simple form (4), but in general astrophysical situations the form (2) will be needed.
Continuous Solution: The Element Mesh
Formally, in the finite element method (FEM) the computational domain Ω is subdivided not into nodes, but into sub-domains (δΩ) called "elements" -similar to "zones" or "cells" in the finite difference method (FDM). (Nodes come later, and then only to facilitate the element process.) The elements are constructed in such a way that each function w v is continuous over the entire domain, but its derivatives are only piecewise continuous; i.e., w v is continuously differentiable only within each element. The x α are treated in the same manner; they also are continuous across element boundaries with no spatial "gaps" between elements.
Although a variety of generic element shapes can be used, the most common are triangular and quadrangular. Of course, these assume higher-order shapes in three and four dimensions (i.e., equilateral triangles, tetrahedra, simplices [hyper-tetrahedra]; squares, cubes, and hypercubes), but they still shall be referred to here as the triangular and quadrangular classes. In GENRAL, elements of the quadrangular type are used exclusively because of their convenience. The computational domain is filled with a topologically rectangular set of

$$E = \prod_{\alpha'=1}^{D} \aleph_{\alpha'} \qquad (5)$$

of these building blocks, where $D (\leq 4)$ is the dimensionality of the problem, and $\aleph_{\alpha'}$ is the number of elements along each mesh dimension $\alpha'$. Each element that borders the domain has one surface lying on the boundary that itself is an element of dimension $D - 1$. The total number of such boundary elements enclosing this rectangular mesh is a sum over the rectangular faces:

$$B = 2 \sum_{\alpha'=1}^{D} \frac{E}{\aleph_{\alpha'}} \qquad (6)$$

The element mesh can be distorted by stretching, compressing, bending, or even twisting it to conform to the geometry of the domain (as, for example, in a curvilinear coordinate system). In the engineering FEM this coordinate transformation is called the "isoparametric" transformation, because coordinate values $x^{\alpha}$ and the variables $w^v$ are specified at the same nodal points. In general relativity this transformation is the generalized Lorentz transform

$$L^{\alpha}_{\;\alpha'} \equiv \frac{\partial x^{\alpha}}{\partial \xi^{\alpha'}} \qquad (7)$$

with inverse

$$L^{\alpha'}_{\;\alpha} \equiv \frac{\partial \xi^{\alpha'}}{\partial x^{\alpha}} \qquad (8)$$

where $\xi^{\alpha'}$ is the coordinate in mesh space, with range $0 \leq \xi^{\alpha'} \leq 1$ in each dimension $\alpha'$. Basis vectors along the mesh coordinate direction $\alpha'$, and corresponding 1-forms, are $\mathbf{e}_{\alpha'} = L^{\alpha}_{\;\alpha'}\,\mathbf{e}_{\alpha}$ and $\boldsymbol{\omega}^{\alpha'} = L^{\alpha'}_{\;\alpha}\,\boldsymbol{\omega}^{\alpha}$. Appendix B discusses conditions that may need to be satisfied by this transformation. However, unless one wishes to use the mesh as an actual Lorentz frame of reference, or wants to follow the evolution of all wave phenomena, only the Jacobi condition is necessary for numerical stability:

$$L \equiv \det\|\mathbf{L}\| \neq 0 \qquad (9)$$
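A minimal two-dimensional sketch of this machinery (ours; the node layout and Python form are illustrative, not GENRAL's): nodal real-space coordinates define $x(\xi)$ through the element shape functions, and the Jacobi condition (9) can be checked numerically at any point.

```python
import numpy as np

# Bilinear isoparametric map for one 2-D quadrangular element: real-space
# corner coordinates define x(xi) through the same shape functions used for
# the variables. Illustrative sketch only.
corners = np.array([[0.0, 0.0],   # (xi1, xi2) = (0, 0)
                    [1.0, 0.1],   # (1, 0)
                    [1.2, 1.0],   # (1, 1)
                    [0.1, 0.9]])  # (0, 1) -- a distorted quadrilateral

def shape(xi1, xi2):
    """Bilinear shape functions on the unit square 0 <= xi <= 1."""
    return np.array([(1 - xi1)*(1 - xi2), xi1*(1 - xi2),
                     xi1*xi2, (1 - xi1)*xi2])

def dshape(xi1, xi2):
    """Derivatives dN/d(xi1), dN/d(xi2), one row per node."""
    return np.array([[-(1 - xi2), -(1 - xi1)],
                     [  1 - xi2,  -xi1     ],
                     [  xi2,       xi1     ],
                     [ -xi2,       1 - xi1 ]])

def L_matrix(xi1, xi2):
    """Transformation matrix L^alpha_alpha' = dx^alpha / dxi^alpha'."""
    return corners.T @ dshape(xi1, xi2)

# Jacobi condition det||L|| != 0 (equation 9), checked at the element centre:
L = L_matrix(0.5, 0.5)
print("x(0.5, 0.5) =", shape(0.5, 0.5) @ corners)
print("det L =", np.linalg.det(L))   # must be nonzero for a valid mesh
```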
The Choice of a Basic Coordinate System
In the past, when developing a finite difference numerical simulation code, for example, it has been customary (and considered necessary) to write the differential equations in the same coordinate system described by the computational mesh. That is, if the mesh is spherical-polar, then the equations are written in spherical-polar coordinates, and so on. However, in the numerical method developed in this paper, this degeneracy is neither necessary nor desirable, as the mesh coordinate system is unknown until the problem is solved.
To allow for an arbitrary, unknown mesh, the differential equations will be written in a "basic" or "real-space" ($x^{\alpha}$) system which does not change as the calculation proceeds. The derivatives still will be computed in the mesh ($\xi^{\alpha'}$) system, but, in order to use them in the differential equations, will then be transformed to the basic system using the isoparametric/Lorentz transformation. The choice of coordinate system for the mesh is determined by how one lays out the elements in real space. That is,

$$g_{\alpha'\beta'} = L^{\alpha}_{\;\alpha'}\,L^{\beta}_{\;\beta'}\;g_{\alpha\beta} \qquad (10)$$

gives the metric coefficients in mesh space. However, one still needs to choose a system in which to write the differential equations; but, since the computer will be doing all the curvilinear work for us, one can select a very simple basic system, keeping the coordinates as Cartesian (or as Minkowskian) as possible. For example, in axisymmetric problems, cylindrical coordinates will be used, not spherical-polar. For three-dimensional problems, Cartesian coordinates will be used, no matter how spherical the star or flattened the accretion disk. Any curvilinear properties of the metric orthogonal to the computational domain will be embodied in the volume element $\sqrt{-g}$, and curvilinear behavior within the domain will be handled by the isoparametric transformation.
Discrete Solution: The Nodal Mesh and Interpolation Scheme
As with all continuum numerical methods, the solution is expressed as a finite set of discrete values. In the FDM these are values of the solution at specified points (nodes) in space; in spectral methods these are coefficients of basis or interpolation functions. In the FEM, these discrete values are both nodal values and basis function coefficients. That is, the FEM has properties of both finite difference and spectral methods.
The finite element nodes are distributed within each element in such a way that the $w^v$ can be interpolated across the element in each dimension with at least linear accuracy or better. For quadrangular elements, the simplest approach is to fill each element box with a (possibly hyper-) cubic mesh of $(\wp_{\alpha'} + 1)$ nodes per dimension $\alpha'$, where $\wp_{\alpha'}$ is the order of interpolation in that dimension, and nodes are shared by adjacent elements at all the interfaces (corners, edges, faces, and hyperfaces). For a problem of total number of dimensions $D$, the total number of nodes describing each element is, then,

$$\prod_{\alpha'=1}^{D} (\wp_{\alpha'} + 1) \qquad (11)$$

That is, for four-dimensional elements, $I = 16$ for first-order (linear) interpolation, $I = 81$ for second-order (quadratic) interpolation, and $I = 256$ for third-order (cubic) interpolation - just within each element. The total number of nodes in the entire mesh is (no sum on $\alpha'$)

$$I = \prod_{\alpha'=1}^{D} (\wp_{\alpha'}\,\aleph_{\alpha'} + 1) \qquad (12)$$

The number of nodes on each element's boundary is the total minus those in the interior,

$$\prod_{\alpha'=1}^{D} (\wp_{\alpha'} + 1) - \prod_{\alpha'=1}^{D} (\wp_{\alpha'} - 1) \qquad (13)$$

and for the entire mesh

$$K = I - \prod_{\alpha'=1}^{D} (\wp_{\alpha'}\,\aleph_{\alpha'} - 1) \qquad (14)$$

No matter what the value of $\wp_{\alpha'}$, in these simple cases the basis functions in mesh space for a node $\hat{\imath}$ within a given element $e$ are products of Lagrange interpolation polynomials $\pounds^{\hat{\imath}}_{e\alpha'}$ in each dimension $\alpha'$:

$$N'^{\hat{\imath}}_{e}(\xi) = \prod_{\alpha'=1}^{D} \pounds^{\hat{\imath}}_{e\alpha'}(\xi^{\alpha'}), \qquad \pounds^{\hat{\imath}}_{e\alpha'}(\xi^{\alpha'}) = \prod_{\hat{\jmath} \neq \hat{\imath}} \frac{\xi^{\alpha'} - \xi^{\alpha'}_{\hat{\jmath}e}}{\xi^{\alpha'}_{\hat{\imath}e} - \xi^{\alpha'}_{\hat{\jmath}e}} \qquad (16)$$

where $\xi^{\alpha'}_{\hat{\imath}e}$ is the mesh coordinate value at node $\hat{\imath}$ in element $e$, and $N'^{\hat{\imath}}_{e}$ is the contribution from that element to the basis or "shape" function for node $\hat{\imath}$, as measured in the mesh (primed) system. The range of the nodal indices in the entire mesh is $\hat{\imath}, \hat{\jmath} = 1, ..., I$, but the product in equation (16) only runs over the nodes within element $e$. The total basis function for node $\hat{\imath}$ is, then, a sum over the element contributions (which actually involves only those elements containing that node):

$$N^{\hat{\imath}}(\xi) = \sum_{e} N'^{\hat{\imath}}_{e}(\xi) \qquad (17)$$

Note that each shape function attains unit value at its own node and zero at all other nodes in its associated elements (and in the mesh as well):

$$N^{\hat{\imath}}(\xi_{\hat{\jmath}}) = \delta^{\hat{\imath}}_{\;\hat{\jmath}} \qquad (18)$$

Often the body-centered nodes, and sometimes even face-centered nodes, are removed from the standard Lagrangian elements to form the so-called "serendipitous" elements (Zienkiewicz 1977). In that case, if node $\hat{l}$ is removed, then the basis functions (19) are given by the normal Lagrange shape function with a suitably weighted multiple of the Lagrange shape function for the missing node subtracted off, for each $\hat{l} \neq \hat{\imath}$.
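The tensor-product construction and the delta property are compact enough to verify directly; the following sketch (ours) does so for a quadratic element in two dimensions:

```python
import numpy as np

# Tensor-product Lagrange shape functions for one quadratic (p = 2) element in
# two dimensions: 3 x 3 = 9 nodes per element. Illustrative sketch.
node_pos = np.array([0.0, 0.5, 1.0])   # nodal xi values in each dimension

def lagrange(k, x):
    """1-D Lagrange polynomial that is 1 at node_pos[k], 0 at the others."""
    out = 1.0
    for j, xj in enumerate(node_pos):
        if j != k:
            out *= (x - xj) / (node_pos[k] - xj)
    return out

def N(i, j, xi1, xi2):
    """2-D shape function for node (i, j): a product of 1-D polynomials."""
    return lagrange(i, xi1) * lagrange(j, xi2)

# Verify the delta property N_i(xi_j) = delta_ij over all 9 nodes:
ok = all(np.isclose(N(i, j, node_pos[a], node_pos[b]),
                    1.0 if (i, j) == (a, b) else 0.0)
         for i in range(3) for j in range(3)
         for a in range(3) for b in range(3))
print("delta property holds:", ok)   # -> True
```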
Most of the properties of Lagrangian elements can be illustrated in one dimension. Figure 1 shows a simple 1-dimensional, 5-node mesh and its discretization in linear and quadratic elements. Note that interior shape functions have continuous derivatives at their respective nodes, while shape functions on element boundaries have discontinuous derivatives. (The latter also involve more nodes as they are composed of shape function pieces from adjacent elements.) The FEM, therefore, can be considered to be a multidomain spectral method with each of the thousands to millions of elements being a separate domain.
Because the shape functions are continuous throughout $\Omega$, the solution $\mathbf{w}$ and the coordinates $\mathbf{x}$ are truly continuous functions of position in the mesh,

$$w^v(\xi) = w^v_{\hat{\imath}}\,N^{\hat{\imath}}(\xi), \qquad x^{\alpha}(\xi) = x^{\alpha}_{\hat{\imath}}\,N^{\hat{\imath}}(\xi) \qquad (20)$$

like spectral methods but unlike the FDM, where interpolation is only an ad hoc addition to the scheme. Also, because a unique inverse relation $\xi = \xi(x)$ exists, the variables are implicit functions of position in real space:

$$w^v(x) = w^v(\xi(x)) \qquad (21)$$
Formation of Derivatives and the Differential Equations
We now have a numerical procedure for computing the derivatives of the $w^v$ with respect to $x^{\alpha}$. First, the coordinate transformation matrix $L^{\alpha}_{\;\alpha'}$ is formed and then inverted to obtain $L^{\alpha'}_{\;\alpha}$. Then the derivatives of the variables are computed in mesh space, $w^v_{,\alpha'} = w^v_{\hat{\imath}}\,N^{\hat{\imath}}_{,\alpha'}$, and, finally, transformed to real space by the chain rule:

$$w^v_{,\alpha} = L^{\alpha'}_{\;\alpha}\,w^v_{,\alpha'} = w^v_{\hat{\imath}}\,L^{\alpha'}_{\;\alpha}\,N^{\hat{\imath}}_{,\alpha'} \qquad (25)$$

Although not usually used in practice in the actual computer code, it is sometimes useful for analytic purposes to express the shape functions in real space coordinates and use them to interpolate the $w^v$ and compute their derivatives:

$$w^v(x) = w^v_{\hat{\imath}}\,N^{\hat{\imath}}(x) \qquad (26)$$

$$w^v_{,\alpha}(x) = w^v_{\hat{\imath}}\,N^{\hat{\imath}}_{,\alpha}(x) \qquad (27)$$

The real-space $N^{\hat{\imath}}$ also have the normalized property at their respective nodes,

$$N^{\hat{\imath}}(x_{\hat{\jmath}}) = \delta^{\hat{\imath}}_{\;\hat{\jmath}} \qquad (28)$$

By comparing (26) and (27) with (20) and (25), one concludes that the real-space basis functions and their derivatives are

$$N^{\hat{\imath}}(x) = N^{\hat{\imath}}(\xi(x)), \qquad N^{\hat{\imath}}_{,\alpha}(x) = L^{\alpha'}_{\;\alpha}\,N^{\hat{\imath}}_{,\alpha'} \qquad (29)$$

With expressions for the $w^v$ and $w^v_{,\alpha}$ (either in mesh or real space) we now can calculate the residuals $\Re_q$ (equation 1) at any point in the domain $\Omega$ and $\Re_r$ (equation 4) at any point on the boundary $\partial\Omega$, not just at the nodes. The next step in the development of the astrophysical FEM is to construct a set of $VI$ equations for the $VI$ shape function coefficients ($w^v_{\hat{\imath}}$) that fully describe $w^v(x)$. This is accomplished by integrating the physical differential equations (1) and/or boundary conditions (4) over a function $W^{\hat{\imath}}(x)$ which peaks near (but not necessarily at) node $\hat{\imath}$ and falls to zero far from that node. This produces $VI$ discrete nodal equations:

$$\Im_{q\hat{\imath}} \equiv \int_{\Omega} W^{\hat{\imath}}(x)\,\Re_q\,\sqrt{-g}\;d^4x = 0 \qquad (30)$$

Relaxation schemes in the code then attempt to force the nonlinear $\Im_{q\hat{\imath}}$ to zero. In principle, each $\Im_{q\hat{\imath}}$ is a function of all of the $w^v_{\hat{\jmath}}$. However, in practice, because $W^{\hat{\imath}}$ is peaked near node $\hat{\imath}$, $\Im_{q\hat{\imath}}$ involves only nodes local to $\hat{\imath}$ - in fact, only nodes in those elements containing node $\hat{\imath}$. The $\Im_{q\hat{\imath}}$, therefore, are more similar to difference equations than to spectral equations and, when linearized, produce a banded rather than filled matrix.
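A sketch (ours) of the derivative pipeline just described, reusing the bilinear element of the earlier example: form $L$, invert it, and chain-rule the mesh-space gradient into real space.

```python
import numpy as np

# Real-space gradients via the chain rule (illustrative sketch, same bilinear
# 2-D element as before): compute dw/d(xi) at a point, then transform with the
# inverse transformation matrix L^{alpha'}_alpha.
corners = np.array([[0.0, 0.0], [1.0, 0.1], [1.2, 1.0], [0.1, 0.9]])
w_nodes = np.array([1.0, 2.0, 4.0, 3.0])     # nodal values of some variable w

def dshape(xi1, xi2):
    return np.array([[-(1 - xi2), -(1 - xi1)],
                     [  1 - xi2,  -xi1     ],
                     [  xi2,       xi1     ],
                     [ -xi2,       1 - xi1 ]])

dN = dshape(0.5, 0.5)
L = corners.T @ dN                # L^alpha_alpha'  = dx^alpha  / dxi^alpha'
Linv = np.linalg.inv(L)           # L^alpha'_alpha  = dxi^alpha' / dx^alpha

dw_dxi = w_nodes @ dN             # mesh-space gradient  w_,alpha'
dw_dx = Linv.T @ dw_dxi           # real-space gradient  w_,alpha = L^alpha'_alpha w_,alpha'
print("dw/dx =", dw_dx)
```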
Because $\Re_q$ can contain second derivatives, the integral in equation (30) cannot be performed uniquely for every node using only the interpolation within a single element. The standard solution to this problem, and the key step in the finite element process, is to integrate the weighted residual by parts to arrive at the so-called "weak" form

$$\Im_{q\hat{\imath}} = \oint_{\partial\Omega} W^{\hat{\imath}}\,T^{\beta}_{q}\,d\Sigma_{\beta} - \int_{\Omega}\left(W^{\hat{\imath}}_{,\beta}\,T^{\beta}_{q} + W^{\hat{\imath}}\,F_{q}\right)d\Omega = 0 \qquad (31)$$

where, in the mesh system, the volume scalar is

$$d\Omega = \sqrt{-g}\,L\;d\xi^{0}\,d\xi^{1}\,d\xi^{2}\,d\xi^{3} \qquad (32)$$

the surface 1-form normal to the domain boundary is

$$d\Sigma_{\beta} = \frac{1}{3!}\,\sqrt{-g}\;\epsilon_{\beta\gamma\delta\varepsilon}\,dx^{\gamma}\,dx^{\delta}\,dx^{\varepsilon} \qquad (33)$$

and $\epsilon$ is the flat-space Levi-Civita permutation tensor. In the weak form, all terms involve only first-order derivatives of the variables $w^v$ with respect to the nodal coordinates. Second order derivatives are generated by the $W^{\hat{\imath}}_{,\beta}$ term which, after integration, differences the flux $T^{\beta}_{q}$ on each side of each node in a manner similar to a finite volume scheme.
Note that, in order to generate the nodal equations in the interior of the domain, weights $W^{\hat{\imath}}(x)$ that vanish on the boundary always will be used. Therefore, the first term in equation (31) - the boundary term - will always be zero.
Second-order Equations: The Galerkin Method
The weighted residual method can be derived in a number of ways. In early papers on finite element analysis, only linear problems were addressed and the nodal equations were generated using a variational approach that maximizes the norm of the solution (Zienkiewicz 1977). This led to the form (30) with the shape function itself as the weight:

$$W^{\hat{\imath}}(x) = N^{\hat{\imath}}(x) \qquad (34)$$

This choice for $W^{\hat{\imath}}$ is called the Galerkin method and is especially useful for second-order equations. For example, for the simple Poisson equation in one dimension ($w_{,xx} - \rho = 0$), with quadratic elements and uniform node spacing $\Delta x$, equation (31) generates the following nodal equation at the central node $\hat{\imath}$ of each element:

$$\frac{w_{\hat{\imath}-1} - 2w_{\hat{\imath}} + w_{\hat{\imath}+1}}{\Delta x^{2}} - \frac{\rho_{\hat{\imath}-1} + 8\rho_{\hat{\imath}} + \rho_{\hat{\imath}+1}}{10} = 0 \qquad (35)$$

which is similar to the finite difference form $(w_{\hat{\imath}-1} - 2w_{\hat{\imath}} + w_{\hat{\imath}+1})/\Delta x^{2} - \rho_{\hat{\imath}} = 0$. (Somewhat more complex 4th order difference equations are generated for nodes on element boundaries.) In general the Galerkin weighted residual method generates derivatives similar to those expected in finite difference schemes (although in general geometry), but scalars and source terms are weighted averages of nodes surrounding $\hat{\imath}$ rather than evaluated exclusively at $\hat{\imath}$. An important property of the shape functions hints at a more fundamental interpretation of the weighted residual method that is not discussed usually in the engineering literature. As the element volume $\delta\Omega$ approaches zero, the shape functions become good approximations to the Dirac delta function, $N^{\hat{\imath}}(x) \approx k^{\hat{\imath}}\,\delta\Omega\;\delta(x - x^{\hat{\imath}})$ (no sum on $\hat{\imath}$), where $k^{\hat{\imath}}$ is a scaling constant of order unity, but generally different for each node $\hat{\imath}$. Therefore, with the Galerkin method, the weights $W^{\hat{\imath}}$ in equation (30) are generalized approximations to $\delta(x - x^{\hat{\imath}})$ which, when integrated over a differential equation, generate or "pick out" the corresponding difference equation near node $\hat{\imath}$. As the number of finite elements approaches infinity, the $\Im_{q\hat{\imath}}$ more closely approximate the complete set of $\Re_q$ defined at all points in $\Omega$. Therefore, while originally derived for linear equations, the weighted residual method is valid for nonlinear problems as well.
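Equation (35) can be verified numerically. The sketch below (ours) assembles the weak-form row for the element-centred node with 3-point Gaussian quadrature and recovers both the $w$ stencil and the 1:8:1 source weighting:

```python
import numpy as np

# Numerical check of the central-node Galerkin stencil for w,xx - rho = 0
# on one quadratic element with uniform node spacing dx.
def N(s):   # quadratic Lagrange shape functions, element nodes at s = -1, 0, +1
    return np.array([0.5*s*(s - 1.0), 1.0 - s*s, 0.5*s*(s + 1.0)])

def dN(s):
    return np.array([s - 0.5, -2.0*s, s + 0.5])

dx = 0.1                 # node spacing; the element spans 2*dx, so dx/ds = dx
sg, wg = np.polynomial.legendre.leggauss(3)   # 3-point Gauss rule (exact here)

K = np.zeros((3, 3))     # -int N_i' N_j' dx  (weak form of the w,xx term)
M = np.zeros((3, 3))     #  int N_i  N_j  dx  (consistent source weighting)
for s, w in zip(sg, wg):
    Ns, dNs = N(s), dN(s) / dx
    K -= w * dx * np.outer(dNs, dNs)
    M += w * dx * np.outer(Ns, Ns)

# Row for the element-centred node (local index 1), normalised by the total
# source weight:
print("w stencil   :", K[1] * dx**2 / M[1].sum())   # -> [ 1. -2.  1.]
print("rho weights :", M[1] / M[1].sum())           # -> [0.1  0.8  0.1]
```

The nodal equation for the central node is $K_{1j}\,w_j - M_{1j}\,\rho_j = 0$; dividing through by the total source weight gives exactly equation (35).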
First-order Equations: Petrov-Galerkin Schemes and "Staggered Grids"
The lack of a single, universally-applicable weighting function $W^{\hat{\imath}}(x)$ is the main impediment to developing a truly general simulation code. One must always know the order of differential equation being integrated. For example, while the Galerkin scheme works well for second-order equations, it has the same pitfalls for first-order equations and fluid flow as centered-differencing schemes have in the FDM: leapfrogging, in which important terms in the equations do not depend on variables at the node at which the integral (31) is evaluated ($(\partial w/\partial x)_{\hat{\imath}} \approx (w_{\hat{\imath}+1} - w_{\hat{\imath}-1})/2\Delta x$), and two-point oscillations near shocks.
Such problems can be addressed by using weighting functions other than the $N^{\hat{\imath}}$. These are called Petrov-Galerkin schemes (Hughes 1987). For odd-order equations, functions that shift the peak of the weight away from node $\hat{\imath}$ reduce or eliminate many of these problems. This is the case for weights $W^{PG1}_{\hat{\imath}}$ that are shape functions of twice the element interpolation order, and for weights $W^{PG2}_{\hat{\imath}}$ that are distorted by the shape function derivative along a chosen direction $v^{\alpha}$. In the first case, the weight is centered at $\hat{\imath} + \frac{1}{2}$, which signifies a position in the mesh centered between nodes. The functions $W^{PG1}_{\hat{\imath}}$ peak in between nodes and have almost the same effect as using a staggered grid does in the FDM. Both $W^{PG1}_{\hat{\imath}}$ and $W^{PG2}_{\hat{\imath}}$ generate non-leapfrogging differences ($(\partial w/\partial x)_{\hat{\imath}} \approx (w_{\hat{\imath}+1} - w_{\hat{\imath}})/\Delta x$) and are still generally second-order accurate (or higher), as the scalar source terms are evaluated at the same place as the derivatives.
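The staggering effect of a derivative-distorted weight can be seen on a toy problem. In the sketch below (ours), the distortion constant $c = -\Delta x/2$ is an illustrative choice, not the paper's definition of $W^{PG2}$; it reproduces the one-sided difference quoted above.

```python
import numpy as np

# Derivative-distorted Petrov-Galerkin weight on linear 1-D elements:
# W_i = N_i + c * dN_i/dx.  The value c = -dx/2 is our illustrative choice.
dx = 0.25
sg, wg = np.polynomial.legendre.leggauss(2)      # 2-point Gauss rule (exact here)

def N(s):  return np.array([0.5*(1 - s), 0.5*(1 + s)])   # element nodes at s = -1, +1
def dN():  return np.array([-1.0, 1.0]) / dx              # constant within the element

c = -dx / 2.0
coef = np.zeros(3)    # coefficients of w_{i-1}, w_i, w_{i+1} in int W_i w,x dx
for elem, nodes in [(0, (0, 1)), (1, (1, 2))]:   # the two elements sharing node i
    a = 1 if elem == 0 else 0                    # local index of node i
    for s, w in zip(sg, wg):
        W = N(s)[a] + c * dN()[a]
        for b, g in enumerate(nodes):
            coef[g] += w * (dx / 2.0) * W * dN()[b]

print(coef)   # -> [0., -1., 1.]; dividing by int W dx = dx gives (w_{i+1}-w_i)/dx
```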
An upwinding scheme can be generated by setting $v^{\alpha} \propto u^{\alpha}$ and integrating only the advective terms with this weight. However, this scheme has low order accuracy and is rather diffusive (Hughes 1987). Better methods for handling shocks are the van Leer scheme (van Leer 1979), in which one enforces monotonicity in the gradient of the flux of a conserved quantity, and higher order Godunov schemes (Colella & Woodward 1984; Colella 1990), in which one applies the shock jump conditions within the element itself. Upwinding schemes in the FEM are a sub-field in themselves and are largely beyond the scope of this paper.
Application of Boundary Conditions
Not all of the nodal equations generated by (31) are useful. For example, for each second order differential equation (one where a given $T^{\beta}_{q}$ is a function of the gradient of at least one $w^v$) exactly $K$ of the nodal equations (those integrated over shape functions peaking at boundary nodes on $\partial\Omega$) are meaningless or incomplete. This is due to the absence of elements beyond the boundary needed to complete the integrals. For first-order equations (ones where $T^{\beta}_{q}$ is, at most, a function of $\mathbf{w}$ and $\mathbf{x}$ only) those on only one portion of the boundary must be discarded (e.g., on one side or surface). These ignored nodal equations must be replaced by exactly the same number of boundary or initial conditions. These are generated in a weighted residual manner similar to that in equation (30), except that the integral and weighting functions now are evaluated on the boundary:

$$\Im_{r\hat{k}} \equiv \int_{\partial\Omega} W^{\hat{k}}(x)\,\Re_{r}\;d(\partial\Omega) = 0 \qquad (38)$$

where $d(\partial\Omega)$ is the magnitude of the surface element and the nodes $\hat{k}$ lie on $\partial\Omega$. The number of equations, therefore, will remain equal to the number of unknowns, as is necessary for a well-posed problem.
Integration of the Weighted Residuals
In the FEM the integrals in equations (31) and (38) often are performed numerically using Gaussian integration (Abramowitz & Segun 1965), with all sampled points $x_g$ interior to element boundaries. In practice, the integrals are calculated piecewise, element by element, with each element's contribution to the various integrals summed accordingly. To accomplish this, one defines a local mesh coordinate system, referenced to the element center and parallel to the global $\xi^{\alpha'}$ system:

$$s^{\alpha''} \equiv \frac{2\,(\xi^{\alpha'} - \xi^{\alpha'}_{e})}{\Delta\xi^{\alpha'}_{e}} \qquad (39)$$

where $\xi_e$ is the position in mesh space of element $e$'s center, $\Delta\xi^{\alpha'}_{e}$ is the element width in direction $\xi^{\alpha'}$, and double primes refer to the local element system. The Lagrangian shape functions within each element take on a very simple form in this local system. For linear interpolation in one dimension (with nodes lying at $s^{\hat{1}}_{e} = -1$ and $s^{\hat{2}}_{e} = +1$),

$$N''^{\hat{1}}_{e} = \frac{1}{2}(1 - s), \qquad N''^{\hat{2}}_{e} = \frac{1}{2}(1 + s) \qquad (43)$$

For quadratic interpolation (with element nodes lying at $s^{\hat{\imath}}_{e} = -1, 0, +1$),

$$N''^{\hat{1}}_{e} = \frac{1}{2}s(s - 1), \qquad N''^{\hat{2}}_{e} = (1 - s)(1 + s), \qquad N''^{\hat{3}}_{e} = \frac{1}{2}s(s + 1) \qquad (44)$$

and so on for higher order interpolation. (Equations [43] and [44] are the functions depicted in Figure 1.) Higher dimensional element shape functions are products of these in a manner similar to equations (16) for Lagrangian elements and (19) for serendipitous elements.
Equation (31) then becomes, dropping the first term, as discussed earlier, and summing over elements and Gaussian integration points,

$$\Im_{q\hat{\imath}} = -\sum_{e=1}^{E}\sum_{g=1}^{G}\omega_{g}\left[W^{\hat{\imath}}_{,\beta}\,T^{\beta}_{q} + W^{\hat{\imath}}\,F_{q}\right]_{x_{ge}}\Delta\Omega_{ge} = 0 \qquad (45)$$

where

$$\Delta\Omega_{ge} \equiv \left[\sqrt{-g}\,L\right]_{x_{ge}}\;\prod_{\alpha'=1}^{D}\frac{\Delta\xi^{\alpha'}_{e}}{2} \qquad (46)$$

is the element volume in the real-space coordinate system, $e = 1, ..., E$ is the element number in the mesh, $g = 1, ..., G$ is the number of the Gaussian integration point within element $e$, and $\omega_g$ is the Gaussian weight at that point. Similarly, equation (38) becomes a sum over the boundary elements and their Gaussian points,

$$\Im_{r\hat{k}} = \sum_{b=1}^{B}\sum_{g=1}^{G}\omega_{g}\left[W^{\hat{k}}\,f_{r}\right]_{x_{gb}}\Delta(\partial\Omega)_{gb} = 0 \qquad (47)$$

where $\Delta(\partial\Omega)_{gb}$ is the surface 1-form on the $\alpha'$ boundaries, and $s^{\alpha''} = \pm 1$ there, depending on the boundary.
Because the residual weights [W_î(x_g) = W′′_î(s_g)], their derivatives, and the Gaussian weights ω_g are the same for all elements, they can be precomputed and stored prior to beginning the relaxation of the solution. The only quantities necessary to compute during the relaxation are the T^β_q and F_q at each x_{ge} interior integration point and the f_r at each x_{gb} boundary integration point.
Historically, the number of integration points G used in each element is a function of the expected nonlinearity of the product W_î ℜ_q with respect to position x. In engineering, this is usually of low order (linear or quadratic), but in astrophysics this product can vary with exponential order or higher. Nevertheless, in practice, even with highly nonlinear functions, the author has had quite satisfactory results using the same number of integration points as nodes in each dimension (i.e., G = I). Fewer than this ("underintegration") reduces the order of accuracy or even can produce a singular matrix. More than this ("overintegration") does little to improve accuracy (of order unity improvements only) at great computational expense.
Solution of the Simultaneous Nonlinear Equations: The Multi-Dimensional Henyey Method
For solving the V I nodal equations and boundary conditions, the author currently uses a standard multivariate Newton-Raphson technique, sometimes called the "Henyey" method in astrophysics (Clayton 1968; Aller & McLaughlin 1965). The ℑ_{qî} are linearized by expanding in a Taylor series about the solution w_v, resulting in a matrix inversion problem for the corrections Δw_v:

Σ_v (∂ℑ_{qî}/∂w_v) Δw_v = −ℑ_{qî} . (49)

In the engineering FEM ∂ℑ_{qî}/∂w_v is called the "tangent stiffness" matrix. It has a length V I on a side and bandwidth ∼ V I^{(1−1/D)}. At present GENRAL uses direct methods (Gaussian elimination with lower-upper decomposition) to solve equation (49), repeatedly applying the corrections until the norm over all Δw_v falls below a certain tolerance. The tangent stiffness matrix need not be extremely accurate. Indeed, when Δw << 1, the matrix need not be recomputed at all, with little impact on the rate of convergence and no impact on the accuracy of the solution. Furthermore, the elements of the stiffness matrix in equation (49) can be calculated using numerical differentiation, rather than writing out explicitly the partial derivatives of each equation with respect to each variable. This eliminates the need to know the geometry of the mesh beforehand - a feature important for multidimensional astrophysical structures. Numerical differentiation of ∂ℑ_{qî}/∂w_v is not necessarily more expensive than algebraic differentiation, especially if the baseline residual integrals ℑ^{[n]}_{qî} are calculated only once for each matrix, and those partial derivatives known to be identically zero are not computed.
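A minimal sketch of this iteration (Python; the function names and the finite-difference step size are illustrative, not GENRAL's internals) shows the numerically-differenced tangent matrix and the option of freezing it between steps:

```python
import numpy as np

def newton_henyey(residual, w, tol=1e-10, max_iter=50, refresh_jacobian=True):
    """Multivariate Newton-Raphson with a finite-difference Jacobian."""
    n = len(w)
    J = None
    for _ in range(max_iter):
        R0 = residual(w)                       # baseline residuals
        if np.linalg.norm(R0) < tol:
            return w
        if J is None or refresh_jacobian:
            # One-sided finite-difference Jacobian dR_q/dw_v: needs no prior
            # knowledge of the mesh geometry, only residual evaluations.
            J = np.empty((n, n))
            for v in range(n):
                h = 1e-7 * max(abs(w[v]), 1.0)
                wp = w.copy(); wp[v] += h
                J[:, v] = (residual(wp) - R0) / h
        w = w - np.linalg.solve(J, R0)         # direct (LU) solve
    raise RuntimeError("Newton iteration did not converge")

# Usage: a small nonlinear system with root (1, 1)
f = lambda w: np.array([w[0]**2 + w[1]**2 - 2.0, w[0] - w[1]])
print(newton_henyey(f, np.array([2.0, 0.5])))
```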
Logarithmic Variables
It is quite common for an astrophysical state variable - e.g., the density ρ - to vary by many orders of magnitude over Ω. Therefore, in order to maintain the same relative accuracy over the domain, it may be necessary to solve for a much more slowly varying function, e.g., ρ̃ ≡ log₁₀(ρ). In addition, for variables that can be positive or negative (like velocity) one may need a more complex function ṽ = slog₁₀(v) and its inverse, where S ≡ sgn(v) = sgn(ṽ) and v_scale is a fixed scaling value for v. These "scaled logarithmic variables" are linear for |v| << v_scale and logarithmic for |v| >> v_scale and can be negative or positive. Unfortunately, there is some loss of convenience and intuition in re-writing the equations in terms of these new, modified variables, especially if there are many of this nature. Therefore, the following procedure has been devised in order that the differential equations still can be coded in their basic form (i.e., using ρ and v) while maintaining the accuracy of solving for log₁₀(ρ) and slog₁₀(v):

1. Each variable is flagged as being linear, logarithmic, or scaled logarithmic, and then stored in its modified form w̃_v.

2. Partial derivatives of the integrals in equation (49) are calculated with respect to w̃_v. However, when computing the functions T^β_q, F_q, and f_r, the stored variables and their gradients are re-exponentiated to their unmodified forms.

With this scheme one can choose a variable to be logarithmic or not at runtime, or even switch its character during execution, without modifying the code.
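One plausible realization of such a scaled logarithmic transform and its inverse (an assumption consistent with the limits described above; the paper's exact formulas were lost in extraction) is:

```python
import numpy as np

def slog10(v, v_scale):
    """Linear for |v| << v_scale, logarithmic for |v| >> v_scale (assumed form)."""
    return np.sign(v) * np.log10(1.0 + np.abs(v) / v_scale)

def slog10_inv(v_tilde, v_scale):
    """Exact inverse of the assumed slog10 above."""
    return np.sign(v_tilde) * v_scale * (10.0 ** np.abs(v_tilde) - 1.0)

v_scale = 1.0e3
for v in (1.0, 1.0e6, -1.0e6):
    vt = slog10(v, v_scale)
    print(v, vt, slog10_inv(vt, v_scale))   # round-trips to v
# Small-|v| limit is linear: slog10(v) ~ v / (v_scale ln 10)
```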
Pivoting
Lower-upper decomposition techniques work well only when the matrix elements on the diagonal are decidedly non-zero. That is, one must identify which equation ℑ_{qî} is "for" which variable w_v. The term pivoting refers to the exchange of rows and/or columns in the matrix to ensure that all elements on the diagonal are indeed large - i.e., that the trace of the stiffness matrix is a maximum.
Local, or partial, pivoting ensures that, at a given node, the correct physical equation is paired with the correct variable. For example, in MHD computations, when the electrical conductivity is infinite, the current J is determined by Maxwell's equations, not Ohm's law. Or, in a hydrostatic star the momentum equation determines the pressure structure, while velocity is determined by energy or particle conservation. In the standard partial pivoting algorithm one searches a matrix column for the largest element and switches the row of that element with the one presently on that column's diagonal (Press et al. 1989). When applied at a single node, this algorithm is very successful in automatically pairing equations and variables. That is, the solution will be able to evolve from a dynamic state to a hydrostatic one without re-casting the equations or writing a new simulation code.
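A toy version of this nodal pairing (Python; the 2×2 block values are purely illustrative) shows the column search placing the dominant coefficient of each variable on the diagonal:

```python
import numpy as np

def local_pivot(block):
    """Partial pivoting within one nodal block: for each variable (column),
    swap the equation (row) with the largest coefficient onto the diagonal."""
    block = block.copy()
    order = np.arange(block.shape[0])      # which equation lands in which row
    for col in range(block.shape[1]):
        r = col + np.argmax(np.abs(block[col:, col]))
        block[[col, r]] = block[[r, col]]
        order[[col, r]] = order[[r, col]]
    return block, order

# E.g., an MHD-like block where one equation has a vanishing diagonal
# coefficient for J (infinite conductivity) but another does not:
A = np.array([[0.0, 2.0],
              [3.0, 0.1]])
B, order = local_pivot(A)
print(B)       # large entries now on the diagonal
print(order)   # equation-to-variable pairing chosen automatically
```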
Global pivoting ensures that an equation ℑ_{qî} (integrated near node î) is applied at the correct node ĵ. This is a more difficult task than local pivoting and is not easily automated in our case. The correct identification is not always ĵ = î, especially for first-order equations, for which the answer is determined by where the boundary conditions are applied. Fortunately, the global pivot of the matrix is a property that usually does not evolve with the simulation; it need be determined only once. Therefore, an elaborate pre-pivoting scheme has been devised for GENRAL, in which, based only on the equation order and location of boundary conditions, shape function coefficients are identified with nodal equations, useless nodal equations are discarded, and boundary conditions are inserted. Local pivoting can still shift emphasis for a given variable from one differential equation to another, but the overall global identification of integrals (31) with nodes remains fixed.
The Finite Element Method with Nodal Coordinates as Part of the Solution

In most multi-dimensional astrophysics problems, the grid chosen initially will not be a good match for the final structure, due to a poor fit to the object boundary or poor resolution in areas of rapid gradients. Therefore, the coordinates and/or quantity of nodes should be changed as the solution converges in order to get a better fit ("adaption"). In addition, one needs to allow for the mesh at one time step to be different from that at the previous time step (grid motion).
Adaptive Gridding
Adaptive gridding, as used here, means modifying the mesh spacing in order to achieve greater accuracy or stability without changing the total number of nodes or the topology of the mesh. One-dimensional stellar structure models use a form of adaptive gridding as they utilize a mass coordinate rather than radius, allowing the radius of each mass zone, including the outer stellar radius, to expand or contract depending on the current state of the star. Our general multidimensional adaptive gridding scheme takes the same form as equation (1), where A_{α′β′} is the adaptive gridding tensor. Currently the author is using a diagonal form which ties the mesh spacing to the local gradient of the state variables (equations 53-54). Note the sum over V variables; f_v is a vector of 1's and 0's that, at run time, selects those state variables to which the grid should be adapted; e_{α′} is a unit vector in spacetime along the local mesh direction ξ^{α′}; Δx_{α′} is a measure of the local linear mesh spacing along an element edge (again, no sum on α′); and C₁ ≈ 0.2 is a constant that regulates the strength of the gradient term. Equations (53) provide four constraints on the nodal coordinate values, allowing the mesh spacing along ξ^{α′} to decrease in regions of high gradients in the variables, but to be uniform otherwise. Note that the adaptive gridding equations are not the coordinate conditions needed to complete the description of the metric (see Appendix A). They simply move nodal positions around in an already-determined metric. C₂ is a very small constant (≈ 10⁻¹⁰), such that in the absence of adaptive gridding (i.e., all f_v = 0) and with the proper boundary conditions, the mesh will assume an appropriate curvilinear character with uniform spacing Δx_{α′} in each ξ^{α′}.
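The one-dimensional sketch below (Python) captures the spirit of this scheme - spacing tied to the local gradient with a C₁-like strength constant - though the precise functional form of equations (53)-(54) is not reproduced here:

```python
import numpy as np

def adapt_grid(w_of_x, x, C1=0.2, n_sweeps=200):
    """Iterate node positions so spacing shrinks where |dw/dx| is large
    but stays uniform otherwise (illustrative equidistribution form)."""
    for _ in range(n_sweeps):
        xm = 0.5 * (x[:-1] + x[1:])                       # interval midpoints
        grad = np.abs(np.gradient(w_of_x(xm), xm))
        weight = 1.0 + C1 * grad / (grad.mean() + 1e-30)
        dx = 1.0 / weight                                 # spacing ~ 1/weight
        dx *= (x[-1] - x[0]) / dx.sum()                   # preserve the domain
        x = np.concatenate(([x[0]], x[0] + np.cumsum(dx)))
    return x

# Fermi-Dirac-like test profile with a sharp drop near x = 0.5
w = lambda x: 1.0 / (1.0 + np.exp((x - 0.5) / 0.02))
x = adapt_grid(w, np.linspace(0.0, 1.0, 17))
print(np.diff(x).min(), np.diff(x).max())   # spacing concentrates at the drop
```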
Moving Grids and the Advective Derivative
Adapting the grid to local conditions changes the spatial and time coordinates of the nodes. For an observer traveling along ξ^{0′} this produces what appears to be motion of the grid through space. Now, computational fluid dynamics on stationary, uniformly-spaced meshes is a fairly complex field in itself. On arbitrarily-spaced and moving grids the resulting mixed Lagrangian-Eulerian hydrodynamic equations might seem intractable. However, with the FEM the exact opposite is true. If the elements have an extent in time as well as in space, then the inverse isoparametric transformation will automatically take into account the complicated effects of differencing the fluid equations with respect to a moving grid. It is unnecessary, and indeed incorrect, to attempt to include the grid velocity in the differential equations.
As an example, consider the non-relativistic total advective derivative in flat spacetime, d/dt = ∂/∂t + v^i ∂/∂x^i (56), where v^i is the three-velocity of fluid flow. Now, let v^i_g be a grid three-velocity such that e_{ξ^{i′}} remains parallel to e_{x^i}, although e_{ξ^{0′}} can make an angle (tan θ = |v_g| Δt/|Δx|) with e_t. (These are the conditions LeBlanc & Wilson (1970, 1971) placed on their moving grid in their Lagrangian-Eulerian MHD calculations.) The isoparametric transform and its inverse then take simple forms (equations 57-58; no sum on β or i). Substituting equation (58) into (56) one obtains equation (59). The first term in equation (59) is the apparent time derivative (along ξ^{0′}) at a given node in the grid frame, while the second is simply the moving advective derivative (v − v_g)·∇. Thus the isoparametric transform reproduces the Lagrangian-Eulerian equations of LeBlanc & Wilson under the same conditions.
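Written out, the recovered relation is the standard arbitrary Lagrangian-Eulerian identity (a restatement consistent with the description of equation 59, not a verbatim reconstruction):

```latex
% The lab-frame advective derivative splits into a grid-frame time
% derivative plus advection relative to the moving grid; the Jacobian
% factor d(xi^0')/dt is unity when the mesh time coordinate equals t.
\[
  \frac{dw}{dt}
  \;\equiv\; \frac{\partial w}{\partial t} + v^i\,\frac{\partial w}{\partial x^i}
  \;=\; \left.\frac{\partial w}{\partial \xi^{0'}}\right|_{\xi^{i'}}
        \frac{\partial \xi^{0'}}{\partial t}
  \;+\; \left(v^i - v_g^i\right)\frac{\partial w}{\partial x^i}\, .
\]
```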
Adaptive Mesh Refinement
The purpose of adaptive mesh refinement (AMR) is similar to that of adaptive gridding: to increase resolution in a local region of the mesh. However, in the case of AMR the numbers of elements and nodes usually increase as the calculation proceeds. When the element mesh extends into the time dimension, AMR makes it possible to treat efficiently the propagation of particularly important wave phenomena. Rather than subdivide the entire domain finely, one does so only for the region of spacetime near the null geodesics along which the phenomenon propagates.
The Courant condition (Appendix B) prescribes one form of AMR, as it places an upper limit on the temporal mesh spacing and, therefore, a lower limit on the number of elements necessary to solve a problem. If the Courant condition is not satisfied in a local region, the number of elements in ξ^{0′} will have to be increased.
In the FEM AMR can be achieved by subdividing some of the elements, usually by a power of two in a given dimension, and then rebuilding the interpolation grid of nodes within the newly-gridded subdomain. There are two matters of concern in this case: 1) the subdivision must be done by elements, not nodes, regardless of the final order of the interpolation grid used within each element, and 2) larger elements bounding the more finely subdivided region must have their internal interpolation scheme, and hence their nodal shape functions, modified by the addition of new boundary nodes. That is, these modified elements must be serendipitous elements. This ensures that the functions w v (ξ) and x α (ξ) are continuous across element boundaries and that there are no spatial gaps or "hanging nodes". This second requirement can significantly increase the complexity of the mesh and the number of element types for which quantities need to be pre-computed and stored.
Recently, however, the astrophysical community has been embracing an alternative, multi-level approach to AMR (Truelove et al. 1998;Norman 1998). A region of space is refined not by subdividing the grid cells themselves, but by applying separate, and successively more refined, grids at the same location and with some nodes in common between each level. This hierarchical approach eliminates the hanging node problem without resorting to defining many new serendipitous element types: each grid level is subdivided with standard linear or quadratic elements. This technique also fits naturally into a multigrid iterative scheme for solving the coupled nodal equations.
Tests and Examples
Below are presented some tests of the code GENRAL on problems with known solutions. At the present time the code uses the multi-dimensional Henyey technique to solve the difference equations it generates. As discussed earlier, this approach has severe computer time and memory limitations. Therefore, all the examples involve a much smaller number of elements than the millions that one would use in a typical astrophysical simulation. (Generally, the full code running on a desktop workstation is limited to 4096 elements or 6561 nodes total [64², 16³, or 8⁴ linear elements or 32², 8³, or 4⁴ quadratic elements].) Nevertheless, the tests serve to demonstrate the unique features of the FEM, including its ability to solve nonlinear astrophysics-like problems in multidimensional, arbitrary curvilinear coordinate systems and to achieve high accuracy in the solution by employing higher order interpolation, adaptive gridding, and logarithmic variables.
Fixed Cartesian Grid Tests: Poisson's Equation in One to Four Dimensions
The second set of tests involves solving a differential equation in up to four dimensions, but on a regular Cartesian (not Minkowskian) grid of dimension D. The equation used is Poisson's equation with a known solution, where r ≡ √(x² + y² + z² + t²) is the radial distance from the origin and 0 < x^α < 1. The components of the generalized equation (1) are identified accordingly, with Dirichlet conditions on the entire boundary, where r(∂Ω) is the expression for r evaluated on the boundary. This test exercises the code's ability to solve equations (45) and (47), but does not test the coordinate transformations. Results of the fixed grid tests are given in Table 2, which shows how the accuracy of the solution in different dimensions, determined by the normalized "L2" error norm E^{L2}_w, varies with the number of nodes and elements used. Two differences from the volume and surface area integration tests are worth noting. Firstly, the solution errors for quadratic elements are third-order accurate (E^{L2}_w ∝ Δx³ ∝ (ℵ℘ − 1)⁻³), not fourth-order. Secondly, underintegration (G = ℘^D) does not work. G always must be equal to or greater than (℘ + 1)^D (i.e., ≥ 2^D for linear elements and ≥ 3^D for quadratic elements) in order to generate the proper second-order differences in the Laplacian. Underintegration, at best, reduces the accuracy of the solution by one order. At worst, it destroys nearest neighbor differencing, producing leap-frogged differences, and can lead to no solution at all. "Iso-integration" (G = I) is probably sufficient for any equation of the form (1), but it should be checked in each circumstance to be certain.
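The following one-dimensional sketch (Python; the test problem u = sin(πx) is an assumption, not the paper's r-based solution) reproduces the flavor of these tests: a linear-element Poisson solve with per-element Gauss quadrature whose L2 error norm falls at second order:

```python
import numpy as np

u_exact = lambda x: np.sin(np.pi * x)
f = lambda x: -np.pi**2 * np.sin(np.pi * x)       # so that u'' = f
s_g, w_g = np.polynomial.legendre.leggauss(2)     # 2-point rule on [-1, 1]

def solve_and_error(n_el):
    x = np.linspace(0.0, 1.0, n_el + 1)
    h = x[1] - x[0]
    K = np.zeros((n_el + 1, n_el + 1)); b = np.zeros(n_el + 1)
    dN = np.array([-1.0, 1.0]) / h
    for e in range(n_el):
        xg = x[e] + 0.5 * h * (s_g + 1.0)          # Gauss points, real space
        Ng = np.vstack([(1 - s_g) / 2, (1 + s_g) / 2])
        K[e:e+2, e:e+2] += h * np.outer(dN, dN)    # int N'_i N'_j dx
        b[e:e+2] -= Ng @ (0.5 * h * w_g * f(xg))   # -int N_i f dx
    K[0, :] = 0.0; K[-1, :] = 0.0; K[0, 0] = K[-1, -1] = 1.0
    b[0] = b[-1] = 0.0                             # Dirichlet u = 0 at ends
    u = np.linalg.solve(K, b)
    err2 = 0.0                                     # L2 error of the interpolant
    for e in range(n_el):
        xg = x[e] + 0.5 * h * (s_g + 1.0)
        uh = u[e] * (1 - s_g) / 2 + u[e+1] * (1 + s_g) / 2
        err2 += np.sum(0.5 * h * w_g * (uh - u_exact(xg))**2)
    return np.sqrt(err2)

for n_el in (8, 16, 32, 64):
    print(n_el, solve_and_error(n_el))   # error drops ~4x per doubling (h^2)
```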
Adaptive and Curvilinear Grid Tests
The third set of tests exercises nearly all the features of the code in order to obtain a solution to a rather pathological Poisson problem - a Fermi-Dirac-like function with a cold temperature of 1/f = 0.02. Such sudden exponential drops in the solution are common at stellar core-halo boundaries or stellar surfaces and are difficult to resolve accurately without a large number of nodes or a variable change (to optical depth, for example). For this demonstration the conservative form for the stress and force terms (in up to four Cartesian dimensions) has been chosen (equations 66-67; no sum on β), where a_β = (1, a, b, c) are constants and r ≡ √(x² + a²y² + b²z² + c²t²). (For example, in one dimension, a = b = c = 0; in two dimensions, b = c = 0; and so on.) And, as one is still interested at this stage in testing the FEM machinery and not the astrophysical viability of the code, once again simple Dirichlet boundary conditions are employed in equation (4), rather than, for example, a multipole expansion of the interior solution. The above conservative form (66-67) was chosen in favor of other forms (such as 62-63) because its solutions are particularly accurate for a small number of nodes and suitable for demonstrating adaptive gridding and the use of logarithmic variables. Figure 2 shows the solution of this Poisson problem in one dimension as one applies successively more features of the code. The top panels of Figure 2 show standard fixed, equally-spaced grids of 8 and 16 linear elements respectively. Some improvement in accuracy can be obtained by doubling the resolution, but this incurs additional storage and computational expense.
Turning on the adaptive gridding equations (53), however (middle panels), significantly improves accuracy for the same number of elements. This also demonstrates one aspect of the isoparametric transformation: variable node spacing. A closer examination of this more accurate solution, however (Figure 2, bottom left), shows very large relative errors in the log when w << 1. Nevertheless, these can be overcome easily, without re-writing and re-coding the equations, as the bottom right panel of Figure 2 shows. When w is identified as a logarithmic variable, rather than linear, the solution remains accurate over eleven orders of magnitude. (Note the different node spacing in the bottom panels, with the grid adapting to w on the left and w̃ [= log₁₀ w] on the right.) Figure 3 shows a similar development for a two-dimensional Poisson problem where the Fermi-Dirac surface has an elliptical ratio of 2:1 (a = 2). The first three panels demonstrate the errors possible in locating the surface if the proper coordinate geometry is not used. Additional improvement in accuracy can be obtained by using adaptive gridding (middle right panel). However, this solution for r > 0.5 suffers the same oscillatory errors seen in the one-dimensional case (bottom left), which again can be eliminated by identifying w as logarithmic (bottom right).
It is important to note that, in all of the solutions displayed in Figures 2 and 3, no explicit curvilinear coordinate system is used. The coordinates of the grid points (whether fixed or part of an adaptive gridding solution) are stored only as x_î and y_î, not as r_î and θ_î, for example, and yet are still fully arbitrary (subject to the Jacobi condition). The Poisson equation is written only in terms of x and y as well. Of course, the derivatives are still calculated using the coordinate grids shown, but then they are immediately transformed to (x, y)-space using the transformations (7) and (8). Thus, with these new techniques the grid can be moved around to obtain a more accurate solution while the physical equations remain coded in the same very simple form.
The Fermi-Dirac Poisson tests were used to determine a good value for the adaptive gridding constant in equation (54). Several dozen models like those in Figures 2 and 3 were computed for different values of this parameter. It was found that the accuracy improved by factors of 3-10 as C₁ was increased from 0 to 0.2, but beyond this point the accuracy did not improve much. In fact, for values much greater than 0.2, the models became unstable, often not converging. Therefore, C₁ = 0.2 was chosen as a semi-universal value in the adaptive gridding equation. It has proven to be useful both in the Fermi-Dirac tests in Figures 2 and 3 and in the stellar structure models below.
One important point about adaptive gridding should be mentioned. As currently implemented, the technique is rather volatile and unstable. Unless great care is taken, iterations with adaptive grids often diverge, violating the Jacobi condition in the process. In the case where a single solution to a steady-state problem is sought, sometimes less CPU time will be incurred by subdividing the mesh more finely or using quadratic elements than by using adaptive gridding techniques. On the other hand, when many thousands of successive models are to be computed, as is the case for evolutionary problems, each newly-converged model will be a good initial approximation to the next evolutionary state, yielding convergence for each time step in only a few iterations. In this case, the amount of time spent converging the first adaptively-gridded model will be a small cost compared to the CPU time adaptive gridding saves over the course of the evolution by using a smaller number of elements and nodes to obtain the same high level of accuracy.
Stellar Structure Tests: Polytropic Stars
The fourth series of tests adds the ability to solve a coupled set of both first- and second-order partial differential equations. It also demonstrates the use of √−g to solve a problem in which the basic coordinate system (not just the grid) is curvilinear, due to the assumption of symmetry conditions involving a coordinate direction orthogonal to the computational domain.
Spherical Polytropes in One Dimension
In one dimension the equations for polytropic stellar structure are hydrostatic equilibrium (Euler's equation with zero velocity), Poisson's equation for gravity, and the polytropic equation of state:

dp/dr + (n + 1) ρ dw/dr = 0 (69)
(1/r²) d/dr (r² dw/dr) = ρ (70)
p = ρ^{(n+1)/n} . (71)

Pressure p and density ρ are unity at the stellar center and zero at its surface, and r is the spherical radius coordinate. The polytropic index n is a measure of the hardness of the equation of state; the factor (n + 1) in the hydrostatic equilibrium (HSE) equation is a normalization constant for the gravitational potential w.
In semi-analytic treatments, the hydrostatic equilibrium equation is multiplied by r²/ρ, differentiated with respect to the radius r, and combined with Poisson's equation to give a single second-order equation. Following this procedure here, however, tests only our ability to solve that simple equation and not much else. A slightly stronger test of the method would be to leave the system as a set of coupled equations and identify the generalized stress and force terms accordingly (equations 72-74), with boundary conditions at r = 0 of p = 1 and p_{,r} = 0. Note the need for a non-unit volume element of r² in order to form the proper divergence. A basic curvilinear coordinate system must be used because of the spherical symmetry assumed in directions orthogonal to e_r. Unfortunately, the above approach is still unsatisfactory, because it cleverly casts HSE as a second-order equation, avoiding the first-order equation problems discussed earlier. Also, as shown below, it does not lend itself to generalization to two or more dimensions. To address these issues, the following alternate identification of the stress and force terms for the pressure equation has been chosen:

T^r_p = 0 (75)
F_p = dp/dr + (n + 1) ρ dw/dr , (76)

which casts HSE as a first-order equation (with only the boundary condition p = 1 at r = 0). Boundary conditions on the potential w are w_{,r} = 0 at the stellar center and w = −r_surface/r at the stellar surface. In addition, the condition p = p_s ≡ 10⁻⁴ is applied at the surface on the adaptive gridding equation to determine the stellar radius.

Figure 4 shows an n = 1 polytrope in one dimension, which has the analytic solution ρ = sin(r)/r, p = ρ² (the n = 1 Lane-Emden function). The left panel uses only the Galerkin method, while the right two panels show the solution using the Petrov-Galerkin scheme in equation (37) to integrate the HSE equation. The need to treat first-order equations differently from second-order ones is clearly evident. The leapfrogging first-order differences produced by the Galerkin method not only display point-to-point oscillations, they also miss an implicit boundary condition (p_{,r} = 0 at r = 0, implied by w_{,r} = 0 and equation 76). The Petrov-Galerkin scheme improves the accuracy considerably, and with adaptive gridding remains roughly second-order accurate for linear elements and third-order for quadratic. At the present time no robust, automated method for determining the order of the differential equation has been developed. The code must be told explicitly not only the order of each equation but also the location(s) of the boundary conditions. While this is the greatest obstacle to producing a truly general continuum simulation code to solve all types of equations, it is a relatively modest amount of effort compared with writing a new code for each problem.
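As a quick cross-check of the analytic solution quoted above (under the dimensionless Lane-Emden normalization assumed here), the sketch below verifies that θ(ξ) = sin ξ/ξ satisfies the n = 1 Lane-Emden equation, with θ = 1 at the center and θ = 0 at the surface ξ = π:

```python
import numpy as np

xi = np.linspace(1e-6, np.pi, 2001)
theta = np.sin(xi) / xi
# Lane-Emden residual for n = 1: (1/xi^2) d/dxi (xi^2 dtheta/dxi) + theta = 0
dtheta = np.gradient(theta, xi)
lhs = np.gradient(xi**2 * dtheta, xi) / xi**2 + theta
mask = (xi > 0.2) & (xi < 3.0)           # avoid amplified edge errors
print(np.max(np.abs(lhs[mask])))         # residual at finite-difference level
print(theta[0], theta[-1])               # ~1 at the center, ~0 at the surface
```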
Rotating Polytropes in Two Dimensions
Treatment of a uniformly-rotating polytropic star in two dimensions poses several problems. Firstly, it appears to be an overdetermined system (equations 77-79, where E_rot ≡ ω²/4πGρ is the normalized rotational energy per unit mass, ω is the uniform angular velocity of stellar rotation, and R and Z are the cylindrical radius and axis coordinates), with four equations but only three unknowns. In normal astrophysical situations this dilemma will not arise, as the fluid equations plus the conservation of energy are a well-posed problem. However, it occurs here because two unknowns (v_R and v_Z) and only one equation (conservation of mass) have been removed from the full set. The trick is to convert the two redundant equations for HSE (77) into only one. One possible solution is to form a single second-order equation by taking the divergence of the HSE equation, similar to the standard semi-analytic approach. However, while it works fine in one dimension, this approach is unstable to two-dimensional perturbations when both the Dirichlet and Neumann boundary conditions are applied on the same (r = 0) surface, leaving the stellar surface free. Setting p = p_s at the surface does not help either; in this case the mesh must become adaptive, and this constraint must be used as a boundary condition on the adaptive gridding equations, not on HSE. Moving the Neumann condition to the stellar surface is a better approach, but difficult to apply for more complex problems (e.g., rotating polytropes).
An approach that does work is to project the HSE equation along a direction in which p and w have significant gradients. The projection direction need not be along e_p ≡ −∇p/|∇p|; e_r appears sufficient, even when the polytrope is rotating rapidly. But it must not be orthogonal to e_p, along which the gradients are zero. The components of the general stress-force equation for the rotating polytrope are, therefore, similar to (75)-(76) and (72)-(74):

T^i_p = 0 , (80)

with a sum on i over R and Z, boundary conditions p = 1 and e^i_r w_{,i} = 0 at the stellar center, and p = p_s and w = w_s at the stellar surface, where p_s << 1 is a small fraction of the central pressure and w_s is the specified surface potential. Two different methods were tested for calculating w_s. The first was an exterior multipole expansion, where the P_ℓ are Legendre polynomials and the M_ℓ are the moments of the mass distribution in the star which, because of additional equatorial plane symmetry, are non-zero only for even ℓ. Generally, orders up to L = 12 were used. The second method used a Green's function integral over the domain (equation 89). This expression is valid for all continuous, Newtonian self-gravitating, axisymmetric systems. The single-parameter complete elliptic integral I_e(α) represents the summed relative contributions to the surface potential at (R_s, Z_s) from different angular elements of a ring of matter at (R, Z). I_e diverges with α (a measure of how close the ring is to x_s), but it can be evaluated numerically easily and tabulated to a part in 10¹⁰ accuracy for the useful range 0 ≤ α ≤ 10 (i.e., for rings approaching within only a fraction 2e⁻¹⁰ = 9.1 × 10⁻⁵ of |x_s|), over which the integral lies in the range 1.0 ≤ I_e ≤ 5.1. In all models presented here, even with the largest meshes (33²) and adaptive gridding (R_s/δR_s ≳ 200), α remains well below 8.0 and I_e below 4.2.

As implemented in the author's code, the integral outer boundary condition technique was significantly slower than the exterior multipole expansion, increasing the time to form the stiffness matrix (although not affecting the time to invert it) by factors of several. While the author made no attempt to optimize the implementation, even after such efforts it nevertheless should remain somewhat expensive, as it requires (for each non-zero stiffness matrix element and each right-hand-side vector element) the complete integration of the potential (equation 89) at 4-6 surface points, each integration being the equivalent of a multipole moment computation. However, while the multipole expansion began to break down for modest rotation speeds, the integral technique converged with no problem for all rotation speeds up to breakup, making the extra computational effort worthwhile and necessary. In addition, as the integrals are done only for surface points, this hybrid technique (differential equation with integral boundary conditions) still will be much cheaper than computing the global potential by performing such an integral for every point in the domain.

Figure 5 shows two-dimensional n = 1 non-rotating polytropes for the two element classes, each with 9² nodes - the analog of the middle and right panels of Figure 4 - using the multipole boundary condition. Note especially the variable grid spacing near the stellar surface and the difference in smoothness between linear and quadratic interpolation. Figure 6 shows the same models for the 33² node cases.
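For reference, a standard form of the axisymmetric ring potential involving a complete elliptic integral (a textbook parameterization, not necessarily the paper's I_e(α) convention) can be evaluated directly:

```python
import numpy as np
from scipy.special import ellipk

# Potential at a surface point (Rs, Zs) from a ring of mass dm at (R, Z):
#   dPhi = -(2 G dm / pi) K(m) / sqrt((R+Rs)^2 + (Z-Zs)^2),
#   m = 4 R Rs / ((R+Rs)^2 + (Z-Zs)^2).
# Like I_e, K(m) varies slowly over the useful range and can be tabulated.
G = 1.0

def ring_potential(Rs, Zs, R, Z, dm):
    d2 = (R + Rs)**2 + (Z - Zs)**2
    m = 4.0 * R * Rs / d2                 # scipy's ellipk takes m = k^2
    return -2.0 * G * dm / (np.pi * np.sqrt(d2)) * ellipk(m)

# Sanity check: far from the ring the potential approaches -G M / r
M, R_ring, r = 1.0, 1.0, 100.0
print(ring_potential(r, 0.0, R_ring, 0.0, M), -G * M / r)
```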
Note especially the departure from sphericity in the pressure contours in the models employing linear elements that is absent in the quadratic element cases; the 9² quadratic model is more accurate than the 33² linear model. However, the errors in the two cases scale roughly one full order less accurately than expected - first order for linear and second order for quadratic elements. This may be due to the reflective boundary conditions along the axis and equator, which are only first- and second-order accurate, respectively, in the linear and quadratic cases. The nature of this boundary condition cannot be improved at this time, but will be once iterative/multigrid methods for solving the coupled equations are implemented. This will allow boundary "ghost" elements to be handled easily, allowing application of boundary conditions as accurate as the mesh interior.
The Maclaurin spheroid sequence (a series of n = 0, uniform density polytropes) provides a full two-dimensional test of the method. (Note that the method takes no advantage of the uniform rotation, uniform density, or polytropicity, so the ability of the code to solve the Maclaurin problem is a good indication of how it will do on general multidimensional stellar structure and other more complex problems.) The appearance of Maclaurin spheroids is similar to that of the n = 1 polytropes in Figures 5 and 6, but the logarithm of the pressure varies little with radius except near the surface, where the radial grid spacing decreases dramatically due to the sudden pressure drop. For this reason it is sufficient to use a larger pressure boundary value (p_s = 10⁻² rather than 10⁻⁴) in order to determine the locus of the Maclaurin spheroid surface. Figure 7 shows the analog of Figure 6 for a Maclaurin spheroid with ω²/2πGρ = 0.224 - very near the theoretical limit of 0.2246656 (Tassoul 1978). The integral boundary condition (89) is used to compute the surface potential.
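The quoted limit can be recovered from the classical Maclaurin relation between eccentricity e and rotation rate (as given in, e.g., Chandrasekhar 1969); the sketch below locates its maximum numerically:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def omega2_over_2piGrho(e):
    # omega^2/(2 pi G rho) = sqrt(1-e^2)(3-2e^2) arcsin(e)/e^3 - 3(1-e^2)/e^2
    return (np.sqrt(1 - e**2) * (3 - 2 * e**2) * np.arcsin(e) / e**3
            - 3 * (1 - e**2) / e**2)

res = minimize_scalar(lambda e: -omega2_over_2piGrho(e),
                      bounds=(0.5, 0.999), method="bounded")
print(res.x, -res.fun)   # e ~ 0.92995, maximum ~ 0.2246656
```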
The complete Maclaurin series from ω²/2πGρ = 0 to 0.224 was computed in four different ways, using 9² and 17² nodes for the two classes of elements (linear and quadratic). Values of τ ≡ E_rot/|E_w|, the rotational flattening ratio of the semi-minor and semi-major axes f = 1 − a_Z/a_R, and the total angular momentum J (∝ ω ∫_Ω ρ(R, Z) R² dΩ) have been computed from these models and are compared in Figure 8 with analytic curves from Tassoul (1978) and Chandrasekhar (1969). The fractional errors for the four series are shown in Figure 9. The models are quite accurate for such a small number of elements, with errors in the 10⁻³ to 10⁻⁴ range in the 17² quadratic case. They show the expected result that third-order interpolation is significantly more accurate than second-order, but the decrease in the errors with increasing numbers of elements is not as steep as expected. Most of this behavior is probably due to the less accurate reflective boundary conditions mentioned earlier.
Summary
This paper has developed a general method for solving multidimensional structural and dynamical problems of astrophysics. Virtually all situations involving continuous media are potentially addressable - in normal flat Cartesian space or in curved spacetime. Problems in this area include, but are not limited to, the full structure and secular evolution of viscous, rotating (and even magnetized) stars and accretion disks in two and three dimensions, interacting binaries, asymmetric stellar envelopes and winds, non-radially pulsating stars, nonlinear development of secular and thermal accretion disk instabilities, and stationary or evolving spacetimes.
While this method is most useful for structures that evolve on timescales long compared to a dynamical time, there is no formal restriction on how short the evolution time must be. Therefore, the approach to dynamical instabilities from a stable configuration, and even initial dynamical development, also can be studied, although the author still recommends the use of an explicit code for full dynamical evolution.
The equations of continuum astrophysics have been condensed into a general compact covariant form, and that form encoded into the author's FEM program GENRAL. A user can solve a particular astrophysics problem by supplying a single subroutine that takes as input one given coordinate position, plus the values of the variables and their gradients there, and returns as output the differential equations (i.e., the generalized "stress tensor", "body force vector", and possible boundary conditions appropriate for that problem). The program then generates the nodal or "difference" equations on a user-specified general curvilinear grid using the finite element weighted residual integrals, and solves the resulting large set of coupled equations. While described by discrete nodal values, as in finite difference methods, the finite element solution is continuous, as in spectral methods, due to the finite element interpolation functions. Either second-order (linear interpolation) or third-order (quadratic interpolation) accurate solutions are possible in the code. In addition, the positions of the nodes themselves can be part of the solution (a "rubber" mesh), allowing grids to be fit to unknown boundary shapes and regions of high gradients to be more finely resolved with the same number of mesh points.
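A hypothetical sketch of such an interface (the names, signature, and sign conventions here are illustrative assumptions, not GENRAL's actual API) might look like:

```python
import numpy as np

def user_equations(x, w, dw):
    """Poisson's equation del^2 w0 = rho(x) in an assumed flux-conservative
    convention div(T) + F = 0: T^beta_0 = dw0/dx^beta, F_0 = -rho(x)."""
    rho = np.exp(-np.dot(x, x))          # an illustrative source term
    T = dw[0:1, :]                       # stress: gradient of variable 0
    F = np.array([-rho])                 # body force
    return T, F

# The solver would call this at every Gaussian integration point x_ge and
# assemble the weighted-residual integrals (31) from the returned T and F.
x = np.zeros(3); w = np.array([0.0]); dw = np.zeros((1, 3))
print(user_equations(x, w, dw))
```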
While the method is cast in a full covariant form, it is anticipated that the initial applications will be mainly in the area of non-relativistic stars or accretion disks in static gravitational fields. The covariant form, however, is important even for non-relativistic problems. When the mesh extends into the time domain, even only for one or two elements, the coordinate transformations that are a natural component of the finite element method automatically generate any arbitrary Lagrangian-Eulerian (ALE) advective derivatives needed to take possible grid motion into account.
The method has been demonstrated on astrophysically interesting problems (spherical or rotating polytropic stars) in one and two dimensions, with full adaptive gridding, and on simpler problems in three and four dimensions.
A Note on the Solution of Elliptic Potential Problems
A great deal has been written on the numerical solution of astrophysical potential problems like equation (78). The technique used here has elements of past approaches plus some new features and is well-suited to the FEM. Like many past authors (Clement 1978; Bonazzola, Gourgoulhon, & Marck 1998), the approach here casts the problem as a differential equation with boundary conditions specified on the exterior of the domain. However, rather than being a simple 1/r or low-order multipole potential at a large radius in the vacuum region, the author's preferred boundary condition is an integral solution of the differential equation, specified at the stellar surface. This integral is physically equivalent (on that surface) to the "full integral" technique that computes the potential throughout the computational domain using a Green's function integral rather than solving the differential equation itself (Tassoul 1978; Eriguchi & Müller 1985; Komatsu, Eriguchi, & Hachisu 1989). However, the author's method of evaluating this integral is somewhat different as it requires no expansion in terms of Legendre polynomials, relying instead on a single, slowly-varying, numerically-tabulated function I_e(α) to handle the axisymmetry of the potential field. When calculated in this manner, using the integration techniques already available in the FEM, the numerical integral is a solution of the discrete finite element equations themselves to within the truncation error. The boundary condition and interior differential equation, therefore, match well, leading to good convergence of the models.
This technique will work in any situation where the full integral technique can be used: extension of the Newtonian case into three dimensions will be trivial, and it will be straightforward for the general relativistic case as well. In three dimensions there is no axisymmetry, so I_e(α) will not be needed, and |x − x_s| will take the simple Pythagorean form. For axisymmetric relativistic stars, the four metric potentials are given by three Green's function integrals plus a first-order equation (Komatsu, Eriguchi, & Hachisu 1989). Therefore, the three elliptic equations can be solved in the same manner as Poisson's equation is solved here (although probably using different tabulated I_e functions), preserving the differential equations in the stellar interior but using the integral solution on the surface. The fourth equation would be handled with the Petrov-Galerkin scheme demonstrated in section 4. The advantage of this approach compared to the full integral technique is speed (the number of volume integrals is proportional to the domain surface area, not the volume). The advantage over the multidomain technique is convenience (one does not have to deal with the vacuum region and the matching of stellar and vacuum solutions).
Unresolved Issues
While the code and method are mature enough to begin solving two-dimensional structural problems routinely, there are several unresolved issues, mentioned in the text, that must be addressed more completely before the full potential of the astrophysical finite element method can be realized.
First and foremost are the execution speed and memory issues. While the reader may consider the generation of the transformations and finite element integrals rather time-consuming, by far the greatest use of computer resources is the technique currently used to solve the coupled equations - the Henyey technique. For large meshes in three or more dimensions, it becomes prohibitively expensive, requiring thousands to many millions of years of CPU time (∝ [ℵ℘]^{3D−2}) and equally absurd amounts of memory (∝ [ℵ℘]^{2D−1}) to invert once. However, multigrid methods (Brandt 1977) need only about twice the grid size in storage (∝ [ℵ℘]^D) and only require a few sweeps of the mesh to converge (∝ [ℵ℘]^D log [ℵ℘]^D). The author and P. Godon have been experimenting with modern parallel multigrid algorithms in finite difference codes with considerable success. The CPU-time and memory scalings, and linear speedup on parallel supercomputers, all have been realized. Efforts are currently underway to make GENRAL a parallel, multigrid FEM code.
Implementation of iterative schemes like multigrid for solving the equations will make straightforward the application of accurate reflective and periodic boundary conditions. While possible with the Henyey technique, this process is much more difficult as it involves columns of matrix elements far from the diagonal and special techniques for inversion. With iterative methods, as with explicit codes, one can enclose the computational domain in a layer of "ghost" elements whose properties are determined at each iteration by the interior solution. The ghost element approach will have the same order of accuracy as the interpolation scheme, unlike the current approach for the reflective boundary condition, which uses essentially a backward difference.
Another possibly important issue is time evolution. All examples in this paper, even the four-dimensional ones, are time-independent and use a Cartesian metric. The inclusion of time dependence may be as simple as employing a Minkowski metric and time derivatives of the variables, and letting the finite element machinery solve the problem. Often, however, the addition of a new feature generates new numerical problems which require modification of that machinery. Until more experience is obtained with time-dependent problems, it is not clear whether the techniques discussed here are complete or whether they will need additional major development to handle evolutionary situations.
Finally, many issues remain in the use of the finite element method for dynamical evolution problems. These are currently important topics in the engineering field, but, because explicit finite difference codes do well for astrophysical problems in this area, development of these issues here will have lower priority. They include adaptive mesh refinement (for dynamical collapse situations), implementation of the general boundary conditions in equation (2) (for magnetohydrodynamics and solving Maxwell's or Einstein's equations), and proper upwinding schemes with behavior comparable to the higher-order Godunov schemes (for problems that develop shocks).
A Note on Numerical Relativity with Finite Element Analysis
In numerical relativity it is customary to perform a "3+1 split" of the metric such that

ds² = −α² dt² + γ_ij (dx^i + β^i dt)(dx^j + β^j dt),

where γ_ij is the 3-metric that raises or lowers indices on the shift 3-vector β^i, and α is the "lapse function", all of which are functions of position in spacetime (Arnowitt, Deser, & Misner 1962; York 1979). The 3-metric is specified on the initial hypersurface by solving the field constraints (initial value data), the lapse and shift are computed from four coordinate (or "gauge") conditions, and the Einstein field equations are used to evolve γ_ij to the next hypersurface. The goal is to choose a gauge in which the hypersurfaces do not intersect a singularity before a significant amount of evolution occurs in some part of the mesh. The current method for singularity avoidance is to eliminate pathological parts of spacetime from the mesh ("excise the black hole") (Cook et al. 1998).
While such an approach is also possible with the FEM (Arnold, Mukherjee, & Pouly 1998), advancing time in a step-by-step fashion, the full covariant nature of the method and the lifting of the degeneracy between basic and mesh coordinate systems allow additional approaches to be taken. In particular, it becomes possible to extend the mesh fully in the time dimension, from initial to final hypersurface, choosing a relatively simple gauge for α and β^i. Then, adaptive gridding in all four dimensions can be used to keep the grid boundaries away from singularities and to further adjust the separation in time between spacelike hypersurfaces. Because of the isoparametric transformation, the foliation no longer has to be along surfaces of constant time x⁰. The separation between adjacent surfaces can be non-uniform, the time coordinate can vary considerably over the hypersurfaces, and the final hypersurface even can end at different times. In effect, the adaptive gridding completes, in mesh coordinates, the job of slicing and singularity-avoiding that a poor gauge choice may fail to do. One advantage of this approach is that some or all of the field constraints can be applied on the final, instead of initial, hypersurface, turning an explicit hyperbolic problem into an implicit boundary value problem (like stellar structure) and possibly stabilizing the growth of errors.
However, while such techniques probably can succeed in keeping physical singularities at bay, it is doubtful that they can avoid coordinate singularities in general situations. (These arise in the most benign of curved surfaces - on the surface of the earth, for example.) Apart from knowing the geometry beforehand and choosing the proper basic coordinate system, there are only a few ways to avoid these problems entirely. One is to embed the spacetime in a higher-dimensional, flat Minkowskian space. In principle, as many dimensions as independent spacetime metric coefficients (ten) would be needed for the embedding, although it may be possible with fewer. From the ten hyperspace coordinates, and how they vary in the four-dimensional mesh, one then could derive the local metric of the spacetime and use it in the physical equations. These ten equations for the metric in terms of the hyper-coordinates, plus the six Einstein equations and the four adaptive gridding equations, would be sufficient to determine the twenty independent g_αβ and hyper-coordinates at each node in the finite element mesh. While a fairly immense job for present-day computers, this prescription has the advantage of being singularity-free in general situations.
Another method of avoiding coordinate singularities using the FEM is to dispense with global coordinates entirely, using only line segment lengths and deficit angles to describe the geometry and the Regge calculus to describe the physics (Regge 1961;Holst 1998). At present, this approach has been developed only for simplex-type elements and not hypercubes, so it is not a straightforward application of the code discussed herein. However, it may be useful to recast the Regge calculus for other element types.
Finally, all methods that involve a full four-dimensional finite element spacetime are probably well beyond the capabilities of present computer technology, even with the use of parallel multigrid techniques. Nevertheless, they appear to have such attractive features and elegance that it is important to begin to develop them.
The author is grateful to J. Fanselow for support during the early development of this work, to L. Caroff and M. Bicay at NASA for allowing a small portion of a theoretical astrophysics grant to be used for this purpose, and to the JPL Director's Research and Development Fund for support to complete this work. Discussions with numerous people were very helpful, including G. Lyzenga and A. Raefsky on the finite element method, M. Norman on adaptive gridding, P. Godon on spectral methods and multigrid methods, and W. Cannon, S. Finn, M. Holst, K. Thorne, and J. York on the use of these techniques for general relativity. This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract to the National Aeronautics and Space Administration.
Appendix A. Casting of the Differential Equations of Continuum Astrophysics into General Finite Element Form
This appendix shows that virtually all the equations of astrophysics of continuous media can be cast into the flux-conservative, finite element form (equation 1) and their boundary conditions into equation (2). That is, while possibly second order in spatial and time derivatives, they can be written as the four-divergence of a generalized stress tensor plus a generalized body force vector, each of which is a function of no more than the first spacetime derivative (four-gradient) of the variables. Of course, it is always possible to define additional variables (e.g., the 24 connection coefficients) and turn the field equations and conservation laws into first-order equations involving only the F_q term in equation (1). The challenge, however, is to use only the original metric and field components as variables (avoiding additional computational expense), and still maintain the flux-conservative form. Below is one solution to this problem.
A.1. The Equations in Geometric Form
The discussion here is concerned only with differential equations. Local physics, such as the equations of state, opacity, emissivity, viscosity, etc., is not treated in detail. While having position and time dependence, these processes can be described with simple algebraic equations that do not affect the numerical method used.
The differential equations are the deceptively simple set of the Einstein equations for the gravitational field,

G = 8π T , (A1)

(where G and T are the symmetric Einstein curvature and stress-energy-momentum [SEM] tensors), and Maxwell's equations for the electromagnetic field,

∇ · F = 4π J (A2)
∇ · M = 0 , (A3)

where ∇ is the covariant gradient operator, the antisymmetric Maxwell tensor M = *F is the dual of the antisymmetric Faraday tensor F, and J is the four-current. With the symmetry, equations (A1) are 10 in number, and (A2)-(A3) constitute 8, for a total of 18. However, because of identities satisfied by the Einstein and Faraday tensors, there are actually only 12 independent equations (6 metric and 6 electromagnetic) but 16 unknowns at each point in space: the 10 independent components of the metric g and the 6 independent components (the electric and magnetic fields E and B) of the antisymmetric Faraday tensor. The remaining 4 metric unknowns are determined by the choice of a coordinate system or gauge. The standard method for generating a set of 12 evolutionary equations is to project (A1)-(A3) into the hypersurface normal to a time-like vector (or world line) n with the projection tensor S (equation 3). For example, if n^µ = g^{0µ}/√(−g^{00}), then only the spatial part S_ij will be non-zero, with i, j = 1, 2, 3. In general, however, n can be any time-like vector, so the equations will be left in general form. If the twelve spacelike components of equations (A1)-(A3) are satisfied throughout the four-dimensional spacetime domain Ω (with one factor of S for each tensor order),

S · G · S = 8π S · T · S (A4)
S · (∇ · F) = 4π S · J (A5)
S · (∇ · M) = 0 , (A6)

then all that is necessary to satisfy the timelike components

n · G = 8π n · T (A7)
n · ∇ · F = 4π n · J (A8)
n · ∇ · M = 0 (A9)

throughout all spacetime is to satisfy the latter equations on one hypersurface only. Equations (A4)-(A6), therefore, are the 12 independent differential equations to be solved for the six metric and six electromagnetic field components, while equations (A7)-(A9) are the constraints that need to be satisfied in order for a solution to exist at all. (For reference, equation (A5) is Ampere's law, (A6) Faraday's law, (A7) contains the Hamiltonian and momentum constraints [by further contraction with n or S, respectively], (A8) is Coulomb's law, and (A9) is the solenoidal condition on the magnetic field.) Equations (A4), with (A7) as initial conditions, constitute the Cauchy problem of general relativity. Equations (A6) and (A9) are the covariant form of the Evans-Hawley "constrained transport" method for enforcing the solenoidal constraint in nonrelativistic magnetohydrodynamics (Evans & Hawley 1988). Equations (A5) and (A8) represent constrained transport in the presence of sources. When (A4)-(A6) are solved as Cauchy problems, equations (A7)-(A9) are applied on the initial hypersurface. However, as our approach here is to relax the system for a four-dimensional spacetime, instead of evolving a three-dimensional surface forward in time, they can be applied on any spacelike hypersurface.
In addition to the field equations, there are conservation laws that follow from identities satisfied by the fields. The Einstein curvature tensor is constructed in such a way that ∇ · G = 0, so the conservation of energy and momentum,

∇ · T = 0 , (A10)

must also hold from equation (A1). Similarly, as F satisfies ∇ · (∇ · F) = 0, the four-current must also be conserved:

∇ · J = 0 . (A11)

The field equations then are "closed" by expressing the SEM tensor and four-current in terms of the state variables, and solving the conservation laws of energy and momentum for those variables. For most conceivable astrophysical situations - including those with multi-fluid dynamics, electromagnetic fields and currents, radiation, viscosity, and nuclear reactions - expressions for T and J involve terms with, at most, first-order derivatives of the state variables with respect to space or time. This is true even in situations near black hole horizons where particle interaction and fluid flow time scales are comparable, and equations like Ohm's law, for example, are no longer valid.
A final group of differential equations comes from forming the zeroth, first, and second moments of the Boltzmann-Vlasov equation for each particle species (j) (photons, nuclei, etc.). These determine each species' individual number density, velocity (including the peculiar drift velocity q_{(j)}), and internal energy per particle. The kinetic equations give rise to familiar processes like nuclear burning, radiative transport, viscosity, and electrical conductivity. Nevertheless, they all have the same "conservative" form, with the divergence of a term that involves (at most) first-order spatial derivatives of the state variables. For example, the zeroth moment of these kinetic equations yields a particle conservation law of the form ∇ · [N_{(j)} (u + q_{(j)})] = c_{(j)}, where N_{(j)} is the particle number density and c_{(j)} is the net particle creation rate due to nuclear reactions. For many astrophysical situations, particle conservation will be the only one of these equations needed, the drift velocity and particle energies being determined by the diffusion or other approximations.
A.2. The Equations in Component Form
With no loss of generality one can choose to write the differential equations in a coordinate frame. In this case, the connection coefficients are given by Γ^α_{βγ} = (1/2) g^{αµ} (g_{µβ,γ} + g_{µγ,β} − g_{βγ,µ}) and their trace reduces to Γ^µ_{βµ} = (ln √−g)_{,β}, where g is the determinant of the metric, g ≡ det ||g||. (A15)

B.1. The Jacobi Condition

The condition that the Jacobian of the transformation between the basic and mesh coordinate systems remain positive ensures a one-to-one mapping and a coordinate system that has an arrow of time. It is absolutely necessary that this condition be satisfied in order that the mesh be well behaved.
B.2. The Local-Lorentz Condition
A second possible condition is that the element coordinate system have a locally Lorentz character everywhere, i.e., that ξ^{0′} be the mesh time coordinate. This ensures that, if the spatial portion of the element mesh moves with time, the mesh velocity always will be less than the speed of light. This is important, however, only if the mesh is used as a frame of reference for measuring physical quantities.
There are various ways of ensuring the Lorentz nature of the transformation L^α_{α′}. The safest and simplest way is to ensure that each unit vector in the new space satisfies the proper timelike or spacelike constraint. With the constraints that ξ^{0′} denotes the time dimension and that e_{α′} · e_{β′} = g_{α′β′}, these local Lorentz conditions become

g_{0′0′} < 0 (B2)
g_{i′i′} > 0 (no sum on i′). (B3)

Inequality (B3) is equivalent to demanding that each of the element sides in the ξ^{i′} direction be spacelike. A less stringent, but still sufficient, condition on g_{i′i′} could be derived by choosing a specific timelike vector, such as n_{µ′} = δ^{0′}_{µ′}/√(−g_{0′0′}) (which still requires g_{0′0′} < 0), and then constructing from the corresponding projection tensor a set of three independent vectors orthogonal to n,

s_{i′}^{µ′} = n_{i′} n^{µ′} + δ_{i′}^{µ′} .

The condition that these vectors be spacelike (s_{i′} · s_{i′} > 0) leads to a modified form for inequality (B3). In this less restrictive case, the e_{i′} unit vectors can be a bit timelike, but no more so than that given by the above inequality.
B.3. The Courant Condition
In standard nonrelativistic computational fluid dynamics, in order that the (explicit) forward integration in time be stable, the distance traversed in a single time step by sound waves or by the fluid itself (whichever is faster) must be substantially less than the mesh spacing. The ratio of these distances, called the Courant number C, is chosen to be ∼ 0.1-0.4 or so, depending on the stability of the numerical integration scheme. This nonrelativistic Courant condition (which can be written as −Δt² v²_max/C² + Δx² > 0 for one-dimensional flow) easily generalizes in the four-dimensional general relativistic case to

g_{αβ} Δχ^α Δχ^β > 0 ,

where the vector Δχ = (v_max Δt/C, Δx, Δy, Δz). It further generalizes in the case of a general curvilinear element mesh to

g_{α′β′} Δζ^{α′} Δζ^{β′} > 0 (B4)

in each element, with Δζ = (v′_max Δξ^{0′}/C, Δξ^{1′}, Δξ^{2′}, Δξ^{3′}); Δξ^{α′} is the width of the element in each mesh dimension; and v′_max is a three-velocity magnitude equal to the maximum disturbance speed within the element. Because stable implicit techniques are used in time as well as space, a Courant number very close to unity probably can be tolerated. Therefore, in the case of relativistic flow, where v′_max = 1, the Courant condition reduces to the requirement that the geodesics connecting opposing corners in each element must be spacelike (g_{α′β′} Δξ^{α′} Δξ^{β′} = Δs² > 0).
Generally, the Courant condition (B4) is much too restrictive and is routinely violated in slowly evolving or steady-state problems, where time steps are very long or even infinite. In implicit codes the condition needs to be satisfied only if one wishes to follow every short-timescale transient phenomenon or wave.

Fig. 1. Examples of shape functions in one dimension for linear (top) and quadratic (bottom) elements. The solid line shows the shape function for the $\xi_0$ node (at $\xi = \xi_0 = 0$); the dashed line for the $\xi_1$ node (at $\xi_1 = 0.25$); and so on. Note the element boundary nodes (large open circles) and interior nodes (smaller filled circles in the quadratic case). The derivatives of the shape functions are discontinuous at boundary nodes, although the functions themselves are continuous. Each function attains unit value at its corresponding node and exactly zero at all other nodes in the element. By definition, shape functions are also identically zero in elements not containing their corresponding node.

Fig. 3. In all cases, coordinates and differential equations are expressed in x and y only, and although derivatives are calculated on the curvilinear grid, they are immediately transformed to the (x, y) system and used as such in the equations. Top left: uniformly spaced Cartesian grid ($E^{L2}_w = 0.081$); top right: circular-polar grid ($E^{L2}_w = 0.067$); middle left: elliptical grid with the same axial ratio as the solution ($E^{L2}_w = 0.037$); middle right: adaptive elliptical grid allowing finer resolution of the Fermi surface ($E^{L2}_w = 0.010$); bottom left: same as middle right, but with logarithmic contours (note oscillations similar to those in the bottom left panel of Figure 2); bottom right: resulting solution when solving for $\bar{w} = \log_{10} w$.

Fig. 4. One-dimensional n = 1 spherical polytropic stars with 9 nodes. Left: standard Galerkin weighting of the FEM integrals, with linear elements ($E^{L2}_p = 1.4$); middle: Petrov-Galerkin type 2 weighting and linear elements ($E^{L2}_p = 0.048$); right: same as middle, but with quadratic elements ($E^{L2}_p = 0.0018$). All models use adaptive gridding and logarithmic variables. Note the close nodal spacing near the stellar surface.

Fig. 5. Two-dimensional n = 1 spherical polytropic stars with $9^2$ nodes using the multipole boundary condition ($w^M_s$). Top panels: mesh and pressure contours for (bi-)linear elements; bottom panels: mesh and pressure contours for (bi-)quadratic elements. Contours follow the precise interpolation within the respective elements. Note the adaptive gridding near the stellar surface.

Fig. 7. Similar to Figure 6, but using the integral boundary condition ($w^I_s$) and solving a two-dimensional n = 0 rotating polytropic star (Maclaurin spheroid) with $\omega^2/2\pi G\rho = 0.224$, very close to the theoretical limit of 0.2246656. Top panels: linear elements with $9^2$ and $17^2$ nodes, respectively; bottom panels: quadratic elements with $9^2$ and $17^2$ nodes.

Fig. 8. Although $\omega^2$ is the primary model parameter, the flattening ratio, total angular momentum, and $\omega^2$ itself, with the normalizations in Tassoul (1978), are plotted as a function of the derived parameter $\tau$ for comparison with the analytic theory.

Fig. 9. Same as Figure 8, but as a function of $\omega^2$. Solid lines show models with linear elements, dashed lines quadratic elements; curves without symbols are for $9^2$-node models, those with symbols are for $17^2$-node models.
Stochastic production simulation for generating capacity reliability evaluation in power systems with high renewable penetration
There are increasing challenges in the power industry to plan a system that provides adequate generation capacity to sustain the load when the renewable energy penetration level is extremely high or 100%. This study proposes a stochastic production cost simulation method to evaluate the generating capacity reliability in power systems with high renewable penetration. In comparison with conventional approaches, such as planning reserve margin and probabilistic assessment, the proposed method can consider hourly chronological characteristics of the system operation, which is significant in the planning process for integrating new resources, such as storage, demand response, and renewables. The method was tested on the modified ISO New England system based on two scenarios: 45% renewable penetration and 100% clean energy. In addition, several sensitivity analyses were conducted following each scenario. Simulation results indicated that the proposed method can be used to quantify the reliability value of the system under various renewable penetration levels. When the system is not reliable, this method can be used to determine additional capacity required to ensure system reliability.
INTRODUCTION
Generating capacity reliability, also referred to as generation resource adequacy (RA), is a term frequently used by utilities to evaluate the availability of sufficient resources to meet the system demand under all but the gravest scenarios. Lack of generation capacity may have severe implications for the system, including load shedding, extreme price spikes, and rolling blackouts. An example of insufficient generating capacity with severe impacts was the California Electricity Crisis of 2000-2001. Subsequently, the Californian legislature has required load serving entities to maintain adequate physical generating capacity (including electrical demand response) to meet load requirements such as peak demand and reserves. Over the past decade, power systems worldwide have been transforming to accommodate cleaner energy resources, embracing a more variable resource fleet. Under these circumstances, several countries and regions have set ambitious renewable energy targets. In the United States, California has committed to achieving 100% clean energy by 2045 in Senate Bill 100 [1]. Under this law, 60% of the power purchased by utilities in California must come from renewable resources by 2030, and the remaining 40% must come from "zero-carbon" resources by 2045. In Hawaii, oil accounted for more than two thirds of generating capacity in 2018; however, the state has committed to generating 100% of its power from renewable energy by 2045 [2].
The increasing renewable penetration level will substantially alter the operation and planning of the electrical grid over the next several decades. A challenge faced by the current power industry is to ensure RA alongside the changing generation mix and supply. Renewable energy resources, such as wind and solar, are naturally variable and stochastic [3,4]. For historical reasons, numerous power systems worldwide still use the planning reserve margin (PRM) [5] to evaluate generating capacity reliability. This method may be effective in traditional power systems with a limited amount of renewable energy resources. However, in future power systems with high renewable penetration, this method may lead to inaccurate results because the PRM considers only the annual peak system demand, without the detailed hour-to-hour operational difficulties of the system. In several situations, the system may have sufficient capacity margin but still be unable to meet demand due to operational constraints, such as insufficient ramping capability to meet net load variability. There is an increasing need in the power industry for more advanced methods, such as stochastic production simulation, for generating capacity reliability evaluation.
Recently, the topic of ensuring power system generating capacity reliability considering the impacts of renewable energy resources has received attention in numerous publications. A method to assess the adequacy of generating systems containing wind power by considering wind speed correlation was proposed in [6]. The adequacy of generating systems considering the impact of the operational strategies of storage and hydro energy resources was described in [7], where a Markov chain was used to model the system operational reliability. Reference [8] studied the impact of heterogeneous expansion of renewable energy on RA in interdependent electricity markets. However, none of these studies focuses on the development of a stochastic production simulation framework to evaluate generating capacity reliability under extremely high or 100% variable energy resource (VER) penetration with a realistic dataset. Reference [9] developed a Monte Carlo simulation method to demonstrate the impact of uncertain parameters affecting the reserve margin of a system, a conventional approach to RA modelling. Reference [10] developed a modelling framework to investigate the impact of high penetration levels of VERs and different market designs on achieving RA. A probabilistic methodology for integrated reliability evaluation considering RA and dynamic security assessment in a unified framework was proposed in [11]; however, it did not adopt a stochastic production cost simulation framework, and thus may have underestimated the impact caused by the uncertainty of VERs.
The contributions of this paper include four key highlights. Firstly, the detailed formulations of the stochastic production cost simulation model are presented. Secondly, a practical method for the modelling of random variables to generate simulation scenarios is introduced. Further, the proposed method was implemented on the modified dataset of an actual power system in the United States. Subsequently, several extreme scenarios with 100% renewable energy penetration levels were investigated. The simulation results presented in this paper should serve as a useful guideline for researchers, system planners, and policymakers to design 100% renewable energy power systems in the future.
The rest of this paper is organized as follows. Section 2 introduces two traditional generating capacity reliability evaluation methods. Section 3 details the stochastic production cost simulation method. Section 4 describes the modified ISO New England test system and dataset used for simulation. Section 5 presents the case study results. Finally, Section 6 concludes this work.
Planning reserve margin approach
PRM is a widely used metric for RA in the planning process, calculated as the percentage by which the installed capacity exceeds the peak demand. A typical PRM study requires only a simple calculation, yet the method remains in wide use among utilities. In California, the public utilities commission requires load serving entities to hold sufficient capacity to meet their peak load with a 15% reserve margin for at least a year ahead [12]. At the midcontinent independent system operator, a reference PRM of 17.1% was used for the 2018-2019 planning year; in other words, the total installed capacity had to be 17.1% higher than the peak demand of the system [13]. The PRM is a relatively old metric, developed in the 1940s, but still widely used. With the ongoing development of the power industry, the drawbacks of this capacity-based metric have emerged: (i) it inaccurately assesses the performance of energy-limited resources, for example, hydro capacity with limited water resources; (ii) it does not account for the forced outage rates (FORs) of generators; and (iii) it does not consider the operational characteristics of emerging technologies, such as energy storage and demand response resources. To overcome these drawbacks, the probabilistic assessment approach was proposed for power system generating capacity reliability evaluation.
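To make the metric concrete, the short sketch below computes a PRM and tests it against a reference requirement; the function name is ours, and the example numbers reuse the 2019 installed capacity and peak demand figures quoted later in this paper.

```python
def planning_reserve_margin(installed_capacity_mw, peak_demand_mw):
    """PRM: percentage by which installed capacity exceeds peak demand."""
    return 100.0 * (installed_capacity_mw - peak_demand_mw) / peak_demand_mw

# 2019 system figures from this paper: 31,209 MW installed, 23,988 MW peak.
prm = planning_reserve_margin(31_209, 23_988)
print(f"PRM = {prm:.1f}%, meets a 17.1% requirement: {prm >= 17.1}")
```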
Probabilistic assessment approach
During the initial stages of its development, the probabilistic assessment approach was motivated by the impact of differing failure rates of generation resources on RA evaluation: two otherwise identical systems with different unit failure rates cannot achieve the same reliability level. The most popular metrics used for RA evaluation are the loss of load expectation (LOLE) and the loss of load probability (LOLP). LOLP is the probability that the demand will exceed the capacity during time horizon T, that is,
$$LOLP = P(D > C_T) = 1 - F_D(C_T) \qquad (1)$$
where D is the system demand, $C_T$ is the installed capacity of the system, and $F_D$ is the cumulative distribution function (CDF) of demand. LOLE is the expected number of time units during which demand exceeds the generation capacity. The relationship between LOLE and LOLP is
$$LOLE = LOLP \times T$$
where T is the time horizon, typically 1 year. Both LOLP and LOLE are system-level generating reliability metrics. Another important metric, the effective load-carrying capacity (ELCC), applies to individual generators and is usually used to calculate the capacity value of a resource. The ELCC of a generator is defined as the additional load that the system can supply after adding the generator, without changing the system reliability [14,15].
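As a minimal numerical illustration, assuming a normally distributed system demand with purely hypothetical parameters, LOLP and LOLE can be estimated by sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demand distribution (MW) and installed capacity.
demand = rng.normal(loc=23_988, scale=1_500, size=100_000)
installed_capacity = 27_000.0

lolp = np.mean(demand > installed_capacity)  # estimate of P(D > C_T)
lole = lolp * 8760                           # LOLE = LOLP * T, hours per year
print(f"LOLP = {lolp:.4f}, LOLE = {lole:.1f} h/year")
```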
The probabilistic assessment approach is an improvement over the PRM method, but it has limitations. Firstly, although this approach models the uncertainty of the system, it assumes that the uncertainty models are precise. For example, in Equation (1), the CDF of demand is calculated from the load duration curve, which requires an accurate load forecast and accurate unit FORs. Secondly, to model the FOR, a two-state outage model of units is typically used, and the outage probabilities are convolved with the load duration curve to calculate the LOLP [16]. When the system has numerous units and/or the units have several derated states, the convolution method is computationally intensive. Further, the probabilistic assessment method disregards the hourly chronological characteristics of system operation, which are becoming increasingly important with the large-scale deployment of new generation technologies, such as batteries, distributed generation resources (DERs), and demand response resources. To overcome these drawbacks, the stochastic production cost model (PCM) was proposed and has prevailed in the industry for generating capacity reliability evaluation over the past decade. The next section outlines the details of the stochastic PCM-based method.
3.1 Overview of production cost model

The PCM is an hourly chronological unit commitment and economic dispatch simulation. It captures the costs of operating a fleet of generators, with the objective of minimizing costs, while simultaneously enforcing a wide variety of operating constraints. The advantages of the PCM method include: (i) simulating all the hours in a year, not just the peak hours, thereby providing more detail of the system operation; (ii) enabling complete utilization of the load, solar, and wind forecasts (with the development of advanced forecasting techniques, system planners are now more confident in making decisions for future systems, and the forecasts are captured directly as inputs to the PCM); and (iii) modelling complicated system conditions, such as tie-line flows with neighbouring systems, transmission network congestion, and large-scale integration of behind-the-meter DERs. However, the PCM method has a few disadvantages: (i) it may require a significant amount of simulation time when the system is large; and (ii) the simulation results depend significantly on the quality of the input data, thereby necessitating a data validation process [17]. Figure 1 shows the typical input and output information of the PCM. At the centre of the model are the so-called security constrained unit commitment (SCUC) and security constrained economic dispatch (SCED) algorithms. The input data to the PCM include the physical parameters of the generators, the economic data, FORs, operating requirements, network data, and time-series data for wind, solar, load, and reserve requirements. The output information of the PCM includes the system operation costs, locational marginal prices, and the commitment and dispatch results of each resource.
Production cost model formulations
As illustrated in Section 3.1, the SCUC and SCED algorithms are the key models for production cost simulation. Usually, the SCUC model is run at regular intervals to determine the commitment status of resources. Then, the SCED model is run for specific hours (or subhourly intervals if the resolution is less than 1 h) to obtain the dispatch and system prices of the resources [18-20]. In this section, the details of the SCUC model are presented; the SCED model can easily be obtained by fixing the binary variables in the SCUC model. The SCUC model can be formulated as follows. The objective function is
$$\min \sum_{t \in T} \sum_{g} \Big[ C^{SU}_{g,t}\, y_{g,t} + C^{P}_{g,t}(p_{g,t}) + \sum_{k} C^{R}_{g,t}\, r^{k}_{g,t} \Big] + \sum_{t \in T} VoLL \cdot \sigma^{system}_{t} + OtherPenalties \qquad (1)$$
where t is the time index, g is the generator index, k is the reserve category, y is the unit start-up binary variable, p is the unit generation level, r is the cleared reserve quantity, $C^{SU}_{g,t}$ is the start-up cost of unit g at time t, $C^{P}_{g,t}(\cdot)$ is the energy generation cost function, and $C^{R}_{g,t}$ is the reserve cost of unit g at time t. In this study, three types of reserves are modelled: regulation (REG), spinning (SPIN), and nonspinning (NSPIN) reserves; thus, $k \in \{REG, SPIN, NSPIN\}$. $\sigma^{system}_{t}$ is the load balance violation of the system at time t; it is a slack variable used to quantify balancing violations, and its penalty price, the value of lost load (VoLL), is usually set high (e.g. \$5000/MWh). The term OtherPenalties represents additional penalty functions, such as transmission line limit violations, energy storage capacity violations, and reserve violations. The objective function in Equation (1) minimizes the overall system operational costs plus the operational penalties over the study horizon T, subject to the following constraints. The hourly power balance constraints:
$$\sum_{g} p_{g,t} + \sigma^{system}_{t} = \sum_{n} load_{n,t} + loss_{t}$$
where n is the node index, $load_{n,t}$ is the load connected to node n at time t, and loss represents the transmission loss. Hourly transmission limit constraints:
$$-\overline{F}_{l,t} \le F^{P}_{l,t}(\cdot) \le \overline{F}_{l,t}$$
where l represents the transmission line, $F^{P}_{l,t}(\cdot)$ is the DC power flow equation on the transmission line, and $\overline{F}_{l,t}$ is the power flow limit on the transmission line. $load_{n,t}$ is constant except for demand response resources.
Regulation reserve requirement:
$$\sum_{g} r^{REG}_{g,t} \ge R^{REG}$$
where $R^{REG}$ is the regulation reserve requirement of the system. Regulation plus spinning reserves requirement:
$$\sum_{g} \left( r^{REG}_{g,t} + r^{SPIN}_{g,t} \right) \ge R^{REG} + R^{SPIN}$$
where $R^{SPIN}$ is the spinning reserve requirement of the system. Operating reserves requirement:
$$\sum_{g} \left( r^{REG}_{g,t} + r^{SPIN}_{g,t} + r^{NSPIN}_{g,t} \right) \ge R^{REG} + R^{SPIN} + R^{NSPIN}$$
where $R^{NSPIN}$ is the nonspinning reserve requirement of the system.
Resource capacity limit and ramping constraints:
$$u_{g,t}\,\underline{P}_{g,t} \le p_{g,t} \le u_{g,t}\,\overline{P}_{g,t}$$
$$-Ramp^{DOWN}_{g,t} \le p_{g,t} - p_{g,t-1} \le Ramp^{UP}_{g,t}$$
where $u_{g,t}$ is a binary variable denoting the commitment status of generator g at time t, $\overline{P}_{g,t}$ and $\underline{P}_{g,t}$ are the maximum and minimum output limits of generator g at time t, respectively, and $Ramp^{UP}_{g,t}$ and $Ramp^{DOWN}_{g,t}$ are the upward and downward ramp rates of generator g at time t, respectively. State-of-charge (SOC) constraints for energy storage resources (ESRs) [21]:
$$SOC_{i,t} = SOC_{i,t-1} + \eta\, L^{ESR}_{i,t}\, I_D - P^{ESR}_{i,t}\, I_D$$
$$SOC_{i,0} = SOC^{Initial}_{i}, \qquad 0 \le SOC_{i,t} \le SOC^{Max}_{i}$$
where i is the index of ESRs, $I_D$ is the resolution of the simulation, $\eta$ is the charging efficiency, $P^{ESR}_{i,t}$ is the cleared quantity of generation when the ESR is in generation mode, $L^{ESR}_{i,t}$ is the cleared quantity of load when the ESR is in charging mode, $SOC^{Initial}_{i}$ is the initial state of charge for ESR i, and $SOC^{Max}_{i}$ is the maximum state of charge for ESR i.
Other constraints in the PCM include the minimum up and down times of the generation resources, unit commitment logic, and maximum energy per day for energy-limited resources. For conciseness, these constraints are not presented here; readers can refer to ref. [22] for their details.
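For orientation, here is a minimal single-bus sketch of a unit commitment model of this general form, written with the PuLP modelling library. The generator data, horizon, and prices are hypothetical, and the network, reserve, and storage constraints of the full formulation are omitted; only the load-balance slack that drives the loss-of-load count is retained.

```python
import pulp

T = range(4)                      # hypothetical 4-hour horizon
load = [100, 150, 220, 180]       # MW
gens = {"g1": dict(pmin=20, pmax=120, cost=25, start=500),
        "g2": dict(pmin=30, pmax=150, cost=40, start=300)}
VOLL = 5000                       # value of lost load, $/MWh

m = pulp.LpProblem("SCUC_sketch", pulp.LpMinimize)
u = pulp.LpVariable.dicts("u", (gens, T), cat="Binary")   # commitment status
y = pulp.LpVariable.dicts("y", (gens, T), cat="Binary")   # start-up indicator
p = pulp.LpVariable.dicts("p", (gens, T), lowBound=0)     # output, MW
s = pulp.LpVariable.dicts("s", T, lowBound=0)             # load-balance slack

# Objective: energy cost + start-up cost + penalty on unserved load.
m += pulp.lpSum(gens[g]["cost"] * p[g][t] + gens[g]["start"] * y[g][t]
                for g in gens for t in T) + pulp.lpSum(VOLL * s[t] for t in T)

for t in T:
    m += pulp.lpSum(p[g][t] for g in gens) + s[t] == load[t]  # power balance
    for g in gens:
        m += p[g][t] <= gens[g]["pmax"] * u[g][t]             # capacity limits
        m += p[g][t] >= gens[g]["pmin"] * u[g][t]
        prev = u[g][t - 1] if t > 0 else 0                    # start-up logic
        m += y[g][t] >= u[g][t] - prev

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([pulp.value(s[t]) for t in T])  # nonzero entries flag unserved load
```

Counting the hours in which the slack is nonzero across all scenarios gives the LOLH statistic used in Section 3.4.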
The stochastic production cost simulation process demonstrated in Figure 2 is based on the PCM represented in Equations (1)-(12). If there is no balancing violation in the system, the variable $\sigma^{system}_{t}$ in (1) is zero. Otherwise, the number of hours with $\sigma^{system}_{t} \neq 0$ over the simulation horizon is summed to obtain the loss of load hours (LOLH). Accordingly, there are two methods based on LOLH, as illustrated in Section 3.4, to calculate the LOLE of the system.
Modelling of random variables
In RA studies, the random variables include load, wind generation, solar generation, and the FORs of resources. A challenging aspect of applying stochastic production cost simulation is generating a set of discrete scenarios that can represent the stochastic processes of the random variables. Usually, this is achieved by using a stochastic procedure to generate a scenario tree. In this study, we used the mean reversion stochastic process (MRSP) [23,24] to model the stochastics of load, wind, and solar. Several years of historical data were used to estimate the parameters of the MRSP equations. Once the parameters were determined, a set of different scenarios could be generated through the stochastic process; the details of the calculation are given in the appendix. The modelling of the FOR follows a different process. First, the FORs for each generation unit were obtained from historical data. Then, outage hours were generated randomly and independently throughout the simulation horizon, such that the percentage of outage hours was close to the FORs of the resources.
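As a rough sketch of the scenario-generation idea (the exact MRSP equations are given in the appendix; the Euler discretization and the parameter values below are our assumptions for illustration):

```python
import numpy as np

def mean_reversion_scenarios(mu, kappa, sigma, n_scenarios, seed=0):
    """Generate mean-reverting multiplicative scenarios around a profile.

    mu     : deterministic hourly profile (e.g. an 8760-point load forecast)
    kappa  : mean-reversion speed (per hour)
    sigma  : volatility of the hourly shocks
    """
    rng = np.random.default_rng(seed)
    n_hours = len(mu)
    scenarios = np.empty((n_scenarios, n_hours))
    for s in range(n_scenarios):
        x = 0.0  # relative deviation from the deterministic profile
        for t in range(n_hours):
            # Euler step of dx = -kappa * x * dt + sigma * dW, with dt = 1 h
            x += -kappa * x + sigma * rng.standard_normal()
            scenarios[s, t] = mu[t] * (1.0 + x)
    return scenarios

# Hypothetical flat 1000-MW profile, 100 scenarios as in this study.
profiles = mean_reversion_scenarios(np.full(8760, 1000.0), kappa=0.2,
                                    sigma=0.01, n_scenarios=100)
print(profiles.shape)  # (100, 8760)
```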
Stochastic production simulation process
The stochastic production simulation process is shown in Figure 2. A total of N (usually no less than 100) scenarios were constructed to represent the stochastics of the random variables, and the production cost simulation runs N times by iterating over all the scenarios. Note that the way the scenarios are generated can have a significant impact on the results. To mitigate this issue, one could simulate a sufficiently large number (e.g. tens of thousands) of scenarios; however, this would require significant computational power. The purpose of this study is not to guide decision-making in the planning process of real power systems, but to demonstrate the feasibility of the proposed model with a limited number of simulation scenarios while obtaining insightful conclusions from the simulation results. Hence, only 100 scenarios were simulated for the stochastic PCM.
The process shown in Figure 2 is a Monte Carlo simulation, from which the LOLE index can be calculated. At the centre of the process in Figure 2 is the PCM developed in Section 3.2. All the simulation cases in Section 5 employ the algorithm shown in Figure 2 to evaluate the generating capacity reliability. The North American Electric Reliability Corporation (NERC) has established the so-called "1-day-in-10-year" LOLE criterion for power systems in North America [25]. This LOLE criterion is based on days/year instead of hours/year, primarily because it originated from the PRM method, where the peak load was calculated for each day. However, the PCM is usually run with a 1 h resolution and a 1 year horizon. Thus, it is necessary to convert the days/year criterion of the NERC to an equivalent hours/year number obtained from the PCM results. Generally, there are two ways to achieve this, as illustrated in the sketch after this list:
• Method 1: Convert the 1-day-in-10-year criterion to x-hours-in-10-years, where x is the number of hours with lost load. Billinton et al. [26] showed that the 1-day-in-10-year criterion is equivalent to 7-hours-in-10-years, or 0.7 hours per year. Consequently, if a total of N scenarios is simulated, the maximum allowable number of hours containing lost load is 0.7 × N to meet the NERC requirement. If the total LOLH is less than 0.7 × N, the system is reliable; otherwise, the system is unreliable.
• Method 2: Adhere to the days/year criterion but count the days (instead of hours) with lost load from the production cost simulation results. Under this assumption, a day with 1 h of lost load and a day with 24 h of lost load are equivalent; both are counted as one day of lost load. If a total of N scenarios is simulated, the maximum allowable number of days with loss of load is 0.1 × N to meet the NERC requirement.
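A minimal sketch of both counting conventions, with hypothetical loss-of-load event timestamps (the helper name and data layout are ours):

```python
def lole_check(lol_events, n_scenarios):
    """Apply both criterion conversions from Section 3.4.

    lol_events : list of (scenario, day_of_year, hour) tuples with unserved load
    """
    # Method 1: count hours; the limit is 0.7 * N (0.7 h/yr = 1 day in 10 yr).
    hours_ok = len(lol_events) <= 0.7 * n_scenarios
    # Method 2: count distinct days with any lost load; the limit is 0.1 * N.
    days = len({(s, d) for s, d, _ in lol_events})
    days_ok = days <= 0.1 * n_scenarios
    return hours_ok, days_ok

# Example mirroring Scenario 1: 21 loss-of-load hours across 100 scenarios,
# here placed on 21 distinct days (hypothetical placement).
events = [(i, 200 + i, 19) for i in range(21)]
print(lole_check(events, 100))  # (True, False) under this placement
```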
Base model description
In this study, we performed simulations on a 76-unit, 8-zone system originally developed by the authors in [27,28] based on the structural attributes and data of ISO New England (ISO-NE). The ISO-NE is an independent nonprofit regional transmission organization. The ISO-NE energy region is divided into eight load zones, namely Connecticut, Maine, New Hampshire, Rhode Island, Vermont, Northeastern Massachusetts/Boston, Southeastern Massachusetts, and Western/Central Massachusetts. Simplified transmission modelling was adopted to reduce computation time, as this study focuses mainly on evaluating generating capacity reliability. Since the development of the ISO-NE 8-zone system, the generation resource mix in the ISO-NE system has changed [29]: (i) more natural gas generators were added to the generation fleet; (ii) a few coal units were retired; and (iii) more renewable energy resources were built. To reflect the changes in (i) and (ii), we further calibrated the existing thermal generation data in [27] and [28] to match the 2019 generation mix of ISO-NE, where the capacity of coal units was reduced to 900 MW and the capacity of natural gas units was increased to 15,900 MW. The total installed capacity in 2019 was 31,209 MW, with natural gas being the dominant resource. To reflect the change in (iii), the wind and solar capacities were increased to 1400 MW and 589 MW, respectively. The installed capacity by resource type in 2019 is shown in Table 1.
In the base model, the FORs for thermal generators were obtained from the NERC generating availability data system (GADS) [30]. GADS is recognized as a valuable source of reliability information for total unit and major equipment groups and is widely used by industry analysts in numerous applications. The unit FORs vary with the type and size of generation. Additionally, system operating reserves are considered in the base model: the requirement for the regulation reserve is 300 MW, and the requirements for the spinning and nonspinning reserves are both set to 3.5% of the load in each hour.
Parameter estimation for random variables
As shown in Section 3.3, the random variables are modelled with the MRSP in Equation (A.1). The 2017-2019 historical data for load, solar, and wind, which can be downloaded from the ISO-NE website [31], were used to estimate the parameters. A method similar to that of reference [24] was adopted for generating the stochastic values of load, solar, and wind in multiple scenarios. The features of load, solar, and wind differ: load has both repetitive daily and weekly profiles, solar has only repetitive daily profiles, and wind generally does not have repetitive profiles, as the wind output on two consecutive days may vary significantly. Several techniques were used to capture these features. Firstly, to capture the weekly repetitiveness of the load, the 2017-2019 historical data were aligned by weekly pattern. For example, the first day of 2019 was a Tuesday; hence, for the 2017 and 2018 historical data, the first day was chosen as the first Tuesday of the corresponding year.
Thereafter, to capture the daily repetitiveness of the load and solar, hourly ratios of the historical data relative to the benchmark year were used. For example, with 2019 as the benchmark year, the hourly load and solar ratios are the historical hourly values divided by the corresponding 2019 values. Wind output does not follow repetitive profiles, so the historical wind data were instead normalized by the calculated parameter $\hat{\mu}$ for wind in Table 2, that is, $Wind_{Y,m,d,h}/\hat{\mu}$. The stochastic wind output for target year Y, month m, day d, and hour h is then $\widehat{Wind}_{Y,m,d,h} \times Wind_{Y,m,d,h}/\hat{\mu}$. Again, this procedure was repeated N times to generate N different scenarios. Figure 3 shows 20 stochastic load profiles and the deterministic load profile (shown as the thick dark curve) over three consecutive days of the peak load week.
Case study scenarios
The above model was tested on two different scenarios: a high renewable penetration scenario (Scenario 1) and a 100% clean energy scenario (Scenario 2). The generation capacity data for the first scenario were obtained from the ISO-NE capacity plan for the year 2029 [29], in which the renewable capacity penetration level is 45%. The second scenario was developed from Scenario 1 by increasing the wind and solar capacity to achieve a clean energy penetration level of approximately 100%. The details of the two scenarios are discussed below.
Scenario 1: A 2029 45% renewables case
In the ISO-NE, the state renewable portfolio standard (RPS) requirements promote the development of renewable energy resources to serve the retail load using renewable energy. The generation queue was developed in the ISO-NE region to meet the requirements. Wind generation dominates new resource proposals and solar power ranks second in the generation queue, thereby taking the total wind and solar capacity in 2029 to 15,656 MW and 6711 MW, respectively, as listed in Table 3.
Notably, only utility-scale solar photovoltaic (PV) is included in the solar capacity; PV connected to local distribution utilities, or behind-the-meter PV, is not included. Another important change to the generation mix of ISO-NE in 2029 is the retirement of conventional resources, including coal, oil, and nuclear. Compared to 2019, 2200 MW of oil capacity, the entire coal capacity, and 600 MW of natural gas capacity will be retired by 2029. The third change is the increase in ESRs: as shown in [29], the capacity of ESRs would be 2400 MW in 2029. The generation capacities by resource type for Scenario 1 are shown in Table 3. The capacity of wind and solar accounts for 45% of the total system capacity.
Scenario 2: A 100% clean energy case
We examined whether the LOLE reliability standard could be fulfilled if the system held 100% clean energy. To this end, a scenario excluding all thermal units was created (i.e. Scenario 2); its generation capacity mix is presented in Table 3. All the natural gas, oil, coal, and nuclear units were retired. The ESR capacity remained 2400 MW (more ESR capacity is added in the subsequent section for sensitivity studies). The solar and wind capacities were increased to 2.5 times those in Scenario 1, taking the solar and wind capacity to 16,662 MW and 38,220 MW, respectively. The total installed capacity in Scenario 2 was 59,562 MW, all of it clean energy. The simulation results for both scenarios are presented in the next section. The method developed in Section 3 is used to identify whether the LOLE reliability standard is satisfied. If the reliability standard is satisfied, the thermal generation capacity is decreased continuously until the LOLE value reaches the 0.1-day-per-year reliability standard, in order to find the least thermal generation capacity that keeps the system reliable. If the reliability standard is not satisfied, the additional capacity required to make the system reliable is estimated.
SIMULATION RESULTS
In this section, the stochastic production simulation results for the two scenarios in Section 4.3 are presented. In addition, under each scenario, a few additional cases were studied for sensitivity analysis. The optimization problems were solved using CPLEX [32], with a small mixed-integer programming gap of 0.01% adopted for all simulations to increase solution accuracy.
Results for 45% renewables case
The stochastic production simulation process in Section 3.4 was performed on Scenario 1. To generate the stochastic profiles for the random variables (load, wind, and solar), we first needed their deterministic profiles. To the best of our knowledge, these profiles are not available in the public literature; therefore, we developed deterministic profiles based on the data in the 2019 base case, where the chronological load, wind, and solar profiles could be obtained from historical data. From [29], the peak value of net load (i.e. actual load minus behind-the-meter DERs) in 2029 is close to that in 2019 due to the increase in behind-the-meter DERs; accordingly, we used the 2019 historical load profile as the deterministic load profile in Scenario 1. In addition, compared to the 2019 base case in Table 1, the solar and wind capacities in Scenario 1 increased 11.4 and 11.2 times, respectively. Therefore, we multiplied the solar (wind) data in each hour of 2019 by 11.4 (11.2) to obtain the deterministic solar (wind) profile for Scenario 1. After building the deterministic profiles for load, solar, and wind, the method described in Section 4.2 was used to create 100 (i.e. N = 100) stochastic profiles for each variable; Figure 3 shows the first 20 stochastic load profiles developed with this method. Next, the stochastic production cost simulation was iterated 100 times, each iteration running an annual 8760 h production cost simulation, and the reliability metrics were calculated after the 100 iterations were completed. In this study, Method 1 from Section 3.4 was adopted for evaluating the LOLE metric. In the total 876,000 h of the 100 scenarios, unserved load was found in only 21 h. Thus, the LOLE was 0.21 h per year (21 h per 100 years); based on Method 1, the equivalent LOLE by day was 0.03 day-per-year. This is smaller than the 0.1 day-per-year NERC standard; consequently, we concluded that the Scenario 1 case was reliable.
The above results reveal that additional thermal units could be retired while still meeting the NERC reliability standard. We conducted sensitivity studies to explore the extent of thermal capacity retirement at which the LOLE in Scenario 1 reaches 0.1 day-per-year. Firstly, the following case was studied:
• Scenario 1.1: retire 1500 MW of oil capacity from Scenario 1, while the capacities of other resources remain unchanged.
We started by retiring oil resources because they are generally the most expensive to operate and have the highest emission rates. After running 100 iterations with the PCM, unserved load existed in 96 h. The LOLE of the system using Method 1 was calculated as 96 ÷ 70 × 0.1 = 0.14 day-per-year. This is larger than the 0.1-day-per-year NERC standard, thereby failing the reliability standard and implying that too much oil capacity had been retired. The oil capacity could be increased step by step until the reliability standard was again satisfied.
However, there is a faster approach. Based on the simulation results of an unreliable case such as Scenario 1.1, we can estimate the quantity of capacity required to make the system reliable. The procedure is as follows. Firstly, plot the loss of load duration curve, as shown in Figure 4: the x-axis spans the 96 h that have unserved load, the y-axis is the MW quantity of unserved load in each hour, and the duration curve is obtained by sorting the MW quantities of the 96 h in decreasing order. Secondly, find the maximum allowable number of loss-of-load hours (0.7 × N = 70 for N = 100); in Figure 4, the corresponding y-axis value at hour 70 was 140 MW, which implies that approximately 140 MW of additional capacity would be required to make the case in Scenario 1.1 reliable. A summary of the results for Scenario 1 and Scenario 1.1 is shown in Table 4. Note that the 140 MW of additional capacity is an ideal number: the system becomes reliable only if all 140 MW of the added capacity shows up in every hour with unserved load. This availability assumption is typically difficult to guarantee for two reasons: (i) units have FORs, and thus may be on outage during those hours; and (ii) even when the units are not on outage in those hours, they may still be unavailable due to other constraints, such as minimum down time and ramp rate limitations. Therefore, the actual quantity of additional capacity required to make the case in Scenario 1.1 reliable should be larger than 140 MW. We conducted further sensitivity studies by increasing the oil capacity in incremental steps; the system became reliable when 300 MW of additional oil capacity was added to the case in Scenario 1.1.
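The capacity-estimation step can be written compactly: sort the hourly shortfalls into a duration curve and read off the value at the allowable-hours threshold. The sketch below uses randomly generated shortfalls in place of the actual Scenario 1.1 results.

```python
import numpy as np

def required_perfect_capacity(unserved_mw, n_scenarios):
    """Estimate 'perfect' capacity needed to meet the 0.7 h/yr criterion.

    unserved_mw : MW shortfall for each hour that has unserved load
    """
    allowed_hours = int(0.7 * n_scenarios)       # e.g. 70 for 100 scenarios
    duration_curve = np.sort(unserved_mw)[::-1]  # decreasing order
    if len(duration_curve) <= allowed_hours:
        return 0.0                               # already reliable
    return duration_curve[allowed_hours]         # MW at the threshold hour

# Hypothetical: 96 loss-of-load hours, shortfalls between 20 and 400 MW.
rng = np.random.default_rng(1)
shortfalls = rng.uniform(20, 400, size=96)
print(f"Required capacity ~ {required_perfect_capacity(shortfalls, 100):.0f} MW")
```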
Results for 100% clean energy case
The proposed stochastic production cost simulation process was then conducted on Scenario 2, the 100% clean energy case. Although the total capacity was 59,562 MW, much higher than the deterministic peak load of 23,988 MW, the system could still be unreliable because wind and solar have low capacity factors and may not be able to serve the load when needed. This was confirmed by the stochastic production cost simulation results, which showed unserved load in approximately 45% of the total hours in the 100 simulations; the LOLE of the system in Scenario 2 would thus be prohibitively large. Consequently, we conducted the following sensitivity studies to reduce the LOLE of the system. In the first case (Scenario 2.1), additional energy storage capacity was added to Scenario 2. Test results showed that the number of hours with unserved load still accounted for approximately 40% of the total hours in the 100 iterations. The system remained very unreliable because energy storage is a passive type of resource: it can only absorb or discharge energy generated by other resources in the grid. In Scenario 2.1, the ESRs did not have sufficient stored energy to meet the load in approximately 40% of the total hours.
In Scenario 2.2, 15,300 MW of natural gas capacity was added to Scenario 2. The system no longer comprised 100% clean energy; however, it may have 100% net clean energy if the interchange with neighbouring networks is considered. Our simulation results showed that 74% of the total energy was supplied by renewables, 21% by natural gas units, and 5% by demand response and imports. The curtailment of wind and solar accounted for 24% of the total energy. The quantity of renewables curtailment was close to the energy generation from natural gas units; if the curtailed wind and solar could be exported to neighbouring grids, the net energy generated by clean units (including renewables, demand response, ESR, and pumped storage) was nearly 100%. The LOLE of the system in Scenario 2.2 was 1.75 days per year, as shown in Table 5, so the system was still unreliable. Using the method demonstrated in Figure 4, 4362 MW of "perfect" capacity would be needed to make the system reliable. Figure 5 shows the number of hours with loss of load in each month. Most of the lost load events occurred in the summer (July and August) and winter (January and December), when system demand was high. Figure 6 shows the number of hours with lost load by hour of day. Most of the lost load events occurred in the late afternoon and early evening, when the load was high and solar energy diminished to zero; hour 19 saw the largest occurrence of load loss events.
In Scenario 2.3, an additional 5000 MW of 4-h batteries were added to Scenario 2.2 to see whether batteries could improve the system reliability level. As shown in Table 5, the LOLE was 0.42 day-per-year, still larger than the NERC requirement; hence, the system in Scenario 2.3 was not reliable and would need 2872 MW of "perfect" capacity to become reliable. Comparing the results of Scenarios 2.2 and 2.3, both the LOLE and the required capacity were reduced, illustrating that batteries did help improve the reliability level of the system. Finally, we examined whether long-duration batteries, such as 8-h batteries, could be more beneficial to the system; Scenario 2.4 was designed for this purpose, and the simulation results are listed in Table 5. Comparing the results of Scenarios 2.3 and 2.4, the reliability benefit of long-duration batteries was minimal, because most of the loss of load events in Scenarios 2.3 and 2.4 occurred in summer, when demand was high while wind and solar generation was extremely low in some of the 100 iterations. For example, on a rainy day the solar generation in the system could be zero even though its installed capacity was 16,442 MW, and in the evening, when solar generation was zero, there were hours in which wind generation was extremely low. In these cases, all the available energy was used to serve the load instead of charging the batteries; the system could not depend on batteries to store energy and release it when needed, because there was insufficient energy to recharge them. Hence, long-duration batteries did not significantly relieve the loss of load during those hours.
CONCLUSION
This paper presents a stochastic production cost simulation method to evaluate the generating capacity reliability in power systems with high renewable penetration. Compared to conventional approaches, such as the PRM and probabilistic assessment, the proposed method was able to consider the hourly chronological characteristics of the system operations, which is significant in the planning process for integrating new resources, such as storage, demand response, and renewables.
The stochastic model was first tested using two scenarios: a 45% renewables case and a 100% clean energy case (both percentages being capacity-oriented). In addition, a few sensitivity studies were conducted for each scenario. The conclusions from the studies can be summarized as follows:
• The system was reliable in the 45% renewables scenario. In addition, the system could further retire 1200 MW of capacity from oil units while still satisfying the NERC reliability criteria.
• In the 100% clean energy case, where all thermal units were retired and the capacity from wind and solar was approximately 55 GW, the system was unreliable, with a large LOLE value.
• When the system had 100% clean energy resources, it was extremely difficult, or at least prohibitively costly (e.g. building excessive ESRs), to satisfy the reliability standard. Thermal generation resources might be needed to ensure RA at moderate cost.
• Re-examining the concept of 100% clean energy, the simulation results of this study indicate that 100% clean energy does not necessarily mean the system has no thermal resources. Using the "net energy" concept, if the energy generated by thermal generators can be offset by the clean energy exported to neighbouring grids during hours when the system generates excess clean energy, the "net energy" from renewables can still be 100%.
• In the 100% clean energy scenario, adding only a moderate quantity of storage capacity would not make the system reliable. This is because storage is a passive type of generation resource: when there is insufficient residual energy to charge it, it cannot store sufficient energy to meet the unserved loads.
• When the system had 100% clean energy, adding long-duration batteries (e.g. 8 h) only slightly increased the reliability level of the system compared to adding short-duration batteries (e.g. 4 h).
Future work on evaluating the system reliability with an increasing penetration of renewable energy would include the combined simulation of RA and dynamic stability to make the model more practical. In addition, the reliability of distribution systems is another noteworthy topic to be investigated.
Genomic insights into metabolic flux in ruby-throated hummingbirds
Hummingbirds are very well adapted to sustain efficient and rapid metabolic shifts. They oxidize ingested nectar to directly fuel flight when foraging but have to switch to oxidizing stored lipids derived from ingested sugars during the night or long-distance migratory flights. Understanding how this organism moderates energy turnover is hampered by a lack of information regarding how relevant enzymes differ in sequence, expression, and regulation. To explore these questions, we generated a chromosome level de novo genome assembly of the ruby-throated hummingbird (A. colubris) using a combination of long and short read sequencing and scaffolding using other existing assemblies. We then used hybrid long and short-read RNA-sequencing for a comprehensive transcriptome assembly and annotation. Our genomic and transcriptomic data found positive selection of key metabolic genes in nectivorous avian species and a deletion of critical genes (GLUT4, GCK) involved in glucostasis in other vertebrates. We found expression of fructose-specific GLUT5 putatively in place of insulin-sensitive GLUT4, with predicted protein models suggesting affinity for both fructose and glucose. Alternative isoforms may even act to sequester fructose to preclude limitations from transport in metabolism. Finally, we identified differentially expressed genes from fasted and fed hummingbirds suggesting key pathways for the rapid metabolic switch hummingbirds undergo.
INTRODUCTION
The ruby-throated hummingbird (Archilochus colubris) is distinguished by features of natural and evolutionary history, morphology, and physiology from mammalian model systems such as mice, rats, and humans. They are among the smallest vertebrate endotherms (2.5-3.5 g). They employ hovering flight, displaying the highest wingbeat frequencies of any bird (and the highest limb oscillation frequencies of any vertebrate; ~50-60 Hz), and in doing so sustain the highest metabolic rates among all vertebrates (R. K. Suarez 1992). In addition, ruby-throated hummingbirds engage in an annual migratory journey from breeding grounds throughout Eastern North America to wintering grounds as far south as Central America. If measured in terms of body lengths traveled, small North American hummingbirds engage in some of the longest-distance aerial migrations of any species (Gass 1979). In doing so, they demonstrate a remarkable ability to sustain high rates of metabolism using endogenous lipids, an ability not shared by mice, rats, or humans (McCue and Pollock 2013).
To fuel these activities, hummingbirds oxidize fatty acids and carbohydrates in their flight muscles at rates faster than any other vertebrates thus far studied. Remarkably, the dietary source of both fuels, carbohydrate and fat, is the same: simple sugars (glucose, fructose, sucrose) in floral nectar that provide more than 90% of the total calories they ingest (Baker, Baker, and Hodges 1998). Once ingested, hummingbirds must either oxidize these sugars or convert them into energy-dense (and thus easier to carry) lipid depots. Remarkably, hummingbirds can switch from relying exclusively on oxidation of endogenous lipids to exclusive reliance on newly ingested sugars to fuel hovering flight over a period as short as 20-30 minutes (Welch and Suarez 2007; Chen and Welch 2014; Welch et al. 2006).
In order to keep up with the high energetic demands of hovering flight, hummingbirds transport, take up, and oxidize circulating sugars in flight muscles at rates as much as 55× greater than the maximum rates observed in non-flying mammals. Once in circulation, the flux of sugar to, and oxidation in, exercising muscle is thought to be limited principally at each of three key steps: 1) delivery from capillaries to the extracellular space, 2) transport across the fiber membrane, and 3) phosphorylation in the muscle fiber (Wasserman et al. 2011; Rose and Richter 2005; Bertoldo et al. 2006). Steps 2 and 3 are poorly understood mechanistically, as GLUT4, the key glucose transporter in mammals, is absent in birds, and while hummingbird hexokinase activity is higher than that of other vertebrates, this alone cannot explain the rate of hummingbird glycolytic flux (R. K. Suarez et al. 2009).

The ability of hummingbirds to fuel hovering flight completely with fructose raises interesting fundamental questions about the enzymatic basis for rapid sugar flux. The same three key steps that regulate glucose uptake and oxidation by muscles presumably apply to fructose as well (Wasserman et al. 2011; Rose and Richter 2005). In hummingbirds, there is ample evidence that at steps 1 and 2 the capacity for fructose uptake into flight muscle fibers is dramatically higher than in other vertebrates. However, the enzymatic basis for high rates of fructose phosphorylation (step 3) remains unknown.

While common among migratory birds (Guglielmo 2010; Jenni and Jenni-Eiermann 1998), the ability to fuel flight exclusively or predominantly with endogenous lipid stores is itself something that distinguishes hummingbirds from model mammalian species. Many avian species build fat stores to power long-distance migratory flight using the fatty acids present in their diet (Guglielmo 2010). Some of these switch to, or exploit, seasonally available diets that are rich in specific lipid classes (Guglielmo 2010; Pierce et al. 2005). However, hummingbirds achieve high rates of de novo lipogenesis on a simple sugar diet and high rates of lipid accumulation to see them through both overnight fasts and migratory flight.
For these reasons, genomic studies of the ruby-throated hummingbird are warranted and necessary for further understanding of these fine-tuned metabolic systems. Here we produce a chromosome level hybrid genome assembly of the ruby-throated hummingbird. We annotated the genome using a combination of Illumina and Oxford Nanopore cDNA sequencing from muscle and liver tissues to identify full coding sequences and multiple encoded isoforms. Finally, we performed differential expression analysis and differential alternative splicing analysis on fasted and fed birds in both muscle and liver tissues to fully characterize the mechanisms underlying high catalytic rates (high catalytic efficiency and/or high levels of enzyme expression) and control over metabolic flux. These results are crucial for understanding the hummingbirds' exquisite control over rates of substrate metabolism and biosynthesis which could give insight into metabolic control of orthologous pathways in humans.
Chromosome level genome assembly
We generated a total of 26 Gb of Oxford Nanopore data on the PromethION, with a read-length N50 of 40 kb, and 240 Gb of Illumina NovaSeq data from hummingbird brain (Figure 1A and Methods). We performed hybrid de novo assembly with MaSuRCA, which resulted in 1,837 contigs with a contig N50 of 13.54 Mb. The assembly was determined to contain 1.13% heterozygous sequence (Figure S1). Using the scaffolded assembly of a different hummingbird species, Anna's hummingbird (Calypte anna) (Rhie et al. 2021) (Figure S2), we performed reference-based scaffolding with RaGOO. Our final assembly of the ruby-throated hummingbird had 33 chromosomes containing 98.1% of the total sequence (Figure 1B, Table 1). The total genome length was 1.1 Gb, with a scaffold N50 of 46 Mb, a scaffold L50 of 5 chromosomes, and a largest scaffold of 100 Mb (Table 1). We assessed the assembly for completeness using BUSCO for avian genomes and determined it to be 96.6% complete (Table S1).
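For reference, N50 and L50 are simple order statistics over scaffold or contig lengths; a minimal computation, with illustrative lengths only, might look like this:

```python
def n50_l50(lengths):
    """Return (N50, L50): N50 is the length at which contigs of that size
    or longer contain >= 50% of the assembly; L50 is how many such contigs."""
    total = sum(lengths)
    running = 0
    for i, length in enumerate(sorted(lengths, reverse=True), start=1):
        running += length
        if running >= total / 2:
            return length, i

# Illustrative scaffold lengths (Mb), chosen so the result echoes the paper's
# scaffold N50 of 46 Mb and L50 of 5; not the real assembly data.
print(n50_l50([100, 90, 70, 50, 46] + [25] * 11))  # (46, 5)
```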
Genome Annotation
Avian genomes are the smallest of the amniotes, with smaller genomic elements (e.g. introns, exons, intergenic DNA) and fewer transposable elements compared to mammals. With our new genome in hand, we examined the repetitive elements in the A. colubris genome assembly. We used RepeatModeler2 to generate de novo repeat libraries (Flynn et al. 2020) and used them in combination with the curated avian library to perform homology-based repeat masking with RepeatMasker. Among vertebrates, birds exhibit relatively low copy numbers and an overall reduced diversity of repetitive elements (International Chicken Genome Sequencing Consortium 2004), with the exception of the woodpecker (Picoides pubescens), whose genome is 22.2% TEs, mostly contributed by LINE/CR1 elements. In A. colubris we detected 163 Mb of repetitive sequence, representing 14.83% of the genome, including 116 Mb of TEs that make up 10.50% of the genome, consistent with the repeat content of other avian lineages (Table 1, Table S2). Among classified repeats, LINE/CR1 elements were the most abundant superfamily in the A. colubris genome, making up 6.95% of the sequence, followed by LTRs (2.57%) and repeats discovered by our de novo libraries but not classified by RepeatClassifier (Unknown; 2.50%). Preliminary gene annotation was accomplished via a liftover of the C. anna annotations from NCBI (GCF_003957555.1) with LiftOff, a tool that maps annotations between closely related species. The C. anna annotation lifted over to A. colubris consisted of 15,879 genes and 31,163 transcripts, an average of two transcripts per locus.
Transcriptome assembly
In order to capture both the complexity of differential splicing and the precision of splice junctions and transcription start and end sites, we used a combination of short-read Illumina NovaSeq and long-read Oxford Nanopore cDNA sequencing on six hummingbirds across both muscle and liver tissue (Figure 1A). Briefly, we used the hybrid reference-based assembly pipeline from StringTie2 to expand our existing C. anna LiftOff annotation to a total of 17,878 genes and 43,348 transcripts. In our transcriptome assembly, 96.4% of transcripts (41,807) derive from loci with multiple isoforms, with an average of 2.4 isoforms per gene, a large improvement over the C. anna NCBI LiftOff annotation alone (Figure 1C). Additionally, our assembly identified 1,999 novel loci, of which 1,051 were functionally annotated by BLAST searches against the SwissProt database. These novel loci include genes critical to metabolism, including ALDOA, PFKM, G6PD, PGLS, PC, PCK2, PFKFB1, PYGM, and PLPPR1. Furthermore, the hybrid transcriptome assembly increases the number of isoform variants per gene, as exemplified by Solute Carrier Family 2 Member 5 (SLC2A5/GLUT5), where the C. anna NCBI annotation contained two transcripts and our new hybrid annotation contains five splice isoforms that encode different protein isoforms (Figure 1D). Our expanded annotation provides the opportunity to understand gene expression changes at the transcriptome level during transitions between fuel use regimes, thus providing insights into potential mechanisms that make these organisms such flexible metabolic performers.

Positively selected genes in nectivory

Nectar-feeding animals have among the highest recorded metabolic rates; incidentally, flight requires the highest metabolic rates of any form of locomotion known (Suarez, Herrera M, and Welch 2011; R. K. Suarez 1992), with metabolic rates reaching 170 times resting levels. Using our new ruby-throated hummingbird genome assembly, we performed phylogenetic analyses of nectivorous avian species. We used 20 species (Chimney swift, Anna's hummingbird, Helmeted guineafowl, Chicken/Red junglefowl, Wild turkey, Japanese quail, Zebra finch, Bengalese finch, Common canary, Painted honeyeater, Black sunbird, Cape sugarbird, Emperor penguin, Adelie penguin, Burrowing owl, Barn owl, African ostrich, Sunda bush warbler, Hooded crow, and the ruby-throated hummingbird), of which five are nectivorous, from four separate lineages (Figure S3). We used OrthoFinder (v2.3.12) to identify orthologous gene clusters among all 20 species (Emms and Kelly 2019). OrthoFinder groups genes into orthogroups, sets of genes descended from a single gene in the last common ancestor of the species, based on their sequence similarity. OrthoFinder assigned 98.0% of genes to orthogroups, generating 17,895 orthogroups containing a total of 364,583 genes. Of these orthogroups, 5,085 (28%) were shared among all 20 species and 1,207 were shared and present as a single copy (Figure 2A).
Next, we analyzed all shared single-copy orthologous proteins for evidence of positive selection with PAML (Yang 2007) (Methods). Of the 1,207 shared single-copy orthologs, we found 39 genes (3.2%) with evidence of positive selection on the nectivorous branches (Table S3). A gene ontology (GO) analysis revealed that seven of the positively selected genes are metabolic interconversion enzymes (Figure 2B). Nectivorous birds have extremely high rates of substrate metabolism; therefore, it is likely that enzymes in these pathways are well adapted to increase the rate of sugar metabolism. We found two enzymes crucial for oxidative sugar metabolism to be positively selected: PDHA1 and GAPDH. GAPDH is the enzyme separating lower and upper glycolysis and catalyzes the rate-limiting step in the pathway (Shestov et al. 2014). PDHA1 is a subunit of the pyruvate dehydrogenase complex in the mitochondrial matrix that controls the flux of pyruvate (the end product of glycolysis) into the tricarboxylic acid cycle. With an extremely low-fat diet, nectivorous birds rely on rapid lipogenesis to generate endogenous fat stores and on lipolysis to fuel flight in the fasted state. Interestingly, we identified positive selection in genes involved in fatty acid elongation (ACADL), beta-oxidation (HACD3), and ketone utilization (BDH2) (Guo et al. 2006). These data indicate that select genes (and the pathways they regulate) are positively selected in nectivorous birds, enabling their highly energetically demanding lifestyle.
Hummingbird sugar transport and metabolism
The relative expression of the distinct GLUT transporters across the liver and muscle tissues provides key insights into hummingbird sugar metabolism. In the liver, the primary GLUT genes are SLC2A2 and SLC2A5, with a medium level of transcription of SLC2A9, SLC2A10, and SLC2A11, and comparatively low levels of SLC2A1, SLC2A3, SLC2A6, and SLC2A13 (Figure 3A). The muscle tissue has the highest expression of SLC2A5, medium expression of SLC2A1 and SLC2A12, and low levels of SLC2A10, SLC2A3, SLC2A11, SLC2A13, and SLC2A2. SLC2A2, encoding the GLUT2 protein, which has a high Km, plays a stronger role in enteric and hepatic sugar transport, consistent with the higher expression we observe in liver over muscle samples. Interestingly, chicken SLC2A1 and SLC2A3 share sequence homologies of ∼80% and ∼70%, respectively, with the human GLUTs, but other isoforms such as SLC2A2 and SLC2A5 share only ∼65% and ∼64% sequence homology. A comparison of SLC2A2 sequences across 20 bird species reveals the loss of an N-linked glycosylation site in four of the 20 species (Workman et al. 2018). In the case of the African ostrich and the barn owl, this site is lost due to truncation at the N-terminus of the protein. However, in the hummingbirds (Anna's hummingbird and the ruby-throated hummingbird), Asn-64 is replaced by Ser-64, thereby eliminating the conserved N-linked glycosylation site present in the other sixteen avian species as well as in humans and mice. In mice, loss of this glycosylation site is coincident with increased GLUT2 protein endocytosis and the onset of type 2 diabetes.
The absence of SLC2A4 (GLUT4) leaves many unanswered questions about how glucose enters avian muscle cells. From our study we note particularly high liver and muscle expression of SLC2A5 (GLUT5), which facilitates fructose uptake in mammals (Barone et al. 2009). SLC2A5 is not highly expressed in mammals, and in mammalian GLUT5 a single point mutation is enough to switch the substrate binding preference from fructose to glucose (Nomura et al. 2015). The abundance of SLC2A5 transcripts in hummingbird tissues, especially muscle, is particularly interesting because it suggests this transporter is principally responsible for glucose/fructose transport into hummingbird tissues. There is considerable sequence divergence between hummingbird GLUT5 and mammalian GLUT5 (65.5% identity to human, 63.7% identity to mouse), and even between hummingbird and chicken (80.5% identity). We hypothesize that this form of hummingbird GLUT5 has transport capacity for glucose, but at a lower affinity than its capacity for fructose. In bacterial GLUT homologues (e.g., XylE), a Trp residue at the floor of the sugar binding pocket forms two hydrogen bonds with the bound glucose (Sun et al. 2012). In the same position in rat GLUT5 this residue is alanine (Ala395) (Nomura et al. 2015), and in the hummingbird this residue is serine (Ser403) (Figure S4). The Trp residue is well conserved amongst the human glucose transporters (GLUT1-4); however, it is replaced by Ser in human GLUT7, which is also a dual (glucose and fructose) transporter (Figure S4). With this information we can speculate that hummingbird GLUT5 could be a dual glucose/fructose transporter.
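The identity percentages quoted above come from pairwise sequence comparisons. As a minimal illustration (not the exact pipeline used in this study), percent identity can be computed from any pair of pre-aligned, gapped sequences; the toy sequences below are hypothetical.

```python
# Minimal sketch: percent identity between two pre-aligned (gapped)
# protein sequences, e.g. hummingbird vs human GLUT5 from any aligner.
def percent_identity(aln_a: str, aln_b: str) -> float:
    assert len(aln_a) == len(aln_b), "sequences must be aligned"
    # compare only columns where neither sequence has a gap
    pairs = [(a, b) for a, b in zip(aln_a, aln_b) if a != "-" and b != "-"]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

print(f"{percent_identity('MGT-QKVTR', 'MGSAQKV-R'):.1f}% identity")  # toy data
```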
Using long-read cDNA data, we quantified the relative abundance of SLC2A5 transcripts in muscle and liver and identified differential alternative splicing between the two tissues (Figure 3B). In particular, the muscle tissue has higher expression of the isoform that skips exon 3. The dominantly transcribed isoform translates to a protein highly similar in structure to mammalian GLUT5 (Nomura et al. 2015). However, the muscle GLUT5 variant skipping exon 3 is missing transmembrane domains TM3 and TM4 and the intracellular tip of TM5 (Figure 3C). In this isoform, the salt bridges between the amino- and C-terminal TM bundles are absent, and therefore the outward-facing state is likely not favored. Our transcriptome sequencing in two different tissues across two opposing metabolic states (fed and fasted) highlights the complexities of metabolic regulation at the transcriptional level.
While the high expression of the fructose transporter gene SLC2A5 strongly suggests that fructose uptake capacity may be sufficient to meet fructolytic and oxidative demand during hovering flight, the enzymatic basis for high rates of fructose phosphorylation is still unclear. The main sugar kinase expressed in the liver is ketohexokinase (KHK), which has high affinity for fructose in mammals. However, in both humans and the ruby-throated hummingbird, the muscle mainly expresses hexokinase 2 (HK2), which is a glucose-specific kinase in humans (Figure 3A). The hummingbird KHK and HK2 genes have 65% and 87% identity to their human orthologs, respectively; therefore, their substrate affinities could differ from those of their human orthologs.
Previous studies assessed A. colubris muscle total hexokinase activity and determined the Vmax to be 50% lower for fructose than for glucose phosphorylation, which would not keep up with the calculated required rates of fructose oxidation by flight muscle during hovering flight. To further understand differences in fructose and glucose metabolism, we used a chronic stable isotope tracer methodology to examine the speed of glucose and fructose usage for de novo lipogenesis in the ruby-throated hummingbird. We fed ruby-throated hummingbirds sucrose-based diets enriched with 13C on either the glucose or the fructose portion of the disaccharide. Isotopic incorporation into fat stores was measured via the breath 13C signature while fasting (oxidizing fat). We found that the respiratory exchange ratio (RER = VCO2/VO2) and tracer oxidation increased quickly with feeding (Figure 3D, Figure S5A), with the RER exceeding a ratio of 1, suggesting lipid synthesis was occurring along with tracer oxidation. At the 10 min mark the RER began to fall, but it remained above 0.85 for the remainder of the trial. Peak tracer oxidation did not differ between the enriched sucrose solutions (Figure S5B, p = 0.66). However, the time to peak oxidation differed, with the fructose-enriched sucrose solution reaching peak tracer oxidation faster than the glucose-enriched sucrose solution (Figure 3E, p = 0.02). Overall, these data support a hypothesis in which fructose is rapidly transported out of the blood and metabolized, while glucose remains in the bloodstream longer and is used as a fuel source when blood fructose levels decline.
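A minimal sketch of this analysis is shown below: computing RER from gas-exchange traces, locating the time of peak tracer oxidation, and comparing the two labels with a paired t-test, mirroring the design in which each bird received both solutions. All numeric values are hypothetical placeholders, not the measured data.

```python
# Hedged sketch of the breath-tracer analysis, assuming per-minute traces.
import numpy as np
from scipy.stats import ttest_rel

def rer(v_co2, v_o2):
    """Respiratory exchange ratio per time point: RER = V_CO2 / V_O2."""
    return np.asarray(v_co2, float) / np.asarray(v_o2, float)

def time_to_peak(minutes, oxidation_rate):
    """Minute at which the tracer oxidation rate is maximal."""
    return minutes[int(np.argmax(oxidation_rate))]

minutes = np.arange(1, 21)
oxid = np.exp(-0.5 * ((minutes - 7) / 3.0) ** 2)   # toy oxidation trace
print(rer([1.05, 1.10, 0.95], [1.0, 1.0, 1.0]))    # e.g. RER > 1 early on
print("time to peak:", time_to_peak(minutes, oxid), "min")

# Paired design: each of the four birds received both enriched solutions.
peak_glucose  = np.array([9.0, 11.0, 10.0, 12.0])  # hypothetical minutes
peak_fructose = np.array([6.5, 8.0, 7.5, 9.0])
t, p = ttest_rel(peak_glucose, peak_fructose)
print(f"paired t-test on time to peak: t = {t:.2f}, p = {p:.4f}")
```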
Another key regulator of blood glucose homeostasis is glucokinase (GCK); in mammals, this enzyme has a high Km and is the glucose sensor not only for regulation of insulin release by pancreatic β-cells, but also for key organs that contribute to glucose homeostasis, such as the liver (Matschinsky and Wilson 2019; Peter et al. 2011). However, birds do not express GLUT4, the insulin-sensitive glucose transporter, and the ruby-throated hummingbird in particular maintains the highest blood glucose concentration known amongst vertebrates (Beuchat and Chong 1998). Our transcriptome assembly did not identify GCK in the assembled hummingbird transcriptome, and we did not identify any GCK sequence in any of the ruby-throated hummingbird RNA-seq reads. When we compared the ruby-throated hummingbird genome to the chicken reference genome, we determined that the region of the chicken genome containing the GCK gene is not syntenic to any of the hummingbird sequence (Figure S6).
Figure 3. Hummingbird sugar transporters. A) Gene expression heatmaps for ruby-throated hummingbird muscle and liver tissue. A variance-stabilizing transformation is applied for graphical representation. SLC2A6 is only expressed in the liver; the corresponding boxes for muscle are therefore filled in gray. B) Isoform expression of SLC2A5 in the liver (left) and muscle (right). FPKM of each isoform is colour-coded according to the top scale bar. C) Ribbon representations of the two protein models for the GLUT5 isoforms predicted by AlphaFold2, based on the mammalian SLC2A5 ortholog. Left is the X1 isoform; right is the X2 isoform that is missing exon 3. In both atomic models the amino- and C-terminal TM bundles are colored blue and red, respectively. Regions of the X1 isoform that are missing in the X2 isoform are depicted in light blue. Arginine-glutamate salt bridges at the intracellular tips of TMs are green. ICH stands for "intracellular helices". D) Tracer oxidation rate over twenty minutes when birds were fed 13C on either the glucose or the fructose portion of the disaccharide. E) Peak oxidation time of glucose and fructose in minutes (p = 0.02, paired t-test).

Identification of differentially expressed genes that respond to fasting

To identify differentially expressed genes (DEGs) that rapidly respond to fasting, we profiled the transcriptomes of total mRNA from the muscle and livers of A. colubris hummingbirds that were fed sucrose ad libitum (fed) or fasted for one hour (fasted) (Figure 1A). We analyzed three biological replicates for each metabolic condition (fasted versus fed) with StringTie2 hybrid long- and short-read quantification and DESeq2, using our newly constructed reference and annotation. The fasted versus fed comparison produced marked changes in the transcriptomes. We identified 140 DEGs with adjusted p-values below 0.1 in the liver (Figure 4A, Table S4) and 191 DEGs in the muscle (Figure 4C, Table S5). Thus, the one-hour fast targeted a relatively small set of genes in the muscle and liver that likely play a role in the hummingbird's rapid switch from fed to fasted metabolism. To categorize these genes according to their gene ontology, we used the GeneTonic pipeline and generated functional gene-set enrichments for both the A. colubris liver and muscle (Figure 4B,D). This analysis yielded 200 statistically significant pathways (FDR < 0.05) in the liver and 106 in the muscle (Tables S6-S7). The response to fasting influenced dramatically different metabolic and regulatory pathways in each tissue.
In the liver, the one-hour fast influenced many metabolic and homeostatic pathways (Figure 4B), including coenzyme biosynthetic processes, cellular responses to nutrient levels, response to hypoxia, response to carbohydrate, fatty acid metabolic processes, homeostatic processes, and response to glucocorticoid stimulus. Genes particularly affected in these processes are key regulators of metabolic flux, including PDK4, G0S2, and ANGPTL4, which likely contribute to the rapid transition to lipid metabolism in the hummingbird liver during an acute food withdrawal. Induction of these genes occurs independently of PPAR signaling in the hummingbird liver, as none of the PPAR genes change in expression between the fasted and fed states (Table S8). Therefore, in hummingbirds, the PPAR pathway does not appear to control the expression of these metabolic switch genes in the liver, at least during an acute (one-hour) fast. Many newly assembled genes were also differentially expressed, including MSTRG.13300 and MSTRG.13844, which we were able to functionally annotate with SwissProt as HRG1 and AT1B, respectively.
The most statistically significant pathway upregulated in the fasted muscle was mitochondrial ATP synthesis coupled proton transport (GO:0042776, p = 1.90E-06) (Figure 4D). Other key genes regulating metabolic flux were affected, such as ENHO, PPARA, G0S2, and SREBF1 (Figure 4C). ENHO, a precursor to the protein adropin, is associated with energy storage and metabolism, and was strikingly downregulated in the fasted birds. Interestingly, in humans, adropin is generally associated with liver and brain expression, as opposed to the skeletal muscle expression we observed in A. colubris. G0S2 is the only gene that was identified as differentially expressed in both the liver and muscle tissues. While G0S2 is known to have a significant role in liver lipid transport, a definitive role for G0S2 within skeletal muscle has yet to be elucidated; it also appears to be present in mitochondria, where several possible functions have been proposed (Turnbull et al. 2016). These results point to a role for G0S2 in the hummingbird's rapid metabolic flux.
DISCUSSION
The results of our study are critical to understanding the hummingbird's exquisite control over rates of substrate metabolism and biosynthesis. Our positive selection analysis points to a subset of 39 genes critical to the evolution of a nectar-based lifestyle. Pathways such as glycolysis, the tricarboxylic acid cycle, lipogenesis, and lipolysis have to function rapidly to accommodate the high energetic demands of flight and the reliance on nectar as the only fuel source. This was evident in the selection of genes involved in these pathways (e.g., GAPDH, PDHA1, ACADL, HACD3, and BDH2) in nectivorous bird lineages.
In this work, we looked deeply into glucose and fructose uptake into hummingbird tissues. The lack of avian GLUT4 had been previously established, but we also identified the loss of GCK. It is likely that the low levels of insulin secretion and high sustained blood glucose in hummingbirds are due in part to the lack of expression of GCK, a key regulator of insulin secretion and blood glucose homeostasis. Utilizing our assembly, annotation, and expression data, we speculate that hummingbird GLUT5 has transport affinity for both glucose and fructose, with a higher affinity for fructose. Unlike in most other animals, 50% of the hummingbird's diet consists of fructose (Baker, Baker, and Hodges 1998), which studies show is much more cytotoxic than glucose (Horst, Ter Horst, and Serlie 2017). As a consequence, hummingbirds rapidly sequester fructose into the muscle tissue, as evidenced by rapid declines of blood fructose levels upon fasting (Muhammad 2021). Further, we conclude that the birds preferentially clear fructose from circulation first and oxidize it to CO2, as shown by the tracer oxidation study. Our data support the hypothesis that both glucose and fructose are transported into the muscle cells via the GLUT5 transporter, with fructose favored first when concentrations of both are high, and glucose imported later when fructose in the blood is scarce.
Our complete assemblies of the ruby-throated hummingbird genome and transcriptome allowed for isoform-level analysis of gene expression. This analysis revealed expression of a GLUT5 protein variant in the hummingbird muscle which is projected to have an internally facing active site but, owing to its loss of exon 3, is unlikely to maintain transport activity. We speculate that a potential biological function of this GLUT5 protein isoform is sequestering fructose inside muscle cells rather than acting as a fructose transporter, as fructose phosphorylation capacity is low and likely cannot keep up with the rapid import. Future biochemical and functional studies are warranted to illuminate whether hummingbird GLUT5 is capable of fructose and glucose transport, how its variants differ from GLUT5 found in other species, and what role this protein isoform plays in hummingbird sugar metabolism.
We characterized the hummingbird liver and muscle expression profiles during the transition to the fasted state. These results gave insights into the drivers of rapid metabolic flux in hummingbirds. An interesting result was the upregulation of the PDK4 gene in the fed-to-fasted transition. This protein kinase is located in the mitochondrial matrix and inhibits the pyruvate dehydrogenase complex (PDH) by phosphorylating one of its subunits. Because PDH is considered the gatekeeper of the TCA cycle, its inhibition in the fasted state would shut down complete glucose oxidation and promote gluconeogenesis and fat oxidation. Expression of PDK4 increased 2.36-fold during fasting, a result consistent with previous studies on chickens implicating glucagon as a stimulator of PDK4 expression (Honda et al. 2017). It is possible that the rapid switch from carbohydrate to fat oxidative catabolism in the fasted state is driven, in part, by the rapid upregulation of hummingbird PDK4, warranting further study of PDK4 molecular biology in hummingbirds. Another result calling for future biochemical and molecular studies is the downregulation of G0S2 in both the liver and muscle tissue. G0S2, the G(0)/G(1) switch gene 2, is an inhibitor of adipose triglyceride lipase (ATGL), a rate-limiting enzyme that catalyzes the first step in triglyceride hydrolysis in adipocytes. Previous studies of G0S2 in chicken, turkey, and quail have revealed that avian G0S2 has 50% to 52% homology to mammalian G0S2 and suggest its importance in the regulation of ATGL-mediated lipolysis (Oh et al. 2011). Our results suggest G0S2 plays a very important role in the rapid transition from fed to fasted metabolism across multiple tissues. Lastly, we observed changes in expression of genes controlling vessel dilation and constriction (e.g., HRG1 and AT1B), likely very important to osmoregulation. Hummingbird kidneys are not designed to concentrate urine: when feeding, the birds must eliminate large quantities of water, but when not feeding they are susceptible to dehydration (Bakken et al. 2004). Therefore, these changes in vessel dilation are likely necessary for preparing the splanchnic tissues for the osmotic shift that occurs during fasting.
In conclusion, we have leveraged cutting-edge long- and short-read sequencing technologies to generate a high-quality genome assembly and annotation of the ruby-throated hummingbird. With the resources we generated, ruby-throated hummingbird genes can now be quickly cloned and expressed for further biochemical experiments, such as measuring their enzymatic properties (e.g., Kcat or Vmax) for comparison with other avian or mammalian analogues. Expressed proteins may also be used for structural biology studies, applying either X-ray crystallography or cryo-EM to generate structural maps of the proteins and then examining how structure compares to that of other orthologues in dictating biological function.
ACKNOWLEDGMENTS
Funding: This study was supported by grants from the Human Frontier Science Program (RGP0062 to MV, GWW, KCW, and WT).
Competing interests: WT has two patents (8,748,091 and 8,394,584) licensed to Oxford Nanopore Technologies.
Animal use and ethics statement
This study was conducted under the authority, and adheres to the requirements of, the University of Toronto Laboratory Animal Care Committee (under protocol 20011649) as well as the guidelines set by the Canadian Council on Animal Care. Twelve adult male ruby-throated hummingbirds (Archilochus colubris) were captured in the early summer at the University of Toronto Scarborough (UTSC) using modified box traps. The hummingbirds were individually housed in Eurocages at the UTSC vivarium on a 12h:12h light:dark cycle. The hummingbirds in these cages were provided with perches and were on an ad libitum diet of 18% weight to volume of NEKTON-Nektar-Plus (Keltern, Germany) for 2-3 months until tissue sampling occurred.
One day (23 hours) prior to the experiment, the 12 male birds were placed on an ad libitum 33% sucrose solution diet in place of the NEKTON-Nektar-Plus diet. Birds were then divided into a fed group (n = 6) and a fasted group (n = 6). One hour prior to sampling, birds from both conditions were placed in small glass jars containing perches. This restricted the birds' ability to fly and was done to reduce energy expenditure variation between individual birds. Birds in the fed group were then provided with ad libitum 1 M sucrose solution for the hour up to sampling, which began at 10:00 h. The fasted group was deprived of food for the hour prior to sampling. The one-hour fast was chosen because previous work by Chen and Welch (2014) has shown via respirometry that this time is sufficient for the fasted hummingbird to shift from using circulating sugars to using fats for fueling metabolism.
Tissue samples were collected via terminal sampling of the hummingbirds. They were anesthetized via isoflurane inhalation and sacrificed using decapitation. Flight muscles (the pectoralis and supracoracoideus muscle) and liver were collected. Tissues were flash frozen in liquid nitrogen and subsequently stored at −80°C. In addition, one female hummingbird was also captured and sampled in the same fashion as above. This sample was used for DNA isolation for genome assembly purposes and was not subject to any experimental conditions.
DNA Sequencing
DNA was extracted from the female hummingbird from two 25 mg pieces of brain tissue and two 25 mg pieces of pectoralis muscle tissue with the Nanobind CBB tissue kit alpha (Handbook v0.16d, 4/2019) from Circulomics, following the protocol for the dounce homogenizer. DNA quality was assessed with the Thermo Scientific NanoDrop 2000/2000c spectrophotometer. We generated a sheared nanopore library and an ultra-long nanopore library to obtain both long reads and high depth. For the sheared library, DNA was sheared to 10 kb with a Covaris g-TUBE. For the ultra-long library, DNA was size-selected with the Short Read Eliminator XS Kit from Circulomics. Oxford Nanopore sequencing libraries were prepared using the Ligation Sequencing 1D Kit (Oxford Nanopore, Oxford, UK, SQK-LSK109) according to the manufacturer's instructions and sequenced for 72 hours on two PromethION R9.4.1 flow cells. Nanopore reads were base-called with Guppy (version 3.0.6). Sequencing runs were pooled for genome assembly purposes. For shotgun Illumina sequencing, a paired-end (PE) library was prepared with the Nextera DNA Flex Library Prep Kit from Illumina and sequenced on the Illumina NovaSeq 6000 (Illumina, Inc., San Diego, CA, USA). All sequencing data have been deposited in the NCBI SRA database under BioProject PRJNA811496.
Genome assembly
The genome was assembled from both the Illumina and nanopore sequencing datasets with MaSuRCA, with FLYE_ASSEMBLY=1 and all other parameters set to default. The genome was scaffolded with RaGOO using the C. anna assembly (GCA_003957555.2) as a reference. Assembly similarity was first checked by aligning the two assemblies with nucmer from the MUMmer package (Marçais et al. 2018), and the assemblies were determined to be highly similar. Assembly completeness was checked with BUSCO using the aves lineage (Manni et al. 2021). Assembly heterozygosity was quantified with the k-mer analysis toolkit (KAT) (Mapleson et al. 2017) and GenomeScope (Vurture et al. 2017), and the assembly was determined to have 1.13% heterozygosity (Figure S1). Repeats were annotated by first running RepeatModeler (v2.0.1) to generate a database of custom repeat annotations. The assembly was first masked with RepeatMasker (v4.0.9) using the Aves database and then further masked using the custom database. The genome assembly has been deposited under BioProject PRJNA811496.
RNA extraction
RNA was extracted from approximately 40 to 50 mg of pectoralis tissue and 20 mg of liver using the Qiagen RNeasy Fibrous Tissue Mini Kit (Qiagen, Hilden, Germany). RNA quality was assessed using a NanoDrop spectrophotometer and the presence of sharp 18S and 28S rRNA bands on an agarose gel. RNA quality was also assessed with the Agilent 2200 TapeStation system RNA high sensitivity kit (Agilent, Santa Clara, CA) before and after polyA isolation with the NEBNext Poly(A) mRNA Magnetic Isolation Module.
RNA Sequencing
PolyA mRNA from all samples was supplemented with Spike-In RNA Variants (SIRV) set 3 from Lexogen. Libraries for Illumina sequencing were generated with the NEBNext Ultra RNA Library Prep Kit for Illumina and sequenced on the Illumina NovaSeq 6000 (Illumina, Inc., San Diego, CA, USA). Libraries for long-read sequencing were generated with the cDNA PCR sequencing kit (SQK-PCS109) from Oxford Nanopore Technologies according to the manufacturer's instructions. Libraries were each sequenced on a PromethION flow cell for 72 hours. All sequencing data have been deposited in the NCBI SRA database under BioProject PRJNA811496.
Genome annotation and transcriptome assembly
Illumina RNA-seq reads were trimmed with Trimmomatic (v0.39) with the following parameters: SLIDINGWINDOW:4:20 LEADING:10 TRAILING:10 MINLEN:50. Trimmed reads were then aligned to the ruby-throated hummingbird reference genome with HISAT2 (v2.2.0) (Kim et al. 2019), with the parameters --score-min L,0,-0.5 -k 10 to account for the high heterozygosity in the wild hummingbirds, and filtered for primary alignments with Samtools (v1.9) (H. Li et al. 2009). Nanopore cDNA sequencing reads were aligned with deSALT (v1.5.4) and filtered for primary alignments with a mapping quality score greater than 50. An initial genome annotation was made by lifting over the predicted annotations from the Calypte anna genome annotation (GCA_003957555.2) onto our ruby-throated hummingbird assembly with LiftOff (Shumate and Salzberg 2020). These lifted-over annotations were used as the reference model for hybrid transcriptome assembly with StringTie2, run separately for each paired Illumina and nanopore sample (n = 12). To filter out low-evidence assembled transcripts, we compared the 12 StringTie2 GTFs with GffCompare (v0.11.2) (Pertea and Pertea 2020). The 12 GTF files were merged with StringTie's merge mode and filtered to retain transcripts that had evidence from at least two of the 12 GTFs. Gene and transcript abundance measurements were then computed against the final merged and filtered GTF file by re-running StringTie2 on each sample with the same settings plus the -e flag. To correct for transcripts assigned to the incorrect gene locus during StringTie's merge function, we ran the R package IsoformSwitchAnalyzeR (Vitting-Seerup and Sandelin 2019) with fixStringTieAnnotationProblem = TRUE. We generated the transcript count matrix files using the prepDE.py3 script from StringTie2, and the gene count matrix files were generated using the abundance measurements from the GTF and the gene-to-transcript associations from the IsoformSwitchAnalyzeR output. The ruby-throated hummingbird transcriptome assembly is available on Zenodo (DOI:10.5281/zenodo.6363333). Protein predictions from the transcriptome were made with TransDecoder (v5.5.0) (https://github.com/TransDecoder/TransDecoder). Genes that were not annotated in the Anna's hummingbird reference were first confirmed to have functional open reading frames by identifying a corresponding protein prediction from the TransDecoder output. They were then functionally annotated by BLAST (v2.2.31+) (Camacho et al. 2009) against the Swiss-Prot database (Boeckmann 2003) and run through the InterProScan5 (v5.44-79.0) pipeline (Jones et al. 2014).
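The two-of-twelve evidence filter can be implemented directly from GffCompare's tracking output, which lists, for each merged transcript, the matching transcript (or "-") in every input GTF. A minimal sketch follows; the file name is a hypothetical placeholder.

```python
# Minimal sketch of the transcript-evidence filter described above: keep
# merged transcripts supported by at least 2 of the 12 per-sample GTFs.
def supported_transcripts(tracking_path, min_samples=2):
    keep = set()
    for line in open(tracking_path):
        f = line.rstrip("\n").split("\t")
        tcons = f[0]                      # merged transcript ID (TCONS_*)
        # columns 5 onward: one entry per input GTF, '-' if absent there
        n_support = sum(1 for col in f[4:] if col != "-")
        if n_support >= min_samples:
            keep.add(tcons)
    return keep

keep = supported_transcripts("gffcmp.tracking", min_samples=2)  # hypothetical
print(f"{len(keep)} transcripts supported by >= 2 samples")
```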
Differential expression
Differential gene expression analysis was done with DESeq2 (Love, Huber, and Anders 2014), filtering for genes with at least 10X coverage in at least four of the six samples per tissue. The three fasted samples and three fed samples were compared separately for the liver and muscle tissue, and significantly differentially expressed genes were determined using adjusted p-values below 0.1. Isoform-level expression was quantified with Ballgown (Frazee et al. 2015), and isoform-level FPKM values were compared across the muscle and liver tissues. Significantly upregulated pathways were determined with the GeneTonic R package for both liver and muscle.
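The pre-filter step is straightforward to express on a counts matrix. A minimal pandas sketch follows, treating "10X coverage" as at least 10 counts per sample (an assumption) and using a hypothetical file name.

```python
# Minimal sketch of the pre-filter described above: keep genes with >= 10
# counts in at least 4 of the 6 samples per tissue before DESeq2 testing.
import pandas as pd

counts = pd.read_csv("liver_gene_count_matrix.csv", index_col=0)  # genes x samples
keep = (counts >= 10).sum(axis=1) >= 4   # per-gene sample support
filtered = counts[keep]
print(f"kept {keep.sum()} of {len(counts)} genes for DESeq2")
```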
Gene loss analysis
We did not identify the GCK gene in our transcriptome annotation or in the functional annotation of the predicted proteins. As further validation, we used BLAST (v2.2.31+) with the chimney swift (Chaetura pelagica) and chicken (Gallus gallus) GCK gene and protein sequences against both the ruby-throated hummingbird predicted protein set and genome. The BLAST search did not uncover any hits that we could determine to be open reading frames. We then aligned the chicken reference genome (GCA_000002315.5) to the ruby-throated hummingbird genome with minimap2 (H. Li 2016) with the following parameters: minimap2 -x asm20 -c --eqx. We noted that the region containing the GCK gene in the chicken genome has no syntenic counterpart in the ruby-throated hummingbird sequence. The PAF output file was processed with rustybam (https://github.com/mrvollger/rustybam) and plotted with SafFire (https://mrvollger.github.io/SafFire/). To further ensure that there was no expression of the GCK gene in the ruby-throated hummingbird, we mapped all the RNA-seq reads to the chimney swift GCK gene sequence with Bowtie2 (Langmead and Salzberg 2012), allowing for multiple mismatches by using the --very-sensitive-local preset. None of the RNA-seq reads mapped to the chimney swift GCK sequence. Lastly, we validated that the GCK gene is not present in the annotation of Anna's hummingbird either.
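The synteny check reduces to asking whether any whole-genome alignment covers the chicken interval containing GCK. A minimal sketch that scans a minimap2 PAF file (chicken as query, per the alignment above) is shown below; the chromosome name and coordinates are hypothetical placeholders, not the actual GCK locus.

```python
# Minimal sketch: report minimap2 PAF alignments overlapping a query region.
def region_overlaps(paf_path, chrom, start, end):
    for line in open(paf_path):
        f = line.split("\t")
        qname, qstart, qend = f[0], int(f[2]), int(f[3])  # PAF query columns
        if qname == chrom and qstart < end and qend > start:
            yield (qstart, qend, f[5])    # overlap span and target contig

hits = list(region_overlaps("chicken_vs_hummingbird.paf",   # hypothetical
                            "chr14", 3_800_000, 3_830_000)) # hypothetical locus
print("alignments overlapping the GCK region:", len(hits))
```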
Positive selection analysis
For molecular evolution analyses, we used a consensus tree topology based on the molecular phylogenies generated by Hackett et al., Oliveros et al., and Prum et al. (Hackett et al. 2008; Oliveros et al. 2019; Prum et al. 2015). Species were chosen to give outgroups in multiple clades, as well as to provide a sister lineage for each nectivorous lineage where such a species existed in the publicly available genome databases. Additionally, species that evolved between nectivorous lineages were added to highlight the convergent nature of the phenotype; including these lineages also allowed us to ensure, to the best of our ability, that the branches we tested for positive selection matched the branches where the transition to nectivory happened in more species-rich phylogenies. Proteomes for all selected species were downloaded from NCBI and clustered with CD-HIT (v4.8.1) with a sequence identity threshold of 98% to remove redundancy in the datasets (W. Li and Godzik 2006). Orthologous gene groups were generated by running the clustered proteomes through the OrthoFinder (v2.3.12) pipeline (Emms and Kelly 2019). 1-to-1 orthology groups were determined by selecting all single-copy genes that were contained in all species.
For each 1-to-1 orthology group (OG), the branch-site test of positive selection was performed using codeml in PAML v4.10 (https://github.com/abacus-gene/paml; Yang 2007) to detect genes under positive selection in nectivorous bird lineages. Using the consensus topology from the phylogenies of Hackett et al. (2008), Oliveros et al. (2019), and Prum et al. (2015), the tree was unrooted using the ete3 toolkit (Huerta-Cepas, Serra, and Bork 2016), and foreground branches were assigned to the following nectivorous lineages: Grantiella picta, Promerops cafer, Leptocoma aspasia, and the clade of Calypte anna + Archilochus colubris. A likelihood ratio test (LRT) was performed for each OG, with branch-site model A as the alternative model and model A with a fixed ω = 1 as the null model. LRT statistics were converted to p-values using pchisq in R v3.5.0 (R Core Team 2018). To provide a conservative estimate of genes under positive selection among nectivorous lineages, each OG with a statistically significant LRT (p ≤ 0.05) was also required to possess at least one site under positive selection with a posterior probability ≥ 0.95 (according to the Bayes empirical Bayes analysis included in the codeml branch-site test). Genes under positive selection are reported in Table S3. Intermediate files from this analysis are available on Zenodo (DOI:10.5281/zenodo.6363333).
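For reference, the LRT itself is a one-line computation once codeml has reported the two log-likelihoods. A minimal Python equivalent of the pchisq step is sketched below, using the common 1-degree-of-freedom approximation for the branch-site test (strictly, the null is a mixture distribution); the likelihood values are hypothetical.

```python
# Minimal sketch of the branch-site likelihood-ratio test, given the
# log-likelihoods from the alternative (model A) and null (omega = 1) fits.
from scipy.stats import chi2

lnl_alt, lnl_null = -10234.56, -10237.89   # hypothetical codeml outputs
lrt = 2.0 * (lnl_alt - lnl_null)
p = chi2.sf(lrt, df=1)   # mirrors R's pchisq(lrt, 1, lower.tail = FALSE)
print(f"LRT = {lrt:.2f}, p = {p:.4g}")
```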
Protein structure models
Structures for full-length GLUT5 (isoform X1) and for the alternatively spliced variant (isoform X2) were modeled with AlphaFold v2.01 (https://github.com/deepmind/alphafold) using default settings without templates to avoid model bias (Jumper et al. 2021). A reduced version of the BFD database (https://bfd.mmseqs.com/), optimized for speed and lower hardware requirements, was employed during multiple sequence alignment (MSA). The overall confidence measure (predicted local-distance difference test, pLDDT) for the generated models was >75, which generally indicates good backbone prediction. The atomic models with the highest overall confidence scores were selected (86.2 for isoform X1 and 84.5 for isoform X2). pLDDT is a per-residue confidence metric and, as such, can be used to monitor how model confidence varies along the chain. Very-low-confidence regions (pLDDT < 50) included flexible amino- and C-terminal ends, which were removed from the models.
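Trimming very-low-confidence termini can be scripted from the model files, since AlphaFold writes per-residue pLDDT into the B-factor column of its PDB output. A minimal Biopython sketch follows, with hypothetical file names.

```python
# Minimal sketch: drop very-low-confidence residues (pLDDT < 50) from an
# AlphaFold model, reading pLDDT from the B-factor column.
from Bio.PDB import PDBParser, PDBIO, Select

class PlddtSelect(Select):
    def accept_residue(self, residue):
        plddt = [atom.get_bfactor() for atom in residue]
        return (sum(plddt) / len(plddt)) >= 50.0   # keep confident residues

structure = PDBParser(QUIET=True).get_structure("glut5", "glut5_isoform_x1.pdb")
io = PDBIO()
io.set_structure(structure)
io.save("glut5_isoform_x1_trimmed.pdb", PlddtSelect())
```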
Tracer oxidation study
Four ruby-throated hummingbirds were fasted for 1 hour, after which they were placed in a 500 ml respirometry container, and baseline fasting breath δ13C stable isotope signature and respiratory exchange ratio (RER) recordings were made (following the respirometry and breath stable isotope set-up described previously). After 5 min the birds were fed 150 µl of a 20% sucrose solution, with the sucrose enriched with 13C on all six carbons of either the glucose [sucrose (glucose-13C6, 98%), Cambridge Isotope Laboratories, Tewksbury, MA, USA] or the fructose [sucrose (fructose-13C6, 98%), Cambridge Isotope Laboratories] portion of the sucrose molecule. The birds were fed through a 1 ml syringe in the lid of the respirometry jar, which allowed for continuous breath measurements; previous training allowed for quick consumption of the sucrose solutions. The time of feeding was recorded and used as t = 0. The respiratory measurements continued over the next 20 minutes to measure the rise and the start of the fall of RER, representing the switch from fasted to fed. The birds were then returned to their cages, and the process was repeated 1 week later with the other sucrose solution, with two birds starting with the fructose-enriched and two birds starting with the glucose-enriched solution. RER was analyzed following previously described methods, and the tracer oxidation rate following M. D. McCue et al. (2010); both were averaged for each minute over the course of 20 minutes. The time to peak and the peak tracer oxidation rate were analyzed using paired t-tests.

Figure S1. GenomeScope plot: coverage and k-mer frequency plot using the Illumina gDNA reads and the MaSuRCA assembly of the ruby-throated hummingbird.
Table S6. Gene ontology pathway analysis of differentially expressed genes in the A. colubris liver.
Table S7. Gene ontology pathway analysis of differentially expressed genes in the A. colubris muscle.
Table S8. Gene expression results for the genes involved in the PPAR signaling pathway in both the fasted and fed muscle and liver.
Optical and X-ray Topographic Studies of Dislocations, Growth-sector Boundaries, and Stacking Faults in Synthetic Diamonds
The characterization of growth features and defects in various high-pressure high-temperature (HPHT) synthetic diamonds has been achieved with optical and X-ray topographic techniques. For the X-ray studies, both characteristic and synchrotron radiation were used. The defects include dislocations, stacking faults, growth banding, growth sector boundaries, and metal inclusions. The directions of the Burgers vectors of many dislocations (edge, screw, and mixed 30°, 60°, and 73.2°), and the fault vectors of stacking faults, were determined as <110> and 1/3 <111> respectively. Some dislocations were generated at metallic inclusions; and some dislocations split with the formation of stacking faults.
Introduction
The purpose of these studies was to characterize the various growth features and defects in some specimens of synthetic high-pressure high-temperature (HPHT) diamonds grown by the reconstitution technique [1][2][3], and to identify Burgers vectors of dislocations and fault vectors of stacking faults. The interest in X-ray topographic investigations of diamonds is due both to the importance of diamond as a material and to the possible information about their growth processes, which take place under conditions of extreme temperature and pressure. Thanks to the low absorption of diamond, X-ray topography can be successfully applied to whole diamonds [4]; however, more convenient for such investigations are diamond slabs with artificially introduced surfaces, where X-ray topography can be used simultaneously with other methods, e.g., cathodoluminescence topography [5,6].
Interesting objects for investigations of this kind are large synthetic diamonds (several millimetres in diameter) grown by the reconstitution method. Progress in this technique has made possible the routine growth of such diamonds with relatively low concentrations of dislocations and metallic inclusions. X-ray topographic investigations of large synthetic diamonds have already been reported in a number of publications [7][8][9][10][11]. Some newer possibilities, especially using synchrotron X-radiation, were introduced by double-crystal X-ray topographic methods [12,13]. Crystal growth techniques are continually advancing, and newer commercial products contain fewer and fewer defects. Other investigations of synthetic diamonds with optical and X-ray methods performed by some of the present authors have been published elsewhere [14][15][16][17][18][19][20][21]. In some natural diamonds, it has been found that nitrogen stiffened the diamond structure against plastic deformation [22].
The places where dislocations and stacking faults meet the crystal surface can be revealed by etching [23,24]: a mildly destructive technique, as it removes material. On a (111) face of a diamond, the triangular etch pits are called "trigons", which come in several varieties: of positive and negative orientations with respect to an octahedral face, point-bottomed and flat-bottomed, steep-sided and shallow-sided [25]. Dislocations in <111> directions can give rise to low-elevation hillocks on HPHT diamonds [26], and some {111} growth sectors in synthetic diamonds can be substantially free of dislocations. Edge or mixed dislocations have been found in (001) and (111) growth sectors of low dislocation density (<47 cm⁻²) HPHT diamonds, all with Burgers vectors of (a/2)<110> type [27].
Dislocations and stacking faults can also widen double-crystal rocking-curves [28]. In a quest to supply perfect diamond crystals for use as monochromators for synchrotron radiation, sub-surface damage has been investigated by limited-projection topographs [29]. Chemical vapor-deposited (CVD) diamond on HPHT or CVD diamond substrates showed dislocations emanating from points at, or near to, the substrate surface [30].
Some of the diamonds studied here had been grown in the [001] direction and others in the [111] direction. Some were complete crystals, while others had been sliced and polished parallel to a (100) or (110) plane. Some diamonds contained metal inclusions of micrometre size, identified as body-centred cubic Fe-Co [31], which introduced strain; and in a large diamond (100) slice, 5 mm × 5 mm × 0.7 mm, the metallic inclusions were 600 µm in length, and numerous dislocations emanated from them.
The investigative methods used to image defects comprised optical microscopy, with or without filters; quantitative birefringence microscopy using a rotating polarizer/analyser in the "Deltascan" or "Metripol" technique [32]; and X-ray topography, using either conventional or synchrotron sources of X-rays [33][34][35].
Section 2 describes the quantitative birefringence technique and its application to the study of strain in synthetic diamonds grown in the [111] direction, together with X-ray topography to locate bundles of radiating dislocations. Section 3 concentrates on synchrotron X-ray topography and its application to the study of defects in diamonds grown in the [001] direction. Section 4 is an extensive study, by characteristic X-radiation, of the Burgers vectors of dislocations and the fault vectors of stacking faults in slabs of diamond cut from a crystal which had grown predominantly in the [001] direction.
The Diamond Specimens
For convenience, the four diamonds in this suite are labelled A1, A2, A3, and A4 (renamed from various earlier studies, where they had been named HS, GB, MM2, and S8 respectively [31]). Diamond A1 is 2.0 mm × 2.0 mm × 0.3 mm in size and had been grown in the [111] direction. It is a typical example of diamonds which, after laser drilling of a central hole, are used in industry as wire-drawing dies. Shaped like a triangle with truncated apices, distinguishing features of the diamond include prominent boundaries between the three {100} growth sectors and an abundance of micron-sized particles known in the diamond trade as "clouds". Previous X-ray and optical studies of this diamond and the following specimen, A2, were reported by Kowalski, Moore and co-workers [31]; it was suggested that these "clouds" may consist of body-centred cubic iron-cobalt, a favoured solvent/catalyst in the diamond synthesis process.
An optical micrograph of diamond A1 is presented in Figure 1a. The diamond does not contain large metallic inclusions or other features expected to be sources of strain, and with the exception of the "clouds" near the centre, the specimen is generally optically clear.
Birefringence Measurements of [111]-Grown Diamonds
Figure 1b-d show the central area of the diamond taken by the prototype technique of quantitative polarizing microscopy called "Deltascan", invented by Professor Michael Glazer and co-workers at the University of Oxford [32]. After development, this technique was renamed and marketed as "Metripol". The specimen is placed between two polarizers (one circular, one linear), illuminated by a monochromatic light source, and viewed by a CCD camera connected to a video frame-grabber in a PC. The circularly-polarized light passes through the specimen which, if optically anisotropic, elliptically polarizes the light. The linear polarizer rotates about the microscope axis at a frequency ω. The resultant intensity I through this circular polarizer-specimen-linear polarizer (CP-S-LP) system is given by:

I = (I0/2) [1 + sin 2(ωt − φ) sin δ]   (1)

where I0 is the incident intensity, t the time, φ the orientation of the optical indicatrix (an ellipsoid describing the variation in refractive index with light vibration direction), and δ the phase shift due to optical retardation of the light by the specimen. This phase shift is given by:

δ = (2π/λ) L Δn   (2)

where λ is the wavelength of the monochromatic light (in this instance, sodium light of wavelength 589 nm), L is the thickness of the specimen, and Δn is the birefringence of the specimen.
Thus, the intensity I at any point on the CCD camera consists of contributions from the optical retardation, giving the magnitude of the strain birefringence, and from the orientation of the optical indicatrix, giving the direction of the strain birefringence. These two components are separated by the system software and represented as colour-coded maps of the specimen. Such a map is also produced for the absorption in the specimen, I/I0. The palette of colours used in the interpretation of the Metripol maps is shown in Figure 2.
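To make this separation concrete, the sketch below simulates intensities at a series of analyser angles and recovers |sin δ| and φ by linear least squares, using the identity sin 2(α − φ) = sin 2α cos 2φ − cos 2α sin 2φ. This is an illustrative reconstruction under assumed values, not the actual Metripol software.

```python
# Hedged sketch: recover |sin(delta)| and phi for one pixel from intensities
# measured at analyser angles alpha, via I = a0 + a1 sin(2a) + a2 cos(2a).
import numpy as np

rng = np.random.default_rng(0)
alpha = np.deg2rad(np.arange(0, 180, 10))           # analyser angles
I0, sind, phi = 1.0, 0.75, np.deg2rad(30)           # assumed "true" values
I = 0.5 * I0 * (1 + sind * np.sin(2 * (alpha - phi)))
I += rng.normal(0, 1e-3, alpha.size)                # measurement noise

# Linear least squares for the three Fourier coefficients
A = np.column_stack([np.ones_like(alpha), np.sin(2 * alpha), np.cos(2 * alpha)])
a0, a1, a2 = np.linalg.lstsq(A, I, rcond=None)[0]

sind_est = np.hypot(a1, a2) / a0                    # |sin delta|
phi_est = 0.5 * np.arctan2(-a2, a1)                 # indicatrix orientation
print(f"|sin delta| = {sind_est:.3f}, phi = {np.degrees(phi_est):.1f} deg")
```

Note that converting |sin δ| back to Δn via Equation (2) requires knowing the fringe order, since sin δ is periodic; the sketch recovers only the two mapped quantities.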
The clever feature of Metripol is to display separately the three contributions to the intensity. The absorption in the specimen I/I0 ranges from 0 to 1 in steps of 0.01, and so the first 100 colours of the palette are used to depict these values. The optical retardation maps display the value of |sin δ|, which again ranges from 0 to 1; and the first 100 colours of the palette are used to represent this quantity, in steps of |sin δ| = 0.01. The orientation of the optical indicatrix varies from 0° to 180°, and the first 180 palette colours are used.
Figure 1b shows the quantitative optical absorption map of diamond A1. For example, the bright yellow colour on the growth-sector boundaries matches the value 40 on the palette, and thus the transmission I/I0 at those points is 0.40. The absorption tends to increase as the centre of the specimen is approached (that is, I/I0 values decrease). This observation is accounted for by the increase in the population of "clouds" near the centre of the diamond.
Measurements of Birefringence
The optical retardation map of Figure 1c displays the modulus of the phase change due to optical retardation, |sin δ|, which is directly proportional to the birefringence Δn in accordance with Equation (2). Thus, the optical retardation maps may be considered as plotting the magnitude of strain in a specimen. The resulting image has a striking three-fold symmetry, and suggests that the strain birefringence is greatest around the growth-sector boundaries. The image colour at these boundaries is a pale orange, which corresponds to a palette value of 75, and thus |sin δ| is 0.75. This corresponds to a birefringence Δn of 15 × 10⁻³. Further from these regions, the strain reaches a minimum and is depicted by dark blue and purple. These colours represent |sin δ| values of the order of 0.10, and the birefringence Δn in these regions is 1.8 × 10⁻³.
The orientation of the optical indicatrix φ of diamond A1 is shown in Figure 1d. The orientation takes values of 1° to 180°, and so the first 180 colours could be read from the Metripol palette; but here we are using just 18 colours at 10° intervals. The striking pattern of optical orientation indicates that the strain in A1 is radial about the centre of the specimen. While radial strain is often found to be localized to a few hundred microns around a particular feature, for example an inclusion, in this image the pattern is maintained throughout almost the entire crystal.
X-Ray Topography of [111]-Grown Diamonds
Figure 3 shows a section X-ray topograph of this diamond, taken with the 440 reflection of MoKα1 radiation (wavelength λ = 0.71 Å; Bragg angle θ = 34.22°), with the incident X-rays slicing nearly parallel to the major (111) faces of the diamond (off-set = 1.04°). This image reveals bundles of numerous dislocations (forming a "Y" configuration), arranged in directions opposite to those of the growth-sector boundaries and into regions of low birefringence. The greatest contributions to strain birefringence came from the growth-sector boundaries.
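The quoted Bragg angle can be checked from the cubic diamond lattice parameter. A minimal sketch, assuming a = 3.567 Å and λ(MoKα1) = 0.7093 Å:

```python
# Minimal sketch: Bragg angle for a cubic crystal, d(hkl) = a / sqrt(h²+k²+l²)
# and lambda = 2 d sin(theta). Values assumed: diamond a = 3.567 A, MoKa1.
import math

def bragg_angle_deg(h, k, l, wavelength=0.7093, a=3.567):
    d = a / math.sqrt(h * h + k * k + l * l)
    return math.degrees(math.asin(wavelength / (2 * d)))

print(f"440 reflection: theta = {bragg_angle_deg(4, 4, 0):.2f} deg")  # ~34.22
```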
Double-crystal topographs and rocking-curve measurements of diamond A1 were made using synchrotron X-radiation at the Daresbury Laboratory, UK. The 800 reflection from a (100) silicon monochromator selected 1.0 Å radiation from the wavelength continuum. The 1.0 Å X-ray beam was then diffracted by diamond A1 using the 333 reflection. Rocking-curve widths were of the order of 20 seconds of arc, which is large compared to the 3″ widths of perfect single-crystal diamonds. Kowalski, Moore and co-workers [31] concluded that the relatively large misorientations in this crystal were associated with the incorporation of "clouds" during growth. The double-crystal topographs confirmed the radial dislocation distributions observed in the characteristic-radiation studies.
The other diamonds in this suite (A2, A3 and A4) showed some differences and some similarities to A1 in the optical and X-ray images, A4 being the most similar. A2 and A4 had fewer "clouds" of point-like inclusions than A1, while A3 was free of "clouds". The images of these diamonds are given in Figure 4, together with their sizes and estimates of birefringence far from, and near to, the growth-sector boundaries (g.s.b.). Reading down the figure, the second row shows optical micrographs, followed by quantitative maps of absorption, retardation, and orientation. (For A2 and A3, only the central regions are mapped.) The final row shows section X-ray topographs, taken with the 440 reflection of MoKα1 radiation for A2 and A4, and with the 311 reflection of 1 Å synchrotron radiation for A3. The bundles of dislocations emanating from the centres of diamonds A1 and A4 have brought relief of strain to those sectors, and thus the strain birefringence in those regions is relatively low. By contrast, the growth-sector boundaries of A2 and A3 are less strained, and there are fewer strain-relieving dislocations. The lower level of strain in these diamonds is manifest in the strain extending over much smaller volumes, and thus the type of "trefoil" birefringence image seen for A1 and A4 is not observed for A2 and A3.
Optical Studies of [001]-Grown Diamonds
Just two representatives, B1 and B2, of several suites of synthetic diamonds are reported here. Both B1 and B2 are 4.0 × 2.0 × 0.8 mm³ in size. They had both been grown in the [001] direction, and had been cut and polished parallel to a (110) major face, to reveal growth sectors of {001}, {111}, and {113} types. The growth sector information available from such {110} diamond slices has frequently been employed in studies of impurity zoning and optical absorption [36,37]. Differences in lattice parameter between sectors in diamonds have been reported by Lang and co-workers [12].
A plane-polarized optical micrograph of B1 is presented in Figure 5a. The crystal comprises three major growth sectors: from left to right, (111), (001) and (1̄11). The (001) sector appears to have a larger nitrogen content than the other two, and this can be seen in the micrograph in Figure 5b. Taken with plane-polarized light and a Wratten 47B blue filter, the sectors of higher nitrogen content absorb blue light more efficiently, and so appear darker in the micrograph. A small area at the bottom of the left growth-sector boundary appears very pale, and this is the (1̄13) sector. In synthetic diamonds, the concentration of single-substitutional nitrogen in {111} sectors is usually twice that of {100} sectors [38]; yet the (001) sector in diamond B1 clearly absorbs more blue light than the surrounding {111} sectors, suggesting a greater nitrogen content.
A plane-polarized optical micrograph of B1 is presented in Figure 5a.The crystal comprises three major growth sectors: from left to right, ( 111), (001) and (111).The (001) sector appears to have a larger nitrogen content than the other two, and this can be seen in the micrograph in Figure 5b.Taken with plane-polarized light and a Wratten 47B blue filter, the sectors of higher nitrogen content absorb blue light more efficiently, and so appear darker in the micrograph.A small area at the bottom of the left growth-sector boundary appears very pale, and this is the ( 113) sector.In synthetic diamonds, the concentration of single-substitutional nitrogen in {111} sectors is usually twice that of {100} sectors [38]; yet the (001) sector in diamond B1 clearly absorbs more blue light than the surrounding {111} sectors, suggesting a greater nitrogen content.Diamond B1 also contains a number of inclusions which most likely consist of metals used as solvent/catalysts in the growth process.These inclusions are small, however, and do not have a large effect on overall crystal perfection, as the birefringence micrographs in Figures 5c,d confirm.
For the latter image, the crystal was rotated by 45°.Note how the birefringence pattern changes upon rotation of the specimen.This effect makes analysing the strain in a crystal more difficult, and emphasizes the benefits of the Metripol quantitative birefringence microscope which eliminates such orientation dependence.The greater strain is found along the top edge of the crystal, which is the area originally in contact with the seed crystal, and also the growth-sector boundaries.Some growth banding is visible in each sector, the 0° image highlighting the <111> banding, and the 45° image accentuating the [001] banding.Diamond B1 also contains a number of inclusions which most likely consist of metals used as solvent/catalysts in the growth process.These inclusions are small, however, and do not have a large effect on overall crystal perfection, as the birefringence micrographs in Figure 5c,d confirm.
For the latter image, the crystal was rotated by 45 ˝.Note how the birefringence pattern changes upon rotation of the specimen.This effect makes analysing the strain in a crystal more difficult, and emphasizes the benefits of the Metripol quantitative birefringence microscope which eliminates such orientation dependence.The greater strain is found along the top edge of the crystal, which is the area originally in contact with the seed crystal, and also the growth-sector boundaries.Some growth banding is visible in each sector, the 0 ˝image highlighting the <111> banding, and the 45 ˝image accentuating the [001] banding.
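To indicate how such orientation-independent maps are obtained, a brief sketch follows. It is an illustrative addition, not part of the original study; the intensity relation used, I = (I0/2)[1 + sin 2(φ − α) sin δ] for polariser angle α, slow-axis azimuth φ and retardance δ, is the published rotating-polariser (Metripol-type) model, taken here as an assumption.

```python
import numpy as np

def fit_birefringence(alphas, intensities):
    """Recover |sin(delta)| and slow-axis azimuth phi from intensities
    measured at polariser angles alphas, assuming the rotating-polariser
    model I = (I0/2) * [1 + sin(2*(phi - alpha)) * sin(delta)].

    Expanding: I = a + b*cos(2*alpha) + c*sin(2*alpha), with
    a = I0/2, b = a*sin(delta)*sin(2*phi), c = -a*sin(delta)*cos(2*phi),
    so a linear least-squares fit of three harmonics suffices.
    """
    A = np.column_stack([np.ones_like(alphas),
                         np.cos(2 * alphas),
                         np.sin(2 * alphas)])
    a, b, c = np.linalg.lstsq(A, intensities, rcond=None)[0]
    sin_delta = np.hypot(b, c) / a
    phi = 0.5 * np.arctan2(b, -c)
    return sin_delta, phi

# Synthetic check: delta = 0.3 rad, phi = 25 degrees
alphas = np.linspace(0, np.pi, 18, endpoint=False)
phi_true, delta = np.deg2rad(25), 0.3
I = 0.5 * (1 + np.sin(2 * (phi_true - alphas)) * np.sin(delta))
print(fit_birefringence(alphas, I))  # ~ (0.2955, 0.4363 rad)
```

Because φ and |sin δ| are recovered from the fitted harmonics rather than from the intensity at any single polariser setting, rotating the specimen changes only the fitted azimuth, not the measured retardance, which is the orientation independence referred to above.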
X-Ray Studies of [001]-Grown Diamonds

Diamond B1 was also studied with single-crystal X-ray topography using the Synchrotron Radiation Source at the Daresbury Laboratory, UK. The specimen was aligned with the large (110) surface horizontal, and a 50 µm ribbon of polychromatic ("white") radiation was used to section the crystal in a number of places. Various topographic images were formed as the crystal planes selected appropriate wavelengths for Bragg reflection. Growth-sector boundaries were seen to appear with varying strength according to the reflection, but of particular interest was the observation that the [001] growth banding, caused by impurity variations from fluctuating growth conditions [38], was visible in certain X-ray topographic reflections and invisible in others. A detailed study was then undertaken, obtaining numerous topographic images of the specimen using a variety of wavelengths and reflections in order to establish a pattern (if any) of growth-banding visibility.

Just two such X-ray topographs are shown here. Figure 6a is the image of the 335 reflection with 0.82 Å X-rays. Growth banding in the (001) sector appears in the form of a number of thin, clear lines on a darkened region. The sector is traversed in places by bundles of dislocations. The {111} sectors do not display any banding, and this was the case in all topographic images. The right-hand growth-sector boundary is visible along its entire length.

Figure 6b is the image of the 224 reflection with 1.18 Å X-rays. Banding in the (001) sector differs from the previous image in that the individual bands are stronger (although distorted), and the dark background is less prominent. The right-hand growth-sector boundary is also still present, but becomes fainter and less sharp nearer the bottom of the crystal. However, the dislocations which were only slightly discernible in the previous image are now individually visible. The two bands, of [001] and [112] orientation, form an inverted "V" shape; and emanate from the seed crystal area at the top of the diamond.

The [001] growth banding in B1 was studied in many other topographic images, and in each case was assigned a rank according to visibility. The results were then tabulated in order to find a possible correlation between banding visibility and diffraction conditions. By positioning a photographic plate directly under the specimen and parallel to its major faces, the multi-wavelength synchrotron beam was diffracted by many crystallographic planes, producing approximately fifteen clear topographic images on each plate. The images on the central line of each plate were identified for this study: these images corresponded to the 115, 337, 224, 335, and 333 (or 111, depending on wavelength) reflections. Although the 113 reflection was present, the wavelengths required for diffraction were long, and the topographs were very pale as a result of air absorption. Off-centre images were also examined, and growth banding visibility was seen to be similar to that of adjacent images on the central line. The visibility of growth banding in each topograph is presented in Table 1. A ranking system was devised for banding visibility: 1 = invisible, 2 = only general darkening of the region, 3 = general darkening of the region with some clear banding, 4 = clear banding, 5 = very strong and crisp banding. The banding visibility appeared not to be so much governed by X-ray wavelength as by the Bragg angle; and the rows in the table are ordered to emphasize this. By differentiating Bragg's law, one can show that the sensitivity to small variations in lattice parameter is enhanced as the Bragg angle increases.
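To make the final point explicit, the following derivation and numerical check are added here as an illustration; the diamond lattice parameter a = 3.567 Å is a textbook value assumed for the calculation, not a quantity taken from this study. Differentiating Bragg's law at fixed wavelength gives

$$\lambda = 2d\sin\theta_B \;\Rightarrow\; 0 = 2\,\Delta d\,\sin\theta_B + 2d\,\Delta\theta_B\cos\theta_B \;\Rightarrow\; \Delta\theta_B = -\tan\theta_B\,\frac{\Delta d}{d},$$

so a given fractional lattice-parameter variation Δd/d produces an angular misorientation that grows as tan θ_B. A short check of the two reflections illustrated in Figure 6:

```python
import math

A = 3.567  # diamond lattice parameter in angstroms (assumed textbook value)

def bragg_angle_deg(hkl, wavelength):
    """Bragg angle (degrees) for a cubic crystal of lattice parameter A."""
    h, k, l = hkl
    d = A / math.sqrt(h * h + k * k + l * l)  # cubic d-spacing
    return math.degrees(math.asin(wavelength / (2.0 * d)))

for hkl, lam in [((3, 3, 5), 0.82), ((2, 2, 4), 1.18)]:
    theta = bragg_angle_deg(hkl, lam)
    # sensitivity factor tan(theta): Delta(theta) per unit Delta(d)/d
    print(hkl, lam, "theta_B = %.1f deg," % theta,
          "tan(theta_B) = %.2f" % math.tan(math.radians(theta)))
```

This gives θ_B ≈ 48.9° for the 335 reflection at 0.82 Å and θ_B ≈ 54.1° for the 224 reflection at 1.18 Å, consistent with the stronger banding recorded for the higher-angle 224 image.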
Diamond B2 is so similar in outward appearance to B1, and in its birefringence, that separate pictures are not shown here. Its internal structure, as seen in X-ray topographs, is however quite different: the growth banding is less obvious and bundles of dislocations appear in strong contrast in the [001] and [112] directions.

Figure 7a shows these dislocations particularly well resolved in a projection topograph, as well as some strain around three metal inclusions (top left). In Figure 7b, which is a section topograph taken with a slit width of 50 µm, the dislocations are less well resolved (especially those in the [001] direction) but the growth banding in the (001) sector is more apparent. The dislocations were never completely invisible in any of the many X-ray topographs taken with various diffraction vectors (g), so their Burgers vectors (b) were not unambiguously determined from the g.b = 0 criterion (see the following subsection). The results nevertheless were consistent with the dislocations being of mixed type (edge and screw) and with b being parallel to a <011> direction.

The g.b = 0 Criterion for Invisibility of a Dislocation of Burgers Vector b

Where a crystal is deformed by a displacement of vector u from its perfect structure, the electron density is modified as follows:

ρ(r′) = ρ(r + u) = (1/V) Σ Fg exp[−2πi g.(r + u)]

where r is the lattice vector, V is the volume of the unit cell, and Fg is the structure factor for the diffraction vector (reflection) g. The extra phase factor exp[−2πi g.u] shows itself in the structure factor as exp[2πi g.u]. For g.u = 0, there is no change in structure factor; and therefore in diffraction the deformed crystal will appear perfect. This can be simply appreciated geometrically, by noting that for g perpendicular to u, the atomic displacements are parallel to the Bragg planes and therefore they have no influence on the Bragg reflection.

The strain field surrounding a mixed dislocation in general has three components, which may be written as

u = Ab + B(b × l) + C(l × b × l)

where b is the Burgers vector of the dislocation and l is the unit vector in the direction of the dislocation line. The second term gives the component perpendicular to both b and l; and the third term is perpendicular to both the second term and to l. Cylindrical polar coordinates (r, θ, z) are chosen, with z measured along the direction l of the dislocation line, and θ measured from the plane containing b and l. A = θ/2π; but B and C are more complicated expressions, involving the Poisson's ratio of the material. For a screw dislocation, b is parallel to l, so b × l = 0 and u = (θ/2π)b. Thus the g.u = 0 criterion for invisibility in diffraction becomes just g.b = 0. For an edge dislocation, b is perpendicular to l. Thus b.l = 0; and l × b × l = (l.l)b − (l.b)l = b. Therefore u = (A + C)b + B(b × l). An edge dislocation will be invisible if both g.b = 0 and g.(b × l) = 0. A mixed dislocation is never completely invisible in diffraction, since g cannot simultaneously satisfy the three equations g.b = 0; g.(b × l) = 0; and g.(l × b × l) = 0. The values of the parameters B and C are however usually smaller than A, so low visibility is found where g.b = 0 for all types of dislocation. The determination of the direction of the Burgers vector b needs the invisibility of two reflections: g1.b = 0 and g2.b = 0. Each equation defines a plane in which b must lie; and the intersection of these two planes gives the direction (but not the magnitude) of the Burgers vector b. The solution of the two simultaneous equations

g1x bx + g1y by + g1z bz = 0
g2x bx + g2y by + g2z bz = 0

gives the desired ratio of the Cartesian components (bx, by, bz) of b.
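As a minimal computational illustration of this construction (an addition, not part of the original study), the direction of b follows directly from the vector product of the two diffraction vectors, since b must lie in both planes g1.b = 0 and g2.b = 0:

```python
import numpy as np
from math import gcd
from functools import reduce

def burgers_direction(g1, g2):
    """Direction of b from two invisibility conditions g1.b = 0 and g2.b = 0.

    b must lie in both planes normal to g1 and g2, so it is parallel to
    their cross product; reduce to the smallest integer indices.
    """
    b = np.cross(g1, g2).astype(int)
    divisor = reduce(gcd, np.abs(b)) or 1  # guard against parallel g1, g2
    return b // divisor

# Example: invisibility in the 1 -1 1 and -1 1 1 reflections
print(burgers_direction([1, -1, 1], [-1, 1, 1]))  # -> [-1 -1 0], i.e. b parallel to <110>
```

Reducing the cross product to the smallest integer triple gives, in this example, a <110>-type direction of the kind expected for perfect dislocations in diamond.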
Specimen Preparation
We also studied dislocations and growth sectors in slabs (C1 and C2) cut from a synthetic diamond of truncated octahedral habit, with dimensions of 5 mm × 5 mm at its base, showing also small {011} faces. We published X-ray topographs of these specimens in Figure 2b,c and Figure 8 of reference [13]. In this paper, we also published evaluations of lattice parameter differences within different growth sectors in diamond C, obtained from double-crystal experiments with conventional X-ray sources. These differences Δa/a were 1.0 × 10⁻⁶ between {111} and {001}; and 1.0 × 10⁻⁵ between {001} and low nitrogen {011} sectors (and similarly between {001} and {113} sectors). In the present paper we include more complete X-ray topographic results concerning identified dislocations and stacking faults.

The diamond was of truncated octahedral habit, but obviously it could grow into one hemisphere only, from the seed situated close to the end of the reaction capsule. The dimensions of the crystal close to its base were 5 × 5 mm², while its height was nearly 3.5 mm. The diamond contained large octahedral faces and some smaller cube faces truncating its vertices: the largest (001) face was at the top vertex. The diamond also contained some narrow {011} and {113} facets, but only one of the {011} faces was of significant dimensions. The diamond also contained some metallic inclusions.

We decided to cut the crystal, using a laser saw, into two slabs (C1 and C2) perpendicular to the main [001] growth direction. The slabs were mechanically polished, removing also the areas close to the top vertex and the bottom-most imperfect layer. The thicknesses of the two slabs were approximately 0.7 mm, while the gap between them due to laser sawing and polishing was also evaluated to be 0.7–0.8 mm. Here we include the results obtained in the slab (C1) closer to the seed. As already mentioned, other results obtained in the present diamond have been described elsewhere [13,14].
Single-Crystal and Double-Crystal X-Ray Topographic Investigations of Diamond C
An important part of the investigation was a study of dislocation structure by means of the Lang transmission method. The topographs were taken using MoKα1 radiation in 111- and 220-type reflections from equivalent crystallographic planes. The artificially introduced surfaces of the two slabs were examined with Lang back-reflection topography using 311-type reflections of CuKα1 radiation. These topographs were very useful in revealing the stacking faults close to the examined surface as well as the differences in integrated intensity from the various parts of the sample.

The two slabs were also studied using the back-reflection double-crystal method, in double-crystal 422 Si-311 diamond and 1 −3 2 5 quartz-004 diamond arrangements with CuKα1 radiation. The latter arrangement offered almost perfect matching of lattice spacing and negligible broadening: under 0.15 arc seconds of the rocking curve due to spectral dispersion. Some former double-crystal topographic investigations have already been described in our previous paper [13]. Several section topographs were taken in 400 and 440 symmetrical reflections using MoKα1 radiation. All topographs were recorded on 50 µm Ilford L4 nuclear emulsion plates.
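To illustrate why the close lattice-spacing match matters, the following sketch is added; it is not from the original study, and the effective spectral width Δλ/λ ≈ 3 × 10⁻⁴ of the Kα1 line and the lattice parameters of Si and diamond are assumed textbook values. It evaluates the standard duMond dispersion term for a (+,−) double-crystal arrangement:

```python
import math

CU_KA1 = 1.5406      # CuKalpha1 wavelength in angstroms (textbook value)
DL_OVER_L = 3e-4     # assumed effective spectral width of the Kalpha1 line

def bragg_theta(d):
    """Bragg angle (radians) for d-spacing d at the CuKalpha1 wavelength."""
    return math.asin(CU_KA1 / (2.0 * d))

def dispersion_broadening_arcsec(d_mono, d_sample):
    """duMond dispersion term for a (+,-) double-crystal setting:
    broadening = |tan(theta_sample) - tan(theta_mono)| * (dlambda/lambda),
    which vanishes when the two d-spacings match exactly."""
    dt = abs(math.tan(bragg_theta(d_sample)) - math.tan(bragg_theta(d_mono)))
    return math.degrees(dt * DL_OVER_L) * 3600.0

d_si_422 = 5.431 / math.sqrt(24)    # Si, a = 5.431 angstroms
d_dia_311 = 3.567 / math.sqrt(11)   # diamond, a = 3.567 angstroms
print("%.1f arcsec" % dispersion_broadening_arcsec(d_si_422, d_dia_311))
```

Under these assumptions the mismatched Si 422-diamond 311 pair gives a few arc seconds of dispersion broadening, whereas a monochromator reflection with nearly the same d-spacing as the sample reflection, such as the quartz arrangement above, drives |tan θ_sample − tan θ_mono|, and hence the broadening, towards zero.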
The appearance of growth sectors in all four surfaces providing the successive sections of the investigated diamond has already been discussed [13]. We should note the increase of intensity in the central (001) growth sector with distance from the seed. This sector is separated from large octahedral sectors by narrow strips of {113} sectors. In the slab (C2) farther from the seed, the octahedral sectors are separated by narrow {011} sectors; only one of them is large and connected with a significant face. In the slab (C1) closer to the seed, some octahedral sectors are separated by cube sectors corresponding to side vertices and surrounded by {113} sectors. The cathodoluminescence and double-crystal topographs of the lowest artificial surface, closer to the seed, revealed narrow stripes corresponding to the {100} growth sectors from the lower hemisphere.
Studies of Dislocations, Inclusions, and Stacking Faults
The two diamond slabs provided a good opportunity for studies of dislocations, because both specimens contained regions in which the dislocations were in low concentrations and were well resolved. The best technique for the characterization of dislocations was Lang transmission topography, but in many cases back-reflection topographic methods were also useful. The back-reflection topographs were often more legible, revealing defects from a near-surface layer only. In some cases, we were able to obtain images of defects from the whole thickness of the slabs, also in back-reflection geometry.
To identify the dislocation type and the orientation of dislocations, we took Lang topographs of each slab in 111-type reflections from all four equivalent planes and additionally symmetrical 220-type reflections from equivalent planes. A set of 111-type reflections is usually sufficient, but several effects can make interpretation less clear. High concentrations of other dislocations or defects in the neighborhood can decrease the contrast of a particular dislocation, making it invisible even where g.b ≠ 0. On the other hand, the effect of decoration of a dislocation line by impurities can make the contrast relatively high, even where g.b = 0. Contrast of the edge component of a dislocation does not completely vanish unless the diffraction vector is parallel to the dislocation line. It was therefore reasonable to confirm the identification of Burgers vectors using also the set of more sensitive 220-type reflections and to check whether each contrast behavior was explained by a particular Burgers vector.
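The bookkeeping involved in such cross-checking is simple enough to sketch in code. The following illustration (an addition, not from the original study) tabulates g.b for the six distinct <110> Burgers vector directions of perfect dislocations in diamond (reversing the sign of b does not affect invisibility) against one representative of each ±g pair of the 111- and 220-type reflections used here:

```python
from itertools import permutations, product

def dot(g, b):
    return sum(gi * bi for gi, bi in zip(g, b))

def unique_up_to_sign(vectors):
    seen, out = set(), []
    for v in vectors:
        if v not in seen and tuple(-x for x in v) not in seen:
            seen.add(v)
            out.append(v)
    return out

# Six distinct <110> Burgers vector directions
burgers = unique_up_to_sign({p for s in product((1, -1), repeat=2)
                             for p in permutations((s[0], s[1], 0))})

# Four 111-type and six 220-type reflections (one per +/- pair)
g111 = [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]
g220 = unique_up_to_sign({p for s in product((2, -2), repeat=2)
                          for p in permutations((s[0], s[1], 0))})

for b in burgers:
    invisible = [g for g in g111 + g220 if dot(g, b) == 0]
    print(b, "invisible in:", invisible)
```

Each <110> direction comes out with g.b = 0 in exactly two of the four 111-type and one of the six 220-type reflections; comparing such a predicted pattern with the observed contrast of a given line is the consistency check described above.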
Representative Lang topographs are shown in Figures 8 and 9. To illustrate the behavior of contrast on dislocations, we reproduce here the four 111-type topographs of the slab (C1) closer to the seed. A major difficulty in the determination of orientations and Burgers vectors of dislocations at this concentration of defects was the identification of a particular dislocation line in the various topographs. As most dislocations were located along typical <011>, <112>, and <001> directions, we found it helpful to predetermine their orientation and to localize their outcrops on the various surfaces by comparison of the topographs with prepared diagrams containing the projections of possible dislocation directions in 111- and 220-topographs (see Figure 10). Then we were able to predict the positions of these dislocations in the other topographs and to confirm the preliminary identification.

We were able to identify up to seventy best-resolved dislocations in both slabs. Eighteen of them are marked in the topographs of the sample (C1) closer to the seed, and fifteen in the topographs of the slab (C2) farther from the seed. The probable identifications of the selected dislocations marked in the topographs of C1 shown in Figures 8 and 9 are listed in Table 2. Some of the lines were revealed with much higher contrast, which, however, behaved in the different reflections as if from a perfect dislocation. In such cases the line may be either composed of a few dislocations, or of a dislocation split into two partial dislocations and a narrow stacking fault. The majority of dislocation lines were oriented along <211> directions and many also along <110> directions. Along <211>, the dislocations were mostly of mixed 30° type and a few were of edge type. These dislocations also dominate in the regions with the highest density of dislocations. The dislocations in these regions were unresolved in Lang topographs, but some conclusions were drawn from them, and also from back-reflection topographs taken in reflections from different crystallographic planes. Along <011> directions, 60° dislocations were common. All dominating dislocations have {111} slip planes. Quite frequently 73.3° and 54.7° mixed dislocations also occurred along <112> directions; and screw dislocations along <011> directions. We also found some 45° mixed dislocations along <001> directions. Nearly all dislocations were observed to be straight-lined, but some of them consisted of straight segments oriented along several different directions. The identified types of dislocations are in agreement with those theoretically predicted by Hornstra [39].
We found many cases of the origin of two or more dislocations at metallic inclusions; as, for example, in the case of dislocations denoted 2 a,b,c,d in Figures 8a and 9. The topographs revealed many metallic inclusions of different sizes in the diamond. Some of the inclusions produced characteristic extended black contrast, with the central parts not reflecting. More metallic inclusions were revealed in the transmission topographs, but many of them were visible also in the back-reflection topographs. The inclusions, especially the larger ones, were much more numerous in the slab (C1) closer to the seed. Comparing the Lang topographs with optical micrographs, which also reveal the major inclusions, it was noted that the dark contrast comes from a much greater volume than the real volume of the inclusion, such is the extent of the surrounding strain field.

Comparing the topographs of both slabs, and following the dislocations visible within them, we may conclude that dislocations in the more populated regions were generated at two large metallic inclusions. One of these inclusions is situated close to the seed in the topographs of slab C1. The other large inclusion is visible in the lower left part of these topographs.
Studies of Stacking Faults
The topographs also revealed numerous stacking faults. From the point of view of their geometrical appearance, stacking faults in both slabs can be divided into two categories. One consists of stacking faults of a regular triangular shape and the other of stacking faults less regularly bounded. These latter seem to be the result of splitting of some parts of dislocation lines. The configuration of the stacking faults close to the surface can be easily followed in the single-crystal back-reflection topographs, or in double-crystal topographs recorded at the slopes of the maximum. The stacking faults produced here show relatively strong contrast, while the contrast of dislocations and growth-sector boundaries is relatively faint. The contrast due to growth-sector boundaries was also very weak in Lang transmission topographs.
The identification of observed planar defects as stacking faults was confirmed by observation of their contrast in various transmission reflections. The extinction of stacking-fault contrast should occur where g.f = m, where m is an integer and f is the fault vector. In a crystal with the diamond structure, f is either 1/3 <111> for the intrinsic type of stacking fault, or 2/3 <111> for the theoretically possible extrinsic type of stacking fault. Each fault vector is perpendicular to the corresponding {111} fault plane. A particular stacking fault is not visible in one of the four 111-type reflections and in three of the six 220-type reflections. We were able to confirm these rules, determining the orientation of fault planes from the geometrical features of the image in the case of up to forty different stacking faults. Some of them are marked in Figure 8 and listed in Table 3. It was found from geometrical analysis of the topographs that triangular stacking faults were bounded on two sides by partial dislocations oriented along <011> and <112> directions. On the third side the triangular stacking faults are bounded by the surfaces of the slabs. It was found that in some cases, partial dislocations were formed by the splitting of a dislocation coming to the stacking fault. In particular, such is the case of the stacking fault marked by s5 in Figure 8. We noticed triangular stacking faults generated at one point containing a metallic inclusion.
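The counting rule quoted above can be verified directly from g.f. The short check below is an illustrative addition, assuming the intrinsic fault vector f = 1/3[111]:

```python
from fractions import Fraction

# Intrinsic fault vector f = (1/3)[111]; extinction where g.f is an integer
f = [Fraction(1, 3)] * 3

g111 = [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]
g220 = [(2, 2, 0), (2, -2, 0), (2, 0, 2), (2, 0, -2), (0, 2, 2), (0, 2, -2)]

def visible(g):
    gf = sum(gi * fi for gi, fi in zip(g, f))
    return gf.denominator != 1  # non-integer g.f -> fault fringes visible

for label, gs in (("111-type", g111), ("220-type", g220)):
    invisible = [g for g in gs if not visible(g)]
    print(label, "invisible in", len(invisible), "of", len(gs), ":", invisible)
```

The enumeration reproduces the stated rule, invisibility in one of the four 111-type and three of the six 220-type reflections, and by symmetry the same count holds for each of the four possible {111} fault planes.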
Of interest is the comparison of the discussed Lang transmission images with the double-crystal topographic images of both artificially introduced surfaces of the diamond slab, shown in Figure 11. The double-crystal topographs well reveal the structure of the growth sectors, and also the lattice parameter differences, deduced from the difference in the angular position of the rocking curve. One can note that the double-crystal topographs do not reveal some parts of the dislocations and stacking faults visible in the transmission Lang topographs, as a result of the low g.b values. Some stacking faults outcropping to the surface can be seen in Figure 11b. On the other hand, the reflection topographs well reveal the dislocations outcropping on to the surface in some dense bundles. Contrary to the situation in more highly absorbing crystals, the double-crystal images of dislocations here do not exhibit characteristic black-white rosettes; and most of their contrast is connected with the dislocation line and some additional strain component associated with the relaxation of the stress at the free surface [40].
Conclusions
Quantitative polarizing microscopy has been successfully applied to measure and to map the variations in birefringence across several [111]-grown synthetic HPHT diamonds. X-ray topographic methods have been employed to locate dislocations in these diamonds, which showed that the dislocations grew radially from the centre of each diamond into regions of relatively low birefringence; with the result that dislocations appeared to relieve strain.

Growth banding, caused by impurity variations from fluctuating growth conditions, in [001]-grown diamonds was revealed both in polarizing microscopy and in synchrotron X-ray topography. The latter technique imaged, in a variety of contrasts, not only this banding but also numerous clearly-resolved dislocations.

Using Lang projection topographs taken in 111- and 022-reflections from all equivalent crystallographic planes, we performed extended studies of dislocation structure in a 0.7 mm thick diamond slab (C1) cut from a large cuboctahedral diamond close to the seed. This included the identification of crystallographic orientation and type of up to 70 of the best resolved individual dislocations. The dislocations with {111} glide planes, especially those directed along <112> directions, were found to be dominant. Many of these dislocations were evidently generated at metallic inclusions present in the sample. Discussing mineral inclusions in natural diamond, a recent publication [41] reproduces part of a figure from our earlier paper [13] to illustrate the fact that inclusions emit bundles of many dislocations.
The numerous stacking faults were identified on the basis of their extinction rules and their geometric appearance. We also confirmed the intrinsic character of the observed stacking faults on the basis of comparison of fringe patterns obtained in high-resolution back-reflection double-crystal synchrotron topographs with theoretical predictions based upon an application of plane-wave dynamical theory.
Figure 2. The colour palette for quantitative birefringence microscopy.
Figure 10. Diagram showing the projections of dislocations oriented along different crystallographic directions and their relative lengths in the −1 −1 1 topograph (corresponding to Figure 8a).
Figure 11. (a) A 1 −3 2 5 quartz-400 diamond back-reflection double-crystal topograph taken in CuKα1 radiation from the face closer to the seed. Diffraction vector horizontal, to the right; (b) A 422 silicon-311 diamond CuKα1 double-crystal topograph taken from the other large (100) face of the sample (similar to Figure 8 of reference [13]).
Table 1. The visibility of [001] growth banding in X-ray topographs of diamond B1.
Table 2. The identification of the dislocations marked in Figures 8 and 9.
Table 3. The identification of stacking faults marked in Figure 8.
Intakes and Food Sources of Dietary Fibre and Their Associations with Measures of Body Composition and Inflammation in UK Adults: Cross-Sectional Analysis of the Airwave Health Monitoring Study
The purpose of this study was to investigate the associations between intakes of fibre from the main food sources of fibre in the UK diet with body mass index (BMI), percentage body fat (%BF), waist circumference (WC) and C-reactive protein (CRP). Participants enrolled in the Airwave Health Monitoring Study (2007–2012) with 7-day food records (n = 6898; 61% men) were included for cross-sectional analyses. General linear models evaluated associations across fifths of fibre intakes (total, vegetable, fruit, potato, whole grain and non-whole grain cereal) with BMI, %BF, WC and CRP. Fully adjusted analyses showed inverse linear trends across fifths of total fibre and fibre from fruit with all outcome measures (p-trend < 0.0001). Vegetable fibre intake showed an inverse association with WC (p-trend = 0.0156) and CRP (p-trend = 0.0005). Fibre from whole grain sources showed an inverse association with BMI (p-trend = 0.0002), %BF (p-trend = 0.0007) and WC (p-trend = 0.0004). Non-whole grain cereal fibre showed an inverse association with BMI (p-trend = 0.0095). Direct associations observed between potato fibre intake and measures of body composition and inflammation were attenuated in fully adjusted analyses controlling for fried potato intake. Higher fibre intake has a beneficial association with body composition; however, there are differential associations based on the food source.
Introduction
The prevalence of adult overweight and obesity is continuing to rise; it is estimated that ~58% of the global population will be overweight or obese by 2030 [1]. In addition to the impact of obesity at an individual level in terms of morbidity, the global economic burden of obesity is estimated to be 2.8% of gross domestic product [2]. Evidence supports that positive energy balance has a direct association with body mass [3], with diet and physical activity established as modifiable factors in the trajectory of adult weight gain [4]. In turn, excess body fat, specifically excess visceral fat, is an essential component of the pathophysiology of cardiometabolic disease [5]. Understanding the modifiable factors to prevent excess adiposity is, therefore, a public health priority.
Dietary fibre is a heterogeneous group of compounds consumed from a variety of plant food sources. Existing research has focused on total fibre intakes and suggests an inverse association between total dietary fibre and body weight [6]. Limited studies have considered the food sources of fibre. Where there is evidence, studies have observed differential benefits of fibre intake from cereal, fruit and vegetable sources on obesity-related cardiometabolic health outcomes [7][8][9][10]. Understanding the relationship between food sources of fibre and body composition may be more readily translated to food-based eating guidelines. A limitation of previous studies exploring fibre intakes is the use of food frequency questionnaire (FFQ) data collection methods [11], rather than prospective 7-day estimated weighed diet records. Compared to FFQs, 7-day estimated weighed dietary records have been shown to have a greater agreement with dietary fibre intake collected from 16-day weighed records (gold standard method) [12]. The aim of this study was to investigate the associations of dietary fibre intakes from major UK food sources of fibre: potatoes, cereal (whole grains and non-whole grain cereal), fruit, vegetables, and legumes with measures of body composition (body mass index, waist circumference, and total body fat) and C-reactive protein, a marker associated with abdominal adiposity [13] and a strong predictor of future cardiovascular disease (CVD) risk [14,15]. This study was conducted in a large UK occupational cohort, the Airwave Health Monitoring Study, an ongoing longitudinal study of British police force employees [16].
Study Population
Recruitment procedures and baseline characteristics of the Airwave Health Monitoring Study of the British police forces have been described previously [17]. Dietary data from a random sample of food diaries collected between 2007 and 2012 (n = 7771) were used for the present study. We excluded participants with self-reported chronic disease diagnosis at enrolment: angina, heart disease, chronic obstructive pulmonary disease, cancer, chronic liver disease, thyroid disease, arthritis, diabetes (type 1 or type 2) and/or previous stroke (n = 501) as these diseases may affect dietary intakes. Participants were excluded based on missing data for primary outcomes. No female participant reported being pregnant. The final sample size included in the present study was 6898 (Supplemental Figure S1). The Airwave Health Monitoring Study is conducted according to the guidelines laid down in the Declaration of Helsinki and all procedures involving human subjects were approved by the National Health Service Multi-Site Research Ethics Committee (MREC/13/NW/0588). Written informed consent was obtained from all participants.
Assessment of Fibre Intake and Other Dietary Variables
Dietary intake was measured using 7-day estimated weighed food diaries. Calculation of nutritional intake was conducted using Dietplan software (Forestfield Software Ltd, Horsham, UK), which was based on McCance and Widdowson's 6th and 7th Edition Composition of Foods UK Nutritional Dataset (UKN), following a study-specific standard protocol [17]. To account for individual differences in reporting and total energy intake, energy-adjusted dietary variables were calculated using the nutrient density method [18]. The Goldberg method was applied to estimate the prevalence of energy intake misreporting [19], the methods and results of which have previously been reported in detail [17]. Fibre intake was estimated from the Association of Official Analytical Chemists (AOAC) analytical method of dietary fibre reported in the UKN database. Fibre intakes from the following food groups of interest were calculated: i) fruits, ii) vegetables (excluding legumes and white potatoes), iii) potatoes (excluding sweet potato), iv) legumes (including peanuts), v) whole grains, vi) nuts and seeds and vii) non-whole grain cereal sources. Fruit and vegetable classifications were based on common UK culinary usage, e.g., tomato as a vegetable, sweet corn as a vegetable. Whole grain content of foods was estimated from previously published data [20] and on-line manufacturer declarations. Table S1 details the foods classified in each food group.
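As an illustration of the energy adjustment described above (a sketch only; the data frame and column names are hypothetical, not taken from the study dataset), the nutrient density method simply rescales each participant's intake to a common energy basis before ranking into fifths:

```python
import pandas as pd

# Hypothetical 7-day diary summary: mean daily fibre (g) and energy (kcal)
diaries = pd.DataFrame({
    "participant": [1, 2, 3, 4, 5],
    "fibre_g": [14.2, 22.8, 17.3, 30.1, 11.6],
    "energy_kcal": [1850, 2600, 2100, 2900, 1700],
})

# Nutrient density method: express intake per 1000 kcal of total energy
diaries["fibre_density"] = diaries["fibre_g"] / diaries["energy_kcal"] * 1000

# Assign fifths of energy-adjusted intake (quintiles), as used in the analyses
diaries["fibre_fifth"] = pd.qcut(diaries["fibre_density"], q=5,
                                 labels=["Q1", "Q2", "Q3", "Q4", "Q5"])
print(diaries[["participant", "fibre_density", "fibre_fifth"]])
```

Expressing intakes per 1000 kcal before ranking means that two participants with the same diet composition but different total energy intakes fall into the same fifth.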
Measures of Body Composition
Enrolled participants attended a regional health-screening clinic. Trained research nurses used a standard protocol to conduct all clinical examinations as described previously [16]. The primary outcome measurements for the current analyses were body mass index (BMI), percentage body fat (%BF), waist circumference (WC) and C-reactive protein (CRP). Body weight was measured to the nearest 0.05 kg using digital scales (Marsden digital weighing scale). Standing height was measured to the nearest 0.1 cm (Marsden H226 portable stadiometer, Marsden Weighing Group, South Yorkshire, UK). BMI was calculated as weight (kg)/height (m²), %BF was measured via bioelectrical impedance analysis (Tanita BC-418MA body composition analyser, Tanita Corp., Tokyo, Japan), WC was measured between the lower rib margin and the iliac crest in the mid-axillary line using a Wessex-finger/joint measure tape (Seca 201, Seca Ltd, Birmingham, UK) and CRP was measured using serum (IL 650 analyser, Instrumentation Laboratory, Bedford, MA, USA).
Measurement of Covariates
Data on occupational, lifestyle, medical history, socioeconomic and demographic factors were collected during the health-screen visit using a structured on-line questionnaire. Total working hours (including usual weekly overtime) were classified into categories (<40, 41–48, ≥49 hours per week) [21,22]. Physical activity information was collected using the International Physical Activity Questionnaire Short Form (IPAQ-SF) [23], which calculates metabolic equivalent minutes per week across three exercise parameters (walking, moderate and vigorous), with participants categorised as undertaking a high, moderate or low level of activity [24]. Weekly TV viewing time was recorded as part of the lifestyle questionnaire in multiples of 15 minutes and categorised into three groups (high, moderate and low) based on tertile cut-off values (TV viewing hours per week: low <6, moderate 6–15, high >15).
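The IPAQ-SF scoring reduces to a weighted sum of minutes and days. The sketch below is illustrative (the function and variable names are our own, not from the study protocol) and uses the standard IPAQ MET weights of 3.3 for walking, 4.0 for moderate and 8.0 for vigorous activity:

```python
def ipaq_met_minutes(walk_min, walk_days, mod_min, mod_days, vig_min, vig_days):
    """Total MET-minutes/week from IPAQ-SF items.

    Uses the standard IPAQ scoring weights: walking 3.3 METs,
    moderate activity 4.0 METs, vigorous activity 8.0 METs.
    """
    return (3.3 * walk_min * walk_days
            + 4.0 * mod_min * mod_days
            + 8.0 * vig_min * vig_days)

# e.g., 30 min walking on 5 days plus 20 min vigorous activity on 3 days
print(ipaq_met_minutes(30, 5, 0, 0, 20, 3))  # -> 975.0 MET-min/week
```

The resulting MET-minute totals, together with frequency criteria defined in the IPAQ scoring protocol, are what place participants into the high, moderate or low activity categories used as a covariate here.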
Statistical Analyses
To assess differences between two groups, independent t-tests were used for data with a normal distribution (mean and standard deviation presented) and Mann-Whitney U-tests were used otherwise (median and interquartile range presented). To achieve a normal distribution, CRP was logarithmically transformed. Associations across categorical variables were analysed using the chi-squared test (χ²). Fibre from legumes was combined with fibre from vegetables for the analyses due to low legume intake. Food sources of fibre intake data were highly skewed. Therefore, general linear models tested the association between fifths of fibre intake (g/1000 kcal) and each outcome variable of interest. Table S2 presents the cut-off intakes by quantile. To test linear associations, orthogonal polynomial coefficients were generated and applied to correct for the unequal spacing between median values of each quintile of intake [25]. Adjusted means are presented with corresponding 95% confidence intervals (95% CI). Two models were constructed for the analyses and adjusted for confounders. Covariates were selected for inclusion into the models by either i) an observed significant statistical association (p < 0.05) with both the independent variable and dependent variable under investigation (and plausibly classified as a confounder) or ii) a priori based on an association determined in previous cohort studies. The crude model was adjusted for age (continuous) and sex. The fully adjusted model was adjusted additionally for ethnicity, marital status, final attained educational level, length of weekly working hours, smoking (current, previous, never), daily TV viewing hours (thirds) and physical activity (IPAQ category), alcohol intake (mean g/day), total energy, and macronutrients: saturated fat, polyunsaturated fat, non-milk extrinsic sugars (all % energy intake), and sources of fibre other than those under study (g/1000 kcal). Based on previous studies indicating an association of fried potato consumption with cardiometabolic risk [26,27], we additionally adjusted the models for potato fibre by categories of fried potato consumption (nil consumers, low consumers, and high consumers who recorded below/above sex-specific median intakes when non-consumers were removed: men 22.4 g/day, women 17.8 g/day). Statistical analyses were conducted using SAS version 9.4 (SAS Institute, Cary, NC, USA). Statistical tests were two-sided with a significance level at 0.05.
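For readers unfamiliar with the trend test, the following sketch shows one way to generate orthogonal polynomial coefficients for unequally spaced category medians via QR decomposition (the construction behind R's poly() and SAS's ORPOL); the median values shown are hypothetical, not those of Table S2:

```python
import numpy as np

def orthogonal_poly_contrasts(scores, degree=2):
    """Orthogonal polynomial contrasts for (possibly unequally spaced)
    category scores, via QR decomposition of a centred Vandermonde matrix."""
    x = np.asarray(scores, dtype=float)
    basis = np.vander(x - x.mean(), degree + 1, increasing=True)
    q, _ = np.linalg.qr(basis)
    return q[:, 1:]  # drop the constant column: columns are linear, quadratic, ...

# Hypothetical median fibre densities (g/1000 kcal) for the five fifths
medians = [4.2, 5.9, 7.1, 8.5, 11.3]
linear = orthogonal_poly_contrasts(medians)[:, 0]
print(np.round(linear, 3))                     # linear trend coefficients
print("sum:", round(float(linear.sum()), 12))  # orthogonal to the intercept -> 0
```

Applied as a contrast across the five adjusted category means, the linear column provides the trend statistic underlying the reported p-trend values while respecting the unequal spacing of the quintile medians.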
Additional Analyses
Three sets of stratified analyses were conducted. Firstly, as obesity may lie on the causal pathway between diet and measures of inflammation, we stratified by BMI (<25 kg/m² and ≥25 kg/m²). Secondly, as carbohydrate intake is associated with dietary fibre intakes and may potentially modify associations between fibre and measures of body composition, we stratified by low and high carbohydrate intake. High carbohydrate intake was defined as energy intake of ≥50% derived from carbohydrate; this value identifies those above the mean intake within the cohort and is also the guideline amount for the UK population. Lastly, we stratified participants by those estimated to be acceptable and under-reporters of energy intake to test the robustness of our results against potential dietary intake misreporting [28]. We also conducted linear regression analyses with transformed independent variables (fibre intakes) to estimate beta coefficients.
Cohort Characteristics
Male employees accounted for 61.2% of the sample; mean age was 41.1 standard deviation (SD) 9.1 years, and 75.2% were employed in England. The majority of the cohort had a BMI above 25 kg/m² (66.6%), and 49% had a waist circumference higher than sex-specific cut-off values (Table 1). Mean daily fibre intake for the cohort was 17.3 SD 6.0 g and 3.3% of participants had mean intakes of 30 g or more per day. The main sources of fibre intake were non-whole grain cereal sources (39.9 SD 12.7%), vegetables excluding legumes (16.2 SD 8.6%), and potato (13.6 SD 8.6%) (Table 2). Sources of fibre differed across total fibre intake categories; with participants in the lowest category obtaining 47.8% (SD 12.7%) of fibre from non-whole grain cereal sources and 13.8% (SD 9.2%) from potatoes compared to 30.9% (SD 11.2%) and 8.2% (SD 6.0%), respectively, for participants in the highest fifth of total fibre intake (Table S3). Participants in the lowest fifth of total fibre intake were more likely to be male, employed in Scotland and work more than 40 hours per week; they were also less likely to have obtained a degree or postgraduate qualifications (Table S3).
Fibre Intake and Body Composition
After adjustment for potential confounders, including energy intake, there were significant inverse linear trends for all measures of body composition across fifths of total and fruit fibre intakes (Table 3). A direct association was observed between fibre from potatoes and all measures of body composition in the crude models (adjusted for age and sex). Full adjustment for confounders attenuated these associations. Fibre from non-whole grain cereal sources showed an inverse association with BMI but no association with %BF, WC or CRP in fully adjusted models. Fibre from vegetables and legumes showed no association with %BF or BMI; however, inverse associations were observed with WC (p trend = 0.0156) and CRP (p trend = 0.005). Tests of linear association conducted using linear regression (Table S4) showed comparable associations, with total and fruit fibre inversely associated with all outcome measures.
Additional Analyses
For total fibre intakes, trend estimates were comparable across analyses stratified by BMI (</≥25 kg/m²). In predicted under-reporters of energy intake, the association between fibre intake and CRP was attenuated in the fully adjusted model. In participants with high carbohydrate intake (≥50% EI from carbohydrate), there was an attenuation of the relationship between total fibre intake and CRP and BMI (Tables S5–S10).
Summary
Few studies have investigated the associations between fibre intakes from different food sources with measures of body composition. In this large UK population sample, we demonstrate that total fibre and fruit fibre intakes are inversely associated with all measures of body composition while potato fibre shows no association. Our findings suggest an inverse dose-response trend between total fibre intake with body composition and inflammation. The latter is supported by a previous longitudinal study in a US population sample that observed a 63% lower risk of elevated C-reactive protein (CRP) in the highest versus lowest quartile of total fibre intake [29]. Although our findings support a previous meta-analysis of an inverse association between total dietary fibre intake and cardiometabolic risk [30], there are limited large observational studies to compare our findings with. A previous study in a Spanish adult population sample found that lower total fibre intake was associated with overweight and obesity [31]. However, in contrast to our findings, the association did not remain significant when only plausible energy reporters were analysed [31].
In common with other nutrients, dietary fibre is obtained from several different food sources (fruit, vegetables and grains), all of which contain differing nutrient profiles. By testing associations between the main UK food sources of dietary fibre, we have been able to estimate independent associations of food sources of fibre with measures of body composition. Fruit fibre was the only source of fibre that was consistently inversely associated with all four measures of body composition, while whole grain sources of fibre were inversely associated with three, and vegetables (including legumes) with two. Non-whole grain sources of fibre only showed an inverse association with BMI. Although differential effects of food sources of fibre on health outcomes have been previously observed, there are a limited number of studies exploring associations with body composition. Pooled prospective data from the European Prospective Investigation into Cancer and Nutrition (EPIC)-InterAct study indicated that fibre from cereals, but not from fruit and vegetables combined, was inversely associated with overall and abdominal fat gain [10], while a further study in a Dutch population sample observed cereal fibre to be only inversely associated with BMI in men [9]. In terms of cardiometabolic health, fibre from cereals and vegetables, but not from fruit, was associated with reduced risk of type 2 diabetes [8], while a meta-analysis of prospective studies reported fruit fibre to be associated with reduced risk of cardiovascular disease [30]. The Nurses' Health Study and Health Professionals Follow-up Study reported no association across quintiles of total fibre intake and risk of coronary heart disease, while observing a protective dose-response effect from cereal fibre [11]. The apparent difference from our findings may relate to differences in outcomes, with our study considering body composition, an intermediate risk factor, compared to cardiometabolic disease end points. We also used prospective compared to retrospective dietary assessment and we separated whole grain and non-whole grain cereal fibre.
In minimally adjusted models we show a direct association between potato fibre intake and body composition; however, this significance was attenuated when analyses were adjusted for additional confounders including saturated fat intake and cooking method (frying). Given that potato consumption is an important component of total fibre intake (contributing ~14% of total fibre intake in the Airwave Health Monitoring Study population sample) and recent research has reported potato consumption to be associated with an increased risk of hypertension [32] and type 2 diabetes [26], the lack of an association between potato fibre and body composition needs further investigation. In fully adjusted analyses we observed fibre from non-whole grain cereal sources (the main contributor to fibre intake) to only remain inversely associated with BMI after adjustment for confounders. Non-whole grain cereal and potato food sources may be subject to more extensive processing than other food groups in terms of preparation (e.g., potatoes require cooking before consuming and non-whole grain cereal grains are commonly used in baked goods such as biscuits and cakes). Levels of processing can influence the fibre composition of starchy foods; for example, retrograded amylose and starch (resistant starch III [RSIII]) is a digestion-resistant carbohydrate formed through heating and cooling of starchy foods [33]. No estimates exist for the intake of RSIII from different food sources in the UK population and further evidence is needed to determine if RS exerts different effects on the pathways associated with body composition compared to native fibres.
Potential Mechanisms
The mechanisms linking dietary fibre and body composition are yet to be fully characterised. Controlled human feeding studies supplementing with fibre have observed beneficial outcomes on weight loss through increased satiety and a decrease in hunger, therefore lowering overall energy intake [34,35]. Both of these studies supplemented with oligofructose, a fibre that is predominantly obtained from wheat products in the Western diet [36], supporting our observed association of higher whole grain and non-whole grain cereal fibre intakes with lower BMI. However, fibre from whole grain sources, but not non-whole grain sources, was associated with lower waist circumference and percentage of body fat, suggesting that some benefits of fibre from whole grains are attributable to nutrients and non-nutrient components (e.g., phytochemicals) linked to whole grain fibre. A potential mechanism for the differential effects of food sources of fibre on markers of body composition and inflammation is via gut microbiota profile modification in response to different types of fibres (e.g., cellulose, lignin) combined with bioactive compounds such as polyphenols [37] contained in different food sources. Furthermore, differential products from dietary fibre fermentation by bacteria in the colon, such as short-chain fatty acids, have been shown to impact on metabolic pathways [38]. In interpreting our observations regarding the food source of fibre, it is important to acknowledge that the food source of fibre may simply be a marker for other dietary components. Moreover, other nutrients consumed with the fruit, or indeed the lower energy density of fruit compared to some other foods, may mean that eating fruit simply displaces other more caloric foods.
Public Health Nutrition Implications
It is estimated that about nine percent of the UK adult population meet the target intake for dietary fibre [39], which is higher than we observed in British police force employees, of whom <5% met the recommendation of 30 g or more a day. We have previously reported specific occupational factors in the Airwave Health Monitoring Study cohort, namely duration of weekly working hours and job strain, to be associated with poorer diet quality [40], which may help explain the lower fibre intakes. The low intake of whole grains and the lack of specific whole grain guidelines in the UK have previously been emphasised [20,41]. The observation that higher whole grain consumption was associated with increased overall fibre intake, and that it was independently associated with important markers of cardiometabolic risk, supports the need for clearer population guidelines to improve whole grain intakes, such as the Dutch food-based dietary guidelines [42]. Our finding that fruit sources of fibre intake are consistently inversely associated with measures of adiposity and inflammation is aligned with prospective evidence demonstrating a lower prevalence of overweight in persons who consume more fruit [43].
Strengths and Limitations
The main strength of our study is that we estimated fibre intakes from 7-day food records, shown to provide improved estimations of fibre intake compared to food frequency questionnaires [12]. Additionally, our application of extensive food group disaggregation to the dietary dataset has facilitated the investigation of fibre intakes from a wide range of food sources. Another strength is the objective measure of body composition as part of a rigorous clinical protocol [16]. Although the sample included in the study was drawn from a specific occupational group, potentially limiting the generalizability of findings, the period of greatest weight gain in adulthood occurs between the ages of 25 and 45 years [44]; therefore, it is of public health interest to understand how specific dietary exposures associate with body composition at this key life stage. Additionally, this study included a high proportion of males in early adulthood, who are under-represented in existing UK cohort studies [45]. The established limitation of all current dietary measurement tools, the reliance on self-report, needs to be acknowledged. We have previously reported the estimated prevalence of energy intake underreporting to be comparable to national UK diet survey data and biased towards participants with higher BMI [17]. We therefore conducted energy-adjusted analyses and, to investigate the effect of differential error, we stratified participants by energy reporting status. Absolute intakes of fibre may be underreported along with energy intake. However, the mean fibre intake for those classed as acceptable energy intake reporters was 19 g/day (vs. the cohort average of ~17 g/day), a value that is still considerably lower than the UK recommendation of 30 g/day. The overall trends in energy-adjusted intakes against objective measures of body composition are less likely to be influenced by errors in absolute energy intake reporting. It is not possible to estimate reporting errors at the food level. Underreporting may not be distributed equally across all types of foods but may be biased towards 'unhealthy' foods [46]; therefore, overestimation of fibre per unit of energy intake is a possibility. As with all observational nutritional epidemiological studies, understanding the effect of an individual component of the diet against overall dietary intake is challenging due to collinearity between dietary components. Additionally, the observed associations reported in this study may simply be due to uncontrolled residual confounding, either from variables not controlled for or because of imprecision in using self-reported and categorised data to estimate covariates. Lastly, the cross-sectional nature of our study limits any causal inferences that can be derived from our data.
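The paper states that energy-adjusted analyses were conducted but does not spell out the procedure. A minimal sketch of one standard approach, Willett's residual method, is given below; the variable names, simulated data, and the choice of method itself are assumptions for illustration, not the authors' code.

```python
# Sketch of Willett's residual method for energy adjustment.
# Assumption: the paper reports energy-adjusted intakes but does not state
# which adjustment method was used; the residual method is one common choice.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
energy = rng.normal(8.5, 1.5, n)             # total energy intake (MJ/day), simulated
fibre = 2.0 * energy + rng.normal(0, 3, n)   # crude fibre intake (g/day), simulated

# Regress the nutrient on total energy; the residual is the part of fibre
# intake not explained by how much the participant eats overall.
X = sm.add_constant(energy)
fit = sm.OLS(fibre, X).fit()
residuals = fibre - fit.predict(X)

# Re-centre the residuals on the fibre intake predicted at the mean energy
# intake, so adjusted values stay on an interpretable g/day scale.
predicted_at_mean = fit.params[0] + fit.params[1] * energy.mean()
adjusted_fibre = residuals + predicted_at_mean
```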
Conclusions
In conclusion, this study contributes to the limited existing evidence base on the beneficial associations between dietary fibre and body composition. In the face of the current trend to focus on the macronutrient content of the diet, the importance of dietary fibre to health is too often neglected. It is important that public health nutrition practitioners continue to work to increase fibre intakes in the UK population; one potential avenue would be the promotion of under-consumed food groups such as legumes and whole grains, in addition to fruit and vegetables, in keeping with broader nutritional recommendations.
Supplementary Materials: The following are available online at http://www.mdpi.com/2072-6643/11/8/1839/s1. Figure S1: Airwave Health Monitoring Study participant flow chart for inclusion in the cross-sectional study: The association between food sources of dietary fibre with measures of body composition and inflammation; Table S1: Food group descriptions applied to the dietary data from the Airwave Health Monitoring Study; Table S2: Cut-off values per quintile group of dietary fibre intake (energy-adjusted); Table S3: Characteristics of Airwave Health Monitoring Study participants by quintile of energy-adjusted dietary fibre intakes; Table S4: The association between fibre intakes and measures of body composition and inflammation (estimated beta coefficients); Tables S5 and S6: Analyses stratified by body mass index category; Tables S7 and S8: Analyses stratified by carbohydrate intake; Tables S9 and S10: Analyses stratified by classification of energy intake reporting.

Author Contributions: R.G., E.C., R.E. and G.F. designed the research question and methodological design; R.G. performed the statistical analyses and drafted the paper; A.H., M.A. and H.G. were responsible for primary data collection and management; R.G., R.E., E.C., G.F., Q.C., A.H., M.A. and P.E. contributed to the interpretation of the results and had primary responsibility for final content; P.E. is the principal investigator of the Airwave Health Monitoring Study. All authors read and approved the final manuscript.
Funding:
The Airwave Health Monitoring Study is funded by the Home Office (grant number 780-TETRA) and the Medical Research Council (grant number MR/L01341X/1), with additional support from the National Institute for Health Research (NIHR) Imperial Biomedical Research Centre (BRC). The diet coding was supported through discretionary departmental funds.
An unusual foreign body migrating through time and tissues
Background Besides infections, foreign bodies are amongst the most frequently encountered pathologies in pediatric otolaryngology. While inhaled foreign bodies represent an acute emergency, symptoms of ingested foreign bodies sometimes appear with some delay. Typically, fishbones tend to go unnoticed at a first examination and become symptomatic with fever, odynodysphagia and torticollis. Exceptionally, foreign bodies migrate and become manifest with a considerable delay. Case report We present the case of a young girl with an unusual foreign body which migrated through the cervical tissues, causing repeated cervical tumescences before being diagnosed. Conclusion Repeated cervical abscesses or tumescences in children or young patients should alert the treating physician to search for an underlying pathology such as an unnoticed foreign body or a malformation (e.g. a cyst). Furthermore, the scarce literature on migrating foreign bodies is discussed.
Background
The most frequently ingested foreign bodies in the Ear, Nose and Throat sphere are chicken and fish bones [1]. The symptoms are immediate, and patients quickly seek medical help after a few unsuccessful attempts to extract the foreign body themselves. Besides the tonsils, the base of the tongue and the upper esophagus are the places where impacted foreign bodies are usually found [1]. Their removal is essential to prevent superinfections, abscesses and perforations, with potentially life-threatening mediastinal complications in the case of esophageal foreign bodies [2]. Although rarely, foreign bodies sometimes migrate within the tissues and become symptomatic after a certain time lapse [3]. In those cases, the direct relation between the suspected foreign body ingestion and the first symptoms is rarely established, due to the latency and unusual clinical presentation [4,5].
Case report
We report the case of a 4-year-old girl who was admitted to our ENT outpatient clinic with a cervical neck mass without other signs and symptoms. The patient's history revealed that she had previously been treated several times for odynophagia with cervical tumescence within the last two months. Symptoms and swelling disappeared temporarily after the antibiotic treatments. However, the cervical mass rapidly reappeared after the end of the treatment. Otolaryngological examination showed no particularity besides a firm lateral cervical mass. A cervical CT scan (Fig 1a) revealed a deep subcutaneous collection, suggesting the presence of a cervical abscess. Potential infectious origins such as the tonsils, the salivary glands, the teeth and the facial skin were unremarkable. Despite intravenous antibiotic treatment, which decreased the cervical mass, an ultrasound control 10 days later showed a persistent subcutaneous liquid collection. We then opted for incision and drainage of this collection. The drainage and cleaning of the abscess cavity unearthed a blade of grass within the purulent discharge (Fig 1b).
On reviewing the patient's history, the parents suddenly recalled that she had complained of a transitory foreign body sensation for several days after chewing a blade of grass two months earlier. Follow-up showed no further recurrence of the neck swelling.
Discussion
Ingested foreign bodies (FB) in children vary in shape and size, with coins, nonmetallic sharp objects and other blunt objects seeming to be the favorite items (for a detailed overview see [6]). A majority of ingested FB pass through the gastrointestinal tract uneventfully. Severe complications are rare and often associated with delayed discovery due to silent and protracted clinical manifestations such as new-onset asthma, excessive salivation or recurrent upper respiratory infections [3]. These undetected FB tend to create fistulas to the surrounding structures (e.g. aorta, bronchi, etc.), leading to potentially life-threatening situations [3]. In contrast to adults, where symptoms and information on the swallowed object facilitate the diagnostic and therapeutic approach, children often present with few or absent symptoms, and the absence of symptoms does not preclude the presence of a FB [6]. However, the detection of a foreign body and the follow-up of the clinical course are crucial, especially since complications sometimes occur even after it has been extracted [7]. Impacted foreign bodies within the ENT sphere, typically fish bones, have been reported to cause upper respiratory tract abscesses [8]. However, migration through the entire pharyngeal wall ending in a superficial cervical abscess several months later is uncommon but has to be considered [1,5,9,10]. Repeated abscesses which seem resistant to treatment should always evoke the possibility of a foreign body or an underlying congenital malformation such as a branchial cleft cyst [8], even if radiological examination fails to evidence its presence. While FB migration has been reported in adults [1,9], the present case reports this rare complication in a child. In particular, the FB's nature, a grass blade, seems uncommon, even amongst adult reports [9]. Even though a grass blade is not solid or hard, depending on the ingestion angle it can exhibit considerable sharpness. In the present case this might have facilitated the initial tissue penetration.
Similar to foreign bodies in the ear [11] or nose [12], ingested FB in children are prone to lead to chronic and delayed symptoms [3]. Thus, the possibility of an ingested foreign body should always be considered, even when initial investigations were negative.
Modelling of Compound Parabolic Concentrators for Photovoltaic Applications
In this paper, ways of using compound parabolic concentrators as primary optical elements for concentrated photovoltaics are evaluated. The problems related to these classical non-imaging optical elements in photovoltaic applications have been assessed by modelling different types of linear and point focus concentrators. Particular consideration is given to the issues of manufacturability and cost. The non-uniformity of the flux resulting at the concentrator exit aperture has been considered, and some solutions are proposed in order to reduce adverse effects on performance, as well as to increase the angular tolerance of the system.
Introduction
Concentrator photovoltaics (CPV) systems [1,2] in use today can be divided, in the first instance, into two main categories: Fresnel lens refractors and parabolic reflectors. Both can be either point focus (3D) or linear focus (2D) concentrators. The concept of the compound parabolic concentrator (CPC) as a primary concentrator has received some attention in the field of building-integrated PV, but only for low-concentration (<5x) non-tracking applications [1][2][3], with few exceptions [4,5]. For solar tracking applications, CPCs offer the possibility of high solar concentration ratios, in principle approaching the theoretical limits [6,7]. However, one of the largest hurdles in the use of CPCs for primary optics in PV concentrators is their unwieldy character and the necessarily high material usage. This can in part be offset by reducing the length of the CPCs with the so-called truncated CPCs, or T-CPCs, which use far less material with only a minor reduction in concentration ratio and optical efficiency [6,8]. Despite this improvement, the surface area of the primary optical component remains high compared to a lens or a parabolic mirror. In this paper, some new possibilities for cheap and easily manufactured CPCs will be discussed as alternatives to the more widespread concentrators based on lenses or parabolic troughs, for medium and medium-high levels of concentration mounted on trackers for large-scale field applications.
CPCs offer some technical advantages: compared to a classic parabolic reflector, a CPC can be used with a less precise tracking system, due to its flat optical efficiency response, opening up the possibility of using cheaper commercial trackers not normally suitable for CPV; moreover, compared to a Fresnel lens, the optical efficiency of a CPC is higher. The best designed lenses currently available show optical efficiencies <90% [9,10], while the performance of a CPC is limited, to a good approximation, only by the reflectivity of the optical surface; indeed, the smoothness of the CPC's surface helps to strongly reduce the manufacturing defects that limit more complex, structured designs. Therefore, optical efficiency can be higher than 90% with advanced reflective films or coatings, such as those discussed in this paper. Additionally, some of these materials permit filtering of unwanted portions of the solar spectrum, which is advantageous in minimizing cooling requirements for the solar cell.
In common with most high concentration PV systems, the use of a flux homogenizer could be considered. As discussed in this paper, the flux profile at the outlet aperture of a CPC is highly non-uniform, and therefore the impact on cell performance for a concentrator cell can be deleterious. The design of the homogenizer suited to a CPC is discussed.
Background
Descriptions of the CPC began appearing in the literature in the mid-1960s [11,12]. As described by [6], the CPC has been used for many different applications, ranging from high-energy physics to solar energy collection. In the field of solar energy, CPCs have mainly been used in solar thermal applications, most commonly as static linear collectors focusing light onto evacuated tubes at low concentration (~1.5x). There are applications where CPCs are used as the primary concentrator with photovoltaic cells, and others where they have been considered as a secondary, non-imaging concentrator stage for some PV concentrator systems. Some projects have looked at the use of CPC troughs for combined PV-thermal (PV/T) applications [13]. In Sweden, Brogren first explored the use of CPCs for PV/T applications that require water for space heating [14], which was then further investigated [15,16].
One of the advantages that CPCs offer with respect to conventional imaging systems (parabolic mirrors and some Fresnel lenses) is their higher tolerance to misalignments with respect to the direction of the sun disk. Since CPCs approach the behaviour of ideal concentrators, their optical efficiency can be kept close to unity up to the acceptance angle, with a reduction factor for the entrance flux of only the cosine of the misalignment angle. As a consequence, for a given optical concentration ratio, they show the largest acceptance angle. The requirement on tracking accuracy is therefore lower compared to other concentrators with the same concentration ratio.
Most Fresnel lens systems concentrate light onto single solar cells with a point focus approach, rather than onto dense arrays of series connected cells. The significant advantage of this approach is that the problems of cell current mismatch are largely avoided (cells will still need to be series connected with other cells to build voltage, but, if the optical efficiency of each lens is the same, then cell currents should also be well matched). Single cells are able to tolerate a reasonably high degree of light non-uniformity; however, as discussed by [17,18], there can be a reduction in efficiency. In addition, when lenses for high concentration are used in conjunction with multi-junction cells, the effect of the non-uniformity can be a more serious problem because of the different light deflections for the different wavelengths converted by the cells in the stack [19]. This problem is avoided for concentrators using reflective optics. Secondary flux homogenisers can be employed to give near uniform light distribution on the cells. They are frequently used for both lens systems [19,20] and parabolic dishes [21][22][23]. The simplest flux homogenisers are rectangular boxes with reflective sidewalls (i.e. a kaleidoscope). Solid blocks made of plastic or glass, using the principle of total internal reflection, may realize the same design. However, care must be taken to avoid melting due to strongly focused spots of concentrated light.
CPC Design
The CPCs can be designed to concentrate light in either two or three dimensions. Obviously, the 2D-CPC has a lower concentration factor. According to [6], they can be designed following Eqs. (1) and (2), as a function of the concentration ratio C(ND), the required acceptance angle θ_i and the refractive index n_out of the material at the exit aperture, for a CPC with ND dimensions:

L = (a_in + a_out) / tan θ_i (1)

C(ND) = (a_in / a_out)^(ND-1) = [n_out / (n_i sin θ_i)]^(ND-1) (2)

where L represents the length of the concentrator, n_i is the refractive index of the medium at the entry (usually air, i.e. with n_i = 1), and a_in and a_out are the entrance and exit aperture radii, respectively, as illustrated in the standard representation of Fig. 1. In order to utilize the advantage given by the refractive index n_out at the outlet, it is important to have the solar cell in optical contact with a transparent dielectric material with n > 1 such as, for example, silicone; the interface should be matched to minimize the reflection losses at the receiver front surface. For PV, and in general for all energy production applications, it is important to reduce the cost of the system to a minimum; therefore, it is reasonable to consider CPCs filled with air rather than with materials capable of ensuring higher concentration factors (with refractive indexes greater than one). Even with this assumption, the highest theoretical limits for optical concentration in air are fairly high: ~216x for 2D concentrators and ~46,000x for 3D concentrators [6].
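As a numerical sanity check on Eqs. (1) and (2) as written above, the short Python sketch below reproduces the two theoretical limits quoted in the text (~216x in 2D and ~46,000x in 3D for a solar half-angle of about 0.265°) and the ~19 m length quoted later for a 30x linear design.

```python
import numpy as np

def cpc_concentration(theta_i_deg, nd=2, n_out=1.0, n_in=1.0):
    """Ideal CPC concentration, Eq. (2): (n_out / (n_in * sin(theta_i)))**(ND - 1)."""
    return (n_out / (n_in * np.sin(np.radians(theta_i_deg)))) ** (nd - 1)

def cpc_length(a_in, a_out, theta_i_deg):
    """Full (untruncated) CPC length, Eq. (1): (a_in + a_out) / tan(theta_i)."""
    return (a_in + a_out) / np.tan(np.radians(theta_i_deg))

solar_half_angle = 0.265  # deg, half-angle subtended by the solar disk
print(cpc_concentration(solar_half_angle, nd=2))  # ~216, the 2D limit in air
print(cpc_concentration(solar_half_angle, nd=3))  # ~4.7e4, the ~46,000x 3D limit in air

# 30x linear CPC on a 4-cm-wide cell: a_out = 0.02 m, a_in = 0.60 m.
theta_i = np.degrees(np.arcsin(1.0 / 30.0))  # acceptance half-angle for C = 30 in air
print(cpc_length(0.60, 0.02, theta_i))       # ~18.6 m, the ~19 m quoted in the text
```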
2D Concentrator Systems
2D-PV concentrator systems have been extensively studied, both theoretically and experimentally, with both reflecting mirrors [24,25] and lenses [26,27]. The 2D-CPCs are not commonly used in PV applications because the length of the two parabolic reflective walls appears to be excessive for large-scale purposes. For example, for a concentration factor of 30x on a 4-cm-wide cell (the same size used in the EUCLIDES project [24]), the length of an ideal CPC collector comes to 19 m. Even a halved CPC is too long for any practical application. The 2D-CPC is an ideal concentrator in terms of light concentration factor for a given acceptance angle but, for PV applications, the ideal characteristics for the optical efficiency are not strictly required. What the optical system needs, in fact, is to operate over an incident angle range for the impinging radiation at which its efficiency is the highest. In general, for CPCs, increasing the concentration factor increases the length of the object; besides, the higher the concentration ratio, the lower the angular acceptance of the system. Shortening the CPC involves a small loss of concentration and a small gain of angular tolerance, if the truncation is produced in the region of the parabola where the slope is lower, i.e. from the entrance aperture. So, it is possible to design a truncated CPC concentrator far shorter than the ideal one for a given concentration factor: by reducing the length of an ideal CPC of higher concentration ratio and lower angular acceptance, one obtains a structure with higher angular acceptance than the ideal one and considerably shorter. Eq. (3) defines a T-CPC, with the main parameters given in Fig. 2, as exhaustively described in [2].
To avoid the problem of excessively large lenses and long focal distances, PV lens concentrators typically consist of a number of small modules rather than a single large lens. CPC designs could also be suitable for such a configuration. If very narrow solar cells were used for a 2D CPC, the length of the collector would be suitable for industrial fabrication technologies, and for tracking systems similar to those currently used for lens arrays. Suitable cells for this purpose are, for example, the concentrator Sliver™ cells developed at the Australian National University. Sliver cells have a width of about 1 mm and could work efficiently under a concentration factor of about 30x [28,29]. Eq. (2) shows that an increase in the refractive index n_out increases the optical performance of the concentrator. By partially filling the hollow structure, the object can accept rays otherwise rejected, due to the refraction of light at the air-silicone interface. This effect is shown in the ray traces in Fig. 3. The figure shows the ray trace close to the exit aperture of a truncated 2D-CPC, 15 cm long, with an exit total aperture 2×a_out = 1 mm and a concentration factor of 30×, for a ray beam misaligned at 0.6° and with the solar angular divergence of 0.26°. Fig. 3a shows the outlet of the concentrator without the dielectric, while Fig. 3b shows the ray trace of the same rays when the concentrator is filled with a material with refractive index n = 1.49 (i.e. PMMA for λ = 600 nm) for a length of 25 mm starting from the exit. The latter configuration is able to tolerate misalignment up to 0.6°. The structure behaves as a simplified form of a two-stage CPC [3], while the surface curvature of the object is like that of a single CPC. The partially filled, truncated CPC can be analysed as a two-stage CPC, where the first stage is a T-CPC with a low exit angle θ_out1 and with an exit material with n > 1 (θ_out1 ≅ 15° and n = 1.49 in the example of Fig. 3b), and the second stage is another T-CPC with an exit angle θ_out2; this last exit angle can be selected a little lower than 90°, in order to achieve the higher level of concentration for the selected angular acceptance. Because the rays outgoing at the higher exit angle are the rays incoming at the higher angle of incidence with respect to the optical axis of the system, θ_out1 corresponds to the inlet angular acceptance of the second stage. The acceptance angle is here defined as the highest entrance angle for which all the light is transferred to the exit aperture.
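A minimal illustration of why the dielectric filling widens the tolerance, using only Snell's law at a flat air/PMMA interface: rays are bent toward the interface normal on entering the denser medium, so rays that would graze past the cell in the hollow CPC are steered onto it. The angles below are illustrative examples, not values taken from the reported ray-trace simulations.

```python
import numpy as np

def refracted_angle(theta_air_deg, n_dielectric=1.49):
    """Snell's law at a flat air/dielectric interface; angles measured from the normal."""
    return np.degrees(np.arcsin(np.sin(np.radians(theta_air_deg)) / n_dielectric))

# Even a near-grazing ray (89 deg from the normal) is compressed to ~42 deg
# inside PMMA, which is why the filled exit stage accepts rays the hollow
# structure would reject.
for theta in (30, 60, 80, 89):
    print(f"{theta:2d} deg in air -> {refracted_angle(theta):4.1f} deg in the dielectric")
```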
As the truncation of the considered objects reduces their lengths, the incident acceptance angle θ_i has a smaller value for a given concentration factor C than in the case of two ideal, longer CPCs connected in series. The ideal incidence acceptance angle θ_i,ideal for two complete CPCs connected in series is derived from the relationship given in Eq. (4), with an assumed total concentration factor C. Consequently, the transmission-angle curve does not have a sharp cut-off for incident beams at the acceptance value, as for ideal concentrators, but instead slopes for θ > θ_i, as shown for the considered case in Fig. 4.
This kind of concentrator, because of its particular form, requires a protective glass at the inlet aperture, to avoid the detrimental effect of dirt deposition on the large concave area. This element could be positioned over the complete structure, with an antireflection coating on it, usually acting as a self-cleaning surface as well, to reduce the optical losses from Fresnel reflection at its interfaces. Considering the different cases of presence of uncoated dielectric surfaces, the optical efficiencies obtained by simulation with the software TracePro® are reported in Table 1. In the simulations, the reflector scattering is described by the bidirectional reflectance distribution function,

BRDF(θ_i, φ_i; θ_s, φ_s) = L_s(θ_s, φ_s) / E_i(θ_i, φ_i)

where θ_i, φ_i represent the angles of incidence for the incoming radiation, in spherical coordinates, while θ_s, φ_s are the angles indicating the scattering direction; L_s is the scattered radiance, while E_i is the incident irradiance. This optical property has been introduced to consider the slight effect of light diffusion at the reflector surfaces. Because of the Fresnel reflection at the interfaces of materials with different refractive indexes, a portion of the incident light flux is back-reflected at the interfaces of the protective glass and of the encapsulant; this loss factor, common to every concentrator system using lenses, can be strongly reduced by depositing an antireflection layer on the surfaces, which are, in this case, all planar.
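As a back-of-envelope check on the size of these Fresnel losses (normal incidence only; the document's simulations handle the full angular dependence), the reflectance of an uncoated interface follows from the refractive indexes alone:

```python
def fresnel_reflectance(n1, n2):
    """Normal-incidence Fresnel reflectance of an interface between media n1 and n2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

n_air, n_glass = 1.00, 1.49  # n = 1.49 is the value used here for glass and encapsulant
r = fresnel_reflectance(n_air, n_glass)
print(f"loss per uncoated interface: {r:.1%}")           # ~3.9%
print(f"through two interfaces:      {(1 - r)**2:.1%}")  # ~92.4% transmitted
```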
The very thin and long illuminated area of this proposed design has the additional advantage of a very high perimeter/surface ratio for the PV device, which permits passive cooling of the cells, maximizing the thermal spreading effect at the receiver level.
The need for flux uniformity on a single cell depends significantly on the particular kind of cell employed; indeed, the cell size, the contact pattern and coverage, the doping levels and the external circuit configuration all play an important role. In the supposed case of Sliver cells used for the 2D-CPC system, there is a fairly high tolerance for non-uniformity on the device, because the emitter contact is placed on the side of the device, and its small dimension is in the direction perpendicular to the incoming radiation, which is of the order of the electron diffusion length for Si with lifetime higher than 200 µs. For symmetry reasons, uniformity along the long dimension of the device is ensured, so uniform light could be expected along the string of series connected cells. A flux profile along the short side of the cell is graphed in Fig. 5.

Table 1. Optical efficiency of a reflective, 2D-CPC of 30× concentration factor, for different misalignment angles and for different characteristics of fabrication: (a) no protective glass and no encapsulant; (b) no protective glass, 25 mm of encapsulant, without ARC; (c) 5 mm of protective glass and 25 mm of encapsulant, without ARC; (d) 5 mm of protective glass, 25 mm of encapsulant and a single MgF2 ARC layer on each interface. The specular reflectance of the surfaces adopted is 94.87%, while glass and encapsulant have been modelled with refractive index n = 1.49.
3D Concentrator Systems
In the case of 3D-CPCs, it is possible to consider a system assembly similar to that used with a 3D-lens concentrator. An illustration of an array of these 3D-CPC objects is shown in Fig. 6. One important characteristic for PV applications of these 3D concentrators is the very high non-homogeneity of the spatial flux distribution produced at the exit aperture, as shown, for example, in Fig. 7 for a truncated CPC with a concentration ratio of 115x and a length of 30 cm, with incident radiation directed along the optical axis of the concentrator and the solar angular distribution of 0.26°.
A method to correct this effect is to employ a light mixer to redistribute the light over the exit area. To achieve this result it is necessary to break the symmetry of the system, as described in [19,32]. The strong non-linearity introduced by these changes of geometry produces chaotic behaviour in the deterministic path of the rays. A well-known method is the use of a kaleidoscope with square section and reflective walls at the CPC outlet. Depending on the mixer unit length, it is possible to achieve different levels of uniformity for the illumination flux on the target area. For practical purposes it is important to find a trade-off between the length of the kaleidoscope and the level of flux uniformity; indeed, using a non-ideal reflector, the optical losses introduced by each reflection on the mixer walls significantly reduce the concentrator optical efficiency. Moreover, if the kaleidoscope and a portion of the CPC are filled with a dielectric, as previously described for 2D-CPCs in order to increase the angular acceptance of the concentrator, a material with a very low absorption coefficient has to be selected. Considering a reflector with 94.87% specular reflectance, 5% absorbance and 0.13% integrated BRDF as before, and a dielectric with the optical properties of PMMA which completely fills the 3-cm-long kaleidoscope and fills the CPC outlet for 1.4 cm, the simulated performances are reported in Tab. 2, for different incident angles of a beam with the solar divergence. From the results in Tab. 2, the energy loss due to multiple reflections at the kaleidoscope walls is evident. Indeed, the fraction of incoming rays reaching the exit aperture is close to 1 (column 4), but a significant part of the radiation energy is absorbed, even for a fairly good reflector with the characteristics specified before. The variation in flux uniformity as a function of the mixer length, for normal incidence of the solar radiation on the truncated CPC unit without dielectric filling, is reported in Fig. 8.

Table 2. Optical efficiency and fraction of collected rays for the 115×, 3D-CPC with a 3-cm-long kaleidoscope at the exit aperture, for different misalignment angles; the results are considered without and with the partial filling of the output of the structure with a transparent dielectric with n = 1.49. The specular reflectance of the surfaces adopted in the model is 94.87%.
The variation of the optical efficiency of the concentrator with the kaleidoscope length, for the reflector with the described properties, is reported in Fig. 10a for the cases of partially filled and empty objects, while the corresponding transmission-angle curves are shown in Fig. 10b.
To reduce the length of the mixer, a structured surface with V-shaped grooves can be employed, as described by Leutz [33]. Such a design increases the chaotic behaviour of the light ray paths, working as an efficient mixing trick that permits a length reduction. Another improvement of the optical efficiency can be achieved for structures with a lower concentration factor; indeed, in these cases, the average exit angle of the rays is lower and, consequently, so is the number of reflections on the kaleidoscope walls. However, in order to achieve a high optical efficiency for real 3D objects, the solution adopting a metal-coated reflective kaleidoscope does not seem effective. An alternative solution adopts a kaleidoscope made of a transparent dielectric material working by total internal reflection (TIR). In this way, this part of the structure does not give a performance reduction strongly related to its length, as in the previous cases with metalized, reflective surfaces. In Tab. 3 the optical efficiency of the 30-cm-long T-CPC with a 4-cm-long kaleidoscope made of a material with the optical properties of highly transparent glass, coated with a single layer of MgF2 as antireflection, is reported from simulations with the TracePro® software.
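The trade-off between mixer length and reflection losses can be made concrete with a crude geometric model (an approximation for meridional rays, not the TracePro simulation): a ray crossing a kaleidoscope of width w and length L at angle θ to the axis strikes the walls roughly L·tan(θ)/w times, and each metallic bounce multiplies its energy by the wall reflectance R. The dimensions below are assumed for illustration.

```python
import numpy as np

def kaleidoscope_throughput(length_mm, width_mm, theta_deg, reflectance=0.9487):
    """Crude throughput of a reflective-wall mixer: R ** (estimated wall hits).

    Wall hits ~ L * tan(theta) / w for a meridional ray; a TIR dielectric
    mixer has essentially loss-free walls, so this penalty disappears.
    """
    n_hits = length_mm * np.tan(np.radians(theta_deg)) / width_mm
    return reflectance ** n_hits

# Assumed, illustrative mixer: 30 mm long, 4 mm wide, 94.87% wall reflectance.
for theta in (20, 40, 60):
    print(f"theta = {theta} deg -> throughput ~ {kaleidoscope_throughput(30, 4, theta):.2f}")
# Steep (high-concentration) rays bounce many times and lose about half their
# energy, which is why the metal-coated mixer scales poorly with length.
```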
Materials
Because of the particular geometries required for the surface profiles, the fabrication of the structure can be done by plastic moulding. Computer-controlled machining tools can work surface profiles with the CPC curvature to a precision level of 0.01 mm; the smooth curvature required for these objects removes the fabrication problems, typical of Fresnel lenses, of achieving very sharp corners. The reflectance of the surface can be ensured by metallization with Al or Ag or by applying reflective films; in any case, the reflective coating must be properly covered with polymeric layers acting as protective barriers against moisture.
The large interest in highly reflective, low-cost materials for solar concentrators, both for PV and for thermal applications, has led to a large body of literature on this issue. Reflective materials have very good optical properties, even for large-scale and low-cost production [34][35][36]. For the structures modelled here, both reflective adhesive films and evaporated metal coatings directly deposited on the concentrator surfaces can be evaluated. For the particular geometries of the CPCs, the specular reflectance of the surfaces has to be evaluated at high angles of incidence of the light beam. Metallic reflectors are highly insensitive to the light impinging angle, as shown in the measured results in Fig. 11 for a glass coated with silver, tested at two different light wavelengths. The peak reflectance at higher angles in Fig. 11b is due to the Fresnel reflection. Nevertheless, multi-layer polymeric films also demonstrate very high reflectance at all incidence angles [38].

Figure 11. Experimental results of specular reflectance for a silvered mirror at different angles of light incidence, for two different wavelengths, 543 nm (a) and 1063 nm (b). The measurements have been carried out at the glassed side of the mirror.

The transparent dielectric material used here for the simulations has refractive index n = 1.49. By varying the material it is possible to change the refractive properties in order to manage the angular acceptance.
Conclusions
The use of some CPC designs as primary concentrators for CPV has been described. Both 2D and 3D CPC structures have been evaluated, and some particular solutions have been selected for possible photovoltaic applications. Historically, the large reflective area required for CPCs has limited their use to secondary collectors or to concentrators for low levels of concentration but, considering the very low price of currently available, high-efficiency film reflectors, and the possibility of industrially coating small-size structures with highly reflective metals, this family of optical objects can be considered a competitive choice for CPV applications.
The industrial development of very narrow linear concentrator cells has opened up the possibility of linear micro-concentrators. The particular shape of this kind of cell is suitable for linear concentrators, where each cell represents an element of a string of cells along a trough. The small width of the cells allows the use of CPCs, a class of concentrators not normally employed for large-scale photovoltaic applications because of their intrinsically large dimensions, despite the fact that they have almost ideal non-imaging optical properties. By moving toward very small devices, it is possible to achieve concentrators of reasonable size with the inherent advantages of this class of optical object, i.e. their good tolerance to misalignment errors and the possibility of employing low-cost but highly reflective materials leading to high optical efficiency. Moreover, the very thin width of the cell permits efficient cooling at medium concentration levels, increasing the overall system efficiency.
3D-CPCs can be employed in the range of 100×, permitting very high optical efficiency (close to 90%) for real devices produced with available industrial technology. The detrimental effect of the high non-uniformity in the light distribution at the target can be corrected with low optical losses, using a kaleidoscopic transparent dielectric material acting by total internal reflection, working as a light guide and mixing the radiation concentrated by the truncated CPC.
Sotrovimab: A Review of Its Efficacy against SARS-CoV-2 Variants
Among the anti-Spike monoclonal antibodies (mAbs), the S-309 derivative sotrovimab was the most successful in having the longest temporal window of clinical use, showing a high degree of resiliency to SARS-CoV-2 evolution interrupted only by the appearance of the BA.2.86* variant of interest (VOI). This success undoubtedly reflects rational selection to target a highly conserved epitope in coronavirus Spike proteins. We review here the efficacy of sotrovimab against different SARS-CoV-2 variants in outpatients and inpatients, discussing both randomized controlled trials and real-world evidence. Although it could not be anticipated at the time of its development and introduction, sotrovimab's use in immunocompromised individuals who harbor large populations of variant viruses created the conditions for its demise, as antibody selection and viral evolution led to its eventual withdrawal due to inefficacy against later variant lineages. Despite this, based on observational and real-world data, some authorities have continued to promote the use of sotrovimab, but the lack of binding to newer variants strongly argues for the futility of continued use. The story of sotrovimab highlights the power of modern biomedical science to generate novel therapeutics while also providing a cautionary tale for the need to devise strategies to minimize the emergence of resistance to antibody-based therapeutics.
Introduction
Sotrovimab, also known as VIR-7831 or GSK-4182136 (Xevudy®, manufactured by GSK) [1,2], is a monoclonal antibody (mAb) derived from S-309 (an mAb isolated from a SARS-CoV convalescent) which targets a highly conserved epitope of the receptor-binding domain (RBD) within the Spike protein of SARS-CoV-2. In this regard, it is noteworthy that its origin was an antibody made for a coronavirus other than SARS-CoV-2 and that sotrovimab was chosen for clinical development based on powerful in vitro antiviral activity and because it targeted a relatively invariant epitope shared by two coronaviruses. Sotrovimab was classified as either an RBD core cluster I [3] or a class 3 mAb [4], binding to both the "up" and "down" conformations of the RBD and interacting with a unique proteoglycan epitope at residue N343 [5]. Sotrovimab works by inducing both neutralization of virus infection and antibody-dependent cell cytotoxicity (ADCC) [6]. Sotrovimab's serum half-life was improved by inserting the Met428Leu/Asn434Ser (LS) mutation (Xtend™) in the Fc region. Unlike with other half-life-extended anti-Spike mAbs, this mutation does not impact the ADCC functions of sotrovimab, which are important for its activity against SARS-CoV-2 [7][8][9]. This mutation was previously used in the mAb ravulizumab (approved for paroxysmal nocturnal hemoglobinuria). Hence, sotrovimab was rationally chosen in the early days of the pandemic with the view that it might be more resilient to viral evolution by targeting a conserved domain, and was then enhanced by molecular biology techniques for a longer serum half-life while preserving critical antiviral Fc functions. Having been authorized by the FDA for treatment of high-risk outpatients since 26 May 2021, sotrovimab has, to date, shown the highest levels of resilience in in vitro activity against SARS-CoV-2 sublineages, except for the recently emerged BA.2.86* variant of interest (VOI) ("Pirola clan") [10,11]. In this review we will discuss the results achieved by sotrovimab in clinical trials and post-marketing experiences and draw lessons from this experience that could help in the future design of mAb-based therapeutics. Given that the overall safety of anti-Spike mAbs has been excellent, we will focus on efficacy only.
Methods
A search of the literature in the PubMed (through Medline), EMBASE, Cochrane Central, medRxiv, and bioRxiv databases for articles published and posted between 1 December 2019 and 29 December 2023 was carried out using English language as a criterion for selection. A search of the literature through the MEDLINE and PubMed electronic databases was performed for articles published during the same timespan using the following Medical Subject Heading (MeSH) terms and query: ("COVID-19" OR "SARS-CoV-2") AND "monoclonal antibody" AND "Spike" AND ("S-309" OR "sotrovimab" OR "VIR-7831"). We also screened the reference lists of the most relevant review articles for additional studies not captured in our initial search of the literature.
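For readers who wish to reproduce the PubMed leg of this search programmatically, the sketch below uses Biopython's Entrez interface with the query string quoted above. The contact e-mail is a placeholder required by NCBI, and PubMed's automatic term mapping may expand the quoted phrases slightly differently from the authors' manual MeSH search.

```python
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a contact address

query = ('("COVID-19" OR "SARS-CoV-2") AND "monoclonal antibody" AND "Spike" '
         'AND ("S-309" OR "sotrovimab" OR "VIR-7831")')

# Restrict to the review's window: 1 December 2019 to 29 December 2023.
handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="2019/12/01", maxdate="2023/12/29", retmax=500)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records found; first PMIDs:", record["IdList"][:5])
```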
Outpatient RCT Efficacy
The efficacy of sotrovimab was established by the COMET-ICE double-blind randomized clinical trial (RCT) (NCT04545060), which evaluated it in unvaccinated patients at risk for progression, mostly from the USA and with symptoms for less than 5 days, between August 2020 and March 2021 (with infecting sublineages being a cocktail of Alpha, Epsilon, Gamma, and Zeta [12]). In this RCT, 500 mg of i.v. sotrovimab reduced hospitalization from 7% to 1% in an interim analysis of 583 patients [13], and the final results on 1057 patients confirmed a reduction from 6% to 1% in hospitalization lasting longer than 24 h or death at day 29 [14]. It is noteworthy that other anti-Spike mAbs and COVID-19 convalescent plasma (CCP) have also shown efficacy when administered early in the course of disease which, together with the sotrovimab results, makes a compelling case that antibody-based therapeutics are very effective in reducing the progression of COVID-19 [15]. Consistent with this notion, the MANTICO RCT in Italy (NCT05205759) found that among adult outpatients with mild-to-moderate SARS-CoV-2 infection due to Omicron BA.1 and BA.1.1, early treatment with sotrovimab reduced the time to recovery compared with casirivimab/imdevimab and bamlanivimab/etesevimab (mAbs that were both deauthorized by the FDA at that time because of inefficacy and which may thus represent an inadequate control arm) [16]. More recently, a small-sized RCT in Thailand comparing sotrovimab to the combination of CCP and favipiravir showed comparable efficacy for both regimens when used in outpatients with COVID-19 [17].
Sotrovimab is also being investigated in a Phase II trial (NCT05210101) on pre-exposure prophylaxis in 93 seronegative immunocompromised individuals [18].
Inpatient RCT Efficacy
In a multicenter TICO double-blind RCT (NCT04501978) involving 546 unvaccinated patients (mostly from hospitals in the USA) with more than 12 days of symptoms, carried out between December 2020 and March 2021 (hence at the time of the B.1.2 and Epsilon VOC), sotrovimab did not reduce pulmonary complications on day 5 nor lead to better clinical recovery on day 90 than the placebo [19]. It is noteworthy that sotrovimab, like other anti-Spike mAbs, has not been shown to reduce mortality in inpatients. This is distinct from the results of CCP, which reduces mortality in hospitalized patients when used early in hospitalization with units that have high neutralizing antibody titers [20], including mechanically ventilated patients [21]. This may reflect some fundamental differences between the efficacy of monoclonal and polyclonal preparations in more advanced disease.
Viral Evolution and Baseline SARS-CoV-2 Susceptibility to Sotrovimab
Sotrovimab remained strongly active in vitro until BA.2, but its activity, as assessed by IC50 in in vitro viral neutralization assays on replication-competent cell lines, has declined since the emergence of BA.4/5 (Table 1). Most importantly, Spike binding and viral neutralization efficacy were totally abolished by the emergence of the 2023 FLip lineages [22] and of BA.2.86* ("Pirola clan") [23]. In the latter, the Spike mutation K356T creates a motif for glycosylation of N354 which abolishes sotrovimab binding to Spike. It is notable that this mutation, virtually absent before the marketing of sotrovimab, has become apparent in multiple sublineages since then but never reached significant prevalence before BA.2.86* (Figure 1). Table 2 summarizes the key SARS-CoV-2 Spike mutations that confer in vitro resistance to sotrovimab.
While only 1 of the 35 patients in the COMET-ICE trial who had treatment-emergent resistance mutations experienced progression to hospitalization lasting longer than 24 h or death through day 29 [12], it should be noted that the COMET-ICE RCT did not recruit severely immunocompromised patients [14]. Severely immunocompromised patients have been the primary focus of sotrovimab treatment in real life and have a much higher risk of treatment-emergent resistance.
Real-World Evidence
Given that placebo-controlled RCTs are no longer considered ethical by most investigators, the only current sources of clinical efficacy data are standard-of-care-controlled RCTs or observational studies. The latter are mostly retrospective in nature and often lack propensity-score matched controls. During the Delta wave, Aggarwal et al. in Colorado matched 522 patients receiving sotrovimab to 1563 not receiving mAbs and demonstrated a 63% decrease in the odds of all-cause hospitalization (raw rate of 2.1% vs. 5.7%) and an 89% decrease in the odds of all-cause 28-day mortality (raw rate of 0% vs. 1.0%) [36]. These data were confirmed by Ong et al. in Singapore, who found that sotrovimab protected against in-hospital deterioration (hazard ratio, 0.41) [37]. On the other hand, Aggarwal et al. in Colorado reported that sotrovimab treatment was not associated with reduced odds of 28-day hospitalization (2.5% vs. 3.2%) or mortality (0.1% vs. 0.2%) during the BA.1 and BA.1.1 waves [38], for which sotrovimab had an IC50 above 150 ng/mL (Table 1).
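To make the reported effect size concrete, the sketch below back-calculates approximate 2x2 counts from the raw Delta-wave rates quoted above (2.1% of 522 treated vs. 5.7% of 1563 untreated) and recovers an odds reduction close to the stated 63%. The counts are reconstructed from the published percentages, not taken from the source data, and the calculation ignores the matching and covariate adjustment of the original analysis.

```python
# Counts back-calculated from the quoted raw rates (an approximation).
treated_events, treated_n = round(0.021 * 522), 522      # ~11 of 522 hospitalized
control_events, control_n = round(0.057 * 1563), 1563    # ~89 of 1563 hospitalized

odds_treated = treated_events / (treated_n - treated_events)
odds_control = control_events / (control_n - control_events)
odds_ratio = odds_treated / odds_control

print(f"OR ~ {odds_ratio:.2f}, i.e. ~{1 - odds_ratio:.0%} lower odds")  # OR ~ 0.36, ~64% lower
```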
In a study conducted during the time period corresponding to the Delta and BA.1 waves in California, Cheng et al. found that a sotrovimab cohort had a 55% lower risk of 30-day hospitalization or mortality (RR 0.45) and an 85% lower risk of 30-day mortality (RR 0.15) than a no-mAb cohort (n = 1,514,868) [39]. Similar data were reported from Wales, where Evans et al. reported that, among higher-risk adult patients in the community with COVID-19, those who received treatment with molnupiravir (n = 359), nirmatrelvir/ritonavir (n = 602), or sotrovimab (n = 1079) had a lower risk of hospitalization or death than those not receiving treatment (n = 4973); no difference was reported between the BA.1 and BA.2 waves [40].
In routine care of non-hospitalized high-risk adult patients with COVID-19 in England, no substantial difference in the risk of severe COVID-19 outcomes was observed between those who received nirmatrelvir/ritonavir (n = 5704) and sotrovimab (n = 3322) between February and November 2022, when Omicron subvariants BA.2, BA.5, or BQ.1 were dominant [41].
A recent meta-analysis covered 14 studies including 41,000 patients who received sotrovimab (in the US, UK, Italy, Denmark, France, Qatar, and Japan): four studies comparing the effectiveness of sotrovimab with untreated or no-mAb-treatment controls, two studies comparing sotrovimab with other treatments, three single-arm studies comparing outcomes during BA.2 and/or BA.5 versus BA.1, and five studies reporting rates of clinical outcomes in patients treated with sotrovimab. It reported that the rates of COVID-19-related hospitalization or mortality among sotrovimab-treated patients were consistently low (0.95% to 4.0% during BA.2; 0.5% to 2.0% during BA.5). All-cause hospitalization or mortality was also low in these patients (1.7% to 2.0% during BA.2; 3.4% during combined BA.2 and BA.5 periods). During BA.2, a lower risk of all-cause hospitalization or mortality was reported across studies with sotrovimab versus untreated cohorts. Compared with other treatments, sotrovimab was associated with a lower (molnupiravir) or similar (nirmatrelvir/ritonavir) risk of COVID-19-related hospitalization or mortality during BA.2 and BA.5, and there was no significant difference in outcomes between the BA.1, BA.2, and BA.5 periods [42].
Discussion
Sotrovimab proved to be the most resistance-resilient anti-Spike mAb monotherapy during the course of the COVID-19 pandemic, largely because it targeted a very conserved epitope which rarely mutates. Despite sotrovimab use being first limited by the United States FDA on 30 March 2022, and then deauthorized on 5 April 2022 due to inefficacy against BA.2, its usage largely continued in both the US [43] and the EU [44]; Figure 1 shows how usage continued in England. In the absence of other effective anti-Spike mAb therapies, some clinicians advocated the continued use of sotrovimab against Omicron lineages, even though the IC50 against these variants was never below 500 ng/mL. It should nevertheless be noted that the widespread vaccine boosting campaign, which results in antibody responses in most individuals, has minimized the additional benefits conveyed by early treatment, bringing into question the cost-effectiveness of the approach.
Observational and real-world data reports of continued sotrovimab efficacy, despite a precipitous loss of binding to later variants, are difficult to reconcile with the established principles of antibody action, which require binding to the virion for neutralization and activation of Fc-mediated antiviral functions. Assuming that those beneficial effects are real, unlikely but possible explanations include the persistence of minority sotrovimab-susceptible populations in some individuals, insufficient sampling of VOC prevalence, or some as-yet uncharacterized effect of the mAb on the immune system that affected immune function. Recently, a defect in post-infection B-cell memory generation after treatment with bamlanivimab has been reported for the epitopes targeted by this mAb [45]. Whether the same concerns apply to other anti-Spike mAbs, such as sotrovimab, remains to be investigated and could represent a clinical concern. With the current BA.2.86* wave originating in November 2023, sotrovimab has now totally lost its in vitro efficacy. While its efficacy could conceivably return in the future with a novel viral lineage that again uses the sequences that defined its epitope, it seems prudent to invest in the pipeline and to work on designing combinations of mAbs that are less susceptible to the emergence of mutations [35]. In this regard, VIR-7832 is a modification of sotrovimab with the addition of a three-amino-acid mutation GAALIE (G236A, A330L, I332E) to the Fc region, which enhances binding to FcγRIIa and FcγRIIIa, decreases affinity for FcγRIIb in vitro, and evokes protective CD8+ T lymphocytes in vivo [46,47]. However, VIR-7832 never reached clinical use.
In summary, sotrovimab was a success story that nonetheless provides a cautionary tale of how even a superbly designed mAb remains vulnerable to rapid viral evolution. In fact, the concept of using long half-life mAbs as treatment for immunocompromised patients who are unable to mount their own antibody responses, while rational and successful for some time, may carry within it the seeds of eventual failure. These patients harbor swarms of variants, and the introduction of monotherapy with an mAb will invariably select for variants against which the mAb no longer exerts antibody-mediated antiviral effects [31]. This phenomenon was carefully documented in a patient who received sotrovimab, which led to the emergence of mAb-resistant variants [48].
Table 2. Spike mutations associated with sotrovimab resistance in in vitro studies. Bold characters show the mutations that have been detected as emerging in vivo after treatment with the specific mAb. In cases where the exact amino acid change has not been studied in vitro, only the residue is highlighted. Numbers within parentheses represent the median fold-reduction in neutralizing antibody titers.
Table 3. Reported cases of sotrovimab treatment-emergent resistance.
Zodiacal Exoplanets in Time (ZEIT) VII: A Temperate Candidate Super-Earth in the Hyades Cluster
Transiting exoplanets in young open clusters present opportunities to study how exoplanets evolve over their lifetimes. Recently, significant progress detecting transiting planets in young open clusters has been made with the K2 mission, but so far all of these transiting cluster planets orbit close to their host stars, so planet evolution can only be studied in a high-irradiation regime. Here, we report the discovery of a long-period planet candidate, called HD 283869 b, orbiting a member of the Hyades cluster. Using data from the K2 mission, we detected a single transit of a super-Earth-sized (1.96 +/- 0.12 R_earth) planet candidate orbiting the K-dwarf HD 283869 with a period longer than 72 days. Since we only detected a single transit event, we cannot validate HD 283869 b with high confidence, but our analysis of the K2 images, archival data, and follow-up observations suggests that the source of the event is indeed a transiting planet. We estimated the candidate's orbital parameters and find that if real, it has a period P~100 days and receives approximately Earth-like incident flux, giving the candidate a 71% chance of falling within the circumstellar habitable zone. If confirmed, HD 283869 b would have the longest orbital period, lowest incident flux, and brightest host star of any known transiting planet in an open cluster, making it uniquely important to future studies of how stellar irradiation affects planetary evolution.
INTRODUCTION
The study of stars in clusters has been a cornerstone of stellar astrophysics for over a century (e.g. Russell 1914; Shapley 1917). Because clusters contain coeval stellar populations with uniform ages, compositions and formation histories, it is possible to study stars while controlling for these variables, determine how stars of different masses appear and evolve, and understand cases where stellar evolution took unconventional paths. Stars in open clusters have enabled studies of, among other phenomena, stellar mergers (Leiner et al. 2016), mass transfer (Geller & Mathieu 2011), rotation (Barnes 2007), and magnetic activity (Stern et al. 1981).
Over the last few decades, the detection of exoplanets has gone from unproven (Struve 1952; Campbell & Walker 1979) to achievable (Campbell et al. 1988; Latham et al. 1989; Mayor & Queloz 1995; Butler et al. 1997; Cochran et al. 1997) to routine (Rowe et al. 2014; Morton et al. 2016; Mayo et al. 2018), and fundamental questions about the formation and evolution of exoplanets are becoming pertinent. Since the very first discoveries, exoplanets have been found with orbits (Mayor & Queloz 1995; Naef et al. 2001; Cochran et al. 1997) and interior structures/compositions (Charbonneau et al. 2009; Masuda 2014) different from those of our own Solar System planets, in tension with traditional planet formation theories (e.g. Boss 1995). As the number of detected exoplanets grows, increasingly sophisticated analyses are beginning to yield insights into these surprising features of the exoplanet population (e.g. Rogers 2015; Dawson et al. 2015).
As astronomers begin to tackle fundamental questions about the origin and evolution of exoplanets, it stands to reason that the study of exoplanets in clusters may be similarly foundational to the study of stars in clusters. Studying a coeval planet population within a cluster could isolate trends in planet properties as a function of stellar mass (Cochran et al. 2002), while comparisons between different clusters and field populations could reveal how planet demographics depend on birth environment and how they change over time (Meibom et al. 2013;Mann et al. 2016a).
Recently, significant progress has been made detecting exoplanets in clusters. Some of the earliest discoveries came from radial velocity (RV) searches of cluster members (Sato et al. 2007; Lovis & Mayor 2007; Quinn et al. 2012), which were generally only sensitive to giant planets. Searches for transits were originally unfruitful (Gilliland et al. 2000; Burke et al. 2006; Pepper et al. 2008)[12] but found success after the launch of the Kepler space telescope, which detected two sub-Neptunes in the billion-year-old NGC 6811 cluster during its original mission (Meibom et al. 2013). The turning point for detecting planets in clusters came when the failure of a second reaction wheel ended the original Kepler mission and forced the spacecraft to point towards the ecliptic plane to maintain stable pointing (Howell et al. 2014). Fortuitously, a wealth of nearby and well-studied clusters and associations, including the Hyades, Praesepe, Pleiades, M67, Ruprecht 147, and Upper Scorpius, happen to lie near the ecliptic plane, making Kepler's extended K2 mission well suited for detecting small transiting planets around these well-characterized stars. K2 has fulfilled that promise with the detection of four planets in the Hyades (Mann et al. 2016a; David et al. 2016b; Mann et al. 2018; Ciardi et al. 2018; Livingston et al. 2018), six planets and one candidate in Praesepe (Obermeier et al. 2016; Libralato et al. 2016; Mann et al. 2017), one planet in Upper Scorpius (Mann et al. 2016b; David et al. 2016a), one planet in the Cas-Tau association (David et al. 2018), and one planet in Ruprecht 147 (Curtis et al. 2018).
The sample of small transiting planets in open clusters is already showing intriguing patterns, perhaps hinting that planets in young clusters may be less dense than their older counterparts (Mann et al. 2016a; Obermeier et al. 2016; Mann et al. 2017). However, the inferences which might be made about the existing population of planets in open clusters are limited by the sample. Because of its short observing baseline, K2 is most sensitive to planets with periods less than about 40 days, so the known small transiting cluster planets tend to orbit close to their host stars and be highly irradiated. Meanwhile, although radial velocity surveys have detected some long-period, cool planets, these objects are quite massive. Currently, there are no known small planets in temperate orbits around stars in open clusters, making it impossible to study the evolution and properties of planets in low-irradiation regimes.
Here we report the detection of a long-period transiting planet candidate around the bright (V=10.6, K=7.7, Kp=10.1) Hyades member HD 283869. We detected a single transit event in K2 Campaign 13 observations of HD 283869, with a depth, duration, and shape corresponding to a super-Earth in a roughly 100 day orbit around a K-dwarf stellar host. If the candidate is eventually confirmed to be real, it would be the first known temperate small planet in an open cluster. Our paper is organized as follows: in Section 2, we describe the K2 discovery observations and both archival and follow-up data on HD 283869. Though we do not validate that the candidate is indeed an exoplanet with high confidence, our analysis of K2 data, spectroscopy, and imaging suggests this is likely the case. In Section 3, we perform an analysis to determine stellar and planetary parameters under the assumption that the single transit event we see is indeed due to an exoplanet. In Section 4, we discuss the uniqueness of the candidate around HD 283869 and explore the path towards confirming the candidate.

[12] The lack of detections from transit surveys of clusters was not entirely expected (see, e.g. van ...).

K2 Photometry

After the data were downlinked from the spacecraft, they were processed by the K2 mission pipeline and released to the public. We downloaded the calibrated target pixel files from the Mikulski Archive for Space Telescopes, produced light curves, and removed systematic errors caused by Kepler's unstable pointing using the method described by Vanderburg & Johnson (2014). We searched the processed light curves for transits using a Box-Least-Squares algorithm (Kovács et al. 2002; Vanderburg et al. 2016b). Even though our transit search algorithm is designed to identify periodic phenomena, it detected a single, high signal-to-noise[13] transit-like dip in the brightness of HD 283869. The dip had a depth of about 800 ppm, a duration of about 4.6 hours, and a shape characterized by a rounded bottom and short ingress and egress times, consistent with the transit of a small exoplanet.

[13] We estimate the signal-to-noise of the dip is roughly 21.
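As an illustration of the kind of search described above (not the authors' actual pipeline), the sketch below runs astropy's Box-Least-Squares implementation on a synthetic 80 day light curve containing a single 800 ppm, 4.6 hour dip; the cadence, noise level, and trial-period grid are assumptions chosen for the example. Because BLS is built for periodic signals, a single event shows up at many aliased trial periods, but it can still be flagged with high significance.

```python
# Toy Box-Least-Squares search on a synthetic light curve containing one
# box-shaped 800 ppm, 4.6 hr dip; illustrative only, not the K2 pipeline.
import numpy as np
from astropy.timeseries import BoxLeastSquares

rng = np.random.default_rng(1)
time = np.arange(0, 80, 0.0204)                    # days, ~K2 long cadence
flux = 1 + 1e-4 * rng.standard_normal(time.size)
flux[np.abs(time - 8.0) < 4.6 / 24 / 2] -= 800e-6  # single dip at t = 8 d

bls = BoxLeastSquares(time, flux)
result = bls.power(np.linspace(20, 80, 5000), 4.6 / 24)  # periods, duration
best = np.argmax(result.power)
print(f"best period {result.period[best]:.1f} d, "
      f"transit time {result.transit_time[best]:.2f} d, "
      f"depth {result.depth[best] * 1e6:.0f} ppm")
```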
Upon identifying the transit-like event, we re-processed the K2 light curve by fitting a systematics model simultaneously with the long-timescale variability of the star and a single transit of a long-period planet (see Vanderburg et al. 2016b, for details). Our final K2 light curve is shown in Figure 1. The K2 light curve is dominated by a long-period signal, which we think is likely astrophysical and could be related to stellar rotation. We measured a period of about 37 ± 2 days in the K2 light curve using both an autocorrelation function and Lomb-Scargle analysis. If this period is in fact the rotation period of the star, then HD 283869 is an anomalously slow rotator for a star of its mass and age; most single Hyades and Praesepe members with similar masses have rotation periods of about 10-15 days. We discuss this point further in Section 4. When the long-period signal is removed, the dip is clearly visible by eye near the beginning of the K2 light curve.
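The rotation-period measurement can be reproduced in spirit with a Lomb-Scargle periodogram; the sketch below uses synthetic data with a 37 day sinusoid standing in for the real K2 photometry, and the amplitude and noise level are assumptions.

```python
# Toy Lomb-Scargle periodogram; an autocorrelation function provides an
# independent check on the recovered timescale.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(2)
time = np.arange(0, 80, 0.0204)                    # ~80 d campaign
flux = (1 + 2e-3 * np.sin(2 * np.pi * time / 37.0)
        + 1e-4 * rng.standard_normal(time.size))

frequency, power = LombScargle(time, flux).autopower(
    minimum_frequency=1 / 80.0,   # nothing longer than the baseline
    maximum_frequency=1 / 2.0)    # ignore very short periods
print(f"strongest periodicity: {1 / frequency[np.argmax(power)]:.1f} d")
```

Periods close to the observing baseline are only weakly constrained by a periodogram, which is one reason the text quotes a 2 day uncertainty on the 37 day signal.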
While K2 data are typically quite reliable, occasionally single events like the one we detect in the light curve of HD 283869 can be caused by instrumental phenomena. We therefore subjected the single dip to a battery of tests to rule out various scenarios which we have observed to cause similar signals in K2 data in the past. In particular, we confirmed that there were no changes to the scattered background light (perhaps caused by a bright Solar System object moving across Kepler's focal plane[14]) during the 4.6 hour transit-like event. We also confirmed that the dip was not a residual of our correction for systematics caused by K2's repeated drifting motion and thruster corrections. The dip spanned two drift periods and took place while Kepler was oriented in a part of its roll that was well-characterized by our "self flat field" systematics correction. We also inspected the light curves of the two other stars[15] observed by K2 within 5 arcminutes of HD 283869 and found no similar simultaneous dips, indicating that the transit-like event was not caused by some wide-reaching detector anomaly. We performed standard K2 pixel-level tests (see, e.g. Vanderburg et al. 2016b; Mayo et al. 2018) and confirmed that the apparent position of the star did not shift appreciably during the transit-like event both by difference image analysis (see Figure 2) and analysis of measured image centroids[16].

[14] For an example of such a scenario see Figure 4b of Vanderburg (2014), which shows a spurious single transit-like event caused by an increase in scattered background light as the planet Jupiter moved out of Kepler's focal plane.
[15] In particular, https://archive.stsci.edu/prepds/k2sff/html/c13/ep248053336.html and https://archive.stsci.edu/prepds/k2sff/html/c13/ep248053424.html.
[16] With a Kepler-band magnitude of 10.15, the image of HD 283869 is saturated in the K2 images, which can confuse diagnostics like image centroid shifts and difference images. Nevertheless, with the difference image analysis, we are able to show that the source of the transit is cospatial with HD 283869, and we are able to confirm that the shift in image centroids (transverse to the spacecraft roll) during transit is less than about 2 milliarcseconds compared to the spacecraft position in the two days surrounding the transit.
Finally, we showed that the shape and depth of the transit remained the same when the photometric aperture used to extract the light curve was changed.
Based on these tests, we conclude that the transit-like event we see is probably caused by some astrophysical phenomenon in the direction of HD 283869, and throughout the rest of the paper, we proceed under this assumption. In Sections 2.2 and 2.3, we go further and argue that the most likely explanation for the dip in the light curve of HD 283869 is that the star is indeed transited by a small, long-period exoplanet, but we do not go so far as to attempt to validate the signal as being caused by a genuine exoplanet with high confidence. Instead, given the difficulty of ruling out all possible false positive scenarios for single transit events, we consider the likely source of the signal to be a "planet candidate," which it will remain until it is confirmed by the detection of additional transits or through precise Doppler monitoring (e.g. Vanderburg et al. 2015). For convenience, throughout the rest of the paper, we refer to the planet candidate as HD 283869 b.
Spectroscopy
HD 283869 is a well-studied star thanks to its long-suspected Hyades membership. Here, we make use of extensive archival observations and some new observations taken after we identified the planet candidate orbiting HD 283869.
After being identified as a candidate Hyades member by photometric and proper motion surveys, HD 283869 was observed spectroscopically three times between 1974 and 1980 with the Radial Velocity Spectrometer at the Coudé focus of the 5.1m Palomar Hale telescope (Griffin et al. 1988) as part of a survey to identify true Hyades members among previously identified candidates. The three RV measurements from this survey had a mean velocity of 39.6 ± 0.17 km s−1 on the IAU system (with no variations at the 500 m s−1 level), suggesting kinematics consistent with Hyades membership. Some of us began observing HD 283869 in 1991 as part of an RV survey of Hyades members using the CfA Digital Speedometers on the 1.5m Wyeth Reflector at Oak Ridge Observatory in the town of Harvard, MA and on the 1.5m Tillinghast Reflector at Fred L. Whipple Observatory on Mt. Hopkins, AZ (Stefanik & Latham 1985). We obtained a total of 17 observations with the CfA Digital Speedometers between 1991 and 2006, all but two of which came from Oak Ridge Observatory. The RV time series shows no convincing evidence for astrophysical variability at the 300 m s−1 level, and a periodogram search reveals no strong periodicities. The mean velocity of the 17 Digital Speedometer observations is 39.7 ± 0.13 km s−1 on the IAU scale. There is no significant velocity difference between the CfA observations and the Palomar observations taken two decades earlier.
More recently, we observed HD 283869 with the Tillinghast Reflector Echelle Spectrograph (TRES), the high-resolution successor to the CfA Digital Speedometers on the 1.5m telescope at Mt. Hopkins. We obtained one observation in October 2011 and two other observations in September 2017 after we identified the planet candidate. We measured relative radial velocities between the three TRES observations using methods developed by Buchhave et al. (2010). We detect a possible 80 m s −1 RV shift between the observation taken in 2011 and the two observations taken in 2017, but the formal confidence of this shift is only about 2σ, and we do not consider it significant. When placed on the IAU scale, the average of the three TRES RVs is 39.84 ± 0.1 km s −1 , where the uncertainty is dominated by the transfer onto the IAU system. We adopt this value for the absolute RV.
The most precise existing radial velocity observations of HD 283869 were conducted as part of a survey to detect giant planets in the Hyades using the High Resolution Echelle Spectrograph (HIRES) on the 10m Keck I telescope on Maunakea, HI (Cochran et al. 2002; Paulson et al. 2004). HD 283869 was observed six times between 1998 and 2003 with typical uncertainties of about 5 m s−1. We placed limits on radial acceleration of HD 283869 by fitting the six HIRES RV measurements with a linear model while allowing for a radial velocity "jitter" term. We found no statistically significant acceleration, measuring a best-fit slope of about 3 ± 2 m s−1 yr−1, roughly the acceleration induced by either a Jupiter-mass planet at 5 AU or a 0.1 M⊙ M-dwarf at 50 AU. Significantly closer or more-massive objects than this must be nearly face-on in order to escape detection.
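A linear-trend-plus-jitter fit of this kind can be written in a few lines; the sketch below maximizes a Gaussian likelihood with the jitter added in quadrature to the formal errors. The six epochs and velocities are placeholders, not the actual HIRES measurements.

```python
# Sketch: fitting a linear RV trend plus a white-noise "jitter" term by
# maximizing a Gaussian likelihood. Data values are illustrative only.
import numpy as np
from scipy.optimize import minimize

t = np.array([1998.8, 1999.9, 2000.9, 2001.8, 2002.7, 2003.9])  # yr
rv = np.array([2.0, -3.0, 1.0, 4.0, -1.0, 3.0])                 # m/s
err = np.full_like(rv, 5.0)                                     # m/s

def neg_log_like(theta):
    slope, offset, log_jitter = theta
    model = offset + slope * (t - t.mean())
    var = err**2 + np.exp(2 * log_jitter)   # formal error plus jitter
    return 0.5 * np.sum((rv - model)**2 / var + np.log(2 * np.pi * var))

fit = minimize(neg_log_like, x0=[0.0, 0.0, 0.0])
print(f"best-fit slope: {fit.x[0]:.1f} m/s/yr")
```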
All in all, four decades of spectroscopic observations of HD 283869 show no evidence for radial velocity variations, placing strong limits on the presence of binary companions. The lack of detected RV variations shows definitively that HD 283869 is not a short-period eclipsing binary, eliminating that false positive scenario for the planet candidate. The RV constraints also place limits on the presence of distant companions which might be eclipsing systems themselves, decreasing the likelihood of a hierarchical eclipsing binary false positive scenario.
Imaging
We used a combination of archival imaging and newly acquired high angular resolution images to search for visual companions to HD 283869. We first inspected images taken in the original Palomar Observatory Sky Survey (POSS) on a photographic plate with a blue-sensitive emulsion to search for stationary background objects close to the present-day position of HD 283869. Since HD 283869 was observed by POSS in 1955, its apparent position in the sky has moved by about 9 arcseconds, making it possible to search for stationary background stars near its present-day position (see Figure 3). In a blue-sensitive plate, the saturated point spread function of HD 283869 extends near its present-day position 9 arcseconds away, and we see no evidence for any elongation that might hint at a background star in the present-day location of HD 283869. We estimate based on the other nearby faint stars in the POSS image that if there was a star brighter than about 18th magnitude at the present-day position of HD 283869, we would have seen it. Since we see no such star close to the present-day position of HD 283869, we can exclude background stars about 6 magnitudes fainter in blue bandpasses. We also searched for wide co-moving binary companions using the Hot Stuff for One Year (HSOY) catalog (Altmann et al. 2017). We identified no other stars out to a distance of 900 arcseconds (about 40,000 AU projected distance) brighter than R≈19 (corresponding to roughly 0.1 M⊙ M-dwarfs) with a proper motion consistent with HD 283869. Finally, we queried the Gaia DR2 database for sources near HD 283869 (Gaia Collaboration et al. 2016b, 2018). Gaia identified three very faint point sources within the K2 photometric aperture at distances of 3.7, 9.2, and 12.8 arcseconds. These point sources are too faint for Gaia to have measured proper motions or parallaxes, so we cannot ascertain whether any of them are physically associated with HD 283869 or if they are background objects. All three of these stars have Gaia-band G magnitudes fainter than G=19.4, too faint to have caused the 700 ppm transit signal we observed on HD 283869. Evidently, there are no widely separated stars near HD 283869 which could have contributed the transit signal we see. After identifying the planet candidate, we observed HD 283869 with two speckle imaging instruments: the NN-Explore Exoplanet Stellar Speckle Imager (NESSI) on the 3.5m WIYN telescope on Kitt Peak in Arizona, and 'Alopeke on the 8m Gemini-N telescope on Maunakea, HI. NESSI and 'Alopeke both work by taking many short (40-60 ms) exposures of a target star simultaneously in two optical narrow bands. The short exposures freeze out atmospheric turbulence, resulting in subimages which can be reconstructed using Fourier techniques to produce diffraction-limited images over small fields of view. We observed with NESSI in 40 nm-wide filters centered at 562 and 832 nm and with 'Alopeke in similar filters centered at 562 and 880 nm[19]. We reduced the data using the method described by Howell et al. (2011), and detected no nearby companions in any of the reconstructed images. The strongest constraints at small angular separations are placed by 'Alopeke; we can exclude stars 4.4 magnitudes fainter at angular separations of 0.1 arcseconds (or projected distances of 5 AU). The NESSI images are deeper than the 'Alopeke images due to observing conditions, and contribute the strongest constraints at larger angular distances.

[19] Due to poor weather conditions for our observation with 'Alopeke, only the image taken with the 880 nm filter was usable.
The NESSI data at 832 nm exclude stars about 5.8 magnitudes fainter at this wavelength at distances of about 1 arcsecond, or projected distances of 50 AU.
The constraints we place on background objects and visual companions from archival and speckle imaging further limit false positive scenarios, making it more likely that the planet candidate around HD 283869 is indeed a transiting exoplanet. Therefore, throughout the rest of this paper, we perform analyses assuming that HD 283869 is single and that the candidate transit event is indeed caused by a transiting exoplanet.
ANALYSIS
3.1. Membership in the Hyades

HD 283869 has a long history of being associated with the Hyades cluster. Griffin et al. (1988) measured a radial velocity for HD 283869 consistent with Hyades membership, but they flagged it as a possible member, citing inconsistencies in literature proper motion measurements as a source of doubt. More recently, Perryman et al. (1998) and Röser et al. (2011) assigned HD 283869 membership using updated astrometric parameters from Hipparcos (ESA 1997) and the PPMXL catalogs, respectively.
We reassessed the case for HD 283869's membership in the Hyades. First, we note that there is solid evidence for HD 283869's membership based on its position and proper motion. HD 283869 is located near the outskirts of the Hyades core (see Figure 4), and the star's space velocity is towards the cluster's convergence point (the star has a velocity of 23.7 km s−1 parallel to the cluster's convergence point and only 1.3 km s−1 perpendicular to the convergence point; Röser et al. 2011). Using the methods described by Rizzuto et al. (2011) and Rizzuto et al. (2015), and the Hyades cluster model from Rizzuto et al. (2017), we calculate a membership probability greater than 99%. This calculation does not take into account the measured radial velocity (consistent with Hyades membership) and the fact that HD 283869 falls right on the Hyades main sequence in a color-magnitude diagram. Including this additional information brings the membership probability to near unity. Although HD 283869 has a slightly discrepant proper motion perpendicular to the cluster convergence point (larger than all but a handful of other known members) and might have an anomalously long rotation period (see Section 4.2), the preponderance of the evidence suggests that it is indeed a Hyades member.
Limits on Additional Transiting Planets
We placed limits on additional (short-period) transiting planets by performing injection/recovery tests following the procedure outlined by Rizzuto et al. (2017). We injected 4000 transit signals with randomly chosen planet and orbital parameters into the light curve of HD 283869 and attempted to recover them with the "notch-filter" pipeline described by Rizzuto et al. (2017). Our results are shown in Figure 5. We find that we are generally sensitive to sub-Earth-sized planets in short-period (≲5 day) orbits and somewhat sensitive to Earth-sized planets out to periods of about 25 days. If there are other similarly-sized planets orbiting interior to HD 283869 b, then there must be some misalignment between the planets' orbits. A minimal version of such an injection/recovery test is sketched below.
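The sketch assumes the batman transit-model package for the injected signals, and the toy recover() function (a simple matched-window signal-to-noise check) stands in for a real search pipeline such as the notch filter; the noise level, parameter ranges, and a/R⋆ scaling are assumptions, not values from the paper.

```python
# Toy transit injection/recovery test. batman (Kreidberg 2015) generates
# the injected signals; recover() is a crude stand-in for a real pipeline.
import numpy as np
import batman

rng = np.random.default_rng(3)
time = np.arange(0, 80, 0.0204)                     # days
flux = 1 + 1e-4 * rng.standard_normal(time.size)    # toy photometry

def recover(t, f, per, t0, dur, snr_threshold=7.0):
    phase = (t - t0 + per / 2) % per - per / 2
    in_tr = np.abs(phase) < dur / 2
    if in_tr.sum() < 3:
        return False
    depth = np.median(f[~in_tr]) - np.mean(f[in_tr])
    snr = depth / (f[~in_tr].std() / np.sqrt(in_tr.sum()))
    return snr > snr_threshold

n_rec, n_tot = 0, 500
for _ in range(n_tot):
    p = batman.TransitParams()
    p.per = 10 ** rng.uniform(0, np.log10(40))      # period: 1-40 d
    p.rp = 10 ** rng.uniform(-2.3, -1.3)            # Rp/R*
    p.t0 = rng.uniform(0, p.per)
    p.a = 5.6 * p.per ** (2 / 3)                    # a/R*, rough Kepler scaling
    p.inc, p.ecc, p.w = 90.0, 0.0, 90.0
    p.u, p.limb_dark = [0.64, 0.10], "quadratic"
    injected = flux * batman.TransitModel(p, time).light_curve(p)
    dur = p.per / (np.pi * p.a)                     # approximate duration
    n_rec += recover(time, injected, p.per, p.t0, dur)

print(f"recovered fraction: {n_rec / n_tot:.2f}")
```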
Stellar Parameters
We used the Stellar Parameter Classification (SPC; Buchhave et al. 2012, 2014) method to determine the effective temperature, surface gravity, and equatorial rotational velocity of HD 283869 from the three TRES spectra. We ran SPC while fixing the metallicity to the cluster metallicity; we used a value of +0.15, which is an average of several previous determinations (Paulson et al. 2003; Dutra-Ferreira et al. 2016). Averaging the results for each of the three spectra, we measure a temperature T_eff,SPC = 4686 ± 50 K, surface gravity log g_SPC = 4.70 ± 0.1, and we place an upper limit on the star's projected equatorial rotation velocity of about 2 km s−1. We measure an average Mt. Wilson activity indicator from our three TRES spectra of R′HK = −4.77 ± 0.05 using the procedure described by Mayo et al. (2018).

Figure 5. Sensitivity to additional transiting planets around HD 283869. We show the orbital periods and planet radii of our injected planets as circular points in the plot; blue points represent planets which we successfully recovered with our notch-filter pipeline, and red points indicate planets which we did not recover. The plot background color shows the fraction of recovered planets in each region of parameter space.
We estimated the luminosity of HD 283869 using the parallax from Gaia DR1 (21.05 ± 0.29 mas; Gaia Collaboration et al. 2016a)[20] and fitting empirical templates to the available photometry, following the procedure from Mann et al. (2017), which we briefly describe here. We first downloaded archival photometry from the literature, including J H KS from the Two Micron All Sky Survey (2MASS; Skrutskie et al. 2006). We converted literature photometry to fluxes using the appropriate filter profile and zero-point (e.g., Cohen et al. 2003; Bessell & Murphy 2012). Utilizing spectra from the IRTF Cool Stars Library (Cushing et al. 2005; Rayner et al. 2009) and CONCH-SHELL catalog (Gaidos et al. 2014), we found the best-fit spectral template by comparing these fluxes to values derived from these spectra, allowing the mean flux level of the template to float (Figure 6). We filled in regions of high telluric contamination and those not covered by our templates using BT-SETTL models (Allard et al. 2011). Given that the star is within the 'Local Bubble', reddening is likely to be negligible (Lallement et al. 2003), and was not included in our analysis. The final bolometric flux was taken to be the integral over all wavelengths of the best-fit template and model, scaled to match the photometry. Interpolating between templates gave a negligible improvement in the fit (improvement in reduced χ² of <0.1). Uncertainty on the bolometric flux was calculated by accounting for errors in the individual magnitudes, zero-points, and differences between templates. This procedure yielded a bolometric flux of 2.61 ± 0.05 × 10−9 erg cm−2 s−1. Combined with the Gaia DR1 parallax (21.05 ± 0.29 mas), this gave a luminosity of 0.182 ± 0.006 L⊙.

[20] Recently, a more precise parallax for HD 283869 was included in Gaia DR2, 21.003 ± 0.054 mas. We confirmed that the stellar parameters and uncertainties derived using this new parallax remain consistent within errors, and the uncertainties in stellar parameters, which are dominated by systematic errors in stellar evolutionary models, were unchanged.
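The final luminosity follows from L = 4πd²F_bol; the short check below reproduces the quoted value from the bolometric flux and the DR1 parallax using astropy.

```python
# Quick check of the luminosity arithmetic: L = 4*pi*d^2 * F_bol,
# using the values quoted in the text.
import numpy as np
import astropy.units as u
from astropy.constants import L_sun

f_bol = 2.61e-9 * u.erg / u.cm**2 / u.s
distance = (21.05 * u.mas).to(u.pc, equivalencies=u.parallax())
luminosity = 4 * np.pi * distance**2 * f_bol
print(f"L = {(luminosity / L_sun).decompose():.3f} L_sun")  # ~0.18
```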
To determine other stellar parameters, we interpolated this luminosity onto the Mesa Isochrones and Stellar Tracks (Liu et al. 2016) model grid. Accounting for differences between the two model grids, and errors on the input parameters, this procedure gives T_eff = 4655 ± 55 K, R⋆ = 0.664 ± 0.023 R⊙, and M⋆ = 0.742 ± 0.023 M⊙. This T_eff is consistent with the value derived from the TRES spectrum. We also obtained a consistent radius using the Stefan-Boltzmann relation with the TRES T_eff and the above luminosity, and a consistent mass using the empirical mass-luminosity relation from Henry & McCarthy (1993), suggesting that the model-derived parameters are reasonable for this star.
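The Stefan-Boltzmann cross-check mentioned above is a one-liner, R = √(L / 4πσT⁴); evaluated with the quoted luminosity and the SPC temperature it returns roughly 0.65 R⊙, consistent with the model-derived radius.

```python
# Stefan-Boltzmann consistency check using the quoted L and TRES T_eff.
import numpy as np
import astropy.units as u
from astropy.constants import sigma_sb, L_sun, R_sun

L = 0.182 * L_sun
teff = 4686 * u.K
radius = np.sqrt(L / (4 * np.pi * sigma_sb * teff**4))
print(f"R = {(radius / R_sun).decompose():.3f} R_sun")  # ~0.65
```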
Transit Light Curve
We determined transit parameters by fitting the K2 light curve with a Mandel & Agol (2002) model using a Markov Chain Monte Carlo (MCMC) algorithm with affine invariant ensemble sampling (Goodman & Weare 2010). Often, when astronomers fit transits, they parameterize planetary orbits with physical variables such as the orbital inclination i or the ratio of the planet's semimajor axis to the stellar radius a/R⋆. The large uncertainties and covariances in the orbital elements of singly-transiting planets make it difficult for MCMC explorations to converge in situations like that of HD 283869. Therefore, instead of using a physical parameterization, we fit the K2 light curve in terms of variables directly related to the shape of the transit. In particular, we fit the transit in terms of the planet-star radius ratio, Rp/R⋆, the full duration of the transit from first to fourth contact, t14, the time of transit center, tt, the transit impact parameter, b, and linear and quadratic limb-darkening coefficients, u1 and u2. We also fit for a "jitter" term describing the uncertainty in the flux in each K2 long-cadence datapoint. We imposed priors requiring both the transit duration and the flux uncertainty term to be greater than zero and requiring the impact parameter to be between 0 and 1 + Rp/R⋆. We imposed informative Gaussian priors on u1 and u2, centered on the values interpolated from limb darkening models (0.644 and 0.096 for u1 and u2, respectively; Claret & Bloemen 2011) with widths of 0.07 (roughly matching the level of agreement between models and observations; Müller et al. 2013). We explored parameter space with 100 walkers, which we evolved for 10,000 steps each, discarding the first half for burn-in.
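The sketch below shows the structure of such a shape-parameter fit using emcee, the standard implementation of Goodman & Weare sampling. For brevity a trapezoid replaces the full Mandel & Agol model, limb darkening is omitted, and the data are synthetic placeholders, so it illustrates the parameterization rather than reproducing the published fit.

```python
# Minimal shape-parameter transit fit with emcee (affine-invariant
# ensemble sampling); a trapezoid stands in for the Mandel & Agol model.
import numpy as np
import emcee

def trapezoid(t, depth, t0, t14, t_flat):
    # t14: first-to-fourth-contact duration; t_flat: flat-bottom duration.
    x = np.abs(t - t0)
    flux = np.ones_like(t)
    flux[x <= t_flat / 2] -= depth
    ramp = (x > t_flat / 2) & (x < t14 / 2)
    flux[ramp] -= depth * (t14 / 2 - x[ramp]) / ((t14 - t_flat) / 2)
    return flux

rng = np.random.default_rng(4)
time = np.arange(-0.5, 0.5, 0.0204)  # days around transit center
flux = (trapezoid(time, 8e-4, 0.0, 4.6 / 24, 3.0 / 24)
        + 2e-4 * rng.standard_normal(time.size))

def log_prob(theta):
    depth, t0, t14, t_flat, log_jitter = theta
    if not (0 < t_flat < t14 < 1.0 and 0 < depth < 0.01):
        return -np.inf                     # simple shape priors
    var = np.exp(2 * log_jitter)
    resid = flux - trapezoid(time, depth, t0, t14, t_flat)
    return -0.5 * np.sum(resid**2 / var + np.log(2 * np.pi * var))

nwalkers, ndim = 100, 5
p0 = (np.array([8e-4, 0.0, 0.19, 0.12, np.log(2e-4)])
      + 1e-5 * rng.standard_normal((nwalkers, ndim)))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 10_000)
samples = sampler.get_chain(discard=5_000, flat=True)  # drop burn-in
```

Fitting directly in (depth, t14, b)-like shape variables avoids the strong degeneracies between period, a/R⋆, and inclination that stall samplers on single-transit data.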
Orbital Period
Because we only observed a single transit of the planet candidate HD 283869 b, the candidate's orbital period is not well determined. We therefore estimated the orbital period of HD 283869 b using a simplified version of the method described by Vanderburg et al. (2016a). We began by taking the posterior samples from our MCMC analysis of the K2 light curve described in Section 3.4, which include 500,000 individual samples of the parameters {Rp/R⋆, t14, b}. To estimate the orbital period of the planet, we took each set of these parameters drawn from the posterior, randomly drew samples of the eccentricity e and argument of periastron ω from the joint distribution described by Kipping (2013) and Kipping (2014), and calculated the orbital period P by evaluating the following equation[22]:

P = (π G M⋆ / 4) [ t14 / ( R⋆ √((1 + Rp/R⋆)² − b²) ) ]³ [ (1 + e sin ω) / √(1 − e²) ]³    (1)

where G is the gravitational constant, M⋆ is the stellar mass, Rp is the planetary radius, and R⋆ is the stellar radius. The resulting distribution of possible orbital periods for HD 283869 b peaks at about 40 days, with long tails extending to short periods inside of 10 days and long periods well beyond one year. The duration, impact parameter, and planet-star radius ratio are not the only information we have at our disposal about the orbital period of HD 283869 b. We can also place constraints based on the fact that the planet candidate only transited once during the 80 days of K2 observations. In particular, because the single transit occurred just about 8 days after the beginning of the K2 observations, and no other similar dips occurred during the rest of the observing campaign[23], the candidate's orbital period must be longer than about 72 days. We accounted for this by discarding all samples of the transit parameters and orbital periods with periods less than this minimum allowed period.

We also took into account the probability that we would detect the transit of a long-period planet at all in our observations. When the orbital period of a planet is longer than the duration of observations, there is no guarantee that the transit will take place while observations are taking place. For orbital periods longer than the duration of observations B, the probability p_det of detecting a transit decreases as:

p_det = (B + t14) / P    for P > B + t14    (2)

We took this additional prior into account by randomly selecting whether to discard individual samples for periods longer than the observing baseline with a probability described by Equation 2.

We use the surviving samples to estimate both orbital and transit parameters for HD 283869 b. The parameters are summarized in Table 1 and the orbital period probability distribution is shown in Figure 8. Most likely, the orbital period is not much longer than the minimum allowed period of 72 days; our analysis yields P = 106 (+74/−25) days[24]. Interestingly, given the luminosity and temperature of HD 283869, there is a fairly high likelihood that HD 283869 b orbits in the host star's habitable zone. 71% of the surviving orbital period samples fall within the optimistic habitable zone as calculated by Kopparapu et al. (2013), and 36% of the surviving samples fall within the conservative habitable zone. The equilibrium temperature of HD 283869 b is about 255 (+38/−44) Kelvin, which would make it the first temperate planet found in an open cluster.

[22] This equation can be derived by simplifying Equation 2 from Vanderburg et al. (2016a) if the scaled semimajor axis a/R⋆ ≫ 1, a safe assumption for long-period transiting planet candidates like HD 283869 b.
[23] While Kepler observations during Campaign 13 were uninterrupted, our default light curve reduction excluded data from several short periods of time when the spacecraft briefly lost fine-pointing control. We re-reduced the K2 light curve while including these data and confirmed that no transits occurred during these gaps (see Figure 7).
[24] The orbital period is not particularly sensitive to the choice of eccentricity prior. If we assume the planet's orbit is circular, we find P = 99 (+50/−20) days.

Figure 7. K2 light curve during periods when the spacecraft lost fine-pointing control. Each panel shows both the systematics-corrected K2 light curve (orange) and the raw K2 light curve convolved with the shape of HD 283869 b's transit (grey) to partially average over the uncorrected K2 roll systematics. We show the raw K2 light curve in addition to the more precise systematics-corrected light curve to demonstrate that no plausible transit signals were absorbed by the systematics correction in these poorly-constrained parts of the flat field. The periods when K2 lost fine-pointing control are interior to the two horizontal blue lines.

Figure 8. The black curve shows the probability distribution of the orbital period from our analysis in Section 3.5. The light green and dark green shaded regions represent orbits which fall in the optimistic and conservative circumstellar habitable zones, respectively (Kopparapu et al. 2013). Despite our weak constraint on orbital period, we can say fairly confidently that if real, HD 283869 b is temperate, with a 71% chance of orbiting within the star's habitable zone and a 99% upper limit on equilibrium temperature of 327 Kelvin.
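The whole period-estimation procedure described above reduces to a few vectorized steps; the sketch below evaluates Equation (1) on placeholder posterior draws, applies the 72 day cut, and then applies the window probability of Equation (2). The Beta(0.867, 3.03) eccentricity prior is quoted from Kipping (2013) as an assumption (with ω drawn uniformly for simplicity), and the posterior widths are invented for illustration.

```python
# Sketch of the single-transit period estimate of Equations (1) and (2).
import numpy as np
from astropy import units as u
from astropy.constants import G, M_sun, R_sun

rng = np.random.default_rng(0)
n = 500_000
m_star = 0.742 * M_sun
r_star = 0.664 * R_sun

# Placeholder posterior draws; real ones come from the MCMC chain.
t14 = rng.normal(4.6 / 24, 0.2 / 24, n) * u.d
b = rng.uniform(0, 1, n)
k = rng.normal(0.027, 0.002, n)                 # Rp/R*
e = rng.beta(0.867, 3.03, n)                    # Kipping (2013) prior
w = rng.uniform(0, 2 * np.pi, n)

chord = np.sqrt((1 + k) ** 2 - b ** 2)
ecc_factor = (1 + e * np.sin(w)) / np.sqrt(1 - e ** 2)
P = (np.pi * G * m_star / 4 * (t14 / (r_star * chord)) ** 3
     * ecc_factor ** 3).to(u.d)                 # Equation (1)

keep = P > 72 * u.d                             # only one transit seen
B = 80 * u.d                                    # observing baseline
window = np.minimum(((B + t14) / P).decompose().value, 1.0)
keep &= rng.uniform(0, 1, n) < window           # Equation (2) window prior
print(np.percentile(P[keep].value, [16, 50, 84]))
```

With these inputs the pre-cut distribution peaks near 40 days, matching the behavior described in the text, before the single-transit cut pushes the surviving samples above 72 days.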
DISCUSSION

HD 283869 is bright, especially in the infrared, and the fairly small size of the host star could make future transit transmission spectroscopy observations possible.
What sets HD 283869 b apart from the population of transiting planets in clusters is its long orbital period and low-irradiation environment. The longest-period validated transiting planet in a cluster is K2-136 d (Mann et al. 2018), which with a period of 25.6 days is the outermost planet in a three-planet system. HD 283869 b likely has an orbital period more than three times longer than K2-136 d. HD 283869 b would also be the transiting cluster planet which receives the least stellar irradiation. HD 283869 b receives 1.2 (+0.5/−0.6) times the flux received by the Earth, four times less flux than is received by K2-103, the present record holder.
The combination of its young age, proximity, and low irradiation makes HD 283869 b an intriguing target for studying the development of small, temperate planets. At an age of roughly 600-800 million years, HD 283869 b may still be evolving into its mature state. Radius evolution models calculated by Lopez & Fortney (2014) for super-Earths with hydrogen-rich envelopes predict that, in the absence of photoevaporation, if HD 283869 b has a hydrogen-rich envelope, its radius will contract by somewhere between 5% and 10% between now and maturity at an age of about 5 Gyr. Comparisons of the density of HD 283869 b to similar planets around older field stars could test these models. Observations of HD 283869 b might otherwise reveal surprises; other transiting planets discovered in the Hyades and Praesepe, like K2-25 b and K2-95 b, seem to be larger than their counterparts around mature stars (Mann et al. 2016a; Obermeier et al. 2016; Mann et al. 2017), indicating that processes like atmospheric evaporation may still be taking place. If transit observations of HD 283869 b show evidence for atmospheric loss, HD 283869 b might be the progenitor of an even smaller temperate planet, and potentially an early version of a rocky habitable-zone planet.
Evidently Slow Rotation
In Section 2.1, we identified a possible 37-day rotation period for HD 283869, which is considerably longer than the rotation periods of stars of similar mass and age in the Hyades and the similarly aged Praesepe open cluster. At face value, this is surprising. Several groups (Douglas et al. 2016, 2017; Rebull et al. 2017) have used K2 data to measure rotation periods of Hyades and Praesepe stars and found tight period-mass relations for single stars in these clusters, with high (≈ 85%) recovery fractions. A few other Hyades-age stars show longer-period variability than their peers, including the Praesepe member EPIC 211974724 with a 35 day period (Agüeros et al. 2011; Douglas et al. 2017), but it is unclear whether these long rotation periods are actually reliable. HD 283869 also appears unusually inactive in spectroscopic indicators. For HD 283869, Mt. Wilson R′HK = −4.77, while the median R′HK for Hyades stars is −4.47 with a scatter of 0.09 (Pecaut & Mamajek 2013). While HD 283869's Hα equivalent width is not easily distinguished from that of other Hyades-age stars in low-resolution spectra obtained by Douglas et al. (2014), inspection of high-resolution spectra of some of these stars shows HD 283869 is less active in Hα as well.
One possibility for explaining the longer-period variability on HD 283869 and others like EPIC 211974724 is that we view these stars nearly pole-on and the variability timescale is dominated by the spot evolution timescale/activity lifetime rather than the stellar rotation period. This interpretation is consistent with our upper limit on the projected rotational velocity of about 2 km s −1 . Interestingly, if true, this explanation would imply that the planet candidate, HD 283869 b, has an orbit significantly misaligned from its host's spin axis. A pole-on viewing geometry could also potentially explain the lower spectroscopic activity indicators as well if fewer active regions are visible from our line of sight.
Another more mundane possibility is that the long-period variability is instrumental in origin, and the true activity signal of HD 283869 is undetectable in the presence of long-timescale instrumental systematics. We think this explanation is unlikely. While Kepler and K2 data do exhibit long-term systematics due to differential velocity aberration, the morphology of the long-term signal in the HD 283869 light curve does not match typical instrumental signals in K2 data. If the signal were instrumental, its amplitude would be unusually high for a star of this brightness. Additionally, the amplitude and morphology of the signal do not depend on the size or shape of the photometric aperture used to extract the light curve. The long-period signal is large enough that it should be detectable in ground-based observations, which could clarify its origin.[25]
Recovering and Confirming the Planet Candidate
Confirming HD 283869 b and determining its orbital period with radial velocity follow-up will be quite challenging. We estimate a planet mass of about 6.5 ± 2 M⊕ using the probabilistic mass-radius relationship from Wolfgang et al. (2016), which corresponds to an RV semiamplitude of about 1.0 ± 0.4 m s−1. While some short-period[26] exoplanets with RV semiamplitudes this small have been detected, such small signals push against the limits of existing instrumentation and analysis techniques. Detecting such a small RV semiamplitude in the presence of the high-amplitude stellar activity signals expected for Hyades-age stars will be very difficult. Even in the optimistic case that HD 283869 has an unusually slow rotation period of 37 days, given the amplitude of photometric variations observed during the K2 observations, we estimate the stellar activity would induce up to 6-8 m s−1 peak-to-peak RV variations. Detecting the smaller signal of HD 283869 b in radial velocities may not be possible until instrumentation and analysis techniques have advanced.

[25] The 35 day period detected on the Praesepe star EPIC 211974724 has already passed this test; the signal was detected both in K2 and ground-based observations separated by 5 years, effectively ruling out instrumental artifacts (Agüeros et al. 2011; Douglas et al. 2017).
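The quoted semiamplitude can be checked from K = (2πG/P)^(1/3) Mp sin i / M⋆^(2/3) (ignoring the small Mp term in the denominator and eccentricity); with Mp = 6.5 M⊕, P = 106 days, M⋆ = 0.742 M⊙, and sin i = 1 this gives about 1.1 m s−1, as the short snippet below confirms.

```python
# Check of the quoted RV semiamplitude for an edge-on, circular orbit.
import numpy as np
import astropy.units as u
from astropy.constants import G, M_sun, M_earth

P = 106 * u.d
m_p = 6.5 * M_earth
m_star = 0.742 * M_sun

K = ((2 * np.pi * G / P) ** (1 / 3) * m_p / m_star ** (2 / 3)).to(u.m / u.s)
print(f"K ~ {K:.2f}")  # ~1.1 m/s, consistent with the text
```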
The most straightforward path to confirming the transit signal and precisely measuring the orbital period of HD 283869 b is photometric monitoring to detect additional transits. The candidate's long orbital period and shallow depth make it infeasible to detect from the ground, so space-based monitoring is required. NASA's recently-launched Transiting Exoplanet Survey Satellite (TESS) mission (Ricker et al. 2015) will not observe HD 283869 during its two-year prime mission because it lies too close to the ecliptic plane, but could observe HD 283869 in an extended mission. In particular, some of the extended mission concepts proposed by Bouma et al. (2017) observe the ecliptic plane for periods of time ranging from 14 days to up to 112 days. If one of these longer ecliptic pointings were to be adopted as a TESS extended mission, it could detect a transit of HD 283869 b. The orbital period of the planet is probably just a bit longer than the 72 day minimum allowed orbital period, and TESS should be able to detect a transit of HD 283869 b with a signal-to-noise ratio of about 11 (Jaffe & Barclay 2017;Stassun et al. 2017). The confirmation of a habitable-zone super-Earth in an open cluster would be a strong example of how K2-TESS synergy can strengthen the legacy of both missions.
We thank Luke Bouma for helpful discussions about TESS extended mission strategies, and we thank the anonymous referee for a helpful and constructive review. This work was performed in part under contract with the California Institute of Technology/Jet Propulsion Laboratory funded by NASA through the Sagan Fellowship Program executed by the NASA Exoplanet Science Institute. AWM was supported through Hubble Fellowship grant 51364 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. ACR was supported (in part) by NASA K2 Guest Observer Cycle 4 grant NNX17AF71G. D.W.L. acknowledges partial support from the TESS mission through a sub-award from the Massachusetts Institute of Technology to the Smithsonian Astrophysical Observatory (SAO) and from the Kepler mission. This research has made use of NASA's Astrophysics Data System and the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. The National Geographic Society-Palomar Observatory Sky Atlas (POSS-I) was made by the California Institute of Technology with grants from the National Geographic Society. The Oschin Schmidt Telescope is operated by the California Institute of Technology and Palomar Observatory.

[26] The long orbital period of HD 283869 poses an additional challenge. Most advances in treating stellar activity signals have been for exoplanets with orbital periods shorter than the stellar rotation period (e.g. Haywood et al. 2014).
This paper includes data collected by the Kepler mission. Funding for the Kepler mission is provided by the NASA Science Mission directorate. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX13AC07G and by other grants and contracts. This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.
Some observations in the paper made use of the NN-EXPLORE Exoplanet and Stellar Speckle Imager (NESSI). NESSI was funded by the NASA Exoplanet Exploration Program and the NASA Ames Research Center. NESSI was built at the Ames Research Center by Steve B. Howell, Nic Scott, Elliott P. Horch, and Emmett Quigley. The NESSI data were obtained at the WIYN Observatory from telescope time allocated to NN-EXPLORE through the scientific partnership of the National Aeronautics and Space Administration, the National Science Foundation, and the National Optical Astronomy Observatory.
We wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. We are also honored to be permitted to conduct observations on Iolkam Duag (Kitt Peak), a mountain within the Tohono O'odham Nation with particular significance to the Tohono O'odham people.
Angle Closure Glaucoma in Retinitis Pigmentosa
Background Angle closure glaucoma (ACG), whether primary or secondary lens-induced, occurs rarely in cases with retinitis pigmentosa (RP). Method Five patients with history of diminished vision, ocular pain, and nyctalopia were clinically evaluated. Four patients had unilateral presentations of circumciliary congestion, corneal edema, and high intraocular pressure (IOP), while one had a bilateral presentation. Anterior chambers were shallow; fundoscopy revealed the features of RP, and gonioscopy affirmed closed angles in all the cases. The management strategies were individualized based on the specific ocular condition. Result The raised IOP was not well controlled with conventional medical treatment. Neodymium yttrium aluminium garnet laser peripheral iridotomy (LPI) was performed in two patients and, as a prophylactic measure, in the fellow eye of two other patients. Phacoemulsification surgery with implantation of an intraocular lens (IOL) was performed in three patients, whereas one patient underwent phacoemulsification without IOL implantation together with trabeculectomy. Among them, two patients had subluxated lenses; one was managed with a capsular tension ring and the other was left aphakic. However, vision did not improve significantly in these patients. Conclusion RP may be associated with ACG in rare instances. In these patients, angle closure-related high IOP can have a detrimental effect on pre-existing visual impairment. However, this can be prevented by thorough clinical examination and timely intervention in susceptible eyes.
Introduction
Retinitis pigmentosa (RP) is the term used for a diverse group of progressive hereditary disorders that primarily affect photoreceptor and retinal pigment epithelial (RPE) function. It predominantly affects the rods, with subsequent degeneration of the cones [1]. The association between RP and glaucoma has long been investigated. The first case of RP associated with glaucoma was described by Galezowski in 1862 [2]. Since then there have been only a few reports of glaucoma associated with RP. The prevalence of primary open-angle glaucoma in RP ranges from 2 to 12% [3-5]. However, the association of RP and primary angle-closure glaucoma (PACG) has rarely been reported. Badeeb et al. reported a prevalence of 1.03% of PACG in RP patients over 40 years of age [3]. In view of this rarity, the association might be coincidental [4]. Intraocular pressure (IOP) elevation due to acute angle closure may aggravate the visual impairment in RP patients with pre-existing optic nerve dysfunction [5-7]. Angle closure-related IOP elevation can be prevented by timely intervention in these susceptible eyes. Therefore, understanding the association of angle closure and RP may help preserve visual function in these patients.
Here, we report five unusual cases of angle-closure glaucoma (ACG) in RP along with their case-based management. Informed consent was obtained from the included patients, and all work was conducted in accordance with the Declaration of Helsinki (1964).
Case Description
2.1. Case 1. A 65-year-old female with a history of nyctalopia presented with a sudden, profound, and painful loss of vision OU of three days' duration. She recounted a history of poor vision OU for 20 years. On examination, her visual acuity (VA) was hand movement (HM) OU. There was circumciliary congestion (CCC) with corneal edema (Figure 1). The anterior chambers were shallow, with Van Herrick (VH) grade two. The pupil was middilated and sluggishly reacting to light OS, and there was posterior subcapsular cataract (PSCC) OU. Goldmann applanation tonometry (GAT) revealed an IOP of 24 mmHg OD and 58 mmHg OS. She was managed with maximum tolerated antiglaucoma medications (Tab acetazolamide 250 mg, Gtt pilocarpine nitrate 2%, Gtt timolol maleate 0.5% + Gtt brimonidine tartrate 0.1%, and Gtt latanoprost 0.005%) and topical steroids (Gtt prednisolone acetate 1%). The following day, the corneal clarity was enhanced and her IOP was within a normal range of 11 mmHg OD and 12 mmHg OS, respectively. Gonioscopic examination disclosed closed angles in the superior, temporal, and nasal quadrants OD, whereas there were peripheral anterior synechiae (PAS) in all quadrants OS. Fundus examination revealed a cup-disc ratio (CDR) of 0.5 OD and 0.9 OS. The peripheral vessels were narrow and attenuated OU. There was diffuse RPE atrophy and bony spicules in the posterior pole and midperipheral retina (Figure 2). Neodymium yttrium aluminium garnet (Nd:YAG) laser peripheral iridotomy (LPI) was done OU, followed by phacoemulsification with a posterior chamber intraocular lens (PCIOL) OD. Her postoperative best corrected visual acuity (BCVA) was 4/60 OD, and IOP was under control OU with topical medications, which were continued.
Case 2.
A 55-year-old female with a history of nyctalopia presented with painful loss of vision in the RE for two weeks. She had no significant systemic illness or family history of ocular diseases. On examination, VA was HM OD and 6/18 OS. Congestion with corneal edema was evident, with shallow AC (VH grade 1), a middilated sluggishly reacting pupil, and presence of glaukomflecken with nuclear sclerosis (NS) grade 2 OD. Similarly, AC was also shallow (VH grade 2) with lens opacification of NS grade 2 OS (Figure 3). IOP by applanation was 46 mmHg OD and 17 mmHg OS. The patient was managed with maximum tolerated antiglaucoma medications (Figure 6). The patient mentioned inability to follow up as she was from a very distant area. LPI was done OU, and the patient was planned for cataract surgery, but she declined.

Anterior segment evaluation revealed shallow AC OD, while there was diffuse corneal edema with shallow AC and a middilated pupil OS (Figure 7), with lenticular opacification of NS grade 2 OU. IOP was 16 mmHg OD and 65 mmHg OS, respectively. With commencement of maximum tolerated antiglaucoma medications, the cornea OS cleared, allowing gonioscopy, which revealed PAS over more than 270°. On fundus examination, discs were pale OU with CDR of 0.6:1 OD and 0.9:1 OS, respectively. Other than waxy pallor, the components of the classic triad of RP were fulfilled, with attenuated vessels and diffuse bony spicules OU.
Combined cataract surgery and trabeculectomy was planned OS. However, a subluxated lens of three clock hours from 6 to 9 o'clock was noted intraoperatively, and implantation of a capsular tension ring with PCIOL in the bag was possible at the conclusion of surgery. Prophylactic LPI was done in the fellow eye.
Case 5.
A 56-year-old male presented with complaints of nyctalopia and diminished vision for three years. VA was perception of light with inaccurate projection of rays OD; however, there was no perception of light (NPL) OS. There was CCC and corneal edema OD, with shallow AC OU. There was grade 3 NS OD, while the fellow eye was aphakic (Figure 8). The pupils were middilated and sluggishly reacting OU. Fundus visibility was very poor OD due to corneal edema and dense NS, and there was a posteriorly dislocated lens in the vitreous OS. Gonioscopy revealed PAS in three quadrants OS, but hazy media precluded angle evaluation OD. The patient denied any history of trauma. The IOP was 60 mmHg OD and 35 mmHg OS. The IOP was controlled with maximum tolerated antiglaucoma medications. Combined cataract surgery and trabeculectomy with mitomycin C was performed OD. Intraoperatively, subluxation of the lens over more than 180°, from 3 to 10 clock hours, was discovered; thorough anterior vitrectomy was performed, and the patient was left aphakic. A superior flap with surgical iridectomy was created at the 12 o'clock position. Postoperative ocular findings were unremarkable and revealed no vitreous in the AC, but there was no improvement in VA after surgery. Fundus examination after surgery affirmed a pale disc with narrow attenuated peripheral vessels and diffuse RPE changes with bony spicules and attenuated arterioles. The presence of sclerosed venules implied probable overlapping sequelae of veno-occlusive disease (Figure 9). Table 1 presents the ocular biometric parameters of all the subjects included in this series. Table 2 summarizes the cases with regard to vision, IOP, and management.
Discussion
We have reviewed five patients with ACG in RP who visited our hospital at different times between July 2016 and June 2018. Though the literature indicating an association between the two conditions is meagre, there are a few reports of PACG with RP [3-5]. Among 234 diagnosed cases of RP in our hospital during this period, five cases presented as ACG, accounting for a prevalence of 2.13% in our series. A prevalence of 1.03% PACG in RP was reported from Canada [3]. Similarly, a five-year study from China showed 2.3% of RP associated with glaucoma, where angle closure was more frequent than open-angle glaucoma [8].
All of our patients were above 40 years of age, and three were females. It is an established fact that ACG occurs more commonly in females. A similar female preponderance of 54.7% [5] and 56.52% [9] has been reported for RP.
Only A-scan ocular biometric readings were included in this series, as ultrasonic biomicroscopy (UBM) is not available in our setup. It has also been suggested that the simultaneous occurrence of nanophthalmos, angle-closure glaucoma, and pigmentary retinal dystrophy could be a new syndrome [10,11]. However, none of our patients were nanophthalmic, and the average AL was 21.87 mm OD and 21.38 mm OS, respectively. UBM, where available, can provide reliable information and imaging evidence to evaluate the status of the lens position and determine subluxation for clinical use [12]. In this series, two patients had subluxated lenses, which were identified intraoperatively. There have been reports of lens subluxation in RP causing anterior luxation of the cataractous lens leading to angle closure. It is contemplated that the ultrastructure of the lens in RP is altered, causing lens fibre disorganisation, which may contribute to instability of the lens with anterior displacement and narrowing of the angle [13]. Recently, zonular instability has been speculated to be the cause behind angle closure in RP patients [14].
Previous theories regarding glaucoma in RP suggest that pigment migration is a characteristic feature of retinitis pigmentosa, and the presence of these pigments in the angle of the anterior chamber has been stressed as a possible etiologic factor in glaucoma [15]. However, not all RP carries a grave prognosis. Sectoral RP, unilateral RP, and autosomal dominant RP have a very slowly progressive disease course or can even be static [16]. In our series, we encountered only one patient with a unilateral presentation.
In patients with RP, high IOP due to angle closure can cause further visual impairment. Hence, proper clinical workup, applanation tonometry, gonioscopy, and timely intervention in these RP patients could decrease the risk of additional damage from the comorbidity of ACG and preserve ambulatory vision in these susceptible cases.
Data Availability
Data is available in the text in the form of tables.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Knowledge and perceptions of tuberculosis among patients in a pastoralist community in Kenya: a qualitative study
Introduction Tuberculosis awareness is crucial to the success of tuberculosis control and prevention. However, the knowledge and perceptions of tuberculosis patients in rural Kenya are not well documented. The study sought to explore the knowledge and perceptions of TB patients in West Pokot County, Kenya. Methods This was a qualitative descriptive study conducted between January and March 2016. A total of 61 pulmonary tuberculosis patients took part in the study, which comprised 6 focus group discussions and 15 in-depth interviews. Thematic analysis was used to analyse the data. Results Participants perceived TB as a serious contagious disease that is hard to diagnose and treat. They attributed tuberculosis to smoking, drinking alcohol, dust, cold air, witchcraft, trauma to the chest, contact with livestock, and genetic factors. They believed that TB was transmitted through casual contact with TB patients and sharing of utensils. Conclusion The study revealed many misperceptions among tuberculosis patients. The tuberculosis program should heighten patient education to improve patient knowledge and put more effort into dispelling misinformation about the cause and mode of transmission of the disease.
Introduction
Tuberculosis (TB) is a major global health concern [1]. It is the most common infectious cause of mortality worldwide, surpassing malaria and Human Immunodeficiency Virus/Acquired Immune Deficiency Syndrome (HIV/AIDS) [2]. According to the World Health Organization (WHO), lack of knowledge about TB causes underutilization of services, delay in seeking diagnosis, and poor treatment adherence [3]. Consequently, creating general awareness about TB among communities and initiating community participation in the control of the disease make up one of the six basic components of the WHO's "Stop TB Strategy" [4]. Improving a community's knowledge of TB is essential to the TB control strategy, as it shapes health-seeking behaviour [5-7]. Several studies have shown that a dearth of knowledge about the etiology, cardinal symptoms, and route of transmission, as well as appropriate treatment of TB, may lead to delayed or inappropriate health-seeking practices, thus sustaining transmission of the disease within the community [7-15]. According to Mondal et al. (2014), although people often have a general idea of what TB is, gaps in knowledge on transmission, treatment, and prevention lead to diagnostic and treatment delays among people living with TB. The authors argue that patients with little knowledge about TB are less likely to seek healthcare and get diagnosed; instead, they often turn to self-medication and traditional healers, which leads to delays in diagnosis and appropriate treatment [16]. In Ethiopia, Abebe and colleagues found that lack of awareness of TB contributed to the late presentation of suspected TB patients at health facilities [17].
While several studies done in African settings indicate that community members often have incorrect knowledge about the cause and transmission of TB [15,17-20], the knowledge and perceptions of TB patients themselves have not been well explored. The current study therefore focused on TB patients on treatment. It sought to assess the knowledge and perceptions of TB among patients in 4 health facilities in West Pokot County, Kenya.
Study design:
This was a qualitative descriptive study conducted between January and March 2016 using focus group discussions (FGDs) and in-depth interviews (IDIs).
Participant recruitment and sampling:
The study participants comprised 61 pulmonary tuberculosis patients receiving treatment at four health facilities in West Pokot County. Only confirmed adult TB cases on treatment were included in the study. Mentally ill patients and patients who had not completed two weeks of treatment were excluded due to the infectious nature of the disease. With the guidance of the nurses at the TB clinics, participants who met the inclusion criteria were purposively selected, briefed about the study, and booked for interviews on their appointment dates.
Data collection methods
In-depth interviews: In-depth interviews were the primary method used to collect data on patient knowledge and perceptions. The interviews were conducted at the TB clinics and lasted 45-60 minutes. A semi-structured interview guide was used to collect the information. The interviews were conducted in Kiswahili, tape-recorded, and later transcribed verbatim [23]. The concept of data saturation [24] was used to guide the number of IDIs conducted; data collection was stopped at 15 IDIs. The Kiswahili recordings were translated into English and transcribed in English.
Focus group discussion: We conducted six FGDs (3 with males and 3 with females), each comprising 6-10 participants. Compared to individual interviews, group interaction allows participants to agree and disagree, stimulating richer responses that help reveal respondents' real perceptions of the subject of interest [25]. Gathering ideas and cultural beliefs surrounding TB was possible through this method of data collection. The FGDs were constituted by gender. A semi-structured FGD guide was used to collect the data. The FGDs were conducted in Kiswahili, and each lasted 60 to 90 minutes. Each FGD was audio-recorded and later transcribed verbatim [23]. The concept of theoretical saturation was used to ensure no new conceptual information was emerging from further discussions [24]. Data saturation was reached at 6 FGDs.
Data processing and analysis: The FGDs and IDIs were transcribed by the researcher. Transcripts were analysed with the aid of NVivo (version 11). Data collection and data analysis were done concurrently. Thematic analysis was performed by reading through the transcripts multiple times and identifying, coding, and categorizing meaningful patterns into themes and sub-themes.
Ethical considerations:
The research proposal was approved by Moi University College of Health Sciences/Moi Teaching and Referral Hospital Institutional Research and Ethics Committee (Formal Approval Number: IREC 0001349). Participants were all briefed on the study, and each respondent was asked to sign an informed consent form without coercion. Participants consented to audio recording during data collection and were assured of confidentiality and anonymity for any information given.
Results
Socio-demographic characteristics of the participants: A total of 61 participants were enrolled in the study. The median age of the participants was 38 years (range 27-61 years). A total of 29 (47.5%) were female and 32 (52.5%) were male. Of the 61 participants, 46 (75%) took part in the FGDs while 15 (25%) took part in the in-depth interviews. About 27 (44%) of the participants had no formal education, while 21 (35%) and 13 (21%) had attained primary and secondary education, respectively. The majority, 34 (56%), were pastoralists, while 11 (18%) and 10 (17%) indicated business and formal employment, respectively, as their main source of income. The rest (9%) indicated they had no source of income.
Knowledge and perceptions of TB: Tuberculosis was commonly referred to as "TB" by the study participants. When asked the local name for the disease, participants reported that among the Pokot community, TB is known as Semewo takat, meaning "disease of the chest." The findings revealed that TB patients had differing perceptions of TB. Five themes emerged: curable disease; serious illness hard to diagnose and treat; contagious disease; a disease caused by a germ; and misconceptions. The misconceptions theme had 2 sub-themes.
Curable disease: The data revealed that patients correctly perceived TB as a curable disease. They were also cognisant of the fact that to be cured of TB, one needs to adhere to treatment for a long period of time. The fact that patients were aware of efficacious drugs against TB made it less stressful for them to learn that they were suffering from TB. Two participants had this to say: "...TB is curable but one has to be persistent and take drugs for a very long time..." (Female, 30 years). "I wasn't scared at all because TB is a disease that is treatable as well as curable" (Male, 32 years). Patients' previous experiences were key in shaping their perceptions of TB. One patient expressed how devastated he was to learn that he had TB, which according to him was a fatal disease. He believed that TB can be fatal, particularly if one does not seek and adhere to the doctor's instructions. His experience of having witnessed a patient die of TB in his village made him worried about his illness. However, he found relief after the nurse explained that TB was curable and showed him evidence of people who had been treated and cured of TB. This illustrates the importance of patient education by health workers in creating awareness of the facts about tuberculosis. This was illustrated by a male patient in his response to the question, "How did you feel after being told you were suffering from TB?": "When they told me I had TB I got scared that I might die and leave my children without a caregiver...but the nurse reassured me that it is a treatable disease and that I will get cured when I take the drugs well. I was scared because I used to have a neighbour who had TB, he was told to stop taking alcohol and smoking but he continued until he died. I was encouraged by the doctor who gave me an example of 5 people who had TB and yet they recovered...this gave me hope" (Male, 28 years).
The participants alluded to the fact that to be cured of TB, one needs to adhere to treatment for a long period of time. The majority of participants viewed this as a major challenge in dealing with the disease. However, some patients indicated they had abandoned treatment prematurely after their health improved. This only resulted in more suffering, as the disease recurred and the patient was forced to restart a full treatment regimen. This may be attributed to a lack of knowledge about TB treatment among some TB patients in West Pokot County, as the participants' narratives suggest: "The problem with TB is that you have to take treatment for a very long time and sometimes you give up. Personally I swallowed the drugs until I felt I had recovered. I swallowed for one and a half months and I felt I had improved and so I stopped the medications" (Female, FGD two). "Even me, I swallowed the drugs for some time until I had improved and so I stopped the treatment. But after one and a half years I had severe cough and came back here..." (Female, FGD two).
Contagious disease: The majority of participants perceived TB as a contagious disease that can easily spread from one person to another. Some of them correctly indicated that TB is an airborne disease and emphasised the need to observe cough etiquette to prevent transmission. However, most participants held many misperceptions about TB transmission, as illustrated below in the misconceptions theme. Some of the participants had this to say: "I have heard that TB is transmitted to another person through coughing. The doctors here tell us to cover our mouth when we are coughing so that we don't pass the disease to the other people" (Female, FGD one). "TB can be transmitted through air when you cough or sharing utensils" (Male, 45 years).
A serious disease hard to diagnose and treat: Most patients thought that TB was a severe disease that mainly affects the chest and has an insidious onset, which makes it hard to diagnose. The majority were concerned about the frustration one has to go through before getting the correct diagnosis and treatment. Because the onset of symptoms mimics other respiratory tract infections, patients were often treated with different medications without improvement. This was illustrated by the following sentiments: "...with TB life is complicated, some of us have gone to so many hospitals before being told the problem is TB" (Female, FGD three). "TB is a bad disease it hides in the body and it is not easy to know that you are suffering from TB. Because it starts just like a common cold with a cough..." (Female, 49 years). The participants perceived TB as a source of great suffering. Several participants agonised over how difficult it was to go through the experience of having TB. They recounted how TB caused them a lot of pain and discomfort, which left them very weak and unable to lead a comfortable life. Some of the participants had this to say: "When you have TB you suffer a lot and you experience a lot of chest pains and you also cough a lot. It is a bad disease that sucks the body making you lose weight... It makes someone vomit a lot and lose appetite and this makes you very weak" (Male, 28 years). "TB is a serious disease that makes you lose weight, "inakunyonya kama kupe" (it sucks you like a tick); it sucks you until you become very weak and everyone can notice you are unhealthy" (Female, FGD one). The participants were cognisant of the fact that TB is fatal without treatment. They termed TB "a very bad disease" requiring medical attention. According to the participants, a person with TB should seek the right treatment from what they referred to as "big hospitals," meaning the County or Sub-county hospitals. "When one has that disease he should go to the hospital because it is a very bad disease" (Male, FGD one). "TB is a bad disease that can finish you and the best thing is to look for treatment in a big hospital like this one so that your problem is discovered early and cured" (Male, 28 years).
TB is caused by a germ:
One of the probing questions on knowledge about TB in both the narrative guide and the focus group discussion guide was "What causes TB?" The data from both the narratives and the focus group discussions showed that participants had different explanations for what causes TB. Only 2 participants in the focus group discussions indicated that TB is caused by a germ or bacteria. The rest of the participants held many misconceptions about the cause of TB, as discussed below in the misconceptions theme.
Misconceptions about TB:
The participants held many false beliefs and myths concerning the cause and transmission of tuberculosis.
Notions on the cause of TB:
The participants indicated that TB was a hereditary disease. According to them, this explained why more than one person from the same family might suffer from TB. Although most of them termed TB a contagious disease, they did not attribute the occurrence of TB among members of the same family to transmission. To some of the patients, having a family member suffer from TB was expected, since this was an inherited disease. This was demonstrated by some of the participants, who had this to say: "TB is hereditary. It is a family disease like in my case most of the members have been treated for TB. It runs in our family. Even when they told me I had the disease I was not surprised" (Female, FGD three). "...if someone from your family has suffered from TB, then automatically someone else in the family will have to suffer from TB. That is, they say that there are some families/clans who have had this disease from olden days and it will continue like that even in future generations" (Female, FGD one).

Similarly, the participants perceived smoking and drinking alcohol as causes of TB. Participants, particularly those who had the habit of drinking alcohol and smoking cigarettes, saw this as the only explanation for why they acquired TB. "TB is caused by drinking alcohol and smoking cigarettes. If you look keenly you will find those who take a lot of illicit brews get TB. I used to take alcohol and that's how I got the disease but now I have stopped" (Female, FGD two). Mistaken beliefs about the cause of TB affect the control and preventive measures the community may advocate. According to the participants, since drinking alcohol and smoking were major causes of TB, one measure to reduce the burden of TB in the county would be for the government to ban the consumption of illicit brews and cigarette smoking. This was illustrated by the following participant: "TB is caused by drinking and cigarettes smoking. To reduce this problem the government should ban smoking and stop consumption of all illicit brews" (Male, FGD two). The participants also attributed the increased cases of TB in the area to the dusty environment and the dry weather predominant in West Pokot. According to them, TB mainly affects people who are exposed to dust through the nature of their jobs. Some of the participants had this to say: "It affects those people who smoke and those who work in dusty places. This place is dry and thus why we have a lot of TB" (Male, FGD two). "TB is a lot in this region because of the dry weather and a lot of dust" (Female, FGD one).

Other participants felt TB resulted from both cold air and dust, as indicated by one of the male respondents: "TB is caused by dust and cold air" (Male, FGD one). For some participants, TB is a zoonotic disease that spreads from goats to humans. The participants were mainly pastoralists and believed that their interaction with domestic animals was a source of TB. Due to animal theft, communities in the region often share a room with their goats and sheep; to some of the participants, this was one of the causes of tuberculosis. "...also the practice of rearing goats where some people sleep in the same room with the goats may be causing the many cases of TB" (Female, FGD three). "TB is a disease that affects the chest mainly and it is brought by the close contact with goats. Living in the same room with goats can bring the infection to the humans" (Male, 61 years). For others, TB was the result of trauma to the chest. Some participants associated their illness with trauma they had suffered at some point before the TB symptoms set in. Several patients recounted that their chest problems started after an injury to the chest that was later diagnosed as tuberculosis. A participant had this to say: "One can get TB when you suffer from trauma. My problem with TB started when I fell from a tree and hurt my ribs. After sometime I started coughing and that is how my problem all started" (Male, FGD three). Worse still, for other participants TB was the result of a bad omen, curse, or witchcraft. One of the key questions in the FGD guide was "What are some of the traditional explanations for the causes of TB?" In response to this question, a minority of participants indicated that TB was not viewed as an infectious disease; to them, TB was a curse or a bad omen that could befall anyone. "...people say TB comes as a result of curse or a bad omen which can affect anyone...some think that it is witchcraft" (Female, FGD three).
Patients' notions on TB transmission:
When asked whether TB is transmissible, the participants correctly perceived TB as contagious; however, the majority had false notions of how TB spreads from one person to another. This is of concern, since it is likely to affect the preventive and control measures adopted by the community. Participants believed that TB was transmitted through sharing of utensils. They noted that they all had their utensils set apart from the rest of the family's to avoid transmitting the illness. "The disease can be transmitted through sharing utensils. That is why it is always good to have your own cup spoon plate even cooking pots ...I don't know of any other route" (Male, 40 years). "If you have TB you are supposed to have your cup, plate, spoon, cooking pot and even beddings isolated from the rest in the family. You must not share with the others" (Female, FGD two). For others, TB was transmitted through casual contact with an infected person. To prevent transmission, they advised that an infected person should avoid associating with the rest of the family, including having their own utensils and house. This often led to acts of isolation, as described by some of the participants: "When one has TB she should not shake hands with the healthy people until one completes the treatment" (Female, FGD two). "TB is also transmitted when you eat and drink with a person who has TB. People with TB should have their own utensils and should not share house with the rest of the family" (Female, FGD three).
Discussion
The study showed that, although the participants correctly perceived TB as a contagious, curable disease, they did not know its cause or mode of transmission. Participants attributed the cause of TB to genetic factors, drinking alcohol and smoking, cold air, trauma, a dusty environment, and bad omens, while sharing of utensils and casual contact were seen as the main routes of transmission. This is despite the fact that these were patients already on treatment who ought to have received TB education at the health facility. The lack of TB knowledge is of great concern, as it leads to wrong opinions on control and prevention of TB, thereby making it difficult to reduce the burden of the disease. The current study revealed false beliefs and opinions about the cause of TB in the study area. The findings are consistent with those of a study done in rural Uganda, where witchcraft, hereditary factors, heavy labour, sharing of utensils, and smoking were documented as causes of TB [26]. Similarly, in a study done in Tanzania, participants attributed TB to smoking, drinking alcohol, witchcraft, and genetic factors [27]. The participants did not differentiate between the cause of TB and the risk factors for disease development. While smoking and drinking alcohol may serve as risk factors for developing TB, they do not cause TB. Poverty and lack of awareness are considered the most important factors that increase the risk of exposure to TB, while factors such as HIV/AIDS, smoking, drinking alcohol, malnutrition, the increased susceptibility of infants and the elderly, and increased virulence and/or increased dose of bacilli have been recognized as important contributors to the development of the disease and its epidemiological burden [7,13,28,29]. The misperceptions about the cause of TB should be targeted through patient education and awareness creation in the community.
Similarly, the participants identified casual contact, such as greetings, eating together, and sharing utensils with an infected person, as a mode of TB transmission. As a means of preventing TB, nearly all the participants reported the need for TB patients to have their own utensils, which should not be shared with the rest of the family. The findings of the current study are consistent with those of a recent study done among pastoralist communities in Ethiopia that showed significant knowledge gaps about the cause, signs and symptoms, mode of transmission, prevention, and treatment of TB among community members [19]. While patients may have a general idea about TB, lack of knowledge about the cause, risk factors, mode of transmission, and prevention may negatively affect efforts geared towards reducing the burden of TB in the community. Misconceptions about the cause and transmission affect the kind of preventive methods adopted by community members.
The community's understanding of human-to-human transmission of infection by TB patients is absolutely critical to the control of the disease. TB is a contagious, communicable disease that spreads to non-infected individuals when an infected patient coughs and expels aerosol droplets containing the TB microorganism into the surrounding environment [30]. It is important for TB patients to know the mode of transmission of TB, as this can influence behaviours such as cough etiquette and respiratory hygiene, as well as seeking early treatment, which is critical in preventing TB transmission [7,16,31]. In the current study, TB patients perceived TB as a communicable disease but had misconceptions about how the disease is transmitted. The findings resonate with those of Tolossa et al. (2014), who found that while 80% of participants knew TB was transmissible, 35.6% thought that sharing utensils with a TB patient was a route of transmission for the disease. In another study among a pastoral community in Ethiopia, participants felt that avoiding sharing utensils and sexual contact with TB patients would prevent disease transmission [7]. These misconceptions are likely to misinform the community about the control and preventive measures they ought to institute.
Conclusion
The study showed incorrect knowledge about TB among TB patients. Although the participants correctly perceived TB as a contagious disease, they did not understand its correct cause and mode of transmission. There is a need to improve patient knowledge and awareness of TB. The current study is limited in that we focused only on patients and not on the healthcare workers or the kind of patient education given at the TB clinics. Further studies to look into the kind of patient education given and its effectiveness in improving patients' TB knowledge are recommended.
What is known about this topic

Previous studies have focused on community knowledge of tuberculosis and shown poor knowledge among study participants; community members often have incorrect knowledge about the cause, transmission, and treatment of tuberculosis.
What this study adds
The current study focuses on TB knowledge and perceptions among tuberculosis patients who are already on treatment. These patients have been in contact with health workers and should have received TB education as required by the TB program; the study shows the patients still exhibit TB knowledge gaps and recommends heightened TB education by the TB program.
Association of Respiratory Syncytial Virus Toll-Like Receptor 3-Mediated Immune Response with COPD Exacerbation Frequency
The objective of the study is to explore the role of the respiratory syncytial virus Toll-like receptor 3 (TLR3)-mediated immune response in the pathogenesis of acute exacerbations of chronic obstructive pulmonary disease (AECOPD). A total of 20 AECOPD patients and 10 normal volunteers were studied. TLR3 was detected by RT-PCR, and respiratory syncytial virus (RSV) was detected by nested RT-PCR. Then, A549 cells were infected with RSV at different time points and at different viral titers. TLR3 mRNA was detected by RT-PCR, TLR3 and interferon regulatory factor 3 (IRF3) proteins were detected by western blot, and IRF3 protein localization was detected by immunofluorescence. Interferon-β (IFN-β) and interleukin-6 (IL-6) were detected by ELISA. A total of 4 (20%) of the 20 AECOPD patients sampled were infected with RSV. The forced expiratory volume in 1 second (FEV1) percentage was lower in the AECOPD patients infected with RSV compared to those not infected (P = 0.03). The expression of IL-6 was significantly higher in the RSV-infected group (P = 0.04). The AECOPD group (n = 20) showed an increase in TLR3 mRNA compared with the control group (n = 10) (P = 0.02). The RSV-infected AECOPD group (n = 4) showed a marked increase in TLR3 mRNA compared with the control group (P = 0.03). There was a significant correlation between the severity of reduction in lung function at exacerbation and the increased expression of TLR3 in AECOPD patients. The TLR3 signaling pathway was activated in lung epithelial cells. TLR3 mRNA/protein levels were increased in A549 cells infected with RSV compared with the control group. IRF3 protein also increased, accompanied by nuclear translocation, in RSV-infected A549 cells. IFN-β and IL-6 were also increased in the RSV-infected A549 cells compared with the control (P = 0.00 and 0.00, respectively). Increased TLR3 expression in AECOPD patients is associated with declining lung function. TLR3 may be a risk factor for RSV-infected AECOPD patients.
INTRODUCTION
Chronic obstructive pulmonary disease (COPD), a common preventable and treatable disease, is characterized by persistent airflow limitation that is usually progressive and associated with an enhanced chronic inflammatory response in the airways and lung to noxious particles or gases. Exacerbations and comorbidities contribute to the overall severity in individual patients. COPD is now one of the leading causes of mortality and morbidity worldwide and an important contributor to the global burden of disease [1]. Although there has been some progress in the diagnosis and treatment of the disease, there is no specific treatment for the damage to lung function or the systemic inflammation, and patient survival has not yet been extended.
Prior to the use of polymerase chain reaction (PCR)-based techniques for viral detection in acute exacerbation of chronic obstructive pulmonary disease (AECOPD) patient samples, approximately 50-70% of exacerbations were considered to be due to infection, 10% due to environmental agents, and 30% due to unknown etiologies. Isolated infectious agents were most often bacteria [2]. However, studies detecting viral infection using PCR methods have determined the incidence of virus-related AECOPD to be 56%; respiratory syncytial virus (RSV) infections make up a significant proportion of these [3-6]. COPD has been identified as an independent and significant risk factor for RSV infection that causes severe illness, hospitalization, and ICU admission [7]. Although the precise mechanisms of the onset of COPD exacerbations have not been fully clarified, the viral infection-mediated immune response is thought to play a role. To date, numerous studies have reported on Toll-like receptor 3 (TLR3) antiviral activity in in vivo and in vitro experiments, and related studies have shown that TLR3-mediated immune and inflammatory factors may play a pathogenic role in antiviral activities [8-14]. For example, in a study of TLR3-deficient mice infected with Theiler's murine encephalomyelitis virus (TMEV), it was suggested that TLR3 signaling may be either protective or pathogenic for the development of TMEV-induced demyelinating disease [9]. Furthermore, animal mortality was observed in other studies. TLR3-deficient mice appear to be more resistant to other infections compared with WT mice: they display enhanced resistance to influenza virus [10], Punta Toro virus [11], vaccinia virus [12], and West Nile virus (WNV) [13] infections. A weak inflammatory response in TLR3-deficient animals might contribute to the low disease severity in these mice. There have also been related studies in humans. For example, work on early herpes simplex virus-1 (HSV-1) infection suggests that human TLR3-dependent and interferon (IFN)-mediated immunity is essential for defense against HSV-1 in the central nervous system (CNS) during primary infection in childhood, but apparently otherwise largely redundant in host defense [14-16]. In studies of tick-borne encephalitis by Kindberg [17] and Andrey V [18], it was suggested that a functional TLR3 is a risk factor for tick-borne encephalitis virus (TBEV) infection.
The above studies show that TLR3 may be a risk factor in both human and animal experiments. Whether TLR3 is also a risk factor in AECOPD remains in question: Kinose D et al. conducted a prospective observational study showing that TLR3 gene expression in sputum samples was not a significant predictor of COPD exacerbation [19]. RSV is a main pathogen of COPD exacerbation; whether the RSV-TLR3-mediated immune response plays an important role in the pathogenesis of COPD exacerbation needs to be explored. In our experiments, we detected RSV in sputum samples from patients with AECOPD. We then detected TLR3 in sputum samples from patients in the control group and the RSV-infected AECOPD group. Other causes of AECOPD were considered, and inflammatory factors, clinical signs, and lung function in the two groups were analyzed. Finally, TLR3-mediated inflammatory cytokine signaling pathways were confirmed in lung epithelial cells.
Patient Selection
Patients with AECOPD and a group of normal controls were recruited from hospital clinics, outpatient clinics, and volunteers between November 2012 and March 2013 at the First People's Hospital of Zunyi, China. COPD was defined according to the Global Initiative for Chronic Obstructive Lung Disease (GOLD) guidelines. Exacerbation was defined as increased dyspnea, cough, or sputum expectoration (quality or quantity) that led the subject to seek medical attention [20]. A clinician saw patients within 24 h to confirm the diagnosis [20] via medical history and physical examination and to perform blood gas analysis and administer oxygen as required. After initial treatment with inhaled bronchodilators, when the clinical condition permitted, pulmonary function was assessed and peripheral blood and sputum samples were obtained. Only patients who had not received any antibiotics or systemic glucocorticoid therapy were enrolled. All patients at some stage of the study underwent high-resolution computed tomography (HRCT), except those with concomitant pneumonia, bronchiectasis, and/or tuberculosis. A clinician confirmed the control group via medical history and physical examination.
Data Collection
The following parameters were recorded on admission: age, sex, smoking habits, current medication, clinical signs and symptoms of respiratory infection, pulmonary function testing, HRCT for identification of pulmonary infiltrates, interleukin-6 (IL-6), procalcitonin (PCT), blood gas, and routine blood chemistry and counts. Within 24 h of admission, sputum was collected; patients producing little or no sputum underwent sputum induction. TLR3 was detected in the sputum of AECOPD patients and controls; RSV was detected in the sputum of AECOPD patients.
Induced Sputum and Sputum Processing
Sputum was induced and sputum processing was performed according to previously published protocols [3].
Cell Culture and RSV Infection
A549 and HEp-2 cells were cultured in DMEM (Hyclone) supplemented with 10% fetal bovine serum (FBS; Hyclone), 100 U/mL penicillin, and 25 mg/mL gentamicin and were incubated at 37°C and 5% CO2. RSV-infected A549 cells were maintained in DMEM supplemented with 2% FBS (maintenance medium) and were grown at 37°C and 5% CO2.
Preparation of RSV, Estimation of TCID50, and UV Inactivation

RSV was passaged in HEp-2 cells grown in maintenance medium at 37°C and 5% CO2. When the cytopathic effect reached 80-100%, the culture flasks were subjected to three freeze-thaw cycles, and the supernatant was spun at low speed to eliminate cellular debris. The supernatant was aliquoted and frozen at −80°C until use. As a control (maintenance medium instead of RSV), culture medium from uninfected HEp-2 cells was collected in the same way and used in subsequent experiments. The TCID50 was determined using HEp-2 cells. Serial 10-fold dilutions were made of RSV stocks, and 50-μL samples of each dilution were added to duplicate wells of a 96-well plate containing a confluent monolayer of HEp-2 cells. Cytopathological assessment was performed after 10 days. The dilution causing cytopathic effects in half the cultures (the median tissue culture infective dose, or TCID50) was then calculated as described by Reed and Muench (1938), and viral titers were expressed as TCID50 per unit volume of viral suspension [21]. UV inactivation (UV-RSV) was conducted in a Stratagene (Cedar Creek, TX) UV Stratalinker apparatus using 1800 mJ of UV radiation.
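As an aside for readers implementing the titration, the Reed-Muench endpoint calculation can be scripted. The following minimal Python sketch is our illustration, not the authors' code, and the well counts in the example are hypothetical; it computes log10 TCID50 from the number of CPE-positive wells at each 10-fold dilution:

def reed_muench_log10_tcid50(log_dilutions, infected, total):
    # log_dilutions: log10 of each dilution, most concentrated first, e.g. [-1, -2, -3, -4, -5]
    # infected/total: CPE-positive wells and wells inoculated at each dilution
    uninfected = [t - i for i, t in zip(infected, total)]
    n = len(infected)
    # Cumulate infected wells from the most dilute end upward and
    # uninfected wells from the most concentrated end downward
    cum_inf = [sum(infected[i:]) for i in range(n)]
    cum_uninf = [sum(uninfected[:i + 1]) for i in range(n)]
    pct = [100.0 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    for i in range(n - 1):
        if pct[i] >= 50.0 > pct[i + 1]:
            # Proportionate distance between the two dilutions bracketing 50% infection
            pd = (pct[i] - 50.0) / (pct[i] - pct[i + 1])
            return log_dilutions[i] + pd * (log_dilutions[i + 1] - log_dilutions[i])
    raise ValueError("No pair of dilutions brackets 50% infection")

# Hypothetical example: 8 wells per dilution of a 10-fold series
print(reed_muench_log10_tcid50([-1, -2, -3, -4, -5], [8, 8, 6, 2, 0], [8] * 5))  # -3.5

With these counts the endpoint falls midway between the 10^-3 and 10^-4 dilutions, so the stock titer would be 10^3.5 TCID50 per inoculum volume.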
RNA Extraction, Reverse Transcription, Real-Time PCR, and Semiquantitative RT-PCR

RNA extraction from sputum and cells was performed using a standard extraction kit (Qiagen RNeasy Mini Kit). TLR3 complementary DNA (cDNA) was prepared from sputum and real-time PCR was performed using the PrimeScript RT Reagent Kit (real time) and SYBR Premix Ex Taq II (real time), respectively. Quantitative PCR reactions were run on a LightCycler real-time PCR system at 95°C for 30 s, followed by 40 cycles of 95°C for 5 s and 57°C for 30 s. The melting program was 55°C for 5 s, followed by 95°C for 0.5 s. Table 1 shows all primer sequences. Levels of mRNA for each factor were normalized to β-actin and determined from the Ct values using the 2^−ΔΔCt formula. RSV cDNA was prepared and nested PCR was performed from sputum using the PrimeScript RT-PCR Kit. PCR reactions were run on a PCR system (BIO-RAD) at 94°C for 30 s, followed by 30 cycles at 58 and 72°C for 30 s. In the nested PCR step, 2 μL of the initial reaction product was added to a 50-μL reaction mixture containing the same components as the first PCR step. Amplified PCR products were detected by electrophoresis on GoldView I-stained 2% agarose gels and photographed under UV illumination. RNA isolation and RT-PCR analysis were carried out as described by Rohde [3].
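To make the normalization explicit, the 2^−ΔΔCt method first normalizes the target gene to the reference gene within each sample (ΔCt = Ct_target − Ct_β-actin), then compares the sample of interest with the control condition (ΔΔCt = ΔCt_sample − ΔCt_control), and reports the fold change as 2^−ΔΔCt. A one-function Python sketch, using hypothetical Ct values purely for illustration:

def fold_change_2_ddct(ct_target_sample, ct_actin_sample, ct_target_control, ct_actin_control):
    # Normalize the target gene (e.g. TLR3) to beta-actin in each sample,
    # then compare the sample of interest with the control condition
    ddct = (ct_target_sample - ct_actin_sample) - (ct_target_control - ct_actin_control)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: TLR3 vs beta-actin in an AECOPD sample and a control sample
print(fold_change_2_ddct(24.1, 18.0, 27.3, 18.2))  # 8.0-fold relative increase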
TLR3 cDNA was prepared from cells and PCR was performed using the PrimeScript RT-PCR Kit. PCR reactions were run on a PCR system (BIO-RAD) at 94°C for 30 s, followed by 30 cycles at 57 and 72°C for 30 s. Amplified PCR products were detected by electrophoresis on GoldView I-stained 2% agarose gels and photographed under UV illumination. A DNA size marker ladder (MW 50, 100, 150, 200, 300, 400, and 500 bp; Sangon Corp, Shanghai, China) was also used. The density of the bands was quantitated with the Labworks imaging densitometer software. Densitometry was expressed as fold increase (experimental value/β-actin value and experimental value/control value from three independent experiments).
Western Blot Analysis of IRF-3 and TLR3
Cells were first washed in phosphate-buffered saline (PBS) and lysed in RIPA lysis buffer (Beyotime, China). The samples were left on ice for 30 min and centrifuged at 14,000g for 5 min; the supernatant containing total extracts was collected and assayed for TLR3 and IRF3 protein.
Protein concentrations in lysates were determined using the BCA protein assay kit (Solarbio, China). A total of 20 μL of each sample, containing 50 μg of protein, was run on an 8% SDS tris-glycine-polyacrylamide gel and transferred to a PVDF membrane (Solarbio). The membrane was treated with blocking buffer for 12 h at 4°C, followed by incubation with rabbit IgG anti-IRF3 and goat IgG anti-TLR3 at a 1:200 dilution in TBS containing 5% fat-free milk overnight at 4°C. Subsequently, the membrane was incubated in a 1:2000 dilution of biotin-labeled goat anti-mouse IgG, biotin-labeled donkey anti-goat IgG, or biotin-labeled goat anti-rabbit IgG for 2 h at room temperature (RT). The membrane was washed three times and scanned with an Odyssey (BIO-RAD) infrared imaging system, and densitometry of individual bands was performed with the Odyssey (BIO-RAD) imaging software. Densitometry was expressed as fold increase of experimental conditions compared with control.
Immunofluorescent Staining for IRF3
Cells grown on cover slips were fixed for 20 min with 4% fixative and permeabilized in PBS containing 0.2% Triton X-100 for 20 min at RT, followed by blocking with PBS containing 2% goat serum for 1 h at RT. Endogenous IRF3 was detected using a 1:50 dilution of SC-9082, followed by a 1:200 dilution of an FITC-conjugated goat anti-rabbit IgG secondary antibody (Solarbio). DAPI (4′,6-diamidino-2-phenylindole) was used as a nuclear counterstain. Samples were analyzed with a Nikon E80i epifluorescence microscope.
ELISA for IFN-β and IL-6
IFN-β and IL-6 levels were determined using a standard ELISA kit (BLKW Biotechnology, China).
Statistical Analysis
Baseline recruitment data are presented as medians (range). The remaining data are presented as the mean ± SE. Comparisons of two groups were made using the t test. Comparisons of continuous variables among subgroups and multiple variables were made using analysis of variance (ANOVA). Correlation coefficients were calculated using the Pearson method. Significance was determined with SPSS 19.0 statistical analysis software (Chicago, IL). A P value of < 0.05 was considered statistically significant.
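For readers reproducing the analysis outside SPSS, the same three tests map directly onto standard library calls. The Python sketch below uses SciPy with simulated placeholder data (the group means and sizes echo the paper's reporting, but the values are not the study data):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(18.4, 12.7, size=10)   # placeholder sputum TLR3 levels
aecopd = rng.normal(36.2, 26.2, size=20)
rsv_pos = rng.normal(48.7, 27.6, size=4)
fev1 = rng.normal(56.0, 15.0, size=20)      # placeholder FEV1% predicted

t, p_t = stats.ttest_ind(aecopd, control)           # two-group comparison (t test)
f, p_f = stats.f_oneway(control, aecopd, rsv_pos)   # multi-group comparison (one-way ANOVA)
r, p_r = stats.pearsonr(aecopd, fev1)               # Pearson correlation

print(f"t test P={p_t:.3f}; ANOVA P={p_f:.3f}; Pearson r={r:.2f}, P={p_r:.3f}")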
Recruit Characteristics
A total of 40 AECOPD patients were enrolled in the study, 20 (50%) of whom could not enter the final analysis as a result of concurrent conditions (pneumonia, n = 15; bronchiectasis, n = 3; tuberculosis, n = 4). Normal volunteers (n = 10) were recruited from outpatient clinics at the First People's Hospital of Zunyi, China. Characteristics of the 30 participants are summarized in Table 2. There were no differences in baseline characteristics in terms of sex distribution, current smoking status, or age between the control and AECOPD groups.
Detection of Respiratory Viruses and Correlation Between Virus Detection and Clinical Characteristics
Four of the 20 AECOPD patients sampled had detectable RSV (Fig. 1); RSV was detected in 20% of all sputum samples collected. Sequencing of RSV-positive PCR samples was confirmed as described by Rohde [3]. RSV was not detected in the control group. Table 3 shows the basic characteristics of the 4 AECOPD patients with detectable RSV and the 16 AECOPD patients without detectable RSV during the study. The AECOPD group with detectable RSV (n = 4) showed a decline in FEV1% predicted (37.43 ± 9.89) and FEV1/forced vital capacity (FVC) (37.43 ± 9.89) compared with the FEV1% predicted (56.05 ± 15.1) and FEV1/FVC (63.94 ± 3.68) of the AECOPD group in which RSV was not detected (n = 16). The AECOPD group with detectable RSV (n = 4) showed an increase in IL-6 (26.26 ± 20.28) compared with the AECOPD group without RSV (n = 16; 9.43 ± 11.53). The differences in FEV1% predicted, FEV1/FVC, and IL-6 between these two groups were significant (P < 0.05; Table 3). There were no differences in basic characteristics in terms of sex distribution, age, FVC, FEV1, PO2, PCO2, WBC, N%, or PCT. The AECOPD group with RSV (n = 4) showed an increase in TLR3 mRNA (48.66 ± 27.64) compared with the control group (n = 10; 18.35 ± 12.74), and the difference was significant (P < 0.05; Fig. 2a). The AECOPD group without RSV (n = 16) also showed an increase in TLR3 mRNA (36.24 ± 26.21) compared with the control group (n = 10; 18.35 ± 12.74), and this difference was also significant (P < 0.05; Fig. 2b). The AECOPD group with RSV (n = 4) additionally showed an increase in TLR3 mRNA (48.66 ± 27.64) compared with the AECOPD group without RSV (n = 16; 36.24 ± 26.21), but this difference was not significant (P > 0.05; Fig. 2b).
Increased Severity of AECOPD Associated with TLR3
Sputum TLR3 mRNA was markedly increased during exacerbations (Fig. 2a) in the 20 AECOPD patients. A significant relationship between TLR3 and exacerbation severity was demonstrated by a significant correlation between the severity of the reduction in lung function (% of predicted FEV1) at exacerbation and the increase in sputum TLR3 (Fig. 3; r = 0.482; P = 0.031).
Increased TLR3 Expression After RSV Infection in A549 Cells
The ability of A549 cells to express the TLR3 gene was determined by semiquantitative RT-PCR analysis of total cellular RNA. Total RNA isolated from A549 cells was reverse transcribed and amplified with the specific primers described above. A549 cells were incubated with medium or titrated HEp-2 cell tissue culture supernatant containing various amounts of infectious RSV particles (UV-inactivated 10³ TCID50, 10² TCID50, 5 × 10² TCID50, and 10³ TCID50) and were cultured for 36 h. The culture medium of uninfected HEp-2 cells was used as a control. Figure 4a shows expression of the TLR3 gene in an RSV-dose-dependent manner. The most marked expression of TLR3 was observed with 10³ TCID50 of infectious RSV particles compared with the control in this study. Compared with the control group, there was no significant increase in TLR3 mRNA expression in the UV-RSV group. A549 cells were also cultured for 6, 12, 18, 24, and 36 h in the presence of 10³ TCID50 of infectious RSV particles, as well as for 36 h in the presence of medium or 10³ TCID50 of UV-inactivated infectious RSV particles. Figure 4b shows expression of the TLR3 gene in an RSV time-dependent manner. The most marked expression of TLR3 was observed after 36 h of incubation compared with the control. Compared with the control group, there was no significant increase in TLR3 gene expression in the UV-RSV group.
TLR3 protein level was determined by western blot analysis performed as described above. Equal protein loading was confirmed by examining β-actin protein expression. The current study shows that RSV infection increases TLR3 protein expression in A549 cells in a time- and RSV-dose-dependent manner (Fig. 4c, d). The most marked expression of TLR3 protein was observed after 36 h in the presence of 10³ TCID50 of infectious RSV particles compared with the control. Notably, TLR3 protein expression at 12 h was already significantly higher than TLR3 gene expression at 6 h. Compared with the control group, there was no significant increase in TLR3 protein expression in the UV-RSV group.
Increased IRF3 Expression and Nuclear Translocation After RSV Infection in A549 Cells
IRF3 protein level was determined by western blot analysis using the method described above. Equal protein loading was confirmed by examining β-actin protein expression. A549 cells were incubated with medium or RSV at the doses indicated. Fig. 5a shows the expression of IRF3 protein in an RSV-dose-dependent manner. The most marked expression of the protein was again observed with 10³ TCID50 of infectious RSV particles compared with the control. Compared with the control group, there was no significant increase in IRF3 protein expression in the UV-RSV group. A549 cells were also cultured for 6, 12, 18, 24, and 36 h in the presence of 10³ TCID50 of infectious RSV particles, and for 36 h in the presence of medium or 10³ TCID50 of UV-inactivated infectious RSV particles. Figure 5b shows the expression of IRF3 in an RSV time-dependent manner. As was observed for TLR3 protein, the most marked expression of IRF3 occurred after 36 h of incubation, and expression was already evident after 12 h of incubation compared with the control. Compared with the control group, there was no significant increase in IRF3 protein expression in the UV-RSV group. Next, immunofluorescence was carried out to investigate the subcellular localization of IRF3 (Fig. 5c). A549 cells were cultured for 36 h in the presence of medium, 10³ TCID50 of UV-RSV, or 10³ TCID50 of infectious RSV particles. IRF3 was localized exclusively to the cytoplasm in control cells; RSV infection induced nuclear translocation of IRF3 in A549 cells.

[Fig. 2 (caption): TLR3 expression increased in sputum of AECOPD patients. a, TLR3 mRNA in AECOPD (n = 20) versus control (n = 10); b, TLR3 mRNA in control, RSV-negative AECOPD (n = 16), and RSV-positive AECOPD (n = 4); real-time PCR, mean ± SE; *P < 0.05, △P > 0.05.]
[Fig. 4 (caption): Increased TLR3 mRNA/protein expression after RSV infection in A549 cells. a, c, RSV-dose-dependent and b, d, time-dependent TLR3 mRNA (RT-PCR) and protein (western blot); mean ± SE of three experiments, quantified with Labworks software; *P < 0.05, △P > 0.05 versus control.]
[Fig. 5 (caption): Increased IRF3 expression and nuclear translocation after RSV infection in A549 cells. a, dose-dependent and b, time-dependent IRF3 protein (western blot; *P < 0.05, △P > 0.05 versus control); c, nuclear translocation of IRF3 at 36 h by immunofluorescence (IRF3 stained, DAPI nuclear counterstain).]
RSV Induced IFN-β and IL-6 Protein Expression in A549 Cells
A549 cells were stimulated with medium, 10³ TCID50 UV-RSV, or 10³ TCID50 of infectious RSV particles, and IL-6 and IFN-β concentrations in cell culture supernatants were measured by ELISA 36 h post infection. The RSV group showed an increase in IFN-β (4.74 ± 0.56) compared with the control group (3.40 ± 0.29); this difference was significant (P < 0.05; Fig. 6a). Similarly, the RSV group showed an increase in IL-6 (59.65 ± 1.64) compared with the control (19.87 ± 0.88); this difference was also significant (P < 0.05; Fig. 6b). In both experiments, the UV-RSV group did not show an increase in IFN-β or IL-6 compared with the control.
DISCUSSION
This is the first study to investigate the role that TLR3 may play in the etiology and progression of AECOPD. We show that TLR3 mRNA can be detected in the sputum of many patients with AECOPD and that its level may be associated with a decline in lung function.
In this study, 4 of the 20 (20%) AECOPD patients sampled had RSV detected in their sputum. A lower incidence of RSV (10.5%) was observed in the sputum of AECOPD patients by Rohde [3], while higher incidences (32.8 and 28%) were observed in AECOPD patients by Tom [36] and Borg [37], respectively. Variation in the incidence of RSV infection among the different studies is likely attributable to differences in study populations, seasonal and regional variation, sample acquisition and type, and PCR assay systems.
RSV is an established cause of acute respiratory illness in children, and RSV bronchiolitis is associated with the development of persistent wheeze in later childhood [38]. Tom [36] showed that RSV detection was associated with a decline in FEV1% predicted and heightened airway inflammation in terms of increased levels of IL-6 and IL-8. In the same study, Tom also showed that RSV infection may persist in certain populations. We show that RSV detection is associated with a decline in FEV1% predicted (Table 2) and higher levels of the airway inflammation marker IL-6 (Table 2). These studies suggest that RSV may play a role in the pathogenesis of airway inflammation and the subsequent deterioration of lung function in COPD. RSV may have proinflammatory effects; it is also possible that it acts by modulating the response of lung cells to other inflammatory stimuli, including bacterial lipopolysaccharide [39], or by promoting neutrophil adhesion, thereby augmenting lung damage [40].

[Fig. 6 (caption): RSV induced IFN-β and IL-6 protein expression in A549 cells. A549 cells were stimulated with medium, 10³ TCID50 UV-RSV, or 10³ TCID50 of infectious RSV particles, and IFN-β (a) and IL-6 (b) in culture supernatants at 36 h post infection were measured by ELISA; mean ± SE of three experiments; *P < 0.05, △P > 0.05 versus control.]
Currently, TLR3 research focuses on its antiviral activity, and both human and animal studies suggest that TLR3 may be a risk factor in viral infection. Studies detecting viral infection using PCR-based methods have determined the incidence of virus-related AECOPD to be 56%, which also motivated our study of TLR3 in AECOPD. Our data show higher levels of TLR3 mRNA in sputum samples of patients with AECOPD than in those of controls by real-time PCR (Fig. 2a; P < 0.05). However, no difference was observed between the RSV-positive and RSV-negative AECOPD groups with regard to levels of TLR3 mRNA in sputum samples (Fig. 2b; P > 0.05). This may be due to the presence of other viral or bacterial infections in AECOPD patients; after all, RSV may contribute to only a small portion of the etiology of AECOPD.
We found a significant relationship between TLR3 and exacerbation severity, demonstrated by a significant correlation between severity of reduction in lung function (FEV1% predicted) at exacerbation and increase in sputum TLR3 (Fig. 3; r = 0.482, P = 0.031). However, we did not observe TLR3 to be associated with IL-6, PCT, WBC, N%, PO2, PCO2, FVC, or FEV1 in AECOPD subjects. This may be unexpected, but it might prompt further clinical studies, as in the case of tick-borne encephalitis: TLR3 may be a risk factor in acute exacerbation of COPD once challenged by viruses.
TLR3 consists of an extracellular leucine-rich repeat (LRR) motif, a transmembrane (TM) domain, and an intracellular Toll/IL-1R (TIR) domain [41]. TLR3 signal transduction depends on these three domains: the leucine-rich repeat is responsible for recognizing PAMPs, while the TM and intracellular TIR domains are responsible for downstream transduction of the activation signal [41]. The TLR3 signaling pathway is mediated exclusively by the TRIF adapter [42], which is recruited to TLR3 by interaction between the TIR domains of the two molecules. The various branches of the signaling pathway emanating from TLR3-TRIF lead to the activation of IRF3 and NF-κB [43]. Together, these pathways induce the production of antiviral IFNs and other cytokines [44]. We sought to determine whether the TLR3-mediated immune response also works via this pathway in lung epithelial cells; thus, we conducted an experimental study in lung epithelial cells.
In this study, we demonstrate that RSV increases the expression of TLR3 on the surface of airway epithelial cells (Fig. 4). We then observed increased IRF3 expression and nuclear translocation after RSV infection (Fig. 5). Activation of the IRF3 pathway results in the expression of type I interferons, including IFN-α and IFN-β; accordingly, the RSV infection group showed an increase in IFN-β compared with the control and UV-RSV groups by ELISA (Fig. 6a). Activation of NF-κB pathways results in the expression of various inflammatory mediators, including the cytokines TNF-α and IL-6 and the chemokine IL-8. A TLR3-NF-κB pathway in airway epithelial cells was detected in the study of Dayna et al. [45], in which increased IL-8 mRNA and protein was accompanied by increased NF-κB nuclear localization. We also detected the NF-κB-related inflammatory cytokine IL-6, and our data showed that IL-6 protein increased after RSV infection of airway epithelial cells. These findings demonstrate that RSV induces increased TLR3, IRF3, and NF-κB in airway epithelial cells, priming them for an enhanced inflammatory response alongside their antiviral response. These observations suggest that TLR3 might be an important target for therapy in RSV infection.
In conclusion, we have shown that TLR3 RNA can be detected in lower airway samples of patients with AECOPD. This is the first detection of TLR3 RNA in the sputum of AECOPD patients. It remains unclear what role TLR3 plays in the pathogenesis of AECOPD. Our data showed that TLR3 RNA detection was associated with FEV1% predicted in these AECOPD patients. The results of this study suggest that TLR3 may act as a risk factor in AECOPD patients, possibly due to viral and bacterial infection-induced TLR3 activation. TLR3 also enhances the inflammatory response while in an antiviral state, thereby augmenting lung damage. TLR3 was not associated with inflammatory markers (including IL-6, PCT, WBC, and N%) in our study. The discrepancy from findings of previous studies may be due to differences in detection methods, seasons, time of sample collection, and inflammatory markers not measured here (e.g., IL-8, TNF). At the same time, we did not carry out research or analysis in patients with stable-state COPD. Therefore, further studies with more clinical trials, more sophisticated designs, and more patients and controls are needed.
A Comparative Analysis of Novel Deep Learning and Ensemble Learning Models to Predict the Allergenicity of Food Proteins
Traditional food allergen identification mainly relies on in vivo and in vitro experiments, which are often time-consuming and costly. Artificial intelligence (AI)-driven rapid food allergen identification overcomes these drawbacks and is becoming an efficient auxiliary tool. Aiming to overcome the limited accuracy of traditional machine learning models in predicting the allergenicity of food proteins, this work introduces a deep learning model (a transformer with a self-attention mechanism) and ensemble learning models (represented by Light Gradient Boosting Machine (LightGBM) and eXtreme Gradient Boosting (XGBoost)) to address the problem. To highlight the superiority of the proposed method, the study also selected several commonly used machine learning models as baseline classifiers. The results of 5-fold cross-validation showed that the area under the receiver operating characteristic curve (AUC) of the deep model was the highest (0.9578), better than the ensemble learning and baseline algorithms. However, the deep model needs to be pre-trained, and its training time is the longest. By comparing the characteristics of the transformer model and the boosting models, it can be seen that each model has its own advantages, which provides novel clues and inspiration for the rapid prediction of food allergens in the future.
Introduction
Food allergy refers to inflammation of the human body caused by a specific immune response to certain food proteins through ingestion, inhalation, or skin contact. It is one of the allergic diseases. In recent years, attention to food allergy has been increasing because it can cause a series of complications [1]. For example, the most common extra-intestinal manifestations of food allergies are angioedema, various skin rashes, and eczema. Food allergy can also cause rhinitis, conjunctivitis, recurrent oral ulcers, bronchial asthma, allergic purpura, arrhythmia, headache, and dizziness, and can even lead to the systemic reactions of anaphylactic shock. Meanwhile, the increasing prevalence of food allergies and the significant positive correlation between food allergies and respiratory disease are becoming major threats to human health [2,3]. Studies reveal that the occurrence rate of respiratory diseases in patients with food allergies is significantly higher than in patients without food allergies [4]. Food allergies are mainly induced by food allergens, which are food antigen molecules that can trigger immune responses in the human body; almost all food allergens are proteins. Prediction methods based on machine learning algorithms have been reported recently, which greatly improve efficiency and facilitate high-throughput prediction [16-18]. The efficiency of such methods is much higher than that of in vivo and in vitro experiments. Furthermore, the accuracy of the predictions keeps improving with the refinement and optimization of the models. DNNs are becoming the mainstream tool for food allergen prediction.
Bidirectional Encoder Representation from Transformers (BERT) is mainly used for natural language processing (NLP) and is currently rarely applied to peptide or protein function prediction [19]. We found that it can extract high-dimensional features from peptide sequences, making it a novel prediction method. The Convolutional Recurrent Neural Network (CRNN) is used for end-to-end recognition of text sequences of indefinite length. Instead of first cutting out individual characters, it converts text recognition into a sequence-dependent learning problem. CRNN has also been reported to play a role in protein function prediction [20]. Novel ensemble learning models are becoming one of the mainstream approaches to improving machine learning performance, showing superior results compared to traditional classifiers in text classification [21], disease diagnosis [22], and other fields, yet their application to peptide sequence classification remains rare. All three methods above provide novel ideas for improving the accuracy and performance of machine learning algorithms in the allergenicity prediction of food proteins.
In this paper, after obtaining the PseAAC features of each protein sequence (an efficient protein sequence feature representation method), we introduce BERT, a novel pre-training model from the field of natural language processing, into the allergenicity prediction of food allergens. An independent attention mechanism is adopted in each layer, so compared with traditional Recurrent Neural Networks (RNN), our network can capture longer-distance dependencies more efficiently. Additionally, to compare the characteristics of the deep learning model and ensemble learning models on this task, two novel ensemble learning models (LightGBM and XGBoost) were employed with 5-fold cross-validation. The results showed that, for the dataset in this work, the introduced ensemble learning models (LightGBM, XGBoost) were better than the baseline classifiers but did not perform as well as deep learning. However, the convenience brought by their short training time makes them suitable for certain environments. The novel self-attention mechanism of BERT, with its superior performance, has great potential for larger-scale data training in the near future.
Materials and Methods
The whole method of this work is shown in Figure 1.
Construction of Datasets
The food allergen datasets adopted in this study come from three public databases: Allergen Nomenclature (http://www.allergen.org/index.php, accessed on 9 October 2020), the Structural Database of Allergenic Proteins (SDAP) (http://fermi.utmb.edu/SDAP/, accessed on 21 October 2020), and NCBI (https://www.ncbi.nlm.nih.gov/, accessed on 3 November 2020). This research gathered 583 food allergens officially reported to be allergenic, with their corresponding protein sequences, as positive samples, and 600 food proteins (not reported as allergens) with their corresponding sequences as negative samples. The dataset was rigorously screened, and there is no duplication between positive and negative samples.
Representation of Sequences of Food Allergens
PseAAC was first proposed by Chou [23], and it is one of the classic protein sequence feature representation methods. The type II PseAAC of a protein can be expressed as a 20 + iλ-dimensional feature vector, where the first 20 dimensions reflect the frequency distribution of each amino acid in the protein, i represents the number of amino acid properties used when generating PseAAC (hydrophilicity, hydrophobicity, etc.), and λ represents the sequence correlation factor. Therefore, PseAAC simultaneously contains the amino acid composition and sequence information as well as the interaction information between them. In this research, we considered six properties (hydrophobicity, hydrophilicity, mass, pK1 (α-CO2H), pK2 (NH3), and pI (at 25 °C)); i was set to 6, λ was set to 5, and the weight factor ω was set to 0.05. As a result, the fixed dimension of the PseAAC feature vector input to the machine learning models (except for BERT, because it comes with its own dictionary) was 50.
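To make this encoding concrete, below is a minimal sketch of a type II PseAAC-style encoder, assuming standardized property scales and Chou's usual normalization with weight factor ω. The two property scales in the example are placeholders; a real application would plug in the six published property tables (hydrophobicity, hydrophilicity, mass, pK1, pK2, pI) used in this work, which yields the 50-dimensional vector described above.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def pseaac_type2(sequence, property_scales, lam=5, omega=0.05):
    """Type II pseudo-amino acid composition.

    Returns a (20 + n_props * lam)-dimensional vector: the first 20 entries
    are amino acid frequencies; the rest are sequence-order correlation
    factors for each property at lags 1..lam.
    """
    seq = [aa for aa in sequence.upper() if aa in AMINO_ACIDS]
    L = len(seq)
    if L <= lam:
        raise ValueError("sequence must be longer than lambda")

    # First 20 dimensions: amino acid frequencies.
    freqs = np.array([seq.count(aa) for aa in AMINO_ACIDS], dtype=float) / L

    # Standardize each property scale to zero mean / unit variance.
    scales = []
    for scale in property_scales:
        vals = np.array([scale[aa] for aa in AMINO_ACIDS])
        scales.append(dict(zip(AMINO_ACIDS, (vals - vals.mean()) / vals.std())))

    # Sequence-order correlation factors: one per (property, lag) pair.
    taus = []
    for scale in scales:
        h = np.array([scale[aa] for aa in seq])
        for lag in range(1, lam + 1):
            taus.append(np.mean(h[:-lag] * h[lag:]))
    taus = np.array(taus)

    # Chou's normalization balances composition vs. correlation terms.
    denom = freqs.sum() + omega * taus.sum()
    return np.concatenate([freqs / denom, omega * taus / denom])

# Example with two toy property scales (placeholders, not published values).
toy_hydrophobicity = {aa: float(i) for i, aa in enumerate(AMINO_ACIDS)}
toy_mass = {aa: float(i % 7) for i, aa in enumerate(AMINO_ACIDS)}
vec = pseaac_type2("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
                   [toy_hydrophobicity, toy_mass], lam=5)
print(vec.shape)  # (30,) here; with 6 properties and lam=5 it would be (50,)
```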
Artificial Intelligence Models
This section mainly introduces the artificial intelligence models adopted in this work. The focus is on the deep model (the BERT algorithm) and the novel boosting model (LightGBM), highlighting their superior mechanisms.
Deep Learning Model
BERT is a self-supervised method for pre-training deep transformer encoders, which can be fine-tuned for different downstream tasks after pre-training. BERT is optimized for two training objectives, masked language modeling (MLM) and next sentence prediction (NSP), and only large unlabeled datasets are needed for its training. As a novel deep learning model, BERT is commonly used in the field of NLP and has rarely been applied to food allergen prediction.
The architecture of BERT is a multi-layer transformer structure. A transformer is an encoder-decoder structure formed by stacking several encoders and decoders. The encoder consists of Multi-Head Attention and a feedforward neural network and is used to convert the input protein sequence into a feature vector (Figure 2). The input of the decoder is the output of the encoder and the predicted result; it is composed of Masked Multi-Head Attention and a feedforward neural network and outputs the conditional probability of the final result (Figure 2). The highlight of BERT is the use of Multi-Head Attention, which divides a word vector into N dimensions. Since the allergen sequence is mapped into a high-dimensional space in the form of multi-dimensional vectors, the Multi-Head Attention mechanism enables the model to learn different characteristics of each dimension. The information learned from adjacent spaces is similar, which is more reasonable than mapping the entire space as a whole.
In this study, we employed the pre-trained model protBERT (trained specifically on protein sequences) [24], which transfers a large number of operations otherwise deployed in specific downstream NLP tasks into the pre-trained word vectors. After obtaining the word vectors used by BERT, a multi-layer perceptron (MLP) was added on top of them. In this experiment, each amino acid character was separated by a space, and the amino acid sequence was cut so that the chain formed a string of fixed length, which served as the basic input.
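As an illustration of this setup, the sketch below loads a protein BERT encoder with the HuggingFace transformers library and attaches a small MLP head, feeding space-separated amino acid strings as described above. The checkpoint name, head width, and dropout rate are illustrative assumptions, not the exact configuration used in this work.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

# Assumed checkpoint; the paper used a protBERT model pre-trained on proteins.
CHECKPOINT = "Rostlab/prot_bert"

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT, do_lower_case=False)
encoder = AutoModel.from_pretrained(CHECKPOINT)

class AllergenClassifier(nn.Module):
    """Pre-trained BERT encoder plus an MLP head for binary allergenicity."""
    def __init__(self, encoder, hidden=256):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Sequential(
            nn.Linear(encoder.config.hidden_size, hidden),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden, 2),  # allergen vs. non-allergen
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # embedding at the [CLS] position
        return self.head(cls)

model = AllergenClassifier(encoder)

# Amino acids are separated by spaces so the tokenizer sees per-residue tokens.
seqs = ["M K T A Y I A K Q R", "G A V L I P F M W S"]
batch = tokenizer(seqs, padding=True, truncation=True, max_length=512,
                  return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
probs = torch.softmax(logits, dim=-1)  # per-sequence allergenicity probability
```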
Ensemble Learning Models
Light Gradient Boosting Machine (LightGBM)
LightGBM was proposed by Microsoft in 2017. It is a novel Gradient Boosting Decision Tree (GBDT) algorithm framework. It has shown excellent results in economic forecasting, disease diagnosis, and other fields [25,26], but little has been reported so far about its application to food allergen prediction. To solve the time-consuming problem of traditional GBDT when the training dataset is large and complicated, LightGBM uses two methods that further improve the accuracy of the model. The first is GOSS (Gradient-based One-Side Sampling). Rather than using all sample points to calculate the gradient, GOSS samples the data: it excludes most of the samples with small gradients and employs only the remaining samples in the calculation. Although GBDT does not use data weights, each data instance has a different gradient. According to the definition of information gain, instances with large gradients have a greater impact on information gain. Therefore, when downsampling, samples with large gradients should be kept as much as possible (screened with a predefined threshold or the highest percentiles), and samples with small gradients should be randomly removed. Experiments show that this measure yields more accurate results than random sampling at the same sampling rate, especially when the range of information gain is large.
The second is EFB (Exclusive Feature Bundling). Instead of scanning all features to obtain the best split point, some features are bundled together to reduce the feature dimension. A histogram algorithm is employed in LightGBM. The basic idea is to discretize continuous feature values into k integers and construct a histogram of width k. When traversing the data, the discretized value is used as the index to accumulate statistics in the histogram. After traversing the data once, the histogram contains the required statistics; then, according to the discrete values of the histogram, an optimal split point can be found by traversing the data again (Figure 3). This mechanism reduces memory usage and speeds up model training.
Furthermore, LightGBM adopts a leaf-wise strategy to construct the tree models: each time, the leaf with the largest split gain among all current leaves is chosen to split, and the process is repeated. Compared with the traditional level-wise strategy, this strategy can reduce more error and achieve better accuracy with the same number of splits. Meanwhile, the parameter max depth is introduced to limit the depth of the tree and avoid overfitting, as shown in Figure 4.
Extreme Gradient Boosting (XGBoost)
XGBoost is one of the boosting algorithms. It employs the sum of the predicted values of each tree over the K trees (that is, the sum of the scores of the corresponding leaf nodes of each tree) as the prediction. A new function f is added to the prediction in each iteration to minimize the objective function. As a novel ensemble learning algorithm, XGBoost currently delivers great results and is widely used in disease detection and other fields [27], but there has been no report of its application to the allergenicity prediction of food proteins.
Random Forest (RF)
Random forest is a typical Bagging ensemble learning model. It combines multiple weak classifiers and adopts a voting method to make the final decision, therefore achieving higher accuracy and generalization. Random forest has been used in allergen prediction research and is a representative of traditional ensemble learning in this field [28] (a training sketch for all three ensemble models follows below).
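For orientation, here is a minimal sketch of training the three ensemble classifiers on PseAAC-style feature vectors through their scikit-learn-compatible APIs; the random data and hyperparameter values are placeholders, not the tuned values reported in Table 1.

```python
import numpy as np
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# X: (n_samples, 50) PseAAC feature matrix; y: 1 = allergen, 0 = non-allergen.
rng = np.random.default_rng(0)
X = rng.normal(size=(1183, 50))        # stand-in for the 583 + 600 sequences
y = rng.integers(0, 2, size=1183)

models = {
    # Leaf-wise growth; GOSS/EFB make LightGBM fast on tabular features.
    "LightGBM": LGBMClassifier(n_estimators=200, max_depth=6, learning_rate=0.1),
    "XGBoost": XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1,
                             eval_metric="logloss"),
    "RandomForest": RandomForestClassifier(n_estimators=300),
}

for name, clf in models.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.4f} +/- {auc.std():.4f}")
```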
Previous Machine Learning Models
In order to compare against the novel deep learning model and ensemble learning models proposed in this paper, we adopted three baseline machine learning algorithms (SVM, K-NN, and Naive Bayes (NB)), which are often employed in previous similar studies [13,18,29]. SVM is a supervised learning algorithm that solves binary or multi-class classification problems; after introducing the kernel method, it can also solve nonlinear problems. In this work, an SVM with a non-linear kernel was adopted. The principle of K-NN is relatively simple: the classifier calculates the distance between the feature values of the training data and the new data, then selects the K (K ≥ 1) closest neighbors for classification or regression. NB performs well on small-scale data. It is usually applied in multi-classification tasks because it is suitable for incremental training and has low sensitivity.
Performance Evaluation of Models
In this study, accuracy (Acc), recall, precision (Prec), F1 score (defined as follows), and the area under the receiver operating characteristic curve (ROC and AUC) were selected to evaluate the performance of the models. It should be noted that the classification threshold for the above indicators was uniformly set to 0.5.
Acc = (TP + TN) / (TP + TN + FP + FN)
Recall = TP / (TP + FN)
Prec = TP / (TP + FP)
F1 = 2 × Prec × Recall / (Prec + Recall)
where TN is the true negative number, TP is the true positive number, FN is the false negative number, and FP is the false positive number.
Experimental Set Up
Calculations for the ensemble and baseline models were performed on a Windows 10 system configured with an Intel Core i7-6700HQ CPU, 3.5 GHz, and 4 GB of memory. The experiments for the deep model were performed on separate equipment with greater capability, and the training process was powered by an NVIDIA Tesla T4 GPU, accelerated by CUDA. The NVIDIA T4 is a general-purpose deep learning accelerator widely used in distributed computing environments. The programming language used was Python 3.0, and PyTorch was chosen as the deep learning framework. In this study, each model was trained separately (BERT had been pre-trained), and the GridSearchCV interface in the scikit-learn third-party library was adopted for parameter optimization. Five-fold cross-validation was used for verification: the training set and test set were randomly allocated at a ratio of 8:2 and this was repeated 5 times, with the evaluation indicators recorded during training. In order to reflect the performance of the models in real situations, we calculated the mean value and 95% confidence interval (CI) of each indicator for each model.
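A minimal sketch of this tuning-plus-validation loop follows, assuming a PseAAC feature matrix: GridSearchCV selects hyperparameters (the grid shown is illustrative), and 5-fold cross-validation then yields the mean and 95% CI of each indicator. Note that the paper used repeated random 8:2 splits, which standard k-fold only approximates.

```python
import numpy as np
from scipy import stats
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_validate

rng = np.random.default_rng(0)
X = rng.normal(size=(1183, 50))        # placeholder PseAAC features
y = rng.integers(0, 2, size=1183)

# Hyperparameter search via the GridSearchCV interface mentioned above.
grid = GridSearchCV(SVC(),
                    param_grid={"C": [0.1, 1.0, 10.0],
                                "gamma": [0.001, 0.01, 0.1],
                                "kernel": ["rbf"]},
                    cv=5, scoring="roc_auc")
grid.fit(X, y)
best = grid.best_estimator_   # e.g., C=1.0, kernel='rbf', gamma=0.01 in Table 3

# 5-fold cross-validation, recording several metrics per fold.
scores = cross_validate(best, X, y, cv=5,
                        scoring=["accuracy", "recall", "precision", "f1"])

# Report mean and 95% confidence interval for each metric.
for key, vals in scores.items():
    if key.startswith("test_"):
        half = stats.t.ppf(0.975, df=len(vals) - 1) * stats.sem(vals)
        print(f"{key[5:]}: {vals.mean():.4f} +/- {half:.4f}")
```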
Performance of Deep Learning Models
By connecting left-to-right and right-to-left texts, a pre-processed deep two-way expression model was designed. After parameter optimization, the key parameters of the model were set as attention_probs_dropout_prob: 0.0, hidden_act: gelu, hidden_dropout_prob: 0.0, hidden_size: 1024, initializer_range: 0.02, intermediate_size: 4096, max_position_embeddings: 40,000, num_attention_heads: 16, num_hidden_layers: 30, type_vocab_size: 2, and vocab_size: 30. The accuracy of the deep model reached 0.9310 (±0.0145), the recall was 0.9419 (±0.0163), the precision was 0.9262 (±0.0203), and the F1 score was 0.9344 (±0.0141), showing great generalization ability. Furthermore, the ROC curve of BERT and the corresponding AUC value are shown in Figure 5. Its AUC reached 0.9578, showing the outstanding performance of our proposed method.
Performance of Ensemble Learning Models
Training and verification for the three ensemble learning models mentioned above were conducted in the experiment, and each model was substantially optimized using the parameter-tuning methods proposed above. Table 1 shows the key parameters of the ensemble models. The cross-validation results are shown in Table 2. LightGBM and XGBoost, the novel ensemble algorithms, performed best: their average accuracies were 0.8686 and 0.8186, and their F1 scores were 0.8684 and 0.7981, respectively. The RF model performed worse than the two; as a representative of Bagging ensemble models, its average accuracy and F1 score were only 0.7797 and 0.7720. In addition, the ROC curves and the corresponding AUC values of the models are shown in Figure 6. LightGBM had the highest AUC value (0.9105), so its generalization ability was the best. The second was XGBoost (0.8803); the ROC curves and AUC values show that there were still some differences between XGBoost and LightGBM. The AUC of RF was 0.8542, considerably lower than those of LightGBM and XGBoost.
Performance of Previous Machine Learning Models
As baselines for the novel deep learning and ensemble learning models, the machine learning models previously widely used for allergen identification (SVM, K-NN, NB) were also included in the experiment for comparison. This study extracted the pseudo-amino acid composition features of the protein sequences and then input them into each classifier for training and optimization. The results of parameter optimization are shown in Table 3. The 5-fold cross-validation results are shown in Table 4, and the ROC curve of each model and the corresponding AUC value are shown in Figure 7.
Table 3. The key parameter optimization results of the previous machine learning models.
Model | Key Parameter Names and Corresponding Values
SVM | C = 1.0, kernel = 'rbf', gamma = 0.01
K-NN | n_neighbors = 5, n_jobs = 1
NB | alpha = 0.9
Table 4. Performance of previous machine learning models in the task of predicting food allergens.
Compared with the deep learning and ensemble learning models, the performance of the baseline algorithms was generally inferior. Among them, the SVM achieved an accuracy of 0.7418 with an F1 score of 0.7303, and its AUC was 0.8457, so it cannot make relatively accurate predictions of whether a test sequence is allergenic. In contrast, K-NN achieved the best accuracy among the baselines, perhaps due to the architecture of the model itself.
Discussion
In order to break through the low-accuracy bottleneck of traditional allergen prediction methods, this work designed a deep learning model with a novel self-attention transformer structure, together with improved tree ensemble models, to predict the allergenicity of food proteins; these proved superior to the machine learning methods employed in previous similar works. The work provides new ideas for future food allergen screening. As far as we know, this is the first reported work to introduce the BERT deep model and the LightGBM and XGBoost ensemble models into the food allergen prediction task. In this section, we compare and analyze the characteristics of the proposed models and discuss their application scenarios, which should facilitate future model selection.
In the BERT deep learning model, the advantage of introducing self-attention is that it can connect two long-range dependent features in a sequence. For a recurrent neural network (RNN) structure, this may require more time to accumulate and react, so the self-attention mechanism also improves the parallelism of the network. The input of this research is protein sequences of different lengths. Self-attention can ignore the distance between amino acids and directly calculate their dependence relationships, which helps the model learn the internal structure of protein sequences well; this is better and more efficient than traditional natural language processing algorithms. Meanwhile, the BERT model employed in this work has been pre-trained, and a large number of operations done in the downstream tasks of natural language processing are transferred to the pre-trained word vectors. This not only improves the efficiency of allergen sequence recognition but also endows the model with more powerful generalization ability. The architecture of BERT is based on multi-layer two-way conversion and decoding, where "two-way" means that when the model is processing a certain word (amino acid), it can use both the preceding and the following word (amino acid) at the same time, which differs from traditional RNNs. These advantages all highlight the great potential of BERT to accurately predict food allergens. In this study, BERT's AUC reached 0.9578, better than all ensemble learning models (the best of which was 0.9105) and the previously reported machine learning models (the best of which was 0.8529). The high AUC value shows its powerful predictive ability. In terms of recognition accuracy, BERT reached 0.9310, also clearly better than LightGBM (0.8686) and XGBoost (0.8186). This benefits from the unique advantages of the transformer architecture, which surpasses the boosting ensemble models in the task of food allergen prediction. However, it cannot be ignored that pre-training requires a large number of protein sequences of various types, which leads to a high cost of transfer learning. It must also be emphasized that the BERT model has an enormous number of hyperparameters and requires a long training time (about 325 min in this study), which puts strict requirements on computing equipment.
The novel ensemble learning models also performed well in the task of food allergen identification. LightGBM, for example, is a novel GBDT algorithm framework with many advantages. The first is GOSS: the algorithm does not use all sample points to calculate the gradient, but instead samples them. The second is EFB, in which certain features are bundled together to reduce the dimensionality of the features. In addition, using the leaf-wise strategy for iteration reduces errors as much as possible and yields better accuracy. Owing to these characteristics, LightGBM needs a shorter training time and learns better than traditional machine learning algorithms for food allergen prediction. In this research, its average prediction accuracy was 0.8686, the F1 score was 0.8684, and the AUC reached 0.9105, showing that it can accurately predict the allergenicity of a test sequence under small-scale training. Additionally, XGBoost, as a novel ensemble learning model, has been widely used in many fields, and extensive experiments in this study found that it performs relatively well in food allergen prediction tasks. As for RF, there is a relatively large gap between its performance and that of the former two. Compared with the BERT deep learning model, the ensemble learning models did not perform as well, but the algorithms represented by LightGBM and XGBoost did not require pre-training, and their training time was much shorter (about 1-2 min in this work). This means they can perform food allergen screening on portable devices and still obtain considerable results. Table 5 compares the characteristics of the deep model, the ensemble models, and the traditional models more clearly, including prediction performance, time consumption (5-fold cross-validation), and the corresponding computing equipment. Based on this, it can be concluded that the BERT model, with its high training cost, is more suitable for large-scale, high-standard food allergen screening, while the proposed boosting models are more suitable for rapid operation on simple equipment. In previous similar studies, AllerHunter [30] employed a self-designed coding scheme and an SVM classifier to predict allergens and achieved good results; its highest AUC value reached 0.928, lower than the AUC (0.9578) of the BERT deep model. Hassan et al. [14] used the PseAAC encoding method and an SVM classifier to predict the allergenicity of proteins; their highest AUC value was also lower than that of the novel algorithms proposed in this paper. AllerTop [31] and AllerTop.v2 [32] were well received for offering convenient online servers for allergen screening, with K-NN as their optimal algorithm; after training and optimization (5-fold cross-validation), their screening accuracy was 0.8530, lower than that of the deep learning model we propose. Furthermore, researchers have utilized a descriptor fingerprint method to classify allergens, achieving an identification accuracy of 0.8800 on a large-scale dataset, and on this basis developed the online service system AllergenFP [17]. The BERT deep model employed in this study and the ensemble learning models represented by LightGBM and XGBoost further improve the performance of allergen prediction: on a relatively small dataset, they achieved the highest AUC value of 0.9578 and the highest accuracy of 0.9310.
In addition to the prediction of allergenicity based on protein sequence characteristics proposed in this paper, there are also other potential methods. For example, it is possible to analyze a deduced proteome starting from a transcriptome or a genome to screen for predicted allergenic proteins. This method analyzes the nature of allergens, and may obtain more accurate prediction results through multi-omics data and machine learning classification algorithms. It is worth trying in the future.
Nevertheless, certain limitations exist in this experiment. For example, since we focused on developing rapid methods to predict the allergenicity of food allergens, only food allergen sequences were considered in establishing the dataset, and its scale was smaller than the overall allergen repertoire. Moreover, the negative samples in the experiment were food proteins that have not been reported as allergens, so a small number of unreported allergens may be mixed in, which may slightly affect model performance. It should be emphasized that rigorous allergenicity prediction studies need to be verified by in vitro wet experiments (such as ELISA), which will be addressed in future work.
Conclusions
This work adopted the pre-trained BERT deep learning model and novel ensemble learning models, represented by LightGBM and XGBoost, to predict the allergenicity of food proteins. Extensive experiments produced excellent results, superior to those of previous studies: the AUC value of BERT (the best performer) reached 0.9578, and its accuracy reached 0.9310. Experiments were also conducted to compare and analyze the characteristics of the different models, providing guidance on their applicable scenarios. As far as we know, this is the first reported study to use the above methods to identify the allergenicity of food proteins, which will provide inspiration for food allergen prediction in the future. An online web service built on these models will be made available soon.
Comparison of Two High-Dose Versus Two Standard-Dose Influenza Vaccines in Adult Allogeneic Hematopoietic Cell Transplant Recipients
Abstract Background Adult hematopoietic cell transplant (HCT) recipients are at high risk for influenza-related morbidity and mortality and have suboptimal influenza vaccine immune responses compared to healthy adults, particularly within 2 years of transplant. Methods This phase II, double-blind, multicenter randomized controlled trial compared 2 doses of high-dose trivalent (HD-TIV) to 2 doses of standard-dose quadrivalent (SD-QIV) influenza vaccine administered 1 month apart in adults 3–23 months post-allogeneic HCT. Hemagglutinin antibody inhibition (HAI) titers were measured at baseline, 4 weeks following each vaccine dose, and approximately 7 months post-second vaccination. Injection-site and systemic reactions were assessed for 7 days post-vaccination. The primary immunogenicity comparison was geometric mean HAI titer (GMT) at visit 3 (4 weeks after the second dose); we used linear mixed models to estimate adjusted GMT ratios (aGMRs) comparing HD-TIV/SD-QIV for each antigen. Results We randomized 124 adults; 64 received SD-QIV and 60 received HD-TIV. Following the second vaccination, HD-TIV was associated with higher GMTs compared to SD-QIV for A/H3N2 (aGMR = 2.09; 95% confidence interval [CI]: [1.19, 3.68]) and B/Victoria (aGMR = 1.61; 95% CI: [1.00, 2.58]). The increase was not statistically significant for A/H1N1 (aGMR = 1.16; 95% CI: [0.67, 2.02]). There was a trend to more injection-site reactions for HD-TIV after the second vaccination compared to SD-QIV (50% vs 33%; adjusted odds ratio [aOR] = 4.53; 95% CI: [0.71, 28.9]), whereas systemic reactions were similar between groups with both injections. Conclusions Adult allogeneic HCT recipients who received 2 doses of HD-TIV produced higher HAI antibody responses for A/H3N2 and B/Victoria compared with 2 doses of SD-QIV, with comparable injection-site or systemic reactions.
Hematopoietic cell transplant (HCT) recipients are at high risk for infection due to respiratory viruses, including influenza, particularly within the first 2 years post-HCT. Vaccination has been essential in the prevention of influenza-associated illness and reduction of influenza-related morbidity and mortality in adult HCT recipients. Prior studies of influenza vaccination in HCT recipients have noted poor immunogenicity compared to healthy controls, with seroconversion rates ranging from 13% to 59% after single-dose vaccination [1]. Despite their poor responses, the current guidelines recommend annual influenza vaccination after 3-6 months post-transplant [2,3]. Multiple influenza vaccine studies in HCT recipients have noted improved immunogenicity for those who are later post-transplant, with less data about vaccine responses less than six months post-transplant [4-6]. Strategies to improve immunogenicity in HCT recipients are needed in order to establish optimal post-transplant vaccination regimens.
One alternative strategy is the administration of a high-dose inactivated influenza vaccine, which has been proven superior in an elderly population [7]. A single-center, phase I safety and immunogenicity study comparing one dose of high-dose trivalent influenza vaccine (HD-TIV) to standard-dose trivalent influenza vaccine (SD-TIV) in adult HCT recipients with a median of 7.9 months post-transplant reported higher geometric mean titers (GMT) for the A/H3N2 influenza strain compared to SD-TIV, with no major safety concerns noted [8]. Another strategy is the administration of 2 standard doses of influenza vaccine in the same season, but prior studies of this strategy had small cohorts with few participants in the early transplant period and did not compare 2 doses of HD to 2 doses of SD influenza vaccine [5, 9-12]. Therefore, we conducted a phase II, multicenter trial comparing 2 doses of HD-TIV to 2 doses of standard-dose quadrivalent vaccine (SD-QIV) in adult HCT recipients.
Trial Design and Participants
This was a prospective, multicenter, double-blinded, phase II, randomized controlled immunogenicity and safety trial comparing 2 doses of HD-TIV to 2 doses of SD-QIV in adult HCT recipients (ClinicalTrials.gov: NCT03179761). The trial was conducted during the 2017-18 and 2018-19 influenza seasons at 4 sites: Vanderbilt University Medical Center (Nashville, Tennessee, USA), which served as the leading site; Fred Hutchinson Cancer Center (Seattle, Washington, USA); Northwestern University (Chicago, Illinois, USA); and the University of Alabama at Birmingham (Birmingham, Alabama, USA).
Eligible participants were at least 18 years of age and 3-23 months post-allogeneic HCT. Participants with graft versus host disease (GVHD) were eligible if their disease and GVHD therapy were stable for at least 4 weeks prior to vaccination. Exclusion criteria included: hypersensitivity to influenza vaccination, eggs/egg protein, or latex; history of Guillain-Barre syndrome; current pregnancy; evidence of hematologic disease relapse; cirrhosis; human immunodeficiency virus infection; and prior receipt of influenza vaccine or documented influenza infection in the coinciding influenza season. Participants were also excluded if they had received a stem cell boost or delayed donor lymphocyte infusion within 90 days of enrollment, immunoglobulin (Ig) therapy within 28 days of vaccination, any live vaccine within 4 weeks or any inactivated vaccine within 2 weeks prior to potential study vaccination, or had an acute illness within 48 hours. Participants who required non-influenza vaccines while enrolled could receive them if administered at least 2 weeks prior to each study vaccine given at visits 1 and 2.
Participants were randomized on a 1:1 basis to receive either 2 doses of the season-specific HD-TIV or SD-QIV, with a target interval of 28-42 days between vaccine doses (at the time of this study, the high-dose formulation of the quadrivalent vaccine was not available). Randomization, which occurred at visit 1 after eligibility criteria were met, was blocked and stratified by site and by GVHD with systemic steroid use. Additional stratification was put in place for participants <12 months post-HCT by the following factors: alemtuzumab, anti-thymocyte globulin, cord blood transplant, haploidentical transplant, or post-transplant cyclophosphamide.
The study protocol was reviewed and approved by the Vanderbilt University Institutional Review Board (IRB), which served as the single IRB for all study sites. All participants provided written informed consent prior to any study procedures. Study data were collected and managed using a REDCap database hosted at Vanderbilt.
Vaccine
Vaccines were provided by Sanofi (Swiftwater, Pennsylvania, USA), and investigational pharmacies at each site dispensed study vaccines per randomization code. SD-QIV contained 15 µg of hemagglutinin from each strain (A/H1N1, A/H3N2, B/Victoria, B/Yamagata). HD-TIV contained 60 µg of hemagglutinin from each strain except B/Yamagata (Supplementary Table 1).
Study Procedures
Vaccines were administered as 0.5 mL intramuscular deltoid injections given at a target interval of 28-42 days apart (visits 1 and 2). Per protocol, complete blood count, CD4+/CD8+/CD19+ cells, total IgM and IgG concentrations, and blood for serological and cellular assays were scheduled for collection prior to administration of each vaccine dose, as well as 28-42 days (visit 3) following the second vaccine dose and 124-236 days (visit 4) following visit 3. Nasal swabs were obtained at each study visit.
Safety Evaluations
Participants recorded injection-site and systemic reactions using a memory aid for 7 days after each vaccine. Reactions were graded according to a mild/moderate/severe toxicity scale (Supplementary Tables 5 and 6) and entered into REDCap. Grade 3 or higher unsolicited adverse events and severe adverse events (SAE) were also collected through seven days after each vaccination.
Immunogenicity Assays
Serum samples were frozen at each site, shipped to Vanderbilt, and then bulk-shipped to Sanofi Global Clinical Immunology for blinded hemagglutination inhibition (HAI) testing for each vaccine-specific antigen [13]. When blood volume was insufficient, HAI testing of influenza A antigens was prioritized.
Influenza Surveillance
Active influenza surveillance occurred during each site's local influenza season, defined as when ≥10% of clinical or research laboratory samples tested positive for influenza for 2 consecutive weeks by either molecular or rapid testing [8,14,15]. During this period, weekly communication occurred, and a nasal swab was collected when a participant had influenza-like illness (ie, presence of fever and/or 2 of any of the following: respiratory symptoms [rhinorrhea, sinus congestion, post-nasal drip, shortness of breath, cough, wheezing, sputum production, sore throat, sneezing, watery eyes, ear pain, and hoarseness] or systemic symptoms [myalgias and headache]). Nasal specimens were shipped to Vanderbilt University Medical Center and tested using the Luminex NxTAG RPP® assay plus influenza B lineage typing by singleplex polymerase chain reaction [16,17].
Statistical Analysis
Information regarding power calculations is available in Supplementary Table 2. Baseline descriptive statistics were reported as median (interquartile range [IQR]) for continuous variables and as absolute and relative frequencies for categorical variables. All descriptive analyses were based on participants receiving at least 1 vaccine dose.
HAI titers to each antigen were summarized within each vaccine group at each visit as GMT, proportion with a titer ≥1:40 (a proxy for seroprotection), geometric mean fold-rise from baseline (GMFR: e.g., HD-TIV visit 2 or 3 / HD-TIV visit 1), and proportion with a ≥4-fold rise from baseline (a proxy for seroconversion). The primary immunogenicity endpoints were the adjusted geometric mean ratios (aGMR) comparing the GMT between HD-TIV and SD-QIV following the second vaccine dose (visit 3). Superiority was considered achieved if the lower endpoint of the aGMR 95% confidence interval (CI) exceeded 1.0. No multiplicity adjustments were planned, as the primary endpoints were pre-specified. Furthermore, B/Yamagata was analyzed as a control because this strain was included in SD-QIV but not in HD-TIV. The aGMR (HD-TIV/SD-QIV) was estimated using linear mixed models with log-transformed HAI titer, adjusting for age, log-transformed baseline titer, continuous time post-HCT, CD4+ count, CD19+ count, absolute lymphocyte count (ALC), and GVHD, with participant- and site-specific random effects. We sought to identify predictors of visit 3 titers (28-42 days following the second vaccine dose) using a model analogous to the mixed model described above.
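As a rough illustration of this model, the sketch below fits a linear mixed model to log-transformed HAI titers with statsmodels and recovers the aGMR as the exponentiated group coefficient. For simplicity it uses a single random intercept per participant and omits the site-specific random effects and multiple imputation; the file and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant/antigen/visit titer.
df = pd.read_csv("hai_titers.csv")  # assumed columns used below
df["log_titer"] = np.log(df["hai_titer"])          # titers analyzed on log scale
df["log_baseline"] = np.log(df["baseline_titer"])
df["hd"] = (df["group"] == "HD-TIV").astype(int)   # 1 = HD-TIV, 0 = SD-QIV

# Linear mixed model for one antigen at visit 3: fixed effects for vaccine
# group and covariates, random intercept per participant (site omitted here).
m = smf.mixedlm(
    "log_titer ~ hd + age + log_baseline + months_post_hct + cd4 + cd19 + alc + gvhd",
    data=df[df["antigen"] == "A/H3N2"],
    groups="participant_id",
).fit()

# The adjusted GMR comparing HD-TIV to SD-QIV is exp(beta) for the group term.
beta = m.params["hd"]
lo, hi = m.conf_int().loc["hd"]
print(f"aGMR = {np.exp(beta):.2f} (95% CI: {np.exp(lo):.2f}, {np.exp(hi):.2f})")
```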
In all model-based analyses, missing data were addressed using multiple imputation by chained equations (M = 300 iterations). A total of 6 participants died during the post-vaccine follow-up period; their observations were included in analyses for as long as they were alive, though missing values due to death were not imputed.
The primary safety endpoint (reactogenicity) was summarized as the frequency of injection-site reactions (swelling, erythema, tenderness, and pain) and systemic reactions (fever [defined as ≥38.0°C], decreased activity, myalgia, nausea, headache, fatigue, and vomiting) within the 7-day periods following each vaccine dose. We analyzed reactogenicity outcomes using generalized linear mixed models (with a logistic link function, including subject- and site-specific random intercepts) to compare the odds of adverse injection-site or systemic reactions separately following each dose.
Predictors of Post-dose 2 Antibody Titers
Covariate-specific aGMRs for predictors of HAI titers to A/H1N1, A/H3N2, and B/Victoria following the second dose are presented in Table 3. Baseline HAI titers were predictive of post-dose 2 titers for all 3 antigens. Additionally, receipt of HD-TIV, longer time post-HCT, higher CD4+ and CD19+ cell counts, and lower ALC at the time of enrollment were significantly associated with higher post-dose 2 titers for at least 1 antigen.
Durability of Vaccine Immunogenicity
At visit 4 (approximately six months after visit 3), titers to all antigens included in HD-TIV (ie, A/H1N1, A/H3N2, and B/Victoria) were significantly higher compared to baseline titers (Supplementary Table 3). In contrast, recipients of SD-QIV had significantly higher titers from baseline for influenza A antigens only. For both vaccine groups, the estimated visit 4 GMFRs approximately resembled the estimated GMFRs associated with a single dose (ie, at visit 2). At visit 4, the geometric mean titer was significantly higher for HD-TIV compared to SD-QIV for A/H3N2 (aGMR = 1.87; 95% CI: [1.05, 3.34]) and for B/Victoria (aGMR = 1.63; 95% CI: [1.00, 2.65]).
Reactogenicity and Safety
The most commonly reported injection-site reactions after each vaccine dose in both groups were pain and tenderness (Figure 3, Supplementary Figure 1). The frequency of any injection-site reaction was higher for the HD-TIV group (49%) compared to SD-QIV (37%) following the first dose (adjusted odds ratio [aOR] = 3.44; 95% CI: [0.57, 20.7]), but this was not statistically significant. Similarly, the frequency of any injection-site reaction was higher, but also not statistically significant, for HD-TIV (50%) compared to SD-QIV (33%) following the second dose (aOR = 4.53; 95% CI: [0.71, 28.9]). The frequency of any grade 3 (severe) injection-site reaction was 11% for HD-TIV and 7.0% for SD-QIV.
Laboratory-Confirmed Influenza Cases
We identified a total of 7 individuals (5.6%) with laboratory-confirmed influenza infections: 5 cases in the HD-TIV group and 2 in the SD-QIV group (Supplementary Table 4). Two of the 5 cases in the HD-TIV group were due to B/Yamagata, which was not included in HD-TIV, and the remaining 3 cases were A/H3N2. In the SD-QIV group, both cases were due to A/H3N2. No individuals diagnosed with influenza required hospitalization.
DISCUSSION
This multicenter, double-blinded, phase II, randomized, controlled trial of 124 adult HCT recipients demonstrated that 2 doses of HD-TIV given at least 4 weeks apart were more immunogenic for influenza A/H3N2 and B/Victoria compared to 2 doses of SD-QIV, with higher GMTs 1 month after the second dose. Furthermore, the GMTs for A/H3N2 and B/Victoria were higher 6 months after the second dose in the HD-TIV group compared to the SD-QIV group, signifying that the relative benefit of HD-TIV over SD-QIV is durable throughout the length of an influenza season. In addition, the safety profiles were comparable between groups for both systemic and injection-site reactions. Notably, most injection-site reactions resolved within 2 days of vaccination. The increased immunogenicity and similar safety profiles are important findings, as adult HCT recipients are at considerable risk for severe influenza disease and influenza-related complications. Thus, determining the optimal influenza vaccine strategy is essential.
Figure 1. Enrollment, randomization, and vaccine status. A total of 134 participants were consented, among whom 124 were subsequently randomized and vaccinated. Among the 64 participants randomized to receive SD-QIV, 59 (92%) received both doses; among the 60 participants randomized to receive HD-TIV, 59 (98%) received both doses. Abbreviations: HD-TIV, high-dose trivalent; SD-QIV, standard-dose quadrivalent.
Our study provides further support that a high-dose influenza vaccine strategy provides better immunogenicity than standard-dose influenza vaccine. Our prior phase I, single-center study of 44 adult HCT recipients (median time post-HCT: 7.9 months) reported that a single dose of HD-TIV produced a higher GMT (GMR = 6.9) and a higher percentage of individuals with protective titers to A/H3N2 (81% vs 36%) compared to a single dose of SD-TIV [8]. Additionally, these results are consistent with a prior phase II trial of 161 adult solid organ transplant recipients, in which HD-TIV was associated with higher GMTs compared to SD-TIV for all 3 antigen strains [18]. These findings are further consistent with our pediatric HCT trial of 170 participants, in which we found that 2 doses of HD-TIV resulted in higher antibody responses to both influenza A antigens compared to 2 doses of SD-QIV [19]. Collectively, these data suggest HD-IIV is a practical strategy to overcome suboptimal immune responses in these vulnerable populations.
Our study is unique in that it compared 2 doses of HD-TIV to SD-QIV in an adult HCT population and found that 2 doses of either vaccine were associated with higher GMTs after each dose compared to baseline. Furthermore, the HD-TIV group met each of the 3 criteria of the historical World Health Organization biological standards for influenza vaccines after 2 doses for all 3 antigens included in HD-TIV.
In a prior phase III study in the elderly comparing a single dose of HD-TIV to a single dose of SD-TIV, a superiority GMR benchmark of 1.5 was needed for licensure [20]. This benchmark (ie, the aGMR comparing HD-TIV to SD-QIV) was met for both A/H3N2 (aGMR: 2.03) and B/Victoria (aGMR: 1.63) after 2 doses in our HD-TIV group. Previous studies evaluating methods to improve vaccine immunogenicity in HCT recipients have primarily focused on 2 doses of standard influenza vaccine administered within the same influenza season [5, 10-12]. In these studies, 2 doses of influenza vaccine had variable effects on the seroresponse rate in HCT recipients compared to a single dose. A study evaluating immunogenicity in HCT recipients who received 2 doses of the ASO3-adjuvanted influenza A/H1N1 vaccine showed that seroconversion rates improved from 54% after the first dose to 84% after the second dose [21]. This study also noted that individuals who were <12 months from transplant exhibited a serological response rate of 21%. However, ASO3-adjuvanted influenza vaccines are not universally available; therefore, a 2-dose HD-IIV strategy could be implemented more readily.
Our study is also distinct from prior influenza vaccine studies of immunocompromised hosts in that we followed our cohort for at least 6 months following the second dose to assess durability of immunogenicity. This study demonstrated that HD-TIV HAI titers were higher compared to baseline for at least 6 months following completion of the 2-dose regimen for all 3 antigens. It also demonstrated that the relative benefit of HD-TIV compared to SD-QIV was sustained long-term for 2 of the 3 antigens (in particular, A/H3N2 and B/Victoria). These findings are particularly compelling because over half of the participants were vaccinated between 3 and 6 months post-transplant. This provides further evidence favoring a 2-dose regimen of HD influenza vaccine in this high-risk population, including in the early transplant period. This study also demonstrated that adult HCT recipients tolerated HD-TIV and that grade 3 reactions were infrequent. These findings are similar to what has been observed in our previous phase I studies comparing 1 dose of HD-TIV to 1 dose of SD-TIV in immunocompromised populations [8,14]. Collectively, the prior phase I and phase II trials in both pediatric and adult immunocompromised hosts provide sufficient evidence that HD influenza vaccines are safe in these high-risk populations [18, 22-24].
This study is subject to limitations. We did not include a non-immunocompromised adult control group. Importantly, the HD-TIV product used in this trial did not include B/Yamagata; however, HD-QIV is now licensed. The study was conducted over 2 years, and the specific antigen strains for A/H3N2 and B/Victoria differed between the 2 seasons. Even though active influenza surveillance was conducted, this trial was not powered to determine the efficacy of HD-TIV compared to SD-QIV in preventing influenza infection in this population, but the numbers of influenza cases due to vaccine strains were similar between groups.
CONCLUSION
This study found that a 2-dose regimen of HD-TIV was associated with greater immunogenicity compared to SD-QIV in adult HCT recipients, and the higher titers in the HD-TIV group were maintained over the entire influenza season. Furthermore, both vaccine regimens were well tolerated. Data from this study provide evidence to support implementation of a 2-dose regimen of HD inactivated influenza vaccine in this high-risk population.
Figure 3. Injection-site and systemic reaction frequencies. Displayed are the relative frequencies of each injection-site and systemic reaction type for each vaccine group (SD-QIV vs HD-TIV) following each dose. Reactions were further graded according to a mild/moderate/severe toxicity scale (grades 1 through 3, respectively), indicated by shading. Abbreviations: HD-TIV, high-dose trivalent; SD-QIV, standard-dose quadrivalent.
Table 2. Point Estimates and 95% CIs for Group-Specific Geometric Mean Fold-Rises (GMFRs) and Adjusted Geometric Mean Ratios (aGMRs, Comparing High Dose [HD-TIV] to Standard Dose [SD-QIV]), Shown for Each Antigen at Each Follow-up Visit
Visit 2 titers are measured at a target window of 28-42 days following the first dose (prior to the second dose), visit 3 titers are measured at a target window of 28-42 days following the second dose, and visit 4 titers are measured at a target window of 138-222 days following visit 3. Bolding indicates statistical significance at the 0.05 level (two-sided).
Pluripotent Stem Cell-derived Strategies to Treat Acute Liver Failure: Current Status and Future Directions
Liver disease has long been a heavy health and economic burden worldwide. Once the disease is out of control and progresses to end-stage disease or acute organ failure, orthotopic liver transplantation (OLT) is the only therapeutic alternative, and it requires appropriate donors and aggressive administration of immunosuppressive drugs. Therefore, hepatocyte transplantation (HT) and bioartificial livers (BALs) have been proposed as effective treatments for acute liver failure (ALF) in the clinic. Although human primary hepatocytes (PHs) are an ideal cell source to support these methods, the large quantities of viable PHs needed restrain their wide usage. Thus, finding an alternative source that meets the quantity and quality requirements for hepatocytes is urgent. In this context, hepatocytes derived from human pluripotent stem cells (PSCs), which have unlimited proliferative and differentiation potential, are a promising renewable cell source. Recent studies of the differentiation of PSCs into hepatocytes have provided evidence that supports their clinical application. In this review, we discuss the current status and future directions of the potential use of PSC-derived hepatocytes in treating ALF. We also discuss opportunities and challenges in promoting such strategies for common use in clinical treatment.
Introduction
Liver diseases, including acute liver failure (ALF), are a public health challenge worldwide because of the deaths caused by liver dysfunction. 1-3 ALF is a severe condition with significant morbidity and mortality, even for patients without pre-existing liver disease. The causes of ALF vary geographically: viral infections of the liver, primarily hepatitis B, C, and E, predominate in developing countries, whereas drug overdose-induced ALF, usually from paracetamol (acetaminophen), predominates in developed countries such as the USA and parts of Europe. 4-8 Because of the severity of ALF, there are few ways to prevent or cure the disease other than orthotopic liver transplantation (OLT), which is currently the only treatment considered effective in avoiding the life-threatening complications of ALF. 9-11 However, OLT is limited by the scarcity of available donor livers, complicated surgical procedures, and a high financial burden. 12 Therefore, beyond OLT and drug support for the maintenance of basic vital signs, there is a need for effective therapeutic treatments for ALF.
In recent years, hepatocyte transplantation (HT) and bioartificial liver (BAL) systems have emerged as effective methods for the compensatory treatment of ALF-related liver dysfunction. 13-16 These two methods potentially build up the fundamental niche for host liver regeneration and decelerate disease progression, which creates bridging time to OLT. As reported, effective HT involves reconstitution of as much as 2.5% of functional liver tissue in treating acute-on-chronic liver failure (ACLF). 17 Accordingly, primary hepatocytes (PHs) are considered the ideal cell source for such treatments. Unfortunately, meeting the demand for large quantities of clinical-quality PHs from limited viable organ donations remains a bottleneck. To solve these problems, studies have focused on developing strategies using human pluripotent stem cell (PSC)-derived hepatic-like cells (HLCs), including hepatoblasts and hepatocytes. The differentiation of PSCs into clinical-grade HLCs has been studied. 18-20 The aim of this review is to summarize current opinions regarding the therapeutic effectiveness of PSC-derived HLCs for ALF treatment and to discuss recent progress and remaining challenges in preclinical and clinical applications of PSC-derived HLCs (Fig. 1).

ALF has a rapid onset and leads to a frequently fatal outcome, with up to 30% mortality. 21 Paracetamol overdose and autoimmune liver injuries are the most frequent causes in developed countries, while HBV infection is the primary cause of ALF in developing countries. 2 Paracetamol toxicity, which induces mitochondrial oxidative stress-related cell death and sterile inflammatory responses in hepatocytes, accounts for more than 46% of ALF cases in the USA. 22 At the early stage of paracetamol-induced liver injury, treatment with N-acetyl-cysteine or 4-methylpyrazole (fomepizole) can effectively control progression. 23 However, at later stages, drugs are no longer effective in slowing disease progression, which leaves OLT as the last option for such patients. HBV infection has plagued China for a long time and is involved in 84% of hepatocellular carcinoma and 77% of liver cirrhosis cases annually. 6 Control of HBV is fundamental to preventing ALF. Anti-HBV drugs focus on slowing the replication of viral DNA, but complete elimination of HBV DNA is hard to achieve and is the main reason for HBV relapse and progression. 24,25 Once HBV replication is out of control, there is a high risk of ALF. The pathology and autopsy of ALF patients often show widespread hepatic apoptosis and necrosis with few viable hepatocytes remaining, which leads to the failure of liver regeneration. To save ALF patients, the question to answer is how to buy time for liver regeneration.
Treatment of ALF must deal with systemic complications, including the release of pro-inflammatory cytokines, multiple organ failure, and a hypotensive environment. Hepatic encephalopathy frequently appears because hepatocyte death results in aberrant liver function, allowing toxins to travel to the brain and impair its function. Although L-ornithine-L-aspartate and ornithine phenylacetate inhibit ammonia synthesis to relieve symptoms, OLT is currently the last chance for ALF patients. Development of novel treatments for ALF patients is urgent.
Current knowledge of the treatments for ALF
In addition to basic symptomatic supportive treatments to stabilize vital signs, cell therapy-based supplementation of liver regeneration and bioartificial liver (BAL) support systems have been developed as effective tools for ALF patients. Both of these methods require a large quantity of viable hepatocytes.
BAL system
Before the emergence of BAL, abiotic artificial liver therapies, including plasmapheresis, hemoperfusion absorption, and venous hemodiafiltration, were used as clinical treatments with limited success. 26,27 The molecular adsorbent recirculating system and the Prometheus system are widely used non-bioartificial liver systems with benefits for ALF patients. 28,29 However, because such a system relies on exogenous detoxification, it cannot provide the environment needed for hepatic regeneration, as it is difficult to mimic all the functions of host hepatocytes. BAL systems include functional hepatocytes in a bioreactor that simulates the function of a normal human liver. To a large extent, a BAL can not only remove toxic substances but also provide functions such as synthesis and metabolism, temporarily replacing the function of the damaged liver so that patients can survive the fatal onset of ALF. 16,30 The indispensable factor within the BAL system is the functional hepatocytes. The quality of the functional hepatocytes, the ease of obtaining them, and their safety are decisive in determining whether a BAL can play an important role in clinical treatment.
Prior to this, the main sources of functional hepatocytes were primary liver cells, porcine liver cells, human liver cancer cell lines such as HepG2 and HepaRG, and immortalized human liver cell lines such as L-02. Human PHs are the best for use in BALs, but organ sources are limited, and it is difficult to obtain a sufficient number of human PHs for BALs. Porcine liver cells are used because of their functionality, abundant supply, and easy access. For example, the AMC artificial liver system using porcine liver cells successfully helped 12 patients with ALF gain time for OLT; one patient no longer needed OLT because of the effectiveness of the therapy. 31,32 The HepaAssist system, which uses porcine liver cells, is the only BAL system that has been investigated in a multicenter randomized controlled clinical trial in the USA. Although it achieved encouraging therapeutic effects in phase III clinical trials, underlying safety concerns, including heterogeneous immune rejection and animal-derived virus infections, have made it difficult to obtain regulatory approval, and it has not yet been approved by the Federal Drug Administration. 33 The superiority of human liver cancer cell lines and immortalized human liver cell lines is that they can proliferate indefinitely in vitro. However, their functions are greatly compromised and there is a potential tumorigenic risk, which limits their application prospects. For example, the Vital Therapies artificial liver system, which uses C3A liver cancer cells, failed a phase III clinical trial because of poor therapeutic effects, even though its effectiveness in animal experiments was good. 34,35 Therefore, obtaining a large quantity of clinical-grade functional hepatocytes is the major hindrance for BAL.

Fig. 1. The advantage of using PSC-derived HLCs is their unlimited proliferation potential, which addresses both the shortage of viable donor livers and the shortage of primary hepatocytes. By differentiating PSCs (hESCs or iPSCs) or genome-edited PSCs into HLCs, we can obtain HLCs of the required quantity and quality for BAL and HT in severe liver disease (e.g., ALF, ACLF, and ESLD). After BAL or HT treatment, the ideal outcome is either graft expansion and regeneration of the host liver or bridging to OLT. ALF, acute liver failure; ACLF, acute on chronic liver failure; BAL, bioartificial liver; ESLD, end-stage liver disease; hESC, human embryonic stem cell; HLC, hepatic-like cell; HT, hepatocyte transplantation; iPSC, induced pluripotent stem cell; OLT, orthotopic liver transplantation; PSC, pluripotent stem cell.
Nowadays, in regenerative medicine research, PSCs have received much attention for their potential to be differentiated into functional hepatocytes as the source of seed cells in BAL systems. Precise differentiation of human embryonic stem cells (hESCs) or induced pluripotent stem cells (iPSCs) into HLCs has been achieved and improved tremendously. In addition, with the appearance of 3D culture systems, hepatic organoid formation yields more mature HLCs with more comprehensive functions. 36,37 Moreover, Lijian Hui's group in Shanghai successfully transdifferentiated human fibroblasts into human hepatocytes (hiHep) and overexpressed SV40 large T antigen through gene editing, thus conferring the ability to expand in vitro and providing a potential cell source for BAL. 38 This technology was also successfully used in a clinical trial of a bioartificial liver in 2016 and achieved good therapeutic effects, which greatly improved confidence in promoting hiHep into clinical application. In addition, bioreactors, the key devices in a BAL system, are able to provide a favorable proliferative and metabolic platform for large-scale liver cell culture and storage. 39 For example, a fluidized-bed bioreactor with alginate-based spherical beads is able to scale up to 10^11 cultured liver cells while retaining their hepatic functions. 40 Yet the challenge is to extend such designs to clinical applications.
Hepatocyte transplantation (HT)
The concept of HT therapy was first described in the early 1970s. After more than 20 years of development, HT therapy was translated from animal experiments to clinical trials and was shown to be effective in ALF and acute-on-chronic liver failure (Table 1). 17,41-45 HT has several key therapeutic advantages: (1) it is less invasive than OLT surgery and can be performed multiple times; (2) the patient's liver is preserved and retains its ability to regenerate; and (3) with the development of gene editing and stem cell technology, HT can be coupled with targeted genome modifications, enabling individualized and precise treatment. 15,46 These advantages are not available with OLT or BAL support systems. So far, many liver diseases have undergone clinical trials of HT treatment, laying the foundation for clinical promotion and application.
How to gain time is a significant issue for ALF patients. On one hand, HT helps patients regenerate their own livers by providing a proliferative niche for transplanted hepatocytes. On the other, when OLT is inevitable, HT plays the role of a transitional bridge connecting patients with an appropriate donor liver. In animal models of drug-induced ALF, HT significantly improves survival. In clinical trials, more than 40 cases of ALF caused by drugs or viral infections have been treated by HT worldwide. 47,48 Although these were not multicenter randomized controlled trials, and the delivery method, volume of transplanted cells, and cell sources were not standardized, which makes them difficult to compare statistically, most patients responded well to treatment, with prolonged survival, bridging to OLT, and even full recovery (Table 1). 17,41 The limited clinical data support the therapeutic effect of HT, but treatment needs to be further standardized and unified.
PSC-derived hepatocytes
With both BAL support and HT treatment, the key to success is the quality and quantity of functional liver cells. Human PSCs, including human embryonic stem cells (hESCs) and induced pluripotent stem cells (iPSCs), have unlimited proliferation ability and the pluripotency to differentiate into any somatic cell type. Therefore, the differentiation of PSCs into HLCs with gene expression profiles and functions similar to those of human hepatocytes can, to a large extent, solve the problem of limited sources of functional hepatocytes. Recent advances in stem cell research have produced methods that increase the ease of inducing in vitro differentiation into HLCs. However, treatment often requires on the order of 10^9-10^10 hepatocytes, which remains a barrier between PSC differentiation and clinical application. One obstacle is that differentiation efficiency is limited and often accompanied by the risk of incomplete differentiation or incorrect cell fates, resulting in unpredictable safety issues. Additionally, current hepatocyte culture systems are not well developed, making it hard to maintain both the proliferative ability and the functions of cultured hepatocytes. Therefore, we need a more comprehensive and in-depth understanding of the molecular mechanisms of direct differentiation of PSCs into HLCs in order to establish an efficient and stable differentiation system. We need to find ways to culture and expand hepatocytes in vitro to obtain large numbers of clinical-grade hepatocytes, which is of great significance for the treatment of ALF by BAL and HT. The sections below review the current status and progress of PSCs used for the treatment of ALF.
Differentiation of PSCs into HLCs
The precise differentiation of PSCs into HLCs in vitro is mainly achieved by simulating the development of the human liver, accomplished by adding growth factors and small molecules that regulate the related signaling pathways. Methods described in the available studies can be used to induce the differentiation of PSCs into definitive endoderm (DE), hepatoblasts (HBs), and mature hepatic cells, i.e., HLCs. Although the specific induction schemes adopted by different research groups are not the same, the basic method is: (1) induction of DE cells by activin-A; (2) transformation of DE to HB by treatment with FGF, BMP, and HGF; and (3) use of OSM and dexamethasone (DEX) to induce maturation of HBs into HLCs (Fig. 2). 49 The induction of DE is the first step of differentiation and is a key step that determines the final differentiation efficiency. The most frequently used method is the induction of PSCs to form DE cells by activin-A. The underlying mechanism is activation of the Nodal signaling pathway, which simulates the early steps of liver development in vivo. 50-52 Some studies have reported that inhibiting the PI3K signaling pathway is a prerequisite for the effective use of activin-A for DE induction, and adding PI3K signaling pathway inhibitors improves the efficiency of DE differentiation. 53 Adding a rho kinase (ROCK) inhibitor at that stage reduces cell apoptosis to a certain extent, which improves cell survival and differentiation efficiency. Compared with the complex signaling pathways regulated at the DE stage, the regulation of differentiation into HBs and HLCs is relatively clear. In vivo studies of liver development, in vitro coculture studies, and single-cell sequencing have shown that the transforming growth factor beta (TGF-β), Wnt, and NOTCH signaling pathways are the pathways most involved when DE cells are induced toward the hepatic lineage by growth factors such as BMP, FGF, and HGF. This step avoids the establishment of an incorrect cell fate (e.g., bile duct or pancreas cells) and improves the purity of HLCs at the final stage. 54 Differentiation induced by growth factors is recognized as an efficient method of obtaining functional HLCs, but growth factors are expensive and difficult to store, which limits their use for large-scale production of HLCs. In addition, most growth factors are protein products containing animal components that may cause adverse reactions in clinical use. In that context, a combination of small molecules can be used to replace the growth factors and obtain functional HLCs with high efficiency. Advantageous properties of small molecules include the ability to freely penetrate cell membranes, stable structures, lack of immunogenicity, low cost, and wide variety. The use of small-molecule compounds is expected to become a safer and more effective method of inducing clinical-grade HLCs. Recent reports by multiple research groups have described the use of small molecules to induce differentiation into HLCs. IDE1 and IDE2 are small molecules that can efficiently induce PSCs to form DE, acting much like activin-A by stimulating the Nodal signaling pathway. 55 At the HB stage, a glycogen synthase kinase (GSK)-3β inhibitor is used to stimulate the Wnt pathway to guide DE toward a hepatic rather than a bile duct fate. 56,57 Recently, Asuma et al. 20 reported the use of small molecules to differentiate hESCs into HLCs.
A comparison of HLCs induced by small molecules with those derived using growth factors showed comparable function, including albumin (ALB) secretion and the activity of drug-metabolizing CYP450 enzymes. In addition, Pan et al. 58 introduced an improved combination of small molecules for robust HLC induction. The use of small molecules has promising prospects, but further research is needed to develop more stable and efficient combinations that increase effectiveness and safety for clinical use.
Functional HLCs can be obtained by direct differentiation of PSCs. There are also reports of transdifferentiating somatic cells to obtain functional HLCs. Hui et al. 38 reported that human fibroblasts overexpressing the transcription factors FOXA3, HNF1α, and HNF4α can be transdifferentiated into HLCs that perform a series of functions similar to those of PHs. Transdifferentiation provides another source of HLCs, but its safety needs further verification, as such transcription factors are known to participate in the carcinogenesis of hepatocellular carcinoma.
In vitro expansion of HLCs
Obtaining HLCs from PSCs has been validated by multiple research groups, proving its reproducibility and efficiency. However, given the volume of cells required for clinical transplantation, relying on differentiated HLCs alone is not enough. As a result, how to expand hepatocytes in vitro has attracted widespread attention in recent years. Hepatocytes are terminally differentiated cells, which makes them difficult to culture in vitro while maintaining their inherent functional properties. Hui et al. 59 reported that a combination of small molecules, adding Wnt3a to hepatocyte medium and removing Rspo1, Noggin, and forskolin, expanded human hepatocytes up to 10,000-fold. However, they found that the expanded hepatocytes had a bidirectional differentiation potential that placed them between hepatic progenitor cells (HPCs) and mature hepatocytes. Expanding hepatocytes in vitro appears to be a complicated task, so research has focused on the expansion of hepatic progenitor cells such as HBs that still retain some degree of stemness.
Compared with mature hepatocytes, HBs have a stronger proliferative ability and the potential for rapid differentiation into both hepatocytes and bile duct cells. 60-63 Amplifying PSC-derived HBs is an ideal alternative source of hepatocytes. On the one hand, it is feasible to exploit the proliferative potential of HBs; on the other hand, amplified HBs can be frozen to establish a cell bank, acting as seed cells that can be rapidly obtained for functional HLC differentiation. Recent reports have found that multiple small-molecule compounds are suitable for amplifying HBs, such as the GSK-3β inhibitor CHIR99021, the TGF-β signaling pathway inhibitor A83-01, and the ROCK inhibitor Y27632. A recent study combined small molecules to simultaneously regulate the BMP/WNT/TGF-β/Hedgehog pathways, which not only maintained the stemness of HBs but also retained their proliferative capacity. The HBs amplified by this combination had therapeutic effectiveness after transplantation into ALF-model mice. 64,65 Large-scale expansion of HBs would be a major step toward producing HLCs in the quantity and quality required for clinical development and application.
Clinical benefits of PSC-derived cell therapy
Much effort has been made worldwide to promote PSC-derived methods to cure chronic and acute illness. Induced PSC-derived retinal pigment epithelium cells have been used clinically to treat patients with macular degeneration, with good outcomes 1 year after transplantation, which supports the use of PSC-derived cells in clinical applications. 66 The use of PSC-derived HLCs in HT and BAL applications for ALF would thus serve as a promising clinical alternative. The clinical indications and benefits of PSC-derived cell therapies for treating ALF or end-stage liver disease are summarized below.
Modulating the regeneration niche
A positive outcome requires that HT promote sufficient regeneration of the host liver. Besides increasing the homing and engraftment of transplanted hepatocytes, modulating the injury niche, including host immune responses such as macrophage activation and cytokine release, 67,68 is also an important benefit of using PSC-derived HLCs. Unlike PHs, hypoimmunogenic PSC-derived HLCs could modulate host immune recruitment to restrain systemic inflammation. For example, phagocytosis mediated by macrophage activation might be limited through the CD47-SIRPα axis if PSC-derived HLCs overexpressing CD47 were transplanted. 69-72 Such clinical applications could be useful in a broader scope of liver disease, not limited to ALF.
Transplantation feasibility and safety
Even if the shortage of donor livers could be solved, OLT is still a challenging procedure with risks including intraoperative bleeding, postsurgical cardiovascular dysfunction, and unavoidable death. 73,74 PSC-derived HT is a safer alternative, involving infusion that does not require major surgery and allowing multiple transplantation procedures. 75 Improvements in cell culture would make PSC-derived HLCs a good alternative to PHs as a source of hepatocytes. The feasibility of PSC-derived HLCs is not limited by a shortage of cells, and HLCs can be cryopreserved to ensure a constantly available cell source for emergency treatment of ALF patients. 76,77

Individualized treatment

PSC-derived HLCs combined with CRISPR/Cas9 genome editing and PSC differentiation would allow the generation of multiple PSC cell lines that meet individual patient requirements or those of the primary illness. 78,79 For instance, HBV-induced liver disease could theoretically be corrected by transplantation of HLCs derived from edited PSCs with the HBV receptor (NTCP) knocked out, or with ectopic expression of NTCP variants. 80,81 Following transplantation in such patients, HBV could not enter the hepatocytes, as they would lack the receptor, avoiding HBV recurrence. Treatment might thus be adjusted depending on the pathophysiology of the primary illness that caused ALF.
Challenges of current PSC-based options
Clinical trials of HT and BAL support systems are ongoing and strive to promote these two therapeutic methods, which have broad application prospects in clinical treatment. However, the novelty of the methods and the complexity of ALF pose challenges, which can be summarized as follows. The lack of rigorous clinical trials makes it difficult to achieve unified and standardized treatment. Most ALF patients indicated for HT and BAL are in a life-threatening stage of disease and require urgent treatment intervention, so it is not possible for multiple centers to formulate detailed treatment procedures in time, which makes it difficult to reach a consensus. Standardized treatment indications, treatment procedures, countermeasures for complications, and the introduction of appropriate treatment guidelines are prerequisites for the adoption of HT and BAL in clinical applications.
The key requirement of these two treatments is the quantity and quality of functional liver cells. No matter which method is used to obtain functional liver cells, an inevitable core problem is the immunogenicity of the cells. At present, adjuvant immunosuppressive agents or pretransplant radiotherapy are used in patients receiving HT to suppress the patient's immune system and protect the transplanted cells. Once the immune system is suppressed, the patient is exposed to risks of tumorigenesis and infection. Recently, hypoimmunogenic PSCs have been developed to overcome the issue of immune rejection: knocking out human leukocyte antigen (HLA) class I and II molecules, accompanied by overexpression of HLA-E, which engages natural killer (NK) cell inhibitory receptors, might help cells evade host immune surveillance. 82,83 Human embryonic stem cells overexpressing CTLA4-Ig and PD-L1 are immune-evasive and have shown therapeutic effectiveness in a humanized mouse model of acute liver injury. 84,85 Further research should be carried out to elucidate the underlying mechanisms. Safety should not be neglected, as the risk of tumor formation increases without host immune recognition. The development of novel immune tolerance strategies is of great significance for HT therapy.
Improvement of transplanted-cell engraftment and homing needs to be studied. After the liver is damaged, hepatic stellate cells are activated, take on a fibroblast-like phenotype, and deposit collagen, which makes it difficult for transplanted cells to enter damaged regions of the liver. Different routes of delivery have been validated, among which splenic transplantation and the hepatic portal vein are typically used in clinical treatment. There are three ways of delivery via the portal vein: ultrasound-guided intrahepatic portal vein puncture, transcutaneous splenic vein puncture, and intrahepatic portosystemic shunt via the hepatic venous system. 35 However, these procedures are associated with risks of portal vein hypertension, bleeding, or thrombosis. 86 Alternative routes include the hepatic artery, which has a higher blood flow velocity and a lower risk of thrombosis. 87 More clinical data should be collected to choose the appropriate routes of delivery. Coupling nanomaterials with HT is a novel approach that could improve the viability, homing, and engraftment of transplanted hepatocytes. 88,89 Micro-encapsulated HLC patches or decellularized liver scaffolds would avoid intravenous or arterial injection. 90-92 Increasing the rate of homing of transplanted cells is a guarantee of the clinical therapeutic effectiveness of HT and needs further validation.
Concluding remarks
In summary, HT and BAL support have bright prospects and application value in the treatment of ALF. PSC-derived HLCs have the potential for wide clinical application, but demonstrations of effectiveness and freedom from complications are still needed. Animal models with humanized immune systems can provide more accurate immune-response data for HT studies aimed at reducing the immunogenicity of transplanted cells, establishing immune tolerance strategies, and ensuring safety. Last but not least, combining various therapies for ALF treatment is a future trend.
Conflict of interest
The authors have no conflict of interests related to this publication.
Data sharing statement
All data are available upon reasonable request.
Estimating effects of parents’ cognitive and non-cognitive skills on offspring education using polygenic scores
Understanding how parents’ cognitive and non-cognitive skills influence offspring education is essential for educational, family and economic policy. We use genetics (GWAS-by-subtraction) to assess a latent, broad non-cognitive skills dimension. To index parental effects controlling for genetic transmission, we estimate indirect parental genetic effects of polygenic scores on childhood and adulthood educational outcomes, using siblings (N = 47,459), adoptees (N = 6407), and parent-offspring trios (N = 2534) in three UK and Dutch cohorts. We find that parental cognitive and non-cognitive skills affect offspring education through their environment: on average across cohorts and designs, indirect genetic effects explain 36–40% of population polygenic score associations. However, indirect genetic effects are lower for achievement in the Dutch cohort, and for the adoption design. We identify potential causes of higher sibling- and trio-based estimates: prenatal indirect genetic effects, population stratification, and assortative mating. Our phenotype-agnostic, genetically sensitive approach has established overall environmental effects of parents’ skills, facilitating future mechanistic work.
Supplementary Note 1: Deviation from pre-registered methods
To correct for family structure in our trio data in NTR, we planned to use the gee function in R. To correct for additional shared factors in the sibling design, we planned to use the lme function in R to specify a random intercept for family (as done by 1). However, the gee function led to convergence issues when bootstrapping. Additionally, simulations showed that the use of a mixed model (lme or lmer commands in R) in the sibling design leads to underestimation of indirect genetic effects, and to underestimation of direct genetic effects in the non-transmitted alleles design (see Supplementary Note 5). Hence, we used linear models in the sibling and non-transmitted allele designs (lm command in R) and bootstrapped standard errors (see Methods in the main manuscript).
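To make the lm-plus-bootstrap procedure concrete, here is a minimal R sketch in which whole families are resampled so that standard errors respect the family clustering that lm() itself ignores. The toy data, variable names (family, pgs, outcome), and effect size are all hypothetical placeholders, not the study's actual pipeline.

```r
library(boot)
set.seed(1)

# Toy family-clustered data (hypothetical names and effect size)
n_fam <- 500
dat <- data.frame(family = rep(seq_len(n_fam), each = 2),
                  pgs    = rnorm(2 * n_fam))
dat$outcome <- 0.3 * dat$pgs + rnorm(nrow(dat))

# Resample whole families, refit the linear model each time
stat <- function(fams, idx, d) {
  coef(lm(outcome ~ pgs, data = d[d$family %in% fams[idx], ]))
}
b <- boot(unique(dat$family), stat, R = 500, d = dat)
apply(b$t, 2, sd)   # bootstrapped standard errors of the coefficients
```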
Supplementary Note 2: Meta-analyses of Cognitive Performance and Educational Attainment GWAS
Before performing GWAS-by-subtraction, we ran GWAS of Educational Attainment (EA) and Cognitive Performance (CP) in UK Biobank (with the polygenic score sample left out). Genetic associations were obtained using fastGWA, controlling for age (Data-Field 21022), sex, array, and the first 25 principal components, for 1,246,531 SNPs (HapMap 3 SNPs). We then meta-analysed our UKB EA GWAS (N=388,196) with the EA GWAS by Lee et al., excluding the 23andMe, UK Biobank, and NTR cohorts (N=318,916), using the METAL software. We included SNPs with sample size > 500,000 and MAF > 0.005. We did not apply genomic control, but following Lee et al. we inflated the standard errors from the meta-analysis by the square root of the LD score intercept (1.223, SE=0.0223). After inflation of the SEs, we found a SNP heritability of 0.1006 (SE=0.0027) with an LD score intercept of 0.9783 (SE=0.0187). We meta-analysed our UKB CP GWAS (N=202,815) with the CP GWAS by Trampush et al. (N=35,298) using the METAL software. We included SNPs with sample size > 100,000 and MAF > 0.005. We found a SNP heritability of 0.1858 (SE=0.0064) with LDSC. We did not apply genomic control, and because the LD score intercept was acceptable (1.055, SE=0.0118), we did not inflate the standard errors.
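As a concrete illustration of the SE inflation step, the following R sketch applies the reported intercept to toy per-SNP summary statistics; the beta and SE values are hypothetical, and only the inflation factor (1.223) comes from the text.

```r
# Toy per-SNP summary statistics (hypothetical values)
beta_meta <- c(0.012, -0.008, 0.020)
se_meta   <- c(0.004,  0.004, 0.005)

ldsc_intercept <- 1.223                   # LD score intercept reported above
se_adj <- se_meta * sqrt(ldsc_intercept)  # inflate the standard errors
z <- beta_meta / se_adj
p <- 2 * pnorm(-abs(z))                   # recompute two-sided p-values
```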
Supplementary Note 3: Comparison of demographic and early-life characteristics of the adopted and non-adopted samples
In the adoption design, indirect genetic effects are inferred by subtracting polygenic score associations estimated in a sample of adoptees from those estimated in a non-adopted control group. When taking the difference, it is important that the groups are similar in characteristics other than genetic relatedness to their parents. We explored this empirically by comparing demographic and early-life characteristics of adoptees and non-adopted individuals in the UK Biobank.
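To make the estimator concrete, here is a toy R sketch of the subtraction. The data-generating step is a deliberate caricature (an extra 0.1*PGS association for non-adopted individuals stands in for indirect parental effects), and all names and values are hypothetical.

```r
set.seed(1)
n <- 5000
pgs     <- rnorm(n)
adopted <- rep(c(TRUE, FALSE), each = n / 2)
# Non-adopted individuals receive an extra 0.1*PGS of 'indirect' association
ea <- 0.2 * pgs + 0.1 * pgs * (!adopted) + rnorm(n)

b_non <- coef(lm(ea ~ pgs, subset = !adopted))["pgs"]  # population effect
b_ado <- coef(lm(ea ~ pgs, subset =  adopted))["pgs"]  # direct effect
c(direct = unname(b_ado), indirect = unname(b_non - b_ado))  # ~0.2 and ~0.1
```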
Supplementary Data 11A displays results from our comparison of NonCog PGS, Cog PGS, birthweight, and educational attainment. We observed significant but small differences between the groups in their mean NonCog and Cog PGS as well as educational attainment (Cohen's d < 0.15). We do not observe differences in the variances of these measures between the two groups.
We observed differences in birthweight, with adopted individuals being lighter at birth than non-adopted individuals (mean = 3.12 kg vs 3.33 kg, Cohen's d = 0.31). The variance in birthweight was also significantly different, with more variance in the adoptee group (0.57 vs 0.45). However, missingness of birthweight data is severe in both groups, particularly among adoptees (72% missing among adoptees, 43% in the non-adopted group).
We investigated whether the birthplaces of adoptees and non-adoptees were clustered differently. If so, this could mean that population stratification effects are not consistent across the groups. Hence, we performed k-means clustering on the UK Biobank's east/west birthplace coordinates, separately for adoptees and non-adopted individuals. Without a hypothesis for the number of geographical clusters, we used different numbers of clusters (k), and then plotted the within-cluster sum of squares according to k. For both adopted and non-adopted groups, the best number of clusters was 4, indicated by the location of the bend in the plot. As shown in Supplementary Figure 2 (below), the pattern of clustering was also the same between the groups, with the clusters reflecting the regions of Wales, the South, the North, and the Midlands.
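The elbow procedure described here can be sketched in a few lines of R; the coordinates below are simulated stand-ins for the UK Biobank birthplace data, and k = 1..10 is an arbitrary search range.

```r
set.seed(1)
# Simulated stand-in for two-dimensional birthplace coordinates
coords <- matrix(rnorm(2000), ncol = 2)

# Total within-cluster sum of squares for k = 1..10; the 'bend' suggests k
wss <- sapply(1:10, function(k) kmeans(coords, centers = k, nstart = 20)$tot.withinss)
plot(1:10, wss, type = "b",
     xlab = "Number of clusters k", ylab = "Within-cluster sum of squares")
```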
Next, we compared the proportions of adopted and non-adopted individuals born in each UK region (see Supplementary Data 11B). Adoptees were less likely to have been born in the Midlands and more likely to have been born in the South, but these differences were small. Since there could be differences between the groups in location of birth within each region, we also compared the cluster means for the two groups. There were only small differences. The largest discrepancy was that adoptees from the north were born further south than non-adopted northerners (northern coordinate 1.93 vs 1.99).
Notably, the interpretability of these analyses is hindered by data limitations. A large fraction of birthweight data was missing (72% missingness for the adoptee group). Also, birthplace was self-reported, so it could be inaccurate, particularly for adoptees.
Overall, the key variables under study are well-matched between the adopted and non-adopted groups. There may be stronger systematic differences relating to birthweight and geography (as well as unobserved variables). Differences between the adopted and non-adopted groups are more of a concern for estimating indirect genetic effects than direct genetic effects, since the former but not the latter is based on comparison between groups. Direct genetic effects estimated using this design can be interpreted specifically for the adoption sample, whereas indirect genetic effect estimates have a more ambiguous interpretation since they are based on two different groups.
Supplementary Note 4: Methods for simulating genetic and phenotypic data in the presence of different biases and components
We simulate data introducing various potential components and biases, and then fit all models used throughout the paper to identify how the estimated parental indirect effect changes in the presence of these factors.
We simulate genotype data for 20,000 families. Each family includes a mother, a father, a focal offspring, a child sibling, and an adopted child sibling. The adoptee genotypes are drawn from another simulated dataset of biological parents, independent of the focal families. Therefore, the total sample size including the main families plus biological parents of adoptees is (20,000 x 5) + 20,000 = 120,000 individuals. Genotypes are simulated as 100 bi-allelic SNP calls, using the 'coin flipping' function in R rbinom(). For individuals in the parent generation, probability values for SNPs are defined by minor allele frequencies (simulated as deviates of the uniform distribution between .1 and .5). For offspring of these individuals, probability values for SNPs are defined as each parental genotype divided by 2.
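A minimal R sketch of this genotype-simulation scheme follows; the family count is reduced from 20,000 for brevity, and the function names are ours, not the study code's.

```r
set.seed(1)
n_fam <- 1000                       # reduced from the 20,000 used in the text
n_snp <- 100
maf   <- runif(n_snp, 0.1, 0.5)     # minor allele frequencies

# Parental genotypes: minor-allele counts 'coin-flipped' with rbinom()
draw_parents <- function() sapply(maf, function(p) rbinom(n_fam, size = 2, prob = p))
g_mother <- draw_parents()
g_father <- draw_parents()

# Offspring: each parent transmits one allele with probability genotype/2
transmit <- function(g) matrix(rbinom(length(g), 1, g / 2), nrow = nrow(g))
g_child  <- transmit(g_mother) + transmit(g_father)
```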
We then simulate 'true' SNP effects, drawn from a normal distribution. We use these true SNP effects to simulate 'true' genetic scores for the mothers, fathers, and biological and adopted offspring. The true SNP effects are the same for all individuals and for all sub-populations. True genetic scores are used to simulate phenotypes.
In addition to 'true' polygenic scores, we create more realistic 'GWAS-based' polygenic scores for all individuals by weighting their genotypes by GWAS SNP effects. We define GWAS SNP effects as true SNP effects with added error, calculated as sqrt(.2)*true effects + sqrt(.8)*error, with the error following a normal distribution (this differs when simulating population stratification, see below). GWAS effects are the same for all individuals and sub-populations. GWAS-based polygenic scores are used to estimate direct and indirect effects. We also tested how sensitive the estimates from the three designs are to the amount of noise introduced in the GWAS effects, and found that this only matters for assortative mating (see the assortative mating results below).
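Continuing the genotype sketch above, the true and 'GWAS-based' effects and scores might be generated as follows; only the sqrt(.2)/sqrt(.8) weighting comes from the text, and all object names are hypothetical.

```r
beta_true <- rnorm(n_snp)                                      # 'true' SNP effects
beta_gwas <- sqrt(0.2) * beta_true + sqrt(0.8) * rnorm(n_snp)  # noisy 'GWAS' effects
cor(beta_true, beta_gwas)                                      # ~0.45 on average

score_true <- g_child %*% beta_true   # used to simulate phenotypes
score_gwas <- g_child %*% beta_gwas   # used to estimate direct/indirect effects
```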
We simulate nine offspring phenotypes influenced by different factors:
i) direct genetic effects only, ii) direct and indirect parental genetic effects (maternal and paternal), iii) indirect parental genetic effects plus a prenatal indirect maternal genetic effect, iv) indirect sibling genetic effect, v) indirect parental genetic effects and an indirect sibling genetic effect, vi) assortative mating, vii) assortative mating and indirect parental genetic effects, viii) population stratification, ix) population stratification and indirect parental genetic effects.
Having simulated the nine phenotypes as detailed further below, we use three designs (sibling, adoption, non-transmitted allele, explained in the main article) to estimate indirect parental genetic effects on each phenotype. This allows us to evaluate how designs are affected by the components (prenatal and postnatal parental indirect genetic effects) and biases (sibling indirect genetic effects, assortative mating, population stratification). We repeated the simulation 100 times.
Note that these simulations are to illustrate how designs are affected by the components and biases. Effect sizes for each bias/component are not intended to represent true effects and as such are somewhat arbitrary. Additionally, by necessity we make certain untested assumptions. For example, indirect genetic effects are assumed to be equal between all siblings (i.e., no birth order effects or different effects for adoptive siblings), and population stratification and assortative mating are assumed to operate equally among biological and adoptive parents.

Simulation details for the nine phenotypes

i) Direct genetic effects

We simulate child phenotypes influenced by direct genetic effects only, such that

y = sqrt(var(g))*x + e

where y is the child phenotype, x is the true genetic score of the child, var(g) is the variance explained by the true genetic score, and e is the residual error (explaining the rest of the variance).
Parental phenotypes used below are also simulated this way (i.e., influenced by own genotype plus environment/error).
ii) Indirect parental genetic effects

We simulate child phenotypes influenced by direct genetic effects and indirect parental genetic effects, such that

y = sqrt(var(g))*x + sqrt(var(ph_m))*ph_m + sqrt(var(ph_f))*ph_f + e

where y is the child phenotype, x is the true genetic score of the child, var(g) is the variance explained by the true genetic effect, ph_m and ph_f are the parental phenotypes, var(ph_m) and var(ph_f) are the variances explained by the parental phenotypes, and e is the residual error (explaining the rest of the variance).
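A sketch of phenotype (ii), continuing the genotype sketch above. The variance shares (0.2, 0.1, 0.1) and the parental-phenotype heritability of 0.3 are hypothetical placeholders; as the text notes, the simulated effect sizes are somewhat arbitrary.

```r
z <- function(v) as.numeric(scale(v))  # standardize so squared coefficients are variance shares

var_g <- 0.2; var_pm <- 0.1; var_pf <- 0.1   # hypothetical variance shares

# Parental phenotypes: own true genetic score plus environment/error
ph_mother <- sqrt(0.3) * z(g_mother %*% beta_true) + sqrt(0.7) * rnorm(n_fam)
ph_father <- sqrt(0.3) * z(g_father %*% beta_true) + sqrt(0.7) * rnorm(n_fam)

# Child phenotype (ii): direct effect plus maternal and paternal indirect effects
y <- sqrt(var_g)  * z(g_child %*% beta_true) +
     sqrt(var_pm) * z(ph_mother) +
     sqrt(var_pf) * z(ph_father) +
     sqrt(1 - var_g - var_pm - var_pf) * rnorm(n_fam)
```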
iii) Prenatal and postnatal indirect parental genetic effects

We simulate child phenotypes influenced by direct genetic effects and by prenatal and postnatal indirect parental genetic effects, extending the model in (ii) with an additional prenatal indirect maternal genetic effect.

iv) Indirect sibling genetic effects; v) Indirect sibling and parental indirect genetic effects

After simulating all sibling phenotypes with only direct effects or with direct and indirect parental genetic effects, we simulate indirect genetic effects operating among three siblings in each family: individual 1, a biological sibling, and an unrelated adopted sibling. First, we create a matrix of sibling effects in which every effect is of the same magnitude (all siblings have an equal effect on each other regardless of adoption status, an implicit assumption), with zeros on the diagonal. To account for feedback effects (e.g., sibling 1 influences sibling 2, who influences sibling 1; this changes the coefficients of a variable on its own errors), we subtract the sibling effect matrix from an identity matrix and take its inverse. We then take the matrix product of this matrix and the simulated sibling data to introduce the simulated mutual sibling effects into the data.
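The feedback construction can be sketched as follows, continuing the toy data above: phenotypes satisfying y = S y + y0 are solved as y = (I - S)^(-1) y0. The sibling effect size s = 0.1 is a hypothetical placeholder.

```r
s <- 0.1                                   # hypothetical mutual sibling effect
S <- matrix(s, 3, 3); diag(S) <- 0         # three siblings, zeros on the diagonal
M <- solve(diag(3) - S)                    # feedback-adjusted mixing matrix

y0 <- matrix(rnorm(n_fam * 3), ncol = 3)   # pre-sibling-effect phenotypes
y_sib <- y0 %*% t(M)                       # rows = families, columns = siblings
```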
vi) Assortative mating; vii) Assortative mating and parental indirect genetic effects

Genetic assortative mating occurs when individuals with similar phenotypes mate more frequently than would be expected under random mating, and these phenotypes are heritable. To simulate assortment, we re-create offspring genotypes and polygenic scores after matching parents together systematically (instead of randomly, as above). We first create phenotypes for the parents (based on the true genetic score plus noise), rank the mothers and fathers by phenotype, and match couples according to rank (i.e., mothers with higher phenotypic values match with fathers with higher phenotypic values). Since mating does not perfectly track phenotypic rank, we add noise to the ranking of mothers and fathers prior to matching, following a chosen phenotypic correlation. Offspring genotypes are then simulated as random draws from the matched couples' genotypes. Assortment is simulated to be of the same strength for adoptees' and non-adoptees' parents, and we simulate random placement by un-ranking adoptees before matching them to adoptive families.
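A sketch of the rank-matching step, continuing the toy data above. The noise level is a hypothetical stand-in for the 'chosen phenotypic correlation': larger noise yields a weaker realized spousal correlation.

```r
noise_sd <- 1.5                      # hypothetical: larger -> weaker assortment
ord_m <- order(ph_mother + rnorm(n_fam, sd = noise_sd))
ord_f <- order(ph_father + rnorm(n_fam, sd = noise_sd))

# Pair parents by (noisy) phenotypic rank, then re-draw offspring genotypes
g_mother_am <- g_mother[ord_m, ]; ph_mother_am <- ph_mother[ord_m]
g_father_am <- g_father[ord_f, ]; ph_father_am <- ph_father[ord_f]
cor(ph_mother_am, ph_father_am)      # realized spousal phenotypic correlation

g_child_am <- transmit(g_mother_am) + transmit(g_father_am)
```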
viii) Population stratification; ix) Population stratification and parental indirect genetic effects

Population stratification can be conceptualized as systematic differences in allele frequencies between sub-populations. These frequency differences cause confounding in genetic studies when phenotypes also differ between sub-populations. We simulate such sub-populations in both the GWAS discovery and target PGS analysis samples. We first create new genotypes in two groups, drawing upon two different sets of simulated minor allele frequency distributions. We also define a phenotypic difference between these two groups by including an 'environmental confounding' parameter, which is noise with a different mean for the two sub-populations. We then run a single GWAS in these two populations. We create phenotypes and polygenic scores (based on the GWAS results) in a target sample of families comprising the same two sub-populations present in the GWAS. Our simulation allows adoptees to be matched with adoptive parents both within and between sub-populations. We report results from a simulation with adoptees matched with adoptive parents within the same sub-population.
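A sketch of the stratification setup, continuing the toy data above; the 0.5 environmental mean shift and the 0.2 genetic variance share are hypothetical placeholders.

```r
n_half <- n_fam / 2
maf_a <- runif(n_snp, 0.1, 0.5)   # distinct MAF spectra for the two groups
maf_b <- runif(n_snp, 0.1, 0.5)
g_strat <- rbind(sapply(maf_a, function(p) rbinom(n_half, 2, p)),
                 sapply(maf_b, function(p) rbinom(n_half, 2, p)))

env_shift <- rep(c(0, 0.5), each = n_half)   # confounded environmental mean difference
y_strat <- sqrt(0.2) * z(g_strat %*% beta_true) + env_shift +
           sqrt(0.8) * rnorm(n_fam)
```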
Supplementary Note 5: Comparison of two implementations of the sibling design using simulation
In the sibling design presented by Selzam et al. (2019), indirect genetic effects are estimated by subtracting the within-sibling estimate from the between-sibling estimate (indexed using the average polygenic score for each sibling pair). However, the between-sibling effect is not necessarily the appropriate quantity to use 3. An alternative is to subtract the within-sibling estimate from an estimate of the population effect obtained in a separate regression analysis using population data and ignoring family clustering. This approach was used in a recent within-sibling GWA study 4.
To ensure that we contrast our direct genetic effects with the appropriate quantity for accurate estimation of indirect genetic effects, we used simulated data to assess the use of the between-sibling effect and the population effect. Results are presented below in Supplementary Figure 8. From these simulations, it appears that contrasting the direct effects with the between-sibling effects leads to an overestimation of indirect parental genetic effects, whereas contrasting direct effects with population effects results in accurate estimation of indirect genetic effects. Consequently, we use the latter approach in our main analyses and simulations; our model therefore differs slightly from the Selzam et al. analyses.
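The contrast between the two implementations can be sketched as follows on toy sibling data (hypothetical names and effect sizes). In this toy there is no true indirect effect, so both estimators should return values near zero; the overestimation of the between-sibling version emerges when indirect genetic effects are present, as in Supplementary Figure 8.

```r
set.seed(1)
n_fam <- 2000
sib <- data.frame(family = rep(seq_len(n_fam), each = 2),
                  pgs    = rnorm(2 * n_fam))
sib$outcome <- 0.3 * sib$pgs + rnorm(nrow(sib))
sib$pgs_btw <- ave(sib$pgs, sib$family)   # sibling-pair mean PGS
sib$pgs_wth <- sib$pgs - sib$pgs_btw      # within-pair deviation

fit      <- lm(outcome ~ pgs_wth + pgs_btw, data = sib)
b_within <- coef(fit)["pgs_wth"]          # direct genetic effect
b_btw    <- coef(fit)["pgs_btw"]
b_pop    <- coef(lm(outcome ~ pgs, data = sib))["pgs"]

c(indirect_between    = unname(b_btw - b_within),   # Selzam-style contrast
  indirect_population = unname(b_pop - b_within))   # contrast used in this study
```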
Also notable is that, whilst the Selzam et al. article (and 5) uses a different term, passive gene-environment correlation, the effect being estimated is a parental indirect genetic effect. Passive gene-environment correlation refers to how the genes that parents pass on to their children may also influence how they provide the rearing environment (Plomin et al. 1977).
Supplementary Note 6: Comparison of sibling, adoption, and nontransmitted allele designs in presence of simulated components and biases
Using the simulated data, we compare the behaviour of the three designs used in our study to estimate direct and indirect genetic effects. For simplicity's sake, our simulations consider one PGS (instead of both the Cog and NonCog PGS). Additionally, we compare to a fourth design, which we call "trios" in Supplementary Figure 9, in which the phenotype is simply regressed on the child and parental PGS. As the simulation results show, this simple approach gives identical estimates to the non-transmitted allele design, which also uses trios but requires prior identification of segments that are shared and non-shared between the generations. Supplementary Figure 9 (an extended version of Figure 3 in the main text) displays the simulation results. The following text discusses the results, focusing on the main estimates of interest: indirect genetic effects of parents.
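For completeness, the 'trios' regression described here is just a multiple regression of the child phenotype on the child, mother, and father PGS; a sketch continuing the simulation code in Supplementary Note 4 (g_child, g_mother, g_father, beta_gwas, and y are defined there, and all names are hypothetical):

```r
pgs_child  <- as.numeric(scale(g_child  %*% beta_gwas))
pgs_mother <- as.numeric(scale(g_mother %*% beta_gwas))
pgs_father <- as.numeric(scale(g_father %*% beta_gwas))

fit <- lm(y ~ pgs_child + pgs_mother + pgs_father)
coef(fit)   # child term ~ direct effect; parental terms ~ indirect effects
```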
Prenatal parental indirect genetic effects
We see prenatal effects as a component of interest, rather than as a bias, in estimates of indirect genetic effects. Nonetheless, for consistency with the rest of the simulations which do not consider prenatal indirect genetic effects, the red dashed line in Supplementary Figure 9 indicates the true postnatal effect only. Simulation results show that the sibling-and trio-based designs capture indirect genetic effects occurring in both prenatal and postnatal periods. In contrast, the adoption design only captures postnatal indirect genetic effects. This is because, for both adoptees and non-adopted individuals, the prenatal environment is provided by the biological mother, so estimated polygenic score-phenotype associations for both adoptees and non-adopted individuals contain prenatal maternal indirect genetic effects. Consequently, computing the indirect genetic effect as the population effect of the polygenic score (β in nonadopted individuals) minus the direct genetic effect (β for adoptees) means that prenatal effects are cancelled out. This result suggests that prenatal indirect genetic effects could partially explain lower estimates of indirect genetic effects from the adoption design compared to the other designs.
Sibling indirect genetic effects

We find that positive sibling effects result in upwardly biased estimates of indirect parental genetic effects. This bias is considerably larger for the sibling design than for the adoption and trio designs. Bias in the sibling design is likely to arise because positive sibling effects increase the similarity of siblings, reducing the effect of within-sibling polygenic differences 6,7. Bias in the non-transmitted allele design likely arises because non-transmitted alleles are not only shared with parents, but also partially with (full) siblings, such that βNT might capture sibling as well as parental indirect genetic effects. It is interesting that sibling effects inflate adoption-based estimates despite the fact that adoptees are not genetically related to their siblings. These simulation results suggest that higher estimates of parental indirect genetic effects in the sibling design than in the adoption and non-transmitted allele designs would be evidence of sibling indirect genetic effects. In our empirical data, we do not find differences between sibling- and trio-based estimates of indirect parental genetic effects. Along with our sensitivity analyses, this suggests an absence of sibling genetic effects on educational outcomes in our datasets.
Assortative mating

In our main scenario, which includes substantial error in the GWAS SNP effects used to calculate polygenic scores (so that the correlation of GWAS SNP effects with true SNP effects is on average 0.45), we found that the bias from assortative mating in the indirect genetic effect estimate was lower in the adoption design than in the non-transmitted allele and sibling designs. We also tested other scenarios with lower error in the SNP effects used to make the polygenic score. In the scenario with assortative mating but no indirect effects, lower error in the polygenic scores led to decreased bias in estimates from the NT and sibling designs. In the scenario with both assortative mating and indirect effects, with decreasing error in SNP effects the sibling estimate is consistently biased, the adoption estimate is more biased, and the NT estimate is less biased. Results of other simulations did not change according to the error. We present in the main manuscript the initial results with substantial error as the most conservative example. In real data, we expect this bias, due to the combination of error in effect sizes and non-random mating, to decrease as GWAS sample sizes increase.
Bias in the sibling design likely arises because the population effect contains assortative mating while the within-sibling effect does not. Bias in the non-transmitted allele design due to assortative mating, which arises from correlations between parental alleles, is described in Kong et al. 2018 8. Interestingly, the bias in the adoption design from assortative mating is zero in the absence of a parental indirect genetic effect, but slightly above zero when a parental indirect genetic effect is also specified. In other words, the presence of parental indirect genetic effects is required for assortative mating to bias estimates from the adoption design. We simulated the same strength of assortative mating for the parents of both adopted and non-adopted individuals, so the result cannot be due to elevated assortment in the latter group (leading to residual assortment in the indirect effect estimate when calculating βnon-adopted - βadopted). Such differences could exist in the real data, but there is scarce and inconsistent evidence regarding assortment in biological parents of adoptees versus other parents 9,10. Overall, the results suggest that assortative mating could explain lower estimates of indirect genetic effects from the adoption design compared to the other designs, but this may depend on the level of noise in the GWAS effects.
Population stratification
Simulation results show that estimates of parental indirect genetic effects based on the adoption design capture less bias from population stratification than sibling- and trio-based designs. In the sibling design, the parental indirect genetic effect is estimated as the population effect minus the direct within-family effect of the polygenic score. This means that the indirect genetic effect is likely to be inflated by population stratification, as stratification is captured in the population effect but not the within-family effect. Likewise, the effects of the non-transmitted allele PGS are influenced by population stratification, so that indirect genetic effect estimate is also inflated. In contrast, population stratification only biases indirect genetic effect estimates from the adoption design to a small extent. Assuming that population stratification is similar in adoptees and non-adopted individuals, its effect will cancel out when estimating the indirect genetic effect as βnon-adopted - βadopted. The assumption of equal population stratification and assortative mating bias in the adopted and non-adopted groups cannot be tested due to the lack of parental data in UKB, but it is bolstered by the simulation results and by the fact that both adoptees and non-adopted individuals are of British ancestry. Our simulation results suggest that population stratification partly explains the lower estimates of indirect genetic effects from the adoption design compared to the other designs in our empirical study.
Supplementary Figures
Supplementary Figure 1
Supplementary Figure 4. Population genetic effects
Estimates of population effects of the NonCog (orange) and Cog (blue) PGS for every condition, grouped by method. X-axis ticks indicate the sample (NTR, TEDS, and UKB) and outcome (CITO is age-12 achievement in NTR, 12yo is age-12 teacher-rated achievement in TEDS, GCSE is age-16 achievement in TEDS, and EA is educational attainment); bars are 95% CIs. Moving from left to right on the x-axis, sample sizes were: 1631 (Siblings CITO), 3163 (Siblings EA), 2862 (Siblings 12yo), 4796 (Siblings GCSE), 39500 (Siblings UKB EA), 6407 adopted and 6500 non-adopted (Adoption EA), 1526 (Non-transmitted CITO), and 2534 (Non-transmitted EA). Full estimates are shown in Supplementary Data 3.
Supplementary Figure 5. Ratios of indirect effects to population effects
Estimates of the ratio of the indirect effects to the population effects of NonCog (orange) and Cog (blue) PGS for every condition grouped by method. X axis ticks indicate the sample (NTR, TEDS and UKB) and outcomes (CITO is age 12 achievement in NTR, 12yo is age 12 teacher rated achievement in TEDS, GCSE is age 16 achievement in TEDS, and EA is educational attainment); bars are 95% CIs. Moving from left to right on the x-axis, the sample sizes used to estimate the indirect and total effects on which these ratios are based were: 1631 (Siblings CITO), 3163 (Siblings EA), 2862 (Siblings 12yo), 4796 (Siblings GCSE), 39500 (Siblings UKB EA), 6407 adopted and 6500 non-adopted (Adoption EA), 1526 (Non-transmitted CITO) and 2534 (Non-transmitted EA). Full estimates are in Supplementary Data 3.
Supplementary Figure 6. Effect of the NonCog and Cog polygenic score on educational outcomes in monozygotic and dizygotic twins in TEDS and NTR
A. Effect of NonCog PGS on educational outcomes in MZ vs DZ twins. B. Effect of Cog PGS on educational outcomes in MZ vs DZ twins. The Y axis represents the beta coefficient from the regression of educational outcomes on the NonCog/Cog PGS. Results for dizygotic twins are shown in blue and for monozygotic twins in green. Bars represent the 95% CIs of the estimates. X axis ticks indicate zygosity (MZ: monozygotic twin; DZ: dizygotic twin) and outcomes (age12 is age 12 teacher rated achievement in TEDS, age16 is age 16 achievement in TEDS, EA is educational attainment in NTR and CITO is age 12 achievement in NTR). Moving from left to right on the x-axis, sample sizes were 2709, 546, 2709, 546, 865, 818, 1369, 1600. Values are in Supplementary Data 8.
Supplementary Figure 7. Estimated effect of NonCog and Cog PGS on Educational Attainment in UK Biobank depending on the number of siblings (adopted or full-siblings) of the individual
We examine the effect of the NonCog and Cog PGS in non-adopted and adopted individuals (the latter group providing a control scenario, since they are not genetically related to their sibling(s)). Blue dots represent the estimates of the Cog polygenic score effect (beta) on educational attainment; bars are 95% CIs. NonCog estimates are in orange. Fitted linear regression lines across these estimates are drawn in blue and orange. Moving from left to right on the x-axis, sample sizes were 2357, 1725, 709, 336, 175 for adoptees, and 830, 2111, 1599, 926, 436 for non-adopted individuals. Values are in Supplementary Data 9.
Supplementary Figure 8. Estimation of the direct and parental indirect genetic effects in simulated data with different implementations of the sibling design
Estimates of direct and parental indirect genetic effects from two implementations of the sibling design, based on data simulated to include indirect genetic effects or not. Boxplots of 100 replicates based on a simulated sample of 20,000 families. The center line represents the median, the box limits are the 1st and 3rd quartiles, and the whiskers reach the maximum value within 1.5 times the interquartile range. Outlying values are not represented. For clarity, the red line benchmarks the true simulated postnatal parental indirect effect, although we note that prenatal parental genetic effects are a component rather than a bias of the parental indirect genetic effect.
Supplementary Figure 9. Simulation results: Comparison of sibling, adoption, and non-transmitted allele designs in the presence of components and biases
Estimates of direct and parental indirect genetic effects from the three designs, based on data simulated to include different components and biases. Boxplots of 100 replicates based on a simulated sample of 20,000 families. The center line represents the median, the box limits are the 1st and 3rd quartiles, and the whiskers reach the maximum value within 1.5 times the interquartile range. Outlying values are not represented. For clarity, the red line benchmarks the true simulated postnatal parental indirect effect, although we note that prenatal parental genetic effects are a component rather than a bias of the parental indirect genetic effect.
Plasma Adrenomedullin Level in Egyptian Children and Adolescents with Type 1 Diabetes Mellitus: Relationship to Microvascular Complications
Background Adrenomedullin (AM) is known to be elevated in different clinical situations including diabetes mellitus (DM), but its potential role in the pathogenesis of vascular complications in diabetic children and adolescents remains to be clarified. Hence, this study aimed to assess plasma adrenomedullin levels in children and adolescents with type 1 DM and to correlate these levels with metabolic control and diabetic microvascular complications (MVC). Methods The study was performed in the Diabetes Specialized Clinic, Children's Hospital of Ain Shams University in Cairo, Egypt. It included 55 diabetic children and adolescents (mean age 13.93 ± 3.15 years) who were subdivided into 40 with no MVC and 15 with MVC. Thirty healthy, age- and sex-matched subjects were included as a control group (mean age 12.83 ± 2.82 years). Patients and controls were assessed for glycosylated hemoglobin (HbA1c) and plasma adrenomedullin, assayed using an ELISA technique. Results Mean plasma AM levels were significantly increased in patients with and without MVC compared to the control group (110.6 pg/mL, 60.25 pg/mL and 39.2 pg/mL, respectively) (P < 0.01), with higher levels in those with MVC (P < 0.05). Plasma AM levels were positively correlated with both duration of diabetes (ρ = 0.703, P < 0.001) and glycemic control (HbA1c) (ρ = 0.453, P < 0.001). Conclusion Higher plasma AM levels in diabetics, particularly in those with MVC, and their correlation with diabetes duration and metabolic control may reflect a role for AM in diabetic vasculopathy in the pediatric age group.
Background
Adrenomedullin (AM), a ubiquitous regulatory peptide with diverse actions, is expressed in many tissues throughout the body, including the adrenal medulla, endothelial [1] and vascular smooth muscle cells [2], the myocardium and the central nervous system [3,4]. Adrenomedullin is widely synthesized and secreted by most cells in the body [5]. It controls proliferation, differentiation and migration of cells [6]. Adrenomedullin is able to act as an autocrine, paracrine, or endocrine mediator in a number of biologically significant functions [7]. It plays a critical role in several diseases such as cancer, diabetes, and cardiovascular and renal disorders [8,9]. It has vasodilator and blood pressure lowering properties and plays an important role in maintaining electrolyte and fluid homeostasis [10]. Endogenous AM may protect from organ damage by inhibiting oxidative stress production [11], and raised AM levels correlate with increased oxidative stress [12]. Moreover, evidence has accumulated that AM possesses a clear-cut proangiogenic effect under both physiological and pathophysiological conditions [13][14][15]. Adrenomedullin is involved in the insulin regulatory system [16][17][18] and is elevated in plasma from patients with pancreatic dysfunctions such as type 1 or type 2 diabetes and insulinoma [18]. Adrenomedullin might play a role in the pathogenesis of diabetic vasculopathy in type 1 [19] and type 2 diabetes [20]. However, to the best of our knowledge, there are no published data about AM levels in type 1 diabetic children and adolescents. Hence, this study aimed to assess plasma adrenomedullin levels in type 1 diabetic children and adolescents and to correlate these levels with metabolic control and diabetic microangiopathy.
Subjects
This case-control study included 55 consecutive type 1 diabetic children and adolescents recruited from the Diabetology Clinic, Children's Hospital, Ain Shams University, Cairo, Egypt during the period from May 2004 to May 2006. Those with liver disease, renal failure or congestive heart failure were excluded [19]. According to the presence or absence of MVC, patients were classified into two groups: Group 1 comprised 40 diabetic patients without MVC, and Group 2 comprised 15 diabetic patients with MVC (retinopathy, neuropathy and/or nephropathy). Thirty apparently healthy, age- and sex-matched children and adolescents were included as a control group. Informed consent was obtained from patients' parents or their legal guardians after study approval by the Local Ethical Committee, Ain Shams University (FWA00006444).
Methods
Patients were subjected to careful history taking, with emphasis on onset, duration, and frequency of diabetic ketoacidosis (DKA) or hyperglycemic attacks, and to thorough clinical examination with special emphasis on signs of diabetic complications. Fundus examination was performed by an ophthalmologist after maximal pupillary dilatation, using an indirect ophthalmoscope, to identify diabetic retinopathic changes [21].
Laboratory investigations
Glycemic control was assessed by calculating the mean glycosylated hemoglobin (HbA1c) over the last year using a high performance liquid chromatography (HPLC) technique [22]. Patients were considered under optimal glycemic control when their HbA1c was < 7.5% [23]. Microalbuminuria was assayed using the SERA-PAK immuno-microalbumin kit (Bayer Corporation, Benedict Ave, Tarrytown, NY, USA). Persistent microalbuminuria was defined when two of three samples showed a urinary albumin excretion rate of 30-300 μg/mg creatinine [24]. Two ml of venous blood were collected into an EDTA tube and centrifuged for 15 minutes, and plasma samples were stored at -70°C until assay. Plasma adrenomedullin level was assessed by an ELISA technique using the Adrenomedullin (Human) kit (EIA-3418; DRG International Inc., USA).
Statistical Analysis
Analysis of data was performed using SPSS (version 15). Comparisons between two groups of patients were made using Student's t-test for parametric measures and the Wilcoxon signed-rank test (Z value) for nonparametric measures. Spearman's rank correlation coefficient was used to correlate two quantitative variables. A P value < 0.05 was considered the cut-off for significance.
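As a reproducibility aid, a minimal sketch of two of the analyses named above follows, using Python/SciPy; the numbers are synthetic stand-ins, not the study measurements.

```python
# Synthetic stand-in values (NOT the study data), showing the tests named above
# as implemented in SciPy.
from scipy.stats import spearmanr, ttest_ind

am_pg_ml   = [110.6, 95.2, 60.3, 48.7, 122.4, 70.1, 55.9, 101.3]  # plasma AM
duration_y = [9.0, 7.5, 3.2, 2.1, 11.0, 5.4, 2.8, 8.6]            # diabetes duration

rho, p = spearmanr(am_pg_ml, duration_y)       # rank correlation, as in the paper
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")

with_mvc    = [110.6, 122.4, 101.3]            # AM in patients with MVC (made up)
without_mvc = [60.3, 48.7, 70.1]               # AM in patients without MVC (made up)
t, p2 = ttest_ind(with_mvc, without_mvc)       # parametric two-group comparison
print(f"t = {t:.2f}, p = {p2:.4f}")
```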
Results
Diabetic patients (n = 55) and controls were comparable as regards age, gender and BMI. Age, duration of diabetes and mean HbA1c were significantly higher in patients with MVC (n = 15) compared to patients without MVC (n = 40) (P < 0.01, P < 0.01, P < 0.05, respectively) and to controls (Table 1). Compared to healthy subjects, each patient group displayed significantly increased AM levels (P < 0.05), with higher values in diabetics with MVC than in those without (P < 0.05) (Tables 1 and 2). A significant positive correlation between AM levels and both duration of diabetes (ρ = 0.703, P < 0.001) and HbA1c (ρ = 0.453, P < 0.001) was observed among diabetic patients (n = 55), with a nonsignificant correlation with age (ρ = 0.09, P = 0.51).
Discussion
In the present study, diabetic patients showed a highly significant increase in plasma AM levels compared to controls. These results are in agreement with Hayashi et al. [25], who reported a significant increase in plasma AM in hyperglycemic patients compared with normal volunteers. However, Kinoshita et al. [26] found that when patients with nephropathy were excluded, plasma levels of AM were not significantly different between elderly diabetic patients and healthy individuals.
Our patients with MVC displayed higher AM levels compared to those without. Similar results were reported in other studies [19,20,27], although these reported no significant differences among patients with nephropathy, neuropathy or retinopathy (P > 0.05). In the current study, the highest AM levels were observed in diabetic patients with retinopathy and nephropathy. The higher plasma AM levels in our diabetic patients with microalbuminuria compared to normoalbuminuric patients differ from Garcia-Unzueta et al. [19], who reported higher levels of AM and cAMP in patients with renal insufficiency but normal levels in microalbuminuric patients. Furthermore, adult type 1 diabetic patients with renal insufficiency had higher levels of plasma AM than diabetics with other complications, and the plasma AM increase was proportionate to kidney function deterioration [19]. Adrenomedullin can exert a wide range of vascular actions (mostly protective). These include endothelium-dependent and -independent vasodilatation, antioxidative stress, stimulation of endothelial nitric oxide production, and antiproliferation of vascular smooth muscle cells and adventitial fibroblasts [28]. Taken together, the elevation of plasma adrenomedullin level in type 1 diabetes (especially in the presence of nephropathy) could participate in the mechanism against progression of vascular damage [28].
The highest individual plasma AM values, recorded in our diabetics with retinopathy, are similar to previous studies [19,29]. Adrenomedullin may also play a role in the neovascularization process that occurs after retinal ischemia. Increased AM levels in the vitreous humor of patients with proliferative vitreoretinopathy [30,31] and diabetic retinopathy [32] suggest the involvement of AM as a possible associated factor in the course of vascular and proliferative retinal diseases.
The increase in AM levels with longer duration of diabetes is consistent with Garcia-Unzueta et al. [19], who reported a similar relationship, suggesting that the elevation of AM levels is a late phenomenon due to endothelial dysfunction. Also, there was a significant correlation between AM levels and HbA1c, with higher HbA1c levels among diabetics with MVC. Similarly, Caliumi et al. [33] reported that increased circulating AM correlates with poor glucose metabolic control in type 2 diabetics. The elevated plasma AM level originates from vascular AM expression induced by hyperglycemia through a protein kinase C-dependent pathway [34].
Our patients with MVC, who displayed higher AM levels, were older than those without. However, no significant correlation was observed between AM and age. Similarly, Hayashi et al. [25] reported no change in AM levels with age. Older age, longer duration of diabetes and puberty are known risk factors for MVC [35]. It is still uncertain whether the increased release of AM in diabetes is a compensatory mechanism or a coincident event. The precise role of AM in the pathogenesis of diabetic complications remains to be elucidated [36].
Conclusions
The increase in plasma adrenomedullin levels in type 1 diabetic children and adolescents, and its correlation with disease duration and metabolic control (two of the most important independent risk factors for the occurrence of MVC), may indicate a role for AM in the pathogenesis of diabetic microangiopathy beginning in childhood.
Hybrid Client Side Phishing Websites Detection Approach
Phishing steals personal or credential information by luring victims to a forged website similar to the original site and urging them to enter their information in the belief that the site is legitimate. The number of internet users falling victim to phishing attacks is increasing, and the attacks themselves have become more sophisticated. In this paper we propose a client-side solution to protect against phishing attacks: a Firefox extension, integrated as a toolbar, that checks whether the recipient website is trusted by inspecting the URL of each requested webpage. If the site is suspicious, the toolbar blocks it. Every URL is evaluated according to features extracted from it. Three heuristics (primary domain, sub domain, and path) and Naïve Bayes classification using four lexical features, combined with page rankings received from two different services (Alexa and Google PageRank), are used to classify the URL. The proposed method requires no server changes and protects internet users from fraudulent sites, especially from phishing attacks based on deceptive URLs. Experimental results show that our approach can achieve a 48% accuracy ratio using a test set of 246 URLs, and an 87.5% accuracy ratio, tested over 162 URLs, when the NB addition is excluded. Keywords—Phishing Attacks; Browser Plugin; Anti Phishing; Security; Firefox
I. INTRODUCTION
Phishing is an online identity theft in which attackers use social engineering to appear as a trusted identity and gain valuable information. Phishing exploits human vulnerabilities rather than software vulnerabilities. It targets many kinds of confidential information, including usernames, passwords, social security numbers, credit card numbers, bank accounts, and other useful personal information.
In the past few years we have seen an increase in the number of phishing attacks, with many variants of techniques targeting every sector of society. As reported by the Anti-Phishing Working Group (APWG) in its Phishing Activity Trends Report, "Payment Services continued to be the most-targeted industry sector throughout 2014". Many phishing techniques are sophisticated, and it is very hard for internet users to defend against them. Damage caused by phishing ranges from minor inconvenience to substantial financial loss. According to the statistics provided by the APWG in its Phishing Activity Trends Report [1], "Overall phishing activity was up by 20 percent in the 3rd quarter of 2013 from the previous quarter", and a 2008 Cyveillance whitepaper reported phishing attacks against more than 2,000 brands across 30 countries, costing these organizations from thousands to millions of dollars per attack.
Phishing techniques usually involve impersonating legitimate web sites so that victims submit personal information directly to the phisher, or using malicious software that sends the victim's data without his knowledge. In a typical phishing attack, the victim receives a fraudulent email asking him to visit a web site and confirm his information within a given time. The email provides a legitimate-looking URL which directs to a spoofed web site where victims then enter their information.
Several phishing solutions exist, such as blacklists (databases of known phishing sites), whitelists, community ratings, analysis of URLs and webpage content (images and text), machine learning techniques, and various heuristics to detect phishing attacks. This paper makes the following contribution: we use URL structure, four lexical features and page ranking to capture phishing attacks that depend on deceptive links. Every URL is evaluated according to three heuristics (sub domain, primary domain, and path) and four lexical features extracted from the URL, combined with page rankings received from ranking services. The proposed method requires no server changes and protects against phishing attacks based on fraudulent URLs. This solution uses resources such as search engine suggestions and third-party services (Alexa and Google PageRank).
II. RELATED WORK
There are several methods that can be used to identify a web page as a phishing site, including whitelists/blacklists, URL- and heuristic-based approaches, similarity assessment techniques, and community ratings. In this section we will go through some of these solutions.
The whitelist/blacklist-based approach is one of the most commonly used. A whitelist contains URLs of legitimate sites while a blacklist contains verified phishing sites. It is effective against phishing attacks and generates a close-to-zero false positive rate, but it requires regular updating and is vulnerable to zero-day attacks. Many anti-phishing technologies rely on this approach. For example, Internet Explorer has a built-in blacklist-based anti-phishing solution provided by Microsoft servers, and Google's Safe Browsing extension uses Google's global blacklist and whitelist. Content-based solutions verify web pages by examining their contents (e.g. HTML, links, images, and text) against previously defined characteristics. CANTINA [2] is an example of this approach: it uses five words taken from the website, selected via Term Frequency-Inverse Document Frequency (TF-IDF), as a signature, and submits them to Google. If the site's URL is in the top results it is legitimate; otherwise it is not. CANTINA+ [3], an enhanced version of CANTINA, added new features and was evaluated on a larger corpus to achieve better results. The new approach extended some of the previous features and combined them with ten more, and this time the model was built using state-of-the-art machine learning algorithms instead of a simple linear classifier. However, both suffer from the time consumption caused by querying search engines.
A third approach was developed to improve authentication between the user and the server. Authentication means that before the user enters login information he needs to authenticate himself to that page; it also means that the particular page authenticates to the user that it is the real page (called two-way or mutual authentication). Some anti-phishing techniques provide mutual authentication to prevent phishing attacks. This addresses the user's inability to authenticate the website he is communicating with: the typical login method authenticates the user to the server side but not the opposite, which leaves attackers a chance to exploit this failure. The success of mutual authentication techniques depends on the way both the client and the server are authenticated. Some existing solutions are image-based, like the one provided by the Confident Technologies company [4], which is based on providing a number of categories instead of specific pictures and letting the user choose from them in the registration process. At login, the server generates a grid of pictures and asks the user to choose the pictures matching the categories, in the order chosen at registration. As soon as the server fails to provide the right grid of images, or the user fails to choose the correct images, it is treated as a security warning. Unfortunately, this increases the user's burden by relying on the user to memorize more than one category in a specific order besides memorizing a password. It also requires changes on the server side and to login mechanisms. Other solutions use image-based user authentication to replace traditional methods (e.g. passwords and security questions); this may provide stronger authentication but does not solve the server-side authentication problem.
PwdHash [5] proposed a solution to strengthen web password authentication. It implements password hashing with the domain name as a salt, keyed by the password itself. The server receives the password after hashing, which makes it useless if received by a phishing website. As with many other solutions, this approach requires the user to remember to use it every time he is about to enter a password. Dhamija et al. [6] provide an authentication scheme where the password is entered into a trusted window and the user recognizes one image to perform visual matching and authenticate the received content. Images are generated by the server and are unique for each transaction. The drawback of this solution is the large amount of changes required on the server side.
BogusBiter [7] solves the problem from another point of view. It focuses on the stage after a phishing attack has occurred and the user has submitted his information to the wrong recipient. It automatically generates and sends a large number of fake credentials to the phishing site to hide the real one. Unfortunately, BogusBiter cannot work alone; it needs to be activated by the web browser or a third-party toolbar that detects phishing sites.
Aravind et al. [8] propose an anti-phishing framework which uses visual cryptography for authentication. An image is decomposed into two shares, one stored with the user and the other in the website's database. An image captcha is created from those two shares at login time. The proposed method succeeds in authenticating both the user and the website.
Web Wallet [9] is a sidebar login box displayed when a user requests login through a trusted path. It is responsible for preventing users from submitting their sensitive data directly to any website before that site is checked. The developers of this sidebar used negative visual feedback to address the vulnerability of spoofing the sidebar, and they provide cards for most of the user's sensitive data, not only username and password.
TrueWallet [10] is another wallet-based approach, which works as a proxy to manage user login and protect passwords and credentials. It runs isolated from the browser, an advantage over the Web Wallet approach, making it more secure and difficult to attack. TrueWallet uses standard SSL-based authentication with some modification on the server side. This approach has two disadvantages: first, it is vulnerable to DNS-spoofing attacks; second, the user needs to be trained to rely only on this method to fill in any form.
One area of work relies on URL features to detect phishing webpages. Khonji et al. [11] propose a technique for detecting phishing websites by lexically analyzing suspect URLs based on a novel heuristic phishing feature. This technique targets a subset of phishing attacks where the victim name is included in the URL. The approach achieved 63% and 83% true positive rates for loose and strict modes, respectively. Whittaker et al. [12] present the design and evaluation of a large-scale machine learning based classifier. The proposed classifier evaluates the page according to its URL, content, and host information. The dataset used in the training process consists of a noisy dataset of millions of samples. The evaluation concludes with more than 90% of phishing pages correctly identified.
An approach developed by Le et al. [13] identifies phishing targets using only lexical features. The authors used an online method, Adaptive Regularization of Weights, for classifying URLs. Analysis showed that this methodology achieved high classification accuracy comparable to full-featured approaches. An approach relying on 23 features derived from URL structure, lexical features, and the website's brand name is proposed by Huang et al. [14]. These features feed the SVM-based classifier used to inspect each requested URL. The evaluation was done using three datasets containing more than 12,000 URLs and showed that the solution can obtain 99% accuracy. Blum et al. [15] have proposed a method that exploits the URL's lexical features, fed to a confidence-weighted algorithm, to flag suspicious URLs. This method uses a large lexical model trained in an online fashion, which makes it capable of detecting zero-hour threats. Zhang et al. [16] proposed a different, repository-based method to extract features, with a statistical machine learning algorithm, avoiding the computational complexity of URL-based methods. This method succeeds in identifying phishing sites with more than 93% accuracy.
Nguyen et al. [17] presented a heuristic-based algorithm that uses characteristics of the URL combined with third-party services (e.g. PageRank), giving the URL a major role in phishing detection. Another classifier, produced as a toolbar (PhishShark), is heuristic-based only, combining URL and HTML features, and led to promising results.
Finally, we conclude with some existing toolbars [18] built to prevent phishing attacks. Netcraft is a Mozilla browser plug-in that displays the host location and a risk rating of the accessed site. Users can report sites to Netcraft, which validates them and adds them to its blacklist database if they are phished. TrustWatch is a toolbar for Internet Explorer that checks the URL against a blacklist database and displays its domain name. Searching a blacklist is a time-consuming process, since blacklists grow continuously, and they are vulnerable to zero-day attacks. SpoofGuard is an anti-phishing Internet Explorer plug-in. It examines page characteristics such as images, links, and domain name against common features extracted from phishing sites to decide whether the page is spoofed or not.
III. PROPOSED APPROACH
Our system is inspired by the solutions proposed by Nguyen et al. [18] and Xiaoqing et al. (GU Xiaoqing, 2013). It combines both the heuristic-based approach and an NB classifier. In the Nguyen et al. solution, URL-related features and page ranks are used to classify each website. The Xiaoqing et al. approach depends on two phases: the first is an NB classifier which uses four lexical features to decide whether a URL is phishing, suspicious, or legitimate; the second phase uses an SVM classifier to parse the webpage against some features. Their system enters the second phase only if the URL is classified as suspicious in the first phase. In our proposed system we combined the first approach with the first phase of the second approach, without entering its second phase.
A. System Model
Our system model consists of seven main modules, as illustrated in Figure 1.
Receiving URL Module
The system obtains the requested URL from the browser. The output of this module is the page URL, a fundamental input to most of the system's modules.
Scoring Module
In this module, heuristics derived from modules B and C are used as input and their values are calculated as output. As a result, the site is considered phishing if all calculated values are negative, and legitimate if they are all positive.
NB classifier Module
This module is responsible for classifying a URL with a classification model developed in the training process. The features used by the classification system are as follows. The first is checking whether the URL contains an IP address, because this method is used by phishers to hide the owner of the site. Another feature examines the presence of a large number of dots separating the hostname: phishers tend to use more dots in their URLs to impersonate the legitimate look of a URL, because there are no restrictions on the number of dots that can be used in sub domains. Checking the URL for special symbols such as "@" or "-" is another feature, because many phishing URLs are modified using these symbols, which makes it possible to write URLs that appear legitimate but actually lead to different pages. Finally, URLs corresponding to legal websites usually do not have a large number of slashes [19]; as a result, a URL that contains a large number of slashes is considered phishing. The classifier goes through two phases, training and testing. The training phase builds the classifier by calculating the probabilities that a given webpage belongs to one of two classes (phishing and legitimate). The testing phase examines the ability of the classifier to label real web pages with the correct class.
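A hedged sketch of this module follows: the four lexical features described above, extracted from a URL and fed to a Bernoulli Naïve Bayes classifier. The dot and slash thresholds and the tiny training set are illustrative guesses, not values taken from the paper.

```python
# Hedged sketch: the four lexical features described above, fed to a Bernoulli
# Naive Bayes classifier. The dot/slash thresholds and the training URLs are
# illustrative guesses, not values taken from the paper.
import re
from urllib.parse import urlparse
from sklearn.naive_bayes import BernoulliNB

def lexical_features(url):
    host = urlparse(url).netloc
    return [
        int(bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host))),  # host is an IP address
        int(host.count(".") > 3),                                 # many hostname dots
        int("@" in url or "-" in host),                           # deceptive symbols
        int(url.count("/") > 5),                                  # many slashes
    ]

train_urls = ["http://192.168.0.1/login",
              "http://paypal.com.secure-x.tk/a/b/c/d/e/f",
              "https://www.example.com/",
              "https://en.wikipedia.org/wiki/Phishing"]
labels = [1, 1, 0, 0]                                             # 1 = phishing

clf = BernoulliNB(alpha=1.0).fit([lexical_features(u) for u in train_urls], labels)
print(clf.predict([lexical_features("http://bank-0f-america.com/@verify")]))  # -> [1]
```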
Calculating System Value Module
In this step, each heuristic is given a weight obtained by a classifier. After that, the system value is calculated using this equation:

VS = Σ_{i=1..6} (heuristic_i value) × (heuristic_i weight)
Labeling Module
This module takes the system value and compares it to a threshold to give the system output, which is the URL's final label. As a result, the user may proceed safely or be warned about the website.
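A minimal sketch of the scoring and labeling steps together follows; the weights, heuristic values, and default threshold below are invented for illustration, whereas the paper derives its weights experimentally.

```python
# Minimal sketch of scoring plus labeling: a weighted sum of heuristic values
# compared against a threshold. The weights and example values are invented.
HEURISTIC_WEIGHTS = [1.5, 1.0, 1.0, 0.5, 0.5, 1.0]   # one weight per heuristic (hypothetical)

def system_value(heuristic_values):
    return sum(v * w for v, w in zip(heuristic_values, HEURISTIC_WEIGHTS))

def label_url(heuristic_values, threshold=0.0):
    # Positive evidence pushes toward "legitimate", negative toward "phishing".
    return "legitimate" if system_value(heuristic_values) >= threshold else "phishing"

print(label_url([1, -1, 1, 1, -1, 1]))   # mixed signals; the weights decide
```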
B. Structured Design
This shows the data exchanged between system components. As Figure 2 shows, the URL is the main piece of data: most of the major components and processes need the URL value as input to produce their results. In most cases the URL also needs to be decomposed into three parts (sub domain, primary domain, and path), which is the responsibility of the "Dividing URL" process. Process two receives the URL parts and returns suggestions for each part separately. The suggestion values are used by processes three and four: process three checks them against a list of popular phishing targets and returns yes or no, while process four produces the edit distance value between each suggested word and its corresponding URL part. Process six is the major part of the system, since all of the data produced by the other processes is used here to calculate the final system result. Process seven is the last process in the system; it communicates with only one process to receive the result value and compares it with a predefined threshold to make the system decision.
C. Proposed Algorithm
The pseudo code of our proposed algorithms to detect phishing websites is described below.
ALEXA RANK
This is a service from the Amazon Company, available since 1996, which gives each page a value computed over three months on the Web. An increase in this value is a good indicator. The value depends on two things: first, the number of unique users who visited the site; second, how many URLs link to the site; an increase in URLs leading to a site will increase its value (Alexa API). This service serves our project in detecting phishing sites, because phishing sites have few visitors and linking URLs compared to popular websites. Also, phishing sites usually have a short life cycle, which helps to differentiate between legitimate and phishing sites.
PAGE RANK
This is a service from Google. When Google needed to improve web search by giving the best results to searchers, they introduced a value for each page (Karch). High values depend on how many URLs link to the site; the value also depends on the domain age, with older domains getting higher values (Strickland, 2006). The proposed approach uses this value as one of the factors that affect the decision about whether the site is phishing or not.
SUGGESTIONS
When a user goes to "Google.com" and types a word, a drop-down list suggests many words related to the typed word. The suggestions depend on word popularity in searches. And when a misspelled word is entered, the famous "Did you mean?" prompt appears, based on common spellings (Autocomplete).
Phishers try to make a phishing URL similar to popular sites by adding some letters, removing others, or even substituting them with different letters, to trick users into believing it is their targeted site. So, we use Google suggestions by taking the suspect URL and getting the related spelling word, then comparing those two words using the Levenshtein distance algorithm.
LEVENSHTEIN ALGORITHM
This is an algorithm that compares two strings and returns the number of operations (insert, delete, and substitute), known as the "distance", needed to transform one string into the other.
Our approach uses this algorithm to compare the suspect word with the Google-suggested word and return the number of operations needed to make the two words identical. If the distance is 0, the two strings are the same; if the distance is 1 or 2, a phisher is probably trying to make the two words visually similar.
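A standard dynamic-programming implementation of this distance is sketched below; it is the textbook algorithm, not code from the paper.

```python
# Textbook dynamic-programming Levenshtein distance (not code from the paper),
# as used above to compare a suspect URL token with the Google-suggested word.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

assert levenshtein("paypal", "paypa1") == 1   # one substituted character: suspicious
assert levenshtein("google", "google") == 0   # identical strings
```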
WHITE LIST
Usually in the phishing world, a white list is a group of legitimate sites saved in a database. In our proposed algorithm, however, the white list is a list of the primary domains of sites targeted by phishers. This white list is extracted from a database of verified phishing URLs downloaded from the PhishTank website. We check that the URL parts (primary domain, sub domain and path) are not in the white list, to ensure that no phisher is exploiting the name of a famous legitimate site to trick users.
D. Functions Implementation
Function Alexa Rank
Prototype: Function Alexa(url)
Output: URL's value.
Description: This JavaScript function takes the URL as a parameter and connects to the server using AJAX to send the URL to a PHP file, which requests the Alexa rank API for the URL. It then receives the URL's global rank from the server and assigns the URL a value based on its rank: the higher the rank, the bigger the assigned value.
Output: Sub domain value.
Description: This JavaScript function takes the URL's sub domain and passes it to three functions to compute the sub domain value. Those functions are: 1) Google's search suggestions: returns the Google-suggested word for the sub domain.
2) White list: checks whether the suggested word is a primary domain of another targeted site. If it is in the white list, the sub domain will be assigned a low value.
3) Levenshtein algorithm: if the sub domain is not in the white list, this function checks the distance between the sub domain and the suggested word, to test whether the sub domain attempts to be close to another domain. The lower the distance, the higher the value.
Function White List
Prototype: Function whitelist(a)
Input: Google suggestion word.
Output: True or False.
Description: This JavaScript function connects to the PhishTank database. Each phishing site in the database has a phish id, phish URL, target, and other columns. The important columns are:
* Phish URL: the URL of the phishing site.
* Phish id: an id for each phish URL.
* Target: the primary domain of the legitimate site which the phishing site attempts to simulate.
The JavaScript function passes the Google-suggested word to a PHP file using AJAX, which connects to the database to check whether the suggested word matches any target. If it exists, this means the site attempts to deceive the user into thinking it is the primary domain of another legitimate site. The returned value in the matching case is true, and the white list heuristic takes a low value.
The evaluation is done in three phases, as shown in Table 9. The dataset used for testing was collected in two ways: from PhishTank, and manually. URLs in the data sets were evaluated by installing the toolbar and testing each URL individually. The metrics used to calculate toolbar accuracy are True Positive (classifying a legitimate URL correctly), False Positive (assigning a phishing label to a legitimate URL), True Negative (predicting a phishing site correctly), and False Negative (assigning a legitimate label to a phishing URL).
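A hedged sketch of this white-list check follows; the target set is a made-up stand-in for the PhishTank-derived table described above.

```python
# Hedged sketch of the white-list check: does the Google-suggested word match the
# primary domain of a commonly phished target? PHISH_TARGETS is a made-up
# stand-in for the PhishTank-derived table described above.
PHISH_TARGETS = {"paypal.com", "bankofamerica.com", "apple.com"}   # hypothetical rows

def in_whitelist(suggested_word):
    # True means a lookalike sub domain or path built on this word is suspicious.
    return suggested_word.lower() in PHISH_TARGETS

print(in_whitelist("paypal.com"))   # True -> assign the sub domain a low value
```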
In the first phase, we evaluated the Naïve Bayes (NB) approach alone. The NB classifier was trained over 13,117 URLs, divided into 12,967 phishing URLs and 150 legitimate URLs. After that, NB was tested using a test set of 13,220 URLs (12,967 phishing, 253 legitimate). We experimented with different values of α, as illustrated in Table 3. The best value of α, which maximizes TN and minimizes FP, is 5.8.
In the second phase we evaluated the system without the Naïve Bayes addition. The test set consisted of 162 URLs. Experiments were done using a threshold of 0. The toolbar detected 77 of 89 phishing URLs correctly, and 63 of 71 legitimate URLs. The experiment resulted in 86.5% true negatives, 88.7% true positives, 11.2% false positives, and 13.4% false negatives.
In the third phase, we combined both of the previous approaches. The test set consisted of 156 phishing URLs (selected from the 12,967 URLs downloaded from PhishTank) and 90 legitimate URLs collected manually, giving 246 URLs for testing. We experimented with two different threshold values, 0 and 0.5. A threshold of 0.5 results in fewer false negatives, so we selected it as the threshold value. Although it returns a high false positive rate, it gives good true negative results; the false positives can be reduced using the "add to trusted list" feature. The toolbar detected 147 of 156 phishing URLs correctly. The experiment resulted in 94% true negatives.
The accuracy of each approach was calculated using this equation: Accuracy ratio = (TP+TN)/(TP+TN+FP+FN). Phase one has a 34% accuracy ratio, phase two 87.5%, and phase three 48%. Finally, after these experiments, we concluded by choosing a threshold of 0 and removing the Naïve Bayes part, as it does not add any improvement to system accuracy and increases the false positive rate.
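The phase-two figure can be verified from the reported counts, as in this short check (note the counts sum to 160 URLs, although the text reports a 162-URL test set):

```python
# Verifying the phase-two accuracy from the reported counts: 77 of 89 phishing
# URLs detected (TN = 77, FN = 12) and 63 of 71 legitimate URLs passed
# (TP = 63, FP = 8). Note these counts sum to 160 URLs, although the text
# reports a 162-URL test set.
TP, TN, FP, FN = 63, 77, 8, 12
accuracy = (TP + TN) / (TP + TN + FP + FN)
print(f"{accuracy:.3f}")   # 0.875, i.e., the 87.5% reported
```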
V. CONCLUSION
This paper presents an architecture for an anti-phishing toolbar integrated with the Firefox browser to detect phished URLs. Our proposed anti-phishing toolbar verifies the user's inputted URL; if the result is phished, it warns the user by changing an indicator color and gives the user the choice to unblock the website by adding it to a trusted list. When a site is verified as phishing, the user can learn the reason by viewing a report. Our approach categorizes the URL based on its features, including four lexical features and three other features (sub domain, primary domain, and path), with the help of a Naïve Bayes classifier. Our proposed approach can minimize false positives by giving the user the feature of adding verified URLs to a trusted list. Experimental results show that our approach can achieve a 48% accuracy ratio using a test set of 246 URLs, and an 87.5% accuracy ratio, tested over 162 URLs, when the NB addition is excluded.
This module extracts URL domain-related features. The URL is separated into different components: primary domain, sub domain, and path. These features play an eminent role in investigating the URL and predicting phishing pages in the next modules. Ranking module: besides the URL feature extraction in the previous modules, this module collects URL metadata, specifically the URL page rank, to be used as input to the next module. Google PageRank and Alexa Rank are used for this purpose.
Function NB Classifier
Prototype: Function NB_classifier(host, path)
Description: This function is implemented based on the Naïve Bayes classification approach. It is a learned classifier trained over a data set of 12,967 phishing URLs downloaded from PhishTank and 150 legitimate URLs collected manually with the help of Alexa's top 500 URLs. The features used for classification are illustrated in Table 1.
This function is the main part of the program and the last step of the calculations. It applies the equation VS = Σ (heuristic_i value) × (heuristic_i weight), where the heuristic values are taken from global variables that store the results of the previous functions, and the heuristic weights were calculated through experiments on phishing URLs. The resulting value VS is returned to the calling function to be compared with the threshold before presenting the final decision of the program.
IV. PERFORMANCE EVALUATION
Our proposed architecture for an anti-phishing toolbar uses an extended approach from Nguyen et al. [17], combining their approach with the NB classifier proposed in X. Gu et al. [19]. The algorithms illustrated below are based on experimental results over 9,661 phishing URLs downloaded from PhishTank, as Nguyen et al. mentioned. The Naïve Bayes classifier algorithm used for classification was trained over 12,967 phishing URLs from PhishTank and 253 legitimate URLs collected manually.
Fuzzy Generalized Conformable Fractional Derivative
We give a new definition of a fuzzy fractional derivative, called the fuzzy conformable fractional derivative. Using this definition, we prove some results and introduce a new definition of the generalized fuzzy conformable fractional derivative.
Introduction
Fuzzy set theory is a powerful tool for modeling uncertainty and for processing vague or subjective information in mathematical models. Its main directions of development have been diverse, and its applications have been varied [1][2][3][4]. The derivative for fuzzy-valued mappings was developed by Puri and Ralescu [5], which generalized and extended the concept of Hukuhara differentiability for set-valued mappings to the class of fuzzy mappings. Subsequently, using the H-derivative, Kaleva [6] started to develop a theory for FDE. In [7], a new well-behaved simple fractional derivative called "the conformable fractional derivative", depending just on the basic limit definition of the derivative, was introduced: for a function $f : (0, \infty) \to \mathbb{R}$, the (conformable) fractional derivative of order $0 < q \le 1$ of $f$ at $t > 0$ was defined by
$$(T_q f)(t) = \lim_{\varepsilon \to 0} \frac{f(t + \varepsilon t^{1-q}) - f(t)}{\varepsilon},$$
and the fractional derivative at 0 is defined as $(T_q f)(0) = \lim_{t \to 0^+} (T_q f)(t)$. The aim of this paper is to study and generalize the fuzzy conformable fractional derivative.
Preliminaries
Let us denote by $\mathbb{R}_F = \{u : \mathbb{R} \to [0, 1]\}$ the class of fuzzy subsets of the real axis satisfying the following properties: (i) $u$ is normal, i.e., there exists an $x_0 \in \mathbb{R}$ such that $u(x_0) = 1$; (ii) $u$ is fuzzy convex, i.e., for $x, y \in \mathbb{R}$ and $0 < \lambda \le 1$, $u(\lambda x + (1 - \lambda)y) \ge \min\{u(x), u(y)\}$. Then $\mathbb{R}_F$ is called the space of fuzzy numbers. By $P_K(\mathbb{R})$ we denote the family of all nonempty compact convex subsets of $\mathbb{R}$, and we define the addition and scalar multiplication in $P_K(\mathbb{R})$ as usual.
Theorem 1 (see [8]). If $u \in \mathbb{R}_F$, then its level sets $[u]^\alpha$ belong to $P_K(\mathbb{R})$ for all $\alpha \in [0, 1]$, and if a nondecreasing sequence $(\alpha_k)$ converges to $\alpha$, then the corresponding level sets converge [9]. The following arithmetic operations on fuzzy numbers are well known and frequently used below: if $u, v \in \mathbb{R}_F$, then addition and scalar multiplication are defined levelwise, $[u + v]^\alpha = [u]^\alpha + [v]^\alpha$ and $[\lambda u]^\alpha = \lambda [u]^\alpha$. Theorem 2 (see [10]). $\tilde{0} \in \mathbb{R}_F$ is a neutral element with respect to $+$, i.e., $u + \tilde{0} = \tilde{0} + u = u$ for all $u \in \mathbb{R}_F$. The distance between fuzzy numbers is given by $d(u, v) = \sup_{\alpha \in [0,1]} d_H([u]^\alpha, [v]^\alpha)$, where $d_H$ is the Hausdorff metric. It is well known that $(\mathbb{R}_F, d)$ is a complete metric space. We list the following properties of $d(u, v)$: $d(u + w, v + w) = d(u, v)$, $d(\lambda u, \lambda v) = |\lambda|\, d(u, v)$, and $d(u, v) \le d(u, w) + d(w, v)$, for all $u, v, w \in \mathbb{R}_F$ and $\lambda \in \mathbb{R}$.
Let $(A_k)$ be a sequence in $P_K(\mathbb{R})$ converging to $A$. Then a theorem in [6] gives us an expression for the limit.
Let $I = (0, a) \subset \mathbb{R}$ be an interval. We denote by $C(I, \mathbb{R}_F)$ the space of all continuous fuzzy functions on $I$; it is a complete metric space with respect to the metric $H(F, G) = \sup_{t \in I} d(F(t), G(t))$.
The Fuzzy Conformable Fractional Differentiability
Definition 2. Let $F : I \to \mathbb{R}_F$ be a fuzzy function. The $q$th order "fuzzy conformable fractional derivative" of $F$ is defined by
$$(T_q F)(t) = \lim_{\varepsilon \to 0^+} \frac{F(t + \varepsilon t^{1-q}) \ominus F(t)}{\varepsilon}$$
for all $t > 0$ and $q \in (0, 1]$, provided the Hukuhara differences and the limit exist (in the metric $d$). If $F$ is $q$-differentiable in some $(0, a)$ and $\lim_{t \to 0^+} (T_q F)(t)$ exists, we define $(T_q F)(0) = \lim_{t \to 0^+} (T_q F)(t)$.
Remark 1. From the definition, it directly follows that if $F$ is $q$-differentiable, then the multivalued mapping $F_\alpha$ is $q$-differentiable for all $\alpha \in [0, 1]$, with $T_q F_\alpha(t) = [(T_q F)(t)]_\alpha$. Conversely, suppose the family $\{F_\alpha\}$ satisfies the assumptions below for all $0 \le \varepsilon < \delta$ and $\alpha \in [0, 1]$. Then $F$ is $q$-differentiable, and the derivative is given by (14).
Proof. Consider the family $\{T_q F_\alpha \mid \alpha \in [0, 1]\}$. By definition, $T_q F_\alpha(t)$ is a compact, convex, and nonempty subset of $\mathbb{R}$.
If $\alpha_1 \le \alpha_2$, then by assumption (i), $T_q F_{\alpha_2}(t) \subset T_q F_{\alpha_1}(t)$. Let $\alpha > 0$ and let $(\alpha_k)$ be a nondecreasing sequence converging to $\alpha$. For $h > 0$, choose $\varepsilon > 0$ such that equation (15) holds true. Then the triangle inequality, together with assumption (i), shows that the rightmost term goes to zero as $k \to \infty$, and hence
Dividing by $\varepsilon$, we have the required bound; similarly, we obtain the reverse inequality, and passing to the limit gives the theorem. Note that this definition and theorem of the conformable fractional derivative are very restrictive; for instance, if $F(t) = c \odot g(t)$, where $c$ is a fuzzy number and $g : [a, b] \to \mathbb{R}^+$ is $q$-differentiable for some $q \in (0, 1]$ with $g^{(q)}(t) < 0$, then $F$ is not $q$-differentiable. To avoid this difficulty, we introduce a more general definition of the conformable fractional derivative for fuzzy-number-valued functions.
The Generalized Fuzzy Conformable Fractional Differentiability
We consider the following definition.
Definition 3. Let $F : I \to \mathbb{R}_F$ be a fuzzy function and $q \in (0, 1]$. One says that $F$ is $q(1)$-differentiable at a point $t > 0$ if there exists an element $F^{(q)}(t) \in \mathbb{R}_F$ such that, for all $\varepsilon > 0$ sufficiently near 0, the Hukuhara differences $F(t + \varepsilon t^{1-q}) \ominus F(t)$ and $F(t) \ominus F(t - \varepsilon t^{1-q})$ exist and the limits (in the metric $d$)
$$\lim_{\varepsilon \to 0^+} \frac{F(t + \varepsilon t^{1-q}) \ominus F(t)}{\varepsilon} = \lim_{\varepsilon \to 0^+} \frac{F(t) \ominus F(t - \varepsilon t^{1-q})}{\varepsilon} = F^{(q)}(t),$$
whereas $F$ is $q(2)$-differentiable at $t > 0$ if the analogous differences and limits exist for all $\varepsilon < 0$ sufficiently near 0. If $F$ is $q(n)$-differentiable at $t > 0$, we denote its $q$-derivatives ($q \in (0, 1]$) by $F^{(q)}_n(t)$, for $n = 1, 2$.
Example 1. Let $g : I \to \mathbb{R}^+$ and define $F(t) = c \odot g(t)$ for all $t \in I$, where $c$ is a fuzzy number. If $g$ is $q$-differentiable at $t_0 \in I$, then $F$ has a generalized fuzzy conformable fractional derivative at $t_0 \in I$ and we have $F^{(q)}(t_0) = c \odot g^{(q)}(t_0)$.
Remark 2.
In the previous definition, $q(1)$-differentiability corresponds to Definition 2, so this differentiability concept is a generalization of Definition 2 and obviously more general. For instance, in the previous example, $F$ is $q(2)$-differentiable when $g^{(q)}(t) < 0$. (ii) If $F$ is $q(2)$-differentiable, then $f^1_\alpha(t)$ and $f^2_\alpha(t)$ are $q$-differentiable and $[F^{(q)}_2(t)]_\alpha = [T_q f^2_\alpha(t), T_q f^1_\alpha(t)]$. Proof. (i) See the demonstration of Theorem 5.
Conclusion
We have investigated generalized fuzzy conformable fractional differentiability. The conformable $q$-differentiability introduced here is a very general differentiability concept that is also practically applicable, and we can compute the fuzzy conformable derivative of the product of two functions, $T_q(f \cdot g)$, because, unlike most fractional derivatives, the conformable derivative satisfies the known formula $T_q(f \cdot g) = T_q(f)g + fT_q(g)$. The disadvantage of generalized fuzzy conformable differentiability is that a simple fuzzy differential equation ($y^{(q)} + y = 0$, $0 < q \le 1$, $y(0) = y_0 \in \mathbb{R}_F$) does not have a unique solution; it may have several solutions. The advantage of the existence of these solutions is that we can choose the solution that best reflects the behaviour of the modelled real-world system.
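As an anchor for this discussion, the crisp (non-fuzzy) counterpart of that equation can be solved in closed form using the identity $T_q y(t) = t^{1-q} y'(t)$ for differentiable $y$; this worked derivation is our illustration, not part of the original paper.

```latex
% Crisp counterpart of y^{(q)} + y = 0, y(0) = y_0, via T_q y(t) = t^{1-q} y'(t):
%   t^{1-q} y'(t) = -y(t)  =>  y'(t)/y(t) = -t^{q-1}
%   =>  ln y(t) = -t^q/q + C  =>  y(t) = y_0 exp(-t^q/q).
\[
  T_q y(t) \;=\; t^{1-q}\, y'(t) \;=\; -\,y(t)
  \quad\Longrightarrow\quad
  y(t) \;=\; y_0 \exp\!\Bigl(-\tfrac{t^{q}}{q}\Bigr).
\]
```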
For further research, we propose the study of fuzzy fractional differential equations using the generalized conformable differentiability concept. In addition, we propose to extend the results of the present paper and to combine them with the results in [15] for fuzzy fractional differential equations.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Naturally Occurring Mutations in the Nonstructural Region 5B of Hepatitis C Virus (HCV) from Treatment-Naïve Korean Patients Chronically Infected with HCV Genotype 1b
The nonstructural 5B (NS5B) protein of the hepatitis C virus (HCV), with RNA-dependent RNA polymerase (RdRp) activity, plays a pivotal role in viral replication. Therefore, monitoring its naturally occurring mutations is very important for the development of antiviral therapies and vaccines. In the present study, mutations in the partial NS5B gene (492 bp) from 166 quasispecies of 15 treatment-naïve Korean patients chronically infected with genotype 1b (GT-1b) were determined, and mutation patterns and frequencies, mainly focusing on the T cell epitope regions, were evaluated. The mutation frequency within the CD8+ T cell epitopes was significantly higher than that outside the CD8+ T cell epitopes. Of note, the mutation frequency within predicted CD4+ T cell epitopes, a particular mutational hotspot in Korean patients, was significantly higher than in patients from other areas, suggesting distinctive CD4+ T cell-mediated immune pressure against HCV infection in the Korean population. The mutation frequency in the NS5B region was positively correlated with the carrier stage rather than with progressive liver disease (chronic hepatitis, liver cirrhosis and hepatocellular carcinoma). Furthermore, the mutation frequency in four codons (Q309, A333, V338 and Q355), known to be related to the sustained virological response (SVR) and end-of-treatment response (ETR), was also significantly higher in Korean patients than in patients from other areas. In conclusion, a high degree of mutation frequency in the HCV GT-1b NS5B region, particularly in the predicted CD4+ T cell epitopes, was found in Korean patients, suggesting the presence of distinctive CD4+ T cell pressure in the Korean population. This provides a likely explanation of why relatively high rates of SVR are observed after combined therapy with pegylated interferon (PEG-IFN) and ribavirin (RBV) in Korean chronic patients with GT-1b infections.
Introduction
According to the WHO, 3% of the global population is infected with the hepatitis C virus (HCV), with 3-4 million people newly infected each year [1][2][3][4]. Most HCV infections persist, with up to 80% of all cases leading to chronic hepatitis associated with liver fibrosis, liver cirrhosis (LC) and hepatocellular carcinoma (HCC) [5][6][7]. A combinatorial treatment with pegylated interferon (PEG-IFN) and ribavirin (RBV) provides good clinical efficacy in patients infected with genotypes (GTs) 2 and 3 but is less efficacious in patients infected with the most prevalent GT-1b, thereby emphasizing the urgent need for more effective specifically targeted antiviral therapies for GT-1b [8][9][10][11].
The HCV RNA-dependent RNA polymerase (RdRp) is an essential enzyme that lacks proofreading activity, thus leading to a population of distinct but closely related viral variants, termed viral quasispecies, within an infected individual [12][13][14]. Monitoring of the diversity of HCV quasispecies is important for the prediction of liver disease progression as well as HCV treatment outcomes [15][16][17][18][19]. Currently, studies regarding HCV quasispecies mainly focus on structural genomic regions; therefore, relatively limited data are available regarding nonstructural regions. Recently, variations in the nonstructural 5B (NS5B) protein, particularly in specific codons, were reported to be positively related to a sustained virological response (SVR) and end-of-treatment response (ETR) in patients infected with GT-1b [15,16].
It has also been reported that SVR rates in patients with HCV GT-1b treated with PEG-IFN plus RBV are higher in Asian patients than in Caucasians [10,20]. In particular, previous studies have shown that SVR rates in Korean patients infected with GT-1b range from 56% to 62% [21,22]. Recently, two SNPs of the IL28B gene, rs12979860 and rs8099917, showing the strongest association with treatment response, have been reported at a high frequency in Korean patients with HCV GT-1b compared to other ethnic groups [23,24]. Although these prior investigations can partly explain the high SVR rates in Korean patients, other mechanisms may also contribute to this effect. In the present study, to address this issue, we investigated via quasispecies analysis the mutation frequencies and patterns in the partial NS5B region, known to be related to SVR rates, from Korean patients infected with HCV GT-1b.
Patients and HCV RNA Extraction
Serum samples were collected from a total of 73 treatment-naïve HCV-positive patients who visited Seoul National University Hospital in 2003. The clinical statuses of the HCV-positive patients were defined as carrier (C), chronic hepatitis (CH), LC or HCC. General definitions of the C and chronic liver disease types are as follows: the diagnosis of C can be made in the presence of positive anti-HCV antibodies, of a positive HCV RNA by RT-PCR, and of normal alanine aminotransferase (ALT) levels (<40 IU/L, assay dependent) in at least three tests carried out at least two months apart over a period of six months [25,26]; CH was defined as an elevation of or fluctuation in serum ALT levels over 6 months without any evidence of any other chronic liver disease [27]; LC was diagnosed through evidence of clinically relevant portal hypertension (esophageal varices and/or ascites, splenomegaly with a platelet count of <100,000/mm³) [28], ultrasonographic imaging features suggestive of liver cirrhosis [29], and a histological diagnosis with one of the following features: nodular regeneration, fragmentation of the biopsy with fibrosis at the margins and a wide postnecrotic collapse with an abnormal relationship between portal tracts and central veins, and evidence of active liver-cell hyperplasia [30]. Finally, HCC in cirrhotic patients was diagnosed either through radiological criteria (focal lesion >2 cm with arterial hypervascularization according to two coincident imaging techniques) or through combined criteria (focal lesion >2 cm with arterial hypervascularization according to one imaging technique associated with AFP levels >400 ng/ml) [31]. HCV RNA was purified using the Viral Gene-Spin Viral DNA/RNA Kit (iNtRON Biotechnology Inc., Seongnam, Korea) according to the manufacturer's guidelines. This work was approved by the institutional review board of Seoul National University Hospital (IRB No. C-1304-032-479). The experiments were mainly based on viral RNA extracted from isolates; therefore, the research was done without informed consent, and a waiver of informed consent was agreed upon by the IRB.
Quantitative PCR (qPCR) and cDNA synthesis

A qPCR method was used to analyze viral RNA with an ABI7500 system (Perkin-Elmer Applied Biosystems, Warrington, UK). The primers were designed to amplify the NS2 region, with the following sequences: sense primer HCVF (5′-CGA CCA GTA CCA CCA TCC TT-3′) and antisense primer HCVR (5′-AGC ACC TTA CCC AGG CCT AT-3′). For the detection of HCV RNA, the SensiFAST SYBR Lo-ROX kit (Bioline, Taunton, MA, USA) was used according to the manufacturer's instructions. Absolute quantification of the extracted HCV RNA was based on an HCV RNA standard, with a lower limit of detection of 1,350 copies/ml (500 IU/ml) determined on the basis of earlier research (data not shown) [32,33]. Viral cDNA synthesis for reverse-transcriptase (RT) PCR was performed using the Maxime RT PreMix kit (iNtRON Biotechnology Inc., Seongnam, Korea) according to the manufacturer's protocol.
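For illustration, absolute quantification against a standard curve can be sketched as follows; this is a minimal Python example with made-up calibration values, not the instrument software used in the study.

```python
import numpy as np

def copies_per_ml(cq_values, std_cqs, std_log10_copies):
    """Estimate HCV RNA copies/ml from sample Cq values using a linear
    standard curve fitted to a dilution series of a known standard."""
    # Fit the standard curve: Cq = slope * log10(copies/ml) + intercept
    slope, intercept = np.polyfit(std_log10_copies, std_cqs, 1)
    # Invert the curve for the unknown samples
    return 10 ** ((np.asarray(cq_values) - intercept) / slope)

# Illustrative 10-fold dilution series (values not from the study)
std_log10 = np.array([7.0, 6.0, 5.0, 4.0, 3.0])
std_cq = np.array([15.1, 18.4, 21.8, 25.2, 28.6])
print(copies_per_ml([20.0, 26.5], std_cq, std_log10))
```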
Cloning and Sequencing Analysis
The PCR products of GT-1b were cloned using the TOPO TA Cloning kit (Invitrogen Corporation, Carlsbad, CA, USA). The NS5B regions were sequenced using the M13 primer. For each subject, 10 to 12 subclones were sequenced [35,36]. Sequencing was conducted on an Applied Biosystems model 377 DNA automatic sequencer (Perkin-Elmer Applied Biosystems, Warrington, UK). When sequence variations existed between the clones of a sample, the dominant sequence at each position was taken as the major sequence. Nucleotides were aligned and their similarities were calculated using the multiple-alignment algorithm in Megalign (DNASTAR, Windows Version 3.12e). A mutation in this study was defined as a sequence differing from the consensus sequence of 20 GT-1b reference strains obtained from the LANL HCV database (http://hcv.lanl.gov) [accession numbers AB442219, AB691953, AF165047, D11168, D13558, D16435, D50485, D85516, D90208, EU256084, EU482859, FJ478453, HQ110091, HQ912958, J238799, L02836, M58335, M96362, S62220 and X61596] [37]. Because the two types of subclonal amino acids at aa 316 and 464 were conserved in each subject, both amino acids were considered part of the consensus sequence [38,39]. For a further comparison of the analyzed sequences, 60 HCV GT-1b sequences from other countries (China: 15, Japan: 15, Switzerland: 15 and the United States: 15) were also retrieved from the LANL HCV database, and the relevant nucleotide positions were compared with the consensus sequences of the 15 subjects.
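For illustration, determining the major sequence can be sketched as a per-position majority vote over the aligned subclones; this is a minimal Python example, not the Megalign procedure itself.

```python
from collections import Counter

def major_sequence(subclones):
    """Determine the dominant (major) nucleotide at each aligned position.

    subclones: list of equal-length, aligned subclone sequences from one
    subject (e.g., the 10-12 sequenced clones)."""
    return "".join(
        Counter(column).most_common(1)[0][0]
        for column in zip(*subclones)
    )

# Illustrative example: the dominant base is taken at each position
clones = ["ACGT", "ACGA", "ACGT"]
print(major_sequence(clones))  # "ACGT"
```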
Prediction of novel CD4+ T cell epitopes and determination of mutations inside and outside CD4+ or CD8+ T cell epitopes

15-mer peptides from the sequenced NS5B with predicted binding capacity (<500 nM) to particular HLA class II molecules were screened in silico for the presence of the relevant HLA-binding motif [42]. Mutations within the CD4+ and CD8+ T cell epitopes were defined as sequences differing from the consensus sequence within the four CD4+ T cell epitopes selected by the above criteria and the six known CD8+ T cell epitopes, respectively, on the basis of previous studies [37,[43][44][45]. Mutations outside the CD4+ or CD8+ T cell epitopes were counted as the total number of mutations minus those within the respective epitope regions.
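The inside/outside counting can be illustrated with a short sketch; the positions and epitope ranges below are examples only.

```python
def split_mutations_by_epitope(mutation_positions, epitope_ranges):
    """Partition mutated amino acid positions into those falling inside
    and outside a set of epitope regions (inclusive aa ranges).

    Mutations outside the epitopes are those not covered by any range,
    matching "total mutations minus those within the epitope regions"."""
    covered = set()
    for start, end in epitope_ranges:
        covered.update(range(start, end + 1))
    inside = [p for p in mutation_positions if p in covered]
    outside = [p for p in mutation_positions if p not in covered]
    return inside, outside

# Illustrative: one CD8+ epitope (aa 308-315) plus the aa 333-355 hotspot
inside, outside = split_mutations_by_epitope(
    [310, 320, 340, 360], [(308, 315), (333, 355)])
print(inside, outside)  # [310, 340] [320, 360]
```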
Statistical analyses
The results are expressed as percentages, means ± SD, or medians (range). Differences between categorical variables were analyzed using Fisher's exact test or a Chi-square test. For continuous variables, Student's t-test was used when the data showed a normal distribution, and the Mann-Whitney U test was used otherwise. The significance level of each test was adjusted for multiple tests via Bonferroni correction. A p-value of <0.05 (two-tailed) was considered statistically significant.
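For illustration, the test-selection and correction logic can be sketched as follows (SciPy; the data values are illustrative only).

```python
from scipy import stats

def compare_groups(a, b, n_tests=1, alpha=0.05):
    """Compare a continuous variable between two groups, choosing the
    test by normality (Shapiro-Wilk) and applying a Bonferroni
    correction for multiple testing."""
    _, p_norm_a = stats.shapiro(a)
    _, p_norm_b = stats.shapiro(b)
    if p_norm_a > 0.05 and p_norm_b > 0.05:
        _, p = stats.ttest_ind(a, b)               # Student's t-test
    else:
        _, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    p_adj = min(p * n_tests, 1.0)                  # Bonferroni correction
    return p_adj, p_adj < alpha

# Illustrative data only (not from the study)
print(compare_groups([2.1, 2.4, 2.8, 2.2], [3.0, 3.4, 3.1, 3.6], n_tests=3))
# Categorical variables: Fisher's exact test on a 2x2 table
print(stats.fisher_exact([[8, 2], [1, 5]]))
```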
Phylogenetic analysis of GT-1b and its characteristics
A phylogenetic analysis based on the 492-bp sequenced NS5B region of randomly selected GT-1b subclones showed distinct sequence variation among subjects (Fig. 1; Table S3). This finding indicates a positive correlation between viral replication and the clinical severity of liver disease. The nucleotide sequences of the 166 subclones are available in the GenBank nucleotide sequence database under accession numbers KF422017-KF422027.
Distribution of mutations in the sequenced NS5B region
The distribution of mutations across the 164 aa of the sequenced GT-1b NS5B region is shown in Fig. 2. There were six known CD8+ T cell epitopes (Table S4) [37,[43][44][45], and the mutation frequency inside the CD8+ T cell epitope regions (2.9%) was significantly higher than that outside the epitope regions (2.3%, p = 0.001). The mutation frequency inside the predicted CD4+ T cell epitopes (4.8%) was significantly higher than that outside the CD4+ T cell epitopes (1.4%) and was even higher than that inside the known CD8+ T cell epitopes (p<0.001) (Table S5). We designated the region spanning aa 333-355 of the CD4+ T cell epitopes as a mutational hotspot, as an extraordinarily high mutation frequency (6.7%) was observed there (Fig. 2). Of note, this region was predicted to have high binding affinity for the various MHC class II HLA types prevalent in Koreans, raising the possibility that there may be distinctive MHC class II restricted immune pressure against HCV GT-1b in the mutational hotspot (Table 2).
Comparison of synonymous (dS) and nonsynonymous (dN) mutations according to the NS5B region
The distinctive CD4+ T cell-mediated immune pressure was examined by comparing dN to dS. The dN/dS ratio inside the known CD8+ T cell epitopes (0.29) was slightly higher than that outside (0.21), with dN frequencies of 2.9% and 2.3%, respectively. The dN/dS ratio inside the predicted CD4+ T cell epitopes (0.49) was significantly higher than that outside (0.13), with dN frequencies of 4.8% and 1.4%, respectively, although the dS frequency outside the predicted CD4+ T cell epitopes was higher at a statistically significant level. The odds ratio for dN inside versus outside the predicted CD4+ T cell epitopes was 3.55. In the mutational hotspot, the dN frequency (11.1%) was higher than the dS frequency (4.8%), resulting in an elevated dN/dS ratio (1.4). This suggests that there is strong MHC class II restricted immune pressure against HCV NS5B in chronically infected Korean patients (Table 3).
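For illustration, the classification of codon changes as synonymous or nonsynonymous can be sketched as below (assuming Biopython is available). Note that this is a simplified per-codon count; formal dN/dS estimation additionally normalizes by the numbers of synonymous and nonsynonymous sites (e.g., the Nei-Gojobori method).

```python
from Bio.Seq import Seq  # assumes Biopython is installed

def classify_mutations(sample, consensus):
    """Count nonsynonymous (dN) and synonymous (dS) codon changes in an
    aligned coding sequence relative to a consensus, by comparing the
    translated amino acid at each mutated codon."""
    dn = ds = 0
    for i in range(0, len(consensus) - 2, 3):
        cod_s, cod_c = sample[i:i + 3], consensus[i:i + 3]
        if cod_s == cod_c:
            continue
        if Seq(cod_s).translate() == Seq(cod_c).translate():
            ds += 1   # nucleotide change, same amino acid
        else:
            dn += 1   # amino acid-altering change
    return dn, ds

# Illustrative: one synonymous (CTT->CTC, both Leu) and one
# nonsynonymous (GCA->GTA, Ala->Val) codon change
dn, ds = classify_mutations("CTCGTA", "CTTGCA")
print(dn, ds, dn / ds if ds else float("inf"))
```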
Comparisons of dS and dN in the NS5B region between Korean patients and patients from other countries
To examine whether there was distinctive immune pressure against HCV NS5B at the CD4+ T cell level in Koreans, we compared dS and dN in the NS5B region between 15 Korean patients and 60 patients from other countries (China: 15, Japan: 15, Switzerland: 15 and the United States: 15). For the Korean subjects, we used the consensus sequences of NS5B derived from more than 10 subclones per patient; for the patients from other countries, we used sequences retrieved from the LANL HCV database. In the NS5B region, the dN/dS ratio for the Korean subjects (0.23) was higher than that for those from other countries (1.4) with statistical support (p = 0.002). The dN frequency (3.1%) in the known CD8+ T cell epitopes of the Korean patients was higher than that of the patients from other countries (2.1%), but the difference was not statistically significant (p = 0.078). However, the dN frequency (4.5%) in the predicted CD4+ T cell epitope regions of the Korean patients was significantly higher than that of those from other countries (2.2%) (p<0.001). The dN/dS ratio in the predicted CD4+ T cell epitope regions was nearly twofold higher in the Koreans (0.52) than in the patients from other areas (0.26). In particular, the difference in dN frequency between the Koreans (6.4%) and the patients from other countries (2.3%) was more pronounced in the mutational hotspot. Collectively, these results suggest the presence of distinctive CD4+ T cell-mediated immune pressure against HCV NS5B in Koreans (Table 4).
Correlation between NS5B mutations and the severity of liver disease
The overall mutation frequency across the entire NS5B region in C (2.8%) was significantly higher than in the comparison group of patients with CH, LC or HCC (2.2%) (p = 0.002). The mutation frequency in the known CD8+ T cell epitopes was also significantly higher in C than in the comparison group [C (3.4%) vs. CH + LC + HCC (2.6%), p = 0.05]. The same tendency was found in the predicted CD4+ T cell epitopes [C (5.7%) vs. CH + LC + HCC (4.2%), p = 0.001] and in the mutational hotspot [C (7.7%) vs. CH + LC + HCC (5.9%), p = 0.004], with statistically significant increases in mutation frequency. This indicates that the mutation rate in the NS5B region is negatively correlated with the progression of liver disease in chronic hepatitis C patients (Table 5).
Mutation frequency in codons related to SVR and ETR in Korean patients
Mutations at codons 309, 333, 338 and 355 are reportedly associated with the SVR and ETR groups as compared to non-responders (NR) [15]. Interestingly, a very high mutation rate at these four SVR-related codons was found in Korean treatment-naïve patients, with an average mutation frequency of 28.9% (192/664) in the quasispecies distributions. Of note, the average mutation frequency (31.7%) at the four codons calculated from the 15 Korean patients was significantly higher than that of any other region, including Japan (Table 6).
A quasispecies analysis revealed a total of 10 mutations, including SVR-related and antiviral resistance mutations, in the sequenced NS5B region. These can be divided into two distinct groups: the diverse (D) type, which coexists with other quasispecies members in a patient, and the conserved (C) type, which exists alone without a quasispecies counterpart in a patient (Fig. 2, Table 1). The coexistence of diverse quasispecies at a specific codon may be indirect evidence of an important target for immune pressure and/or viral fitness. Notably, the coexistence of Q and R at codon 309, located in one of the CD8+ T cell epitopes (aa 308-315), was found in all 15 Korean subjects via a quasispecies distribution analysis; this may be due to distinct CD8+ T cell immune pressure against the region between aa 308 and 315 among Koreans (Table S6). In addition, there were other D-type mutations: A333V, S335N, V338A, P353L, E440G/K and C451H. On the other hand, there were only three C-type mutations (C316N, Q355K/R and E464Q). Interestingly, for all three C-type mutations, significantly different Cq values between the two counterparts of the respective mutation type were found (Table S3).
Discussion
The presence of distinct HLA types within an ethnic group could lead to distinct MHC class I or II restricted immune pressures within its population [37,43,44,47]. Therefore, the frequency and patterns of escape variants against structural and nonstructural HCV proteins reflect the background HLA types of an ethnic group [48,49]. The aim of the present study was to investigate the background mutation frequency and patterns of HCV NS5B, reportedly related to a high SVR, in treatment-naïve Korean patients chronically infected with GT-1b, in an effort to explain the high SVR in Korean patients. The significant findings of this study are discussed below.
First, the overall mutation frequency in the sequenced NS5B region was higher in Cs than in patients showing disease progression (CH, LC and HCC) [C (2.8%) vs. CH + LC + HCC (2.2%), p = 0.002]. Furthermore, similar differences in mutation frequency were noted within both the CD4+ (p = 0.001) and CD8+ T cell epitope regions (p = 0.05) (Table 5). This suggests that the accumulation of multiple mutations in NS5B may be induced by vigorous and multi-specific immune pressure in the acute phase of HCV infection and may lead to functional impairment of HCV RdRp activity, resulting in attenuation of the pathogenic potential of HCV [19]. This strongly supports previous results showing that mutations in NS5B were related to the high SVR and ETR of chronically infected GT-1b patients [15].
Second, a pronounced dN frequency was found in the predicted CD4+ T cell epitopes in the NS5B region [Korean (4.5%) vs. patients from other countries (2.1%), p = 0.001], particularly in the mutational hotspot [Korean (6.4%) vs. other countries (3.1%)] (Table 4). This suggests that there is distinct intrahepatic MHC class II restricted immune pressure, at least against HCV NS5B, in the Korean population [19]. Broadly directed virus-specific immune pressure at the CD4+ T cell level was recently reported to play a pivotal role in spontaneous resolution at a very early phase of acute HCV infection [50]. Furthermore, the presence of a multi-specific CD4+ T cell response against HCV can aid not only the induction of a vigorous antiviral CD8+ T cell response but also antibody production for the inhibition of viral spread [51]. In particular, because three of the four codons reported to be related to the high SVR (A333, V338 and Q355) are located in the mutational hotspot, the acquisition of mutations within this region, induced by the distinctive Korean immune pressure at the CD4+ T cell level, may contribute to the high SVR found in Korean patients infected with GT-1b. In fact, MHC class II HLA allele prediction showed that a CD4+ T cell epitope region of NS5B covering aa 333 to 347, one of the two predicted epitopes comprising the mutational hotspot, has high binding affinity for most HLA DRB1 alleles prevalent in Korean populations [52]. In addition, HLA DQB1*03:01 and *03:02, prevalent at frequencies higher than 10% in Koreans, are also noted to be associated with viral clearance [53][54][55]. Our previous study likewise showed distinct mutation patterns and a very high mutation frequency in the CD4+ T cell epitopes of the HBV preC/Core region in chronic Korean patients, strongly supporting the hypothesis of this study [56].

Third, the dN frequency within the CD8+ T cell epitope regions of NS5B was significantly higher than that outside these regions [inside CD8+ (2.9%) vs. outside (2.3%), p = 0.001], suggesting the presence of immune pressure at the CD8+ T cell level against HCV NS5B among Korean patients, as shown in patients from other areas (Table 3) [37,44,47,57]. However, pronounced differences in mutation frequency were found among the six CD8+ T cell epitope regions. Two of the six CD8+ T cell epitopes (aa 308-315 and 451-459), which have high binding affinity to two HLA allele types prevalent in Koreans, HLA-A*02:01 and HLA-A*24:02, showed higher dN frequencies than the other epitopes [308-315: 96 (7.2%) and 451-459: 49 (3.3%)], suggesting the presence of distinct MHC class I restricted immune pressure in Korean patients [52]. In particular, it is noteworthy that the extraordinarily high dN/dS ratio (2.04) found in the CD8+ T cell epitope covering codons 308 to 315 was mainly due to frequent mutations at codon 309, one of the four codons related to SVR rates (Table S4, Fig. 2). The mutation type Q309R is known to occur frequently in NS5B, particularly in Asian patients. However, even compared with Japanese patients, from an Asian country like Korea, the strikingly high mutation frequency of Q309R was observed only in the Korean patients [15,16]. All 15 patients harbored this mutation in their quasispecies distribution, and more than half (96/166, 57.8%) of all quasispecies from the 15 patients carried the mutation type R309.
Interestingly, the coexistence of both the mutated and wild types, rather than the exclusive existence of one type alone, was found in all 15 patients, suggesting that the coexistence of the two variants in a patient confers an advantage over the exclusive existence of either type alone in escaping host immune surveillance or in viral fitness (Table S6). Therefore, the high frequency of the Q309R mutation in Korean patients may be induced by CD8+ T cell immune pressure, which may in part explain the high SVR rates in Koreans.
Finally, it is well known that mutations in NS5B can affect the HCV replication capacity [19]. We found a total of three mutation types (C316N, Q355K/R and E464Q) that had a significant effect on HCV replication (Cq values: C316N and E464Q, p = 0.033; Q355K/R, p = 0.003) (Table S3). Interestingly, our quasispecies analysis showed that the two polymorphisms at aa 316, C316 and N316, were strongly related to the two polymorphisms at codon 464, Q464 and E464, respectively, in a mutually exclusive manner (Figure 1). The type with both the C316 and Q464 signatures showed a significantly higher HCV replication capacity and was more strongly associated with advanced liver disease than the type with both the N316 and E464 signatures. The exclusive combination of the SNPs at the two codons may be due to structural constraints of NS5B. Furthermore, the coexistence of both types (C316/Q464 and N316/E464) was not found in any patient, suggesting that the two types may derive from completely different sources rather than being different quasispecies versions induced by immune pressure within a patient. Our data showing phylogenetic segregation between the two types also support this hypothesis.
Our study has three potential limitations. First, the nested PCR protocol used in this study showed low sensitivity, amplifying only 23 of 73 samples (31.5%). The nested PCR strategy, including the primer sets and PCR conditions, should be modified in future studies. In particular, negative PCR amplifications occurred at high frequencies in samples with lower HCV viral loads, suggesting that a novel nested PCR protocol with increased sensitivity should be applied in future work. Second, the modest population size (15 patients) is too small to support firm conclusions about the relationship between NS5B mutations and liver disease progression. Third, as single-genome amplification and an end-point dilution strategy were not utilized, the cloning strategy employed in this study is limited in its ability to represent the genuine viral quasispecies in serum samples.
In conclusion, our data suggest that distinct MHC class II restricted immune pressure against HCV NS5B in Korean patients leads to a pronounced mutation frequency and distinct mutation patterns in HCV NS5B. This finding provides important insight into the high SVR and ETR rates observed during the treatment of GT-1b infected Korean patients.
ThreatKG: A Threat Knowledge Graph for Automated Open-Source Cyber Threat Intelligence Gathering and Management
Despite the increased adoption of open-source cyber threat intelligence (OSCTI) for acquiring knowledge about cyber threats, little effort has been made to harvest knowledge from a large number of unstructured OSCTI reports available in the wild (e.g., security articles, threat reports). These reports provide comprehensive threat knowledge in a variety of entities (e.g., IOCs, threat actors, TTPs) and relations, which, however, are hard to gather due to diverse report formats, large report quantities, and complex structures and nuances in the natural language report text. To bridge the gap, we propose ThreatKG, a system for automated open-source cyber threat knowledge gathering and management. ThreatKG automatically collects a large number of OSCTI reports from various sources, extracts high-fidelity threat knowledge, constructs a threat knowledge graph, and updates the knowledge graph by continuously ingesting new knowledge. To address multiple challenges, ThreatKG provides: (1) a hierarchical ontology for modeling a variety of threat knowledge entities and relations; (2) an accurate deep learning-based pipeline for threat knowledge extraction; (3) a scalable and extensible system architecture for threat knowledge graph construction, persistence, updating, and exploration. Evaluations on a large number of reports demonstrate the effectiveness of ThreatKG in threat knowledge gathering and management.
I. INTRODUCTION
Sophisticated cyber attacks have plagued many high-profile businesses [1]-[3]. To remain aware of the fast-evolving cyber threat landscape and gain insights into the most dangerous threats, security researchers and practitioners actively gather knowledge about cyber threats from past incidents, and share the knowledge through public sources like security websites and blogs. Such open-source cyber threat intelligence (OSCTI) [4] has received growing attention from the community.
Despite the pressing need for high-quality threat knowledge to empower defenses, existing OSCTI gathering and management systems [5]-[12], however, have primarily focused on structured Indicator of Compromise (IOC) feeds [13], which are forensic artifacts of intrusions such as hashes of malware samples, names of malicious files/processes, and IP addresses of botnets. Though useful in capturing fragmented views of threats, these IOCs are low-level and disconnected, and thus they lack the capability to uncover the complete threat scenario as to how the threat unfolds into multiple steps, which is typically observed in most sophisticated attacks these days [14]. Consequently, defensive measures that rely on these low-level, fragmented indicators are easy to bypass when the attacker re-purposes the tools and changes their signatures [4].
In contrast, a large number of unstructured OSCTI reports have been significantly overlooked (e.g., security blogs and news [15]-[21], threat encyclopedia pages [22], [23]), which contain more comprehensive knowledge about threats in natural language text. Besides low-level IOC entities, OSCTI reports contain various (1) higher-level threat knowledge entities (e.g., threat actors, adversary tactics, techniques, and procedures (TTPs) [24]), and (2) semantic relationships between entities that indicate their interactions (e.g., the launch relation between two IOCs Office Monkeys (Short Flash Movie).exe and player.exe in Figure 2b). Such high-level and connected knowledge is tied to the attacker's goals and thus more difficult to change, which is critical for uncovering the complete multistep threat scenario and building more robust defenses [25]. As the volume of OSCTI reports increases day-by-day, it becomes increasingly challenging for threat analysts to manually maneuver through and correlate the myriad of sources to gain useful knowledge. Unfortunately, prior approaches do not provide an automated and principled way to gather such knowledge from OSCTI reports and manage the knowledge.
Challenges. In this work, we seek to design and build a system that (1) automatically gathers high-fidelity cyber threat knowledge from a large number of OSCTI reports, and (2) manages such knowledge in a unified knowledge base to provide comprehensive views of various threats. We identify four major challenges. First, in addition to IOCs, OSCTI reports contain various other types of entities and relations that capture threat behaviors. To comprehensively model the threats, the system needs to have a wide coverage of entity and relation types. Second, OSCTI reports collected from different sources have diverse formats: some reports contain structured fields such as tables/lists, and some reports primarily contain natural language text (e.g., Figure 2). Besides, not all reports from a source capture threat behaviors (e.g., advertisements, product promotions [13]). Thus, the system needs to handle such diversity and filter out irrelevant reports. Third, accurately extracting threat knowledge from natural language text is non-trivial. This is due to the presence of massive nuances particular to the security context, such as special characters (e.g., dots, underscores) in IOCs. These nuances limit the performance of most natural language processing (NLP) modules (e.g., sentence segmentation, tokenization), making existing information extraction tools ineffective [26], [27]. Besides, learning-based information extraction approaches typically require a large annotated training corpus, which is expensive to obtain manually. Thus, how to programmatically obtain annotations becomes another challenge. Fourth, new OSCTI reports are being published every day that contain fresh knowledge about the latest threats. Being able to provide threat knowledge timely will facilitate downstream defensive measures in effectively countering these threats. Thus, the system needs to continuously gather new knowledge and integrate it to update the knowledge base. The system also needs to be scalable (to handle the large report volume) and extensible (to generalize to new reports with unseen formats).
Contributions. We propose THREATKG (∼26K LOC), a system for automated open-source cyber threat knowledge gathering and management. THREATKG automatically collects a large number of OSCTI reports from a wide range of sources, uses a combination of ML and NLP techniques to extract high-fidelity threat knowledge, constructs a threat knowledge graph, and updates the knowledge graph by continuously ingesting new knowledge. To address the aforementioned challenges, THREATKG has the following key designs: (1) Hierarchical Threat Knowledge Ontology: To comprehensively model the threats, THREATKG employs a hierarchical ontology, which consists of three layers that model the threats in different dimensions and granularities. The ontology covers a wide range of low-level and high-level entities (i.e., IOCs, threat actors, malware, TTPs), the relations of which depict both low-level detailed threat behavior steps and high-level threat contexts. Compared to other cyber ontologies that only focus on sub-domains of threat behaviors (e.g., malware behavior [28], [29]) and cover a limited set of entities and relations (e.g., lacking TTPs and IOC relations [30], [31]), THREATKG's ontology is much more comprehensive in its threat knowledge coverage (Section III-B).
(2) Deep Natural Language Understanding for Threat Knowledge Extraction: To generalize well to diverse OSCTI report formats, THREATKG decouples the threat knowledge extraction pipeline into different processing components: parsers, checkers, and extractors. Parsers are source-dependent: each parser handles the specific layout structure of each OSCTI source and parses the collected report files into unified threat knowledge representations (UTKRs), which contain the parsed structured fields (e.g., report title, author, publisher) and unstructured text blocks. Checkers then operate on these UTKRs and filter out non-threat reports using ML-based techniques (Section III-C). Extractors are source-independent: they perform an in-depth analysis of unstructured text, and extract a variety of entities and relations to further enrich the UTKRs. By decoupling the processing, THREATKG can easily generalize to new OSCTI report formats (via adding parsers) and new entity/relation types (via adding extractors).
To accurately extract threat knowledge from unstructured OSCTI text, an in-depth natural language understanding is critical. THREATKG employs a specialized NLP pipeline that targets the unique problem of extracting a variety of entities and relations from OSCTI text, which has not been studied in prior work. To deeply understand the complex logical structures of OSCTI text and the semantic meaning and connections between targeted entities, THREATKG employs a collection of rule-based and deep learning (DL)-based techniques (e.g., IOC protection, dependency parsing, neural named entity recognition and neural relation extraction) in its extractors to handle the nuances and achieve accurate threat knowledge extraction (Sections III-D1 and III-D2). In addition, to obtain a large annotated corpus for training DL models, we leverage data programming [32] to programmatically synthesize annotations for targeted entities and relations in text (Section III-D3).
(3) Scalable and Extensible System Architecture: To gather and provide threat knowledge timely, THREATKG employs a scalable and extensible system architecture that manages all components for OSCTI report collection, threat knowledge extraction, threat knowledge graph construction, persistence, and updating. The architecture employs a modular design, allowing multiple components in the same processing step to share the same interface and produce outputs together. Such modular design allows THREATKG to parallelize and pipeline the execution of the processing components to improve the throughput. Existing components can be switched off and new components can be easily added via a configuration file (e.g., adding new crawlers and parsers for a new OSCTI source), making THREATKG extensible. THREATKG is fully automated: new reports are being collected and the extracted knowledge is being continuously integrated into the knowledge graph. Various downstream security applications can be empowered upon the threat knowledge graph. In particular, THREATKG provides a GUI that offers various types of interactivity to facilitate threat search and threat knowledge graph exploration (Sections III-E and III-F).

Deployment and Evaluation. We deployed THREATKG on a lab server and evaluated its effectiveness thoroughly. At the time of writing, THREATKG has collected 149K+ OSCTI reports from 40+ sources. The constructed threat knowledge graph contains 347K+ entities and 1.73M+ relations.
Our evaluation results demonstrate that: (1) THREATKG is able to accurately filter out non-threat reports and extract various types of threat knowledge from OSCTI text, and can generalize to unseen OSCTI sources; (2) Compared to existing security information extraction approaches, THREATKG has a much wider coverage of threat knowledge types and is more accurate; (3) the entire pipeline of THREATKG is efficient (able to finish the expected daily workload in < 30s).
To the best of our knowledge, THREATKG is the first system that automatically constructs a large knowledge graph for cyber threats from OSCTI reports. A system demo video is available at [33].

II. SYSTEM OVERVIEW

Figure 1 shows the architecture of THREATKG, which consists of three phases: (1) OSCTI report collection, (2) threat knowledge extraction, and (3) threat knowledge graph construction. Each phase consists of one or several processing steps (e.g., Parser, Extractor). In Phase I, THREATKG collects OSCTI reports from a wide range of sources (Crawler). In Phase II, THREATKG groups multi-page report files (Porter), parses the reports (Parser), filters out non-threat reports (Checker), and extracts threat knowledge (Extractor). In Phase III, THREATKG constructs a threat knowledge graph and persists it in the database. THREATKG is automated and continuously running, with new reports being periodically and incrementally collected and new knowledge being extracted and integrated into the knowledge graph via knowledge fusion.
Motivating Example. We show two representative OSCTI reports to motivate the design. Figure 2a shows a report snippet from Trend Micro threat encyclopedia [34] about the Ransom.Win32.LOCKBIT.YEBGW ransomware. The report is semi-structured; it contains a few structured fields about certain attributes of the malware (e.g., aliases, platform), as well as unstructured text about the detailed behaviors of the malware (e.g., dropping a file). Figure 2b shows a report snippet from Securelist blog [35] about the Office Monkeys dropper used by the CozyDuke threat actor. The complete report is about the CozyDuke threat actor, which primarily contains unstructured text on its contexts and behaviors.
We can observe that OSCTI reports have diverse formats. Besides, the unstructured text contains rich knowledge about threat behaviors. We annotated representative entities and relations in both reports. We can observe that some entity-relation triplets indicate detailed threat behavior steps (e.g., <Office Monkeys (Short Flash Movie).exe, launch, player.exe>). Besides, information about the sequential order of some steps may be presented (e.g., "...first...then..." in Figure 2b). In addition to detailed threat behavior steps, we can also observe that some triplets provide high-level threat contexts, in which the relations may not be explicitly associated with words in text (e.g., the CozyDuke actor uses the Office Monkeys (Short Flash Movie).exe dropper file to perform the attack). To uncover the complete multi-step threat scenario, the approach needs to accurately extract these triplets from text, infer the possible order of some triplets, and resolve entity coreferences (e.g., arrows in the figures) to connect the triplets.

Significance of THREATKG. Various downstream security applications can be empowered by the constructed threat knowledge graph. To facilitate threat analysts in acquiring knowledge about desired threats, THREATKG provides a GUI for threat search and interactive threat knowledge graph exploration. Intrusion detection systems [36] can integrate the gathered threat knowledge as the signature of an intrusion, making these systems able to counter the latest threats. Cyber threat hunting systems [37] can also leverage the gathered knowledge to guide the threat hunting process. In Section V, we discuss THREATKG's usefulness and applicability in detail.
Different from existing knowledge graphs [38]-[43] designed for storing and representing general knowledge, such as person names and organizations, THREATKG automatically constructs a knowledge graph from a large number of OSCTI reports for the cybersecurity domain, with the goal of empowering a wide range of downstream security applications. THREATKG frees threat analysts from the intensive and tedious manual threat knowledge gathering process, enabling them to redirect their energy to other more important defensive tasks.
III. DESIGN OF THREATKG
In this section, we present the design details of THREATKG.
A. OSCTI Report Collection
We built a robust multi-threaded crawler framework that manages crawlers to collect OSCTI reports from 40 major security websites, including: threat encyclopedias [22], [23], enterprise security blogs [15]-[17], influential personal security blogs [18], [19], security news [20], [21], etc. These websites provide a large number of OSCTI reports (in the form of webpages) that cover various types of threats (e.g., malware, vulnerabilities, attack campaigns), making them a valuable source of threat knowledge. Each crawler handles the specific layout structure of each website and is able to handle both static pages and dynamically generated content (e.g., the "View More" button in [15]) to collect individual report URLs. The crawler framework schedules periodic execution and reboot after failure for individual crawlers in a robust manner. To boost the crawling efficiency, the crawler framework employs a multi-threaded design that schedules parallel execution for multiple crawlers, as well as fetching multiple reports for each crawler. With THREATKG's extensible system architecture, new OSCTI sources can be easily added by adding a corresponding crawler and a corresponding parser.
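As an illustration of the parallel-fetch pattern (not THREATKG's actual crawler code), a minimal sketch using Python's standard thread pool might look as follows; the URLs are placeholders, and per-URL error handling ensures one failed page does not abort the whole crawl.

```python
import concurrent.futures
import requests  # common HTTP client, used here for illustration

def fetch_report(url, timeout=15):
    """Download a single OSCTI report page."""
    resp = requests.get(url, timeout=timeout)
    resp.raise_for_status()
    return url, resp.text

def crawl(report_urls, max_workers=8):
    """Fetch multiple reports in parallel, tolerating per-URL failures."""
    reports = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers) as pool:
        futures = {pool.submit(fetch_report, u): u for u in report_urls}
        for fut in concurrent.futures.as_completed(futures):
            try:
                url, html = fut.result()
                reports[url] = html
            except Exception as exc:
                print(f"failed: {futures[fut]} ({exc})")
    return reports

# Hypothetical report URLs for illustration
pages = crawl(["https://example.com/report-1", "https://example.com/report-2"])
```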
To further expand the threat knowledge coverage, in addition to the 40 security websites, we collected OSCTI reports from another useful source, APTnotes [44]. APTnotes is a repository of publicly-available reports related to malicious campaigns/activities/software that have been associated with vendor-defined APT groups. These reports are in PDF format and are typically longer than the collected webpages, which provide complementary threat knowledge. We used the script provided in [44] and downloaded 542 reports in total.
B. Hierarchical Threat Knowledge Ontology
Based on our observations of a wide range of OSCTI sources, we categorize OSCTI reports into three broad types: malware reports, vulnerability reports, and attack reports. Malware reports and vulnerability reports are semi-structured reports collected from threat encyclopedias [22], [23], which contain knowledge about malware or vulnerabilities. Figure 2a shows an example malware report snippet on the Ransom.Win32.LOCKBIT.YEBGW ransomware. Attack reports are unstructured reports collected from security blogs and news [15]-[21], which contain knowledge about attack campaigns. Figure 2b shows an example attack report snippet on the CozyDuke APT attack (CozyDuke is the name of the threat actor/group that is responsible for the attack).
To comprehensively model the threats, we construct a hierarchical threat knowledge ontology that includes a variety of threat knowledge entities and relations for capturing both low-level threat behaviors and high-level threat contexts. Figure 3 shows the ontology, which consists of three layers.
The report context layer of the ontology contains report-level knowledge. Specifically, for each report, we associate it with an entity of the corresponding type. This entity has attributes like title, URL, publication date, etc. Having explicit entities for reports would help threat analysts connect other threat knowledge entities (e.g., malware, IOCs, TTPs) gathered from the same report to form a comprehensive view of the threat. Threat analysts can also view the original report by following the URL attribute to obtain more context. Besides, reports are written by specific authors and created by specific CTI vendors, for which we create entities as well. These entities and their relations form the report context layer.
The threat behavior layer of the ontology contains knowledge on low-level threat behaviors. As shown in prior research [37], [45], IOCs and their relations contain important knowledge on how the threat unfolds into low-level connected steps. Such knowledge can be used to identify system call events (e.g., process reading a file) that are part of the attack sequence, which would largely benefit defensive measures like cyber threat hunting. For example, in Figure 2b, two filename IOCs, Office Monkeys (Short Flash Movie).exe and player.exe, have a launch relation. Thus, in the threat behavior layer, we consider different types of IOCs and their relations. Example IOC types are filename, filepath, IP, URL, domain, registry, and hashes. We follow the prior research [37], [45] and consider the interaction verbs (e.g., read, write, open, send) between the IOCs as their relations.
The threat context layer of the ontology provides high-level contexts for threats in addition to detailed threat behavior steps. Such contexts are critical to a comprehensive understanding of threats and designing effective countermeasures accordingly. For this layer, we consider a wide range of entities, including: (1) malware (e.g., BlackEnergy trojan [46]), (2) vulnerabilities (e.g., CVEs [47]), (3) threat actors (e.g., CozyDuke APT actor [35]), (4) tactics and techniques (e.g., spearphishing link [24]), (5) vulnera-

Entities in different layers can also be related. For example, entities in the threat behavior layer and the threat context layer that are gathered from the same report are related to the corresponding report entity (in the report context layer) through a reported_in relation. In Figure 2a, the malware entity Ransom.Win32.LOCKBIT.YEBGW is related to several filepath IOC entities through an add relation (after coreference resolution). In Figure 2b, the threat actor entity CozyDuke is related to the filename IOC entity Office Monkeys (Short Flash Movie).exe through a use relation. Entities can also have attributes in the form of key-value pairs (e.g., type of a malware, version of a vulnerable software). The three layers of the ontology collectively model the threats from multiple dimensions and in different granularities. Compared to other cyber ontologies [28]-[31], THREATKG's ontology has a much wider coverage of threat knowledge types, enabling threat analysts to obtain a more comprehensive view of threats.
C. OSCTI Report Parsing and Threat Relevance Checking
Once the crawlers collect the OSCTI reports, the porters group multi-page report files. The parsers are source-dependent; each parser parses the specific layout structure of the corresponding OSCTI source and converts the report files into unified threat knowledge representations (UTKRs). A UTKR is a JSON schema that covers relevant and potentially useful information in OSCTI data sources and lists out the corresponding fields. We construct this schema by iterating through OSCTI data sources and adding fields for previously undefined types of knowledge. Figure 1 shows an example schema, which contains fields such as title, author, and publication date. Specifically, the parsers first convert the reports into UTKRs (i.e., Python objects in memory) by parsing the structured fields. Unstructured text blocks are also parsed and put into UTKRs. Then, the extractors further enrich the UTKRs by extracting additional entities and relations from unstructured text and putting them into the corresponding fields.
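A UTKR instance might look like the following sketch; the field names beyond those mentioned above (title, author, publication date) are illustrative assumptions, not the exact ThreatKG schema.

```python
# A hypothetical UTKR instance, represented as a Python dict
utkr = {
    "source": "trendmicro-encyclopedia",      # illustrative source name
    "title": "Ransom.Win32.LOCKBIT.YEBGW",
    "author": None,
    "publication_date": "2021-07-12",          # illustrative value
    "url": "https://example.com/report",       # placeholder URL
    "text_blocks": [
        "Upon execution, it drops the following file ...",
    ],
    # The fields below start empty and are filled in by the extractors
    "entities": [],    # e.g., {"type": "filename", "value": "player.exe"}
    "relations": [],   # e.g., ("dropper.exe", "launch", "player.exe")
}
```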
The UTKR is different from the ontology described in Section III-B. The ontology conceptually specifies what types of knowledge we target and how the knowledge is structured, which is used to guide the threat knowledge graph construction. In contrast, the UTKR specifies the actual form of OSCTI data that resides in the system and is passed between system components. As we will discuss in Section III-E, having a unified intermediate representation that all components (parsers, checkers, extractors) can work on will largely increase system modularity and promote scalability and extensibility.
Threat Relevance Checking. As the crawlers simply collect the report files by following the URLs and do not have visibility into the report content, there could be reports collected that do not contribute valuable knowledge to modeling cyber threats (e.g., empty pages, ads, product promotions, irrelevant news [13]). Keeping these reports in the knowledge extraction pipeline will waste computation resources for the extractors and impair the quality of the gathered threat knowledge. Therefore, THREATKG employs a set of checkers that operate on the UTKRs produced by the parsers and filter out reports that are irrelevant to cyber threats. The filtered UTKRs are then passed to the extractors for further enrichment.
Empty web pages can be easily filtered out using simple rules. Hence, we construct a rule-based checker for these reports. For ads and other irrelevant reports, we model the checking process as a binary classification task and construct learning-based checkers: given an OSCTI report, determine whether or not it is relevant to cyber threats.
To train the classifier, we extract a set of useful features, including: (1) Keyword count & density in the report title and body: We obtain a list of keywords from MITRE ATT&CK [24] for example threat actors, malware, tools, techniques, etc.; (2) IOC count & density in the report body: We extract IOCs using regex rules (see Section III-D1). We do not consider the report title, as most titles do not contain threat details like IOCs; (3) Report article length: Based on our observations, a longer report is more likely to contain threat behaviors (e.g., the list of IOCs in Figure 2a); (4) TF-IDF values for tokens: We prioritize frequent, unique tokens by calculating the TF-IDF [49] value for each token in the report. We use these features to train a variety of ML models (e.g., SVM, Random Forest, XGBoost, LightGBM) and evaluate their performance on our ground-truth dataset. The experimental results and analysis are in Section IV-B1.
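A simplified sketch of such a checker is shown below (scikit-learn, toy data); for brevity it uses only the hand-crafted features and omits the TF-IDF component, and the keyword list, regexes, and labels are illustrative.

```python
import re
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative keyword list and IOC regex; the real lists come from
# MITRE ATT&CK and a much fuller set of IOC rules
KEYWORDS = ["malware", "ransomware", "phishing", "backdoor", "trojan"]
IOC_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b|\b[a-fA-F0-9]{32,64}\b")

def featurize(title, body):
    tokens = body.split()
    n = max(len(tokens), 1)
    kw = sum(body.lower().count(k) for k in KEYWORDS)
    kw_title = sum(title.lower().count(k) for k in KEYWORDS)
    iocs = len(IOC_RE.findall(body))          # IOCs counted in body only
    return [kw, kw / n, kw_title, iocs, iocs / n, len(tokens)]

# X: feature vectors; y: 1 = threat-relevant, 0 = irrelevant (toy labels)
X = np.array([featurize("LockBit analysis", "ransomware beacons to 1.2.3.4"),
              featurize("Webinar invite", "join our product webinar today")])
y = np.array([1, 0])
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X))
```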
D. Threat Knowledge Extraction
The extractors take the UTKRs produced by the parsers as input, perform an in-depth natural language understanding of unstructured text, and extract a variety of entities and relations to further enrich the UTKRs. The extractors are source-independent: every extractor extracts the targeted threat knowledge from unstructured text universally for all OSCTI sources, and the extraction does not depend on the layout structure of each source. By decoupling the threat knowledge extraction process into source-dependent parsing and source-independent extraction, THREATKG can be easily extended to incorporate new OSCTI sources (via adding crawlers and parsers) and new knowledge types (via adding extractors).
1) Threat Knowledge Entity Extraction: For IOCs, we construct a set of regex rules that cover a wide range of IOC types (e.g., filename, filepath, IP, URL, domain, registry, hashes). THREATKG incorporates these rules in a rule-based entity extractor. For other types of entities (e.g., malware, threat actors, tools) that are hard to specify using rules, THREATKG employs a DL-based extractor to perform neural named entity recognition. Named entity recognition (NER) is a task of information extraction that seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories [50]. Compared to conventional NER approaches like Hidden Markov Model [51], DL-based approaches avoid the time-consuming feature engineering stage and can better understand deep semantics of text and capture hidden patterns, leading to more accurate extraction.
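A few illustrative IOC rules are sketched below; THREATKG's actual rule set is more extensive and handles many more edge cases (e.g., defanged indicators, overlapping hash lengths).

```python
import re

# Illustrative regexes for a few IOC types (simplified for brevity)
IOC_PATTERNS = {
    "ip":       r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "md5":      r"\b[a-fA-F0-9]{32}\b",
    "sha256":   r"\b[a-fA-F0-9]{64}\b",
    "filename": r"\b[\w.-]+\.(?:exe|dll|bat|ps1|docx?)\b",
    "registry": r"\bHKEY_[A-Z_]+(?:\\[\w .-]+)+",
}

def extract_iocs(text):
    """Return a list of (ioc_type, value) pairs found in the text."""
    found = []
    for ioc_type, pattern in IOC_PATTERNS.items():
        for match in re.finditer(pattern, text):
            found.append((ioc_type, match.group()))
    return found

text = "player.exe beacons to 192.168.10.5 and writes HKEY_LOCAL_MACHINE\\Software\\Run"
print(extract_iocs(text))
```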
Unique Challenges for Threat Knowledge Extraction. As mentioned in Section I, compared to general information extraction, we are faced with two unique challenges for extracting threat knowledge from OSCTI text. First, massive nuances exist in OSCTI text that are particular to the security context (e.g., dots, underscores, spaces, slashes in IOCs). These nuances confuse many basic NLP modules (e.g., sentence segmentation, tokenization) and hence the extraction techniques built upon these modules. Second, for general information extraction where the goal is to extract general entities (e.g., person names, organizations, locations) and relations, the community has already curated many benchmark datasets using sources like news corpora and Wikipedia (e.g., CoNLL-2003 [52] for named entity recognition, SemEval-2010 Task 8 [53] for relation extraction). However, there is no labeled dataset for the threat knowledge extraction task that covers the wide range of entities and relations that we target. Moreover, DL-based information extraction approaches typically require the annotated training corpus to be large, but it is expensive to annotate a large OSCTI corpus manually. These challenges apply to both the threat knowledge entity extraction task and the threat knowledge relation extraction task.
To address the first challenge, as these nuances mostly exist in IOCs, we leverage a method called IOC Protection proposed in our other work [37], by replacing IOCs with meaningful words in natural language context (e.g., the word "something") and restoring them after the tokenization procedure. This way, we guarantee that the potential entities are complete tokens. To address the second challenge, we leverage data programming [32] to programmatically synthesize annotations for targeted entities and relations in OSCTI text. We will discuss the details of data programming in Section III-D3.
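A minimal sketch of the protect/restore idea is shown below; the regexes and placeholder choice are illustrative, and a production version must also handle placeholder collisions and far richer IOC grammars.

```python
import re

IOC_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b|\b[\w.-]+\.exe\b")

def protect_iocs(sentence, placeholder="something"):
    """Replace each IOC with a plain English placeholder so standard
    tokenizers do not split it, remembering the originals in order."""
    iocs = IOC_RE.findall(sentence)
    return IOC_RE.sub(placeholder, sentence), iocs

def restore_iocs(tokens, iocs, placeholder="something"):
    """Swap the placeholders back for the original IOC strings after
    tokenization, so each IOC survives as one complete token."""
    iocs = iter(iocs)
    return [next(iocs) if t == placeholder else t for t in tokens]

sent = "dropper.exe launches player.exe from 10.0.0.7"
protected, iocs = protect_iocs(sent)
tokens = protected.split()          # stand-in for a real tokenizer
print(restore_iocs(tokens, iocs))
# ['dropper.exe', 'launches', 'player.exe', 'from', '10.0.0.7']
```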
Neural NER. We construct a Bidirectional LSTM-CRF (BiLSTM-CRF) model [54] to perform neural NER over OSCTI text. (1) First, each input sentence is tokenized and each token is transformed into an embedding vector via one-hot encoding. (2) Then, the embeddings are forwarded to the bidirectional LSTM (BiLSTM) layer, which consists of two LSTM networks that process the input sentence in the forward and backward directions. LSTM (Long Short-Term Memory) [55] is known for its capability in capturing long-term dependencies of tokens. However, a single LSTM can only remember information from the past context. For tasks like NER, understanding the context of a token through both past and future contexts is necessary. Thus, we add an additional LSTM to construct the BiLSTM layer, which processes the information in a bidirectional manner. The BiLSTM essentially acts as a deep feature extractor that captures the sequential relationships among the input tokens. (3) The outputs from the BiLSTM are forwarded to a linear layer, which maps the features extracted by the BiLSTM from the feature space into the tag space. After mapping, the outputs are forwarded to a Conditional Random Field (CRF) layer [56], which outputs an optimal, joint prediction of all the tags in the sentence. During inference, each input token is propagated through the network and the Viterbi algorithm [57] is applied at the CRF layer to find the most optimal sequence of output tags.
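A minimal sketch of such a model is shown below, assuming PyTorch and the third-party pytorch-crf package; the hyperparameters and tag count are illustrative, not the values used in THREATKG.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # assumes the pytorch-crf package is installed

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional LSTM: reads the sentence forward and backward
        self.lstm = nn.LSTM(emb_dim, hidden // 2, bidirectional=True,
                            batch_first=True)
        self.fc = nn.Linear(hidden, num_tags)        # features -> tag space
        self.crf = CRF(num_tags, batch_first=True)   # joint tag decoding

    def forward(self, tokens, tags=None):
        emissions = self.fc(self.lstm(self.emb(tokens))[0])
        if tags is not None:
            return -self.crf(emissions, tags)   # training: negative log-likelihood
        return self.crf.decode(emissions)       # inference: Viterbi tag paths

model = BiLSTMCRF(vocab_size=5000, num_tags=7)  # e.g., BIO tags for 3 types
x = torch.randint(0, 5000, (2, 12))             # a batch of token id sequences
print(model(x))                                  # predicted tag sequences
```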
2) Threat Knowledge Relation Extraction: As mentioned in Section III-B, there exist relations that can be directly associated with interaction verbs between two entities (e.g., drop, add relations in Figure 2a; drop, launch relations in Figure 2b), as well as relations that may not be explicitly associated with words in text (e.g., the use relation between CozyDuke and Office Monkeys (Short Flash Movie).exe in Figure 2b). These relations (and the associated entities) capture both low-level threat behaviors and high-level threat contexts.
Dependency Parsing-Based RE. For the first type of relations, THREATKG employs a dependency parsing-based relation extractor to extract interaction verbs between two entities. In our other work [37], we proposed a light-weight, unsupervised NLP pipeline for extracting various verbs between two IOCs in OSCTI text, which has achieved high extraction accuracy. Our approach leverages dependency parsing to analyze the grammatical structure of a sentence and constructs a dependency tree, and then uses a set of dependency grammar rules to locate the subject-verb-object relations between IOCs to extract the targeted relation verb. Besides, our approach can also extract the sequential order of IOC interaction steps if presented, which is useful for understanding the threat scenario. Thus, in THREATKG, we leverage this approach to extract verbs between two IOCs. We further extend it to support the extraction of verbs between any types of entities listed in our ontology (e.g., drop, add relations between malware and IOCs in Figure 2a), by extending the IOC recognition step with our neural NER model.
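The following sketch illustrates the idea with spaCy's dependency parser; it is a simplified stand-in for the rule set in [37] (which also handles IOC protection and sequential ordering), and the example sentence is illustrative.

```python
import spacy  # assumes the en_core_web_sm model is installed

nlp = spacy.load("en_core_web_sm")

def extract_svo(sentence):
    """Pull (subject, verb, object) triplets from a sentence's
    dependency tree: a simplified version of grammar rules that locate
    interaction verbs between recognized entities."""
    triplets = []
    for token in nlp(sentence):
        if token.pos_ == "VERB":
            subjects = [c for c in token.lefts if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.rights if c.dep_ == "dobj"]
            for s in subjects:
                for o in objects:
                    triplets.append((s.text, token.lemma_, o.text))
    return triplets

# With IOC protection applied first, IOCs survive as single tokens
print(extract_svo("The dropper launches the player immediately."))
# e.g., [('dropper', 'launch', 'player')]
```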
It is important to note that compared to our other work [37], THREATKG has a completely different goal: THREATKG targets extracting a wide range of entities and relations from a large number of OSCTI reports to automatically construct a threat knowledge graph. In contrast, [37] targets extracting only IOCs and IOC relations from a single OSCTI report and using the extracted information for threat hunting.
Neural RE. For the second type of relations, as these relations are not associated with explicit verbs in text, the previous dependency parsing-based approach will not work. Thus, we model the relation extraction as a multi-class classification task: given a sentence that contains two entities recognized by our entity extractors, determine which relation category the sentence belongs to. Here, the entities include the IOCs recognized by our IOC rules and the other entities recognized by our BiLSTM-CRF model. Example relation categories include USE (i.e., use something to achieve a goal), CREATE (i.e., generate or make something that did not exist before), BREAK (i.e., stop or prevent something from happening), FIND (i.e., discover or locate something), and ALIAS (i.e., two entities are synonyms). In general, two entities could have a relation when they co-occur within a certain distance. These entities could co-occur in the same sentence or in different sentences. In the current implementation of THREATKG, we focus on entities that co-occur in the same sentence, as these entities are most likely to generate high-quality relations based on our observations. Specifically, THREATKG employs a DL-based relation extractor that leverages a Piecewise Convolutional Neural Network model with an attention mechanism (PCNN-ATT) to perform neural relation extraction (RE). The PCNN [58] model is similar to the Convolutional Neural Network (CNN) models widely used for image and text classification tasks. However, PCNN is specially designed for relation extraction: instead of using a single max pooling to merge features as in CNN, PCNN uses piecewise max pooling, which splits a sentence into three parts by the two entities and calculates the maximum value of each part. Compared to CNN, PCNN is more suitable for relation extraction because the two entities in the sentence (and their locations) capture structural information about the sentence and are critical for identifying the important tokens between them that indicate the relation. Moreover, since the tokens in a sentence are not equally helpful for relation extraction, an additional attention layer is added to make the model focus on tokens that are more important.
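The piecewise max pooling operation itself can be sketched as follows (PyTorch, illustrative dimensions); this is a minimal rendering of the pooling idea only, not the full PCNN-ATT model.

```python
import torch

def piecewise_max_pool(conv_out, e1_pos, e2_pos):
    """Piecewise max pooling as in PCNN: split the convolution output of
    one sentence into three segments at the two entity positions and
    max-pool each segment separately.

    conv_out: (seq_len, num_filters) feature map for one sentence."""
    a, b = sorted((e1_pos, e2_pos))
    segments = [conv_out[: a + 1], conv_out[a + 1 : b + 1], conv_out[b + 1 :]]
    pooled = [seg.max(dim=0).values for seg in segments if len(seg) > 0]
    # Concatenate the pooled vectors into the sentence representation
    return torch.cat(pooled)

feats = torch.randn(20, 64)                    # 20 tokens, 64 conv filters
print(piecewise_max_pool(feats, 4, 13).shape)  # torch.Size([192])
```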
After the NER and before the RE, THREATKG performs coreference resolution [59] to find all expressions (e.g., pronouns) in the text that refer to a specific entity. Figure 2 shows example entity coreferences indicated by the arrows. This way, the RE can benefit from the information provided by the resolved entities and the extracted triplets can be connected to form a comprehensive view of threat knowledge.
3) Data Programming: To train DL-based models for NER and RE, a large annotated corpus is needed. However, manually annotating such corpus is expensive: for NER, we need to annotate each token in the text with a tag in the BIO format; for RE, we need to annotate each sentence in the text with a relation category as well as the types and location spans of the entities in the sentence. To mitigate the cost of obtaining supervision, we leverage data programming [32], which programmatically synthesizes annotations via unsupervised modeling of sources of weak supervision. Specifically, data programming first obtains the domain knowledge expressed by subject matter experts via labeling functions (could be noisy rules based on heuristics), and then denoises and integrates these sources of weak supervision to synthesize annotations.
We leverage an open-source realization of data programming, Snorkel [60], to programmatically build large training sets for our NER and RE tasks. The entire labeling pipeline of Snorkel is unsupervised (i.e., does not require labeled data for training): after we construct the labeling functions, Snorkel will automatically learn and assign weights for the labeling functions and produce a single set of noise-aware confidenceweighted labels for the input samples.
The key to synthesizing good annotations is to define noisy but helpful labeling functions. To synthesize annotations for the NER task, we create labeling functions based on our curated lists of entity keywords. For example, the lists of threat actors, malware, techniques, and tools are constructed from MITRE ATT&CK [24]. To synthesize annotations for the RE task, we create labeling functions based on distant supervision and on checking entity types and keyword existence: (1) Distant supervision [61] is a technique that generates training data using an already existing knowledge base. The idea is that if a fact exists between two targeted entities in the knowledge base, we can label each pair of the targeted entities that appear in the same sentence as a positive example for the relation that the fact represents. This way, we can generate a large number of (noisy) labeled sentences. Specifically, in our setting, we leverage MITRE ATT&CK, which is a manually curated knowledge base for cyber adversary behaviors whose data is available in a downloadable structured JSON file. For example, for a sentence that contains one threat actor entity and one malware entity, if the two entities exist in the MITRE ATT&CK JSON and have the "use" relation type, we label the sentence with the USE relation category. (2) We also construct labeling functions based on heuristic rules that check the entity types and the existence of keywords. For example, for the ALIAS relation, we check whether the two entities have the same type and whether keywords like "alias" or "aka" are present. By leveraging data programming, we can generate a large amount of training data with relatively low human effort.
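The sketch below illustrates what such labeling functions might look like with Snorkel's API; the knowledge-base facts, relation labels, and data are toy stand-ins, not THREATKG's actual labeling functions.

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, USE, ALIAS = -1, 0, 1

@labeling_function()
def lf_alias_keyword(x):
    """Heuristic: same entity type plus an alias-style keyword."""
    if x.e1_type == x.e2_type and ("aka" in x.text or "alias" in x.text):
        return ALIAS
    return ABSTAIN

@labeling_function()
def lf_distant_use(x):
    """Distant supervision: the pair appears as a 'uses' fact in a
    knowledge base (a toy stand-in for the MITRE ATT&CK JSON)."""
    kb_use_facts = {("CozyDuke", "CozyCar")}   # illustrative facts only
    return USE if (x.e1, x.e2) in kb_use_facts else ABSTAIN

df = pd.DataFrame([
    {"text": "SeaDuke, aka SeaDaddy, was deployed", "e1": "SeaDuke",
     "e2": "SeaDaddy", "e1_type": "malware", "e2_type": "malware"},
    {"text": "CozyDuke deployed CozyCar", "e1": "CozyDuke",
     "e2": "CozyCar", "e1_type": "actor", "e2_type": "malware"},
])
L = PandasLFApplier([lf_alias_keyword, lf_distant_use]).apply(df)
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L)              # unsupervised denoising of the LF votes
print(label_model.predict(L))   # noise-aware labels for the sentences
```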
E. Scalable and Extensible System Architecture
Threat Knowledge Graph Construction. After the extractors enrich the UTKRs, THREATKG constructs the threat knowledge graph from the UTKRs and stores it in the backend database for persistence. Directly inserting the UTKRs leads to inefficient storage. Furthermore, these long representations are not convenient for end users (e.g., threat analysts) to comprehend and analyze. Thus, THREATKG refactors these intermediate representations to match the threat knowledge ontology, which is separately designed and has clear and concise semantics for entities and relations. THREATKG then merges the refactored representations into the database through its connectors. Currently, THREATKG uses Neo4j [62] for its storage, with nodes being entities and edges being relations. Each node is associated with a category (e.g., malware or threat actor), a unique name (e.g., a specific malware name), and a set of attributes. New database backends can be easily supported by adding the corresponding connectors, thanks to the modular design of THREATKG's system architecture.
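For illustration, a minimal connector built on the official Neo4j Python driver (5.x) could upsert a refactored triplet as follows; the connection settings, node label, and example entities are assumptions, and MERGE keeps repeated inserts of the same fact idempotent:

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))  # hypothetical settings

def upsert_relation(tx, e1, e2, rel):
    # Merge both entity nodes and the typed edge between them.
    tx.run(
        "MERGE (a:Entity {name: $n1, category: $c1}) "
        "MERGE (b:Entity {name: $n2, category: $c2}) "
        "MERGE (a)-[:REL {type: $rel}]->(b)",
        n1=e1["name"], c1=e1["category"],
        n2=e2["name"], c2=e2["category"], rel=rel,
    )

with driver.session() as session:
    session.execute_write(upsert_relation,
                          {"name": "APT29", "category": "threat_actor"},
                          {"name": "WellMess", "category": "malware"}, "USE")
```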
Scalability and Extensibility. To make the system scalable, we parallelize the system components for the processing steps (e.g., crawlers, parsers, checkers, extractors). We further pipeline the processing steps to improve the throughput of threat knowledge extraction. Between different processing steps, we specify the formats of intermediate representations (i.e., UTKRs) and make these representations serializable. These UTKRs are passed through the pipeline and get enriched. With such a pipeline design, we can have multiple computing instances for a single processing step and pass serialized intermediate results across the distributed network, making multi-host deployment and load balancing possible.
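The pipelined design can be pictured with the standalone Python sketch below: one worker process per stage, serializable records flowing through queues, and a None sentinel for shutdown. The stage bodies are trivial stand-ins for the real parser, checker, and extractor components.

```python
import multiprocessing as mp

def stage(worker, inbox, outbox):
    """Generic pipeline stage: apply `worker` to each serialized UTKR."""
    for utkr in iter(inbox.get, None):     # None is the shutdown sentinel
        outbox.put(worker(utkr))
    outbox.put(None)                       # propagate shutdown downstream

def parse(utkr):   return {**utkr, "parsed": True}
def check(utkr):   return {**utkr, "relevant": True}
def extract(utkr): return {**utkr, "entities": []}

if __name__ == "__main__":
    q1, q2, q3, q4 = (mp.Queue() for _ in range(4))
    procs = [mp.Process(target=stage, args=(f, i, o))
             for f, i, o in [(parse, q1, q2), (check, q2, q3), (extract, q3, q4)]]
    for p in procs:
        p.start()
    for report in [{"id": 1}, {"id": 2}]:  # collected reports enter the pipeline
        q1.put(report)
    q1.put(None)
    for utkr in iter(q4.get, None):        # enriched UTKRs leave the pipeline
        print(utkr)
    for p in procs:
        p.join()
```

Because each stage only communicates through serialized records, any stage can be replicated across processes or hosts without changing the others.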
To make the system extensible, we adopt a modular design, allowing multiple system components in the same processing step to work together with the same input/output interface. For example, THREATKG has multiple crawlers to collect OSCTI reports from multiple sources, and THREATKG can import report data from the collected HTML files, PDF files, or compressed formats by using different types of porters. In addition, THREATKG provides rich configuration support: the system can be configured through a configuration file, which specifies the set of components to use and the additional parameters (e.g., threshold values for NER) that are passed to these components. With this design, existing components can be switched off and new components can be easily added.
Continuous Knowledge Integration. To provide the latest threat knowledge in a timely manner, THREATKG is fully automated and continuously running, with new reports being collected and new knowledge being extracted and integrated into the threat knowledge graph. At the time of writing, THREATKG has collected 149,015 reports, and the threat knowledge graph has accumulated 347,243 entities and 1,732,469 relations.
When referring to the same entity, different sources may use different identifiers (e.g., "ZQuest" and "Z-Quest" refer to the same adware). To keep the knowledge stored in the threat knowledge graph consistent, THREATKG aggregates knowledge from multiple sources via knowledge fusion. In knowledge fusion, THREATKG scans all the extracted entities and merges facts about the same entity by creating a new entity as the merged result and migrating all relations. One key challenge is that in the threat knowledge domain, entities with similar names might be completely different (e.g., "Petya" and "NotPetya" are two ransomware families whose names satisfy a substring relation but are different entities). To address this challenge, THREATKG takes advantage of the contextual information stored along with each entity and only triggers a merge when two candidate entities have a name similarity (e.g., semantic similarity computed using word embeddings [63]) that surpasses a configurable threshold, have no conflicts in their attribute values, and operate in a similar environment (e.g., on the same platform). By considering the contextual information extracted along with the entity identifier and merging only when there is no conflict, THREATKG reduces the information loss in its knowledge fusion procedure, while providing a consistent and comprehensive view of entities mentioned in multiple sources.
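A sketch of this merge gating follows; `embed` stands for a word-embedding lookup, and the 0.9 threshold is illustrative, not the system's configured value:

```python
import numpy as np

def should_merge(e1, e2, embed, threshold=0.9):
    """Merge only on high name similarity, no attribute conflicts,
    and a shared platform, mirroring the three conditions above."""
    v1, v2 = embed(e1["name"]), embed(e2["name"])
    cos = float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
    if cos < threshold:
        return False   # e.g., "Petya" vs "NotPetya" should stay separate
    shared = set(e1["attrs"]) & set(e2["attrs"])
    if any(e1["attrs"][k] != e2["attrs"][k] for k in shared):
        return False   # conflicting attribute values block the merge
    return e1.get("platform") == e2.get("platform")
```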
F. Frontend Web GUI
To facilitate threat search and knowledge graph exploration, we built a web GUI using React and Elasticsearch. The GUI interacts with the Neo4j database and provides various types of interactivity. The user can zoom in/out, drag the canvas, click on a node or an edge to display detailed information, and search for information using keywords (through Elasticsearch) or Cypher queries (through the Neo4j Cypher engine). Once the user drags a node, the GUI responds to the node movement and prevents overlap through an automatic graph layout based on the Barnes-Hut algorithm [64]. Dragged nodes lock in place but remain draggable if selected. This feature facilitates defining custom graph layouts for visualization. The GUI also supports convenient inter-graph navigation. When a node is double-clicked, if its neighboring nodes have not yet appeared in the view, they will automatically spawn. Conversely, once the user is done investigating a node, if its neighboring nodes or any downstream nodes are shown, double-clicking on the node again will hide all its neighboring and downstream nodes. In addition, the user can configure the number of nodes displayed and the maximum number of neighboring nodes displayed per node, and view the previously displayed graphs.
Note that the Neo4j database also provides the Neo4j Browser for data exploration. Compared to it, our GUI is not tied to the specific database backend, and it is easy to switch to a different database (e.g., an RDF store) while providing the same functionalities. Furthermore, unlike the Neo4j Browser, which can only perform structured Cypher query search, our GUI also supports fuzzy keyword search powered by Elasticsearch, which is easier to use and facilitates quick exploration. Such a design also opens up possibilities for building more types of threat analytics over the threat knowledge graph and integrating these analytics in the GUI (e.g., acquiring threat knowledge through natural language question answering).
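For illustration, the fuzzy keyword search could be served by a query like the one below (elasticsearch-py 8.x); the endpoint, index name, and field name are hypothetical:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")          # hypothetical endpoint

# Fuzzy matching tolerates small spelling variations ("zquest" vs "Z-Quest").
resp = es.search(
    index="threat-entities",                         # hypothetical entity index
    query={"fuzzy": {"name": {"value": "zquest", "fuzziness": "AUTO"}}},
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["name"], hit["_score"])
```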
IV. EVALUATION
We built THREATKG (∼26K LOC) upon several tools: Python for the system, BeautifulSoup and Selenium for the crawlers, scikit-learn and Ray Tune (for hyperparameter optimization) for the checkers, PyTorch for the extractors, Snorkel for data programming, and Neo4j for the storage backend. We deployed THREATKG on a lab server and conducted extensive experiments to evaluate THREATKG's performance.
We aim to answer the following research questions:
• RQ1: How accurate is THREATKG in filtering out OSCTI reports that are irrelevant to cyber threats?
• RQ2: How accurate is THREATKG in extracting threat knowledge entities and relations from OSCTI text? Does data programming help improve the performance?
• RQ3: How good is THREATKG in gathering various types of threat knowledge compared to other baselines?
• RQ4: For the runtime performance, is THREATKG efficient enough to be practical for a real-world deployment?
A. Evaluation Setup
The deployed server has an AMD EPYC 7282 CPU (2.80GHz) running Ubuntu 20.04 and an Nvidia GRID T4-16Q GPU with 16GB RAM. To evaluate the accuracy of THREATKG in extracting threat knowledge, a ground-truth labeled OSCTI report dataset is needed. To construct the ground truth, we manually labeled 141 reports selected from seven OSCTI sources, including APTnotes attack reports, two threat encyclopedias, and four enterprise security blogs. These reports have diverse formats and cover a wide range of threat knowledge. For entities specified in the ontology, we label them using the BIO tags. For relations, we label both the relation verbs (if they exist) between the two entities for explicit relations, and the relation categories for implicit relations (we have 17 relation categories in total). Two of our authors were involved in the labeling process. They first independently labeled all the entities and relations, then cross-checked each other's results and resolved any conflicts. Table I shows the statistics of our ground-truth OSCTI dataset.
Training DL-based models typically requires a large dataset. However, annotating entities and relations in OSCTI reports is very expensive, especially given the wide range of knowledge types that we target. As mentioned in Section III-D1, we face the unique challenge that no benchmark dataset exists for the threat knowledge extraction domain. We have made our best effort to curate an OSCTI dataset at the current scale to evaluate our system.
B. Evaluation Results
1) RQ1: Accuracy of Irrelevant Report Filtering: As labeling whether a report is relevant to cyber threats is much easier than annotating entities and relations, we constructed a separate, larger dataset solely for evaluating the checker performance. The dataset comprises 755 reports randomly selected from three OSCTI sources: Securelist, Symantec Threat Intelligence, and Webroot. In the dataset, 517 reports are relevant to cyber threats and 238 reports are irrelevant.
OSCTI reports collected from different sources have different structures, writing styles, and topics. Given this distributional shift in the training data, a classifier might benefit more from data within the same source than from data in other sources. To investigate this, we ran two experiments: (1) For each source, we trained a source-specific classifier and evaluated its performance on its own source. (2) We combined all sources to train a universal classifier and evaluated its performance on each source individually. We trained a number of ML classifiers, including Logistic Regression, Random Forest, Linear SVM, SVM with RBF kernel, XGBoost, and LightGBM. The train/dev/test split is 70-10-20. Table II shows the results averaged over the different ML models. We make the following observations: (1) For source-specific classifiers, the average F1 scores are above 86% and the average false negative rates (FNRs) are below 3.77%. The false positive rates (FPRs) are higher. In our problem setting, a high FPR is acceptable as long as the FNR is sufficiently low: a high FNR means that many relevant reports (and the threat knowledge they contain) are filtered out, while a high FPR just means that the system is conservative in filtering the reports. (2) The performance of the universal classifier does not benefit from more training data, and is worse than the source-specific classifiers for some sources. This confirms the distributional shift problem across OSCTI sources that we conjectured above. Thus, in practice, we recommend training classifiers for different sources separately to obtain better checker performance.
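A minimal sketch of one source-specific checker is shown below; the tiny inline dataset and TF-IDF features are illustrative stand-ins, and the dev split used for model selection is omitted for brevity:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, f1_score
from sklearn.model_selection import train_test_split

# Stand-in corpus: 1 = relevant to cyber threats, 0 = irrelevant.
texts = ["ransomware campaign targets banks", "quarterly product announcement"] * 50
labels = [1, 0] * 50

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=0)

vec = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X_train), y_train)

pred = clf.predict(vec.transform(X_test))
tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
print(f"F1={f1_score(y_test, pred):.3f}  "
      f"FNR={fn / (fn + tp):.3f}  FPR={fp / (fp + tn):.3f}")
```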
2) RQ2: Accuracy of Threat Knowledge Extraction: Accuracy of the BiLSTM-CRF Model for NER. As our labeled dataset in Table I is relatively small for training the neural NER model, we gathered the remaining reports from the same seven sources and applied data programming to them to expand the dataset. We performed two experiments (80-20 train/test split): (1) We trained the BiLSTM-CRF model and evaluated it on a test set from the same OSCTI sources. (2) We evaluated the same trained model on all other OSCTI sources (i.e., excluding the seven sources). The second experiment evaluates the generalizability of our model on unseen sources. Table III shows the hyperparameters of the model. Table IV shows the averaged results over all entity BIO tags for the two settings. We observe that in both settings the model performs well (> 99% F1). Moreover, even if we exclude the "O" tags (used for tokens that are not of interest), which outnumber the other tags in the dataset, the model's performance remains good. These results demonstrate the model's accuracy and generalizability. Accuracy of the PCNN-ATT Model for RE. The labeled dataset in Table I contains 7308 relations. We picked 16 reports (the same as in Section IV-B3) and constructed the test set from the relations in them. The remaining 125 reports contain 1219 "non-others" relations (relations that are not OTHERS) and 4615 OTHERS relations. This dataset is imbalanced, which would negatively impact the performance of a model trained on it. Thus, we under-sampled the OTHERS relations to make the dataset more balanced, leaving 1732 OTHERS relations. To further expand the dataset, we manually labeled 805 more "non-others" relations chosen from the same seven OSCTI sources. Finally, we created a train/dev split of 87.5% and 12.5% from the 2024 "non-others" relations and 1732 OTHERS relations. Table V shows the hyperparameters. Table VI shows the aggregated results over all relation categories. The results (79% F1) are within our expectations, as we are dealing with a challenging multi-class classification task (17 categories) and only have a small dataset to train the model.
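The under-sampling step itself is straightforward; a sketch follows, where `relations` is an assumed list of labeled candidates and the target count of 1732 matches the experiment above:

```python
import random

random.seed(0)
# Stand-in data mirroring the class imbalance described above.
relations = [{"label": "USE"}] * 1219 + [{"label": "OTHERS"}] * 4615

others = [r for r in relations if r["label"] == "OTHERS"]
non_others = [r for r in relations if r["label"] != "OTHERS"]

kept_others = random.sample(others, k=1732)   # down-sample the majority class
balanced = non_others + kept_others
random.shuffle(balanced)
print(len(non_others), len(kept_others))      # 1219 1732
```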
Effectiveness of Data Programming. We conjecture that the current RE performance is limited by the lack of training data. Thus, we created more training instances using data programming. We labeled 2049 more "non-others" relations and used all 4615 OTHERS relations, creating a train/dev split of 87.5% and 12.5%. The test set is the same as in the previous RE experiment (the manually labeled one). From the results in Table VI, we can see that the RE performance is significantly improved with data programming (from 79% to 85% F1). In addition, the performance for the relation types with fewer training instances in the previous experiment also improves. For example, for the INJECT relation, the F1 in the previous experiment is only 55% with 222 training instances; after data programming, the model was trained on 558 instances and its F1 score improves to 72%. These results demonstrate the effectiveness of data programming in creating training data to improve the model.

3) RQ3: Comparison With Existing Security Information Extraction Approaches: To further evaluate THREATKG's effectiveness in extracting threat knowledge, we compared THREATKG with two state-of-the-art security information extraction approaches, TTPDrill [65] and EXTRACTOR [45].
We evaluated the three approaches on the 16 reports selected as the test set for the RE performance evaluation in Section IV-B2. The reports were selected to represent a wide variety of threat scenarios: (1) 8 reports covering major OS platforms (e.g., Linux, Windows, iOS, and Android); (2) 8 reports covering well-known APT campaigns (e.g., Stuxnet and Beapy) and common types of cyber threats (e.g., malware and cryptojacking attacks). We ran TTPDrill and EXTRACTOR on these reports. For THREATKG, we used the same model trained in Section IV-B2.
TTPDrill is designed to extract threat actions and map them to TTP categories; it does not extract entities. This differs significantly from THREATKG, which has a much wider coverage of entity and relation types. Thus, when comparing extraction performance, we only compare the overlapping part: the threat actions extracted by TTPDrill against the relation types extracted by THREATKG. EXTRACTOR produces output similar to THREATKG's extraction module, extracting subject-predicate-object triplets. However, unlike THREATKG, EXTRACTOR only considers subjects/objects that involve IOCs, and IOCs are easy to extract using regular expressions. Thus, we evaluated the relation extraction performance of EXTRACTOR based on the meaning of the extracted phrases and compared it with the relations extracted by THREATKG. It is also important to note that neither TTPDrill nor EXTRACTOR targets building an automated system that extracts threat knowledge from a large number of OSCTI reports to construct a threat knowledge graph. Table VII shows the results. We observe that: (1) The relation extraction performance of EXTRACTOR is lower than that of THREATKG, because EXTRACTOR is originally designed for extracting phrases that involve IOCs. (2) TTPDrill suffers from low precision because its goal is to extract threat actions and map them to TTP categories, so it exhaustively extracts many phrases to provide enough information for the mapping step. These results demonstrate THREATKG's wide coverage of threat knowledge types.
4) RQ4: System Runtime Performance:
We measured a single-process run over all collected OSCTI reports with GPU enabled. The evaluation took 87.3 hours to finish, reaching a processing throughput of 24.7 OSCTI reports per minute. With around 11 new reports added to the system every day, the expected daily workload takes less than half a minute to process.
We also provide a performance breakdown in Table VIII. The extractors take most of the time, and dependency parsing is the bottleneck. A likely reason is that sentence-wise dependency parsing is time-consuming for long OSCTI reports. As evidence, dependency parsing for the source apt_notes, with an average content length of 32,503 characters, takes 22.0 seconds on average (88.5% of the total processing time for that source). In contrast, for the source symantec_vulnerability, with an average content length of 332 characters, it takes 0.1 seconds on average (71.5% of the total processing time for that source).
In summary, the evaluation results show that THREATKG is efficient enough for real-world use cases. Future efforts to further improve the runtime performance should focus on improving the efficiency of NLP dependency parsing modules.
V. DISCUSSION
Limitations and Design Alternatives. We identify several major limitations caused by the current design choices in THREATKG. First, although a fixed schema simplifies the interface design for downstream applications and makes the semantics of entities and relations clearer, information not covered by the schema cannot be captured by the system. In comparison, an OpenIE-like system [26], [27] can extract information as triplets without a predefined schema, potentially covering more types of entities and relations. We will explore the integration of OpenIE-based extractors in the future. Second, while the modular design of separate NER and RE models provides extensibility and robustness, the absence of a global gradient-passing mechanism makes it hard to implement an end-to-end machine learning model that would be easier to manage. Third, in the system design we assume that the chosen OSCTI sources are reliable, which might not hold with an adversarial or compromised OSCTI publisher. An alternative is to design more elaborate algorithms that maintain confidence scores for all generated facts and to revise the knowledge fusion procedure accordingly.

Downstream Security Applications. THREATKG can empower many existing downstream security applications while supporting new applications that were not possible before. Existing threat intelligence research [4], [66] has shown that individual reports often cover only partial knowledge about threat behaviors. By aggregating the knowledge gathered from multiple reports into a unified knowledge graph, THREATKG provides more comprehensive results in threat search and analysis. As THREATKG automatically extracts structured knowledge from unstructured OSCTI reports, systems and platforms that previously benefited from structured OSCTI can also benefit from the knowledge provided by THREATKG. For example, the knowledge extracted by THREATKG can be converted into open formats like STIX [67], exchanged on platforms like AlienVault OTX [9], and integrated into existing intrusion detection systems [36], [68] and attack investigation systems [69], [70] that take IOC and STIX feeds as input.
Prior research has also proposed using the knowledge extracted from individual OSCTI reports to guide threat hunting [37]. With the aggregated knowledge provided by THREATKG, a new way of threat hunting becomes possible. For example, we can reduce the effort of manual query construction in threat hunting by synthesizing or suggesting queries based on the threat knowledge graph and partial user input. We leave these applications for future work.
VI. RELATED WORK

In this section, we survey four categories of related work.

CTI Services and Platforms. Various platforms and services have been proposed to manage OSCTI. Platforms like AlienVault OTX [9], IBM X-Force [10], MISP [11], and OpenCTI [12] allow users to contribute, share, or manage OSCTI. Unlike these platforms, which require users to contribute information, THREATKG gathers and aggregates threat knowledge automatically from OSCTI reports using ML and NLP techniques. There are also services, such as PhishTank [5], OpenPhish [6], and Abuse.ch [7], that provide real-time CTI feeds. However, they only focus on specific types of entities: PhishTank and OpenPhish focus on phishing URLs, and Abuse.ch focuses on malware and botnets. In contrast, THREATKG extracts a much wider range of entities (e.g., threat actors, techniques). Moreover, THREATKG aims to build a connected knowledge graph with semantic relationships between entities, which existing platforms do not cover. Beyond these services and platforms, research progress has been made in better analyzing OSCTI reports, including understanding vulnerability reproducibility [66] and measuring threat knowledge quality (e.g., consistency, accuracy, and coverage) [4], [71]. Such research is orthogonal to THREATKG.

CTI Formats and Ontologies. Open standard formats such as STIX [67], OpenIOC [72], and CybOX [73] exist for exchanging threat intelligence. They are schemas rather than a large threat knowledge graph, as constructed by THREATKG, that contains the actual knowledge. The knowledge gathered by THREATKG can be easily converted into these formats for distribution. MITRE ATT&CK [24] is a knowledge base of cyber adversary behaviors based on real-world observations. It is manually curated by security experts and does not focus on automated knowledge extraction from unstructured reports, as done in THREATKG; it also does not contain IOC relations. There are some cyber ontologies [28], [29], [31], [74], [75] that support reasoning, but most of them only cover sub-domains of threat knowledge, such as IDS [74], [75] and malware behavior [28], [29]. The STUCCO ontology [30] is designed to integrate both structured and unstructured data sources but lacks support for high-level threat knowledge like techniques and tactics. UCO [31] aims to provide a unified ontology but is limited to attack information without mitigation information. Furthermore, none of these ontologies focuses on automated knowledge extraction from reports.
Note that in this work we do not focus on standardizing OSCTI, as open standards like STIX and OpenIOC already exist. We also do not focus on building a comprehensive platform like AlienVault OTX or OpenCTI for users to share and manage OSCTI data. Instead, we focus on automatically gathering OSCTI from unstructured reports and aggregating and structuring it, which existing solutions do not cover. The structured knowledge can then be easily converted into standard formats like STIX, shared on platforms like AlienVault OTX, or imported into platforms like OpenCTI for knowledge management.

Threat Knowledge Extraction. Several studies have proposed extracting threat knowledge from OSCTI reports. iACE [13] extracts IOCs from security articles using a graph mining technique. ChainSmith [76] further classifies the extracted IOCs into different attack campaign stages (e.g., baiting, exploitation, installation, and command and control) using neural networks. TTPDrill [65] extracts threat actions from Symantec reports and maps them to pre-defined attack patterns. EXTRACTOR [45], ThreatRaptor [37], and HINTI [77] use various NLP techniques to extract IOC entities and IOC relations. Most of these works focus only on IOCs or IOC relations; in contrast, THREATKG covers a wider range of entities (e.g., threat actors, techniques, tools) and relations. Besides, these works only extract knowledge from a single OSCTI report. In contrast, THREATKG automatically extracts knowledge from a large volume of reports, aggregates the knowledge to construct a large threat knowledge graph, and continuously updates the graph by ingesting new knowledge, providing a comprehensive view of the latest threats. The scope of THREATKG thus differs from the scopes of these works.

General Knowledge Graphs. A number of knowledge graphs [38]-[43] are designed for storing and representing general knowledge (e.g., people, locations, organizations). Different from them, THREATKG targets automatically constructing a threat knowledge graph for the security domain by gathering and aggregating knowledge from OSCTI reports. The constructed threat knowledge graph contains both detailed threat behavior steps (e.g., IOCs and IOC relations) and high-level threat contexts (e.g., threat actors, techniques). Such domain-specific threat knowledge is not available in existing knowledge graphs. With the threat knowledge graph, various downstream security applications can be empowered.
VII. CONCLUSION
We have presented THREATKG, a system for automated open-source cyber threat knowledge gathering and management. THREATKG automatically constructs a large threat knowledge graph from a large number of OSCTI reports using ML and NLP techniques, and provides a GUI to facilitate knowledge acquisition. THREATKG has the potential to empower a variety of downstream security applications.
TGFβ signalling plays an important role in IL4-induced alternative activation of microglia
Background: Microglia are the resident immune cells of the central nervous system and are accepted to be involved in a variety of neurodegenerative diseases. Several studies have demonstrated that microglia, like peripheral macrophages, exhibit two entirely different functional activation states, referred to as classical (M1) and alternative (M2) activation. TGFβ is one of the most important anti-inflammatory cytokines, and its inhibitory effect on classical activation of microglia and macrophages has been extensively studied. However, the role of TGFβ during alternative activation of microglia has not yet been described.

Methods: To investigate the role of TGFβ in IL4-induced microglia alternative activation, both BV2 cells and primary microglia from newborn C57BL/6 mice were used. Quantitative RT-PCR and western blots were performed to detect mRNA and protein levels of the alternative activation markers Arginase1 (Arg1) and Chitinase 3-like 3 (Ym1) after treatment with IL4, TGFβ, or both. Endogenous TGFβ release after IL4 treatment was evaluated using the mink lung epithelial cell (MLEC) assay and a direct TGFβ2 ELISA. A TGFβ receptor type I inhibitor and a MAPK inhibitor were applied to address the involvement of TGFβ signalling and MAPK signalling in IL4-induced alternative activation of microglia.

Results: TGFβ enhances IL4-induced microglia alternative activation by strongly increasing the expression of Arg1 and Ym1. This synergistic effect on Arg1 induction is almost completely blocked by the MAPK inhibitor PD98059. Further, treatment of primary microglia with IL4 increased the expression and secretion of TGFβ2, suggesting an involvement of endogenous TGFβ in the IL4-mediated microglia activation process. Moreover, IL4-mediated induction of Arg1 and Ym1 is impaired after blocking the TGFβ receptor I, indicating that IL4-induced microglia alternative activation depends on active TGFβ signalling. Interestingly, treatment of primary microglia with TGFβ alone results in upregulation of the IL4 receptor alpha, indicating that TGFβ increases the sensitivity of microglia to IL4 signals.

Conclusions: Taken together, our data reveal a new role for TGFβ during IL4-induced alternative activation of microglia and consolidate the essential functions of TGFβ as an anti-inflammatory molecule and immunoregulatory factor for microglia.
Background
Microglia represent the resident immune cells of the central nervous system (CNS) and account for approximately 12% of all cells in the brain [1]. As counterparts of peripheral macrophages, microglia sense the brain parenchyma for perturbations resulting from injury or pathological conditions. Several CNS neurodegenerative pathologies including Alzheimer's disease (AD) [2][3][4], multiple sclerosis (MS) [5,6] and Parkinson's disease (PD) [7,8] are characterised by a strong microglia reaction that is, at least partially, responsible for the progressive nature of these diseases.
Increasingly, studies have demonstrated that microglia, like peripheral macrophages, exhibit two entirely different functional activation states, referred to as classical and alternative activation. Classical activation of microglia (M1) is induced by Th1 cytokines, such as IFNγ, IL1β, IL12, and IL6, as well as lipopolysaccharide (LPS), and results in production and release of pro-inflammatory mediators such as tumour necrosis factor-alpha (TNFα), IL6, matrix metalloproteinase (MMP)-9, nitric oxide (NO), and reactive oxygen/nitrogen species (ROS) [9][10][11], which are involved in inflammation-mediated neurotoxicity [12][13][14]. In contrast, alternative activation of microglia (M2), initiated by Th2 cytokines such as IL4 and IL13, as well as IL10 and TGFβ, results in upregulation of arginase-1 (Arg1), Chitinase 3-like 3 (Ym1), and found in inflammatory zone-1 (Fizz1), which are primarily associated with tissue repair and extracellular matrix composition [3,11]. As hypothesised by Town et al., these differently activated microglia likely exist as a dynamic continuum in vivo, with functions ranging from deleterious to beneficial [15,16]. All these notions suggest that modulating microglia activation states might be a potential therapeutic approach for different types of neurodegenerative diseases, including AD, MS, and PD.
Based on the M1 and M2 activation states, a more detailed categorization of macrophage activation states has recently been discussed. As suggested by Gordon and colleagues [11,17,18], alternative activation is limited to macrophages treated with IL4 or IL13 and is primarily associated with injury resolution, including tissue repair and extracellular matrix reconstruction. IL10 and TGFβ, in contrast, promote a macrophage phenotype characterised by inflammation resolution, including inhibition of pro-inflammatory cytokine production, modification of inflammatory signalling pathways, and increased expression of scavenger receptors, thereby promoting debris clearance. This type of activated macrophage has been termed acquired deactivation [19][20][21][22][23]. Whereas IL10 and TGFβ induce acquired deactivation, acquired deactivation macrophages can in turn produce IL10 and TGFβ in an autocrine manner [3,24,25]. Given the immunoregulatory function of the IL10 and TGFβ produced by acquired deactivation macrophages, this phenotype has also been termed the regulatory macrophage by Mosser and Edwards [24]. Although these studies have advanced the knowledge of microglia/macrophage activation phenotypes and their distinct functions, there is still no general agreement in the field on the nomenclature and, more importantly, the interaction between different activation states of microglia/macrophages is still poorly understood.
TGFβ is a multifunctional cytokine involved in a variety of physiological and pathological conditions [26]. TGFβs bind to the TGFβ receptor type II, which recruits and phosphorylates a type I receptor. The type I receptor then phosphorylates Smad2/3, which further bind to Smad4 to form a heteromeric complex that translocates into the nucleus to regulate the expression of target genes [27,28]. Next to the canonical Smad-dependent pathway, TGFβs also signal via Smad-independent signalling cascades, including mitogen-activated protein kinase signalling (MAPK) pathways [29].
In this study we used the microglial cell line BV2 and primary microglia to investigate the role of TGFβ in IL4-induced alternative activation, thereby illustrating the interaction between different microglia/macrophage activation states. For the first time, we provide evidence that although TGFβ1 treatment alone is not able to induce microglia alternative activation, TGFβ1 applied together with IL4 strongly enhances IL4-induced alternative microglia activation. Arg1 and Ym1 expression was significantly increased after co-treatment with IL4 and TGFβ1. To our surprise, Arg1 and Ym1 expression induced by IL4 treatment alone was significantly impaired in the presence of a TGFβ receptor type I inhibitor. Further investigation revealed that IL4 treatment alone increased microglial TGFβ2 expression and secretion, which in turn might promote IL4-induced Arg1 and Ym1 expression. Moreover, we found that TGFβ1 treatment resulted in upregulation of the IL4 receptor alpha (IL4Rα). Finally, we provide evidence that the mitogen-activated protein kinase (MAPK) pathway is essential for TGFβ-mediated enhancement of Arg1 expression after IL4 treatment in microglia.
BV2 cell culture
The murine microglia cell line BV2 was maintained in DMEM/F12 (PAA) supplemented with 10% heat-inactivated FCS and 1% P/S. Cultures were kept at 37°C in a 5% CO2/95% humidified air atmosphere. Prior to treatment, cells were washed with PBS and serum-free medium was added.
Primary microglia cultures
Whole brains obtained from P0/1 C57BL/6 mice were washed twice with Hank's BSS solution, and vessels and meninges were removed from the brain surfaces under the microscope. Cleaned brains were collected and enzymatically dissociated with Trypsin-EDTA (1×) for 15 minutes at 37°C. An equal amount of ice-cold FCS, together with DNase I (Roche Diagnostics, Mannheim, Germany) at a final concentration of 0.5 mg/ml, was added prior to dissociation with wide- and narrow-bored polished Pasteur pipettes. Cells were then washed, and single cells were centrifuged, collected, and suspended in DMEM-Ham's F12 medium containing 10% fetal bovine serum (FBS) and 1% Penicillin/Streptomycin. Cell suspensions were transferred to poly-D-lysine-coated tissue culture flasks at a density of 2 brains per 75 cm2 flask. Cultures were maintained in a humidified 5% CO2/95% air atmosphere at 37°C. At day in vitro (DIV) 2 and 3, cultures were washed twice with pre-warmed phosphate-buffered saline (PBS) and fresh culture medium was added. After 10 to 14 days in culture, microglia were shaken off the adherent astroglia by shaking at approximately 250 to 300 rpm for 1 hour. Isolated microglia were plated into various dishes or plates and treated with the appropriate factors, according to the different experimental purposes.
RNA isolation and quantitative RT-PCR
RNA was isolated from BV2 and primary microglial cells with the RNeasy kit (Qiagen, Hilden, Germany), according to the manufacturer's instructions. RNA was reverse transcribed to cDNA with the GeneAmp RNA PCR Core Kit (Applied Biosystems, Darmstadt, Germany). Quantitative RT-PCR (qRT-PCR) analysis was performed with the MyiQ™ (BIO-RAD, München, Germany) and the Quantitect SYBR Green PCR Kit (Applied Biosystems) with 1 μl of cDNA template in a 25 μl reaction mixture. Results were analysed with the Bio-Rad iQ5 Optical System Software and the comparative CT method. Data are expressed as 2^−ΔΔCT for the experimental gene of interest normalized to the housekeeping gene (GAPDH) and presented as fold change relative to control. The following primers were used: TGFβ1 for: 5′-TAATGGTGGACCGCAACAACG-3′; TGFβ1 rev
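For readers less familiar with the comparative CT method, the short Python sketch below performs the 2^−ΔΔCT calculation on invented CT values, purely for illustration:

```python
def fold_change(ct_gene_treated, ct_hk_treated, ct_gene_control, ct_hk_control):
    """2^-ddCT: fold change of a target gene, normalized to the
    housekeeping gene and expressed relative to the control sample."""
    d_ct_treated = ct_gene_treated - ct_hk_treated   # normalize to GAPDH
    d_ct_control = ct_gene_control - ct_hk_control
    dd_ct = d_ct_treated - d_ct_control              # relative to control
    return 2 ** (-dd_ct)

# Hypothetical CTs: the target drops ~2 cycles after treatment -> ~4-fold induction.
print(fold_change(ct_gene_treated=24.0, ct_hk_treated=18.0,
                  ct_gene_control=26.0, ct_hk_control=18.0))  # 4.0
```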
Characterisation of TGFβ secretion
Primary microglia were treated with or without IL4 (10 ng/ml) in serum-free DMEM-Ham's F12 medium for 24 hours. Conditioned medium was collected for the mink lung epithelial cell (MLEC) assay and ELISA. The MLEC assay is widely used to measure the amount of TGFβ in conditioned medium. The principle is that MLECs containing a luciferase reporter under the control of a TGFβ-responsive truncated plasminogen activator inhibitor (PAI) promoter generate luciferase in a TGFβ dose-dependent manner. Since MLECs only respond to activated TGFβ, the conditioned medium must first be acidified to convert latent TGFβ into its activated form in order to detect the latent fraction. To evaluate the levels of released TGFβ after IL4 treatment, the MLEC assay was performed as described by Abe et al. [30]. Briefly, MLECs were placed into 96-well plates at a density of 1.5 × 10^4 cells per well and treated for 16 hours with the collected conditioned medium, either with or without acidification with 1 M HCl and pH adjustment with NaOH (to activate latent TGFβ), as well as with standard media containing different concentrations of recombinant TGFβ. Cells were washed with PBS and total proteins were extracted using lysis buffer (Tropix, Applied Biosystems). Luciferase activity was analysed in duplicate using a luminometer (LumatB5076, Berthold, Bad Wildbad, Germany).
Cytokine array
For the analysis of IL4 release, supernatant from untreated and TGFβ1-treated primary microglia was analysed using the Proteome Profiler™ Array Mouse Cytokine Array Panel A (R&D Systems, Wiesbaden-Nordenstedt, Germany) according to the manufacturer's instructions. Briefly, equal amounts of primary microglia were incubated for 24 hours in the presence or absence of TGFβ1 and media were collected. Cytokine array membranes were incubated with cell culture supernatants at 4°C overnight with gentle shaking. Membrane signals were developed using Western Lightning Plus-ECL Enhanced Chemiluminescence Substrate (Perkin-Elmer, Germany) and captured on Amersham Hyperfilm™ ECL (GE Healthcare).
Statistical procedures
The data were expressed as means ± standard error (SE). Statistical significance between multiple groups was assessed by one-way analysis of variance (ANOVA) followed by an appropriate multiple comparison test. Two-group analysis was performed using the Student's t-test. P-values < 0.05 were considered statistically significant. All statistical analyses were performed using GraphPad Prism 4 software (GraphPad Software Inc.).
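As an illustration of the two test types described above, the following Python sketch runs them with scipy on hypothetical fold-change values; note that scipy's f_oneway covers only the omnibus ANOVA, so a post-hoc multiple comparison test would need an additional package:

```python
from scipy import stats

# Hypothetical fold changes from three independent experiments per group.
control = [1.00, 1.05, 0.95]
il4 = [3.10, 2.80, 3.30]
il4_tgfb1 = [6.20, 5.90, 6.50]

f_stat, p_anova = stats.f_oneway(control, il4, il4_tgfb1)  # multi-group test
t_stat, p_ttest = stats.ttest_ind(control, il4)            # two-group test

print(f"ANOVA p={p_anova:.4f}, t-test p={p_ttest:.4f}")    # p < 0.05 = significant
```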
TGFβ1 enhances IL4-induced alternative microglia activation
To investigate the influence of TGFβ1 on IL4-induced microglia alternative activation, primary microglia were treated with IL4 (10 ng/ml), TGFβ1 (1 ng/ml), or a combination of both factors for 24 hours. As a crude readout for microglia activation, the change in microglia morphology was analysed after treatment. Treatment with IL4 or TGFβ1 alone for 24 hours resulted in morphological changes in BV2 cells (data not shown) and primary microglia towards a more ramified phenotype. This morphological change was remarkably increased when the cells were treated with IL4 and TGFβ1 together (Figure 1A). As morphological changes do not always precisely reflect activation states, the assessment of alternative activation still relied on molecular markers such as Arg1 and Ym1. Therefore, the expression of Arg1 and Ym1 was analysed. Immunofluorescence staining demonstrated increased Arg1 staining intensity after IL4 treatment.

Figure 1. TGFβ1 enhances IL4-induced alternative activation of microglia. (A) Primary cultured microglial cells changed their morphology from round-shaped to ramified after treatment with IL4 (10 ng/ml) and TGFβ1 (1 ng/ml) for 24 hours. This cellular morphological change was enhanced when microglia were co-treated with IL4 and TGFβ1. (B) Immunofluorescence staining for Arg1 demonstrated increased staining intensity for Arg1 after treatment with IL4 or TGFβ alone. Combined treatment with IL4 and TGFβ1 strongly enhanced Arg1 immunoreactivity. Scale bars indicate 100 μm. Quantitative RT-PCR showed increased Arg1 (C) and Ym1 (D) mRNA levels in primary microglia after IL4 treatment. Co-treatment with IL4 and TGFβ1 for 24 hours significantly increased Arg1 and Ym1 mRNA levels in primary microglia. (E) Western blotting revealed increased Arg1 and Ym1 protein levels in primary microglia after treatment with IL4. Again, co-treatment with IL4 and TGFβ1 increased the protein levels of Arg1 and Ym1. Representative western blot results from at least three independent experiments are shown. GAPDH was used as control for equal protein loading. (F) Densitometric evaluation of Arg1 and Ym1 band intensities and statistical analysis. Data are given as means ± standard error from three independent experiments: *P < 0.05, **P < 0.01, ***P < 0.001 (one-way analysis of variance).
Combination of IL4 and TGFβ1 further increased the staining intensity (Figure 1B). The upregulation of Arg1 and Ym1 was quantified using quantitative RT-PCR. A significant increase in Arg1 and Ym1 mRNA levels was observed after treatment with IL4 alone. TGFβ1 treatment alone did not result in increased Arg1 and Ym1 mRNA levels (P > 0.05). However, treatment with IL4 and TGFβ1 together resulted in a significant increase in Arg1 and Ym1 mRNA levels (P < 0.001) compared to IL4 treatment alone (Figure 1C, D). As shown in Figure 1E and F, IL4 treatment significantly increased Arg1 and Ym1 protein levels in primary microglia (P < 0.05). TGFβ1 slightly increased Arg1 and Ym1 protein levels in primary microglia, without reaching significance compared to control (P > 0.05). Combination of IL4 and TGFβ1 significantly increased IL4-induced Arg1 and Ym1 protein levels in primary microglia (P < 0.05).
IL4-induced Arg1 and Ym1 upregulation is dependent on TGFβ signalling
To address whether endogenous TGFβ signalling is involved in IL4-induced alternative microglia activation, primary microglia were treated with a combination of IL4 and the TGFβ type I receptor kinase inhibitor IV (TβKI). We found that the expression of Arg1 and Ym1 induced by IL4 was partially impaired by TβKI. As shown in Figure 2, primary microglia were treated either with IL4 (10 ng/ml) alone or with IL4 (10 ng/ml) together with TβKI (2 μM) for 24 hours, and the mRNA and protein levels were analysed.

Figure 2. Arg1 and Ym1 expression induced by IL4 was blocked in the presence of a TGFβ receptor type I inhibitor. Primary microglia were treated with IL4 (10 ng/ml) combined either with or without TGFβ receptor type I kinase inhibitor IV (TβKI, 2 μM) for 24 hours. RNA and proteins were isolated for quantitative RT-PCR and western blotting, respectively. Quantitative RT-PCR shows that IL4 treatment significantly increased Arg1 (A) and Ym1 (B) mRNA levels (P < 0.001), which was partially blocked by co-treatment with TβKI (P < 0.01). Western blotting (C) shows Arg1 and Ym1 protein levels in primary microglia after the different treatments, which were quantified by densitometric analysis and normalized to GAPDH (D). IL4 treatment significantly increased Arg1 and Ym1 protein levels in primary microglia (P < 0.01), which was significantly blocked by TβKI (P < 0.05). Data are presented as mean ± standard error from three independent experiments: *P < 0.05, **P < 0.01, ***P < 0.001 (one-way analysis of variance).
qRT-PCR revealed that the Arg1 and Ym1 mRNA upregulation after IL4 treatment was significantly reduced by co-treatment with TβKI (Figure 2A, B). Western blotting demonstrated that Arg1 and Ym1 protein levels in primary microglia were increased after IL4 treatment and significantly decreased in the presence of TβKI (Figure 2C, D). Similar results were obtained with BV2 cells (data not shown). Together, these data indicate that IL4-induced Arg1 and Ym1 expression is at least partially dependent on endogenous TGFβ signalling in microglia.
IL4-treated microglia increase TGFβ2 expression and secretion
To investigate endogenous TGFβ expression and secretion from microglia after IL4 treatment, primary microglia were treated with or without IL4 (10 ng/ml) for 24 hours. The cells were harvested for mRNA extraction, and qRT-PCRs for the different TGFβ isoforms were performed. Quantitative RT-PCR demonstrated that among all TGFβ isoforms, only TGFβ2 mRNA was significantly upregulated after IL4 treatment (Figure 3A-C). Since the TGFβ receptor inhibitor used above is not specific for TGFβ1, 2, or 3 but also inhibits Activin and Nodal signalling, the mRNA levels of Activin A, Activin B, and Nodal were also analysed by qRT-PCR; they were not changed after IL4 treatment (data not shown). The protein levels of intracellular TGFβ2 were significantly increased (P = 0.028) after treatment with IL4 (Figure 3D). Since endogenous TGFβ2 in primary microglial cells is upregulated after IL4 treatment, we further addressed whether TGFβ secretion from IL4-treated microglia is also increased. Therefore, the conditioned media from IL4-treated (MCM-IL4) as well as non-treated microglial cells (MCM) were harvested after 24 hours and the MLEC assay was performed to monitor TGFβ secretion. Quantification of the TGFβ-induced luciferase activity showed that primary microglia secrete TGFβ under basal conditions, most of it in a latent, inactive state. IL4 treatment significantly increased latent TGFβ secretion (Figure 3E). Since the MLEC assay does not distinguish between TGFβ isoforms, based on the qRT-PCR results we used a direct ELISA for TGFβ2 and demonstrated a significant increase in TGFβ2 secretion after IL4 treatment (Figure 3F).
TGFβ1 increases IL4Rα expression in primary microglia
Based on our observation that TGFβ1 also enhances IL13-induced Arg1 upregulation in BV2 cells and primary microglia (data not shown), and the knowledge that IL4 and IL13 share IL4Rα as a common receptor, which promotes phosphorylation of the transcription factor Stat6 that finally induces Arg1 expression [11,31], we analysed whether IL4Rα is regulated by TGFβ1. Primary microglia were treated with TGFβ1 (1 ng/ml) for different time points, and RNA and proteins were isolated. The qRT-PCR results demonstrate that TGFβ1 treatment significantly increased IL4Rα mRNA levels after 2 and 4 hours, with a peak at 2 hours. From 6 to 24 hours the levels decreased and finally returned to basal levels at 24 hours after TGFβ1 treatment (Figure 4A). Western blotting confirmed the TGFβ1-mediated upregulation of IL4Rα. IL4Rα protein levels increased after treatment with TGFβ1, reaching a maximum from 6 to 12 hours. After treatment for 24 hours, IL4Rα protein levels returned to basal levels (Figure 4B). Immunostaining for IL4Rα after treatment with TGFβ1 for 6, 12 and 24 hours showed a similar pattern: IL4Rα staining intensity was increased 6 and 12 hours after treatment with TGFβ1, and after 24 hours the IL4Rα signal was comparable to the control condition (Figure 4C). We further analysed whether TGFβ1 has an effect on microglial IL4 expression and release. As shown in Figure 4D and E, IL4 mRNA levels were not significantly changed after TGFβ1 treatment for 24 hours. Using a mouse-specific cytokine array, we found that primary microglia release very low levels of IL4 and that treatment with TGFβ1 did not change IL4 release after 24 hours. These data suggest that the enhancement of Arg1 and Ym1 expression by TGFβ in IL4-treated microglia might, at least partially, be mediated by increased IL4Rα expression, thus enhancing microglial sensitivity to IL4 signals.
Mitogen-activated protein kinase mediates TGFβ1-enhanced Arg1 expression in IL4-treated primary microglia
To investigate the pathways involved in TGFβ1-mediated enhancement of IL4-induced Arg1 expression, the TGFβ/Smad and the IL4/Stat6 signalling pathways were analysed by monitoring Smad2/3 nuclear accumulation and phosphorylation of Smad2 and Stat6, respectively. Whereas treatment of primary microglia with TGFβ1 resulted in increased nuclear accumulation of Smad2/3, IL4 treatment failed to induce nuclear accumulation of Smad2/3 (Figure 5A). Immunoblotting against phospho-Smad2 and phospho-Stat6 revealed that TGFβ1 exclusively increased the levels of phosphorylated Smad2 and failed to increase the levels of phosphorylated Stat6. Vice versa, IL4 treatment resulted in increased levels of phospho-Stat6, whereas phosphorylation of Smad2 was not observed after treatment with IL4 for 1 and 2 hours (Figure 5B).
MAPK has been shown to be activated in microglia after TGFβ1 treatment [32]. To analyse the role of MAPK signalling in TGFβ1-mediated enhancement of IL4-induced Arg1 expression, BV2 cells and primary microglia were treated with IL4, TGFβ1 and IL4/TGFβ1 in the absence or presence of the MEK1/2 inhibitor PD98059 for 24 hours. Western blotting results from BV2 cells showed that IL4 treatment alone increased Arg1 protein levels, which was partially inhibited in the presence of PD98059. TGFβ1 and IL4 co-treatment increased IL4-induced Arg1 protein levels, and the MEK1/2 inhibitor PD98059 partially blocked the TGFβ1-mediated increase in Arg1 protein levels (Figure 6A). Using primary microglia, we confirmed the results obtained with BV2 cells. Treatment with IL4 significantly increased the protein levels of Arg1. Interestingly, in the presence of PD98059, IL4 failed to increase the protein levels of Arg1. Combination of IL4 and TGFβ1 dramatically increased the protein levels of Arg1 compared to IL4 treatment alone. However, in the presence of the MEK1/2 inhibitor PD98059, the TGFβ1-enhanced Arg1 upregulation was significantly impaired and the amount of Arg1 was similar to the levels after treatment with IL4 alone (Figure 6B, C). These data demonstrate that TGFβ1-activated MAPK signalling is essential for TGFβ1-mediated enhancement of IL4-induced Arg1 expression in microglia.

Figure 3. Treatment of microglia with IL4 increased TGFβ2 expression and secretion. Primary microglia were treated with or without IL4 (10 ng/ml) for 24 hours. Total mRNA and proteins were isolated from the cells for RT-PCR and western blotting, respectively. Conditioned medium from IL4-treated microglia (MCM-IL4) as well as non-treated microglia (MCM) was collected and the mink lung epithelial cell (MLEC) assay and enzyme-linked immunosorbent assay (ELISA) were performed. Quantitative RT-PCR for TGFβ1 (A), TGFβ2 (B) and TGFβ3 (C) revealed increased TGFβ2 expression after IL4 treatment. Intracellular TGFβ2 protein levels were significantly increased (P < 0.05) in primary microglia after treatment with IL4 (D). The MLEC assay (E) shows that primary microglia secreted a certain amount of inactive TGFβ, which was significantly increased by IL4 treatment (P < 0.01). Direct TGFβ2 ELISA (F) showed that TGFβ2 secretion was significantly increased after IL4 treatment (P < 0.05). All experiments were repeated at least three times. Data are presented as mean ± standard error: *P < 0.05, **P < 0.01, ***P < 0.001 (Student's t-test).
Discussion
In this study we demonstrate for the first time that TGFβ enhances the IL4-induced alternative activation of microglia. Using Arg1 and Ym1 as established markers for alternative activation [3,11], we provide evidence that IL4-mediated upregulation of Arg1 and Ym1 is significantly enhanced in the presence of TGFβ1. Further, IL4 treatment resulted in increased expression and secretion of TGFβ2, whereas TGFβ treatment of microglia increased the expression of the IL4Rα. Moreover, blocking the TGFβ receptor type I resulted in significantly impaired Arg1 and Ym1 upregulation after IL4 treatment. Finally, we demonstrate that TGFβ-mediated enhancement of Arg1 expression in microglia is dependent on the MAP kinase pathway.

Figure 4. TGFβ1 upregulates the IL4Rα. Primary microglia were treated with TGFβ1 (1 ng/ml) for different time points and the cells were either harvested for analysing IL4Rα mRNA and protein levels, or fixed with 4% paraformaldehyde (PFA) for IL4Rα immunostaining. Quantitative RT-PCR showed that treatment with TGFβ1 increased IL4Rα mRNA levels starting after 1 hour and peaking at around 2 hours; afterwards, mRNA levels decreased again and returned to basal levels at 24 hours (A). Western blotting showed IL4Rα protein expression starting to increase after treatment with TGFβ1 for 1 hour and peaking at around 6 to 12 hours after treatment (B). After treatment with TGFβ1 for 6 and 12 hours, IL4Rα immunoreactivity (red) was increased and cell morphology changed towards a ramified phenotype compared to control cells. After 24 hours, IL4Rα immunoreactivity (red) was decreased but the cells still presented a ramified shape with long processes (C). Scale bar represents 20 μm. Treatment of primary microglia with TGFβ1 (n = 5) had no effect on IL4 mRNA levels (D). Analysis of TGFβ1-mediated changes in microglial cytokine release (n = 2) demonstrated no differences in IL4 levels after TGFβ1 treatment (E). Data (A, B, D) are presented as means ± standard error: **P < 0.01, ***P < 0.001 (one-way analysis of variance).
In parallel to the transcriptional regulation of microglia markers, morphological changes are used to discriminate between different activation states in vivo and in vitro. In the resting or inactive state, microglia present a ramified morphology with several processes, while stimulation with classical activation factors such as LPS or IFNγ results in retraction of microglial processes and development of an amoeboid phenotype [33,34]. Although changes in morphology also suggest changes in the functional states of microglia, morphology alone cannot be used to predict a functional outcome. Therefore, we analysed Arg1 and Ym1 as markers for macrophage and microglia alternative activation. Arg1 has been shown to be localised in the cytoplasm of hepatocytes, where it is involved in nitrogen elimination by catalysing arginine hydrolysis to urea and ornithine [11,35]. Unlike the constitutively expressed Arg1 in the liver, Arg1 in macrophages and microglia is induced by exogenous stimuli including the Th2 cytokines IL4 and IL13 [36,37]. Arg1 inhibits NO production by competing with the inducible nitric oxide synthase (iNOS) for the common substrate L-arginine [38]. On the other hand, the production of ornithine can be used to generate polyamines, glutamate, and proline, the latter being a substrate for the formation of extracellular matrix proteins such as collagen [38][39][40]. Interestingly, apart from its involvement in the regulation of wound healing and fibrosis [41,42], Arg1 can directly support neuron survival [43]. Next to Arg1, Ym1 is another established marker for microglia alternative activation [2,44]. Ym1 is a heparin/heparan sulphate-binding lectin that is transiently expressed during inflammation [44], and although the precise functions of Ym1 remain elusive, recent reports have suggested an involvement in tissue remodelling and regulation of inflammation [45,46].
TGFβ has been shown to either upregulate Arg1 expression or increase Arg1 enzymatic activity in a cell type-dependent manner. Whereas TGFβ treatment results in increased expression of Arg1 in fibroblasts and epithelial cells [47][48][49], TGFβ strongly increases the enzyme activity in macrophages [50,51]. In this study we demonstrate that TGFβ1 alone is not able to significantly increase expression of Arg1 and Ym1 in microglia. However, in the presence of IL4, TGFβ1 significantly enhanced IL4-induced Arg1 and Ym1 expression, which indicates a potential role of TGFβ in CNS tissue repair and neurorestoration by modulating alternative activation of microglia.

Figure 5. Direct interactions were not observed between the TGFβ/Smad and the IL4/Stat6 signalling pathways. Primary microglial cells were treated with IL4 (10 ng/ml) or TGFβ1 (1 ng/ml), alone or together, for 1 and 2 hours. The cells were either fixed with 4% paraformaldehyde (PFA) for pSmad2/3 immunostaining or harvested for testing pStat6 and pSmad2 levels. Treatment with TGFβ1 alone or with TGFβ1 in combination with IL4 induced pSmad2/3 nuclear translocation at both 1 and 2 hours, while IL4 treatment alone was not able to induce pSmad2/3 nuclear translocation (A). Western blotting showed that IL4 treatment alone exclusively induced Stat6 phosphorylation but not Smad2 phosphorylation after 1 and 2 hours, while TGFβ1 treatment alone, or in combination with IL4, induced only Smad2 phosphorylation but not Stat6 phosphorylation (B).
To understand the mechanisms behind this phenomenon, we addressed the question of whether TGFβ interferes with the IL4 signalling pathway. We demonstrated that TGFβ1 treatment alone up-regulated the common receptor for IL4 and IL13, IL4Rα, at both the mRNA and protein levels in a time-dependent manner.
In contrast to the effect of TGFβ1 on IL4Rα expression, IL4 treatment alone reduced IL4Rα expression with treatment time (data not shown). Further, we observed that TGFβ1 was able to enhance the expression of Arg1 induced by IL13 (data not shown) and IL4. These data indicate that the synergistic effect of TGFβ1 and IL4 on the expression of Arg1 might partially be mediated by enhanced IL4Rα expression after TGFβ1 treatment, thereby increasing the sensitivity of microglia to IL4. IL4 signalling is further propagated by phosphorylation of the transcription factor Stat6 [11,31]. Analysis of Stat6 phosphorylation revealed that TGFβ1 failed to induce Stat6 phosphorylation. Moreover, IL4 was not able to induce Smad2 phosphorylation in microglia, indicating that the synergistic effect of TGFβ1 and IL4 on the expression of Arg1 cannot be explained by a direct interaction of the TGFβ/Smad and IL4/Stat6 signalling pathways. Therefore, we further investigated whether a TGFβ-induced Smad-independent pathway, the MAPK pathway, is involved in this synergistic effect. By performing pharmacological blockade of the TGFβ-induced MAPK pathway using the MEK1/2 inhibitor PD98059, we could show that Arg1 protein expression, induced not only by TGFβ1 and IL4 co-treatment but also by IL4 treatment alone, was significantly inhibited in the presence of PD98059. These data demonstrate that TGFβ signalling is involved in IL4-induced microglia alternative activation and that the TGFβ-mediated MAPK pathway plays an essential role in the enhancement of IL4-induced microglia alternative activation by TGFβ signalling.
We observed that IL4 treatment of microglia led to up-regulation of TGFβ2, whereas the mRNA levels of TGFβ1 and TGFβ3 were not changed after IL4 treatment. TGFβ2 levels were significantly increased in the supernatants of IL4-treated microglia. Although most of the secreted TGFβ2 was in a latent and inactive form, a small proportion of bioactive TGFβ2 seems to be sufficient to support IL4-induced up-regulation of Arg1 and Ym1. Moreover, microglia express several factors and enzymes that are capable of activating latent TGFβs, such as integrins, plasminogen, MMP2 and thrombospondin-1 (unpublished data). Microglia also express extracellular matrix components in vitro that might bind TGFβ. This amount of bound, and probably activated, TGFβ will escape all analyses of the microglial supernatant, but is likely to activate TGFβ signalling in these cells.
It is widely accepted that TGFβ is involved in the down-regulation of microglia classical activation. TGFβ1 reduces reactive oxygen species (ROS) induced by LPS and suppresses the IFNγ-induced expression of MHC II and the production of the cytokines IL1, IL6, and TNFα in activated microglia [52,53]. TGFβ also prevents IL1β-induced microglial activation [54]. Although the anti-inflammatory role of TGFβ has been widely accepted, it is still quite ambiguous whether this effect is beneficial or detrimental in different CNS diseases. Whereas TGFβ1 has protective and beneficial functions in cerebral ischaemia [55], it promotes the deposition of amyloid-beta plaques in models of Alzheimer's disease [56]. Interestingly, Town and colleagues have demonstrated that blocking of TGFβ/Smad signalling almost completely abrogated plaque formation in transgenic mice overexpressing mutant human amyloid precursor protein [57]. These results underline the importance of a tight temporal and spatial regulation of innate immune responses and further demonstrate the necessity to enhance our knowledge of the pathological conditions under which TGFβ-mediated regulation of inflammation is beneficial or detrimental.

Figure 6. TGFβ1-mediated enhancement of IL4-induced Arg1 expression is dependent on the mitogen-activated protein (MAP) kinase pathway. BV2 cells and primary microglia were treated with IL4 (10 ng/ml), TGFβ1 (1 ng/ml) and IL4/TGFβ1 in the presence or absence of the MAP kinase inhibitor PD98059 (10 μM) for 24 hours. Total proteins were isolated and used for electrophoresis and western blotting. Arg1 protein levels were analysed by densitometric evaluation and normalised to GAPDH. The expression levels of Arg1 after co-treatment with IL4 and TGFβ1 were reduced in the presence of PD98059 in both BV2 cells (A) and primary microglia (B). Quantification of Arg1 expression levels in primary microglia after different treatments (C). Data are presented as means ± standard error from three independent experiments: **P < 0.01, ***P < 0.001 (two-way analysis of variance).
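For readers who wish to reproduce this kind of quantification, a minimal sketch is given below: band intensities are normalised to the GAPDH loading control and compared by a two-way ANOVA (treatment × inhibitor). All column names and intensity values are hypothetical placeholders; the script illustrates the general analysis, not the exact software used in the study.

```python
# Hypothetical sketch: GAPDH-normalised densitometry followed by a
# two-way ANOVA (treatment x inhibitor), mirroring the analysis in Fig. 6C.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Example band intensities (arbitrary units); real values would come
# from densitometric evaluation of the western blots.
df = pd.DataFrame({
    "treatment": ["IL4"] * 6 + ["IL4_TGFb1"] * 6,
    "inhibitor": (["none"] * 3 + ["PD98059"] * 3) * 2,
    "arg1":  [1.10, 1.05, 1.20, 0.40, 0.45, 0.38,
              2.10, 2.30, 2.00, 0.60, 0.55, 0.65],
    "gapdh": [1.00, 0.98, 1.05, 1.02, 0.99, 1.01,
              1.03, 1.00, 0.97, 1.00, 1.04, 0.98],
})
df["arg1_norm"] = df["arg1"] / df["gapdh"]  # loading-control normalisation

# Two-way ANOVA with interaction (treatment x inhibitor)
model = ols("arg1_norm ~ C(treatment) * C(inhibitor)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```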
Whereas TGFβ induces acquired deactivation, macrophages in the acquired deactivation state also produce TGFβ in an autocrine manner [3,24,25]. Next to down-regulating the classical activation of microglia, we show here, for the first time, that TGFβ also enhances IL4-induced microglia alternative activation in vitro, which broadens the knowledge of interactions among different microglia activation states. Similar functions have been shown for another immunoregulatory cytokine, IL10. For example, IL10 is able to impair IFNγ-induced macrophage classical activation [58], increase arginase activities [59], and further enhance IL4-induced Arg1 expression, probably by increasing IL4Rα expression [60]. The findings of this work and previous studies suggest an interaction and dynamic change between different microglia activation states. TGFβ might serve as a gatekeeper that inhibits classical activation and promotes alternative activation of microglia. The data presented throughout this study confirm the role of TGFβ as an anti-inflammatory molecule and broaden its functions as an enhancer of microglia alternative activation, thereby regulating microglia-mediated neuroregeneration and neurorestoration in inflammatory CNS diseases.
Conclusions
Here we show, for the first time, that TGFβ1 synergises with IL4 in the induction of microglia alternative activation. We demonstrate that IL4 treatment increased the expression and secretion of TGFβ2 in primary microglia and that IL4-induced up-regulation of Arg1 and Ym1 is dependent on active TGFβ signalling. Finally, we provide evidence that MAPK signalling is involved in TGFβ-mediated enhancement of IL4-induced microglia alternative activation. Figure 7 shows a proposed model for the role of TGFβ in microglia alternative activation. Our findings provide novel insights into the molecular mechanisms of IL4-induced microglia alternative activation, and further enhance our knowledge of TGFβ-mediated modulation of microglial functions.

Figure 7. Proposed model for the role of TGFβ in IL4-induced alternative microglia activation. IL4 induces expression of the alternative activation markers Ym1 and Arg1 via the IL4Rα-Stat6 pathway. TGFβ binds to its receptors TGFβ type I and type II, which form a heteromeric complex and initiate Smad-dependent and Smad-independent pathways. Exogenous TGFβ1 enhances IL4-induced Ym1 and Arg1 expression either by a direct effect on Ym1/Arg1 promoter activity or indirectly by up-regulating IL4Rα through activation of the MAP kinase (Smad-independent) pathway. Furthermore, IL4 treatment alone increased endogenous TGFβ2 expression and secretion. Autocrine TGFβ2 in turn might be able to enhance IL4-induced Arg1 expression by using similar signalling mechanisms to exogenous TGFβ1.
Methods of unproved or uncertain effectiveness used by patients with Atopic Dermatitis
Introduction and Objective. Atopic dermatitis (AD) is a common, chronic, recurrent dermatosis. It frequently decreases the quality of life and leads to frustration of both patients and their families. Patients with AD seek a variety of therapeutic options, including non-conventional methods. The aim of the study was to determine which practices of unproved or uncertain effectiveness are most frequently used by AD patients in Poland. Materials and methods. 113 survey participants (99 parents of children diagnosed with AD and 14 adults with AD) in Poland were enrolled; they responded to an online survey created using Google Forms and distributed to online support groups for patients diagnosed with AD and parents of such patients. Respondents were given a list of methods of unproved or uncertain effectiveness for treating AD, and were asked to choose the methods that they had employed at least once in their lives to manage their or their children's AD. Results. At least one method described in the study had been tried by 76.1% of respondents. Black seed oil was the most popular pure oil, with up to 36.3% of respondents having tried it, making it as popular as cannabinoid-containing ointments and creams. The use of propolis was reported by 24.8% of respondents. Homoeopathy had been tried at least once by 23.9% of patients or parents of patients, while 18.6% attempted bioresonance. Conclusions. This study reveals that AD patients engage in a wide range of practices that contradict current knowledge and recommendations. Dissemination of reliable sources of information and insightful conversations about these methods in doctors' offices seem important.
INTRODUCTION
Atopic dermatitis (AD) is a common, inflammatory, chronic dermatosis characterized by persistent itching of the skin [1]. It affects up to 20% of children and 3-7% of adults worldwide [1,2]. The prevalence and severity of symptoms vary by age and population [3,4]. For instance, the lifetime incidence of AD among adolescents aged 13-14 years varies from 0.2% in China to 24.6% in Colombia [4]. In the United States, 13% of children have AD [5]. Presumably due to socio-economic conditions, the average prevalence of AD in western and northern European nations is higher than in eastern countries [3]. Skin barrier disruption, environmental and genetic factors, skin microbiota dysbiosis, and an altered immune response are the basis of AD pathogenesis [1]. The treatment regimen necessitates modifications to daily activities [1]; this, combined with the relapsing nature of the disease and the lack of sleep caused by pruritus, can be a cause of frustration for both patients and their families, thereby reducing their quality of life [1].
Emollients should be applied at least two to three times per day by all patients, regardless of the disease's severity [6]. Numerous topical and systemic therapy options are available and recommended for AD treatment depending on the disease's severity. However, improper adherence to their prescribed usage reduces the efficacy of the treatment [7]. In addition, patients frequently fear their side-effects, steroid phobia being the most prominent example, and they may view natural therapies as being safer [8]. Consequently, some patients look for various therapeutic options, including non-conventional ones [9]. The aim of this study was to determine the prevalence of these strategies among patients and parents of patients with AD who visit online support groups.
OBJECTIVES
The aim of the study was to establish how widespread the problem of alternative therapies is among AD patients in Poland, and to determine which methods of unproved or uncertain effectiveness in AD management are most frequently utilized by patients.
MATERIALS AND METHOD
A total of 113 respondents from Poland (99 parents of 123 children and 14 adults diagnosed with AD) completed an online questionnaire created via Google Forms and shared among patients and parents of patients in online support groups. The eligibility criteria included the presence of at least one child with a doctor-diagnosed case of AD or being diagnosed with AD oneself. Patients and parents of patients were provided with a list of methods of unproved or uncertain effectiveness in AD treatment, and were asked to select those that they had used at least once in their lives to manage their or their children's AD. Publications and online patient support groups were used to select the methods, which included: the topical use of pure oils; creams or ointments containing cannabinoid receptor agonists; bee products (propolis, beeswax, honey); aromatherapy; acupuncture; acupressure; massages; Chinese herbal medicine; fish oil and vitamin E supplementation; herbal dietary supplements; autologous blood injections; bioresonance therapy; and homoeopathy.
The study was approved by the Independent Bioethics Committee for Scientific Research at the Medical University of Gdańsk (Approval No. NKBBN/1/2022).
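The prevalence figures reported in the Results are simple proportions of respondents selecting each option. A minimal sketch of this tally, with hypothetical respondents and method names, is given below; it is illustrative only and not the authors' actual analysis script.

```python
# Hypothetical sketch: computing the prevalence of each method from
# multi-select survey answers, as percentages of all respondents.
import pandas as pd

# Each row is one respondent; 'methods' holds the options they ticked.
responses = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "methods": [
        ["black seed oil", "propolis"],
        ["homoeopathy"],
        [],                                  # used none of the listed methods
        ["black seed oil", "bioresonance"],
    ],
})

n = len(responses)
counts = responses["methods"].explode().value_counts()
prevalence = (counts / n * 100).round(1)     # % of respondents per method
print(prevalence)

# Share of respondents who tried at least one method
at_least_one = (responses["methods"].str.len() > 0).mean() * 100
print(f"At least one method: {at_least_one:.1f}%")
```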
RESULTS
At least once in their lives, 76.1% of respondents had attempted at least one method described in this study to manage AD (Fig. 1). At least one pure oil applied topically to the patients' or children's skin had been tried by 61.9% of respondents (Fig. 2). The most prevalent was black seed oil (Nigella seed oil), with 36.3% of respondents having tried it. Following this, 30.1%, 22.1%, and 21.2% of respondents, respectively, had attempted using coconut oil, hemp seed oil, and evening primrose oil. Olive oil had been tried by 11.5% of those polled, followed by linseed oil (4.4%) and almond oil (4.4%). In 2.7% of cases, respondents had used sunflower seed oil on their skin or the skin of their children.
Along with the topical use of black seed oil, 36.3% of respondents had used a cannabinoid receptor agonist in the form of a cream or ointment on their skin or the skin of their children at least once, making these the most popular practices of unproved or uncertain effectiveness. Propolis was the most frequently applied bee product, with up to 24.8% of patients or parents of patients having confirmed its use to treat AD. Beeswax had been used by 8% of patients or parents of patients, whereas 8.8% had used honey at least once in their lives. Massages and acupuncture had each been implemented by 7.1% of respondents to manage AD, while 1.8% had used acupressure and 4.4% had utilized aromatherapy. Chinese herbal medicine had been used by only 0.9% of respondents. Vitamin E supplementation had been used by 21.2% of those who filled out the questionnaire, and fish oil by 31.6%. Herbal dietary supplements had been used by 12.6% of the sample population, whereas autologous blood injections had not been used by a single respondent. Homoeopathy had been used at least once by 23.9% of those who filled out the questionnaire, whereas bioresonance therapy had been used by 18.6% of them.
DISCUSSION
The use of methods of unproved or uncertain effectiveness by adults and children is on the rise, and this is especially true for allergic illnesses [10]. Only a few studies have examined the national and worldwide prevalence of these treatments specifically among patients with AD [11][12][13][14][15]. In both Germany and Norway, 51% of adult patients have used methods of unproved or uncertain effectiveness to manage AD; in Turkey, 68.7% have done so, whereas in the United States, 43.5% have reported using at least one of these methods [11,15,16]. In this study, 76.1% of respondents tried at least one of the described methods at least once in their lives to manage AD. The prevalence of each method of unproved or uncertain effectiveness varies by country [11][12][13][14][15]. In a study conducted in Leicester, for instance, 20% of paediatric patients with AD used Chinese herbal medicine, whereas only 0.9% of Polish patients did so according to the current study [13]. Homoeopathy has a slightly larger presence in Germany, where 35.8% of patients with allergies used it, whereas in Poland just 23.9% of patients with AD did so [11]. In contrast, autologous blood injections were the second most common treatment method in Germany, with 28.1% of patients using them, but not a single patient in the current study did so [11]. Only 31.6% of patients in Denmark reported using methods of unproved or uncertain effectiveness at least once, which is considered a relatively low prevalence [11,14,15]. Acupuncture was the most commonly used method in Denmark, with 15.7% of adult patients using it, compared to 7.1% in the current study [12]. Oils were used by only 2.1% of Danish adult patients and 41.9% of Turkish patients, whereas in the current study 61.9% of patients applied pure oil to their skin at least once, making it the most commonly used method [12,16].
Depending on the geographical location, there can be some specific treatments not commonly used in other countries, as in Malaysia, where Malay herbs, massages, and cupping are popular among AD patients [14]. To our knowledge, this is the first study to examine the prevalence of methods of unproved or uncertain effectiveness among the Polish population diagnosed with AD.
Even though there is some evidence of efficacy for a number of the methods described in this study, there is an absence of larger trials and a lack of comparison with standard AD treatment, while some methods are simply ineffective or even dangerous [10,[17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32]. Patients frequently and mistakenly believe that natural methods are safer, which is not the case [33]. It is important to highlight that when treatment failure occurs, the first and most important things to examine are the patient's adherence to the prescribed medications and whether or not they are being used properly [7]. Further causes can be examined later [7]. The utilization of alternative treatments and misinformation among patients unquestionably contribute to poor compliance with prescribed treatments [7].
Topical use of pure oil has been implemented by 61.9% of patients. However, pure oil products are not recommended for topical use because they increase transepidermal water loss (TEWL), which causes skin dryness [17]. Additionally, it is essential to consider the risk of allergic contact dermatitis (ACD) [19,25]. Emollient therapy remains the foundation of the AD regimen and should be chosen as the first-line treatment for moisturizing the skin [17]. In certain instances, it is possible to use unsaturated fatty acids as an ingredient in emollients [17]. Itching is one of the most important clinical symptoms of AD, with a significant impact on the emotional dimensions of perception [17]. Studies have provided insight into the potential mechanisms of cannabinoid modulation of pruritus, with neuronal modulation of peripheral itch fibers and centrally-acting cannabinoid receptors providing the most evidence [34]. Topical application of cannabinoid receptor agonists has been documented to have antipruritic and analgesic effects, additionally alleviating AD skin symptoms in some trials [20,35]. Despite the fact that preliminary studies demonstrated the efficacy of cannabinoids in AD treatment, they should not be recommended to patients due to the absence of larger-scale studies [35]. Nevertheless, up to 36.3% of patients declared using them at least once.
Although propolis may have anti-allergic qualities, it is essential to realize that its composition might vary based on factors such as bee species and geographic area [23]. In a separate study, a mixture of honey, beeswax, and olive oil reduced the use of topical corticosteroids (TCS) by 80% in AD patients [30]. Even though honey showed some promising preliminary results, more studies are needed, especially with a more practical form of honey for topical use on the skin [24]. Propolis was nevertheless the most frequently used bee product among AD patients, with 24.8% of patients or parents of patients reporting at least one use.
Acupuncture and acupressure have also been studied, mainly for the management of allergen-induced itch [21,22]. Acupuncture had been used by 7.1% of those polled, whereas 1.8% utilized acupressure to manage their or their children's AD. There is an absence of evidence to support the use of acupuncture or acupressure in the treatment of AD due to the lack of rigorous methodology in the trials, combined with too-small study groups [27]. The same applies to massages and aromatherapy, which also showed promising preliminary results, although larger trials are lacking [21,22]. Massages and aromatherapy had been used by 7.1% and 4.4% of respondents, respectively. Of concern is the fact that numerous acupuncture complications, including fatal ones, have been described in the literature, such as infections caused by poor sterilization, pneumothorax, and cardiac tamponade [10].
According to meta-analyses comparing 28 clinical trials, Chinese herbal medicine administered orally or applied topically to the skin has not been shown to reduce the severity of eczema in children or adults [31]. The usage of Chinese herbal medicine can lead to various adverse events, including gastrointestinal events, which are the most common, as well as other more severe ones [32]. Chinese herbal medicine, however, seems to be sparsely used, as only one respondent admitted using it.
Eleven trials were included in a systematic review of the effects of dietary supplements on the treatment of AD, and the results did not support the use of fish oil and vitamin E as treatments for the disease [36]. However, both methods appear to be popular, as 31.6% and 21.2% of respondents use them, respectively.
Bioresonance is based on the premise that a person develops a disease when the electric fields or electromagnetic frequencies in the body are out of balance and that this imbalance can be corrected by introducing exterior electric energy [29]. There is only one study evaluating the efficacy and safety of bioresonance therapy, conducted on a group of paediatric patients hospitalized for an extended period owing to AD; hence, no firm conclusions can be drawn due to the absence of additional trials [22,28]. Surprisingly, 18.6% of patients reported using it at least once. Homeopathy selects small quantities of various substances by matching a patient's symptoms with the symptoms caused by these substances in healthy people, with the notion that they stimulate autoregulation and self-healing processes [37]. There is an absence of evidence to support the use of homeopathy in the treatment of AD [17]. There have been isolated incidences of contact allergy sensitization, and although systemic toxicity is unlikely, it cannot be ruled out [10].
Homeopathy has been used to manage AD by 23.9% of those polled.
CONCLUSION
In conclusion, the use of the described practices delays the introduction of adequate treatment based on evidence-based medicine (EBM). It is recommended that physicians inquire about their patients' use of unproved or uncertain methods, as they may interact with the standard therapy or cause the patient to discontinue the prescribed treatment in favour of more 'natural' methods. In addition, physicians should have a basic understanding of the most commonly used treatments of unproved or uncertain effectiveness and be able to explain why they are not the best options for AD therapy. Due to the relatively high prevalence of alternative methods for AD management in Poland, special attention and systemic changes are required to improve education and promote treatment adherence among patients.
Figure 1. Methods of unproved or uncertain effectiveness used by respondents
Figure 2. Pure oils most commonly used by respondents topically
Field Problems of Distance Relays with Combined Single/Double Circuit Transmission Lines
Parallel transmission lines are characterized by a significant increase in the mutual coupling effects among the different phases of the coupled line segments, raising remarkable errors for impedance-based protection equipment in particular. Owing to these situations, the total line impedance of each phase of the associated line may significantly change as a result of the impedances mutually reflected from the other phases. Thus, double-circuit line segments, when combined with single-circuit ones, can cause the measured fault impedance to deviate significantly from the parameters adjusted during the relay setting stage. This may, accordingly, cause remarkable errors for distance protection algorithms in the field, leading to unnecessary tripping or delayed fault clearing. In this study, the impact of the combination of single and double overhead line segments on the performance of the distance relaying function is thoroughly investigated. This investigation is carried out in the Electro-Magnetic Transient Program (EMTP) using distributed-parameter line modeling. The accuracy of the developed simulation is validated against a recorded fault case for a 400 kV combined single/double line from the field, which corroborated the correctness and accuracy of the constructed model. The relay-enabled functions and settings are collected and realized in the MATLAB environment. This research is essential for visualizing the core of the problems of these transmission networks with the distance function and can consequently help to realize practical and reliable distance relaying for such lines.
INTRODUCTION
Double circuit transmission lines are always characterized by a remarkable increase in their mutual coupling effects. This results in different problems, particularly for protection equipment. For distance relaying in particular, these effects remarkably increase the estimation errors in the computed distance. Actually, when the parallel lines have similar parameters and configuration, the effects of mutual coupling are effectively cancelled, as both lines will usually share the zero sequence currents due to remote ground faults. However, for a fault on the line beyond the remote terminal end of a parallel line circuit, the distance relay will still under-reach for its zone settings. On the other hand, this balance of parameters and configuration is not the situation in most cases, where tower mismatches as well as environmental causes may increase the mutual impacts remarkably. Moreover, hybrid transmission systems combining different line configurations have severe impacts on distance relaying performance. These hybrid lines can combine different overhead-cable line segments or single-double circuit lines to be covered by the same distance relay. Such situations are typically faced under certain geographical or environmental circumstances, including integrating different adjacent networks or passing through under-water cable segments. A typical example of this line category is the 500 kV hybrid line in Southern California (USA), comprising approximately the following: 33 miles of single-circuit overhead line sharing a common tower structure with a 230 kV line, followed by 28 miles of split-phase double-circuit overhead line on a double-circuit tower sharing the common tower structure, and 4 miles of split-phase underground double-circuit cables; it is then connected to 8 miles of single-circuit 500 kV overhead line. This complex configuration is subjected to a variety of technical challenges regarding its protection system, as described by Bucco et al. (2017). Another example is the 400 kV inter-tie Egypt-Jordan integration with both single and double circuit segments (Zahran et al., 2017). The mutual coupling is effectively measurable in all multiphase systems, in which the total line impedance significantly changes as a result of the impedances mutually reflected from the other phases. Thus, the actual line parameters significantly deviate from those adjusted for relay setting (Stenzel, 2002, 2003). Moreover, the problem of mutual coupling is not constant and strongly depends on different interacting factors such as conductor spacing and voltage levels. On the other hand, the commonly utilized mathematical cores of almost all distance relays in the field deal efficiently with uniform lines, depending on a simple RL model for measuring the impedance between the relay and the fault point. Thus, remarkable errors are expected with such relays when employed in these complex situations. Accordingly, the majority of publications in the literature are for ordinary lines considering a uniform line configuration. However, more sophisticated and accurate line modeling and relaying mathematical cores are required for investigating such cases (Kasztenny et al., 2004).
As known, the distance relay operates when the measured impedance enters the adopted operating characteristic. The non-homogeneity of the overall line sections may result in inconsistent ratios of reactance to resistance and of zero sequence to positive sequence impedance for each line segment. Thus, combining double circuit lines with a single circuit segment, or combining an overhead line with cable segments, can influence the performance of the distance functions, making the fault appear more distant than its actual point. These effects may consequently lead to unnecessary tripping of the local relay or to undesirable acceleration of the remote one (Zipp et al., 1997). Unfortunately, the protective functions utilized with such hybrid lines are typically similar to those used for uniform lines. Hence, certain recommendations should be issued during the setting stage of such protection functions in order to realize proper setting profiles.
These recommendations may differ from one situation to another depending on its own parameters and circumstances. This complicates the process of their setting assessment and may sometimes need more trials. This study presents a visualization of this particular situation using both simulation and recorded fault cases from the field. The selected line is accurately modeled with a detailed distributed line model in the EMTP to realize a close representation of this line. MATLAB is also utilized to carry out the required protection analysis. The second section describes the selected line configuration and its accompanying protection system, whereas its modeling details are described in the third section. The next section describes the recorded test case from the field. Finally, simulation tests are investigated and analyzed.
MATERIALS AND METHODS
Selected simulation system Main 400 kV system: A real 400 kV, 750 MW transmission system connecting two different networks is considered as a simulation example, as described in Fig. 1. It comprises a combination of a 33.6 km single circuit line and a 10 km overhead double circuit line. The single circuit line includes a 20 km overhead line and 13.6 km of underground cable.
Modeling of the accompanying distance relays:
The distance relays adopted for the 400 kV hybrid transmission line are given in Fig. 2 and 3. The relays at both line ends have four zones, using mho and quadrilateral characteristics for phase and ground faults, respectively. The related settings of the adjusted distance characteristics are described in Table 1. The relays at both line ends communicate with each other through an Overreach Blocking (OB) scheme with independent zone-1, as illustrated in Fig. 4. Simulation development: The EMTP is a widely used package for simulating electromagnetic transients in power system studies. Various models are available in the EMTP for each element in order to fulfill all application requirements and constraints. For our application in particular, the representation of each element of the system of Fig. 1 is described as follows (EPRI and EMTP-DCG, 1999).
Overhead line modeling: Different types of transmission line models are available in the EMTP, ranging from simple lumped-parameter models to distributed-parameter modeling. Different options are also available to account for the mutual coupling among adjacent conductors. Among these models, the Frequency Dependent (FD-JMARTI) line model is the most accurate one for simulation purposes, as it represents the true nature of a transmission line by modeling the frequency-dependent and distributed line parameters. All single-circuit line segments of Fig. 1 and the double-circuit ones are modeled with the FD-JMARTI model using the Line Constants auxiliary routine of the EMTP, based on the actual tower configuration and conductor types (EPRI and EMTP-DCG, 1999). It is worth noting that the first 5 km of the overhead line segments use the AAAC conductor type (400 mm²), whereas the remaining lengths use the ACSR conductor type (490/65 mm²).
Transformer modeling: The Power Frequency Transformer Module (PFTM) in the EMTP is responsible for simulating the fundamental transformer model. This is available for normal and faulty transformer operation as long as the capacitances between windings and tank, windings and core, and between winding layers can be ignored. Thus, the validity of this model is limited to frequency ranges from the power frequency up to 10 kHz, depending on the transformer type (EPRI and EMTP-DCG, 1999). It is, therefore, accurate enough for representing short-circuit studies and relaying applications.
Underground cable modeling: Three single-core 420 kV cables, with a copper cross-section of 1000 mm², a conductor inside radius of 8.65 mm and a conductor outside radius of 20.6 mm, were used for the 13.6 km single-circuit cable segment. The EMTP provides different alternatives for cable modeling, from a simple lumped model to sophisticated ones taking the parameter distribution and the frequency dependency of the parameters into account. Since accuracy arises as an important issue, only the distributed-parameter cable representation is considered, using the FDLMARTI model. It considers the distribution of all cable parameters as well as the frequency dependence of the transformation matrix elements, and it arises as the most accurate and efficient cable model in the EMTP (Tavares et al., 1999; Marti, 1988, 1993). Therefore, it is employed for developing the related parameters of the submarine cable section. Figure 3 shows the adopted settings of the 4-zone distance relays at both line ends with mho and quadrilateral characteristics, respectively, with the parameters shown in Table 1. Each relay at both line ends communicates with the other through Overreach Blocking (OB) with independent zone-1. In spite of the superior performance of the EMTP for modeling power system elements as well as some control structures, its ability to model large-scale control systems or sophisticated protection schemes is relatively limited. This is mainly attributed to the poor mathematical manipulation offered, especially for modern digital protection and control schemes involving advanced mathematics. MATLAB is, however, more distinctive with its ultimate mathematical and logic abilities. Thus, it provides a candidate for developing accurate representations of modern digital protection schemes. Therefore, collaboration between both simulation tools is adopted for studying and analyzing the performance of these systems.
Recorded fault case:
Typically, fault cases are recorded by Hathaway Digital Fault Recorders (DFRs) running at both the sending end and receiving end of the selected test system. These test cases facilitate validating the constructed models as well as analyzing the performance of the corresponding protective relays for various faulty abnormal conditions. A line-to-ground fault occurred beyond the receiving end transformer during a 150 MW transferred load from the sending end to the receiving end. The recorded case is shown in Fig. 5, where the three-phase voltages are plotted on channels 9-11, while the three-phase currents are connected to channels 13-15. The fault was classified successfully in the reverse zone (zone-4), as the fault occurred just behind the distance relay at the receiving end substation. Then, tripping of the local breaker at the receiving end substation was inhibited. Also, a blocking signal was sent to the relay at the sending end substation.
Simulated testing
Verifying the recorded test case: There is no doubt that comparing the performance of the developed model with recorded field data, if available, is a trustworthy method for fulfilling this target. Nowadays, modern multi-function Digital Fault Recorders (DFRs) are usually installed in substations and power plants. The DFRs enable data logging of vital events and fault oscillography records. The value of these real field cases is obvious for the validation of system models and for addressing problems that may exist. Fortunately, different fault cases relevant to the addressed problem have been recently recorded; among these, the case described in the preceding section was used. Figure 6 shows the EMTP-simulated response of the sending end distance relay for the recorded line-to-ground fault occurring beyond the receiving end transformer with a 150 MW transferred load. As illustrated in the aforementioned figure, the estimated fault impedance to the fault point dropped below the zone-1 setting. Note that the impedances are calculated using the voltage and current phasors extracted with an orthogonal filter. This orthogonal filter has been recently introduced to the area of numerical relaying; the OPDFT combines accuracy, low computation time and stable output for both fixed and floating point CPUs (Darwish and Fikri, 2007). This is because an accurate signal processing tool is essential for realizing a precise relaying performance.
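To make the impedance-measurement step concrete, the sketch below estimates fundamental-frequency phasors with a plain full-cycle DFT (a generic stand-in, not the OPDFT of Darwish and Fikri) and forms the apparent impedance together with a simple self-polarized mho zone check. The sampling rate, waveform values and zone-1 reach are hypothetical.

```python
# Illustrative sketch (not the OPDFT of Darwish and Fikri): full-cycle DFT
# phasor estimation and apparent impedance for a distance relay. The signal
# parameters and the zone-1 reach below are hypothetical.
import numpy as np

F_NOM = 50.0          # system frequency, Hz
FS = 1000.0           # sampling rate, Hz -> N = 20 samples per cycle
N = int(FS / F_NOM)

def fundamental_phasor(samples: np.ndarray) -> complex:
    """Full-cycle DFT estimate of the fundamental phasor (peak value)."""
    n = np.arange(N)
    return (2.0 / N) * np.sum(samples[-N:] * np.exp(-2j * np.pi * n / N))

# Synthetic steady-state waveforms for one faulted phase (arbitrary values)
t = np.arange(2 * N) / FS
v = 100e3 * np.cos(2 * np.pi * F_NOM * t + 0.1)   # volts
i = 2e3 * np.cos(2 * np.pi * F_NOM * t - 0.9)     # amps

Z = fundamental_phasor(v) / fundamental_phasor(i)  # apparent impedance
print(f"Z = {Z.real:.2f} + j{Z.imag:.2f} ohm")

# Self-polarized mho check: trip if Z lies inside the circle whose
# diameter runs from the origin to the reach impedance Z_reach.
Z_reach = 30 * np.exp(1j * np.deg2rad(75))         # hypothetical zone-1 reach
inside_zone1 = abs(Z - Z_reach / 2) <= abs(Z_reach) / 2
print("Inside zone 1:", inside_zone1)
```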
Simulated transient behavior of the distance relay:
Testing certain relaying functions, such as the distance relaying at selected fault points, may emphasize the model accuracy. This can be carried out by comparing the distances computed by the relays with the actual fault distances. Lower errors of the estimated fault impedance, as compared with the actual ones, reveal the accuracy grade of the developed model. Figures 7 and 8 show the estimated fault distances for line-to-ground and line-to-line faults occurring at 20 km from the sending end, respectively. The results corroborated the accuracy of the developed model, with an estimation error of only 3% of the segment line length.
CONCLUSION
In this study, the performance of distance relaying with hybrid transmission systems was investigated. For faults located on the homogeneous part of the line, the distance relay reacted correctly, since the double circuit part or the cable segment were not involved in the faulted equivalent circuit seen by the relay. However, higher percentages of estimated distance errors resulted for similar faults involving the non-homogeneous part of the line, as indicated for the fault occurring in zone 2 (beyond the double circuit part), where it was recognized as a zone-1 fault by the sending end relay, which surprisingly neglected the blocking signal sent from the receiving end relay. It can therefore be concluded that the selected communication mode played a role in inhibiting or permitting the false tripping; it should be carefully selected and adjusted.
Kinetic analysis of wood residues and Gorse (Ulex europaeus) pyrolysis under non-isothermal conditions: A case study in Bogotá, Colombia
Thermal degradation and kinetics of the biomass materials wood residues and Gorse (Ulex europaeus) have been evaluated under pyrolysis (N2) conditions, using a non-isothermal thermogravimetric method (TGA) from 25°C to 900°C at different heating rates of 10, 20, 30 and 40°C min-1. In the DTG curves, the temperature peaks at the maximum weight loss rate changed with increasing heating rate. The maximum rate of weight loss, obtained at a heating rate of 40°C/min, was 0.38 and 0.46 % s-1 for wood residues and Gorse, respectively. Activation energy calculations were based on selected non-isothermal methods (Kissinger, FWO, KAS, and Starink). For Gorse, the activation energy was 195.41, 194.44, 214.39 and 179.42 kJ mol-1 by the Kissinger, FWO, KAS, and Starink methods, respectively. On the other hand, the activation energy for wood residues was 176.03, 221.75, 243.08 and 198.26 kJ mol-1 by the Kissinger, FWO, KAS, and Starink methods, respectively. The results showed that Gorse has a lower activation energy than wood residues, which represents a great potential to be used as a feedstock in thermochemical technologies. The Levelized Cost of Electricity (LCOE) was calculated for gasification of wood residues and Gorse, which was 186 and 169 USD/MWh, respectively.
INTRODUCTION
Nowadays, the need for new renewable energy sources to supply the growing demand for energy, diversify the energy matrix and reduce the use of fossil fuels is widely recognized. The environmental impact of the greenhouse gases (air pollution, global warming and acid rain) emitted by fossil fuels is of great concern in different contexts [1,2].
One of the most promising renewable energy sources is lignocellulosic residual biomass, not only because of its availability worldwide but also because of remarkable advantages such as being CO2 neutral and promoting a large annual generation rate [3]. An example of this type of biomass is residual wood, which is produced during tree pruning activities.
In Bogotá, Colombia, approximately 5.856 t of trimming residues are disposed of annually in the landfill [4]. This value excludes residues of Gorse (Ulex europaeus), an exotic species listed as one of the most invasive species in the world because of its high reproduction rate, rapid growth, high germination potential, high ability to disperse its seeds, resistance to different environmental factors, and the ease with which it burns.
For all these reasons, Gorse outcompetes native species and is a fire hazard.
Approximately 15.000 hectares are invaded by Gorse in Bogotá, and there are about 72.000 ha with a high probability of being invaded by this lignocellulosic material [5]. Local authorities have been looking for an effective way to eradicate it but have not yet succeeded. At the moment, Gorse expansion is being limited by burying the gorse in plastic bags to ensure its degradation [6]. This situation represents an opportunity to study and evaluate the energy conversion of Gorse and wood residues for power generation in the capital district.
Biomass can be transformed into other forms of energy by different ways such as biological, chemical and thermochemical conversions. The latter is used for electricity and heat generation using heat and pressure. It is appropriate for dried biomass. In contrast, the biological route which is known as bio-digestion, uses microorganisms to produce gas and it is suitable for moist biomass.
The chemical route is used to produce biofuels such as ethanol and other chemical products using enzymes [7,8].
Thermochemical conversion includes transformations such as pyrolysis, direct combustion and gasification. As a separate technology and as the preliminary stage of combustion and gasification, pyrolysis involves complicated chemical processes and complex physical processes such as heat and mass transfer, and it has a significant effect on the gasification process. As a result, a deep understanding of pyrolysis kinetics is key to providing guidance on the design and feasibility of industrial gasification reactors and on optimizing the operating conditions [9].
In this work, the pyrolysis of wood residues and Gorse (Ulex europaeus) has been studied with the goal of evaluating their thermal decomposition kinetics and estimating their energy potential.
Samples
Gorse (Ulex europaeus) and wood residue samples were supplied by the José Celestino Mutis Botanical Garden of Bogotá. The material included trunks and branches without leaves. The samples were ground and sieved to a particle size of less than 250 µm in preparation for the thermogravimetric analysis.
Material characteristics
Proximate analysis was performed after a drying process at 40°C for 60 h, according to ASTM D7582-12. The ultimate analysis was conducted using a CHNS/O analyzer (TruSpec Micro, LECO) according to ASTM D5373-08. The high calorific value (HHV) was determined in a calorimetric bomb (plain jacket calorimeter 1341, Paar™) according to ASTM D-2015. The biochemical analysis was performed using a FiberCap™ fibre analysis system according to AOAC methods 962.09 and 978.10.
The proximate, ultimate and biochemical analysis results, as well as the high calorific value (HHV) of the lignocellulosic samples are listed in Table 1.
Thermogravimetric analysis
Thermogravimetric tests were performed in a thermogravimetric analyser (TGA/DSC1, Mettler Toledo), in which the weight loss of a sample was measured continuously at atmospheric pressure under a constant nitrogen flow rate of 50 mL/min, at different heating rates.
First, the sample was heated to 105°C for 30 min. Then, the pyrolysis process was carried out from 25°C to 900°C at four different constant heating rates of 10, 20, 30 and 40°C/min. A sample of 10±2 mg was used, and all experiments were performed in duplicate.
Theory
The primary pyrolysis process of biomass is represented by the following reaction scheme: Biomass → char + (volatiles + gases). The global kinetics of the reaction can be described as:

dα/dt = k(T) f(α) (1)

where α is the fraction of conversion, α = (W0 - Wt)/(W0 - W∞); W0 and W∞ are the sample masses at the beginning and at the end of the mass loss reaction, respectively, and Wt is the sample mass at time t/temperature T. f(α) is the differential function of conversion, T is the temperature, and k(T) is the rate constant, described by the Arrhenius equation k(T) = A e^(-E/RT), where A is the pre-exponential factor, E is the apparent activation energy and R is the gas constant (8.314 J/mol K). Eq. (1) can thus be converted into Eq. (2), expressed as:

dα/dt = A e^(-E/RT) f(α) (2)

This equation expresses the fraction of material consumed per unit of time. Introducing the constant heating rate β = dT/dt, Eq. (3) can be written as:

dα/dT = (A/β) e^(-E/RT) f(α) (3)

The activation energy was obtained from non-isothermal TGA. The methods used to calculate the kinetic parameters are called model-free non-isothermal methods and require a set of experimental tests at different heating rates [9,10].
Model-free methods
Activation energy was obtained from non-isothermal TGA through the following model-free non-isothermal methods: Kissinger, Flynn-Wall-Ozawa (FWO), Kissinger-Akahira-Sunose (KAS) and Starink. These methods allow the kinetic parameters, such as the activation energy (E) of a solid-state reaction, to be obtained without knowing the reaction mechanism [10]. A summary of the methods used is given in Table 2.
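As an illustration of how such model-free fits are computed, the sketch below applies the KAS method: for a fixed conversion α, ln(β/T²) is regressed against 1/T across the heating rates, and E follows from the slope (which equals -E/R). The temperatures used are hypothetical placeholders, not data from this study.

```python
# Hypothetical sketch of the KAS method: at a fixed conversion level,
# regress ln(beta / T^2) against 1/T across heating rates; the slope
# equals -E/R. Temperatures below are placeholders, not measured data.
import numpy as np

R = 8.314                                   # gas constant, J/(mol K)
betas = np.array([10, 20, 30, 40])          # heating rates, K/min
# Temperatures (K) at which alpha = 0.5 was reached for each heating rate
T_alpha = np.array([600.0, 612.0, 620.0, 626.0])

x = 1.0 / T_alpha
y = np.log(betas / T_alpha**2)
slope, intercept = np.polyfit(x, y, 1)      # linear least-squares fit

E = -slope * R                              # apparent activation energy, J/mol
print(f"E(alpha=0.5) ~ {E / 1000:.1f} kJ/mol")
```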
Levelized Cost of Energy (LCOE)
The Levelized Cost of Electricity (USD/MWh) was calculated for gasification technology using wood residues and Gorse as feedstocks, according to Eq. (4):
LCOE = (I - D + C - S)/P (4)

where the parameters represent: I = initial investment (USD); D = depreciation costs (USD); C = annual costs (USD); S = value of assets at the end of the life cycle (USD); P = total power generation (MWh).
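A minimal numerical sketch of Eq. (4) is shown below; all monetary and generation figures are hypothetical placeholders chosen only to illustrate the arithmetic.

```python
# Hypothetical sketch of Eq. (4); all input values are placeholders.
def lcoe(investment, depreciation, annual_costs, salvage, generation_mwh):
    """Levelized Cost of Electricity in USD/MWh, per Eq. (4)."""
    return (investment - depreciation + annual_costs - salvage) / generation_mwh

# Example: a small gasification plant over its whole life cycle
print(lcoe(investment=1_500_000, depreciation=300_000,
           annual_costs=450_000, salvage=50_000,
           generation_mwh=9_500))   # -> ~168.4 USD/MWh
```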
Characterization of fuels
According to the proximate analysis results, a low moisture content (8.13% and 10.70%) and a high amount of volatiles, ranging from 85.73% to 91.11%, can be observed for these lignocellulosic biomasses.
(From Table 2, the Kissinger method is based on the relation ln(β/Tm²) = ln(AR/E) - E/(R Tm), where Tm is the temperature at the maximum weight loss rate.)
This indicates that wood residues and Gorse can be considered desirable feedstocks for thermochemical processes. Furthermore, the low ash content of both samples represents an advantage for the pyrolysis process, because a high ash content results in fouling in the reactor [11].
Thermal degradation characteristics
The TG and DTG profiles of wood residues and Gorse (Ulex europaeus) at different heating rates of 10-40°C/min under a nitrogen atmosphere are illustrated in Fig. 1 and 2, respectively. As can be seen from the plots, the devolatilization process started at approximately 235°C and proceeded rapidly with increasing temperature, after which the weight loss decreased slowly up to the final temperature. The residue at the end of the process was between 25-31% of the initial weight for both samples.
The pyrolysis process can be divided into three different zones from the DTG curves. While zone (I) corresponds to the mass loss due to evaporation of water and light volatiles, zone (II) shows the main pyrolysis stage caused by devolatilization, and zone (III) illustrates the degradation of carbonaceous matter in the residue; in this last zone, little mass loss is observed [11,12]. In the main pyrolysis zone, one peak, one shoulder and one long tailing can be observed for each DTG curve.
The shoulder represents the fastest conversion of hemicellulose, the peak corresponds to the decomposition of cellulose and the tailing mainly corresponds to the lignin degradation [11].
It can be seen from the DTG curves that Gorse had a higher hemicellulose and lignin content than wood residues, which is supported by the biochemical analysis results (Table 1). These experimental results are in good agreement with reported results for the pyrolysis of cellulose, hemicellulose and lignin [12].
In order to describe the properties of the pyrolysis process of these samples under the effect of varied heating rates, some characteristic parameters were calculated from the TG-DTG curves and are listed in Table 3. The highest degradation was obtained at a heating rate of 10°C/min, and the obtained solid residues were 25.31% and 28.47% for wood residues and Gorse, respectively. This occurred because of the lower ash content and higher volatile matter of Gorse [13].
On the other hand, Figs. 1(b) and 2(b) show that at a higher heating rate, the weight loss rate is greater. The biomass had some resistance to the reaction, so the peaks in the DTG curves moved to the right (Table 3). These results have been reported by other authors for lignocellulosic materials [10,14]. The highest weight loss rate, 0.46 %/s for Gorse and 0.38 %/s for wood residues, was obtained at 40°C/min. (In Table 3, TV is the devolatilization temperature (°C), Tp the temperature corresponding to the peak rate of weight loss (°C), DTGmax the maximum weight loss rate (% s-1), and the residue is expressed as % of the initial sample weight, for each sample and heating rate.)
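For illustration, the sketch below shows how the conversion, the DTG curve and its characteristic peak (Tp, DTGmax) can be derived from a TGA record; the synthetic logistic weight-loss curve only mimics the shape of real data.

```python
# Hypothetical sketch: deriving conversion, the DTG curve and its peak
# (Tp, DTGmax) from a TGA record. The synthetic data below only mimic
# the shape of a real weight-loss curve.
import numpy as np

time_s = np.linspace(0, 3000, 601)                  # s
temp_c = 25 + (40 / 60) * time_s                    # 40 C/min heating ramp
# Synthetic weight curve: single devolatilization step around 350 C
w0, w_inf = 10.0, 2.8                               # mg
weight = w_inf + (w0 - w_inf) / (1 + np.exp((temp_c - 350) / 25))

alpha = (w0 - weight) / (w0 - w_inf)                # conversion fraction
dtg = -np.gradient(weight, time_s) / w0 * 100       # weight loss rate, %/s

ipk = np.argmax(dtg)
print(f"Tp = {temp_c[ipk]:.0f} C, DTGmax = {dtg[ipk]:.3f} %/s")
```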
Kinetic parameters
The TGA data were analysed using model-free methods for determination of the apparent activation energies of pyrolysis. The Kissinger, FWO, KAS and Starink methods were used to calculate the activation energy as a function of conversion (α = 0.1-0.7). Table 4 lists the kinetic parameters and the R² fitted from the plots of each method. The correlation coefficient (R²) ranged from 0.951 to 1.000 for all cases. Figure 3 shows the apparent activation energy for (a) wood residues and (b) Gorse against conversion, obtained by the four methods. It can be observed that the apparent activation energy for the KAS, FWO and Starink methods was not similar for all conversions, which indicates the existence of a complex multi-step mechanism in the solid state. It can also be observed that the activation energy is lower for Gorse than for wood residues. These results show the ability of Gorse to be transformed by a thermochemical process: it needs less energy to be transformed compared with wood residues. Additionally, the high calorific value of Gorse is greater than that of wood residues. This characteristic could increase the conversion through a thermochemical route such as gasification.
Many studies of wood gasification have been reported. However, there are few studies on the thermochemical conversion of Gorse. A study of power generation using Gorse gasification was carried out in a downdraft reactor with a coupled power generation system, in which a peak of 13.8 kW was obtained [5].
Levelized Cost of Electricity (LCOE)
The levelized cost of electricity of gasification of wood residues and Gorse is illustrated in Fig. 4. The LCOE for Gorse gasification was 169 USD/MWh, which was lower than the LCOE for wood residues gasification of 186 USD/MWh. For the purpose of comparison, the cost of electricity from the local supplier is 192 USD/MWh [15].
In the same way, a study of power generation from gasification of corncobs reported a LCOE of 170 USD/MWh.
Sustainable and Regenerable Alkali Metal-Containing Carbons Derived from Seaweed for CO2 Post-Combustion Capture
Alkali-based CO2 sorbents were prepared from a novel material (i.e., Laminaria hyperborea). The use of this feedstock, naturally containing alkali metals, enabled a simple, green and low-cost route to be pursued. In particular, raw macroalgae was pyrolyzed at 800 °C. The resulting biochar was activated with either CO2 or KOH. KOH-activated carbon (AC) had the largest surface area and attained the highest CO2 uptake at 35 °C and 1 bar. In contrast, despite much lower porosity, the seaweed-derived char and its CO2-activated counterpart outweighed the CO2 sorption performance of KOH-AC and commercial carbon under simulated post-combustion conditions (53 °C and 0.15 bar). This was ascribed to the greater basicity of the char and CO2-AC due to the presence of alkali metal-based functionalities (i.e., MgO) within their structure. These were responsible for sorption of CO2 at lower partial pressure and higher temperature. In particular, the CO2-AC exhibited fast sorption kinetics, facile regeneration and good durability over 10 working cycles. Results presented in the current article will be of help for enhancing the design of sustainable alkali metal-containing CO2 captors.
Introduction
Today the greenhouse effect is a well-known issue and a continuous matter of discussion in the scientific community. CO2 is widely recognized as one of the most relevant greenhouse gases (GHGs). At present, its emissions are mainly due to the use of non-renewable energy sources (fossil fuels) such as coal, employed in stationary power plants for electricity generation, and oil, the derivatives of which are used for motor transport [1]. Due to the progressive increase in the CO2 level affecting the atmosphere, the global surface temperature already increased by 0.8 °C in the 20th century, and is expected to rise by a further 1.4-5.8 °C during the 21st century [2].
Several carbon capture technologies have been proposed in recent years. At present, the post-combustion route, with specific regard to chemical absorption onto amines, still represents the most ready-to-use approach in the industrial context [3]. This is because post-combustion technology can easily be retrofitted into existing plants. Nevertheless, the use of amine-based solvents poses various disadvantages [4,5]. These include the energy penalty due to the excessive amount of heat required for solvent regeneration, the corrosion affecting the reactor, and the amine degradation over repeated working cycles. For all of these reasons, this technology has not yet been implemented on an industrial scale.
Therefore, the research community has directed its attention towards alternative post-combustion technologies for CO2 capture. Adsorption onto solid materials is a focus of this research. Carbons are one of the most efficient types of sorbents among those available on the market (e.g., zeolites, metal-organic frameworks). The aims of this work were to:
• prepare green CO2 sorbents starting from a novel and widely available feedstock, i.e., Laminaria hyperborea, whose intrinsic high alkalinity should allow the study of this property on CO2 absorption;
• identify the best (low-cost and environmentally sound process, i.e., no chemical addition) activation route to produce reusable CO2 sorbents with high alkalinity.
Activated Carbon Preparation
Laminaria hyperborea was collected from Clachan Sound, West Scotland, in the summer season (July). The raw material, designated LH_S, was washed in distilled water, air-dried and ground prior to further treatment. Around 5 g of raw seaweed was pyrolyzed under N2 (flow rate of 100 mL·min−1) at 800 °C for 1 h in a horizontal tube furnace at a heating rate of ca. 25 °C·min−1. The char, denoted LH_S800, was then subjected to two different thermochemical process routes (see Figure 1). The first route, i.e., CO2 activation, was performed by heating the char up to 700 °C at a heating rate of 10 °C·min−1; the char was then held isothermally for 30 min, with CO2 (flow rate of 0.6 L·min−1) flowing throughout the heat treatment. The second treatment, i.e., KOH activation, entailed mixing char and KOH pellets (Sigma Aldrich, P1767) in a mortar in a 1:4 ratio. The char/KOH dry mixture was then heated up to 750 °C under N2 (flow rate of 100 mL·min−1) at a heating rate of 5 °C·min−1, and the activation temperature was held for 1 h. Following heat treatment, the carbon was sequentially washed with distilled water and 1 M HCl (VWR International, 20252.420) in order to remove residual activation by-products. Yields obtained for LH_S800, LH_S800PA and LH_S800CA are given in Table S3, Supplementary Materials.
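For clarity, solid yield is simply the recovered mass referenced to the dry feedstock charged; a minimal Python sketch with hypothetical masses (the true values are those in Table S3):

```python
# Hedged sketch with hypothetical masses (the measured values are in Table S3):
# solid yield = mass recovered after a treatment step / dry feedstock mass.
masses_g = {
    "LH_S (feed)": 5.00,   # raw seaweed charged to the furnace (ca. 5 g)
    "LH_S800":     1.90,   # char after pyrolysis at 800 C (hypothetical)
    "LH_S800PA":   1.60,   # after subsequent CO2 activation (hypothetical)
}

feed_mass = masses_g["LH_S (feed)"]
for sample, mass in masses_g.items():
    print(f"{sample}: {100.0 * mass / feed_mass:.1f} wt% of feed")
```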
Commercial carbon (designated as AR) was supplied by Chemviron Carbon. Magnesium oxide (cat no. CHE2450) was purchased from Scientific Laboratory Supplies (SLS) Ltd.
Details of materials produced and used in this study are given in Table 1.
Activated Carbon Characterization
Measurement of N2 adsorption isotherms was performed on a Quantachrome Autosorb 1C gas sorption analyzer. A relative pressure (P/P0) ranging between approximately 10−3 and 0.99 was considered. Surface areas were calculated by applying the Brunauer-Emmett-Teller (BET) model [20] to the N2 adsorption data over the P/P0 range recommended by ISO-FDIS 9277:2010 [21]. Gurvitsch's rule (P/P0 = 0.99) [22] was applied for the estimation of total pore volume. Micropore volumes were determined by using the Dubinin-Radushkevich (DR) model (P/P0 < 0.02) [23], whereas mesopore volumes were obtained from integration of the Barrett-Joyner-Halenda (BJH) distribution (2 < pore diameter (d) < 50 nm). Ultimate analyses were carried out with an elemental analyzer (Flash EA2000), while proximate analyses were performed using either the British Standards Institution (BSI) procedure [24-26] or thermogravimetric analysis (TGA). The morphology of the materials was examined with an EVO MA15 scanning electron microscope (SEM), operated with a working distance of 8 to 9 mm and an accelerating voltage of 20 kV using the in-lens detector. In order to make the samples conductive and avoid electric charging effects, the specimens were gold coated using an Emscope SC500 vacuum gold coater and stored in a desiccator prior to analysis. A semi-quantitative analysis of the samples' inorganic components was conducted using an energy-dispersive X-ray (EDX) spectroscopy system integrated with the scanning electron microscope. X-ray diffraction (XRD) patterns were recorded using a Bruker D8 powder diffractometer operating with a Cu Kα radiation source. The X-ray patterns were acquired by means of the DIFFRACplus software and recorded in the 2θ range of 10-80°, with a step width of 0.033° and a time per step of 1 s. In order to identify the main crystalline phases for each sample, XRD patterns were matched to an X-ray diffraction pattern library using the software package HighScore (Panalytical, UK). Basic surface functionalities of the materials were measured through Boehm titrations, using the method reported elsewhere [27]. In particular, one gram of each activated carbon was mixed with 50 mL of a 0.05 M solution of either sodium hydroxide (NaOH) or hydrochloric acid (HCl) in vials, which were sealed and shaken for 24 h. The solutions were previously standardized following the procedure suggested by Oickle et al. [28]. The content of the vials was filtered, and 5 mL of each filtrate was pipetted into a beaker. The excess of base or acid was then titrated with either HCl or NaOH, respectively. The amounts of acidic and basic surface groups were determined according to the relationships reported by Goertzen et al. [29] for direct titrations, which reduce to the standard direct-titration mole balances

n_acidic = [NaOH]·V_NaOH,0 − [HCl]·V_HCl·(V_NaOH,0/V_a)
n_basic = [HCl]·V_HCl,0 − [NaOH]·V_NaOH·(V_HCl,0/V_a)

where the subscript 0 denotes the 50 mL batch solution mixed with the carbon, the unsubscripted titrant volumes are those consumed in titrating the V_a = 5 mL aliquot, and the aliquot result is scaled back to the batch volume. Results were interpreted according to the assumption that NaOH neutralizes all acidic groups (carboxylic, phenolic and lactonic groups) and HCl reacts with all basic groups.
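For illustration, the same mole balance in a short Python sketch (the function name and titration volumes below are hypothetical, chosen only to show the arithmetic):

```python
# Hedged sketch: Boehm direct-titration mole balance.
# Surface groups = moles of reaction base/acid initially added, minus moles
# left unreacted (measured on a 5 mL aliquot, scaled back to the 50 mL batch).

def surface_groups_mmol_per_g(c_batch, v_batch_ml, c_titrant, v_titrant_ml,
                              v_aliquot_ml, m_carbon_g):
    """Return surface-group concentration in mmol per gram of carbon.

    c_batch      : molarity of NaOH (acidic groups) or HCl (basic groups), mol/L
    v_batch_ml   : batch volume mixed with the carbon (50 mL here)
    c_titrant    : molarity of the back-titrant (HCl or NaOH), mol/L
    v_titrant_ml : titrant volume needed to neutralize the aliquot, mL
    v_aliquot_ml : aliquot volume taken from the filtrate (5 mL here)
    m_carbon_g   : carbon mass (1 g here)
    """
    n_initial = c_batch * v_batch_ml / 1000.0            # mol added to the carbon
    n_left = c_titrant * v_titrant_ml / 1000.0 * (v_batch_ml / v_aliquot_ml)
    return (n_initial - n_left) * 1000.0 / m_carbon_g    # mmol per g

# Hypothetical example: 1 g char in 50 mL of 0.05 M HCl; a 5 mL aliquot of the
# filtrate requires 2.8 mL of 0.05 M NaOH -> basic groups consumed by the char.
print(surface_groups_mmol_per_g(0.05, 50.0, 0.05, 2.8, 5.0, 1.0))  # ~1.1 mmol/g
```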
CO2 Sorption Measurements
CO2 sorption capacities were measured on a Mettler Toledo thermogravimetric analyzer (TGA)/differential scanning calorimeter (DSC) [30]. Samples were initially degassed in N2 (50 mL·min−1) at 120 °C for 30 min. Materials were then cooled down to ca. 35 or 53 °C under N2 prior to measuring CO2 uptake. The gas atmosphere was then changed to pure CO2 or 15% v/v CO2 in N2 (total 50 mL·min−1), and the temperature was held for 30 min in order to measure adsorption at a total pressure of 1 bar. Next, the temperature was increased up to 100 °C at 5 °C·min−1 to start desorbing CO2. Regeneration was then completed by switching the atmosphere back to N2 and further increasing the temperature up to 120 °C; the latter temperature was held for 15 min. The flow rate was kept constant throughout the experiment. In addition, the recyclability of the best performing sample was tested over ten adsorption/desorption cycles. The same program was used, but in this case regeneration was accomplished in a single step by heating the sample (at 5 °C·min−1) to 120 °C under 15% CO2. The atmosphere during desorption was kept the same as during adsorption in order to simulate rapid temperature swing adsorption (RTSA) as a regeneration strategy, where the partial pressure is not changed.
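A minimal sketch of the uptake bookkeeping behind such TGA measurements (the mass trace below is synthetic, scaled only to mimic the ~10.7 mg·g−1 uptake reported later for LH_S800PA):

```python
import numpy as np

# Hedged sketch (hypothetical trace): CO2 uptake from a TGA experiment is the
# mass gained during the CO2 step, referenced to the degassed sorbent mass.
time_min = np.array([0.0, 5.0, 10.0, 20.0, 30.0])          # time after gas switch
mass_mg = np.array([10.00, 10.06, 10.09, 10.10, 10.107])   # sample mass signal

m_dry = mass_mg[0]                                # degassed mass before CO2 step
uptake_mg_per_g = (mass_mg - m_dry) / m_dry * 1000.0

print(f"uptake after 30 min: {uptake_mg_per_g[-1]:.1f} mg CO2/g")  # ~10.7 mg/g
```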
Results and Discussion

The N2 adsorption data in Table 2 reveal the absence of a porous structure for raw Laminaria hyperborea. A negligible surface area (<1 m2·g−1) was already reported for raw macroalgae waste (algal meal) by Ferrera-Lorenzo et al. [31]. The pyrolytic treatment caused the development of a rudimentary porous structure, which was slightly enhanced after CO2 activation. In contrast, the KOH-activated carbon presented outstanding textural properties. The significant porosity development exhibited by the KOH-treated char was attributed to the combined effect of the KOH activation mechanism (pore creation) and HCl washing (pore unblocking, i.e., removal of KOH activation residues filling the pores). [Table 2 footnotes: (a) surface area calculated by applying the BET model [20] to N2 adsorption data; (b) total pore volume calculated by applying Gurvitsch's rule [22] at P/P0 = 0.99; (c) micropore volume calculated by applying the Dubinin-Radushkevich (DR) model [23] to N2 adsorption data; (d) mesopore volume calculated by applying the Barrett-Joyner-Halenda (BJH) model to N2 adsorption data; (e) macropore volume calculated by difference.]

Figure 3a depicts the morphology of raw Laminaria hyperborea, which was typical of a plant tissue structure [32]. The carbonization and CO2 activation processes did not significantly change the initial morphology of the macroalgae (see Figure 3b,c). This agrees with the limited increase in porosity measured by gas sorption for LH_S800 and LH_S800PA (see Figure 2). SEM images at 5000× showed particles lying on the carbon substrate of LH_S800 and LH_S800PA. These particles were found to be inorganic based on the EDX chemical compositions corresponding to the SEM micrographs (see Figure 4). The fact that no inorganic particles were observed for LH_S800CA (see Figure 3d) is in agreement with the poorer inorganic fractions detected by EDX analysis for this sample (see Figure 4); as a result, an increase in carbon abundance was measured for the KOH-activated carbon. As already mentioned, the decrease in mineral matter was caused by the HCl rinsing of the KOH-activated carbon, which promoted the development of porosity. Nonetheless, only macropores could be observed in Figure 3d, as the scale did not allow smaller pores to be identified.
Alkali metal concentrations are reported in Table 3. The amount of alkali species present within raw Laminaria, pyrolyzed Laminaria and the CO2-activated carbon followed the sequence K > Na > Ca > Mg. This was consistent with ICP-MS results previously reported by Ross et al. [33]. After pyrolysis, the concentration of all alkali metals significantly increased. This was due to the devolatilization that occurred during the pyrolytic treatment and was in agreement with the proximate findings (see Table 4). The alkali metal concentration did not noticeably change following physical activation. On the other hand, the chemical treatment, with particular regard to the HCl rinsing, caused a dramatic decrease of all alkali species, especially K and Na. [Table 3 caption: alkali metal concentration measured by atomic absorption spectroscopy (AAS), inductively coupled plasma optical emission spectrometry (ICP-OES), and inductively coupled plasma mass spectrometry (ICP-MS) for raw Laminaria (LH_S), pyrolyzed Laminaria (LH_S800), CO2-activated char (LH_S800PA) and KOH-activated char (LH_S800CA).]

As depicted in Figure 5a,b, the XRD patterns measured for raw macroalgae (LH_S) and its pyrolyzed derivative (LH_S800) displayed a series of sharp and intense peaks. Most of the sharp peaks found for virgin Laminaria matched the standard pattern of potassium chloride (00-004-0587), which can be considered the main crystalline phase for this sample. All peaks related to this phase were also detected in the XRD pattern of the seaweed char. This was in line with the high content of K given in Table 3 for raw and pyrolyzed Laminaria. Most of the remaining peaks identified for LH_S were associated with sodium chloride (01-080-3939). All of these peaks were also found in the pattern of pyrolyzed Laminaria, yet a slight shift toward lower angles was noticed, which might have been due to a distortion of the lattice parameter of the crystals.

Crystalline phases detected in this study for virgin and carbonized seaweed agreed with results previously reported in the literature. In particular, Wang et al.
[34] identified the presence of alkali chlorides within seaweed-based ash. In addition, sylvite (KCl) was detected in seaweed ash by Yaman et al. [35], while halite (NaCl) matched the pattern of oarweed-based chars obtained after pyrolysis at 500 °C [36]. Furthermore, reflections corresponding to halite were also found for raw seaweed (i.e., Undaria pinnatifida) by Song et al. [37]. The same phase was retained after pyrolysis at 1000 °C.
Interestingly, as observed in Figure 5b, additional signals were measured for the pyrolyzed seaweed, suggesting the formation of new phases after the pyrolysis treatment. In particular, reflections observed at ca. 43, 62 and 78° 2θ were assigned to magnesium oxide. This compound was also identified by Song et al. [37] after pyrolysis of seaweed (i.e., Undaria pinnatifida) at 1000 °C. According to the authors, magnesium ions may react with oxygen-containing species such as H2O to form MgO during pyrolysis at high temperature. In addition, as suggested by Ross et al. [33], most of the inorganic fractions contained in raw seaweed tend to decompose to their oxides when pyrolyzed at 750-800 °C.
Accordingly, it might also be the case that crystalline MgO arose from the pyrolytic breakdown of amorphous Mg alginates present within the raw brown algae [35]. Some residual low-intensity peaks measured at ca. 21 and 34° 2θ were ascribed to unknown impurities (see asterisks in Figure 5b); these reflections might be attributed to trace alkali metals or alkali-based alginates whose detection is prevented by overlap with more dominant phases. Figure 5b also shows that all peaks identified for pyrolyzed Laminaria were observed for its CO2-activated counterpart, confirming that the inorganic phases were largely retained after CO2 treatment. By contrast, as seen in Figure 5c, no peaks associated with alkali chlorides or magnesium oxide were found for LH_S800CA. This can be attributed to the dissolution of the crystallites after acid washing, which is corroborated by the EDX (see Figure 4), AAS/ICP-OES/ICP-MS (see Table 3) and proximate (see Table 4) results. However, a series of low-intensity peaks was measured for LH_S800CA. Most of these were best fitted by the standard patterns of aluminum oxide (04-013-1687) and aluminum hydroxide (04-014-1754); oxidized forms of Al seemed to be the only inorganic phases that were not fully dissolved by the HCl washing. In addition, weak peaks related to other unknown impurities (see asterisks in Figure 5c) were observed at ca. 29 and 42° 2θ, which might be ascribed to trace alkali metals. Nevertheless, the LH_S800CA pattern also showed two broad peaks at around 25° (002) and 43° 2θ (100). These highlighted a more amorphous structure of the sample, typical of activated carbon [38]. Moreover, the LH_S800CA pattern suggested that KOH activation decreased the long-range graphenic ordering of the carbon nanostructure (amorphization). The disordering mechanism induced by KOH activation in the nanostructure of pitch-derived carbonaceous materials was discussed by Król et al. [39]. This seems to have contributed to the generation of micropores in LH_S800CA, as also suggested by Table 2. However, the limited CO2 capture potential measured for the KOH-activated material might suggest that the LH_S800CA nanostructure lacks narrow nanopores (diameter < 0.7 nm), which are those favoring CO2 adsorption under post-combustion conditions [18,19].
A large number of basic groups (up to ca. 2.2 mmol·g−1) was measured for the pyrolyzed seaweed by Boehm titration. The significant concentration of basic functionalities could be associated with the high level of alkali metals present within the macroalgae char. Basic functionalities appeared to be entirely retained after CO2 activation, whereas much lower basicity was found for the KOH-activated char (see Figure S2, Supplementary Materials). Once again, this result is in line with the EDX and proximate findings, and was due to the demineralization of the KOH-activated carbon following HCl rinsing.
When measuring CO2 uptake at 35 °C and 1 bar (see Figure S3, Supplementary Materials), the seaweed char and its CO2-activated counterpart exhibited slower kinetics and lower uptakes than those achieved by the KOH-activated sample. This suggests that surface area (and thus physisorption) is the predominant feature of the CO2 capture process at low temperature and high CO2 partial pressure.
The post-combustion capture performances of pyrolyzed Laminaria and its activated derivatives were compared, as seen in Figure 6. A commercial carbon (AR) and pure magnesium oxide (MgO) were included for comparison purposes. At lower CO2 partial pressure (0.15 bar) and higher temperature (53 °C), LH_S800CA and AR exhibited the fastest sorption kinetics. In spite of this, both the KOH-activated and the commercial carbon attained much lower sorption capacities than those measured for the seaweed char and its CO2-activated counterpart. In addition, the CO2 sorption capacities recorded for LH_S800CA and AR were not steady but appeared to decrease over the equilibration time. The sorption behavior exhibited by LH_S800CA and AR was typical of physisorbents, whose CO2 capture mechanism relies only on weak bonds and becomes less effective at high temperature. Therefore, despite the feed gas and adsorption temperature being kept constant throughout the adsorption process, the capacity loss measured for LH_S800CA and AR with increasing time can reasonably be attributed to the prolonged exposure of the materials to a relatively high temperature (53 °C), which appeared to favor desorption of weakly bonded CO2. Furthermore, LH_S800CA and AR are highly porous carbons, which are known to strongly retain water vapor within their pores. Considering that the materials were subjected to a mild drying (120 °C under N2), it is likely that the dried sorbents still contained a significant amount of residual moisture, which might have hindered the CO2 physisorption mechanism in their pores. Importantly, despite the very low surface area available, LH_S800 and LH_S800PA significantly outperformed the KOH-activated carbon, capturing nearly twice as much CO2 as that adsorbed by LH_S800CA after 30 min. The remarkable difference in surface area between the samples (see Figure 2) suggests that textural properties did not play a key role (relative to alkali-based chemisorption) in the sorption potential exhibited by the macroalgae char and its CO2-activated derivative under typical post-combustion conditions (0.15 bar CO2 and 53 °C). In fact, as mentioned above, unlike LH_S800CA, which was demineralized (i.e., HCl washed), a relatively high concentration of alkali metals was found for LH_S800, and this was largely retained after CO2 activation. HCl washing was deliberately not applied after CO2 activation, in order to preserve all alkali-based compounds within the LH_S800PA structure. Had the CO2-activated carbon been washed with HCl, only physisorption would have occurred, thus reducing the efficiency of the material at capturing CO2, which might have ended up being lower than that exhibited by LH_S800CA. The inherent alkalinity of the seaweed char and its CO2-activated product agreed with the higher number of basic groups measured for these samples compared to that of LH_S800CA (see Figure S2, Supplementary Materials). Note that the amount of basic groups measured for LH_S800PA was ca. 2.2 mmol·g−1, whereas the CO2 sorption capacity of the sample was only ca. 0.25 mmol·g−1. Assuming that each mole of basic groups captured one mole of CO2, the discrepancy between the amount of basic groups and the CO2 sorption capacity indicates that not all the basic functionalities present on the sorbent were effective for CO2 sorption. This suggests that the type of alkali metal-containing group significantly affects the CO2 capture process on macroalgae-derived sorbents.
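A quick arithmetic check of the utilization argument above (the values are those stated in the text; the one-CO2-per-site stoichiometry is the stated assumption):

```python
M_CO2 = 44.01                 # molar mass of CO2, g/mol

basic_groups_mmol_g = 2.2     # Boehm titration result for LH_S800PA
uptake_mmol_g = 0.25          # post-combustion CO2 uptake for LH_S800PA

site_utilization = uptake_mmol_g / basic_groups_mmol_g
print(f"fraction of basic sites effective for CO2: {site_utilization:.0%}")  # ~11%
print(f"uptake in mass units: {uptake_mmol_g * M_CO2:.1f} mg CO2/g")         # ~11 mg/g
```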
However, the effect of the inorganic species on the sorption potential of LH_S800 and LH_S800PA seems to become more influential at lower partial pressure, when a higher selectivity is vital. In particular, magnesium oxide, which was identified by XRD within the structure of the pyrolyzed and CO2-activated macroalgae (see Figure 5), may have been responsible for the (chemi)sorption of CO2 under post-combustion conditions.
As indicated by Figure 6, this assumption was corroborated by the CO2 sorption kinetics measured for magnesium oxide under simulated post-combustion conditions, which appeared to mirror those observed for the CO2-activated Laminaria char (LH_S800PA). However, Figure 6 also shows that MgO exhibited a lower CO2 uptake (4.6 mg CO2·g−1, i.e., ca. 0.105 mmol CO2·g−1) than that attained by LH_S800PA. Considering that the adsorption conditions were common to all samples, the greater CO2 sorption potential exhibited by the CO2-activated Laminaria char might be ascribed to the higher physisorption contribution (larger surface area, see Figure 2) occurring on the macroalgae-based carbon in comparison to pure magnesium oxide, whose CO2 capture capacity was mostly due to chemisorption. Accordingly, the capture potential of LH_S800PA seems to be the result of a synergistic effect of physisorption and chemisorption processes. By contrast, although the macroalgae-based char (LH_S800) attained nearly as much CO2 uptake as that achieved by LH_S800PA, the former exhibited a much slower uptake rate than the latter. This may be attributed to the less developed porous network featured by this sample (see Figure 2 and Table 2), which caused a reduced mobility of CO2 through the pore channels of the material, and therefore a delay in accessing both the carbon pores (i.e., physisorption sites) and the MgO particles (i.e., chemisorption sites) present within the porous network of the material. Therefore, it seems that the more developed porous network exhibited by LH_S800PA not only ensured a higher physisorption effect, but might have also facilitated the access of CO2 to the chemisorption sites (i.e., MgO crystals).
However, it is speculated that the CO2 sorption performance of LH_S800 and LH_S800PA might also be due to an additional CO2 chemisorption mechanism. Specifically, it might be that the heat treatment of the pristine macroalgae, containing a significant proportion of Na and K (see Table 3), led to the formation of potassium and/or sodium carbonate. As reported in [8,13,14], Na- and K-based carbonates can absorb CO2 under moist conditions, giving rise to alkali metal bicarbonates. Therefore, assuming that some water molecules remained trapped within the structure of the materials after the initial degassing at 120 °C for 30 min, this would explain the higher sorption capacity exhibited by the Laminaria-derived sorbents in comparison to magnesium oxide.
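For reference, the chemisorption pathways invoked here correspond to standard stoichiometries (these textbook equations are added for clarity and are not reproduced from the article):

```latex
% Standard chemisorption stoichiometries (textbook equations, added for clarity):
\begin{align*}
\mathrm{MgO} + \mathrm{CO_2} &\rightleftharpoons \mathrm{MgCO_3}\\
\mathrm{K_2CO_3} + \mathrm{CO_2} + \mathrm{H_2O} &\rightleftharpoons 2\,\mathrm{KHCO_3}\\
\mathrm{Na_2CO_3} + \mathrm{CO_2} + \mathrm{H_2O} &\rightleftharpoons 2\,\mathrm{NaHCO_3}
\end{align*}
```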
Nevertheless, the highest sorption capacity exhibited by the MgO-containing materials synthesized in this work (10.7 mg CO2·g−1 for LH_S800PA, see Figure 6) was lower than the capture capacity (63 mg CO2·g−1) achieved by a MgO-containing mesoporous carbon under similar conditions (T_ads = 50 °C, P_CO2 = 0.15 bar) [15]. In spite of this, the synthesis of the alkali metal-containing sorbents presented in this study was more sustainable and cheaper than that reported in the work of Bhagiyalakshmi et al. [15], as the seaweed treatment was less energy demanding than the methodology applied by those authors. In addition, the fabrication process reported in this work did not involve any chemical addition (i.e., impregnation of alkali-containing compounds) and entailed the use of a widely available feedstock. Moreover, the best performing sorbent prepared in this work (LH_S800PA) was fully regenerated at a lower temperature (100 °C, see Figure 7) than that applied in [15] (200 °C), which would imply a lower cost associated with RTSA cyclic operations. Interestingly, the regeneration temperature of the MgO-containing carbon materials fabricated in this work was noticeably lower than that usually reported for pure MgO (450-500 °C) [15,40]. Bhagiyalakshmi et al. [15], who synthesized a magnesium oxide-containing carbon, explained this behavior through the weaker interaction between CO2 and the MgO particles not embedded within the framework of the carbon-based sorbent. On the other hand, the facile regeneration exhibited by LH_S800PA suggests that Na- and K-based carbonates may also be contributing to the CO2 capture process. In fact, the corresponding Na and K bicarbonates, possibly formed during the CO2 adsorption step, are less stable and can easily be regenerated at low temperature (100-200 °C).
The easier regeneration (no CO2 left at ca. 80 °C) exhibited by LH_S800CA in Figure 7 seems to indicate a weak (physi)sorption of CO2 onto this sample. In contrast, LH_S800 and pure magnesium oxide appeared to be more difficult to regenerate, as indicated by the slower release of CO2 with increasing time and the residual CO2 retained within these samples at the end of the desorption step.
Hence, based on the results presented above, a cyclic test under post-combustion conditions was performed for LH_S800PA only, as this was the most promising sample in terms of sorption potential, kinetics and regeneration capacity. In addition, in order to ensure full regeneration of the sorbent, it was decided to extend the temperature swing up to 120 °C when performing RTSA cycling as the regeneration strategy (see Figure 8a).
As depicted in Figure 8a, a relatively fast CO2 uptake rate was observed for LH_S800PA, consistent with the sorption behavior displayed by this sample in Figure 6. Nonetheless, this sorbent appeared not to reach its maximum capacity by the end of the equilibration time, indicating the potential to attain an even higher sorption capacity over a longer adsorption stage. Desorption of CO2 was efficiently accomplished by increasing the temperature up to 120 °C. On the other hand, according to the results displayed in Figure 7, this material could be regenerated at an even lower temperature, thus reducing the cost of the regeneration step.
However, as shown in Figure 8b, a noticeable capacity loss (ca. 20%) occurred between cycle 1 and cycle 2. The decay in sorption capacity was presumably due to the incomplete reconversion of magnesium carbonate into MgO (e.g., formation of intermediates such as Mg(OH)2), leading to a decrease in the chemisorption potential. Yet, as highlighted by Figure 8b, the sorbent's capacity stabilized after the first cycle, indicating good durability over time. Note that the apparent decrease of the plateau illustrated in Figure 8a was due to a continuous downward baseline drift.
Conclusions
The current work showed that Laminaria hyperborea processing enables a more sustainable, less costly and easier fabrication of alkali-based CO2 sorbents compared with procedures previously reported in the literature, as no chemical addition (e.g., impregnation of alkali-containing species) was required. This was achieved by exploiting the advantageous chemistry of the widely available macroalgae, which intrinsically contains a high concentration of alkali metals within its structure.
Pyrolysis of the raw material led to the formation of a char rich in ash (up to 59.3%) but with low carbon purity. A very large surface area (up to 2266 m2·g−1) was obtained after KOH activation of the char, which yielded the highest CO2 uptake (up to nearly 60 mg CO2·g−1) at 35 °C and 1 bar. Interestingly, despite their undeveloped porous structure, pyrolyzed Laminaria (LH_S800) and its CO2-activated counterpart (LH_S800PA) exhibited a far greater sorption potential than that measured for the KOH-activated sample (LH_S800CA) and a commercial carbon (AR) under simulated post-combustion conditions (53 °C and 0.15 bar). This result was ascribed to the chemisorption contribution of the alkali metal-based species present within the structure of the macroalgae-derived char and its CO2-activated derivative. The inherently alkaline nature of LH_S800 and LH_S800PA was corroborated by the higher number of basic surface functionalities measured through Boehm titrations compared with those found on the surface of LH_S800CA. This was associated with the removal of mineral matter that occurred after the KOH treatment (i.e., KOH activation followed by HCl washing) of the macroalgae char.
These observations revealed that alkalinity-based chemisorption was dominant at higher temperatures and lower CO2 partial pressures, while surface-based physisorption was dominant at lower temperatures and higher CO2 partial pressures. In particular, magnesium oxide was identified by XRD within the structure of the seaweed char and its CO2-activated counterpart. MgO crystals appeared to play a key role in the sorption of CO2 at lower partial pressure and higher temperature, as suggested by the similar sorption kinetics observed for pure magnesium oxide and LH_S800PA during the post-combustion test. However, the CO2 sorption capacity measured for the CO2-activated Laminaria char was higher than that observed for magnesium oxide. This was ascribed to a synergistic effect of the physisorption (more developed porous network) and chemisorption contributions occurring on LH_S800PA. On the other hand, it is also ventured that Na- and K-based carbonates, possibly present within the structure of the Laminaria-derived sorbent, might have contributed to the capture of CO2.
Although LH_S800 and LH_S800PA attained similar uptakes at saturation, the CO2 sorption performance was optimized after CO2 activation. In fact, LH_S800PA not only retained the same alkali metal-containing basic functionalities (i.e., magnesium oxide) originally present in the char, but also had a more developed porous network. This seems to have promoted the physisorption of CO2 in the carbon pores as well as the migration of CO2 toward the chemisorption sites (MgO crystals), thereby increasing the sorption rate. Moreover, the CO2-activated carbon manifested good durability and more facile regeneration compared with the macroalgae char, accomplished at a very low temperature (100 °C). The desorption temperature was much lower than that normally applied for similar alkali-based sorbents, indicating promising and less expensive recyclability for this material.
The highest CO2 sorption capacity (ca. 0.25 mmol·g−1) measured for the best performing material synthesized in this work (LH_S800PA) is lower than that (ca. 1.25 mmol·g−1 [41]) of the state-of-the-art technology (30% MEA solution). Nevertheless, the CO2 capture capacity of this class of sorbents may potentially be optimized by tuning the material properties (i.e., porosity and the amount of alkali metal-based functionalities effective for CO2 sorption). Furthermore, the presence of moisture in the post-combustion flue gas (not considered in this study) is believed to catalyze the reaction of the Na/K carbonates (possibly present within the structure of LH_S800PA) with CO2 to form the corresponding bicarbonates. Accordingly, this would improve both the CO2 adsorption and desorption (regeneration) performance of the macroalgae-based sorbent. Therefore, future work should systematically assess the influence of water vapor on the CO2 sorption mechanism of the seaweed-based activated carbons. In addition, the application of seaweed-derived solid CO2 sorbents is more eco-friendly than absorption into liquid amines, and the facile regeneration exhibited by the macroalgae-based materials could imply a lower energy penalty for the power plant than that caused by the regeneration of a 30% MEA solution.
Sex dependence of opioid-mediated responses to subanesthetic ketamine in rats
Subanesthetic ketamine is increasingly used for the treatment of varied psychiatric conditions, both on- and off-label. While it is commonly classified as an N-methyl D-aspartate receptor (NMDAR) antagonist, our picture of ketamine’s mechanistic underpinnings is incomplete. Recent clinical evidence has indicated, controversially, that a component of the efficacy of subanesthetic ketamine may be opioid dependent. Using pharmacological functional ultrasound imaging in rats, we found that blocking opioid receptors suppressed neurophysiologic changes evoked by ketamine, but not by a more selective NMDAR antagonist, in limbic regions implicated in the pathophysiology of depression and in reward processing. Importantly, this opioid-dependent response was strongly sex-dependent, as it was not evident in female subjects and was fully reversed by surgical removal of the male gonads. We observed similar sex-dependent effects of opioid blockade affecting ketamine-evoked postsynaptic density and behavioral sensitization, as well as in opioid blockade-induced changes in opioid receptor density. Together, these results underscore the potential for ketamine to induce its affective responses via opioid signaling, and indicate that this opioid dependence may be strongly influenced by subject sex. These factors should be more directly assessed in future clinical trials.
A sub-anesthetic dose of (R,S)-ketamine (or ketamine) rapidly and robustly attenuates depressive symptoms1,2, yielding much recent enthusiasm for the treatment of varied neuropsychiatric disorders and leading to the FDA approval of the (S)-ketamine stereoisomer for treatment-resistant depression3. Notwithstanding, our understanding of the mechanism of action of subanesthetic ketamine has remained elusive, calling for further investigation to better elucidate the mechanistic underpinnings of its therapeutic effects.
The therapeutic action of ketamine is commonly attributed to its non-competitive antagonism at glutamatergic N-methyl D-aspartate receptors (NMDAR)4,5, but our picture of its underlying mechanisms is incomplete6,7. Other experimental drugs acting via selective NMDAR antagonism or other downstream glutamate-mediated effects have provided minimal efficacy in clinical trials in psychiatry8, while NMDAR-independent pathways have also been suggested to mediate the antidepressant action of ketamine9. Recent clinical evidence suggests that pretreatment with naltrexone, a nonselective opioid receptor antagonist, attenuates the antidepressant effect of intravenous (i.v.) ketamine in humans10,11. However, subsequent studies challenged these findings12,13, warranting further investigation. Preclinical data also show diverging evidence: opioid receptor blockade suppressed the behavioral and neurophysiological responses to ketamine and its enantiomers in some studies14-17, while others reported no significant effects18. The potentially pivotal role of the opioid system in the antidepressant efficacy of ketamine raises concern for ketamine's abuse liability and potential for dependence17,19, as opioid signaling is thought to mediate the hedonic aspects of reward processing20 and the reinforcing effects of drugs of abuse21,22. This concern is particularly relevant in light of the ongoing opioid crisis23. In addition, there has been relatively limited assessment of other biological variables, including subject sex, that may explain the heterogeneity of clinical and preclinical findings regarding the potential opioid dependence of ketamine's therapeutic efficacy and adverse effects.
Here we sought to determine how opioid receptor blockade affects ketamine-evoked neural activity changes. We imaged neural responses to ketamine in awake-restrained male and female rats using functional ultrasound imaging (fUSI). This neuroimaging modality is based on neurovascular coupling and closely tracks neural activity by way of high-resolution whole-brain maps of cerebral blood volume (CBV)24-26. Notably, our awake restraint imaging paradigm provides an opportunity to assess ketamine action in the context of an acute stress model, as transient neurophysiological changes and cognitive behavioral adaptations have been shown to take place during and immediately after acute exposure to restraint stress in rodents27-29. Finally, we establish that ketamine's acute opioid-mediated effects on neural activity are reflected in physiologic changes at the synaptic level and in the expression of behavioral sensitization, and we investigate the molecular mechanisms underlying the observed opioid-dependent behavioral adaptations.
Functional ultrasound imaging of intravenous subanesthetic ketamine administration
We first determined that fUSI could resolve the acute effects of subanesthetic ketamine administration. We prepared male rats with a surgical craniotomy to allow ultrasound penetration and implanted a chronic polymeric prosthesis to enable repeated imaging (Fig. 1a). With the animals awake and restrained, we continuously recorded fUSI images at 2.5 mm rostral and 3.5 mm caudal to bregma (Fig. 1b). To avoid excessively large craniotomies, the two brain slices were imaged in different animals, and this factor was taken into account in the subsequent statistical analyses, as appropriate. Ketamine (10 mg/kg, i.v.) evoked a rapid and sustained increase (peak at 3-5 min; >50 min duration) in the CBV signal that extended over cortical and subcortical regions (Fig. 1c and Supplementary Movie 1). We then infused varied doses of ketamine (0, 1, 5, and 10 mg/kg, i.v.) to determine the dose-response relationship of the CBV changes. We segmented the CBV signals into regions of interest (ROIs) obtained by registering the relevant slices from the Paxinos & Watson rat brain atlas30 onto a power Doppler vascular template (Supplementary Fig. 1) and calculated the mean regional CBV time series (Fig. 1d, Supplementary Fig. 2). The time series presented a clear dose-response relationship, confirmed by statistical comparisons of the peak CBV (Fig. 1e) and area under the curve (AUC; Supplementary Fig. 2b). Importantly, neither the peak CBV nor the AUC was significantly correlated with the local vascularization level, as measured by the regional baseline power Doppler signal, suggesting that the recorded changes were independent of the intrinsic vascular anatomy (Supplementary Fig. 2c). These results demonstrate that fUSI is able to image the neural effects evoked by acute ketamine infusions with a dose-dependent response, and confirm previous findings using different imaging modalities16,31.
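A minimal sketch of the bookkeeping behind such summaries (the power Doppler trace below is synthetic; only the percent-change normalization, peak, and trapezoidal AUC computations are illustrated, not the study's actual pipeline):

```python
import numpy as np

# Hedged sketch: regional CBV responses expressed as percent change from the
# pre-injection baseline; peak CBV and AUC then summarize each response.
t = np.linspace(-5.0, 50.0, 221)                           # min, rel. to injection
pd_signal = 100 + 8 * np.exp(-(t - 4.0)**2 / 40.0) * (t > 0)  # power Doppler (a.u.)

baseline = pd_signal[t < 0].mean()
cbv_pct = 100.0 * (pd_signal - baseline) / baseline        # %CBV time series

post = t >= 0
peak_cbv = cbv_pct[post].max()
auc = np.trapz(cbv_pct[post], t[post])                     # %CBV x min

print(f"peak CBV: {peak_cbv:.1f} %  |  AUC: {auc:.1f} %*min")
```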
We then regressed the normalized ECoG power changes in each frequency band and the Cg1 CBV signal using a four-parameter gamma-distribution function (Fig. 2d, Supplementary Fig. 3c, d). The curve fitting was performed via a least-squares minimization routine, and the regressed β values were statistically compared between the ECoG spectral bands and the Cg1 CBV signal. There were significant differences between Cg1 CBV and the delta/theta (corrected P < 7.91E-07), alpha (P < 6.26E-07), and beta (P < 0.0176) bands (Fig. 2d), whereas no statistical significance was observed when comparing Cg1 CBV to gamma-band power (P > 0.0756). Comparisons of the least-squares residuals were in all cases not significant (Supplementary Fig. 3f; P > 0.11), indicating that the differences in the β values were caused by actual differences in the regressed time series rather than by variability in the goodness of fit. We also performed ECoG recordings and least-squares regressions with 1 mg/kg i.v. ketamine administration (Supplementary Fig. 3a, b, e). Although the regression did not produce reliable results for the delta/theta, alpha, and beta bands, as indicated by the high residuals (Supplementary Fig. 3f), this lower ketamine dose also showed a high degree of correlation between the gamma-band power and the Cg1 CBV time series (Supplementary Fig. 3a, b). Our observations are in agreement with a previous study reporting a high correlation of fUSI signals with gamma (30-90 Hz) and high gamma-band (110-170 Hz) local field potentials33. In addition, our results are further reinforced by prior evidence using functional MRI in humans and non-human primates34-36. Altogether, these results confirm that the acute responses to subanesthetic ketamine recorded by our pharmaco-fUSI modality are driven by neural activity changes and not necessarily by a non-specific cardiovascular effect.
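The exact four-parameter gamma function used in the study is not reproduced in this excerpt; the sketch below uses one common gamma-variate parameterization, f(t) = A·(t − t0)^α·exp(−(t − t0)/β), fitted to synthetic data purely for illustration (scipy is assumed):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: least-squares fit of a four-parameter gamma-variate model
# (parameterization assumed for illustration, not taken from the paper).
def gamma_response(t, A, t0, alpha, beta):
    dt = np.clip(t - t0, 0.0, None)          # response is zero before onset t0
    return A * dt**alpha * np.exp(-dt / beta)

t = np.linspace(0.0, 50.0, 251)              # minutes post-injection
rng = np.random.default_rng(0)
y = gamma_response(t, 1.5, 1.0, 1.2, 8.0) + 0.05 * rng.normal(size=t.size)

popt, _ = curve_fit(gamma_response, t, y, p0=[1.0, 0.5, 1.0, 5.0],
                    bounds=(0.0, np.inf))
A, t0, alpha, beta = popt
print(f"fitted beta (decay time constant): {beta:.2f} min")
```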
Opioid receptor blockade modulates ketamine responses in male, but not in female, rats

Following these initial methodologic and dose-response characterizations of using fUSI to study the effects of subanesthetic ketamine, we selected 10 mg/kg as the ketamine dose for subsequent experiments, following prior rodent studies of the affective effects of ketamine that show reliable behavioral efficacy with this dose9. To map the presence of region-specific opioid-mediated effects, we pretreated two groups of male and female rats with subcutaneous (s.c.) injections of either vehicle (VEH; saline) or naltrexone (NTX; 10 mg/kg) followed 10 min later by i.v. ketamine (KET; 10 mg/kg) or saline (Fig. 3a). The 10 mg/kg naltrexone dose yields near-complete mu opioid receptor occupancy in the mouse brain37 and blocked the effect of (S)-ketamine on acute locomotion16. Each rat was imaged three times under the treatment conditions VEH + KET, NTX + KET, and NTX + VEH in a three-arm crossover design. Treatment conditions were assigned in randomized order to control for possible effects of prior drug exposure, and we allowed a 7-day washout period between ketamine injections for full drug clearance.
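A minimal sketch of how such a randomized three-arm order can be generated per animal (subject IDs and the seed are hypothetical; the study's actual randomization procedure is not specified in this excerpt):

```python
import random

ARMS = ["VEH+KET", "NTX+KET", "NTX+VEH"]

def assign_orders(rat_ids, seed=42):
    """Return one randomized permutation of the three arms per rat."""
    rng = random.Random(seed)
    return {rat: rng.sample(ARMS, k=len(ARMS)) for rat in rat_ids}

# Hypothetical subjects; a 7-day washout separates consecutive sessions.
print(assign_orders([f"rat{i}" for i in range(1, 4)]))
```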
Functional maps contrasting the NTX + KET and VEH + KET groups revealed region-specific effects of naltrexone pretreatment (Fig. 3b, c; corrected P < 0.05). Specifically, naltrexone pretreatment decreased ketamine-induced activity in Cg1, the primary and secondary motor cortices (M1/2), the dorsal striatum (CPu), and the nucleus accumbens (NAc), and increased activity in the retrosplenial granular cortex (RSG), lateral habenula (LHb), and lateral posterior thalamic nucleus (LPLR) (Supplementary Movie 2). Interestingly, these effects were only present in male rats, whereas females showed only minor clusters of significant differences between the NTX + KET and VEH + KET treatment conditions (Fig. 3c). These sex-dependent responses were also evident in the regional CBV time series (Fig. 3d, e). The temporal dynamics of the naltrexone pretreatment effect were transient and region dependent. We observed a biphasic effect where group differences in Cg1, M1/2, CPu, NAcC, and the lateral (LPtA) and medial (MPtA) parietal association cortices were mostly limited to the 0-25 min interval, whereas the effect was relatively delayed in RSG, LHb, and LPLR (Supplementary Figs. 4, 5).
To further investigate the sex dependence of the naltrexone pretreatment effect, we analyzed intra-individual differences in peak CBV between the NTX + KET and VEH + KET treatment conditions. We observed a significant effect of sex (one-way ANOVA, F1,178 = 7.52, P = 0.007), with specific differences between male and female rats in Cg1 (two-tailed unpaired t-test; corrected P = 0.012), CPu (P = 0.047), and LPLR (P = 0.047) (Fig. 4b). Differences in NAcC were significant before multiple comparisons correction. Interestingly, when we compared intra-individual peak CBV differences between the NTX + KET and NTX + VEH groups, we observed a much weaker sex effect (one-way ANOVA, F1,178 = 3.22, P = 0.075) and no significant regional differences between males and females, suggesting that responses to ketamine were comparable in males and females when the opioid receptors were blocked (Supplementary Fig. 6b).
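An illustration of this analysis pattern (the data are synthetic; the specific multiple-comparisons procedure used in the study is not stated in this excerpt, so a Holm-Bonferroni correction is assumed here):

```python
import numpy as np
from scipy import stats

# Hedged sketch: per-region comparison of intra-individual peak-CBV
# differences (NTX+KET minus VEH+KET) between sexes, with Holm-Bonferroni
# correction standing in for the unspecified correction method.
rng = np.random.default_rng(1)
regions = ["Cg1", "CPu", "LPLR", "NAcC"]
male = {r: rng.normal(-2.0, 1.5, 9) for r in regions}      # n = 9 males
female = {r: rng.normal(0.0, 1.5, 9) for r in regions}     # n = 9 females

raw_p = np.array([stats.ttest_ind(male[r], female[r]).pvalue for r in regions])

# Holm-Bonferroni step-down correction
order = np.argsort(raw_p)
corrected = np.empty_like(raw_p)
m = len(raw_p)
running_max = 0.0
for rank, idx in enumerate(order):
    running_max = max(running_max, (m - rank) * raw_p[idx])
    corrected[idx] = min(1.0, running_max)

for region, p in zip(regions, corrected):
    print(f"{region}: corrected P = {p:.3f}")
```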
In a different cohort of male rats, we performed an orchiectomy (surgical removal of the gonads) to determine whether this sex-dependent effect was driven by endocrine factors rather than developmental sexual dimorphism of the brain38. Importantly, the effect of naltrexone pretreatment was completely blocked in orchiectomized males (bregma +2.5 mm slice only; treatment factor: F1,6 = 0.31, P = 0.6; Fig. 4c, d), suggesting a gating action of testosterone on the opioid-mediated response to ketamine.
To determine if these region- and sex-dependent effects were specific to ketamine or caused by variations in the response to naltrexone, we analyzed intra-individual mean CBV differences between the NTX + KET and VEH + KET treatment conditions during the pre-ketamine baseline period (Supplementary Fig. 7). We observed neither a significant effect of sex (one-way ANOVA, F1,178 = 1.83, P = 0.178), nor region-specific differences between sexes (two-tailed unpaired t-test, corrected P > 0.71), nor sex-specific differences between brain regions (two-tailed paired t-test, corrected P > 0.49), indicating that the sex dependence of the responses to ketamine was not caused by naltrexone-induced changes in the pre-ketamine baseline but resulted from a more complex pharmacological interaction. Importantly, naltrexone pretreatment produced no significant differences in the CBV changes evoked by MK-801, a more selective NMDAR antagonist, at a dose of either 0.1 or 0.25 mg/kg (i.v.) in male rats (bregma +2.5 mm slice only; treatment factor: F1,5 = 0.05, P = 0.83 (0.1 mg/kg) and F1,6 = 0.02, P = 0.9 (0.25 mg/kg); Fig. 3f, g, Supplementary Fig. 8), suggesting that these effects are specific to ketamine. Collectively, our fUSI findings observed during awake acute restraint stress indicate that opioid receptors mediate acute responses specific to subanesthetic ketamine in key brain regions implicated in the pathophysiology of depression and in the processing of reward (e.g., mPFC, NAc, LHb), and that this opioid-dependent effect is critically gated by the presence of male sex hormones.

[Figure 3 caption fragments: n = 9 male rats (KET groups); n = 6 male rats (MK-801 groups); data presented as mean ± SEM; (g) CBV time series in Cg1 and NAcC in male rats receiving MK-801 (n = 6; solid lines, means; shaded areas, SEM); source data are provided as a Source Data file; details on the statistical analyses are provided in Supplementary Table 1. Figure 4 caption fragments: Peak CBV difference (%); LHb = 0.87 (M), 0.17 (F); n = 9 male and 9 female rats; (b) peak CBV differences between the NTX + KET and VEH + KET treatments in individual rats, compared between males and females; one-way ANOVA for the sex factor.]
Naltrexone suppresses ketamine-induced expression of postsynaptic density protein in male rats
Next, we sought to determine if the opioid-mediated and sex-dependent effects observed in our acute fUSI recordings were reflected in physiological changes at the synaptic level. To this end, we used immunohistochemistry of the postsynaptic density protein PSD-95 in fixed brain slices (Fig. 5a) as an indicator of ketamine-induced cellular structural plasticity. Synapse loss in prefrontal cortical neurons has been identified as a putative neurobiological substrate of depression and other stress-related diseases, and ketamine reverses such synaptic deficits by restoring post-synaptic protein expression and functional spine density [39][40][41] . Male and female rats received 10 mg/kg ketamine intraperitoneally (i.p.), preceded 10 min earlier by naltrexone (10 mg/kg, s.c.) or vehicle. Subanesthetic ketamine increased the expression of PSD-95 in the mPFC of both male (two-tailed unpaired t-test; corrected P = 1.42E-05) and female (P = 0.0071) rats compared to the saline-injected controls (Fig. 5c, d). In agreement with our imaging findings, naltrexone pretreatment completely blocked ketamine's effect in male rats (P = 1.86E-05). Meanwhile, males and females receiving either naltrexone alone or naltrexone plus ketamine showed no significant differences compared to the saline-injected controls. Importantly, there were no significant differences in the nuclear DAPI staining, which served as a control.
Overall, these results indicate that the physiological changes at the synaptic level associated with subanesthetic ketamine administration are indeed significantly reduced by opioid receptor blockade, but only in male rats.
Ketamine-induced locomotor sensitization is opioid-mediated and sex-dependent
To achieve sustained remission, ketamine therapy typically requires repeated administration over the course of several weeks 42 . Repeated exposure to drugs of abuse in rodent models induces a progressive increase in locomotor behavior (i.e., locomotor sensitization) reflective of neuroadaptations of the mesolimbic dopamine system 43 . Motivated by our observation of altered signaling in mesolimbic structures in our fUSI experiments, we aimed to investigate opioid-mediated, sex-dependent effects on behavior in the context of repeated ketamine dosing. In a chronic open-field locomotor assay, we pretreated male and female rats with naltrexone (10 mg/kg, s.c.) or saline 10 min before administering subanesthetic ketamine (10 mg/kg, i.p.). Rats were habituated for 2 days with saline only, followed by 4 daily sessions with ketamine with or without naltrexone pretreatment (Fig. 6a). Repeated ketamine administration induced locomotor sensitization in both male and female rats (Fig. 6b-e). Importantly, naltrexone pretreatment completely blocked this effect in males. However, in female rats there was no such sustained blockade of ketamine-induced locomotor sensitization; we observed a significant reduction of locomotor activity at Day 3, which was fully reversed at Day 4 (Fig. 6c). Statistical analyses stratified by treatment showed a significant effect of sex (two-way mixed-effects ANOVA; F 1,14 = 6.22, P = 0.026), session (F 5,70 = 8.82, P = 1.59E-06), and interaction (F 5,70 = 3.14, P = 0.013) in rats pretreated with naltrexone, and a significant effect of session (F 2.85,39.9 = 29, P = 6.66E-10) in rats pretreated with saline before ketamine administration (Supplementary Table 3). Animals in the control group presented no significant effects. In summary, our results indicate that ketamine produced locomotor sensitization, a marker of mesolimbic dopaminergic adaptation, in both male and female rats, and pretreatment with the opioid receptor antagonist naltrexone completely blocked this effect in male rats only.
Chronic naltrexone upregulates mu opioid receptors in female rats
We next investigated the molecular mechanisms underlying the opioid-dependent behavioral adaptations induced by chronic dosing of ketamine and naltrexone in male rats. To this end, we performed ex vivo autoradiography with the selective mu opioid receptor (MOR) agonist [ 3 H]DAMGO, following our previous study showing significant modulation of MOR density in key brain regions with the (S)-ketamine isomer 17 . Male and female rats were pretreated with naltrexone (10 mg/kg, s.c.) or vehicle 10 min prior to ketamine (10 mg/kg, i.p.) or vehicle for four consecutive days to mirror our behaviorally relevant dosing protocol (Fig. 6). Twenty-four hours after the last injection the animals were euthanized, and brain slices were incubated with [ 3 H]DAMGO and imaged using a phosphor imager. We have previously shown that repeated (S)-ketamine infusions (20 mg/kg/day for 8 days) decreased MOR density in the mPFC, NAc, and thalamus in male and female rats 17 .
Curiously, here we did not observe a reduction in MOR density in either males or females following daily injections of 10 mg/kg racemic ketamine (two-tailed unpaired t-test, corrected P > 0.217) (Fig. 7). However, we found a statistically significant effect of sex in the NAc (two-way ANOVA; F 1,40 = 5.12, P = 0.029) (Fig. 7b), and post-hoc analysis in this region revealed statistically significant differences with naltrexone treatment in female rats (both in the NTX + KET vs VEH + KET and NTX + VEH vs VEH + VEH comparisons; two-tailed unpaired t-test, corrected P < 0.036). In addition, there were statistically significant differences between males and females in the NTX + VEH group in the NAc (corrected P < 0.027). Our results suggest that repeated naltrexone administration increased MOR density in the NAc selectively in female rats. This effect could potentially explain why ketamine-induced locomotor sensitization was not blocked by naltrexone pretreatment in females: while naltrexone fully blocked MORs in male rats, preventing behavioral sensitization, females had an increased opioid-system capacitance and upregulated MOR density in response to naltrexone exposure, and as a result, racemic ketamine could still actuate a sufficient number of receptors in female rats to elicit a sensitized locomotor effect. Importantly, although we did not find significantly decreased MOR density from 10 mg/kg racemic ketamine (equivalent to 5 mg/kg (S)-ketamine) administration compared to vehicle, we observed a mean reduction of 9.8% in [ 3 H]DAMGO binding in the NAc when we aggregated male and female rats. For comparison, in our previous study the binding potential in the NAc was reduced by 29.5% in male and female rats after 8 daily i.v. infusions of 20 mg/kg (S)-ketamine 17 . While it is difficult to directly compare these results due to the different dosing regimens, it is worth noting that in our prior study the cumulative dose of (S)-ketamine was 8x higher (160 mg/kg vs 20 mg/kg, assuming 50% (S)-ketamine isomer in the racemic formulation). Therefore, considering that (S)- but not (R)-ketamine binds MORs 44 , it was expected that we would observe a lesser reduction in MOR density in this current experiment.

Fig. 6 (caption fragment) | a Male and female rats received an s.c. injection of either vehicle (VEH; saline) or naltrexone (NTX; 10 mg/kg) followed by ketamine (KET; 10 mg/kg, i.p.) or vehicle 10 min later on each of four days. All animals were habituated for 2 days (Hab1-2) with vehicle only, followed by 4 daily sessions with ketamine with or without naltrexone pretreatment (D1-4). b Total distance traveled by male rats normalized to the mean of Hab1-2. Two-way mixed-effects ANOVA; between-subjects factor of treatment, F 2,17 = 5.24, P = 0.017; within-subjects factor of session.
Discussion
Our results indicate that opioid receptors mediate, at least partly, the neural activity changes elicited by a subanesthetic dose of racemic ketamine as measured by pharmaco-fUSI in the acute restraint rat model of stress. Although ketamine's mechanism of action in this context is not fully clear, one prevailing hypothesis is that subanesthetic ketamine causes NMDAR-mediated inhibition of fast-spiking gamma aminobutyric acid (GABA)-ergic interneurons in the mPFC, with a consequent glutamate surge and disinhibition of pyramidal cells, resulting in a net cortical excitatory effect 45,46 . NMDAR-independent pathways have also been reported 9 , along with inhibition of different types of cortical interneurons 47 . We demonstrate here that opioid receptors also play a critical role in subanesthetic ketamine's action in the mPFC and other related cortical and subcortical regions, and importantly, that these opioid-mediated effects are sex-dependent.
In the Cg1 sub-region of the mPFC, we show that ketamine-evoked fUSI signals closely track ECoG power changes in the gamma band, confirming the neural basis of these fUSI signals. Gamma-band oscillations are regulated by NMDAR signaling in fast-spiking interneurons 48 . These neurons send wide-spread inhibitory projections to excitatory pyramidal cells and are important for cognitive functions such as attention, learning, and memory 49 . Acute ketamine administration leads to robust, dose-dependent gamma-band power enhancement throughout the neocortex and subcortical regions 32,50 , which coheres with our fUSI observations. Notably, a high correlation between fUSI signals and gamma (30-90 Hz) and high gamma-band (110-170 Hz) local field potentials has been previously reported in the visual cortex and hippocampus of mice 33 .
Ketamine's opioid-mediated responses were observed in the mPFC, NAc, and LHb, neural structures implicated in the processing of reward and in cognitive functions relevant for the pathophysiology of depression and other psychiatric diseases [51][52][53][54] . Notably, many of these regions have been previously reported to show increased early gene (cFos) expression upon ketamine and psilocybin administration 55 . Our results are also in agreement with prior reports of ketamine directly recruiting opioid receptors in the mPFC 16 and LHb 14 . Opioid signaling in the mPFC is consistent with ketamine's antinociceptive and analgesic action 56 , as this region executes descending pain control via functional connections with the periaqueductal gray 57 , and activation of opioid receptors decreases LHb activity 58 . Opioid-mediated signaling in the NAc, a region central to reward processing 59,60 and closely linked to the anhedonic symptoms of depression 52 , may originate either from direct opioidergic action at this site 61 or via downstream projections from the mPFC 62 . Our recent study showed that (S)-ketamine occupies mu opioid receptors in the NAc and elicits opioid-mediated activation of this region 17 . Racemic ketamine was also found to induce dopamine transients in the NAc similar to those evoked by cocaine 63 , although it was concluded to have limited addiction potential. Interestingly, the opioid-mediated effects that we observed were critically dependent on the presence of male sex hormones, as modulation of ketamine-evoked responses by opioid blockade was not evident in female subjects and was fully reversed by surgical removal of the male gonads. These acute observations were reflected in downstream changes in postsynaptic density in the mPFC, a putative biomarker of the antidepressant action of ketamine 5 . Importantly, repeated dosing of subanesthetic ketamine induced behavioral sensitization, a robust measure of the neurobehavioral adaptations caused by repeated exposure to drugs with abuse liability 64 , in both male and female rats. Blocking the opioid receptors also suppressed this behavioral effect, but only in male subjects. Finally, we showed that the lack of ketamine-evoked behavioral adaptations in females may be explained by a compensatory effect of repeated naltrexone infusions, which increased MOR density in the NAc in female rats only. Previous studies reported upregulation of MORs following chronic administration of naltrexone or naloxone 65,66 . Here we add to this knowledge by highlighting a clear sex dependence of these effects. Synthesizing our current results with those in the literature, including the differential pharmacokinetics of ketamine in male and female humans and rodents 67 , our findings suggest that at doses that we observed to be behaviorally relevant, ketamine does indeed act partly through opioid pathways to induce varied physiologic, cellular, and behavioral-level effects, but that female rats have a greater capacity to upregulate opioid pathways and activate a compensatory mechanism in the setting of opioid blockade. We posit that this differential capacitance for modulating opioid signaling between varied subject populations may account for the heterogeneity of results across clinical trials that have attempted to investigate the role of the opioid pathway in the affective effects of subanesthetic ketamine [10][11][12][13] .
It is important to note that these results from rats would need to be explicitly verified in clinical trials before drawing meaningful conclusions with respect to clinical treatments using subanesthetic ketamine. With regard to a sex dependence of the effects of ketamine, proper comparisons of ketamine's therapeutic and adverse effects in male and female patients are lacking, possibly due to the limited statistical power for such subgroup analysis in current clinical trials, although sex differences have previously been reported at an anecdotal level 68 . Sex-dependent ketamine and norketamine pharmacokinetics have been observed 67 , and diverging correlations between treatment outcomes and depression-related inflammatory cytokines have been reported in male and female subjects 69 . Moreover, preclinical investigations showed sex differences in the pharmacokinetics of ketamine and its metabolites, as well as in behavioral and physiological readouts 9,67,70 . Our results elucidate and emphasize both the sex-dependent and opioid-based mechanisms underlying the actions of subanesthetic ketamine. The timeliness of such investigation is made more urgent by the current widespread administration of ketamine, in both its racemic and isomer-specific forms, in patients with treatment-resistant depression and other affective disorders. Indeed, it is possible that the current heterogeneity in the clinical and preclinical findings and the associated controversy surrounding the potential opioid dependence of ketamine's clinical effects could reflect the existence of explanatory demographic-based biological variables, such as sex. As these current results regarding a sex dependence in rats were revealed only with opioid blockade, they underscore the need for future clinical trials of the interaction of ketamine and opioids to be sufficiently powered to detect sex-based differences. To this end, we point out that in one of the recent studies reporting no effects of opioid receptor blockade on ketamine's antidepressant action 13 , the single subject receiving naltrexone concurrently with ketamine infusions was a female participant. While several factors of that study prevent definitive deductions, including its limited sample size and its investigation of a population of substance abuse and pain patients with likely differential status from the general population in terms of opioid receptor density, our results appear to be in agreement with these findings and may help explain the conflicting observations in the clinical literature.
There are several limitations to our study. First, as we have noted, these results from rodents would need explicit verification in controlled clinical trials to draw meaningful conclusions for clinical treatments using ketamine. Second, subanesthetic ketamine causes transient cardiovascular effects in both humans and rats 3,71 , which may confound the interpretation of our fUSI results. However, the dynamics of ketamine's activity and the effect of naltrexone pretreatment varied substantially between brain regions and throughout the scanning time, with opposing effects in different brain regions, suggesting that a central cardiovascular modulatory effect of ketamine is unlikely to account for our brain region-specific results. In addition, we show that ketamine-evoked CBV signals in the Cg1 sub-region of the mPFC closely tracked electrophysiological changes in the gamma band measured over this region, supporting that our fUSI results correlate more to changes in neural activity than to a central cardiovascular modulation. Also, we did not see a correlation of the imaging results with the baseline vascular density of each region. Moreover, previous studies also showed similar patterns of neural activity with complementary modalities including pharmacological magnetic resonance imaging 72 and [ 18 F]-fluorodeoxyglucose positron emission tomography 16 . Therefore, we consider that our observed fUSI findings were unlikely to reflect a ketamine-induced global cardiovascular modulation. Third, in female rats we did not control for the estrous cycle at the time of injection 73 . However, the physiological effects of ketamine have not been shown to be dependent on the estrous phase (diestrus or proestrus) 40 , and surgical ovariectomy did not alter the plasma levels of ketamine and its metabolites in mice 67 . Moreover, our randomized study design should mitigate any confounds related to the estrous phase, as female rats in each treatment group were likely imaged during different phases of the estrous cycle.
In summary, our results establish that opioid blockade can modulate neural activity, cellular physiologic, and behavioral changes induced by subanesthetic ketamine, but only in male rats. Therefore, it is imperative that future clinical trials focus on sex as a biological variable in assessing the affective responses to subanesthetic ketamine, including its antidepressant efficacy, especially with respect to potential abuse liability or withdrawal-type responses upon discontinuation 74,75 . Excitingly, our regional mapping may inform and guide future studies with ultrasound-mediated interventions for focal delivery of ketamine 76,77 .
Methods

Animals
All animal procedures were approved by the Institutional Animal Care and Use Committee at Stanford University and at the National Institute on Drug Abuse. Male and female Long Evans rats (Charles River Laboratories) were used in the experiments. All animals were 9-10 weeks old and weighed 278 ± 40 g (mean ± s.d.) when they entered the study. Animals had ad libitum access to water and food for the entire duration of the experimental protocols. Rats were housed in a temperature-controlled vivarium on a 12-h light-dark cycle (lights on at 7 AM; lights off at 7 PM) and were acclimated to their home cage for one week before experimentation. In the case of surgical procedures, the animals were singly housed following the surgery.
Drugs
Naltrexone hydrochloride (Sigma for the autoradiography experiment; Tocris Bioscience for all the other experiments) was suspended in 0.9% sterile saline to obtain a 10 mg/mL solution. MK-801 (Tocris Bioscience) was suspended in 0.9% sterile saline to obtain 0.1 and 0.25 mg/mL solutions. Ketamine hydrochloride (Covertus for the autoradiography experiment; Dechra Veterinary Products for all the other experiments) was diluted in 0.9% sterile saline to obtain 1, 5, and 10 mg/mL solutions. All drugs were administered in a bolus injected volume of 1 mL/kg.
Surgical procedures
Craniotomy. Rats received a bilateral surgical craniotomy and chronic prosthesis implantation as previously reported 26 . Briefly, animals were anesthetized with 3.5% isoflurane in 100% oxygen, and anesthesia was maintained with 1.5% isoflurane. The incision region was prepared by shaving the skin using a depilatory cream. Rats were then placed in a stereotaxic frame for head fixation and orientation. Body temperature was maintained at 37 °C by a warming pad with rectal probe monitoring (RightTemp Jr.; Kent Scientific). Heart rate and arterial oxygen saturation were monitored by a pulse oximeter (MouseStat Jr.; Kent Scientific). An anti-inflammatory agent (dexamethasone, 1 mg/kg; i.p.) was administered to prevent brain swelling and inflammation. The incision site was disinfected by applying alternating povidone-iodine and 75% EtOH, and a skin incision was performed. The bone was cleaned with 75% EtOH and a window (5 mm AP × 10 mm ML, centered at bregma +2.5 mm) was marked on the skull with a surgical pen. The bone around the window was pretreated using a bonding agent (iBOND Total Etch; Kulzer). We then cut parietal and frontal bone fragments using a handheld high-speed drill with a 0.7 mm drill bit (Fine Science Tools). We gently removed the bone flaps, paying attention to avoid damaging the dura mater, and sealed a 125-μm polymethylpentene film covering the cranial window with dental cement (Tetric EvoFlow; Ivoclar Vivadent). The space between the dura and the prosthesis was filled with 0.9% sterile saline. A dose of 0.5 mg/kg Buprenex SR was administered subcutaneously for analgesia. The animals were allowed to recover for 1 week before the first imaging session.
Electrode implantation. Male rats were prepared for surgery as described above. After the skull was exposed, cleaned, and pretreated with bonding agent, we drilled burr holes (0.7-mm drill bit; Fine Science Tools) using a handheld high-speed drill for electrode implantation. A PFA-coated 100-μm stainless steel wire (A-M Systems) was used to create electrical contacts with the cerebral cortex at AP 2.5 mm, ML 0 mm (Cg1); AP −10 mm, ML 0 mm (reference); and AP −10 mm, ML −2.5 mm (ground). Dental cement (Tetric EvoFlow; Ivoclar Vivadent) was used to secure the electrodes. A dose of 0.5 mg/kg Buprenex SR was administered subcutaneously for analgesia, and the animals were allowed to recover for 1 week before the recording session.
Orchiectomy. To assess the effect of sex hormones, adult male rats were orchiectomized following previously published protocols 78 . The animals were anesthetized with 3.5% isoflurane in 100% oxygen, and anesthesia was maintained with 1.5% isoflurane. The incision site was prepared by shaving and disinfecting the skin as described above. An incision of about 10 mm was made on the ventral side of the scrotum along the midline. The testicular content was exposed, the vas deferens and blood vessels were clamped to prevent bleeding, and the testicles were removed. The incision was then closed with monofilament sutures. We waited for 10 days before experimentation to allow for recovery and testosterone washout.
Pharmaco-functional ultrasound imaging
Ultrasound system and power Doppler processing. A Vantage 256 research scanner (Verasonics Inc.) was connected to a linear array transducer (Vermon; 128 elements, lateral pitch of 100 μm) operating at a 15-MHz center frequency. The imaging probe was housed in a custom 3-D printed holder mounted on a motorized positioning system. For acoustic coupling, we used ultrasound gel that was centrifuged to remove air bubbles. The imaging sequence consisted of five tilted plane waves (−6°, −3°, 0°, 3°, 6°) emitted with a pulse repetition frequency of 19 kHz. Two plane waves were averaged for each angle to increase the signal-to-noise ratio. We acquired data for 200 compound frames at a rate of 1 kHz, and the frames were beamformed in a regular grid of pixels with in-plane resolution of 100 μm × 100 μm. Beamforming was performed in real time on an NVIDIA Titan RTX using a GPU beamformer 79 .
Sequences of 200 compound ultrasound frames were processed offline in MATLAB (MathWorks, Inc.) for clutter filtration and power Doppler computation. To eliminate the Doppler signal component originating from stationary tissue, we used a 5th-order temporal high-pass Butterworth filter with a cutoff frequency of 40 Hz and a singular value decomposition filter that eliminates the first singular value 80 . The power Doppler intensity at each pixel was calculated by squaring and averaging the filtered Doppler signals. The final power Doppler frame rate was 1 frame/s.
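For readers who want to prototype this step outside MATLAB, the following is a minimal Python sketch of the clutter filtering and power Doppler computation described above. The array layout and function name are illustrative, not the authors' code; `iq` stands for one (nz, nx, 200) ensemble of beamformed compound frames sampled at 1 kHz.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def power_doppler(iq, fs=1000.0, cutoff=40.0, n_svd=1):
    """Clutter-filter one fUSI ensemble and return a power Doppler image."""
    nz, nx, nt = iq.shape
    # 5th-order temporal high-pass Butterworth filter (40 Hz cutoff)
    b, a = butter(5, cutoff / (fs / 2), btype="high")
    filt = filtfilt(b, a, iq, axis=-1)
    # SVD clutter filter: zero the first singular component (tissue clutter)
    casorati = filt.reshape(nz * nx, nt)
    u, s, vh = np.linalg.svd(casorati, full_matrices=False)
    s[:n_svd] = 0.0
    casorati = (u * s) @ vh
    # Power Doppler: square (use |.|^2 for complex IQ data) and average over time
    return np.mean(np.abs(casorati) ** 2, axis=-1).reshape(nz, nx)
```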
Imaging session. At the beginning of each imaging session, rats were briefly anesthetized with isoflurane and a catheter was placed in the tail vein for vascular access. While under anesthesia, animals were placed in a plastic restraint cone (Stoelting Co.) and positioned in a custom head-restraining apparatus 81 . Oxygen was flowed through the nose cone to prevent hypoxia. The ultrasound probe was positioned over the slice of interest. The relevant brain atlas slice was plotted overlaid on the real-time power Doppler images to facilitate accurate probe positioning based on vascular landmarks (Supplementary Fig. 1). With the animal in the imaging apparatus, we waited for 30-45 min before data acquisition to allow for complete isoflurane clearance. An s.c. injection of naltrexone or vehicle was performed, followed by an i.v. injection of drug (ketamine or MK-801) or vehicle after 10 min. After the i.v. injection, the catheter was flushed with 200 µL of sterile saline. We acquired data continuously for 50 min following drug administration.
Functional ultrasound data pre-processing. To prevent motion artifacts in the processed CBV signals, translational and rotational movements were corrected by applying a motion correction algorithm to the image time series (Supplementary Fig. 1). For each acquisition, a power Doppler template was calculated via median filtering of the first 500 images. Then, all power Doppler frames from the same acquisition were registered to the template using a rigid transformation that included rotations, translations, and cubic interpolations. A filter was used to remove registered data frames affected by excessive motion or other artifacts. This filter was adapted from previously published code 82 . Each power Doppler dataset was then manually registered to the relevant slice of the Paxinos & Watson rat brain atlas 30 (at bregma +2.5 mm or bregma −3.5 mm).
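A simplified Python version of the registration step is sketched below. It handles translation only, via phase cross-correlation (the study's full rigid registration also includes rotations), and the frame layout and names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def motion_correct(frames):
    """frames: (nt, nz, nx) power Doppler stack from one acquisition."""
    # Template: pixel-wise median over the first 500 frames
    template = np.median(frames[:500], axis=0)
    registered = np.empty_like(frames)
    for i, f in enumerate(frames):
        # Estimate the (dz, dx) displacement of this frame vs the template
        disp, _, _ = phase_cross_correlation(template, f, upsample_factor=10)
        registered[i] = shift(f, disp, order=3)  # apply with cubic interpolation
    return registered
```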
Cerebrovascular time series. The pixel-wise relative CBV signal was calculated as the normalized difference from a baseline (i.e., ΔCBV/CBV = (CBVt − CBV0) / CBV0). For each acquisition, the baseline was calculated by averaging 10 min of power Doppler data immediately before drug administration. The regional time series were computed by spatially averaging the pixel ΔCBV/CBV signals in the relevant segmented ROIs in each brain slice (Fig. 1a). Time series were time-locked to the time of ketamine administration.
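As a concrete illustration, the ΔCBV/CBV computation reduces to a few lines of array arithmetic. This minimal sketch assumes `frames` is a (nt, nz, nx) motion-corrected power Doppler stack at 1 frame/s and `roi_mask` a boolean ROI mask; the names are illustrative.

```python
import numpy as np

def cbv_timeseries(frames, roi_mask, inj_frame, baseline_len=600):
    # Baseline: mean of the 10 min (600 frames at 1 frame/s) before injection
    baseline = frames[inj_frame - baseline_len:inj_frame].mean(axis=0)
    dcbv = (frames - baseline) / baseline    # pixel-wise (CBV_t - CBV_0) / CBV_0
    return dcbv[:, roi_mask].mean(axis=1)    # spatial average over the ROI
```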
Functional maps. To assess the effect of ketamine administration and naltrexone pretreatment, we used an approach similar to direct pharmaco-fMRI 72 , in which pixel-wise statistical inference is used to analyze group-level differences in peak CBV signal. For each pre-processed power Doppler acquisition, an image was created by calculating the temporal CBV peak at each spatial location. Peak CBV images were registered to the template atlas space by performing a rigid transformation, and t scores were calculated for the contrasted groups (NTX + KET vs VEH + KET; two-tailed paired t-test). Thresholded t scores were corrected for multiple comparisons across each slice using a cluster-size threshold of 34 contiguous pixels. The threshold was determined via Monte Carlo simulations using the 3dClustSim program of the AFNI library 83 to obtain an overall cluster P < 0.05, family-wise error rate corrected. Color-coded functional maps were displayed overlaid on a power Doppler template to enable a visual comparison of the analyzed groups.
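As a rough sketch of this pixel-wise contrast, the Python fragment below reproduces the paired t-test and the cluster-size thresholding step; the 34-pixel threshold itself comes from the 3dClustSim Monte Carlo simulation described above, and the array names are illustrative.

```python
import numpy as np
from scipy.ndimage import label
from scipy.stats import ttest_rel, t as t_dist

def functional_map(peak_a, peak_b, alpha=0.05, min_cluster=34):
    """peak_a, peak_b: (n_rats, nz, nx) peak-CBV images for the two conditions."""
    n = peak_a.shape[0]
    tmap, _ = ttest_rel(peak_a, peak_b, axis=0)          # pixel-wise paired t-test
    sig = np.abs(tmap) > t_dist.ppf(1 - alpha / 2, df=n - 1)
    labels, n_clusters = label(sig)                       # connected clusters
    for k in range(1, n_clusters + 1):
        if np.sum(labels == k) < min_cluster:             # drop sub-threshold clusters
            sig[labels == k] = False
    return np.where(sig, tmap, np.nan)                    # masked t-score map
```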
Electrocorticography
Recording. At the beginning of the recording session, rats were briefly anesthetized with isoflurane and a catheter was placed in the tail vein for vascular access. While under anesthesia, animals were placed in a plastic restraint cone (Stoelting Co.) and positioned in a custom head-restraining apparatus 81 . Oxygen was flowed through the nose cone to prevent hypoxia. Electrocorticography recording was performed with an 8 Channel Cyton Biosensing Board (OpenBCI) using the OpenBCI GUI at a sampling frequency of 500 Hz. With the animal in the restraint, we waited for 30-45 min before data acquisition to allow for complete isoflurane clearance. After a baseline acquisition (10 min), an i.v. injection of ketamine (10 mg/kg or 1 mg/kg) was performed, and the catheter was flushed with 200 µL of sterile saline. We acquired data continuously for 50 min following ketamine administration.
Processing. Raw ECoG traces were processed using a 5th-order Butterworth filter with cutoff frequencies of 1-100 Hz. A short-time Fourier transform was computed in 1-s non-overlapping temporal segments, and the power of the resulting spectrogram was calculated. The spectral power was then averaged in each band (delta/theta: 1-8 Hz; alpha: 8-12 Hz; beta: 12-30 Hz; gamma: 30-80 Hz) and normalized to the mean power of the baseline period (10 min pre-ketamine) to compute the time series. The time series were time-locked to the time of ketamine administration, and a median temporal filter with a 15-s kernel was applied for smoothing.
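The band-power pipeline translates directly into SciPy. The sketch below assumes `ecog` is a single 500 Hz trace with a 10-min baseline; it is illustrative rather than the authors' MATLAB code.

```python
import numpy as np
from scipy.signal import butter, filtfilt, stft, medfilt

FS = 500
BANDS = {"delta/theta": (1, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 80)}

def band_power(ecog, baseline_s=600):
    # 5th-order Butterworth band-pass, 1-100 Hz
    b, a = butter(5, [1 / (FS / 2), 100 / (FS / 2)], btype="band")
    x = filtfilt(b, a, ecog)
    # Short-time Fourier transform in 1-s non-overlapping segments
    f, t, z = stft(x, fs=FS, nperseg=FS, noverlap=0)
    p = np.abs(z) ** 2
    out = {}
    for name, (lo, hi) in BANDS.items():
        band = p[(f >= lo) & (f < hi)].mean(axis=0)
        band /= band[:baseline_s].mean()              # baseline normalization
        out[name] = medfilt(band, kernel_size=15)     # 15-s median smoothing
    return t, out
```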
We regressed the ECoG time series in each spectral band and the Cg1 CBV signal using a single-gamma distribution function (Fig. 2c) with four β parameters. The fitting was performed using the 'lsqcurvefit' function in MATLAB and iteratively minimized the sum of the squared residuals between the target signal and the fitted curve. The initial β values were (2, 10, 0.5, 0). Stopping criteria were a gradient step tolerance and function tolerance of 1E-15, or a maximum of 1E4 function evaluations.
The regressed β values were compared between the ECoG bands and the Cg1 CBV time series. A time-delay parameter (β4) was included to account for potential uncertainties in the ketamine injection time and to improve the goodness of fit. This parameter was not included in the statistical analysis. All filtering and regression were performed in MATLAB using custom-built scripts.
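In Python, scipy.optimize.curve_fit plays the role of MATLAB's lsqcurvefit. The exact gamma parametrization is not spelled out in the text, so the form below (shape β1, time scale β2, amplitude β3, onset delay β4) is one plausible choice consistent with the stated initial values (2, 10, 0.5, 0).

```python
import numpy as np
from scipy.optimize import curve_fit

def single_gamma(t, b1, b2, b3, b4):
    # Gamma-shaped response with onset delay b4; zero before onset
    ts = np.clip(t - b4, 0.0, None)
    return b3 * (ts / b2) ** b1 * np.exp(-ts / b2)

# `signal` would be one normalized band-power (or Cg1 CBV) time series
# sampled once per second over the recording:
# t = np.arange(len(signal), dtype=float)
# betas, _ = curve_fit(single_gamma, t, signal, p0=(2, 10, 0.5, 0), maxfev=10000)
```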
Postsynaptic density protein PSD-95
Drug administration and immunohistochemistry. Rats were administered an s.c. injection of 10 mg/kg naltrexone or vehicle. After 10 min, an i.p. injection of 10 mg/kg ketamine or vehicle was performed, and the animals were returned to their home cage. At 24 h post-ketamine, the animals were anesthetized with isoflurane (5%) and transcardially perfused with 1x phosphate-buffered saline (PBS) followed by 4% paraformaldehyde (PFA) diluted in PBS. Brains were extracted and fixed overnight in 4% PFA, then subsequently washed in PBS and frozen in embedding medium. Coronal sections of 40 µm thickness were cut on a CM1800 Cryostat (Leica Microsystems), transferred to tissue storage solution (30% sucrose and 30% ethylene glycol in 0.1 M PB), and stored at −20 °C until immunohistochemical processing. Four tissue sections per rat (from bregma +2.7 to +1.2 mm; one 40 µm section every 400 µm) were selected for PSD-95 and DAPI labeling. Floating sections were rinsed with PBS, then blocked with 4% normal goat serum and 0.3% Triton-X 100 diluted in PBS. Sections were then incubated overnight at 4 °C in primary antibody, rabbit monoclonal anti-PSD-95 (ab238135; Abcam). Following incubation, sections were rinsed in PBS and incubated in secondary antibody, goat anti-rabbit Alexa Fluor 555 (Invitrogen), at 1:500 for 2 h. Sections were then mounted on Superfrost Plus glass slides (VWR), air-dried in the dark, and cover-slipped with hard-set mounting medium containing DAPI (Vector Labs).
Microscopy and image analysis. Images of PSD-95-stained sections were acquired on a Keyence BZ-X800 fluorescence microscope (Keyence Corp.). Acquisition settings remained strictly constant between all images acquired at the same magnification. Specific ROIs were chosen to sample the mPFC at approximately the infralimbic, prelimbic, and cingulate area 1 subregions. High-resolution z-stacks of each ROI were acquired at 40× magnification with a step size of 0.4 µm and a total depth of 6 µm. Each z-stack image set was merged and analyzed with BZ-X Advanced Analysis Software (Keyence Corp.). Signals above the thresholded background were used for manual ROI segmentation to calculate the mean fluorescent signal intensity of each ROI, averaged across the four sections collected per animal. Mean fluorescent intensity is reported in arbitrary units.
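As an illustration of this quantification step, a minimal Python equivalent of the thresholding and intensity measurement might look as follows. The maximum projection and the percentile-based background threshold are assumptions, since the text does not specify how the BZ-X software merges stacks or sets the threshold.

```python
import numpy as np

def psd95_intensity(zstack, bg_percentile=90):
    """zstack: (n_slices, h, w) fluorescence array from one 40x ROI."""
    merged = zstack.max(axis=0)                   # merge the z-stack (max projection)
    thr = np.percentile(merged, bg_percentile)    # illustrative background threshold
    signal = merged > thr                         # pixels above background
    return merged[signal].mean()                  # mean intensity, arbitrary units

# Average the per-ROI values over the four sections per animal before
# comparing treatment groups, as described above.
```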
Locomotor sensitization
All behavioral tests were performed in an environmentally controlled room. Open-field locomotor activity was recorded in a custom-built white Plexiglas apparatus (90 cm × 90 cm × 40 cm) divided into four equal compartments. Videos were collected for batches of 4 animals using an overhead camera placed at the center of the field. Animals in each batch were randomized for sex and treatment group. Prior to the behavioral tests, rats were handled for 3 days to acclimate them to the experimenter and reduce stress. Locomotor activity was then recorded for a total of 6 days. On the first two habituation days (HAB1/2), rats received an s.c. injection of vehicle and were then returned to their home cage. After 10 min, rats received an i.p. injection of vehicle and were immediately placed at the center of the arena, where they were allowed to freely explore for 20 min while locomotor activity was recorded. On the following 4 days (D1/4), rats were administered an s.c. injection of vehicle or naltrexone (10 mg/kg) and returned to their home cage. After 10 min, rats received an i.p. injection of ketamine (10 mg/kg) and were placed at the center of the arena while locomotion was recorded. Animals in the control group (VEH + VEH) continued to receive vehicle injections for the entire duration of the experiment. The compartments were thoroughly cleaned with Virkon between each recording session to control for scent-related confounds. White noise (65 dB) was played during the sessions to attenuate external noise. Both a male (TDI) and a female (SNE) experimenter conducted the tests to control for any confounds introduced by the experimenter's sex 84 . All behavioral tests were performed at the end of the light cycle, between 4:00 PM and 7:00 PM. The videos were analyzed in ToxTrac 85 to track the instantaneous position of each animal's center and quantify distance traveled. During habituation, female rats showed higher locomotion than males (two-sided unpaired t-test, P = 0.0006); therefore, we normalized the distance traveled to the habituation baseline to isolate the effect of ketamine.
[ 3 H]DAMGO autoradiography

Rats were pretreated with naltrexone (10 mg/kg, s.c.) or vehicle 10 min prior to treatment with ketamine (10 mg/kg, i.p.) or vehicle for four consecutive days. Twenty-four hours after the last treatment, rats were euthanized; the brains were flash frozen and stored at −80 °C until they could be sectioned (20 µm) on a cryostat (Leica) and thaw-mounted on ethanol-cleaned glass slides. Slides were pre-incubated in 50 mM Tris-HCl buffer for 10 min at room temperature. The pre-incubation buffer was removed and the slides were placed in incubation buffer containing 5 nM [ 3 H]DAMGO (46 Ci/mmol, NIDA Drug Supply) for 45 min at room temperature (total binding). For non-specific binding, non-tritiated DAMGO (10 µM) was also added. The sections were then washed with two 30-s washes in the Tris buffer. Finally, slides were dipped in ice-cold distilled water to remove salts. After exposure to the radioligand and washing, slides were allowed to dry and were then placed into a Hypercassette covered by a BAS-TR2025 phosphor screen (FujiFilm; Cytiva). The slides were exposed to the phosphor screen for 12 days and then imaged using a phosphor imager (Typhoon FLA 7000; GE Healthcare). The digitized images were calibrated using 14 C standard slides (American Radiolabeled Chemicals). ROIs were hand-drawn based on anatomical landmarks and radioactivity was quantified using ImageJ (NIH). The activity in 4 different sections was averaged per animal and brain region.
General statistical analysis
Rats were randomly assigned to treatment conditions. When within-subjects factors were present in the ANOVA, Mauchly's test for sphericity was performed to determine whether the sphericity assumption was satisfied. In cases where the assumption was violated, we used a Greenhouse-Geisser adjustment to the degrees of freedom. Pairwise post-hoc comparisons were performed in case of significant ANOVA effects. In the pairwise tests, multiple comparisons were controlled using Benjamini-Hochberg false-discovery-rate (FDR) correction (α = 0.05). All comparisons were two-tailed. We calculated effect sizes using Hedges' g. Statistical tests, sample sizes n, corrected P values, and effect sizes g are reported for each analysis in the text and figure captions. All statistical analyses were performed using custom scripts in RStudio and MATLAB.
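For reference, the post-hoc workflow (two-tailed tests, Benjamini-Hochberg FDR at α = 0.05, Hedges' g) can be sketched in Python as below; scipy.stats.false_discovery_control requires SciPy 1.11 or later, and the data layout is an illustrative assumption.

```python
import numpy as np
from scipy.stats import ttest_ind, false_discovery_control

def hedges_g(a, b):
    na, nb = len(a), len(b)
    sp = np.sqrt(((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1))
                 / (na + nb - 2))                      # pooled standard deviation
    g = (np.mean(a) - np.mean(b)) / sp
    return g * (1 - 3 / (4 * (na + nb) - 9))           # small-sample correction

def posthoc(groups):
    """groups: dict mapping region name -> (males, females) value arrays."""
    names = list(groups)
    p = [ttest_ind(*groups[r]).pvalue for r in names]  # two-tailed by default
    p_adj = false_discovery_control(p)                 # Benjamini-Hochberg FDR
    return {r: (p_adj[i], hedges_g(*groups[r])) for i, r in enumerate(names)}
```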
Fig. 1 | Functional ultrasound imaging of intravenous ketamine administration. a Schematic representation of the imaging setup. A surgical craniotomy enables ultrasound penetration, and an implanted chronic prosthesis allows imaging over repeated sessions. Drawing created in BioRender. b Coronal slices of the rat brain were imaged at bregma +2.5 mm and bregma −3.5 mm. The segmented regions of interest (ROIs) are highlighted on the Paxinos & Watson rat brain atlas 30 . c Sequence of cerebral blood volume (CBV) coronal maps at bregma +2.5 mm and bregma −3.5 mm following administration of 10 mg/kg i.v. ketamine. The pixel intensity shows the CBV signals as a normalized difference from a pre-injection baseline (10 min). The time axis was zeroed at the time of ketamine injection. d The coronal maps were segmented and the CBV signals were averaged in the relevant ROIs. The plots show CBV time series in response to increasing doses of i.v. ketamine. Solid lines represent the mean values and shaded areas are SEM from n = 9 rats/group (10 and 5 mg/kg) or n = 8 rats/group (1 and 0 mg/kg). e Peak CBV in the segmented ROIs. Two-way mixed-effects ANOVA; within-subjects factor of region.

Fig. 2 | Pharmaco-fUSI closely tracks ketamine-evoked gamma-band power in the prefrontal cortex. a Schematic representation of the setup for recording intracranial electrocorticography over the Cg1 sub-region of the prefrontal cortex. Coronal slice drawing adapted from the Paxinos & Watson rat brain atlas 30 . b Representative spectrogram for i.v. administration of 10 mg/kg ketamine (KET). c Time series of normalized electrocorticography (ECoG) power changes in each frequency band and cerebral blood volume (CBV) signal in the Cg1 region. Solid lines represent the mean values and shaded areas are SEM. d For each rat, the time series of normalized ECoG power changes and the Cg1 CBV signal were regressed using a single-gamma function.
Fig. 3 | Pharmaco-fUSI reveals a sex dependence of opioid-mediated effects of ketamine administration. a Rats received an s.c. injection of either vehicle (VEH; saline) or naltrexone (NTX; 10 mg/kg) followed by ketamine (KET; 10 mg/kg, i.v.) or vehicle after 10 min. Each animal was imaged three times under the treatment conditions of VEH + KET, NTX + KET, and NTX + VEH. b, c Functional maps in male (b) and female (c) rats. The t scores were calculated by contrasting the pixel-wise peak cerebral blood volume (CBV) in the NTX + KET versus VEH + KET groups. Statistically significant clusters are displayed overlaid on a power Doppler template (one cohort of n = 9 females and n = 9 males imaged at bregma +2.5 mm; one cohort of n = 9 females and n = 9 males imaged at bregma −3.5 mm; two-tailed paired t-test, corrected P < 0.05). In male rats, functional maps show that naltrexone reduced peak activity in M1/2, Cg1, NAc, and CPu, and increased activity in RSG, LHb, and LPLR. There were only minor clusters in female rats. d, e CBV time series in Cg1, CPu, NAcC, RSG, and LHb in male (d) and female (e) rats. Solid lines represent the mean values and shaded areas are SEM. f A different cohort of male rats received an i.v. dose of 0.1 mg/kg MK-801 with naltrexone or vehicle pretreatment. The bar plots display the peak CBV in the Cg1 and NAcC regions. Two-tailed paired t-tests (NTX + KET vs VEH + KET and NTX + MK-801 vs VEH + MK-801), corrected P: Cg1 = 0.016 (KET), 0.572 (MK-801); NAcC = 0.024 (KET), 0.922 (MK-801). Hedges' g effect sizes: Cg1 = −1.2 (KET), −0.3 (MK-801); NAcC = −0.97 (KET), 0.04 (MK-801). n = 9 male rats (KET groups); n = 6 male rats (MK-801 groups). Data are presented as mean ± SEM. g CBV time series in Cg1 and NAcC in male rats receiving MK-801. n = 6 male rats. Solid lines represent the mean values and shaded areas are SEM. Source data are provided as a Source Data file. Details on the statistical analyses are provided in Supplementary Table 1.
Hand-powered centrifugal micropipette-tip with distance-based quantification for on-site testing of SARS-CoV-2 virus
This paper proposes a hand-powered centrifugal micropipette-tip strategy, termed HCM, for an all-in-one immunoassay combined with a distance-based readout for portable quantitative detection of SARS-CoV-2. The target SARS-CoV-2 virus antigen triggers the binding of multiple monoclonal antibody-coated red latex nanobeads, forming larger complexes. Following incubation and centrifugation, the aggregated complexes settle at the bottom of the tip, while free red nanobeads remain suspended in the solution. The HCM enables sensitive (1 ng/mL) and reliable quantification of SARS-CoV-2 within 25 min. With the advantages of being washing-free, fabrication-free, and instrument-free, and requiring no optical device, the proposed low-cost and easy-to-use HCM immunoassay shows great potential for quantitative POC diagnostics of SARS-CoV-2.
Introduction
The coronavirus disease 2019 (COVID-19) pandemic has become a severe global threat and continues to severely affect public health and the economy [1][2][3]. According to the World Health Organization (WHO), there were over 600 million confirmed cases and more than 6 million deaths by the end of September 2022. Considering its high infectivity rate, on-site, rapid, and early point-of-care testing (POCT) is particularly critical for timely isolation and intervention [4].
In terms of assay formats, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) testing is mainly carried out by nucleic acid-based molecular amplification and serum-based immunoassays [5][6][7]. In practice, classical quantitative assays, such as reverse transcription polymerase chain reaction/quantitative polymerase chain reaction (RT-PCR/qPCR) [8], enzyme-linked immunosorbent assay (ELISA), and chemiluminescence immunoassay (CLIA), have played key roles in health monitoring during the outbreaks of COVID-19 [9,10]. However, the requirements for sophisticated equipment, specialized personnel, and multi-step processes have limited their potential application in POCT for on-site detection [11]. Alternatively, turbidimetric inhibition immunoassay (TIIA) can partially overcome the above-mentioned shortcomings, but it still requires optical detection equipment for quantification [12,13] (Fig. 1a).
Direct detection of SARS-CoV-2 antigens by lateral flow immunochromatography (LFIA) has been widely applied for preliminary large-scale screening due to its advantages of superior simplicity of operation and visualized detection; in particular, in-home self-testing scenarios could effectively reduce the risk of spread from mass gatherings during the epidemic [14] (Fig. 1b). Considering the low sensitivity caused by the insufficient reaction between antigens and antibodies in conventional LFIA, Xu et al. recently reported a promising handheld microfluidic filtration platform for self-testing of the SARS-CoV-2 virus [15]. The antigen-antibody binding was carried out within a test tube, enabling immediate and synchronal contact of antigen- and antibody-conjugated beads and achieving a low detection limit of less than 100 copies mL−1 within 30 s. The detection signal was then obtained by naked-eye reading of the red color intensity, which was potentially affected by subjective interpretation. Subsequently, to fulfill quantitative measurement while maintaining a simple and user-friendly interface, Wu et al. demonstrated a novel microfluidic chip with a particle dam for quantitative visualization of SARS-CoV-2 antibody levels from the accumulation length of microparticles functionalized with antibodies [16]. However, despite promising progress, most of these POC platforms enable only naked-eye qualitative detection or instrument-assisted quantitative detection. Thus, it is desirable to develop a low-cost, simple, instrument-free platform with quantitative visualization for POC diagnostics of SARS-CoV-2 [17].
Recently, distance-based signal readout, which does not require a fluorescence or optoelectronic detector, has attracted increasing attention for biochemical sensing because of its capability for naked-eye quantitative detection [18][19][20][21][22][23]. Additionally, compared to other instrument-free approaches, cost-effective and portable centrifugal toys have emerged as one of the most promising bio-analysis systems [24]. For example, inspired by historic whirligig (or buzzer) toys, Bhamla et al. first developed a hand-powered, ultralow-cost paper centrifuge for plasma separation and for isolating malaria parasites from whole blood [25]. Later, Michael et al. described a custom-made fidget spinner for rapid on-site detection of urinary tract infections [26]. Given the above-mentioned factors, in this paper we propose a novel hand-powered centrifugal micropipette-tip (HCM) based platform that allows direct quantitative visualization of SARS-CoV-2 virus antigen levels (Fig. 1c). The centrifugal micropipette tip, with an inner diameter of 300 μm, is prepared by loading homogeneous immunoreaction reagents into a commercial long micropipette tip whose end is sealed with epoxy glue to avoid liquid leakage during centrifugation (Fig. S1). The presence of the target SARS-CoV-2 virus antigen triggers the multiple monoclonal antibody-coated red latex nanobeads (200 nm) to bind together, forming larger complexes of different sizes and shapes (Figs. S2a-d). After incubation at 37 °C and subsequent centrifugation of the HCM, the aggregated complexes settle at the bottom of the tip, while free red nanobeads remain suspended in the solution because of their lower density. Finally, the concentrated nanobeads at different distances can easily be read quantitatively with the naked eye, without dependence on a fluorescence or optoelectronic detector, making this one of the ideal solutions for point-of-care or self-service testing.
Results and discussion
To assess the feasibility of HCM, we first conducted a theoretical analysis. The sedimentation rate of spherical nanobeads is calculated by Stokes' law (Eqn (1)) [27]:

Us = 2R²(ρm − ρf)g / (9μ)   (1)

where Us represents the sedimentation velocity of the nanobeads, ρm is the density of the nanobeads, ρf is the density of the fluid, μ represents the fluid's viscosity, g is the acceleration due to gravity or centrifugation, and R is the radius of the nanobead. Eqn (1) indicates how this approach can be a simple but very effective method to separate aggregated complexes from unreacted nanospheres. The radius of a free unreacted nanosphere (200 nm) is significantly smaller than that of the aggregated complexes (Figs. S2b-d). Theoretically, because of this size difference, the large complexes are pelleted quickly to the bottom of the tip, while free red nanobeads remain suspended in the solution under suitable centrifugation conditions. We conducted three iterations of the proposed strategy to meet different application scenarios. Initially, as shown in Figs. S3a-c, a home-made tray was fabricated from two polymethyl methacrylate (PMMA) sheets (YoungChip, China) with a portable CO2 laser cutter (Laser Technology, China) via a fast laser print, cut, and laminate (PCL) methodology [28]. As a proof of concept, six separate micropipette tips were successfully loaded and then tested simultaneously on a custom electric motor, demonstrating great potential for high throughput. To further lower the application threshold, the experiments can be carried out directly on a mini tabletop centrifuge by modifying the centrifuge tube and micropipette tip, which can easily be performed in a routine laboratory (Figs. S3d-f). To meet POC testing requirements, a commercial pulling-force spinning top was customized for high-throughput loading of the micropipette tips, eliminating the need for electrically powered instruments (Fig. 2a-c and Video S1). The centrifugal speed of the pulling-force spinning top is easily stabilized in the range of 3000-4000 rpm when powered by hand, which meets our experimental requirements (Fig. 2d-e and Fig. S4).
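To make the size argument concrete, the short Python calculation below evaluates Eqn (1) for a free bead versus a small aggregate. The densities, viscosity, and rotation radius are assumed textbook values rather than measured parameters from this work.

```python
import numpy as np

rho_m, rho_f = 1050.0, 1000.0      # latex bead and buffer density, kg/m^3 (assumed)
mu = 1.0e-3                        # buffer viscosity, Pa*s (water, assumed)
rpm, r_rotor = 1500.0, 0.05        # spin rate and assumed rotation radius, m
g_eff = (2 * np.pi * rpm / 60) ** 2 * r_rotor   # centrifugal acceleration, m/s^2

def u_s(radius):
    # Stokes' law, Eqn (1): Us = 2 R^2 (rho_m - rho_f) g / (9 mu)
    return 2 * radius ** 2 * (rho_m - rho_f) * g_eff / (9 * mu)

print(u_s(100e-9))   # free 200 nm bead -> ~1.4e-7 m/s (stays suspended)
print(u_s(1e-6))     # 2 um aggregate   -> ~1.4e-5 m/s (pellets ~100x faster)
```

The R² dependence is the point: doubling the effective radius quadruples the settling speed, so even loosely bound complexes rapidly outrun single beads.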
The key parameters in the homogeneous immunoreaction and subsequent centrifugal process were investigated next. Visual detection is achieved by reading the accumulated distance after simple centrifugation. Since the diameter of the commercial micropipette tip is not uniform (Fig. S1), we quantified the preliminary results with the gray value of the cumulative area instead of the cumulative distance. As shown in Fig. 3a-d, the optimal experimental conditions were determined to be 250 μg/mL of red latex nanobeads, a 20 min incubation time, and 60 s of centrifugation at 1500 rpm. Benefitting from the all-in-one homogeneous immunoreaction and portable visual reading, the analysis can be completed within half an hour, which is significantly shorter than the standard ELISA method. Our HCM provides a distance-based quantitative result, which can be more accurate than other commercially available SARS-CoV-2 POCT kits that provide only semi-quantitative, user-biased readouts based on color intensity.
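A minimal sketch of the gray-value readout is shown below, assuming `img` is an 8-bit grayscale photograph of the tip region; the threshold is an illustrative value, as the exact image-processing settings are not given in the text.

```python
import numpy as np

def cumulative_gray_value(img, threshold=120):
    """img: 2-D uint8 grayscale image of the micropipette tip region."""
    pellet = img < threshold                  # accumulated red beads appear darker
    gray_sum = int((255 - img[pellet].astype(int)).sum())   # integrated gray value
    return gray_sum, int(pellet.sum())        # signal and cumulative pellet area
```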
To verify the sensitivity and reliability of our platform, we compared the proposed HCM with conventional LFIA using tenfold serial dilutions of samples in PBS solution. As displayed in Fig. 4, the signal intensity increased correspondingly with the growing concentration of the nucleocapsid (N) protein of SARS-CoV-2, ranging from 0 to 100 μg/mL. There is a weak signal from the LFIA when the concentration is 10 ng/mL, while no visible signal was observed at 1 ng/mL (Fig. 4a-b). Importantly, the lowest detection concentration of our instrument-free HCM was determined to be 1 ng/mL, with a dynamic range from 1 to 10 μg/mL (Fig. 4c-d). Conceivably, the detection limit and sensitivity can be further improved by reducing the tip's size in the future.
We further investigated the specificity of this approach by testing potentially interfering influenza A/B. As shown in Fig. 5a-b, the interfering influenzas (10 μg/mL) all resulted in short accumulation lengths comparable to the negative control sample. A long accumulation length was observed with SARS-CoV-2 at the same concentration, roughly more than twice that of the other influenzas. Together, these findings suggest that the formation of longer lengths was due to the specific binding of the SARS-CoV-2 N protein to the antibody-labeled nanoparticles.
To further verify the practicability of this method, we first applied it to mock swab samples. Six nasal swab samples from healthy volunteers with negative nucleic acid testing results, spiked with 10 μg/mL SARS-CoV-2 N protein, were used as test samples, or spiked with PBS as negative samples. As shown in Figs. S5a-b, the results demonstrated that our method can still successfully distinguish positive from negative samples, indicating no interference from nasal secretions.
Finally, we evaluated the performance of the optimized HCM method by comparing the results obtained from RT-PCR, LFIA, and HCM testing on nine clinical samples (Fig. 5c). Detailed information regarding the clinical samples and their testing results is presented in Table S1. As demonstrated in Fig. 5c-d, all PCR-confirmed SARS-CoV-2-positive samples (CS5-CS9) yielded a longer agglutination distance than the negative samples (CS2-CS4) (Fig. 5c). Furthermore, the statistical result indicates a significant difference between negative and positive samples (Fig. 5d). These results demonstrate the clinical potential of the proposed HCM technology, particularly in low-resource settings.
Conclusion
In summary, our proposed method offers many practical advantages: 1) no washing, owing to the one-pot homogeneous reaction; 2) only a commercial micropipette tip is needed, avoiding complicated fabrication; 3) low cost per test (<$0.1/test); 4) testing on a hand-powered centrifugal toy, with no instrumentation required; 5) distance-based naked-eye reading, with no optical equipment required; 6) massive parallelization for multiple tests. Despite these obvious advantages, there remains significant room for improvement: 1) given the versatility of the platform, the antigen or antibody can easily be replaced for detection of other viruses; 2) integration of a whole-blood separation module would enable serological immune detection; 3) integration of a liquid distribution module would allow simultaneous detection of multiple targets in a single sample [29]. Due to the non-uniform diameter of commercial micropipette tips, we quantified the results using the gray value of the cumulative area rather than the cumulative distance. Alternatively, the assay can be quantitatively scored using image analysis, including automated smartphone-based approaches. Additionally, producing micropipette tips with a uniform diameter through injection molding or soft lithography could further improve the accuracy of our results. Overall, we have presented a simple one-step immunoassay with a centrifugal micropipette-tip design for portable quantitative self-testing of the SARS-CoV-2 virus, which will have broad applications in POCT, especially in rural areas where laboratory equipment and resources are scarce.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability
Data will be made available on request.
Effect of dihydroquercetin on the stability of the properties of rendered fats
To reduce the negative effects of the oxidative action of oxygen during fat storage, antioxidants are used, but currently mainly synthetic ones, some of which can have a toxic effect on the human body if the proper concentration is not maintained. The article presents data on the effect of an antioxidant of natural origin, dihydroquercetin, on the stability of the properties of rendered fats during storage. As the material for the study, we used rendered elk fat and beef fat without dihydroquercetin (control) and with dihydroquercetin (experiment). Dihydroquercetin was administered as a 1% alcohol solution in the amounts of 0.01%, 0.03%, 0.05%, 0.07% and 0.09% of the mass of raw materials. In the course of the work, generally accepted methods of studying the development of oxidative spoilage were used, namely determination of the acid, peroxide and thiobarbituric numbers. The research led to the conclusion that the inhibitory property of dihydroquercetin is directly dependent on its concentration: the higher its proportion in the product, the lower the indicators of oxidative spoilage. Depending on the type of fat, this additive in the amount of 0.01% allowed the shelf life of the product to be prolonged by a factor of 1.7 to 3.7 on average.
Introduction
To give meat and meat-containing products the necessary tenderness, juiciness, and nutritional and energy value, a variety of fat-containing raw materials is used; edible rendered fat serves in meat production as a substitute for such raw materials.
During storage, complex chemical processes occur in fats, as a result of which they acquire a specific smell and an unpleasant, sometimes bitter, taste.
When fats become rancid, volatile low-molecular-weight compounds are formed, causing the characteristic rancid smell. These compounds have been ascertained to include aldehydes, ketones and low-molecular-weight acids.
The oxidation process, which is typical of rendered fats, can be significantly retarded with the help of antioxidants, which have the property of prolonging the induction period, i.e. the period during which the oxidation processes in the fat have not yet developed. Antioxidants are administered in extremely small doses, usually thousandths to hundredths of a percent of the fat mass. The effect of antioxidants is due to their ability to interact in different ways with intermediate products of the oxidation reaction (hydrocarbon hydroperoxides (ROOH), free radicals (R•) and peroxide radicals (RO2•)), as well as to break the reaction chains by capturing active organic fat radicals [1,2,3].
This paper presents the results of a study of the antioxidant capacity of dihydroquercetin produced by Ametis JSC (Russia, Amur region, Blagoveshchensk) when added to rendered elk and beef fats stored at a temperature of -18°C in consumer containers made of polymer materials.
Dihydroquercetin (DHA) is a natural bioflavonoid, a vitamin of the P group, obtained by extraction from the crushed butt-log portion of Siberian, Daurian and Gmelin larch, the part of the tree richest in extractive substances and a waste product of logging and wood-processing enterprises.
In Russia, this substance is included in the list of permitted food additives and is recommended for use in the production of food products as one of the recipe components.
Materials and methods
As the material for the study, we used elk fat, beef fat (control) and elk fat, beef fat with dihydroquercetin (experiment). Dihydroquercetin was administered in a 1% alcohol solution in the amount of 0.01%, 0.03%, 0.05%, 0.07% and 0.09% of the mass of raw materials.
According to the current regulatory documentation (GOST 25292-2017), the recommended shelf life of rendered fat is 6 months. Accordingly, the development of oxidative spoilage in the presence of the additive was estimated from the acid (GOST R 55480-2013), peroxide (GOST 34118-2017) and thiobarbituric (GOST R 55810-2013) numbers of the fats under study, compared with control samples (fat without added dihydroquercetin), on days 0, 90, 180 and 216 of storage. The 216-day endpoint corresponds to the recommended 180-day shelf life multiplied by a reserve ratio of 1.2.
There is no normative technical documentation for elk fat, so it was decided to follow Sanitary Regulations and Standards (SanPiN) 2.3.2.1078-01 "Hygienic requirements for safety and nutritional value of food products" for quality estimation. For beef fat, GOST 25292-2017 "Rendered edible animal fats. Technical conditions" was used as well.
Investigation of the effect of dihydroquercetin on the stability of the properties of rendered fats during storage
The study of oxidative changes in elk fat (Figures 1-3) established the inhibitory effect of dihydroquercetin on the oxidation process: in all the experimental samples, regardless of the DHA concentration added, spoilage was inhibited.
At the beginning of elk fat storage, the amount of free fatty acids (acid number, AN) in all samples was 1.16 mg KOH/kg. During subsequent storage, the control sample showed an increase in acid number: on day 216 it had increased by a factor of 4.7 relative to day 0, while the sample with a DHA concentration of 0.01% increased by a factor of 2.1 and the one with 0.03% by a factor of 1.5. No difference was established between the AN of the samples with 0.05%, 0.07% and 0.09% DHA; their AN increase averaged a factor of 1.3. Comparison of the studied samples with the control sample on day 216 established a decrease in acid number by factors of 2.2 (0.01%), 3.1 (0.03%), 3.5 (0.05%), 3.7 (0.07%) and 3.8 (0.09%).

The content of primary oxidation products (peroxide number, PN) on day 0 in all samples was 2.40 mmol of active oxygen/kg. Throughout the storage period the samples showed growth in peroxide number: on day 216 compared with day 0, the PN of the control sample had increased 3.3 times, that of the 0.01% sample 1.6 times, and that of the 0.03% sample 1.5 times. The difference between the PN of the samples with 0.05% and 0.07% DHA was within the limits of experimental error, with PN growth amounting to 2.8 times, and in the 0.09% sample to 2.9 times. The data obtained for the samples with the antioxidant indicate its pronounced inhibitory effect. Comparison of the studied samples with the control sample on day 216 established a decrease in peroxide number by factors of 2.0 (0.01%), 2.3 (0.03%), 2.7 (0.05%), 2.8 (0.07%) and 2.9 (0.09%).

The amount of malonic aldehyde (thiobarbituric number, TN) also increased in all samples during storage. On day 216, the thiobarbituric number had increased 6.5 times compared to day 0 for the control sample, 2.4 times for the 0.01% sample, 2.1 times for the 0.03% sample, 2 times for the 0.05% sample, and 1.9 times on average for the 0.07% and 0.09% samples. The difference between the TN of the samples with 0.07% and 0.09% DHA was within the error range. Comparison of the studied samples with the control sample on day 216 established a decrease in TN by factors of 2.7 (0.01%), 3.1 (0.03%), 3.3 (0.05%), 3.4 (0.07%) and 3.7 (0.09%).

At the end of the storage period, the acid number of the control sample did not meet the requirements of the regulatory and technical documentation, while the samples with dihydroquercetin did, not exceeding 4 mg KOH/g. By peroxide number, the control sample was of questionable freshness, whereas the samples with DHA were fresh. Analyzing the data obtained, it can be noted that the use of such an antioxidant can extend the shelf life of rendered elk fat by a factor of 2 to 3.8.
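To make the fold-change arithmetic used above explicit, here is a minimal sketch in Python. Only the day-0 acid number (1.16 mg KOH/kg) and the reported day-216 factors come from the text; the intermediate 90- and 180-day values are placeholders, and the function names are illustrative.

```python
# Sketch of the fold-change arithmetic used in the results above: each
# indicator is compared with its own day-0 value and, on the final day,
# with the control sample. Intermediate values are placeholders.
def fold_vs_day0(series):
    """Ratio of the final measurement to the day-0 measurement."""
    days = sorted(series)
    return series[days[-1]] / series[days[0]]

def reduction_vs_control(control, sample, day):
    """How many times lower the sample indicator is than the control."""
    return control[day] / sample[day]

# Acid-number series (mg KOH/kg) consistent with the reported factors:
# all samples start at 1.16; the control rises ~4.7-fold by day 216.
control_AN = {0: 1.16, 90: 2.4, 180: 4.1, 216: 5.45}
dha_001_AN = {0: 1.16, 90: 1.5, 180: 2.0, 216: 2.44}

print(round(fold_vs_day0(control_AN), 1))                           # ~4.7
print(round(fold_vs_day0(dha_001_AN), 1))                           # ~2.1
print(round(reduction_vs_control(control_AN, dha_001_AN, 216), 1))  # ~2.2
```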
The results of the study of oxidative changes in beef fat (Figures 4-6) likewise demonstrate the inhibitory effect of dihydroquercetin on the oxidation process in all the samples containing it.
As in the previous study, all samples showed an increase in the amount of free fatty acids (acid number, AN). Over the entire storage period, the acid number of the control sample increased by a factor of 4.5 relative to day 0; for the samples with DHA concentrations of 0.01%, 0.03%, 0.05%, 0.07% and 0.09%, the increases were by factors of 2.7, 2.3, 1.9, 1.7 and 1.5, respectively. Comparison of the studied samples with the control sample on day 216 established a decrease in AN by factors of 1.7 (0.01%), 2.0 (0.03%), 2.4 (0.05%), 2.7 (0.07%) and 2.9 (0.09%). At the end of the storage period, according to the acid number and the requirements of the current regulatory documentation, the control sample was graded below first grade, the samples with 0.01% and 0.03% DHA as first grade, and the samples with 0.05%-0.09% DHA as the highest grade. According to the peroxide number, the control sample was of questionable freshness, close to spoiled; the samples with 0.01% and 0.03% DHA were fresh but not suitable for further storage; and the remaining samples with DHA were fresh. All tested samples met the requirements of the current regulatory documentation.
Conclusion
Studies of the effect of dihydroquercetin on the stability of the properties of food fats during storage lead to the conclusion that the inhibiting property of DHA depends directly on its concentration: the higher the proportion of dihydroquercetin in the product, the lower the indicators of oxidative spoilage.
Taking into account the maximum permissible concentration of antioxidants for rendered fats (0.02%, according to the recommendations of FAO/WHO, the restrictions of TR CU 029/2012 and GOST 25292-2017), it can be noted that the use of dihydroquercetin in the amount of 0.01% prolongs the shelf life of fat by a factor of 1.7 to 3.7 on average, depending on the type of fat.
Special attention should be paid to the fact that some antioxidants permitted in the fat-and-oil industry (butylated hydroxyanisole, butylated hydroxytoluene, tert-butylhydroquinone, etc.) can have a toxic effect on the human body at high concentrations in the product, whereas DHA is a non-toxic food additive [4].
Structural and Mechanistic Analysis of Drosophila melanogaster Agmatine N-Acetyltransferase, an Enzyme that Catalyzes the Formation of N-Acetylagmatine
Agmatine N-acetyltransferase (AgmNAT) catalyzes the formation of N-acetylagmatine from acetyl-CoA and agmatine. Herein, we provide evidence that Drosophila melanogaster AgmNAT (CG15766) catalyzes the formation of N-acetylagmatine using an ordered sequential mechanism; acetyl-CoA binds prior to agmatine to generate an AgmNAT•acetyl-CoA•agmatine ternary complex prior to catalysis. Additionally, we solved a crystal structure of the apo form of AgmNAT at a resolution of 2.3 Å, which points towards specific amino acids that may function in catalysis or active site formation. Using the crystal structure, primary sequence alignment, pH-activity profiles, and site-directed mutagenesis, we evaluated a series of active site amino acids in order to assign their functional roles in AgmNAT. More specifically, the pH-activity profiles identified at least one catalytically important, ionizable group with an apparent pKa of ~7.5, which corresponds to the general base in catalysis, Glu-34. Moreover, these data led to a proposed chemical mechanism, which is consistent with the structure and our biochemical analysis of AgmNAT.
Results and Discussion
Crystal structure of AgmNAT. A homology model for AgmNAT was constructed using the Aedes aegypti arylalkylamine N-acetyltransferase structure 21 as a template for molecular replacement. The AgmNAT (CG15766) crystal structure was determined at 2.3 Å, with two monomers in the asymmetric unit of the P2₁ space group (Table 1). The two monomers are nearly identical, with an RMSD of 0.262 Å over 862 aligned backbone atoms. Similar to the arylalkylamine N-acetyltransferase model, the new structure is primarily composed of six α-helices and seven anti-parallel β-strands (Fig. 1A). The AgmNAT structure displays a conserved GNAT fold, similar to that observed for D. melanogaster AANATA and human spermidine/spermine N1-acetyltransferase (SSAT) (Supplementary Fig. S2), though the sequence identity is low when compared to these N-acetyltransferase enzymes (24% with AANATA and <20% with SSAT), a known feature of GNAT enzymes 2. Based on the functional and structural similarities between AgmNAT and other GNATs such as AANATA (PDB 3TE4) 15,60, we predict the active site pocket to be similar, though not identical, for the binding of the acyl-CoA and amine substrates (Fig. 2). The active site is well defined in the 2Fo-Fc electron density map (Fig. 1B,C) and is located near the crystal packing interface for both monomers. Based on the structure of AANATA with acetyl-CoA bound (PDB 3TE4) 60, the binding surface for the adenosine 3′-phosphate 5′-pyrophosphate moiety of CoA-SH is blocked by protein-protein interactions in the AgmNAT structure, but the rest of the active site is open. The splaying of β-strands four and five, a conserved structural feature of GNAT enzymes that forms the binding site for the pantetheine arm of acetyl-CoA 2, is also displayed in AgmNAT. Moreover, a conserved glutamate, Glu-34, which serves as the catalytic base in other D. melanogaster N-acyltransferase enzymes, is located within an accessible pocket that can accommodate the acyl-CoA and amine substrates, similar to that observed for AANATA (Fig. 1B) 15. Also observed in the active site pocket are Pro-35 and Ser-171 (Fig. 1B,C), conserved amino acids that regulate catalysis in other D. melanogaster N-acyltransferases 15,61,62. The functional roles of Pro-35 and Ser-171 of AgmNAT are discussed in subsequent sections.
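For readers interested in how a backbone RMSD like the 0.262 Å figure above is typically computed, the sketch below implements the standard Kabsch superposition followed by the RMSD calculation. The coordinate arrays would normally come from parsing the deposited structure (PDB 5K9N), which is not done here; this is a generic illustration, not code from the study.

```python
# Sketch of a backbone RMSD calculation: optimal rigid-body superposition
# (Kabsch algorithm) followed by root-mean-square deviation. Coordinates
# would normally be parsed from the PDB file.
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate sets after optimal alignment."""
    P = P - P.mean(axis=0)                  # center both sets
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                             # 3x3 covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T # optimal rotation
    diff = P @ R.T - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))

# e.g., for the 862 aligned backbone atoms of the two monomers:
# rmsd = kabsch_rmsd(coords_monomer_A, coords_monomer_B)
```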
Evaluation of acyl-CoA steady-state kinetic constants. AgmNAT showed minimal differences in the measured Km,app values for acyl-CoA substrates ranging from acetyl-CoA to decanoyl-CoA (C2-C10) (Table 2) when agmatine was used as the saturating amine substrate. However, there was an acyl-chain-length-dependent decrease in the apparent kcat value for the acyl-CoA substrates as the chain length increased. This decrease in the turnover number of ~150-fold from acetyl-CoA to decanoyl-CoA led to the observed acyl-chain-length-specific decrease in the (kcat/Km)app value. In addition, oleoyl-CoA was not a substrate at a concentration of 500 μM. These data likely result from the acyl chain partially (decanoyl-CoA) or fully (oleoyl-CoA) occupying the amine binding site, perturbing the productive binding of agmatine and therefore decreasing or abolishing catalysis. Similar results were observed for other D. melanogaster N-acyltransferases 15,61,62.

Evaluation of amine substrate steady-state kinetic constants. We screened >50 amines as potential AgmNAT substrates using acetyl-CoA or oleoyl-CoA as the co-substrate, because of our interests in fatty acid biosynthesis, structure-function relationships of GNAT enzymes, and the development of novel insecticides targeted to this class of enzymes. Our amine substrate screen included the canonical amino acids (except for Cys, because Cys reacts with DTNB), amino acid analogs, other biogenic amines, and different xenobiotic amines. Only six amines (Table 3) showed AgmNAT activity >3-fold higher than the level of background acetyl-CoA thioesterase activity, whereas none showed a greater rate with oleoyl-CoA. We also identified five polyamines as AgmNAT substrates: spermine, N8-acetylspermidine, putrescine, spermidine, and cadaverine (Table 3). The (kcat/Km)app values for the polyamines were lower than that measured for agmatine, with the (kcat/Km)app,agmatine/(kcat/Km)app,polyamine ratio ranging from 15 for spermine to 1900 for cadaverine. Structural evidence suggests that the specificity for agmatine and the different polyamines likely results from the acidic nature of the active site, similar to that observed for the human ortholog (human SSAT) (Fig. 3) 2. A more acidic active site can accommodate an amine substrate with a basic guanidinium group better than one with a hydrophobic aromatic group, giving rise to the difference in substrate specificity when compared to an AANAT 15,60. AgmNAT was originally named AANATL8 based on primary sequence similarity 15; however, the substrate specificity data reported here support a new designation: agmatine N-acetyltransferase. This is the first report of agmatine serving as the best amine substrate for an N-acyltransferase. There are only a few reports of agmatine serving as a substrate within this family of enzymes 17,62,63 and only two reports on the identification of N-acetylagmatine from a biological source 64,65. Rats fed heavy-atom-labeled agmatine yielded two major urinary products, heavy-atom-labeled N-acetylagmatine and unprocessed but labeled agmatine 64, suggesting a conversion similar to that catalyzed by AgmNAT. Inactivation of agmatine neurotransmission by N-acetylation is an underappreciated link between arginine, agmatine, and human disease 27,66-68; thus, the search for a human ortholog of Drosophila AgmNAT could lead to a new target for drug development.
Additionally, selective targeting of Drosophila AgmNAT could result in the development of novel insecticides for insect control 20-23. We found that arginine, arginine methyl ester, N-acetylputrescine, and N1-acetylspermidine were not AgmNAT substrates. The ~25-fold increase in kcat,app for N8-acetylspermidine compared to spermidine, together with our data demonstrating that N-acetylputrescine and N1-acetylspermidine were not substrates, suggests that AgmNAT most likely catalyzes the mono- and N1-specific acetylation of these biogenic amines, similar to what is observed for the mammalian spermidine N-acetyltransferase 69,70. The increase in the kcat,app value, together with the small ~2-fold difference in the Km,app for N8-acetylspermidine relative to spermidine, could result from non-productive binding of the N8-amine of spermidine in the AgmNAT active site, whereby the N1-amine is better positioned for catalysis: deprotonation and then nucleophilic attack of the -NH2 at the carbonyl of the acetyl-CoA thioester moiety. This means both amine moieties can bind in the active site, but only the N1-amine is acetylated.
While arginine and arginine methyl ester are not AgmNAT substrates, we further evaluated them for AgmNAT inhibition to determine whether either could bind to the enzyme. Arginine methyl ester proved to be a weak inhibitor of AgmNAT, decreasing the rate of N-acetylagmatine formation from acetyl-CoA and agmatine by ~50% at 10 mM. In contrast, we found no inhibition of N-acetylagmatine formation at either 10 mM or 25 mM arginine. These data show that modification of the α-position of agmatine inhibits binding to AgmNAT and that this inhibition results from both electronic and steric effects. The presence of the negatively charged α-carboxylate seems to eliminate or significantly weaken AgmNAT binding, likely as a result of charge-charge repulsion. Evidence for this suggestion comes from the weak inhibition by arginine methyl ester (Ki,s and Ki,i ≥ 1 mM, Supplementary Fig. S3), but no apparent inhibition by arginine at a concentration as high as 25 mM.
AgmNAT kinetic mechanism. Double-reciprocal plots of the initial velocity data produced an intersecting line pattern (Fig. 4B). These data suggest that the AgmNAT-catalyzed formation of N-acetylagmatine occurs via a sequential mechanism; catalysis takes place only after formation of the AgmNAT•acetyl-CoA•agmatine ternary complex. Next, we determined whether the AgmNAT kinetic mechanism is an ordered or a random sequential mechanism by using substrate analogs, oleoyl-CoA, arcaine, and arginine methyl ester, as dead-end inhibitors vs. acetyl-CoA and agmatine. The inhibitor data are summarized in Table 4, and the double-reciprocal plots for the inhibitors are included in the Supplementary Materials. Arcaine is structurally related to agmatine, with its primary amine moiety replaced by a guanidinium group. Arcaine serving as an AgmNAT inhibitor supports our conclusion that AgmNAT does not acetylate the guanidinium amine of agmatine. None of these inhibitors showed any rate of catalysis above the slow background rate of acetyl-CoA or oleoyl-CoA hydrolysis. Oleoyl-CoA produced competitive and noncompetitive inhibition plots vs. acetyl-CoA and agmatine, respectively (Table 4 and Supplementary Fig. S4A,B). Arcaine produced uncompetitive and competitive inhibition plots vs. acetyl-CoA and agmatine, respectively (Table 4 and Supplementary Fig. S4C,D). As observed for arcaine, arginine methyl ester produced uncompetitive and competitive inhibition plots vs. acetyl-CoA and agmatine, respectively (Table 4 and Supplementary Fig. S3). These data demonstrate that AgmNAT catalyzes the formation of N-acetylagmatine through an ordered sequential mechanism: acetyl-CoA binds first, followed by agmatine, to generate the AgmNAT•acetyl-CoA•agmatine complex prior to catalysis. This is similar to the kinetic mechanism of other D. melanogaster GNAT enzymes, including AANATA, AANATL2, and AANATL7 15-17. Support for an ordered sequential mechanism for AgmNAT comes from a statistically better fit to Equation 3 (as shown in Fig. 4) and the noncompetitive inhibition of oleoyl-CoA vs. agmatine (Table 4 and Supplementary Fig. S4A,B). Additional support and further details for the kinetic mechanism are revealed by N-acetylagmatine product inhibition. N-Acetylagmatine produced uncompetitive and competitive inhibition plots vs. acetyl-CoA and agmatine, respectively (Table 4 and Supplementary Fig. S5). Uncompetitive inhibition by N-acetylagmatine vs. acetyl-CoA (Supplementary Fig. S5A) is inconsistent with a ping-pong kinetic mechanism. In sum, the kinetic analyses are consistent with two kinetic mechanisms: (a) ordered sequential substrate binding with acetyl-CoA binding first, followed by ordered sequential product release with N-acetylagmatine released last, or (b) ordered sequential substrate binding with acetyl-CoA binding first, followed by ordered sequential product release with CoA-SH released last. Uncompetitive inhibition by N-acetylagmatine vs. acetyl-CoA would be explained by the formation of a non-productive AgmNAT•acetyl-CoA•N-acetylagmatine complex with no reversible connection between the AgmNAT•acetyl-CoA complex and the AgmNAT•CoA-SH complex. We favor the latter mechanism because we have demonstrated that CoA-SH binds to other D. melanogaster AANATs 15,61 and many other N-acetyltransferases exhibit ordered product release with CoA-SH released last 71-74.
Proposed AgmNAT chemical mechanism. We combined the pH dependence of the kinetic constants, primary sequence alignment with other D. melanogaster GNAT enzymes 15, determination of the three-dimensional structure, and site-directed mutagenesis of a putative catalytically important residue to provide insights into the AgmNAT chemical mechanism. First, the pH dependence of the kinetic constants was assessed for acetyl-CoA to assign apparent pKa values to ionizable groups involved in catalysis. Both the kcat,app and (kcat/Km)app pH-rate profiles were rising profiles, with pKa,app values of 7.7 ± 0.1 and 7.3 ± 0.2, respectively (Fig. 5). An apparent pKa of ~7.5 can be attributed to a general base in catalysis, reflecting deprotonation of either the primary amine of agmatine or the zwitterionic tetrahedral intermediate generated upon nucleophilic attack of agmatine at the carbonyl thioester of acetyl-CoA. A second, higher pKa,app, possibly resulting from the deprotonation of a catalytically important general acid, was not observed in our pH-activity data, a surprising result given that a pKa of ~8.5-9.5 has been observed for many other N-acyltransferases 2,75,76. Explanations for these data include: (a) AgmNAT catalysis does not require a general acid, (b) the general acid in catalysis is not rate-limiting under our assay conditions, or (c) the general acid in AgmNAT catalysis has an apparent pKa > 9.5. Because of the high rate of base-catalyzed acyl-CoA hydrolysis, we cannot perform experiments at pH > 9.5 to define a pKa > 9.5.
Next, we combined information from primary sequence alignments, the AgmNAT structure, and site-directed mutagenesis to define potential amino acids that could function in catalysis. A conserved glutamate has been proposed as the catalytic base in two D. melanogaster arylalkylamine N-acetyltransferases (AANATs), corresponding to Glu-34 in AgmNAT 15,16. Additionally, the AgmNAT structure shows that Glu-34 is in the active site, a buried region with several structural waters positioned within proximity of Glu-34 (Fig. 1B), similar to D. melanogaster AANATA (PDB code: 3TE4) 15. Ordered water molecules within the active sites of other GNAT enzymes are thought to form a "proton wire" that assists the general base in catalysis 2,15,17,63,75-77. Although only a limited number of water molecules (36 in total) were sufficiently ordered to be modeled in the current structure, the majority of them are in the active sites of the two monomers. The closest ordered water molecule to Glu-34 is ~3.7 Å from its Oε1 atom, positioned slightly too far for a hydrogen bond; however, we anticipate that conformational changes upon substrate binding could promote hydrogen-bond interactions between ordered water molecules and the functional groups of AgmNAT and its substrates. Such hydrogen bonds could facilitate proton transfer from the amine substrate to initiate catalysis. In addition, unlike Glu-33, which is exposed to the bulk solvent, Glu-34 is relatively sheltered, placed close to the hydrophobic core of the protein and next to residues such as Leu-36. This microenvironment could be responsible for the pKa shift of Glu-34 identified in the pH-rate profiles. Therefore, we sought to interrogate the catalytic role of Glu-34 by evaluating the kinetic constants of the E34A mutant. The E34A mutation produced a catalytically deficient enzyme, exhibiting only 0.05-0.07% of the wildtype kcat,app value, indicating that Glu-34 does function in the catalytic cycle. Furthermore, Glu-34 seems to have a role in substrate binding, because the Km,app values for both agmatine and acetyl-CoA of the E34A mutant differ from the wildtype values: the Km,app for agmatine increases 20-fold and the Km,app for acetyl-CoA decreases 6-fold (Table 5). The data generated for the E34A mutant are consistent with, but do not prove, Glu-34 serving as the general base in AgmNAT catalysis. To further investigate the role of Glu-34 in catalysis, we generated pH-activity profiles for the E34A mutant (Fig. 6). The kcat,app profile produced a pH-dependent linear increase with a slope of 0.7, and the (kcat/Km)app profile showed no slope. Attempts to titrate below pH 8.0 were unsuccessful, because the rate of CoA-SH release was not observable above the background hydrolysis rate. The linear shape of both the kcat,app and (kcat/Km)app pH profiles, combined with the deficiency in catalytic rate, suggests that Glu-34 serves as the general base in catalysis.
Our steady-state kinetic data identified an ordered sequential mechanism with acetyl-CoA binding first, followed by agmatine, to generate the AgmNAT•acetyl-CoA•agmatine ternary complex prior to catalysis. After formation of the ternary complex, Glu-34 functions as the general base to deprotonate the positively charged amine moiety of agmatine, most likely through a "proton wire" of ordered water molecules, followed by nucleophilic attack at the carbonyl of the acetyl-CoA thioester to generate a zwitterionic tetrahedral intermediate. Breakdown of the intermediate ensues with the departure of coenzyme A, which is most likely protonated by the positively charged amine of the intermediate (Fig. 7). This mechanism is consistent with other proposed chemical mechanisms for the N-acyltransferases of D. melanogaster and other organisms 15,16,24,78.

Other amino acids in AgmNAT that function in substrate binding and modulating catalysis. In addition to Glu-34, three other amino acids were individually mutated to alanine to define their functions. These residues, Pro-35, Ser-171, and His-206, are conserved among D. melanogaster GNAT enzymes 15 and are proposed to function in active site formation, substrate binding, and/or regulation of catalysis 16,17. The P35A mutant is catalytically deficient, with a kcat,app value that is ~2% of wildtype, while exhibiting only minimal Km,app differences compared to wildtype for both acetyl-CoA and agmatine (Table 5). Similar results were observed for the corresponding proline in other GNAT enzymes, except that most exhibited a significant Km increase for the corresponding amine, suggesting a role in substrate binding. Furthermore, the structure of sheep serotonin N-acetyltransferase (PDB code: 1CJW), co-crystallized with the tryptamine-acetyl-CoA bisubstrate inhibitor, shows that the corresponding Pro-64 interacts with this inhibitor via a CH-π interaction with the negatively charged face of the aromatic tryptamine moiety 77,79. Agmatine lacks an aromatic moiety; thus, Pro-35 of AgmNAT cannot form a CH-π interaction with agmatine, which we propose is the reason why no Km effect was observed for the P35A mutant, unlike that observed for other GNAT enzymes 15-17,79. In the current AgmNAT structure, Pro-35 is stacked on top of the imidazole ring of the His-206 side chain (Fig. 2). This extensive van der Waals interaction may make a significant contribution to the particular configuration of the active site. Another active site residue evaluated for its role in substrate binding and catalysis is Ser-171. The S171A mutant retained only ~9% of the wildtype kcat,app and also showed 3- to 4-fold changes in the Km,app values for the substrates (a decrease in the Km,app for acetyl-CoA and an increase in the Km,app for agmatine) (Table 5). The decrease in kcat,app could be interpreted as Ser-171 functioning as a general acid in catalysis to protonate CoA-S⁻ as it leaves the AgmNAT active site. For Ser-171 to function as a general acid during catalysis, however, the pKa of the serine hydroxyl would have to decrease by ~3-5 pH units to protonate the thiolate anion of the CoA product. We did not observe an apparent pKa in the pH-rate profiles that would correspond to a general acid, arguing against Ser-171 serving in this role. Alternatively, Ser-171 could have an important role in organizing the active site architecture to accommodate both substrates and enable efficient catalysis.
Ser-171 is located in the active site, where its Oγ side-chain atom forms hydrogen bonds with the backbone oxygen and nitrogen atoms of Ser-168 and a water-mediated interaction with the Thr-167 backbone nitrogen atom, suggesting that the 165-169 strand region, in addition to Ser-171, is important in stabilizing the active site pocket to accommodate both substrates and allow efficient catalysis to occur (Fig. 1C).
The H206A mutant resulted in a kcat,app value ~18-fold lower than the wildtype value, whereas the Km,app for acetyl-CoA and the Km,app for agmatine increased 2.3-fold and 1.4-fold, respectively. The corresponding residue (His-220) in D. melanogaster AANATA 15 was shown to interact with Tyr-185 and Pro-48 to form part of the active site, an interaction potentially resulting from a conformational change driven by acetyl-CoA binding. We assign a similar function to His-206 in AgmNAT, since its general location in the active site is similar to that of His-220 in D. melanogaster AANATA, and the van der Waals interaction with Pro-35, described above, is conserved (Fig. 2B). In addition, the His-206 side chain is in van der Waals contact with the Ser-168 Cα and Tyr-188 Cε2 atoms, as well as with several local prolines, Pro-203 and Pro-205. This means that His-206 contributes to the formation of the active site by interacting with multiple residues. The apo-AgmNAT structure shows Tyr-170 in a position that is not optimal for a direct interaction with His-206 (Fig. 2A), unlike that shown for the corresponding residues in the AANATA structure co-crystallized with acetyl-CoA 15,60. Tyr-170 occupies space near the entry point of acetyl-CoA into its binding pocket; therefore, we predict that a conformational change will move Tyr-170 into position for optimal acetyl-CoA binding, possibly by interacting with His-206.
The findings presented in this manuscript highlight mechanistic and structural insights into D. melanogaster AgmNAT, an enzyme that catalyzes the formation of N-acetylagmatine from acetyl-CoA and agmatine. We provide evidence for an underappreciated reaction in arginine metabolism; however, it remains unclear whether N-acetylation of agmatine by an N-acetyltransferase enzyme is biologically relevant. A combination of the data provided herein and reports from other labs speaks to its relevance, warranting further investigation into this chemical transformation as a part of arginine metabolism. Furthermore, we outline a chemical mechanism for the AgmNAT-catalyzed formation of N-acetylagmatine (and, by extension, other N-acylamides) that is consistent with the data presented herein. We also provide evidence for important active site residues involved in substrate binding and in maintaining the structural integrity of the active site for efficient catalysis, though further work is necessary to characterize the dynamic nature of the AgmNAT active site.
Materials and Methods

Oligonucleotides were purchased from Eurofins MWG Operon. PfuUltra High-Fidelity DNA polymerase was purchased from Agilent. BL21 (DE3) E. coli cells and the pET-28a(+) vector were purchased from Novagen. NdeI, XhoI, Antarctic Phosphatase, and T4 DNA ligase were purchased from New England Biolabs. Kanamycin monosulfate and IPTG were purchased from Gold Biotechnology. Acyl-CoAs were purchased from Sigma-Aldrich. N1-Acetylspermidine was commercially synthesized by Cayman Chemical. All other reagents were of the highest quality and purchased from either Sigma-Aldrich or Fisher Scientific.
AgmNAT: sub-cloning, expression, and purification. AgmNAT was inserted into a pET-28a vector using the NdeI and XhoI restriction sites, yielding the final expression vector, AgmNAT-pET-28a, which after transformation into E. coli BL21 (DE3) cells expressed a protein with an N-terminal His6-tag followed by a thrombin cleavage site. The E. coli BL21 (DE3) cells containing the AgmNAT-pET-28a vector were cultured in LB media supplemented with 40 μg/mL kanamycin at 37 °C. The culture was induced with 1.0 mM isopropyl β-D-1-thiogalactopyranoside (IPTG) at an OD600 of ~0.6, followed by an additional four hours at 37 °C. The final culture was harvested by centrifugation at 5,000 × g for 10 min at 4 °C and the pellet was collected. The pellet was resuspended in 20 mM Tris, pH 7.9, 500 mM NaCl, 5 mM imidazole, lysed by sonication, and then centrifuged at 10,000 × g for 15 min at 4 °C. The supernatant was collected and loaded onto 6 mL of ProBond™ nickel-chelating resin, followed by two wash steps: wash one, 10 column volumes of 20 mM Tris, pH 7.9, 500 mM NaCl, 5 mM imidazole; wash two, 10 column volumes of 20 mM Tris, pH 7.9, 500 mM NaCl, 60 mM imidazole. AgmNAT was eluted in 1 mL fractions using 20 mM Tris, pH 7.9, 500 mM NaCl, 500 mM imidazole; the protein was pooled and extensively dialyzed at 4 °C against 20 mM Tris pH 7.4, 200 mM NaCl. The concentration of AgmNAT was determined using the Bradford assay indexed against BSA as a standard, and purity was assessed by SDS-PAGE (proteins visualized by Coomassie stain). Purification of recombinant AgmNAT by nickel affinity chromatography yielded pure protein (≥95%) as visualized by SDS-PAGE (Supplementary Fig. S6).
AgmNAT crystallography. After nickel-affinity purification, 30 mg of AgmNAT was dialyzed against 50 mM HEPES pH 8.2, 200 mM NaCl, followed by removal of the His6 affinity tag using 60 U of biotinylated thrombin for 18 h in a fresh batch of 50 mM HEPES pH 8.2, 200 mM NaCl, leaving an unnatural Gly-Ser-His at the N-terminus. The protein mixture was again subjected to nickel-affinity chromatography to remove undigested AgmNAT. AgmNAT eluted in the 20 mM Tris, pH 7.9, 500 mM NaCl, 60 mM imidazole fraction, whereas His6-AgmNAT was retained on the column until eluted with 20 mM Tris, pH 7.9, 500 mM NaCl, 500 mM imidazole. The biotinylated thrombin was removed using 3 mL of Pierce monomeric avidin agarose resin at 4 °C for 30 min, followed by centrifugation to recover AgmNAT, which was concentrated to ~10 mg/mL by ultrafiltration. Further purification was performed using a HiTrap Q FF column with a linear gradient from 50 mM HEPES pH 8.2 to 50 mM HEPES pH 8.2, 0.5 M NaCl, with AgmNAT eluting in fractions containing ~150 mM NaCl. A final SEC purification step followed the ion-exchange step, and purified AgmNAT was concentrated to ~8 mg/mL in 50 mM HEPES pH 8.2, 100 mM NaCl for crystallization screening. The Phoenix crystallization robot and Qiagen screening kits were used to evaluate crystallization conditions for AgmNAT. AgmNAT was crystallized using the hanging-drop vapor diffusion method in 100 mM Tris pH 8.0, 200 mM sodium acetate, 30% PEG 4000. The drop contained a 1:1 ratio of 1 μL of 8 mg/mL AgmNAT and 1 μL of well solution and was incubated at 20 °C. The crystals were elongated rods. Diffraction was measured at the 22-ID-D SER-CAT beamline at the Advanced Photon Source (APS), Argonne, IL. Data were indexed, scaled, and merged with iMosflm using the CCP4 suite 80. A homology model was constructed from the AgmNAT sequence using the program SWISS-MODEL 81 with the mosquito arylalkylamine N-acetyltransferase (PDB ID 4FD4) 21 as a template for molecular replacement. Molecular replacement was performed with Phaser-MR in PHENIX. Initial models were obtained by rigid-body refinement using phenix.refine. PHENIX 82 and Coot 83 were used to complete the model rebuilding and refinement. For refinement, the data were cut at 2.3 Å owing to relatively poor data quality at higher resolutions. The crystal structure has been deposited in the Protein Data Bank with accession code 5K9N.
Construction of AgmNAT site-directed mutants.
Site-directed mutants of AgmNAT were constructed by the overlap extension method. Using the primers shown in Table S1, each mutant was amplified using PfuUltra High-Fidelity DNA polymerase with the following PCR conditions: an initial denaturing step of 95 °C for 2 min; then 30 cycles of 95 °C for 30 s, a 60 °C annealing step for 30 s, and a 72 °C extension step for 1 min; then a final extension step of 72 °C for 10 min. Following amplification of each AgmNAT site-directed mutant, the sub-cloning, expression, and purification procedures were the same as described for the wild-type enzyme.
Measurement of enzyme activity. Steady-state kinetic constants for AgmNAT were determined by measuring the rate of coenzyme A release using Ellman's reagent (DTNB) at 412 nm (molar absorptivity = 13,600 M⁻¹cm⁻¹) 15-17. The assay consisted of 300 mM Tris pH 8.5, 150 μM DTNB, and the desired concentrations of acyl-CoA and amine substrates. Initial velocities were measured using a Cary 300 Bio UV-visible spectrophotometer at 22 °C. Acyl-CoA kinetic constants were evaluated by holding the concentration of agmatine at a constant saturating concentration (5 mM). Amine kinetic constants were evaluated by holding the concentration of acetyl-CoA at a constant saturating concentration (500 μM). The apparent kinetic constants were determined by fitting the resulting data to equation 1 using SigmaPlot 12.0, where v0 is the initial velocity, Vmax,app is the apparent maximal velocity, [S] is the substrate concentration, and Km,app is the apparent Michaelis constant: v0 = Vmax,app[S]/(Km,app + [S]) (equation 1). Each assay was performed in triplicate, and the uncertainty in the kcat,app and (kcat/Km)app values was calculated using equation 2, the standard error propagation for a ratio x/y, where σ is the standard error: σx/y = (x/y)·sqrt((σx/x)² + (σy/y)²) (equation 2).
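As an illustration of how apparent constants like those in Tables 2 and 3 can be extracted, the sketch below fits equation 1 with a nonlinear least-squares routine (SciPy here, standing in for SigmaPlot) and applies the error propagation of equation 2 to (kcat/Km)app. The substrate concentrations, rates, and enzyme concentration are placeholders, not data from the paper.

```python
# Sketch: fit equation 1 (v0 = Vmax,app[S] / (Km,app + [S])) and propagate
# the fit uncertainty into kcat/Km via equation 2. SciPy stands in for
# SigmaPlot; all numeric values below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    return Vmax * S / (Km + S)

S  = np.array([20, 50, 100, 250, 500], dtype=float)  # uM acetyl-CoA
v0 = np.array([0.7, 1.3, 2.0, 2.9, 3.3])             # uM/s, illustrative

(Vmax, Km), cov = curve_fit(michaelis_menten, S, v0, p0=[4.0, 100.0])
sVmax, sKm = np.sqrt(np.diag(cov))                   # standard errors

E_total = 0.1                                        # uM enzyme, assumed
kcat, s_kcat = Vmax / E_total, sVmax / E_total
ratio = kcat / Km
# Equation 2: relative errors of a ratio add in quadrature.
s_ratio = ratio * np.sqrt((s_kcat / kcat) ** 2 + (sKm / Km) ** 2)
print(f"kcat/Km = {ratio:.3f} +/- {s_ratio:.3f} uM^-1 s^-1")
```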
Kinetic mechanism and inhibitor analysis. The kinetic mechanism of AgmNAT was defined by evaluating double-reciprocal plots of initial velocity data for acetyl-CoA and agmatine, followed by determining the type of inhibition for substrate analogs used as dead-end inhibitors and for N-acetylagmatine used as a product inhibitor. Initial velocities were determined by varying the concentration of one substrate while holding the other substrate at a fixed concentration. Acetyl-CoA was evaluated at 20, 50, 100, 250 and 500 μM, whereas agmatine was evaluated at 60, 300, 750 and 1500 μM. The resulting initial velocity data were fit to equation 3 for an ordered bi-bi mechanism and to equation 4 for a ping-pong mechanism using IGOR Pro 6. Inhibition experiments with either substrate analogs or N-acetylagmatine were used to discriminate between ordered sequential, random sequential, and ping-pong kinetic mechanisms. Oleoyl-CoA, arcaine, and L-arginine methyl ester were used as dead-end inhibitors of AgmNAT, while N-acetylagmatine was used for product inhibition. Initial velocity patterns were generated by varying the concentration of one substrate, holding the other substrate concentration at its apparent Km, and changing the concentration of inhibitor for each data set in triplicate. The resulting data were fit to equations 5-7 for competitive, noncompetitive, and uncompetitive inhibition, respectively, using SigmaPlot 12.0. In equations 5-7, v0 is the initial velocity, Vmax,app is the apparent maximal velocity, Km,app is the apparent Michaelis constant, [S] is the substrate concentration, [I] is the inhibitor concentration, and Ki is the inhibition constant.

Rate versus pH. The pH dependence of the kinetic constants for acetyl-CoA was determined at intervals of 0.5 pH units over the range 6.5-9.5. The buffers used were MES (pH 6.5 and 7.0), Tris (pH 7.0-9.0), and AmeP (pH 9.0 and 9.5). The resulting data were fit to equation 8 (log (kcat/Km)app,acetyl-CoA) and equation 9 (log kcat,app,acetyl-CoA) to determine the apparent pKa values using IGOR Pro 6.34A, where c is the pH-independent plateau. The wild-type enzyme was evaluated in triplicate, whereas the E34A mutant was evaluated in duplicate.
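The single-ionization fit behind equations 8 and 9 can be written compactly as a rising profile k(pH) = c / (1 + 10^(pKa - pH)), with plateau c. The sketch below, with placeholder data points, shows how an apparent pKa of ~7.5 would be recovered (SciPy here, standing in for IGOR Pro).

```python
# Sketch of fitting a single-ionization, rising pH-rate profile of the
# form underlying equations 8 and 9: k(pH) = c / (1 + 10**(pKa - pH)),
# where c is the pH-independent plateau. Data points are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def rising_profile(pH, c, pKa):
    return c / (1.0 + 10.0 ** (pKa - pH))

pH = np.array([6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5])
k  = np.array([0.09, 0.24, 0.50, 0.76, 0.91, 0.97, 0.99])  # normalized

(c, pKa), _ = curve_fit(rising_profile, pH, k, p0=[1.0, 7.5])
print(f"plateau c = {c:.2f}, apparent pKa = {pKa:.2f}")     # pKa ~7.5
```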
Agmatine. To a solution of putrescine (2.0 g, 22.7 mmol) in water (20 mL) was added 2-methylisouronium sulfate (2.7 g, 11 mmol). The mixture was heated to 50 °C for 6 hours, then cooled in an ice bath for 30 minutes. During this time a white precipitate formed, which was collected by filtration and washed with ice water to give agmatine (1.3 g, 44%) as a white solid that was used without further purification. ¹H NMR (500 MHz, D₂O) δ 3.08 (t, J = 6.0 Hz, 2H), 2.81 (t, J = 6.8 Hz, 2H), 1.53 (br. s, 4H) ppm.

N-Acetylagmatine. To a mixture of agmatine (1.0 g, 7.62 mmol) in pyridine (10 mL) was added acetyl chloride (542 μL, 7.62 mmol) dropwise. The mixture was allowed to stir at room temperature for 4 hours, then concentrated on a rotary evaporator. The crude residue was adsorbed onto silica gel and purified by flash column chromatography (methylene chloride/methanol 19:1) to give N-acetylagmatine (400 mg, 30%) as a viscous, colorless oil.
Green tea consumption and risk of esophageal cancer: a meta-analysis of epidemiologic studies
Background Green tea has shown a chemopreventive role in cancer. Recently, several studies have suggested that green tea intake may affect esophageal cancer risk, but the results were inconsistent. Methods We performed a meta-analysis of all English- and Chinese-language studies of green tea consumption and esophageal cancer risk indexed in Medline, Embase, the Science Citation Index, the Chinese Biomedical Database and Wanfang Data from 1980 to June 2012. After reviewing each study, extracting data, and evaluating heterogeneity (chi-square-based Q test and I²) and publication bias (Begg and Egger tests), a meta-analysis was performed to evaluate the association between high/medium/low green tea consumption versus non-drinking and esophageal cancer risk. Pooled relative risks (RR) or odds ratios (OR) with 95% confidence intervals (CIs) were calculated using fixed- or random-effects models. Results Ten eligible epidemiologic studies, including 33731 participants and 3557 cases of esophageal cancer, were included. Eight of these were case-control studies, and two were cohort studies. Overall, there was no association between high/medium/low green tea consumption versus non-drinking and the risk of esophageal cancer (high, highest vs non-drinker: RR/OR = 0.76, 95% CI: 0.49 to 1.02; medium, drinker vs non-drinker: RR/OR = 0.86, 95% CI: 0.70 to 1.03; low, lowest vs non-drinker: RR/OR = 0.83, 95% CI: 0.58 to 1.08). In stratified analyses according to study design (case-control and cohort studies), country (China and Japan), participant source (population-based and hospital-based case-control) and gender (female and male), there were significant associations between high/medium/low green tea consumption versus non-drinking and the risk of esophageal cancer among females (high: RR/OR = 0.32, 95% CI: 0.10 to 0.54; medium: RR/OR = 0.43, 95% CI: 0.21 to 0.66; low: RR/OR = 0.45, 95% CI: 0.10 to 0.79), but not in the other subgroups. Conclusions We did not find a significant association between green tea consumption versus non-drinking and esophageal cancer risk, but evidence of a protective effect was observed among females.
Background
Esophageal cancer is a major concern worldwide, ranking as the sixth most common cause of cancer mortality [1]. Lifestyle factors such as cigarette smoking, alcohol drinking and dietary habits have been suggested to be associated with the carcinogenesis of esophageal cancer [2,3]. Tea is one of the most widely consumed beverages in the world [4]. Tea is divided into three major types according to the manufacturing process: green tea (non-fermented), oolong tea (half-fermented) and black tea (fermented). Green tea and its constituents, such as epigallocatechin-3-gallate (EGCG), epigallocatechin (EGC) and epicatechin-3-gallate (ECG), have been shown to inhibit tumorigenesis in many animal models [5,6]. A number of epidemiologic studies have evaluated the relation between green tea intake and esophageal cancer risk in humans, but with differing results. Two large case-control studies [7,8] showed a protective effect of green tea intake on esophageal cancer incidence. However, another case-control study including 883 cases showed that people with higher consumption of green tea were more susceptible to esophageal cancer [9]. No quantitative attempt has been made to summarize the results of studies exploring a possible association between green tea and esophageal cancer. Therefore, we conducted this meta-analysis to examine the association in epidemiologic studies.
Search strategy
The electronic databases Medline (1966 to June 2012), Embase (1980 to June 2012), the Science Citation Index (1945 to June 2012), the Chinese Biomedical Database (1981 to June 2012) and Wanfang Data (1980 to June 2012) were searched for epidemiologic studies, published in English or Chinese, of green tea intake in relation to esophageal cancer risk. We used the search terms "tea", "food", "diet", "beverage", "drinking" or "tea polyphenol" combined with "esophageal", "oesophageal", or "esophagus". First, the titles and abstracts of the identified studies were used to exclude any obviously irrelevant studies. The full texts and tables of the remaining articles were retrieved and examined to determine the relevance of the study design and data, according to the inclusion criteria detailed below. Additional studies were identified by screening the reference lists of each relevant study. Furthermore, reviews on the topic were retrieved from the above-mentioned databases in order to potentially broaden the search by identifying additional relevant publications from the studies cited in the reviews.
Inclusion criteria
The following inclusion criteria were used to select relevant studies for the meta-analysis: (a) human studies, not laboratory or animal studies; (b) the daily consumption of the natural green tea product, not of green tea extracts or supplements, was recorded; (c) the outcome of interest had to be the incidence of esophageal cancer; and (d) relative risk (RR) or odds ratio (OR) estimates with corresponding 95% CIs (or sufficient information to calculate them) were reported. If two or more studies used the same population resource or had overlapping subjects, only the study reporting the largest population was selected for inclusion in the meta-analysis.
Data extraction
Two reviewers (Ping Zheng and Haiming Zheng) independently performed the data extraction. Disagreements were resolved by two other reviewers (Deng and Zhang), and a consensus was reached for all data prior to the meta-analysis. The following information was collected: the first author's name, publication year, country of origin, follow-up duration, gender, number of participants (cases and cohort size), measurements of green tea consumption, relative risk (RR, the ratio of the incidence rates in the exposed and non-exposed groups, suitable for cohort/prospective studies) or odds ratio estimates (OR, suitable for case-control studies), and their corresponding 95% confidence intervals (95% CIs). When a study provided separate RR/OR estimates for men and women, we treated them as two different studies. If a study provided several RR/ORs, we extracted the RR/OR reflecting the greatest degree of control for potential confounding factors. When a study provided RR/ORs for both esophageal cancer and invasive esophageal cancer, we used the former because it included more cases.
Statistical analysis
To evaluate the association between green tea consumption and the risk of esophageal cancer, RR/ORs with 95% CIs were calculated using pooled data from the studies. Data pooling was carried out using the fixed-effects model (based on the Mantel-Haenszel method) or the random-effects model (based on the DerSimonian and Laird method) [10,11]. The random-effects model was used if heterogeneity existed between the studies from which the data were extracted; otherwise, the fixed-effects model was used. Statistical heterogeneity between studies was assessed with the chi-square-based Q test and I², and heterogeneity was considered significant when the two-tailed P value was less than 0.10 [12]. I² was used to quantify the variation in RR/OR attributable to heterogeneity [13]. Publication bias was estimated using the Begg and Mazumdar adjusted rank correlation test and the Egger regression asymmetry test [14,15]. Finally, the statistical significance of the RR/OR was determined using the Z test.
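The pooling arithmetic described above can be summarized in a few lines. The sketch below computes an inverse-variance fixed-effect pooled estimate on the log scale, together with Cochran's Q and I²; the example ORs and CIs are placeholders, not study data, and a DerSimonian-Laird random-effects step would add a between-study variance term on top of this.

```python
# Sketch of fixed-effect (inverse-variance) pooling of ORs with Cochran's
# Q and I^2 for heterogeneity. The ORs and 95% CIs are placeholders.
import numpy as np

def pool_fixed(or_values, ci_lower, ci_upper):
    log_or = np.log(or_values)
    # Back out each standard error from the 95% CI width on the log scale.
    se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)
    w = 1.0 / se ** 2                         # inverse-variance weights
    pooled = (w * log_or).sum() / w.sum()
    pooled_se = np.sqrt(1.0 / w.sum())
    Q = (w * (log_or - pooled) ** 2).sum()    # Cochran's Q
    df = len(or_values) - 1
    I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    ci = np.exp([pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se])
    return np.exp(pooled), ci, I2

pooled_or, ci95, I2 = pool_fixed(np.array([0.72, 0.95, 1.10]),
                                 np.array([0.50, 0.70, 0.80]),
                                 np.array([1.04, 1.29, 1.51]))
print(pooled_or, ci95, I2)
```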
Because the original tea-consumption data were reported on nonlinear dose scales, we divided the level of consumption into high, medium and low groups. When a study reported dose strata, we took the highest and lowest strata as the high and low groups, respectively, for comparison with non-drinkers; when no dose strata were reported, the drinker group served as both the high and the low group. For the medium group, we compared all drinkers with non-drinkers, combining dose strata where several were reported. This generated three comparisons of tea intake: highest vs non-drinker, drinker vs non-drinker, and lowest vs non-drinker.
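The grouping rule can also be expressed as a small function, shown below with illustrative field names (none of which come from the paper): studies with dose strata contribute their top and bottom strata to the high and low comparisons, while studies reporting only drinker vs non-drinker contribute the drinker group to all three.

```python
# Sketch of the exposure-grouping rule described above; the dictionary
# keys and stratum labels are illustrative, not from the paper.
def assign_comparisons(study):
    """Map one study's exposure data onto high/medium/low vs non-drinker."""
    strata = study.get("dose_strata")   # ordered low -> high, or None
    if strata:
        return {"high": strata[-1],
                "low": strata[0],
                "medium": "all dose strata combined"}
    # No dose information: the drinker group stands in for every level.
    return {"high": "drinker", "low": "drinker", "medium": "drinker"}

print(assign_comparisons({"dose_strata": ["<1 cup/day", "1-3 cups/day", ">3 cups/day"]}))
print(assign_comparisons({}))           # study with no dose strata
```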
We performed a meta-analysis of all the included studies, and then made subgroup analyses according to study design, country, participant source and gender. This work was conducted on the basis of the MOOSE guidelines proposed by the Meta-Analysis of Observational Studies in Epidemiology group [16]. All P values are two-tailed. P values < 0.05 were considered statistically significant, except for the heterogeneity test (P < 0.10). All analyses were performed with Stata version 11.0 (Stata Corporation, College Station, Texas).
Characteristics of included studies
Ten epidemiologic studies [7-9,17-21] including 33731 participants and 3557 cases of esophageal cancer were identified according to the inclusion criteria of the meta-analysis. The characteristics of the included studies are summarized in Table 1 (characteristics of the included studies of green tea intake and incidence risk of esophageal cancer). The publication dates ranged between 1994 and 2011. Eight of the studies [7-9,18-21] were case-control studies (seven conducted in China and one in Iran), and the other two were cohort studies conducted in Japan [17]. Among the eight case-control studies, seven were population-based case-control (PCC) studies [7-9,18,20,21], and the remaining one was a hospital-based case-control (HCC) study [19]. In addition, two studies [7,18] provided gender-specific OR estimates and 95% CIs for the association between green tea consumption and esophageal cancer risk.
Meta-Analysis of Case-control studies
Eight case-control studies were included. Of these, two studies [7,18] presented ORs and CIs for females and males separately, and one study presented them for participants from two different areas; each of these was treated as two studies in the analysis.
In this meta-analysis, high green tea consumption was found to be associated with a significantly lower risk of esophageal cancer (RR/OR = 0.72, 95% CI: 0.45 to 0.98, Table 2). The P value of the heterogeneity chi-squared test was < 0.01, and the corresponding I² statistic was 76%, suggesting variability between studies. The P values for the Begg's and Egger's tests were 0.07 and 0.02, respectively, suggesting the probability of publication bias.
Meta-Analysis of Cohort studies

The P values of the heterogeneity chi-squared test were 0.90, 0.90 and 0.59, respectively, and the corresponding I² statistics were all 0.0%, indicating low variability between studies. The P values for the Begg's and Egger's tests were 1.00 and not applicable, 0.26 and 0.17, and 0.07 and 0.01, respectively.
Combined and Subgroup Analysis
Furthermore, we performed a combined analysis of the case-control and cohort studies. The association between green tea consumption versus non-drinking and the risk of esophageal cancer was not statistically significant in any of the three groups (high: RR/OR = 0.76, 95% CI: 0.49 to 1.02, Table 2; medium: RR/OR = 0.86, 95% CI: 0.70 to 1.03, Table 3; low: RR/OR = 0.83, 95% CI: 0.58 to 1.08, Table 4). The P values of the heterogeneity chi-squared tests were all < 0.01, and the corresponding I² statistics were 73%, 56% and 66%, respectively. The P values for the Begg's and Egger's tests were 0.37 and 0.16, 0.24 and 0.22, and 0.16 and 0.03, respectively. Overall, no association was found between green tea consumption versus non-drinking and the risk of esophageal cancer.
When stratified by country, we found no association between green tea consumption versus non-drinking and the risk of esophageal cancer in China, Japan or northern Iran (Tables 2, 3, 4).
When stratified by participant source, we found a significant association between high green tea consumption versus non-drinking and esophageal cancer risk among the PCC studies (RR/OR = 0.71; 95% CI: 0.43-0.98; P < 0.01 for heterogeneity, I² = 78%), but not in the HCC study (RR/OR = 0.92; 95% CI: 0.49-2.32, with only one study, Table 2). We found no association between medium/low green tea consumption versus non-drinking and the risk of esophageal cancer in the PCC or HCC studies. Two studies [7,18] provided gender-specific RR estimates and 95% CIs for the association between green tea consumption and esophageal cancer risk, so we also performed a stratified analysis by gender. The results of the meta-analysis showed significant associations between high/medium/low green tea consumption versus non-drinking and the risk of esophageal cancer among females (high: RR/OR = 0.32, 95% CI: 0.10 to 0.54, P = 0.75 for heterogeneity, Table 2; medium: RR/OR = 0.43, 95% CI: 0.21 to 0.66, P = 0.35 for heterogeneity, Table 3; low: RR/OR = 0.45, 95% CI: 0.10 to 0.79, P = 0.16 for heterogeneity, Table 4), but not among males.
Sensitivity analysis
Sensitivity analyses were carried out by excluding one study at a time from each group; they did not alter the original results.
Discussion
Our meta-analysis of epidemiologic studies did not find a significant association between high/medium/low green tea consumption versus non-drinking and esophageal cancer risk, although evidence of a protective effect was observed among females.
However, only two case-control studies provided female-specific estimates, and both were conducted in China, which may introduce selection bias. In addition, positive results are more easily published, which may generate publication bias. These limitations make the female results worthy of further consideration. If these factors are excluded, other explanations for the effect observed among females should be considered.

Figure 4. Forest plot: results of the studies on medium green tea intake. The size of the data markers (squares) corresponds to the weight of the study in the meta-analysis. The combined relative risk is calculated using the random-effects method.
Sex hormones may explain why women experienced a significantly lower risk of esophageal cancer with high green tea intake. A sex hormone-mediated pathway may be involved in esophageal carcinogenesis, as supported by two experimental studies [22,23]. A suppressive effect of estrogen and a promoting effect of androgen were shown in the experimental induction of esophageal cancer by administration of a chemical carcinogen [22]. Meanwhile, the growth rate of metastatic squamous cell carcinoma of the esophagus was inhibited by estrogen and enhanced by testosterone, respectively [23]. Additional studies are warranted to explain and confirm this preliminary evidence.
A number of experimental and clinical studies have suggested that drinking beverages at high temperatures is a cause of esophageal cancer. A previous experimental study reported that more esophageal papillomas formed, and grew rapidly in size, when liquids were administered at 70°C and above [24]. In our meta-analysis, two included studies [9,19] both found that drinking tea at high temperature significantly increases the risk of esophageal cancer. However, the two studies used different definitions of high versus normal green tea drinking temperature, which made further stratified analysis difficult.
Three studies [9,17,19] included in the meta-analysis investigated the dose-response relationship of green tea drinking. No dose-response relationship was observed in two of these studies [17,19]. In the study by Wu [9], higher monthly tea consumption (P for trend = 0.07) and usually drinking tea at high concentration (P for trend = 0.01) showed a positive tendency with cancer risk among ever-drinkers after adjusting for tea temperature. We analysed the data by grouping intake into high, medium and low levels; however, we did not find a significant effect of green tea consumption (versus non-drinking) on esophageal cancer risk.
A protective effect of high green tea consumption on esophageal cancer was observed among the case-control studies and the PCC studies, but both the heterogeneity and the publication bias were significant, so this protective effect may not be credible. When we performed the combined analysis of case-control and cohort studies, the heterogeneity and publication bias were not significant in the overall analysis, suggesting no significant association between high green tea consumption (versus non-drinking) and esophageal cancer risk in the meta-analysis. Focusing on the heterogeneity and publication bias of the analyses, we found that estimates based on several studies with large sample sizes were highly heterogeneous, whereas estimates based on a couple of small studies showed no heterogeneity. For example, in the meta-analyses of all studies (ten studies) and of the case-control studies (eight studies), the P values for heterogeneity were < 0.01, indicating variability between studies; in the meta-analyses among women (two studies) or men (two studies), the P values for heterogeneity were both > 0.05, indicating low variability. When heterogeneity existed, we used the random-effects model to adjust for it. The publication bias among the case-control and PCC studies was significant in the high group. Possible explanations are as follows: (a) most of the case-control and population-based case-control studies were performed in China [7-9,18-21], which may introduce bias; (b) retrospective (recall) bias; and (c) positive results may be published more easily. Furthermore, we performed subgroup analyses by sex, geographic region and type of epidemiologic study.
However, several limitations of our meta-analysis should be considered. First, publication bias in the Chinese studies and the case-control studies cannot be ruled out; the protective effect of green tea in women may be driven by publication bias, because the female-specific studies were all Chinese case-control studies. Second, there were too few epidemiologic studies to stratify by dose and temperature of green tea intake, which may weaken the results. Third, non-English and non-Chinese literature could not be reviewed because of the language barrier. Last, most of the studies included in the analysis were conducted in Asian populations owing to the popularity of green tea in East Asia; the results should therefore be extrapolated to other populations with caution.
Conclusions
Our meta-analysis found no significant association between green tea consumption (versus non-drinking) and esophageal cancer risk overall, but evidence of a protective effect was observed among women. Additional studies (especially cohort studies, and studies from more countries) with careful control of interacting factors, including the dose and temperature of green tea intake, are needed to provide a more definitive conclusion on whether routine consumption of green tea can guard against esophageal cancer.
|
2016-05-12T22:15:10.714Z
|
2012-11-21T00:00:00.000
|
{
"year": 2012,
"sha1": "1c422260c71682b47a2e778452b797c5ef3328b4",
"oa_license": "CCBY",
"oa_url": "https://bmcgastroenterol.biomedcentral.com/track/pdf/10.1186/1471-230X-12-165",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "993f40614400b39ba87fd7588dd33e0b5176d7fe",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
215779746
|
pes2o/s2orc
|
v3-fos-license
|
Is health literacy related to health behaviors and cell phone usage patterns among the text4baby target population?
Background Text4baby provides educational text messages to pregnant and postpartum women and targets underserved women. The primary purpose of this study is to examine the health behaviors and cell phone usage patterns of a text4baby target population and the associations with health literacy. Methods Pregnant and postpartum women were recruited from two Women, Infant and Children clinics in Atlanta. Women were asked about their demographics, selected pregnancy or postpartum health behaviors, and cell phone usage patterns. Health literacy skills were measured with the English version of the Newest Vital Sign. Multivariable logistic regression was used to examine health behaviors and cell usage patterns by health literacy classification, controlling for commonly accepted confounders. Results Four hundred sixty-eight women were recruited, and 445 completed the Newest Vital Sign. Of these, 22% had inadequate health literacy, 50% had intermediate health literacy, and 28% had adequate health literacy skills. Compared to adequate health literacy, limited literacy was independently associated with not taking a daily vitamin during pregnancy (OR 3.6, 95% CI: 1.6, 8.5) and never breastfeeding their infant (OR 1.4, 95% CI: 1.1, 1.8). The majority (69.4%) of respondents received nine or more text messages a day prior to enrollment, one in four participants (24.6%) had changed their number within the last six months, and 7.0% of study participants shared a cell phone. Controlling for potentially confounding factors, those with limited health literacy were more likely to share a cell phone than those with adequate health literacy (OR 2.57, 95% CI: 1.79, 3.69). Conclusions Text4baby messages should be appropriate for low health literacy levels, especially as this population may have a higher prevalence of targeted unhealthy behaviors. Text4baby and other mhealth programs targeting low health literacy populations should also be aware of the different ways that these populations use their cell phones, including: sharing cell phones, which may mean participants will not receive messages or have special privacy concerns; frequently changing cell phone numbers, which could lead to higher drop-off rates; and the penetrance of text messages in a population that receives many messages daily.
Background
Poor pregnancy outcomes, including low birth weight and preterm birth, continue to be a problem in the US, particularly for minority women and those with few resources [1]. Rates for low birth weight and preterm births have remained relatively constant since 1980, with improvements in infant mortality attributable primarily to advanced health care interventions for preterm infants as opposed to increased utilization of preventive services. Infant mortality remains high in the United States when compared with other industrialized countries, with a rate of 6.42/1,000 live births between 2008 and 2009 [2].
Unhealthy behaviors in the prenatal period, including smoking, alcohol use, and poor diet, are linked to poor pregnancy outcomes [1]. Conversely, proactive healthy behaviors in the preconception and prenatal period, such as vitamin use, influenza vaccination, and regular prenatal care, lead to improved outcomes [1,3]. While behavior modification alone has had limited success in improving poor pregnancy outcomes, a multipronged behavioral intervention has the potential for a significant combined impact [4]. Ideally, these interventions would be targeted to women who have the greatest potential to benefit. Women at higher risk of poor outcomes, however, have traditionally been the most difficult to reach.
The recent explosion of new technologies offers novel opportunities for counseling and behavior change for these historically underserved groups. Text messaging is unique among newer technologies as it is widely used across income and educational strata. According to the Pew Research Center's Global Attitudes Project, a survey of 21 representative countries found that 85% of those surveyed owned a cell phone, and of those, 75% reported regularly using text messaging [5]. Significantly, text messaging was more common in the poorest nations surveyed. In a separate Pew Research Center survey of Americans conducted in 2011, 83% of Americans owned a cell phone, and 73% of cell phone owners used text messaging [6]. The groups who sent the most text messages were young (18-24 years), earned less than $30,000 a year, and had less than a high school education.
Text messaging is thus a potentially powerful avenue for reaching low-resource populations, and has led to the creation of mobile health interventions, known as mhealth programs. One of the few programs that focuses on maternal and infant health, text4baby, sends educational messages to pregnant and postpartum women with the goal of promoting healthy, preventative behaviors. The program was created by a public-private partnership overseen by the National Healthy Mothers, Healthy Babies Coalition. Text4baby developed a series of messages from evidence-based guidelines from the American College of Obstetrics and Gynecology and the Bright Futures Guidelines for Infants, Children, and Adolescents. Participants in the text4baby program receive free educational messages three times a week, timed to their gestational age or their infant's birthdate.
Text4baby's educational messages were refined in focus groups at community centers in six cities across the country and are aimed at women with low health literacy, with messages written at a sixth grade level [7]. This target population therefore likely overlaps with the group most likely to text: young, less educated, and low-income women. Defined as "the degree to which individuals can obtain, process, and understand basic health information and services needed to make appropriate health decisions", health literacy has emerged as a marker of existing knowledge, the ability to process new health information, and a strong predictor of health behaviors [8]. Some studies have indicated that it is a stronger predictor of outcomes than education alone [9]. Targeting women with low health literacy for health education could potentially have the greatest impact, encouraging women at high risk for poor pregnancy outcomes to make healthy decisions for themselves and their children [10].
The effect of health literacy on outcomes may be mediated by higher rates of unhealthy behaviors. A 2011 meta-analysis conducted for the Agency for Healthcare Research and Quality found that low health literacy is associated with lower acceptance of influenza vaccine and decreased ability to interpret health messages [11]. Other studies have found that women with low health literacy are less likely to breastfeed [12], plan their pregnancy [13], and to be insured [14]. These associations are not consistent across all studies, suggesting that other factors, such as attitudes and cultural beliefs about medicine, may mediate the effect of health literacy on health behaviors [15]. More information is needed about how the prevalence of these behaviors varies with health literacy and how to best support behavior change in these populations.
In addition to differences in health behaviors, successful mhealth education requires intimate knowledge of the way that target populations use their cell phones. Though women who are likely to have low health literacy have adopted text messaging, they may use it in different ways. Younger cell phone users in the US are more likely to share a cell phone, for instance, and lower-income cell phone users are more likely to use prepaid cell phone plans [6]. Those with prepaid plans are in turn less likely to use text messaging and more likely to change their numbers frequently. Americans with higher education are more likely to look for health information on their phones, as are ethnic minorities [16]. Since text messaging is a written medium, health literacy may influence the type of messages that users send and receive, and their understanding and use of these messages.
We explored the prevalence of healthy behaviors in a group of pregnant and postpartum women enrolled in text4baby, and their relationship with health literacy skills. In addition, we explored the relationship between cell phone usage characteristics and health literacy skills to better understand how mhealth programs like text4baby can potentially be used to improve maternal and child health in high risk populations.
Study design
Cross-sectional survey using a stratified random sample design.
Setting
The study was conducted in two Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) Clinics in Metro Atlanta as part of a broader evaluation of text4baby in this population.
Study population
Women were recruited from nutrition classes (which are mandated for all those receiving WIC support) at the two WIC clinics. Interviewers attended all classes during the study collection period, and either approached all women in the class if the class was small, or randomly selected participants using numbered slips of paper, creating a stratified random sample. Women who were willing to participate were eligible for the study if they: 1) were the biological mother of a child under 10 months old (postpartum) or were currently pregnant; 2) had a working cell phone; 3) could receive text messages; 4) had not been enrolled in text4baby previously; and 5) spoke English. Those who qualified to participate were consented orally with both the Emory University approved consent and HIPAA agreement forms. Recruitment procedures have been described previously in more detail [17].
Data collection
A trained interviewer read the pregnant or postpartum baseline survey to participants in person to ensure comprehension; the survey took approximately ten to fifteen minutes. Data were collected at three points: baseline, two weeks, and two-to-six-month follow-up. This paper analyzes the baseline data.
Measures
At the baseline interview, women self-reported all demographic, behavioral, and cell usage data. Women were asked about behaviors that depended on whether they were pregnant or postpartum: 1) All women were asked if they currently smoked (possible answers "no", "some days", or "every day"); if they had rules about smoking in the house ("no", "no one is allowed to smoke in the house", or "people are allowed to smoke in some rooms sometimes"); how often they felt "down-hearted or blue" ("all of the time", "most of the time", "some of the time", "a little of the time", "none of the time"); and if they had had an alcoholic drink in the past thirty days ("yes", "no", or "don't know"). 2) Pregnant women were asked how many days a week they participated in physical activity for thirty minutes or more ("less than one day a week", "one to two days", "three to four days", "five or more", or that they were advised against exercise by a health professional); if they had a seasonal flu shot in the last year ("yes", "no", "don't know"); how often they kept their appointments ("always", "nearly always", "sometimes", "seldom", "never"); and how often they took a multivitamin in the past week ("I did not take any vitamins at all", "1-3 times a week", "4-6 times a week", "daily"). 3) Postpartum participants were asked if they were currently breastfeeding, and if not, if they breastfed at any point after birth; and how often they put their baby in a car seat ("always", "nearly always", "sometimes", "never", and "don't have a car"). All answers were collapsed into healthy and unhealthy behaviors; for instance, those who smoked sometimes or always versus those who did not smoke. These collapsed categories are presented in the results.
The other outcomes were cell phone usage patterns. Women were asked the average number of text messages they received per day ("less than 2 per day;" "3 to 5 per day;" "6 to 8 per day;" "9 or more per day"). Answers were recoded into "9 or more per day" versus all others. Women were also asked if they currently shared a cell phone ("yes" or "no"). Finally, women were asked how many phone numbers they had had in the past six months ("1, 2, 3, 4, 5, >6"). They were classified into more than one versus one cell phone number.
The primary predictor was health literacy. This was measured during the final portion of the baseline survey using the Newest Vital Sign (NVS) assessment [18]. The NVS is a six-question instrument that asks respondents to interpret an ice cream label, and incorporates both reading literacy and numeracy skills. In the original paper on this health literacy metric, the creators of the NVS found that those with a score less than two on the English version were likely to have inadequate health literacy, and those with a score of four or greater were likely to have adequate health literacy when measured against the Test of Functional Health Literacy in Adults. Women were divided into three health literacy categories: 0-1 for limited health literacy, 2-3 for intermediate health literacy, and 4-6 for adequate health literacy [18].
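As a small illustration of the scoring rule just described, the following sketch maps an NVS score (0-6) to the three categories used in this study; the function name is ours, not the authors':

```python
def nvs_category(score: int) -> str:
    """Classify a Newest Vital Sign score (0-6) as in this study."""
    if not 0 <= score <= 6:
        raise ValueError("NVS scores range from 0 to 6")
    if score <= 1:
        return "limited"       # 0-1: limited health literacy
    if score <= 3:
        return "intermediate"  # 2-3: intermediate health literacy
    return "adequate"          # 4-6: adequate health literacy
```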
Demographics were included as covariates: education level (less than high school, high school or GED, or beyond high school), ethnicity (black versus all others), income (less than $10,000, $10,001 to $20,000, and more than $20,000), employment (any current employment versus all others), and marital status (living with a partner/married versus all others).
Data analysis
All analyses took into account the sampling design and weights (based on selection probability, adjusted for non-response) using SAS v9.3 survey procedures (SAS Institute, Cary, NC, USA) so that reported estimates reflect the clinic population. Demographics, behaviors, and cell phone usage characteristics were summarized by median and interquartile range or proportions, overall and by NVS category. The statistical significance of crude associations between NVS and other characteristics was evaluated with the Rao-Scott likelihood ratio chi-square test. Multiple logistic regression with a generalized logit function was used to evaluate the association of health literacy categories with those health behaviors that were significantly associated in crude tests, controlling for the common confounders income and education. The association of health literacy with cell phone usage characteristics was similarly assessed, controlling for the most common confounders identified in the literature: age, education, income, employment status, and marital status [11]. Race was not controlled, as our sample was more than 90% African American. Linearity of the logit for age was confirmed, and multicollinearity and other model diagnostics were checked. The type I error rate (alpha) was set at 0.05.
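The analysis itself was run in SAS survey procedures. As a rough illustration of a weighted logistic regression of a health behavior on health literacy category plus confounders, a Python sketch using statsmodels is shown below. The column names and file are hypothetical, and `freq_weights` only approximates the survey weighting; it does not reproduce SAS's design-based (Taylor linearization) standard errors:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per respondent, with columns
# took_vitamin (0/1), nvs_cat (limited/intermediate/adequate),
# income, education, and weight (sampling weight).
df = pd.read_csv("baseline_survey.csv")  # hypothetical file

model = smf.glm(
    "took_vitamin ~ C(nvs_cat, Treatment('adequate')) + C(income) + C(education)",
    data=df,
    family=sm.families.Binomial(),
    freq_weights=df["weight"],
)
result = model.fit()
print(result.summary())  # exponentiate coefficients to obtain odds ratios
```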
Baseline characteristics
A baseline survey was read to 468 women, and the 445 participants who completed the NVS were included in the analysis. Participants had an estimated median age of 25 (Table 1). The majority (92.3%) of study participants were African American; 57.3% had twelve or fewer years of schooling, and 81.1% had a household income under $20,000. Slightly more than half (56.7%) of participants were unemployed or students, and 29.7% were married or living with a partner. Health literacy scores overall were low, with 22% having limited health literacy, 50% intermediate health literacy, and 28% adequate health literacy. Higher health literacy was significantly associated with older age, higher education, higher income, and being employed. Race and marital status did not differ significantly between health literacy categories.
Health literacy and health behaviors
The prevalence of many unhealthy behaviors was significantly associated with low health literacy (Table 3).
Health literacy and cell phone usage characteristics
Overall, an estimated 7.0% of the sample population shared cell phones, 24.6% had changed their cell phone number at least once in the six months prior to enrollment, and over two-thirds received 9 or more texts per day (Table 4). Sharing a cell phone was more than twice as common among those with the lowest health literacy scores (p < 0.01), but health literacy was not significantly associated with changing cell phone numbers or receiving more texts. For sharing a cell phone, NVS score remained predictive in the presence of all potential confounders, with those in the lowest health literacy category having 2.57 times the odds of sharing a cell phone compared to those with intermediate health literacy (95% CI 1.79, 3.69), and 1.67 times the odds of those with adequate health literacy (95% CI 1.06, 2.63) (Table 5). Health literacy was not independently associated with changing cell phone numbers or the number of texts received daily. Younger women were significantly more likely to change phone numbers and receive more texts, and lower income was significantly associated with sharing a cell phone. Other factors were either not significantly associated with cell phone usage or did not demonstrate a consistent trend.
Discussion
In this study population, lower health literacy was significantly associated with a variety of unhealthy behaviors that are known to have a negative impact on maternal and infant health. This is consistent with several studies that have found a similar association between low health literacy and certain unhealthy behaviors, including smoking and not receiving an influenza vaccine [11]. Fewer studies, however, have looked directly at the target population of text4baby: pregnant and postpartum women. Importantly, daily prenatal vitamin intake remained associated with health literacy in our sample even after controlling for confounders, making this an important target for future mhealth programs aimed at lower health literacy levels.
Given the higher prevalence of unhealthy behaviors amongst the lowest health literacy groups, it is important that future analyses of text4baby examine the relative impact of the program at different literacy levels. Though the developers have written messages that are meant to accommodate lower health literacy levels, the messages may need to be simplified further: limited health literacy on the NVS corresponds to less than a sixth grade reading level, the level at which text4baby messages are written. Supplemental information delivery, which several studies have found to be effective, may be incorporated in the future, especially using smartphone platforms. These supplemental delivery methods include using videos, icons, and verbal narratives [11].

This study is also one of the first to examine directly how people enrolled in an mhealth program use their cell phones, and how these patterns of usage are related to health literacy and demographic variables. Our analysis shows that those with low health literacy are more likely to share a cell phone, and this relationship remained significant after adjusting for age, income, education, employment status, and marital status. Those in the lowest income category were also more likely to share a cell phone. The youngest group (ages 18-22) was the most likely to have changed their cell phone number at least once in the previous 6 months, as were the unemployed. The youngest participants were also the most likely to receive nine or more text messages a day. These findings are largely consistent with national surveys of text messaging. Our findings are also supported by data showing that the youngest Americans are more likely to share a cell phone [6]. Though we did not find data on cell phone number instability, low-income populations are more likely to use prepaid cell phone cards and therefore more likely to experience service disruptions [5]. Use of prepaid plans, which do not require reading and signing a complicated contract and are also less expensive, may be more common among women with lower health literacy. So far, however, this relationship has not been explored directly.
To determine the effectiveness of text4baby, researchers will need to determine if rates of knowledge acquisition and behavior change differ depending on health literacy skills. Text4baby continues to be promising in this population given that it incorporates a few core features of effective communication with low health literacy populations, namely presenting important information by itself and using limited numeracy in messages [11]. It is possible that text messaging may not be the most effective medium to reach those with low health literacy, or that supplemental learning aids will be necessary to effect change in this population. The Agency for Healthcare Research and Quality, for instance, found in their systematic review that those with low health literacy benefit from visual aids and videos. Given the rapid expansion of smartphones, which are now available on many prepaid plans, text4baby could explore the advantages and use of these expanded platforms.
Several mhealth interventions have successfully improved health behaviors known to impact maternal and infant health [10-14]. Mhealth programs have rarely, however, examined directly the ways that participants use their cell phones, or the ways that these usage patterns may affect the design, measurement, or retention of these programs. Drop-off is a particularly important problem for our study and for larger mhealth studies in general. One pilot study of text message reminders for parents in a low-income urban clinic found that 19 of 48 participants had changed their number by the seven-month contact and were thus lost to follow-up [19]. Determining what leads to successful retention in these programs is essential to designing an intervention that can be evaluated and scaled up beyond a pilot study. As of yet, very little data is available on what leads to drop-off and how it might be prevented.
There are several limitations to our research. First, health behaviors were self-reported and therefore may not represent the true behavior of the baseline population, especially for socially undesirable behaviors; for instance, only four participants indicated that they drank during pregnancy. Second, some outcomes had too few events to allow control for multiple confounders. Third, we did not ask participants for more information about their cell phone usage patterns, particularly whether they used prepaid cell phone cards or had long-term service plans. In large-scale surveys, these plans are more common among low-income populations and therefore were likely common in our study sample; this may be an important factor in the usage patterns we found. Fourth, the relationships between health literacy and other predictors and these cell phone usage characteristics may have been underestimated in this population, as it was primarily a low-literacy population and fairly homogeneous with respect to demographic variables. Fifth, we do not have data on how long participants continued to receive text4baby messages after enrollment, and therefore cannot infer how these different usage patterns would affect retention or receipt of messages. Finally, our sample of women from two urban clinics may not be generalizable to the whole population, and cell phone usage may vary by region. Despite the noted limitations, this study provides insight into the ideal design of mhealth programs, particularly for low health literacy and low-resource populations in urban centers. As preliminary data and surveys indicate that the ways people use their cell phones are not uniform, this is particularly important for programs like text4baby that are aimed at large and diverse populations. In addition, the study population is similar to the ideal target population of the national text4baby program, with participants having significant health burdens and low health literacy.
One implication of this study is that mhealth participants should be asked about how they use their cell phones. If targeting those with low health literacy or other groups who may be more likely to share a cell phone, designers of mhealth programs should consider how they will determine that the intended recipient actually read the message, and how they will ensure the privacy of the participant. Programs could build in ways to confirm that the intended recipient has read the message, for example by using the name of the recipient, having them text back, providing a free cell phone to the recipient, or sending password-protected messages. In promoting retention, program designers should consider whether their participants are likely to change their numbers often, especially if their target group is young, unemployed, or has less than some college education. The appropriateness of text messaging as a means of reaching low socio-economic and low health literacy populations is reaffirmed by our study, as the majority of participants received nine or more messages a day. This, however, means that text4baby and similar mhealth programs must rise above the noise of the other messages that enrollees receive in a given day.
Future studies of text4baby should identify opportunities to determine that the intended recipient has received the message. They should also measure retention directly to determine what infrastructural barriers may lead to drop-off. Finally, participants should be asked directly about how they use their cell phones and ways that text4baby could more effectively address the needs of its target population. Addressing these infrastructural issues is an important step in refining the design of these programs and measuring their impact.
Conclusion
Health promotion through text messaging is a promising avenue to target maternal and infant health given the high prevalence of text messaging among women who are young and have lower health literacy. The ways in which low health literacy groups use their cell phones should affect the design of text4baby and similar programs to ensure success. Participants in these programs should be asked about the ways in which they use their cell phones in order to ensure receipt of messages, privacy amongst groups more likely to share phones, and penetrance of the messages amongst high volume texters.
|
2016-05-16T18:35:46.333Z
|
2014-05-07T00:00:00.000
|
{
"year": 2014,
"sha1": "7f2d10e4b4608dfa98fa0bd42d8ca7ac3e3a2cf3",
"oa_license": "CCBY",
"oa_url": "https://archpublichealth.biomedcentral.com/track/pdf/10.1186/2049-3258-72-13",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "83ece50316a5a80427fec5b85d4b929904d201dc",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
247158743
|
pes2o/s2orc
|
v3-fos-license
|
EdgeMixup: Improving Fairness for Skin Disease Classification and Segmentation
Skin lesions can be an early indicator of a wide range of infectious and other diseases. The use of deep learning (DL) models to diagnose skin lesions has great potential in assisting clinicians with prescreening patients. However, these models often learn biases inherent in training data, which can lead to a performance gap in the diagnosis of people with light and/or dark skin tones. To the best of our knowledge, limited work has been done on identifying, let alone reducing, model bias in skin disease classification and segmentation. In this paper, we examine DL fairness and demonstrate the existence of bias in classification and segmentation models for subpopulations with darker skin tones compared to individuals with lighter skin tones, for specific diseases including Lyme, Tinea Corporis and Herpes Zoster. Then, we propose a novel preprocessing, data alteration method, called EdgeMixup, to improve model fairness using a linear combination of an input skin lesion image and a corresponding predicted edge detection mask, combined with color saturation alteration. For the task of skin disease classification, EdgeMixup outperforms much more complex competing methods such as adversarial approaches, achieving a 10.99% reduction in the accuracy gap between light and dark skin tone samples, and resulting in 8.4% improved performance for an underrepresented subpopulation.
Introduction
Early detection of skin lesions can aid in identifying a range of infectious diseases. We consider Lyme disease [14,17], which accounted for an estimated 476,000 cases per year in the United States during 2010-2018 [18]. Lyme disease is caused by the bacterium Borrelia burgdorferi, which manifests via a red concentric lesion, called Erythema Migrans (EM), at the site of a tick bite [21]. While the EM pattern may appear simple to recognize, its diagnosis can be challenging for those with and without a medical background alike, as only 20% of United States patients have the stereotypical bull's eye lesion [29]. When skin lesions are atypical they can be mistaken for other diseases such as Tinea Corporis (TC) or Herpes Zoster (HZ) [20], two other diseases that act as confusers for Lyme and are considered herein. This has increased interest in medical applications of deep learning (DL), and in using deep convolutional neural networks (CNNs) to assist clinicians in the timely and accurate diagnosis of conditions including Lyme disease, TC and HZ [8,10,4].

A major challenge in diagnosing skin diseases with CNNs is that they have been shown to learn and exhibit bias inherent in training data [13]. For example, the diagnostic accuracy for people with light skin is often higher than for those with dark skin because (a) the training data may not have sufficient samples of dark skin with the condition, or (b) there may exist an inherent correlation between image markers of protected factors and disease. In response, the AI community has been investigating bias mitigation strategies such as data generation for underrepresented subpopulations [22] and adversarial debiasing [32]. However, while applying CNNs to dermatology is of growing interest, insufficient attention has been directed towards identifying or reducing the prevalence of bias in CNN predictions for skin disease classification and segmentation. Existing bias mitigation strategies often perform poorly on skin diseases, especially for segmenting and classifying Erythema Migrans (EM), because they tend to remove important information on the lesion area or important image markers after debiasing.
We propose a novel data preprocessing and alteration method, called EDGEMIXUP, to improve fairness in skin disease classification and segmentation. The key insight of this approach is to alter a skin image with a linear combination of the source image and a detected edge mask so that the lesion structure is preserved while minimizing skin tone information, which is done by altering the color composition in HSV space, thereby minimizing the ability of the model to infer information about the protected factor. This combined preprocessing approach, while simple, is shown to be significantly more effective than competing methods such as adversarial approaches which are also aimed at masking markers or protected factors.
We evaluate EDGEMIXUP with fairness metrics for skin disease segmentation and classification tasks. First, for the segmentation task, we construct a dataset composed of 185 publicly available diseased skin images with annotations for three regions: background, skin and lesion, conducted under clinician supervision and with Institutional Review Board (IRB) approval. Next, we demonstrate the existence of segmentation model bias on our annotated dataset. Our results show that EDGEMIXUP is able to reduce bias, improving fairness while increasing utility (as measured via Jaccard and Dice). Second, for the classification task, we collect, and have a clinician supervise the annotation of, a skin disease dataset with 2,712 (publicly available) skin images classified into four classes, i.e., No Disease (NO), TC, HZ, and EM. We perform evaluation on the classification task using a traditional ResNet34 baseline and demonstrate the existence of significant bias. We show that EDGEMIXUP substantially improves model fairness compared to the baseline and also significantly outperforms state-of-the-art (SOTA) debiasing methods on joint fairness-utility metrics. Our contributions are: • We collect, annotate, and present two novel skin disease datasets with emphasis on Lyme disease, Tinea Corporis, and Herpes Zoster, for studying segmentation, classification, and fairness, which we will publicly release upon publication.
• We demonstrate for the first time that a segmentation model may exhibit bias for these important diseases.
• We propose EDGEMIXUP, a novel data preprocessing method that jointly addresses utility and fairness for the tasks of classification and segmentation of skin diseases. • We evaluate EDGEMIXUP on skin lesion classification and segmentation, showing that it improves utility and fairness for segmentation and their tradeoff for classification, and that it outperforms the SOTA approach.
Related Work
We provide an overview of prior work in skin disease classification and segmentation, as well as bias mitigation methods in the domain of medical imaging. Skin Disease Classification and Segmentation: Deep CNNs have gained popularity for automated melanoma skin lesion segmentation due to disease relevance and model performance, despite the prevalence of fuzzy borders, inconsistent lighting conditions, and image artifacts [2,33]. The Individual Typology Angle (ITA) has been used as a proxy for skin tone labels in medical imagery for segmentation and classification tasks. Little bias was found in skin disease segmentation and classification models trained on the SD-136 [28] and ISIC2018 [7] datasets [16], which cover diseases different from those this study focuses on. In this work, we reach the opposite conclusion for the segmentation and classification of specific skin diseases and their lesions, including Lyme (EM), TC, and HZ. This also motivates the design of EDGEMIXUP for improving the fairness of skin lesion segmentation and classification.
Bias Mitigation: Addressing bias in deep learning models falls into three categories [6]: (1) preprocessing, such as augmentation and re-weighting; (2) in-processing, such as adversarial debiasing; and (3) post-processing, such as thresholding. First, masking sensitive factors in imagery has been shown to improve fairness in object detection and action recognition [30]. Second, adversarial debiasing operates on the principle of simultaneously training two networks with different objectives [9,19,24]. This competing two-player optimization paradigm is applied to maximizing equality of opportunity in [1]. The technique has shown success for tabular data [32], word embeddings [3], and imagery [34]. Lastly, Hardt et al. [11] adjust model outputs using thresholds to mitigate discrimination against a specified sensitive attribute.
By contrast, we propose EDGEMIXUP, a much less complex yet more effective preprocessing approach to debiasing when applied to skin disease, and particularly Lyme-focused, classification and segmentation tasks.
Datasets
We collect, annotate, and then present two datasets for skin disease segmentation and classification, which we will publicly release upon publication. First, we collect skin images either from publicly available sources or from clinicians with patient informed consent. Second, a medical technician and a clinician on our team manually annotate each image. Data annotation follows the specific task/dataset as indicated below: • Segmentation: We annotate skin images into three classes: background (black), skin (yellow), and lesion (blue). Table 1 shows the characteristics of these two datasets broken down by disease type and skin tone, as calculated by the Individual Typology Angle (ITA) [31]. Specifically, we consider tan2, tan1, and dark as dark skin (ds) and the others as light skin (ls). One prominent observation is that ls images are more abundant than ds images due to a disparity in the availability of ds imagery from either public sources or clinicians with patient consent. This disparity motivates the design of EDGEMIXUP in improving model fairness in diagnosing skin diseases.
Method
We present our core method in reducing skin tone bias for segmentation and classification CNNs. We start by describing the design of EDGEMIXUP, and then present how we apply EDGEMIXUP for the tasks of segmentation and classification.
EdgeMixup data preprocessing
The key insight of EDGEMIXUP is to "mix-up" a detected edge image with the original skin image for data preprocessing via a linear combination. Intuitively, such preprocessing not only highlights the skin lesion, via an edge image, but also suppresses the skin tone. While this idea is intuitively simple, the edge detection is challenging due to color similarity causing ambiguous edges between skin and lesions. We start with a motivating example to illustrate this challenge.
Motivating example: Fig. 2b-2d show edge detection results for the sample image in Fig. 2a. Clearly, Canny fails to detect even a basic human silhouette; DexiNed-fused detects some of the human body's edges, but not the lesion's. DexiNed-avg is better at detecting some parts of the lesion, but not its edge. For comparison, we also depict the edge detection of EDGEMIXUP in Fig. 2e, which clearly shows the lesion boundary. Approach: Fig. 3 summarizes the overall process of EDGEMIXUP's data preprocessing in four steps. First, EDGEMIXUP converts a given image to the Hue-Saturation-Value (HSV) color space. Then, EDGEMIXUP applies a red mask in the HSV color space to zero out the red and blue channels and maximize the green channel to 255. The image from this step is called the contrast-augmented image. Second, EDGEMIXUP selects the value (V), or lightness, channel of the contrast-augmented image from the previous step to produce a gray-scale image. Third, EDGEMIXUP applies a Canny edge detector to extract the edge boundary and generate an edge image. Lastly, EDGEMIXUP combines the edge image and the original sample image linearly, like a mixup, to generate an altered image (called the result image). If not otherwise specified, the default weight of the edge image in the linear combination is 0.3.
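A minimal sketch of these four steps in Python/OpenCV is shown below. The HSV hue bounds for the red mask and the Canny thresholds are our assumptions (they are not listed here), and the exact region to which the channel alteration is applied may differ from the authors' implementation:

```python
import cv2
import numpy as np

def edgemixup_preprocess(image_bgr: np.ndarray, edge_weight: float = 0.3) -> np.ndarray:
    """Sketch of the four-step EdgeMixup preprocessing described above."""
    # Step 1: HSV conversion plus a red mask; in masked pixels, zero the
    # red/blue information and saturate green (assumed masking recipe).
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    red_lo = cv2.inRange(hsv, (0, 40, 40), (10, 255, 255))      # assumed hue bounds
    red_hi = cv2.inRange(hsv, (170, 40, 40), (180, 255, 255))
    red_mask = cv2.bitwise_or(red_lo, red_hi)
    contrast = image_bgr.copy()
    contrast[red_mask > 0] = (0, 255, 0)  # BGR: zero blue/red, max green
    # Step 2: take the V (lightness) channel as a gray-scale image.
    gray = cv2.cvtColor(contrast, cv2.COLOR_BGR2HSV)[:, :, 2]
    # Step 3: Canny edge detection (thresholds assumed).
    edges = cv2.Canny(gray, 50, 150)
    edges_bgr = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
    # Step 4: linear mixup of the edge image and the original sample.
    return cv2.addWeighted(edges_bgr, edge_weight, image_bgr, 1.0 - edge_weight, 0)
```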
Application of EDGEMIXUP on different diagnostics-related tasks:
The purpose of EDGEMIXUP is to improve fairness in diagnostics models via data alteration and pre-processing. Next, we apply EDGEMIXUP to two types of DL-based diagnostics tasks with the aim of improving fairness.
Lesion Segmentation: Lesion segmentation aims to separate a skin lesion from regular skin to assist clinicians in the examination and diagnosis of EM by simplifying time-series clinical comparisons. EDGEMIXUP preprocesses training images before feeding them into a segmentation model (both at training and inference time), e.g. U-Net [23], which then segments the images into three regions: background, skin, and lesion.
Disease Classification: Disease classification aims to prescreen and diagnose, principally, EM (for Lyme Disease), and also classify possible Lyme confusers including: Tinea Corporis (TC), Herpes Zoster (HZ), and no disease (NO). Again, EDGEMIXUP alters the original training images prior to training a classification model, such as a ResNet34 [12].
Lesion Segmentation
Our evaluation baseline is a U-Net trained to segment images of skin lesions into three categories: background, skin, and lesion. Our evaluation metrics cover both utility and fairness, since the two often (but not always) trade off. Utility is measured using the Jaccard score [15] and the Dice coefficient [25], which quantify the similarity between a predicted mask and the manually annotated ground truth; higher similarity indicates better model performance. Fairness is evaluated by the gaps in the Jaccard score and Dice coefficient between ls and ds images, denoted J_gap and D_gap respectively. The smaller the gap, the fairer the model.
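As a small illustration, the sketch below computes per-class Jaccard and Dice scores for integer-labeled masks and the ls/ds gap; the helper names and the grouping variable are ours, not the authors':

```python
import numpy as np

def jaccard_dice(pred: np.ndarray, truth: np.ndarray, cls: int):
    """Per-class Jaccard score and Dice coefficient for integer label masks."""
    p, t = pred == cls, truth == cls
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    denom = p.sum() + t.sum()
    jac = inter / union if union else 1.0
    dice = 2 * inter / denom if denom else 1.0
    return jac, dice

def subgroup_gap(scores, is_light):
    """Fairness gap: mean score on light-skin minus dark-skin samples."""
    scores = np.asarray(scores, float)
    is_light = np.asarray(is_light, bool)
    return scores[is_light].mean() - scores[~is_light].mean()
```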
Disease Classification
Baselines: We select ResNet34 as the baseline model, with ImageNet-pretrained weights, early stopping, and a learning rate of 1e-3, trained for up to 100 epochs. Our evaluation of classification debiasing involves the following competing approaches: • Adversarial Debiasing (AD): an in-processing method [32] in which a separate classifier/player is tasked with predicting the protected factor from the true class and the task classifier's internal representation of a given input image. • Mask: a mask-based debiasing approach that leverages a synthesized mask from a segmentation network to mask out skin tone in the images input to the classifier. • Mask+AD: a combination of Mask and AD aimed at masking skin tone information in both image and embedding space. • DexiNed-avg: the use of the average version of DexiNed [26] as the edge detector within EDGEMIXUP.
Evaluation Metrics: We use the following disease classification evaluation metrics.
• Accuracy-based metrics: We measure accuracy to characterize utility. To measure fairness, we use the accuracy gap between the ls and ds subpopulations and the (Rawlsian) minimum accuracy across subpopulations. To characterize the tradeoff between utility and fairness, we use the joint metric CAI_α from [22], which combines the reduction in the accuracy gap with the change in overall accuracy relative to the baseline (a sketch follows below). • AUC (area under the receiver operating characteristic curve): Similarly, we measure utility with AUC and fairness via the AUC gap and minimum AUC. Likewise, following prior work [22], we calculate the analogous AUC-based joint utility/fairness metric, CAUCI_α.
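A sketch of these fairness metrics is given below. The explicit CAI form is our reading of the joint metric in [22] (a convex combination of gap reduction and accuracy change); treat it as an assumption rather than the paper's verbatim definition:

```python
import numpy as np

def accuracy_metrics(correct, is_light):
    """Overall accuracy, ls/ds gap, and Rawlsian minimum from per-sample 0/1 scores."""
    correct = np.asarray(correct, float)
    is_light = np.asarray(is_light, bool)
    acc_ls, acc_ds = correct[is_light].mean(), correct[~is_light].mean()
    return correct.mean(), abs(acc_ls - acc_ds), min(acc_ls, acc_ds)

def cai(alpha, acc_base, gap_base, acc_model, gap_model):
    """Assumed joint utility/fairness metric: rewards a smaller gap and higher accuracy."""
    return alpha * (gap_base - gap_model) + (1 - alpha) * (acc_model - acc_base)
```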
Results
In this section, we present results on the tasks of lesion segmentation and skin disease classification. We also evaluate the performance and fairness of EDGEMIXUP compared with adversarial debiasing (AD), synthesized masking (Mask), and their combination (Mask+AD) in terms of fairness improvement. Note, our code will be released upon publication. Skin Lesion Segmentation: Table 2 shows the performance of EDGEMIXUP and a baseline U-Net on our segmentation dataset. We compare predicted masks with the manually annotated ground truth by calculating the Jaccard and Dice scores and computing the gap in each of the two scores between the subpopulations with ls and ds (based on ITA). The results present two clear findings. First, EDGEMIXUP, as a data preprocessing method, improves the utility of lesion segmentation in terms of Jaccard and Dice. A possible reason is that EDGEMIXUP clearly preserves key skin lesion information, thus improving segmentation quality, while attenuating markers for protected factors. Likewise, EDGEMIXUP improves the fairness of the segmentation task by lowering the gaps in the Jaccard and Dice scores between people with ls and ds. As a result, EDGEMIXUP demonstrates consistency in improving both utility and fairness in terms of the utilized metrics.
Skin Disease Classification: Table 3 shows utility performance (acc and AUC) and fairness results (gaps in acc and AUC between the ls and ds subpopulations). Note that we list the margin of error for each number in parentheses. Clearly, EDGEMIXUP outperforms SOTA approaches in balancing the model's performance and fairness, i.e., the CAI_α and CAUCI_α values of EDGEMIXUP are the highest compared with the vanilla ResNet34 and the other baselines. Next, we examine the different metrics separately.
First, the acc value of EDGEMIXUP is the second largest, second only to the baseline ResNet34 and higher than all other competing debiasing methods. While a decrease in utility often accompanies debiasing, our results show that EDGEMIXUP largely preserves the model's utility. The acc gap is also the second smallest, second only to the Mask approach (i.e., applying a segmentation mask to disease images); note, however, that the acc of the Mask approach is a mere 73.84%, suggesting that utility was substantially sacrificed to maximize fairness. Next, the acc_min,ds of EDGEMIXUP is the highest among all approaches, meaning that EDGEMIXUP is best at improving classification performance for the underrepresented ds subgroup. Second, the AUC of EDGEMIXUP is only around 1% smaller than that of the baseline ResNet34 model, highlighting EDGEMIXUP's strong disease classification performance. At the same time, its AUC gap is the smallest among all approaches, while its AUC_min,ds is the largest. This shows that EDGEMIXUP has the best characteristics in terms of fairness, as well as the best overall fairness/utility tradeoff, thus improving overall system performance.
Discussion
Our study, performed under IRB approval (and to be publicly released), demonstrates for the first time the possible presence of bias when addressing Lyme disease, and other important conditions that act as confusers to Lyme (HZ and TC), using a vanilla classifier, a finding not previously reported and in contrast to other skin diagnostic studies. This observation highlights the importance of studying skin disease bias with datasets that have much larger exemplar cardinality for Lyme, HZ, and TC than other prevalent datasets, such as SD-198, which do not focus as much on these diseases. We also present a simple yet highly effective method to debias models, and show how the method produces a censoring/masking effect, vis-a-vis protected-attribute markers, without a debilitating effect on utility.
Conclusion
We present a study to identify, quantify, and mitigate bias in skin image classification and segmentation models trained on two datasets collected in our study. Specifically, we propose EDGEMIXUP, a novel data preprocessing method that utilizes edge detection to isolate skin lesions. EDGEMIXUP outperforms the previous SOTA (81.58%) by 1.86% in accuracy and outperforms other debiasing methods with a CAI_0.5 of 4.6650. We adapt EDGEMIXUP for the task of skin lesion segmentation on our new dataset, surpassing the baseline method by 0.0544 in Jaccard score and reducing the Jaccard gap (J_gap) between the light and dark skin subpopulations by 0.0387. EDGEMIXUP is an effective approach that achieves fair performance across subpopulations with respect to skin tone.
|
2022-03-01T06:47:50.993Z
|
2022-02-28T00:00:00.000
|
{
"year": 2022,
"sha1": "a65d813bf500c1f856752608ec5e2d98761eaa51",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a65d813bf500c1f856752608ec5e2d98761eaa51",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
266217249
|
pes2o/s2orc
|
v3-fos-license
|
Monolithic beam combined quantum cascade laser arrays with integrated arrayed waveguide gratings
We report the fabrication of a monolithic closed-loop wavelength-beam-combined quantum cascade laser (QCL) source. The chip comprises five QCL gain sections connected to a 5 × 1 arrayed waveguide grating (AWG) via active/passive tapered couplers and a router. The chip is fabricated on a MOCVD-grown III-V semiconductor substrate. The entire passive section of the chip undergoes ion implantation to reduce the propagation losses due to free-carrier absorption. The peak power for all the QCL array elements reached 600 mW per facet with a 2 kA/cm² threshold current density under pulsed operation. Furthermore, our WBC approach is compatible with buried heterostructure processing, which allows continuous-wave operation with high output power. Our results hold promise for manufacturing compact, multiwavelength mid-infrared sources with good beam quality.
Introduction
The development of mid-infrared laser sources is of significant importance for a wide range of applications, including free-space communication, hyperspectral imaging, laser surgery, and chemical sensing. Various lasers operating in the mid-infrared range have emerged to meet the demands of these applications, including rare-earth-doped gain media lasers [1,2], semiconductor lasers [3][4][5], and optical parametric oscillators (OPOs) [6,7]. Among these, quantum cascade lasers (QCLs) [8] have achieved remarkable success at wavelengths beyond ∼3.5 µm, making them the technology of choice for applications that require versatile, compact, and affordable laser sources. The performance of QCLs has rapidly improved over the last two decades, reaching a record output power of 5.1 W (8.3 W peak) and a 21% (27%) wall-plug efficiency (WPE) in continuous wave (pulsed mode) operation at room temperature [4]. Accelerated aging experiments have also established the reliable long-term operation of QCLs with an average power of 200 mW [9]. Tremendous progress has also been made in terms of achieving precise mode control and spectral tuning, which are critical for spectroscopic applications [10]. However, further advancement in the capabilities of compact QCL chips is highly desirable to meet the requirements of many emerging applications. This includes increasing the output power and producing a single-mode, multi-wavelength output while reducing the footprint and/or costs of QCL sources.
Wavelength beam combining (WBC) offers a simple and effective way to achieve broadly tunable sources that are well suited, for example, to high-speed hyperspectral imaging and chemical sensing. Moreover, WBC can be implemented to scale the overall power by merging the outputs from QCL arrays into a single beam for applications that do not require a single-wavelength source. Previous studies have demonstrated the feasibility of WBC using free-space optics, showcasing, for example, its potential to boost power [11][12][13][14] and its usefulness for hyperspectral remote sensing [15]. With the emergence of photonic integrated circuits (PICs), there has been a significant push to leverage this new technology for the transition from bulky beam-combined laser sources to compact chip-scale devices.
Silicon photonics, owing to its low cost, low-loss waveguides [16] and compact components, has emerged as the market leader for integrated photonic devices [17]. As a result, most mid-infrared on-chip WBC elements, such as Echelle or arrayed waveguide gratings (AWGs) [18], have been developed to date on either silicon-on-insulator (SOI) [19][20][21] or germanium-on-silicon platforms [22]. However, this approach requires the integration of III-V semiconductor lasers with silicon platforms, which is challenging. While the majority of commercially available devices still rely on fiber coupling of the lasers to the silicon photonic chips, which requires additional connections [23], the integration of III-V diode lasers through hybrid [24], heterogeneous [25], and monolithic approaches [26] has steadily matured over the years.
Hybrid integration is yet to be explored beyond the 2 µm wavelength [18], and the monolithic growth of QCL layers onto Si substrates involves complex growth strategies owing to the lattice mismatch and the difference in thermal expansion coefficients between III-V materials and silicon. However, heterogeneous integration successfully demonstrated high-brightness QCLs through WBC facilitated by AWGs [27,28]. Nevertheless, the output power levels were only a few milliwatts owing to the poor heat extraction and low transmission of III-V/Si evanescent tapers.
To overcome the challenge of producing a high-performance integrated WBC QCL source, a shift to an all-III-V monolithic semiconductor chip with integrated active and passive components was proposed [27]. The integration of low-loss passive III-V semiconductor waveguides with active QCLs has been studied [29] and has achieved power outputs of 50 mW and 880 mW in continuous wave and pulsed operation, respectively, at room temperature [30]. Our recent work on high-efficiency AWGs fabricated on III-V semiconductor substrates demonstrated performance comparable to their silicon counterparts, making them suitable for WBC through an active/passive integration approach [31].
In this report, we present the first experimental demonstration of a monolithic closed-loop WBC QCL source. The chip comprises five QCL gain sections connected to a 5 × 1 arrayed waveguide grating (AWG) via tapered couplers and a router, as shown in Fig. 1(a). The QCL gain medium is used as the waveguide core in the passive portion of the chip, which undergoes ion implantation to reduce optical losses due to free-carrier and intersubband absorption, as proposed by Montoya et al. [32] and in Ref. [33]. This approach avoids the additional etch-and-regrowth step or evanescent active/passive coupling schemes required by previous methods [29,30] for obtaining low-loss waveguides. In addition to its beam-combining functionality, the AWG acts as a wavelength-selective optical filter, ensuring that each laser ridge locks its emission wavelength to its corresponding input channel. Our WBC chips operate at wavelengths close to 4.9 µm, producing peaks with a narrow linewidth of 2.4 nm (0.98 cm−1 / 30 GHz) and a 27 dB side-mode suppression ratio (SMSR). In pulsed mode, the peak power measured from the common aperture reaches 0.6 W for each QCL array element.
Design
The core of the base material used to fabricate our WBC QCL arrays is a strain-compensated InAlAs/InGaAs gain medium with 48 periods designed for emission around 4.85 µm. The band structure is based on a non-resonant extraction design similar to that described in [34]. A detailed description of the different layers comprising the top and bottom cladding is provided in Table 1.
As illustrated in Fig. 1(d) and (e), the ridge waveguides fabricated in the active (i.e., array of gain sections) and passive (i.e., low-loss router and AWG) portions of the WBC chips have significantly different geometries, although they share the QCL gain medium as the same core material. As discussed in detail in [30,31], this choice enables coupling between the passive/active sections with low insertion loss and greatly simplifies the processing because a regrowth step is not required, unlike for butt-coupled waveguides. However, ion implantation is required to reduce the material losses associated with free carriers and intersubband absorption. The electrically pumped ridges had a thicker top cladding and were wider (9.5 µm) to minimize the overlap of the laser mode with the lossy metal contacts. The length of the gain section was chosen to be ∼6.9 mm to ensure sufficient optical gain to reach threshold and obtain a high peak power. In the passive section, the waveguide is significantly narrower (4.9 µm), and the top cladding consists only of a 0.4 µm thick InP layer for three main reasons: (1) to ensure single-TM-mode operation, (2) to facilitate dry etching, especially in areas of the AWG where ridges are very close to each other, and (3) to minimize the implantation energy needed to reach the depths over which material losses need to be reduced.

Considering the relatively low index contrast (0.1515) between the gain medium (3.2348) and InP (3.0834), it was crucial to avoid significant bending losses. Thus, a minimum bending radius of 700 µm was used when designing the router and the AWG elements of the passive WBC PIC, resulting in radiation and mode-mismatch losses of less than 0.05 dB, as shown in Fig. 2(a). We did not increase the bend radius beyond 700 µm in order to limit the overall scattering losses, which were not accounted for in the simulations.
As depicted in Fig. 1(b) and (f), a two-step coupler with two linear tapers was designed using the commercial Lumerical MODE package to minimize the insertion losses from the laser ridge to the passive waveguide. A first taper accommodates the difference in top cladding thickness between the ridge geometries; that is, the top ∼4.5 µm of the upper cladding is gradually reduced to a point over a 155 µm length. A second tapered section is then implemented to narrow the waveguide width from 9.5 µm to 4.9 µm over a distance of 200 µm, as shown in Fig. 1. According to our eigenmode expansion (EME) simulations, the insertion losses were lower than 0.025 dB for the chosen taper lengths and geometry. It is also important to consider any misalignment of the tapers to the laser ridges during fabrication, as shown in Fig. 2(b).
We designed the AWG with confocal star couplers according to the principles proposed by Smit et al. [18], which were followed in our previous work [31]. The key design parameters and characteristics of the AWG are listed in Table 2. We designed two extra input channels not coupled to the laser ridges to verify the WBC operation with an external optical source, if needed. Hence, the central wavelength was assigned to the second input channel connected to a laser ridge instead of the third channel. The AWG transmission was obtained by Lumerical varFDTD simulation for the mth and (m-1)th diffraction orders, as shown in Fig. 2(c). The simulated peak positions deviate from the design values because of the 2.5-D nature of the simulation, as opposed to a full 3-D FDTD. Nevertheless, the experimentally determined channel spacing and free spectral range (FSR) agree very well with our model, as discussed later. Without accounting for waveguide propagation and bending losses, our simulations estimated a 1.1 dB insertion loss and 1.4 dB non-uniformity for the set of peaks centered at a 4.92 µm wavelength.
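For reference, the AWG diffraction order m, the incremental arm length ΔL, and the FSR are tied together by the standard grating relations m·λc = neff·ΔL and FSR ≈ λc²/(ng·ΔL). A minimal sketch of these relations follows; the index values and diffraction order below are illustrative placeholders, not the actual Table 2 parameters.

```python
# Sketch of the standard AWG design relations (illustrative values only,
# not the Table 2 parameters of this chip).
lambda_c = 4.92e-6   # central wavelength [m], per the design target
n_eff = 3.17         # assumed effective index of the passive waveguide
n_g = 3.35           # assumed group index

m = 30                            # assumed diffraction order
dL = m * lambda_c / n_eff         # incremental path length between adjacent arms
fsr = lambda_c**2 / (n_g * dL)    # free spectral range in wavelength units

print(f"path increment dL = {dL*1e6:.2f} um")
print(f"FSR = {fsr*1e9:.1f} nm")
```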
Fabrication
The QCL layers were grown via molecular beam epitaxy (MBE) on a conducting Si-doped InP wafer. The upper and lower cladding layers have low doping levels to minimize losses owing to free-carrier absorption. The fabrication of the WBC arrays started with two dry etching steps to create the ridges forming the gain sections and taper #1 in the top cladding, and to define a large area for the passive WBC elements. This first phase required the deposition of multiple hard mask layers using plasma-enhanced chemical vapor deposition (PECVD), which were then patterned using standard photolithography (AZ3312 resist, Heidelberg MLA-150 exposure, AZ300 MIF developer), followed by an Ar/SF6 reactive ion etching (RIE) step. The III-V material was subsequently etched using an inductively coupled plasma (ICP) RIE (SAMCO-200iP) high-temperature (250 °C) process based on Ar/BCl3/SiCl4 gases. Alignment of taper #1 to the laser ridge was performed using the standard MLA-150 alignment procedure. The passive region was simultaneously etched down through most of the top InP cladding layer. Because of the poor selectivity of our RIE recipe between InP and InGaAs, HCl:H2O (1:1) and H2SO4:H2O2:H2O (1:1:40) solutions were then used to selectively etch any InP and the 200 nm InGaAs layer remaining on top of the 400 nm thick InP spacer above the QCL gain region. This process resulted in a ∼500 nm step in the waveguide, as shown in Fig. 1(f). According to our eigenmode expansion (EME) simulations, the insertion loss associated with this discontinuity was less than 0.2 dB.
The second taper, narrowing the ridges from 9.5 µm down to 4.9 µm, the passive waveguides forming the router, and the AWG structure were later fabricated simultaneously using e-beam lithography (Elionix F-125) and dry etching, following the established protocols described in [31]. Figure 1(b) depicts the entire coupler region, which transitions from the active region of the chip to the passive region. A 600 nm nitride layer was then deposited by PECVD and patterned to cover only the gain sections, leaving a narrow opening on top of these ridges to allow the electrical connection. The passive regions, including the couplers, were then ion-implanted with protons to lower material losses, especially those originating from the unpumped QCL gain medium. An 8-step implantation recipe with energies ranging from 45 to 600 keV and an average dose of 5 × 10^13 cm−2 was used. The areas where ion implantation was not desired were protected by a photoresist layer that was at least 20 µm thick.
Finally, individual electrical contacts were created for each gain section by sputtering and patterning a thick Ti/NiV/Au stack. The same metal layers were deposited on the back of the substrate to act as the ground electrode. After fabrication, the chips were cleaved to expose the waveguide facets at the common AWG output and at the back of the five laser ridges, which act as the mirrors of the AWG-coupled Fabry-Perot (FP) laser cavities.
Measurement and analysis
The laser spectra were measured by electrically pumping each laser ridge individually in pulsed mode using custom drive electronics. The pulse width was typically 300 ns, and the repetition rate was maintained between 5 and 30 kHz. For spectral measurements, the laser beam emanating from either the common output waveguide or the back facet was collimated using an f/1 AR-coated CaF2 lens and analyzed using a Bruker Fourier transform infrared (FTIR) spectrometer equipped with a deuterated triglycine sulfate (DTGS) detector. A calibrated thermopile power meter was used to measure the laser output power. The QCL chip was pressed against a copper heatsink connected to a thermoelectric cooler (TEC). The latter maintained the copper mount temperature at 25 °C for the duration of all measurements, unless specified otherwise.
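Under these drive conditions the duty cycle is below one percent, so the average thermal load on the chip and the power meter is far below the quoted peak powers. A quick check, taking a representative 0.6 W per-channel peak power:

```python
# Duty cycle and average power implied by the stated pulsed drive conditions.
pulse_width = 300e-9   # s
rep_rate = 30e3        # Hz (upper end of the 5-30 kHz range used)
peak_power = 0.6       # W, representative per-channel peak power

duty_cycle = pulse_width * rep_rate   # fraction of time the laser is on
avg_power = peak_power * duty_cycle   # time-averaged optical power

print(f"duty cycle = {duty_cycle*100:.2f} %")   # 0.90 %
print(f"average power = {avg_power*1e3:.2f} mW")  # ~5.4 mW
```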
The optical losses incurred in the various passive elements were inferred by cleaving the chip several times and measuring the laser output power versus current after each step, as discussed in Section 3.2. The total losses in the WBC chip can be divided into five terms: the propagation losses in the active and passive waveguides over their respective lengths (that is, α_active·L_active and α_passive·L_passive), the transmission through the active-to-passive coupler (T_coupler), the transmission through the AWG (T_AWG), and the mirror losses. The dependence of the threshold current density (j_th) on the total optical loss is given by Eq. (1), where Γ is the gain overlap factor; g is the gain coefficient per cascade; r_1 and r_2 are the amplitude reflectivities of the front and back facets, respectively; and j_tr is the transparency current density of the gain medium. The values of Γ·g and j_tr were previously determined using the inverse-cavity-length experimental method described in [35]. The first four loss terms can be estimated sequentially by measuring the threshold current density after each cleave and comparing the results with the values calculated using Eq. (1).
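Assuming the standard Fabry-Perot round-trip threshold condition, with gain supplied only over the active length and the coupler and AWG each traversed once per pass, Eq. (1) can be written as

$$ j_{\mathrm{th}} = j_{\mathrm{tr}} + \frac{\alpha_{\mathrm{active}} L_{\mathrm{active}} + \alpha_{\mathrm{passive}} L_{\mathrm{passive}} + \ln\!\frac{1}{T_{\mathrm{coupler}}\,T_{\mathrm{AWG}}} + \ln\!\frac{1}{r_1 r_2}}{\Gamma g\, L_{\mathrm{active}}} \qquad (1) $$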
Laser spectra and power characteristics
The emission spectra of the five array elements of a representative WBC QCL chip are presented in Figs. 3(a)-(d). Figures 3(a) and 3(c) illustrate the spectra measured from the common output waveguide, whereas Figs. 3(b) and 3(d) correspond to the spectra emitted from the back facet of each laser ridge. When the drive current is close to threshold, each laser emits a single, well-defined peak aligned accurately with a different input channel of the AWG. This was expected because the AWG acts as an optical filter integrated within the laser cavity formed by the two cleaved end facets. The laser emission is located at wavelengths corresponding to the (m-1)th AWG order mentioned in Table 2 because, at low voltages, the gain of the QCL has its maximum near 4.85 µm. As the current increases and approaches the rollover point, the gain spectrum experiences a blue shift, as shown by the luminescence data presented in Fig. 3. Consequently, narrow secondary peaks that closely align with the AWG wavelengths of the lower AWG order appear in the emission spectra of lasers #4 and #5. Several broad, low-intensity features consisting of many FP modes can also be observed at high currents, but only in the spectra from the back facet. These parasitic FP modes do not have a consistent free spectral range and are thus likely due to minor fabrication defects in the coupling regions or along the length of the passive waveguides, leading to additional reflections/feedback into the laser cavities. Nevertheless, the spectra obtained from the common output waveguide consist only of narrow peaks selected by the AWG. These experimental findings provide compelling evidence that the AWG structure controls the optical feedback into the gain medium and, consequently, the selected laser wavelengths, demonstrating that on-chip WBC is achieved.

Figures 3(e) and (f) show the laser power measured from the common output and back facet, respectively. For the common output facet, more than 0.7 W of peak power is achieved for lasers 4 and 5 with no apparent saturation, whereas for lasers 1, 2, and 3, the intensity reaches a plateau around 0.6 W. This difference can be traced back to the emergence of the secondary AWG-related peaks in the emission spectra of lasers 4 and 5 at high currents. Hence, we conclude that the mth-order diffraction contributes the extra 0.1-0.2 W for lasers 4 and 5. In the case of the power measured from the back facet, none of the lasers showed saturation, and the peak power was close to 1 W. However, part of the measured power was contributed by the low-intensity peaks shown in Fig. 3(d), which do not have the desired wavelengths and probably originate from fabrication defects in the coupler region or along the passive section of the WBC chips between the back facet and the AWG. These defects scatter enough light back into the gain sections to allow lasing on modes not controlled by the AWG transmission. Note that the saturation effect observed in the data may be due to these undesirable modes, although other phenomena such as spatial hole burning and phase errors in the AWG may also play a role. We are currently investigating this subject, and additional details will be given in a future publication.
Figure 4(a) shows the deviation between the emission wavelengths obtained experimentally at 25 °C and the AWG design values for five different WBC chips under similar driving conditions. Without active control, the measured values deviated by less than 5 nm (2 cm−1) from the design. More precisely, the observed discrepancy had two components. First, the average wavelength for each AWG channel was systematically detuned by approximately 4% with respect to the designed value. This error is likely due to the finite accuracy with which the refractive indices of the different waveguide materials and their temperature dependence are known. Second, the spread in emission wavelengths, which is less than ±3 nm (∼1.5 cm−1), is random and likely related to fabrication errors such as non-uniformity in ridge width and etch depth across the chip. We expect that the systematic error can be significantly reduced by refining the input parameters used in our simulations and by using the data presented in Fig. 4(a) for calibration.
According to Fig. 4(b), the linewidth of one of the measured lasers varies between 0.5 and 0.98 cm−1 (15-30 GHz) as the current increases up to rollover. This is a consequence of the low quality factor (∼400) of the AWG and the relatively long pulses (300 ns) used to drive our arrays. This is, on the one hand, quite broad compared to the intrinsic linewidth of distributed-feedback (DFB) QCLs but, on the other hand, relatively narrow compared to the chirp (3 to 10 cm−1) experienced by DFB QCLs with similar geometry fabricated from the same material and operated under the same pulsed conditions. Additionally, the linewidth broadened by less than a factor of two as the pulse length increased beyond 1 µs.
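The linewidth units quoted here and in the introduction are mutually consistent; a quick conversion of 2.4 nm at a 4.9 µm carrier:

```python
# Cross-check of the quoted linewidth units: 2.4 nm ~ 1 cm^-1 ~ 30 GHz at 4.9 um.
c = 2.998e8    # speed of light [m/s]
lam = 4.9e-6   # carrier wavelength [m]
d_lam = 2.4e-9 # linewidth [m]

d_nu = c * d_lam / lam**2             # frequency linewidth [Hz]
d_wavenumber = d_lam / lam**2 / 100   # wavenumber linewidth [cm^-1]

print(f"{d_nu/1e9:.1f} GHz, {d_wavenumber:.2f} cm^-1")  # 30.0 GHz, 1.00 cm^-1
```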
Fig. 3. Emission spectra of the WBC chip obtained at room temperature from (a) the common output near threshold current, (b) the back facet near threshold current, (c) the common output waveguide near rollover current, and (d) the back facet near rollover current. The luminescence measured at the same voltage as the presented laser data is also shown. Measured peak power vs. current density from (e) the common output waveguide and (f) the back facet. All measurements were performed under pulsed operation (300 ns pulses, 30 kHz repetition rate) at room temperature.
The SMSR remained above 27 dB in the spectra measured from the common output because the AWG efficiently blocked unwanted FP modes. The laser wavelength redshifts by less than 0.5 nm (0.22 cm−1) as the current density increases from threshold to rollover. This weak wavelength dependence on current indicates that the AWG passive region remains mostly thermally insulated from the Joule heating that occurs in the electrically pumped ridges. The inset of Fig. 4(b) plots the laser peak on a linear scale and highlights that the line shape may result from the superposition of a few peaks. This is expected because of the multitude of FP modes supported by the cavity, which can lase owing to the relatively broad transmission of the AWG channels (full width at half maximum ∼11 nm or ∼7.5 cm−1).
Figure 4(c) shows the shift in the output wavelength as a function of the heatsink temperature. The AWG exhibits a linear temperature dependence for the selected wavelengths, as shown in Fig. 4(d), with slopes of 0.1376 cm−1/K and 0.1271 cm−1/K for the mth and (m-1)th diffraction orders of laser 5. The former value is close to the temperature tuning coefficient (0.142 cm−1/K) of λ ∼ 4.65 µm, 9.5 µm wide DFB QCLs fabricated from the same wafer. Hence, our monolithic WBC chip can achieve a wavelength tunability of approximately 5 cm−1 by varying the temperature by 40 K.
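The quoted tunability follows directly from the measured slope; for the (m-1)th order:

```python
# Temperature tunability implied by the measured tuning slope.
slope = 0.1271   # cm^-1 / K, (m-1)th-order slope for laser 5
delta_T = 40.0   # K temperature excursion

shift_wavenumber = slope * delta_T   # ~5.1 cm^-1, matching the quoted ~5 cm^-1
# Equivalent wavelength shift near 4.9 um: d_lambda = lambda^2 * d(1/lambda)
lam_cm = 4.9e-4                                  # 4.9 um expressed in cm
shift_nm = lam_cm**2 * shift_wavenumber * 1e7    # cm -> nm

print(f"shift = {shift_wavenumber:.1f} cm^-1 (~{shift_nm:.1f} nm)")
```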
Performance comparison between Fabry-Perot QCLs and WBC chips
To determine the optical losses in different parts of our WBC chips, we measured the laser characteristics of a few samples after cleaving them at three different positions, as shown in Fig. 5(a). Figures 5(b) and (c) depict how the output power varies after each cleave for a representative array element. This correlates with the contribution of each section to the absorption/scattering losses in the cavity. We observe that most of the losses are incurred in the AWG and the long passive waveguide section, that is, after the first cleave.
By measuring the threshold current density after each cleave and comparing the results with the values calculated using Eq. (1), we estimated the optical losses listed in Table 3. The transmission of AWG channel #1 could not be calculated because the corresponding lasers suffered from electrical shorts. Also, the values of j_tr and Γ·g used in our calculations came from a previous experiment, which yielded 0.52 kA/cm² and 3.06 cm−1/kA, respectively. The reflectivities of the front and back facets were simulated to be approximately 0.235 and 0.245, respectively. The results presented in Table 3 were obtained by measuring three independent WBC chips and were generally larger than the predictions of our models. Although the discrepancy between our simulations and our experiment is notable, the information collected in Table 3 allows us to identify critical areas that need improvement. The performance and overall beam-combining efficiency of our WBC chips can be drastically improved, for example, by implementing AWGs with higher transmission, minimizing the number of fabrication defects, and reducing waveguide propagation losses. The use of wet etching to smooth the sidewall roughness is a possible route to better performance [36]. With further loss mitigation, the current results pave the way toward accomplishing on-chip WBC in the mid-IR without significantly compromising QCL power levels.
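As a sketch of this sequential extraction, Eq. (1) can be inverted to isolate the non-mirror losses after each cleave. The threshold current densities below are hypothetical placeholders, not our measured values, and each fresh cleave is assumed to leave the facet reflectivities unchanged.

```python
import math

# Sketch of the loss-extraction procedure of Section 3.2, inverting Eq. (1).
# Constants are taken from the text; the thresholds are illustrative only.
gamma_g = 3.06        # Gamma*g [cm^-1 per kA/cm^2]
j_tr = 0.52           # transparency current density [kA/cm^2]
r1, r2 = 0.235, 0.245 # amplitude reflectivities of front/back facets
L_active = 0.69       # gain-section length [cm] (~6.9 mm)

def internal_loss(j_th):
    """Non-mirror loss (in natural-log units over one pass) inferred from a
    measured threshold current density, assuming unchanged facet reflectivities."""
    total = gamma_g * (j_th - j_tr) * L_active  # total single-pass loss balance
    mirror = math.log(1.0 / (r1 * r2))          # mirror-loss contribution
    return total - mirror

# Hypothetical thresholds before and after a cleave removing the AWG section:
loss_full = internal_loss(j_th=2.5)
loss_no_awg = internal_loss(j_th=2.0)
removed = loss_full - loss_no_awg  # loss attributed to the removed section
print(f"removed-section loss: {removed:.2f} (i.e., T = {math.exp(-removed):.2f})")
```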
Fig. 1. (a) Schematic of a monolithic wavelength beam combined chip with five QCL gain sections integrated with an arrayed waveguide grating (AWG) to generate high-power multi-wavelength emission around λ ≈ 4.92 µm. (b) Magnified view of the coupling regions featuring two linear tapers in the upper cladding and waveguide core. Some of the dimensions are exaggerated for clarity. (c) Size comparison of the fabricated chip against a U.S. one-dime coin. (d) SEM cross-section of the active QCL gain medium ridge with false colors added to highlight the different materials in the stack. (e) SEM cross-section of the passive ion-implanted region with false colors added to highlight the core material. (f) SEM top view of the coupling region for the active-to-passive transition, with the etch step obtained after wet etching marked.
Fig. 2. Simulated plots of (a) transmission per 90-degree circular bend vs. bend radius, (b) misalignment tolerance of the linear tapers with respect to the laser in the lateral direction, and (c) transmission of the 7 × 1 AWG vs. input wavelength for two different diffraction orders, with an inset depicting the maximum channel transmission.
Fig. 4. (a) Deviation of the measured wavelengths with respect to the AWG design values for both the mth and (m-1)th diffraction orders. Different colors represent data obtained from different chips. (b) (m-1)th-order peak of laser 5 for increasing current densities on a logarithmic scale. The same data plotted on a linear scale is shown in the inset. (c) Peak position shift of the mth (left) and (m-1)th (right) order for laser 5 with increasing heatsink temperature. Each spectrum was normalized with respect to the maximum intensity of the (m-1)th-order peak. (d) Linear fit of the peak positions vs. temperature values extracted from (c).
Fig. 5. (a) Schematic of the WBC chip with red dotted lines marking the position of each cleave made to estimate the losses originating from the various sections of the chip. Measured peak power vs. current density from (b) the common output waveguide/front facet and (c) the back facet for a representative laser element.
Fire in the Smoke: Battling Brain Tumors.
Therapeutic vaccines, drugs, and modified human cells that activate the immune system against cancer have improved outcomes and prolonged lives in some types of cancer in the past few years. For patients with glioblastoma, the most common primary brain tumor in adults, immunotherapy is still struggling to overcome this lethal malignancy.
It was 20 years ago when someone we will call Mr. H set off on a unique path. He was commuting home from work along his usual route on Interstate 95 when he forgot which exit to take. For the next two hours he wandered through the Baltimore suburbs trying to find his way home. Finally, he gave up and called his wife, who called 911.
At the hospital, magnetic resonance imaging (MRI) of the brain foretold a future that blended the uncertainty of a life-changing event with the sobering clarity of now knowing precisely how soon that life would end. Mr. H was in his late 30s and was otherwise healthy, exercising two or three times per week and watching what he ate, while being generally content in his career and the time he spent at home with his wife and two young children. Now, fate had brought him face-to-face with glioblastoma, a deadly form of brain cancer with no cure and a life expectancy of less than two years.
After being rushed into surgery, he awoke to a cacophony of monitors, IV pumps, and conversations full of unfamiliar abbreviations and numbers without units of measure. Pathology had confirmed the diagnosis of glioblastoma multiforme (GBM). He spent the next three days in the hospital recovering and was discharged home to a familiar life that was now anything but familiar. Two weeks later he spiked a fever and noticed redness around the c-shaped incision on his head. Back at the hospital, laboratory results and imaging confirmed that he had a severe wound infection. At the time of surgery, the infection was so extensive that bone had to be removed and could not be replaced. He was started on IV antibiotics and sent home to recover with a helmet to protect his compromised skull.
Once the infection had cleared, the skull defect was repaired and he went back to the planned course of chemotherapy, radiation, and preparing for his family's future without him. Then the unexpected happened. Months passed, then a year, then two years, with successive MRI scans failing to show any evidence of the tumor returning. Five years later he was in a rare minority: patients who had survived at least five years with GBM. More than two decades later, now in his late 50s, he shows no sign of the tumor that once promised to take his life. His tumor has been studied by the world's most eminent pathologists and confirmed to be GBM. But if there is nothing too distinct about this patient or this tumor, what could explain his remarkable clinical course?
Could it be the infection?
A Brief History of Cancer Immune Therapies
Tumors were first noted in an ancient Egyptian textbook on surgery and medicine. But it wasn't until the 1700s that the dramatic regression of tumors in the presence of an infection was first observed. Scattered reports of this phenomenon were recorded over the next century, and by the mid-1800s such anecdotes led to a few small-scale therapeutic efforts to introduce infection in cancer patients, with limited success.
Purposefully stimulating a patient's own immune system to fight cancer was first systematically attempted by William B. Coley in the late 1800s. In May 1891, he reviewed the reported cases of patients with infections who had lived longer than expected and concluded that most were sarcoma patients who had developed streptococcus infections. Coley injected streptococcal broth cultures into a patient who had a large, recurrent sarcoma of the head and neck. The treatment resulted in a near-fatal infection, but the tumor drastically regressed and the patient was once again able to swallow food. According to Coley's records, the patient would go on to survive for eight years before dying of recurrent disease. 1

More than a century would pass before rigorous study of the immune system yielded clinical therapies capable of reliably generating antitumor responses. This work coalesced into two general lines of research: anticancer therapeutic vaccines that train the immune system to recognize and destroy tumor cells, and immune checkpoint inhibitors that overcome the tumor's defenses against immune attack.
In 2010, the Food & Drug Administration (FDA) approved the first antitumor therapeutic vaccine for the treatment of castrate-resistant metastatic prostate cancer. 2 The following year saw FDA approval of the first immune checkpoint inhibitor for the treatment of metastatic or unresectable melanoma. 3 The immunologic strategies exemplified by these agents (stimulating an immune response to a specific cancer antigen, or overcoming the tumor's ability to evade an immune response) have served as the framework for immuno-oncology, forging the way for next-generation immunotherapeutics that have dramatically improved the prognosis for many patients with advanced cancers.
Those with brain tumors, however, have not been among them. Of the more than 100 types of brain tumor, the most common malignant ones are gliomas, which arise from the glia, the brain's supportive cells. Depending on their grade, or degree of differentiation, gliomas can be benign or highly malignant.
The most common malignant primary brain tumor among adults is GBM, which is invariably fatal and associated with a median survival of approximately 20 months despite surgery, radiation, and chemotherapy. These and other high-grade gliomas present unique challenges for immunotherapy due to patient, treatment, and tumor-intrinsic factors that have thus far limited the effectiveness of immunotherapies.
Recent negative results of large clinical trials have placed researchers at a crossroads: can immunotherapy in fact generate robust, durable responses in brain tumors? The discussion below aims to provide a framework for understanding cancer immunotherapy, highlight how deviations from this framework might explain the resistance of gliomas, and suggest a path forward.
Initiating an Immune Response: Lessons from Vaccines
Tumor vaccine development is predicated on many of the same principles that govern vaccine development against infectious pathogens. An antigen (a foreign molecule that induces an immune response) and an adjuvant (a substance that enhances that immune response) are introduced. They stimulate immune T cells that recognize that specific antigen to undergo clonal expansion. Unlike foreign pathogens, tumors are derived from host tissues and typically express antigens that the immune system recognizes as self. This triggers processes that have evolved to prevent the immune system from targeting the body's own cells (an "autoimmune" response), resulting in immune tolerance to the tumor rather than immune activation.
In addition, vaccines targeting antigens that are expressed not only on the tumor but also on normal tissues may generate unacceptable autoimmune side effects. An anticancer vaccine, therefore, must target antigens expressed only on tumor cells (neo-antigens) or on tumor cells as well as expendable normal tissues. The latter strategy, for instance, enables the use of a therapeutic vaccine for treating prostate cancer. This vaccine targets prostatic acid phosphatase (PAP), which is expressed exclusively on prostate tissue. Since this tissue does not serve a vital function, damage by the immune system is well tolerated. 4 Another example of a successful vaccine-type approach is the use of genetically engineered T cells targeting a B cell antigen known as CD19 in the treatment of lymphomas. 5 CD19 is expressed exclusively on B cells and is often over-expressed on lymphoma cells. The treatment eliminates normal as well as cancerous B cells, but the normal cells recover.
Since gliomas are derived from non-expendable cells of the brain (glia), vaccination strategies have primarily targeted neo-antigens, which are produced by tumor-specific mutations and are not shared by healthy tissues. The best studied of these, EGFRvIII, is a mutated form of a normal protein known as epithelial growth factor receptor. This mutated protein is expressed on approximately 40 percent of GBMs. The relative lack of expression on normal tissues makes it a promising target for immunotherapy. 6 One EGFRvIII peptide vaccine, rindopepimut, showed promise in phase II clinical trials that included patients who had undergone complete resection of all tissue identified on a preoperative MRI and demonstrated an absence of tumor progression after radiation and chemotherapy. 7 These trials were the basis for a randomized, placebo-controlled trial of the vaccine in patients with newly diagnosed GBM. It was stopped, however, in 2016 when an interim analysis concluded that the primary endpoint of improved overall survival was unlikely to be met. A randomized trial of rindopepimut in combination with bevacizumab (a drug that inhibits the development of blood vessels that feed a tumor) for recurrent GBM also failed to meet its primary endpoint of progression-free survival at six months.
A consistent finding in these studies was the absence of the EGFRvIII antigen in up to 80 percent of recurrent tumors. 8 While it is possible that recurrent tumors down-regulate EGFRvIII expression independent of immunologic pressure, 9 it is more likely that the tumor escapes by down-regulating EGFRvIII or expanding tumor cell clones that do not express it. 10 This process is known as immune-editing, a sort of cellular Darwinian process whereby an external pressure (the immune response) selectively destroys subtypes of cells within the tumor while allowing resistant cells to continue growing unimpeded, resulting in a change in the molecular composition of the tumor so that it is no longer susceptible to destruction by the immune system.
Two additional findings from this work are important lessons moving forward. First, antibodies against EGFRvIII were consistently detected in patients undergoing treatment, but their presence did not predict clinical response. This underscores that not all immune responses are created equal when it comes to fighting cancer. Specifically, even though the immune system recognizes the antigen and produces an antibody, the presence of antibodies does not guarantee tumor regression. Rather, the immune response must be of a specific type directed toward cell lysis, similar to the immune responses to viruses or intracellular bacteria. Accordingly, while a humoral (antibody) response may coincide with a cytotoxic response, antibody titers alone are not a reliable biomarker of antitumor activity.
Second, radiographic tumor responses were observed in patients with recurrent tumors or when a larger volume of residual tumor tissue remained following surgical debulking. There has been an assumption in immuno-oncology that if a tumor is immunosuppressive, eliminating the bulk of the tumor prior to initiating immunotherapy will result in a more vigorous immune response. This finding appears to undermine that assumption and may suggest that having more available antigens at the initiation of immunotherapy may be advantageous even in the setting of a higher tumor burden. Although this remains to be proven, we believe that this phenomenon may be mediated by a process known as epitope spreading. 11 Epitope spreading occurs when antigens other than the targeted antigen (in this case EGFRvIII) are recognized by the immune system; an immune response is then generated against these "bystander" tumor antigens even if EGFRvIII is no longer present in the tumor.
Breaking Immune Tolerance
Although cancers are derived from healthy tissues, the mutations that drive malignancy result in a molecular signature that distinguishes them from their normal counterparts. These tumor-specific neo-antigens can be recognized by the immune system, resulting in the elimination of cancer cells before they organize into a solid tumor. For malignant cells to progress to a tumor, they must usurp the mechanisms that protect healthy tissues against an autoimmune attack. These immunologic brakes that protect against autoimmunity, known as "checkpoints," are non-redundant signaling pathways that reduce the degree and duration of immune responses. 12 Clinical development of agents that block these pathways has revolutionized oncology, but an understanding of which patients and cancers will respond to this approach remains elusive.

Two signals are required for an immune T cell to kill a cell with which it comes into contact. The first signal is the T cell recognizing the antigen presented on the surface of the tumor (or healthy) cell. Each T cell recognizes a single cognate antigen. In essence, this is the key that turns on the immune cell's engine. The second signal is a co-stimulatory molecule that puts the immune cell in drive. Without the second signal, the immune cell determines that the cell it has come into contact with is part of normal tissue and should not be destroyed.

The first immune checkpoint discovered, CTLA-4, was initially identified based on its similarity to the co-stimulatory molecule CD28. Research demonstrated that CTLA-4 prevents activation of the second signal. This work led to the understanding of immune checkpoints as negative feedback mechanisms that mitigate collateral damage from overly vigorous and/or non-specific inflammatory responses. 13 With the discovery of several additional immune checkpoints, we now know that these pathways are much more nuanced than simple immunologic on/off switches. Each immune checkpoint has a distinct function and can signal alone or in combination with others. For immune checkpoint blockade to be effective, a baseline immune response must be present. It is no surprise, therefore, that most of the cancers that respond well to these therapies are highly immunogenic (they elicit a strong immune response). PD-1 and CTLA-4 blocking antibodies, for example, are approved for a growing list of solid malignancies, including melanoma, renal cell carcinoma, and non-small cell lung cancer; they can generate objective responses and significantly improve survival in more than 20 percent of patients with advanced cases of these cancers, 15 which carry a grave prognosis and previously had few treatment options.
Other malignancies, however, including GBM, show little or no response to PD-1 or CTLA-4 inhibitors. The reason is unclear and a topic of intense study. PD-L1 expression, 16 mutational burden (a high number of mutations), 17 and DNA repair deficiencies 18 are some characteristics that correlate with responses to checkpoint blockade. Mutational burden and DNA repair deficiency reflect back on the first strategy of immunotherapy illustrated by vaccines-recognition of foreign antigens and initiation of an immune response. Each mutation in a tumor further differentiates tumor cells from their normal counterparts. Therefore, a tumor with a high burden of mutations provides more targets for the immune system, increasing the probability that an immune response will be specific to the tumor and fueling epitope spreading as the immune response evolves.
Unique Challenges
Despite encouraging laboratory data, clinical results with immunotherapy for patients with GBM have generally been disappointing. The largest trial of PD-1 blockade was stopped early when the PD-1 blocker nivolumab failed to show a survival benefit over the angiogenesis inhibitor bevacizumab, which is standard of care for recurrent GBM. Despite the overall negative results, however, in a small subgroup of patients (eight percent) the response was significantly more durable than that observed for bevacizumab. In addition, there have been anecdotal reports of GBM patients, particularly those with tumors that have unusually high mutational burdens, whose response to PD-1 blockade was remarkable. 19 Ultimately, the question is whether the dismal prognosis for GBM patients can be reliably and meaningfully improved with immunotherapy.
These findings indicate that GBM may play by some of the same rules as other tumors that respond favorably to immunotherapy, but if this is the case, why do so few patients benefit? The situation for GBM patients is dire. They are traveling through one of the remotest regions in medical science, night is falling, and the temperature is rapidly dropping. There is little time for indecision and we, the medical professionals specializing in this disease, are their guides. In this oncologic wilderness, the rare durable responses are like smoke on the horizon of neuro-oncology that keeps us moving forward. But where's the fire?
Combination immunotherapy is being explored as a means of improving responses in tumors that do not respond well to single immunotherapeutic agents. This "get a bigger hammer" approach may work well in tumors that employ multiple common immunosuppressive pathways. We believe, however, that not all "cold" tumors are the same and that GBM, in particular, should be considered a distinct immunologic entity. Not only does GBM activate multiple immune checkpoint pathways and secrete immunosuppressive cytokines, but its location in the immunologic milieu of the central nervous system (CNS) presents unique challenges for immunotherapy. 20 Furthermore, GBM induces a profound state of systemic immunosuppression infrequently encountered with other tumors.
Failure to understand how the immune system interacts with gliomas locally, regionally, and systemically is the most significant impediment to successful implementation of immunotherapy.
Although it has long been known that patients with GBM exhibit signs of immunologic dysfunction, recent work has begun to delve into the underlying mechanisms of immunosuppression and its effect on patient outcomes. A study in 2011 by Stuart Grossman and colleagues showed that GBM patients receiving chemotherapy and radiation experienced profound and prolonged reductions in immune CD4 counts that negatively correlated with survival. 21 One of the unanswered questions from this study is the relative contribution of the disease process vs. side effects of treatment.
Nevertheless, the correlation of poor immune function with decreased survival from a tumor that is thought to have little or no immunogenicity is provocative. If there is no immune response to the tumor, why would immune suppression matter? If there is an immune response to GBM, how can we fan the flame? Intrigued by these possibilities, we and others are attacking immunosuppression in GBM on multiple fronts.
Any successful immunotherapy for GBM is likely to be administered in combination with chemotherapy and radiation, both of which are immunosuppressive. We have shown that focal, single-fraction radiation therapy can work synergistically with PD-1 blockade, 22,23,24 and hypothesize that a single-dose regimen may be immunologically superior to standard, fractionated radiation therapy by minimizing exposure of normal tissues and circulating immune cells. Similarly, orally administered temozolomide, a chemotherapy drug that is standard of care for newly diagnosed GBM, is profoundly immunosuppressive; when delivered locally, however, it mitigates unwanted effects on memory T cell populations and potentiates the efficacy of PD-1 blockade. 25 We envision a paradigm shift from standard oral chemotherapy and radiation to local chemotherapy and intense, abbreviated radiation therapy, which will minimize immune dysfunction and may prime an antitumor response by increasing the availability of tumor-associated antigens.
In parallel with our efforts to optimize conventional therapies, we are exploring the relative contributions of tumor and host factors to immunosuppression. While experimental models of GBM are intrinsically immunosuppressive, 26 we have shown in a non-glioma model that CNS location induces more profound immune dysfunction than equivalently progressed tumors at other sites. 27 Interestingly, our data suggest that CNS tumors induce a state of systemic tumor antigen-specific tolerance. In other words, having a brain tumor suppresses not just local immune activity, but the entire immune system in a way that has not been described in other tumors. In these experiments, vaccination, adoptive transfer of high-affinity T-cells, and radiation can mediate tumor regression; however, a measurable degree of immune dysfunction persists in brain tumors compared with tumors outside the CNS. Our data indicate that a circulating factor is responsible, possibly in relation to the TGF (transforming growth factor)-beta pathway.
Others have corroborated and expanded on the principle of systemic immune dysfunction in GBM patients. For example, it has been shown that immune cells of these patients are sequestered in the bone marrow and, therefore, are unable to access the brain tumor. 28 Investigations into the mechanisms of brain tumor-mediated tolerance are ongoing, and we think this will be a critical step in developing glioma-directed immunotherapies.
The tragically rare, but undeniably compelling stories of patients like Mr. H offer hope that the immune system can conquer this devastating disease. Ultimately, we believe that immunotherapy will play a pivotal role in significantly prolonging survival for patients with GBM, and other brain tumors. An effective approach will need to generate and maintain a robust response against multiple tumor antigens in the CNS, while minimizing collateral damage. Patients must have a normally functioning baseline immune system to generate such a response; therefore, reversing the profound systemic immune suppression associated with CNS malignancies is of paramount importance. The negative results of clinical trials to date represent a call to action for a more intense focus on the unique aspects of brain tumor immunology.
Accurate Planar Spiral Inductor Simulations with a 2.5-D Electromagnetic Simulator
Different published approaches to simulate an inductor, by means of the 2.5-D electromagnetic tool Momentum, are revised in this article. A new layout configuration, including the measurement structure and post-simulation de-embedding, is proposed. Simulated results are verified with inductors fabricated in a SiGe 0.35-µm foundry process. Our proposed layout provides more reliable results than other configurations. © 2008 Wiley Periodicals, Inc. Int J RF and Microwave CAE 18: 242-249, 2008.
I. INTRODUCTION
Nowadays, wireless access devices have spread through every aspect of life, and designers' efforts are directed toward reducing the cost and power consumption of these terminals. One of the most effective solutions to this problem is increasing the level of integration, so silicon-based processes are preferred. However, it is a challenging task to design high-quality integrated inductors, due to the low resistivity of standard silicon substrates. The performance of voltage-controlled oscillators [1-3], low noise amplifiers [4], matching networks, and distributed amplifiers depends strongly on the inductors' quality factors [5]. Therefore, the inductor becomes a critical component in an RF circuit's response and should be carefully designed.
The most reliable way to determine the quality of an inductor involves its fabrication and measurement, but this design flow is expensive and time-consuming.
Currently, designers make use of cheaper and faster methods to predict inductor characteristics, such as physical models or electromagnetic (EM) tools. Since deriving parametric expressions that model all physical effects in coils is quite difficult [6,7], a number of designers employ EM simulations, which provide great flexibility in optimizing the inductor layout structure. Inductors can be simulated using either a three-dimensional (3D) design tool [8] or a two-dimensional (2D) one. The former requires long CPU times in spite of being able to fully simulate all the inductor parasitic effects; therefore, in this work, we will study one of the planar 2D (or 2.5D) simulators, which is fast and admits complex coil geometries: Agilent's Advanced Design System planar EM simulator, Momentum [9].
There are a number of published studies about inductors based on EM simulations. However, few of them describe thoroughly how to configure the tool in order to simulate an inductor and obtain reliable results. This work analyzes the different ways to simulate inductors with Momentum and proposes the most dependable way to do it. Section II is devoted to reporting some guidelines about modeling thick conductors with Momentum. In Section III, different approaches to simulate an inductor are analyzed, and results are discussed in Section IV. Finally, some conclusions are given in Section V.
II. THICK CONDUCTOR MODELING
Different Momentum versions, starting from ADS 2003A, have been used for simulations over the course of this research. Subsequent releases of the tool have implemented important new features. Among them, the approach to modeling thick metal is worthy of mention because of its influence on inductor simulations.
Integrated inductors' quality factor values are mainly affected by the series resistance of the metal traces and by substrate losses. These effects are properly taken into account by an EM simulator through an accurate set-up of the process parameters. In addition, it is important to define the substrate and metallization layers correctly.
Thick conductors can be simulated with Momentum following two different approaches: zero thickness or finite thickness [9]. With the former, a 3D conductor is modeled like a sheet conductor using the surface impedance model, Z_s(t, σ, ω), where t is the actual metal thickness, σ is the metal conductivity, and ω is the angular frequency. Z_s accounts for losses in the conductor associated with thickness and frequency variations (skin effect). With this approach, low-frequency currents flow through the entire cross section of the metallization, while high-frequency currents flow in a layer with a thickness equal to the skin depth, δ_S. For a cylindrical conductor, δ_S is given by

δ_S = sqrt(2 / (ω μ σ)),

where μ is the metal permeability. In this case, the current is assumed to be limited to one side of the finite thickness conductor (see Fig. 1a).
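As a quick numerical companion to the skin-depth expression above, the following sketch evaluates δ_S over frequency; the aluminum conductivity is an assumed illustrative value, not a parameter of the foundry process used in this work:

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability (H/m); mu_r ~ 1 for Al metallization
SIGMA_AL = 3.8e7            # assumed aluminum conductivity (S/m), illustrative only

def skin_depth(freq_hz, sigma=SIGMA_AL, mu=MU_0):
    """Skin depth: delta_S = sqrt(2 / (omega * mu * sigma))."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu * sigma))

for f in (1e9, 2.4e9, 5e9, 10e9):
    print(f"{f / 1e9:5.1f} GHz -> delta_S = {skin_depth(f) * 1e6:.2f} um")
```

At a few gigahertz the skin depth becomes comparable to typical top-metal thicknesses, which is why the choice between the zero and finite thickness models matters.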
However, the finite thickness approach considers thick conductors as two planar metallization layers, each one characterized by Z_s(t/2, σ, ω), with the layers spaced a thickness t apart. Thus, low-frequency currents flow through the entire cross section of the metallization, and high-frequency currents flow through a double surface layer of one skin depth each, with equal distribution on both sides of the conductor (see Fig. 1b).
The method of moments [10] considers all metal conductors as infinitely thin sheets. Although we specify the thickness of each strip, this is only used in resistive loss calculations, not during the actual EM simulations [9].
So, when we model conductors as zero-thickness layers, we do not properly define the distances to the substrate. The finite thickness approach, however, takes the actual distances from the substrate into account. Then, the parasitic capacitances between coil and substrate, and between metal tracks, are correctly simulated. As a consequence, the quality factor profile will be centered at the correct frequency.
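A first-order parallel-plate estimate shows how sensitive the coil-to-substrate capacitance is to the vertical position of the conductor sheet; the strip dimensions and oxide parameters below are illustrative assumptions, not the actual process values:

```python
import math

EPS_0 = 8.854e-12     # vacuum permittivity (F/m)
EPS_R_OX = 4.0        # assumed relative permittivity of the oxide below the coil

def plate_cap(area_m2, dist_m, eps_r=EPS_R_OX):
    """First-order parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS_0 * eps_r * area_m2 / dist_m

strip_area = 200e-6 * 10e-6   # one 200 um x 10 um metal strip (assumed)
for d in (3.0e-6, 2.0e-6):    # sheet at the correct height vs. one metal thickness lower
    print(f"d = {d * 1e6:.1f} um -> C = {plate_cap(strip_area, d) * 1e15:.2f} fF")
```

A zero-thickness sheet placed at the wrong height overestimates this capacitance, which shifts the simulated quality factor peak away from the measured frequency.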
Previous Momentum versions only modeled zero thickness conductors (see Fig. 2a). If the user wanted to define a finite thickness metal, he or she had to manually include the two t/2-thickness layers, separated by a thickness t, in the substrate definition. Current versions include a 3D metal expansion feature that executes this process automatically. The expansion can be done upward or downward. In the former case, the extra dielectric layer to be inserted has the same electric properties as the layer above the metal, as shown in Figure 2b. With downward expansion, the extra layer has the material properties of the layer below it, as illustrated in Figure 2c.
III. INDUCTOR SIMULATION
Once the substrate layers have been properly specified, the next step in any inductor simulation process is to define the spiral in a layout drawing environment. However, drawing a spiral is not a simple task, so it is advisable to use an automatic layout generation tool [11].
Inductors can be simulated in two different ways: isolated or surrounded by a metallic guard ring. In this section, both approaches are revised and simulated results are compared with measurements of 10 octagonal inductors fabricated in a 0.35-µm SiGe foundry process, whose geometry is summarized in Table I, where r_EXT is the inductor external radius, w the metal width, and n the number of turns. The spacing between metal tracks of different turns, s, is fixed to the minimum allowed by the technology, in order to minimize the occupied area and maximize the inductance value [12].
The measurement system used for the inductor characterization consists of the HP8720ES Vector Network Analyzer and the Summit 9000 Probe Station. To calibrate the measurement system, the short-open-load-through method was applied. Finally, parasitic effects introduced by the measurement structures are removed with the four-step de-embedding method [13].
A. Simulation Without Guard Ring
Since simulated S-parameters are compared with de-embedded measurements, the easiest way to run the simulation seems to be drawing the isolated spiral, without any guard ring surrounding it. Once the coil is drawn, a single port is added to its terminals, letting energy flow into and out of it [9], as shown in Figure 3.
Since Momentum is a simulator based on the method of moments, a mesh is required in order to simulate the design effectively. A mesh is a pattern of triangles and rectangles applied to a design in order to break it down into small cells. Momentum computes the current within each cell and identifies any coupling effect in the circuit during simulation. From these calculations, S-parameters are then derived for the circuit, an inductor in this case.
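For reference, once the S-parameters are available, the effective inductance and quality factor are usually extracted from the input impedance; the sketch below uses the common single-port definitions (L = Im(Z)/ω, Q = Im(Z)/Re(Z)) on a made-up data point, not on the S-parameters of this work:

```python
import numpy as np

Z0 = 50.0  # reference impedance of the simulation port (ohms)

def l_and_q(freq_hz, s11):
    """Effective inductance and quality factor from one-port S-parameters."""
    z_in = Z0 * (1 + s11) / (1 - s11)   # reflection coefficient -> input impedance
    omega = 2 * np.pi * freq_hz
    l_eff = np.imag(z_in) / omega       # effective series inductance (H)
    q = np.imag(z_in) / np.real(z_in)   # single-port quality factor definition
    return l_eff, q

# Illustrative sample point, not a value measured or simulated in this article:
l, q = l_and_q(np.array([2e9]), np.array([0.2 + 0.6j]))
print(f"L = {l[0] * 1e9:.2f} nH, Q = {q[0]:.1f}")
```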
As far as inductors are concerned, the two important mesh parameters are the Edge Mesh and the Number of Cells per Wavelength. The former must be enabled to take the proper current distribution at high frequencies into account, and the latter determines the density of the mesh. For the 2004A version and later, the default value (20 cells/λ) is enough to assure accurate results; therefore, this will be the set value in all inductor simulations.
As stated in the previous section, conductor layers can be defined in Momentum as finite or zero thickness. To find out the most reliable approach, both of them are considered. Figure 4 shows comparisons between measurements (scatter) and simulated results using the different available approaches to model thick conductors, for two of the fabricated inductors (see Table I). As expected, results are similar for all three finite thickness modeling methods, because the substrate is hardly changed when expansion is set automatically.
Simulated inductance results show good agreement with measurements using both the finite and zero thickness approaches, although the error is slightly lower with the former. With regard to the quality factor, there are more significant differences between simulated and measured data.
According to the Momentum user's manual, the finite thickness approach should agree better with measurements in cases where the width/height aspect ratio is bigger than 5. However, Figure 4 shows that results hardly improve for L3, which belongs to this group of coils. In any case, since inductance is more accurately predicted using the finite thickness approach, we will select this configuration to simulate inductors, regardless of their aspect ratio.
Since inductors are simulated using single ports and no metallization guard ring, the tool considers the circuit's implicit ground to be the potential at infinity. Thus, the ground is assumed to be placed in the infinite metal layer closest to the substrate; in this case, a default layer called GND, located below the substrate. Therefore, Momentum will infer that the ground is below the substrate. However, the inductors have been measured with coplanar probes, so the ground plane in the measurement set-up is placed on the guard ring surrounding the coils. For that reason, it becomes necessary to simulate the circuits with a metallization guard ring to match the simulation and measurement set-ups.
B. Simulation with Guard Ring
As already mentioned in the introduction, very few publications describe in detail how to simulate an inductor. As far as we know, there are only two that consider the guard ring surrounding the spiral and give some guidelines about the simulation set-up [14,15]. Van Hese reports in [14] that the spiral should be surrounded by a metallization ring, connected to the silicon substrate through a number of vias. Two internal ports are placed on each side of the inductor, and two additional ground reference ports are added close to the internal ports and connected to the guard ring. These two reference points ensure that the returning current follows the intended path in simulation, instead of using a master ground below the substrate. The measured inductors were simulated following these guidelines, including the same guard ring as the fabricated coils. Figure 5 illustrates the simulated layout.
Scuderi et al. employ in [15] a set-up based on [14], but more similar to the one used in measurements, since the guard ring includes the ground and signal pads. As shown in Figure 6, one internal port is inserted on each side of the inductor, with two associated ground reference ports, each one connected to the corresponding ground pad in the ring.
The fabricated inductors were simulated following the guidelines stated above. Figure 7 shows measured and simulated data for inductor L6 (see Table I). Results show that, when a ring is added, the simulated quality factor differs substantially from that obtained without a guard ring. It is also noticeable that, when simulations are run according to [14] and [15], the simulated quality factor is far from the corresponding measured values. Inductance results, however, differ only slightly in the new layout configurations.
Therefore, it is necessary to improve the inductor and guard-ring layouts in order to obtain consistent simulated results for the quality factor. We propose to simulate the fabricated structure as it is, with the inductor connected to the guard ring and internal ports connected to the signal pads (see Fig. 8). This solution requires a post-simulation de-embedding to remove the parasitic effects introduced by the measurement structures. Therefore, the three fabricated measurement fixtures (open, short, and thru) must also be simulated separately. The same de-embedding technique employed with the measurements is then applied to the simulated results [13].
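The four-step method of [13] itself is not reproduced here, but the flavor of post-simulation de-embedding can be seen in the classical open-short correction sketched below; this is a simplified two-step stand-in applied to hypothetical two-port data, not the actual procedure of [13]:

```python
import numpy as np

def open_short_deembed(y_dut, y_open, y_short):
    """Classical open-short de-embedding (simplified illustration).

    Step 1: remove pad parallel parasitics:  Y' = Y_dut - Y_open
    Step 2: remove series interconnects:     Z  = inv(Y') - inv(Y_short - Y_open)
    """
    y_inner = y_dut - y_open
    z_series = np.linalg.inv(y_short - y_open)
    z_clean = np.linalg.inv(y_inner) - z_series
    return np.linalg.inv(z_clean)   # back to Y-parameters of the bare inductor

# Hypothetical 2x2 Y-matrices (siemens) at a single frequency point:
y_dut = np.array([[0.020 + 0.010j, -0.015 - 0.008j],
                  [-0.015 - 0.008j, 0.020 + 0.010j]])
y_open = np.array([[0.001j, -0.0002j], [-0.0002j, 0.001j]])
y_short = np.array([[0.5 - 0.2j, -0.45 + 0.18j], [-0.45 + 0.18j, 0.5 - 0.2j]])
print(open_short_deembed(y_dut, y_open, y_short))
```

In our flow, the same correction chain applied to the measured data is replayed on the simulated open, short, and thru fixtures, so that simulation and measurement are compared on an equal footing.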
Measured and de-embedded simulated results for L6 are shown in Figure 9. The quality factor shape, although slightly overestimated around the central frequency, is now more precisely predicted by Momentum. The inductance value is also accurately estimated, even reducing the already low errors of the previous approaches.
IV. DISCUSSION
Different approaches to simulating inductors with Momentum have been revised. First of all, for simulations without a guard ring, we have verified that the most reliable way to model thick conductors is the finite thickness approach. This layout configuration provides accurate inductance predictions, although the error becomes significant for the quality factor.
On the other hand, the ground plane is correctly placed when inductor simulations are run with a metallization guard ring. After revising two different reported methods to implement the guard ring, a new one is proposed, demonstrating that the best layout set-up involves simulating the structure as it is fabricated, followed by a post-simulation de-embedding.
To produce some guidelines for selecting one of these two approaches (with or without guard ring) as the optimum configuration to obtain reliable results in Momentum simulations, relative errors in key parameters are compared for all fabricated inductors. Figure 10 shows that relative errors in inductance predictions are slightly lower when simulating the coil surrounded by a guard ring. Nevertheless, both approaches provide very good results, with errors below 5%, for typical frequencies of use (below the resonant frequency). Although the error seems more significant for L1 and L2, the inductance value for these coils is lower than 1 nH. Therefore, errors around 15% involve a divergence of only 0.1 nH between measured and simulated data, a figure which is quite acceptable in usual designs.
Apart from the inductance value, it is critical that the simulator accurately estimates the frequency of maximum quality factor, f_QMAX, so that the inductor may operate in the suitable frequency range. Figure 11 illustrates the deviation between measured and simulated f_QMAX. Simulations with guard ring, and subsequent de-embedding, predict this frequency correctly, and the relative error is lower than 10% for nine of the ten fabricated inductors. This means that the quality factor shape is shifted by only 200 MHz. The error grows if the coils are simulated without the guard ring, generating displacements higher than 500 MHz for half of the fabricated inductors.
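Locating f_QMAX and its relative deviation reduces to a peak search on the Q(f) curves; a minimal sketch, with synthetic Gaussian-shaped curves standing in for the measured and simulated data:

```python
import numpy as np

def f_qmax(freq, q):
    """Frequency at which the quality factor curve peaks."""
    return freq[np.argmax(q)]

freq = np.linspace(0.5e9, 10e9, 500)
q_meas = 8.0 * np.exp(-((freq - 3.0e9) / 2e9) ** 2)   # synthetic "measured" Q(f)
q_sim = 9.0 * np.exp(-((freq - 3.2e9) / 2e9) ** 2)    # synthetic "simulated" Q(f)

f_m, f_s = f_qmax(freq, q_meas), f_qmax(freq, q_sim)
print(f"relative f_QMAX error = {abs(f_s - f_m) / f_m * 100:.1f} %")
```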
The relative errors for the maximum quality factor, Q_MAX, are shown in Figure 12. Although the error is smaller in simulations including the guard ring, it becomes significant for wide-metal inductors (w ≥ 18 µm), such as L1, L2, or L3, and for those with a small enclosed area, such as L4. It can be deduced from this fact that Momentum does not accurately take into account some field effects appearing at high frequencies. These effects, such as the skin effect or eddy currents in the inner turns, are more significant in inductors with that geometry.
Apart from the inductor performance estimation, another important element to be considered by the designer is the CPU time needed to perform the simulations. Figure 13 summarizes the CPU time on a Pentium IV PC fully dedicated to Momentum simulations, up to a simulation frequency of 10 GHz. Simulations with guard ring are considerably more time consuming. In addition, the simulation time required for the measurement fixtures, plus some time to perform the de-embedding, should be added to the results shown in the figure. The simulations for the short, open, and thru structures take no longer than 10 min.
In spite of this, any of the inductor simulations takes less than 90 min, which can be considered short compared to the time required by a 3D EM tool.
V. CONCLUSION
In this article, different approaches to simulating inductors using the EM tool Momentum have been revised. Simulated data have been compared with measurements of inductors fabricated in a 0.35-µm SiGe process.
Results show that the most reliable way to simulate an integrated spiral inductor is to include a guard ring surrounding the coil and connected to it. The ring should contain signal and ground pads, and internal and ground reference ports must be placed on them. This layout approach provides accurate inductance values and quality-factor-versus-frequency shapes, centered at the same frequency as the measurements. Simulations without guard ring adequately predict the inductance value too; however, quality factor curves are overestimated and shifted to higher frequencies.
Finally, it is worth noting that both layout configurations, with and without ring, overestimate the quality factor for inductors where the skin effect or eddy currents are particularly significant.

Antonio Hernández received the doctorate in Telecommunication Engineering in 1992 from the University of Las Palmas de Gran Canaria, Spain. He is a founding member of IUMA, the Institute for Applied Microelectronics of the University of Las Palmas de Gran Canaria, where he is a Professor. His current research interests include the modeling of active and passive devices for microwave and very high-speed applications, and RF integrated circuits.
Ion Beam Etching on Ti-30Ta Alloy for Biomedical Application
Titanium and titanium alloys are currently used for clinical biomedical applications due to their high strength, corrosion resistance and favorable elastic modulus. However, these materials have recently been shown to exhibit ion release and poor physiological integration, which may result in fibrous encapsulation and further biomaterial rejection. To produce a successful replacement for bone, a current approach for enhancing the mechanical and biological properties of Ti is to alloy it with Ta, which provides greatly improved mechanical properties, including fracture toughness and workability. Studies have shown techniques such as ion beam etching, heat and alkaline treatment, SBF coatings and anodization to promote an altered cellular response on Ti and Ti alloys. In this study, ion beam etching of the Ti-30Ta alloy was investigated. SEM was used to investigate the topography, EDS the chemical composition, and the surface energy was evaluated with contact angle analysis, since the topography affects protein adsorption, platelet adhesion, blood coagulation and bacterial adhesion. This study concludes that the Ti-30Ta alloy substrate with ion beam etching is not favorable for biomedical application.
Introduction
Metallic materials have been used as implantable devices for orthopedic and dental implants. The interaction between the implant surface and the tissue plays an important role in the success of such implantable devices. Several studies have shown that modifying the surface at the nanoscale or microscale can alter the cellular response, and techniques such as ion beam etching have been shown to promote an altered cellular response on Ti and Ti alloys. One effective way to evaluate the biological response of metallic biomaterials is to investigate the wettability of the material surface, since the topography affects protein adsorption, platelet adhesion, blood coagulation and bacterial adhesion.
In this study, the substrate surface of the Ti-30Ta alloy was altered by topographical surface modification: the Ti-30Ta alloy substrates were modified by ion beam etching. The following techniques were used to characterize all groups: scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS) and contact angle analysis.
Materials and Method
Ti and Ta were combined by a melting process, homogenized in a vacuum, cold-worked by a rotary swaging process and cut into discs. The Ti-30Ta surface was modified by etching the alloy with an oblique-angle oxygen ion beam, increasing the oxygenation of the near-surface regions. Etching was done with a 16 cm ion source in a low-pressure environment (approximately 1.6 × 10⁻⁴ Torr). Gas flow rates through the source and neutralizer were 20 sccm O₂ and 8 sccm Ar, respectively. An energetic beam of 1200 eV ions with a beam current of 200 mA was used for a 3 hour etch. This beam consisted primarily of oxygen ions, though it is possible that minute amounts of background Ar gas could diffuse into the source, ionize and be included. The substrates were placed on an inclined holder so that the resulting angle of ion incidence was approximately 75 degrees from the surface normal.
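From these etch parameters, the ion flux and total fluence delivered to the holder can be estimated; the beam area below is a rough assumption based on the 16 cm source diameter, and singly charged ions are assumed:

```python
import math

Q_E = 1.602e-19                    # elementary charge (C)
BEAM_CURRENT = 0.200               # A (200 mA)
ETCH_TIME = 3 * 3600               # s (3 hour etch)
BEAM_AREA = math.pi * 0.08 ** 2    # m^2, assuming a 16 cm diameter circular beam

flux = BEAM_CURRENT / (Q_E * BEAM_AREA)   # ions per m^2 per s (singly charged)
fluence = flux * ETCH_TIME                # total ions per m^2 over the etch
print(f"flux    = {flux:.2e} ions/m^2/s")
print(f"fluence = {fluence:.2e} ions/m^2 ({fluence / 1e4:.2e} ions/cm^2)")
```

The normal-incidence estimate above would additionally be reduced by cos(75°) ≈ 0.26 for the inclined holder.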
Physical characterization
The Ti-30Ta alloy substrate surfaces were examined before and after the surface modification. The surface topography of the substrates was characterized using a JEOL JSM-6500F SEM. The surface elemental composition of the Ti-30Ta alloy substrates was further characterized with energy-dispersive X-ray spectroscopy (EDS, JSM-6500F SEM). The wettability of the modified surfaces was determined by measuring the water contact angle (FTA1000B Class, First Ten Angstroms, Inc). A 2 µl droplet of distilled water was dropped on the surface and droplet images were captured immediately with a camera. The images were then processed with the accompanying Fta32 software to determine the contact angle and the droplet volume. All the studies were conducted on a minimum of 6 samples to ensure appropriate statistical variability.

The results indicate that the ion etching did not significantly alter the surface; it seems that etching at 1200 eV was not enough to significantly change the surface topography. The EDS spectra for both groups show peaks for titanium and tantalum. Figure 2 shows the contact angle measurements for the Ti-30Ta control group and the Ti-30Ta etch group. The results indicate the following order of surface hydrophilicity: Group 2 > Group 1. This behavior is extremely important, since cell and bacterial adhesion, protein adsorption, platelet adhesion and activation, and blood coagulation may all be affected. Materials for biomedical application need to be more hydrophilic, since a higher surface energy is desirable for biological interaction.
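With a minimum of six droplets per group, the wettability comparison amounts to summarizing the two contact-angle samples and testing the difference; the angle values in the sketch are placeholders, not the data measured in this study:

```python
import numpy as np
from scipy import stats

# Placeholder contact angles in degrees (six droplets per group):
group1_control = np.array([62.1, 64.5, 63.0, 61.8, 65.2, 63.7])
group2_etched = np.array([71.4, 73.0, 70.2, 74.1, 72.5, 71.9])

for name, g in (("control", group1_control), ("etched", group2_etched)):
    print(f"{name}: mean = {g.mean():.1f} deg, std = {g.std(ddof=1):.1f} deg")

# Welch's t-test (unequal variances) for the group difference:
t_stat, p_val = stats.ttest_ind(group1_control, group2_etched, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_val:.4f}")  # lower angle = more hydrophilic
```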
Conclusion
In this study, the Ti-30Ta alloy substrates were modified by ion beam etching. SEM results show a slightly different structure on the surface, and the EDS spectra identified similar compositions for Groups 1 and 2. The results presented here show little alteration of the topography of the substrate surfaces. Overall, the contact angle measurements show the Ti-30Ta etch group to be more hydrophobic than the Ti-30Ta control. This study concludes that the Ti-30Ta alloy substrate with ion beam etching is not favorable for biomedical application.
Spacetime coverings and the causal boundary
We consider the relation between the c-completion of a Lorentz manifold V and its quotient M = V/G, where G is an isometry group acting freely and properly discontinuously. First, we consider the future causal completion case, characterizing essentially when such a quotient is well behaved with the future chronological topology and improving the existing results in the literature. Secondly, we show that, under some general assumptions, there exists a homeomorphism and chronological isomorphism between the c-completion of M and an adequate quotient of the c-completion of V defined by G. Our results are optimal, as we show in several examples. Finally, we give a practical application by considering isometric actions on Robertson-Walker spacetimes, including in particular the Anti-de Sitter model.
Introduction
The AdS/CFT correspondence, also known as Maldacena's duality, states the duality between gravitational theories, such as string or M-theory, on a bulk space (usually a product of the Anti-de Sitter spacetime with spheres or other compact sets) and conformal field theories defined on the boundary of the bulk space, which behaves as a hologram of lower dimension (see [1]). As is apparent, the conjecture relies strongly on the notion of the boundary of Lorentz manifolds. However, the problem of attaching a natural boundary to any Lorentz manifold, encoding relevant information on it, such as its conformal structure and related elements (event horizons, singularities, etc.), has been a long-standing issue over the last four decades.
Among the several constructions proposed (see [2][3][4] for nice reviews of the classical elements and [5,6] for updated progress), two approaches have had an especially important role in general relativity: the conformal and the causal boundaries.
The conformal boundary is the most applied one in mathematical relativity, and several notions, such as asymptotic flatness, and tools, such as Penrose-Carter diagrams, rely on it. Even in the original approach to the AdS/CFT correspondence, it is the conformal boundary that is chosen as the holographic one. In fact, the Anti-de Sitter spacetime can be conformally embedded in the Lorentz-Minkowski model, obtaining a simple (and non-compact) conformal boundary. However, it has important limitations, as it is an ad hoc construction: no general formalism determines when the boundary of a reasonably general spacetime is definable, intrinsic, unique and contains useful information about the spacetime (see [7] and [5, section 4] for studies regarding the uniqueness of the conformal boundary). In fact, as was put forward by Berenstein, Maldacena and Nastase [8], there seem to be problems when the conformal boundary is considered on plane waves. Indeed, Marolf and Ross [9] realized that the conformal boundary is not available for non-conformally flat plane waves. So, they proposed a redefinition of the c-boundary applicable to such waves [10], which was refined and systematically studied by Flores and Sánchez in [11].
This motivated a reconsideration of such constructions by substituting the causal boundary for the conformal one, the former being intrinsic, conformally invariant and systematically computable, as carried out in [5]. It is worth emphasizing that the conformal and causal boundaries are shown to coincide in most relevant classes of spacetimes (so previous results based on the conformal case do not need to be re-obtained for the causal one).
Returning to the problem of the AdS/CFT correspondence, it is our aim to present the causal boundary of different classes of Lorentz manifolds, allowing the study of such a correspondence with different bulk spaces. In this sense, let M be, for instance, a Lorentz manifold with constant negative curvature, and so a spacetime that can be locally modelled by the Anti-de Sitter spacetime. Recalling that the universal covering ÃdS is maximal, simply connected and of constant negative curvature, it is to be expected that M can be described as a quotient space of ÃdS by an appropriate group of isometries (in fact, for certain spacetime topologies, the existence of such a group was proved by Mess [12]). This is the particular case of the BTZ black holes, the (2+1)-model of spacetime first introduced by Bañados, Teitelboim and Zanelli [13], and of the Hawking-Page reference space [14], whose representations as quotients of the Anti-de Sitter model are well known [15,16].
Since the causal boundary is well known for ÃdS (see [17, section 4.1]), the following question, particularly natural from the mathematical viewpoint, arises: given two (general) Lorentz manifolds M and V, where M is constructed as the quotient of V by some group of isometries, what is the relation between the causal boundaries and completions of M and V? An adequate answer to this question will give us tools to easily compute the causal completion of M once we know the corresponding one for V. For instance, such a result will be applicable to models like the BTZ black holes or the Hawking-Page reference model, besides other models constructed in a similar way (as in the case of cosmic strings, see [18]). It will also give us relevant information about the c-completion of V whenever the c-completion of M is known.
The first studies in this direction are due to Harris [19]. In his work, he studied how isometric actions affect the causal structures of spacetimes, with special attention to the future causal boundary and related concepts (such as strong causality). Concretely, he considers a projection π : V → M given by a discrete subgroup G of isometries acting freely and properly discontinuously on V, i.e., where M = V/G and the elements of M represent G-orbits in V. In this setting, Harris characterizes the strong causality and global hyperbolicity of M in terms of the global causal structure of V. Moreover, under the assumption that M is distinguishing (which implies, in particular, that V also is), he presents necessary conditions to ensure that the future causal completion of M is homeomorphic to an appropriate quotient of the future causal completion of V.
Our aim in this work is to extend the results obtained by Harris for the future causal completion to the c-completion. However, several problems have to be addressed first. On the one hand, the main result in [19] imposes that the future causal boundaries of both M and V have only spacelike future boundary points. This condition, even if reasonable (especially recalling the final example of his paper), is too strong in the c-completion context, where timelike boundary points are especially relevant. On the other hand, and in contrast with the partial case, the c-completion requires the study of the so-called S-relation between future and past sets, as well as some "compatibility" between the topologies of the future and past completions.
The contents of the paper are organized as follows. In section 2 we give the preliminaries needed throughout the rest of the paper. Most of them are well known (for instance, the construction of the c-completion was developed in [5]), but we also introduce concepts (such as first order UTS, definition 2.1) and results (lemmas 2.2 and 2.4, and some of the assertions in Theorem 2.10) that, as far as we know, are new. Section 3 is devoted to the study of the future (and, by analogy, past) causal boundary. Here, at the point set level, we recall the bijection defined by Harris between a suitable quotient of V̂ (the future completion of V) and M̂ (the future completion of M). Then, we perform a detailed comparison between the topologies of both spaces (the first one with the induced quotient topology). The results of this section are summarized as follows:

Theorem 1.1. Let π : V → M be a spacetime covering projection (see section 2.3) and denote by π̂ its extension to the future c-completions (3.1). Let V̂/Ĝ be the quotient space defined by the following relation: two points P, P′ ∈ V̂ are ∼Ĝ-related if they project onto the same point in M̂. Then, we obtain a commutative diagram in which π̂ factors as the composition of the natural quotient projection î : V̂ → V̂/Ĝ with an induced map V̂/Ĝ → M̂. By construction, the induced map is bijective. At the topological level,
(i) the induced map is always open; and

(ii) up to the first order UTS property, the induced map is continuous if, and only if, M does not admit sequences with future divergent lifts (see definition 3.6).
In particular, if M has only spatial future boundary points, the induced map is a homeomorphism between V̂/Ĝ and M̂. The same result follows if G is finite and V̂ is Hausdorff.
As we can see in (ii) above, we have obtained almost a characterization of the continuity of the induced map, up to the first order UTS property. In fact, this result generalizes [19, theorem 3.4], as the last assertion of theorem 1.1 shows.
Section 4 is focused on the study of the (total) c-completion at all possible levels, namely the point set, chronological and topological levels. In section 4.1, simple and general sufficient conditions are given to ensure that a map is well defined between a reasonable quotient of V̄ (the c-completion of V) and M̄ (the c-completion of M). Then, it is shown in section 4.2 that this map is well behaved with respect to the chronological relation, whenever an appropriate chronological relation is defined on the quotient space. Finally, in sections 4.3 and 4.4, the conditions ensuring that the map is continuous and open, respectively, are studied. The latter becomes subtler, and a simple condition (being finitely chronological) is introduced. This property also simplifies the conditions ensuring the well-posedness and continuity of the map.
(PS3) Finally, if (V, G) is finitely chronological and both V̂, V̌ are Hausdorff, then the projection π̄ restricts properly to M̄, is surjective and is univocally determined.
Moreover, when the map π̄ restricts properly to M̄ and is surjective, it defines the following relation between points of V̄: two points are ∼G-related if they project onto the same point in M̄. Then, denoting by V̄/Ḡ the quotient space, we obtain a commutative diagram in which π̄ factors as the composition of the natural quotient projection ı̄ with the induced bijection.
At the chronological level, once an appropriate chronological relation is defined on V̄/Ḡ (see section 4.2), it follows that: (CH) the induced map is a chronological isomorphism. Finally, at the topological level, the induced map satisfies the following properties: (TP1) The map is continuous if one of the following hypotheses holds: (i) π̄ maps pairs of the form (P, ∅) to pairs of the form (π̂(P), ∅), and pairs (∅, F) to (∅, π̌(F)) (this follows if, for instance, π is tame or (V, G) is finitely chronological), and M has no sequence with (future or past) divergent lifts.
In particular, π̄ restricts properly to M̄, is surjective, univocally determined and induces a homeomorphism and chronological isomorphism between V̄/Ḡ and M̄ if one of the following assertions is satisfied: (a) (V, G) is finitely chronological and M admits no sequence with (future or past) divergent lifts.
(b) (V, G) is finitely chronological, both V̂, V̌ are Hausdorff, and M has no lightlike boundary points.
(c) (V, G) is finitely chronological, V has no lightlike boundary points, and both V̂, V̌ are Hausdorff and have closed G-orbits. In particular, if π is (future and past) tame and there are no constant sequences with divergent lifts in M, then the G-orbits in V̂ and V̌ will be closed.
In section 5 we include several technical examples showing the optimality of our results; that is, we show that if we remove any of our sufficient conditions (tameness, non-existence of sequences with divergent lifts, or finite chronology), the results become, in general, false. Finally, in section 6, as a physically relevant application of our results, we use the developed theory to compute the causal boundary of quotients of Robertson-Walker spacetimes, including quotients of the AdS spacetime.
Sequential topologies and limit operators
In this section we include all the basic facts about sequential topologies and limit operators that we will require in the rest of the paper. Most of the results are known (see [6,20]), but we present the concept of first order UTS along with some associated results that, as far as we know, are new.
Let X be an arbitrary space with a limit operator L defined on it, that is, an operator L : S(X) → P(X), where S(X) is the space of sequences in X and P(X) is the space of subsets of X. We will always assume that the limit operator is: (a) coherent, so that L(σ) ⊂ L(κ) whenever κ ∈ S(X) is a subsequence of σ (this will be denoted by κ ⊂ σ); and (b) finite-invariant, ensuring that L(σ) = L(κ) if a common subsequence is obtained by deleting a finite number of terms from both sequences.
Any (coherent and finite-invariant) limit operator naturally defines a topology τ_L on X in the following way: a set C is closed for τ_L if and only if L(σ) ⊂ C for every sequence σ ⊂ C. Such a topology is sequential, i.e., it is completely determined by the convergence of its sequences (a subset is closed if and only if it contains all its convergent sequences); this happens even if L(σ) only determines some of the possible limits of σ. Reciprocally, any sequential topology τ has an associated limit operator L_τ (its usual convergence) such that τ = τ_{L_τ} (see [6, proposition 2.6]). Observe, however, that this is not the unique limit operator determining the topology τ. Among the limit operators defining a concrete sequential topology τ, it is always possible to choose one satisfying p ∈ L({p}_n) for all p ∈ X, where {p}_n denotes the constant sequence p. In the particular case when {p} = L({p}_n), we will say that the limit operator is idempotent. Finally, the pair (X, L) will also denote the sequential topological space (X, τ_L).
In general, the limit operator L does not determine the complete set of points to which a sequence σ converges in the topology τ_L. In fact, the only implication which always holds is:

p ∈ L(σ) ⟹ σ converges to p in the topology τ_L.    (2.1)
When the converse implication is satisfied for all sequences, we will say that the limit operator is of first order. In general, there are not many results determining when a limit operator is of first order; in practical cases, the proof is done case by case, taking special care of "problematic" sequences. However, if we relax the first order condition on L slightly, we can obtain simple-to-check conditions which will be enough for our purposes. In this sense, let us introduce some definitions.
Definition 2.1. Let X be a space and L a limit operator defined on X. Let us denote by τ_L the associated sequential topology, and let σ ⊂ X be a sequence. We will say that L is of first order for σ if p ∈ L(σ) ⟺ σ converges to p in the topology τ_L.

Additionally, we will say that L is of first order up to a subsequence for σ (or first order UTS, for short) if σ has a subsequence κ ⊂ σ such that L is of first order for κ. Finally, we will say that L is of first order UTS if it is of first order UTS for every sequence σ ⊂ X.
The following result gives us a sufficient condition ensuring that a limit operator is of first order for a given sequence.

Lemma 2.2. Let X be a space with an idempotent limit operator L, and let σ ⊂ X be a sequence such that #L(σ) < ∞ and L(κ) = L(σ) for every subsequence κ ⊂ σ. Then cl(σ) = σ ∪ L(σ) and L is of first order for σ.

Proof. The proof is quite straightforward and we include it here for the sake of completeness. Observe that the set C = σ ∪ L(σ) ⊂ cl(σ) by (2.1), so the first assertion follows if we prove that C is closed. For this, let κ ⊂ C be a sequence and let us prove that L(κ) ⊂ C. Due to the finite number of elements in L(σ), we have two possibilities (up to a subsequence) for κ ⊂ C: either κ is a subsequence of σ, and so L(κ) = L(σ) ⊂ C; or κ is constantly equal to an element p ∈ C, and so L(κ) = {p} ⊂ C. In both cases, L(κ) ⊂ C and hence C is closed.
For the last assertion, that is, the first order character of L on σ, let us assume that σ → p. Again, we distinguish two cases:

• If p appears infinitely many times in σ, then the constant sequence {p}_n is, up to finitely many terms, a subsequence of σ, and the hypothesis together with idempotency gives L(σ) = L({p}_n) = {p}; hence p ∈ L(σ).

• Otherwise, we can exclude a finite number of elements of σ so that the refined sequence σ′ does not contain p. As we are removing only a finite number of elements, L(σ′) = L(σ), and it follows from the first assertion that cl(σ′) = σ′ ∪ L(σ′). As σ′ → p, we have that p ∈ σ′ ∪ L(σ′). By construction σ′ does not contain p, so p ∈ L(σ′) = L(σ).
In conclusion, p ∈ L(σ) and L is of first order for σ.
The previous result gives us a relatively simple way to determine when L is of first order for a given sequence σ (and so, to determine when L is of first order), and it is usually enough in particular cases. However, we can go a step further in the search for an easily verifiable condition. For this, let us note that most of the results we present in this paper require, not a complete control of the convergence of sequences, but the existence, for any sequence, of a sufficiently well-behaved subsequence. This is made apparent in the following result, which ensures the continuity of a map between sequential spaces:

Proposition 2.3. Let (M, L) and (N, L′) be sequential spaces and let f : M → N be a map such that, for every sequence σ ⊂ M and every p ∈ L(σ), there exists a subsequence κ ⊂ σ with f(p) ∈ L′(f(κ)). Then f is continuous.

Proof. Let C be a closed set in (N, L′), and let us show that f⁻¹(C) is closed in (M, L). Assume by contradiction that f⁻¹(C) is not closed, and so, by definition, that there exist a sequence σ ⊂ f⁻¹(C) and a point p ∈ M with p ∈ L(σ) \ f⁻¹(C). By hypothesis, there exists a subsequence κ ⊂ σ such that f(p) ∈ L′(f(κ)). But f(κ) ⊂ C, which is closed for the topology τ_{L′}. Therefore f(p) ∈ C, and so p ∈ f⁻¹(C), a contradiction.

This is one of the reasons why the condition of L being of first order UTS is especially interesting for us. Moreover, as we can see in the following result, it is possible to obtain a sufficient condition for first order UTS which is particularly simple to verify in practical cases:

Lemma 2.4. Let X be any space with an idempotent limit operator L defined on it. Assume that #L(σ) < ∞ for every sequence σ ⊂ X. Then L is of first order UTS.
In the first case, L is of first order for σ according to lemma 2.2 and we are done. In the second case, we can repeat the same argument with κ 1 on the role of σ. Again there are two possibilities: either it ends in a finite number of iterations with a sequence κ n 0 satisfying previous (a), hence, with L being of first order for κ n 0 ; or we obtain a chain of subsequences with #L(κ i+1 ) #L(κ i ) + 1. However, this second posibility will lead us to the existence of a sequence with infinite limits, a contradiction. In fact, if κ i = {x i n } n then the diagonal sequence {x n n } n satisfies: which implies that #L({x n n } n ) = ∞ due the increasing character of #L(κ i ). In conclusion, previous inductive process should end in a finite number of steps, obtaining a subsequence of σ where L is of first order.
Finally, let us review how sequential topologies behave under quotients. As proved in [20, remark 5.12], given a sequential space (X, L) and an equivalence relation ∼ defined on it, the quotient topological space X/∼ (with the induced topology) is again a sequential space. In fact, it is possible to give explicitly a limit operator L_Q — defined from L by means of the natural quotient projection i : X → X/∼ and representatives [x], [x_n] ∈ X/∼ (see (2.2)) — whose associated topology coincides with the quotient topology on X/∼. As happens in the general case of topological spaces, the quotient topology of sequential spaces may not preserve the separability conditions of the original topological space. This is particularly interesting regarding the T_1 condition, which is translated to limit operators as the idempotency property (so that points are closed in the associated sequential topology). As we will see in example 5.1, we can obtain a non-idempotent limit operator L_Q even when L is idempotent.
C-boundary construction
The causal completion was first introduced by Geroch, Kronheimer and Penrose in their seminal work [21]. The main idea of the construction is to attach to any future- or past-inextensible timelike curve an ideal point characterized by the past or future of the curve. The original construction presents several problems, mainly related to the topology considered. However, the notion of causal boundary and completion has been widely developed [22][23][24][25][26][27] (see also the reviews in [2,4]), reaching a fully satisfactory definition of the causal completion (named c-completion) in [5].
Let us review some classical concepts of causal theory, referring the reader to [28] for further details and classical notation. Let (V, g) be a connected, time-oriented Lorentz manifold. Denote by ≪ the chronological relation (respectively, by ≤ the causal relation); that is, p ≪ q (p ≤ q) iff there exists a future-directed timelike (causal) curve from p to q. In what follows, the spacetime V will be considered strongly causal, and so the intersections between the chronological futures and pasts of points generate the topology of V. In particular, strong causality also ensures that V is distinguishing; hence two different points p, q ∈ V have different futures, I⁺(p) ≠ I⁺(q), and different pasts, I⁻(p) ≠ I⁻(q).
A non-empty subset P ⊂ V is called a past set if it coincides with its past, i.e., P = I⁻(P) := {p ∈ V : p ≪ q for some q ∈ P}. Let S ⊂ V and define the common past of S as ↓S := I⁻({p ∈ V : p ≪ q ∀q ∈ S}). Observe that, by definition, past and common past sets are open. A past set that cannot be written as the union of two proper past sets is called an indecomposable past set, IP for short. An indecomposable past set P belongs to one of the following two categories: either P can be expressed as the past of a point of the spacetime, i.e., P = I⁻(p) for some p ∈ V, and P is then called a proper indecomposable past set, PIP; or P = I⁻({x_n}_n) for some inextensible future-directed chronological sequence {x_n}_n, and then P is called a terminal indecomposable past set, TIP. The dual notions of future set, common future, IF, PIF and TIF are defined by interchanging the roles of past and future in the previous definitions.
The future causal completion V̂ is defined as the set of all indecomposable past sets (IPs). As the manifold V is distinguishing, the original manifold points p ∈ V are naturally identified with their pasts, p ≡ I⁻(p), and so V is identified with the set of PIPs. Accordingly, the future causal boundary ∂̂V is defined as the set of all TIPs of V, obtaining the following identifications: V ≡ PIPs, ∂̂V ≡ TIPs, V̂ ≡ IPs.
The future causal completion will be endowed with the future chronological topology τ̂_chr, a sequential topology defined by the following limit operator: for σ = {P_n}_n ⊂ V̂,

L̂_chr(σ) := {P ∈ V̂ : P ⊂ LI({P_n}_n) and P is maximal in LS({P_n}_n)}.    (2.3)

Here, by maximal we mean that no other P′ ∈ V̂ satisfies the stated property and strictly includes P. The symbols LS and LI denote the superior and inferior limits of sets, respectively,
which are defined in the following way: given a sequence {A_n}_n of sets,

LI({A_n}_n) := lim inf A_n = ∪_n ∩_{k ≥ n} A_k,    LS({A_n}_n) := lim sup A_n = ∩_n ∪_{k ≥ n} A_k.

An analogous definition follows for the past causal completion V̌ by interchanging the roles of future and past sets. Hence V ≡ PIFs, ∂̌V ≡ TIFs, V̌ ≡ IFs, and V̌ is endowed with the past chronological topology τ̌_chr, defined by a limit operator Ľ_chr.
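The set limits LI and LS are easy to experiment with; for an (eventually) periodic sequence of sets they reduce to the intersection and the union over one period, as the following toy sketch with hypothetical finite sets illustrates:

```python
def liminf_sets(period):
    """LI of an eventually periodic sequence of sets = intersection over one period."""
    out = set(period[0])
    for s in period[1:]:
        out &= set(s)
    return out

def limsup_sets(period):
    """LS of an eventually periodic sequence of sets = union over one period."""
    out = set(period[0])
    for s in period[1:]:
        out |= set(s)
    return out

# A_n alternates between two hypothetical sets:
A_even, A_odd = {1, 2, 3}, {2, 3, 4}
print(liminf_sets([A_even, A_odd]))   # {2, 3}: points eventually in every A_n
print(limsup_sets([A_even, A_odd]))   # {1, 2, 3, 4}: points in infinitely many A_n
```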
For the (total) c-boundary, we need to take into account that some IPs and IFs naturally represent the same point of the completion. This is quite evident for PIPs and PIFs, where a past and a future set can be identified if they are the past and the future of the same point. However, this identification is insufficient, as other indecomposable sets have to be identified. To this end, let us define the so-called S-relation (introduced in [26]). Denote V̂_∅ := V̂ ∪ {∅} and V̌_∅ := V̌ ∪ {∅}, and, for P ∈ V̂ and F ∈ V̌, say that P and F are S-related,

P ∼_S F  ⟺  F is maximal among the IFs contained in ↑P and P is maximal among the IPs contained in ↓F,    (2.4)

where ↑ denotes the common future, defined dually to ↓. As proved by Szabados [26], the past and future of a point p ∈ V are S-related, I⁻(p) ∼_S I⁺(p), and these are the unique S-relations (according to our definition (2.4)) involving proper indecomposable sets. We also define P ∼_S ∅ (respectively, ∅ ∼_S F) if P (respectively F) is a non-empty, necessarily terminal, indecomposable past (respectively future) set that is not S-related by (2.4) to any other indecomposable set (note that ∅ is never S-related to itself). Now we can introduce the notion of c-completion. At the point set level, and following the idea of Marolf and Ross [10], the c-completion is formed by the S-related pairs of indecomposable sets:

V̄ := {(P, F) ∈ V̂_∅ × V̌_∅ : P ∼_S F}.    (2.5)
Every point p ∈ V of the manifold is naturally identified with its corresponding pair (I⁻(p), I⁺(p)), so V can be (and will be) considered a subset of V̄. The c-boundary is then defined as ∂V := V̄ \ V. The boundary points can be classified into three different classes, which we define now for future reference.
Definition 2.5. Let (P, F) ∈ ∂V be an arbitrary point of the c-boundary. We will say that (P, F) is a timelike boundary point if both components are non-empty, P ≠ ∅ ≠ F. The point is a lightlike boundary point if one of the components is empty and the non-empty component, say P (respectively F), satisfies P ⊊ P′ for some indecomposable set P′ (respectively F ⊊ F′ for some indecomposable set F′). Finally, in the remaining case, i.e., when the terminal set P (respectively F) is not contained in any other IP (respectively IF), the point is a spatial boundary point.
The chronological relation on V is also extended to the c-completion in the following way (by abuse of notation, we denote the chronological relation on V̄ by the same symbol): given two points (P, F), (P′, F′) ∈ V̄,

(P, F) ≪ (P′, F′)  ⟺  F ∩ P′ ≠ ∅.    (2.6)

It is not possible to obtain, in general, an explicit expression for the causal relation, as we have done for the chronological one. However, it is known that any chronological relation has a naturally associated causal relation (see [30, definition 2.22] for details).
Remark 2.6. Now that we have defined the chronological relation on V̄, we can better understand the terminology introduced in definition 2.5. Clearly, if a boundary point (P, F) is timelike (and so has both components non-empty), then (P, F) lies in the past (respectively future) of any point y ∈ F (respectively y ∈ P). Otherwise, we know that P or F must be empty. Let us assume that F = ∅ (the other case is analogous). Now observe that if (P, ∅) is lightlike, then there exists another point (P′, F′) ∈ V̄ with P ⊊ P′. These two points cannot be timelike related according to (2.6); however, it follows that (P, ∅) ≤ (P′, F′) according to [31, section 6.4], so it is natural to consider both points horismotically related. Finally, if (P, ∅) is a spatial boundary point, then no pair (P′, F′) ∈ V̄ will satisfy (P, ∅) ≤ (P′, F′).
Finally, V̄ is endowed with the chronological topology τ_chr, a sequential topology associated with the following limit operator (known as the chronological limit): for a sequence σ = {(P_n, F_n)}_n ⊂ V̄,

(P, F) ∈ L_chr(σ)  ⟺  P ∈ L̂_chr({P_n}_n) whenever P ≠ ∅, and F ∈ Ľ_chr({F_n}_n) whenever F ≠ ∅.    (2.7)

It is important to recall, as it will be used later, that due to the definition of the S-relation between terminal sets, the definition of the chronological limit simplifies when both terminal sets of the limit are non-empty (see [5, lemma 3.15]). Concretely:

Proposition 2.7. Let {(P_n, F_n)}_n be a sequence of pairs in V̄ and assume that P ∼_S F with P ≠ ∅ ≠ F. Then (P, F) ∈ L_chr({(P_n, F_n)}_n) if, and only if, P ⊂ LI({P_n}_n) and F ⊂ LI({F_n}_n).

The following result summarizes the main properties of the c-completion endowed with the chronological relation and topology (see [5, theorem 3.27] and its proof).
Theorem 2.8. Let (V, g) be a strongly causal Lorentzian manifold and V̄ its causal completion, endowed with the chronological structure induced by (2.6) and the topology induced by the chronological limit (2.7). Then: (i) The inclusion V ↪ V̄ is continuous. Moreover, the restriction of the chronological limit to V is a first order limit operator.
(ii) Let {x_n}_n ⊂ V be a future (respectively past) chronological sequence. Then {x_n}_n converges in V̄ to any pair (P, F) ∈ V̄ with P = I⁻({x_n}_n) (respectively F = I⁺({x_n}_n)).
(iii) The c-completion is complete: for any terminal past set P (respectively terminal future set F) there exists F (respectively P) such that (P, F) ∈ V̄. In particular, any inextensible timelike curve γ in V (respectively any inextensible chronological sequence {x_n}_n in V) has an endpoint in V̄.

(iv) The sets I^±((P, F)) are open for all (P, F) ∈ V̄.

(v) V̄ is a T_1 topological space.
Spacetime covering projections: the causal ladder and main properties
Let us consider an action on V by a group G of isometric maps. We will always assume that the action preserves time-orientation and acts freely and properly discontinuously, where the latter means: (a) for each p ∈ V, there exists a neighborhood U such that gU ∩ U = ∅ for all g ∈ G \ {e}; and (b) for p_1, p_2 ∈ V lying in different G-orbits, there are neighbourhoods U_1 and U_2 such that gU_1 ∩ U_2 = ∅ for all g ∈ G.
The above conditions on the action ensure that the quotient space M = V/G is also a Lorentzian manifold with the induced metric (which, by abuse of notation, will also be denoted by g). The canonical projection onto the quotient space, denoted π : V → M, will be called a spacetime covering projection. The following result lets us understand clearly the relation between the chronological relations on M and V (the same result follows for causal relations, see [19, proposition 1.1]).

Proposition 2.9. Let π : V → M be a spacetime covering projection. Then: • If p, q ∈ V satisfy p ≪ q, then π(p) ≪ π(q).
• If x, y ∈ M satisfy x ≪ y, then for any p, q ∈ V with π(p) = x and π(q) = y, there exists an element g ∈ G such that p ≪ gq.
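Proposition 2.9 can be made concrete in the flat cylinder M = L²/Z, where x is identified with x + 1: upstairs, p ≪ q reduces to the usual timelike condition, and downstairs one scans the deck transformations. A minimal sketch, with an illustrative (truncated) search range:

```python
def chrono_V(p, q):
    """p << q in 2D Minkowski space L^2 = (t, x): future timelike separation."""
    return (q[0] - p[0]) > abs(q[1] - p[1])

def chrono_M(x, y, n_max=50):
    """x << y in the cylinder L^2/Z: some lift (y_t, y_x + n) lies in I^+(x)."""
    return any(chrono_V(x, (y[0], y[1] + n)) for n in range(-n_max, n_max + 1))

p, q = (0.0, 0.0), (0.3, 2.0)
print(chrono_V(p, q))   # False: no timelike curve from p to q in L^2
print(chrono_M(p, q))   # True: the lift (0.3, 0.0) of q is in the future of p
```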
As is clear, the previous result is key to understanding the relation between the causal structures of V and M. From a global viewpoint, it is possible to characterize all the stages of the well-known causal ladder on M (see [30]) in terms of the global causal structure of V. We summarize some of these characterizations in the following result, whose proofs can be found in [19, Props. 1.2, 1.3 and 1.4].
Theorem 2.10. Let π : V → M be a spacetime covering projection with group G. Then: (CL1) M is non-totally vicious if, and only if, there exist p, q ∈ V with π(p) = π(q) such that p ≪ q does not hold.
(CL3) M is causal if, and only if, p ≤ q fails for all p, q ∈ V with π(p) = π(q).
(CL4) M is strongly causal if, and only if, every p ∈ V has a fundamental neighbourhood system {U_n} such that, for each n, no causal curve can have one endpoint in U_n and another endpoint in a component U′_n of π⁻¹(π(U_n)) unless U′_n = U_n and the curve remains wholly within U_n.
(CL5) M is globally hyperbolic if, and only if, (CL5-1) V is globally hyperbolic and (CL5-2) every point p ∈ V has a fundamental neighbourhood system as in (CL4).

Let us remark that, in all the previous cases, a global causal condition on M (i.e., the assumption of a stage of the causal ladder) implies a stronger global condition on V. However, at this point it is not clear to us to what extent the same holds for the rest of the causal ladder (particularly for causal continuity and causal simplicity); a detailed study of those cases would be necessary, but is beyond the scope of this paper.
Partial Boundaries under the action of the group
In this section we will study the behaviour of the future causal completion under the action of an isometry group G, the past case being completely analogous. Let us begin with a point in the future completion of V, that is, an indecomposable set P̃ = I⁻({p_n}_n), where {p_n}_n is a future-directed chronological sequence. As the group G acts by isometries on V, the sequence {x_n}_n with x_n = π(p_n) is also future-directed and chronological (proposition 2.9); hence it defines the indecomposable set P = I⁻({x_n}_n) in M. Therefore, the projection π extends naturally to the corresponding partial completions in the following way:

π̂ : V̂ → M̂,   π̂(I⁻({p_n}_n)) := I⁻({π(p_n)}_n).    (3.1)

We will say that an indecomposable set P̃ ∈ V̂ is a lift of P if π̂(P̃) = P. This map is always surjective, as any future-directed chronological sequence {x_n}_n in M can be lifted to a future-directed chronological sequence {p_n}_n in V (by proposition 2.9). However, the map is not injective in general, as such a lift is not unique: if {p_n}_n is a lift of {x_n}_n, then {g p_n}_n (for any g ∈ G) is also a lift of the same sequence. Moreover, the pre-image of a terminal set P can be easily characterized. Let us denote P̃ = I⁻({p_n}_n), where {p_n}_n is one fixed lift of {x_n}_n. It follows that

π⁻¹(P) = ∪_{g∈G} g P̃,
i.e., the pre-image of P is the union of what we will call the G-orbit of P̃ in V̂, namely the set {g P̃}_{g∈G}. The left inclusion is straightforward, since π̂(g P̃) = P for all g ∈ G. For the other one, take a point x ∈ P and let p ∈ V be a point such that π(p) = x. As x ∈ P, there exists n big enough such that x ≪ x_n. Hence, proposition 2.9 ensures that p ≪ g p_n ∈ g P̃ for some g ∈ G.
Convention 3.1. From this point on, we will use some notational conventions throughout the paper. Points of M will be denoted by x, y, z, while points of V will be denoted by p, q, r. Moreover, unless stated otherwise, we will always assume that π(p) = x, π(q) = y and π(r) = z.
For any chronological sequence {x_n}_n in M (respectively, any indecomposable set P), we will consider a fixed lift in V, denoted by {p_n}_n (respectively P̃). By abuse of notation, we will use the same symbols I^± for futures/pasts of sets when there is no confusion as to whether we are working in M or in V.
Finally, in order to compute both the partial and the c-boundary, we will assume from this point on that M is strongly causal, and hence that V satisfies the condition described in theorem 2.10 (CL4).
The projection π̂ lets us define an equivalence relation on V̂: two indecomposable sets P_1, P_2 ∈ V̂ are Ĝ-related, P_1 ∼Ĝ P_2, if and only if both terminal sets project onto the same P ∈ M̂, i.e., π̂(P_1) = π̂(P_2). Of course, this relation leads to a bijection between the quotient space V̂/Ĝ (≡ V̂/∼Ĝ) and M̂. However, the following two observations are in order. On the one hand, one could naively expect that, for any two terminal sets with P_1 ∼Ĝ P_2, there exists g ∈ G such that P_1 = g P_2. Nonetheless, the following simple example shows that such a property is not true. Consider the two-dimensional Minkowski spacetime L², with the Z-action generated (here, for concreteness, we take the spacelike translation) by g(t, x) = (t, x + 1). The lightlike line γ(t) = (t, t) naturally defines a terminal set P = I⁻(γ) = {(t, x) : t < x} ⊊ L². The union of the Z-orbit of P is the complete spacetime L² (itself a TIP), so both L² and P are Ẑ-related, but no element of the group sends one to the other.
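A quick numerical illustration of this example, under the translation action written above: every sample point of L² eventually falls into some translate gⁿP = {t < x − n} of the half-plane TIP, while no single translate (nor P itself) exhausts L². The grid and truncation are, of course, only illustrative:

```python
import numpy as np

def in_P(t, x):
    """The TIP P = I^-(gamma) for the lightlike line gamma(s) = (s, s): {t < x}."""
    return t < x

def in_orbit_union(t, x, n_max=100):
    """Membership in g^n P = {t < x - n} for some n (search range truncated)."""
    return any(in_P(t, x - n) for n in range(-n_max, n_max + 1))

ts, xs = np.meshgrid(np.linspace(-5, 5, 11), np.linspace(-5, 5, 11))
pts = list(zip(ts.ravel(), xs.ravel()))
print(all(in_orbit_union(t, x) for t, x in pts))   # True: the orbit union is all of L^2
print(all(in_P(t, x) for t, x in pts))             # False: P alone is a proper subset
```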
In any case, there are several examples where this property is naturally satisfied. For instance, the same group action satisfies the property if it is restricted to V = R × (a, b) ⊂ L². In fact, we can construct even more physically appealing examples with Robertson-Walker spacetimes satisfying the integral conditions (6.2) (recall that, in terms of causality, Robertson-Walker models satisfying such integral conditions behave like Lorentzian product spaces with a finite time interval, see [17]). This motivates the following definition:

Definition 3.2. A spacetime covering projection π : V → M is future tame if, given two terminal sets P_1, P_2 with P_1 ∼Ĝ P_2, there exists g ∈ G such that P_1 = g P_2.
On the other hand, the induced map is not well behaved at the topological level. In fact, Harris shows in the last example of [19] that π̂ is not, in general, continuous (see also example 5.1 for details).
The rest of this section is devoted to a thorough comparison between the topologies of M̂ and V̂/Ĝ, where the latter carries the induced quotient topology. Let us first fix some notation. As mentioned in section 2.2, M̂ and V̂ will be endowed with the future chronological topology, which is defined by a limit operator (2.3). In order to distinguish both limits, we will denote by L̂_M the future chronological limit on M̂ and, accordingly, by L̂_V the limit on V̂. The quotient topology on V̂/Ĝ is also a sequential topology (see section 2.1), defined by a limit operator (2.2) which will be denoted here by L̂_Ĝ. Finally, recall that the map π̂ induces a bijective map between V̂/Ĝ and M̂ making the corresponding diagram commutative, i.e., π̂ factors through the quotient projection î.

This induced map is always open. In order to prove this, we require the following technical lemma.

Lemma 3.3. Consider a sequence σ = {P_n}_n ⊂ M̂ and a point P ∈ M̂ such that P ⊂ LI({P_n}_n). For P̃ a fixed lift of P, there exist lifts P̃_n of P_n such that P̃ ⊂ LI({P̃_n}_n).
Proof. Let us begin by taking {P̃′_n}_n some fixed lifts of {P_n}_n. Denote also by {p_n}_n and {x_n}_n future chronological chains defining P̃ and P respectively, satisfying π(p_n) = x_n (as stated in Convention 3.1). As P ⊂ LI({P_n}_n), for any element x_n there exists m_n ∈ N (which we may take strictly increasing in n) such that x_n ∈ P_m for all m ≥ m_n. In particular, and due to proposition 2.9, we can ensure the existence of g ∈ G such that p_n ∈ g P̃′_m. Then, for m ≥ m_n, let us denote by G(n, m) ⊂ G the non-empty subset defined by
G(n, m) := { g ∈ G : p_n ∈ g P̃′_m }.    (3.2)
Let us make a straightforward (but necessary) observation about these sets: as p_n ≪ p_{n+1} and each g P̃′_m is a past set, for m ≥ m_{n+1} (≥ m_n + 1) we have
G(n+1, m) ⊆ G(n, m).    (3.3)
Now, for each m_n ≤ m < m_{n+1}, let us consider a group element g_m ∈ G(n, m) (for m < m_1, just take g_m = e, the identity), and consider the sequence {g_m P̃′_m}_m. Let us show that this sequence is the desired one, that is, that P̃ ⊂ LI({g_m P̃′_m}_m). In fact, for any n ∈ N, consider m ≥ m_n and denote by k ∈ N ∪ {0} the natural number such that m_{n+k+1} > m ≥ m_{n+k}. Then, from the choice of {g_m}_m and (3.3), we have that g_m ∈ G(n+k, m) ⊆ G(n, m). In conclusion, from (3.2) we deduce that p_n ∈ g_m P̃′_m for all m ≥ m_n, and the result follows.
Proposition 3.4. The map ∆̂ : V̂/Ĝ → M̂ is open.
Proof. Let us prove that the map ∆̂⁻¹ is continuous by using proposition 2.3. For this, consider a sequence σ = {P_n}_n ⊂ M̂ and a point P ∈ L̂_M(σ), and let us show that ∆̂⁻¹(P) ∈ L̂_Ĝ(∆̂⁻¹(κ)) for some subsequence κ ⊂ σ. Recall that, from the definitions of L̂_Ĝ and ∆̂⁻¹, this is the same as showing the existence of lifts P̃_n and P̃ of P_n and P respectively such that P̃ ∈ L̂_V({P̃_n}_n), up to a subsequence. First observe that, by using the previous lemma, we can find lifts P̃_n and P̃ of P_n and P respectively such that P̃ ⊂ LI({P̃_n}_n). If P̃ is maximal in LS({P̃_n}_n), then P̃ ∈ L̂_V({P̃_n}_n), and we are done.
Otherwise, take P̃′ a maximal set in LS({P̃_n}_n) containing P̃, and let {p′_n}_n be a future chronological sequence generating P̃′. As P̃′ ⊂ LS({P̃_n}_n), it is possible to find a strictly increasing subsequence {k_n}_n such that p′_n ∈ P̃_{k_n} for all n. Then, it follows readily that P̃′ ∈ L̂_V({P̃_{k_n}}_n). Now observe that the sets π̂(P̃′) and P_{k_n} = π̂(P̃_{k_n}) satisfy the following chain (π̂ preserves inclusions):
P ⊂ π̂(P̃′) ⊂ LI({P_{k_n}}_n).
But as P ∈ L̂_M({P_{k_n}}_n), it follows that π̂(P̃′) = P (recall the maximal character in (2.3)), and so P̃′ is also a lift of P.
In both cases, and up to a subsequence, we have shown the existence of lifts {P̃_n}_n and P̃ with P̃ ∈ L̂_V({P̃_n}_n), and then the continuity of ∆̂⁻¹ follows from proposition 2.3.
Remark 3.5. The previous proof shows, in particular, that for every P ∈ L̂_M({P_n}_n) there exist lifts P̃ and P̃_n of P and P_n respectively with P̃ ∈ L̂_V({P̃_n}_n), up to a subsequence. As we have already pointed out, the map π̂ is not continuous in general. If we look into the details of example 5.1, we see that the non-continuity is related to the following situation: there exists a (not necessarily chronological) sequence {P_n}_n ⊂ M̂ admitting two different lifts such that (i) both lifted sequences are convergent and (ii) the projection of one limit point strictly contains the projection of the other. As we will see, such a situation is essentially the only way in which the continuity of π̂ can fail, so it is convenient to give it a proper name:
Definition 3.6. Let π : V → M be a spacetime covering projection and V̂, M̂ the corresponding future causal completions of V and M. We will say that a sequence σ = {P_n}_n ⊂ M̂ has future divergent lifts if there exist two lifts {P̃_n}_n, {P̃′_n}_n ⊂ V̂ of σ and two points P̃, P̃′ ∈ V̂ such that:
(i) P̃ ∈ L̂_V({P̃_n}_n) and P̃′ ∈ L̂_V({P̃′_n}_n);
(ii) π̂(P̃) ⊊ π̂(P̃′).
If no such sequence exists on M̂, we will simply say that M does not admit sequences with future divergent lifts.
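For orientation, the prototypical situation (worked out in detail in example 5.1 below, whose notation we borrow here) can be summarized as follows: a sequence {I⁻(y_n)}_n on M̂ admits two lifts
\[
\{I^-(q_n)\}_n \;\to\; \tilde P \qquad\text{and}\qquad \{I^-(n\cdot q_n)\}_n \;\to\; \tilde P',
\qquad\text{with}\qquad
\hat\pi(\tilde P)=P\subsetneq P'=\hat\pi(\tilde P'),
\]
so that conditions (i) and (ii) above hold and the sequence has future divergent lifts.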
As a side remark, observe that the concept of divergent lifts is closely related to the topological structure of the G-orbits in V̂. In fact, we can prove the following result:
Proposition 3.7. Let π : V → M be a future tame spacetime covering projection. Then the G-orbits in V̂ are closed (with the future chronological topology) if and only if M admits no constant sequence with future divergent lifts.
Proof. For the right implication, let P ∈ M̂ and P̃ ∈ V̂ with π̂(P̃) = P. Observe that, by the tame condition, every lift of the constant sequence {P}_n has the form {g_n P̃}_n with g_n ∈ G. So, if P̃′ ∈ V̂ is such that P̃′ ∈ L̂_V({g_n P̃}_n), then the closedness of the G-orbit ensures that P̃′ = g_0 P̃ for some g_0 ∈ G. Therefore, {P}_n admits no divergent lifts, as condition (ii) in definition 3.6 cannot be fulfilled.
For the left one, assume that M admits no constant sequence with divergent lifts and let us prove that the G-orbits in V̂ are closed. Let P, P̃, P̃′ and {g_n}_n be as in the previous implication. As M admits no constant sequence with divergent lifts, it necessarily follows that π̂(P̃′) = P. Moreover, as π is future tame, there exists g_0 ∈ G such that P̃′ = g_0 P̃, and so P̃′ belongs to the G-orbit {g P̃}_{g∈G}; hence the G-orbit is closed.
The optimality of the previous result follows from example 5.5, where a case is shown in which M admits no constant sequence with divergent lifts but the G-orbits are not closed.
Our main technical result in this section is the following characterization of the continuity of π̂ (up to the first order UTS condition):
Proposition 3.8. Let {P̃_n}_n ⊂ V̂ be a sequence whose projection {P_n}_n does not admit divergent lifts. Then
π̂(L̂_V({P̃_n}_n)) ⊆ L̂_M({P_n}_n).
In particular, if M does not admit sequences with future divergent lifts, the map π̂ is continuous. Conversely, if the map π̂ is continuous and, additionally, the future chronological limit L̂_M on M̂ is of first order UTS, then there are no sequences with divergent lifts.
Proof. Let σ = {P̃_n}_n be a sequence as in the first statement of the proposition, and consider P̃ ∈ L̂_V(σ). Recalling that π̂ preserves inclusions, we deduce that π̂(P̃) ⊂ LI({P_n}_n). If π̂(P̃) is maximal among the IPs in LS({P_n}_n), then π̂(P̃) ∈ L̂_M({P_n}_n) and we are done. So, let us assume by contradiction that π̂(P̃) is not maximal in LS({P_n}_n), and consider P′ a maximal IP in LS({P_n}_n) containing strictly π̂(P̃). From the definition of the superior limit,
and up to a subsequence, we can assume that P′ ⊂ LI({P_n}_n), and so that P′ ∈ L̂_M({P_n}_n). Now, recalling remark 3.5, we can ensure that P_{n_k} and P′ admit lifts P̃′_{n_k} and P̃′ such that P̃′ ∈ L̂_V({P̃′_{n_k}}_k). Summarizing, the sequence {P_{n_k}}_k admits two lifts, {P̃_{n_k}}_k and {P̃′_{n_k}}_k, converging to P̃ and P̃′ respectively, where π̂(P̃) ⊊ π̂(P̃′) = P′. That is to say, {P_{n_k}}_k admits future divergent lifts, a contradiction. In conclusion, π̂(P̃) ∈ L̂_M({P_n}_n). Moreover, if M does not admit sequences with divergent lifts, π̂ is continuous (recall proposition 2.3).
For the final assertion, assume that L̂_M is of first order UTS and that there exists a sequence σ = {P_n}_n ⊂ M̂ with divergent lifts. Let {P̃_n}_n, {P̃′_n}_n be two sequences in V̂ and P̃, P̃′ two terminal sets as in definition 3.6. Assume by contradiction that π̂ is continuous. In particular, {P_n}_n (the projection by π̂ of both sequences {P̃_n}_n and {P̃′_n}_n) converges to π̂(P̃) and to π̂(P̃′). As L̂_M is of first order UTS, we can assume that (up to a subsequence) L̂_M is of first order for {P_n}_n, and so that π̂(P̃), π̂(P̃′) ∈ L̂_M({P_n}_n). But this is in contradiction with the definition of L̂_M (2.3) (concretely, the maximal character of the limit points) and the fact that π̂(P̃) ⊊ π̂(P̃′) (definition 3.6 (ii)). Therefore, the map π̂ cannot be continuous.
There are several ways to prove the non-existence of sequences with divergent lifts. For instance, we can impose conditions on the causality of the boundary (re-obtaining [19, theorem 3.4]):
Corollary 3.9. If M̂ has only spatial future boundary points (see definition 2.5 and footnote 3), then π̂ is continuous, and so ∆̂ is a homeomorphism between M̂ and V̂/Ĝ.
Proof. Assume by contradiction that π̂ is not continuous and so, from the previous result, that there exists a sequence σ ⊂ M̂ admitting divergent lifts. Let σ̃, σ̃′ be two sequences in V̂ and P̃, P̃′ be two points in V̂ as in definition 3.6. As M̂ only contains spatial future boundary points, no IP can strictly contain a TIP. Hence, from (ii) in definition 3.6, we deduce that π̂(P̃) = I⁻(x) for some x ∈ M, and then P̃ = I⁻(p) for some point p ∈ V. As π : V → M is continuous and the future chronological topology preserves the manifold topology (which follows from theorem 2.8 (i)), we have that I⁻(x) ∈ L̂_M({P_n}_n). Finally, from (i) and (ii) in definition 3.6 we have I⁻(x) = π̂(P̃) ⊊ π̂(P̃′) ⊂ LI({P_n}_n), in contradiction with the maximality in (2.3).
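To illustrate the hypothesis of corollary 3.9 in a situation where it can be checked by hand (a sketch, using the slab and the translation action from our earlier computations), take V = R×(a, b) ⊂ L² with points (x, t), t ∈ (a, b), and M = V/Z the corresponding cylinder. Any future-directed chronological chain {(x_n, t_n)}_n satisfies
\[
t_{n+1}-t_n \;>\; |x_{n+1}-x_n| \quad\Longrightarrow\quad \sum_{n}|x_{n+1}-x_n| \;<\; b-t_1 \;<\;\infty ,
\]
so x_n converges to some x₀ and, for inextensible chains, t_n → b (the same bound holds for chains in M, as they lift to V). Hence every TIP is pointlike, of the form I⁻((x₀, b)), and no TIP is strictly contained in another IP: in the sense used above, both M̂ and V̂ have only spatial future boundary points, and corollary 3.9 applies.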
Another possibility is to impose conditions on the topology of the future causal completion. In this case, we also need to impose the finiteness of the group G: Corollary 3.10. Consider π : V → M a spacetime covering with associated group G. Assume that G is finite and that V̂ is Hausdorff. Then π̂ is continuous, and so ∆̂ is a homeomorphism.
Proof. As we will see in the forthcoming sections, if G is finite then π is future tame (see lemma 4.18). Hence, let us consider two sequences {P̃_n}_n, {P̃′_n}_n ⊂ V̂ and two points P̃, P̃′ ∈ V̂ with P̃ ∈ L̂_V({P̃_n}_n) and P̃′ ∈ L̂_V({P̃′_n}_n), and such that π̂(P̃_n) = π̂(P̃′_n). Our aim is to prove that π̂(P̃) = π̂(P̃′), as then no sequence with divergent lifts can exist.
Recalling the tameness of π, there exists a sequence {g_n}_n ⊂ G such that P̃′_n = g_n P̃_n. Due to the assumption that G is finite, we can assume (up to a subsequence) that g_n ≡ g_0 for all n and some constant g_0 ∈ G. Therefore, P̃ ∈ L̂_V({P̃_n}_n) and P̃′ ∈ L̂_V({g_0 P̃_n}_n). From the first inclusion and the fact that G acts by isometries, we deduce that g_0 P̃ also belongs to L̂_V({g_0 P̃_n}_n); and recalling that V̂ is Hausdorff (and so, for any sequence σ, L̂_V(σ) can contain at most one element, recall (2.1)), it follows that g_0 P̃ = P̃′, as desired.
3.1 Proof of theorem 1.1
Assertion (i) follows from proposition 3.4, and (ii) from proposition 3.8. The last assertion is proved in corollaries 3.9 and 3.10.
The c-completion under the action of the group
Once we have determined the requirements ensuring the good behaviour of the partial boundaries, we are in a position to study the (total) c-completion. As a first step, we will deal with the projection and lift of points of the corresponding c-completions, in order to define an extension π : V → M. Later, we will study the properties of such a map at both the chronological and the topological levels.
Point set level
Let us begin by considering P̃ ∈ V̂ and F̃ ∈ V̌ two non-empty indecomposable sets which are S-related, so (P̃, F̃) ∈ V; and let us study when the projections of the components of such a pair are S-related. Of course, if these sets correspond to the past and future of a point p ∈ V, their projections will correspond to the past and future of the projection x = π(p) ∈ M (and so, they are S-related). Therefore, we can assume that P̃ and F̃ are terminal sets. Let us denote by {p_n}_n and {q_n}_n the corresponding inextensible (future- and past-directed, respectively) chronological sequences defining them. From the definition of the S-relation and the chronological limits, it follows that P̃ ∈ L̂_V({I⁻(q_n)}_n) (see theorem 2.8 (ii)). If the past chronological sequence {y_n}_n (the projection of {q_n}_n) does not admit future divergent lifts, then proposition 3.8 ensures that P := π̂(P̃) ∈ L̂_M({I⁻(y_n)}_n). Then, taking into account that the past chronological sequence {y_n}_n determines F := π̌(F̃), we obtain that P ⊂ ↓F and it is maximal inside such a subset (see (2.3)). Analogously, assuming that the future chronological chain {x_n}_n does not admit past divergent lifts, we can prove that F ⊂ ↑P and it is maximal. So we have:
Proposition 4.1. Let π : V → M be a spacetime covering projection. Assume that M does not admit an inextensible sequence {x_n}_n ⊂ M which is either past-directed chronological with future divergent lifts or future-directed chronological with past divergent lifts. If (P̃, F̃) ∈ V with P̃ ≠ ∅ ≠ F̃, then (P, F) ∈ M, where P = π̂(P̃) and F = π̌(F̃).
The previous condition on future and past sequences is fulfilled in strongly regular cases such as globally hyperbolic models, where inextensible past- (respectively future-) directed chronological sequences have no future (respectively past) chronological limit. But, of course, there will be other (not so regular) cases, such as the one shown in corollary 4.23 (including, for instance, some Robertson-Walker models with an appropriate group action, see section 6) or the one in example 5.4, where the condition is naturally fulfilled.
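For completeness, here is a minimal sketch of the globally hyperbolic case. If an inextensible past-directed chronological sequence {x_n}_n had a non-empty future chronological limit, there would exist p ∈ M with p ≪ x_n for all n ≥ n₀, whence
\[
\{x_n\}_{n\ge n_0} \;\subset\; J^+(p)\,\cap\, J^-(x_{n_0}),
\]
which is compact by global hyperbolicity. The sequence would then accumulate at some point of M, which is incompatible with its inextensibility in a strongly causal spacetime; so no future chronological limit (in particular, no future divergent lift) can exist. The time-dual argument handles future-directed sequences.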
At the point set level, the previous proposition is the only case where points are well projected in general. In fact, examples 5.2 and 5.3 show cases of points of V with no natural projection in M. Moreover, these examples also show that the lifts of points from M are not, in general, well behaved either. Concretely, as we can see in example 5.3, the point (P_2, ∅) has no natural lift in V. The only possible candidate is the point (P̃_2, F̃), but (P_2, ∅) ≠ (π̂(P̃_2), π̌(F̃)); even more, this last pair does not belong to M.
However, if we characterize the conditions under which the lifts of points (P, F) ∈ M with both components non-empty are well defined, then we will be in a position to define the projection between V and M.
Proposition 4.2. A point (P, F) ∈ M with P ≠ ∅ ≠ F admits a lift (P̃′, F̃′) ∈ V (that is, a pair with π̂(P̃′) = P and π̌(F̃′) = F) if and only if there exist lifts P̃ and F̃ of P and F such that P̃ ⊂ ↓F̃.
Proof. The right implication is trivial, so we only need to focus on the left one; that is, consider a point (P, F) ∈ M and suppose that there exist lifts P̃ and F̃ such that P̃ ⊂ ↓F̃ (and so, with F̃ ⊂ ↑P̃). We can then ensure the existence of an IP P̃′ with P̃ ⊂ P̃′ and maximal among the indecomposable sets contained in ↓F̃. Recalling that the projection is well behaved with respect to inclusions, we deduce that P ⊂ π̂(P̃′) ⊂ ↓F. However, P ∼_S F, so the maximality in (2.4) implies that P = π̂(P̃′).
Reasoning in the same way with F̃ ⊂ ↑P̃, we can prove that there exists F̃′ with π̌(F̃′) = F, maximal among the IFs contained in ↑P̃′. In conclusion, P̃′ ∼_S F̃′ and the pair (P̃′, F̃′) belongs to V. Moreover, by construction, π̂(P̃′) = P and π̌(F̃′) = F, as desired.
Remark 4.3. Note that the previous proof does not imply that the initial lifts P̃ and F̃ are S-related, but only that there exist other indecomposable sets P̃′ and F̃′, S-related, such that: (a) P̃ ⊂ P̃′ and F̃ ⊂ F̃′; and (b) π̂(P̃) = π̂(P̃′) and π̌(F̃) = π̌(F̃′).
Now we are ready to extend the projection to the c-completions. However, the definition of the projection is far more technical than in the partial cases. The main problem here is the existence of different candidates for the projection of pairs (P̃, ∅) and (∅, F̃), with no reason to prioritize one candidate over the others. This will be reflected in the existence of different extensions of π (depending on the choice made for the projection) in the general case. Nonetheless, as we will see along this section, all the possible definitions share the same properties. Moreover, all the ambiguity in the choice of an extension disappears under some additional properties, such as tameness or finite chronology (see section 4.4).
Let (P̃, ∅) ∈ ∂V be a point of the c-boundary and let us analyse the possible projections that such a point can have on ∂M (an analogous study can be made for (∅, F̃)). The first natural candidate (taking the corresponding projection of each component) is the pair (P, ∅) with P = π̂(P̃). However, as shown by example 5.5, it is not necessarily true that P ∼_S ∅. In fact, another pair (P̃′, F̃′) with P̃′ ≠ ∅ ≠ F̃′ and with π̂(P̃) = π̂(P̃′) can exist. If the projection of the components is well behaved under the S-relation (for example, as in proposition 4.1), then P = π̂(P̃′) ∼_S π̌(F̃′). Therefore, it seems more natural to define the projection of (P̃, ∅) as the projection (by components) of (P̃′, F̃′) instead of (P̃, ∅).
Nonetheless, the previous process does not give a unique way to define such a projection. This is shown in example 5.6, where we have three points (P̃, ∅), (P̃′, F̃₁), (P̃′, F̃₂) ∈ ∂V with P = π̂(P̃) = π̂(P̃′) but satisfying π̌(F̃₁) ≠ π̌(F̃₂). Both points (P̃′, F̃₁), (P̃′, F̃₂) share the same properties, and there is no argument to prioritize one over the other. So a choice has to be made, and different projections between V and M appear.
In order to formalize the previous process for the definition of the extended projection, let us define ∼_{G_0} as a relation between pairs satisfying that (P̃, ∅) ∼_{G_0} (P̃′, F̃′) (respectively, (∅, F̃) ∼_{G_0} (P̃′, F̃′)) if π̂(P̃) = π̂(P̃′) (respectively, π̌(F̃) = π̌(F̃′)). Then, define a map α : V → V by: if some component of (P̃, F̃) is empty and there is some (P̃′, F̃′) ∼_{G_0} (P̃, F̃), then α((P̃, F̃)) is a choice of one such (P̃′, F̃′); otherwise, α((P̃, F̃)) = (P̃, F̃). The existence of such a map is always ensured, but it is not in general unique, as it depends on the selected element (P̃′, F̃′). Once a map α is chosen, we are in a position to define the extended projection:
Definition 4.4. The extended projection π_α : V → M associated with α is defined by projecting, component by component, the pair selected by α; that is, π_α((P̃, F̃)) := (π̂(P̃₁), π̌(F̃₁)), where (P̃₁, F̃₁) = α((P̃, F̃)).
Remark 4.5. (a) Let us emphasize that the definition of α is nothing but a technical requirement for defining the extension of the projection, and its concrete definition and properties will not affect the results from this point on (see, for instance, the discussion in example 5.6). Therefore, and in order to simplify the notation, we will drop the subindex α in the definition of π_α, always assuming that a map α has been fixed from the beginning. (b) It is also worth mentioning at this point that, in our main results, we have to include additional hypotheses such as tameness or finite chronology (see definition 4.14), which imply that there are no pairs (P̃, ∅) and (P̃′, F̃′) in V with P̃′ ≠ ∅ ≠ F̃′ and (P̃, ∅) ∼_{G_0} (P̃′, F̃′) (with an analogous version for the future, see lemma 4.7 and proposition 4.21). In these cases, α becomes the identity, and so π_α = π. Along the paper we will emphasize these situations by saying that the extended projection π is univocally determined.
The previous construction gives us a reasonable way to define an extension of the spacetime covering projection π. However, as shown in example 5.3, such a map does not restrict properly to M, because π((P̃, F̃)) may not belong to M. Yet we can overcome this problem under the assumptions of propositions 4.1 and 4.2.
Proposition 4.6. Assume that the points (P, F) ∈ M with P ≠ ∅ ≠ F have lifts in V (see proposition 4.2) and that M does not admit an inextensible sequence {x_n}_n ⊂ M which is either past-directed chronological with future divergent lifts or future-directed chronological with past divergent lifts. Then π restricts properly to M and it is surjective.
Proof. Let us begin by showing that π restricts properly to M. Take (P̃, F̃) ∈ V an arbitrary point and consider π((P̃, F̃)). Observe that there are essentially two possibilities for the projection: either it has both components non-empty, or it has one empty component. The former case occurs if the initial point (P̃, F̃) has both components non-empty, or if one component is empty (without loss of generality, F̃ = ∅) but there exists (P̃′, F̃′) with both components non-empty which is G₀-related to (P̃, ∅). In either of these cases, proposition 4.1 ensures that π((P̃, F̃)) ∈ M. In the latter case, no point (P̃′, F̃′) with both components non-empty can be ∼_{G_0}-related to (P̃, F̃). In particular, one of the components of the point must be empty, say F̃ = ∅ (the other case is analogous). In this case, π((P̃, ∅)) = (P, ∅), and so we have to prove that P ∼_S ∅. If not, from the completeness of the c-completion (recall theorem 2.8 (iii)), the terminal set P would be S-related to a terminal set F ≠ ∅, determining a point (P, F) ∈ M. From the hypothesis, there are non-empty lifts P̃′ and F̃′ of P and F such that (P̃′, F̃′) ∈ V. However, we have that π̂(P̃) = π̂(P̃′), and so (P̃, ∅) ∼_{G_0} (P̃′, F̃′), a contradiction. In conclusion, P ∼_S ∅ and the projection restricts properly to M.
For the surjectivity, consider (P, F) ∈ M. If (P, F) has both components non-empty, then by hypothesis it admits a lift in V which projects onto it. Otherwise, assume without loss of generality that F = ∅ and take P̃ any lift of P. From the completeness of the c-completion, there exists F̃ such that (P̃, F̃) ∈ V. Moreover, F̃ has to be empty since, otherwise, recalling that π restricts properly to M, P ∼_S π̌(F̃) (which is not possible, as P ∼_S ∅). Hence, any point (P̃, F̃) ∈ V with π̂(P̃) = P has F̃ = ∅ and, from the definition of π, we deduce that π((P̃, ∅)) = (P, ∅), as desired.
Let us remark that the hypothesis of non-existence of future sequences with past divergent lifts and of past sequences with future divergent lifts in M, even if it appears quite strong, is easily verifiable. In fact, example 5.4 illustrates how to verify the absence of such sequences, while corollary 4.23 gives hypotheses that guarantee such absence.
In any case, whenever π restricts properly to M and is surjective, we can proceed in complete analogy with the partial cases and obtain the commutative diagram π = ∆ ∘ proj : V → V/G → M, where two points of V are G-related if they project by π onto the same point of M, and V/G denotes the corresponding quotient space. From its definition, ∆ defines a bijection between V/G and M. As a final remark in this section, we now point out simple cases where the map π is univocally determined.
Lemma 4.7. Assume that π is future tame. If (P̃, ∅) ∈ V and (P̃′, F̃′) ∈ V are ∼_{G_0}-related, then F̃′ = ∅ (an analogous result holds for pairs (∅, F̃) when π is past tame).
Proof. The result is straightforward once we recall that, for tame projections, if π̂(P̃) = π̂(P̃′) then there exists g ∈ G such that P̃′ = g P̃. Therefore, if F̃′ ≠ ∅ and P̃′ ∼_S F̃′, it follows that P̃ = g⁻¹ P̃′ ∼_S g⁻¹ F̃′, in contradiction with P̃ ∼_S ∅.
Proposition 4.8. Assume that the points (P, F) ∈ M with P ≠ ∅ ≠ F have lifts in V (see proposition 4.2), that M does not admit an inextensible sequence {x_n}_n ⊂ M which is either past-directed chronological with future divergent lifts or future-directed chronological with past divergent lifts, and that M is Hausdorff. Then π restricts properly to M, it is surjective and it is univocally determined.
Proof. By proposition 4.6, π restricts properly to M and is surjective, so we only have to show that π is univocally determined. Having different possible definitions of π is only possible in the following situation (or its analogue for the future): there exist three points (P̃, ∅), (P̃₁, F̃₁), (P̃₂, F̃₂) ∈ V with π̂(P̃) = π̂(P̃₁) = π̂(P̃₂) = P but with F₁ = π̌(F̃₁) ≠ π̌(F̃₂) = F₂. However, from proposition 4.1 we know that both (P, F₁), (P, F₂) belong to M, which is not possible by the Hausdorffness of the latter (observe that any future chronological sequence {x_n}_n defining P will converge to both points; see theorem 2.8 (ii)).
In general, under the hypothesis that π restricts properly to M and is surjective, we can show that both spaces inherit the same causal structure.
Proposition 4.9. Let π : V → M be a spacetime covering projection and assume that π restricts properly to M and is surjective. Denote by ∆ the corresponding map between V/G and M. Then the bijection ∆ is a chronological isomorphism, that is, (P, F) ≪ (P′, F′) in M if and only if ∆⁻¹((P, F)) ≪ ∆⁻¹((P′, F′)) in V/G.
Proof. Let us start by fixing some notation. Consider (P, F), (P′, F′) ∈ M and denote by (P̃, F̃), (P̃′, F̃′) ∈ V two corresponding lifts. Denoting by ı : V → V/G the natural projection, it follows that ∆⁻¹((P, F)) = ı((P̃, F̃)) and ∆⁻¹((P′, F′)) = ı((P̃′, F̃′)). Assume that ı((P̃, F̃)) ≪ ı((P̃′, F̃′)) and, without loss of generality, that (P̃, F̃) ≪ (P̃′, F̃′). Then F̃ ∩ P̃′ ≠ ∅ and, from the first bullet point of proposition 2.9, F ∩ P′ ≠ ∅. Therefore, (P, F) ≪ (P′, F′) and the left implication follows. For the other implication, assume that (P, F) ≪ (P′, F′), i.e., F ∩ P′ ≠ ∅, and let x ∈ F ∩ P′. As x ∈ F and π̌(F̃) = F, proposition 2.9 ensures that there exists a point p ∈ V with π(p) = x such that p ∈ F̃. Reasoning in the same way but fixing this lift p ∈ V of x, we can show that there exists g ∈ G such that p ∈ g P̃′ (recall that π̂(P̃′) = P′). In conclusion, p ∈ F̃ ∩ g P̃′, and so (P̃, F̃) ≪ (g P̃′, g F̃′). Hence ı((P̃, F̃)) ≪ ı((P̃′, F̃′)) (= ı((g P̃′, g F̃′))) and the right implication follows.
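For the reader's convenience, the chronological relations handled in the previous proof are the natural ones (a recollection: the first is the standard relation on the c-completion, the second the representative-wise relation on the quotient):
\[
(P,F)\ll (P',F') \;:\Longleftrightarrow\; F\cap P'\neq\emptyset,
\qquad
\imath((\tilde P,\tilde F))\ll \imath((\tilde P',\tilde F')) \;:\Longleftrightarrow\; (\tilde P,\tilde F)\ll (g\,\tilde P',\,g\,\tilde F')\ \ \text{for some }g\in G .
\]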
At the topological level
Finally, in this section we will compare the topological structures of V/G and M. Let us start by fixing some notation: M and V will be endowed with the corresponding chronological topologies, while V/G will be endowed with the quotient topology induced from V. In accordance with section 3, we will denote by L_M the chronological limit on M, by L_V the chronological limit on V, and by L_G the quotient limit operator on V/G induced from L_V (recall equation (2.2)).
In contrast with the partial cases, where the openness of the map ∆̂ is always ensured, in general the map ∆ is neither continuous nor open. In fact, the following result summarizes the only cases where ∆ is well behaved with respect to the limit operator.
Proposition 4.10. Let π : V → M be a spacetime covering projection restricting properly to M and surjective. Then:
(a) if (P̃, F̃) ∈ L_V(σ̃) for some sequence σ̃ = {(P̃_n, F̃_n)}_n ⊂ V with P̃ ≠ ∅ ≠ F̃, then (P, F) ∈ L_M(σ);
(b) if (P, ∅) ∈ L_M(σ) for some sequence σ ⊂ M (analogously for (∅, F)), then, up to a subsequence, there exist lifts σ̃ of σ and (P̃, ∅) of (P, ∅) such that (P̃, ∅) ∈ L_V(σ̃);
where (P, F) = π((P̃, F̃)) and σ = π(σ̃).
Proof. Assertion (b) is a direct consequence of (2.7), remark 3.5 and the fact that any lift (P̃, F̃) ∈ π⁻¹((P, ∅)) must have F̃ = ∅; so let us focus on assertion (a). For this, recall that, from the definition of the chronological limit, P̃ ⊂ LI({P̃_n}_n) and F̃ ⊂ LI({F̃_n}_n). As the projection is well behaved with respect to inclusions, we have that P ⊂ LI({P_n}_n) and F ⊂ LI({F_n}_n), which is enough to ensure that (P, F) ∈ L_M({(P_n, F_n)}_n) (see proposition 2.7).
The other cases (that is, the analogue of (a) when (P̃, F̃) has one empty component, and the analogue of (b) when P ≠ ∅ ≠ F) are false in general, as proved by examples 5.1 and 5.7. In the first one, there exists a sequence {q_n}_n ⊂ V converging to a point of the form (P̃, ∅), while its projection converges to a point (P′, ∅) with π̂(P̃) = P ⊊ P′. In the second example, the sequence {x_n}_n converges to (P, F) in M; however, {x_n}_n has no convergent lift in the corresponding V.
The first case is directly related to the non-continuity of ∆. In fact, we can easily prove the following:
Proposition 4.11. Let π : V → M be a spacetime covering with π restricting properly to M and surjective. If π((P̃, ∅)) = (P, ∅) and π((∅, F̃)) = (∅, F) for any IP P̃ and IF F̃ (so that, in particular, π is univocally determined, see remark 4.5), and M has no sequence with divergent lifts, then the map π (and so ∆) is continuous.
Let us examine the proof of the previous proposition more closely. Observe that the non-existence of divergent lifts is used precisely when dealing with limit points of the form (P̃, ∅) or (∅, F̃). In this case, π̂(P̃) := P does not belong to L̂_M({P_n}_n) if it is not a maximal IP in LS({P_n}_n); and then, necessarily, there should exist another IP P′ with P ⊊ P′ ∈ L̂_M({P_n}_n) (up to a subsequence). Therefore, if we assume that M has no lightlike boundary points (plus conditions ensuring that (P, ∅) ∈ M), such a situation is not possible and the continuity of π follows. In conclusion, we have:
Proposition 4.12. Let π : V → M be a projection satisfying: (i) π restricts properly to M and it is surjective; (ii) π((P̃, ∅)) = (P, ∅) and π((∅, F̃)) = (∅, F) for any IP P̃ and IF F̃ (hence π is univocally determined, see remark 4.5); and (iii) M has no lightlike boundary points. Then the map π (and so ∆) is continuous.
Remark 4.13. Observe that, in contrast with corollary 3.9, here we do not need to impose that the boundary of M has only spatial boundary points; indeed, we can also include timelike ones. The reason is simple: unlike the partial boundaries, the total c-completion takes into account more information for each boundary point, especially for timelike boundary points, where both the future and past components are non-empty. In fact, such additional information allows us to simplify the definition of the limit operator (see proposition 2.7), as used in the proof of proposition 4.11.
As we have mentioned at the beginning of the section, and in spite of the continuity, the openness of the partial maps ∆̂ and ∆̌ is not enough to ensure the openness of ∆, as we can see in example 5.7. This means that an additional condition has to be imposed to obtain such openness. In this sense, we will consider the condition of finite chronology, whose properties will be studied in the following section.
Group actions with the finite chronology property
First of all, let us introduce the definition of finite chronology. Definition 4.14. Let V be a spacetime and G a group of isometries. We will say that the pair (V, G) is finitely chronological if, given two points p, q ∈ V with p ≪ q, there exists only a finite number of elements g ∈ G such that p ≪ g q.
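A direct check in the simplest model: let V = L² with the Z-action generated by the unit spacelike translation g·(x, t) = (x + 1, t) (the action assumed in our earlier sketches). If p = (x_p, t_p) ≪ g^n q = (x_q + n, t_q), then
\[
t_q - t_p \;>\; |\,x_q + n - x_p\,| \quad\Longleftrightarrow\quad n \in \big(\,x_p - x_q - (t_q - t_p),\; x_p - x_q + (t_q - t_p)\,\big),
\]
a bounded interval containing only finitely many integers; hence (L², Z) — and, by restriction, (R×(a, b), Z) — is finitely chronological.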
The finite chronology property will be enough to ensure the openness of ∆, and it will also simplify the conditions ensuring that the map π restricts properly to M, is univocally determined and is surjective. However, such a condition will not be enough to prove the continuity of ∆ (or of π), as shown by example 5.3. Let us begin with a crucial lemma:
Lemma 4.15. Assume that (V, G) is finitely chronological, and let {p_n}_n ⊂ V be a past-directed chronological chain, p ∈ V and {g_n}_n ⊂ G such that p ≪ g_n p_n for all n. Then, up to a subsequence, {g_n}_n is constant. Moreover, any such constant value belongs to the set
G(p, {p_n}_n) := { g ∈ G : p ≪ g p_m for all m ∈ N },
which is non-empty and finite.
Proof. The proof follows essentially by recalling that, for a fixed k_0 ∈ N and n ≥ k_0, p ≪ g_n p_n ≪ g_n p_{k_0}. In particular, as there exists only a finite number of elements g ∈ G such that p ≪ g p_{k_0}, g_n must belong to a finite family of elements of G for n big enough. Moreover, we can take {h_1, …, h_r} ⊂ G such that, for each h_i, there exists a subsequence {g_{n^i_k}}_k with g_{n^i_k} = h_i. In particular, there exists n_0 such that for each n ≥ n_0 there exists i (≡ i(n)) with g_n = h_i.
For the second assertion, recall that the set G(p, {p_n}_n) is finite by the finitely chronological property (it is contained in {g ∈ G : p ≪ g p_1}). Now we will show that {h_1, …, h_r} ⊂ G(p, {p_n}_n). As stated before, for each h_i there exists a subsequence {g_{n^i_k}}_k ⊂ {g_n}_n with g_{n^i_k} = h_i, and therefore satisfying p ≪ g_{n^i_k} p_{n^i_k} = h_i p_{n^i_k} for all k. Now observe that, for any m ∈ N, we can take k ∈ N such that m < n^i_k, and it follows that p ≪ h_i p_{n^i_k} ≪ h_i p_m (as {p_n}_n is a past-directed chronological chain); concluding that h_i ∈ G(p, {p_n}_n).
If we consider two points p, p′ ∈ V with p ≪ p′, then it follows that G(p′, {p_n}_n) ⊆ G(p, {p_n}_n). This relation allows us to prove that the lifts of terminal sets are well behaved, at least when (V, G) is finitely chronological, with respect to the future and common pasts. Concretely:
Lemma 4.16. Consider an IP P and an IF F on M satisfying P ⊂ ↓F, and take P̃, F̃ the corresponding lifts. If (V, G) is finitely chronological, then the set G(P̃, F̃) defined by
G(P̃, F̃) := { g ∈ G : P̃ ⊂ ↓ g F̃ }
is non-empty and finite.
Proof. As a first step, we characterize the set G(P̃, F̃) in terms of the sequences defining P̃ and F̃. In this sense, let {x_n}_n and {y_n}_n be chronological sequences defining P and F respectively, and {p_n}_n, {q_n}_n the corresponding chronological lifts defining P̃ and F̃. Observe that the following chain of equivalences holds:
g ∈ G(P̃, F̃) ⟺ P̃ ⊂ ↓ g F̃ ⟺ p_n ∈ ↓ g F̃ for all n ∈ N ⟺ p_n ≪ g q_m for all n, m ∈ N ⟺ g ∈ G(p_n, {q_m}_m) for all n ∈ N.
In particular, G(P̃, F̃) = ∩_{n∈N} G(p_n, {q_m}_m).
As a second step, recall that by hypothesis P ⊂ ↓F, and so x_n ≪ y_m for all n, m ∈ N. Hence, proposition 2.9 ensures that there exists a sequence {g_m}_m ⊂ G such that p_n ≪ g_m q_m; and so, from lemma 4.15, G(p_n, {q_m}_m) is non-empty and finite for all n. Then G(P̃, F̃) is the intersection of a countable family of non-empty finite sets, nested as G(p_{n+1}, {q_m}_m) ⊆ G(p_n, {q_m}_m). Therefore, it is a non-empty and finite set.
In particular, as a consequence of the previous lemma and propositions 4.2 and 4.6, we have:
Corollary 4.17. Assume that (V, G) is finitely chronological and that M does not admit an inextensible sequence {x_n}_n ⊂ M which is either past-directed chronological with future divergent lifts or future-directed chronological with past divergent lifts. Then π restricts properly to M and it is surjective.
At this point, a natural question arises at the point set level: is there any relation between π⁻¹((P, F)) and the set G(P̃, F̃)? Intuitively, one can expect that, for a fixed lift P̃, the set G(P̃, F̃) determines all the pairs of the form (P̃, g F̃) ∈ V with projection (P, F). However, as recalled in remark 4.3, it is not clear that, in general, all the lifts preserving the relation with the common future (or past) are S-related. Again, the finite chronology condition will be enough for this, as we will see in lemma 4.20. In order to prove such a lemma, we first need the following technical result:
Lemma 4.18. Let P, P′ ∈ V̂ (respectively F, F′ ∈ V̌) be two points of the future (past) causal completion projecting onto the same set in M̂ (M̌). Suppose one of the following situations:
(H1) (V, G) is finitely chronological and there exists p ∈ V such that P, P′ ⊂ I⁻(p) (respectively F, F′ ⊂ I⁺(p));
(H2) G is finite.
Then there exists h ∈ G such that P′ = h P (respectively F′ = h F). In particular, it follows that if G is finite then the projection π is future (past) tame.
Proof. Let {p_n}_n, {p′_n}_n be future chronological chains defining P and P′ respectively. As both sets project onto the same set, the projections {x_n}_n, {x′_n}_n of such sequences both generate it. In particular, for each n there exists m(n) big enough such that x_n ≪ x′_{m(n)}. We will consider {m(n)}_n a strictly increasing sequence, so {x′_{m(n)}}_n is a subsequence of {x′_n}_n and generates the same downstairs set (and, accordingly, {p′_{m(n)}}_n generates P′). From proposition 2.9, it follows that there exists a sequence {g_n}_n ⊂ G such that g_n p_n ≪ p′_{m(n)} for all n. Now observe that, in either situation (H1) or (H2), and up to a subsequence, {g_n}_n can be considered a constant sequence (say g_n = h ∈ G for all n). In the case that G is finite, the argument is straightforward. In the other case, recall that from (H1) we have g_n p_n ≪ p′_{m(n)} ≪ p, and so the assertion follows from (the time-dual of) lemma 4.15. Therefore,
h p_n ≪ p′_{m(n)} for all n, and hence h P ⊂ P′. By interchanging the roles of P′ and h P (recall that, now, h P ⊂ P′ ⊂ I⁻(p)), we find another h̄ such that h̄ P′ ⊂ h P or, considering h′ = h⁻¹ h̄, such that h′ P′ ⊂ P. Now we can join both inclusions together in the following way:
g P ⊂ h′ P′ ⊂ P    (4.6)
for g = h′ h; and then construct the chain
… ⊂ (g)³ P ⊂ (g)² P ⊂ g P ⊂ P,
where (g)^i denotes the i-th iteration of the action of g. Now observe that, under the hypotheses of the lemma, there exists i_0 such that (g)^{i_0} = e. This assertion is again straightforward under the assumption that G is finite, so let us focus on hypothesis (H1). If, by contradiction, (g)^i ≠ (g)^j for all i ≠ j then, recalling that P ⊂ I⁻(p), we deduce that (g)^i P ⊂ I⁻(p) for all i, which contradicts that (V, G) is finitely chronological (the point p would be chronologically related with (g)^i q for any q ∈ P and all i ∈ N). Summarizing, we deduce that g P = P, and from (4.6) we obtain that P′ = (h′)⁻¹ P, as desired.
Remark 4.19. Observe that we have also proved in the previous lemma that if g P ⊂ P for some g and either G is finite, or (V, G) is finitely chronological and there exists p ∈ V with P ⊂ I⁻(p), then g P = P (an analogous result for past sets holds).
Lemma 4.20. Assume that (V, G) is finitely chronological. If P̃ and F̃ are terminal sets with π̂(P̃) = P ∼_S F = π̌(F̃), then P̃ ∼_S g F̃ for all g ∈ G(P̃, F̃).
Proof. Assume without loss of generality that e ∈ G(P̃, F̃), and so that P̃ ⊂ ↓F̃. By contradiction, let us assume that P̃ is not S-related with F̃. Recalling remark 4.3, we can ensure the existence of a terminal set P̃′ with P̃ ⊊ P̃′ ⊂ ↓F̃ and satisfying π̂(P̃) = π̂(P̃′). As (V, G) is finitely chronological and there exists p ∈ V such that P̃, P̃′ ⊂ I⁻(p) (take any p ∈ F̃), lemma 4.18 ensures that there exists h ∈ G such that P̃′ = h P̃. But then, recalling remark 4.19, we arrive at a contradiction with P̃ ⊊ P̃′ = h P̃.
The technical lemma 4.18 allows us to prove that π is univocally determined, as follows from the following result (recall also remark 4.5): Proposition 4.21. Assume that (V, G) is finitely chronological. If (P̃, ∅) ∼_{G_0} (P̃′, F̃′), then F̃′ = ∅ (an analogous result holds for a pair (∅, F̃)).
Proof. Assume by contradiction that (P̃, ∅) ∼_{G_0} (P̃′, F̃′) with F̃′ ≠ ∅. Recalling that both sets P̃ and P̃′ project onto the same set in M̂, and that P̃′ ⊂ I⁻(p) for any p ∈ F̃′, we can apply lemma 4.18 (H1) to deduce the existence of h ∈ G such that P̃ = h P̃′. Since G acts by isometries, it follows that P̃ = h P̃′ ∼_S h F̃′, which is a contradiction with P̃ ∼_S ∅.
With all the previous machinery set, we are now in a position to prove the openness of ∆ under the assumption of finite chronology:
Proposition 4.22. Let π : V → M be a spacetime covering projection with (V, G) finitely chronological, and assume that π restricts properly to M and it is surjective. Then the (univocally determined) map π induces an open map ∆ from V/G to M.
Proof. Let {(P_n, F_n)}_n ⊂ M be a sequence and (P, F) ∈ M a point such that (P, F) ∈ L_M({(P_n, F_n)}_n). Our aim is to show that, up to a subsequence, (P_n, F_n) and (P, F) admit lifts (P̃_n, F̃_n) and (P̃′, F̃′) with (P̃′, F̃′) ∈ L_V({(P̃_n, F̃_n)}_n), and hence that ∆⁻¹((P, F)) ∈ L_G({∆⁻¹((P_n, F_n))}_n) (recall (2.2)). Observe that the case where F or P is empty follows from proposition 4.10 (b), so we only need to focus on the case where both sets are non-empty.
Assume that P ≠ ∅ ≠ F and let P̃, F̃, P̃_n, F̃_n be some fixed lifts of P, F, P_n, F_n respectively. Consider {x_n}_n and {y_n}_n chronological sequences defining P and F and, as usual, denote by {p_n}_n and {q_n}_n the corresponding lifts defining P̃ and F̃. Let us denote by {m(n)}_n a sequence in N with m(n+1) ≥ m(n) + 1 and satisfying that x_n ∈ P_{m(n)} and y_n ∈ F_{m(n)}. Now, as x_n ∈ P_{m(n)}, proposition 2.9 ensures that p_n ∈ g_n P̃_{m(n)} for some g_n ∈ G. From lemma 4.16, we know that the set G(g_n P̃_{m(n)}, F̃_{m(n)}) is non-empty and, from lemma 4.20, that for any g′_n ∈ G(g_n P̃_{m(n)}, F̃_{m(n)}), g_n P̃_{m(n)} ∼_S g′_n F̃_{m(n)}. Finally, again from proposition 2.9 and y_n ∈ F_{m(n)}, there exists h_n ∈ G such that h_n q_n ∈ g′_n F̃_{m(n)}. Now, let us observe that, from g_n P̃_{m(n)} ⊂ ↓ g′_n F̃_{m(n)}, it follows that p_n ≪ h_n q_n. In particular, we have the chain p_1 ≪ p_n ≪ h_n q_n and then, from lemma 4.15, we can ensure that, up to a subsequence, {h_n}_n is constant, say h_n = h ∈ G for all n. In particular, for any i and all n > i, it follows that p_i ≪ p_n ≪ h q_n.
In particular, P̃ ⊂ ↓ h F̃ and so h ∈ G(P̃, F̃). Hence, lemma 4.20 ensures that both sets P̃ and h F̃ are S-related. Summarizing:
• the pairs (P̃, h F̃) and (g_n P̃_{m(n)}, g′_n F̃_{m(n)}) belong to V;
• P̃ ⊂ LI({g_n P̃_{m(n)}}_n) and h F̃ ⊂ LI({g′_n F̃_{m(n)}}_n), so that (P̃, h F̃) ∈ L_V({(g_n P̃_{m(n)}, g′_n F̃_{m(n)})}_n) (recall proposition 2.7).
In conclusion, and always up to a subsequence, if (P, F) ∈ L_M({(P_n, F_n)}_n) we can always obtain appropriate lifts (P̃′, F̃′) and {(P̃_n, F̃_n)}_n such that (P̃′, F̃′) ∈ L_V({(P̃_n, F̃_n)}_n). The result then follows as a consequence of proposition 2.3 applied to ∆⁻¹.
As a final remark in this section, we show how finite chronology allows us to simplify some of our previous hypotheses for the definition and continuity of ∆. In fact, the condition of M having no sequence with divergent lifts (which is almost equivalent to the continuity of π̂ and π̌, recall proposition 3.8), imposed in proposition 4.6, can be substituted by a topological requirement on V̂ and V̌ respectively:
Corollary 4.23. Assume that (V, G) is finitely chronological and that both V̂ and V̌ are Hausdorff. Then π restricts properly to M, it is surjective and it is univocally determined.
Proof. According to corollary 4.17, we only need to show that any past-directed chronological chain on M has no future divergent lifts (the other case being completely analogous). Let {y_n}_n be a past-directed chronological chain and consider {q_n}_n a past chronological sequence in V with π(q_n) = y_n, defining an IF F̃. Suppose that there exist {h_n}_n, {g_n}_n ⊂ G and P̃, P̃′ ∈ V̂ such that P̃ ∈ L̂_V({I⁻(h_n q_n)}_n) and P̃′ ∈ L̂_V({I⁻(g_n q_n)}_n).
Take p ∈ P̃. From P̃ ∈ L̂_V({I⁻(h_n q_n)}_n) we have that p ≪ h_n q_n for n big enough. As (V, G) is finitely chronological, lemma 4.15 ensures that, up to a subsequence, h_n = h_0 for some fixed h_0 ∈ G. Reasoning in the same way with P̃′ and {g_n}_n, we can ensure that, up to a subsequence, g_n = g_0 for some fixed g_0 ∈ G.
Hence, we have that P̃ ∈ L̂_V({I⁻(h_0 q_n)}_n) and P̃′ ∈ L̂_V({I⁻(g_0 q_n)}_n) and, from the first inclusion, we deduce that
(g_0 h_0⁻¹) P̃ ∈ L̂_V({I⁻(g_0 q_n)}_n).
As V̂ is Hausdorff, then (g_0 h_0⁻¹) P̃ = P̃′, and both sets project onto the same set in M̂. In conclusion, {y_n}_n cannot admit future divergent lifts. Finally, π is univocally determined, as follows from proposition 4.21.
At the topological level, we also have to impose some conditions on M, obtaining:
Corollary 4.24. Assume that (V, G) is finitely chronological, that both V̂ and V̌ are Hausdorff and that M has no lightlike boundary points. Then π restricts properly to M, it is surjective and univocally determined, and the induced map ∆ : V/G → M is a homeomorphism.
Proof. From corollary 4.23, π restricts properly to M, it is surjective and univocally determined; and, from proposition 4.22, ∆ is open. Hence, it only remains to show that ∆ is continuous. But this follows from proposition 4.12, recalling that proposition 4.21 ensures that π((P̃, ∅)) = (P, ∅) and π((∅, F̃)) = (∅, F).
Ideally, one would like to impose conditions only on V in order to ensure that V/G and M share the same structures. For example, and in the spirit of corollary 4.24, we would like to impose on V the non-existence of lightlike boundary points in order to obtain the non-existence of lightlike boundary points on M, and therefore the continuity of ∆. However, the lack of lightlike boundary points in V is not enough to ensure the same property on M (see example 5.8). Nevertheless, the situation is very controlled, and it is related again to the existence of very particular divergent lifts. In fact, we can prove the following (compare with proposition 3.7): Corollary 4.25. Let π : V → M be a spacetime covering projection. Assume that π restricts properly to M, that it is surjective and that it satisfies π((P̃, ∅)) = (P, ∅) and π((∅, F̃)) = (∅, F) for any IP P̃ and IF F̃ (hence it is univocally determined, see remark 4.5). If V has no lightlike boundary points and the G-orbits of both V̂ and V̌ are closed (with the corresponding topologies), then M has no lightlike boundary points.
Proof. Assume by contradiction that M has lightlike boundary points, that is, that there exist (P, ∅) ∈ M and P′ ∈ M̂ such that P ⊊ P′ (the case with past sets is analogous). Let {x_n}_n and {x′_n}_n be chronological chains generating P and P′ respectively, and consider P̃, P̃′, {p_n}_n and {p′_n}_n the corresponding lifts on V̂. From the hypothesis, it follows that (P̃, ∅) ∈ V.
As P ⊊ P′, and up to re-indexing the chains, we deduce that x_n ≪ x′_n for n big enough; so proposition 2.9 ensures that there exists g_n such that p_n ≪ g_n p′_n ∈ g_n P̃′. It then follows that P̃ ⊂ LI({g_n P̃′}_n). Moreover, it also follows that P̃ ∈ L̂_V({g_n P̃′}_n) since, otherwise, there would exist P̃″ with P̃ ⊊ P̃″, which is not possible as V has no lightlike boundary points.
Finally, from the hypothesis that the G-orbits are closed on V̂ with the future chronological topology, it follows that P̃ ∈ {g P̃′}_{g∈G}, i.e., there exists g_0 ∈ G such that P̃ = g_0 P̃′. In conclusion, taking projections, we obtain that P = P′, a contradiction.
As a consequence of corollaries 4.23, 4.24 and 4.25, we obtain corollary 4.26, and we can now complete the proof of theorem 1.2. For the last assertions there, π being univocally determined is a consequence of the finite chronology and (PS2); (a) follows from (PS1), (CH), (TP1) (i) and (TP2), while for (b) we have to consider (PS3) and (TP1) (ii) instead of (PS1) and (TP1) (i). The last assertion, (c), is proved in corollary 4.26, recalling that the closedness of the G-orbits under a tame π is proved in proposition 3.7.
On the optimality of the results: some examples
Throughout this section we include some examples showing that our main results are optimal. It is worth pointing out that in all the examples #L̂_M(σ) will be bounded, and so, according to lemma 2.4, L̂_M will be of first order UTS. This is especially relevant in view of proposition 3.8, as it means that in all our examples the non-existence of divergent lifts characterizes the continuity of π̂ and π̌.
Let us start with the example, due to Harris, where π̂ is not continuous. Here we will include only the main properties of the example, referring the reader to [19] for details. Example 5.1. (Behaviour of the universal cover and non-continuity of π̂.) In this example we will see: first, the main properties of the universal cover of a spacetime M from which a countable family of compact segments has been removed (these properties will be used frequently in the forthcoming examples); second, a case where π̂ (and so ∆̂) is non-continuous; and finally, that V̂/Ĝ is not a T₁ topological space.
Let us consider a spacetime M as in figure 1, and let V denote its universal cover. As described in the last example of [19], V contains a countable family of copies of M, which we will denote by {n} × M with n ∈ Z, glued coherently along the segments H_n. For a given element x ∈ M, let us denote by p its lift in V living in the fibre {0} × M. We will also denote by n · p the lift of x in the fibre {n} × M (i.e., p ≡ 0 · p).
In order to understand how the fibres are glued along H_n, let us show how the lifts of curves behave. Consider γ a curve on M as shown in figure 1 (A), which is a timelike curve joining two points x and y. Let p and q be the corresponding lifts in the fibre {0} × M and consider γ̃ a lift of γ on V with starting point m · p. The fibres are glued in such a way that, as γ intersects the segment H_n, the lifted curve γ̃ moves from the fibre {m} × M to {m + n} × M, its final point being (m + n) · q.
Once we have pointed out this behaviour, let us observe the particularities of the example regarding the continuity of π̂. Let us look now at figure 1 (B), where we have two TIPs P ⊊ P′ defined by the sequences {x_n}_n and {y_n}_n (P is filled in dark grey, while P′ has a lighter grey). Consider p_n and q_n lifts of x_n and y_n respectively living in the fibre {0} × M. It is not difficult to observe, due to the behaviour described before, that m · p_n and m · q_n are not chronologically related for any m ∈ N. In fact, it follows that m · p_n ≪ (m + n) · q_n for all n ∈ N, as we can consider timelike curves on M joining x_n with y_n and intersecting H_n. From this, we can prove that: (a) the sequence σ = {I⁻(q_n)}_n has P̃ (the lift of P on the fibre {0} × M) in its limit; (b) the sequence {I⁻(n · q_n)}_n has P̃′ in its limit (the inclusion in the inferior limit is straightforward, while the proof of the maximal character is detailed in [19]); and (c) π̂(P̃) = P ⊊ P′ = π̂(P̃′). In conclusion, and recalling proposition 3.8, π̂ is not continuous. As a final observation, let us consider the future causal completion of the universal cover V (which is a T₁ topological space) and its quotient space V̂/Ĝ, where G = π₁(M) is the fundamental group of M acting on V. Consider P̃′ ∈ V̂ and its corresponding class [P̃′] in the quotient space V̂/Ĝ. It is now straightforward to see that P̃′ ∈ L̂_V({P̃′}_n) and P̃ ∈ L̂_V({n · P̃′}_n). Hence, recalling the definition of the quotient limit operator (see (2.2)), it follows that both [P̃] and [P̃′] belong to L̂_Ĝ({[P̃′]}_n), making V̂/Ĝ a non-T₁ topological space.
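The fibre bookkeeping underlying the whole example can be summarized as follows (a sketch; we assume, as the single-crossing rule above suggests, that successive crossings compose additively, with a sign ε_i = ±1 according to the direction of the crossing): if a lifted curve starts at the fibre {m} × M and its projection crosses, in order, the segments H_{n_1}, …, H_{n_k}, then its endpoint lies in the fibre
\[
\Big\{\, m \;+\; \sum_{i=1}^{k} \varepsilon_i\, n_i \,\Big\}\times M .
\]
In particular, a timelike curve from x_n to y_n crossing only H_n lifts from {m} × M to {m + n} × M, which is exactly the relation m·p_n ≪ (m + n)·q_n used above.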
Example 5.2. (Optimality of proposition 4.6.) The following example shows that a point (P̃, ∅) ∈ V (respectively, (∅, F̃)) may not be well projected (recall proposition 4.1), even when π̂ and π̌ are continuous. With this aim, a point (P, F) ∈ M with no natural lift on V will be exhibited.
Let us consider M a spacetime as described in figure 2, and V its universal cover. As pointed out in [5, figure 11], the sets P and F are S-related. Now, let us fix P̃ and F̃ lifts of the corresponding terminal sets on {0} × M ⊂ V, as we did in example 5.1; and denote by {p_n}_n ⊂ {0} × M a future chronological sequence which is a lift of the sequence {x_n}_n shown in figure 2. Recall that the lifts on V of timelike curves of M moving between S_n and S_{n+1} behave essentially as described in example 5.1. Hence, for a given point y ∈ F (and so, with x_n ≪ y for all n) with fixed lift q ∈ {0} × M ⊂ V, there is no constant element g ∈ G such that (p_1 ≪) p_n ≪ g q for all n big enough (since p_n ≪ m · q holds only for m ≥ n). This shows in particular that (V, G) is not finitely chronological (hence theorem 1.2 (PS3) is not applicable) and that the set ↑P̃ is empty, so P̃ ∼_S ∅.
However, it is not difficult to see that both π̂ and π̌ are continuous. Recall that the non-continuity of such maps can only follow from the existence of a sequence {y_n}_n ⊂ M admitting divergent lifts.
The only cases of concern are those where {y_n}_n converges in R², either to the point (0, 1) or to (0, 0) (in the other cases, the convergence is essentially the usual one in R²). Assume for instance that the sequence {y_n}_n converges to the point (0, 1) (the other case is completely analogous). It is straightforward to check that any lift of {y_n}_n in V which is convergent with the past chronological topology is, up to a subsequence, of the form {m · q_n}_n, with m ∈ Z constant and q_n ∈ {0} × M ⊂ V a fixed lift of {y_n}_n. In particular, their limits are of the form m · F̃. This is due to the fact that the IFs involved have no points between the segments S_n, and so we do not have to move between different fibres of V. Therefore, any convergent lift of {y_n}_n with the past topology converges to a terminal set in π̌⁻¹(F), and so {y_n}_n does not have past divergent lifts (condition (ii) in definition 3.6 cannot be fulfilled).
Figure 2. M is constructed by removing from L² the black square and the vertical segments S_n. As pointed out in [5, figure 11], the terminal sets P and F are S-related, and so they form a pair (P, F) ∈ M. However, if P̃ is a lift of P to the universal cover V, it follows that ↑P̃ = ∅.
For the future topology, however, the situation is a little more technical, as the involved IPs do contain points between the segments S_n. With some effort, it can be proved that if LI({I⁻(g_n q_n)}_n) ≠ ∅ for some {g_n}_n ⊂ Z, then LI({I⁻(g_n q_n)}_n) = m · P̃ for some m ∈ Z. In particular, any lift of {y_n}_n convergent with the future topology will converge to some TIP in π̂⁻¹(P), and so, reasoning as in the previous case, {y_n}_n does not admit future divergent lifts.
In conclusion, M does not admit (future or past) divergent lifts, and so both π̂ and π̌ are continuous.
Example 5.3. (Optimality of propositions 4.1 and 4.2.) Let us consider a space V ⊂ R² as shown in figure 3. On such a space, consider G ≡ Z an isometry group whose action is generated by a horizontal translation identifying the two lateral sides, so that the quotient M = V/Z can be seen as a cylinder with some cuts on it (see figure 3 (B)). Let us summarize the properties of the spacetime covering projection π : V → M. On the one hand, observing figure 3 (B), it follows easily that M contains the pairs (P₁, F) and (P₂, ∅). Indeed, both sets P₁, P₂ are contained in ↓F but, thanks to the identification of both lateral sides, it follows that P₂ ⊊ P₁, so only P₁ is maximal in the common past of F. However, on V we have both pairs (P̃₁, F̃) and (P̃₂, F̃), so the thesis of proposition 4.1 fails in this case.
On the other hand, the non-continuity of π̂ can be deduced from the fact that P̃₂ ∈ L̂_V({p_n}_n) while P₂ = π̂(P̃₂) ∉ L̂_M({x_n}_n), as P₁ breaks the maximality of P₂ in the superior limit. Finally, it is quite straightforward to see that (V, G) is finitely chronological: if p ≪ q in V then, besides g = 0, at most one element g ∈ Z can satisfy p ≪ g q (specifically, g = 1 or g = −1). However, we cannot apply theorem 1.2 to ensure that π restricts properly to M, as V̂ is not Hausdorff (recall that P̃₁, P̃₂ ∈ L̂_V({p_n}_n)).
Example 5.4. (A non-tame covering projection where proposition 4.6 applies.) Consider now a static spacetime (V, g) with spatial fibre (R × (−1, 1), dx² + dy²). As (V, g) is static, we can calculate its c-boundary directly; its structure is depicted in figure 4. Concretely, recall that the structure of the c-boundary for static models depends essentially on the so-called Busemann completion of its spatial fibre (see for instance [17, theorem 3.10], as well as section 6, for details). In this case, it follows that the associated Busemann completion of (R × (−1, 1), dx² + dy²) is formed by the Cauchy boundary (R × {−1}) ∪ (R × {1}) and two additional points, each determined by inextensible curves whose x-component diverges (one point when the x-component diverges to +∞ and the other when it diverges to −∞). The Cauchy boundary points generate in the c-boundary two copies of L² (denoted in figure 4 by L² × {−1} and L² × {1}) formed by timelike points, and so with both components non-empty; while the two points of the proper Busemann boundary generate four lightlike lines, two for the future and two for the past boundary, denoted by ξ⁺_R, ξ⁺_L, ξ⁻_R and ξ⁻_L respectively. From the topological viewpoint, and due to the simplicity of the example, the chronological topology works as expected in this c-completion: convergence with the chronological topology coincides with the usual convergence in figure 4 (after the appropriate identifications).
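Schematically, the description above amounts to the decomposition
\[
\partial V \;=\; \big(\mathbb{L}^2\times\{-1\}\big)\ \sqcup\ \big(\mathbb{L}^2\times\{1\}\big)\ \sqcup\ \xi^+_R\ \sqcup\ \xi^+_L\ \sqcup\ \xi^-_R\ \sqcup\ \xi^-_L ,
\]
where the first two pieces consist of timelike boundary points and the four lines of lightlike ones (this display merely gathers the pieces just listed; the precise identifications among their endpoints are those of figure 4).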
On this space we define the group action
Z × V → V, (z, (x, y, t)) ↦ (x + z, y, t),
so M = V/Z is, in fact, M = S¹ × (−1, 1) × R with the induced metric. Again, the c-boundary is computable by the previous methods: here the proper Busemann boundary is empty, while the Cauchy boundary is formed by two copies of R × S¹.
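That this Z-action is by isometries and is properly discontinuous is a routine verification (a sketch, writing the static metric as g = dx² + dy² − dt², an assumption consistent with the spatial fibre (R × (−1, 1), dx² + dy²) above and insensitive to the precise time interval):
\[
g_z^*\big(dx^2+dy^2-dt^2\big) \;=\; d(x+z)^2+dy^2-dt^2 \;=\; dx^2+dy^2-dt^2 ,
\]
and the orbit of any point is the discrete set {(x + z, y, t)}_{z∈Z}; hence M = V/Z is a genuine spacetime quotient.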
Let us describe briefly how π works: it takes all the lightlike points of ∂̂V and ∂̌V to i⁺ and i⁻ respectively; and it mods out by a properly discontinuous Z-action on each L² × {±1}. In particular, observe that any boundary point in ξ⁺_R \ {i⁺} ⊂ ∂V and i⁺ ∈ ∂V are both projected to i⁺ ∈ ∂M, but no element of Z sends an element of the former to the latter (this can be seen from the fact that no translation of L² sends a terminal set P ∈ ξ⁺_R \ {i⁺} to i⁺); proving that π̂ is not (future) tame. In order to show that proposition 4.6 is applicable, we have to show that any inextensible past-directed chronological sequence on M has no future divergent lifts (the case of future-directed sequences and past divergent lifts being completely analogous).
Let σ = {(x_n, y_n, t_n)}_n be a past-directed inextensible chronological sequence on M, and let σ′ = {(x_n + z_n, y_n, t_n)}_n and σ″ = {(x_n + z_n + z′_n, y_n, t_n)}_n be two lifts on V. The inextensibility of σ leaves two possibilities: either {t_n}_n → −∞, or {t_n}_n → Ω and {y_n}_n converges to some point of {−1, 1}. Observe that in the first case there is nothing to do, as L̂_chr(σ′) = L̂_chr(σ″) = ∅. Hence we can assume that we are in the second case and, without loss of generality, that {y_n}_n → 1. Recalling the (good) behaviour of the topology on the future completion, if L̂(σ′) ≠ ∅ ≠ L̂(σ″), then {(x_n + z_n, y_n, t_n)}_n → (x₀, 1, Ω) and {(x_n + z_n + z′_n, y_n, t_n)}_n → (x′₀, 1, Ω) for some x₀, x′₀ ∈ R. In particular, and given that {x_n + z_n}_n → x₀, {x_n + z_n + z′_n}_n → x′₀ and z′_n ∈ Z, we conclude that for n big enough z′_n = z′₀ for some fixed z′₀. It then follows that x′₀ = x₀ + z′₀ and, therefore, that if σ′ and σ″ both have limit points, such limit points are unique and project onto the same point of M̂. In conclusion, the sequence σ has no future divergent lifts.
Example 5.5. (Optimality of several results.) In this example we will show: (i) a case of a non-tame spacetime covering projection where (unlike in example 5.4) two terminal sets P̃, P̃′ ∈ V̂ project onto the same set of M̂ with no element of the group G sending one to the other, but with one of them S-related to a non-empty set; (ii) that even if M does not admit constant sequences with future divergent lifts, the G-orbits can be non-closed; and (iii) a spacetime covering projection with a sequence {q_n}_n ⊂ V and a TIP P̃ ∈ V̂ with P̃ ∼_S ∅ and such that P̃ ∈ L̂_V({I⁻(q_n)}_n), P ∈ L̂_M({I⁻(y_n)}_n) but P ∼_S F with F ≠ ∅ (showing the optimality of proposition 4.11).
Let us consider the Lorentzian manifold M depicted in figure 5, and take V its universal cover. The lifts of curves of M to V behave essentially in the same manner as described in example 5.1; that is, V contains a countable family of copies of M (denoted again by {n} × M) glued together accordingly, and whenever a curve γ ⊂ M passes between two holes of M, the initial point and the endpoint of the lifted curve γ̃ live in two different fibres of such a family. It follows that the point (0, 0) ∈ R² has associated in M a singular point (P, F) ∈ ∂M. However, the lift of the terminal set P to a concrete fibre, say {0} × M, determines two different terminal sets P̃, P̃′.
Figure 5. On the first picture, associated to the point (0, 0), we have the point (P, F) ∈ ∂M. However, the set P lifts to a fixed fibre {0} × M ⊂ V as two different terminal past sets P̃ and P̃′, creating two different points (P̃′, F̃), (P̃, ∅) ∈ ∂V. In particular, it follows that the sequence {y_n}_n depicted on the right is not convergent, while its lifts {q_n}_n converge to (P̃, ∅).
The reason is simple: any timelike curve joining a point of
the sequence {x_n}_n with a point of the sequence {x′_n}_n must pass between two holes of M, and so its lift moves along different fibres. Moreover, from the construction, we have that for each p_n there exists g_n ensuring that p_n ∈ g_n P̃′. However, the sequence {g_n}_n cannot be taken constant (not even up to a subsequence), so there is no g ∈ G such that P̃ ⊂ g P̃′, and the projection cannot be tame. Moreover, it follows from the construction that P̃ ⊂ LI({g_n P̃′}_n) and that it is maximal in the superior limit, i.e., P̃ ∈ L̂_V({g_n P̃′}_n). Therefore, the G-orbit {g P̃′}_{g∈G} is not closed, as P̃ is an element not belonging to the G-orbit of P̃′ but lying in its closure.
Let us now show the existence of a sequence {q_n}_n as described in the first paragraph of the example. Consider a sequence {y_n}_n as in figure 5 and {q_n}_n its lift in the fibre {0} × M ⊂ V. As we can see in the figure, P ∈ L̂_M({I⁻(y_n)}_n) and P̃ ∈ L̂_V({I⁻(q_n)}_n). Moreover, as mentioned before, P ∼_S F with F ≠ ∅. So it only remains to show that P̃ ∼_S ∅. But this follows from the fact that ↑P̃ = ∅ (recall that, whenever a timelike curve moves through the space between two holes, it passes to another fibre of V). Summarizing, we have shown in particular that the map π is not continuous: the sequence {q_n}_n converges to the point (P̃, ∅) ∈ V, while its projection {y_n}_n does not converge to (P, F) ∈ M (note that LI({I⁺(y_n)}_n) = ∅).
Example 5.6. (Several candidates for the projection of a pair.) The following example is a three-dimensional version of the previous one, and aims to show three points (P̃, ∅), (P̃′, F̃₁), (P̃′, F̃₂) ∈ V with π̂(P̃) = π̂(P̃′) but with π̌(F̃₁) ≠ π̌(F̃₂).
Let us consider the open set M of the three-dimensional Minkowski spacetime (with the induced metric) depicted in figure 6.

Figure 6. The space M is an open set of the three-dimensional Minkowski spacetime with the sets C_1, C_2 and the sequence of lines {l_n} removed. The point (0, 0, 0) is represented on the c-boundary of M as two points (P, F_1) and (P, F_2). As happens in figure 5, the lift of P to a fixed fibre of the universal cover of M gives us two terminal sets P' and P''. In particular, and recalling again that ↑P'' = ∅, we deduce that the pairs (P', F'_1), (P', F'_2) and (P'', ∅) belong to ∂V, the c-boundary of the universal cover of M.
The behaviour of the lifts/projections in this case works essentially as in the previous example. In fact, if we project the figure onto the (y, z)-plane we obtain almost the same setting as in figure 5, with the first quadrant removed (and so, sharing the same properties). Hence, the set P is naturally lifted as two different terminal sets P' and P'' living in the same fibre of V. The main difference with the previous example is that, even if P'' is still S-related with the empty set, the set P' is S-related with two sets, F'_1 and F'_2, corresponding to the lifts of F_1 and F_2. Therefore, the points (P', F'_1) and (P', F'_2) are both ∼_{G_0}-related with the pair (P'', ∅), while π̌(F'_1) = F_1 ≠ F_2 = π̌(F'_2).
This suggests two different possible definitions for the function α regarding the image of the point (P'', ∅). In fact, we can consider α_1((P'', ∅)) = (P', F'_1) and α_2((P'', ∅)) = (P', F'_2), making (P'', ∅) project to (P, F_1) in the first case, or to (P, F_2) in the second one. In both cases, the corresponding extended projections share the same properties: both restrict properly to M, are surjective and non-continuous (as in the previous example, it is possible to construct a sequence {q_n}_n converging to the pair (P'', ∅) whose projection {y_n}_n converges to neither (P, F_1) nor (P, F_2)). So, as we pointed out in remark 4.5, there are no actual differences between π_{α_1} and π_{α_2} regarding the satisfied properties.

Figure 7. Let M be L² with the segments S_n removed. In this example, the sets P and F are S-related and the sequence {x_n}_n has (P, F) in its limit. However, for n big enough, the timelike curves from x_n to points in F should pass between S_n and S_{n+1}. In particular, if {p_n}_n ⊂ {0} × M is a lift of {x_n}_n, and F' is the corresponding lift of F in {0} × M, then g F' ⊄ LI({I^+(p_n)}_n) for any g ∈ G.

Example 5.7. Let M be the spacetime of figure 7 and V the universal cover of M. On M, both sets P and F are S-related and the sequence {x_n}_n converges to the point (P, F). On V, and thanks to the fact that we can take curves joining points of P with points of F without moving between any S_n and S_{n+1}, we can obtain lifts P' and F' with P' ∼_S F' (we can assume that both sets live in the fibre {0} × M).
However, no lift of the sequence {x_n}_n converges to (P', F'). In fact, let us take {p_n}_n a fixed lift of {x_n}_n contained in {0} × M. It is not difficult to observe that this lift is the only one satisfying that P' ∈ L̂_V({I^−(p_n)}_n). Even so, it is not true that F' ∈ Ľ_V({I^+(p_n)}_n), as any timelike curve joining a point x_n with points on F should pass between two segments S_n, hence its lift moves between two different fibres. Therefore, the sequence {x_n}_n has no natural convergent lift and the projection map is not open.
Finally, let us observe that π̂ and π̌ are continuous. This follows by reasoning as in example 5.2, recalling that the only cases where the continuity could fail are those of sequences {y_n}_n converging to (0, 0).

Figure 8. Even if the segments are spacelike, the terminal set P'_0 (the past of the boundary point (1, 1)) contains P_0 (the past of (0, 0)). The c-boundary of M is represented on the right of the figure. Observe that, in the c-boundary, each segment S_n is represented by a thin ellipse. This is due to the fact that any non-extremal point of the segment is reachable by a future and a past inextensible timelike curve, but the corresponding terminal sets are not S-related. So, such points are represented in the c-boundary as two points of the form (P, ∅) and (∅, F). Only at the extremal points are the corresponding TIP and TIF S-related, and so they determine only one point in the c-boundary.

Example 5.8. Let M ⊂ L² be a manifold as in figure 8 endowed with the induced Minkowski metric, where each S_n is a spacelike segment obtained from a small variation of the lightlike segment joining (1/n, −1/n) and (1, 1 − 2/n). Due to the fact that (1/n, −1/n) ≪ (1, 1 − 2/(n + 1)), such a variation can be taken in such a way that the past of the upper-right extreme of S_{n+1} contains the down-left extreme of S_n. Let V be the universal cover of M.
The c-boundary (and so, the c-completion) of M is represented on the right of figure 8, and it is formed almost entirely by spatial and timelike boundary points. However, the points (0, 0) and (1, 1) are represented on the boundary by pairs of the form (P_0, ∅) and (P'_0, ∅) with P_0 ⊊ P'_0, hence M has lightlike boundary points. Topologically, the c-completion M̄ is Hausdorff, as it carries the topology induced from ℝ². Now, if we look into the lifts of boundary points from M to V, we observe that timelike and spatial boundary points are lifted to timelike and spatial boundary points respectively. However, there exist no lifts (P̃_0, ∅) and (P̃'_0, ∅) of (P_0, ∅) and (P'_0, ∅) respectively such that P̃_0 ⊊ P̃'_0, as any timelike curve moving from a point close to (1, 1) to a point close to (0, 0) should move between two segments S_m and S_{m+1}, and so it will move between different fibres of V (recall again the behaviour of the universal covering described in example 5.1). Therefore, V will have no lightlike boundary points. Finally, and due to the fact that the topology around a point of V̄ coincides again with the topology induced from ℝ², we have that V̄ is also Hausdorff.
6 A physical application: quotients on Robertson-Walker spacetimes

As a final section of this paper, we will show how our results are applicable to concrete and physically relevant models of spacetimes. Our main aim will be to apply corollaries 4.24 and 4.26 where, in addition to the finite chronology, we need Hausdorffness on both V̂ and V̌ and the non-existence of lightlike boundary points on M (recall also corollary 4.25).
We will focus on the case of Robertson-Walker models, even if our results are extensible to other more general ones (see remark 6.4). The c-completion of such models is well known [17, section 4.2], but we include here the details for completeness. Observe that we are not going to follow the original approach proposed in [17], but the approach introduced in [20, section 3].
Let (Σ, g_Σ) be a Riemannian manifold. Denote by t : ℝ × Σ → ℝ and π_Σ : ℝ × Σ → Σ the corresponding projections; and consider a smooth positive function α : ℝ → (0, ∞). A Robertson-Walker model with base Σ and warping function α is given then by the pair (V, g), where V = ℝ × Σ, and

g = −dt² + (α ∘ t) π*_Σ(g_Σ).    (6.1)

For simplicity, α ∘ t will be denoted just by α(t) and, whenever there is no confusion, we will omit the pullback π*_Σ. The chronological relation on these models is characterized as (see [20, proposition 3.1]):

(t_0, x_0) ≪ (t_1, x_1)  ⟺  ∫_{t_0}^{t_1} 1/√(α(s)) ds > d(x_0, x_1),

where d denotes the distance on Σ defined by g_Σ. Thanks to the previous characterization, it follows that any future terminal set P is determined by the so-called Busemann functions. Such functions are defined in the following way: given a curve c : [a, Ω) → Σ satisfying that g_Σ(ċ, ċ) < 1, we define the associated Busemann function as

b_c(·) = lim_{t→Ω} ( ∫_0^t 1/√(α(s)) ds − d(·, c(t)) ).

Then, for any indecomposable past set P, it follows that P = P(b_c) for some curve c with g_Σ(ċ, ċ) < 1, where

P(b_c) = { (t, x) ∈ V : ∫_0^t 1/√(α(s)) ds < b_c(x) }.

Under the integral condition ∫_0^{+∞} 1/√(α(s)) ds < ∞, it follows that c(t) → x* ∈ Σ_C, where Σ_C denotes the Cauchy completion associated to (Σ, g_Σ). Moreover, b_c(·) = d_{(Ω,x*)}(·) := ∫_0^Ω 1/√(α(s)) ds − d(·, x*) (see [20, equations (3.7) and (3.8)]). In this way, and under the assumption of the previous integral condition, the future causal completion has the following point set structure:

V̂ ≅ (ℝ × Σ) ∪ (ℝ × ∂_C Σ) ∪ ({+∞} × Σ_C),

where ∂_C Σ = Σ_C \ Σ denotes the Cauchy boundary of Σ.
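For the reader's convenience, here is a minimal verification of the chronological characterization above (our own computation, following the standard warped-product argument; it is not part of the original text). For a future-directed curve γ(t) = (t, x(t)) parametrized by t,

\[
g(\dot\gamma,\dot\gamma) \;=\; -1 + \alpha(t)\, g_\Sigma(\dot x,\dot x) \;<\; 0
\quad\Longleftrightarrow\quad
|\dot x|_{g_\Sigma} \;<\; \frac{1}{\sqrt{\alpha(t)}} .
\]

Integrating, the g_Σ-length of the spatial projection between t_0 and t_1 is strictly smaller than ∫_{t_0}^{t_1} α(s)^{−1/2} ds, so (t_0, x_0) ≪ (t_1, x_1) forces d(x_0, x_1) < ∫_{t_0}^{t_1} α(s)^{−1/2} ds; conversely, any spatial path of length smaller than that integral can be traversed by a timelike curve of this form.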
The study is completely analogous for the past orientation, where if we assume the integral condition ∫_{−∞}^0 1/√(α(s)) ds < ∞, the past causal completion is identified with:

V̌ ≅ (ℝ × Σ) ∪ (ℝ × ∂_C Σ) ∪ ({−∞} × Σ_C).

Finally, for the (total) c-completion, we only need to observe that past and future indecomposable sets are S-related if they are associated to the same pair (Ω, x*) ∈ ℝ × Σ_C (see [20, equation (3.14)] and the paragraph above). In conclusion, the following result follows:

Proposition 6.1. Let (V, g) be a Robertson-Walker model as in (6.1), and assume the following integral conditions:

∫_0^{+∞} 1/√(α(s)) ds < ∞  and  ∫_{−∞}^0 1/√(α(s)) ds < ∞.    (6.2)

Then the c-completion, as a point set, becomes

V̄ ≅ ({−∞} ∪ ℝ ∪ {+∞}) × Σ_C.

Chronologically, the c-boundary has two copies, one for the future and one for the past, of the Cauchy completion Σ_C formed by spatial boundary points, and timelike lines over each point of the Cauchy boundary of Σ. Topologically, and assuming that Σ_C is locally compact, the chronological topology on V̄ coincides with the product topology on Σ_C × ({−∞} ∪ ℝ ∪ {∞}). Moreover, both V̂ and V̌ are Hausdorff.
Proof. The point-set and causal structure can be deduced from the previous comments (see also [17, theorem 4.2]). For the topological structure, we only need to recall that [31, proposition 5.24] is also applicable to this approach and, moreover, it remains true when Ω = ∞ if the integral condition holds.
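As a concrete illustration (our own example, not from the original text), take Σ = ℝⁿ with its Euclidean metric, so that Σ_C = ℝⁿ and ∂_C Σ = ∅, and choose the de Sitter-like warping α(t) = cosh²(t). Then

\[
\int_{-\infty}^{+\infty} \frac{ds}{\sqrt{\alpha(s)}}
= \int_{-\infty}^{+\infty} \operatorname{sech}(s)\, ds
= \Big[\, 2\arctan\!\big(e^{s}\big) \,\Big]_{-\infty}^{+\infty}
= \pi \;<\; \infty,
\]

so both integral conditions in (6.2) hold. Since ∂_C Σ = ∅, proposition 6.1 yields a c-boundary consisting of exactly two spacelike copies of ℝⁿ, namely {−∞} × ℝⁿ and {+∞} × ℝⁿ, with no timelike or lightlike boundary points.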
Therefore, when the integral conditions are satisfied and the associated Cauchy completion Σ_C is locally compact, both V̂ and V̌ are Hausdorff and V has no lightlike boundary points. Hence, as a consequence of corollary 4.26:

Theorem 6.2. Let (V, g) be a Robertson-Walker model as in (6.1), and assume both the integral conditions in (6.2) and that Σ_C is locally compact. If π : V → M is a spacetime covering projection with associated group G such that (V, G) is finitely chronological and the G-orbits are closed for both V̂ and V̌, then the c-completions of V/G and M are both chronologically isomorphic and homeomorphic.
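To see the hypotheses of theorem 6.2 at work, consider the following quick check (our own example, continuing the illustration above, and not taken from the original text): let V = ℝ × ℝⁿ with α(t) = cosh²(t), and let G = ℤⁿ act by integer translations on Σ = ℝⁿ, so that M = V/G is a Robertson-Walker model over the torus Tⁿ. Given p = (t_0, x_0) and q = (t_1, x_1) in V, the characterization of the chronological relation gives

\[
p \ll (t_1, x_1 + k)
\quad\Longleftrightarrow\quad
d(x_0,\, x_1 + k) \;<\; \int_{t_0}^{t_1}\operatorname{sech}(s)\,ds \;\le\; \pi ,
\]

and only finitely many k ∈ ℤⁿ place x_1 + k inside a ball of radius π, so (V, G) is finitely chronological; moreover, the G-orbits on the boundary copies {±∞} × ℝⁿ are discrete, hence closed.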
Obviously, our results are applicable to other Robertson-Walker models without the integral conditions (6.2). For instance, the Anti-de Sitter model also satisfies both requirements: it has Hausdorff partial completions and no lightlike boundary points (see [17, section 4.1]). Moreover, the only pairs in V̄ with an empty component are of the form (V, ∅) and (∅, V), corresponding to i⁺ and i⁻, so it follows readily that M has no lightlike boundary points. Hence, the conclusion of theorem 6.2 also applies in this case.
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
CircLIFR synergizes with MSH2 to attenuate chemoresistance via MutSα/ATM-p73 axis in bladder cancer
Background Cisplatin (CDDP) has become a standard-of-care treatment for muscle-invasive bladder cancer (MIBC), while chemoresistance remains a major challenge. Accumulating evidence indicates that circular RNAs (circRNAs) are discrete functional entities. However, the regulatory functions as well as complexities of circRNAs in modulating CDDP-based chemotherapy in bladder cancer are yet to be well revealed. Methods Through analyzing the expression profile of circRNAs in bladder cancer tissues, RNA FISH, circRNA pull-down assay, mass spectrometry analysis and RIP, circLIFR was identified and its interaction with MSH2 was confirmed. The effects of circLIFR and MSH2 on CDDP-based chemotherapy were explored by flow cytometry and rescue experiments. Co-IP and Western blot were used to investigate the molecular mechanisms underlying the functions of circLIFR and MSH2. Biological implications of circLIFR and MSH2 in bladder cancer were assessed in tumor xenograft models and PDX models. Results CircLIFR was downregulated in bladder cancer and its expression was positively correlated with favorable prognosis. Moreover, circLIFR, synergizing with MSH2, which was a mediator of CDDP sensitivity in bladder cancer cells, positively modulated sensitivity to CDDP in vitro and in vivo. Mechanistically, circLIFR augmented the interaction between MutSα and ATM, ultimately contributing to stabilization of p73, which triggered apoptosis. Importantly, MIBC with high expression of circLIFR and MSH2 was more sensitive to CDDP-based chemotherapy in tumor xenograft models and PDX models. Conclusions CircLIFR could interact with MSH2 to positively modulate CDDP sensitivity through the MutSα/ATM-p73 axis in bladder cancer. CircLIFR and MSH2 might act as promising therapeutic targets for CDDP-resistant bladder cancer. Supplementary Information The online version contains supplementary material available at 10.1186/s12943-021-01360-4.
Background
Bladder cancer is one of the most common cancers in the world and the most costly cancer to treat on a per-patient basis due to required clinical surveillance and multiple therapeutic interventions [1]. Clinically, the cisplatin (CDDP)-based gemcitabine and cisplatin (GC) regimen has become a standard-of-care treatment for muscle-invasive bladder cancer (MIBC) [2,3]. Unfortunately, although 60% of patients with metastatic MIBC demonstrate an objective response to CDDP-based chemotherapy, this response is rarely durable, and chemoresistance remains a major challenge in this disease setting [2,4]. More recently, immune checkpoint inhibitors (ICIs) have demonstrated robust evidence of therapeutic activity in metastatic MIBC [3,5]. However, response rates from these uncontrolled immunotherapy trials are less than 30% [6]. Worse still, a retrospective cohort study shows decreased survival in patients treated with immunotherapy monotherapy relative to the chemotherapy arms [6]. In the management of MIBC, while combining ICIs with CDDP-based chemotherapy is an attractive approach, CDDP is still a first-line chemotherapeutic agent [3]. Thus, a better comprehension of the mechanisms underlying the development of CDDP resistance in patients with bladder cancer will represent a major step forward in optimizing patients' outcomes.
The DNA mismatch repair (MMR) system guards against genomic instability, and mutations in the human MMR genes MutS homolog 2 (MSH2) and MutL homolog 1 (MLH1) are the cause of the majority of hereditary nonpolyposis colorectal cancer (HNPCC) [7]. In addition to their role in DNA repair, it is a somewhat unexpected finding that a major issue confronting the clinical management of tumors with MSH2 defects is that they are resistant to several of the common treatment regimens, such as CDDP [8][9][10][11]. In MSH2-deficient MEF cells, DNA damage signaling involving p53 is suppressed during CDDP treatment [12]. Indeed, bladder tumors with low protein levels of MSH2 have poorer overall survival when treated with CDDP-based therapy, and a CDDP resistance screen suggests that MSH2 is the top gene candidate based on statistical significance [13]. Due to the frequent mutation of TP53 in bladder cancer [14], the mechanism by which MSH2 regulates chemotherapy resistance needs further study. On the other hand, it has been discovered that the interaction of MSH2 with other proteins is essential for triggering DNA damage signaling. Specifically, MSH2 interacts with MSH6 or MSH3 to form the MutSα or MutSβ complexes, respectively [15]. Nonetheless, the intrinsic regulatory mechanisms of MSH2 affecting CDDP sensitivity remain largely unknown. Therefore, improving the chemosensitivity of bladder cancer with low expression of MSH2, as well as elucidating the underlying mechanisms of MSH2-mediated CDDP sensitivity, is of paramount importance.
Circular RNAs (circRNAs), which are a newly discovered class of non-coding RNAs (ncRNAs), are generated from back-splicing of pre-mRNAs to form covalently closed transcripts [16]. They were originally considered erroneous products of splicing, but it has become clear that circRNAs are discrete functional entities [17,18]. CircRNAs can serve as miRNA sponges to affect translational processing [19]. Additionally, circRNAs can interact with different proteins to form specific circRNA-protein complexes (circRNPs) that subsequently influence the modes of action of the associated proteins [20]. Notably, recent studies suggest an emerging picture of bladder cancer based on circRNAs, with unambiguous evidence of tumor-promoting or -inhibiting properties [21]. We recently found that circHIPK3, circNR3C1, BCRC-3 and hsa_circ_0001361 could affect biological function by sponging miRNAs in bladder cancer [22][23][24][25]. Nevertheless, the regulatory functions as well as complexities of circRNAs in modulating CDDP resistance in bladder cancer are yet to be revealed.
In this study, we discovered that circLIFR, a circRNA generated from the circularization of the LIFR gene, was significantly downregulated in bladder cancer. Subsequent studies showed that circLIFR could interact with MSH2 to positively modulate CDDP sensitivity through the MutSα/ATM-p73 axis in bladder cancer cell lines. Importantly, by using a patient-derived xenograft (PDX) model, we further revealed that MIBC tumors with high circLIFR and MSH2 levels were more sensitive to CDDP. Our findings provided a systematic elucidation of the regulation of MSH2 function by circLIFR, and indicated that circLIFR and MSH2 might act as promising therapeutic targets for CDDP-resistant bladder cancer.
Patients and tissue specimen collection
Seventy-nine pairs of bladder cancer tissues and paired adjacent normal bladder tissues were obtained from patients who underwent radical cystectomy at the Department of Urology of the Union Hospital of Tongji Medical College (Wuhan, China) between January 2015 and March 2019. With the instruction of a skillful pathologist, we collected the normal bladder urothelium samples (≥ 200 mg/sample) at a distance of ≥ 3 cm from the edge of cancer tissues in the resected bladder. All specimens were immediately snap-frozen in liquid nitrogen after surgical removal. Histological and pathological diagnoses were confirmed, and the specimens were classified by at least two experienced clinical pathologists according to the 2004 World Health Organization Consensus Classification and Staging System for bladder neoplasms. All specimens were obtained with appropriate informed consent from the patients, and the study was approved by the Institutional Review Board of Tongji Medical College of Huazhong University of Science and Technology. Detailed information is presented in Table 1. All of the patients were followed up on a regular basis; overall survival (OS) time was determined from the date of surgery to the date of death or the date of the last follow-up visit for survivors.
Induction of cisplatin-resistance in T24 cells
Cisplatin-resistant variants of T24 (T24-CDDP) were derived from the original parental cell line by continuous exposure to cisplatin (Sigma-Aldrich, UK). Initially, T24 cells were treated with cisplatin (IC50) for 72 h. The media was then removed and cells were allowed to recover for a further 72 h. This development period was carried out for approximately 4 months, after which time IC50 concentrations were re-assessed in the resistant cell line. Cells were then maintained continuously in the presence of cisplatin at the new IC50 concentration for a further 4 months.
RNA preparation, RNase R, and qRT-PCR

Total RNA was isolated from cells or tissues using the miRNeasy Mini Kit (Qiagen). Nuclear and cytoplasmic RNA was extracted using a nuclear and cytoplasmic RNA purification kit (Fisher Scientific, AM1921). For RNase R treatment, 1 μg of total RNA was incubated for 15 min at 37°C with or without 3 U of RNase R (Epicentre Technologies, Madison, WI). To validate the back-spliced junction point of circRNAs, the total RNA samples were treated with the Ribo-Zero rRNA Removal Kit (Epicentre, WI, USA) to deplete rRNA, according to the manufacturer's instructions; next, the rRNA-depleted and RNase R-digested RNA samples were used to synthesize cDNA with random primers (Takara, Dalian, China). To quantify the amount of mRNA and circRNA, cDNA was synthesized with the PrimeScript RT Master Mix (Takara, Dalian, China) from 500 ng of RNA. The real-time PCR analyses were performed using SYBR Premix Ex Taq II (Takara). In particular, divergent primers annealing at the distal ends of the circRNA were used to determine the abundance of circRNA. The primers are listed in Supplementary Table 1. Amplification was performed using the StepOnePlus Real-Time PCR System (Applied Biosystems, Foster City, CA) and Ct thresholds were determined by the software.

Actinomycin D treatment and RNA stability assay for RNA lifetime

For actinomycin D treatment, cells were planted into six-well plates. Upon reaching 60% confluency after 24 h, cells were treated with 5 μg/ml actinomycin D or DMSO and collected at the indicated time points. The turnover rate and half-life of RNA were estimated according to a previously published paper [27]. As actinomycin D treatment results in transcription stalling, the change of RNA concentration at a given time (dC/dt) is proportional to the constant of RNA decay (K_decay) and the RNA concentration (C), leading to the following equation:

dC/dt = −K_decay · C

Thus, the RNA degradation rate K_decay was estimated by:

ln(C/C_0) = −K_decay · t

To calculate the RNA half-life (t_1/2), when 50% of the RNA is decayed (that is, C/C_0 = 1/2), the equation was:

ln(1/2) = −K_decay · t_1/2

From where:

t_1/2 = ln 2 / K_decay
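As an illustration of how this estimate might be carried out in practice, here is a minimal Python sketch (our own, not part of the original methods; the time points and relative-abundance values are hypothetical placeholders). It fits ln(C/C_0) = −K_decay·t by least squares through the origin and reports the half-life.

import numpy as np

# Hypothetical actinomycin D time course (hours) and qRT-PCR
# relative abundances C/C0 (normalized to the 0 h sample).
t = np.array([0.0, 4.0, 8.0, 12.0, 24.0])
rel_abundance = np.array([1.00, 0.78, 0.61, 0.48, 0.23])

# Least-squares fit of ln(C/C0) = -K_decay * t
# (a line through the origin, so the slope is -K_decay).
log_ratio = np.log(rel_abundance)
k_decay = -np.sum(t * log_ratio) / np.sum(t * t)

# Half-life follows from C/C0 = 1/2: t_half = ln(2) / K_decay.
t_half = np.log(2) / k_decay
print(f"K_decay = {k_decay:.4f} / h, half-life = {t_half:.1f} h")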
RNA pull-down assays
Biotin-labelled circLIFR (sense) and control (antisense) probes (Supplementary Table 1) were synthesized by TSINGKE (Wuhan, China). RNA pull-down assays were performed as described [20]. In brief, 10⁷ cells were washed in ice-cold phosphate-buffered saline, lysed in 500 μl Co-IP buffer (20 mM Tris-HCl, pH 7.5, 150 mM NaCl, 1 mM EDTA, 0.5% NP-40, and complete protease inhibitor cocktail and RNase inhibitors), and incubated with 3 μg biotinylated DNA oligo probes at room temperature for 2 h. A total of 50 μl washed Streptavidin C1 magnetic beads (Invitrogen) was added to each binding reaction and further incubated at room temperature for another hour. The beads were washed briefly with Co-IP buffer five times. The bound proteins in the pull-down materials were analyzed by mass spectrometry or western blotting.
Silver staining and mass spectrometry analysis
Silver staining was performed using the PAGE Gel Silver Staining Kit (Solarbio, Beijing, China) according to the protocol, while mass spectrometry analysis was done by Novogene (Tianjin, China). Afterwards, protein identification and quantification were accomplished by Proteome Discoverer software (version 1.4; Thermo Fisher Scientific, USA).
Fluorescent in situ hybridization (FISH)
Cy3-labelled circLIFR probes (Supplementary Table 1) were synthesized by TSINGKE (Wuhan, China) and circLIFR FISH was performed as described with minor modifications [28]. Briefly, cells were fixed with the fixative solution, followed by permeabilization. Then hybridization was performed at 37°C overnight in a dark moist chamber. After being washed three times in 2 × SSC (Solarbio, Beijing, China) for 10 min, the coverslips were sealed with parafilm containing DAPI. The images were acquired using a confocal laser scanning microscope (LSM 780, Carl Zeiss).
Immunofluorescence
Bladder cancer cells grown on the coverslips were fixed with 4% paraformaldehyde in PBS for 20 min on ice and then permeabilized with 0.1% TritonX-100 in PBS for 10 min. After washing twice with PBS, cells were blocked with 5% BSA for 30 min at 37°C and incubated with MSH2 antibody overnight at 4°C. The next day, cells were washed with PBS and then incubated with corresponding secondary antibody for 30 min at 37°C, followed by sealing with parafilm containing DAPI. Fluorescent images were acquired using a confocal laser scanning microscope (LSM 780, Carl Zeiss).
Nuclear and cytoplasmic extraction
Cytoplasmic and nuclear fractions were isolated as described by the manufacturer, using the reagents supplied in PARIS™ Kit (AM1556, Thermo Fisher Scientific, Waltham, USA). Briefly, cells were lysed in Cell Fraction Buffer on ice for 10 min. After centrifugation at 500×g for 3 min at 4°C, the supernatant was collected as cytoplasmic fraction. Followed by washing the pellet with Cell Fraction Buffer, the nuclei were collected.
Gene set enrichment analysis (GSEA)
Gene set enrichment analysis was performed as previously described [29]. The published gene sets were used as indicated. Datasets were generated from TCGA database [30].
Cell Counting Kit-8 (CCK-8) assay
The proliferation of cells was tested by CCK-8 kit (Dojindo, Japan) following the manufacturer's instructions. The optical density at 450 nm was measured using an automatic microplate reader (Synergy4; BioTek, Winooski, VT, USA).
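Downstream of such viability readings, IC50 values (as reported in the Results) are typically obtained by fitting a dose-response curve. The sketch below is a minimal illustration of one common approach (our own, using a four-parameter logistic model with hypothetical absorbance-derived viabilities; the paper does not specify its fitting procedure).

import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(dose, bottom, top, ic50, hill):
    # Viability as a function of dose; ic50 is the inflection point.
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

# Hypothetical CDDP doses (uM) and normalized CCK-8 viabilities.
dose = np.array([0.5, 1, 2, 4, 8, 16, 32])
viability = np.array([0.97, 0.92, 0.80, 0.55, 0.31, 0.15, 0.08])

params, _ = curve_fit(
    four_param_logistic, dose, viability,
    p0=[0.0, 1.0, 4.0, 1.0],  # rough initial guesses
    maxfev=10000,
)
print(f"Estimated IC50 = {params[2]:.2f} uM")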
Apoptosis assay
For the apoptosis assay, cells were seeded into a six-well plate with or without CDDP treatment. The cell apoptosis assay was determined according to the manual of FITC Annexin V Apoptosis Detection Kit I (BD Biosciences). Data were analyzed by FlowJo software (FlowJo).
Tumor xenograft model
All animal experiments were approved by the Animal Care Committee of Tongji Medical College. The BALB/c nude mice (4 weeks old, ♀) were obtained from Beijing Vital River Laboratory Animal Technology Co., Ltd. and housed in a specific-pathogen-free facility. Cells were injected subcutaneously into the dorsal flanks of nu/nu mice (3 × 10⁶ cells per mouse). Tumors were measured with calipers and volumes calculated using the following formula: a² × b × 0.5, where a is the smallest diameter and b is the diameter perpendicular to a. At the end of the experiment, mice were sacrificed and tumors were excised and weighed.
For the orthotopic bladder tumor model, the experiments were performed as described previously with minor modifications [31]. In brief, under anesthesia, the nu/nu mice were placed in a supine position on a thermostatic blanket and the urethras were catheterized with 18G intravenous catheters. Silver nitrate was injected and allowed to dwell for 10 s, followed by bladder irrigation with sterile water. Then, 2 × 10⁶ prepared cells were injected using the stylet needle. Tumors were monitored by ultrasound imaging twice a week.
For in vivo drug studies, CDDP or PBS was administered by intraperitoneal injection three times weekly at a dose of 1 mg/kg.
Patient-derived xenograft model
The effects of circLIFR and MSH2 were evaluated by using the widely accepted patient-derived xenograft (PDX) model. The tumors were removed and cut into small pieces with a volume of 30-60 mm³ when grown to ~800 mm³, and subcutaneously inoculated into the flanks of NOD-SCID mice. The tumor xenografts were used for experiments after three serial passages. Tumor pieces of ~60 mm³ were subcutaneously grafted into the flanks of the NOD-SCID mice. When tumors grew to ~200 mm³, the mice were randomly divided into PBS or CDDP subgroups. After that, mice with tumors were injected intraperitoneally with either PBS or CDDP (2 mg/kg) at days 1, 2, 3, 15, 16, and 17. Tumor growth was assessed with a caliper every 3 to 4 days. Tumor volumes were measured using the following formula: 4π/3 × (width/2)² × (length/2). At day 28, animals were sacrificed under anesthesia, after which tumors were harvested and immediately snap-frozen in cold 2-methylbutane.
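For clarity, the two caliper formulas used above (a² × b × 0.5 for the cell-line xenografts and 4π/3 × (width/2)² × (length/2) for the PDX tumors) can be written out as below; this is purely illustrative (our own sketch), and the example measurements are hypothetical.

import math

def xenograft_volume(a_mm: float, b_mm: float) -> float:
    """Cell-line xenografts: a is the smallest diameter, b perpendicular to a."""
    return 0.5 * a_mm ** 2 * b_mm

def pdx_volume(width_mm: float, length_mm: float) -> float:
    """PDX tumors: ellipsoid-style approximation from width and length."""
    return (4.0 * math.pi / 3.0) * (width_mm / 2.0) ** 2 * (length_mm / 2.0)

# Hypothetical caliper readings (mm).
print(xenograft_volume(5.0, 8.0))   # 100.0 mm^3
print(pdx_volume(5.0, 8.0))         # ~104.7 mm^3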
Statistics
Data were expressed as mean ± SD. Analyses were performed using Prism 8.1.2 (GraphPad Software Inc.). Means of the groups were compared using Student's t-test and ANOVA. Kaplan-Meier survival curves for mice and P-values were calculated using a log-rank test. P values of < 0.05 indicate statistical significance.
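As an illustration of the survival comparison described here, a minimal Python sketch using the lifelines package might look as follows (our own example with hypothetical follow-up data; the paper itself used Prism):

import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up times (months) and death events for two
# groups stratified by median circLIFR expression (1 = death observed).
t_high = np.array([12, 20, 34, 40, 55, 60])
e_high = np.array([1, 0, 1, 0, 0, 0])
t_low = np.array([6, 9, 14, 18, 25, 30])
e_low = np.array([1, 1, 1, 0, 1, 1])

# Kaplan-Meier estimate for one group.
kmf = KaplanMeierFitter()
kmf.fit(t_high, event_observed=e_high, label="circLIFR high")
print(kmf.median_survival_time_)

# Log-rank test between the two groups.
result = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank p = {result.p_value:.3f}")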
Identification and characterization of circLIFR in bladder cancer
Previously, we have analyzed the expression profile of circRNAs in human bladder cancer tissues and paired normal tissues through high-throughput sequencing [22]. Among the differentially expressed circRNAs, we noted that hsa_circ_0072309 (termed circLIFR) was derived from the exon 2, 3, 4 and 5 regions within the LIFR locus (Fig. 1a), utilizing the human reference genome (GRCh37/hg19). LIFR is a key gene in the pathogenesis of tumors of different histology [32][33][34]. Consistent with the RNA-seq results [22], circLIFR was significantly downregulated in bladder cancer tissues (Fig. 1b), while the expression of LIFR pre-mRNA (pLIFR) and mRNA (mLIFR) showed no significant difference between bladder cancer and paired normal tissues (Fig. S1, A and B). Down-regulation of circLIFR was also found in the human muscle-invasive bladder cancer cells T24 and UMUC3, compared with the human immortalized uroepithelium cells SV-HUC-1 and UROtsa (Fig. 1c). Moreover, Kaplan-Meier curves showed that low levels of circLIFR predicted shorter overall survival (OS) (Fig. 1d), while similar OS times were found between different expression levels of mLIFR (Fig. S1C). Therefore, these findings indicated that the lower expression of circLIFR in bladder cancer was not simply a by-product of splicing and was suggestive of functionality.
CircLIFR is a 580-nt circRNA, the back-spliced junction point of which was amplified with divergent primers and validated by Sanger sequencing (Fig. 1e). To further confirm the circular characteristics of circLIFR, a comparison of random 6-mer- versus oligo dT-primed cDNA synthesis was performed. It showed that circLIFR was retro-transcribed more efficiently with random 6-mers than with the oligo dT primer, which indicated that circLIFR had no poly-A tail (Fig. S1D). Next, the head-to-tail splicing of endogenous circLIFR was assayed by RT-PCR with convergent and divergent primers. As expected, circLIFR could be amplified by the divergent primers in cDNA but not genomic DNA (gDNA) (Fig. 1f). Resistance to digestion with RNase R exonuclease also confirmed that circLIFR harbored a circular RNA structure (Fig. 1g). Moreover, circLIFR transcripts were more stable in comparison to LIFR mRNA upon treatment with actinomycin D (Fig. 1h and Fig. S1E). In addition, qRT-PCR analysis of the nuclear/cytoplasmic fractionation and fluorescence in situ hybridization (FISH) detection showed that circLIFR was mainly localized in the nucleus (Fig. 1i and j, and Fig. S1, F and G).
Collectively, these findings established that circLIFR was a bona fide circRNA, which was predominantly distributed in the nucleus and was significantly downregulated in bladder cancer.
CircLIFR interacts with MSH2 protein in bladder cancer cells
Similar to other ncRNAs, defining the subcellular localization of circRNAs could provide valuable insights into their functions. To determine whether cytoplasm-localized circLIFR functions as a miRNA sponge, we analyzed argonaute 2 (AGO2) CLIP data and found that circLIFR did not bind to AGO2 [35], which was supported by an AGO2 RNA immunoprecipitation (RIP) assay (Fig. S2A). Thus, we ruled out a miRNA-sponge function for circLIFR.
Given that circLIFR was mainly located in the nucleus, we next performed RNA pull-down assays to explore its protein-binding role, using biotinylated probes targeting the circLIFR back-spliced sequence (Fig. 2a and Fig. S2B). Following the analysis pipeline (Fig. 2b and Supplementary Tables 2 and 3) to identify RBPs, a major differential band precipitated in T24 lysates was identified as MSH2 through mass spectrometry (Fig. 2c). The interaction between circLIFR and MSH2 was further validated through probing the precipitates immunoprecipitated by anti-MSH2 antibody (Fig. 2d) and RIP analysis (Fig. 2e). Furthermore, we confirmed the colocalization of endogenously expressed circLIFR and MSH2 in the nucleus by performing immunofluorescence and fluorescence in situ hybridization assays (Fig. 2f).
To delineate the structural determinants of the interaction between circLIFR and MSH2, we carried out deletion mapping by subdividing the MSH2 functional domains. Using the catRAPID algorithm for RNA-protein interaction [36], circLIFR was predicted to bind the lever, clamp and ATPase domains of the MSH2 protein (Fig. 2g). An anti-Flag RIP assay showed that removal of the ATPase domain (aa620-934) of MSH2, a domain intrinsically linked to conformational changes of MMR proteins [10,[37][38][39], abolished its association with circLIFR, while deletion of the lever and clamp domains (aa300-620) had no effect on its interaction with circLIFR (Fig. 2h). In summary, these results proposed that circLIFR and MSH2 formed an RNA-protein complex through the ATPase domain of MSH2 in bladder cancer cells.
MSH2 is a mediator of up-regulation of CDDP sensitivity in bladder cancer cells
It was previously reported that MSH2 could not only protect mammalian genomes by repairing mismatched bases resulting from erroneous DNA replication, but also promote apoptosis as part of the cellular response to CDDP [10,11,40]. A recent study indicated that bladder cancer cells depleted of MSH2 were resistant to CDDP in vitro, in part due to a reduction in p53-dependent apoptosis [13]. However, the role of MSH2 in CDDP-based chemotherapy, especially in p53-deficient bladder cancer, remains to be further investigated. In this regard, we explored whether MSH2 played a vital role in CDDP resistance in the T24 and UMUC3 bladder cancer cell lines, which are p53-deficient cells (Supplementary Table 4). Gene set enrichment analysis (GSEA) indicated that MSH2 was highly associated with DNA repair and apoptosis based on data from the TCGA database (Fig. 3a). Knockdown of MSH2 markedly decreased the apoptosis rate in T24 and UMUC3 cells treated with CDDP (Fig. 3b and c, and Fig. S3, A to C). Moreover, our results showed that the IC50 value of CDDP was increased when MSH2 was knocked down, and decreased when MSH2 was overexpressed (Fig. 3d and Fig. S3D). Collectively, our findings demonstrated that MSH2 was a mediator of up-regulation of CDDP sensitivity through inducing apoptosis in p53-deficient bladder cancer cells.
CircLIFR positively modulates sensitivity of bladder cancer cells to CDDP
Given that circLIFR could interact with MSH2 to form an RNA-protein complex, we subsequently evaluated the potential effect of circLIFR on CDDP sensitivity in bladder cancer cells. First, the fidelity of the knockdown and overexpression systems used to manipulate circLIFR expression was evaluated. CircLIFR knockdown experiments with independent small hairpin RNAs (shRNAs) designed against the back-splicing between exons 2 and 5 of circLIFR revealed that sh-circLIFR#2 could specifically target circLIFR, but not mLIFR (Fig. S3E). Meanwhile, overexpression of circLIFR was confirmed to have no effect on the expression of mLIFR (Fig. S3F). Next, we observed that silencing of circLIFR decreased CDDP-induced apoptosis in T24 and UMUC3 cells (Fig. 3e and f and Fig. S3, G and H). Moreover, as determined by CCK-8 assay, CDDP sensitivity was enhanced upon overexpression of circLIFR, and was decreased after knockdown of circLIFR in bladder cancer cells (Fig. 3g and Fig. S3I). We then sought to define whether circLIFR was effective against acquired CDDP resistance in bladder cancer. To this end, we continuously exposed T24 cells to stepwise escalating concentrations of CDDP and established a CDDP-resistant T24 cell line (named T24-CDDP). We confirmed that T24-CDDP resistant cells exhibited a high level of resistance to CDDP (Fig. 3h), while there was no significant difference in circLIFR levels and MSH2 mRNA/protein levels between T24-CDDP resistant and parental T24 cells (Fig. S3J). Importantly, overexpression of circLIFR could sensitize T24-CDDP resistant cells to CDDP-induced apoptosis (Fig. 3i and j).

Fig. 1 Identification and distribution of circLIFR. a Scheme illustrating the production of circLIFR. b, c The expression of circLIFR was detected by qRT-PCR in 79 pairs of bladder cancer and paired adjacent normal bladder tissues, SV-HUC-1, UROtsa, T24, and UMUC3 cells. GAPDH was used as internal control. Data were mean ± SD. ***P < 0.001 (Student's t-test). d Kaplan-Meier curves of OS in bladder cancer patients. Patients were grouped by the median circLIFR expression. P-value was calculated using a log-rank test. e Sequencing analysis of the head-to-tail splicing junction in circLIFR. f The existence of circLIFR was validated in T24 and UMUC3 bladder cancer cell lines by qRT-PCR. Divergent primers amplified circLIFR in cDNA but not genomic DNA (gDNA). GAPDH was used as negative control. Red arrows indicate divergent primers, and black arrows indicate convergent primers. g The relative RNA levels were analyzed by qRT-PCR in T24 and UMUC3 cells treated with or without RNase R. Data were mean ± SD, n = 3. ns, not significant, ***P < 0.001 (Student's t-test). h The relative RNA levels of circLIFR and mLIFR were analyzed by qRT-PCR after treatment with actinomycin D at the indicated time points in T24 cells (n = 3). i Identification of circLIFR cytoplasmic and nuclear distribution by qRT-PCR analysis in T24 cells. GAPDH and U1 were applied as positive controls in the cytoplasm and nucleus, respectively (n = 3). Western blots of total cell lysates (T), cytosolic extracts (C) and nuclear extracts (N) with α-tubulin as a cytosolic marker and histone H3 as a nuclear marker. j Identification of circLIFR cytoplasmic and nuclear distribution by FISH in T24 cells. 18S and U6 were applied as positive controls in the cytoplasm and nucleus, respectively; circLIFR, 18S, and U6 probes were labeled with Cy3; nuclei were stained with DAPI.
Altogether, we concluded that circLIFR promoted apoptosis and overcame acquired resistance of bladder cancer cells to CDDP in vitro, and might be a potential therapeutic target for CDDP resistance.
CircLIFR/MSH2 complex contributes to the CDDP sensitivity via MutSα/ATM-p73 axis in bladder cancer cells

To further determine the role of the circLIFR and MSH2 complex in bladder cancer CDDP chemosensitivity, we performed MSH2 knockdown in circLIFR-overexpressing bladder cancer cells, and observed that circLIFR induction of cell apoptosis upon CDDP treatment was reversed by knockdown of MSH2 (Fig. 4a to d), suggesting that the up-regulation of cell apoptosis and CDDP sensitivity by circLIFR was dependent on its interaction with MSH2. On the other hand, we knocked down circLIFR in MSH2-overexpressing bladder cancer cells, and found that MSH2 promotion of cell apoptosis upon CDDP treatment was also down-regulated by knockdown of circLIFR (Fig. 4e to h). These results provided evidence that circLIFR could synergize with MSH2 to enhance the CDDP chemotherapeutic efficacy of bladder cancer cells. We then elucidated the molecular mechanism by which the circLIFR/MSH2 complex contributed to CDDP sensitivity in bladder cancer cells. It showed that overexpression of circLIFR had no effect on either the mRNA or protein levels of MSH2 (Fig. S4A). Meanwhile, we found no significant difference in circLIFR levels between MSH2-overexpressing cells and control cells (Fig. S4B). These results led us to speculate that circLIFR might regulate apoptosis and CDDP sensitivity by affecting the activity, rather than the abundance, of the MSH2 protein. As an obligate subunit for MMR proteins in eukaryotic cells, MSH2 interacts with MSH6 or MSH3 to form the MutSα or MutSβ complexes, respectively [15]. Furthermore, it has been demonstrated that MutSα forms a complex with ATM, which is well known for its role as an apical activator of the DNA damage response [41].
Therefore, we carried out co-immunoprecipitation with anti-MSH2 antibody, which indicated that MSH2 existed as a stable complex with MSH6, MSH3 and ATM, but not ATR, in T24 and UMUC3 cells (Fig. 5a). Meanwhile, immunoprecipitation of ATM co-precipitated MutSα, but not MutSβ (Fig. 5b). We further observed that circLIFR overexpression enhanced the interaction of endogenous MSH2 with MSH6, which was attenuated upon MSH2 knockdown, while circLIFR had only a slight effect on the binding of MSH2 with MSH3 (Fig. 5c and Fig. S4C). Importantly, we also found that the association of MSH2 with ATM was greater in extracts of cells overexpressing circLIFR, whereas the increased interaction was ablated when MSH2 was silenced (Fig. 5c and Fig. S4C). These results demonstrated that circLIFR augmented the binding of MutSα with ATM.
Previous studies have shown that MMR proteins contribute to the activation of apoptosis through p53-dependent and p53-independent mechanisms [13,42,43]. Thus, MMR-deficient cells exhibit variable defects in the induction of p53 and its two homologs p63 and p73, which are regulators of CDDP-induced apoptosis [44,45]. Notably, p53 is the most frequently mutated gene in bladder cancer, while p63 and p73 are rarely mutated or deleted (Supplementary Tables 4 and 5). Previous findings also confirmed that ATM played an important role in the regulation of p73-mediated apoptosis in response to CDDP [45]. Therefore, to further elucidate the pathway that mediated cell apoptosis, we examined the effect of the circLIFR/MSH2 complex on ATM phosphorylation, as well as p63 and p73 expression. It showed that CDDP stimulated ATM phosphorylation and p73 expression in a time-dependent manner, which were suppressed by MSH2 knockdown (Fig. 5d and Fig. S4D), and we observed that knockdown of MSH2 could impair both the extent and reaction rate of CDDP-induced ATM phosphorylation (Fig. 5d). Similarly, circLIFR silencing also inhibited the increase of ATM phosphorylation and p73 expression upon CDDP treatment (Fig. 5e and Fig. S4E). Moreover, enforced MSH2 expression, without exposure to CDDP, could significantly up-regulate phosphorylation of ATM and expression of p73, which were partially attenuated by circLIFR knockdown (Fig. 5f and Fig. S4F). In addition, we observed that circLIFR regulation of ATM phosphorylation and p73 expression was completely abrogated by MSH2 knockdown (Fig. 5g and Fig. S4G). Importantly, overexpression of circLIFR restored the ability of CDDP to induce ATM phosphorylation and p73 expression in CDDP-resistant cells (Fig. 5h). These data supported the hypothesis that p73 could be positively regulated by the circLIFR/MutSα complex through up-regulation of ATM phosphorylation.

Fig. 2 CircLIFR binds to MSH2 protein. a Biotin-labeled sense or antisense circLIFR probes were used for RNA-protein pull-down against T24 cell lysates. Identification of proteins that interact with circLIFR by silver staining. Red arrow indicates the major differential band precipitated in T24 lysates. b Analysis pipeline used to identify proteins that interact with circLIFR: (1) the 149 proteins that were only pulled down by the sense probe were screened; (2) the 15 proteins with molecular masses of 100-130 kDa were then selected as candidates according to the positive band found in silver staining; (3) MSH2 was selected as it was the only protein with high abundance (no less than 3 peptides). c Mass spectrometry assay depicting the MSH2 peptides pulled down by sense circLIFR probes. d MSH2 immunoblot analysis of the biotin-labeled sense and antisense circLIFR probe pull-down eluates from lysates of T24 and UMUC3 cells. GAPDH was used as loading control. e RNA immunoprecipitation (RIP) assays in T24 and UMUC3 cells using MSH2 and IgG antibodies. The precipitate was subjected to western blotting with the antibodies against MSH2 and GAPDH. The MSH2-enriched circLIFR relative to the IgG-enriched value was calculated by qRT-PCR. Data were mean ± SD. *P < 0.05, **P < 0.01 (Student's t-test). f Dual RNA-FISH and immunofluorescence staining assay indicating the co-localization of circLIFR (red) and MSH2 (green), with nuclei stained with DAPI (blue). g Prediction of the circLIFR-MSH2 interaction using the catRAPID algorithm and schematic of MSH2 with functional protein domains. MSH2 truncations lacking the region 620-934 aa (3xFlag Δ620-934), 300-934 aa (3xFlag Δ300-934), 1-619 aa (3xFlag Δ1-619), or 1-299 aa (3xFlag Δ1-299). h Relative enrichment of endogenous circLIFR in truncated MSH2 RIP was measured by qRT-PCR, following T24 cells transfected with 3xFlag-MSH2 truncations. Data were mean ± SD. ns, not significant, **P < 0.01 (Student's t-test).

Fig. 3 MSH2 and circLIFR can improve CDDP chemosensitivity. a Gene set enrichment analysis (GSEA) of TCGA datasets showed that higher MSH2 expression was significantly associated with DNA repair and apoptosis in bladder cancer. b, c T24 cells were stably transfected with scramble, shMSH2#1, or shMSH2#2 vector. After T24 cells were treated for 36 h in the absence or presence of 5 μM CDDP, apoptosis was measured by Annexin-V plus PI staining and fluorescence-activated cell sorter (FACS) analysis. Bars show the percentages of cells that were early apoptotic (Annexin-V+/PI−) and late apoptotic/dead (Annexin-V+/PI+). Data were mean ± SD. ***P < 0.001, ****P < 0.0001 (Student's t-test). d Determination of IC50 values for CDDP treatment for 24 h in T24 cells stably transfected with scramble, shMSH2#1, shMSH2#2, mock, or MSH2 vector. e, f T24 cells were stably transfected with scramble or sh-circLIFR#2 vector. After T24 cells were treated for 36 h with or without 5 μM CDDP, apoptosis was measured by Annexin-V plus PI staining and FACS analysis. Data were mean ± SD. **P < 0.01, ****P < 0.0001 (Student's t-test). g Determination of IC50 values for CDDP treatment for 24 h in T24 cells stably transfected with scramble, sh-circLIFR#2, vector, or circLIFR. h Determination of IC50 values for CDDP treatment for 24 h in T24 and T24-CDDP cells. i, j T24-CDDP cells were stably transfected with vector or circLIFR. After T24-CDDP cells were treated for 36 h in the absence or presence of 5 μM CDDP, apoptosis was measured by Annexin-V plus PI staining and FACS analysis. Data were mean ± SD. ns, not significant, **P < 0.01, ***P < 0.001 (Student's t-test).

Fig. 4 CircLIFR synergizes with MSH2 to enhance CDDP chemosensitivity of bladder cancer cells. a-d FACS assay showing the apoptosis of T24 and UMUC3 cells stably transfected with vector or circLIFR, and those cotransfected with scramble, shMSH2#1, or shMSH2#2. Data were mean ± SD. **P < 0.01, ****P < 0.0001 (Student's t-test). e-h FACS assay showing the apoptosis of T24 and UMUC3 cells stably transfected with vector or MSH2, and those cotransfected with scramble or sh-circLIFR#2. Data were mean ± SD. **P < 0.01, ****P < 0.0001 (Student's t-test).
To confirm whether the effects of MSH2 and circLIFR on cell apoptosis were mediated via p73, we conducted a series of rescue experiments. It showed that knockdown of p73 abolished the MSH2-mediated increases in both basal and CDDP-induced cell apoptosis (Fig. 5i and Fig. S4H). Similarly, circLIFR promotion of cell apoptosis was also completely reversed by p73 knockdown, both at the basal level and upon CDDP treatment (Fig. 5j and Fig. S4I). These results demonstrated that the circLIFR/MSH2 complex contributed to CDDP sensitivity via the MutSα/ATM-p73 axis in bladder cancer cells.
CircLIFR is a potential therapeutic target to improve CDDP chemosensitivity in bladder cancer
To determine whether circLIFR is an alternative therapeutic target which could improve CDDP-based therapy in CDDP-resistant tumors, T24-CDDP cells stably transfected with circLIFR or control vector were injected subcutaneously into BALB/c nude mice, followed by intraperitoneal PBS or CDDP treatment. Supporting the results obtained in vitro, as shown in Fig. 6 (a to c) and Fig. S5A, circLIFR strikingly decreased the tumor volumes and weights, prolonged survival, and weakened the CDDP resistance of T24-CDDP cells, whereas the administration of CDDP alone without the assistance of circLIFR overexpression could not retard tumor growth. Furthermore, given that the subcutaneous model does not faithfully recapitulate the microenvironment of bladder cancer, we applied the orthotopic xenograft bladder tumor model along with PBS or CDDP treatment. Subsequent growth of bladder cancer was confirmed and monitored by urinary bladder ultrasound. Strikingly, we found that orthotopic transplants of T24-CDDP cells with stable enforced expression of circLIFR displayed smaller tumor sizes, and circLIFR effectively re-sensitized CDDP-resistant cells to CDDP (Fig. 6d and e). These findings indicated that circLIFR could suppress tumor growth and be essential for governing CDDP chemotherapy efficacy even in CDDP-resistant bladder cancer cells in vivo.
To gain further insights into the potential therapeutic application of circLIFR and MSH2 to CDDP treatment in patients, we used a bladder cancer PDX model to explore the efficacy of CDDP. Based on the co-expression levels of circLIFR and MSH2 (Fig. S5B), we divided the clinical bladder cancer tissues into two groups, a circLIFR^low/MSH2^low group (patient #135 and patient #150) and a circLIFR^high/MSH2^high group (patient #348 and patient #615) (Fig. S5C). The PDX models of each patient were randomly separated and followed by intraperitoneal administration of PBS or CDDP, respectively (Fig. S5C). Of note, we found that the circLIFR^high/MSH2^high group responded much better to CDDP than the circLIFR^low/MSH2^low group (Fig. 6f and g). Consistent with these biological effects, more intense TUNEL staining in the circLIFR^high/MSH2^high group compared with the circLIFR^low/MSH2^low group after administration of CDDP was appreciable (Fig. 6h and i). Importantly, IHC analysis revealed a more obvious improvement of ATM phosphorylation and p73 up-regulation upon CDDP treatment in the circLIFR^high/MSH2^high group, compared with the circLIFR^low/MSH2^low group (Fig. 6h and i). Together, these data suggested that circLIFR and MSH2 status might be used as a stratification biomarker to select bladder cancer patients who may respond to and benefit from CDDP treatment.

Fig. 5 CircLIFR/MSH2 complex contributes to the CDDP sensitivity via MutSα/ATM-p73 axis in bladder cancer cells. a Co-IP assay using T24 and UMUC3 cell lysates immunoprecipitated by anti-MSH2 antibody. The precipitate was subjected to western blotting with the antibodies against MSH2, MSH6, MSH3, ATM, ATR, and GAPDH. b Co-IP assay using T24 and UMUC3 cell lysates immunoprecipitated by anti-ATM antibody. The precipitate was subjected to western blotting with the antibodies against ATM, MSH2, MSH6, MSH3, and GAPDH. c Interaction between MSH2, MSH6, MSH3, and ATM in T24 cells stably transfected with vector or circLIFR, and those cotransfected with scramble or shMSH2#1. Co-IP experiments with anti-MSH2 antibody were performed, and the precipitate was detected by western blot with the antibodies against MSH2, MSH6, MSH3, ATM, and GAPDH. d T24 cells, which were stably transfected with scramble, shMSH2#1, or shMSH2#2, were treated with 5 μM CDDP for the indicated time. Whole cell lysates were collected for western blot analysis of MSH2, MSH6, ATM, pATM, p73, p63, and GAPDH. e T24 cells, which were stably transfected with scramble or sh-circLIFR#2, were treated with 5 μM CDDP for the indicated time. Whole cell lysates were collected for western blot analysis of MSH2, MSH6, ATM, pATM, p73, p63, and GAPDH. f Western blot analysis with the indicated antibodies in T24 cells stably transfected with vector or MSH2, and those cotransfected with scramble or sh-circLIFR#2. g Western blot analysis with the indicated antibodies in T24 cells stably transfected with vector or circLIFR, and those cotransfected with scramble, shMSH2#1, or shMSH2#2. h T24-CDDP cells stably transfected with vector or circLIFR were treated with or without 5 μM CDDP for 24 h, and cell lysates were subjected to western blot analysis with the indicated antibodies. i T24 cells were stably transfected with vector or MSH2 and cotransfected with scramble or sh-p73. Apoptosis was measured by Annexin-V plus PI staining and FACS analysis. Data were mean ± SD. ***P < 0.01 (Student's t-test). j T24 cells were stably transfected with vector or circLIFR and cotransfected with scramble or sh-p73. Apoptosis was measured by Annexin-V plus PI staining and FACS analysis. Data were mean ± SD. ***P < 0.01 (Student's t-test).
Discussion
With a high rate of tumor heterogeneity, a large proportion of CDDP-treated bladder cancer patients experience therapeutic failure and tumor recurrence due to the acquisition of CDDP resistance, which is complex and poorly defined [1]. Understanding key pathway nodes that are crucial for driving resistance, especially genetic changes and/or epigenetic modifications, can provide a critical step toward circumventing cisplatin resistance in bladder cancer [46]. For instance, whole-exome sequencing and clonality analysis have been performed to understand the relative contributions of different subclones and the effects of chemotherapy as a selective pressure in urothelial carcinoma [4]. In an unbiased CRISPR screen in bladder cancer cells, MSH2 had 3 sgRNA constructs significantly associated with CDDP resistance, and the importance of MSH2 is underscored by the fact that cancer cells lacking or expressing a low level of MSH2 show chemotherapy insensitivity and worse prognosis [13]. Herein, MSH2 was identified as interacting with circLIFR by mass spectrometry analysis. Mechanistically, circLIFR bound and synergized with the MSH2 protein, which augmented the interaction between MutSα and ATM, to up-regulate p73 expression, ultimately contributing to attenuating bladder cancer growth and cellular tolerance to CDDP (Fig. 7). Moreover, bladder cancer cell line xenograft models and PDX models provided a preliminary assessment of the response to CDDP therapy with different levels of circLIFR and MSH2. These findings uncovered circLIFR and MSH2 as tumor suppressors involved in novel layers of CDDP chemotherapy regulation and provided further evidence that circRNAs are fundamental players in bladder cancer progression. It is evident that circRNAs are prevalent transcripts with frequently exquisite regulation, recognized as promising candidates for the identification of additional layers of gene expression control in human tissues [16]. CircRNAs have been well characterized in a variety of human diseases, including cancer, neurological disorders, cardiovascular diseases and metabolic disorders [16,18,20]. Recently, it has also been reported that circRNAs regulate CDDP chemotherapy by sponging miRNAs. Specifically, circAKT3, which localizes to and functions in the cytoplasm, modulates CDDP sensitivity by sponging miR-198, which suppresses PIK3R1 expression in gastric cancer [47]. Circular RNA Cdr1as sensitizes bladder cancer to CDDP by upregulating APAF1 expression through miR-1270 inhibition [48]. In the present study, we identified circLIFR as a bona fide circRNA mainly localized in the nucleus. Gain- and loss-of-function studies demonstrated that circLIFR could increase cell apoptosis and sensitize cells to CDDP treatment. Although circRNAs related to chemotherapy regulation are now documented, circLIFR was distinguished by its role in influencing chemosensitivity by combining and synergizing with MSH2, a well-established key protein regulating CDDP chemotherapy. More importantly, our results suggested that circRNAs, as regulatory factors, could bind to key effector proteins that influence chemotherapy, providing a novel model of the chemotherapy regulation mechanism. Furthermore, circLIFR could act as a potent chemosensitizer in the nucleus, suggesting new ideas for clinical translation. In our experiments, we ruled out a miRNA-sponge function for circLIFR, and we found that circLIFR performed its protein-binding role in the nucleus.
However, the potential ability of circLIFR to be translated into peptides in the cytoplasm still needs further clarification.
Fig. 6 Biological implications of circLIFR in bladder cancer. a, b Response of T24-CDDP expressing vector or circLIFR xenografts to treatment with PBS or CDDP. The tumors on the 28th day of the treatments are shown (a); graph showing the weight of tumors at the end of the treatment (b). Data were mean ± SD. ns, not significant, **P < 0.01, ***P < 0.001 (Student's t-test). c Overall survival of mice xenografted with T24-CDDP expressing vector or circLIFR and treated with PBS or CDDP. P-value was calculated using a log-rank test. ns, not significant, **P < 0.01, ***P < 0.001. d, e Ultrasound images of the orthotopic xenograft bladder tumor model established with T24-CDDP expressing vector or circLIFR, along with PBS or CDDP treatment. The low-echo area with irregular surface between the two lines represents the tumor, and the echo-free area inside the red line is the urine in the urinary bladder. White line, the wall of the urinary bladder; red line, the convex surface of the tumor toward the bladder lumen. Data were mean ± SD. ns, not significant, **P < 0.01, ***P < 0.001 (Student's t-test). f, g Efficacy of CDDP therapy against the circLIFR^low/MSH2^low (patient #135 and patient #150) and circLIFR^high/MSH2^high (patient #348 and patient #615) PDX xenografts. Data were mean ± SD. ns, not significant, ****P < 0.0001 (Student's t-test). h, i Immunohistochemical images of MSH2, pATM, and p73 and TUNEL staining on circLIFR^low/MSH2^low (patient #135 and patient #150) and circLIFR^high/MSH2^high (patient #348 and patient #615) PDX xenografts. Scale bar: 50 μm.

MSH2 and MSH6 proteins are divided into five conserved domains, among which the C-terminal one is the ATPase domain [10]. Moreover, the ATPase domain of MSH2 exhibits multiple interaction sites with MSH6 in MutSα [10]. In this paper, RNA pull-down and RIP analysis demonstrated that circLIFR interacted with the ATPase domain of MSH2 and promoted the assembly of MutSα, which indicated that circLIFR might favour protein folding and act as a molecular chaperone. Our results also showed that circLIFR mediated MSH2-dependent apoptosis through MutSα. How the MutSα complex participates in the apoptotic signaling cascade remains subject to debate, with two competing hypotheses dominating academic contention [49,50]. The "futile repair cycle" hypothesis entails repetitive repair attempts of a DNA strand containing lesions. DNA damage signaling is triggered by abortive repair attempts and persistent DNA damage. Because of this, functional repair activity of the MMR proteins is a prerequisite for this proposed mechanism [49]. Conversely, the "direct signaling" hypothesis propounds a dual functionality for the MutSα complex: a "pro-repair" conformation in which DNA repair is promoted, and an alternative "pro-apoptotic" conformation in which the protein abandons its repair function and instead activates the apoptosis response [51]. Herein, based on the findings that circLIFR and MSH2 were sufficient to mediate apoptosis in the absence of DNA damage, we speculated that circLIFR induced a MutSα "pro-apoptotic" conformation to initiate MSH2-dependent cell death. Combined with prior published studies in which a small molecule, reserpine, capable of binding MSH2 can stimulate the conformational change and initiate the same cellular response as DNA damage [51], our results supported the "direct signaling" hypothesis.
Nevertheless, we cannot rule out the possibility that the "futile repair cycle" mechanism also participates in the apoptosis regulated by circLIFR/MSH2 and CDDP. Furthermore, given that MSH2 ensures genetic stability by correcting DNA biosynthetic errors [7], it remains undetermined whether circLIFR plays a role in DNA mismatch repair. The topological structure of the circLIFR/MutSα complex still needs to be characterized, which may reveal detailed features of this interaction and clarify whether circLIFR plays an important role in the conformational change of MutSα.
MutSα forms a complex with ATM, one of the central checkpoint kinases in DNA damage signaling [41]. As predicted, CDDP and circLIFR augmented the formation of the MutSα/ATM complex, which in turn phosphorylated ATM. Although ATM signaling leads to doxorubicin or MNNG resistance in breast cancer or cervical cancer [52,53], it promotes cell death in response to etoposide or curcumin chemotherapy in osteosarcoma or pancreatic cancer [54,55], highlighting the contextual importance of individual studies in which activation of ATM may have divergent roles. More importantly, previous results suggest that ATM signaling, which stabilizes p73, is one of the main apoptotic pathways in response to CDDP [45]. Likewise, the ATM-p73 axis regulated by circLIFR/MutSα was an important determinant of chemotherapy susceptibility in bladder cancer. Moreover, p73 does not appear to be inactivated during malignant transformation, whereas p53 is frequently mutated [14,42,56]. Hence, a therapeutic that activates the circLIFR/MutSα/ATM-p73 axis-dependent cell-death pathway might be advantageous, as it would eliminate the requirement for functional p53. However, whether this mechanism exists in cell types other than bladder cancer cells needs to be further investigated.
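Where survival endpoints such as those in Fig. 6c are compared between treatment arms, a log-rank test on Kaplan-Meier estimates is the standard approach. The following is a minimal sketch using the lifelines Python package; the group sizes, survival times, and event flags are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of a Kaplan-Meier / log-rank comparison between two
# treatment arms. Survival times (days) and event flags (1 = death
# observed, 0 = censored) below are hypothetical. Requires lifelines.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

t_vector   = [28, 31, 35, 40, 42, 45]   # hypothetical: vector + CDDP
e_vector   = [1, 1, 1, 1, 1, 0]
t_circlifr = [40, 44, 50, 55, 60, 60]   # hypothetical: circLIFR + CDDP
e_circlifr = [1, 1, 1, 1, 0, 0]

# Fit Kaplan-Meier curves for each arm
kmf = KaplanMeierFitter()
kmf.fit(t_vector, event_observed=e_vector, label="vector + CDDP")
kmf.fit(t_circlifr, event_observed=e_circlifr, label="circLIFR + CDDP")

# Log-rank test for a difference in survival between the arms
result = logrank_test(t_vector, t_circlifr,
                      event_observed_A=e_vector,
                      event_observed_B=e_circlifr)
print(f"log-rank p-value: {result.p_value:.4f}")
```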
Conclusions
In summary, our work provides a proof of concept for circRNAs as molecular regulators of MMR proteins and of key cellular functions relevant to chemotherapy for bladder cancer. These findings implicate a circLIFR/MutSα/ATM-p73 axis in bladder cancer progression and chemotherapy resistance. Therefore, the mechanistic characterization of circLIFR and its functional crosstalk with MSH2 may help pave the way for bladder cancer chemotherapies that target MSH2 and its interaction with circLIFR.
Additional file 1: Fig. S1. Identification and distribution of circLIFR. (A, B) The expression of pLIFR and mLIFR was detected by qRT-PCR in 79 pairs of bladder cancer and paired adjacent normal bladder tissues. Data were mean ± SD. ns, not significant (Student's t-test). (C) Kaplan-Meier curves of OS in bladder cancer patients. Patients were grouped by the median mLIFR expression. P-value was calculated using a log-rank test. (D) Reverse transcription was performed with random 6-mers and oligo(dT) primers, respectively; then, the relative RNA levels of circLIFR and mLIFR were analyzed by qRT-PCR. Data were mean ± SD. ns, not significant, ***P < 0.001 (Student's t-test). (E) The relative RNA levels of circLIFR and mLIFR were analyzed by qRT-PCR after treatment with Actinomycin D at the indicated time points in UMUC3 cells. (F) Identification of circLIFR cytoplasmic and nuclear distribution by qRT-PCR analysis in UMUC3 cells. GAPDH and U1 were applied as positive controls in the cytoplasm and nucleus, respectively (n = 3). Western blots of total cell lysates (T), cytosolic extracts (C) and nuclear extracts (N), with α-tubulin as a cytosolic marker and histone H3 as a nuclear marker. (G) Identification of circLIFR cytoplasmic and nuclear distribution by FISH in UMUC3 cells. 18S and U6 were applied as positive controls in the cytoplasm and nucleus, respectively; circLIFR, 18S, and U6 probes were labeled with Cy3; nuclei were stained with DAPI.

Additional file 2: Fig. S2. CircLIFR binds to MSH2 protein. (A) RIP analysis was carried out using anti-AGO2 or IgG antibodies. circLIFR, CDR1as, and U6 levels in the samples were quantified using qRT-PCR. CDR1as and U6 were applied as positive and negative controls for interaction with AGO2, respectively. Data were mean ± SD. ns, not significant, **P < 0.01 (Student's t-test). (B) Schematic of biotin-labeled sense or antisense circLIFR probes and efficient pull-down of circLIFR in T24 and UMUC3 cells. Data were mean ± SD. ****P < 0.0001 (Student's t-test).

Additional file 3: Fig. S3. MSH2 and circLIFR can improve CDDP chemosensitivity. (A) Determination of MSH2 protein levels in T24 and UMUC3 cells transfected with scramble, shMSH2#1, or shMSH2#2. (B, C) UMUC3 cells were stably transfected with scramble, shMSH2#1, or shMSH2#2 vector. After the cells were treated for 36 h in the absence or presence of 3 μM CDDP, apoptosis was measured by Annexin-V plus PI staining and FACS analysis. Data were mean ± SD. ***P < 0.001, ****P < 0.0001 (Student's t-test). (D) Determination of IC50 values for 24 h CDDP treatment in UMUC3 cells stably transfected with scramble, shMSH2#1, shMSH2#2, mock, or MSH2 vector. (E) Efficient knockdown of circLIFR in T24 cells. Data were mean ± SD. ns, not significant, ***P < 0.001, ****P < 0.0001 (Student's t-test). (F) Effect of overexpression of circLIFR on mLIFR expression. Data were mean ± SD. ns, not significant (Student's t-test). (G, H) UMUC3 cells were stably transfected with scramble or sh-circLIFR#2 vector. After the cells were treated for 36 h in the absence or presence of 3 μM CDDP, apoptosis was measured by Annexin-V plus PI staining and FACS analysis. Data were mean ± SD. ***P < 0.001, ****P < 0.0001 (Student's t-test). (I) Determination of IC50 values for 24 h CDDP treatment in UMUC3 cells stably transfected with scramble, sh-circLIFR#2, vector, or circLIFR. (J) circLIFR levels and MSH2 mRNA/protein levels between T24-CDDP and parental T24 cells. Data were mean ± SD. ns, not significant (Student's t-test).

Additional file 4: Fig. S4. CircLIFR/MSH2 complex contributes to CDDP sensitivity via the MutSα/ATM-p73 axis in bladder cancer cells. (A) The relative RNA levels were analyzed by qRT-PCR in T24 and UMUC3 cells stably transfected with vector or circLIFR. Western blot analysis with the indicated antibodies in T24 and UMUC3 cells stably transfected with vector or circLIFR. Data were mean ± SD. ns, not significant, ***P < 0.001 (Student's t-test). (B) The relative RNA levels were analyzed by qRT-PCR in T24 and UMUC3 cells stably transfected with mock or MSH2. Western blot analysis with the indicated antibodies in T24 and UMUC3 cells stably transfected with mock or MSH2. Data were mean ± SD. ns, not significant, ***P < 0.001 (Student's t-test). (C) Interaction between MSH2, MSH6, MSH3, and ATM in UMUC3 cells stably transfected with vector or circLIFR, and those cotransfected with scramble or shMSH2#1. Co-IP experiments with anti-MSH2 antibody were performed, and the precipitate was detected by western blot with antibodies against MSH2, MSH6, MSH3, ATM, and GAPDH. (D) UMUC3 cells, stably transfected with scramble, shMSH2#1, or shMSH2#2, were treated with 3 μM CDDP for the indicated time. Whole-cell lysates were collected for western blot analysis of MSH2, MSH6, ATM, pATM, p63, p73, and GAPDH. (E) UMUC3 cells, stably transfected with scramble or sh-circLIFR#2, were treated with 3 μM CDDP for the indicated time. Whole-cell lysates were collected for western blot analysis with the indicated antibodies. (F) Western blot analysis with the indicated antibodies in UMUC3 cells stably transfected with vector or MSH2, and those cotransfected with scramble or sh-circLIFR#2. (G) Western blot analysis with the indicated antibodies in UMUC3 cells stably transfected with vector or circLIFR, and those cotransfected with scramble, shMSH1#1, or shMSH1#2. (H) Western blot analysis with the indicated antibodies in T24 cells stably transfected with vector or MSH2, and those cotransfected with scramble or sh-p73. (I) Western blot analysis with the indicated antibodies in T24 cells stably transfected with vector or circLIFR, and those cotransfected with scramble or sh-p73.
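For the IC50 determinations referenced in Fig. S3, a four-parameter logistic (Hill) fit to dose-response data is a common choice. Below is a minimal sketch with SciPy; the doses and viability values are hypothetical placeholders, and the fitting details may differ from those actually used in the study.

```python
# Minimal sketch of IC50 estimation from a dose-response curve by fitting
# a four-parameter logistic (Hill) model with SciPy. The CDDP doses and
# viability fractions below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, bottom, top, ic50, slope):
    """Four-parameter logistic: viability as a function of dose."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** slope)

doses = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])       # uM CDDP
viability = np.array([0.98, 0.93, 0.80, 0.52, 0.21, 0.08])

# Initial guesses: full range of response, IC50 near the mid-dose
params, _ = curve_fit(hill, doses, viability, p0=[0.0, 1.0, 3.0, 1.0])
print(f"estimated IC50: {params[2]:.2f} uM")
```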
Additional file 5: Fig. S5. Biological implications of circLIFR in bladder cancer. (A) Tumor growth curve showing the response of T24-CDDP expressing vector or circLIFR xenografts to treatment with PBS or CDDP. Data were mean ± SD. ns, not significant, ***P < 0.001, ****P < 0.0001 (Student's t-test). (B) The relative RNA levels were analyzed by qRT-PCR in
The Standard Model and the Top Quark
The top quark is one of the least well-studied components of the standard model. In these lectures I discuss the expected properties of the top quark, which will be tested at the Fermilab Tevatron and the CERN Large Hadron Collider. I begin with a modern review of the standard model, emphasizing the underlying concepts. I then discuss the role the top quark plays in precision electroweak analyses. The last two lectures address the strong and weak interactions of the top quark, with an emphasis on the top-quark spin.
The top quark is the least well-studied of the quarks. Why is the top quark an interesting and worthwhile object to study? Here are four of the most compelling reasons:

1. A more accurate measurement of the top-quark mass is valuable as an input to precision electroweak analyses.

2. We would like to know if the top quark is just an ordinary quark, or if it is exotic in some way.

3. The top quark may be useful to discover new particles. For example, of all the fermions, the Higgs boson couples most strongly to the top quark. It might be possible to observe the Higgs boson produced in association with a $t\bar{t}$ pair.

4. Events containing top quarks are backgrounds to new physics that we hope to discover. This may sound mundane, but it is extremely important. For example, the discovery of the top quark itself was only possible once we understood the background from $W$ + jets.
The Standard Model
In Table 1 I list the fermion fields that make up the standard model, along with their $SU(3)\times SU(2)\times U(1)_Y$ quantum numbers. The index $i = 1, 2, 3$ on each field refers to the generation, and the subscript $L, R$ refers to the chirality of the field ($\psi_{L,R} \equiv \frac{1}{2}(1 \mp \gamma_5)\psi$). The left-chiral and right-chiral fields corresponding to a given particle have different $SU(2)\times U(1)_Y$ quantum numbers, which leads to parity violation in the weak interaction.
Let's break the Lagrangian of the standard model into pieces. First consider the pure gauge interactions,
$$\mathcal{L}_{\rm Gauge} = -\frac{1}{4}\,G^A_{\mu\nu}G^{A\,\mu\nu} - \frac{1}{4}\,W^a_{\mu\nu}W^{a\,\mu\nu} - \frac{1}{4}\,B_{\mu\nu}B^{\mu\nu}\,, \qquad (1)$$
where $G_{\mu\nu}$ is the field-strength tensor of the gluon field, $W_{\mu\nu}$ is that of the weak-boson field, and $B_{\mu\nu}$ is that of the hypercharge-boson field. These terms contain the kinetic energy of the gauge bosons and their self interactions. Next comes the gauge interactions of the fermion ("matter") fields,
$$\mathcal{L}_{\rm Matter} = i\bar{Q}^i_L\gamma^\mu D_\mu Q^i_L + i\bar{u}^i_R\gamma^\mu D_\mu u^i_R + i\bar{d}^i_R\gamma^\mu D_\mu d^i_R + i\bar{L}^i_L\gamma^\mu D_\mu L^i_L + i\bar{e}^i_R\gamma^\mu D_\mu e^i_R\,. \qquad (2)$$
These terms contain the kinetic energy and gauge interactions of the fermions, which depend on the fermion quantum numbers. For example,
$$D_\mu Q_L = \left(\partial_\mu - i g_S G^A_\mu T^A - i g W^a_\mu \frac{\sigma^a}{2} - i \frac{1}{6}\,g' B_\mu\right) Q_L\,, \qquad (3)$$
since the field $Q_L$ participates in all three gauge interactions. A sum on the index $i$, which represents the generation, is implied in the Lagrangian.
We have constructed the simplest and most general Lagrangian, given the fermion fields and gauge symmetries.¹ The gauge symmetries forbid masses for any of the particles. In the case of the fermions, masses are forbidden by the fact that the left-chiral and right-chiral components of a given fermion field have different $SU(2)\times U(1)_Y$ quantum numbers. For example, a mass term for the up quark,
$$\mathcal{L}_{\rm mass} = -m\,\bar{u}_L u_R + {\rm h.c.}\,, \qquad (4)$$
is forbidden by the fact that $u_L$ is part of the $SU(2)$ doublet $Q_L$, so such a term violates the $SU(2)$ gauge symmetry (it also violates $U(1)_Y$). Although we only imposed the gauge symmetry on the Lagrangian, it turns out that it has a good deal of global symmetry as well, associated with the three generations. Because all fermions are massless thus far in our analysis, there is no difference between the three generations; they are physically indistinguishable. This manifests itself as a global flavor symmetry of the matter Lagrangian, Eq. (2), which is invariant under the transformations
$$Q_L \to U_{Q_L} Q_L\,,\quad u_R \to U_{u_R} u_R\,,\quad d_R \to U_{d_R} d_R\,,\quad L_L \to U_{L_L} L_L\,,\quad e_R \to U_{e_R} e_R\,, \qquad (5)$$
where each $U$ is an arbitrary $3\times 3$ unitary matrix in generation space. Since there are five independent $U(3)$ symmetries, the global flavor symmetry of the Lagrangian is $[U(3)]^5$. The Lagrangian thus far contains only three parameters, the couplings of the three gauge interactions. Their approximate values (evaluated at $M_Z$) are
$$g_S \approx 1.2\,,\qquad g \approx 0.65\,,\qquad g' \approx 0.36\,.$$
These couplings are all of order unity.

Electroweak symmetry breaking - The theory thus far is very simple and elegant, but it is incomplete: all particles are massless. We now turn to electroweak symmetry breaking, which is responsible for generating the masses of the gauge bosons and fermions.
In the standard model, electroweak symmetry breaking is achieved by introducing another field into the model, the Higgs field $\phi$, with the quantum numbers shown in Table 2. The simplest and most general Lagrangian for the Higgs field, consistent with the gauge symmetry, is
$$\mathcal{L}_{\rm Higgs} = (D^\mu\phi)^\dagger D_\mu\phi + \mu^2\,\phi^\dagger\phi - \lambda\,(\phi^\dagger\phi)^2\,. \qquad (6)$$
The first term contains the Higgs-field kinetic energy and gauge interactions. The remaining terms are (the negative of) the Higgs potential, shown in Fig. 1. The quadratic term in the potential has been chosen such that the minimum of the potential lies not at zero, but on a circle of minima
$$\langle\phi^0\rangle = \sqrt{\frac{\mu^2}{2\lambda}} \equiv \frac{v}{\sqrt{2}}\,, \qquad (7)$$
where $\phi^0$ is the lower (neutral) component of the Higgs doublet field. This equation defines the parameter $v \approx 246$ GeV, the Higgs-field vacuum-expectation value. Making the substitution $\phi = (0, v/\sqrt{2})$ in the Higgs Lagrangian, Eq. (6), one finds that the $W$ and $Z$ bosons have acquired masses
$$M_W = \frac{1}{2}\,g\,v\,,\qquad M_Z = \frac{1}{2}\sqrt{g^2 + g'^2}\;v \qquad (8)$$
from the interaction of the gauge bosons with the Higgs field. Since we know $g$ and $g'$, these equations determine the numerical value of $v$. The Higgs sector of the theory, Eq. (6), introduces just two new parameters, $\mu$ and $\lambda$. Rather than $\mu$, we will use the parameter $v$ introduced in Eq. (7). The parameter $\lambda$ is the Higgs-field self interaction, and will not figure into our discussion.
Fermion masses and mixing - In quantum field theory, anything that is not forbidden is mandatory. With that in mind, there is one more set of interactions, involving the Higgs field and the fermions. The simplest and most general Lagrangian, consistent with the gauge symmetry, is
$$\mathcal{L}_{\rm Yukawa} = -\Gamma_u^{ij}\,\bar{Q}^i_L\,\epsilon\,\phi^*\,u^j_R - \Gamma_d^{ij}\,\bar{Q}^i_L\,\phi\,d^j_R - \Gamma_e^{ij}\,\bar{L}^i_L\,\phi\,e^j_R + {\rm h.c.}\,, \qquad (9)$$
where $\Gamma_u$, $\Gamma_d$, $\Gamma_e$ are $3\times 3$ complex matrices in generation space.² We have therefore apparently introduced $3\times 3\times 3\times 2 = 54$ new parameters into the theory, but as we shall see, only a subset of these parameters are physically relevant. These so-called Yukawa interactions of the Higgs field with fermions violate almost all of the $[U(3)]^5$ global symmetry of the fermion gauge interactions, Eq. (2). The only remaining global symmetries are the subset corresponding to baryon number,
$$Q_L \to e^{i\theta/3}\,Q_L\,,\quad u_R \to e^{i\theta/3}\,u_R\,,\quad d_R \to e^{i\theta/3}\,d_R\,, \qquad (10)$$
and lepton number,
$$L_L \to e^{i\theta}\,L_L\,,\quad e_R \to e^{i\theta}\,e_R\,. \qquad (11)$$

Exercise 1.2 ( * ) Show this.
The conservation of baryon number and lepton number follows from these symmetries. These symmetries are accidental; they are not put in by hand, but rather follow automatically from the field content and gauge symmetries of the theory. Thus we can say that we understand why baryon number and lepton number are conserved in the standard model.
Replacing the Higgs field with its vacuum-expectation value, $\phi = (0, v/\sqrt{2})$, in Eq. (9) yields
$$\mathcal{L}_M = -M_u^{ij}\,\bar{u}^i_L u^j_R - M_d^{ij}\,\bar{d}^i_L d^j_R - M_e^{ij}\,\bar{e}^i_L e^j_R + {\rm h.c.}\,, \qquad (12)$$
where
$$M^{ij} = \Gamma^{ij}\,\frac{v}{\sqrt{2}} \qquad (13)$$
are fermion mass matrices. The Yukawa interactions are therefore responsible for providing the charged fermions with mass; the neutrinos, however, remain massless (we will discuss neutrino masses shortly). The complete Lagrangian of the standard model is the sum of the gauge, matter, Higgs, and Yukawa interactions,
$$\mathcal{L}_{SM} = \mathcal{L}_{\rm Gauge} + \mathcal{L}_{\rm Matter} + \mathcal{L}_{\rm Higgs} + \mathcal{L}_{\rm Yukawa}\,. \qquad (14)$$

Footnote 2: The matrix $\epsilon = \left(\begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix}\right)$ in $SU(2)$ space is needed in order for the first term in Eq. (9) to respect $SU(2)$ gauge invariance.

This is the simplest and most general Lagrangian, given the field content and gauge symmetries of the standard model. Given this Lagrangian, one can proceed to calculate any physical process of interest. However, it is convenient to first perform field redefinitions to make the physical content of the theory manifest. These field redefinitions do not change the predictions of the theory; they are analogous to a change of variables when performing an integration. To make the masses of the fermions manifest, we perform unitary field redefinitions on the fields in order to diagonalize the mass matrices in Eq. (12):
$$u_L \to A_{u_L} u_L\,,\quad u_R \to A_{u_R} u_R\,,\quad d_L \to A_{d_L} d_L\,,\quad d_R \to A_{d_R} d_R\,,\quad e_L \to A_{e_L} e_L\,,\quad e_R \to A_{e_R} e_R\,. \qquad (15)$$

Exercise 1.3 ( * ) Show that each matrix $A$ must be unitary in order to preserve the form of the kinetic-energy terms in the matter Lagrangian, Eq. (2), e.g. $i\bar{u}_L\gamma^\mu\partial_\mu u_L$.
Once the mass matrices are diagonalized, the masses of the fermions are manifest. These transformations also diagonalize the Yukawa matrices $\Gamma$, since they are proportional to the mass matrices [see Eq. (13)]. However, we must consider what impact these field redefinitions have on the rest of the Lagrangian. They have no effect on the pure gauge or Higgs parts of the Lagrangian, Eqs. (1) and (6), which are independent of the fermion fields. They do impact the matter part of the Lagrangian, Eq. (2). However, a subset of these field redefinitions is the global $[U(3)]^5$ symmetry of the matter Lagrangian; this subset therefore has no impact.
One can count how many physically-relevant parameters remain after the field redefinitions are performed [1]. Let's concentrate on the quark sector. The number of parameters contained in the complex matrices $\Gamma_u$, $\Gamma_d$ is $2\times 3\times 3\times 2 = 36$. The unitary symmetries $U_{Q_L}$, $U_{u_R}$, $U_{d_R}$ are a subset of the quark field redefinitions; this subset will not affect the matter part of the Lagrangian. There are $3\times 3\times 3 = 27$ degrees of freedom in these symmetries (a unitary $N\times N$ matrix has $N^2$ free parameters), so the total number of parameters that remain in the full Lagrangian after field redefinitions is
$$36 - (27 - 1) = 10\,,$$
where I have subtracted baryon number from the subset of field redefinitions that are symmetries of the matter Lagrangian. Baryon number is a symmetry of the Yukawa Lagrangian, Eq. (9), and hence cannot be used to diagonalize the mass matrices.
Exercise 1.4 ( * ) Show that the quark field redefinitions that are symmetries of the matter Lagrangian are $U_{Q_L}$, $U_{u_R}$, and $U_{d_R}$.

The ten remaining parameters correspond to the six quark masses and the four parameters of the Cabibbo-Kobayashi-Maskawa (CKM) matrix (three mixing angles and one CP-violating phase). The CKM matrix is
$$V = A^\dagger_{u_L} A_{d_L}\,. \qquad (16)$$
The mass matrices are related to the Yukawa matrices by Eq. (13). If we make the natural assumption that the Yukawa matrices contain elements of order unity (like the gauge couplings), we expect the fermion masses to be of $O(v)$, just like $M_W$ and $M_Z$ [see Eq. (8)]. This is not the case; only the top quark has such a large mass. We see that, from the point of view of the standard model, the question is not why the top quark is so heavy, but rather why the other fermions are so light.
Similarly, for a generic Yukawa matrix, one expects the field redefinitions that diagonalize the mass matrices to yield a CKM matrix with large mixing angles. Again, this is not the case; the measured angles are roughly $\sin\theta_{12} \approx 0.22$, $\sin\theta_{23} \approx 0.04$, and $\sin\theta_{13} \approx 0.004$ [2], which, with the exception of the CP-violating phase $\delta$, are small.³ The question is not why these angles are nonzero, but rather why they are so small. The fermion masses and mixing angles strongly suggest that there is a deeper structure underlying the Yukawa sector of the standard model. Surely there is some explanation of the peculiar pattern of fermion masses and mixing angles. Since the standard model can accommodate any masses and mixing angles, we must seek an explanation from physics beyond the standard model.
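For orientation, the hierarchy of the measured angles is often displayed in the Wolfenstein parametrization (standard material, not part of the original text), in which all mixing is organized in powers of the small parameter $\lambda \approx 0.22$:
$$V \;\approx\; \begin{pmatrix} 1-\tfrac{\lambda^2}{2} & \lambda & A\lambda^3(\rho - i\eta) \\ -\lambda & 1-\tfrac{\lambda^2}{2} & A\lambda^2 \\ A\lambda^3(1-\rho-i\eta) & -A\lambda^2 & 1 \end{pmatrix} + O(\lambda^4)\,,$$
with $A$, $\rho$, $\eta$ of order unity. The strong suppression of the off-diagonal entries is exactly the unexplained pattern described above.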
Beyond the Standard Model -Let us back up and ask: why did we stick to the simplest terms in the Lagrangian? The obsolete answer is that these are the renormalizable terms. Renormalizability is a stronger constraint than is really necessary. The modern answer, which is much simpler, is dimensional analysis [3].
Since the action $S = \int d^4x\,\mathcal{L}$ has units of $\hbar = 1$, and $x$ has units of (mass)$^{-1}$, the Lagrangian must have units of (mass)$^4$. From the kinetic energy terms in the Lagrangian for a generic scalar ($\phi$), fermion ($\psi$), and gauge boson ($A_\mu$), we can deduce the dimensionality of the various fields:
$$[\phi] = [A_\mu] = {\rm mass}\,,\qquad [\psi] = {\rm mass}^{3/2}\,.$$
All operators (products of fields) in the Lagrangian of the standard model are of dimension four, except the operator $\phi^\dagger\phi$ in the Higgs potential, which is of dimension two. The coefficient of this term, $\mu^2$, is the only dimensionful parameter in the standard model; it (or, equivalently, $v \equiv \mu/\sqrt{\lambda}$) sets the scale of all particle masses. Imagine that the Lagrangian at the weak scale is an expansion in some large mass scale $M$,
$$\mathcal{L} = \mathcal{L}_{SM} + \frac{1}{M}\,\mathcal{L}_5 + \frac{1}{M^2}\,\mathcal{L}_6 + \cdots\,, \qquad (20)$$
where $\mathcal{L}_n$ represents all operators of dimension $n$. By dimensional analysis, the coefficient of an operator of dimension $n$ has dimension (mass)$^{4-n}$, since the Lagrangian has dimension (mass)$^4$. At energies much less than $M$, the dominant terms in this Lagrangian will be those of $\mathcal{L}_{SM}$; the other terms are suppressed by an inverse power of $M$. This is the modern reason why we believe the "simplest" terms in the Lagrangian are the dominant ones.
The least suppressed terms in the Lagrangian beyond the standard model are of dimension five. We should therefore expect our first observation of physics beyond the standard model to come from these terms.
Given the field content and gauge symmetries of the standard model, there is only one such term:
$$\mathcal{L}_5 = c^{ij}\,L^{iT}_L\,\epsilon\,\phi\;C\;\phi^T\epsilon\,L^j_L + {\rm h.c.}\,, \qquad (21)$$
where $c^{ij}$ is a dimensionless, symmetric matrix in generation space.⁴

Exercise 1.7 ( * * ) Show that a similar term, with $L_L$ replaced by $Q_L$, is forbidden by the gauge symmetries.

This dimension-five operator contains the Higgs-doublet field twice and the lepton-doublet field twice. Replacing the Higgs-doublet field with its vacuum-expectation value, $\phi = (0, v/\sqrt{2})$, yields
$$\frac{c^{ij}\,v^2}{2M}\,\nu^{iT}_L\,C\,\nu^j_L + {\rm h.c.}$$
This is a Majorana mass term for the neutrinos. The recent observation of neutrino oscillations, which requires nonzero neutrino mass, is indeed our first observation of physics beyond the standard model. The moral is that when we are searching for deviations from the standard model, what we are really doing is looking for the effects of higher-dimension operators. Although there is only one operator of dimension five, there are dozens of operators of dimension six, among them four-fermion contact interactions [4].

Footnote 4: The $2\times 2$ matrix $\epsilon$ in $SU(2)$ space was introduced in an earlier footnote; the $4\times 4$ matrix $C$ is the charge-conjugation matrix acting on the spinor indices.

Thus far, none of the effects of any of these operators have been observed. The best we can do is set lower bounds on $M$ (assuming some dimensionless coefficient). These lower bounds range from 1 TeV to $10^{16}$ GeV, depending on the operator. As we explore nature at higher energy and with higher accuracy, we hope to begin to see the effects of some of these dimension-six operators.
The mass scale M corresponds to the mass of a particle that is too heavy to observe directly. At energies greater than M , the expansion of Eq. (20) is no longer useful, as each successive term is larger than the previous. Instead, one must explicitly add the new field of mass M to the model. For example, if nature is supersymmetric at the weak scale, one must add the superpartners of the standard-model fields to the theory and include their interactions in the Lagrangian. If we raise the mass scale of the superpartners to be much greater than the weak scale, then we can no longer directly observe the superpartners, and we return to a description in terms of standard-model fields, with an expansion of the Lagrangian in inverse powers of the mass scale of the superpartners, M .
Virtual Top Quark
The top quark plays an important role in precision electroweak analyses. In this lecture I hope to clarify this sometimes confusing subject.
Recall from the previous lecture that the gauge, matter, and Higgs sectors of the standard model depend on only five parameters: the three gauge couplings, $g_S$, $g$, $g'$, and the Higgs-field vacuum-expectation value and self interaction, $v$ and $\lambda$. At tree level, all electroweak quantities depend on just three of these parameters: $g$, $g'$, and $v$. We use the three best-measured electroweak quantities to determine these three parameters at tree level:
$$\alpha = 1/137.03599976(50)\,,\qquad G_F = 1.16637(1)\times 10^{-5}\ {\rm GeV}^{-2}\,,\qquad M_Z = 91.1876(21)\ {\rm GeV}\,,$$
where the uncertainty is given in parentheses. The value of $\alpha$ is extracted from low-energy experiments, $G_F$ is extracted from the muon lifetime, and $M_Z$ is measured from $e^+e^-$ annihilation near the $Z$ mass. From these three quantities, we can predict all other electroweak quantities at tree level. For example, the $W$ mass is
$$M_W^2 = \frac{M_Z^2}{2}\left(1 + \sqrt{1 - \frac{4\pi\alpha}{\sqrt{2}\,G_F\,M_Z^2}}\right)\,. \qquad (23)$$

Exercise 2.1 ( * ) Verify the expression for $M_W$ in terms of $\alpha$, $G_F$, and $M_Z$.
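As a quick numerical cross-check of Eq. (23) (a sketch, not part of the original lectures), one can evaluate the tree-level prediction directly from the quoted input values:

```python
# Numerical check of the tree-level W-mass prediction, Eq. (23),
# using alpha, G_F, and M_Z as inputs.
import math

alpha = 1 / 137.03599976          # fine-structure constant (low energy)
G_F   = 1.16637e-5                # Fermi constant [GeV^-2]
M_Z   = 91.1876                   # Z mass [GeV]

M_W2 = 0.5 * M_Z**2 * (1 + math.sqrt(1 - 4*math.pi*alpha / (math.sqrt(2)*G_F*M_Z**2)))
print(f"tree-level M_W = {math.sqrt(M_W2):.2f} GeV")
# Prints roughly 80.9 GeV; the measured value is near 80.4 GeV, and the
# difference is accounted for by the radiative correction Delta-r
# discussed below.
```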
A more civilized expression for $M_W$ is obtained by defining
$$s_W^2 \equiv 1 - \frac{M_W^2}{M_Z^2}\,. \qquad (24)$$
This is the so-called "on-shell" definition⁵ of $\sin^2\theta_W$; it has a numerical value of $s_W^2 = 0.2228(4)$. Using this parameter, we can write a simpler expression than Eq. (23) for $M_W$ at tree level:
$$M_W^2 = \frac{\pi\alpha}{\sqrt{2}\,G_F\,s_W^2}\,. \qquad (25)$$
Exercise 2.2 ( * ) -Verify this equation.
At one loop this expression is modified:
$$M_W^2 = \frac{\pi\alpha}{\sqrt{2}\,G_F\,s_W^2}\;\frac{1}{1-\Delta r}\,, \qquad (26)$$
where $\Delta r$ contains the one-loop corrections. The top quark makes a contribution to $\Delta r$ via the one-loop diagrams shown in Fig. 2, which contribute to the $W$ and $Z$ masses:

Footnote 5: So called because it is defined in terms of physical, or "on shell," quantities.

Figure 3. Virtual Higgs-boson loops contribute to the W and Z masses.

$$(\Delta r)_{\rm top} \approx -\frac{3\,G_F\,m_t^2}{8\sqrt{2}\,\pi^2\,t_W^2}\,, \qquad (27)$$
where $t_W^2 \equiv \tan^2\theta_W$. This one-loop correction depends quadratically on the top-quark mass.
The Higgs boson also contributes to $\Delta r$ via the one-loop diagrams in Fig. 3:
$$(\Delta r)_{\rm Higgs} \approx \frac{11\,G_F\,M_Z^2\,c_W^2}{24\sqrt{2}\,\pi^2}\,\ln\frac{m_h^2}{M_Z^2}\,, \qquad (28)$$
where $c_W^2 \equiv \cos^2\theta_W$. This one-loop correction depends only logarithmically on the Higgs-boson mass, so $\Delta r$ is not nearly as sensitive to $m_h$ as it is to $m_t$.
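For orientation, a rough numerical sketch (not from the original text) of the quadratic top contribution, using the expression in Eq. (27) and the on-shell value $s_W^2 \approx 0.2228$:

```python
# Rough size of the top-quark contribution to Delta-r, Eq. (27),
# for m_t = 175 GeV and the on-shell weak mixing angle.
import math

G_F  = 1.16637e-5               # [GeV^-2]
m_t  = 175.0                    # [GeV]
s_W2 = 0.2228
t_W2 = s_W2 / (1 - s_W2)        # tan^2(theta_W)

delta_r_top = -3 * G_F * m_t**2 / (8 * math.sqrt(2) * math.pi**2 * t_W2)
print(f"(Delta r)_top ~ {delta_r_top:.3f}")   # about -0.03
```

A shift of this size in Delta-r is what makes the measured W mass such a sensitive probe of the top-quark mass.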
Neutral current - Rather than using the direct measurements of $M_W$ and $m_t$ to infer the Higgs-boson mass, one can use other electroweak quantities. The Fermi constant, $G_F$, is extracted from muon decay, which is a charged-current weak interaction. That leaves the neutral-current weak interaction as another quantity of interest. There is an enormous wealth of data on neutral-current weak interactions, such as $e^+e^-$ annihilation near the $Z$ mass, $\nu N$ and $eN$ deep-inelastic scattering, $\nu e$ elastic scattering, atomic parity violation, and so on [2].
Let's consider a simple and very relevant example, the left-right asymmetry in $e^+e^-$ annihilation near the $Z$ mass, shown in Fig. 5. Left and right refer to the helicity of the incident electron, either negative (left) or positive (right). The asymmetry is defined in terms of the total cross section for a negative-helicity or positive-helicity electron to annihilate with an unpolarized positron and produce a $Z$ boson,
$$A_{LR} \equiv \frac{\sigma_L - \sigma_R}{\sigma_L + \sigma_R} = \frac{2\,g_V^e\,g_A^e}{(g_V^e)^2 + (g_A^e)^2}\,,$$
where
$$g_V^e = \sqrt{\rho_e}\left(-\tfrac{1}{2} + 2\kappa_e\,s_W^2\right)\,,\qquad g_A^e = -\tfrac{1}{2}\sqrt{\rho_e}$$
are the vector and axial-vector couplings of the electron to the $Z$ boson.

Figure 5. Neutral-current coupling of an electron to the Z boson. A left-handed electron has negative helicity; a right-handed electron has positive helicity.

At tree level, $\rho_e = \kappa_e = 1$, but there are one-loop corrections. The correction quadratic in the top-quark mass is
$$\Delta\rho = \frac{3\,G_F\,m_t^2}{8\sqrt{2}\,\pi^2}\,,$$
which enters $\rho_e$ directly (and, with a factor $c_W^2/s_W^2$, also $\kappa_e$). Different neutral-current measurements have different dependencies on $m_t$ and $m_h$, so by combining two or more measurements one can extract both $m_t$ and $m_h$. The solid ellipse in Fig. 4 represents the 68% CL constraint from all neutral-current measurements combined. It is in good agreement with the direct measurements of $M_W$ and $m_t$, and strengthens the case for a light Higgs boson. Combining all precision electroweak data, one finds $45\ {\rm GeV} \leq m_h \leq 191\ {\rm GeV}$ [2].
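As a small numerical sketch (not from the original lectures), evaluating the asymmetry from the tree-level couplings with an effective mixing angle of about 0.231 reproduces the measured value well:

```python
# Tree-level left-right asymmetry A_LR from the electron's Z couplings,
# evaluated with an effective weak mixing angle of about 0.2311.
s2_eff = 0.23113
g_V = -0.5 + 2 * s2_eff      # vector coupling of the electron
g_A = -0.5                   # axial-vector coupling

A_LR = 2 * g_V * g_A / (g_V**2 + g_A**2)
print(f"A_LR ~ {A_LR:.3f}")  # about 0.15, close to the SLD measurement
```

Because $g_V^e$ nearly vanishes, $A_{LR}$ is extremely sensitive to $\sin^2\theta_W$, which is why this observable constrains $m_t$ and $m_h$ so strongly.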
Historically, neutral-current data were used to successfully predict the top-quark mass several years before it was discovered. This is a good reason to trust the prediction of a light Higgs boson from precision electroweak analyses.
It is also significant that the two ellipses in Fig. 4 lie on or near the lines of constant Higgs mass (within the allowed range of the Higgs mass). These measurements could have ended up far from those lines, thereby disproving the existence of the hypothetical Higgs boson. Instead, these measurements bolster our belief in the standard model in general, and in the Higgs boson in particular.
MS-bar scheme - Before we leave this topic, let's discuss the other most often-used definition of $\sin^2\theta_W$. This is the modified minimal-subtraction ($\overline{\rm MS}$) scheme, so called because of the simple way in which ultraviolet divergences in loop diagrams are subtracted.
At tree level, $\sin^2\theta_W = g'^2/(g^2+g'^2)$. The $\overline{\rm MS}$ scheme promotes this to the definition of $\sin^2\theta_W$:
$$\hat{s}_Z^2 \equiv \frac{\hat{g}'^2(M_Z)}{\hat{g}^2(M_Z) + \hat{g}'^2(M_Z)}\,,$$
where the gauge couplings are evaluated at the $Z$ mass. Its numerical value is $\hat{s}_Z^2 = 0.23113(15)$. The analogues of Eqs. (26) and (24) in the $\overline{\rm MS}$ scheme are
$$M_W^2 = \frac{\pi\alpha}{\sqrt{2}\,G_F\,\hat{s}_Z^2}\;\frac{1}{1-\Delta\hat{r}_W}\,,\qquad \hat{s}_Z^2 = 1 - \frac{M_W^2}{\hat{\rho}\,M_Z^2}\,.$$
Unlike its on-shell analogue $\Delta r$, the one-loop quantity $\Delta\hat{r}_W$ has no quadratic dependence on the top-quark mass. This appears instead in the quantity $\hat{\rho}$ (which is unity in the on-shell scheme):
$$\hat{\rho} \approx 1 + \frac{3\,G_F\,m_t^2}{8\sqrt{2}\,\pi^2}\,.$$
Although the quadratic dependence on the top-quark mass has been shifted from one relation to another, the physical predictions, such as the constraint on the Higgs mass, remain unchanged.
Top Strong Interactions
We now begin to discuss the study of the top quark itself. In the introduction we listed several reasons why the top quark is an interesting object to study. The strategy that follows from these motivations is to get to know the top quark by measuring everything we can about it, and comparing with the predictions of the standard model. This program will occupy a large portion of our efforts at the Fermilab Tevatron and the CERN Large Hadron Collider (LHC). In this section I discuss some of the measurements that can be made at these machines related to the strong interactions of the top quark, and in the next section I turn to its weak interactions.
The top quark is produced at hadron colliders primarily via the strong interaction. The Feynman diagrams for the two contributing subprocesses, quark-antiquark annihilation and gluon fusion, are shown in Fig. 6. In Table 3 I give the predicted cross sections, at next-to-leading order (NLO) in QCD, for $m_t = 175$ GeV. I also show the percentage of the cross section that results from each of the two subprocesses. At the Tevatron, the quark-antiquark-annihilation subprocess dominates; at the LHC, gluon fusion reigns. To understand why this is, we need to discuss the parton model of the proton.
The parton model is shown schematically in Fig. 7, where I illustrate how a proton-antiproton collision results in a $t\bar{t}$ pair produced via the quark-antiquark-annihilation subprocess.

Figure 6. Top-quark production via the strong interaction at hadron colliders proceeds through quark-antiquark annihilation (upper diagram) and gluon fusion (lower diagrams).

Table 3. Cross sections, at next-to-leading order in QCD, for top-quark production via the strong interaction at the Tevatron and the LHC [5]. Also shown is the percentage of the total cross section from the quark-antiquark-annihilation and gluon-fusion subprocesses.

The proton is regarded as a collection of quarks, antiquarks, and gluons (collectively called partons), each carrying some fraction $x$ of the proton's four-momentum. Figure 7 shows a proton of four-momentum $P_1$ colliding with an antiproton of four-momentum $P_2$.
$$S \equiv (P_1 + P_2)^2 \approx 2\,P_1\cdot P_2 \qquad (37)$$
(neglecting the proton mass) is the square of the total energy in the center-of-momentum frame.
The quark carries fraction $x_1$ of the proton's four-momentum, the antiquark fraction $x_2$ of the antiproton's four-momentum. The square of the total energy of the partonic subprocess (in the partonic center-of-momentum frame) is similarly
$$\hat{s} \equiv (x_1 P_1 + x_2 P_2)^2 \approx 2\,x_1 x_2\,P_1\cdot P_2 \approx x_1 x_2\,S\,. \qquad (38)$$
Since there has to be at least enough energy to produce a $t\bar{t}$ pair at rest, $\hat{s} \geq 4m_t^2$. It follows from Eq. (38) that
$$x_1 x_2 \geq \frac{4m_t^2}{S}\,. \qquad (39)$$
Since the probability of finding a quark of momentum-fraction $x$ in the proton falls off with increasing $x$, the typical value of $x_1 x_2$ is near the threshold for $t\bar{t}$ production. Setting $x_1 = x_2 \equiv x$, we obtain
$$x \approx \frac{2m_t}{\sqrt{S}} \qquad (40)$$
as the typical value of $x$ for $t\bar{t}$ production. Figure 8 shows the parton distribution functions in the proton for all the different species of partons.⁷ The probability of finding a given parton species with momentum fraction between $x$ and $x + dx$ is $f(x)\,dx$.
[What is plotted in Fig. 8 is actually $x f(x)$.] The parton distribution functions also depend on the relevant scale of the process, $\mu$, which for top-quark production is of order $m_t$.
The typical value of $x$ for top-quark production may be computed from Eq. (40). For the typical value of $x$ at the Tevatron, $x \approx 0.18$, the up distribution function is larger than that of the gluon, and the down distribution function is comparable to it. This explains why quark-antiquark annihilation dominates at the Tevatron. In contrast, for the typical value of $x$ at the LHC, $x \approx 0.025$, the gluon distribution function is much larger than those of the quarks; this explains why gluon fusion reigns at the LHC.
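A quick numerical sketch of Eq. (40) (assuming the Run II Tevatron energy of 1.96 TeV and the design LHC energy of 14 TeV) reproduces these typical values:

```python
# Typical parton momentum fraction for t-tbar production, x = 2 m_t / sqrt(S).
m_t = 175.0                         # top mass [GeV]
for name, sqrt_S in [("Tevatron", 1960.0), ("LHC", 14000.0)]:
    x = 2 * m_t / sqrt_S
    print(f"{name}: x ~ {x:.3f}")
# Tevatron: x ~ 0.179, LHC: x ~ 0.025
```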
Higgs and top - I mentioned in the introduction that the top quark could be used to discover the Higgs boson. To derive the coupling of the Higgs boson to fermions, write the Higgs-doublet field as
$$\phi = \begin{pmatrix} 0 \\ (v+h)/\sqrt{2} \end{pmatrix}\,, \qquad (41)$$
where $h$ is the Higgs boson, which corresponds to oscillations about the vacuum-expectation value of the field, Eq. (7). Inserting this expression for $\phi$ into the Yukawa Lagrangian, Eq. (9), yields the desired coupling, shown in Fig. 9. The Feynman diagrams for Higgs-boson production in association with a $t\bar{t}$ pair are the same as those of Fig. 6, but with a Higgs boson attached to the top quark or antiquark. The Higgs boson can also be produced by itself via its coupling to a virtual top-quark loop, as shown in Fig. 10. Remarkably, this is the largest source of Higgs bosons at the Tevatron or the LHC. It is amusing that the virtual top quark points to the existence of a light Higgs boson, as discussed in the previous section, and may also help us discover the Higgs boson.

Top-quark spin - One of the remarkable features of the top quark is that it is the only quark whose spin is directly observable. This is a consequence of its very short lifetime, $\Gamma_t^{-1} \approx (1.5\ {\rm GeV})^{-1}$. Figure 11 shows an example of the evolution of a heavy quark of a definite spin after it is produced in a hard-scattering collision. On a time scale of order $\Lambda_{\rm QCD}^{-1} \approx (200\ {\rm MeV})^{-1}$, the heavy quark picks up a light antiquark of the opposite spin from the vacuum and hadronizes into a meson. Some time later, on the order of $(\Lambda_{\rm QCD}^2/m_Q)^{-1} \approx (1\ {\rm MeV})^{-1}$ (for $m_Q = m_t$), the spin-spin interaction between the heavy quark and the light antiquark⁸ causes the meson to evolve into a spin-zero state, $(|\!\uparrow\downarrow\,\rangle - |\!\downarrow\uparrow\,\rangle)/\sqrt{2}$, thereby depolarizing the heavy quark [6]. The top quark is the only quark that decays before it has a chance to depolarize (or even hadronize), so its spin is observable in the angular distribution of its decay products.⁹

Footnote 8: This is the QCD analogue of the spin-spin interaction that produces the hyperfine splitting in atomic physics.
Footnote 9: Actually, the spin of a long-lived heavy quark is observable if it hadronizes into a baryon, such as a $\Lambda_b$.

Figure 11. A heavy quark hadronizes with a light quark of the opposite spin, then evolves into a spin-zero meson.

Let's discuss the spin of a fermion in some detail. For a moving fermion, it is conventional to use the helicity basis, in which the spin quantization axis is the direction of motion of the fermion. The free fermion field may be decomposed into states of definite four-momentum,
$$\psi(x) = \sum_{\lambda=\pm}\int \frac{d^3p}{(2\pi)^3\,2E}\left[a^\lambda_{\bf p}\,u^\lambda(p)\,e^{-ip\cdot x} + b^{\lambda\dagger}_{\bf p}\,v^\lambda(p)\,e^{ip\cdot x}\right]\,,$$
where the sum is over positive and negative helicity, $a^\lambda_{\bf p}$ and $b^{\lambda\dagger}_{\bf p}$ are the annihilation and creation operators for a fermion and an antifermion, and $u^\lambda(p)$ and $v^\lambda(p)$ are the momentum-space spinors for a fermion and an antifermion. These spinors are given explicitly in Table 4, in the representation where the Dirac matrices are [7]
$$\gamma^0 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\,,\qquad \gamma^i = \begin{pmatrix} 0 & \sigma^i \\ -\sigma^i & 0 \end{pmatrix}\,,\qquad \gamma_5 = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}\,,$$
where each entry in the above matrices is itself a $2\times 2$ matrix. We used the concept of chirality when formulating the standard model in Section 1. In this representation of the Dirac matrices, a left-chiral spinor has nonzero upper components and a right-chiral spinor has nonzero lower components. Chirality is conserved in gauge interactions because the matter Lagrangian, Eq. (2), connects fields of the same chirality.

Table 4. Spinors for a fermion of energy $E$ and three-momentum of magnitude $p$ pointing in the $(\theta,\phi)$ direction.
The spinors $u^\lambda(p)$ and $v^\lambda(p)$ correspond to fermions and antifermions of helicity $\lambda\frac{1}{2}$.

In the massless limit, helicity and chirality are related, because the factor $\sqrt{E-p}$ vanishes in the expressions for the spinors in Table 4, causing either the upper or lower components to vanish: a positive-helicity fermion spinor becomes purely right-chiral and a negative-helicity fermion spinor purely left-chiral, while for antifermion spinors the correspondence is reversed. Note that the relationship between helicity and chirality is reversed for fermions and antifermions. For massless fermions, chirality conservation implies helicity conservation, as shown in Fig. 12. For massive fermions, helicity is no longer related to chirality, so although chirality is conserved, helicity is not. This is illustrated in Fig. 13. Both helicity-conserving and helicity-nonconserving gauge interactions occur; the latter are proportional to the fermion mass, since they are forbidden in the massless limit.

Figure 14. Parity and rotational symmetry are used to show that the top quark is produced unpolarized in (unpolarized) $p\bar{p}$ collisions.
Helicity flips under parity, because although spin does not flip,¹⁰ the direction of motion of the fermion does. One can show that the spinors of Table 4 are related to each other under parity as follows:
$$\gamma^0\,u^\lambda(p) \propto u^{-\lambda}(\tilde{p})\,,\qquad \gamma^0\,v^\lambda(p) \propto v^{-\lambda}(\tilde{p})\,,$$
where $p = (E, {\bf p})$, $\tilde{p} = (E, -{\bf p})$. This demonstrates that parity flips the helicity. Parity can be used to show that top quarks are produced unpolarized in QCD reactions. Let's consider the quark-antiquark-annihilation subprocess, for example; a similar argument can be given for the gluon-fusion subprocess. In Fig. 14 I show a quark and an antiquark of opposite helicity annihilating to produce a top quark and a top antiquark of opposite helicity. (Due to helicity conservation in the massless limit, the helicities of the light quark and antiquark must be opposite; this is not true of the top quark and antiquark.) Applying a parity transformation to this reaction yields the second diagram in Fig. 14. Rotating this figure by 180° in the scattering plane yields the third diagram, which is the same as the first diagram but with all helicities reversed. Since parity is a symmetry of QCD, the rates for the first and third reactions are the same. The light quarks are unpolarized in (unpolarized) $p\bar{p}$ collisions, so the first and third reactions will occur with equal probabilities. The first reaction produces positive-helicity top quarks, the third negative-helicity top quarks. Thus top quarks are produced with positive and negative helicity with equal probability, i.e., they are produced unpolarized.

Figure 15. The cross section for opposite-helicity $t\bar{t}$ production is greater than that for same-helicity $t\bar{t}$ production.
However, there is another avenue open to observe the spin of the top quark. Although the top quark is produced unpolarized, the spin of the top quark is correlated with that of the top antiquark. This is shown in Fig. 15; the rate for opposite-helicity $t\bar{t}$ production is greater than that for same-helicity $t\bar{t}$ production.

Exercise 3.4 ( * ) Argue that in the limit $E \gg m$, the correlation between the helicities of the top quark and antiquark is 100%.
There is a special basis in which the correlation is 100% for all energies, dubbed the "off-diagonal" basis [8]. This basis is shown in Fig. 16. Rather than using the direction of motion of the quarks as the spin quantization axis, one uses another direction, which makes an angle $\psi$ with respect to the beam, related to the scattering angle $\theta$ by
$$\tan\psi = \frac{\beta^2\sin\theta\cos\theta}{1-\beta^2\sin^2\theta}\,, \qquad (48)$$
where $\beta$ is the velocity of the top quark and antiquark in the center-of-momentum frame. When the spin is projected along this axis, the correlation is 100%; the spins of the top quark and antiquark point in the same direction along this axis.
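A small numerical sketch of Eq. (48) (assuming the form reconstructed above) shows how the off-diagonal axis interpolates between the beam direction at threshold and the helicity axis at high energy:

```python
# Off-diagonal spin-quantization angle psi as a function of the
# scattering angle theta, for several top-quark velocities beta.
import math

def psi(theta, beta):
    return math.atan2(beta**2 * math.sin(theta) * math.cos(theta),
                      1 - beta**2 * math.sin(theta)**2)

theta = math.radians(60)
for beta in (0.1, 0.6, 0.99):
    print(f"beta = {beta:.2f}: psi = {math.degrees(psi(theta, beta)):.1f} deg")
# psi -> 0 as beta -> 0 (beam direction); psi -> theta as beta -> 1 (helicity)
```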
The moral of this story is that, for massive fermions, there is nothing special about the helicity basis. We will see this again in the next section on the weak interaction. The spin correlation between top quarks and antiquarks should be observed for the first time in Run II of the Tevatron.
Top Weak Interactions
In this section I discuss the charged-current weak interaction of the top quark, shown in Fig. 17. This interaction connects the top quark with a down-type quark, with an amplitude proportional to the CKM matrix element $V_{tq}$ ($q = d, s, b$). The interaction has a vector-minus-axial-vector ($V - A$) structure because only the left-chiral component of the top quark participates in the $SU(2)$ gauge interaction (see Table 1).
The charged-current weak interaction is responsible for the rapid decay of the top quark, as shown in Fig. 18. The partial width into the final state $Wq$ is proportional to $|V_{tq}|^2$.¹¹ The CDF Collaboration has measured [9]
$$\frac{B(t\to Wb)}{B(t\to Wq)} = \frac{|V_{tb}|^2}{|V_{td}|^2 + |V_{ts}|^2 + |V_{tb}|^2} = 0.94^{+0.31}_{-0.24}\,. \qquad (49)$$
This implies that $|V_{tb}| \gg |V_{td}|, |V_{ts}|$, but it does not tell us the absolute magnitude of $V_{tb}$. Thus, if we assume three generations, Eq. (49) implies $|V_{tb}| = 0.97^{+0.16}_{-0.12}$. However, we already know $|V_{tb}| = 0.9990 - 0.9993$ if there are just three generations [2].

Footnote 11: The W boson then goes on to decay to a fermion-antifermion pair.

Figure 19. Single-top-quark production via the weak interaction. The first diagram corresponds to the s-channel subprocess, the second to the t-channel subprocess, and the third to Wt associated production (only one of the two contributing diagrams is shown).
Single top - The magnitude of $V_{tb}$ can be extracted directly by measuring the cross section for top-quark production via the weak interaction. There are three such processes, depicted in Fig. 19, all of which result in a single top quark rather than a $t\bar{t}$ pair [10]. The cross sections for these single-top processes are proportional to $|V_{tb}|^2$.
The first subprocess in Fig. 19, which is mediated by the exchange of an s-channel $W$ boson, is analogous to the Drell-Yan subprocess. The second subprocess is simply the first subprocess turned on its side, so the $W$ boson is in the t channel. The $b$ quark is now in the initial state, so this subprocess relies on the $b$ distribution function in the proton, which we will discuss momentarily.¹² In the third subprocess, the $W$ boson is real, and is produced in association with the top quark. This subprocess is also initiated by a $b$ quark. The s- and t-channel subprocesses should be observed for the first time in Run II of the Tevatron; associated production of $W$ and $t$ must await the LHC.
The cross sections for these three single-top processes are given in Table 5 for the Tevatron and the LHC.

Table 5. Cross sections (pb), at next-to-leading order in QCD, for top-quark production via the weak interaction at the Tevatron and the LHC [11,12,13].

Figure 20. When the $\bar{b}$ is produced at high transverse momentum, the leading-order process for t-channel single-top production is W-gluon fusion.

The largest cross section at both machines is from the t-channel subprocess; it is nearly one third of the cross section for $t\bar{t}$ pair production via the strong interaction (see Table 3). The next largest cross section at the Tevatron is from the s-channel subprocess. This is the smallest of the three at the LHC, because it is initiated by a quark-antiquark collision. As is evident from Fig. 8, the light-quark distribution functions grow with decreasing $x$ more slowly than the gluon or $b$ distribution functions, so quark-antiquark annihilation is relatively suppressed at the LHC. For a similar reason, associated production of $W$ and $t$ (which is initiated by a gluon-$b$ collision) is relatively large at the LHC, while it is very small at the Tevatron.
Let's consider the largest of the three processes, t-channel single-top production, in more detail. This process was originally dubbed W-gluon fusion [14], because it was thought of as a virtual $W$ striking a gluon to produce a $t\bar{b}$ pair, as shown in Fig. 20. If the $\bar{b}$ in the final state is at high transverse momentum ($p_T$), this is indeed the leading-order diagram for this process. If we instead integrate over the $p_T$ of the $\bar{b}$, we obtain an enhancement from the region where the $\bar{b}$ is at low $p_T$, nearly collinear with the incident gluon.
Exercise 4.2 ( * * ) Show that a massless quark propagator blows up in the collinear limit, as shown in Fig. 21.

The $b$ mass regulates the collinear divergence, such that the resulting cross section is proportional to $\alpha_S \ln(m_t^2/m_b^2)$, where the weak couplings are tacit.
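To see numerically why this logarithm threatens the perturbative expansion, here is a sketch with representative values of $\alpha_S$ and the quark masses (the precise inputs are assumptions, not taken from the text):

```python
# Size of the collinear logarithm alpha_S * ln(m_t^2 / m_b^2), which acts
# as the expansion parameter before the logs are summed into a b PDF.
import math

alpha_S = 0.11          # representative strong coupling at high scales
m_t, m_b = 175.0, 4.75  # quark masses [GeV]

log_term = math.log(m_t**2 / m_b**2)
print(f"ln(m_t^2/m_b^2) = {log_term:.1f}")            # about 7.2
print(f"alpha_S * log   = {alpha_S * log_term:.2f}")  # about 0.8, not small
```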
This collinear enhancement is desirable, since it yields a larger cross section, but it also makes perturbation theory less convergent. Each emission of a collinear gluon off the internal $b$ quark produces another power of $\alpha_S \ln(m_t^2/m_b^2)$, because it yields another $b$ propagator that is nearly on-shell, as shown in Fig. 22. The result is that the expansion parameter for perturbation theory is $\alpha_S \ln(m_t^2/m_b^2)$, rather than $\alpha_S$ [12].

Figure 21. When a gluon splits into a real antiquark and a virtual quark, the quark propagator becomes singular when the kinematics are collinear.

Fortunately, there is a simple solution to this problem. The collinear logarithms that arise are exactly the ones that are summed to all orders by the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) equations. In order to sum these logarithms, one introduces a $b$ distribution function in the proton. When one calculates t-channel single-top production using a $b$ distribution function, as in the second diagram in Fig. 19, one is automatically summing these logarithms to all orders. The expansion parameter for perturbation theory is now simply $\alpha_S$ [15]. Figure 23 shows how the $b$ distribution function in the proton arises from a gluon splitting into a (virtual) $b\bar{b}$ pair. The strange and charm distributions arise in the same way; this also explains the presence of up and down antiquarks in the proton (see Fig. 8). Unlike the other "sea" quark distributions, which are extracted from experiment, the $b$ distribution function is calculated from the initial condition $b(x) = 0$ at $\mu = m_b$, and is evolved to higher $\mu$ via the DGLAP equations.

Top-quark spin - In the previous section we studied the top-quark spin in the context of the strong interaction. Let's now consider this topic in relation to the weak interaction, beginning with the decay of the top quark.
The top-quark decay to the final state $b\ell\nu$ is depicted in Fig. 24. The partial width for this decay, summed over the two spin states of the top quark, is given by a very simple formula:
$$\Gamma \propto (b\cdot\nu)(t\cdot\ell)\,, \qquad (50)$$
where the four-momentum of each fermion or antifermion is denoted by its label. To undo the sum over the top-quark spin, it is useful to decompose the four-momentum of the top quark, $t$, into two lightlike four-vectors,
$$t = t_1 + t_2\,,\qquad t_{1,2} = \frac{1}{2}\,(t \pm m\,s)\,,$$
where $s$ is the spin four-vector. In the top-quark rest frame, the spin four-vector is $s = (0, \hat{s})$, where $\hat{s}$ is a unit vector that defines the spin quantization axis of the top quark.
In the top-quark rest frame, the spatial components of $t_1$ point in the spin-up direction, while the spatial components of $t_2$ point in the spin-down direction. The partial widths for the decay of these two spin states are
$$\Gamma_\uparrow \propto (b\cdot\nu)(t_2\cdot\ell)\,,\qquad \Gamma_\downarrow \propto (b\cdot\nu)(t_1\cdot\ell)\,. \qquad (53,\,54)$$
Note that Eq. (50) is the sum of these two partial widths, as expected. Let's consider the decay of a top quark with spin up along the $\hat{s}$ direction in its rest frame, as depicted in Fig. 25. In this frame, the spatial components of $t_2$ point in the $-\hat{s}$ direction. Hence
$$t_2\cdot\ell \sim 1 + \cos\theta\,,$$
where $\theta$ is the angle between the spin direction and the charged-lepton three-momentum (see Fig. 25). Thus
$$\frac{1}{\Gamma}\,\frac{d\Gamma}{d\cos\theta} = \frac{1}{2}\,(1+\cos\theta)\,, \qquad (56)$$
which means that the charged lepton in top decay tends to go in the direction of the top-quark spin. In fact, the charged lepton is the most efficient analyzer of the top-quark spin, via the angular distribution of Eq. (56) [16].

We can use these same formulas to analyze the top-quark spin in single-top production [17,18]. The Feynman diagram for the s-channel subprocess, Fig. 19, is the same as that for top-quark decay, Fig. 24, with the replacement $\nu \to u$, $\ell \to \bar{d}$. Thus, from Eqs. (53) and (54),
$$\sigma_\uparrow \propto (b\cdot u)(t_2\cdot\bar{d})\,,\qquad \sigma_\downarrow \propto (b\cdot u)(t_1\cdot\bar{d})\,.$$
If we choose the spin-quantization axis to point in the direction of the $\bar{d}$ (in the top-quark rest frame), then $t_1 \sim \bar{d}$, and the latter cross section above vanishes. Thus the top quark is 100% polarized in the direction of the $\bar{d}$ (in the top-quark rest frame) in s-channel single-top production, as depicted in Fig. 26.

Figure 26. In single-top production, the top-quark spin is polarized along the direction of the $\bar{d}$ quark in the top rest frame.

This result holds true for t-channel single-top production as well, since it proceeds via the same Feynman diagram, just turned on its side. Although the top quark is 100% polarized when produced via the weak interaction, it is not in a state of definite helicity. Just as we saw in the previous section, there is nothing special about helicity for massive fermions. It may be possible to observe the polarization of single top quarks in Run II of the Tevatron.

Solutions to the exercises

Section 1

Exercise 1.1 - It is easiest to show this using index-free notation. Write the first term in the Lagrangian of Eq. (2) as $i\bar{Q}_L\gamma^\mu D_\mu Q_L$, where $Q_L$ is a 3-component vector in generation space. This term is invariant under the transformation $Q_L \to U_{Q_L} Q_L$, because the $3\times 3$ unitary matrix $U_{Q_L}$ commutes with the Dirac matrices (which are the same for all three generations):
$$i\bar{Q}_L\,U^\dagger_{Q_L}\,\gamma^\mu D_\mu\,U_{Q_L}\,Q_L = i\bar{Q}_L\gamma^\mu D_\mu Q_L\,,$$
where I have used $U^\dagger_{Q_L} U_{Q_L} = 1$. The same argument applies to the other terms in the matter Lagrangian and their corresponding symmetries.
Exercise 1.2 - Consider, for example, the first term in the Yukawa Lagrangian, Eq. (9). Under $Q_L \to U_{Q_L} Q_L$ it acquires an uncancelled factor of $U^\dagger_{Q_L}$, so it is not invariant under the symmetry transformation, and $U_{Q_L}$ is violated. In contrast, baryon number symmetry, Eq. (10), is respected:
$$-\Gamma_u^{ij}\,e^{-i\theta/3}\,\bar{Q}^i_L\,\epsilon\,\phi^*\,e^{i\theta/3}\,u^j_R = -\Gamma_u^{ij}\,\bar{Q}^i_L\,\epsilon\,\phi^*\,u^j_R\,.$$
The same applies to the other terms in the Yukawa Lagrangian, and also to lepton number, Eq. (11).
Exercise 1.3 - Under $u_L \to A_{u_L} u_L$, the kinetic term transforms as
$$i\bar{u}_L\gamma^\mu\partial_\mu u_L \to i\bar{u}_L\,A^\dagger_{u_L}\,\gamma^\mu\partial_\mu\,A_{u_L}\,u_L = i\bar{u}_L\gamma^\mu\partial_\mu u_L\,.$$
The last step requires that $A_{u_L}$ be unitary, $A^\dagger_{u_L} A_{u_L} = 1$. The same argument applies to the other fermionic kinetic-energy terms in the Lagrangian.
Exercise 1.4 - If $A_{u_L} = A_{d_L}$, then we may combine the first two field redefinitions in Eq. (15) into one equation,
$$Q_L \to A_{Q_L} Q_L\,,\qquad {\rm where}\quad A_{Q_L} \equiv A_{u_L} = A_{d_L}\,.$$
This is exactly the symmetry $U_{Q_L}$ of Eq. (5). The field redefinitions of $u^i_R$ and $d^i_R$ in Eq. (15) are the symmetries $U_{u_R}$ and $U_{d_R}$ of Eq. (5).
Exercise 1.5 - This follows from the definition of the CKM matrix, Eq. (16):
$$V V^\dagger = A^\dagger_{u_L} A_{d_L}\,A^\dagger_{d_L} A_{u_L} = 1\,,$$
where I have used the unitarity of the $A$ matrices.
Exercise 1.8 - Lepton number, Eq. (11), is violated because $\mathcal{L}_5$, Eq. (21), contains two lepton fields rather than a lepton field and its conjugate; under $L_L \to e^{i\theta} L_L$ it acquires a net phase $e^{2i\theta}$ and is not invariant. Recall that lepton number is an accidental symmetry of the standard model. Once you go beyond the standard model by including higher-dimension operators, there is no reason for lepton number (and baryon number) to be conserved.

Exercise 1.9 - We'll follow a similar argument as the one made to count the number of parameters in the CKM matrix. The Yukawa matrix $\Gamma_e$ has $2\times 3\times 3 = 18$ parameters, and the complex, symmetric matrix $c^{ij}$ has $2\times 6 = 12$ parameters. The symmetries $U_{L_L}$ and $U_{e_R}$ contain $2\times 3\times 3 = 18$ degrees of freedom, so the number of physically-relevant parameters is
$$(18 + 12) - 18 = 12\,.$$
[Note that we did not remove lepton number from the symmetries, because lepton number is violated by $\mathcal{L}_5$, Eq. (21).] Of these parameters, six are the charged-lepton and neutrino masses, leaving six parameters for the MNS matrix. Three are mixing angles, and three are CP-violating phases.
Section 2
Exercise 2.1 - Plug the expressions for $\alpha$, $G_F$, and $M_Z$ in terms of $g$, $g'$, and $v$, given at the beginning of Section 2, into Eq. (23) and carry through the algebra to obtain $M_W^2 = \frac{1}{4}g^2 v^2$.

Exercise 2.2 - Using Eq. (24), we can write Eq. (25) as
$$M_W^2 = \frac{\pi\alpha}{\sqrt{2}\,G_F\,(1 - M_W^2/M_Z^2)}\,.$$
Solving this quadratic equation for $M_W^2$ yields Eq. (23). Alternatively, one could plug the expressions for $\alpha$, $G_F$, and $M_Z$ in terms of $g$, $g'$, and $v$, given at the beginning of Section 2, as well as $M_W^2 = \frac{1}{4}g^2 v^2$, into the above equation to check its veracity.
The differential of this equation (with respect to $M_W^2$ and $m_t^2$, keeping everything else fixed) relates $dM_W^2$ to $dm_t^2$, where I have used Eq. (27) for $\Delta r$. We can now set $\Delta r = 0$ to leading-order accuracy, and solve for $dM_W^2/dm_t^2$, where I've used Eq. (24). Using $dM_W/dm_t = (m_t/M_W)\,dM_W^2/dm_t^2$ and evaluating numerically (for $M_W = 80$ GeV, $m_t = 175$ GeV) gives a slope of 0.0060, in good agreement with the slope of the lines of constant Higgs mass in Fig. 4.
The differential of this equation is
Thus $S \equiv (P_1+P_2)^2 = (2E, 0, 0, 0)^2 = (2E)^2$, which is the square of the total energy of the collision. The last expression in Eq. (37) follows from $(P_1+P_2)^2 = P_1^2 + P_2^2 + 2P_1\cdot P_2 \approx 2P_1\cdot P_2$, if we neglect the proton mass, $P_1^2 = P_2^2 = m_p^2$.

Exercise 3.2 - Inserting Eq. (41) into the second term in the Yukawa Lagrangian, Eq. (9), yields
$$-\Gamma_d^{ij}\,\bar{d}^i_L d^j_R\,\frac{v+h}{\sqrt{2}} + {\rm h.c.}$$
(analogous results are obtained for the other terms in the Lagrangian). Using Eq. (13), this can be written
$$-M_d^{ij}\,\bar{d}^i_L d^j_R\left(1 + \frac{h}{v}\right) + {\rm h.c.}$$
The field redefinitions that diagonalize the mass matrix, Eq. (15), will therefore also diagonalize the couplings of the fermions to the Higgs boson. The coupling to a given fermion is thus given by $-m_f/v$ (times a factor $i$ since the Feynman rules come from $i\mathcal{L}$), as shown in Fig. 9.
Exercise 3.3 - The answer is evidently no, since these terms connect fields of different chirality.
Exercise 3.4 -In the ultrarelativistic limit, E ≫ m, the mass of the top quark is negligible. Since helicity is conserved for massless quarks, the top quark and antiquark must be produced with opposite helicities.
Exercise 3.5 -In the limit E ≫ m (β → 1), Eq. (48) implies ψ = θ, which means that the off-diagonal and helicity bases are the same. This is as expected, because in the massless limit the helicities of the top quark and antiquark are 100% correlated (see Exercise 3.4), which is the defining characteristic of the off-diagonal basis.
Exercise 3.6 -At threshold (β → 0), Eq. (48) implies ψ = 0, which means that the top quark and antiquark spins are 100% correlated along the beam direction. This is a consequence of angular-momentum conservation. At threshold, the top quark and antiquark are produced at rest with no orbital angular momentum. The colliding light quark and antiquark have no orbital angular momentum along the beam direction. Thus spin angular momentum along the beam direction must be conserved. The light quark and antiquark have opposite helicity (due to helicity conservation in the massless limit), so the top quark and antiquark are produced with their spins pointing in the same direction along the beam.
Section 4
Exercise 4.1 - This follows from the unitarity of the CKM matrix, $VV^\dagger = 1$. Displaying indices, this may be written
$$\sum_k V_{ik}\,V^*_{jk} = \delta_{ij}\,.$$
For $i = j$, this implies
$$\sum_{k=d,s,b} |V_{ik}|^2 = 1\,,$$
which yields the desired result for $i = t$.

Figure 27. (a) Leading-order subprocess for W production. (b) Leading-order subprocess for W + 1 jet production.
Exercise 4.2 - Referring to Fig. 21, the virtual quark carries four-momentum $p = k - p'$, so
$$p^2 = (k - p')^2 = -2\,k\cdot p' = -2\,E_k E_{p'}(1 - \cos\theta_{kp'})\,,$$
which vanishes as the gluon and antiquark become collinear. Thus the denominator of the quark propagator vanishes in the collinear limit (if we neglect the quark mass).

Exercise 4.3 - There are two contributing subprocesses, $gq \to Wq$ and $q\bar{q} \to Wg$; each consists of two Feynman diagrams, shown in Fig. 27(b) for $gq \to Wq$. The two diagrams for $q\bar{q} \to Wg$ may be obtained by radiating a gluon off either fermion line in Fig. 27(a).
Exercise 4.4 - The charged-current weak interaction couples only to left-chiral fields. Thus the fermions in the final state ($b$, $\nu$) have negative helicity, and the antifermion ($\bar{\ell}$) has positive helicity, due to the relationship between chirality and helicity for massless particles (discussed in Section 3).
Exercise 4.5 - In the top-quark rest frame, $s^2 = (0, \hat{s})^2 = -1$, since $\hat{s}$ is a unit vector. Because $s^2$ is Lorentz invariant, this is true in all reference frames. Similarly, $t\cdot s = 0$, because $t = (m, 0, 0, 0)$ in the top-quark rest frame. Thus
$$t_1^2 = \frac{1}{4}\,(t + m\,s)^2 = \frac{1}{4}\,(m^2 + 2m\,t\cdot s + m^2 s^2) = 0\,,$$
and similarly for $t_2^2$.

Figure 28. Single-top production in the ultrarelativistic limit, as viewed from the top rest frame.

Exercise 4.6 - The spatial part of the lightlike four-vector $t_2$ is pointing in the $-\hat{s}$ direction. Thus $t_2\cdot\ell \sim 1 - \cos\alpha$, where $\alpha$ is the angle between $-\hat{s}$ and the direction of the charged lepton. This angle is supplementary to $\theta$ ($\alpha + \theta = \pi$), so $t_2\cdot\ell \sim 1 - \cos\alpha = 1 + \cos\theta$.

Finally, consider single-top production in the ultrarelativistic limit, shown in Fig. 28: the $u$ and $\bar{d}$ approach each other along a line and annihilate to make a top quark at rest and a $\bar{b}$ that carries off the incoming momentum. As always, the top-quark spin points in the direction of the $\bar{d}$. To view this event from the center-of-momentum frame, one boosts opposite the direction of motion of the $u$ and $\bar{d}$. This boosts the top quark in the direction opposite its spin, so it is in a state of negative helicity. This is as expected; in the limit $E \gg m$, the top quark acts like a massless quark, and is therefore produced in a negative-helicity state by the weak interaction (see Exercise 4.4).
ERCC1 polymorphisms as prognostic markers in T4 breast cancer patients treated with platinum-based chemotherapy
Background: Polymorphisms in the excision repair cross-complementing group 1 (ERCC1) gene have been implicated in the prognosis of various cancers. In the present study, we evaluated the prognostic role of the two most common ERCC1 polymorphisms in patients with T4 breast cancer receiving platinum-based chemotherapy.

Methods: A total of 47 patients with T4 breast cancer undergoing treatment with a platinum-based regimen were collected and followed up (median 159 months; range, 42-239 months). ERCC1 C8092A (rs3212986) and T19007C (rs11615) polymorphisms were genotyped using an automated sequencing approach. The same series was screened for BRCA1/2 mutations by DHPLC analysis and DNA sequencing.

Results: Among the tested patients, 16 (34%) and 25 (53%) presented the 8092A (homozygous A/A or heterozygous A/C) and the 19007C (homozygous C/C or heterozygous C/T) genotypes, respectively. The 8092A and 19007C genotypes in ERCC1 were significantly associated with overall survival in T4 breast cancer patients treated with platinum-containing chemotherapy (p-values = 0.036 and 0.004, respectively). Univariate and multivariate Cox regression analyses showed that the combination of the 8092A and 19007C genotypes acts as a significant prognostic factor in women with T4 breast cancer receiving platinum-based chemotherapy (p-values = 0.022 and 0.049, respectively). Two (4.3%) of the 47 cases were found to carry BRCA1/2 mutations; they presented the highest overall survival rates in the series.

Conclusions: The ERCC1 8092A and 19007C genotypes, or their combination, may predict a favorable prognosis in T4 breast cancer patients undergoing platinum-based treatment. Further large-scale, prospective studies are needed to validate our findings.
Introduction
Breast cancer remains the most frequent tumor and the leading cause of cancer-related death among the female population worldwide [1].
Locally advanced breast cancer (LABC) represents a heterogeneous group of diseases associated with a poor prognosis. According to the International Union Against Cancer (UICC)/American Joint Committee on Cancer (AJCC) TNM staging system, primary breast cancers with extension to the skin, with or without lymph node involvement, and without distant metastases (T4 N0-2 M0), may be included in stage III and considered as LABC [2].
Overall, patients with LABC - including cases presenting with inflammatory disease and, mostly, those carrying a triple-negative breast cancer - are particularly responsive to DNA-damaging agents such as platinum compounds; for this reason, platinum-based chemotherapy is frequently used as neoadjuvant treatment in such disease types [3][4][5][6]. The cytotoxic effect of platinum drugs is ascribed to the formation of bulky platinum-DNA adducts, which block replication and transcription through inter-strand cross-linking of the two DNA strands, leading to cancer cell death. These adducts are recognized and removed, with subsequent repair of the inter-strand cross-links in DNA, by factors of the nucleotide excision repair (NER) pathway [7]. Recently published data have revealed that single nucleotide polymorphisms (SNPs) in DNA repair genes may represent an underlying molecular mechanism explaining inter-individual variation in DNA repair capacity [7].
The excision repair cross-complementing group 1 (ERCC1) protein is one of the key effectors of the NER pathway. This DNA damage repair enzyme is essential for the removal of platinum-DNA adducts as well as for the recognition and correction of DNA damage [8]. Functional variants in genes involved in the DNA repair pathway may be important determinants of platinum response [9]. Therefore, ERCC1 mRNA and protein expression levels or ERCC1 gene polymorphisms may be used to predict the outcome in patients receiving platinum-based chemotherapy [10,11].
C8092A (rs3212986) and T19007C (rs11615) are two common polymorphisms in the ERCC1 gene. The C8092A polymorphism is located in the 3′ untranslated region of the gene and may affect ERCC1 messenger RNA stability. The synonymous T19007C polymorphism at codon 118 (Asn118Asn), converting a common codon (AAC) to an infrequent one (AAT), both coding for asparagine, has been proposed to impair ERCC1 translation and to affect the response to chemotherapy [12].
In recent years, many studies focused on the association between clinical behavior of different types of cancer and specific SNPs in genes involved in DNA repair, including genes of the NER pathway. Although controversial results have been reported for association between polymorphisms of ERCC1 and cancer outcome (see below), increasing and more consistent evidence suggest a relationship between the level of ERCC1 expression and the response to chemotherapy. Higher mRNA levels of ERCC1 are associated with lack of platinum response in advanced lung [13,14], ovarian [15], bladder [16], and gastrointestinal [17][18][19] cancers. Consequently, lower ERCC1 mRNA levels have been found consistently associated with an improved tumor response using platinum-containing compounds [8,14,17].
Polymorphisms in the ERCC1 gene have been studied extensively, with controversial results. One study indicated that the codon 118 C > T polymorphism was not associated with clinical outcome in women with stage III ovarian cancer, whereas the C8092A polymorphism was an independent predictor of progression-free survival and overall survival (OS) in the same series of patients [20]. In contrast to these findings, additional reports indicated that the C/C genotype at codon 118 of ERCC1 may predict the response to platinum in either the same type of ovarian cancer [15] or other malignancies [12,21]. Overall, the T19007C polymorphism showed a controversial association with clinical outcome (in terms of either tumor response or OS) among different types of cancers [22][23][24]. The C8092A polymorphism was instead associated with a more favorable outcome in head and neck squamous cell carcinoma and advanced non-small-cell lung cancer patients [25,26]. Finally, the accumulated evidence provided by a meta-analysis of the literature clearly indicated that the ERCC1 T19007C and C8092A polymorphisms might not act as risk factors for cancer [27].
There are differences in survival among patients who begin treatment with a similar disease status, and genetic factors may influence the effectiveness of therapy. For this reason, the availability of new biomarkers that can accurately predict prognosis and patient response to treatment is a central issue in improving therapeutic strategies.
The aim of the present study was to investigate whether the ERCC1 19007C > T and 8092C > A polymorphisms may influence the clinical outcome in response to treatment with platinum within a well-characterized cohort of patients with T4 breast carcinoma and long follow-up evaluation. Since germline mutations in BRCA1 and, to a lesser extent, BRCA2 have been found in a variable proportion (ranging from 10% to 30%) of patients with LABC or, mostly, triple-negative breast cancer [28], and such gene dysfunctions seem to be associated with prognosis [29], we also evaluated the prevalence of BRCA1/2 mutations in our series.
Samples
Germline DNA samples of 47 consecutive patients with T4 breast cancer [12 (26%) of them classified as inflammatory breast cancer (T4d-IBC)] were included in the study. Cases were enrolled between 1995 and 2004 and observed up to July 2013, for an overall median of 161 months (range, 13-242 months). Patients were assessed by physical examination and mammography, with diagnosis confirmed via core-needle biopsy. All patients completed a treatment plan including neoadjuvant platinum-based chemotherapy, surgery, radiation therapy, adjuvant chemotherapy, and hormone therapy, when indicated (see below). All patients were of Sardinian origin; the median age at diagnosis was 51 years (range, 33-67 years). Patients' characteristics are summarized in Table 1.
The study was approved by the Review Board at the University of Cagliari (Prot. 102/1996). A written informed consent was obtained for using tissue specimens in molecular analyses.
Clinical evaluations were performed every 3 months for 2 years and every 6 months thereafter. Instrumental examinations (e.g., mammography, liver ultrasound, chest X-ray, bone scan, and echocardiogram) were performed every 6 months for the first 2 years, and every 12 months thereafter.
Genetic analysis
Genomic DNA from all patients was isolated from peripheral blood nucleated cells, using standard methods, and then screened for the C > A and C > T polymorphisms at positions 8092 and 19007, respectively, of the ERCC1 gene, using an automated direct sequencing approach. Primers for polymerase chain reaction (PCR) assays and protocols for PCR-based amplification have been previously described [30].
The entire coding sequences and intron-exon boundaries of the BRCA1 and BRCA2 genes were screened for germline mutations in all patients from our series. Mutation analysis was performed by denaturing high-performance liquid chromatography (DHPLC), followed by automated sequencing, as we previously reported [31,32]. Briefly, DHPLC analysis was carried out with the Wave® nucleic acid fragment analysis system (Transgenomic, Santa Clara, CA). Suspected variants are visualized as a characteristic pattern of peaks corresponding to the mixture of homo- and heteroduplexes formed when wild-type and mutant DNA are hybridized. Abnormal PCR products identified by DHPLC analysis were directly sequenced using an automated fluorescence-cycle sequencer (ABI PRISM 3130, Life Technologies/Thermo Fisher Scientific, Waltham, MA, USA).
Statistical analysis
Odds ratios (OR) of carrying the ERCC1 19007C > T and 8092C > A polymorphisms were estimated by logistic regression and reported with 95% confidence intervals (95% CI). Analyses were performed with the statistical package SPSS/7.5 for Windows.
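As an illustration only, the sketch below shows how an odds ratio with a 95% CI can be obtained from a logistic regression in Python with statsmodels; the genotype coding and the toy outcome data are hypothetical, not the study's records.

```python
# Minimal sketch: odds ratio with 95% CI from a logistic regression,
# analogous to the SPSS analysis described above. All data are invented.
import numpy as np
import statsmodels.api as sm

# 1 = carrier of the 8092A genotype (A/A or A/C), 0 = C/C; outcome 1 = death
genotype = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1])
died     = np.array([0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0])

X = sm.add_constant(genotype)        # intercept + genotype indicator
fit = sm.Logit(died, X).fit(disp=0)  # maximum-likelihood logistic fit

or_est = np.exp(fit.params[1])               # OR for carriers vs. non-carriers
ci_low, ci_high = np.exp(fit.conf_int()[1])  # 95% CI on the OR scale
print(f"OR = {or_est:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```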
Patient characteristics and treatments
Forty-seven patients with a diagnosis of T4 breast carcinoma (T4-N0/2-M0, according to the TNM classification by Sobin et al. [33]) were included in the study. Considering the disease staging system [33], all cases from our series were classified with the highest stage of nonmetastatic disease (Stage IIIB). Among them, 19 (40%) patients presented the triple-negative breast cancer subtype, characterized as estrogen receptor (ER) negative, progesterone receptor (PR) negative, and human epidermal growth factor receptor-2 (HER-2) negative (Table 1). All patients received platinum-based chemotherapy as neoadjuvant treatment: nearly all of them (89%; see Methods) received PEV treatment. After surgery, adjuvant treatment included CMF chemotherapy for six cycles and locoregional radiotherapy (see Methods).
BRCA mutation analysis
The germline DNA samples from the T4 breast carcinoma patients of the present study were analyzed for mutations in both the BRCA1 and BRCA2 genes, as previously described [31,32]. Among the 47 cases of the series, germline coding-region mutations of known functional significance in either BRCA1 or BRCA2 were detected in two (4.3%) patients. Both mutations, BRCA1 (2CA) 916delTT and BRCA2 3951del3insAT, were absent in normal genomic DNA from 103 unrelated healthy individuals (corresponding to 206 control chromosomes) and were classified as disease-causing variants due to their predicted effects on the proteins.
Survival analysis
As of July 2013, 18 (38%) patients had died of disease; the median overall survival of the whole sample was 108 months, and the median follow-up of living patients was 153 months.
According to C8092A polymorphism status, median OS was significantly higher for patients carrying the A (AA + AC) genotypes than for those with the C (CC) genotype (123.5 vs. 101.6 months; p = 0.036). Analogously, a strong, significant association between the T19007C polymorphism and overall survival was observed; median OS was 131 months for carriers of the C (CC + CT) genotypes and 66.5 months for TT homozygotes (p = 0.004) (Table 2A). When the combination of both ERCC1 polymorphisms was included in univariate analysis, carriers of the combined 8092A/19007C genotype presented an overall survival significantly longer than that of patients carrying the combined C8092/T19007 genotype (143.5 vs. 91.7 months; p-value = 0.022) (Table 2B). No association between the T19007C or C8092A ERCC1 polymorphisms and age at diagnosis, tumor grade, histology, or menopausal status was detected (not shown).
Using the Kaplan-Meier method, survival curves indicated that patients carrying either the 8092A or the 19007C genotypes presented significantly better overall survival than those carrying the other genotypes (p < 0.001 for both 8092A and 19007C genotypes; Figure 1). Using the Cox model adjusted for age at diagnosis in a multivariate analysis, pathological response to primary chemotherapy and the two ERCC1 polymorphisms remained the only parameters with a significant impact on prognosis in our series of breast cancer patients; no other association with overall survival was observed for the remaining variables (Table 3A). Again, the combined 8092A/19007C genotype remained a statistically independent factor predicting a more favorable prognosis in multivariate logistic regression (p = 0.049) (Table 3B).
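For orientation, a minimal sketch of this kind of survival workflow (Kaplan-Meier curves, a log-rank comparison, and an age-adjusted Cox model) is given below using the Python lifelines package; the follow-up times and genotype labels are invented stand-ins, not the cohort data.

```python
# Minimal sketch of the survival analyses described above (invented data).
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":  [238, 195, 160, 140, 120, 98, 150, 66, 45, 130, 88, 60],
    "died":    [0,   0,   0,   1,   0,  1,  0,   1,  1,  1,   1,  1],
    "carrier": [1,   1,   1,   1,   1,  1,  1,   0,  0,  0,   0,  0],  # combined 8092A/19007C
    "age":     [51,  48,  39,  55,  62, 47, 50,  58, 44, 61,  66, 38],
})

# Kaplan-Meier estimate per genotype group
km = KaplanMeierFitter()
for label, grp in df.groupby("carrier"):
    km.fit(grp["months"], grp["died"], label=f"carrier={label}")
    print(f"carrier={label}: median OS = {km.median_survival_time_} months")

# Log-rank test between the two survival curves
lr = logrank_test(df.months[df.carrier == 1], df.months[df.carrier == 0],
                  df.died[df.carrier == 1],  df.died[df.carrier == 0])
print(f"log-rank p = {lr.p_value:.4f}")

# Cox proportional-hazards model adjusted for age at diagnosis
cox = CoxPHFitter().fit(df, duration_col="months", event_col="died")
cox.print_summary()  # hazard ratios with 95% CIs for carrier and age
```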
Finally, it is worth noting that the two patients who carried a BRCA1 or BRCA2 germline mutation presented the highest overall survival times of the series (238 and 195 months, respectively).
Discussion
In this study, two single nucleotide polymorphisms, C8092A and T19007C, in ERCC1 gene were retrospectively evaluated for their association with the clinical behavior in a group of T4 breast carcinoma patients of Sardinian origin, receiving platinum-based chemotherapy. We have demonstrated that two genotypes within such polymorphisms have a role as independent prognostic factors for a more favorable clinical outcome in this subset of patients.
In breast cancer, the choice of cytotoxic chemotherapy is generally based on tumor extent and disease features. Identification of surrogate markers as potential predictive factors will be useful to improve the clinical management of breast cancer patients (i.e., to identify which subset of patients is expected to show either a response or a lack of response to a particular therapy). Various attempts have been made to improve the survival of patients with T4 breast carcinoma, for instance, discovering novel predictive biomarkers to identify patients who may really benefit from platinum-based chemotherapy.
Platinum agents such as cisplatin and carboplatin are DNA-damaging agents with activity in breast cancer, particularly in the triple-negative subgroup [34][35][36]. In neoadjuvant strategies for the treatment of locally advanced breast cancer (LABC), the utility of platinum agents in addition to standard chemotherapy has yet to be completely clarified. Therefore, identification of factors that better predict the clinical outcome in response to platinum agents may help achieve a real advantage in the management of LABC. ERCC1 (excision repair cross-complementing group 1) can recognize and correct DNA damage through DNA strand incision and homologous recombination mechanisms [8,30]. High ERCC1 expression has been demonstrated to be associated with resistance to platinum-based chemotherapy and worse prognosis in cancer patients [20,37,38]. To this aim, we here assessed whether ERCC1 gene polymorphisms play an analogous clinical role as predictive and prognostic factors among patients with T4 breast cancer receiving platinum-based therapy.
Several clinical studies have explored the role of ERCC1 as a marker of platinum sensitivity in cancer patients. For example, various studies have focused on the relationship between ERCC1 polymorphisms and prognosis after treatment with platinum agents in colorectal cancer patients. Sequence variations in the ERCC1 gene may indeed alter DNA repair capacity, making it biologically plausible that polymorphisms of this gene have functional significance in cancer.
In our study, the ERCC1 polymorphisms were determined using an automated sequencing method. For the C8092A polymorphism, the A genotype was significantly associated with overall survival of T4 breast cancer patients treated with chemotherapy containing platinum compounds (OR 1.957, 95% CI 1.276-5.675; p-value = 0.036). Analogously, the C genotype of the T19007C polymorphism was significantly associated with overall survival in the same series (OR 3.875, 95% CI 1.865-18.851; p-value = 0.004). Univariate and multivariate Cox regression analyses showed that the combination of the 8092A and 19007C genotypes acts as an independent prognostic factor in this group of T4 breast cancer patients receiving platinum-based chemotherapy (p-values = 0.022 and 0.049, respectively).
The precise mechanism by which the C8092A polymorphism is positively associated with a favorable prognosis remains undetermined, as no direct functional data are available for this polymorphism. Since the SNP is located in the 3′ untranslated region (3′-UTR), which can be targeted by regulatory proteins and microRNAs, the C8092A polymorphism could affect the stability, and thus the translation rate, of the corresponding mRNA, ultimately influencing the expression levels of the ERCC1 protein in the cells. On the other hand, the synonymous T19007C polymorphism at codon 118 (Asn118Asn) is a common silent substitution, and the exact functional consequence of Asn118Asn has not yet been clarified. Again, it could affect the stability of the mRNA or influence translation rates by converting a high-usage codon to a low-usage one. Alternatively, it is biologically plausible that this correlation is mediated by linkage disequilibrium with other potentially functional single-nucleotide polymorphisms.
The entire group of 47 patients was then screened for germline mutations in the BRCA1/2 genes and, of note, the two (4.3%) carriers of BRCA1/2 mutations presented the highest overall survival within the series. No statistical analysis was carried out due to the limited number of BRCA-mutated cases. The lack of functional BRCA (mainly BRCA1) can lead to increased sensitivity of tumor cells to molecular damage, and BRCA mutations have accordingly been shown to represent a predictive marker of response to DNA-damaging chemotherapies [39][40][41]. Certainly, the reduced ability to repair damaged DNA, along with the impairment of ERCC1 protein function due to the presence of the combined 8092A and 19007C genotypes, may have contributed to the longest overall survival observed in this limited subset of patients from our series.
Several limitations of this study should be acknowledged. First, the sample size may limit the statistical power of our study (findings need further replication in a larger patient collection). Second, the cases belong to a particular, genetically homogeneous population (ethnicity may interfere with observed associations in multifactorial diseases; extension of the study to other geographical areas with genetically heterogeneous populations is also recommended). However, to our knowledge, this is the first study clearly demonstrating that polymorphisms in the ERCC1 gene are significantly associated with overall survival in patients with T4 breast cancer receiving platinum-based treatment. Overall, this suggests that such sequence variations could be considered novel prognostic biomarkers for the management of T4 breast cancer patients.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
GrP, performed DNA sequencing, helped to draft the manuscript; FA, performed data interpretation; MB, performed statistical analysis; MO, performed data analysis, helped to draft the manuscript; AC, performed quality control of pathological data; MS, contributed to mutation analysis; VP, BM, BF, and FN, participated in patients' collection and data acquisition; MI, participated into the study design and data discussion; GP, conceived of the study and participated in its design and coordination, helped to draft the manuscript. All authors read and approved the final manuscript.
Evaluation of 3D Templated Synthetic Vascular Graft Compared with Standard Graft in a Rat Model: Potential Use as an Artificial Vascular Graft in Cardiovascular Disease
Although the number of vascular surgeries using vascular grafts is increasing, their use is limited by graft-related complications and size discrepancy. Current efforts to develop the ideal synthetic vascular graft for clinical application using tissue engineering or 3D printing remain far from satisfactory. Therefore, we aimed to re-design the vascular graft with modified materials and 3D printing techniques, and we demonstrate improved applications of our new vascular graft. We designed 3D printed polyvinyl alcohol (PVA) templates according to vessel size and shape, and these were dip-coated with salt-suspended thermoplastic polyurethane (TPU). Next, the core template was removed to obtain a customized porous TPU graft. In mechanical testing and cytotoxicity studies, the new synthetic 3D templated vascular grafts (3DT) were more appropriate for clinical use than commercially available polytetrafluoroethylene grafts (ePTFE; standard graft, SG). Finally, we implanted the 3DTs and SGs into the rat abdominal aorta as a patch. Four groups of the animal model (SG_7 days, SG_30 days, 3DT_7 days, and 3DT_30 days) were enrolled in this study. The abdominal aorta was surgically opened and sutured with an SG or 3DT with 8/0 Prolene. The degree of endothelial cell activation, neovascularization, thrombus formation, calcification, inflammatory infiltrates, and fibrosis was analyzed histopathologically. There was significantly decreased thrombogenesis in the group treated with the 3DT for 30 days compared with the groups treated with the SG for 7 and 30 days and the 3DT for 7 days. In addition, the group treated with the 3DT for 30 days may also have shown increased postoperative endothelialization in the early stages. In conclusion, this study suggests the possibility of using the 3DT as an SG substitute in vascular surgery.
Introduction
In 2016, cardiovascular disease (CVD) led to 1.68 million deaths in the European Union [1]. Although endovascular treatment for vascular disease is popular and considered the first treatment option in many cases, the vascular graft market has continued to grow rapidly [2,3]. In other words, the management of diseases such as coronary and peripheral artery disease, congenital heart defects, and end-stage renal disease often requires vascular surgery for revascularization or formation of a fistula track using grafts of various types and sizes [4]. The safety and appropriateness of a vascular graft determine treatment outcomes in CVD, the leading cause of death. In addition to vascular graft-related complications such as infection, thrombogenesis, rupture, and (pseudo-)aneurysmal changes, the lack of availability of small-diameter (<6 mm) vascular grafts further limits the clinical use of vascular grafts, due to a high risk of intimal hyperplasia, luminal thrombosis, inflammation, and consequent compliance mismatch with human vessels [5,6].
Commercial vascular grafts (expanded polytetrafluoroethylene, ePTFE; standard graft, SG) have been frequently used in clinical applications. However, the SG has low efficacy relative to autologous and tissue-engineered vascular grafts because of its slow rate of endothelialization and increased risk of thrombogenesis. Endothelialization is a crucial factor for long-term implantation due to its outstanding anticoagulant effects, resulting in resistance to inflammation and thrombosis [7][8][9][10]. Endothelialization of SGs is slow and difficult because their highly crystalline and hydrophobic surfaces hinder endothelial cell adhesion, spreading, and growth [11,12]; therefore, SGs lack luminal endothelial cell coverage after implantation in human vessels. In addition, SGs show poor long-term patency because of platelet adhesion and the adsorption of plasma proteins, which may induce early thrombus formation [7,8]. Moreover, the porosity of vascular grafts is positively correlated with endothelialization and vascularization [13]. However, standard low-porosity SG materials (internodal distance ≤ 30 µm) may become covered with amorphous platelets or, eventually, a thin fibrin coagulum after human implantation [9,14]. To overcome these limitations, the current study explored strategies to effectively promote endothelialization and inhibit thrombogenicity, with the goal of developing vascular grafts with improved performance over commercially available grafts in addition to appropriate mechanical and cytological properties.
Therefore, we re-designed the vascular graft with modified materials and 3D printing techniques and evaluated its clinical applicability. First, we developed the 3D templated vascular graft (3DT) and performed mechanical testing and cytotoxicity studies comparing 3DTs and SGs. Because rats and humans exhibit similar blood pressure and homeostatic mechanisms [15], we then implanted 3DTs and SGs into the rat abdominal aorta using a patch technique. The purpose of our study was to investigate the clinical efficacy of 3DTs compared with SGs.
Fabrication of Artificial Vascular Graft
The original techniques for the fabrication of artificial vascular grafts have been reported previously in detail [16]. In brief, polyvinyl alcohol (PVA) filament feedstocks (ESUN, Shenzhen, China) were processed into sacrificial templates for the shaping of grafts using a material extrusion 3D printer (Ender 3 pro K, Creality, China) equipped with a nozzle measuring 0.4 mm in inner diameter. The printing conditions were set at a nozzle temperature of 200 °C and a nozzle speed of 60 mm/s. The 3D printed PVA templates were dip-coated with a salt-suspended thermoplastic polyurethane (TPU) solution, which was prepared by mixing salt powders (MW 0.44, Sigma Aldrich, St. Louis, MO, USA) and a TPU solution at a ratio of 4:1 (w/w). The TPU (DAELIM chemical, Korea) granules were dissolved in dimethylformamide (DMF, Samchun Pure Chemical, Korea) at a concentration of 15% (w/v) to obtain the solution. After dipping the PVA template into the TPU-salt suspension, the dip-coated template was dried and thereafter immersed in sonicated water to leach out the salt powders. In the final step, the core template was removed through the cut ends in the sonicated water, resulting in a customized porous TPU graft.
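To make the mixing ratios above concrete, the sketch below computes the component masses for one coating batch; the 100 mL solvent volume is an arbitrary example, and treating % w/v as grams of TPU per mL of solvent is an assumption, since the basis is not spelled out here.

```python
# Minimal sketch of the coating-suspension arithmetic: TPU dissolved in DMF at
# 15% w/v, then salt added at 4:1 (w/w) salt:TPU. Batch volume is hypothetical.
def coating_batch(dmf_volume_ml: float,
                  tpu_w_over_v: float = 0.15,
                  salt_to_tpu: float = 4.0):
    tpu_g = tpu_w_over_v * dmf_volume_ml  # grams of TPU per mL of solvent
    salt_g = salt_to_tpu * tpu_g          # 4 g of salt per 1 g of TPU
    return tpu_g, salt_g

tpu_g, salt_g = coating_batch(100.0)
print(f"100 mL DMF -> {tpu_g:.1f} g TPU + {salt_g:.1f} g salt")  # 15.0 g + 60.0 g
```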
Mechanical Characterization of Vascular Grafts
The mechanical properties of customized graft were evaluated and compared with those of an SG. Uniaxial tensile tests were performed using a typical universal testing machine (Instron 3367, Norwood, MA, USA). Tensile specimens were cropped from the as-prepared artificial graft measuring 15 mm in diameter and 45 mm in length. The graft was cut into rectangular sheet pieces with a width of 6 mm and a length of 40 mm. Using the resized samples, the tensile tests were performed at a rate of 500 mm/min and a gauge length of 20 mm. In total, five samples were tested for the measurement of average elongation at break and elastic modulus (20% secant).
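As a worked illustration of the two reported metrics, the sketch below derives elongation at break and a 20% secant modulus from a stress-strain record; the curve is synthetic, standing in for Instron output rather than reproducing the study's measurements.

```python
# Minimal sketch: elongation at break and 20% secant modulus from a
# stress-strain record (synthetic data; stress in MPa, strain dimensionless).
import numpy as np

strain = np.linspace(0.0, 4.5, 500)            # up to 450% strain at break
stress = 19.0 * strain / (1.0 + 0.5 * strain)  # invented softening curve

elongation_at_break = strain[-1] * 100         # % strain at the final point

# Secant modulus at 20% strain: slope of the chord from the origin
i20 = np.argmin(np.abs(strain - 0.20))
secant_modulus_20 = stress[i20] / strain[i20]  # MPa

print(f"elongation at break: {elongation_at_break:.0f}%")
print(f"20% secant modulus:  {secant_modulus_20:.1f} MPa")
```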
Surface Characterization of Vascular Grafts
The porous structure of 3DT and SG inner surfaces was analyzed via field emission scanning electron microscopy (FE-SEM, Hitachi SU-8010, Hitachi High-Tech Co., Tokyo, Japan). Prior to imaging, the samples were sputter-coated with gold for 250 s using a 15 mA current.
Animals
Sixteen-week-old specific pathogen-free Sprague Dawley male rats (total number = 24, 440~460 g) were used in this study. All of animal housing, breeding, and experiments were approved by the Animal Care and Use Committee of Korea University (KOREA-2019-0060-C1).
Experimental Design
The operation was performed according to our previously described method [16]. On day 0, rats were anesthetized with isoflurane. A longitudinal laparotomy incision was made and the infrarenal abdominal aorta was exposed. Three minutes after heparin injection (50 IU/kg, intraperitoneally), the abdominal aorta was cross-clamped and opened longitudinally to a length of 1.5 cm. The opened aorta was sutured and covered with an SG or 3DT vascular patch. The wound was closed layer by layer (Figure 1a-h). In total, four groups of the animal model were enrolled in this study: SG_7 day (standard graft; ePTFE), SG_30 day, 3DT_7 day (customized vascular patch; 3D templated graft), and 3DT_30 day. After 7 or 30 days, the aorta and surrounding tissues of the operated animals were harvested for histopathologic analysis after euthanasia. Figure 1i shows the experimental design.
Tissue Harvesting and Staining
After extraction of the aorta and surrounding tissues, the samples were fixed with 4% paraformaldehyde and stained with hematoxylin and eosin (H&E). Histopathological analysis of luminal thrombus, neovascularization, calcification, fibrosis, and inflammatory infiltrates was then conducted.
Histopathology Scoring
Histopathology grading of thrombi, calcification, neovascularization, inflammatory cell infiltrates, and fibrosis was evaluated and rated by a pathologist in a blinded manner. Grades were 0 (none), 1 (mild), 2 (moderate), and 3 (severe), and scores were reported as mean values.
Statistical Analysis
The data are expressed as the mean ± standard deviation (SD). Statistical significance was determined using the Mann-Whitney test; a p-value of <0.05 was considered statistically significant. All statistical analyses were conducted using GraphPad Prism software (GraphPad, La Jolla, CA, USA).
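For readers reproducing this comparison outside GraphPad, the sketch below applies the Mann-Whitney test to ordinal histopathology grades (0-3) from two groups; the grade lists are invented, not the study's observations.

```python
# Minimal sketch: Mann-Whitney comparison of histopathology grades between
# SG and 3DT groups (invented grades on the 0-3 scale described above).
import numpy as np
from scipy.stats import mannwhitneyu

sg_30d_thrombus  = np.array([2, 3, 2, 1, 2, 3])  # hypothetical SG_30 day grades
tdt_30d_thrombus = np.array([0, 1, 0, 1, 0, 1])  # hypothetical 3DT_30 day grades

u_stat, p_value = mannwhitneyu(sg_30d_thrombus, tdt_30d_thrombus,
                               alternative="two-sided")

print(f"SG  mean ± SD: {sg_30d_thrombus.mean():.2f} ± {sg_30d_thrombus.std(ddof=1):.2f}")
print(f"3DT mean ± SD: {tdt_30d_thrombus.mean():.2f} ± {tdt_30d_thrombus.std(ddof=1):.2f}")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")  # p < 0.05 -> significant
```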
3D Customized Artificial Vascular Grafts
We previously reported a template-based fabrication of tubular tissue engineering scaffolds [17,18]. In this study, using the same process, the artificial vascular grafts were fabricated and customized using a 3D printing technique. Figure 2a shows the 3D printing of a cylindrical PVA template. Artificial grafts with morphology similar to the corresponding templates were obtained after dip-coating in the TPU-salt suspension, salt leaching, and removal of the PVA template. Owing to the inherent customizability of the 3D printing method, the artificial grafts could be custom-made to clinical specifications. Figure 2b displays two different grafts with inner diameters of 5 mm (upper) and 10 mm (lower). In addition to facile dimensional control, the artificial grafts exhibit good mechanical flexibility for clinical efficacy, as shown in Figure 2c. To evaluate the mechanical flexibility quantitatively, tensile tests were conducted using the as-prepared sheet specimens cut from the artificial grafts. Figure 3a,b show the resulting stress-strain relationship of specimens cropped from the as-fabricated porous 3DT. Compared with the SG, with elongation at break of 89% and elastic modulus of 61 MPa (25% secant), the porous 3DT exhibited superior flexibility, with 456% elongation and an elastic modulus of 19 MPa (Figure 3a,b). These advantageous properties were attributed to the intrinsic softness of the material (TPU) and the high porosity generated by salt leaching. Figure 3c,d show the porous structures of the inner wall surfaces of the 3DT and SG samples. A higher porosity and larger pore sizes were observed in the 3DT sample compared with the SG sample. A similar tendency was revealed in the cross-sectional views of the samples. As expected, the differentiated pore morphology of the 3DT accounted for its superior mechanical flexibility over the SG.
In Vitro Cytotoxicity
Next, the viability of L-929 cells cultured with or without extracts of the 3DT or SG was investigated using an MTS assay. As shown in Figure 4, cell viability with the 3DT or SG extracts was around 90% and 106%, respectively.
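The viability percentages quoted above are typically absorbance readings normalized to untreated control wells; a minimal sketch of that arithmetic follows, with invented optical densities and a blank-subtraction convention assumed rather than taken from the paper's protocol.

```python
# Minimal sketch of an MTS viability calculation: extract-treated absorbance
# normalized to untreated controls after blank subtraction (invented values).
import numpy as np

od_blank   = 0.06                          # medium only, no cells
od_control = np.array([0.82, 0.85, 0.80])  # untreated L-929 wells
od_3dt     = np.array([0.74, 0.76, 0.73])  # wells exposed to 3DT extract

def viability_pct(sample: np.ndarray, control: np.ndarray, blank: float) -> float:
    return 100.0 * (sample.mean() - blank) / (control.mean() - blank)

print(f"3DT extract viability: {viability_pct(od_3dt, od_control, od_blank):.0f}%")
```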
Endothelialization
To evaluate the postoperative effects of the 3DT vascular patch, endothelial cell activation was compared with that of the SG at 7 and 30 days postoperation. Endothelial cells are normally flat, but become rounded when activated. The SG and 3DT groups revealed endothelial cell activation on day 7 (Figure 5), whereas no endothelial cell activation was detected in either group on day 30. Endothelialization is a crucial factor for long-term implantation due to anticoagulation, which decreases thrombus formation and prolongs implant function [7,19]. The 3DT_7 day group revealed higher endothelial activation compared with the SG_7 day group (Figure 5). These data indicate that endothelial activation occurs at an early stage but is not sustained with either vascular patch.
Histopathological Analysis
To further investigate the postoperative effects of the 3DT vascular patch, the histopathological findings of neovascularization, luminal thrombus, calcification, inflammatory cell infiltrates, and fibrosis were compared with those of the SG at 7 and 30 days postoperation (Figure 6). Among the groups, the SG_30 day group manifested slightly higher neovascularization (Figure 6a,e). The 3DT_30 day group revealed slightly higher fibrosis compared with the SG_30 day group (Figure 6e). However, the SG_30 day group showed advanced fibrosis, whereas the 3DT_30 day group exhibited early fibrosis (Figure 6b,d). The SG_30 day and 3DT_30 day groups revealed higher calcification compared with the SG_7 day and 3DT_7 day groups (Figure 6e). Calcification of blood vessels occurs along with normal aging but otherwise represents the initiation of thrombi [20,21]. The SG_30 day group revealed slightly higher calcification compared with the 3DT_30 day group (Figure 6e). In addition, among these four groups, only the 3DT_30 day group had significantly less thrombus formation (Figure 6e). However, neovascularization, inflammation, and fibrosis did not differ significantly among the groups.
Discussion
The ideal vascular graft should exhibit appropriate mechanical strength and compressibility, permeability and viscoelasticity, biocompatibility and biostability, and hemocompatibility and non-thrombogenicity. These prerequisites have already been reported in many studies [22][23][24][25]. In addition, the ideal graft must meet physiological, manufacturing, and optional needs. Physiologically, the ideal vascular graft must not disturb tissue healing and should be non-toxic. It should also be devoid of antigenicity, oncogenicity, and adverse immune effects, and should suppress intimal hyperplasia [26][27][28][29][30]. Manufacturing of the ideal vascular graft should be easy and accurately reproducible. Such grafts should be rapidly manufacturable in diverse diameters, lengths, and shapes, and easily stored and shipped. In this respect, our method, combining 3D printing and solution coating, is characterized by facile manufacturing and shape customizability. Besides, drug elution is often required clinically, and the manufacturing price is one of the most important prerequisites [31][32][33][34][35]. The prevalence of vascular disease is increasing rapidly because of the aging of the population, and morbidity and mortality related to vascular ailments are also increasing. Vascular disorders can deteriorate patients' quality of life and entail enormous social and healthcare costs. Therefore, many investigations have focused on the elimination of vascular disease over several decades. These efforts have resulted in significant achievements, especially in endovascular treatment. However, the role of vascular surgery using vascular grafts, with or without concomitant endovascular treatment, remains clinically important. Thus, many of the challenges and limitations related to vascular grafts, such as infection, thrombus formation, rupture, and (pseudo-)aneurysmal changes, need to be resolved. Accordingly, more advanced vascular grafts made with a diverse array of materials and manufacturing techniques are necessary. In addition, these grafts should be commercially available "off the shelf".
This study introduces a new synthetic 3DT. Although this study had some limitations due to the small sample size and the use of a patch technique rather than a full graft technique in the animal operations, it suggests the possibility of clinical applications using the newly fabricated synthetic 3DT. Our synthetic graft was not inferior to the commercially available SG, which is used worldwide clinically, in terms of mechanical strength, surface morphology, cytotoxicity, and histological results following in vivo implantation. The elastic moduli of typical vascular tissues range from approximately 1 MPa to 40 MPa, depending on the type of artery or vein [36][37][38]. In this context, the SG sample showed excessively high stiffness, exceeding the feasible range of native vascular tissues. In contrast, our graft is soft and alleviates the mechanical mismatch between graft and native tissue. The 3DT graft is thus expected to offer great potential as a biomimetic vascular graft owing to its mechanical softness, a property that has been a persistent challenge in previous studies [17,39,40]. The cytotoxicity of our synthetic 3DT does not differ from that of the commercially available SG. In addition, significantly reduced calcification and thrombus formation were observed in the 3DT_30 day group compared with the SG_30 day group in our study. Fibrosis and thrombus formation may induce structural degeneration, including calcification [41], and vascular calcification is generally seen with aging [20]. Of course, this study did not establish the superiority of the newly fabricated graft over the commercially available SG; rather, it represents a preliminary analysis of the feasibility of clinical applications using the synthetic 3DT. In addition, our graft lends itself to an additional coating layer with repurposed materials and is designed to improve clinical outcomes, for example by reducing the risk of thrombogenesis and intimal hyperplasia via drug-eluting techniques and by facilitating endothelial cell activation through modification of the graft's inner-layer coating. Our new technique also facilitates the fabrication of patient-specific synthetic grafts tailored to individual patients based on data obtained using computed tomography or magnetic resonance imaging, especially for diverse branched and severely distorted vessels. We are currently investigating the functionally improved and patient-specific customized 3DT and will report the detailed results soon.
Conclusions
Our study revealed that the 3DT was biocompatible and reduced the risk of thrombogenesis in the early postoperative stages of vascular surgery compared with the SG. These results suggest the possibility of clinical applications using the 3DT as a substitute for pre-existing SGs in vascular surgeries.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
Patient Outcome in Pregnancy Requiring Dialysis: a Case Series
The optimal management of pregnant dialysis patients remains a great challenge for nephrologists, end-stage renal disease being a predictor of adverse outcomes in this condition. We report a single-center experience of four patients requiring dialysis during pregnancy, all of which resulted in successful delivery of viable infants. Our success rate may reflect an overall improvement in management of this population, with special attention paid to multiple risk factors. These include blood pressure and volume control, anemia management with erythropoietin analogues, nutritional intake and total dose of dialysis.
INTRODUCTION
Women with end-stage renal disease (ESRD) who are on dialysis rarely get pregnant, and if they do, pregnancy outcomes are fraught with risks to infant and mother. Advanced chronic kidney disease leads to secondary amenorrhea from anovulation, and uremia diminishes the luteal hormone surge and estradiol peak essential for conception [1]. Reported conception rates have been exceedingly low worldwide [2][3][4], with infant survival rates dropping if conception occurred after dialysis initiation [5].
Once pregnancy occurs in patients with chronic kidney disease, unique management issues come into play, and a thorough knowledge of these helps the nephrologist and obstetrician navigate the patient to a successful pregnancy. Since the publication of the first successful pregnancy in a dialysis patient in 1971 [6], the optimal management of pregnant dialysis patients has been an area of much discussion. The nature of this condition and the limited numbers preclude the inclusion of these patients in large clinical trials. Hence the bulk of data available to clinicians to help guide management is largely in the form of case reports and series. Despite major advances in this field, chronic and end-stage renal disease remains a major predictor of adverse outcomes in pregnancy [7]. In the 1990s, rates of 30% for first-trimester and 15% for second-trimester loss were reported. These studies also reported a live birth rate of 52% and an infant survival rate of 37% [8].
Herein, we report a single-center experience, from 2003 to 2010, of four cases of pregnant patients with ESRD requiring dialysis. Each of these cases resulted in the delivery of a live fetus. This improved outcome may reflect an overall improvement in the management of this population, with special attention paid to multiple risk factors. These include blood pressure, intravascular volume, anemia, acid-base balance, nutrition, and adequacy of dialysis.
Case 1
A 39-year-old African American female, G8 P1-1-6, was admitted to the LSU Health Sciences Center at 25 weeks of gestational age. Her past medical history included hypertension, diabetes mellitus, and hepatitis B and C. She also had a history of chronic kidney disease (CKD stage 4) thought to be secondary to a combination of diabetic nephropathy and extensive cocaine use. She was admitted for management of worsening renal function and uncontrolled hypertension. BUN and creatinine levels on admission were 53 mg/dl and 4.6 mg/dl, respectively (increased from 45 mg/dl and 2.8 mg/dl a month prior). Hemodialysis was initiated five days later via a right internal jugular vein tunneled catheter for worsening renal function, hypertension, and volume overload. Her blood pressure remained reasonably controlled with ultrafiltration on dialysis and medications. Antihypertensive medications were labetalol 600 mg BID and long-acting nifedipine 90 mg BID, in conjunction with hydralazine when needed. In addition, she received phosphate binders, vitamin D, multivitamin supplements, and erythropoietin during dialysis. Her hemoglobin levels rose steadily from 8.5 g/dl to within normal range. She underwent dialysis for four hours six days a week until delivery, with appropriate increases made to her dry weight in consultation with her obstetricians to help account for fetal growth. The average BUN and creatinine on dialysis were 31 mg/dl and 3.0 mg/dl. She successfully delivered a baby girl by caesarean section at 34 weeks, with APGAR scores of 9 and 9. Dialysis was continued from the day of delivery until her discharge, and she remains dialysis dependent to this day. The child is healthy and developing normally as reported by the patient.
Case 2
A 34-year-old African American woman with a history of diabetes mellitus with proteinuria and retinopathy, hypertension, and chronic kidney disease was admitted to the hospital with a gastrointestinal bleed, urinary tract infection, and deteriorating renal function. She was at twenty-two weeks gestational age. Her obstetric history included two preterm caesarean sections, at 32 and 30 weeks for non-reassuring fetal heart tones and preeclampsia respectively, and three first-trimester spontaneous abortions. She also admitted to cocaine use during this pregnancy. Her admission laboratory tests showed: BUN 34 mg/dl, creatinine 4.3 mg/dl, sodium 139 mEq/L, chloride 111 mEq/L, potassium 3.3 mEq/L, bicarbonate 16 mEq/L, and hemoglobin 9.2 g/dl. Her kidney function subsequently deteriorated, a tunneled catheter was placed in the right internal jugular vein, and hemodialysis was initiated. Renal replacement therapy was continued six days a week, each session lasting 4 hours, until delivery a month later at twenty-seven weeks gestational age. Her average serum BUN while on dialytic therapy was less than 30 mg/dl. This was attributed to her severe hypoalbuminemia of less than 2 g/dl, presumably due to diabetic nephropathy with 8 grams of proteinuria and possibly low protein intake. Her course on dialysis was uneventful except for intermittent difficulty controlling her blood pressure, which improved after an increase in the dose of her long-acting nifedipine to 60 mg PO BID, in addition to methyldopa 250 mg PO TID. She required blood transfusion for her gastrointestinal bleed and later received 3000 units of erythropoietin during dialysis. She was not prescribed any binders during her admission. She delivered a viable male infant with APGAR scores of 6 and 9 by emergency caesarean section following a 40% placental abruption. She was lost to follow-up after discharge, and information regarding her child's health is not available.
Case 3
A 38-year-old African American female with a history of chronic kidney disease (CKD stage 5) and uncontrolled hypertension was admitted to the hospital for blood pressure control at seven weeks gestation. She had four previous pregnancies with two successful deliveries and two miscarriages. She was not on any medications and had a history of marijuana use. Her laboratory tests showed a BUN of 24 mg/dl, creatinine 5.0 mg/dl, and bicarbonate 14 mEq/L; all other electrolytes were within normal limits. A tunneled catheter was placed in the right internal jugular vein, and she was initiated on hemodialysis the following day for blood pressure elevation and symptoms of uremia. Hemodialysis was provided 6 days a week, with each session lasting 4 hours. She was, however, noncompliant with both medical therapy and dialysis, signing out against medical advice and refusing admission on several occasions. As such, her blood pressure control was extremely variable, but almost always normalized on dialysis. Average BUN and creatinine levels were 25 mg/dl and 4.5 mg/dl, but renal function did deteriorate towards the date of delivery. She was readmitted later that month, at nine weeks gestation, on two occasions for episodes of hypertensive urgency with possible seizures. In the subsequent months she underwent hemodialysis at the labor unit six days a week but refused admission multiple times for further monitoring. During the days immediately preceding delivery she was on methyldopa 250 mg TID and labetalol 300 mg BID for blood pressure, and oral iron pills for anemia. She did receive 17,200 units of erythropoietin with her last dialysis session prior to delivery. She underwent emergency caesarean section at thirty-three weeks gestation for non-reassuring fetal heart tones and premature rupture of membranes, delivering a viable male infant with APGAR scores of 0, 4, and 7. Her postoperative course was complicated by group B streptococcal bacteremia, for which she received antibiotics. Both mother and child were otherwise healthy. She remained in ESRD and changed her dialysis modality to peritoneal dialysis. Her child is three years old and in good health.
Case 4
A 39-year-old African American female, G3 P1-0-11, presented at 28 weeks gestation with a dental abscess. Her medical history was significant for diabetes with retinopathy and proteinuria with CKD stage III/IV, hypertension, mitral valve repair, hyperthyroidism, and atrial fibrillation with a history of cardioversion. Laboratory tests on admission were BUN 31 mg/dl, creatinine 3.2 mg/dl, sodium 132 mEq/L, potassium 3.9 mEq/L, chloride 105 mEq/L, bicarbonate 17 mEq/L, glucose 395 mg/dl, LDH 668 u/L, uric acid 9.6 mg/dl, ALT 10 u/L, AST 23 u/L, calcium 7.8 mg/dl, albumin 2.3 gm/dl, and BNP 15,955 pg/ml. Hematology revealed anemia and thrombocytopenia. As her liver function tests remained normal, she was not thought to have HELLP syndrome. Her abscess was drained and she was started on antibiotics, but she left against medical advice without being dialyzed. She was readmitted two days later with progressive dyspnea, leg swelling, and worsening renal function. An internal jugular tunneled catheter was placed and hemodialysis initiated. She underwent daily dialysis, with appropriate increases made to her dry weight in consultation with her obstetricians. The average BUN ranged between 30 and 40 mg/dl. She was given intravenous iron for anemia and did not receive any erythropoietin. Her hemoglobin was maintained around 10 g/dl throughout her pregnancy. Her blood pressure remained elevated (average BP 140/110) except during dialysis, despite adjustment of ultrafiltration on dialysis and treatment with carvedilol, long-acting nifedipine, and hydralazine. At 29 weeks, she underwent emergency caesarean section due to persistently elevated blood pressure, thrombocytopenia, and abnormal liver function tests. She delivered a viable infant with APGAR scores of 8 and 9. At last follow-up the child was alive and healthy. She remained dialysis dependent thereafter.
DISCUSSION
Pregnancy in advanced kidney disease has traditionally been thought to be associated with poor outcomes, with early studies reporting live birth rates anywhere between 9% and 23% [2,9]. Babies born were likely to be premature, further decreasing the likelihood of progressing to healthy lives. Women on dialysis were thus previously counseled to avoid pregnancy. Recent studies have reported more successful outcomes based on observational data, which include different modalities of renal replacement therapy [10]. However, if pregnancy were to occur in a patient on hemodialysis today, the results reported in the literature [11] and our single-center experience (80% fetal survival, owing to one pregnancy resulting in neonatal demise shortly after delivery) would suggest a favorable outcome. In a recent case series reported by Luders et al. from Brazil, the success rate was 87% and was attributed to advances in the care of the pregnant dialysis patient [12].
A recurrent theme in the success of pregnancy in hemodialysis appears to be increased and sustained dialysis dosing throughout gestation. This approach has been suggested since the 1980s [9], but only recently have significant increases in dialysis dosing led to reported deliveries of viable infants [3,13]. Centers are now developing protocols based on factors affecting pregnancy outcomes to standardize the care of this population [14]. The increased frequency and dosage of hemodialysis leads to lower blood urea concentrations and better control of volume and blood pressure. In addition to ultrafiltration of fluid gained during the interdialytic interval, adjustments to the dry weight of the patient must be made to accommodate the growing fetus. In the second and third trimesters the dry weight should be increased by 0.5 kg each week. Excessive ultrafiltration should be avoided, as extreme falls in blood pressure during hemodialysis will compromise fetal blood flow. The dialysis regimen in each of our patients involved dialysis 6 days each week, achieving a total dose in excess of 20 hours per week, which resulted in a blood urea nitrogen level less than 50 mg/dl and optimal blood pressure control (Tables 1 and 2). Daily dialysis can also be provided by two other modalities, namely nocturnal hemodialysis and peritoneal dialysis. The use of nocturnal hemodialysis has been reported to result in successful pregnancy outcomes, with the most promising results reported in five women who had seven pregnancies and delivered six live infants using this modality [15]. Several centers have reported successful pregnancies with peritoneal dialysis [16]; however, registry data do not show differences in outcome between hemodialysis and peritoneal dialysis [5].
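The dosing targets above lend themselves to simple arithmetic; the sketch below computes the weekly dialysis hours for the 6-day schedule and projects the 0.5 kg/week dry-weight ramp, with the starting weight and gestational weeks chosen purely for illustration.

```python
# Minimal sketch of the dialysis-dose and dry-weight arithmetic described
# above. The starting dry weight and week range are hypothetical examples.
SESSIONS_PER_WEEK = 6
HOURS_PER_SESSION = 4

weekly_hours = SESSIONS_PER_WEEK * HOURS_PER_SESSION
print(f"weekly dialysis dose: {weekly_hours} h (target > 20 h/week)")  # 24 h

dry_weight_kg = 62.0              # hypothetical dry weight entering week 14
for week in range(14, 28):        # second trimester, +0.5 kg per week
    dry_weight_kg += 0.5
    print(f"gestation week {week + 1}: dry weight {dry_weight_kg:.1f} kg")
```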
Besides providing adequate hemodialysis, there are several other complications of ESRD that have to be addressed in the pregnant hemodialysis patient. Hypertension is a commonly reported maternal complication [11,17], and was present in each of our patients (Table 1). Commensurate with our experience, adequate control of blood pressure can decrease maternal complications and increase the likelihood of a successful delivery. Since ultrafiltration alone was insufficient to control the blood pressure, combinations of antihypertensive medications known to be safe in pregnancy were administered. These medications included long-acting nifedipine, methyldopa, hydralazine, and carvedilol.
Iron deficiency anemia is common in pregnant women, especially in the third trimester as fetal demands for iron increase [18]. It has been shown to increase preterm delivery and subsequent low birth weight [18]. Besides pregnancy-related factors, chronic renal disease and the hemodialysis procedure itself are additional risk factors for iron deficiency. Since chronic kidney disease is an inflammatory state, increased levels of the protein hepcidin interfere with the gut absorption of iron, and the hemodialysis procedure itself results in blood loss in the dialyzer. Thus, the nephrologist caring for the pregnant hemodialysis patient should frequently check iron stores and provide intravenous iron to prevent iron-deficiency anemia. Anemia also results from lack of erythropoietin, which can easily be administered either IV or SQ when the patient is receiving hemodialysis. Erythropoietin can be safely administered during pregnancy and is well tolerated. It does not appear to cross the placenta, due to its large size, and there has been no reported fetal or maternal morbidity from its use [19]. Cytokine production in gestation may lead to erythropoietin resistance [3]; despite receiving erythropoietin (Table 1), only 3 patients were within the goal of 10-11 g/dl prior to delivery (Table 2). Pregnant women have a chronic respiratory alkalosis from stimulation of the respiratory center by progesterone and lifting of the diaphragm by the enlarged uterus. The serum bicarbonate is typically 20-22 mEq/L in compensation for the respiratory alkalosis. The pregnant hemodialysis patient should be dialyzed against a lower bath concentration of bicarbonate to achieve lower serum bicarbonate levels [14,16]. Hypophosphatemia can develop in patients on daily dialysis, especially in patients who are compliant with a low phosphorus diet and phosphate binders. Two of the four patients in this series had phosphate levels above the goal of 5.5 mg/dl just prior to delivery. All patients received folate and prenatal vitamins, as water-soluble vitamins are lost across the dialysis membrane.

Premature delivery in these patients continues to be an issue, as confirmed by the cases presented, and progress in the quality of care delivered to these neonates has also helped to improve outcomes. Our single-center experience is comparable to similar reports worldwide, and indicates a trend toward improved outcomes in pregnancy if managed with vigilance and close follow-up. The management of renal disease in the patients at our center was commensurate with proposed targets in the literature. The experience at our center does highlight the challenges of managing high-risk patients with great difficulties in achieving compliance as well as a history of toxic ingestion. Despite these presumably poor prognostic indicators, each pregnancy did result in a live birth and favorable maternal outcomes. Often we were not able to reach the optimal goals recommended due to patient factors. Specific strategies and their relative importance in optimizing care should be further researched. Registry data or pooled information from case series is probably the best approach to develop protocols and guidelines to optimize the care of pregnant patients on hemodialysis.
All four patients became pregnant when they had CKD and were not on hemodialysis.This may have played a role in their ability to conceive and the successful outcome of their pregnancies.Hemodialysis was needed during the course of their pregnancy.All patients came from a poor socio-economic background and therefore did not get prenatal care.In addition they had a history of drug use and were all over thirty years of age.However, once they needed hemodialysis they received sustained medical attention from their nephrologists and high-risk obstetricians that may have also played a part in the successful outcome of their pregnancies.
Table 2 . Laboratory data on all patients drawn on date of last dialysis prior to delivery.
dialysis especially in patients who are compliant with a low phosphorus diet and phosphate binders.Two of the four patients in this series had phosphate levels above the goal of 5.5 mg/dl just prior to delivery.All patients received folate and prenatal vitamins as water soluble vitamins are lost across the dialysis membrane.
*Serum phosphorus was drawn 2 days prior to delivery.daily
Cadaveric simulation versus standard training for postgraduate trauma and orthopaedic surgical trainees: protocol for the CAD:TRAUMA study multicentre randomised controlled educational trial
Introduction The quantity and quality of surgical training in the UK has been negatively affected by reduced working hours and National Health Service (NHS) financial pressures. Traditionally surgical training has occurred by the master-apprentice model involving a process of graduated responsibility, but a modern alternative is to use simulation for the early stages of training. It is not known if simulation training for junior trainees can safeguard patients and improve clinical outcomes. This paper details the protocol for a multicentre randomised controlled educational trial of a cadaveric simulation training intervention versus standard training for junior postgraduate orthopaedic surgeons-in-training. This is the first study to assess the effect of cadaveric simulation training for open surgery on patient outcome. The feasibility of delivering cadaveric training, use of radiographic and clinical outcome measures to assess impact and the challenges of upscaling provision will be explored. Methods and analysis We will recruit postgraduate orthopaedic surgeons-in-training in the first 3 years (of 8) of the specialist training programme. Participants will be block randomised and allocated to either cadaveric simulation or standard ‘on-the-job’ training, learning three common orthopaedic procedures, each of which is a substudy within the trial. The procedures are (1) dynamic hip screw, (2) hemiarthroplasty and (3) ankle fracture fixation. These procedures have been selected as they are very common procedures which are routinely performed by junior surgeons-in-training. A pragmatic approach to sample size is taken in lieu of a formal power calculation as this is novel exploratory work with no a priori estimate of effect size to reference. The primary outcome measure is the technical success of the surgery performed on patients by the participating surgeons-in-training during the follow-up period for the three substudy procedures, as measured by the implant position on the postoperative radiograph. The secondary outcome measures are procedure time, postoperative complication rate and patient health state at 4 months postoperation (EQ-5D—substudies 1 and 2 only). Ethics, registration and dissemination National research ethics approval was granted for this study by the NHS Research Authority South Birmingham Research Ethics Committee (15/WM/0464). Confidentiality Advisory Group approval was granted for accessing radiographic and outcome data without patient consent on 27 February 2017 (16/CAG/0125). The results of this trial will be submitted to a peer-reviewed journal and will inform educational and clinical practice. Trial registration number ISRCTN20431944
INTRODUCTION
It is imperative that surgeons are trained to a high standard, so they can perform safe and effective operations for patients. The quality and quantity of surgical training in the UK is currently under threat from a 'perfect storm' of factors. 1 These include reduced working hours, 2 3 shift-based working patterns 4 with the loss of the traditional surgical firm and a move to expedite training and shorten specialist programmes. 5 6 This is set within a climate of unprecedented financial austerity in the NHS and ever-increasing service pressures.
Strengths and limitations of this study
► This is the first randomised controlled trial assessing the impact of cadaveric simulation training on clinical outcomes.
► Patient-centred outcome measures are used to measure an educational intervention for surgeons.
► Multicentre study to maximise external validity of the results.
► The training dose is small as cadaveric training is expensive to deliver.
► Pragmatic approach to sample size, which is limited by the capacity of the surgical training centre.
Simulation offers a solution to some of these challenges by moving the early part of the surgical learning curve away from patients into a controlled environment, 7 where skills may be more rapidly acquired as compared with the clinical environment. Simulation is also potentially a very efficient way of training, as large numbers of trainees can be trained simultaneously, at an intensity not feasible in the clinical environment due to competing service demands.
Cadaveric simulation (training using deceased, preserved or fresh human bodies) is a particularly promising modality for training. Fresh-frozen cadavers retain many of the soft tissue handling characteristics seen in live patients, and in combination with presenting the correct anatomy, particularly complex neurovascular relationships, may offer a more realistic simulated operation than would be possible on a plastic model or virtual reality simulator. 8 9 Cadaveric material does not bleed 10 and hence may be less useful for simulating procedures where haemorrhage control is an important feature.
The operating theatre environment can be simulated, including (but not limited to) surgical dress, draping, instrumentation and multidisciplinary team. This 'whole dress rehearsal' for surgery may enhance development of nontechnical skills in addition to the technical operative surgical skills. 11

There are several challenges in delivering cadaveric simulation training. It is expensive to provide, 9 particularly when cadaveric material has to be purchased under license, where there is not a local body donation programme. It requires considerable infrastructure to deliver, including specialist wet laboratory facilities with the appropriately trained staff. These challenges become particularly pressing when provision of cadaveric training on a large scale is considered, and are an important driver in the development of high-quality evidence of educational impact. This evidence is necessary before considerable financial investment can be recommended in providing cadaveric simulation training on a larger scale.
There is abundant low-quality evidence showing cadaveric simulation may induce short-term skill improvement as measured by subjective and behavioural metrics, but there is a lack of high-quality, quantitative evidence that skills learnt in cadaveric simulation can transfer to the workplace, leading to improved outcomes for patients. 8 Our trial attempts to address this evidence deficit, and is both topical and timely.
GOOD CLINICAL PRACTICE
This trial will be undertaken in compliance with Good Clinical Practice guidelines, the Declaration of Helsinki and UK legislation. Warwick standard operating procedures (SOPs) will be followed.
CONSOLIDATED STANDARDS OF REPORTING TRIALS
The results of the trial will be reported in line with the Consolidated Standards of Reporting Trials (CONSORT) statement. 12 This protocol has been written according to the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) reporting guidelines. 13
AIM
The aim is to determine which of two surgical training strategies for junior orthopaedic surgeons-in-training leads to the best patient outcomes for three common procedures.
OBJECTIVES
1. To assess the impact of a cadaveric simulation training intervention on the patient outcome of operations performed by junior orthopaedic surgeons-in-training.
2. To define the early learning curve of dynamic hip screw (DHS), hemiarthroplasty and ankle fracture fixation.
3. To explore the feasibility of using postoperative X-rays to assess technical skill.
METHODS AND ANALYSIS

Study design
This is a UK multicentre, two-arm, parallel-group randomised controlled educational trial.
Sample size
This trial is the first attempt to objectively measure transfer of open operative skills from cadaveric simulation into the workplace using patient-based outcome measures.
There is no available estimate of effect size to reference against a priori in determining sample size, therefore a pragmatic approach to sample size will be taken in lieu of a formal power calculation. The surgical training centre can accommodate 16 delegates at one time and financial resources permitted one iteration of the cadaveric training course. Our maximum sample size is therefore 16 participants in each arm of the study.
OUTCOME MEASURES

Radiographic outcomes
The radiographs will be obtained electronically from hospital servers and the implant position measured manually using computer software. The operations will be identified retrospectively by access to the participating surgeons' electronic logbooks. The measurements vary by operation type and are defined as follows.
Clinical outcomes
The clinical outcome measures for substudies 1-3 are as follows:
1. Procedure time. Defined as knife-to-skin/surgical start time to wound closure/surgical stop time. These will be obtained from hospital theatre management systems. Procedure time has been chosen as an outcome measure as there is evidence in the literature that procedure time is inversely related to experience, and so can be used as a surrogate measure of technical proficiency. 14
2. Intraoperative radiation dose to patient. Defined as time under fluoroscopy (seconds) and radiation dose (mGy·m²). There is evidence that with increasing experience and skill, surgeons use fewer intraoperative X-rays to adjust the position of the fracture and implant. 14 Hemiarthroplasty does not require fluoroscopy so this will not be used as an outcome measure for substudy 2.
3. Postoperative complication rate. The complications of interest are the acute postoperative complications during the inpatient admission. These will be subcategorised as acute medical complications (hospital-acquired pneumonia, renal complications, cardiac complications, deep vein thrombosis (DVT)/pulmonary embolism (PE)) and surgical complications (wound infection, wound dehiscence, metalwork failure, deep infection).
4. Health state at 4 months postoperation (EQ-5D).
Health state at 4 months postoperation will be measured using EQ-5D, a standardised instrument measuring generic health status that has been widely validated in clinical trials. These data are being collected separately as part of the WHiTE comprehensive cohort study of patients with hip fracture (ISRCTN63982700) and reported elsewhere. 15 EQ-5D will be used for substudies 1 and 2 only as these involve hip fractures.
Surgeon participants
Potential study participants will be provided with written and verbal information about the study. Consent will be obtained by the trial team. The right to refuse participation without giving reasons will be fully respected, and enrolled participants will be free to withdraw from the study at any time without reason, and without prejudice to their training. All participants will be provided with the contact information of a team member who can provide further information about the study. All participants who are allocated to the control group will have the opportunity to undertake the cadaveric simulation training intervention at the end of the study follow-up. This provision is being offered so that the control group are not disadvantaged in their access to educational opportunity by virtue of being randomised to the control group.
Patients whose operations are assessed

Patients who undergo an operation by a surgeon who is participating in the study will not be separately consented to allow access to radiographs to assess their implant position or clinical outcome data. Permission to access this information for the purposes of this study without patient consent has been granted from the confidentiality advisory group (16/CAG/0125). It is recognised that seeking consent from a group of primarily elderly, frail patients to assess low risk, routine clinical data in a secure manner for a trial they are not directly participating in would be unduly burdensome for the patients. All patient data will be fully anonymised and handled securely in line with university data regulations.
Randomisation
Participants will be randomised at the point of recruitment using block randomisation (block size 4) to generate a random sequence list, to which participants will be assigned in the order that they enter the study. The allocation sequence will be generated by a senior medical statistician; participants will be enrolled by the trial team.
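The permuted-block scheme described above is straightforward to express in code. A minimal sketch, assuming two arms and a fixed seed for reproducibility (function name, arm labels and seed are illustrative, not the trial statistician's actual code):

```python
# Hypothetical sketch of permuted-block randomisation as described above
# (two arms, block size 4). Names and seed are illustrative assumptions.
import random

def block_randomisation(n_participants, block_size=4,
                        arms=("cadaveric", "standard"), seed=2017):
    """Generate an allocation sequence in shuffled blocks so that group
    sizes stay balanced throughout recruitment."""
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    sequence = []
    while len(sequence) < n_participants:
        block = [arm for arm in arms for _ in range(per_arm)]
        rng.shuffle(block)  # permute allocations within the block
        sequence.extend(block)
    return sequence[:n_participants]

# Example: allocation list for the maximum of 32 participants (16 per arm)
print(block_randomisation(32))
```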
Postrandomisation withdrawals
Withdrawn participants will not be replaced.
Study setting
The study participants will be on training rotations within the regional hospitals of the West Midlands during the study follow-up. The hospitals where trainees have been working, and performing operations, during the study follow-up will be identified from the participants' electronic surgical logbook records.
Control group
The control group will undertake standard residency training according to the master-apprentice model, which is the current standard practice in the UK. No additional training or access to learning materials will be provided beyond the fortnightly didactic teaching sessions which are delivered as a part of routine training.
Intervention group (cadaveric simulation trained)
Participants allocated to the intervention group will receive an intensive, 2-day cadaveric simulation training course at the start of the training year, where four common orthopaedic surgical procedures will be taught (DHS, hemiarthroplasty, ankle fracture fixation and lower limb fasciotomy). All intervention participants will receive training on all four procedures, which will be considered separately in the analysis as individual substudies (as they have different radiographic outcome measures). The fasciotomy procedure is included as a 'filler' to make the course structure work, and chosen because it is an important high-stakes, anatomically critical operation that is rarely performed by trainees. Outcomes related to the fasciotomy procedure will not be collected or included in the analysis.
The cadaveric simulation training course

The course will be delivered in September at the start of the surgical training year (which runs August to August). The course will take place in the West Midlands Surgical Training Centre (WMSTC) at the University Hospital Coventry & Warwickshire (UHCW). The WMSTC is a specialised wet-laboratory facility for delivering cadaveric training, and has an experienced dedicated faculty to facilitate training delivery. The course will consist of two full days of teaching, with expert consultant faculty teaching on fresh-frozen hemicadavers (waist-to-toe-tip). The participant:faculty ratio will be 2:1, and the participant:cadaver ratio will be 2:1. Each participant will undertake each of the four procedures in its entirety as primary surgeon ('skin-to-skin'), and will act as assistant four times when their partner is the primary surgeon. Hence each participant is exposed to eight procedures during the course.
The environment and psychological fidelity of the simulation will be maximised by providing:
1. Full surgical dress including masks, gloves, gowns and lead X-ray aprons.
2. The usual disposable surgical drapes.
3. Skin preparation (iodine solution) to prepare the surgical site; participants and faculty will be asked to observe the usual sterile field precautions as in real theatre.
4. Full surgical instrument trays, surgical implants and cement (for hemiarthroplasty) of the same type as in real theatre.
5. Image intensifier (mobile X-ray) available for intraoperative use.
6. Background noise levels and room temperature maintained at what would usually be expected in the operating theatre.

The simulated operating theatres will be set up within the WMSTC as two parallel round-robin circuits. The two stations requiring X-ray use (DHS and ankle open reduction internal fixation (ORIF)) will be set up at the far end of the room to create a radiation zone and, where appropriate, standard precautions will be used. Careful consideration will be given to the optimum sequential use of the cadaveric specimens in planning the course structure. For example, it is necessary that the DHS station precedes the hemiarthroplasty station, as it would obviously not be possible to perform a DHS operation when the femoral head had been removed. Similarly, the fasciotomy incisions would compromise the soft tissue envelope of the lower limb to a sufficient degree that the fidelity of the ankle ORIF station would be compromised. It is important to make the best and most efficient use of the cadaveric material, for both ethical and financial reasons.
Blinding
The participants cannot be blinded to the type of training they receive, neither can the trial team in organising the cadaveric simulation training. The trial team will take no part in the training of participants. The assessment of radiographic images will be made blinded to group allocation.
Adverse event management
In the unlikely event of a serious adverse event, the chief investigator will report to the sponsor (University of Warwick), ethics committee and project supervisors.
Patient and public involvement
There was no direct patient or public involvement in the design of the study, although clearly training competent surgeons is in the public interest. There is qualitative work to be done around this trial to better understand patient expectations of surgical training.
End of trial
The trial will end when all the radiographic and clinical outcome data have been collected from the participating sites. The trial will be stopped prematurely if required by the ethics committee, following recommendations from the sponsor, or if funding for the study is withdrawn. The research ethics committee and confidentiality advisory group will be notified in writing once the trial is complete.
Trial oversight
This trial is being undertaken as part of a doctoral research project (HKJ), and supervised by three senior supervisors (DG, JDF, GTRP). The supervisors will act as the trial management group and steering committee. The trial is being conducted within a registered Clinical Trials Unit (CTU), and will follow the CTU SOPs.
Data collection plan
Data on the numbers of procedures performed by the participating surgeons at baseline will be collected. The operations performed by the participants during study follow-up will be identified by the surgeons' electronic logbook. Only procedures coded as 'S-TS: supervised-trainer scrubbed' or 'S-TU: supervised trainer unscrubbed' will be included in the analysis. This is to ensure that only procedures where the trainee has performed the key parts (S-TS) or the entire procedure (S-TU) are included. If further information on supervisor input/takeover is required this can be obtained by accessing the corresponding procedure based assessment (PBA) record for the operation. PBAs are routinely collected as part of training.
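The logbook inclusion rule amounts to a simple filter on the supervision code. A hedged sketch with an assumed record layout (the field names are illustrative, not the e-logbook's real schema):

```python
# Sketch of the inclusion rule: keep only procedures coded 'S-TS'
# (supervised, trainer scrubbed) or 'S-TU' (supervised, trainer
# unscrubbed). The record structure is an assumption for illustration.
INCLUDED_CODES = {"S-TS", "S-TU"}

def eligible_procedures(logbook_records):
    """Filter logbook entries to those the trainee performed the key parts
    of, or all of, as described in the protocol's analysis rule."""
    return [r for r in logbook_records
            if r.get("supervision_code") in INCLUDED_CODES]

records = [
    {"operation": "DHS", "supervision_code": "S-TS"},
    {"operation": "Hemiarthroplasty", "supervision_code": "A"},  # assisting only
    {"operation": "Ankle ORIF", "supervision_code": "S-TU"},
]
print(eligible_procedures(records))  # keeps the DHS and ankle ORIF entries
```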
Procedure data will be extracted and anonymised to study identifier by the electronic logbook data team, before being sent to the trial team. The data will include operation type, date, hospital, hospital ID, patient age, American Society of Anaesthesiologists Grade and supervision code. The radiographs and clinical outcome data relating to these procedures will then be obtained from the study sites via liaison with the respective Research & Development Departments. Data will be entered into a secure trial database on a professionally encrypted trial-specific computer, fully anonymised with only study identifiers. Once data collection is complete, and prior to analysis, range checks for data values will be undertaken, and data will be double-checked on entry to the statistical software package. The project supervisors will act as the data monitoring committee. No interim analysis will be undertaken. The trial team and statistician will have access to the final trial dataset.
Statistical analysis plan
Baseline data including completed months of training and number of prior procedures performed will be summarised and compared between the two arms of the study. A CONSORT chart showing the flow of participants through the study will be produced. The three taught procedures (substudies 1-3) will be analysed and reported individually.
The main analysis will investigate and report differences between the two groups with respect to the implant positions (as measured from radiographs), the procedure times, the intraoperative radiation dose to the patient, and patient outcomes, as measured by postoperative complications and health state at 4 months postoperation (hip fractures only).
Statistical tests will be two-sided and considered to demonstrate a significant difference when p<0.05. Temporal trends by group for implant position, procedure time and radiation dose will be presented. Linear mixed-effects models will be fitted to allow for within-surgeon correlation between repeated observations (surgeon clustering as a random effect), and to adjust for important covariates such as patient condition, age and surgeon experience. These will be summarised by plotting individual learning curves, and then modelled to estimate the overall learning curves for the two arms of the study.
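As a rough illustration of the planned random-intercept analysis, the following sketch fits such a model with statsmodels on toy data; all column names, values and covariates are invented for the sketch, and the real analysis will be specified by the study statisticians:

```python
# Toy illustration of a linear mixed-effects model with a random intercept
# per surgeon, fitted with statsmodels. The data values are not trial data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "surgeon_id":    [1, 1, 1, 2, 2, 3, 3, 3, 4, 4],
    "group":         ["cadaveric"] * 5 + ["standard"] * 5,
    "experience":    [0, 1, 2, 0, 1, 0, 1, 2, 0, 1],   # prior case count
    "implant_score": [2.1, 1.8, 1.5, 2.3, 2.0, 2.8, 2.6, 2.4, 3.0, 2.7],
})

# The random intercept absorbs within-surgeon correlation between the
# repeated operations performed by the same trainee.
model = smf.mixedlm("implant_score ~ group + experience",
                    data=df, groups=df["surgeon_id"])
result = model.fit()
print(result.summary())
```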
Descriptive statistical analyses of between-group comparisons will be presented for complication rate and health state, with temporal analysis of the latter being reported if appropriate and feasible. The statistical analysis will be supervised and checked by a senior medical statistician at Warwick University.
In the event of missing data, statistician advice will be sought on multiple imputation.
ETHICS AND DISSEMINATION
Master-apprentice 'on-the-job' training for surgeons is the current training standard in the UK, 10 16 and therefore the control arm of the study reflects usual practice. The cadaveric simulation training intervention is an experimental educational intervention and does not expose trial participants to any substantial risks of harm. The trial results will be reported in accordance with the CONSORT statement, and disseminated through publication in peer-reviewed journals and conferences. The results of the trial will be presented to Health Education England and the Royal Surgical College. The dataset, statistical code and technical appendices will be made available on request to the corresponding author. The study was approved by the NHS Research Authority South Birmingham Research Ethics Committee (15/WM/0464).
Twitter Hannah K James @hannah_ortho and Damian Griffin @DamianGriffin
Contributors HKJ designed the study and wrote the manuscript. GTRP codesigned the study and the intervention and edited the manuscript. JDF edited the manuscript, made a substantial contribution to the design and is lead supervisor for the qualitative part of the project. DG codesigned the study, edited draft protocols and is lead supervisor for the quantitative part of the project.
Funding This work was supported by Versus Arthritis grant number 20845.
Competing interests None declared.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; externally peer reviewed.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.
Results of the cementless Plasmacup in revision total hip arthroplasty: a retrospective study of 72 cases with an average follow-up of eight years
Background There are multiple revision implant systems currently available for socket revision in revision total hip arthroplasty. Up until now, not all of these systems have been followed up with regard to their long-term use as revision implants. For the first time, this study presents the hemispherical porous-coated socket Plasmacup SC, produced by Aesculap, Tuttlingen, Germany, and the clinical and radiological mid-term results of this revision cup implant. Methods Over a period of ten years the Plasmacup SC press-fit-cup was used as a revision implant in 72 consecutive aseptic cases which were included in this retrospective study. The mean follow-up period was 8 years. Bone graft transplantation was performed in 32% of all cases. In 90%, the cup was fixed with additional screws. The follow-up radiographs were analysed with regard to cup migration, osteointegration and osteolysis in the DeLee zones using a computer-aided program taking the teardrop figure as the main point of reference. For clinical evaluation the Harris-Hip-Score and the WOMAC-Score were utilized. Results At the follow-up examination, the mean Harris-Hip-Score was 83.5 points and the mean WOMAC-Score 34.7 points. 93% of all patients were satisfied with the result of the operation. No aseptic cup loosening could be observed and only one cup had to be removed due to infection. No significant longitudinal or transversal cup migration could be observed. Conclusion Aesculap's Plasmacup SC is suitable as a cementless cup revision implant. Stable cup osteointegration is achieved after press-fit implantation, even in the case of major acetabular bone defects.
Background
The significance of revision total hip arthroplasty is continuously increasing. While in 2003 the ratio of primary endoprostheses to revision surgery was approximately 1:14, it was stated as 1:7 in 2006 [1]. In a study published recently, based on data obtained from the Finnish arthroplasty register, similar long-term survival rates were described for cemented and cementless THA in patients aged more than 55 years [2]. Whereas aseptic loosening is the most common reason for revision of cemented cups, polyethylene wear and osteolysis are mainly responsible for revision of cementless cups [3]. The acetabular component is affected twice as frequently as the stem [4]. The aim of socket revision surgery is the permanent and solid fixation of the new socket, the reconstruction of the acetabular bone stock and the correct rebuilding of the hip's centre of rotation. For implant revisions there is a great variety of models using cementless or cemented fixation techniques. Cementing a cup into the existing defect often provides bad results in the case of revision. Engelbrecht et al. report 29% loosening after 8 years [5]. The results of cemented cups in revision THA can be improved using impaction bone grafting. The advantages of this method include the ability to restore bone stock, rebuild normal hip center and hip biomechanics, and increase bone stock for future revisions [6]. Sembrano and Cheng described acceptable results with five-year loosening-free and acetabular reoperation-free survivorships of 80.7% after application of trabecular metal acetabular cages as acetabular revision implants [7]. On the one hand this procedure is complex, but on the other hand even major bone defects can be treated [8].
There are differing results after the implantation of a cementless oblong revision cup. Koster et al. observed 2% aseptic loosening after 3.6 years, Götze et al. 12% after an average of 2.8 years [9,10].
Another possibility is the implantation of a cementless hemispheric press-fit-cup with the option of additional screw fixation. The applicability of this type of socket has been well documented for some models [11][12][13]. One significant advantage is that the implantation technique is less complex. The question of whether bone graft transplantation is necessary in order to achieve good long-term results is still a point of discussion. Parratte et al. found good results using hemispherical press-fit cups with morselized bone graft for both the restoration of the acetabular bone stock and the stabilization of the cup [14].
Other authors take the position that hemispherical sockets can only achieve long-term implant fixation in acetabular defects that are not extensive. Christie points out that in order to achieve intimate contact between implant and host bone, which is critical for stability since bone ingrowth requires complete absence of micromotion, the implant must match the defect or be able to bridge it. For this reason, increasing bone loss requires the use of other revision implants such as a hooked roof cup or an oblong cup [15]. Although the design of the Plasmacup SC (Aesculap, Tuttlingen, Germany) is similar to other press-fit cups, the Plasmacup is provided with a special rough titanium micro-porous coating with a smaller pore size (50-200 μm) compared to other press-fit cups. Because of the rough surface and the osteoconductivity of the titanium coating, higher primary and secondary stability is expected [16].
The aim of the present retrospective study was to describe the clinical and radiological results of the Plasmacup SC in order to show the applicability of this device as a revision implant in revision THA.
Methods

72 socket revisions were carried out from 01 January 1998 until 31 December 2007, using the cementless press-fit-cup Plasmacup SC, produced by Aesculap, Tuttlingen, Germany.
Surgery was performed on 69 patients, 3 of whom were operated on both sides. At the time of surgery, the average age of the patients was 65.4 (43-81) years. The reason for the revision surgery was an aseptic loosening of the socket. 47 cemented cups and 25 cementless cups were revised. 31 socket revisions were performed together with stem revisions. Bony defects were classified by an independent examiner according to Paprosky on the basis of the operative reports and using preoperative radiographs (a.p. and oblique views) [17]. Whether and to what extent autogenous or allogenic bone grafting was necessary was decided by the surgeon during surgery. 95% of the operations were carried out by one surgeon, 3% by a second and 2% by a third one. Bauer's transgluteal approach was chosen in all cases and the cementless press-fit-cup, the concept of which was thoroughly researched and described by Stalforth et al. in 1998, was implanted [18]. The average size of the implanted sockets in men was 60.1 (52-66) mm and in women 56.3 (46-66) mm. The cup diameter exceeded that of the explanted sockets by an average of 7.3 (4-10) mm.
Patient follow-up examination
Full ethical approval was granted for the project by the local ethics committee. Preoperative informed consent was obtained in all cases prior to the inclusion into this study.
The mean observation period was 97 (5-120) months. 58 patients had a minimum follow-up of 24 months. Out of 72 socket revisions performed on 69 patients, 68 socket revisions performed on 66 patients could be entered into the study. 4 socket revisions performed on 3 patients could not be entered into the study. Due to an early infection, one socket had to be explanted 35 days after surgery. No data collection could be carried out on 2 further patients who received 3 socket revisions because the patients moved to an unknown address and no information was obtainable about their postoperative course.
Out of the 66 patients, 55 were followed up within the framework of the study. 11 patients were deceased at the time of the follow-up examination. For possible future studies, we routinely collect the data for the Harris-Hip-Score and the WOMAC-Score (Western Ontario and McMaster Universities Score) for our patients treated with revision total hip arthroplasty in each check-up examination. For the deceased patients the data for this study was taken from their last check-up. The deaths were not connected to the socket revisions.
For all other patients the Harris-Hip-Score and the WOMAC-Score were collected at the follow-up examination by an independent examiner (orthopedics specialist) [19,20]. The Harris-Hip-Score result was assessed as "very good" with a score of 90 -100 points, as "good" with 80 -89 points and as "satisfactory" with 70 -79 points. Point scores below 70 showed a bad result. The WOMAC-Score examined the areas "pain", "stiffness" and "physical activity". The maximum points achievable were 240. A high score indicated a bad clinical result.
All patients were asked whether and to what extent they were still taking pain killers due to hip pain at the time of examination. For pain quantification, all participants assessed their existing hip pain using the "visual analogue scale" (VAS) and point scores between 0 (no pain) and 10 (strongest pain) [21].
Radiograph analysis
All available radiographs were divided into 6 groups in order to obtain sufficiently sized sets:
- postoperative (all radiographs up to postoperative day 42)
- 0.5 years (all radiographs from postoperative day 43 to 9 months post operation)
- 1 year (all radiographs from 10 months to 1.75 years post operation)
- 2-3 years (all radiographs from 1.76 to 3.5 years post operation)
- 4-5 years (all radiographs from 3.6 to 5.5 years post operation)
- more than 6 years (all radiographs older than 5.5 years)

The classification was necessary because the check-up examinations did not always occur at regular intervals and radiographs were not always available for every patient at the time of every follow-up examination.
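The grouping rule can be written as a simple threshold function. A sketch following the boundaries above (note that the original definition leaves radiographs taken between 9 and 10 months unassigned; this sketch places them in the "1 year" group):

```python
# Sketch of the follow-up grouping rule above. Boundaries follow the text;
# the gap between 9 and 10 months is assigned to the "1 year" group here.
def followup_group(days_postop):
    years = days_postop / 365.25
    if days_postop <= 42:
        return "postoperative"
    if years <= 0.75:            # day 43 to 9 months
        return "0.5 years"
    if years <= 1.75:
        return "1 year"
    if years <= 3.5:
        return "2-3 years"
    if years <= 5.5:
        return "4-5 years"
    return "more than 6 years"

for days in (30, 120, 400, 1200, 1900, 2400):
    print(days, "->", followup_group(days))
```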
Thus, 336 radiographs were entered retrospectively into the study. The radiographs were digitized using a film digitizer VXR-12 (Vidar Systems Corporation, Herndon, Virginia, USA) and processed with the "Wristing" programme. This programme was first introduced by Bach et al. in 2005 and was validated for the digital measurement of radiographs [22]. It uses the bottom edge of the teardrop figure as point of reference. In order to determine the cup migration, the following four distances were observed using the "Wristing" programme:
- top edge of cup to teardrop figure
- medial edge of cup to teardrop figure
- cup centre to teardrop figure, longitudinal
- cup centre to teardrop figure, transversal

When evaluating the postoperative results, the following radiological findings were taken as an indication for cup loosening [23,24]:
- a circumferential radiolucent zone of more than 2 mm
- cup migration of more than 3 mm
- change of inclination of more than 8 degrees

Osteolyses and radiolucent lines were determined in the zones defined by DeLee and Charnley by dividing the contact area from cup to bone into three segments.
Radiolucency was classified according to position, size and progression [25].
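Taken together, the three loosening criteria above reduce to a disjunctive check on the measured values. An illustrative sketch (the function and argument names are assumptions, with inputs taken from the "Wristing" measurements relative to the teardrop figure):

```python
# Illustrative check of the three radiological loosening criteria.
def cup_loosening_suspected(radiolucent_zone_mm,
                            migration_mm,
                            inclination_change_deg):
    """Return True if any single published criterion is met."""
    return (radiolucent_zone_mm > 2.0
            or migration_mm > 3.0
            or inclination_change_deg > 8.0)

print(cup_loosening_suspected(1.0, 3.4, 2.0))   # True: migration above 3 mm
print(cup_loosening_suspected(1.0, 1.0, 3.4))   # False: all within thresholds
```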
Statistics
The measurements for the cups' individual movement directions were evaluated using a mixed linear model. The individual patient measurements were modelled as a random effect. The basis for this was the immediate postoperative reading. For the observation of the change in position over the entire period, a variance analysis (F-test) was applied. All available radiographs were used for the adaptation of the model. The evaluation was carried out using the statistics programme "R" of the R Foundation for Statistical Computing, Vienna, Austria. The significance level was set at 5%. A normal distribution of the measured data was assumed for the calculation.
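The across-period comparison described above corresponds to a one-way analysis of variance. A minimal sketch with SciPy on made-up measurements (the study's actual model additionally treated the individual patient as a random effect, which this simple F-test omits):

```python
# One-way ANOVA (F-test) across follow-up groups, of the kind used to test
# for a change in cup position over time. All measurements are invented.
from scipy.stats import f_oneway

# Distance from the top edge of the cup to the teardrop figure (mm)
postop    = [22.1, 23.4, 21.8, 22.9]
one_year  = [22.3, 23.6, 21.9, 23.0]
six_years = [22.4, 23.5, 22.1, 23.2]

f_stat, p_value = f_oneway(postop, one_year, six_years)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # p >= 0.05: no significant migration
```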
Results
The average Harris-Hip-Score was 83.5 (9-100) points at the follow-up examination, corresponding to an assessment of "good". The mean value for the category "pain" was 41.5 (0-44) points, and hip function had an average of 34 (0-47) points. The Harris-Hip-Score result was assessed as "very good" in 45% of the cases; 21% were "good" and 23% of patients had a "satisfactory" result. In 11% of the cases, the result was "bad".
The mean WOMAC-Score was 37.4 (0-204) points. Patients rated pain in the operated hip joint with an average of 1.3 (0-7) on the VAS. 4 patients (5%) took pain killers regularly at the time of the last follow-up examination due to discomfort in the hip joint operated on. These patients stated an average value of 4.8 (4-7) on the VAS. 7% of all patients had a positive Trendelenburg sign. No patient showed symptoms of anterior psoas irritation.
All cups included in the follow-up examination were in place at that time.
Radiograph analysis
The average distance between the top edge of the cup and the teardrop figure changed only very slightly during the follow-up period of more than 6 years (additional file 1). The differences of up to 0.75 mm in the measurement data fall within the limits of measurement accuracy (additional file 1).
No migration in longitudinal direction could be ascertained during the follow-up examination period.
No significant change of position in transversal direction could be detected. The distances between the medial cup edge and the teardrop figure, if anything, became smaller. The comparison of the distances from the teardrop figure to the centre of the cup in transversal direction shows no statistically relevant medial movement of the cup's edge. All observed differences are within the margin of error of the measurement procedure.
The comparison of cup inclination shows an increase in inclination (p < 0.0001) of an average of 3.4° between the postoperative radiograph and that taken after more than 6 years. The anteversion showed no significant change in position over the entire period (additional file 1).
In summary, it can be stated that a statistically relevant change in position of the implanted cups only exists with regards to the increase of 3.4° in cup inclination between the radiograph taken directly post-operation and the one taken after 6 years. A large part of this change in position took place between the time directly post-operation and 1 year post-operation. The exact data are shown in additional file 1.
Evidence of radiolucent lines could be confirmed postoperatively as follows: 24% of patients in DeLee zone 1, 6% in zone 2 and 8% in zone 3. Apart from 3 lines in DeLee zone 3, the radiolucent lines regressed in the course of 2 years (Figures 1 and 2). The cups with the 3 remaining radiolucent lines showed no increased migration and the lines did not increase during the observation period. Allogenic bone material was transplanted in 2 of these patients during revision surgery and in one case revision surgery was done without performing bone graft transplantation. Regarding the preoperative acetabular defects, the following data emerged in the 68 post examined cases: 15% Paprosky Type 1, 15% Paprosky Type 2a, 44% Paprosky Type 2b, 7% Paprosky Type 2c, 17% Paprosky Type 3a and 2% Paprosky 3b. There was no correlation found between the Paprosky classification and the clinical results.
In 23 of the 72 cases, intraoperative bone graft transplantation was assessed as necessary. In 14 cases, autogenous bone was used and in 9 cases additional allogenic bone material was used. The autogenous material consisted of reaming material obtained during preparation of the acetabulum for cup implantation. In 6 of the 14 cases the amount of reaming material was not sufficient, so additional cancellous bone in the form of chips was taken from the patient's ipsilateral iliac crest. The amount of autogenous bone material available for defect replenishment was not sufficient in 9 patients, so additional allogenic cancellous bone chips from donor femoral heads were used. There was no difference in the clinical result between patients who received intraoperative bone graft transplantation and those whose defects were not intraoperatively replenished with bone.
Complications
Prosthesis luxation occurred postoperatively in 3 cases (4%). 2 patients were treated conservatively after closed reposition; one patient needed a revision of the femoral head which in retrospect had been chosen too short. One case showed a postoperative femoral paresis which completely regressed after 6 months. A deep venous thrombosis of the leg was detected in 2% of the cases and treated with medication.
Postoperative wound healing disorders occurred in 4 cases (6%) and were revised by surgery. 2 (3%) of these patients underwent a soft tissue revision with wound irrigation.
Another patient was additionally treated with a head and inlay revision during the early postoperative stage. After this, undisturbed healing occurred in these patients.
The worst case scenario was the explantation of the complete prosthesis in one patient because of prosthesis infection performed on 35th postoperative day. 3 soft tissue revisions were performed prior to the explantation. The Girdlestone situation in this patient was left permanently uncorrected.
Discussion
Achieving good long-term clinical and radiological results even in the case of major bone defects is a great challenge for revision total hip arthroplasty. The present study revealed, over an observation period averaging more than eight years, only a minor migration in the first postoperative year in terms of increased inclination. Kärrholm et al. as well as Krismer et al. emphasize the high predictive value of early migration regarding aseptic loosening (inclination movement), especially with cementless press-fit-cups [26,27]. The extent of cup migration which suggests an early loosening is, however, assessed differently. While some authors take the view that in primary THA a migration of just 1 mm in the first 2 years considerably reduces the cup's probable lifespan, others state that even a cup migration of 2 mm in the first 2 years only rarely causes aseptic loosening [27,28].
The "Wristing" programme applied for migration analysis in the present study has a specified limit of accuracy of 2 mm or 3.2° [22]. More precise, but incomparably more complex, are the "radiographic stereometry analyses (RSA)" with a limit of accuracy of 0.1 mm and the "single radiograph analysis (EBRA)" with a limit of accuracy of 1 mm [27]. It was not the aim of the present study to observe single cup migration, but to observe the migration behaviour of cup types within the entire group over a long period of time. The study design allowed 96% of the patients operated on during the survey period to be entered into the study and their radiographs to be evaluated regarding the postoperative course of cup migration. The comparison with prospective studies shows that a follow-up of over 90% over an observation period of more than 8 years cannot usually be achieved in prospective studies [29,30]. Despite the retrospective approach, relevant cup migration during the observation period can be ruled out in our study group with a high probability.
The good clinical and functional results achieved in this study are also reported in the literature for comparable press-fit-cups of other manufacturers. In 1997 Moskal et al. published a study of 31 patients, in which 94% of the cases had good postoperative results after the implantation of cementless press-fit-cups as revision implants [31]. Lachiewicz et al. were able to prove good to very good results after the application of press-fit-cups with additional screw fixation as revision cup implants [30,32].
The postoperative points achieved in the Harris-Hip-Score during the aforementioned studies were comparable to those of our study. The Harris-Galante socket used in two of the aforementioned studies is very similar to the one used in the present study. The WOMAC-Score was not applied in the studies mentioned above. The low average of 37.4 points in the WOMAC-Score in our study indicates good postoperative patient satisfaction. In 90% of the cases the option of fixing the cup with an additional screw was used. The benefit of additional screws remains unclear. The decision whether additional screws are used is made by the surgeon during surgery, depending on his view regarding the primary stability achieved.
There is no literature containing randomised studies which test their application. Many authors advise using cups as large as possible. Gustke et al. and Obenhaus et al. proved that even major acetabular defects could be reconstructed using large press-fit-cups [33,34]. The definition of the jumbo cup in literature is not clear-cut. According to Patel et al. and Whaley et al., cups are called jumbo cups when their diameter is greater than 65 mm for men and 61 mm for women [35,36]. Ito et al. however, define the jumbo cup using the relative ratio between the size of the implanted acetabular cup and the size of the patient's pelvis [37].
Cementless press-fit-cups with large diameters offer a wide contact area between the acetabular bone and the cup and this is supposed to induce healing. The cup diameters used in our study are, on average, below the mentioned data for jumbo cups. In terms of greater contact between implant and bone, attention was paid to choosing the largest possible cup for every implantation. The migration analysis and radiological results suggest good bone-cup integration. Radiolucent lines in particular, which appeared directly post-operation in 24% of the cases, had become invisible 2 years post-operation.
In our study the acetabular defects were determined according to Paprosky. The allocation of the Paprosky types is comparable to those of other major studies [38]. No correlation was found, either in the literature or in the present study, between the classification of the acetabular defect according to Paprosky and the result of the revision surgery. The Paprosky classification seems more suitable for testing the comparability of different studies than for predicting the result of revision surgery or influencing the choice of surgical procedure. Elke et al. are even of the opinion that for this purpose the differentiation between "press-fit-suitable" and "press-fit-unsuitable" would be sufficient [39]. The revision situation is defined as "press-fit-suitable" when, despite acetabular defects, the press-fit-cup can be fixed to provide lever-out stability.
The literature describes various surgical procedures for the refilling of acetabular defects. This indicates that these defects cannot be reconstructed using a standard method and that the selected method often depends on the individual experience of the surgeon. In the present study, bone material was used in 32% of the cases, using allogenic and autogenous bone.
The complication rate in our study is comparable to that of other studies [29,40]. The high death rate of patients, particularly in the postoperative course, reflects the multimorbidity of the patients.
A weak point of this retrospective study is the fact that follow-up radiographs were not always available for all patients at the time of examination. This was compensated for by a detailed statistical evaluation of the measured data. A comparison between the pre- and postoperative Harris-Hip-Score and WOMAC-Score data is not possible as preoperative scores were not collected. The Harris-Hip-Score and WOMAC-Score data collected postoperatively are, however, within the ranges achieved by other studies. In addition, the small number of patients who, at the time of the follow-up examination, were still regularly taking pain killers and complained of minor hip pain documents the good result of socket revision surgery using the Plasmacup SC.
Conclusion
The study results support the suitability of the Plasmacup SC press-fit-cup as a secondary cup implant and demonstrate results similar to those of comparable prostheses from other manufacturers.
The cup can also be used on major medial cup defects. It gains its stability from the contact with the original bone. None of the cups had to be removed because of aseptic loosening.
Effect of Trichoderma-enriched organic charcoal in the integrated wood protection strategy
The gradual elimination of chromium from wood preservative formulations results in higher Cu leaching and increased susceptibility to wood decay fungi. Finding a sustainable strategy in wood protection has become of great interest among researchers. The objective of these in vitro studies was to demonstrate the effect of T-720-enriched organic charcoal (biochar) against five wood decay basidiomycetes isolated from strongly damaged poles. For this purpose, the antagonistic potential of Trichoderma harzianum (strain T-720) was confirmed among other four Trichoderma spp. against five brown-rot basidiomycetes in dual culture tests. T-720 was genetically transformed and tagged with the green fluorescent protein (GFP) in order to study its antagonistic mechanism against wood decay basidiomycetes. It was also demonstrated that T-720 inhibits the oxalic acid production by basidiomycetes, a well-known mechanism used by brown-rot fungi to detoxify Cu from impregnated wood. Additionally, this study evaluated the effect of biochar, alone or in combination with T-720, on Cu leaching by different preservatives, pH stabilization and prevention of wood decay caused by five basidiomycetes. Addition of biochar resulted in a significant Cu binding released from impregnated wood specimens. T-720-enriched biochar showed a significant reduction of wood decay caused by four basidiomycetes. The addition of T-720-enriched biochar to the soil into which utility poles are placed may improve the efficiency of Cr-free wood preservatives.
Introduction
Wood is still one of the most used construction materials due to its abundance, production costs and environmental benefits. However, as wood is biodegradable it has a limited service life, and it is a mandatory requirement to impregnate wood products in ground contact with copper (Cu)-based wood preservatives that are effective against a range of soil microorganisms [1]. The increasing interest in protecting the environment resulted in the phase-out of the traditional wood preservative formulations that contained strongly carcinogenic compounds such as arsenic, fluorine or chromium (Cr) [2]. The absence of Cr results in higher Cu leaching from impregnated wood even after short periods of installation in the ground [2][3][4][5]. The released Cu, as an inorganic compound, is not subject to biological degradation and therefore persists in the environment, leading to bioaccumulation and toxicity [6]. Moreover, the continuous use of Cu-based wood preservatives has resulted in the development of resistance in a range of wood decay fungi [7], through the production of oxalic acid [8][9][10][11][12][13]. The resulting Cu oxalate complex loses its fungicidal toxicity, which results in a reduction in the service life of the wood products [14,15]. Alternative management strategies to improve and prolong the service life of wood products have attracted great interest for health and environmental reasons [16][17][18]. The possibility of developing an integrated wood protection method has been evaluated by several authors who studied the effect of biological control agents against wood decay fungi [19][20][21]. In recent laboratory studies, the possibility of using an integrated control strategy combining a biocontrol agent (Trichoderma spp.) with low concentrations of Cr-free wood preservatives was demonstrated [22]. Thus, Trichoderma harzianum (strain T-720) showed a strong tolerance to Cu-amended medium (up to 0.1% CuSO₄) and a high antagonistic potential in combination with a range of wood preservative formulations against three wood decay basidiomycetes [15,22].
In the last decades, charcoal (biochar) has been used as a soil amendment to improve soil properties and increase agricultural productivity. The application of biochar in soils can be beneficial as it results in increased surface area, retention of water and heavy metals, stabilisation of pH and carbon sequestration in different substrates [23,24]. Moreover, many biochar-associated components have biocidal activity, which increases the stability against soil microorganisms. During the production of biochar from organic matter, pyrolysis shifts the chemical composition of the raw material into condensed aromatic structures that improve long-term properties [25]. In addition, biochar has been postulated as a potential bioremediation method for polluted soils [23]. These biochar properties may play an important role in improving the efficiency of Cr-free wood preservatives.
The main aim of this study was to confirm the antagonistic potential of T-720 and to evaluate its potential to control oxalic acid production by wood decay basidiomycetes. In addition, we examined the capacity of biochar to bind Cu released from impregnated wood specimens exposed to leaching. Finally, the integrated control potential of T-720-enriched biochar against wood decay basidiomycetes was also evaluated.
Antagonistic potential of Trichoderma against wood decay basidiomycetes
The antagonistic potential of strain T-720, used in previous studies, was confirmed in dual cultures against five wood decay basidiomycetes. For this purpose, agar discs (5 mm) of cultures of the wood decay basidiomycetes (Table 1) were inoculated on one side of a Petri dish containing 2% malt extract agar (MEA) (Oxoid, Pratteln, Switzerland) and incubated at 22(±1)˚C and 70% relative humidity. After 10 days, the opposite side of the Petri dish was inoculated with 100 μL of a Trichoderma spp. (Table 1) spore suspension adjusted to 10⁶ spores mL⁻¹ [22]. Five Petri dishes (biological replicates) were evaluated for each combination of Trichoderma strain and basidiomycete. Four weeks after the inoculation of Trichoderma, the overgrowth, sporulation tufts and pustules of the Trichoderma strains on the basidiomycetes were used to evaluate their activity [26]. The rate of mycoparasitism in dual cultures was assessed as: + = slow overgrowth, ++ = fast overgrowth, +++ = very fast overgrowth. In order to check whether the Trichoderma species and strains were able to parasitize and eradicate the challenged basidiomycete, five agar discs (5 mm) were removed from non-sporulating regions of the basidiomycete and placed on a basidiomycete-selective medium containing 20 mL of 2% MEA with 2 mL of thiabendazole dissolved in lactic acid (Merck, Darmstadt, Germany) [27]. If the basidiomycete failed to grow on the thiabendazole-amended medium, the lethal effect of Trichoderma was considered to be 100% [22,28].
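As an illustration of this scoring, the lethal effect can be expressed as the share of re-isolated discs on which the basidiomycete fails to regrow. The following is a minimal sketch with hypothetical counts, not data from the study:

```python
# Minimal sketch: scoring the lethal effect of a Trichoderma strain from
# re-isolation results. A disc that fails to regrow on the thiabendazole-
# amended medium counts as "killed". All numbers are hypothetical.

def lethal_effect(regrown_discs: int, total_discs: int) -> float:
    """Percentage of discs on which the basidiomycete failed to regrow."""
    return 100.0 * (total_discs - regrown_discs) / total_discs

# e.g. 5 dishes x 5 discs = 25 discs; one disc regrew:
print(lethal_effect(regrown_discs=1, total_discs=25))  # 96.0
```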
Trichoderma harzianum (T-720) and oxalic acid production of wood decay basidiomycetes

Dual cultures of wood decay basidiomycetes and T-720 (Table 1) were prepared in liquid culture with 120 mL of 1% malt extract (Oxoid). One agar disc from a fresh culture of each basidiomycete was inoculated into a 250 mL Erlenmeyer flask and incubated in the dark at 25(±1)˚C and 120 rpm on a shaker. After 20 days, 100 μL of spore suspension from fresh cultures of T-720 (10⁶ spores mL⁻¹) was inoculated into the liquid medium containing the wood decay basidiomycete. Three Erlenmeyer flasks (biological replicates) were evaluated for each combination of T-720 and basidiomycete, and three flasks for each wood decay basidiomycete were kept as controls. Four weeks after the Trichoderma treatment, the supernatant was filtered with 5 μm Millipore filters (Sigma-Aldrich, Buchs, Switzerland) and three aliquots (3 repetitions) of 500 μL were removed. The aliquots were centrifuged at 725 x g for 1 min to remove cell debris, and the obtained solution was analyzed according to the oxalic acid colorimetric assay kit (Sigma-Aldrich).
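Colorimetric kits of this kind are typically quantified against a linear standard curve. The sketch below illustrates such an interpolation; the standard concentrations and absorbance readings are hypothetical placeholders, and the actual kit protocol defines the real calibration procedure:

```python
# Minimal sketch of quantifying oxalic acid from a colorimetric assay via a
# linear standard curve. All values below are hypothetical.
import numpy as np

standards_ug_ml = np.array([0.0, 2.0, 4.0, 8.0, 16.0])     # known standards
absorbance      = np.array([0.02, 0.11, 0.21, 0.40, 0.79])  # measured readings

slope, intercept = np.polyfit(standards_ug_ml, absorbance, deg=1)

def oxalic_acid_ug_ml(sample_absorbance: float) -> float:
    """Interpolate a sample concentration from the fitted standard curve."""
    return (sample_absorbance - intercept) / slope

print(round(oxalic_acid_ug_ml(0.55), 2))  # ~11 ug/mL for this toy curve
```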
Biochar and Cu-leaching from Cu-treated wood specimens
Wood specimens of Scots pine (Pinus sylvestris L.) sapwood (2 x 2 x 8 cm; radial, tangential, longitudinal) were separately impregnated with CC (copper-chromium), CCB (copper-chromium-boron), Cu-HDO (bis-(N-cyclohexyldiazeniumdioxy)-copper) and ACQ (alkaline copper quaternary) (Table 2) according to EN 252 [29]. The wood specimens were impregnated with concentrations of wood preservative that had demonstrated a toxic effect in previous studies by Ribera et al. [22] (Table 2). Afterwards, the wood specimens were placed into Erlenmeyer flasks with 500 mL of deionized water containing 2.5 g of sterilized biochar powder (Carbon Gold, Bristol, UK). The source of the biochar was a commercial blend of hardwood species made at a pyrolysis temperature between 500-700˚C. Three flasks were evaluated for each combination of wood preservative and biochar treatment, and duplicates without biochar were used as Cu-leaching controls. After 10 days immersed in water or in water containing biochar, the wood specimens were removed and the solutions were centrifuged at 700 x g for 30 min. The obtained supernatant was filtered with Whatman No. 1 filter paper (Sigma-Aldrich) and the solution was separated from the biochar. The pH of the supernatant was measured, and the supernatant was then mixed with 5 mL of 2% HNO3 for direct quantification of Cu in solution using inductively coupled plasma optical emission spectrometry (ICP-OES). Additionally, the Cu retention capacity of the removed biochar was analyzed. For this purpose, 1 g of the extracted biochar was mixed with 3 mL of 2% HNO3 and 1 mL of 35% H2O2. After 10 min of digestion in a microwave at 500 W, the Cu content in solution was measured using ICP-OES.
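Two derived quantities follow from these measurements: the share of leached Cu bound by biochar relative to the water-only control, and the retention normalised to biochar mass. A minimal sketch with hypothetical ICP-OES readings (only the 500 mL volume and 2.5 g biochar dose are taken from the protocol above):

```python
# Minimal sketch of the two derived quantities of the leaching assay.
# Cu concentrations are hypothetical placeholders, not study data.

def cu_adsorbed_percent(cu_control_mg_l: float, cu_with_biochar_mg_l: float) -> float:
    """Share of leached Cu bound by biochar, relative to the water-only control."""
    return 100.0 * (cu_control_mg_l - cu_with_biochar_mg_l) / cu_control_mg_l

def cu_retention_mg_g(cu_bound_mg_l: float, volume_l: float, biochar_g: float) -> float:
    """Cu retained by the biochar, normalised to its dry mass."""
    return cu_bound_mg_l * volume_l / biochar_g

cu_control, cu_biochar = 139.5, 7.0  # mg/L, hypothetical readings
print(cu_adsorbed_percent(cu_control, cu_biochar))           # ~95 %
print(cu_retention_mg_g(cu_control - cu_biochar, 0.5, 2.5))  # mg Cu per g biochar
```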
Trichoderma harzianum (T-720)-enriched biochar and wood mass loss reduction by wood decay basidiomycetes
Interaction tests with wood block specimens of Scots pine sapwood (2.5R x 1.5T x 5L cm) were performed as described by Ribera et al. [22], with the following modifications. For the evaluation of the effect of T-720 and biochar on reducing decay by wood decay basidiomycetes (Table 1), autoclavable plastic containers (WEZ, Oberentfelden, Switzerland; dimensions 25L x 25W x 20H cm) with 180 g of vermiculite (VTT AG, Muttenz, Switzerland) and 5 g of Scots pine sawdust were used. The moisture content and water holding capacity of the substrate were determined according to ENV 807 [30]. The amount of water needed to bring the substrate to 75% of its water holding capacity was calculated and added to the containers. After autoclave sterilisation, three containers (replicates) for each decay basidiomycete were inoculated and used as controls. After 8 weeks of incubation with the basidiomycetes, three wood specimens were sterilised with ethylene oxide and placed into each container. The initial wood dry mass was determined by oven drying (103˚C) the test specimens for 24 h. Twelve weeks after incubation, the specimens were removed, oven dried (103˚C) and the mass loss recorded. The influence of biochar on the basidiomycetes was evaluated in containers with 180 g of vermiculite, 50 g of biochar and 5 g of sawdust. Three containers per basidiomycete were inoculated and, 8 weeks after inoculation, three wood specimens (repetitions) were placed into each container as described above; twelve weeks later, the specimens were removed and the mass loss was recorded. In order to study the effect of Trichoderma-enriched biochar on preventing wood decay, 50 g of biochar per container were incubated with 5 mL of T-720 spore suspension (10⁶ spores mL⁻¹). After two weeks of colonisation, the T-720-enriched biochar was added to the vermiculite boxes containing the basidiomycetes as described above. Two weeks after the T-720-enriched biochar treatment, three specimens were placed into each container. Three boxes for each basidiomycete were treated with T-720-enriched biochar, and 12 weeks later the specimens were removed and the mass losses recorded.
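The decay criterion used later reduces to a simple oven-dry mass-loss percentage compared against the 3% threshold of ENV 807. A minimal sketch with hypothetical specimen masses:

```python
# Minimal sketch of the mass-loss metric used to score decay protection:
# percentage loss of oven-dry mass after 12 weeks of fungal exposure,
# checked against the 3% threshold of ENV 807. Masses are hypothetical.

THRESHOLD_PERCENT = 3.0  # adequate protection per ENV 807

def mass_loss_percent(initial_dry_g: float, final_dry_g: float) -> float:
    return 100.0 * (initial_dry_g - final_dry_g) / initial_dry_g

loss = mass_loss_percent(initial_dry_g=9.80, final_dry_g=9.72)
verdict = "protected" if loss < THRESHOLD_PERCENT else "decayed"
print(f"mass loss = {loss:.2f}% -> {verdict}")
```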
Generation of T. harzianum transformants
The pCAMBgfp binary vector containing the hygromycin B resistance gene and the gfp (green fluorescent protein) gene [31] was introduced into Agrobacterium tumefaciens AGL-1 for Agrobacterium-mediated transformation of strain T-720.
Microscopy
The overgrowth of T-720G on the mycelium of the basidiomycetes was observed by confocal laser scanning microscopy (CLSM) (Zeiss LSM T-PMT). Fungal mycelium was collected from the contact area of the dual cultures 48 h after the first mutual contact. The samples were stained with 10 μL of propidium iodide (Sigma-Aldrich) for 10 min to label the mycelium of the basidiomycetes. Microscopic preparations were visualized at excitation/emission wavelengths of 488/550 nm for the GFP and 600/750 nm for propidium iodide, as described by Chacón et al. [35]. Additionally, interactions between T-720G and biochar were analyzed by SEM (Hitachi S-4800) and fluorescence microscopy (Leica DM 4000 B LED). To study the colonisation of the biochar substrate by Trichoderma, 5 g of sterile biochar were inoculated with 5 mL of T-720G spore suspension (10⁶ spores mL⁻¹). After 48 h of inoculation, the colonised biochar was collected and directly observed with both microscopic techniques.
Statistical analysis
To evaluate the effect of the different treatments compared to controls, such as the influence of T-720 on oxalic acid production and the influence of biochar on Cu retention and pH, a t-test was applied. In addition, comparisons between the basidiomycetes for oxalic acid production, and between the different wood preservative formulations in the Cu-retention assay, were assessed by Tukey's HSD test. To evaluate the preventative effect of T-720 and biochar against each basidiomycete, a Tukey's analysis was also performed. The statistical analyses were performed using the statistical software SPSS (Version 22, SPSS Inc., Chicago, IL, USA).
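An equivalent analysis can be scripted outside SPSS. The sketch below runs a t-test and Tukey's HSD on hypothetical replicate values to illustrate the workflow; the arrays are placeholders, not study data:

```python
# Equivalent analysis sketch in Python (the study used SPSS): a two-sample
# t-test for treatment vs. control and Tukey's HSD across groups.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([19.4, 20.1, 20.9])  # e.g. oxalic acid, ug/mL (hypothetical)
treated = np.array([8.1, 8.6, 8.9])     # after T-720 treatment (hypothetical)
t, p = stats.ttest_ind(control, treated)
print(f"t = {t:.2f}, p = {p:.4f}")

# Tukey's HSD comparing oxalic acid production between groups:
values = np.concatenate([control, treated, [11.2, 11.5, 11.6]])
groups = ["S.him"] * 3 + ["S.him+T720"] * 3 + ["A.ser"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```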
Results and discussion
Antagonistic potential of Trichoderma against wood decay basidiomycetes

During the initial screening of the Trichoderma species and strains, a range of reactions were recorded as a result of antagonism. Contact between basidiomycetes and Trichoderma occurred in all cultures, but the ability to overgrow and parasitize the mycelia of the basidiomycetes depended on the antagonistic potential of each Trichoderma and the resistance of the challenged fungi to antagonism (Table 3). The antagonistic potential was most pronounced for T-720, which showed very fast overgrowth of the studied basidiomycetes (Table 3). The genetically transformed T-720G showed in vitro antagonistic activity similar to the parental strain (Fig 1A), and was thus used for further characterization of its biocontrol activity and colonization. After 48 h of contact between T-720G and R. placenta, overgrowth and development of typical parasitic colonization by Trichoderma on the target basidiomycete was observed. Confocal microscopy of these samples showed actively growing hyphae of T-720G (Fig 1B, green) that became attached to the basidiomycete, surrounding it and generating appressoria-like structures (arrows in Fig 1B), as previously described by Harman et al. [36] and Schubert et al. [37,38]. The lethality to the target fungus was confirmed by its staining with the cell death marker propidium iodide (Fig 1B, red). Additionally, the lethal effect demonstrated by the applied Trichoderma fungi in dual culture was highest for T-720, which recorded 100% deadlock within four weeks against Gloeophyllum sepiarium, R. placenta and S. himantioides, and 96% deadlock against A. serialis and Fibroporia vaillantii (Table 3). T. harzianum (T-721) showed the weakest antagonistic potential against most of the basidiomycetes.

Trichoderma harzianum (T-720) and oxalic acid production of wood decay basidiomycetes

The data on oxalic acid production by the wood decay basidiomycetes are shown in Table 4. The production of oxalic acid by the basidiomycetes A. serialis, F. vaillantii, G. sepiarium and S. himantioides was in good agreement with previous studies [13,39]. Schmidt [40] reported the highest oxalic acid production for Serpula lacrymans, followed by Antrodia vaillantii and A. sinuosa. Generally, brown-rot fungi acidify their growth substrate more than white-rot species, because the latter degrade the produced oxalic acid by oxalate decarboxylase to formate and CO2 [40]. G. sepiarium produced the highest amount of oxalic acid in the control treatments (20.13 µg mL⁻¹); however, its production did not differ significantly from that of F. vaillantii and S. himantioides. The results obtained for R. placenta (0.64 µg mL⁻¹) showed a very low production of oxalic acid, as previously demonstrated by Civardi et al. [13] and Ritschkoff et al. [39]. Studies by Ritschkoff et al. [39] on oxalic acid production by R. placenta showed the most pronounced production (1 g L⁻¹) after three weeks of cultivation; one week later, the oxalic acid content in the same cultures was significantly reduced (0.25 g L⁻¹). Thus, it seems that the degradation of oxalic acid by R. placenta is also involved in the pathway of other oxidative reactions, as hypothesized by Ritschkoff et al. [39]. Furthermore, the same tendency of decreasing oxalic acid measurements with time was observed for Gloeophyllum trabeum and Coniophora puteana by Hastrup et al. [41].
The analysis of the supernatant in dual cultures showed that oxalic acid production was significantly reduced for F. vaillantii, G. sepiarium and S. himantioides compared to controls (p<0.05). After the T-720 treatment, S. himantioides showed the largest reduction in oxalic acid production (8.53 µg mL⁻¹) compared to controls (19.82 µg mL⁻¹). G. sepiarium produced the highest quantity of oxalic acid after the T-720 treatment (11.54 µg mL⁻¹), although this was not statistically different from the value recorded for F. vaillantii (9.40 µg mL⁻¹). Although A. serialis did not show a significant reduction in oxalic acid production after the T-720 treatment (7.61 µg mL⁻¹) compared to controls (11.41 µg mL⁻¹), its production in dual cultures was lower than that of F. vaillantii (9.40 µg mL⁻¹), G. sepiarium (11.54 µg mL⁻¹) and S. himantioides (8.53 µg mL⁻¹). The ability of different biocontrol agents and organic biocides to reduce oxalic acid production has been demonstrated previously [42,43]. For instance, Paramasivan et al. [43] demonstrated that the application of T. viride is a useful approach for controlling Sclerotium rolfsii in the soil, reducing oxalic acid production more than three-fold (to 0.79 mg mL⁻¹).

Biochar and Cu-leaching from Cu-treated wood specimens

Our data showed that biochar binds Cu released during the leaching of wood preservatives (Table 5). The amount of Cu released from treated wood was significantly higher for Cr-free treated wood specimens (139.51 mg L⁻¹ for Cu-HDO and 52.51 mg L⁻¹ for ACQ) than for wood treated with Cr-containing formulations (17.57 mg L⁻¹ for CC and 28.71 mg L⁻¹ for CCB). Cu adsorption by biochar was higher for the Cr-free wood preservatives (95.0% for Cu-HDO and 84.1% for ACQ), and significant differences between all wood preservative treatments were found after the Tukey's test. The chemical composition of the wood preservatives may also play an important role in the adsorption/desorption of Cu due to the specific physical properties of biochar, as previously demonstrated by Beesley et al. [23]. Chen et al. [44] demonstrated that the adsorption of Cu by biochar correlates strongly with pH, i.e. the highest adsorption of Cu occurred between pH 4-8, and wood preservatives change the pH of the solution during leaching. The Cu retention by biochar from impregnated wood specimens in this study was in good agreement with other wood-based biochar products, as demonstrated by Chen et al. [44] and Han et al. [45] (1.59-25.4 mg g⁻¹). The Cu retention capacity is strongly correlated with the raw material used for producing the biochar: studies by Tong et al. [46] showed a Cu adsorption of 89.0 mg g⁻¹ for peanut straw, and Pellera et al. [47] one of 0.27 mg g⁻¹ for rice husks.

Table 5. Effect of biochar on Cu binding from Cu-treated wood specimens. (Different letters denote significant differences between wood preservatives after the Tukey's HSD test, column-wise. Data represented as mean ± SD of three replicates.)

The change in pH of the supernatant after 10 days in water solution is shown in Table 6. The pH of the water controls appeared to solubilise the wood preservative formulations when compared to the Cu leached in solution (Table 5). The release of wood preservative compounds influenced the pH values in solution in different ways. Wood specimens impregnated with the CCB and ACQ preservatives did not alter the pH values significantly compared to controls; in contrast, CC and Cu-HDO showed a significant increase in pH after 10 days in water. The addition of biochar significantly increased the pH values in all water solutions (Table 6). The largest differences were found for wood specimens impregnated with Cr-free preservatives (pH = 8.38 for Cu-HDO and pH = 8.08 for ACQ) compared to the treatments without biochar (pH = 6.88 and 5.64, respectively) and compared to the other wood preservatives after the Tukey's test.

Table 6. Influence of biochar on the pH of water solution containing Cu-impregnated wood specimens. (Different letters denote significant differences after the Tukey's HSD test, column-wise. Data represented as mean ± SD of three replicates.)
Many factors influence the leaching of preservatives from wood into the soil, such as exposure time, temperature, moisture content, inorganic ions and pH [3], and the combination of all of these elements plays an important role in the amount of leachate in the field. For instance, Bergholm [48] demonstrated the correlation between the mobility of CCA components and the pH of the soil. Studies by Murphy and Dickinson [49] on the effect of acid rain on the leaching of CCA-C demonstrated that 40% of the Cu was lost at pH = 3, whereas there was no significant loss of Cu at pH > 5.6. The wood preservatives used in this study contain Cu primarily in the form of Cu(OH)2 and CuCO3, which become more stable at pH around 7 [50][51][52].
Trichoderma harzianum (T-720)-enriched biochar and wood mass loss reduction by wood decay basidiomycetes
Microscopic observation of the interactions of T-720 and T-720G with biochar revealed a rapid colonisation of the substrate by both strains. After 48 h of incubation with T-720G (Fig 2A), almost all the biochar was colonised, as observed under the fluorescence microscope (Fig 2B). Observations with SEM confirmed the tendency of T-720 to develop a compact matrix between the biochar particles (Fig 2C and 2D). The addition of T-720 to the biochar substrate creates the opportunity to use biochar as a carrier substance for an integrated control strategy against wood decay basidiomycetes in soils.
The influence of the biochar and T-720 treatments on wood protection is shown in Table 7. The mass loss caused by the decay basidiomycetes was in the range of our previous studies [22]. T-720-enriched biochar reduced mass loss for all basidiomycetes, with significant reductions for A. serialis (0.51%), G. sepiarium (0.43%), R. placenta (0.37%) and S. himantioides (0.85%). Moreover, the mass losses recorded in wood specimens placed into T-720-enriched biochar were below 3% of the initial dry mass for these basidiomycetes, which is the adequacy threshold recommended in ENV 807 [30]. Although the treatment with biochar alone reduced wood decay by the basidiomycetes, it only showed a significant effect (p<0.05) on the mass loss caused by S. himantioides (2.28%) compared to controls (19.44%).
The possibility of controlling wood decay by basidiomycetes in the laboratory has already been demonstrated by Ribera et al. [22]. In that previous study, T-720 demonstrated a high antagonistic potential in combination with low concentrations of wood preservative formulations against basidiomycetes. Internal decay in wood poles usually develops at the ground line, and the application of T-720-enriched biochar in highly infected soils would reduce the damage caused by basidiomycetes. However, variation within species and regular application strategies should be considered in the design of further long-term studies in order to maintain the activity of Trichoderma in the field.
Conclusions
We demonstrated the positive effect of using Trichoderma harzianum (T-720)-enriched biochar as an integrated wood protection method against wood decay basidiomycetes in the laboratory. T-720 was confirmed as an antagonistic strain, also demonstrating a significant reduction of oxalic acid production by five brown-rot fungi. Reducing oxalic acid production around Cu-impregnated wood products could enhance the efficacy of wood preservatives by preventing Cu removal from the wood. It was also validated that, in the absence of Cr in wood preservative formulations, Cu leaching occurs rapidly and that biochar can bind the Cu released from impregnated wood. The application of T-720-enriched biochar in combination with the new generation of Cr-free wood preservatives may provide additional value as a method of integrated wood protection in highly infested soils. Long-term field studies in collaboration with telecommunication companies from Switzerland and Germany are currently in progress to develop a suitable application strategy and confirm these results under natural conditions. Successful results in the field will help to develop a sustainable wood protection strategy to counteract damage by wood decay basidiomycetes in soils, prevent the unnecessary release of contaminants into the environment and ultimately extend the service life of wood products in ground contact.
Additive Manufacturing: Reproducibility of Metallic Parts
The present study deals with the properties of five different metals/alloys (Al-12Si, Cu-10Sn and 316L, with face-centered cubic structure; CoCrMo and commercially pure Ti (CP-Ti), with hexagonal close-packed structure) fabricated by selective laser melting. The room temperature tensile properties of the Al-12Si samples show good consistency within the experimental errors. Similarly reproducible results were observed for the sliding wear and corrosion experiments. The other metal/alloy systems also show repeatable tensile properties, with the tensile curves overlapping until the yield point. The curves may then follow the same path or show a marginal deviation (~10 MPa) until they reach the ultimate tensile strength, and a negligible difference in ductility (~0.3%) is observed between the samples. The results show that selective laser melting is a reliable fabrication method for producing metallic materials with consistent and reproducible properties.
Introduction
Ever since materials were first manufactured in the Bronze Age, existing techniques have been constantly developed and new manufacturing processes have been invented [1]. Conventional casting and powder metallurgy (powder production followed by consolidation) are two widely used manufacturing processes for producing parts for different applications [2][3][4][5]. Even though these processes are widely used, several problems are associated with them. For example, parts fabricated by conventional casting may have one or more of the following processing defects: surface defects, internal defects, inconsistency in chemical composition (segregation) and/or unsatisfactory mechanical properties (inconsistencies in the grain structure) [6]. Similarly, parts manufactured by powder metallurgy may have defects introduced at various stages of the fabrication chain, such as powder production (non-uniform chemical composition), powder compaction (porosity) and sintering (porosity and surface oxidation) [7,8]. Defects can also originate during post-processing of the parts after fabrication [7]. All these defects, introduced at different stages of manufacturing or post-processing, may lead to inferior or inconsistent properties [9,10]. However, stringent industrial regulations (in the automotive, aeronautical, power plant and nuclear industries) nowadays require parts to have highly reproducible mechanical properties [11].
To conform to these stringent regulations, efforts have been made to find alternative processing routes or to reduce the unreliability of existing processing capabilities. Additive manufacturing is seen as one of the viable alternative processing routes that may lead to consistent material properties. The laser-based powder bed fusion process (ISO/ASTM 52900:2015, Standard Terminology for Additive Manufacturing - General Principles - Terminology), commonly known as Selective Laser Melting (SLM), is an additive manufacturing process that produces three-dimensional metal parts layer by layer with superior properties compared to conventional manufacturing processes such as casting and powder metallurgy [12][13][14][15]. A suitable combination of processing parameters such as laser power, laser scan speed, hatch distance, hatch style, layer thickness and laser spot size leads to the fabrication of a defect-free component by SLM [12]. The above-mentioned parameters, with the exception of hatch style, determine the heat/energy supplied to the powder bed (heat/energy input). The amount of powder surface exposed to the laser during the SLM process is rather small, and hence a very high energy density is involved. This intense energy input leads to very high cooling rates, in the range of ~10⁵-10⁶ K/s [16,17]. Such high cooling rates result in substantially refined microstructures compared to conventional manufacturing processes, and hence improved properties [16,17].
The majority of the research on the SLM process is focused on parameter optimization, alloy development, topology/structure optimization and microstructure-property correlation. The intent of the present manuscript is to highlight the repeatability/reproducibility of the material properties of samples produced by SLM. Five different materials (Al-12Si, Cu-10Sn and 316L, with face-centered cubic structure; CoCrMo and CP-Ti, with hexagonal close-packed structure) were evaluated, and their properties are reported to show the consistency of the properties of samples produced by SLM.
Experimental Section
Cylindrical tensile samples (total length 52 mm; gauge length and diameter 17.5 and 3.5 mm) were fabricated from spherical gas-atomized powders at room temperature using an SLM 250 HL device (from SLM Solutions, formerly MTT Machine Tool Technologies). The device is equipped with a Yb-YAG laser. All the samples were built on a base plate made of the same material as the building material, under an Ar atmosphere (in order to avoid oxygen contamination during the building process), with a hatch rotation of 73°. Hatch style is defined as the design/pattern in which the hatches (melting sequences, melt lines or melt tracks) are oriented within and between the layers [18]; detailed information about the hatch style can be found in [19]. All the samples were built perpendicular to the base plate (i.e., in the XY direction). An allowance of 1-2 mm was given for these samples so that they could be machined with abrasive papers to smooth their surfaces before the tensile test. The tensile test samples used in the present study were selected randomly from different batches at randomly chosen build positions on the substrate plate, in order to ascertain the reproducibility criteria, and were used in the as-built condition. The Al-12Si samples (from gas-atomized powder with a nominal composition of Al-12Si (wt.%)) were fabricated with a laser power of 320 W for both the bulk of the sample and the contour, and laser scan speeds of 1455 mm/s for the bulk and 1939 mm/s for the contour. A layer thickness of 50 µm was used, with a laser spot size of ~80 µm and a hatch distance of ~110 µm. Detailed information about the fabrication of the Al-12Si samples can be found elsewhere [20]. The following parameters were used for the fabrication of the CoCrMo parts (from CoCrMo gas-atomized powder from SLM Solutions): laser power 100 W; laser scan speed 140 mm/s; layer thickness 30 µm; and hatch distance 100 µm, with 90° hatch rotation between the layers [21]. Commercially pure Ti (CP-Ti) samples were built from CP-Ti grade 2 powder supplied by TLS Technik GmbH, Germany, with the following parameters: laser power 165 W; laser scan speed 138 mm/s; layer thickness 100 µm; and hatch distance 100 µm, with 90° hatch rotation between the layers. Detailed information about the fabrication of the CP-Ti can be found in [22]. Gas-atomized 316L powders were used to fabricate SLM parts with the following parameters: laser power 100 W; laser scan speed 800 mm/s; layer thickness 30 µm; and hatch distance 120 µm, with 90° hatch rotation between the layers [23,24]. Similar gas-atomized bronze powders were used for producing the bulk SLM bronze parts, with the following parameters: laser power 271 W; laser scan speed 210 mm/s; layer thickness 90 µm; and hatch distance 90 µm [25]. Cylindrical bulk samples were also prepared by graphite mold casting in order to compare the properties of conventionally cast samples with those of the SLM samples. Room temperature tensile tests were carried out using an Instron 8562 testing facility (strain rate 1 × 10⁻⁴ s⁻¹), and the strain during the tensile test was measured directly on the specimen using a Fiedler laser extensometer. At least three specimens were tested under each condition to ascertain the reproducibility/repeatability of the properties. The wear and corrosion test conditions have been reported elsewhere [26].
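Although the paper reports the raw parameter sets rather than a combined measure, a common way to compare them is the volumetric energy density E = P/(v·h·t). The sketch below computes it from the listed values; this derived metric is an illustrative assumption, not a quantity reported in the study:

```python
# Volumetric energy density E = P / (v * h * t), a common derived metric
# for comparing SLM parameter sets (illustrative only).

def energy_density(power_w, speed_mm_s, hatch_mm, layer_mm):
    """Energy input per unit volume of powder, J/mm^3."""
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

params = {                 # (P [W], v [mm/s], hatch [mm], layer [mm])
    "Al-12Si": (320, 1455, 0.110, 0.050),
    "CoCrMo":  (100, 140, 0.100, 0.030),
    "CP-Ti":   (165, 138, 0.100, 0.100),
    "316L":    (100, 800, 0.120, 0.030),
    "Cu-10Sn": (271, 210, 0.090, 0.090),
}
for alloy, p in params.items():
    print(f"{alloy}: {energy_density(*p):.1f} J/mm^3")
```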
For the corrosion experiments, the samples were mounted in polymer resin and polished metallographically to a mirror finish. A Solartron SI Electrochemical Interface connected to a temperature-controlled three-electrode cell was used, with a Pt net as the counter electrode, a saturated calomel reference electrode (SCE, with a potential of E_SHE = 0.241 V vs. the Standard Hydrogen Electrode at room temperature) and the embedded alloy sample as the working electrode. Before the actual polarization measurements, the samples were kept at open circuit potential (OCP) conditions for 1 h while the potential was monitored. The linear dynamic polarization was started at −0.2 V vs. OCP, and the potential was increased at a constant rate of 0.5 mV/s up to a value of 1.5 V vs. SCE.

Figure 1 shows the room temperature tensile curves of the Al-12Si samples manufactured by SLM, and the corresponding mechanical data are summarized in Table 1. The tensile curves (in color) in Figure 1a show the consolidated data of six tensile tests that are shown individually in Figure 1b. Other researchers have also produced Al-12Si samples by SLM using the SLM Solutions device, and the tensile properties lie within the above-said range [29]. This suggests that, with the parameters optimized for full density, the Al-12Si SLM samples show repeatable/reproducible tensile properties within the experimental errors. However, there are reports of anisotropy in SLM-produced samples, where the mechanical properties vary depending on the building direction [30][31][32][33]. Alsalla et al. have shown that the tensile strength and the fracture toughness of a 316L cellular lattice manufactured by the SLM technique depend greatly on the building direction, essentially due to the anisotropic behavior of the SLM-prepared samples [30]. Similar anisotropy has been reported by Suryawanshi et al., where the fracture toughness of the Al-12Si samples depends strongly on the building direction [17]. On the other hand, some results also suggest that the sample building direction does not have a significant effect on the tensile properties [20]. Hence, there are contradictory reports on the consistency of tensile properties between samples prepared with different build orientations. However, it may be safe to say that even if there is a difference in properties between samples prepared in different build directions, the differences are consistent and reproducible within the experimental limits. This suggests that the samples built in each orientation (XY/YZ/XZ) should give repeatable and reproducible mechanical properties when tested under similar conditions.
Table 1. Tensile properties of samples produced by selective laser melting (SLM) and casting (cast).

The sliding wear test data of the Al-12Si SLM samples are shown in Figure 2a. The data points corresponding to sample number 1 are the consolidated data points of six sliding wear test experiments, which are shown as samples 2-7 (Figure 2a). The wear test results are quite repeatable, with the wear rate varying between 9.23 and 9.24 × 10⁻¹³ m³/m, showing consistency within the experimental errors. Similar results were observed for the corrosion studies (conducted in an acidic HNO3 medium), where the potentiodynamic polarization curves of two test samples almost overlap, except for small but negligible differences within the experimental limits (Figure 2b). The above results indicate that the tensile properties, wear rate and potentiodynamic corrosion results obtained for the Al-12Si samples produced by SLM are very consistent and reproducible. It might be thought that the Al-12Si samples show consistent and reproducible properties because both the Al and Si phases constituting the structure have a face-centered cubic (fcc) crystal structure. Hence, to further check the reproducibility of the mechanical properties of SLM parts, other fcc systems, such as Cu-10Sn bronze and 316L (predominantly austenite phase), and hexagonally close-packed (hcp) systems, CoCrMo and commercially pure Ti (CP-Ti), were evaluated.

Figure 3 shows the room temperature tensile tests for the CoCrMo, 316L, CP-Ti and Cu-10Sn bronze alloys. Two tensile curves for each alloy are shown in a consolidated fashion (in color), followed by their individual tensile curves (in black). The consolidated curves for Cu-10Sn overlap, and no significant differences are found between the tensile test results. A similar trend is observed for the 316L samples, where a marginal difference of ~8 MPa in YS is observed between the two tensile tests, along with differences in UTS of ~15 MPa and in ductility of ~0.3%. The tensile curves for CP-Ti show no difference in YS between the two tests and marginal differences in UTS and ductility of ~5 MPa and ~0.3%, respectively. Similar results were found for the SLM-processed CoCrMo alloy, which shows a difference of ~1 MPa in YS between the two tests and differences in UTS and ductility of ~9 MPa and ~0.45%, respectively. The tensile properties of the Al-12Si, 316L and CoCrMo samples fabricated by casting are shown in Table 1. The cast samples have inferior strengths and show a larger standard deviation compared to the samples fabricated by SLM.

The above results from the different alloy systems reveal that the SLM-processed materials show very good consistency in their mechanical, tribological and corrosion properties within the experimental errors, even though the samples were picked randomly from several batches (8-10 batches over 1 year in the case of Al-12Si). The placement of the samples during the building process was also selected randomly. The results are conclusive that the sample batches, irrespective of sample position, will yield similar, consistent and reproducible properties if the hardware and the quality of the laser remain the same, because the same hardware with the same laser source will yield a similar amount of defects (porosity level) and hence similar, reproducible properties. This suggests that the SLM process can produce metals and alloys with superior as well as more reproducible properties compared to their counterparts produced by conventional casting.
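Repeatability here is essentially a statement about scatter across nominally identical tests. As a minimal illustration (the UTS values are hypothetical placeholders, not measured data), the spread can be summarized by the standard deviation and coefficient of variation:

```python
# Minimal sketch of quantifying repeatability across repeated tensile tests:
# mean, standard deviation and coefficient of variation (CV).
import statistics

uts_mpa = [385.0, 388.0, 386.5, 384.0, 387.0, 386.0]  # six nominally identical tests

mean = statistics.mean(uts_mpa)
sd = statistics.stdev(uts_mpa)
print(f"UTS = {mean:.1f} +/- {sd:.1f} MPa (CV = {100 * sd / mean:.2f}%)")
```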
Conclusions
Five different metal/alloy systems (Al-12Si, Cu-10Sn and 316L, with face-centered cubic phase; CoCrMo and CP-Ti, with hexagonal close-packed phase) were fabricated by SLM using commercially available parameters. The Al-12Si fcc samples show uniform and consistent mechanical, tribological and corrosion properties within the experimental errors. It is noteworthy that the room temperature tensile curves overlap one another up to the yield point and show similar behavior beyond yielding, with only marginal differences in the ultimate tensile strength (~10 MPa) and/or ductility (~0.2%), thus demonstrating the reliability of the samples fabricated by SLM. Similar tensile results were observed for the other four metal/alloy systems (Cu-10Sn, 316L, CoCrMo and CP-Ti), where the room temperature curves show consistency in the mechanical properties. These results suggest that the selective laser melting process can be used to produce parts with consistent and reproducible properties, provided the powder quality and the fabrication parameters remain the same.
A Hybrid Approach of Deep Semantic Matching and Deep Rank for Context Aware Question Answer System
Most existing question answering systems focus on searching for answers in a Knowledge-Base (KB) and ignore context-aware information. Many question answering models perform well on public datasets but are too complicated to be efficient in real-world cases. Effectiveness, concurrency and system availability are equally important in industry settings with large data volumes and request loads. We propose a context-aware question answering system based on information retrieval with deep semantic matching and deep ranking, which has been applied to an online insurance question answering service; by these means, we achieve both high QPS (Queries Per Second) and high effectiveness. Our approach improves the system's ability to understand the question through context-aware coreference resolution, subject completion and long sentence compression. After matching questions are recalled from ElasticSearch, Siamese CBOW (Continuous Bag-Of-Words model) and KBQA, some unreasonable candidates are filtered out by entity alignment. After the results are sorted by the deep ranking model using co-occurrence words and semantic features, our system either asks for clarification or outputs an answer. Finally, for those questions we are unable to answer, a dialogue mining module was developed as part of our Smart Knowledge-Base Platform, resulting in a more than 10-fold efficiency improvement for the manpower involved in the data labeling process.
INTRODUCTION
Question answering systems have been widely used in intelligent customer service, personal assistants and dialogue robots. In 2018, pretraining techniques based on models pretrained on massive corpora made breakthroughs in multiple NLP tasks, including semantic matching.
Representative models are ELMo [9], GPT [10] and BERT [8]. Higher accuracy, compared with Siamese CBOW, can be achieved by fine-tuning BERT on downstream tasks, but this makes inference time much longer, and the running efficiency does not meet the requirements of our online products. We propose a high-efficiency contextual coreference solution based on syntax analysis to solve the problems of missing subjects and pronoun resolution in the insurance question-and-answer scenario, which achieved good results. Voice input brings convenience to users but at the same time introduces typos into the text produced by ASR; we use an insurance-specific noun dictionary together with a Transformer [7] error correction model to improve the ASR output. To increase the accuracy of matching the user's input against entries in the Knowledge-Base, we use an efficient sentence compression algorithm, which filters out insignificant content while retaining the core insurance-related content. We rank all candidate answers from the retrieval module and finally output the answer. Our contributions are the following: • Novel and efficient error correction, sentiment analysis, coreference resolution, sentence compression and other methods that enhance question comprehension, especially in the insurance domain.
• Combining ElasticSearch, deep semantic matching and KBQA in an IR-based method to quickly recall matching questions, improving QA accuracy through deep learning ranking while ensuring overall system efficiency.
• A number of new industry test-set construction methods and QA evaluation methods.
• Full-life-cycle management and optimization of QA knowledge, including question type identification, clustering and annotation dispatch for unanswered questions.
RELATED WORK
Most existing professional-domain question answering systems search for the best-matching questions (similarity matching between KB questions and the user query) in the Knowledge-Base through information retrieval. Some existing systems, such as Ali Xiaomi and Baidu AnyQ, perform single-round question answering without considering context information. AliMe from Alibaba, which combines Knowledge-Base search and Seq2Seq generation, has made achievements in the e-commerce domain [2]. We use the same question-query similarity matching method as AliMe and Baidu AnyQ, while also considering the context chat history.
SYSTEM OVERVIEW

Our overall system architecture is shown in Figure 1. The user's question (the query) is used as input; if it is voice, it is first converted into text. The context information is passed to the pre-processing module. After error correction and coreference resolution, the processed query is passed to the retrieval module, which returns the best matches for the user's question from, respectively, ElasticSearch-based text retrieval, semantic retrieval based on Siamese CBOW, and KBQA based on a knowledge graph. The question lists are passed to the sorting module, where the multi-way matching lists are merged, some unreasonable matching questions are removed through entity alignment, and the final related question list is generated by deep learning ranking. Finally, the answer is returned to the user according to the matched question and its business type. We use open-source NLP tools with an insurance terminology dictionary for word segmentation, part-of-speech tagging and entity recognition. Multi-intention detection splits the sentence by punctuation and then classifies the segments. Question rewriting mainly targets insurance product names, and sentiment analysis is used to judge the user's affirmative, negative and double-negative intents. The following subsections describe the implementation in more detail.
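The semantic retrieval step mentioned above can be illustrated with a Siamese CBOW-style inference sketch, in which a sentence embedding is the average of its word vectors and candidates are ranked by cosine similarity. The toy vectors below are hypothetical placeholders for a trained embedding matrix:

```python
# Minimal sketch of Siamese CBOW-style inference: average word vectors,
# then rank candidate questions by cosine similarity. Toy vectors only.
import numpy as np

word_vecs = {  # hypothetical 4-d word embeddings
    "refund": np.array([0.9, 0.1, 0.0, 0.2]),
    "policy": np.array([0.2, 0.8, 0.1, 0.0]),
    "cancel": np.array([0.8, 0.2, 0.1, 0.1]),
    "price":  np.array([0.1, 0.1, 0.9, 0.0]),
}

def embed(tokens):
    """Sentence embedding = mean of in-vocabulary word vectors."""
    vecs = [word_vecs[t] for t in tokens if t in word_vecs]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = embed(["cancel", "policy"])
for cand in (["refund", "policy"], ["price", "policy"]):
    print(cand, round(cosine(query, embed(cand)), 3))
```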
Long Sentence Compression
Step 1: Divide the long sentence into several short sentences by punctuation or spaces, then classify the short sentences and remove filler statements.

Step 2: Apply a sentence compression scheme based on probability and syntax analysis, retaining only the core sentence components and using the insurance keyword dictionary to ensure that keywords are kept.
Example: Hello, I bought an insurance for my son in 2006 and I only paid 581 yuan for a year, however I didn't pay for it after that. Now I want the customer service to refund my money.
Compress result: I bought an insurance in 2006. Now I want to refund my money.
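A minimal sketch of the keyword-guided part of this compression is shown below. The keyword list is a hypothetical stand-in for the insurance dictionary, and the real system additionally uses probability and syntax analysis:

```python
# Minimal sketch: split on punctuation, keep only clauses that contain a
# domain keyword. The keyword set is a hypothetical placeholder.
import re

INSURANCE_KEYWORDS = {"insurance", "refund", "premium", "claim", "policy"}

def compress(sentence: str) -> str:
    clauses = [c.strip() for c in re.split(r"[,.;!?]", sentence) if c.strip()]
    kept = [c for c in clauses if INSURANCE_KEYWORDS & set(c.lower().split())]
    return ". ".join(kept) + "."

text = ("Hello, I bought an insurance for my son in 2006 and I only paid 581 "
        "yuan for a year, however I didn't pay for it after that, now I want "
        "the customer service to refund my money")
print(compress(text))
```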
Error Correction
Two solutions are used, selected according to business needs. The simple solution is error correction based on the insurance noun dictionary: according to the results of the preceding word segmentation and syntactic analysis, candidate nouns are converted into PinYin and compared with the proper nouns in the dictionary for correction. The general solution is a Transformer model with a special noun dictionary; the training data consist of about 32 million general-domain sentences from public news plus a PinYin dictionary from the insurance domain. The encoder input mixes the PinYin of out-of-dictionary Chinese characters with the Chinese characters of in-dictionary words. The decoder output is pure Chinese characters; the in-dictionary Chinese characters in the input do not participate in the prediction and are generated directly.
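The dictionary-based branch can be sketched as a PinYin lookup. The sketch below uses the pypinyin package, and the two-entry dictionary is a hypothetical stand-in for the real insurance noun dictionary:

```python
# Minimal sketch of dictionary-based error correction: map a suspect token
# to PinYin and look it up against the PinYin of known insurance terms.
from pypinyin import lazy_pinyin  # pip install pypinyin

TERM_DICT = ["寿险", "重疾险"]  # hypothetical proper nouns (life / critical illness insurance)
PINYIN_INDEX = {"".join(lazy_pinyin(t)): t for t in TERM_DICT}

def correct(token: str) -> str:
    """Replace a token whose PinYin matches a dictionary term."""
    return PINYIN_INDEX.get("".join(lazy_pinyin(token)), token)

print(correct("受险"))  # homophone typo -> corrected to 寿险
```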
Coreference Resolution
We use the context chat history as the reference for coreference resolution. Our implementation steps are word segmentation, part-of-speech tagging, dependency syntax analysis, subject-predicate extraction and entity substitution. For example: (Question) What is the price of life insurance? (Answer) 300 yuan per year. A follow-up query that omits the subject can then be completed with the entity from this history (a minimal sketch follows at the end of this subsection). In terms of feature extraction, in order to better capture local word-order relationships and context information, we use LSTM, CNN, BERT and other networks to extract features.
BERT performs best, but it takes too long for online inference. Moreover, because the quality of large-scale industrial corpus annotation is limited, some data noise exists; the more complex the model, the more noise is fitted, so its generalization ability is not as good as that of a simpler model. The KBQA channel receives the pre-processed question information, characterized by the context information, the entity type and the entity relationship, predicts the subject entity to be queried through the question recognition model [1], and retrieves the neighboring nodes centered on that entity from the knowledge graph.
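Returning to the subject-completion step described above, the following is a minimal rule-based sketch. Entity extraction is reduced to keyword matching here; the real system relies on segmentation, POS tagging and dependency parsing:

```python
# Minimal sketch of subject completion: when the current query lacks a
# subject entity, inherit the most recent entity from the chat history.
# The entity set and queries are hypothetical placeholders.

ENTITIES = {"life insurance", "critical illness insurance"}

def find_entity(text: str):
    return next((e for e in ENTITIES if e in text.lower()), None)

def complete_subject(query: str, history: list[str]) -> str:
    if find_entity(query):
        return query                  # subject already present
    for turn in reversed(history):    # most recent turn first
        entity = find_entity(turn)
        if entity:
            return f"{query} (about {entity})"
    return query

history = ["What is the price of life insurance?", "300 yuan per year."]
print(complete_subject("How long does it cover?", history))
```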
Ranking Module
The ranking module includes a deep ranking model and rule-based sorting. The deep ranking model is mainly used to merge and score the answers from the multiple recall channels, while the rule-based sorting verifies the ranked answers once more to ensure that they are both stable and reasonable. For the deep ranking model, we use a commonly adopted architecture. The answer output stage then receives the matched question list from the ranking module: if the confidence level is lower than a preset threshold, the system responds with a clarifying question, letting the user confirm which question he wants to ask; if the confidence is high, the answer corresponding to the top matched question, or a recommended question, is returned according to the business rules.
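The answer-output policy can be sketched as a threshold decision over the merged, ranked candidates. The scores and the 0.85 threshold below are hypothetical placeholders:

```python
# Minimal sketch of the answer-output policy: answer directly when the top
# candidate is confident enough, otherwise ask the user to clarify.

CONFIDENCE_THRESHOLD = 0.85  # hypothetical preset threshold

def respond(candidates: dict[str, float]) -> str:
    ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    best_q, best_score = ranked[0]
    if best_score >= CONFIDENCE_THRESHOLD:
        return f"ANSWER for: {best_q}"
    related = ", ".join(q for q, _ in ranked[:3])
    return f"CLARIFY: did you mean one of: {related}?"

merged = {"How to surrender a policy?": 0.62,
          "How to claim a refund?": 0.58,
          "What is the premium?": 0.31}
print(respond(merged))
```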
Intelligent Knowledge-Base
The intelligent Knowledge-Base plays a behind-the-scenes role in the Q&A system. In addition to providing the FAQ engine with raw material, it also manages and optimizes the life-cycle of the question-and-answer knowledge. The specific process is shown in Figure 3.
Conclusions
This paper proposes a question-and-answer system with context awareness, error correction, coreference resolution, long sentence compression, ElasticSearch and deep semantic matching with Siamese CBOW, and deep learning ranking. Our approach performs well both in engineering terms and in model accuracy. Its architecture supports the high concurrency requirements of real-world use cases and has the high availability required of a standard production environment. We have already applied this system in an online intelligent customer service bot, an AI assistant, an AI selling bot and other human-computer interaction AI products. In the future, we hope our question-and-answer system can support multimedia interaction, such as pictures, audio and video in addition to text and voice, so that we can solve more problems for users with more intelligence.
Effect of micro-osteoperforations on the rate of orthodontic tooth movement and expression of biomarkers: a randomized controlled clinical trial
ABSTRACT Introduction: Micro-osteoperforation is a minimally invasive technique that has been used to accelerate orthodontic tooth movement and reduce treatment duration. However, the literature presents conflicting reports about this technique. Objective: To evaluate the effectiveness of micro-osteoperforations on the rate of canine retraction and the expression of biomarkers in gingival crevicular fluid (GCF). Methods: This was a randomized clinical trial with a split-mouth design. Thirty adult subjects older than 18 years (mean age 20.32 ± 1.96 years) who required fixed orthodontic treatment and extraction of the maxillary first premolars were enrolled and randomly allocated to either the experimental or the control group. Randomization was performed by the block randomization method, with a 1:1 allocation ratio. The experimental group received three micro-osteoperforations (MOPs) distal to the maxillary canine, using the Lance pilot drill. Retraction of the maxillary canine was performed with a NiTi coil spring (150 g) in both the experimental and control groups. The primary outcome was the rate of canine retraction, measured on study models from baseline to 16 weeks of canine retraction. Secondary outcomes were the estimation of alkaline and acid phosphatase activity in GCF at 0, 1, 2, 3, and 4 weeks. Results: There was a statistically significant difference in the rate of canine retraction only during the first 4 weeks; subsequently, there was no statistically significant difference between the MOP and control groups from the eighth to the sixteenth week. There was a statistically significant difference in alkaline and acid phosphatase activity in GCF between the MOP and control groups during the initial 4 weeks of canine retraction. Conclusion: Micro-osteoperforation increased the rate of tooth movement only for the first 4 weeks; thereafter, no effect was observed on the rate of canine retraction at 8, 12 and 16 weeks. A marked increase in biomarker activity was observed in the first month.
INTRODUCTION
Over the past decade, accelerated orthodontic tooth movement has become an encouraging area of research in the orthodontic field. Several techniques have claimed to improve orthodontic treatment efficiency by reducing treatment duration in complex adult treatment. 1,2 Current research indicates that the most effective methods for the acceleration of tooth movement are the surgical approaches, including distraction osteogenesis, corticotomy, osteotomy, and the piezocision technique. However, it is assumed that the surgical approaches have not been widely employed due to their aggressiveness and associated complications. [3][4][5][6] Recently, less invasive and controlled micro-trauma through micro-osteoperforations (MOPs) has been introduced. The evidence indicates that MOPs increase catabolic and anabolic activities, thus reducing tooth movement resistance. These catabolic and anabolic activities can be measured by the expression of bone resorption and bone formation biomarkers in gingival crevicular fluid (GCF). 9 Teixeira et al. 8 conducted a study on rats and stated that minimal cortical perforations increased inflammation and enhanced tooth movement. Recently, many studies have been conducted to evaluate the effect of MOPs on the rate of tooth movement. 10-14 Some of these studies have shown a more than two-fold increase in the rate of tooth movement in the experimental group compared to the control group. 10,11 However, contradictory results have also been reported by several studies. 12,13 According to a Cochrane review, 14 most of the randomized clinical trials presented small sample sizes and an unclear risk of bias. A recently conducted meta-analysis indicated that there was a statistically significant difference in the rate of canine retraction after performing MOPs; clinically, however, it failed to show substantial outcomes. 15 Up to the present date, several articles have been published on accelerated orthodontics, but there is a lack of information on the relationship with bone catabolic and anabolic biomarkers. Ferguson et al. 9 conducted a systematic review to evaluate the effect of various surgical accelerating techniques on the expression of biomarkers, and found that most of the studies were done in animals. Assessing a human sample, only the report published by Alikhani et al. 4 was found.
The authors evaluated the effect of MOPs on the expression of inflammatory markers, and found a 2.3-fold increase in the rate of canine retraction, with increased expression of cytokines. However, the study comprised a small sample size and the follow-up lasted only 28 days; moreover, a possible conflict of interest should be discussed, because commercially available appliances were used, and randomization and allocation concealment were not reported. In addition, the lateral incisor, which is considered an unstable point, was used as the reference for measuring canine retraction, and a 0.016 x 0.022-in stainless steel wire in 0.022-in slots was used for canine retraction, a combination that allows more tipping movement and could give a false perception of accelerated tooth movement. All these shortcomings signify that there is a need for high-quality randomized controlled clinical trials to establish the effectiveness of MOPs and its correlation with the expression of bone biomarkers. Therefore, the present study is, as far as we know, the second study conducted with a human sample, with a follow-up of 16 weeks, to evaluate the effect of MOPs on the rate of canine retraction and its correlation with the expression of biomarkers in GCF.
The null hypothesis tested was that there is no difference in the rate of canine retraction or in the level of biomarkers in the GCF between the control and the micro-osteoperforated groups.
SPECIFIC OBJECTIVES OR HYPOTHESES
The objectives of the present study were:
» Evaluate the effect of MOPs on the rate of canine retraction over a period of 16 weeks.
» Evaluate the changes in the level of biomarkers in the GCF.
TRIAL DESIGN
The present study was a single-center randomized controlled clinical trial using a split-mouth design, with 1:1 allocation.
No changes to the methods were made after trial commencement.
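The report states only that allocation followed a block randomization method with a 1:1 ratio, here deciding which side of the mouth received the MOPs. Below is a minimal sketch, in Python, of how such an allocation can be generated with permuted blocks; the block size of 4, the seed, and the function name are illustrative assumptions, since the trial does not report these details.

```python
import random

def block_randomize_sides(n_subjects, block_size=4, seed=42):
    """Allocate which side (left/right) receives MOPs for each subject,
    using permuted blocks to keep a 1:1 ratio within every block.
    Block size and seed are illustrative assumptions."""
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_subjects:
        block = ["left"] * (block_size // 2) + ["right"] * (block_size // 2)
        rng.shuffle(block)  # permute the MOP side within this block
        allocations.extend(block)
    return allocations[:n_subjects]

print(block_randomize_sides(30))  # 30 enrolled subjects
```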
PARTICIPANTS, ELIGIBILITY CRITERIA AND SETTINGS
Ethical approval was obtained from the ethics review board of the institute at Swami Vivekanand Subharti University (Meerut, India). The trial was also registered at ICMR, with CTRI number 01516450. Subjects were screened from the department, based on the eligibility criteria presented in Table 1. A detailed medical history was recorded for each patient, followed by a detailed clinical examination. Written informed consent was obtained from patients or their parents/legal guardians after the study procedures were explained.
SAMPLE SIZE CALCULATION
The sample size was calculated considering a type I error rate of 5%. Power analysis with the G*Power software showed that 27 subjects per group would be needed to detect a significant difference with an effect size of 0.66, a significance level of 0.05, and a statistical power of more than 80%.
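For readers who want to see the arithmetic behind such a calculation, the sketch below implements the standard normal-approximation sample-size formula. It is not a reproduction of the authors' G*Power session: the reported figure of 27 per group depends on the test family and options selected there, so the number printed here may differ.

```python
from scipy.stats import norm

def n_paired(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a two-sided paired comparison:
    n = ((z_{1-alpha/2} + z_{power}) / d)^2. A sketch only; G*Power uses
    exact noncentral-t computations and test-specific corrections."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ((z_alpha + z_beta) / effect_size) ** 2

# ~18 under this approximation; the published n = 27 reflects
# G*Power's own test configuration.
print(round(n_paired(0.66)))
```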
[…] for retraction (Fig 1). The NiTi closed coil spring was attached from the canine power arm to the hook of the molar tube.
At each appointment, a Dontrix gauge was used to measure the retraction force. If the force level was found to be less than 150 g, the NiTi coil spring was re-activated to maintain the force level. The bite was raised in those subjects who presented occlusal interferences.
Primary outcome
The rate of canine retraction was assessed as the primary outcome of the study. To monitor the rate of canine movement, alginate impressions were made before canine retraction (T0) and after four weeks (T1), eight weeks (T2), twelve weeks (T3) and sixteen weeks (T4), and study models were fabricated with Type-II dental stone. For the measurement of canine retraction on the study models, the method used by Lotzof et al. 16 was applied, with a slight modification. In our method, an acrylic palatal plug with reference wires was fabricated on the first study model (T0) over the Nance palatal button used for anchorage control, and was stabilized with pinheads on the anterior teeth. Reference stainless steel wires (0.9 mm) were placed mesial to the canine on both sides, and their terminal ends were embedded in the acrylic plug. The long axis of the canine was drawn from the cusp tip to the cervical end, and the midpoint of this axis was marked. The baseline value was set by measuring the distance from this midpoint to the mesial reference wire on the first model (T0) (Fig 2). The plug was then transferred to the subsequent study models (T1, T2, T3 and T4), and the distance that the canine had moved at every 4-week interval was measured. The palatal plug was used as the reference device for all the study models of the same patient. All measurements were recorded by another blinded investigator, using a […]
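To make the measurement scheme concrete, the sketch below tabulates how cumulative movement and the per-interval rate would be derived from the midpoint-to-reference-wire distances; the distance values are placeholders, not trial data.

```python
# Distance (mm) from the canine mid-axis point to the mesial reference wire
# on each study model; placeholder values, not measurements from the trial.
distances = {"T0": 10.00, "T1": 11.20, "T2": 12.10, "T3": 12.90, "T4": 13.60}

baseline = distances["T0"]
timepoints = ["T0", "T1", "T2", "T3", "T4"]
for prev, curr in zip(timepoints, timepoints[1:]):
    interval_mm = distances[curr] - distances[prev]   # movement in this 4-week interval
    cumulative_mm = distances[curr] - baseline        # total retraction since T0
    print(f"{prev} -> {curr}: {interval_mm:.2f} mm / 4 weeks "
          f"(cumulative {cumulative_mm:.2f} mm)")
```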
RESULTS
Two subjects were excluded from the study due to irregular follow-up, and data from 28 subjects were analyzed (Fig 3). Subject data, including gender, age, amount of extraction space, and cephalometric analysis, are listed in Table 2. Subjects' age […] (Table 3). p ≤ 0.05 was considered significant (S), and p > 0.05 non-significant (NS).
SECONDARY OUTCOME
There was a statistically significant difference in the levels of alkaline phosphatase (ALP; Table 4, Fig 4) and acid phosphatase (TRAP; Table 5, Fig 5), on both the mesial and distal sides, between the experimental and control groups at different time intervals. The level of alkaline phosphatase was significantly higher on the mesial side, while the level of acid phosphatase was significantly higher on the distal side.
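In a split-mouth design, each subject contributes both a MOP side and a control side, so a paired comparison at each time point is the natural analysis. The paper does not state which test produced these results, so the paired t-test below is an assumption, and the GCF values are simulated purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated ALP activity (arbitrary units) for 28 subjects at one time
# point: a control side plus an assumed elevation on the MOP side.
control_side = rng.normal(loc=12.0, scale=2.0, size=28)
mop_side = control_side + rng.normal(loc=2.5, scale=1.0, size=28)

# Paired t-test: each subject serves as his or her own control.
t_stat, p_value = stats.ttest_rel(mop_side, control_side)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```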
DISCUSSION
The principle of micro-osteoperforations is a regional con-[…]. Nimeri et al. 19 have also documented that the RANKL:OPG ratio is proportional to age, which affects the rate of bone remodeling and tooth movement. To minimize the effect of age, only adult subjects (age > 17 years) were included in this study.
Strict discipline and clear instructions were given to maintain excellent oral hygiene. Occlusal interferences in the path of canine retraction were removed by raising the bite, when necessary. According to Yang et al. 24 , the maximum stress during canine retraction was distributed on the cervix at the distolabial […]. Since the purpose of MOPs is to perforate only the alveolar bone, a Lance pilot drill was used, because it has sharp edges and a calibrated length, and is designed to cut bone effectively.
To effectively perforate the cancellous bone, the perforation depth was kept at a minimum of 5 mm, because the average gingival thickness is 2 to 3 mm and the average cortical bone thickness is 1.5 to 2.0 mm. Although the thickness of the attached gingival tissue may vary from patient to patient, in those patients whose attached gingival thickness was greater than average, the perforation depth was increased so as to reach the cancellous bone.
After the MOPs, a force of 150 g was used for canine retraction, as recommended by Samuels et al. 26 In this trial, the accelerating effect of MOPs was observed only during the first four weeks. In this period, the rate of canine […]
The secondary outcome of the present study was to correlate the rate of tooth movement with the expression of biomarkers after MOPs. The systematic review conducted by Kapoor et al. 30 showed a positive correlation between the levels of ALP and TRAP in the GCF and the velocity of orthodontic tooth movement. […] MOPs and control groups. The peak of the ALP level occurred at the second week on both the mesial and distal sides in the MOPs and control groups, but there was a markedly increased ALP activity in the MOPs group at the second, third and fourth weeks, when compared to the control group. These results indicate that there was increased osteoblastic activity at the third and fourth weeks on the mesial side (tension side), as a compensatory mechanism to the increased osteoclastic activity (occurring in the initial two weeks on the distal side). The increased osteoblastic activity on the distal side (compression side) was due to a homeostatic mechanism induced by the micro-perforations. These biomarker activities show that the increased osteoclastic activity persisted only for the first three weeks, followed by the increased osteoblastic activity (Fig 6).
These results directly correlate, and may be the reason for the […]
Synthesis and Evaluation of the Plant Growth Regulatory Activity of 8-oxabicyclo[3.2.1]oct-6-en-3-one Derivatives
The synthesis of several analogues of 8-oxabicyclo[3.2.1]oct-6-en-3-one is reported. The effect of these compounds and of 4-oxohexanoic acid on the germination and root growth of Sorghum bicolor was evaluated. At a concentration of 100 ppm, the compounds 3-(methoxycarbonylmethyl)-8-oxabicyclo[5.3.0]dec-4-ene-2,9-dione (13) and 4-oxohexanoic acid (17) showed a root growth stimulating effect of 33-35%, while at 1000 ppm an inhibitory effect was observed in both cases (29% for 13 and 80.2% for 17). All other compounds inhibited root growth at 100 and 1000 ppm. No significant effect was observed on the germination rate.
Introduction
A number of sesquiterpene lactones affect plant growth, although the nature and extent of the effects produced depend on a number of factors, including the lactone tested, its concentration, and the species on which it acts 1 . Some sesquiterpene lactones have been reported to be responsible for the allelopathic properties of certain plants, by affecting the germination and growth of other species 2 . The potential allelopathic activity of several natural and synthetic sesquiterpene lactones has been investigated, and the presence of an α-methylene-β-butyrolactone has been shown to be important for the biological activity 3 . The presence of other reactive centres such as α,β-unsaturated ketone, chlorohydrin, epoxide and hemiacetal groups, and also the spatial arrangement of the molecules, is normally important for the biological activity presented by those lactones [4][5][6] .
As part of our research on the synthesis of new compounds with herbicidal and/or plant growth regulatory activity, derived from the easily available 8-oxabicyclo[3.2.1]oct-6-en-3-ones 1 7 , we devised a plan that would allow the preparation of several lactones 3-6 for biological evaluation 23 (Scheme 1).
Experimental
Synthesis
IR spectra were recorded on a Perkin-Elmer 881 double beam grating spectrophotometer. NMR spectra were recorded on a Perkin-Elmer R34 (220 MHz) instrument, a Bruker WH 400 spectrometer (400 MHz) or a Varian T-60 (60 MHz) instrument, using tetramethylsilane as internal standard. Mass spectra were obtained on a VG ZAB-E high resolution mass spectrometer. Flash chromatography was performed using Crosfield Sorbsil C60 (40-60 µm). Solvents were purified according to Perrin and Armarego 26 ; petroleum refers to the fraction with b.p. 40-60 °C, and ether refers to diethyl ether.
Bioassays
The bioassays were carried out according to the method of Einhellig et al. 23 , with seeds of Sorghum bicolor. Dichloromethane solutions of compounds 7, 13, 17, 20a, 22 and 23 were prepared at concentrations of 100 and 1000 ppm.
Assays were conducted in 100 x 15 mm glass Petri dishes lined with one sheet of Whatman No. 1 filter paper and sealed with Parafilm. To each dish, 2 mL of the corresponding solution was added and the solvent was evaporated before the addition of 2 mL of water, followed by 20 seeds of Sorghum bicolor. Assays were carried out at 25 °C under artificial fluorescent light (8 x 40 W) in an incubator for three days, after which germination was scored and the radicle length was measured. Seeds were considered to be germinated if a radicle protruded at least 1 mm. A control experiment was carried out under the same conditions described, using only water. Each bioassay was replicated 5 times in a completely randomized design.
Synthesis
The oxabicyclic ketone 7 has already been transformed into a number of natural products and their analogues 8,9 , and we thought to explore its chemistry further by using it as starting material for the synthesis of sesquiterpene lactone 4 (R1 = H or CH3), according to the strategy shown in Scheme 1.
The ketone 7 was prepared on a large scale using Sato and Noyori's methodology 10 . Catalytic hydrogenation of 7 using 10% Pd-C afforded the oxabicyclic ketone 8 in almost quantitative yield. The enol silyl ether 9 was prepared in about 95% yield using trimethylsilyl chloride (TMSCl) in the presence of 1,8-diazabicyclo[5.4.0]undec-7-ene (DBU). The almost quantitative conversion of the parent ketone 8 into the required enol ether 9 was confirmed by the virtual disappearance of the carbonyl stretching at around 1715 cm-1 in the infrared spectrum, with the concomitant appearance of a very strong band at ~1640 cm-1 corresponding to the enol trimethylsilyl group.
The enol ether 9 was treated with methyllithium and the enolate formed was trapped with methyl bromoacetate (Scheme 2). It was observed that when the enolate was generated at -78 °C for 1.5 h, the reaction was incomplete, resulting in 29% recovery of the starting enol ether. Raising the temperature to -40 °C resulted in complete transformation of 9 into the corresponding enolate. In general, typical overall yields for this alkylation were 30-45% for the monoalkylated compound 10a, 15-20% for the corresponding dialkylated methyl ester 10b, and 20-30% recovery of the starting ketone 8. It was found that separation of the monoalkylated 10a from the dialkylated methyl ester 10b was extremely difficult, and 10b was not obtained in a pure form. Some reactions were carried out with 10a contaminated with 10b.
Although a high degree of exo-stereoselectivity for this alkylation has been claimed 11 , the complexity of the signals for H-1 and H-5 between δ 4.50 and δ 4.70, associated with two signals for the methoxy group at δ 3.72 and δ 3.80 (ratio 5:1), showed that a considerable amount (~20%) of the endo-alkylated product was formed (the sample analysed was not contaminated with 10b, as judged by the mass spectrum).
It was envisaged that the transformation of 10a into lactone 12 could be achieved via an intermediate like 11, formed by the cleavage of the ether bridge (Scheme 3).
An attempt to produce lactone 12 was made by treating compound 10a with trimethylsilyl trifluoromethanesulfonate (TMSOTf) and triethylamine (TEA) 20 , for two hours at room temperature. In this case, all the starting material was consumed and a very complex mixture was formed. However, when a mixture of 10a + 10b was treated with TMSOTf/TEA, the only product isolated was the lactone 13 (Scheme 4).
The structure of the lactone 13 was deduced by spectroscopic means. In the high resolution mass spectrum, there was a peak at m/z 238.0827 corresponding to the proposed formula C12H14O5. The infrared spectrum showed a very strong absorption at 1781 cm-1 due to the γ-butyrolactone, and another band at 1717 cm-1 for the ketone superimposed with the ester group. Special features in the 13C-NMR spectrum are the absorptions at δ 172 and 175 (lactone and ester) and δ 195 (ketone). Signals corresponding to three CH2, one CH3, and five CH were observed. The 220 MHz 1H-NMR spectrum showed a singlet at δ 3.72 for the methoxy group, and a multiplet at δ 5.80-5.90 for the alkene protons (Fig. 1).
The formation of lactone 13 probably involves the intermediate 14 18 , and it demonstrates the feasibility of our initial synthetic proposal (Scheme 1).
The formation of a complex mixture of products from this reaction is probably due to the fact that the keto ester 10a was a mixture of α- and β-alkyl isomers, and also because the cleavage of the ether bridge was not regioselective. A further investigation of the preparation of 10b and its reaction under the conditions described should be carried out, since one can envisage the transformation of lactone 13 into a pseudoguaianolide skeleton 4.
Due to the problems with the stereoselective monoalkylation of 8 and the purification of 10a, an alternative route leading to lactones 4 and 6 was investigated (Schemes 5 and 6).
Succinic anhydride was converted into the keto acid 17 in 69% yield 21 . After methylation with CH3OH/H2SO4, the ester 18 formed was brominated with Br2/HBr to afford the required dibromoketone 19 in 42% yield.
The cycloaddition between the dibromoketoester 19 and furan was carried out in the presence of Cu/NaI. The reaction […]
In order to accomplish the strategy presented in route 1, compound 20b was required; since this was the minor isomer formed, we used the major isomer 20a to pursue the synthesis according to route 2 (Scheme 1).
The bicyclic ketone 20a was treated with NaBH4/MeOH, and the intermediate formed by the reduction of the keto group reacted in an intramolecular fashion with the carbomethoxy group, resulting in the formation of lactone 22 in 59% yield (Scheme 6).
The hydrogenation of the oxabicyclic ketone 20a, followed by similar treatment with NaBH4/MeOH, led to the isolation of the lactone 23 in 50% yield.
Work is now in progress to transform compounds 22 and 23 into more complex and functionalized lactones.
Herbicidal Activity
The discovery of new herbicides usually involves the following approaches: i) the rational design of specific inhibitors of key metabolic processes; ii) the synthesis of analogues of compounds with known herbicidal activity; and iii) the random screening of new chemicals.
Although in this work we planned to make use of strategy ii), by developing a synthetic route for the preparation of several sesquiterpene lactones having an α,β-unsaturated carbonyl group, we decided to carry out a random screening of several synthetic intermediates (strategy iii).
For this screening, the in vivo effect of compounds 7, 13, 17, 20a, 22 and 23 on the germination and radicle growth of Sorghum bicolor was evaluated according to the methodology proposed by Einhellig et al. 23 . Two concentrations (100 and 1000 ppm) of each compound were tested, since it has already been shown that some compounds exhibit both stimulatory and inhibitory effects on seedling growth, depending on the concentration 24 .
Figure 2 shows the radicle length (mm) of Sorghum after 3 days of incubation at 25 °C, and the percentage of radicle growth (inhibition or stimulation) relative to the control is presented in Table 1. At 1000 ppm, all compounds showed a considerable inhibitory effect on radicle growth, especially the keto acid 17, which caused 80% inhibition. As compound 17 showed a remarkably different effect on plant development at the lower and higher concentrations 24 , and since it can be easily prepared, it becomes an interesting starting material for the preparation of other products for biological evaluation. Although the lactone 13 showed an effect on radicle growth similar to that of 17, its preparation is more laborious, which makes its further biological evaluation less appealing. Compound 7 showed no clear effect at 100 ppm and 34% inhibition at 1000 ppm.
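The percentages in Table 1 express radicle growth relative to the water control, with positive values indicating inhibition and negative values, per the table footnote, indicating growth induction. A minimal sketch of that computation, using placeholder mean lengths rather than the measured data from Figure 2:

```python
def percent_inhibition(treatment_mean_mm, control_mean_mm):
    """Radicle growth inhibition relative to the water control.
    Positive values indicate inhibition; negative values indicate
    growth induction (stimulation), matching the Table 1 footnote."""
    return 100.0 * (control_mean_mm - treatment_mean_mm) / control_mean_mm

# Placeholder means over the 5 replicate dishes (mm); not the real data.
control = 25.0
compound_17_at_1000_ppm = 5.0
print(f"{percent_inhibition(compound_17_at_1000_ppm, control):.1f}% inhibition")  # 80.0%
```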
Compound 20a (at 100 ppm) was 26 times more active than its simple analogue 7, and this effect can be attributed to the presence of the substituents at the 2- and 4-positions.
In view of these results, and given the versatility of the [3+4] cycloaddition methodology used 25 for the preparation of compounds 7 and 20a, the synthesis of other oxabicyclic compounds like 7, having different substituents at various positions, is now our next goal. An investigation of the herbicidal selectivity of the compounds already discussed towards a wide range of crops and weeds is under way and will be published elsewhere.
Table 1. Germination and radicle growth inhibition of Sorghum bicolor by several synthetic compounds after 3 days of incubation at 25 °C.
Figure 2. Root growth of Sorghum bicolor after exposure to various compounds (100 ppm and 1000 ppm) and water, after 3 days of incubation at 25 °C.
Negative values correspond to radicle growth induction.