Optimal antiplatelet strategy after transcatheter aortic valve implantation: a meta-analysis

Objective International guidelines recommend the use of dual antiplatelet therapy (DAPT) after transcatheter aortic valve implantation (TAVI). The recommended duration of DAPT varies between guidelines. In this two-part study, we (1) performed a structured survey of 45 TAVI centres from around the world to determine if there is consensus among clinicians regarding antiplatelet therapy after TAVI; and then (2) performed a systematic review of all suitable studies (randomised controlled trials (RCTs) and registries) to determine if aspirin monotherapy can be used instead of DAPT.
Methods A structured electronic survey regarding antiplatelet use after TAVI was completed by 45 TAVI centres across Europe, Australasia and the USA. A systematic review of TAVI RCTs and registries was then performed comparing DAPT duration and the incidence of stroke, bleeding and death. A variance-weighted least-squares metaregression was then performed to determine the relationship between antiplatelet therapy and adverse events.
Results 82.2% of centres routinely used DAPT after TAVI. The median duration was 3 months. 13.3% based their practice on guidelines. 11 781 patients (26 studies) were eligible for the metaregression. There was no benefit of DAPT over aspirin monotherapy for stroke (P=0.49), death (P=0.72) or bleeding (P=0.91).
Discussion Aspirin monotherapy appears to be as safe and effective as DAPT after TAVI.

Introduction
Transcatheter aortic valve implantation (TAVI) has been demonstrated to be superior to medical therapy in patients in whom surgical aortic valve replacement (AVR) is deemed too high risk. 1 The use of TAVI continues to increase 2 and, due to its success in high-risk patients, its role in intermediate-risk patients is being assessed. 3-5 One of the most feared complications of TAVI is stroke. The incidence of periprocedural stroke varies from 1% to 11%. 6 To mitigate the risk of stroke, dual antiplatelet therapy (DAPT) is prescribed periprocedurally and for a period of time after the procedure.
The optimal duration of antiplatelet therapy post-procedure has not been convincingly defined. European 7 and American 8 9 guidance differs. Additionally, different valve manufacturers suggest different durations of DAPT post-procedure, and the randomised studies that established TAVI also used varying durations of DAPT. 1 10 Finally, little prospective randomised data have been gathered to definitively answer this question. Unfortunately, while helping to mitigate stroke, DAPT is also associated with an inevitably increased risk of bleeding complications, which can cause significant morbidity and mortality. 11 12 The risk of bleeding complications appears to be most acutely related to patients' age, gender, renal function and frailty. 13 14 The majority of patients in the 'high-risk' TAVI group also fall into the category of being at 'high risk' for bleeding complications from DAPT. Defining the minimal duration of DAPT that reduces stroke risk while minimising the risk of bleeding complications is therefore important to minimise adverse events in this population. In this two-part study, we first performed a structured survey of 45 TAVI sites to determine their preferred antiplatelet regimen, and then performed a systematic review of the literature, meta-analysis and metaregression to determine whether there is an optimal duration of DAPT, and the role of aspirin monotherapy following TAVI.

Key questions
What is already known about this subject?
Periprocedural stroke is a well-recognised complication of transcatheter aortic valve implantation (TAVI). Post-TAVI antiplatelet therapy is routinely employed in an attempt to mitigate the risk of stroke, at the cost of increased bleeding complications. There is a paucity of data available to inform the optimal antiplatelet strategy after TAVI, and this is reflected in uncertainty within published guidelines.
What does this study add?
This study presents data demonstrating the heterogeneity of real-world practice with regard to antiplatelet prescribing in TAVI. By synthesising the available evidence, we demonstrate in a meta-analysis of 11 781 patients that dual antiplatelet therapy (DAPT) after TAVI confers no benefit beyond 1 month in terms of stroke rate, overall mortality or major bleeding. Aspirin monotherapy may therefore be a sufficient antiplatelet therapy in the post-TAVI period.
How might this impact on clinical practice?
This study provides the most robust evidence to date advocating a more conservative approach to antiplatelet therapy following TAVI. These data may be used to unify clinical practice and reduce exposure to the potentially harmful effects of DAPT in this vulnerable patient population.

Methods
TAVI survey
Up to 298 TAVI centres were identified worldwide using a TAVI implanters registry. An online structured questionnaire was issued to each centre to establish current practice with regard to DAPT. The duration of DAPT used, and the reasons why this was preferred, were ascertained. A copy of the original questionnaire is available in online supplementary appendix 1.

Search strategy
We performed a systematic search of the MEDLINE, Cochrane and Embase databases (from 2006 to March 2017) for all studies of TAVI. Our search strings included 'Transcatheter Aortic Valve Implantation', 'Transcatheter aortic valve replacement', 'antiplatelet', 'aspirin', 'clopidogrel' or 'antithrombotic'. Bibliographies were hand-searched for relevant studies, reviews and meta-analyses to identify further eligible studies.
Abstracts were reviewed for suitability and articles were retrieved accordingly. The search and meta-analysis were performed in accordance with published guidance.

Inclusion and exclusion criteria
We considered all studies of TAVI. Studies were eligible if they reported mortality, stroke, bleeding and the antithrombotic regimen after TAVI. Both observational studies and randomised controlled trials (RCTs) were identified. We excluded studies of the transapical approach due to the inherent increased risk of this approach. 15 16 We excluded animal studies, case reports, conference abstracts, meta-analyses and reviews from the final selection.

Analysis
The primary endpoints were stroke, all-cause mortality and major bleeding, and outcome data regarding these were extracted from the included studies. The duration of DAPT was also extracted for each included study. We carried out a meta-analysis of TAVI studies in accordance with published guidance. 17 These data were synthesised and meta-analysed using inverse-variance weighting and restricted maximum likelihood (REML) estimation to provide a pooled estimate. A mixed-effects metaregression was then performed to determine the relationship of antiplatelet therapy and its duration to adverse events, by including DAPT duration as a continuous moderator and study arm as a random effect. Heterogeneity was assessed using the I² statistic. 2 18 Mean values are expressed as mean±SD unless otherwise stated. An unpaired t-test was used to compare between-group data. The statistical programming environment R 19 with the metafor package was used for all statistical analyses.
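To make the Analysis description concrete, the core of a variance-weighted metaregression can be sketched as below. The authors used R's metafor package with REML random-effects estimation; this simplified Python version uses fixed-effect inverse-variance weights and invented toy numbers, and omits the between-study variance component that REML would estimate, so it illustrates the idea rather than reproducing the study's code.

```python
# Minimal sketch of inverse-variance weighted metaregression of an
# event rate on DAPT duration. All numbers below are hypothetical.
import numpy as np

rates = np.array([0.03, 0.04, 0.025, 0.05, 0.035])        # per-arm event rates
variances = np.array([4e-4, 9e-4, 3e-4, 1e-3, 5e-4])      # their variances
dapt_months = np.array([1.0, 3.0, 3.0, 6.0, 12.0])        # moderator

# Inverse-variance weights: more precise study arms count for more.
w = 1.0 / variances
W = np.diag(w)

# Weighted least squares for rate ~ intercept + slope * duration.
X = np.column_stack([np.ones_like(dapt_months), dapt_months])
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ rates)

# Standard error of the slope from the weighted normal equations.
cov = np.linalg.inv(X.T @ W @ X)
z = beta[1] / np.sqrt(cov[1, 1])
print(f"slope={beta[1]:.5f}, z={z:.2f}")  # slope near 0: no duration effect
```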
Results
TAVI survey
Forty-five TAVI sites across 12 countries participated. There was a breadth of experience among respondents: 33.3% had performed >200 cumulative procedures, 31.1% had performed 100-200 procedures and 35.6% had performed <100 procedures. 42.2% reported that their antiplatelet therapy policy was based on operator preference, 44.5% on local institutional policy and 13.3% on national and international guidelines (figure 1A). Only 11.1% reported adjusting antiplatelet therapy according to the valve system used. Pre-TAVI loading regimens were variable (figure 1B): 40% of centres loaded with aspirin monotherapy, 6.6% with clopidogrel monotherapy and 24.4% with aspirin and clopidogrel; 29% of centres did not load with any antiplatelet therapy. Antiplatelet therapy after TAVI was similarly variable (figure 1C): 82.2% reported using DAPT and 17.8% monotherapy. Overall, 46.7% used 3 months, 22.2% used 6 months, 4.4% used 1 month and 2.2% used 12 months of DAPT.

Characteristics of included studies
Characteristics of the included studies are shown in online supplementary appendix 2. All outcomes are at the time of last follow-up across all studies.

Discussion
In this study, we have found that: (1) there is large variance in the antiplatelet regimens used by different TAVI operators, with the majority of operators relying on personal or institutional policies rather than guideline recommendations; (2) compared with DAPT, aspirin monotherapy demonstrated similar protection against stroke; and (3) if DAPT is preferred, 1 month appears to be sufficient. The meta-analysis presented is, by some margin, the largest addressing the question of the optimal antiplatelet strategy following TAVI. In this field, where adequately sized, high-quality RCTs are currently lacking, this represents the most comprehensive analysis to date to inform clinical practice.

In addition to providing the largest and most contemporary data available, our study has further unique aspects. First, we performed a structured survey that highlights the patterns of antithrombotic use by TAVI operators in clinical practice, and the rationale behind this. Furthermore, we went on to perform a metaregression using antiplatelet strategy as a moderator. This additional analysis confirms that prolonged DAPT had no effect on clinical outcomes, and helps give clinicians further confidence in limiting the duration of DAPT or using a single-antiplatelet strategy for their patients.

Disparity between guidelines and clinical practice
Current guideline recommendations on antithrombotic therapy after TAVI are not uniform, and the strength of the recommendations is not strong. The 2014 and 2017 8 9 American Heart Association/American College of Cardiology (ACC) guidelines on valvular heart disease suggest lifelong aspirin plus clopidogrel for 6 months 'may be reasonable', corresponding to a class IIb strength of recommendation. The level of evidence is classified as level C, indicating the lowest possible level of certainty (very limited populations evaluated, only consensus opinion of experts, case studies or standards of care). The 2017 ACC Expert Consensus Document 45 on the care of patients after TAVI states that the current standard antithrombotic therapy after TAVI should be clopidogrel 75 mg orally daily for 3-6 months with oral aspirin 75-100 mg daily lifelong. The European Society of Cardiology guidelines on valvular heart disease 7 do not make a specific recommendation for the duration of DAPT after TAVI, but simply state that a combination of low-dose aspirin and a thienopyridine should be used early after TAVI, followed by aspirin or a thienopyridine alone.

Clinicians treating patients after TAVI are therefore provided with guideline recommendations that are not uniform across guideline bodies, are not specific with regard to the recommended duration of DAPT within individual guidelines, and are made with the lowest levels of recommendation and weakest strength. This is a reflection of the paucity of evidence available in the field, and has likely contributed to the heterogeneous patterns of antithrombotic therapy clinicians are using. For the first time, in this study, we present contemporary, real-world clinical practice data with regard to antiplatelet prescribing following TAVI, and the rationale behind the treatment choices made by clinicians. This has provided an insight into how treating clinicians choose to interpret the available data, along with guideline recommendations and their own clinical judgement, to inform the choice of antithrombotic therapy for patients undergoing TAVI. The majority of the clinicians surveyed do not base their decisions on antithrombotic regimens after TAVI on guideline recommendations (only 13.3%); rather, the vast majority base their decisions on personal preference or local institutional policy. This may be a reflection of the variance in recommendations between different guideline bodies, as well as the weak strength of recommendation and low class of evidence. The guideline recommendations are in turn a reflection of the absence of strong randomised data in the field. The most commonly used duration of DAPT in our structured survey was 3 months (57% of respondents prescribing DAPT), while only 5% of centres used a shorter-term policy (1 month). Only 11.1% of operators choose their antithrombotic regimen according to the valve type inserted.
The recommendation for the Edwards SAPIEN series of valves is 6 months of DAPT followed by lifelong aspirin, because this was the regimen used in the Placement of Aortic Transcatheter Valves (PARTNER) series of trials. 3 After implantation of a CoreValve, the recommendation is for 3 months of DAPT then lifelong aspirin, as this was the suggested strategy in the CoreValve trials. 10

Optimal duration of DAPT
The majority of centres used DAPT after TAVI. The most common duration was 3 months, with only 5% of centres using 1 month of DAPT. This study demonstrates that there was no additional benefit of DAPT durations longer than 1 month with regard to the prevention of stroke or death. There was no difference in bleeding rate between DAPT durations in this analysis. This may be a reflection of the heterogeneous definitions of bleeding between the studies. Other studies have demonstrated the clear bleeding risk of prolonged DAPT. 34 Although these studies did not include patients undergoing TAVI, their findings are likely to be applicable to the TAVI population given the comparatively high prevalence of renal impairment, underlying anaemia, frailty and other comorbidities that increase the propensity to bleed. It is therefore reasonable to conclude that, should DAPT be preferred, the duration should be limited to 1 month.

Aspirin monotherapy is equivalent to DAPT
Guidelines recommend DAPT after TAVI but, as described above, these are empirical recommendations based largely on expert consensus. In the absence of RCT data, the recommendation for DAPT over aspirin monotherapy has been founded on extrapolation of data advocating DAPT following implantation of drug-eluting coronary stents. However, there are fundamental differences in the thrombotic tendencies of TAVI valves and coronary stents, which limit the acceptability of this assumption. Important differences include the increased size and the bioprosthetic nature of TAVI valves. Furthermore, the advanced age, frailty and comorbidity of the current cohort of patients undergoing TAVI make bleeding complications more likely. This analysis suggests DAPT may not be necessary. Aspirin monotherapy was found to be equivalent to all durations of DAPT in terms of stroke prevention and death. This is consistent with two small randomised trials 26 42 (totalling 199 patients) that compared DAPT with aspirin monotherapy, as well as two non-randomised studies 28 33 (totalling 463 patients). When considered alongside our analysis of 11 781 patients, there appears to be a consistent signal that aspirin monotherapy is as effective as DAPT at preventing stroke. The recently published randomised controlled ARTE trial (Aspirin Versus Aspirin+Clopidogrel Following TAVI) was halted prematurely following an excess of major and life-threatening bleeds in the DAPT-allocated group. Importantly, no increased risk of stroke was observed in those patients randomised to aspirin monotherapy. The findings of this small trial of just 222 patients therefore concur with the conclusions of this much larger meta-analysis, specifically that DAPT does not confer additional advantage over aspirin monotherapy. 48

Potential role of new oral anticoagulants
Recent data have shown that subclinical valve leaflet thrombosis is more common after TAVI than after surgical aortic valve replacement with bioprosthetic valves, 49 and that this may lead to an increased risk of stroke.
Therefore, there is interest in the potential role of oral anticoagulants, particularly non-vitamin K oral anticoagulants (NOACs), for antithrombotic therapy after TAVI. The role of NOACs after TAVI is being evaluated in prospective RCTs (ClinicalTrials.gov identifier: NCT02664649), and these data may inform future guideline recommendations regarding antithrombotic therapy after TAVI.

Limitations
To limit the possibility of a false positive result, we used a random-effects analysis. However, it remains unclear how the number of studies, the relative weights of each study, the extent of heterogeneity between studies and the potential for aggregation bias all influence the probability of any metaregression producing false positive or false negative results. We assessed publication bias using a funnel plot, but accept that this does not completely exclude the possibility of publication bias. Therefore, as with any other meta-analysis, any conclusions from this study should be interpreted with this uncertainty in mind. The number of patients treated with aspirin monotherapy was much lower than the number treated with DAPT, probably as a reflection of current guideline recommendations. One could argue that this may limit the comparability of these cohorts. However, the event rates seen in the monotherapy groups are similar to those in the DAPT groups and, furthermore, short durations of DAPT were as safe and effective as longer durations; this supports the idea that aspirin monotherapy may be sufficient. There was no uniform definition of bleeding. This may have masked any effect longer DAPT durations may have had on bleeding. A further explanation for the lack of effect of DAPT on bleeding may lie in the fact that some TAVI trials excluded patients with recent bleeding events or at particularly high risk of bleeding. This incurs an inevitable selection bias which may have affected our results. The lack of effect of DAPT on bleeding should therefore be interpreted with caution.

Conclusion
Aspirin monotherapy appears to be as safe and effective as DAPT.

Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
Data sharing statement The corresponding author agrees to share the source data upon reasonable request.
Open Access This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Mottled Duck introductions to South Carolina: The ugly, the bad, and the good?

Abstract Translocations or other movements of wildlife sometimes accomplish their intended objectives, but unforeseen consequences may arise and disrupt locally adapted ecological communities, restructure or dilute the genetic integrity of populations or subspecies of the moved organism, and otherwise negatively influence a species' long-term fitness. Two historical populations of Mottled Ducks (Anas fulvigula) exist and are endemic to (1) Mexico and the West-Gulf Coast (A. f. maculosa) regions of the United States and (2) Florida (A. f. fulvigula). From 1975 to 1983, 1285 Mottled Ducks from Florida, Louisiana, and Texas were released to coastal South Carolina, primarily to ultimately establish a legally harvestable population. This movement stirred mixed reactions amid the conservation community. Contemporary information suggests an increasing Mottled Duck population in South Carolina that is possibly dispersing into Georgia. Herein, I objectively discuss the potential consequences of this new population per the birds' evolution, ecology, and management. Ultimately, I suggest that this translocation is a long-term benefit to the species.

| INTRODUCTION
There is a long history of translocation, the human-mediated movement and free release of wildlife (Seddon et al., 2012, 2014). Animals may be transported for retention of population viability, assisted colonization, and several other species' conservation-related objectives (Evans et al., 2018; Seddon et al., 2014; Tobias et al., 2020). Translocations used as a conservation strategy have greatly increased in recent decades (Bouzat et al., 2009). Approximately 124 species were translocated worldwide from 1900 to 1992, but that number had increased to 424 species by 2005 (Seddon et al., 2007, 2014). Translocations often strive to reinforce existing populations or reintroduce a species into an area after a local extinction (Seddon et al., 2014; Thévenin et al., 2018). In some cases, a species may be moved beyond its endemic range for other purposes, such as establishing and growing a population of a huntable species, that is, Mottled Ducks (Anas fulvigula) in coastal South Carolina (Kneece, 2016). Of the Mottled Ducks released to South Carolina, 26 originated from the Florida population and 1259 were from Louisiana and Texas (Kneece, 2016). Approximately 107 (8%) banded birds were later identified through either direct or indirect band recoveries from 1975 to 1986 (Kneece, 2016). Only 7 (6%) of these ducks […] (Balkcom & Mixon, 2015; Lavretsky et al., 2021; Pollander et al., 2019). Prior to this introduction, there were apparently no records of Mottled Ducks breeding in South Carolina. Presently, Mottled Ducks inhabit at least South Carolina and Georgia, important regions for waterbirds of the South Atlantic Coastal Zone (SACZ; Gordon et al., 1998; Watson & Malloy, 2008). Given the apparent expanded range into Georgia, hereafter I refer to this new population as SACZ Mottled Ducks (Seyoum et al., 2012) (Figure 1). At the time of initial release in the 1970s, there was no genetic information available that differentiated West-Gulf Coast from Florida Mottled Ducks like that available today (McCracken et al., 2001). Conservationists in the 1970s reconciled that the species would prosper in coastal South Carolina given some similarities in habitats used by Mottled Ducks in coastal Louisiana and Florida (Singleton, 1953; Smith, 1961).
The release of Mottled Ducks into South Carolina, however, has created growing awareness in the waterfowl conservation community of potential genetic ramifications (Baldassarre, 2014; Seyoum et al., 2012). There is already concern over declining breeding populations of Mottled Ducks in some portions of their range (e.g., coastal Texas), mostly due to habitat contraction and loss (Wilson, 2007). As such, this release of birds into a novel environment arguably has diverse short- and long-term consequences and outcomes. Three specific negative concerns or hypotheses have been considered, namely that the established SACZ Mottled Ducks will: (1) hybridize with feral Mallards; in North Carolina, for example, American Black Duck (Anas rubripes) × Mallard introgression has resulted from gene flow through male feral Mallards (Lawson et al., 2021); (2) hybridize with wild Mallards; and (3) directly interbreed with the Florida Mottled Duck population, thereby disrupting this latter gene pool that has been generationally distinct (Bielefeld et al., 2010; Lavretsky et al., 2021; Peters et al., 2016). All three of these possibilities could occur and potentially deteriorate the integrity of the Florida Mottled Duck gene pool if subsequent hybrids or pure SACZ Mottled Ducks freely moved into Florida. A fourth and rather unknown result of this release rests with the Mottled Duck's ecological role and interactions with other species in their newly established wetland community (Table 1). The current genetic structure of Mottled Ducks and other Mallard-like species is well documented (Peters et al., 2016; Weng, 2006). Although it is necessary to briefly overview those dynamics herein, my objective in this paper was to expand the view of the potential outcomes associated with this release (i.e., Table 1). In fact, I offer in the end that this new population is largely a long-term benefit for the species.

A related concept is conservation introduction, the intentional movement and release of an organism outside its indigenous range (Hällfors et al., 2014; IUCN, 2013; Seddon, 2010). There are two components of conservation introductions: (a) assisted colonization, the intentional movement and release of an organism outside its indigenous range to avoid extinction of populations of the focal species, and (b) ecological replacement, the intentional movement and release of an organism outside its indigenous range to perform a specific ecological function (IUCN, 2013). Despite these precise definitions, I submit that neither term applies here because the genetics of Mottled Ducks, and the species' ecology for that matter, were not well established at the time of the release. Notwithstanding these definitions, Mottled Ducks were intentionally moved and released to establish a novel population, the potential outcomes of which are the subject of this paper.

| CONTEMPORARY STATUS OF SACZ MOTTLED DUCKS
The population status of SACZ Mottled Ducks is unknown, but available information suggests it has increased since the initial release (Kneece et al., 2020), consistent with an expanded regional population achieving the original purpose: growing the new population into a sustainable, harvestable one. Population growth of these birds, however, has fueled concern over the genetic integrity of the species, particularly that of Florida Mottled Ducks (Bielefeld et al., 2010; Mank et al., 2004; Peters et al., 2014; Stutzenbaker, 1988; USFWS, 2013). In support of limited hybridization or gene flow, Peters et al.
(2014) explained that as much as four times more genetic diversity has likely resulted through gene flow between Mallards and Mottled Ducks, compared to the potential diversity achieved had they been completely isolated through time. An important alternative to this explanation, however, is that introgression of non-native genes could cause extinction of native genotypes that confer local adaptations (Quilodrán et al., 2018) and be maladaptive compared to their parentals.

TABLE 1 A hypothetical model of potential consequences associated with introducing Mottled Ducks (n = 1285) from Florida and the West-Gulf Coast to coastal South Carolina, 1975-1983.

| Mottled Duck genetics
Despite the potential negative implications of maladaptive alleles, Ford et al. (2017) recently estimated low levels (~5%-8%) of hybridization between Mallards and Mottled Ducks in the West-Gulf Coast, and this value was approximately 9% for Florida birds (Ford et al., 2017; Williams, Brust, et al., 2005; Williams, Fedynich, et al., 2005). Other studies demonstrated little contemporary gene flow among these Mallard-like species and posited that genetic extinction is unlikely for Mottled Ducks (Weng, 2006).

| Pipelining SACZ Mottled Duck genes into Florida
Another concern among some waterfowl conservationists relates to gene flow, namely that SACZ Mottled Ducks will introgress with Florida populations, ultimately eroding the genetics of the latter (Baldassarre, 2014). […] (sun-sentinel.com/news/weather/hurricane/sfl-hc-canehistory1,0,3352010.special). Surprisingly, these significant storms apparently did not prompt Florida Mottled Ducks to settle (i.e., as founder individuals) coastal South Carolina, a region boasting favorable habitats for the species for the past 400 years, specifically historic rice fields that were eventually converted to managed coastal wetlands (Edgar, 1998; Gordon et al., 1998; Zwank et al., 1989). Perhaps some individuals did move there but were too few to establish populations. Curiosity over the impacts of tropical cyclones on Mottled Ducks has long existed, as molting birds have been killed in storms (Stutzenbaker, 1988).

| Aspects of the introduction - The "unknown"
Perhaps a more challenging aspect of the introduction to be re[…] (Cely et al., 1993; Kaufman, 1996). Mottled Ducks also nest in dense spartina and other vegetation amid seasonal wetlands (Kneece, 2016; Shipes, 2014). Undoubtedly, some fourth-order (Johnson, 1980) resource needs of nesting Mottled Ducks will depart from those of several co-existing species. However, how these species partition foraging, and the potential influence of Mottled Ducks on nest clustering and density-dependent nest survival (Ringelman et al., 2014) in this avian community, are currently unknown but worthy of understanding.

| Novel environments and niche compatibility
Paradoxically, some degree of hybridization between species is good, as genetic diversity may introduce variation, novel alleles, and mutations (Alleaume-Benharira et al., 2006; Frankham, 2005; Garant et al., 2007; Lande & Shannon, 1996). Low rates of gene flow (<2%) over years between two species of Darwin's Finches (Geospiza fortis and G. scandens) apparently enhanced beak morphology and overall fitness of individuals (Grant & Grant, 2010; Hedrick, 2013; Lamichhaney et al., 2020). In effect, some levels of hybridization can assist adaptation to potentially new niches, and species can expand their climatic ranges as a result of introgression with other species (Krehenwinkel & Tautz, 2013; Stelkens et al., 2014).
Peters et al. (2014) posited that introgression of Mallard alleles has helped maintain high genetic diversity in Mottled Ducks, which could benefit the adaptability and survival of the latter. Perhaps an unsettled question is how much gene flow between these populations is acceptable to conservationists. Attempts to safeguard the integrity of Florida Mottled Ducks seem a defensible conservation priority, but also a challenge, relative to evolutionary and ecological processes (Southwood, 1977).

An instructive parallel comes from King and Clapper Rails (Coster et al., 2018). Interestingly, hybridization between these species likely occurs in marshes of intermediate salinity at some locations where range overlap occurs (Coster et al., 2018; Eddleman & Conway, 1994; Meanley, 1969; Olson, 1997). Despite the threat of hybridization creating outbreeding depression, reduced fitness, or other consequences (Edmands, 2007; Rhymer & Simberloff, 1996), introgression likely introduced novel genotypes that increase fitness and potentially local adaptations (Coster et al., 2018; Rhymer & Simberloff, 1996). King and Clapper Rails co-exist in a region of Virginia, and introgression is not viewed as deleterious, as Clapper Rails typically do not invade freshwater marshes, thus leaving this habitat type for King Rails (Coster et al., 2018). Relative to Mottled Ducks, habitats used by Florida birds diverge somewhat from habitats in the West-Gulf Coast. Florida Mottled Ducks historically have exploited thousands of ponds and irrigation reservoirs associated with ranching, farming, and citrus production, inland and in other suburban and urban areas (Bielefeld & Cox, 2006). Further south near Lake Okeechobee, Mottled Ducks use stormwater treatment areas and permanent marshes of the Everglades (Bielefeld, 2008, 2011). West-Gulf Coast Mottled Ducks also use freshwater wetlands, ditches, canals, and ricefields, but some birds in the West-Gulf Coast and South Carolina seek intermediate and brackish wetlands (Baldassarre, 2014; Grand, 1988; Shipes et al., 2015; Zwank et al., 1989). If SACZ Mottled Ducks (i.e., of predominantly West-Gulf Coast origin) move seasonally or otherwise to Florida, it might be that these third- and fourth-order habitat affinities (Johnson, 1980) create natural niche partitioning among the cohorts of birds, similar to King and Clapper Rails in Virginia. The evolutionary ecology of animal personalities (Dall et al., 2012; Miranda et al., 2013) may offer some insight into potential interactions between SACZ and Florida Mottled Ducks during their habitat use in Florida. Genetic components of animal personalities can influence resource use of individuals (Miranda et al., 2013; Schielzeth et al., 2011; Van Oers et al., 2004). Cities are evolutionarily novel environments with unfamiliar challenges for wildlife, and urban landscapes are thus ideal systems for understanding how plasticity might promote or hinder adaptation to new environments (Bressler et al., 2020; Shanahan et al., 2013; Sol et al., 2013). Dark-eyed Juncos (Junco hyemalis) and Eurasian Magpies (Pica pica) both demonstrate a protracted breeding season in urban areas, possibly resulting from milder climates or greater food abundance compared with their conspecifics in natural habitats (Bressler et al., 2020; Jerzak, 2001).
What remains equivocal for birds generally is whether plasticity is adaptive for urban populations. […] Mottled Ducks in the West-Gulf Coast (Grand, 1988; Zwank et al., 1989). South Carolina has lost approximately 29% of its wetlands since 1780 (Yarrow, 2009), and wetland losses are especially problematic along coastal South Carolina (Strauss et al., 2014). Moreover, the states receiving the greatest levels of human migration from 1995 to 2000 included Georgia, Florida, and South Carolina, and the population density of coastal communities in those states increased by 70% between 1980 and 2003 (Franklin, 2003).

| Are we stalling inbreeding depression?
West-Gulf Coast and Florida populations differ phenotypically (e.g., plumage and bill color) and are nearly as divergent from each other as they are from other Mallard-like duck taxa. With modification to, and the foreseeable future loss of, suitable habitats along coastal Carolina and in Florida, perhaps the most ecologically lucrative outcome of this translocation is that the potential consequences of inbreeding depression have been delayed. Inbreeding depression results from the mating of close relatives (Wright, 1922), and its artifacts are the expression of deleterious recessive alleles (Roff, 2002; Szulkin & Sheldon, 2007) and a trend toward genome-wide homozygosity (Keller & Waller, 2002; Szulkin & Sheldon, 2008). Accumulation of deleterious mutations can subsequently reduce individual fitness (Opatová et al., 2016). Fortunately, there are potentially positive outcomes relative to inbreeding depression via individual dispersal. First, Opatová et al. (2016) studied the effects of inbreeding on Zebra Finch sperm characteristics: inbred males had more abnormal spermatozoa and lower sperm velocity than outbred males maintained under the same conditions. Hence, dispersal of individuals from one population into another can increase the heterozygosity of a population and minimize breeding among close relatives (Hamilton & May, 1977; Opatová et al., 2016; Szulkin & Sheldon, 2008). Second, the Greater Prairie Chicken (Tympanuchus cupido pinnatus) has declined throughout its range in North America, in part due to issues with inbreeding depression (Bouzat et al., 2009). […] (Gordon et al., 1989, 1998). However, the bird has an affinity for managed wetlands (Shipes et al., 2015), as do other dabbling ducks in coastal South Carolina (Gordon et al., 1998). Landscape connectivity (Taylor et al., 1993) can ameliorate many risk factors and allow physically disjunct populations to persist in a network (Crooks & Sanjayan, 2006; Macdonald & Johnson, 2001), or even as a metapopulation of interconnected habitats (Doleman, 2012; Hanski, 1999; Smith & Green, 2005; van Rees et al., 2018). In this light, SACZ Mottled Ducks are proximal to Florida, but at the same time the species demonstrates reluctance to move great distances; thus, I hypothesize that population structuring will likely be maintained, while the limited gene flow that may occur could actually benefit both populations, with little fear of genetic homogenization of the Florida birds. In closing, a primary concern for species viability is how the availability of quality habitats influences population size and integrity. With […]
Knowledge distillation based on multi-layer fusion features

Knowledge distillation improves the performance of a small student network by encouraging it to learn knowledge from a pre-trained, high-performance but bulky teacher network. Generally, most current knowledge distillation methods extract relatively simple features from the middle or bottom layers of the teacher network for knowledge transfer. However, these methods ignore the fusion of features, and fused features contain richer information. We believe that the richer and better the information contained in the knowledge delivered by the teacher to the student, the easier it is for the student to perform better. In this paper, we propose a new method called Multi-feature Fusion Knowledge Distillation (MFKD) to extract and utilize expressive fusion features of the teacher network. Specifically, we extract feature maps from different positions in the network, i.e., the middle layers, the bottom layer, and even the front layers of the network. To properly utilize these features, the method designs a multi-feature fusion scheme to integrate them. Compared to features extracted from a single location of the teacher network, the final fusion feature map contains richer, more meaningful information. Extensive experiments on image classification tasks demonstrate that a student network trained by our MFKD can learn from the fusion features, leading to superior performance. The results show that MFKD can improve the Top-1 accuracy of ResNet20 and VGG8 by 1.82% and 3.35% respectively on the CIFAR-100 dataset, which is better than many existing state-of-the-art methods.

Introduction
The great success of computer vision in the past few decades is inseparable from deep neural networks (DNNs) [1][2][3][4], as DNNs show excellent performance on many vision tasks [5,6]. Generally speaking, the performance of a network model is positively related to its amount of parameters and computation. However, a model with a large number of parameters cannot be deployed on embedded devices with limited resources. Existing methods to address this problem mainly include knowledge distillation (KD) [7,8], network pruning [9,10], network quantization [11,12] and low-rank factorization [13], among which KD is a very effective method.

The essence of KD is to enable a small student network with few parameters to learn the "dark knowledge" taught by a large teacher network with many parameters; Fig 1 shows the essence of KD. The student network can achieve considerable performance improvement, even approaching the performance of the teacher network. Existing KD methods transfer "dark knowledge" in two main ways: logits distillation [7,14] and feature distillation [15,16]. Logits distillation performs knowledge transfer by minimizing the relative entropy between the logits predicted by the teacher and the student. Compared with logits distillation, feature-based methods perform well on many tasks, such as model compression for image classification networks and for object detection networks [17,18]. Therefore, researchers have leaned toward feature distillation in recent years. But most of these methods ignore the fact that, within the same neural network, the feature maps generated by layers at different locations contain different information, which may yield different results when applied to KD.
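For reference, the logits-based formulation mentioned above can be written in a few lines of PyTorch. This is the standard Hinton et al. recipe rather than anything specific to MFKD; the temperature T=4 and weight alpha=0.9 are common but arbitrary illustrative choices.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft targets: teacher logits softened by temperature T.
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    # KL term scaled by T^2 to keep gradient magnitudes comparable.
    distill = F.kl_div(log_student, soft_targets, reduction="batchmean") * T * T
    # Hard-label cross-entropy keeps the student anchored to ground truth.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1.0 - alpha) * ce

# Toy usage: a batch of 8 random logits over 100 classes.
s, t = torch.randn(8, 100), torch.randn(8, 100)
y = torch.randint(0, 100, (8,))
print(kd_loss(s, t, y).item())
```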
It is a known fact that the resolution of the feature maps generated by a neural network decreases from high to low under the action of downsampling as computation proceeds layer by layer. Low-resolution feature maps have stronger semantic information, while high-resolution feature maps have more accurate localization activations due to fewer downsampling steps [19]. Thus, we believe that extracting feature maps from different layers of the neural network for fusion, and then distilling, can raise the quality of the knowledge taught by the teacher to the student, thereby improving the student's learning.

In the present work, a new method called Multi-feature Fusion Knowledge Distillation (MFKD) is proposed. First, we extract the feature maps generated by different layers of the teacher network and employ a feature pyramid to fuse them. During this process, the feature maps are corrected. We then obtain the fusion features of the student network in the same way. Finally, the mean squared error (MSE) between the fusion feature maps produced by the teacher and the student is minimized to promote knowledge transfer.

In summary, our main contributions are as follows:
• Considering that fused features are rich in information, we perform knowledge transfer by fusing feature maps generated by multiple neural network layers.
• In the fusion process, an attention mechanism is added to correct the feature maps so as to obtain better multi-layer fusion features.
• For different datasets and different networks, the distillation effect is significantly improved.

Related work
Since this paper focuses on compressing neural network models by means of KD, some related work published in recent years on model compression and KD is reviewed below.

a) Model Compression. To solve the problem of deploying DNN models on embedded devices with limited storage and computing resources, model compression technology has been proposed. Several methods realize model compression: network pruning, network quantization, low-rank factorization and KD. Neural network models contain a large number of redundant parameters that have only a subtle impact on the final result; the main idea of pruning is therefore to cut out unimportant neurons and filters in the network to compress the model [20,21]. The purpose of quantization is to replace the high-precision numbers stored in the original model with low-precision numbers [22]. Low-rank factorization sparsifies the convolution kernel matrix by combining dimensions and imposing low-rank constraints [23]; most weight vectors lie in a low-rank subspace, and a small number of basis vectors can reconstruct the convolution kernel matrix, reducing storage space.

b) Knowledge Distillation. The framework of knowledge distillation was first proposed by Hinton et al. [7] and is based on logits. By introducing the concept of temperature (T), the logits generated by the teacher model are softened to obtain soft targets, and the logits generated by the student are then used to mimic these soft targets. However, most current knowledge distillation methods are based on features. According to the extraction location of the features, we make the following divisions:

(b1) Middle layer. Romero et al.
[15] proposed a two-stage method which extracts intermediate-layer features from the teacher network and the student network, letting the student features fit the teacher features to obtain a pre-trained student model in the first stage. In the second stage, with the help of the parameters obtained in the first stage, the authors used soft targets to train the complete student model.

(b2) Bottom layer. Factor Transfer (FT) [24] distils at the end of the last layer group; here a convolution layer group refers to a combination of N convolution layers whose output feature maps have the same size. For example, ResNet56 has three layer groups with nine convolutional layers in each. The method of Heo et al. [25] differs slightly from FT: they extract features between the first ReLU and the end of the layer group, which better preserves the information conveyed by the teacher. Contrastive Representation Distillation (CRD) [17] adds contrastive learning to knowledge distillation, transferring knowledge by contrasting the penultimate-layer features (before the logits) of the teacher and student networks.

(b3) Multilayer. Different from FitNets, which extracts the output features of an arbitrary intermediate layer, the AT [26], FSP [27] and Jacobian [28] methods extract the output features of each convolution layer group when the teacher and student networks have different depths. Chen et al. [16] used multi-layer features in the teacher network to guide the learning of a certain layer in the student network, realizing knowledge transfer across layers.

Differing from the aforementioned methods, we not only extract feature maps from different layers of the neural network, but also fuse these feature maps to obtain fused feature maps that contain abundant information and have good representation ability. This makes it easier for the student network to learn and improves the effect of KD.

Our method
Multi-feature Fusion Knowledge Distillation (MFKD) is a feature-based distillation method, so feature extraction and processing are particularly important. In this section, the details of MFKD, including notation, feature extraction and correction, the feature fusion pyramid, and the hyperparameter p, are introduced.

• Notation. Suppose the teacher network and the student network are denoted by N_t and N_s respectively, the convolutional part of N_t has i layer groups, and that of N_s has j layer groups. The input is denoted by x. As x is processed by the network, the set of output feature maps produced by the layer groups of N_t can be expressed as O^t = {o^t_1, o^t_2, o^t_3, ..., o^t_i}, where o^t_k is the output of the kth layer group of N_t and also the input of the (k+1)th layer group. Similarly, the set of output feature maps produced by the layer groups of N_s can be expressed as O^s = {o^s_1, o^s_2, o^s_3, ..., o^s_j}. Here o^t_i and o^s_j are the final outputs of the entire convolutional part of N_t and N_s, respectively. We select m feature maps from O^t to form the feature-extraction set Ō^t of N_t, and likewise n feature maps from O^s to form Ō^s. The final fusion features of N_t and N_s are denoted by F^t_{m-1} and F^s_{n-1}, respectively. The knowledge transfer between teacher and student can therefore be described as the optimization problem

min MSE(F^t_{m-1}, F^s_{n-1})    (1)

• Features extraction and correction. We extract features at different positions in the teacher network and the student network to obtain the sets Ō^t and Ō^s, and use an attention mechanism to correct all feature maps in these sets. The Squeeze-and-Excitation block (SEblock) [29] is used in this paper to achieve feature correction; its framework is displayed in Fig 2.
In Fig 2, the SEblock consists of two parts: squeeze and excitation. In the squeeze phase, global average pooling converts the input feature map f of size C×H×W into an output f_squ of size C×1×1, compressing the information in f into f_squ. The excitation phase is essentially a simple gating mechanism with sigmoid activation, parameterized by two fully connected layers with a dimensionality-reduction ratio r (in this paper, r = 16). Finally, channel-wise multiplication of f by the excitation output f_exc completes the mapping from f to the corrected map f′:

f′ = f ⊙ f_exc    (2)

• Feature fusion pyramid. Since feature maps generated at different locations of the network have different sizes, they cannot be fused directly; we therefore use a feature fusion pyramid. Its structure is based on the pyramidal feature hierarchy of convolutional neural networks [1,2], with the purpose of fusing high-level semantic information with low-level localization features and performing knowledge transfer.

Taking N_t as an example, the framework of the feature fusion pyramid is shown in Fig 3. Adjacent feature maps are first corrected with the help of the Squeeze-and-Excitation block, giving o′^t_m and o′^t_{m-1}, and these two corrected maps are then fused:

F^t_1 = Fuse(Down(o′^t_m), o′^t_{m-1})    (3)

Because the resolution of o′^t_m is higher than that of o′^t_{m-1}, o′^t_m must be downsampled (Down) before fusion. If element-wise addition is used as the fusion method (see the ablation study in the experiment section), o′^t_m and o′^t_{m-1} must also have the same number of channels, so this paper uses 1×1 convolutional layers to increase or decrease the channel dimension of feature maps. Note that whenever a feature map is processed by downsampling, dimensionality change or fusion, it is corrected again. We found that feature correction keeps the information carried by the feature map accurate, and the final fusion features perform well after repeated adjustment.

In fact, a feature map changes after convolutional computation, and the original information in the map is also affected. Therefore, we change the dimension of o′^t_m after downsampling so that it matches that of o′^t_{m-1}. After m-1 fusions, we obtain the final fusion feature of N_t:

F^t_{m-1} = Fuse(Down(F^t_{m-2}), o′^t_1)    (4)

In the same way, the final fusion feature F^s_{n-1} of N_s is obtained. For MFKD, knowledge transfer is realized by letting F^s_{n-1} mimic F^t_{m-1}, shortening the distance between them; this is the problem described in Eq 1.

• Hyperparameter p. To further improve the performance of MFKD, a hyperparameter p is introduced. In the process of N_s learning from N_t, when the predictions of N_t are good, N_s should learn from N_t; when the predictions of N_t are bad, N_s should learn from the ground-truth labels. In this way, p becomes the criterion for judging the quality of the predictions of N_t, and the remaining task is to set its optimal value. We divide a dataset into α batches and feed them to N_t to calculate prediction results. These results are arranged in ascending order in a vector PRE = {pre_1, pre_2, pre_3, ..., pre_α}, where pre_i < pre_j when i < j. We then set a percentage β and take p = pre_{α×β}.
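A minimal PyTorch sketch of the two building blocks just described — the SEblock correction and one fusion step — is given below. It follows the text (r = 16, downsampling, 1×1 channel matching, element-wise addition, re-correction after fusion), but choices such as the pooling operator are assumptions; the published MFKD implementation may differ in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEBlock(nn.Module):
    """Squeeze-and-Excitation correction with reduction ratio r = 16."""
    def __init__(self, channels, r=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // r)
        self.fc2 = nn.Linear(channels // r, channels)

    def forward(self, f):
        # Squeeze: global average pool C×H×W -> C.
        s = f.mean(dim=(2, 3))
        # Excitation: bottleneck FC layers with sigmoid gating.
        w = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))
        # Channel-wise reweighting: f' = f ⊙ f_exc (Eq 2).
        return f * w.view(f.size(0), -1, 1, 1)

class FuseStep(nn.Module):
    """One pyramid step: downsample the higher-resolution map, match
    channels with a 1x1 conv, add element-wise, then re-correct (Eq 3)."""
    def __init__(self, c_high, c_low):
        super().__init__()
        self.match = nn.Conv2d(c_high, c_low, kernel_size=1)
        self.se = SEBlock(c_low)

    def forward(self, high, low):
        high = F.adaptive_avg_pool2d(high, low.shape[2:])
        return self.se(self.match(high) + low)

# Quick shape check, and the distillation signal of Eq 1:
se = SEBlock(64)
x = torch.randn(2, 64, 16, 16)
print(se(x).shape)      # torch.Size([2, 64, 16, 16])
mse = nn.MSELoss()      # MSE(F^t_{m-1}, F^s_{n-1}) between fusion maps
```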
Experiment
• Implementation Details
a) Dataset. Two classical image classification datasets, CIFAR-10 [30] and CIFAR-100 [30], are selected to validate the effectiveness of MFKD. The details of the two datasets are as follows.

CIFAR-10 has a total of 60K color images, comprising a training set of 50K images and a test set of 10K images. Each image is 32×32 pixels. There are 10 categories in total, each with 6K images.

Similar to CIFAR-10, CIFAR-100 also has 60K color images, with 50K training images and 10K test images. It has 100 categories, each with 600 images (500 training and 100 test). The image size is 32×32 pixels.

b) Models. Three kinds of neural networks are used in our experiments: ResNet [4], which is narrow and deep; WideResNet [31], which is wider but shallower than ResNet; and VGG [2], a classical linear-structure network. Our experiments focus on knowledge distillation between networks of the same architectural style, e.g., VGG13 as teacher and VGG8 as student.

c) Setting. Data augmentation. For the CIFAR training sets, we first pad 4 pixels around each image, then randomly crop the image to 32×32 pixels, perform a random horizontal flip with probability 0.5, and finally normalize the image with the per-channel mean and standard deviation. For the test sets, only normalization is applied.

Training parameter settings (Table 1). To verify the effectiveness of MFKD, we use the same parameter settings for baseline training and distillation training. Stochastic Gradient Descent (SGD) is used for network optimization, with momentum 0.9 and weight decay 5e-4. The initial learning rate is 0.05, decaying to 0.1 times its previous value at the 150th, 180th and 210th epochs. Training runs for a total of 240 epochs with a batch size of 64.
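The schedule above maps directly onto a standard PyTorch optimizer/scheduler pair; the model below is a stand-in placeholder for the actual student networks.

```python
import torch

# Placeholder student; swap in ResNet20/VGG8/WRN_16_2 as needed.
model = torch.nn.Linear(32 * 32 * 3, 100)

optimizer = torch.optim.SGD(
    model.parameters(), lr=0.05, momentum=0.9, weight_decay=5e-4)
# Decay to 0.1x at epochs 150, 180 and 210; 240 epochs, batch size 64.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[150, 180, 210], gamma=0.1)

for epoch in range(240):
    # ... one pass over the 64-sample training batches would go here ...
    scheduler.step()
```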
• Results
a) CIFAR-10. On the CIFAR-10 dataset, we consider two teacher-student pairs: ResNet56 as teacher with ResNet20 as student, and WRN_40_2 as teacher with WRN_16_2 as student.

In the first case, both ResNet56 and ResNet20 contain three convolutional layer groups; we extract features at the output of each layer group and take the average of three runs as the final result. Compared to conventional training, MFKD improves the Top-1 accuracy of ResNet20 by 0.48%, which is also slightly better than several other methods. The experimental results are shown in Table 2.

In the second case, WRN_40_2 and WRN_16_2 both have three convolution layer groups, and we extract the output feature maps of each layer group for fusion. Taking the average of three runs as the final result, our method improves the Top-1 accuracy of WRN_16_2 by 0.59% compared with conventional training, again slightly better than the other methods. The experimental results are tabulated in Table 3.

b) CIFAR-100. For the CIFAR-100 dataset, we consider two cases: VGG13 as teacher with VGG8 as student, and ResNet56 as teacher with ResNet20 as student. In the first case, both VGG13 and VGG8 contain five convolutional layer groups, and we extract features at the outputs of the 2nd, 3rd and 4th layer groups to validate MFKD. The average of three runs is taken as the final result. MFKD improves the Top-1 accuracy of VGG8 by 3.35% compared to conventional training. The experimental results are displayed in Table 4.

Following the same scheme as the first CIFAR-10 case, MFKD improves the Top-1 accuracy of ResNet20 on CIFAR-100 by 1.82% compared to conventional training. The experimental results are given in Table 5.

• Ablation Study
The location of feature extraction and the fusion method are two important factors affecting MFKD, so we conducted a detailed ablation experiment.

a) Extraction location. Because the VGG network has 5 different layer groups, it is convenient for exploring the influence of the extraction position on MFKD; the ablation study therefore uses the VGG network.

Compared with the original extraction combination (the output features of the 2nd, 3rd and 4th layer groups), we designed two further combinations. Combination A replaces the output features of the 2nd layer group with those of the 5th layer group, i.e., substituting bottom-layer features for front-layer features. Combination B replaces the output features of the 3rd layer group with those of the 5th layer group, i.e., substituting bottom-layer features for middle-layer features. Taking VGG13 as the teacher and VGG8 as the student, the experimental results on CIFAR-100 are shown in Table 6. The original combination includes features from the front, middle and bottom parts of the network, and the performance of MFKD decreases when the front- or middle-layer features are missing.

b) Fusion method. We explore two ways of fusing two feature maps: ADD and CONCAT, where ADD adds feature map A and feature map B element-wise, while CONCAT concatenates them into a new feature map.

When using ADD, the value of the fused feature at each pixel is the average of the values of the two feature maps. When using CONCAT, a structure of the form 1×1conv-3×3conv-3×3conv processes the concatenated feature maps, raising or lowering the dimensionality and collecting features. The results are reported in Tables 7 and 8.

Tables 7 and 8 show that CONCAT performs better than ADD for the residual network ResNet, while ADD is better than CONCAT for the linear network VGG. The choice of fusion method varies with the network structure.
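The two fusion variants compared in this ablation differ only in how the spatially aligned maps are combined. A minimal sketch follows; the shapes and channel counts are illustrative, and the 1×1-3×3-3×3 collector mirrors the structure described above.

```python
import torch
import torch.nn as nn

a = torch.randn(1, 64, 8, 8)  # feature map A
b = torch.randn(1, 64, 8, 8)  # feature map B (same shape after 1x1 conv)

# ADD: element-wise mean of the two maps.
fused_add = (a + b) / 2

# CONCAT: channel concatenation followed by the 1x1-3x3-3x3 collector,
# which also restores the channel dimension.
collect = nn.Sequential(
    nn.Conv2d(128, 64, kernel_size=1),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
)
fused_cat = collect(torch.cat([a, b], dim=1))
print(fused_add.shape, fused_cat.shape)  # both torch.Size([1, 64, 8, 8])
```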
• Extension

In addition, we apply our method between teacher and student networks with different structural styles: ResNet50, with four convolutional layer groups, serves as the teacher network, and VGG8, with five convolutional layer groups, as the student network. On the CIFAR-100 dataset, we extract the output feature maps of the 2nd, 3rd, and 4th layer groups of ResNet50, and the output feature maps of the 3rd, 4th, and 5th layer groups of VGG8, respectively. The fused features of ResNet50 are processed with 1×1 convolutions to match the number of channels of VGG8. MFKD improves the Top-1 accuracy of VGG8 by 2.18% compared to regular training. The experimental results are given in Table 9.

Conclusions and future work

In the present study, multi-layer feature fusion knowledge distillation (MFKD) is proposed to improve the performance of the student network. Specifically, we first design the feature fusion pyramid to effectively fuse multiple layers of features together. Then, the quality of the feature maps is refined by the attention mechanism. Finally, by setting hyperparameters, the student can choose which objects to learn from, further improving the distillation effect. Experiments show that MFKD can significantly outperform state-of-the-art methods.

In the future, we plan to explore MFKD more comprehensively in the case where the teacher network and student network have different structural styles. Furthermore, applications of MFKD to image detection, image segmentation, and other tasks are another research interest.

Fig 3. Feature fusion pyramid of $N_t$. https://doi.org/10.1371/journal.pone.0285901.g003

The set of output feature maps obtained by each layer group in $N_t$ can be expressed as $O_t = \{o_t^1, o_t^2, o_t^3, \ldots, o_t^i\}$, in which $o_t^k$ is the output of the $k$th layer group in $N_t$ and is also the input of the $(k+1)$th layer group. Similarly, the set of output feature maps obtained by each layer group in $N_s$ can be expressed as $O_s = \{o_s^1, o_s^2, o_s^3, \ldots, o_s^j\}$. Among them, $o_t^i$ and $o_s^j$ are respectively the final outputs of the entire convolutional part of $N_t$ and $N_s$. We select $m$ feature maps from $O_t$ to form a feature extraction set of $N_t$.
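As a concrete illustration of the cross-architecture setting above, the following sketch matches a teacher feature map to the student's channel count with a 1×1 convolution before fusion. The channel numbers are assumptions chosen for illustration, not values taken from the paper.

```python
import torch
import torch.nn as nn

class ChannelMatcher(nn.Module):
    # Projects teacher features to the student's channel count with a
    # 1x1 convolution, so teacher and student maps can be compared or fused.
    def __init__(self, teacher_channels: int, student_channels: int):
        super().__init__()
        self.proj = nn.Conv2d(teacher_channels, student_channels, kernel_size=1)

    def forward(self, teacher_feat: torch.Tensor) -> torch.Tensor:
        return self.proj(teacher_feat)

# Example: a 1024-channel teacher feature map projected to 256 channels.
matcher = ChannelMatcher(teacher_channels=1024, student_channels=256)
t_feat = torch.randn(1, 1024, 8, 8)
print(matcher(t_feat).shape)  # torch.Size([1, 256, 8, 8])
```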
Loss of NOTCH2 Positively Predicts Survival in Subgroups of Human Glial Brain Tumors

The structural complexity of the chromosome 1p centromeric region has been an obstacle to fine mapping of tumor suppressor genes in this area. Loss of heterozygosity (LOH) on chromosome 1p is associated with longer survival of oligodendroglioma (OD) patients. To test the clinical relevance of 1p loss in glioblastoma (GBM) patients and identify the underlying tumor suppressor locus, we constructed a somatic deletion map of chromosome 1p in 26 OD and 118 GBM. Deletion hotspots at 4 microsatellite markers located at 1p36.3, 1p36.1, 1p22 and 1p11 defined 10 distinct haplotypes that were related to patient survival. We found that loss of the 1p centromeric marker D1S2696, within NOTCH2 intron 12, was associated with favorable prognosis in OD (P = 0.0007) as well as in GBM (P = 0.0175), while 19q loss, concomitant with 1p LOH in OD, had no influence on GBM survival (P = 0.918). Assessment of the intra-chromosomal ratio between NOTCH2 and its 1q21 pericentric duplication N2N (N2/N2N test) allowed delineation of a consistent centromeric breakpoint in OD that also contained a minimally lost area in GBM. OD and GBM showed distinct deletion patterns that converged on the NOTCH2 gene in both glioma subtypes. Moreover, the N2/N2N test disclosed homozygous deletions of NOTCH2 in primary OD. The N2/N2N test distinguished OD from GBM with a specificity of 100% and a sensitivity of 97%. Combined assessment of the NOTCH2 genetic markers D1S2696 and N2/N2N predicted 24-month survival with an accuracy (0.925) that is equivalent to histological classification combined with the D1S2696 status (0.954) and higher than current genetic evaluation by 1p/19q LOH (0.762). Our data propose NOTCH2 as a powerful new molecular test to detect prognostically favorable gliomas.

INTRODUCTION

Histological classification and WHO grading of glial brain tumors represent the gold standard to estimate prognosis and guide therapy [1,2]. Median survival time of glioma patients varies considerably between gliomas of different histological type and WHO grade: it is less than 12 months in GBM [3], 10 years in OD grade II [4] and approximately 3-4 years in anaplastic OD grade III [5]. However, histological classification of malignant gliomas can be difficult, especially if only small amounts of stereotactic biopsy material are available [6]. Even within one histological glioma subtype, the course of the disease can be highly variable, depending on the genetic background of the tumor. For these reasons, molecular markers are expected to improve diagnostic and prognostic accuracy and guide therapy.

In contrast to OD, the significance of LOH on 1p as a prognostic marker is not clear in malignant astrocytoma, although a correlation has also been postulated for GBM [19]. We therefore compared the deletion patterns on 1p in a large series of OD and GBM, constructing genetic deletion maps of 1p to determine distinct 1p haplotypes in relation to patient survival.

LOH on 19q is not associated with 1p loss in GBM

Since LOH of 1p and 19q is concomitant in OD [8], we analyzed the 19q status in all OD and GBM displaying 1p loss. As expected [20], 100% (21/21) of OD with haplotype H10 had 19q loss and significantly better prognosis (P = 0.0038, MANOVA). In contrast, only 47% of GBM with haplotypes H2-H10 displayed concomitant 19q loss, which was randomly distributed among the three 1p deletion categories and did not correlate with survival (P = 0.918, MANOVA).
These data suggest that 1p loss rather than 19q loss predicts better survival in the subgroup of GBM patients with 1p loss, in contrast to OD patients, who display codeletions of 1p and 19q. The centromeric marker D1S2696 was indeed the best discriminator for longer survival in both GBM and OD compared with more telomeric 1p and 19q markers (Figure 2).

Minimally lost areas in OD and GBM converge on NOTCH2

GBM with haplotypes H8-H10 define a minimally lost area that spans between markers D1S514 and 210WF10 and overlaps the centromeric breakpoint cluster between markers D1S2696 and 210WF10 in OD with haplotype H10 (Figure 3B). Refinement of deletion mapping in this area has so far been limited by the pericentric duplication of chromosome 1 [18]. This duplicates the 5' part of NOTCH2, up to 8 kb into intron 5, from 1p11 to 1q21.1, encoding the truncated NOTCH2 N-terminal (N2N) gene [21]. Sequence comparison between these duplicated regions revealed several single nucleotide polymorphisms and microdeletions. We selected two 5-bp microdeletions from exons 1 and 4 of N2N to develop a PCR-based assay, the 'N2/N2N test', that recognizes either genomic region by size and determines its relative dosage in tumor DNA (Figure 3A). Calculation of the ratio between NOTCH2 and N2N PCR product levels in DNA from tumor and lymphocytes derived from the same patient evaluates the gene copy status at NOTCH2 relative to N2N.

In 100% (21/21) of OD displaying 1p loss (haplotype H10), this test showed imbalance between the duplicated regions: exons 1 and 4 of NOTCH2 harbored half the copy number relative to N2N, indicating loss of one NOTCH2 copy (Figure 3B). Two OD cases with 1p loss (AO80 and AO84) had fluorescence intensity of exon 4 of NOTCH2 close to baseline (Figure 3B). This indicated loss of both NOTCH2 genomic copies at this position and was confirmed by real-time quantitative PCR to be a homozygous deletion. This genomic imbalance showed that the breakpoints detected in OD with 1p loss (Figures 1A and 3B) cluster between the duplicated areas. In contrast, 97% (35/36 cases with an informative N2/N2N test) of GBM with 1p loss (haplotypes H2-H10) revealed equal copy numbers with the N2/N2N test. Therefore, in GBM, breakpoints on 1p are telomeric to the pericentric duplication, either towards distal 1p or 1q (Figure 3B). The single GBM showing an OD-like pattern in the N2/N2N test (tumor 155, Figure 3C) was histologically reclassified by two independent neuropathologists as a GBM with oligodendroglial features. All analyzed GBM without 1p loss (5/5) also had equal copy numbers between NOTCH2 and N2N (tumor G49, Figure 3B). Hence, OD and GBM display distinct 1p deletion patterns that can be recognized using the N2/N2N test. Moreover, the results of the N2/N2N test and fine mapping of centromeric deletions in GBM disclosed a minimal area of loss located between marker D1S514 and exon 4 of NOTCH2, and homozygous deletions at exon 4 of NOTCH2 in OD (Figure 3C). These findings render NOTCH2 a candidate tumor suppressor gene in all OD with 1p loss and in the subgroup of GBM with centromeric 1p loss.

The centromeric 1p status is a predictor of glioma patient survival

We performed receiver operating characteristic (ROC) analysis of the different molecular markers with regard to prognosis (observed survival). In addition, specificity and sensitivity at a cut-off of 24 months were calculated for the 1p telomeric (D1S2845), interstitial (D1S216), centromeric (D1S2696) and 19q (D19S589) microsatellite markers.
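An ROC evaluation of a binary marker against a 24-month survival cut-off, as described above, can be reproduced with standard tooling; the following is a minimal sketch using scikit-learn with made-up placeholder data, not the study's actual values.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Placeholder data: 1 = survived >= 24 months, 0 = did not (NOT study data).
survived_24m = np.array([1, 1, 0, 0, 1, 0, 1, 0])
# Marker status per patient: 1 = LOH at the marker, 0 = retained.
marker_loh = np.array([1, 1, 0, 0, 1, 0, 0, 0])

auc = roc_auc_score(survived_24m, marker_loh)

# Sensitivity and specificity at the 24-month cut-off.
tn, fp, fn, tp = confusion_matrix(survived_24m, marker_loh).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.3f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```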
With respect to microsatellite markers, D1S2696 was the most accurate 1p microsatellite marker for predicting the survival of glioma patients, with an area under the curve (AUC) of 0.860 (Figure 4). However, the N2/N2N test predicted 24-month survival even more accurately, with an exceptionally high accuracy for a biological test (AUC = 0.931). The information content with respect to prognosis of the N2/N2N test was even higher than that of the histological examination (AUC = 0.891, Figure 4).

NOTCH2 status is a predictor of glioma patient survival

Presently, estimation of glioma patient survival is based on molecular diagnoses that identify OD with 1p loss, frequently performed with telomeric 1p and 19q markers. We therefore analyzed how the predictive power of either the telomeric D1S2845 or the centromeric D1S2696 1p marker in combination with 19q status relates to survival. In addition, survival time cut-off values, calculated for 24, 36 and 48 months, were optimized for discrimination of prognostic accuracy (Table 3). The N2/N2N test and the histological classification (i.e., OD vs GBM) were used for stratification to determine the negative and positive predictive values. The accuracy in predicting survival of the centromeric 1p marker D1S2696 together with the 19q status (0.800) was slightly higher than that of the combined use of the telomeric D1S2845 and 19q status (0.762). However, using the centromeric marker D1S2696 in combination with the N2/N2N test, the accuracy in predicting survival (more or less than 24 months) was 0.925 (Table 3).

Figure 4. For a given cut-off (a particular survival time), the test result was determined for all individual tests (based on either the molecular markers D19S589, D1S2845, D1S216, D1S2696, the N2/N2N test, or histology) as being true or false positive, or true or false negative, respectively. Based on these data, the specificities (or 1-specificities, respectively) and sensitivities were calculated for each of the cut-off points. doi:10.1371/journal.pone.0000576.g004

Thus, the combined use of the molecular markers D1S2696 and N2/N2N accurately predicts glioma survival by identifying subgroups of OD and GBM with a better prognosis of survival and, among them, by distinguishing OD from GBM.

DISCUSSION

We found that NOTCH2 is a common deletion target in OD as well as in GBM, raising the hypothesis of a possible causal relationship between NOTCH2 status and tumor behavior. The location of NOTCH2 near the chromosome 1 breakpoint cluster area of OD with 1p/19q loss (Figure 3B) suggests that NOTCH2 inactivation is associated with the recently described OD translocation t(1;19)(q10;p10) [9,10]. In GBM, although additional prognostic factors would certainly have provided stronger validation, the low number of tumors with 1p centromeric loss detected (n = 9), resulting from a low-frequency event (8%), was sufficient to reach high statistical significance (P = 0.0175).

NOTCH signaling represents an evolutionarily conserved pathway that controls key steps of development, cell growth and differentiation [22]. During brain development, NOTCH2 is expressed in the external granule layer of the cerebellum and, in postnatal brain, in dividing immature glial cells of ventricular germinal zones [23,24]. NOTCH1 and NOTCH2 are involved in neoplastic disease [25], e.g., leukemia [26,27], skin cancers [28], and human medulloblastomas [29]. In fact, since NOTCH1 can be regarded either as an oncogene or as a tumor suppressor, depending on the cellular context [25], this rule may also apply to NOTCH2.
Interestingly, a subset of GBM with better outcome shows expression alterations in components of the NOTCH pathway [30]. In a recent report, the existence of a deletion hotspot on centromeric 1p in glioma has consistently been shown by comparative genomic hybridization [15].

ROC analysis with regard to 24-month survival first showed a higher relevance of the centromeric marker D1S2696 (AUC = 0.860) compared with telomeric or 19q markers. Moreover, the N2/N2N test predicted 24-month survival with high accuracy for a biological test (AUC = 0.931), even higher than the histological examination (AUC = 0.891, Figure 4). In fact, while marker D1S2696 defined all gliomas, GBM and OD, with 1p loss, and histology identified all OD regardless of their genetic signature, the N2/N2N test allowed the distinction of OD with 1p loss, precisely the subgroup of gliomas with the best outcome. Identification of OD with 1p/19q loss is presently performed with 1p telomeric and subtelomeric molecular markers in combination with 19q markers. Our results show, first, that diagnostic assessment of 1p telomeric markers cannot distinguish between subgroups of prognostically better OD and poor GBM with 1p deletions. Moreover, the random distribution of 19q loss in half of the GBM with 1p loss was not resolved by the complementary assessment of the 19q status. As a consequence, numerous false positive cases, particularly GBM with concomitant 1p and 19q loss and poor survival, lowered the negative and positive predictive values of combined telomeric 1p/19q marker data (0.762). In contrast, when using the N2/N2N test, the GBM with poor survival could be excluded in 21 out of 21 cases, thus with a specificity of 100% and a sensitivity of 97% (35 of 36 cases). Consistently, the accuracy of the combined D1S2696-N2/N2N status in predicting survival (0.925) was similar to that of D1S2696 combined with histological classification (0.954, Table 1).

We found that GBM with interstitial deletions located in the 1p22-32 interval had the poorest prognosis (Figure 1). They may target one or more of the GBM suppressor genes linked with rapid progression located between 1p32 and 1p22 (reviewed in [31]). Among them are RAD54 [6] and CDKN2C/p18INK4c [14], both located on 1p32. However, TP73 [13] and CHD5 [15], located on 1p36, are not included in this set of deletions. In contrast, GBM with deletions in the 1p11-13 interval have a significantly better prognosis than GBM with interstitial or telomeric deletion patterns, and than GBM without 1p loss (Figure 1). Those tumors display genetic similarities to OD with 1p loss and may target a centromeric gene located on 1p, independently of 19q, that is linked with a distinct, prognostically better glioma pathway. A better patient prognosis for OD with 1p/19q loss relative to other OD is supported by the observation that, among OD, 1p/19q loss and TP53 mutations are mutually exclusive events, suggesting that OD with either genetic alteration follow distinct tumor developmental pathways [7]. Consistently, genetic profiling of primary OD revealed that both genetic alterations are part of two distinct molecular subgroups of OD [32]. In contrast, the interaction shown between CHD5 and P53 in mouse fibroblasts [15] strongly suggests that both proteins are part of the same cancer pathway.

In conclusion, we found the breakpoints of somatic deletions in most OD and in a subgroup of GBM converging at the NOTCH2 gene locus, which also harbors homozygous deletions in primary OD.
These findings raise the hypothesis of a role of NOTCH2 in brain tumor development. We further propose the combination of two NOTCH2 genetic markers to provide sharp diagnostic and prognostic accuracy in malignant gliomas.

Patients

Frozen tissue samples of primary gliomas obtained from the operating room and blood samples derived from the same patients were processed as previously described [33].

Nucleic acid extraction and somatic deletion mapping

Extraction of genomic DNA from biopsies and peripheral blood mononuclear cells and LOH analyses were performed as previously described [33]. The microsatellite markers used are described in [35].

Statistical analysis

Histological and molecular genetic parameters potentially associated with survival time were determined. Factor analysis (orthotran/varimax transformation method) was used to identify highly correlated continuous parameters and to define the factors to be subjected to the subsequent multivariate analysis of variance (MANOVA). MANOVA was used for direct multivariate comparison of the effects of the different histological and molecular genetic factors on survival time and to determine the significance levels of these correlations. ANOVA and post hoc tests were used for univariate comparison. MANOVA, ANOVA and Kaplan-Meier curves, including log-rank (Mantel-Cox) comparison and significance levels of non-parametric differences, were computed using JMP, version 6.0 (SAS Institute Inc., Cary, NC, USA). Receiver operating characteristic (ROC) analyses were carried out with ROC, version 1.1 (diagene inc., Reinach, Switzerland) and JMP, version 6.0 (SAS Institute). Sensitivity, specificity and predictive value calculations were computed. All other calculations were performed using SPSS 9.0 (SPSS Inc., Chicago, IL, USA). Results are presented as means (±SEM).

Figure S1. Kaplan-Meier cumulative survival curve of OD haplotype H10 compared to OD haplotype H1 and GBM haplotype H1. Found at: doi:10.1371/journal.pone.0000576.s001 (0.57 MB TIF)
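As an illustration of the kind of Kaplan-Meier/log-rank comparison described in the statistical analysis above, here is a minimal Python sketch using the lifelines package; the two haplotype groups and all survival values are placeholder data, not the study's.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Placeholder survival data (months) for two haplotype groups (NOT study data).
t_h10 = np.array([60, 72, 48, 90, 55])   # survival times, haplotype H10
e_h10 = np.array([1, 0, 1, 0, 1])        # 1 = death observed, 0 = censored
t_h1  = np.array([10, 14, 8, 22, 12])    # survival times, haplotype H1
e_h1  = np.array([1, 1, 1, 1, 0])

# Kaplan-Meier estimate for one group.
kmf = KaplanMeierFitter()
kmf.fit(t_h10, event_observed=e_h10, label="H10")
print(kmf.median_survival_time_)

# Log-rank (Mantel-Cox) comparison between the two groups.
result = logrank_test(t_h10, t_h1, event_observed_A=e_h10, event_observed_B=e_h1)
print(f"log-rank P = {result.p_value:.4f}")
```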
Selection of the ERP System with Regard to the Global 4th Industrial Revolution

Research background: The so-called Fourth Industrial Revolution, triggered by the massive development of information and communication technologies and leading to the new era of manufacturing and logistics known as Industry 4.0, is an important topic across the global economy. Besides their other effects, the technologies of Industry 4.0 have a significant impact on the IT landscape of organizations, including the central part of this landscape, Enterprise Resource Planning (ERP) systems. It is, therefore, important for organizations to take this fact into account when selecting a new ERP system. Purpose of the article: The aim of the presented research was to propose a set of criteria that an organization could use during the selection of a new ERP system in order to evaluate the readiness of every candidate for the challenges related to Industry 4.0. Methods: Development of the criteria set was based on analysis and evaluation of two main areas: first, the content of Industry 4.0 and its particular technologies, and second, the tasks that should be performed by a modern ERP system. Requirements arising from these two areas were then merged into one set of criteria. Findings & Value added: The result of the presented research is a comprehensive and easy-to-use set of criteria, which can be used as a decision-making support tool in business practice.

Introduction

The Enterprise Resource Planning system (hereinafter referred to as "ERP system") is the central software element within the information and communication technology landscape of a company; it permeates the whole company, and the basic company processes are managed and recorded in it. Generally, these involve at least processes in the fields of accounting and warehousing (for more details see [1]); nevertheless, in the case of comprehensive implementation, current ERP systems are able to easily cover virtually any other area, such as production planning and management, logistics, customer relationship management, etc. Even in cases where these other areas are not covered by the ERP system directly but are managed through dedicated single-purpose systems instead, e.g. manufacturing execution systems (MES) for production management, warehouse management systems (WMS) for warehouse management or product data management (PDM) solutions for design and technology needs, these subsystems are usually integrated with the ERP so that data can be mutually exchanged. Similarly, even if there is a reporting layer over the ERP system, e.g. in the form of a management information system (MIS), the source data for its operation are usually at least partially drawn from the ERP. Therefore, even in this case, the ERP system plays the role of a central node into which relevant data are concentrated and on the basis of which fundamental decisions are made.

This unique position of ERP in business processes also means that a suitable selection of a particular ERP system is one of the key conditions for the proper functioning of the entire IT infrastructure of an enterprise. The process of implementing an ERP system is also extremely demanding for a company, both in terms of costs and, above all, in terms of human resources.
The analysis and implementation of a new system in fact requires the cooperation of employees across the company, who must, in addition to their normal work, provide relevant information to the implementer, participate in training for the new system and possibly modify selected processes so that they comply with the business logic of the new system. The particular modelling of business processes in the implemented ERP system will then fundamentally affect the operation of the whole company throughout the period in which the system is used. That is why an improper system selection at the beginning may have significant negative effects on the company's operation. In extreme cases, it may even result in the company's bankruptcy. In this context, the specialist literature particularly highlights threats to small and medium-sized companies, which, unlike large corporations, do not have the adequate financial and human resources required to cope with such a situation; see for example [2], [3]. Nevertheless, in actual business practice there are even cases of big companies that have found themselves in a critical situation due to the flawed implementation of an ERP system.

Therefore, it is not surprising that a number of expert publications have focused on designing a suitable tool to support the selection of an optimal ERP system. The specific approaches of individual authors to this issue differ greatly. The proposed procedures range from relatively simple frameworks and decision-making models with an emphasis on practical applicability to mathematical studies of a rather academic nature. The first group includes studies based on the SMART method (Simple Multi-Attribute Rating Technique) ([4] or [3]), the PIRCS framework (Prepare, Identify, Rate, Compare, Select) [3], the Define-Evaluate-Select framework [5] and the SCAPE method (Selection Approach for ERP systems) [2]. A framework based on the AHP (Analytic Hierarchy Process) approach [6] is usually also included in this group by its authors, although it is a more complex tool. The second group mainly includes models based on the fuzzy approach presented, for example, in [7], [8] or [9].

Research Scope and Methods

Factors that should currently be taken into account when choosing an ERP system include the fact that the economy and society as a whole are undergoing significant technological changes, especially in the field of ICT, which are collectively referred to as the "fourth industrial revolution" or "Revolution 4.0" (see e.g. [10] or [11]). The expected result of this transformation is the emergence of so-called "smart factories", "Industry 4.0" or, in a broader sense, the whole "Society 4.0", where the virtual world will be interconnected with the real world through information and communication technologies. Within the selection process of an ERP system, it should be very carefully assessed to what extent the individual evaluated systems are prepared for these technological changes. However, such an assessment requires the evaluator to have sufficient knowledge both of the technologies related to Industry 4.0 (at least those that are relevant to the respective enterprise) and of information systems architecture.
In other words, for an evaluator to be able to assess an information system from this point of view, he/she must understand what requirements the fourth industrial revolution will place on ERP systems and, at the same time, must have sufficient technical knowledge to assess which of the offered systems can best meet these requirements. However, finding such an evaluator within the company may be difficult, particularly in the case of small and medium-sized enterprises, which often lack sufficient capacity in the form of IT professionals. The main goal of this study is to propose a user-friendly decision support tool in the form of a set of criteria, which allows for the assessment of an ERP system in terms of its readiness for Industry 4.0 even without the detailed knowledge discussed above.

The structure of the paper is as follows: in the first section, the meaning of the term "Industry 4.0" itself is analysed, and the requirements arising from it for information systems are identified at a general level. The second section puts these demands into a specific context with ERP systems and presents a set of criteria for assessing the readiness of ERP systems for Industry 4.0, which was proposed on the basis of this analysis. In the last section of the study, possibilities for further development of the proposed evaluation tool are discussed. In creating the evaluation tool, the method of analysis and evaluation of specialist literature related to Industry 4.0 was used, followed by synthesis and generalisation of its results with the current state of the art in the field of ERP systems.

Industry 4.0 and its requirements for the information system

One of the main problems in the analysis of the fourth industrial revolution, i.e. Industry 4.0, is the fact that the use of these terms is now ubiquitous, but their content has not been clearly defined (see e.g. [12]). However, if the characteristics of ERP systems required for their successful integration into the Industry 4.0 environment are to be identified, it is first necessary to clearly define the particular manifestations of this concept in business practice. In the specialist literature, either the Reference Architecture Model Industry 4.0 (abbreviated as RAMI 4.0) or a list of the technologies contained in it is usually used to define the content of the Industry 4.0 concept [13], [14]. The RAMI 4.0 model describes this concept in general terms; it is primarily a structured tool aimed at grasping the principles of the systems involved, so it is more appropriate to use a definition via a list of particular technologies. This approach better addresses the question of what particular changes are about to occur in business operations with the advent of Industry 4.0.

Industry 4.0 is mainly related to the use of nine technologies, which are referred to as its "9 pillars" [14]. These are:

1. Big data, i.e. big (and especially rapidly growing over time) amounts of diverse data that can be stored, analysed and evaluated.

2. Autonomous collaborative robots, i.e. machines that are able to work largely independently while cooperating effectively with humans at the same time.

3. Simulation, serving both for designing the ergonomics and layout of operating areas and for virtual development and prototyping.

4. System integration, in the sense of the interconnection of individual information systems within the company and across the entire supply and demand chain.
5. Internet of things (commonly abbreviated as IoT), consisting in connecting everyday items to a network/the Internet and their subsequent communication both with each other and with central systems.

6. Cyber-physical systems (abbreviated as CPS), i.e. an environment in which physical reality, represented by machines, materials, products and people, is closely connected with virtual reality in the form of a data representation of each physical object, control algorithms, artificial intelligence, etc.

7. Cloud technologies, where an increasing amount of the information infrastructure of companies is not physically operated by them but is moved to data centres instead.

8. Additive manufacturing, currently mostly based on 3D printing from plastic materials or metal powder, allowing for rapid prototyping or small-series production.

9. Augmented reality, allowing the real world to be connected with the virtual one in the user's field of vision by means of smart glasses or a mobile phone (e.g. projecting schemes or work procedures directly onto a respective object).

On the basis of the analysis presented in [15], we believe a tenth technology, which plays an important role in current business practice, particularly in the field of logistics and asset tracking in general, should be added to these 9 pillars. Automatic Identification and Data Collection (AIDC) is this tenth technology. Collectively, this term indicates technologies enabling the unambiguous identification of objects and the subsequent automated collection of data about them. Currently, various methods are used for these purposes; they operate either on the basis of marking the monitored object with a unique machine-readable identifier (e.g. a QR code or RFID chip) and subsequent automatic reading of this identifier, or on the principle of machine image or sound recognition. From a practical point of view, the fourth industrial revolution therefore mainly consists in the massive spread of the above-mentioned technologies, both in individual companies and in society as a whole.

Now, we can identify the particular demands that the routine use of these technologies will impose on information systems. We have based this definition on the publication by Leyh et al. [16], who define the following key requirements that an information system operated in an Industry 4.0 environment should meet:

A. It must be ready for horizontal system integration, i.e. interconnection with other information systems within a company. In the case of an ERP system, this mainly concerns data exchange with specialized systems such as MES.

B. It must be ready for vertical system integration, i.e. data exchange with the information systems of business partners (or other stakeholders, as the case may be) across the supply chain. Leyh et al. [16] stress that this should be a fully automated integration.

C. It must allow for digital continuity for every single product. The life cycle of a product must be completely captured by the respective information systems, and all particular product information must be available at all times to all interested parties.

D. It must be built on a service-oriented architecture (SOA). Thus, the information system in the Industry 4.0 environment should not be a monolithic structure; on the contrary, it should be possible to use its individual components separately, even from interconnected information systems.

E. It must be able to be operated in the cloud.
Therefore, the system should not be limited to the infrastructure of the particular company.

F. It must allow for information aggregation and processing, which means an ability to obtain data from various inputs (including other information systems in the company, using horizontal integration) and evaluate them effectively, e.g. through clustering, correlation analysis, etc.

G. It must comply with the principles of cyber security. Above all, the system must guarantee the protection of data and their availability only to authorised users.

Expanding requirements for information systems within the conditions of Industry 4.0

On the basis of the analysis of the particular Industry 4.0 technologies mentioned above, we consider it appropriate to add some further requirements:

H. It must allow for the fast and customised processing of big data volumes. Big data in the Industry 4.0 environment are generated both in the field of production, where individual production machines connected to the data network are able to send detailed information about the course of each individual production operation to a central repository, and in the field of production and business logistics, where, by means of the Internet of Things combined with AIDC technology, it is possible to monitor the precise position (or even the route) not only of each warehouseman, reach truck or vehicle, but in extreme cases also of each individual material or product. An information system prepared for Industry 4.0 should therefore be able to process these extreme volumes of data efficiently, whether for the purposes of productivity evaluation or predictive maintenance (in the case of data from production facilities). Effective processing of these data requires both the ability of the system to perform various analyses adapted to the particular needs of a company and a sufficient speed of execution of these analyses. This feature is essential for the full use of the benefits that big data, the Internet of Things, cyber-physical systems and AIDC technologies offer.

I. It must be able to collaborate with various platforms, including mobile ones. The use of the information system should not be strictly limited to one operating system.

K. It must allow for the utmost levels of modularity and customisation so that it can be adapted to the particular needs of each company, not only during implementation but also for the rest of its life cycle. One has to assume that, on the one hand, the development of the existing Industry 4.0 technologies will continue and, on the other hand, completely new technologies will be invented; in connection with this development, business processes will also develop and change. A system ready for Industry 4.0 therefore cannot be static and rigid; it must be able to keep up with these requirements instead.

Draft evaluation tool for assessment of the readiness of the ERP system for Industry 4.0

The 11 requirements defined above were subsequently analysed in relation to the current state of the art in the field of ERP systems, both in terms of their technologies and in terms of the processes that ERP systems currently have to support. In this analysis, we relied both on the specialist literature and on the experience of one of the authors of the article, who has long been active in the development and implementation of ERP systems and is therefore experienced in the technical issues that are usually addressed during ERP system selection and implementation.
The objective was to propose a set of criteria whose evaluation would not place high demands on the evaluator in terms of detailed knowledge of information systems architecture or the particular technologies of Industry 4.0. A general knowledge of information technologies should be sufficient for their assessment. The criteria are therefore clearly defined; in some cases they even contain examples of the products that are most widespread in business practice. Furthermore, the criteria deliberately do not contain complicated rating scales determining the degree of compliance with a given criterion, as the need for an arbitrary assessment of the degree of compliance would again increase the demands placed on an evaluator. Instead, the criteria are worded so that compliance with each of them can be evaluated with a "Yes"/"No" answer. This results in an easily evaluable checklist, the completion of which (by the evaluators themselves as well as by the suppliers of the individual candidate systems) should provide evaluators with an initial view of the readiness of individual ERP systems for Industry 4.0: the more criteria answered "Yes", the better. The created draft set of criteria, including the links to the particular requirements defined in the previous subchapters (given in square brackets), is shown in Table 1.

Table 1. Draft set of criteria for assessing the readiness of an ERP system for Industry 4.0:

[A, B, J] The system has an API layer allowing for communication by means of the REST architecture, allowing for full system control.
[A, B, J] The system has an API layer allowing for communication by means of the SOAP protocol, allowing for full system control.
[A, B, J] The system has an API layer allowing for communication by means of an integration database (or integration database objects), allowing for full system control.
[A, B, J] The system allows for the automatic export/import of data files in a standard format (e.g. csv) to/from a determined repository.
[A, B, J] The system supports standard EDI protocols relevant to the field of the enterprise's activity (e.g. VDA in the automotive environment).
[C] The system allows documents to be saved with each product, both directly and in the form of a repository link. At the same time, it is able to open these links or to start the program able to open them, if available.
[C] The system has the standard functionalities of a PDM/PLM system (registration of all product data, change management, registration of technological procedures, drawings, etc.) or, as the case may be, offers a native connection to a corresponding product.
[D, J] The architecture of the system is service-based, respecting the standard contract mechanism and using one of the standard data transfer formats (XML or JSON).
[E] All system functionalities can be controlled via the web client. If this is not possible and a local client is required, the system must run smoothly in remote desktop mode so that the client does not need to be installed on workstations.
[D, E, H] Individual services can be distributed to multiple servers, both in the sense of application servers (if used) and in the sense of machines (physical or virtual). Multiple instances of one service can be created for higher performance.
[F, H] The system has Business Intelligence tools or, as the case may be, supports a native connection to a specialised Business Intelligence tool.
[F, H, K] The system allows customised analytical procedures to be defined in the Business Intelligence tool.
[G] Logging in to the system allows for the use of single sign-on.
[G] The communication between individual services and between individual application levels is encrypted.
[G] The system allows access and rights control down to the level of individual events and columns in the database.
[G] The system allows for the retrospective audit of used events and monitored data for each user.
[G] The application layer accesses the database layer under a dedicated account, not directly through individual user accounts. Therefore, user accounts do not have direct rights to the database layer.
[H] The system supports in-memory processing for working with online data.
[I] The system has a web client executable in standard browsers (Chrome, Firefox, Opera, Safari, Edge). The client allows for complete system control.
[I] The system has a mobile client executable on both standard platforms (iOS and Android). The client allows for complete system control.
[I] The system has clients executable on industrial terminals, which perform the functions of a MES (records of production operations, material consumption, reporting of breaks, produced scrap and deviations, viewing of production documentation), or it offers a native connection to a product with these functions.
[I] The system has clients executable on mobile terminals, which perform the functions of a WMS (warehousing, picking from the warehouse and booking material or goods, viewing the warehouse map, routing within the warehouse), or it offers a native connection to a product with these functions.
[K] The system allows columns to be added to the database, both data columns and columns defined by a calculation.
[K] The system allows for the creation of new tables, including editors for working with these tables and links between these tables and the rest of the system.
[K] The system allows for the creation of new events in the system. The event functionality is defined by one of the standard scripting languages, e.g. JavaScript, Python or Visual Basic; it does not require the exclusive use of a proprietary language.

Legend of less common terms used in the table:
API (Application Programming Interface) - an interface enabling communication with the software from outside
REST (Representational State Transfer) - a widely used interface architecture
SOAP (Simple Object Access Protocol) - a data exchange protocol frequently used within an API interface
XML (eXtensible Markup Language) - a markup language used for the creation of structured files suitable for data exchange
JSON (JavaScript Object Notation) - a data notation used for the creation of structured files suitable for data exchange
OLAP (OnLine Analytical Processing) - a technology of data storage in a database suitable for working with big data volumes

Source: own elaboration

Discussion and conclusion

The previous chapter presented a simple set of criteria with which persons responsible for the selection of an information system can quickly assess the architectural and functional readiness of individual ERP systems for the challenges associated with Industry 4.0. It is therefore a decision support tool that can help evaluators, especially in the initial stages of selection, to exclude from further consideration ERP systems that are completely unsuitable for the Industry 4.0 environment and, on the other hand, to identify those systems that are leaders in this area. Systems selected in this way can then be subjected to a more detailed analysis, e.g.
on the basis of provided demo versions, reference visits, etc. Nevertheless, in the presented form, it is only an initial proposal of the tool, which should be developed further. The first possible direction of further development is the creation of a system of weights that would more accurately reflect the importance of the individual criteria. The system of weights is also related to the second intended direction of development, which is the adaptation of the proposed checklist to the needs of individual sectors, at least at the level of the weights of the individual criteria. However, a partial reformulation, or the addition or removal of selected criteria depending on their importance for a particular sector, is also possible. Both of these directions could be examined by means of a questionnaire or expert interviews with persons responsible for the selection of information systems and with the owners of the individual main processes in companies. These surveys will be the subject of the authors' further work.
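As an illustration of how such a Yes/No checklist, optionally extended with the weights discussed above, could be scored in practice, here is a minimal Python sketch; the criteria, weights and answers are invented placeholders, not the paper's Table 1.

```python
# Minimal sketch of scoring an Industry 4.0 readiness checklist.
# Criteria texts, weights and answers below are illustrative placeholders.
criteria = [
    ("REST API with full system control", 1.0),
    ("Web client executable in standard browsers", 1.0),
    ("In-memory processing for online data", 1.5),   # hypothetical weight
    ("Creation of new tables and events", 1.5),
]

def readiness_score(answers: list[bool], weighted: bool = True) -> float:
    # Share of "Yes" answers; with weights, the weighted share.
    weights = [w if weighted else 1.0 for _, w in criteria]
    achieved = sum(w for a, w in zip(answers, weights) if a)
    return achieved / sum(weights)

# Example evaluation of one candidate ERP system:
answers = [True, True, False, True]
print(f"readiness: {readiness_score(answers):.0%}")  # prints "readiness: 70%"
```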
Serum Mannose-Binding Lectin Concentration, but Not Genotype, Is Associated With Clostridium difficile Infection Recurrence: A Prospective Cohort Study

Low mannose-binding lectin concentration, but not genotype, was associated with disease recurrence in a large prospective cohort of patients with Clostridium difficile infection.

The initiation and propagation of inflammatory cascades is an essential housekeeping property of the innate immune response during infections. Mannose-binding lectin (MBL) activates the lectin-complement pathway of innate immunity through binding to repetitive sugar arrays on microbial surfaces [1]. MBL is also a potent regulator of inflammatory pathways: it can modulate phagocyte interaction with mucosal organisms at the site of infection [2], and it interacts with other components of the innate immune system such as Toll-like receptors [3].

Low MBL concentrations have been associated with increased susceptibility to infections in both animal models and humans [4,5], as well as with poor disease prognosis [1]. The modulation of disease severity is thought to occur partly through a complex, dose-dependent influence on cytokine production [6]. Serum MBL concentrations range from negligible to as high as 10 000 ng/mL [7-9]; this varies with ethnicity and with the screening method adopted [10].

MBL secretion in humans is dependent on the MBL2 genetic architecture [11,12]. To date, 57 genetic variants have been identified within the entire MBL2 gene (dbSNP, build 140, NCBI), with only 6 of them known to affect secretion and/or function of the encoded protein (Figure 1) [8,13]. The mutated alleles B, C, and D are collectively termed O, and their corresponding wild-type alleles are jointly referred to as variant A; the presence of any given O variant (in either the heterozygous or homozygous state) results in MBL deficiency [8,13]. The existence of strong linkage disequilibrium (LD) between the promoter and structural gene variants means that only 7 common haplotypes (out of a possible 64) that may affect serum concentrations have been described: HYPA, LYQA, LYPA, LXPA, HYPD, LYPB, and LYQC [14,15]. HYPD, LYPB, and LYQC lead to the production of unstable ligands with shorter half-lives that are easily degraded to lower oligomeric forms. Studies that have evaluated both genetic mutations and serum concentrations in white populations are summarized in Supplementary Table 1.

Clostridium difficile is an opportunistic spore-forming bacterium that can effectively colonize the intestinal tract following antibiotic-driven dysbiosis. Clostridium difficile infection (CDI) results from intense colonic inflammation caused by the release of potent enterotoxins. Research into host biomarkers for CDI has focused on mediators of inflammation in the gut, such as fecal interleukin 8 [16], lactoferrin [16], and calprotectin [17], and has linked them with disease severity [16,18]. More recently, both serum interleukin 23 and procalcitonin have also been proposed as potential biomarkers for CDI severity [19,20]. However, the role of these biomarkers in the stratification of problematic CDI patients remains unclear, and this remains an important area of research. Additionally, several clinical prediction rules have been proposed for the evaluation of CDI outcomes [21-23], but none has gained widespread clinical acceptance.

To date, there have been no studies on the role of either MBL levels or MBL2 genetic variants in CDI, possibly because MBL is not thought to bind to the surface of C. difficile [24].
However, there is growing evidence for an association between MBL and major modulators of inflammation, such as Toll-like receptors and C-reactive protein (CRP), both of which have been associated with CDI [25,26]. Therefore, we sought to investigate the role of MBL in a prospective cohort of CDI cases and inpatient controls.

Cohort

A cohort of 453 inpatients was consecutively recruited from a large hospital setting in Merseyside, United Kingdom. Patients were eligible for inclusion if they had healthcare-associated diarrhea (defined as ≥3 liquid stools passed in the 24 hours preceding assessment), an onset after being in hospital for >48 hours, and recent exposure to antimicrobials and/or proton pump inhibitors (PPIs). Using criteria previously described [27], 308 patients with CDI (cases) and 145 control patients with antibiotic-associated diarrhea (AAD) were classified based on a toxin enzyme-linked immunosorbent assay (ELISA) test (TOX A/B II, Techlab, Blacksburg, Virginia), microbiological culture, and a clinical diagnosis made by independent clinicians. Polymerase chain reaction (PCR) ribotyping and multiplex PCR were performed to determine strain types and the toxigenic nature of the isolates [28].

Blood and fecal specimens were collected from patients at study entry, of whom 98% were white. Relevant information on demographics, admission, and clinical history was collected for each patient. Ethical approval was obtained from the Liverpool Research Ethics Committee (reference number 08/H1005/32), and each patient provided written informed consent prior to recruitment.

Definition of Outcomes

Cases and controls were defined as described above. The severity of CDI symptoms was assessed at baseline by research nurses using the guidelines proposed by Public Health England [29], which we adjusted to incorporate a more stringent white blood cell count cutoff of >20 × 10⁹/L while also replacing acutely rising creatinine with an estimated glomerular filtration rate of <30 mL/min/1.73 m² at the time of diagnosis. Duration of symptoms was recorded from the date of onset and then dichotomized into episodes lasting ≥10 or <10 days. All-cause mortality was actively monitored for a period of 30 days from diagnosis, and recurrent CDI was defined as the development of subsequent CDI episodes up to 90 days after diagnosis of the initial episode. If a patient was discharged from hospital prior to final follow-up, we attempted in every case to obtain data from the hospital, the general practitioner, or the patient (the latter by telephone).
Determination of MBL Serum Concentrations

A commercially available in vitro diagnostic ELISA kit (Sanquin Blood Supply, Amsterdam, the Netherlands) was transferred onto the Meso Scale Discovery electrochemiluminescence (ECL)-based platform, undergoing appropriate optimization prior to use. The MBL kit control was used across all plates to determine inter-plate variability, and a subsequent correction factor was used for each plate. The final minimum detection level (lower limit of detection [LLOD]) and minimum quantification level (lower limit of quantification [LLOQ]) were calculated by taking the mean values across all plates. The mean LLOD and LLOQ across all plates were 11.3 and 11.0 ng/mL, respectively, with overall median values of 491.9 ng/mL among controls and 361.8 ng/mL in cases. Signal values ranged from only 50 to 500 ECL units, which denotes a compressed signal range inherent to the assay. Because this may have potentially limited discrimination of the quantitative values, the data were subjected to binary categorization based on 3 previously used deficiency cutoffs: 50, 100, and 500 ng/mL [30-32].

Determination of MBL2 Variants

A total of 9 variants lying in the promoter and exon 1 were typed (Figure 1) by either pyrosequencing (PyroMark Q96 custom assays, Qiagen; rs36014597, rs7084554, rs1800451, rs1800450, rs5030737, and rs10556764) or TaqMan SNP genotyping (Applied Biosystems; rs7096206, rs11003125, and rs11003123). The variants rs1800451 (C), rs1800450 (B), rs5030737 (D), rs7096206 (X/Y), and rs11003125 (H/L) were used for haplotype determination, and rs10556764, a 6-bp Ins/Del in complete LD with rs7095891 (P/Q), was used as a proxy. Another recognized tagging marker for P/Q (rs11003123) was independently typed to evaluate the accuracy of the pyrosequencing assays.

The effects of both MBL2 genetics (based on stratified expression genotypes) and serum MBL concentrations (based on the deficiency cutoffs) were individually taken forward for case-control comparison and subgroup analysis of cases. For the latter, this included logistic regression for the following outcome measures: (1) severity of disease, (2) duration of symptoms ≥10 days, (3) 90-day recurrence, and (4) 30-day mortality. Covariates including demographic variables, the presence of PCR ribotype 027/NAP1/BI, and potential confounders (immunosuppressive therapy, renal disease, and diabetes; score on the Charlson comorbidity index; and time delay between the sample testing positive and recruitment) were individually assessed. Severity of disease was assessed both as a CDI outcome and as a baseline predictor for the other outcome measures. Statistically significant covariates were added to the final regression model to produce adjusted P values, odds ratios (ORs), and 95% confidence intervals (CIs). All analyses were carried out using SPSS (version 20).

Power calculations were simulated using nQuery Advisor + nTerim (version 2.0). This showed that the power a posteriori was ≥99% for the majority of analyses. However, for analysis of 30-day mortality and disease severity at baseline, power was lower (67% and 75%, respectively; Supplementary Table 2).
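The adjusted odds ratios described above can be derived from a fitted logistic regression model; the sketch below shows this with the statsmodels package on invented placeholder data (the variable names and values are illustrative, not from the study).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder data: recurrence outcome, MBL deficiency flag, and one covariate.
df = pd.DataFrame({
    "recurrence":    [1, 0, 1, 1, 0, 0, 1, 0, 0, 1],
    "mbl_deficient": [1, 0, 1, 0, 1, 0, 0, 0, 1, 1],
    "age":           [81, 64, 77, 85, 59, 70, 74, 62, 88, 66],
})

X = sm.add_constant(df[["mbl_deficient", "age"]])
model = sm.Logit(df["recurrence"], X).fit(disp=0)

# Exponentiated coefficients give adjusted ORs with 95% CIs.
ors = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.DataFrame({"OR": ors, "CI_low": ci[0], "CI_high": ci[1]}))
```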
Patient Demographics

CDI cases and AAD controls were demographically comparable (Table 1). However, mortality at 1 year (35% vs 18%; P < .001) and duration of diarrhea symptoms (≥10 days: 60% vs 24%; P < .0001) were significantly greater among CDI cases. Regarding medication history, 9% (28/308) of CDI cases and 1% (2/145) of AAD controls had prior exposure to PPIs but not antibiotics within 90 days of the development of CDI, while 58% (180/308) and 54% (79/145), respectively, had been exposed to both an antibiotic and a PPI. Of the CDI cases, 41% (127/308) had severe disease and 38% (83/220) experienced recurrence within 90 days. Twenty-eight CDI cases who had not experienced any recurrence of symptoms but died within the 90-day follow-up period could not be included in our analysis of recurrence.

Relationship of Genotype With Serum MBL Concentrations

Of the 9 variants typed in the CDI cases and AAD controls, 3 were excluded: 1 SNP (rs7084554) deviated from Hardy-Weinberg equilibrium (P < .001); rs11003123 was deemed redundant due to complete LD with the Ins/Del polymorphism (rs10556764); and rs36014597 was also in complete LD with both rs10556764 and rs11003123. For the 6 polymorphisms analyzed, the genotyping success rate was ≥95%. Their minor allele frequencies were in line with those reported in the literature (Supplementary Table 3). For both groups, 7 common haplotypes were derived from the 6 polymorphisms (Supplementary Figure 1), consistent with previous studies in white populations (Table 2) [9,34]. Presence of the mutant allele for each individual MBL2 variant had a significant influence on serum MBL concentration across all patients, except for the X allele encoded by rs7096206 (P = .30; Supplementary Table 3). All the assembled MBL2 haplotypes also significantly affected serum concentrations, except for haplotype LXPA, where there was no difference compared with the overall median value (P = .34; Table 2). Genotypic and haplotypic analyses demonstrated that the presence of a variant allele for any of the 3 exonic variants (rs1800451, rs1800450, and rs5030737) was the major contributing factor to lower MBL concentrations (Table 2 and Supplementary Table 3).

Patients with high-expressing genotypes had a median serum MBL concentration of 714 ng/mL, compared with 190 ng/mL for intermediate-expressing genotypes and 32 ng/mL for low-expressing genotypes (P < .001; Table 3; Figure 2A). The contribution of the X allele, seemingly insignificant when evaluated on an individual basis (Supplementary Table 3), became apparent as a gradual decrease when compared with the equivalent genotypes containing the Y allele, in the rank order XA/YA < YA/YA, XA/XA < XA/YA, and XA/YO < YA/YO (Table 3; Figure 2B).

Comparison of MBL Levels Versus CDI Disease Outcomes

Serum MBL concentrations are shown in Supplementary Table 5. Analysis using both <50 ng/mL and <100 ng/mL as cutoff points to signify deficiency identified no significant differences between CDI cases and AAD controls (P = .79 and P = .09, respectively) (Table 4). Evaluation of the clinical outcomes in CDI cases showed a significant association with CDI recurrence (P < .01 for both; Table 4), with ORs of 3.18 and 2.61 at the <50 ng/mL and <100 ng/mL cutoff points, respectively.
DISCUSSION

Studies evaluating the role of MBL in infectious and immune diseases have focused on either genotype, phenotype, or occasionally on both parameters. The latter approach is preferred as it can show discordance between genotype and phenotype. This study is one of the larger disease-related studies concurrently investigating both genotypic/haplotypic variants and serum concentrations in a white population (Supplementary Table 1) and is the first to demonstrate an association between serum MBL concentrations, but not genotype, and recurrence of CDI within 90 days using two distinct cutoff values for MBL deficiency.

The mechanistic basis of the association is unclear. With other bacterial and viral infections, MBL is thought to be capable of binding to the cell surfaces of invasive pathogens, thereby stimulating a downstream immune response. However, this does not seem to be the case with C. difficile, where binding of MBL has been shown to be low [24]. This suggests that MBL deficiency does not per se predispose to CDI and is consistent with the observed lack of difference in circulating concentrations of MBL between CDI cases and AAD controls. MBL has other functions, including modulation of inflammation and clearance of apoptotic cells [35]. The former may be relevant to CDI, where MBL may be acting as a modulator of the disease. Consistent with this, clinical manifestations of MBL deficiency appear to be of more relevance either in infants, when the immune system is still maturing, or in susceptible groups when there is an associated immunodeficiency [36], such as in hospitalized elderly patients or following major clinical interventions. However, these are hypotheses that need further investigation.
Although MBL concentrations remain relatively constant in individuals due to genetic determinants, MBL is known to be a relatively modest acute phase reactant [37]. This is in sharp contrast to other acute phase proteins such as CRP, whose concentrations can increase sharply by 10- to 1000-fold during acute inflammation [38]. Elevated CRP concentrations have previously been shown to be associated with various CDI outcomes, including disease severity and recurrence [25,39]. Low MBL concentrations have been associated with an increase in the level of CRP [40], which is consistent with our findings of an association with CDI recurrence and an inverse correlation with CRP. In keeping with the immunomodulatory effect of MBL, it is known that low concentrations lead to increased secretion of the proinflammatory cytokines interleukin 6, interleukin 1β, and tumor necrosis factor α [40,41], all of which have also been shown to be elevated in response to CDI [42,43].

The genetic architecture of the MBL2 gene is complex (Figure 1), with the existence of numerous common functional polymorphisms and haplotypes (Figure 1; Tables 2 and 3; and Supplementary Table 3). MBL2 haplotype frequencies and the corresponding impact on serum MBL concentrations were in line with those previously reported [9,13] (Table 2). This was also evident after stratification of MBL haplotypes based on previously defined expression genotypes [32,33], with carriers of low-expressing genotypes showing much lower serum MBL concentrations than both intermediate- and high-expressing genotypes (32 ng/mL vs 190 ng/mL and 714 ng/mL, respectively; Table 3). Despite the strong association observed between MBL2 genotypes and serum MBL concentrations, and the association between MBL concentrations and CDI recurrence, there was no association between MBL genotype and CDI outcomes. Other studies have also identified associations with protein levels, but not with genotype (Supplementary Table 1), highlighting the need to evaluate both MBL genotype and phenotype in infection and other immune conditions. The lack of association between MBL genotype and disease outcome may be due to the incomplete penetrance of MBL2 genetic variation on phenotype. In this study, only 78% and 68% of carriers of low-expressing genotypes had deficient serum levels at the cutoff values of <50 ng/mL and <100 ng/mL, respectively (Supplementary Table 4). Genetic heterogeneity due to functionally related genes such as L-ficolin, MASP2, and surfactant proteins may also play a role, but this needs further investigation.
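The penetrance figures just quoted are simple conditional proportions; as an illustration only, the following sketch computes them from a hypothetical table of expression genotypes and serum values (the column names are invented, not the authors' data):

```python
# Minimal sketch: fraction of carriers of low-expressing MBL2 genotypes
# whose serum MBL falls below a deficiency cutoff. `df` is assumed to
# have hypothetical columns: expression_group ("high"/"intermediate"/"low")
# and mbl_ng_ml (serum MBL concentration in ng/mL).
import pandas as pd

def penetrance(df: pd.DataFrame, cutoff: float) -> float:
    low = df[df["expression_group"] == "low"]
    return (low["mbl_ng_ml"] < cutoff).mean()

# penetrance(df, 50) and penetrance(df, 100) would correspond to the
# 78%/68% figures discussed in the text (Supplementary Table 4).
```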
Our study sought to adhere to a stringent methodology through the use of a relatively large cohort size and extensive quality control, but it is not without its limitations. Although there is less chance of MBL concentrations being confounded by infection-related events compared with other response markers, one of the clear drawbacks of this work is the lack of longitudinal measurements, which is now being addressed in a new prospective study. The effect of proteins functionally related to MBL and other markers of inflammation, and the relative roles they play in disease modulation, need further investigation. Previous studies have used various definitions for MBL deficiency, with commonly used cutoffs ranging from 50 ng/mL [30] to 500 ng/mL [32]. It is thus difficult to compare results across different study groups given the heterogeneity of platforms, profile of cohorts, and standards adopted for the measurement of MBL. Discrepancies between studies could be due to low sample sizes, poor assay performance, and differences in techniques adopted by laboratories. We have tried to overcome some of these limitations by evaluating a number of cutoff levels, but there is a need for international consensus and harmonization in this area.

In conclusion, our data suggest that low serum MBL concentrations may act as a predictor of CDI recurrence. Further work is needed to validate these findings in an independent cohort of patients and to evaluate the mechanistic basis of this association. This area of research would also be advanced through consensus on definitions of deficiency, standardization of methods employed for measurement of serum concentrations, and further evaluation of the genotype-phenotype relationships.

Figure 1. Schematic representation of the major MBL2 isoform and genetic polymorphisms. Polymorphisms responsible for the haplotypes that ultimately determine mannose-binding lectin (MBL) expression levels are indicated by the red arrows. *In this study, rs10556764 (6-bp deletion) was used as a proxy single-nucleotide polymorphism for rs7095891.

Figure 2. Median serum mannose-binding lectin (MBL) concentrations in relation to 3-tier grouping based on proposed expression profiles (A) and individual genotypic groups within proposed expression profiles (B). Median serum MBL concentrations were determined across previously defined expression profiles: high (YA/YA and XA/YA), intermediate (XA/XA and YA/YO), and low (XA/YO and YO/YO). Median levels were also determined for the 6 individual genotypic groups across all expression profiles. Abbreviation: MBL, mannose-binding lectin.

Table 1. Demographics of Patients With Clostridium difficile Infection and Antibiotic-Associated Diarrhea.

Pyrosequencing: PCR optimization was conducted using 20 ng of genomic DNA and temperature gradients following standard guidelines. Optimized products were run on a PyroMark Q96 ID following the recommended assay protocol. Repeat samples and blanks were included for quality control purposes, and data were analyzed using PyroMark Q96 software (version 2.5.8).

Table 2. Mannose-Binding Lectin (MBL) Serum Concentrations Across MBL2 Haplotypes in Patients With Clostridium difficile Infection and Antibiotic-Associated Diarrhea. a P values were calculated using a Mann-Whitney test comparing mannose-binding lectin serum concentrations against the presence/absence of each individual haplotype.
Table 3. Median Serum Mannose-Binding Lectin Concentrations Across Previously Defined Expression Genotype Groups. a Expression groups defined according to Eisen et al [32].

Table 4. Analysis of Clostridium difficile Infection Disease Outcomes Versus Serum Mannose-Binding Lectin Concentration Based on Deficiency Cutoffs of 50 and 100 ng/mL. Data regarding duration of symptoms and disease recurrence were unavailable for 18 and 60 of our cases, respectively. For disease recurrence, an additional 28 patients had died within the follow-up period prior to experiencing any recurrent symptoms and therefore could not be included in the analysis. Serum mannose-binding lectin level was unavailable for an additional 3 individuals, who were therefore excluded from analysis across all outcomes. P values and ORs were calculated using univariate logistic regression and adjusted for the presence of significant covariates. Abbreviations: CI, confidence interval; OR, odds ratio. a Age, body mass index (BMI), time delay between testing positive and recruitment, and the presence of diabetes. b Age, BMI, time delay between testing positive and recruitment, and the presence of diabetes and immunosuppressive therapy. c Age, BMI, score on Charlson comorbidity index, and disease severity at baseline. d No covariates were found to be significant and therefore the P value remains unadjusted. e Age.
2017-04-13T14:43:09.224Z
2014-08-28T00:00:00.000
{ "year": 2014, "sha1": "36ed022835b9650f38b5ac8bab890f9f41221865", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/cid/article-pdf/59/10/1429/17352031/ciu666.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "981cba24f97704410add27183197b7b09dd30564", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221107841
pes2o/s2orc
v3-fos-license
Coronavirus Disease 2019 (COVID-19), Organ Transplantation, and the Nuances of Immunomodulation: Lessons Learned and What Comes Next

The coronavirus disease 2019 (COVID-19) pandemic has caused a seismic shift in transplant practices. Many centers have suspended live donor transplants, and most have enforced significant restrictions to their deceased donor programs [1]. Because of early observations suggesting a role of proinflammatory cytokines in the pathogenesis of severe COVID-19 [2], a prevailing thought has been that the anti-inflammatory effects of immunosuppressive medications may paradoxically diminish disease severity in solid organ transplant (SOT) recipients with COVID-19. There is precedent in the literature to support this line of speculation. For instance, kidney and liver transplant recipients with sepsis were shown to have lower mortality rates compared to nontransplant patients [3], which was thought to be partially due to a dampening of the destructive aspects of sepsis by immunosuppressive agents. Additionally, calcineurin inhibitors modulate the expression of opportunistic infections such as cryptococcosis, protecting against mortality in SOT recipients [4]. However, these optimistic notions were at odds with our knowledge that SOT recipients with respiratory viral infections such as influenza develop more severe complications than the general population. Additionally, transplant patients frequently suffer from the same comorbidities that have been associated with detrimental outcomes in COVID-19. Nonetheless, and despite early hints that mortality in SOT recipients may be high, the outcomes of SOT recipients with COVID-19 have remained ill defined.

In this issue of Clinical Infectious Diseases, Kates and colleagues describe the outcomes of 482 SOT recipients with COVID-19 across over 50 transplant centers. Although the majority of patients were kidney transplant recipients, this is nonetheless the largest study of SOT and COVID-19 to date and confirms the ominous findings from smaller cohorts: in short, SOT recipients with COVID-19 are at high risk for complications and death. The authors demonstrate that 78%, 34%, and 27% of SOT recipients with COVID-19 require hospitalization, intensive care, and mechanical ventilation, respectively. Additionally, the inpatient mortality rate was ~20%, which is similar to the pooled weighted mortality rate of ~19% (range 8-33%) reported in studies from the general population with a similar median age and prevalence of comorbidities. Indeed, over 90% of SOT recipients with COVID-19 had chronic medical conditions, and nearly a third were >65 years of age.
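For orientation, the "~19% pooled weighted mortality" comparator cited above is a sample-size-weighted average across general-population cohorts; a minimal sketch of that arithmetic, with invented cohort numbers, is:

```python
# Minimal sketch of a sample-size-weighted pooled mortality rate.
# The cohort sizes and rates below are invented for illustration only.
cohorts = [
    (350, 0.22),   # (n patients, in-hospital mortality rate)
    (1200, 0.18),
    (90, 0.08),
    (400, 0.33),
]

total_n = sum(n for n, _ in cohorts)
pooled = sum(n * rate for n, rate in cohorts) / total_n
print(f"pooled weighted mortality: {pooled:.1%}")  # each study weighted by its size
```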
The authors should be commended for appropriately navigating the epidemiological quagmire of case fatality rates in COVID-19, as comparisons with studies of younger and healthier patients (such as the Chinese Center for Disease Control study, which reported a mortality rate of 2.3% [5]) would not have been suitable. The only predictors of mortality in the current study were age (>65 years), heart failure, chronic lung disease, obesity, pneumonia, and lymphopenia. In contrast, the "net state of immunosuppression" had no impact on mortality, nor did time from transplant. Thus, although morbidity and mortality related to COVID-19 in SOT recipients are substantial, they appear to be driven by age and underlying medical conditions and unaffected by immunosuppression, corroborating the results of other studies in the general population. The study included only 30 lung transplant recipients and was therefore unable to assess whether mortality in these patients is greatest (as is the case with sepsis [3]) or whether COVID-19 precipitates acute or chronic lung allograft rejection. Furthermore, because all laboratory testing was done as standard of care, the study could not evaluate severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) viremia or the duration of SARS-CoV-2 polymerase chain reaction (PCR) positivity, which may be longer than that of nontransplant patients. Although prolonged viral shedding is common and does not imply infectivity [6], critically ill SOT recipients who are unable to control viral replication may potentially serve as a source of continued viral transmission. Finally, patients received various interventions for COVID-19, and no conclusions can be made about the efficacy of any of these therapies.

Several other key observations from this study warrant special mention. First, fever, which was defined as a temperature of >38.1°C, and which is erroneously perceived to be a universal finding in COVID-19, developed in only 55% of patients. Second, the manifestations of COVID-19 were protean and included a plethora of "atypical symptoms," with fatigue or gastrointestinal symptoms occurring in ~50% of patients. Clinicians must therefore have a low threshold for performing PCR testing for SARS-CoV-2 in SOT recipients and must not discount the possibility of COVID-19 in afebrile patients or those with "atypical symptoms." Third, most SARS-CoV-2 infections were community acquired, and no donor-derived infections occurred. The lack of donor-derived infections is a testament to the excellent guidance by the American Society of Transplantation (AST) Infectious Diseases Community of Practice regarding donor testing [7] and the rapid adoption of these guidelines by organ procurement organizations. Fourth, and perhaps most surprising, was the paucity of superinfections and fungal infections, although additional studies will be needed to define the incidence of "post-COVID-19 invasive aspergillosis."

Now that the characteristics and outcomes of SOT recipients with COVID-19 have been defined, the question is "what comes next?" How can we synthesize the deluge of COVID-19 knowledge into therapeutic interventions? The key lies in developing a nuanced understanding of the pathogenesis of COVID-19. However, this may prove to be a herculean challenge because of the heterogeneity of the host-pathogen interactions in COVID-19.
Indeed, a recent study of deep immune profiling of patients with COVID-19 revealed several "immunotypes" ranging from robust CD4 and/or CD8 responses to minimal lymphocyte responses to infection [8]. Furthermore, the mortality benefit of dexamethasone among patients with COVID-19 who require supplemental oxygen, particularly those who received the drug after their first week of illness [9], lends some credence to the notion that immunosuppressants may ameliorate outcomes after transplant, and suggests that immunopathology drives tissue damage in the later stages of illness. These observations can inform the design of trials of immunosuppression management in SOT. Kates et al showed that immunosuppression was modified in 70% of patients, and that the antimetabolite (mycophenolate or azathioprine) was stopped in 56% of patients. Although this mirrors clinical practice in other infections, it is rational to speculate whether mortality in SOT recipients with COVID-19 could have been attenuated by less aggressive immunosuppression modification, given what is now known about dexamethasone. Several issues unique to transplantation will require reevaluation as the pandemic evolves. First, programs need to be flexible in how they adapt to rising cases in their regions, relying on a tiered approach for adjusting transplant activity based on the burden of COVID-19 in the region and hospital. Moreover, centers that have remained unaffected by the pandemic may need to resort to temporary cessation of transplants if COVID-19 cases surge. Second, centers performing preoperative SARS-CoV-2 PCR screening for asymptomatic transplant candidates must publish their experiences. These studies should focus on the impact that canceling cases due to positive PCRs has on waitlist mortality, and on the optimal timing of reactivating a SARS-CoV-2-positive transplant candidate. Third, universal SARS-CoV-2 testing of deceased donors, which is currently recommended by the AST [7], may one day be relaxed as herd immunity develops and once SARS-CoV-2 becomes a seasonal circulating virus. Finally, although it is permissible to use organs other than lungs and intestines from donors with influenza, whether organs from a donor with COVID-19 can ever be transplanted warrants careful evaluation. Issues to consider include (a) the repercussions of SARS-CoV-2 viremia (which is uncommon and of unclear significance) [2] in the deceased donor and (b) whether the use of any organ is safe, given the dissemination of SARS-CoV-2 to kidneys and other organs in some patients [10]. Current guidelines recommend against the use of donors with a history of COVID-19 unless a negative SARS-CoV-2 PCR result is documented [7]. Because chronic nasopharyngeal shedding of ostensibly inert SARS-CoV-2 appears to be common [6], these recommendations may be modified in the coming years. Once an effective vaccine is developed and mass-administered, these paradigms and many others will need to be revisited. In the past decades, the transplant community has had to respond to SARS-CoV, pandemic influenza H1N1, Middle East respiratory syndrome coronavirus (MERS-CoV), Ebola virus, Zika virus, and others. Although COVID-19 is a pandemic of unparalleled proportions, transplant providers have learned to adjust to a new normal. 
Looking beyond the current crisis, transplant research efforts should focus on pathogenesis and virology, immunosuppression strategies, donor and recipient screening issues, and vaccine and drug trials in SOT. Finally, we must prioritize protecting SOT recipients from SARS-CoV-2 infection by enforcing social distancing, implementing universal masking, and utilizing telemedicine services to provide care.

Notes

Financial support. This work was supported by the National Institutes of Health through grant number KL2TR001856 awarded to G. H.

Potential conflicts of interest. The author reports no conflicts of interest. The author has submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. The manuscript has not been previously published nor is it being considered for publication elsewhere.
2020-08-13T10:09:17.962Z
2020-08-11T00:00:00.000
{ "year": 2020, "sha1": "2dcdc27007078d2a76f07af39cd521ac08bceabf", "oa_license": null, "oa_url": "https://academic.oup.com/cid/advance-article-pdf/doi/10.1093/cid/ciaa1193/33787276/ciaa1193.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "7adc2c5aa464662c9e13b3d1a299d8430a78d9ba", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
13718671
pes2o/s2orc
v3-fos-license
VCP (p97) Regulates NFκB Signaling Pathway, Which Is Important for Metastasis of Osteosarcoma Cell Line

In order to identify genes associated with metastasis, suppression subtractive hybridization (SSH) was performed using murine osteosarcoma cell line Dunn and its subline with higher metastatic potential, LM8. SSH revealed expression of the gene encoding valosin-containing protein (VCP; also known as p97) to be constitutively activated in LM8 cells, but it declined in Dunn cells when the cells became confluent. Because VCP is known to be involved in the ubiquitination process of inhibitor-κBα (IκBα), an inhibitor of nuclear factor-κB (NFκB), whether VCP influences NFκB activation or not was examined by using VCP-transfected Dunn cells (Dunn/VCPs). When stimulated with tumor necrosis factor-α (TNFα), Dunn/VCPs showed constantly activated NFκB, although in the original Dunn cells and control vector transfectant (Dunn/Dunn-c) NFκB activation ceased when the cells became confluent. Western immunoblot analysis showed an increase of phosphorylated IκBα (p-IκBα) in the cytoplasm of confluent Dunn/Dunn-c cells compared to that of Dunn/VCPs. Therefore, decrease of p-IκBα-degrading activity might be responsible for the decrease in NFκB activation. In vitro apoptosis assay demonstrated increased apoptosis rates of Dunn/Dunn-c cells after TNFα stimulation compared to those of Dunn/VCPs and LM8 cells. In vivo metastasis assay showed increased incidences of metastatic events in Dunn/VCP-1-inoculated male C3H mice compared to those in Dunn/Dunn-c-inoculated mice. These findings suggested that VCP expression plays an important role in the metastatic process. Anti-apoptotic potential in these cells owing to constant NFκB activation via efficient cytoplasmic p-IκBα-degrading activity may explain the increased metastatic potential of these cells.

Metastasis involves multiple processes, 1,2) and the pattern of metastasis is distinct in each cancer cell type. 1,2) Osteosarcoma is the most common malignant bone tumor with a high metastatic potential, mainly to the lung. 3,4) In order to understand the mechanisms involved in pulmonary metastasis of osteosarcoma, the LM8 subline of murine osteosarcoma cell line Dunn was established. 5) LM8 was obtained through 8 rounds of in vivo selection according to the procedure of Poste and Fidler, 6) and showed a high metastatic incidence to the lung after subcutaneous inoculation into the back space of mice. No pulmonary metastasis was found in mice inoculated with the original Dunn cells. 5) Suppression subtractive hybridization (SSH) is a PCR-based cDNA subtraction technique to construct differential gene expression libraries. 7) By SSH, we found the gene for valosin-containing protein (VCP; also known as p97) to be prominently expressed in LM8 cells.
VCP, a member of the ATPases associated with various cellular activities (AAA) superfamily, is implicated in a large number of biological functions, such as fusion of the endoplasmic reticulum 8) and the reassembly of Golgi cisternae. 9) Furthermore, VCP physically associates with IκBα complexes both in vivo and in vitro, and is co-purified with the mammalian 26S proteasome, and thus might be involved in the ubiquitin-dependent proteasome degradation pathway of IκBα. 10) IκBα belongs to the IκB family (IκBs) of inhibitors of the activation of a transcription factor, nuclear factor-κB (NFκB). 11) In unstimulated cells, NFκB is localized in the cytoplasm in a complex with IκBs, which mask its nuclear-localization signal (NLS) and prevent its translocation to the nucleus. 12,13) Upon stimulation, the entire NFκB complex becomes hyperphosphorylated. Phosphorylation of IκBα signals for subsequent ubiquitination and degradation, allowing the inhibitor-free NFκB complex to translocate to the nucleus. 14-16) Further, NFκB activation is inhibited by expression of a dominant-inhibitor IκBα mutant. 17) These findings prompted us to postulate that the NFκB signaling pattern might be altered by an increase of the cytoplasmic VCP concentration.

NFκB is a transcription factor which acts as a protective factor against apoptosis, as well as a mediator of immune and inflammatory responses. 18) Several studies have shown that NFκB is both necessary and sufficient to prevent apoptosis induced by tumor necrosis factor-α (TNFα), radiation, and chemotherapeutic agents. 19,20) Furthermore, constitutive activation of NFκB is observed in Hodgkin's disease 21) and breast cancer, 22) which suggests that NFκB may contribute to the survival of the tumor cells. In the present study, a higher pulmonary metastatic incidence of VCP-transfected cells than that of the original Dunn cells was shown by in vivo metastasis assay. Because NFκB signaling is increased in VCP-transfected Dunn osteosarcoma cells, our results suggest that VCP might be involved in the metastatic potential of cancer cells by influencing the anti-apoptotic NFκB signaling pathway.

MATERIALS AND METHODS

Animals
Male inbred C3H mice aged 5 weeks were purchased from Japan SLC (Shizuoka) for in vivo pulmonary metastasis assay.

Cell lines and cell culture
Cloned murine osteosarcoma cell lines, Dunn and LM8, and human osteosarcoma cell lines, HOS, MG-63, and Saos-2, were cultured in Dulbecco's modified Eagle's medium (DMEM) (Sigma, St. Louis, MO) supplemented with 10% fetal bovine serum (FBS) (Sigma) in an air incubator with 5% CO2 at 37°C.

SSH
Messenger RNA (mRNA) was isolated from confluently growing Dunn and LM8 cells by using Oligotex-dT30 (TaKaRa, Kusatsu). Subtractive hybridization was performed by a PCR-based method using the PCR-Select cDNA Subtraction Kit (Clontech, Palo Alto, CA) according to the manufacturer's protocol. Both cell lines were used alternatively as testers and drivers to produce libraries of candidate genes that were selectively expressed in Dunn or LM8 cells, respectively. Each clone obtained by SSH was confirmed to show modulated expression by dot-blot analysis and/or northern blot analysis.

Isolation of VCP and plasmid construction
The full-length mouse VCP cDNA was prepared with RT-PCR using 1 µg of mRNA from Dunn cells and a set of primers (VCP forward primer: 5′-ACTGGATCCATGGCCTCTGGAGCCGATTCAAAAGG-3′ and VCP reverse primer: 5′-CTGTTCAGACTGAGGAATGGAGCAGGCC-3′).
The PCR product was further cloned into the expression vector pIRESneo (Clontech) by using the GATEWAY cloning system (Invitrogen, Carlsbad, CA). The DNA sequence of the plasmid was confirmed by using an ABI dye terminator sequencing kit (Perkin Elmer, Foster City, CA).

Northern blot analysis
Total RNA of Dunn and LM8 cells was prepared from cells at approximately 70% confluency (sub-confluent growth conditions) and 100% confluency with Trizol (Invitrogen). Ten micrograms of each sample was separated by 1% agarose-formaldehyde gel electrophoresis, transferred to a Hybond-N+ nylon membrane (Amersham Pharmacia Biotech, Little Chalfont, UK) in 10× SSC, and immobilized by UV cross-linking. The hybridization probe prepared from a gel-purified PCR fragment was denatured and random-labeled using the large Klenow fragment of DNA polymerase and [α-32P]dCTP. The blot was hybridized in a solution containing 20 mM Pipes, 800 mM NaCl, 50 mM sodium phosphate, 5% sodium dodecyl sulfate (SDS), 50% deionized formamide, and 100 µg/ml of heat-denatured salmon sperm DNA at 65°C for 24 h. After the hybridization, the blot was washed three times in 1× SSC, 5% SDS at 50-60°C. The washed membrane was autoradiographed at −80°C overnight.

RT-PCR analysis of VCP
Ten micrograms of DNase I-treated total RNA was used for reverse transcription (RT) with Superscript II (Invitrogen). An aliquot representing 25 ng of input RNA was amplified by PCR with AmpliTaq Gold DNA polymerase (Perkin Elmer). For murine VCP amplification, 1 cycle of 95°C for 10 min then 35 cycles of 95°C for 15 s, 60°C for 30 s, and 72°C for 2 min were performed with the VCP forward and reverse primers. For human VCP amplification, we used 1 cycle of 95°C for 10 min followed by 24 cycles of 95°C for 15 s, 60°C for 30 s, and 72°C for 1 min with primers 5′-TGGCAGATGATGTGGACCTGGAACA-3′ and 5′-CAGCTTGGCGGGCCTTGTCAAAGAT-3′. For human and murine G3PDH amplification, the conditions were 1 cycle of 95°C for 10 min followed by 24 cycles of 95°C for 15 s, 60°C for 30 s, and 72°C for 1 min with primers 5′-ACCACAGTCCATGCCATCAC-3′ and 5′-TCCACCACCCTGTTGCTGTA-3′. Amplified products were electrophoresed through 1% NuSieve 3:1 agarose gel (BioWhittaker Molecular Applications, Rockland, ME), stained with ethidium bromide, and photographed.

Antibodies
The monoclonal antibody to VCP (p97) was purchased from Progen Biotechnik (Heidelberg, Germany), polyclonal antibodies to NFκB (sc-109 and sc-109x) and a monoclonal antibody to phosphorylated IκBα (p-IκBα) (sc-8404) from Santa Cruz Biotech (Santa Cruz, CA), and a polyclonal antibody to actin (A2066) from Sigma. Anti-p97, sc-8404, and A2066 were used for western immunoblotting, sc-109 for immunofluorescence staining, and sc-109x for electrophoretic mobility-shift assays (EMSAs). Anti-mouse IgG antibody or anti-rabbit IgG antibody linked with horseradish peroxidase purchased from Cell Signaling Tech. (Beverly, MA), and fluorescein-labeled anti-rabbit IgG antibody from Vector Laboratories (Burlingame, CA), were used as secondary antibodies.

Western immunoblot analysis
Total cellular proteins were dissolved in a buffer containing 10% glycerol, 1 mM phenylmethanesulfonyl fluoride (PMSF), 1% Triton X-100, and 40 mM HEPES, pH 7.4. The protein concentration was determined by the Bradford assay (Bio-Rad Laboratories, Hercules, CA). An aliquot of 20 µg from each sample was used for the western immunoblot analysis.
The extracts were boiled for 10 min, then discontinuous SDS-polyacrylamide gel electrophoresis was performed according to standard protocols. 23) The separated proteins were electrophoretically transferred to a polyvinylidene difluoride (PVDF) membrane (Bio-Rad). After blockage of nonspecific binding sites with 5% nonfat milk in PBST (phosphate-buffered saline, 0.05% Tween 20), blots were incubated with anti-p97, washed, incubated again with the secondary antibody, and washed again, then the antibody binding was visualized using Western Blot Chemiluminescence Reagent Plus (NEN Life Science Products, Boston, MA).

Nuclear and cytoplasmic protein extraction from TNFα-induced cells
Nuclear and cytoplasmic extracts were prepared as described previously. 24) In brief, the cells cultured either sub-confluently or confluently as described above were trypsinized and re-suspended in DMEM with 0.5% FBS at a concentration of 1×10⁷/ml, treated with TNFα (Sigma) for 30 min at a concentration of 5 ng/ml, washed with PBS, then resuspended in 400 µl of buffer A (10 mM Hepes pH 7.8, 10 mM KCl, 0.1 mM EDTA, 0.1% Triton X-100, 1 mM dithiothreitol (DTT), 0.5 mM PMSF, 2 µg/ml aprotinin, 2 µg/ml pepstatin, 2 µg/ml leupeptin), and allowed to swell on ice for 20 min. After centrifugation for 5 min at 6000g, the supernatant was collected and adjusted to 100 mM KCl, 20% glycerol, then used as a sample of cytoplasmic extract in the western immunoblot analysis for p-IκBα. The nuclear pellet was resuspended in 100 µl of buffer C (50 mM Hepes pH 7.8, 420 mM KCl, 0.1 mM EDTA, 5 mM MgCl2, 2% glycerol, 1 mM DTT, 0.5 mM PMSF, 2 µg/ml aprotinin, 2 µg/ml pepstatin, 2 µg/ml leupeptin), incubated on ice for 30 min, and centrifuged for 15 min at 15 000g, then the supernatant was subjected to the following EMSAs analysis.

EMSAs
A probe was generated by using a pair of complementary oligonucleotides containing a specific binding site for the NFκB transcription factor: 5′-AGCTTGGGGACTTTCCACTAGTACG-3′ and 5′-AATTCGTACTAGTGGAAAGTCCCCA-3′. 25) The oligonucleotides were boiled for 5 min, allowed to anneal by cooling gradually on the benchtop, then end-labeled using the large Klenow fragment of DNA polymerase and [α-32P]dCTP. EMSAs were performed as described previously 26) with 2000 dpm of labeled oligonucleotide and 10 µg of nuclear extracts. The specificity of the binding was tested by competition analysis with an additional 100-fold molar excess of cold probe, leaving no shifted band on the gel, and super-shift analysis using anti-p65 to super-shift the protein-DNA complexes.

Immunofluorescence microscopy
The cells were cultured confluently on an 8-chamber slide, treated with TNFα for 30 min at a concentration of 5 ng/ml, washed with PBS, then fixed in methanol at −20°C for 10 min. After fixation, the slides were washed in PBS, pre-incubated in 5% normal goat serum for 1 h, then incubated with anti-p65. After washing with PBS three times, the slides were incubated with the fluorescein-labeled secondary antibody.

In vivo metastasis assay
C3H male mice aged 5 weeks were used to estimate the in vivo metastatic potential to the lung. Tumor cells (1×10⁷) were suspended in 0.2 ml of DMEM and inoculated subcutaneously into the back space of mice on day 0.
Lungs were removed 4 weeks later to evaluate metastatic tumor nodules macroscopically using a magnifying glass and then routinely processed for histological examination; 5 µm sections of 15% formalin-fixed, paraffin-embedded lung specimens were cut stepwise, stained with hematoxylin-eosin, and evaluated microscopically to confirm the presence of pulmonary metastasis.

Statistics
The significance of the differences between the experimental groups was calculated by using the χ2 test or Mann-Whitney's U test.

RESULTS

VCP identification as a gene over-expressed in LM8
Six hundred and forty clones were isolated by SSH. Dot blot hybridization revealed that 92 of these were differentially expressed between Dunn and LM8 cells. After sequencing analysis, 23 clones out of the 92 were chosen as candidates that might be involved in the metastatic event. Finally, VCP was selected as a gene to be investigated for functional association with metastatic activity. Differential expression of VCP between Dunn and LM8 cells was further confirmed by northern blot analysis: VCP expression in Dunn cells showed a steep decrease as the cells became confluent and proliferation-arrested. On the contrary, VCP expression in LM8 cells was maintained even after the cells became confluent. Therefore, the difference in VCP expression was mostly observed when the cells became confluent and ceased proliferation.

Construction of a Dunn subline constitutively active for VCP
Because VCP is involved in various cellular activities including fundamental functions for cell survival, 8-10) and the mutant form of VCP induces apoptosis in a dominant-negative manner, 27) we decided to evaluate the relationship between VCP expression pattern and metastasis by introducing sublines of Dunn cells constitutively active for VCP and comparing them with the original Dunn cells. Two lines were established from Dunn cells stably transfected with pIRESneo-VCP (Dunn/VCP-1 and 2). As a control, one line transfected with pIRESneo (Dunn-c) was also established. Constitutively active expression of VCP in Dunn/VCPs was confirmed by RT-PCR and western blot analyses (Fig. 2).

VCP expression regulates NFκB activation
To assess the influence of VCP expression on NFκB signaling, EMSAs of NFκB were performed (Fig. 3). Transient NFκB activation by TNFα was markedly reduced in confluent Dunn/Dunn-c cells compared to that in the sub-confluent populations. Nevertheless, NFκB activation was maintained in Dunn/VCPs and LM8 even in the confluent condition. These findings, together with the result of western blot analysis showing an increase of cytoplasmic p-IκBα in confluent Dunn/Dunn-c cells (Fig. 4), indicate that NFκB activation in confluent cells is impaired by the excess amount of p-IκBα in the cytoplasm. The decrease of VCP expression in these cells (Fig. 4) is suggestive of a decrease of p-IκBα-degrading activity. The difference in NFκB activation was confirmed by fluorescent immunohistochemical analysis, which showed nuclear localization of NFκB in Dunn/VCP-1 cells, but not in Dunn/Dunn-c cells, in the confluent condition (Fig. 5). Association of VCP with NFκB activation was further analyzed by RT-PCR analysis of c-IAP1, a gene which has an anti-apoptotic role and is known to be induced by NFκB. Dunn/Dunn-c cells showed reduced expression of c-IAP1 when the cells were in the confluent condition, whereas Dunn/VCPs cells showed a constant expression level of c-IAP1 even in the confluent condition (Fig. 6).
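As a concrete illustration of the group comparison described under Statistics, the sketch below applies the χ2 test (alongside Fisher's exact test, which is often preferred with only five animals per group) to the pulmonary metastasis incidences reported for Table I in the next section; this is an illustration, not the authors' analysis code.

```python
# Minimal sketch of the incidence comparison described under Statistics.
# Counts are the pulmonary metastasis incidences reported in Table I
# (Dunn/VCP-1: 5/5 mice with metastasis; Dunn-c: 0/5 mice).
from scipy.stats import chi2_contingency, fisher_exact

table = [[5, 0],   # Dunn/VCP-1: metastasis / no metastasis
         [0, 5]]   # Dunn-c:     metastasis / no metastasis

chi2, p_chi2, dof, expected = chi2_contingency(table, correction=False)
_, p_fisher = fisher_exact(table)

print(f"chi-square P = {p_chi2:.4f}")      # the paper reports P < 0.01
print(f"Fisher exact P = {p_fisher:.4f}")  # exact test is safer at n = 5 per group
```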
VCP is anti-apoptotic against TNFα stimulation
The function of VCP against apoptosis was investigated by in vitro apoptosis assay. Dunn/Dunn-c cells showed increased apoptosis rates compared to those of Dunn/VCPs and LM8 cells (P<0.01) (Fig. 7).

Difference of metastatic potential between Dunn/Dunn-c and Dunn/VCPs cells by in vivo metastasis assay
In vivo metastasis assay showed increased incidences of metastatic events in Dunn/VCP-1-inoculated mice compared to those in Dunn/Dunn-c-inoculated mice (Table I). Metastatic events were observed in only one of the five mice inoculated with Dunn cells, and none of the five with Dunn-c, whereas metastasis was seen in all of the five mice inoculated with Dunn/VCP-1 (P<0.01). However, there were no differences in the size of the main tumors. These results indicate that Dunn/VCPs have greater metastatic potential in vivo compared to the original Dunn cells.

VCP expression in human osteosarcoma cell lines
VCP expression was analyzed by using three human osteosarcoma cell lines: HOS, Saos-2, and MG-63. HOS and Saos-2 cells showed similar VCP expression even in the confluent condition, while confluent MG-63 cells showed a reduced VCP expression level compared to that of sub-confluent cells (Fig. 8).

DISCUSSION

By using SSH, we found that VCP gene expression was maintained in LM8 cells in the confluent condition, as well as in the sub-confluent condition. A stable VCP transfectant of the Dunn cells (Dunn/VCP-1) showed increased metastatic incidence compared to the original Dunn cells in in vivo metastasis assay. We also observed constant NFκB signal activation in Dunn/VCPs, which has an anti-apoptotic influence. 18) Apoptosis is a key event in several steps during metastasis. 28) The steps involved in metastasis include 1) neo-vascularization at the primary site, 2) local invasion and intravasation, 3) transport and arrest at the target organ, 4) extravasation and migration, and 5) outgrowth at the metastatic sites. 6) The role of apoptosis has been discussed in the first step, in which reduced oxygenation and overt necrosis are commonly observed. 6) In addition, during the process of circulation and arrest at the secondary organ, massive loss of tumor cells has been demonstrated. 6) Experimentally, less than 0.1% of cells injected into the circulation successfully form detectable lesions. 29) Circulating cells are able to arrest in a wide variety of organs, but metastasis occurs only in a limited number of organs. 30,31) Most of the injected cells are capable of arrest and extravasation, but a major loss of metastatic cells occurs at the time of initial replication. 32) Cells of high and low metastatic potential, together with non-malignant cells, can similarly extravasate. 33,34) Instead, the survival and growth rates of the cells after the migration step differ according to the malignancy of the cells. 33,34) Furthermore, molecular analysis revealed that dissemination of tumor cells from the primary site is clinically a frequent event. 35,36)

NFκB activation is required to protect cells from the apoptotic cascade induced by TNF and other stimuli. 19,20) NFκB induces anti-apoptotic genes such as TNF receptor-associated factors (TRAFs), IAPs, and the Bcl-2 homolog A1/Bfl-1. 37-39)
In addition to the apoptosis-suppressing function, NFκB has been shown to regulate many genes involved in oncogenesis and metastasis: cell growth-promoting genes such as cyclin D1, cell adhesion molecules such as ICAM-1, cell surface proteases such as MMP-9, and extracellular matrix proteins such as tenascin-C. 39) NFκB signaling starts with phosphorylation of IκBs, and subsequent ubiquitination of IκBs enables the freed NFκB to translocate into the nucleus, where it promotes expression of the target genes. 18) In the present study, a reduced NFκB signal, together with increased p-IκBα protein in the cytoplasm, was observed upon TNFα stimulation of confluent Dunn/Dunn-c cells compared to those in sub-confluent cells, which indicates that NFκB signaling was disturbed by p-IκBα. Reduced expression of VCP was suggested to be associated with the impairment of the degradation process of p-IκBα. On the contrary, in Dunn/VCPs cells constitutively active for VCP, no difference in NFκB activation or cytoplasmic p-IκBα protein level was observed between confluent and sub-confluent cells. Dai et al. showed that the level of VCP correlates with the proteolytic activity of IκBα by in vitro assay. 10) Stable transfectants of mutant IκBα show reduced NFκB activation in a dose-dependent manner. 21) Taken together with these previous observations, our results indicate that VCP modulates NFκB activation by influencing the degradation process of cytoplasmic p-IκBα. The NFκB activation and anti-apoptotic function of VCP were further confirmed by c-IAP1 RT-PCR analysis and apoptosis assay following TNFα stimulation.

In the present study, confluency-dependent down-regulation of VCP in Dunn cells was observed. Confluency-dependent proliferation arrest of cultured cells, i.e., contact inhibition, is a widely accepted concept, although little is known about the underlying molecular mechanism. 40) Based on mRNA in situ hybridization and immunohistochemical analysis, Muller et al. suggested distinct cell-to-cell heterogeneity and tissue-specific patterns of VCP expression. 41) The nucleotide sequence of the 5′-flanking region of VCP contains consensus binding sites for several transcriptional activators, suggesting complex regulation of VCP expression. 40) Enhanced expression of VCP in a metastatic variant of a murine melanoma cell line has been reported. 42) These findings suggest that VCP expression is involved in the metastatic potential of many tumor cell types. In the human osteosarcoma cell lines analyzed, two types of VCP expression pattern were observed, i.e., constant VCP expression in the sub-confluent or confluent condition, or decreased VCP expression in the confluent condition.

In conclusion, the findings of constant VCP expression in LM8 cells with higher metastatic potential and an increased metastatic potential in the constitutively active VCP transfectant of Dunn cells suggested that VCP expression plays an important role in the metastatic process. Anti-apoptotic potential in these cells owing to constant NFκB activation via efficient cytoplasmic p-IκBα-degrading activity may regulate the increased metastatic potential of these cells.

ACKNOWLEDGMENTS

This work was supported in part by grants from the Japan Society for the Promotion of Science (00902), Japan. The authors thank Mr. Y. Kabutomori for technical assistance.
2018-04-03T06:22:11.555Z
2002-03-01T00:00:00.000
{ "year": 2002, "sha1": "468dbcbb3726a7c81d6d3a6ed34cff39393a024e", "oa_license": null, "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.1349-7006.2002.tb02172.x", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "468dbcbb3726a7c81d6d3a6ed34cff39393a024e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
234006079
pes2o/s2orc
v3-fos-license
Smartphone addiction and associated factors among postgraduate students in an Arabic sample: a cross-sectional study

Background: Smartphone addiction, as with other behavioral addictions, is associated with social, physical, and mental health issues. In this article, we investigated the prevalence of smartphone addiction among postgraduate students and evaluated its correlation with social demographics, depression, attention-deficit/hyperactivity disorder (ADHD), and nicotine dependence.

Objectives: The objective of this study was to investigate the prevalence of smartphone addiction among Middle Eastern postgraduate students, determine the factors associated with smartphone addiction, and estimate the incidence rate of major depressive disorder (MDD), ADHD, insomnia, and nicotine addiction among postgraduate students with smartphone addiction.

Methods: As part of a cross-sectional online survey, participants were given a self-questionnaire divided into six sections: sociodemographics, the Smartphone Addiction Scale (SAS), the Patient Health Questionnaire (PHQ9) for depression, the Athens Insomnia Scale (AIS), the Fagerström Test for Cigarette Dependence questionnaire (FTCd), and the adult ADHD Self-Report Scale (ASRS-v1.1).

Results: Of the 506 participants, 51.0% demonstrated smartphone addiction. A significant association was also observed between extensive smartphone use and MDD (P = 0.001). Of the smokers in this study, 41.5% were addicted to smartphones (P = 0.039). Smartphone addicts had approximately two times the chance of having insomnia (OR = 2.113) (P = 0.013). In addition, they showed more ADHD symptoms (OR = 2.712) (P < 0.001).

Conclusions: We found a positive association among insomnia, depression, adult ADHD, and smartphone addiction, which confirms the findings reported in previous studies. Therefore, we encourage the scientific community to further study the impacts of smartphone addiction on the mental health of postgraduate students.

Background

Smartphones are handheld mobile devices with many convenient features and software applications (email, social media, web browser, etc.), which are operated via an Internet connection. The first smartphone was produced in 1992, but the term "smartphone" was designated in 1995, when smartphone functions evolved to include more than communications. Currently, smartphones provide entertainment, social media, health monitoring, productivity, utility functions (e.g., day planners), text talk, photo editing, and many more features in one handheld device. With this wide array of functionalities built into smartphones, researchers have observed an increasing number of smartphone users. In 2017, Google announced that they had reached 2 billion active users; in 2019, this number reached 2.5 billion [1]. Additionally, in 2019, Apple announced 900 million active users [2]. Thus, in 2019, Google and Apple collectively accounted for 3.4 billion smartphone users. These numbers do not include people who are not using Apple or Google products.

According to the American Psychiatric Association (APA), addiction is "a complex condition, a brain disease that is manifested by compulsive substance use despite harmful consequences" [3]. Regardless of whether addiction is substance or behavior related, there are five elements of addiction [4]. The first element is feeling different; it includes feelings of discomfort, loneliness, restlessness, or incompleteness [5].
The second element of addiction is preoccupation with the behavior: excessive thoughts about and desire to perform the behavior; excessive time spent planning and engaging in the behavior, including recovering from its effects; and less time spent on other activities [6], despite potentially diminishing appetitive effects [7,8]. Temporary satiation is the third element of addiction; after acute engagement in the addictive behavior, some period of time may occur in which urges are not operative and the addiction craving is "shut down," but it soon returns [9-11]. The fourth element is loss of control, wherein many people struggling with addiction report feeling compelled to engage in the addictive behavior, which is associated with an experience of loss of control and, in some cases, neglect of essential self-care, suggesting a loss of will [12]. The final element is negative consequences, which involves ongoing engagement in the addictive behavior despite suffering numerous negative consequences. This last component of addiction has often been used as a criterion for identifying dependence on the addictive behavior [13].

"Smartphone addiction" is a form of technological addiction. Generally, it is similar to Internet addiction. Smartphone addiction consists of four main components: compulsive behaviors, tolerance, withdrawal, and functional impairment [14]. In a study of 2367 university students in Riyadh, 27.2% of participants stated that they spent more than 8 h per day using their smartphones [15]. In another study conducted on 688 Lebanese university students, 49% reported excessive smartphone use (≥5 h/weekday) [16].

Major depressive disorder (MDD) is a mental illness characterized by a debilitating loss of interest or enthusiasm for activities that the individual once enjoyed, together with disturbances in cognitive function, emotional regulation, affect processing, reward function, and circadian rhythms. MDD can manifest in a wide variety of symptoms, including lack of appetite, fatigue, trouble sleeping (e.g., insomnia), feelings of guilt, and thoughts of suicide. Depending on its severity, MDD can be associated with a degree of cognitive dysfunction that influences the ability to perform everyday home and work activities, causing various physical and emotional issues [3,17,18].

MDD seems to be closely associated with addiction and substance abuse. Two epidemiological studies, in 1990 and 1994, provided evidence that mood disorders increase the risk of substance use disorders (SUD) [19,20]. One literature review studied the relationship between alcohol use disorders (AUD) and MDD and found a correlation between the two, in that having AUD doubled the risk of developing MDD, and vice versa [21]. Mood disorder and SUD comorbidity lowers the prognosis and treatment outcomes for each problem [22]. However, there is evidence to suggest that successful treatment of a comorbid mood disorder decreases cravings and substance abuse [23]. Furthermore, the correlations are not exclusive to substance addiction, and several studies have concluded that behavioral addictions (such as Internet and smartphone addiction) can be associated with MDD [24,25].

Insomnia is defined as a subjective perception of difficulty falling or staying asleep. It can occur in acute episodes lasting one night or chronically for up to several weeks or months. It is associated with decreased mental and physical health-related quality of life scores [26] and psychiatric illness [27].
Furthermore, it is indirectly associated with smartphone overuse [28]. Attention-deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder usually diagnosed in childhood that may last into adulthood. It is characterized by hyperactivity, impulsiveness, or inattentiveness, and often all three symptoms (DSM-IV-TR; APA, 2000), which interfere with or affect the quality of social, academic, or occupational performance or development [29]. A study across different countries in America, Europe, and the Middle East demonstrated that the average adult ADHD prevalence was 3.4%, with a higher percentage in high-income countries (4.2%) compared with low-income countries (1.9%) [30]. Adults with ADHD have a significantly higher chance of suffering from depression, antisocial personality, anxiety, and SUD [31].

Past studies on the prevalence of smartphone addiction and its relationship to mental and physical issues have been conducted [15,16,25,32,33]. These investigations revealed some of the components of smartphone addiction [28,34-36]. However, numerous elements, such as ADHD and nicotine addiction, were left uninvestigated. The objective of this study was to determine the correlation between smartphone addiction and different elements, including MDD, nicotine dependence, quality of life, and sleep, in order to identify common variables among postgraduates. Identifying these features of smartphone addiction can promote awareness and knowledge about smartphone addiction while surveying the level of its impact on mental health. This study aimed to determine the prevalence of smartphone addiction among postgraduate students. Due to the high level of pressure experienced by postgraduate students [37], we assumed that they are especially vulnerable to smartphone addiction. Postgraduate students use smartphones for communication, research for school assignments, and entertainment. As far as we know, there have been no investigations on the prevalence of smartphone addiction among Arabian Middle Eastern postgraduate students. In this article, we investigate the prevalence of smartphone addiction among postgraduate students and evaluate its correlation with social demographics, depression, ADHD, and nicotine dependence.

Methods

A cross-sectional online survey was sent via email and social media accounts for postgraduate education (Twitter, Facebook, and WhatsApp) to postgraduate students and was completed by 558 participants. Participants were included in the study if they were Arabic-speaking postgraduate students and smartphone users. Postgraduate students from 187 different universities participated in the study. The participants were studying in different countries worldwide, including Saudi Arabia, Jordan, Egypt, Kuwait, Algeria, Bahrain, Iraq, Lebanon, Afghanistan, Ethiopia, Fiji, Cyprus, Australia, England, the United States, and Canada. We excluded 52 students from the study due to incomplete questionnaires, leaving us with a total of 506 participants, 385 (76.1%) of whom were located in Saudi Arabia. This study was approved by the Institutional Review Board (IRB) of Imam Mohammad Ibn Saud Islamic University in Riyadh, Saudi Arabia. Informed consent was obtained from all participants through a statement of agreement at the beginning of each questionnaire. All methods were thoroughly explained to each participant and performed in accordance with the relevant guidelines and regulations from the World Medical Association (WMA) Declaration of Helsinki.
The online survey consisted of 43 questions, which took approximately 5 to 10 min to complete. The questionnaire was divided into six parts. The first part collected sociodemographic information, such as age and gender. The second and third parts included Arabic-validated versions of the Smartphone Addiction Scale (SAS) [38] and the Patient Health Questionnaire (PHQ9) for depression [39], respectively. The fourth part used the Athens Insomnia Scale (AIS) to evaluate sleep quality [40]. The fifth section addressed nicotine dependence and employed the Fagerström Test for Cigarette Dependence questionnaire (FTCd) [41]. Finally, the sixth part implemented the Adult ADHD Self-Report Scale (ASRS-v1.1) [20].

Smartphone addiction was measured using the Arabic version of the Smartphone Addiction Scale (SAS). The SAS is a self-diagnosis scale modified from the K-scale, a scale to evaluate Internet addiction (IA) in juveniles. The SAS consists of 33 items with 6 subscales, namely, daily life disturbance, positive anticipation, withdrawal, cyberspace-oriented relationship, overuse, and tolerance [42]. The items are scored on a six-point Likert scale: strongly disagree (1), disagree (2), weakly disagree (3), weakly agree (4), agree (5), or strongly agree (6). The sum of the six subscales gives an SAS score with a range of 33 to 198; a higher score indicates more addictive smartphone use [42]. Data factorability for the Arabic version of the SAS was confirmed using the Kaiser-Meyer-Olkin (KMO) test of sampling adequacy, with a resulting value of 0.94, and was supported by Bartlett's test of sphericity to confirm the suitability of the data for factor analysis, which demonstrated a significant value of p < 0.01 [38]. The internal consistency of the Arabic SAS was calculated using Cronbach's alpha, with a value of α = 0.94 [38]. In this study, we placed participants who scored 116 or more on the SAS in the high smartphone use group, whereas participants who scored less were placed in the low smartphone use group.

The second part of the questionnaire used the PHQ9 for depression, a self-report questionnaire designed to evaluate the level of depression over the preceding 2 weeks; a higher score indicates a higher chance of depression [43]. We used a validated and translated version to assess our Arabic population; it had an internal consistency reliability of 0.857, as calculated using Cronbach's alpha [39]. We used a cutoff point of 10 for clinically significant depression and then further classified the depressed participants as having clinically significant, moderately severe, or severe depression. We considered those who scored between 10 and 14 to have clinically significant depression, scores between 15 and 19 to indicate moderately severe depression, and scores of 20 or more to indicate severe depression. Furthermore, participants who scored 15 or more warranted active treatment [43].

The AIS was adopted to measure sleep quality. The English version has an optimum specificity of 85% and sensitivity of 93% [40] and evaluates sleep quality over the last month using a four-point system of 0 to 3, where 0 means no insomnia symptoms and 3 means acute sleep difficulties. In our study, any participant with a score of 6 or more was considered to have insomnia. We used an Arabic version of the AIS by the Toronto Sleep Clinic, which was translated into Arabic by an English-speaking healthcare professional whose mother tongue is Arabic. Another translator used the same approach to perform a back-translation of the Arabic translation into English. A few English-speaking translators reviewed the back-translation for any problematic contextual discrepancies. Despite the use of measures to ensure accurate translation, the scale has yet to be tested for validity.
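The scale cutoffs described above translate directly into simple scoring rules; the sketch below is an illustration of those rules applied to hypothetical item-response lists, not the authors' code.

```python
# Minimal sketch of the scoring rules described above.
# Inputs are hypothetical lists of item responses.

def sas_group(items: list[int]) -> str:
    # SAS: 33 items, each scored 1-6, total 33-198; >=116 = high smartphone use.
    assert len(items) == 33 and all(1 <= x <= 6 for x in items)
    return "high smartphone use" if sum(items) >= 116 else "low smartphone use"

def phq9_severity(items: list[int]) -> str:
    # PHQ-9: 9 items, each 0-3; 10-14 clinically significant,
    # 15-19 moderately severe, 20+ severe (15+ warrants active treatment).
    score = sum(items)
    if score >= 20:
        return "severe"
    if score >= 15:
        return "moderately severe"
    if score >= 10:
        return "clinically significant"
    return "below cutoff"

def has_insomnia(items: list[int]) -> bool:
    # AIS: 8 items, each 0-3; a total of 6 or more indicates insomnia.
    return sum(items) >= 6
```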
Another translator used the same approach to perform a back-translation of the Arabic translation into English. A few English-speaking translators reviewed the back-translation for any problematic contextual discrepancies. Despite these measures to ensure accurate translation, the scale has yet to be tested for validity. The fifth part employed the FTCd, a six-item questionnaire used to measure nicotine dependence associated with cigarette smoking. It uses a 10-point system, wherein those who scored less than 4 were considered minimally dependent, 4-6 moderately dependent, and 6-10 highly dependent. The FTCd was found to be moderately reliable in an Arabic sample, with a Cronbach's alpha coefficient of 0.68 [41]. We used the validated Arabic version of the ASRS-v1.1, a six-item screening tool used to assess adult ADHD. It has been proven to be a reliable tool, with a sensitivity of 68.7% and a specificity of 99.5%; two-thirds of the clinical cases of ADHD scored 4-6 [44].

Results
The total number of participants in this study was 506, with 158 (31.23%) males and 348 (68.77%) females. Of the participants, 9.41% were aged between 21 and 24 years, 35.88% were between 25 and 29 years (P = 0.007), 44.51% were between 30 and 39 years, and 10.20% were 40 years or older. Of the participants, 46.18% were single, 50.68% were married, and 3.13% were divorced. The majority of the participants (56.19%) did not have any children. The participants were pursuing different majors: the largest group (49.32%) were taking courses in the humanities/social sciences, 12.72% were studying biological/physical sciences, 12.92% were in engineering fields, and 25.05% were pursuing unspecified majors. With regard to postgraduate studies, 67.72% of the participants were studying for a master's degree, whereas 32.28% were preparing for their Ph.D.; 26.39% were first-year students, 32.08% second-year students, 20.40% third-year students, 10.30% fourth-year students, and 10.30% fifth-year students. Finally, 33.86% of our participants were studying abroad, whereas 66.14% were studying in their country of origin (Table 1). According to the Smartphone Addiction Scale, 51.0% of the participants had high smartphone use, whereas 49.0% had low smartphone use (Table 2). The statistical analysis revealed no significant relationship between smartphone use and sociodemographic characteristics such as gender, marital status, number of children, major, educational level, academic year, studying abroad or in the home country, monthly income, family income, GPA, or number of published papers. However, there was a statistically significant relationship between smartphone use and age (P = 0.026) (Table 3). The PHQ-9 for Depression demonstrated a significant association between high smartphone use and MDD. The multivariate analysis revealed an elevated risk of having severe depression and smartphone addiction simultaneously (OR = 3.779) (P = 0.001) (Tables 2 and 5). In conclusion, high smartphone use is associated with a higher prevalence of depression (Tables 4 and 5). We employed the FTCd to assess nicotine dependence. The total result revealed a moderate, significant, positive Pearson correlation between smartphone addiction and smoking (r = 0.323) (P = 0.018).
In our study population, 20.8% were active smokers, and 8.4% of participants were both smokers and smartphone-addicted, indicating that 41.5% of the smokers were addicted to smartphones (P = 0.039) (Table 4). We measured difficulty sleeping based on the AIS; the results demonstrated a significant correlation between the severity of insomnia and smartphone use (r = 0.306) (P = 0.001) (Table 4): 65.7% of those with high smartphone use had insomnia, whereas 34.3% did not. Conversely, only 44.4% of the non-smartphone-addiction group had insomnia, whereas 55.6% were free of it, indicating a higher prevalence of insomnia among high smartphone users. Participants with smartphone addiction had approximately twice the risk of having insomnia (OR = 2.113) (P = 0.013) (Table 5). We employed the ASRS-v1.1 symptom checklist to assess ADHD symptoms and found that 47.8% of the participants with high smartphone use had ADHD symptoms. Conversely, 19.7% of the non-smartphone-addiction group exhibited ADHD symptoms, indicating a significant relationship between smartphone addiction and adult ADHD symptoms (r = 0.405) (P = 0.001) (Table 4). Those who had ADHD symptoms were at a greater risk of having smartphone addiction (OR = 2.712) (P < 0.001) (Table 5).

Discussion
Our study demonstrates that 51% of our population scored high on the SAS. A similar study on Lebanese university students employed the Smartphone Addiction Inventory and found that 49% had smartphone addiction [16]. Another study in Saudi Arabia found that 61% of university students had high smartphone use [15]. A significant correlation was observed between age and smartphone addiction. A study in Turkey suggested that gender and young age were correlated with the amount of smartphone use; specifically, women and younger populations may be at a higher risk for smartphone addiction [28]. However, our results revealed no significant relationship between gender and smartphone addiction. Reaffirming the previous literature, we observed a significantly positive relationship between smartphone addiction and MDD, which is consistently supported by research [25,32,33]. In a review of 23 studies, it was found that depression was consistently associated with smartphone use [34]. A study on Korean adolescents observed an association between unhealthy lifestyle habits and smartphone addiction, linking unhealthy diets, weight gain, and sleep disturbance to smartphone addiction; these are considered symptoms and consequences of MDD [35]. A study on university students in Saudi Arabia revealed that 43% of problematic smartphone users had reduced sleeping hours [15]. Our current research indicates that there is a strong association between high smartphone use and insomnia, as most of our high-use subjects (65.7%) reported both. Intensive smartphone use was shown to be positively correlated with poor sleep quality and daytime sleepiness, which is consistent with our findings [36]. Another study, conducted at King Abdulaziz University, Jeddah, revealed that mobile use was highly prevalent among participants (73.4% used smartphones > 5 h/day), and two-thirds of the participants had poor sleep quality and latency to sleep [45]. A Belgian study revealed that bedtime smartphone use caused later self-reported rise times, higher insomnia scores, and increased fatigue [46]. A study on students between the ages of 18 and 39 indicated that insomnia is associated with high smartphone use [47].
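The odds ratios reported above (e.g., OR = 2.113 for insomnia and OR = 2.712 for ADHD symptoms) come from standard 2x2 contingency analysis. As an illustration of the computation (the counts below are hypothetical, not the study's cross-tabulation), an odds ratio with a 95% Wald confidence interval can be obtained as follows.

import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
                 outcome+   outcome-
    exposed         a          b
    not exposed     c          d
    Returns (OR, (95% CI lower, upper)) using the Wald approximation."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo, hi = or_ * math.exp(-1.96 * se), or_ * math.exp(1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts, for illustration only:
print(odds_ratio(170, 88, 115, 133))  # OR ~2.23, CI ~(1.56, 3.20)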
The National Sleep Foundation's 2011 Sleep in America Poll showed that the use of numerous technological devices before bedtime leads to difficulty falling asleep [48]. Confirming our finding of a higher prevalence of ADHD symptoms among students with smartphone addiction (47.8%) compared with low-use smartphone users (19.7%), an epidemiological study employing the SAS in 4512 South Korean adolescents examined the relationship between smartphone addiction and symptoms of depression, anxiety, and ADHD, and found that those with smartphone addiction had a higher likelihood of developing ADHD symptoms [49]. Studies have found similarities between smartphone addiction and IA [50]. A study that used 12 addiction risk factors to compare smartphone addiction and IA found multiple shared risk factors, such as depression, anxiety, self-control, life satisfaction, and aggression; moreover, the effects of the five identified psychological factors of addiction were all significant (P < 0.01) for both IA and smartphone addiction [51]. The current results reveal a relationship between behavioral addictions and adult ADHD. A similar previous study examined the relationship between IA and the severity of ADHD symptoms and emotional distress through an online survey, and established a significant relationship between the severity of IA symptoms and the presence and severity of ADHD symptoms [52]. Furthermore, studies have revealed that individuals with ADHD are more likely to develop other types of behavioral addictions, such as gambling disorders [53,54]. Adult ADHD was strongly associated with SUD in a literature review of adult ADHD in the Arab world [55].

Limitations
Because a cross-sectional study represents a single point in time rather than a longitudinal observation, it is not guaranteed to be representative of the population, it cannot be used to analyze the behavior of the population over a period of time, and it cannot establish the cause of the disorder. There is also a chance of recall bias on the part of participants. Since our study was circulated online, through emails and various social media channels, it excludes people with MDD, insomnia, or ADHD who do not have access to social media, as well as those who were not interested in taking part in our questionnaire due to social stigmas. Therefore, future research should involve participants who are more open to the idea of mental health and mental illness. In addition, the PHQ-9 is the questionnaire most commonly used for the diagnosis of MDD in clinical practice. It addresses somatic symptoms, such as exhaustion and poor appetite, which can be attributed to other diseases, thus placing the study at risk of overestimating MDD prevalence.

Conclusions
In conclusion, given the ease of access to smartphones and our utter dependence on them in daily life, their mental and physical impacts should be studied across different populations. The postgraduate student population is underrepresented throughout the medical literature. Thus, we hope to expand current knowledge on postgraduate students to include information on smartphone addiction. Confirming several studies, we found a positive association among insomnia, depression, adult ADHD, and smartphone overuse. Therefore, we encourage the scientific community to study the impacts of smartphone addiction on the mental health of postgraduate students.
Finally, we recommend that smartphone addiction be carefully monitored in postgraduate students exhibiting depression, insomnia, or ADHD symptoms.
Dynamics of hot galactic winds launched from spherically-stratified starburst cores
The analytic galactic wind model derived by Chevalier and Clegg in 1985 (CC85) assumes $\textit{uniform}$ energy and mass-injection within the starburst galaxy nucleus. However, the structure of nuclear star clusters, bulges, and star-forming knots is non-uniform. We generalize to cases with spherically-symmetric energy/mass injection that scale as $r^{-\Delta}$ within the starburst volume $R$, providing solutions for $\Delta = 0$, 1/2, 1, 3/2, and 2. In marked contrast with the CC85 model ($\Delta=0$), which predicts zero velocity at the center, for a singular isothermal sphere profile ($\Delta=2$), we find that the flow maintains a $\textit{constant}$ Mach number of $\mathcal{M}=\sqrt{3/5} \simeq 0.77$ throughout the volume. The fast interior flow can be written as $v_{r<R} = (\dot{E}_T/3\dot{M}_T)^{1/2} \simeq 0.41 \, v_\infty$, where $v_\infty$ is the asymptotic velocity, and $\dot{E}_T$ and $\dot{M}_T$ are the total energy and mass injection rates. For $v_\infty \simeq 2000 \, \mathrm{km \, s^{-1}}$, $v_{r<R} \simeq 820 \, \mathrm{km\, s^{-1}}$ throughout the wind-driving region. The temperature and density profiles of the non-uniform models may be important for interpreting spatially-resolved maps of starburst nuclei. We compute velocity resolved spectra to contrast the $\Delta=0$ (CC85) and $\Delta=2$ models. Next generation X-ray space telescopes such as XRISM may assess these kinematic predictions.

INTRODUCTION
Galactic winds are important to the process of galaxy formation and evolution (see Veilleux et al. 2005; Zhang et al. 2018; Veilleux et al. 2020). They are commonly found in rapidly star-forming galaxies at both low and high redshift (Martin 2005; Rubin et al. 2014), act to modulate star formation, shape the stellar mass and mass-metallicity relations (Peeples & Shankar 2011; Ma et al. 2016), and advect metals into the circumgalactic and intergalactic medium (Borthakur et al. 2013; Werk et al. 2016). Galactic outflows are observed to be multi-phase. The hot, $\geq 10^7$ K, phase is observed in X-rays and is often compared to the CC85 wind model (e.g., Strickland & Heckman 2009; Lopez et al. 2020, 2022). The CC85 model assumes uniform energy and mass-injection within a sphere of radius $R$ ($\sim 0.1-0.5$ kpc), which drives a flow with the characteristic solution of transitioning from sub- to supersonic at $R$. Outside of the sphere there are no energy and mass sources and the flow undergoes adiabatic expansion (i.e., $T \propto r^{-4/3}$, $\rho \propto r^{-2}$, and $v \propto r^0$). There have been many modifications to CC85. These semi-analytic studies typically relax the assumption of an adiabatic wind by including additional physics such as radiative cooling, gravity, radiation pressure, non-equilibrium ionization, non-spherical flow geometries, and/or mass-loading of swept-up material (see Wang 1995; Suchkov et al. 1996; Silich et al. 2004; Thompson et al. 2016; Bustard et al. 2016; Yu et al. 2020; Fielding & Bryan 2022; Sarkar et al. 2022). Other studies have numerically considered uniform wind-driving cylinders (Strickland et al. 2000) and rings, and non-uniform injection within cold galactic disks (Tanner et al. 2016; Schneider et al. 2020). Star formation in nuclear star clusters is inherently non-uniform. Embedded stellar clusters display either multi-peaked surface density distributions or highly concentrated surface density distributions (Lada & Lada 2003).
Nuclear star clusters and bulges are observed to be compact and non-uniform (Böker et al. 2002). Consequently, a self-consistent wind model needs to consider non-uniform sources within the wind-driving region (WDR). In this work, in contrast with uniform injection, we consider spherically-symmetric volumetric energy and mass injection rates ([erg s$^{-1}$ cm$^{-3}$] and [g s$^{-1}$ cm$^{-3}$], respectively) that scale as $\propto r^{-\Delta}$ within the WDR. Zhang et al. (2014) present the solutions for arbitrary $\Delta$ models but do not present a study of the bulk gas dynamics and thermodynamics of these models. Silich et al. (2011) present wind models with non-uniform mass and energy injection modeled with an exponential function of radius. Both Palouš et al. (2013) and Bustard et al. (2016) consider a Schuster distribution of sources that scales as $\propto (1 - r^2/R^2)$ (with the latter reference taking the exponent to be 0). In these previous works the sonic point shifts away from $R$ as the energy and mass injection are taken to be non-uniform. Here we calculate the structure of $r^{-\Delta}$ models for $r < R$. Similar to CC85, we assume that the supernovae efficiently thermalize their energy and drive a wind. We extend Zhang et al. (2014) by exploring how the kinematic and thermodynamic properties of the flow change over different injection slopes $\Delta$. The CC85 model ($\Delta = 0$) predicts flat temperature, density, and pressure within the WDR and zero velocity at the center, which accelerates linearly to become supersonic at the starburst edge. We find that the non-uniform models produce flows that are denser and faster than the CC85 flows within $R$. Notably, for a model representative of a galactic density profile with a constant rotation curve, an isothermal sphere (sources $\propto r^{-2}$), the outflow maintains $\mathcal{M} = \sqrt{3/5} \simeq 0.77$ throughout the WDR. We verify these results using 3D hydrodynamic simulations with the Cholla code for $\Delta = 1/2$, 1, 3/2, and 2 models. We then focus on the observational characteristics of these non-uniform injection wind models, finding that the fast subsonic wind ($v \simeq 0.41\,v_\infty$, see Eq. 15) leads to horn-like features in resolved line profiles (Fig. 4) which may be observed by XRISM. In § 2, we write down the hydrodynamic equations, derive the self-similar analytic Mach number solution and the physical and dimensionless solutions, and take central limits of these solutions. In § 3, we run 3D hydrodynamic simulations and confirm the derived analytics. In § 4, we construct X-ray surface brightness maps, brightness vs. height profiles, and velocity-resolved line profiles. In § 5, we provide a synthesis of this work, discuss how the models predict outflow velocities that can be resolved by XRISM and how the different temperature and density profiles may be important in interpreting spatially-resolved maps of the interior of starburst superwinds, and consider future research directions.

HYDRODYNAMIC EQUATIONS
In the absence of rotation, gravity, and radiative cooling, the hydrodynamic equations for a steady-state spherically expanding flow are (see Chevalier & Clegg 1985)
$$\frac{1}{r^2}\frac{d}{dr}\left(\rho v r^2\right) = \dot{q}_m,$$
$$\rho v \frac{dv}{dr} = -\frac{dP}{dr} - \dot{q}_m v,$$
$$\frac{1}{r^2}\frac{d}{dr}\left[\rho v r^2 \left(\frac{v^2}{2} + \frac{\gamma}{\gamma-1}\frac{P}{\rho}\right)\right] = \dot{q}_e,$$
where the volumetric energy and mass injection rates scale as $\dot{q}_e \propto \dot{q}_m \propto r^{-\Delta}$ for $r < R$, and where $\Delta = 0$ is taken for the CC85 model. Equations 1, 2, and 3 can be re-written as a single equation for the derivative of the Mach number. We then impose the boundary condition $\mathcal{M}(r = R) = 1$ (Chevalier & Clegg 1985; Wang 1995). For $1 \neq \Delta < 3$, the Mach number as a function of radius within the WDR is given by an implicit solution; these solutions agree with those also derived by Zhang et al. (2017). Taking $\Delta = 0$, we arrive at the CC85 solution.
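Before introducing the physical solutions, the constant-Mach-number claim for Δ = 2 can be checked with a few lines of arithmetic. The sketch below (illustrative Python, not from the paper) uses only the energy constraint v²/2 + c_s²/(γ−1) = Ė_T/Ṁ_T, which holds throughout the WDR whenever the energy and mass injection share the same radial scaling, together with the quoted interior speed v_{r<R} = (Ė_T/3Ṁ_T)^{1/2}.

import math

# Numerical check of the Delta = 2 interior solution, using only energy
# conservation: v^2/2 + cs^2/(gamma - 1) = Edot_T/Mdot_T for r < R, since the
# cumulative energy-to-mass injection ratio is independent of radius.
gamma = 5.0 / 3.0
Edot_over_Mdot = 1.0                               # arbitrary units; only ratios matter

v_in = math.sqrt(Edot_over_Mdot / 3.0)             # v_{r<R} = (Edot_T / 3 Mdot_T)^(1/2)
cs2 = (gamma - 1.0) * (Edot_over_Mdot - v_in**2 / 2.0)
mach = v_in / math.sqrt(cs2)

v_inf = math.sqrt(2.0 * Edot_over_Mdot)            # asymptotic wind speed
print(mach, math.sqrt(3.0 / 5.0))                  # both ~0.7746
print(v_in / v_inf)                                # ~0.408, i.e. ~0.41 v_inf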
The adiabatic, spherically expanding, exterior ($r > R$) solutions for the Mach number are identical to the CC85 solutions for all $\Delta$ models. Physical solutions require the definition of the total energy and mass injection rates, $\dot{E}_T$ and $\dot{M}_T$, within the WDR. We use $\dot{E}_T = \alpha \times 10^{51}\,\mathrm{erg} \times \mathrm{SFR}/(100\,M_\odot)$ and $\dot{M}_T = \beta\,\mathrm{SFR}$, where $\alpha$ and $\beta$ are the dimensionless energy thermalization and mass-loading efficiencies, $\mathrm{SFR}_* \equiv \mathrm{SFR}/(M_\odot\,\mathrm{yr}^{-1})$ is the dimensionless star-formation rate, and we have assumed that there is one supernova per $100\,M_\odot$ of star formation and that each supernova releases $10^{51}$ ergs of energy. To make a direct comparison with the uniform case ($\Delta = 0$), we normalize the rates to those of the uniform CC85 model; this sets the normalization requirement for the energy and mass loading (Eq. 10). From Equations 1 and 3 the sound speed and velocity follow, and the density is obtained from the continuity equation. The remaining quantity, the pressure, is solved from the sound speed as $P = \rho c_s^2/\gamma$, where we take $\gamma = 5/3$ throughout the paper. In Figure 1, we plot the dimensionless Mach number, density, and temperature. Relative to uniform injection ($\Delta = 0$, red line), we find that an isothermal sphere model ($\Delta = 2$, blue line) produces a higher Mach number and a denser outflow within the interior of the starburst. In Table 1 we present analytic central-limit solutions for the $\Delta = 0$ and 2 models. We find that the Mach number for an isothermal sphere ($\Delta = 2$) is constant: $\mathcal{M} = \sqrt{3/5} \simeq 0.77$. This starkly contrasts with the Mach number for the CC85 (uniform) model, which grows linearly from the origin as $\mathcal{M} = 2^{-5/14}\,3^{-11/14}\,r_* \simeq 0.33\,r_*$, where $r_* = r/R$. Both the pressure and density scale as $r^{-1}$, such that the velocity profile is also constant within the WDR: $v_{r<R} = (\dot{E}_T/3\dot{M}_T)^{1/2}$. This can be written in terms of the asymptotic wind velocity, $v_\infty = (2\dot{E}_T/\dot{M}_T)^{1/2}$, as $v_{r<R} = v_\infty/\sqrt{6} \simeq 0.41\,v_\infty$. We see that the interior flow for a $\Delta = 2$ model is fast and dense relative to CC85 (see Table 1). The difference in kinematics may be observable in velocity-resolved line profiles (see Sec. 4). For $\Delta = 1$, in the limit that $r_* \ll 1$, the Mach number is given by $\mathcal{M} = [-(20/3)\ln((4/3)^{1/5} r_*)]^{1/2}$. The remaining quantities can be calculated by combining this with Equations 11 and 12. Equations 7 and 8 are solutions to an implicit equation. To use the solution, one is required to define an inner and an outer radius. As shown in Table 1, for an isothermal sphere model there is a strong dependence on the inner radius, as the density and pressure diverge towards infinity. We define the inner radius (the minimum value of $r_*$) for the $\Delta = 2$ model as $r_{\mathrm{core},*}$.

Inference of the volumetric energy and mass injection rates within the wind-driving region
A critical inference from X-ray observations of starburst nuclear centers concerns the energy thermalization and mass-loading efficiencies (i.e., $\alpha$ and $\beta$; see Eqs. 9). Using measurements of the central temperature and density, we infer $\alpha$ and $\beta$ (Strickland & Heckman 2009) using the solutions from Table 1, expressed in terms of the scaled variables $n_{0.1} = n/0.1\,\mathrm{cm}^{-3}$, $T_7 = T/10^7\,\mathrm{K}$, $R_{0.5} = R/0.5\,\mathrm{kpc}$, and $\mathrm{SFR}_{10} = \mathrm{SFR}_*/10$ (Eqs. 16 and 17). From Equations 16 and 17, it is apparent that the efficiencies $\alpha$ and $\beta$ inferred from the $\Delta = 2$ model depend on the defined inner radius $r_{\mathrm{core},*}$, whereas for $\Delta = 0$ they do not. The dependence on $r_{\mathrm{core},*}$ for $\Delta = 2$ arises from the diverging density and pressure profiles; see Table 1.

3D HYDRODYNAMIC SIMULATIONS
We test our solutions using the Cholla (Schneider & Robertson 2015) code to simulate the starburst nuclei. The box has dimensions 1 kpc$^3$ with 256$^3$ cells, giving a cell resolution of $\Delta x \simeq 3.9$ pc.
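To attach physical numbers to these expressions, a short script (an illustration under the stated assumptions of one 10^51 erg supernova per 100 M_sun of star formation and a total mass injection rate Mdot_T = beta SFR) evaluates v_inf = (2 Edot_T/Mdot_T)^(1/2) and the Δ = 2 interior speed for the M82-like fiducial parameters used in the simulations below.

import math

# Rough numeric check of the fiducial interior wind speed quoted in the
# simulations section. Symbols alpha/beta follow Strickland & Heckman (2009).
MSUN_G, YR_S = 1.989e33, 3.156e7
alpha, beta, SFR = 1.0, 0.3, 10.0                  # M82-like fiducial parameters

Edot = alpha * 1e51 * (SFR / 100.0) / YR_S         # erg/s
Mdot = beta * SFR * MSUN_G / YR_S                  # g/s

v_inf = math.sqrt(2.0 * Edot / Mdot) / 1e5         # km/s
v_in = v_inf / math.sqrt(6.0)                      # Delta = 2 interior speed (Eq. 15)
print(f"v_inf ~ {v_inf:.0f} km/s, v(r<R) ~ {v_in:.0f} km/s")  # ~1830 and ~750 km/s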
Within a radius $R$, we deposit energy and mass at rates $\dot{E}_T$ and $\dot{M}_T$ for different power-law injection slopes $\Delta = 0$, 1/2, 1, 3/2, and 2, where the normalization for each $\Delta$ model is given by Equation 10. For all simulations, we take the M82-like fiducial wind parameters (Strickland & Heckman 2009) of $\alpha = 1$, $\beta = 0.3$, $R = 0.3$ kpc, and $\mathrm{SFR}_* = 10$. The value of the core radius $r_{*,\mathrm{core}}$ is effectively set by the resolution. In order to make a direct comparison between the analytic solutions and the numerical simulations, we do not include any additional physics, such as radiative cooling or gravity. For these wind model parameters, most of the flow is non-radiative and can escape a typical potential (Chevalier & Clegg 1985; Thompson et al. 2016; Lochhaas et al. 2021). All wind models reach a steady state, showing that the solutions are stable. In Figure 2, we show 1D radial profile skewers of the Mach number, number density, temperature, and velocity for both the analytic solutions (colored solid lines) and the Cholla simulation results (black x markers) after a time-steady solution has been established. The analytic solutions match the simulation results for every physical quantity. This implies that the imposed boundary condition of $\mathcal{M} = 1$ at $r_* = 1$, which was used in the analytic derivation, is indeed valid over the range of $\Delta$ values presented. Compared to the uniform-sphere CC85 model ($\Delta = 0$), the isothermal sphere model ($\Delta = 2$) maintains a much higher, constant, radial velocity $\simeq 740$ km s$^{-1}$ throughout most of the WDR.

Surface Brightness
We calculate the instantaneous X-ray surface brightness as
$$\mathcal{S}_{E_1,E_2}(x,y) = \int_{E_1}^{E_2} \int_0^{L} n(x,y,z)^2\,\Lambda(T(x,y,z), E)\,dz\,dE,$$
where $L$ is the length of the simulation domain, which includes the post-WDR supersonic wind. Using PyAtomDB (Foster & Heuer 2020), we evaluate the plasma emissivity over XRISM's observing bandwidth ($0.3 \leq E\,[\mathrm{keV}] \leq 12$), and assume solar metallicity abundances (Anders & Grevesse 1989). In Figure 3 we show $\mathcal{S}$ for the $\Delta = 0$, 1, and 2 Cholla models. In the left panel of Figure 4, we calculate the surface brightness profile by integrating $\mathcal{S}$ along the horizontal direction and then dividing by the area of each surface. The surfaces are taken to be 50 pc$^2$. The $\Delta = 2$ model leads to a strongly-peaked brightness profile, whereas the $\Delta = 0$ model produces a less-peaked profile. We note that for the $\Delta = 2$ model, the diverging density (see Tab. 1) implies a short cooling timescale. For these short cooling times, a cool non-X-ray-emitting core may develop (Wünsch et al. 2008; Lochhaas et al. 2021). Radiative cores will be considered in a future work.

Velocity Resolved Line Profile
XRISM's Resolve instrument is capable of resolving individual spectral lines and will trace gas motions through Doppler broadening and line shifts (XRISM Science Team 2020). A spectrum of the entire wind-driving region will yield insight into the hot gas kinematics, which remain thus far unprobed. We construct velocity-resolved line profiles for the $\Delta = 0$ and $\Delta = 2$ wind models. To do so, we consider shells inside $r \leq R$. When projected along the line of sight, each shell gives a top-hat distribution in line-of-sight velocity, with bounds defined by $\pm v(r)$. We then calculate the emissivity of the He-like O VII, Mg XI, and Si XIII triplets and integrate over XRISM's observing bandwidth. Next, we integrate over the shell volume $\Delta V = 4\pi r^2 \Delta r$. The result is shown in the three right panels of Figure 4. The $\Delta = 2$ model has brighter emission at high velocities, whereas the $\Delta = 0$ model is brightest where the gas is stationary.
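Operationally, the surface-brightness map above is just a weighted sum along the line of sight. The schematic below is illustrative only: a toy power-law emissivity stands in for the band-integrated PyAtomDB tables, and uniform arrays stand in for a Cholla snapshot.

import numpy as np

# Schematic line-of-sight projection for an X-ray surface-brightness map.
def surface_brightness(n, T, dz, Lambda):
    """n, T: 3D arrays (cm^-3, K); dz: cell size along z (cm).
    Returns S(x, y) = sum over z of n^2 Lambda(T) dz."""
    return np.sum(n**2 * Lambda(T) * dz, axis=2)

Lambda = lambda T: 1e-23 * (T / 1e7) ** 0.5   # toy emissivity [erg cm^3 s^-1]
n = np.full((64, 64, 64), 0.1)                # uniform toy wind, 0.1 cm^-3
T = np.full((64, 64, 64), 1e7)                # 10^7 K
dz = 3.086e21 / 64                            # 1 kpc domain / 64 cells, in cm

S = surface_brightness(n, T, dz, Lambda)
print(S.shape, S[0, 0])                       # (64, 64), ~3.1e-4 erg cm^-2 s^-1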
This is a result of the constant high-velocity flow (Eq. 15). For these injection parameters (see Sec. 3) the characteristic feature of the $\Delta = 2$ model is the sharp increase in the emissivity at $\sim \pm 750$ km s$^{-1}$.

SUMMARY
In this work we study the dependence on the injection slope, $\Delta$, of the kinematic and thermodynamic structure of the wind within the wind-driving region. We derive analytic solutions, present their limits at small $r$ (see Table 1), and then confirm them with 3D Cholla simulations (see Fig. 2). Importantly, we find that for a distribution of sources that scales as $r^{-2}$ ($\Delta = 2$) the Mach number in the WDR is constant ($\mathcal{M} = \sqrt{3/5}$) and the wind speed is approximately half of the asymptotic wind velocity (see Eq. 15), faster than for the uniform distribution (Chevalier & Clegg 1985) or Schuster-like distributions (Palouš et al. 2013; Bustard et al. 2016). The inferred energy and mass-loading efficiencies, $\alpha$ and $\beta$, are affected by $\Delta$, with $\Delta = 2$ sensitive to the core radius of injection. The $\Delta = 2$ model produces strongly peaked X-ray brightness profiles, whereas the CC85 surface brightness appears more broadened within the WDR (see Fig. 3). Figure 4 shows resolved line velocity profiles, with emissivities calculated with PyAtomDB over $0.3 \leq E\,[\mathrm{keV}] \leq 12$ (the bandpass of XRISM's Resolve soft X-ray spectrometer), for the relevant emission lines (the He-like O VII, Mg XI, and Si XIII triplets) for the $\Delta = 0$ (CC85), 1, and 2 (isothermal sphere) models: the $\Delta = 2$ model leads to a sharp horn-like feature in the velocity distribution across all three triplets, with the discrepancy between $\Delta = 2$ and $\Delta = 0$ more apparent in the heavier triplets. These features may be observed by XRISM in the future. The temperature and density structure of the non-uniform models may be important in interpreting spatially-resolved maps of the interior of starburst superwinds. Wünsch et al. (2008) and Lochhaas et al. (2021) showed that in cases of high mass-loading, the WDR develops a cool inert core. To make a direct comparison to the analytics, the simulations did not include cooling. We expect a cool core at the origin, as $\rho \propto r^{-1}$ ($\Delta = 2$); this would affect the X-ray surface brightness profiles shown in Section 4. The condition for a cool inert core depends on the competing cooling and advection timescales and will be investigated in a future work.
The effect of enrichment on quartz sand properties
The technological properties of quartz sand (without and after enrichment) were investigated by emission spectral analysis, petrography, and X-ray phase analysis. The studied sand is a waste product of the extraction of titanium-containing components from raw materials. It is shown that the enrichment of quartz sand expands the field of its application. It has been established that the quartz sand can be used as a filler of heavy, light, fine-grained, cellular, and silicate concrete and mortar, and in the preparation of dry construction mixtures. The sand can also be of interest as a raw material component in the production of ceramic tiles, porcelain stoneware, glazed ceramic products, wall ceramics, and proppants. Hydraulic enrichment leads to a decrease in the content of clay particles in the sand, which also allows it to be used for the construction of bases and coatings of roads and airfields. The integrated use of raw materials and waste helps solve the problem of creating non-waste and environmentally friendly technologies: on the one hand it ensures the saving of natural raw materials, and on the other hand it allows the waste to be disposed of, improving the environmental situation.

Introduction
The quality of quartz sand used in various fields of industry is regulated in accordance with technical specifications. These requirements and the properties of the quartz sand determine the enrichment technology and whether enrichment is necessary. The final cost of the sand is determined by the method and degree of enrichment, and the utilization of high-quality sands is limited by their high cost. The most common impurities in sands are associated with the presence of heavy and/or magnetic minerals (ilmenite, rutile, magnetite, hematite, goethite, etc.); weathered or layered minerals that have absorbed iron (feldspar, mica); surface films covering quartz grains; clay; inclusions in the structure of quartz, etc. The removal of clay impurities from sand can be carried out by the dry method (air classification) or by the wet (hydraulic) method. Sand enriched by dry methods is, as a rule, inferior in quality to sand enriched by the hydraulic method, especially for sands of complex mineral composition and sands with a high clay content. Fine quartz sands are used, as a rule, as a filler in concrete production, in mortar production, in the production of ceramic materials as a non-shrinkage component, and in the production of molding sands. The aim of this project is to determine the applications of quartz sand in its natural form and to consider the effect of enrichment on its properties and applications. Given the current state of consumption of natural raw materials, there is no doubt that the utilization and involvement of by-products in production is of value [1][2][3]. There are clay deposits in the Tyumen region (Russia), and quartz sand waste is generated there during the extraction of titanium-containing components [4]. It is important to find applications for this waste, which can be used in the construction industry [5][6]. This project therefore studies the possibility of using this quartz waste to produce dry building mixtures, proppants, cement, silicate brick, glass and glassware, and fine ceramics, and as molding sand [7][8][9][10][11][12][13]. Quartz sands are also used to obtain cristobalite as a composite filler, coating material, surface finishing medium, and structural ceramics [14].
Experimental procedure
The chemical composition of the quartz sand was determined by emission spectral analysis with inductively coupled plasma using an Optima 4300 DV optical emission spectrometer (Perkin Elmer, USA). The grain composition, the modulus of sand fineness, and the real density (by the pycnometric method) were determined in accordance with [15]. Petrographic studies were carried out in reflected light with a Polam R-311 optical microscope [15,16].

Results and discussion
Unenriched sand visually contains large fragments more than 5 mm in size. When sifted, these pieces are easily destroyed, and the entire sample of sand passes through a No. 5 sieve; therefore, the unenriched sand does not contain particles larger than 5 mm. The clay content in lumps in the unenriched sand is 5% (for fine and very fine sands it should not exceed 1%), whereas the enriched sand contains no such lumps: the clay-aggregated sand particles disintegrate upon enrichment in water. The full residue on sieve No. 063 is 7.0% for the unenriched sand and 0.14% for the enriched sand, which corresponds to the group of very fine sands. The decrease in the content of sand particles larger than 0.63 mm in the enriched sample is associated with the destruction of clay-aggregated sand particles. The content of particles smaller than 0.16 mm is 79.7% for the unenriched sand and 89.13% for the enriched sand; therefore, the sands under discussion are very fine sands and can be used as fillers. The class of the studied sands corresponds to class II according to [17]. The chemical composition of the sand is given in Table 2. After enrichment, the content of Al2O3, Fe2O3, and TiO2 in the sand decreases, which is associated with the removal of the clay part; the silica content, as one would expect, increases. The content of dusty and clay particles, determined by the elutriation method, is 5.6% in the unenriched sand and 0.46% in the enriched sand, which meets the requirements of [17] (up to 10%). By the presence of organic impurities, both the unenriched and enriched sands are suitable for use in concrete and mortars, since the liquid above the unenriched and enriched sand samples is only slightly colored: the optical density of the liquid over the unenriched sand is 0.044 and over the enriched sand 0.040, compared with a reference solution of 0.6-0.68. Petrographic studies of the enriched sand established that the studied samples are represented by alluvial sands and sandstones with a SiO2 content of 95-98%, from milky white to beige and red (sandstone) in color, with admixtures of muscovite and biotite. The micas (muscovite and biotite) are concentrated mainly in the fine sand fractions of 0.16-0.315 mm (3-6 and 1-3 wt. %, respectively) and 0.315-0.63 mm (3-5 and 2-4 wt. %, respectively). Here, white and transparent quartz is concentrated in well-rounded grains with a high degree of crystallization. The 0.63-1.25 mm fraction is represented by a mixture of milky white and partially transparent, well-crystallized quartz and sandstone aggregates from red to dark gray, consisting of cemented small (0.05-0.1 mm) rounded grains. The cement is weak and non-hydrated, and the aggregates are not durable. The sandstone content is 10-13 wt. %. There are individual biotite particles in the fractions of less than 0.315 mm. The fraction larger than 1.25-2.5 mm consists entirely of fragile sandstone agglomerates. According to the X-ray diffraction data, the unenriched quartz sand contains up to 5% montmorillonite, ~77% quartz, and ~18% feldspar and muscovite in total.
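The fineness (particle size) modulus referred to above is conventionally computed from the cumulative (full) residues on the standard sieve set. A small illustration follows; the formula reflects the usual aggregate-testing convention of summing the full residues on the 2.5, 1.25, 0.63, 0.315, and 0.16 mm sieves and dividing by 100, and the residues shown are hypothetical, not the measured values.

# Illustrative fineness-modulus calculation from cumulative (full) sieve
# residues; the numbers below are hypothetical, not the study data.
full_residues = {2.5: 0.0, 1.25: 0.1, 0.63: 0.14, 0.315: 3.0, 0.16: 10.9}  # cumulative %
M_k = sum(full_residues.values()) / 100.0
print(f"fineness modulus = {M_k:.2f}")  # 0.14 here; a very fine sand, as in the text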
The mineral composition of the enriched quartz sand is as follows: quartz about 81%, with feldspar and muscovite together up to 19%. The real density of the unenriched sand sample was determined by the pycnometric method (Table 3). The voidness of the enriched sand is increased, and its bulk density decreased, due to the leaching of fine clay fractions that filled the voids between large sand particles. An increase in the true density of the sand also indicates the leaching of less dense clay particles. The reactivity of the unenriched and enriched sand is 37.66 and 37.24 mmol·L⁻¹, respectively, in accordance with [17] (not more than 50 mmol·L⁻¹). Consequently, the sands contain a permissible amount of amorphous varieties of silicon dioxide soluble in alkalis (chalcedony, opal, flint, etc.), and the sand can be used as a filler for concrete, as it is resistant to the chemical effects of alkaline cement. The content of sulfide sulfur in the unenriched and enriched sand is 0.0043% and 0.0065%, respectively, and the content of sulfate sulfur is 0.015% and 0.01%. According to [17], sulfur, sulfides except pyrite (marcasite, pyrrhotite, etc.), and sulfates (gypsum, anhydrite, etc.) in terms of SO3 should amount to not more than 1.0%, and pyrite in terms of SO3 to not more than 4% by weight. Therefore, both samples of sand are suitable for use as a filler for concrete and mortars. To assess the suitability of the quartz sand for road construction, the content of clay particles was determined by the swelling method according to [17]. No increase in the material volume of the sand was observed; therefore, the sands under study can be recommended for road construction. To determine the suitability of the sand as proppants, [18,19] were used. According to the chemical composition (Table 2), in terms of MgO and Al2O3 content these sands meet the requirements for neither magnesia-quartz nor aluminosilicate proppants. The minimum size of proppants according to [18,19] corresponds to a size of 0.212 mm, while the studied sands consist in the bulk of particles smaller than 0.16 mm (80-90%). Therefore, the sands under investigation cannot be used as proppants, due to the mismatch in grain and chemical composition, but they can be recommended as a silica-containing raw material for the production of aluminosilicate or magnesian proppants if necessary. For unenriched sand to be suitable as molding sand, according to [20] the content of silicon dioxide should be at least 90.0%. With careful enrichment of the sand it is possible to bring this indicator towards 90%; in the elutriated sand sample the content of silicon dioxide is 89.741%. The content of the harmful impurity Fe2O3 decreases in the enriched sand sample from 1.179% to 0.813%; according to [20], the Fe2O3 content should be no more than 1%, so the enriched sand meets this requirement. The content of alkali and alkaline-earth oxides (Na2O + K2O + MgO + CaO) according to [20] should be no more than 2%; in the enriched sand it is 3.015%, and in the unenriched sand 3.1%. Therefore, the studied sands cannot be used as molding sand even after enrichment, due to the high content of alkali and alkaline-earth oxides. The alkali oxides are present in the sand in the form of feldspar and muscovite, and other enrichment methods are needed to remove such minerals. To utilize the studied sands in Portland cement production as a silicate additive, no enrichment is required, since the clay component contained in the unenriched sand will partially play the role of an alumina additive.
The use of the quartz sand in the production of ceramic tiles is possible due to the content of up to 19% of feldspars and muscovite in the sand: it is known that fluxes, such as feldspars and hydromicas (in our case, muscovite), are introduced into the body composition. Enrichment of the sand in this case reduces the content of iron oxide, which allows the sand under investigation to be used in porcelain production.

Conclusion
The unenriched quartz sand belongs to the group of very fine sands (class II) with a particle size modulus of 0.44. The content of particles smaller than 0.16 mm is 79.7%, and of particles larger than 0.63 mm 7.03%. The sand does not contain large inclusions of more than 5 mm; it belongs to sands with a clay content of 5% in lumps and 5.6% of dust and clay particles. The sand is characterized by a low content of organic impurities, a bulk density of 1338.9 kg·m⁻³, a real density of 2646.6 kg·m⁻³, and a voidness of 49.4%. It has a low reactivity of 37.66 mmol·L⁻¹, and the content of sulfur compounds is negligible. The enriched quartz sand belongs to the group of very fine sands (class II) with a particle size modulus of 0.11; the content of particles smaller than 0.16 mm is 89.2%, and of particles larger than 0.63 mm 0.14%. The sand does not contain large inclusions of more than 5 mm. There is no clay in lumps in the sand, and dusty and clay particles are practically absent (0.46%). The sand is characterized by a low content of organic impurities, a bulk density of 1275.2 kg·m⁻³, a real density of 2723 kg·m⁻³, and a voidness of 53.3%. It has a low reactivity of 37.24 mmol·L⁻¹.
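The voidness values quoted in the conclusion follow directly from the bulk and real densities via voidness = (1 − ρ_bulk/ρ_real) × 100%; a quick check reproduces them.

# Consistency check of the voidness values quoted above.
for label, bulk, real in [("unenriched", 1338.9, 2646.6),
                          ("enriched",   1275.2, 2723.0)]:
    voidness = (1.0 - bulk / real) * 100.0
    print(f"{label}: {voidness:.1f}%")  # ~49.4% and ~53.2% (the text rounds the latter to 53.3%)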
Chiral Thioureas—Preparation and Significance in Asymmetric Synthesis and Medicinal Chemistry
For almost 20 years, thioureas have been experiencing a renaissance of interest with the emerging development of asymmetric organocatalysts. Due to their relatively high acidity and strong hydrogen bond donor capability, they differ significantly from ureas and offer, appropriately modified, great potential as organocatalysts, chelators, drug candidates, etc. The review focuses on the family of chiral thioureas, presenting an overview of the current state of knowledge on their synthesis and selected applications in stereoselective synthesis and medicinal chemistry.

Introduction
The replacement of the electronegative oxygen atom of urea by sulfur (with electronegativity comparable to carbon) results in a significant change of properties. Thioureas (thiocarbamides) exhibit higher acidity and are stronger hydrogen bond donors [1][2][3]. This ability to participate in hydrogen bonding, which can be further tuned by appropriate substitution of the nitrogen atoms, is essential for numerous applications of this class of organic compounds, mainly in organocatalysis and molecular recognition. Thioureas are also widely applied in agriculture and medicine [4][5][6][7], and as corrosion inhibitors [8][9][10][11]. For almost 20 years they have been used as catalysts in organic synthesis, especially in stereoselective reactions [12][13][14]. They also serve as valuable starting materials for the synthesis of heterocycles [15][16][17], and are used as ligands in coordination chemistry (particularly when additional donors are present in their molecules) [18][19][20][21][22], as well as in the field of anion binding and recognition [2,23]. Structurally, thioureas can be classified depending on the number of substituents on the nitrogen atoms (Figure 1) [6,7]. Not surprisingly, derivatives with one and two organic groups (either 1,1- or 1,3-disubstituted) are most common, though trisubstituted thioureas (with a limited, but still preserved, ability to act as hydrogen bond donors) and fully substituted (mainly cyclic) thioureas are also prepared and used for various purposes. The first thiocarbamides were prepared ca. 150 years ago [24,25], and their chiral derivatives have long been known [26,27], but great interest in the latter has arisen with the development of enantioselective organocatalysis.

Synthesis of Chiral Thioureas
Since the thiourea skeleton is not intrinsically chiral, the existence of enantiomers typically originates from its substituents. Examples of various groups bearing stereogenic centers will be presented in the subsequent section. Desymmetrisation resulting from the less common axial (biphenyl or binaphthyl derivatives) [28][29][30] or planar chirality [31][32][33] is also worth mentioning. Consequently, the methods of synthesis of chiral thioureas are essentially the same as for their achiral counterparts, but, typically, the reactants from a chiral pool are used in enantiomerically pure form. However, the preparation of the desired product as a racemic mixture followed by separation of the enantiomers has been reported in selected cases; e.g., bis-thiourea derivatives of Tröger's base were obtained as racemates and resolved on a chiral stationary phase [34]. Similarly, enantiomers of a norbornane thiocarbamide formed as a 1:1 mixture in a two-step protocol from norbornene were separated either by chiral HPLC or by crystallization of diastereomeric salts of the amine intermediates [35].
The advantage of this approach lies in the fact that both optical antipodes of a chiral thiourea can be isolated and used as organocatalysts or for chiral recognition. The choice of a particular method of preparation of the desired stereoisomer depends mainly on its structure (number and type of substituents) and the availability of the necessary starting materials. Certainly, one should also consider possible inconveniences (both for the person conducting the synthesis and for the environment) connected with the use of certain reactants: their limited stability, toxicity, flammability, or simply an unpleasant odor. In most preparations, reactants containing a C=S bond are used: mainly isothiocyanates, but also dithiocarbamates, carbon disulfide, thiophosgene, and their equivalents. Less frequently, inorganic sources of sulfur are useful for a given transformation: P4S10 or Lawesson's reagent to convert urea into thiourea [36,37], sulfides, or elemental sulfur. Amines are the most usual source of thiourea nitrogen atoms; however, isocyanides and azides are also found in various protocols. An alternative route involves modification of an already constructed achiral thiourea with optically active groups.

Reaction of Isothiocyanates with Amines
For many years, the most common method of preparation of thioureas, including chiral derivatives, has been based on the reaction of an alkyl or aryl isothiocyanate with an amine or ammonia (Scheme 1) [1]. In this way, unsymmetrical mono-, di-, or trisubstituted products are formed, depending on the amine (which can be either aliphatic or aromatic). This substrate typically serves as the source of chirality in most preparations of chiral thioureas, while the isothiocyanate is chosen to enhance the desired properties of the designed catalyst, ligand, or receptor (most frequently trifluoromethyl-substituted aryl derivatives are used). However, chiral isothiocyanates are used as well, as they can be easily obtained from the respective primary amine and CS2 [38].
The majority of publications concerning transformations of dithiocarbamates into thioureas focus on the synthesis of achiral derivatives. For example, various di- and trisubstituted thioureas were obtained in 63-92% yield via the reaction of trimethylamine or DABCO salts of dithiocarbamates with primary or secondary amines [57]. Cerium ammonium nitrate was used as a catalyst, and the condensation was carried out in acetonitrile at room temperature for 2-24 h. A series of 1-aryl-3,3-dimethylthioureas were prepared in 70-92% yield from aryl amines, which were first treated with sodium hydride in DMSO and then heated with S-aryl-N,N-dimethyldithiocarbamates for 3-5 h at 90 °C, with release of a thiophenol by-product [58]. Recently published methods show that the reaction can be performed in water under mild conditions. For example, thiazolidine-2-thiones and secondary amines stirred in aqueous solution at 80 °C for 3-5 h without a catalyst yielded trisubstituted thioureas in 60-90% yield [59]. In another preparation, unsymmetrical thioureas were formed when primary or secondary amines, either aliphatic or aromatic, were heated with dithiocarbamates (derived from primary amines) at 50-60 °C under solvent-free conditions (Scheme 5) [60]. Yields ranged from 64% to 100%. Among the various derivatives obtained, three inherited chirality from the amine, and five from the dithiocarbamate. The reaction can also be mediated by metals complexed with dithiocarbamate ligands. Dirksen et al. reported the preparation of a mixture of 1,3-disubstituted and trisubstituted thioureas from bis(dimethyldithiocarbamato)zinc(II) and primary amines [61]. Maddani and Prabhu used dioxomolybdenum dialkyl dithiocarbamates and primary amines to prepare eleven thioureas in 51-85% yield (Scheme 6) [62]. The reaction was conducted in refluxing toluene under nitrogen for 0.5-3 h. Four chiral derivatives were prepared starting from the methyl esters of L-phenylalanine, L-tyrosine, and L-leucine.
Scheme 6. The use of a coordinated dithiocarbamate in the preparation of a chiral thiourea [62].
The formation of thioureas upon self-condensation of trialkylammonium dithiocarbamates has been reported as well [63].

The Use of Carbon Disulfide
Reactions of amines with carbon disulfide serve as a simple method for the preparation of symmetrical, 1,3-disubstituted thioureas. Using a modified protocol with two different amines opens the route to unsymmetrical mono-, di-, or trisubstituted products (Scheme 7). Transient formation of dithiocarbamates and isothiocyanates has been suggested as a key step in the mechanism of the process [1]. Like other C=S transfer reagents, carbon disulfide is not free of drawbacks: it is flammable, volatile (which requires an excess to be used), and has an unpleasant smell. Atom economy suffers from the formation of a sulfur-containing by-product, usually hydrogen sulfide. The reaction is typically performed in organic solvents at elevated temperature and is relatively slow; it can be accelerated by the addition of bases or oxidants to remove H2S [1]. Furthermore, CBr4 was shown by Liang et al. to efficiently promote the reaction [64]. The most recent modifications involve the use of water, a convenient and green solvent, as the reaction medium [65]. A simple condensation between amines and carbon disulfide in refluxing water was described by Maddani and Prabhu as a route to symmetrical and unsymmetrical di- and trisubstituted thiourea derivatives [66]. The method worked well with aliphatic amines: in the first step, a secondary or primary amine was treated with CS2 in aqueous NaOH under ambient conditions, and the thus-prepared dithiocarbamates were heated under reflux with primary amines for 3-12 h; after acidic work-up the desired thioureas were isolated in good to high yields (19 examples, 40-93%, Scheme 8).
Scheme 8. Preparation of thioureas in refluxing water [66].
Scheme 9. Synthesis of a chiral, racemic thiourea from a diamine and CS2 [66].
Scheme 9. Synthesis of a chiral, racemic thiourea from a diamine and CS2 [66].

Two chiral and twenty-three achiral symmetrical disubstituted thioureas were prepared in 70-97% yield by Azizi et al. from various primary amines and CS2; the reaction was carried out in water at 60 °C for 1-12 h (aromatic amines required a longer reaction time), and purification involved only filtration, washing with water, and recrystallization from ethanol or diethyl ether [67]. Six years later, the same group described an ultrasound-assisted synthesis of 1,3-disubstituted, but also trisubstituted, thiourea derivatives (Scheme 10) [68]. The reaction was performed in water or polyethylene glycol, and in the latter the yields were higher in most cases (60-97% after 3-6 min of sonication at 30-35 °C). Among the 27 products, two enantiomers of 1,3-bis(1-phenylethyl)thiourea were prepared, both in 97% yield.

Scheme 10. Ultrasound-assisted synthesis of chiral thioureas [68].

Seventeen N-sulfonylcyclothioureas, including one chiral derivative, were prepared in 60-87% yield by Wan et al. in the reaction of the corresponding amines with carbon disulfide in water, catalyzed by silica [69]. An efficient (90-99% yield), green (water used as solvent), catalyst- and chromatography-free protocol for symmetrical, bis-aliphatic thioureas was also developed by Jangale et al. [70]. Milosavljević et al. described an environmentally friendly methodology involving the use of oxidants (a sodium percarbonate + EDTA system was most efficient, but H2O2 and air were also tested) and solvent recycling [71]. Other modifications of the reaction of amines with CS2 have been proposed, including green variants: the use of solar energy [72], microwaves [73,74], and ionic liquid media [75,76]. Ten chiral thioureas were prepared by Vázquez et al. in the reaction of enantiopure primary amines (including four pairs of enantiomers) with carbon disulfide under either solvent-free conditions or microwave irradiation in ethanol (Scheme 11) [77]. Simple mixing of the liquid reactants resulted in an immediate and high-yielding (91-97%) reaction, though product isolation and purification were required. Slightly lower yields (81-95%) were noted for 5 min of MW irradiation; however, pure products crystallized from the ethanol used as solvent. Conventional heating was found time-consuming and less efficient (75-88% yield).
Scheme 11. Preparation of chiral, symmetrical thioureas from CS2 and amines [77].

In 2019, a route to unsymmetrical thioureas was proposed by Dutta et al. involving the reaction of dithiocarbamate anions, generated in situ from secondary or primary amines and carbon disulfide at low temperature, with aromatic nitro compounds (Scheme 12) [78]. DMF as solvent, potassium carbonate as base, and a temperature of 100 °C were established as the optimal conditions for the second step of the reaction. Among the 22 derivatives obtained in 77-93% yield after 4-6 h, one was chiral, but apparently racemic (Scheme 13). The postulated reaction mechanism involved the formation of a nitrosoaryl intermediate and the release of SO2 (consumed by K2CO3) as a by-product resulting from dithiocarbamate oxidation by the nitroarene.

Scheme 12. Preparation of thioureas from nitro compounds, carbon disulfide and amines [78].

Scheme 13. A racemic thiourea prepared by Dutta et al. [78].
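The overall atom bookkeeping of this nitroarene coupling (the nitro nitrogen is retained, while both oxygens and one dithiocarbamate sulfur are lost) can be captured in a single reaction template; the sketch below is schematic, with an assumed 4-nitrotoluene and pyrrolidine dithiocarbamate as illustrative substrates, and it ignores the SO2/K2CO3 redox chemistry.

```python
# Schematic RDKit encoding of the dithiocarbamate + nitroarene coupling.
# Only overall connectivity is modeled; substrates are illustrative.
from rdkit import Chem
from rdkit.Chem import AllChem

rxn = AllChem.ReactionFromSmarts(
    "[c:1][N+:2](=O)[O-].[NX3:3][C:4](=[S:5])[SX2]"
    ">>[c:1][N+0:2][C:4](=[S:5])[N:3]"
)

nitroarene = Chem.MolFromSmiles("Cc1ccc(cc1)[N+](=O)[O-]")  # 4-nitrotoluene
dtc = Chem.MolFromSmiles("SC(=S)N1CCCC1")  # pyrrolidine dithiocarbamate

product = rxn.RunReactants((nitroarene, dtc))[0][0]
Chem.SanitizeMol(product)
print(Chem.MolToSmiles(product))  # unsymmetrical 1-aryl-3,3-dialkylthiourea
```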
Organic azides were also converted into the corresponding thioureas. Kumar et al. reported the synthesis of thioureido peptidomimetics using N-protected amino alkyl azides and dithiocarbamoic acids formed in situ from carbon disulfide and primary or secondary amines (Scheme 14) [79]. The reaction proceeded with the liberation of N2 and sulfur; it was performed in THF in the presence of pyridine at 0 °C to room temperature under an inert atmosphere for 6 h. Fifteen enantiomerically pure derivatives were isolated in 72-85% yield.

Application of Other Compounds Containing C=S Bond

Less frequently, other thioorganic compounds find application in the synthesis of thioureas, both chiral and achiral. In the past, the reaction of thiophosgene with primary or secondary amines was used in such preparations, though this source of the C=S fragment has not gained popularity due to its corrosive and toxic properties [80]. However, its reaction with diamines was a key step in the synthesis of chiral cyclic thiourea ligands for palladium-catalyzed reactions by Yang and co-workers (Scheme 15) [81,82]. 1-(Methyldithiocarbonyl)imidazole and its quaternary N-methylated salt were found by Mohanta et al. to be effective thiocarbonyl transfer agents which allowed the preparation of mono-, di- and trisubstituted thioureas (though not chiral ones) under mild conditions [83].

Scheme 15. The use of thiophosgene in the preparation of a chiral cyclic thiourea [81].
Furthermore, achiral derivatives were prepared in the reaction of amines with thiuram disulfides (Scheme 16) or monosulfides (in turn obtained from dithiocarbamates) [84].

Scheme 16. Thiuram disulfide as a reactant in the preparation of thioureas [84].

Li et al. developed a protocol utilizing phenyl chlorothionoformate [85,86]. Its reaction with primary amines (either aliphatic or aromatic) in refluxing water, carried out for 2-12 h (though 2-3 h were sufficient in most cases), afforded a series of 17 symmetrical, 1,3-disubstituted thioureas in 60-99% yield (Scheme 17) [85]. Among them, two enantiomerically pure derivatives were prepared. The protocol was also used to obtain thione heterocycles; however, it failed in attempts with a mixture of two amines aimed at unsymmetrically substituted products. Such compounds could be formed using a two-step route: amines were first reacted with the chloroformate in water for 1 h at room temperature [86]. Then, the resulting thiocarbamate was converted into the desired thiourea by heating with another (more reactive) amine for 20-80 min at 100 °C (Scheme 18). The yields were high (87-95% for the first step, 75-97% for the second), and the products (none of them chiral) were isolated by filtration and washed with water, without the need for further purification. The reaction could be performed on a gram scale.

Scheme 17. Synthesis of symmetrical thioureas from phenyl chlorothionoformate and an amine [85].

Scheme 18. Two-step approach to unsymmetrical thioureas from phenyl chlorothionoformate [86].

Preparation of Chiral Thioureas from Other Thioureas

The thiocarbonyl moiety of a chiral thiourea can also originate from an achiral one, subjected to an appropriate modification. In principle, it is possible to preserve the original skeleton and replace the groups attached to the nitrogen atom(s), changing the substitution pattern. For example, the thiazolidine motif was introduced via the reaction of mono-aryl-substituted thioureas with the appropriate carboxylic acids under boronic acid catalysis (Scheme 19) [87]. The various N-acyl derivatives prepared included chiral ones and were found to exhibit antioxidant activity. Appropriate modifications of the substituents of a thiourea derived from 1,2-phenylenediamine were described by Liang et al., resulting in the introduction of chirality and, finally, the isolation of atropoisomeric N,S-donating ligands bearing an oxazoline moiety [88].
Scheme 19. Reaction of a monosubstituted thiourea with a chiral carboxylic acid as a route to a chiral thiourea [87].

Quite frequently, a reaction at the thiocarbonyl carbon atom of a thiourea is performed, resulting in C-N bond cleavage and attachment of an amine nucleophile. Yin et al. reported the application of the readily available and stable 1,3-bis-Boc-substituted thiourea as a thioacylating agent for nucleophiles (amines, but also alcohols, thiols, thiophenolates, and sodium malonates; Scheme 20) [89]. Thiocarbonyl compounds were formed in reasonable yields (78-94% in the case of thioureas) under mild conditions (sodium hydride as base, trifluoroacetic anhydride (TFAA) as activator, THF as solvent, 0 °C to RT). Amines bearing additional functional groups could be converted into the desired products as well, including two enantiomerically pure derivatives for which no epimerization was observed. However, for hindered acyclic secondary amines, the procedure did not work. Bis-Boc thiourea was also utilized by Cohrt and Nielsen in their synthesis of N-terminally modified α-thiourea peptides that were further converted into the respective thiazoles, which in turn were incorporated into 15- to 17-membered macrocycles bearing up to three stereocenters [90].

Scheme 20. Conversion of bis-Boc thiourea into other thioureas [89].

Thiourea derivatives bearing heterocyclic substituents were also found to be useful thioacylating agents. 1-(Alkyl/arylthiocarbamoyl)benzotriazoles were applied by Katritzky et al. as stable isothiocyanate equivalents (Scheme 21) [91]. Among the di- and trisubstituted thioureas, three (R)-1-phenylethyl derivatives were formed in 92-99% yield (Scheme 22). Kang et al. used 1,1′-thiocarbonyldiimidazole (Figure 2) for C=S transfer, converting it into the desired monosubstituted (achiral) thioureas in two subsequent reactions with primary amines (mainly fluorene derivatives) and NH3 [92].
C=S Bond Formation

Less frequently, thioureas have been prepared via multi-component reactions in which elemental sulfur or other sulfur-transfer agents are used to form the C=S bond. Hydrogen sulfide was used by Katritzky et al. as the source of sulfur and reacted with 1-cyanobenzotriazole and amines, or with carboximidamides, to give thioureas in reasonable to high yields (54-99%, with two exceptions; Scheme 23) [93]. Thioureas (and selenoureas) were also prepared from cyanamides, as described by Koketsu et al. [94]. The reaction with HCl in Et2O and then with LiAlHSH afforded 1,1-disubstituted products in 52-89% yield (Scheme 24). Unfortunately, chiral derivatives were not prepared by these two groups.

Scheme 23. Utilization of hydrogen sulfide in the synthesis of thioureas [93].

In another approach, thioureas were obtained from amines, elemental sulfur, and chloroform (Scheme 25) [95]. Various primary amines were treated with chloroform in the presence of a base, followed by addition of sulfur and a second primary amine. The optimal reaction conditions comprised t-BuOK as base, a tert-butanol/1,4-dioxane (1:1) solvent mixture, and 55 °C; reaction times varied from 6 to 15 h. Yields ranged from 51% to 96%, and among the 35 thioureas, ten chiral derivatives were obtained with complete preservation of the optical purity of the starting amines.

Scheme 25. Reaction of amines with sulfur and chloroform as a route to thioureas [95].

Sulfuration of azoles combined with N-difluoromethylation, yielding a family of appropriately substituted cyclic thioureas, was reported by Tang and co-workers [96]. The optimal reaction conditions were established as 24 h at 100 °C in DMA with a sodium hydroxymethylsulfite additive; elemental sulfur and ethyl 2-bromo-2,2-difluoroacetate were used as inexpensive reagents. Two of the products, derivatives of econazole (48%) and ketoconazole (43%), were chiral, and the latter was obtained as a single enantiomer (Figure 3).
A relatively little-explored synthetic strategy is based on the use of isocyanides. Zhu et al. described a Co(II)-catalyzed insertion of isocyanides into the active N-H bonds of amines under ultrasound irradiation; the resulting amino methylidyneaminium intermediates reacted readily with various nucleophiles, including sulfur (to give thioureas) or water (yielding ureas) [97]. A series of aniline derivatives was tested with tert-butyl isocyanide to give products in 37-53% yield, and tryptamine with four different isocyanides gave products in 53-67% yield. The optimal reaction conditions included 20 mol% of Co(acac)2 catalyst, two equivalents of Na2CO3, and one equivalent of TBHP in 1,4-dioxane under ultrasound irradiation at 75 °C. Chiral thioureas were not prepared; however, two enantiomers of a urea were obtained from enantiopure 1-phenylethylamines and tert-butyl isocyanide with complete retention of configuration, suggesting a similar possibility for the synthesis of chiral thioureas. An efficient, three-component reaction of isocyanides, aliphatic amines, and elemental sulfur under mild conditions (RT to 40 °C, solvent-free or in toluene) was also reported by Nguyen et al. (Scheme 26) [98].

Scheme 26. Preparation of thioureas from isocyanides, amines and elemental sulfur [98].

A three-component reaction of isocyanides, in situ formed N-chlorinated secondary amines, and water or sodium sulfide was reported by Angyal et al. [99]. Sodium dichloroisocyanurate (NaDCC) was used as the chlorinating agent. The reactions were performed in isopropanol under microwave-assisted conditions (100 °C, 10 min, 250 W; Scheme 27). Eight thioureas were prepared in 27-68% yield, including an enantiopure proline derivative (30%).

Scheme 27. Synthesis of thioureas from amines, isocyanides, and sodium sulfide [99].

Singh and Sharma developed a multicomponent protocol for thiourea preparation utilizing aromatic isocyanides, amines, and 1,2-di(tert-butyl)disulfide under solvent- and catalyst-free conditions (Scheme 28) [100]. The isocyanides were obtained by formylation of aromatic amines followed by dehydration using POCl3. All the reactions were heated for 5 h at 120 °C and then for 1 h at 60 °C after the addition of the amines. Twenty-seven thioureas were prepared in 56-99% yield, including an optically active derivative obtained from phenyl isocyanide and cinchonamine (57%).
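The isocyanide routes above all share the same overall bond reorganization: the isocyanide carbon becomes the thiocarbonyl carbon. The sketch below encodes this for the sulfur-mediated variant as an assumption-laden illustration, not any published scope; a single sulfur atom stands in for S8, and tert-butyl isocyanide plus piperidine are arbitrary partners.

```python
# Three-component sketch: isocyanide + amine + S -> thiourea (assumes RDKit).
from rdkit import Chem
from rdkit.Chem import AllChem

rxn = AllChem.ReactionFromSmarts(
    "[N+:1]#[C-:2].[NX3;H1,H2:3].[S:4]>>[N+0:1][C+0:2](=[S:4])[N:3]"
)

isocyanide = Chem.MolFromSmiles("[C-]#[N+]C(C)(C)C")  # tert-butyl isocyanide
amine = Chem.MolFromSmiles("C1CCNCC1")                # piperidine
sulfur = Chem.MolFromSmiles("[S]")                    # one S atom for S8

product = rxn.RunReactants((isocyanide, amine, sulfur))[0][0]
Chem.SanitizeMol(product)
print(Chem.MolToSmiles(product))  # CC(C)(C)NC(=S)N1CCCCC1
```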
Types of Chiral Thioureas and Selected Applications in Asymmetric Synthesis

Chiral thioureas can be classified according to the moiety introduced into their structure in order to induce chirality. Most structures contain a stereogenic center attached to a nitrogen atom of the thiourea, which typically derives from the amine used as one of the reactants. In many cases, additional elements of chirality are also present, as well as functional groups designed to participate in desired interactions with target molecules (e.g., substrates of a catalyzed reaction) or ions (metal ions in coordination compounds, anions bound by thiourea-based receptors). In particular, derivatives based on chiral diamines have attracted attention due to their possible application as bifunctional catalysts, with the thiourea moiety acting as a hydrogen bond donor and an amino group as a Brønsted base/Lewis base/nucleophile. Alternatively, systems consisting of an achiral thiourea together with components responsible for asymmetric induction have been tested as well [101].
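Because the mono-/di-/trisubstitution vocabulary used throughout this review maps directly onto the connectivity of the two thiourea nitrogens, the pattern can be read off a SMILES string programmatically. The helper below is a sketch (assuming RDKit) that simply counts heavy-atom substituents on each nitrogen; the two test SMILES are illustrative.

```python
# Classify a thiourea by its N-substitution pattern (assumes RDKit).
from rdkit import Chem

CORE = Chem.MolFromSmarts("[NX3][CX3](=[SX1])[NX3]")

def classify_thiourea(smiles: str) -> str:
    """Label a thiourea as mono-, 1,1-/1,3-di-, tri- or tetrasubstituted."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return "invalid SMILES"
    match = mol.GetSubstructMatch(CORE)   # (N, C, S, N) atom indices
    if not match:
        return "no thiourea core"
    n1, _, _, n2 = match
    # heavy-atom substituents on each N = degree minus the core C-N bond
    subs = sorted(mol.GetAtomWithIdx(i).GetDegree() - 1 for i in (n1, n2))
    names = {0: "unsubstituted", 1: "monosubstituted",
             3: "trisubstituted", 4: "tetrasubstituted"}
    if sum(subs) == 2:
        return "1,3-disubstituted" if subs == [1, 1] else "1,1-disubstituted"
    return names[sum(subs)]

# A Takemoto-like 1,3-motif versus a simple 1,1-dimethyl derivative
print(classify_thiourea("FC(F)(F)c1cc(NC(=S)NC2CCCCN2)cc(C(F)(F)F)c1"))
print(classify_thiourea("CN(C)C(=S)N"))
```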
Consequently, the following part focuses on selected examples of such compounds, with only a short mention of their applications in the synthesis and functionalization of one particular system, i.e., the chroman scaffold and its derivatives (Figure 4). A versatile structure with various pharmaceutically active derivatives [115], the chroman scaffold has numerous possible medicinal uses [116,117]. Its prochiral carbon atoms can be substituted, yielding structures with several stereogenic centers, e.g., flavonoids (Figure 4) [118,119]. The preparation of various chroman derivatives with defined stereochemistry has challenged organic chemists, and versatile, general methods for their asymmetric synthesis have been found in recent years [117,120-122].

Now commercially available (in both (R,R)- and (S,S)-forms), Takemoto's organocatalyst was also used by other groups, mainly to catalyze various stereoselective additions [135,143-150] and to control the ring-opening polymerization of racemic lactide leading to highly isotactic polylactide [151]. Diverse modifications of Takemoto's compound have been introduced, both by the parent group and by others (Figure 8). A PEG-immobilized catalyst was efficient in enantioselective Michael and tandem Michael reactions [152], while a hydroxylated derivative catalyzed a Petasis-type reaction of quinolones and boronic acids [153]. A hybrid catalyst containing an arylboronic acid was evaluated for asymmetric hetero-Michael additions to unsaturated carboxylic acids [154,155]. Various modified derivatives of Takemoto's catalyst were tested in the enantioselective reduction of ketones with borane; a benzyl-substituted catalyst led to the best outcomes [156]. A piperidine derivative showed the optimal performance among the diamine-derived catalysts tested in inverse-electron-demand Diels-Alder cycloaddition [157] and the Mannich reaction of 2-substituted indolin-3-ones [158], while a pyrrolidine-substituted thiourea performed best in the conjugate addition of nitroacetates to unsaturated ketoesters [159] and in a double Michael reaction used for the construction of a spiro-fused cyclohexanone skeleton.

Other chiral diamines have been converted into thioureas as well. Organocatalysts bearing multiple hydrogen bond donors, namely chiral diaminocyclohexane and 1,2-diphenylethylenediamine fragments (Figure 12), the latter converted into a sulfonamide, were prepared by Wang and co-workers and proved highly efficient in asymmetric Michael additions as well as the nitro-Mannich reaction [176,177]. Among the chiral thioureas containing tertiary amines tested in Michael additions of 3-substituted oxindoles to maleimides, and of cyanoacetates to vinyl sulfones, 1,2-diphenylethylenediamine derivatives led to the highest yields and stereoselectivities (Figure 12) [178,179]. In turn, primary amines of this kind (Figure 12) were found most efficient in asymmetric additions to unsaturated ketones [180,181] and in the stereoselective construction of α,α-disubstituted cycloalkanones [182]. A catalyst bearing two 1,2-diphenylethylenediamine fragments (Figure 12) was used in a tandem asymmetric Michael/cyclization reaction of 4-hydroxycoumarin and nitroalkanes, albeit with moderate yields [183].

Other chiral thioureas bearing additional amino functionalities have been described as well (Figure 14). Ephedrine-based thioureas were obtained in Bolm's group and tested in Michael additions [187]. An indane scaffold introduced into the structure of a catalyst resulted in good yields and a good stereochemical outcome in a cascade Michael-oxa-Michael-tautomerization process [188]. A bifunctional pyrrolidine-thiourea was applied by Tang and co-workers to the diastereo- and enantioselective Michael additions of cyclohexanone to alkyl and aryl nitroolefins [189]. In a study by Enders and co-workers, tertiary amines (piperidine and pyrrolidine derivatives) were found to be the best option for domino Michael-Mannich cycloadditions [190].
Chen's group prepared a series of chiral catalysts, including thioureas, using a chloramphenicol base skeleton, and used them in asymmetric transformations [191-193].

Chromans can be easily substituted using the Michael addition approach. Various chiral organocatalysts have been used in asymmetric reactions of nitroalkenes with malonates [194-196]. Thiourea-based catalysts bearing diamine functions were tested in the Michael addition by Yan and co-workers (Scheme 29) [197]. Tang's group prepared chroman derivatives by addition of malonate to the appropriate nitroolefins in high yield and enantiomeric excess [189].

Scheme 29. Asymmetric Michael addition and a proposed mechanism of formation of the nitrochroman [197].

Zhu and co-workers proposed a versatile method using vinylindoles as dienophiles in an inverse-electron-demand Diels-Alder reaction (Scheme 30) [157,198]. This proved an efficient method for the preparation of new biochemically active flavonoids containing additional privileged structures such as pyran and indole [199]. Over recent years, asymmetric cascade reactions [200-205] have developed into a powerful tool in the asymmetric synthesis of such compounds; stereogenic quaternary all-carbon centers are especially challenging [206]. Spiro-substituted structures can be found in biologically active natural and synthetic compounds [207,208]. The study of the asymmetric synthesis of spiro centers has recently intensified [209,210], and various asymmetric routes to chromans bearing this structural feature have been found [208,211]. Thiourea-based multivalent organocatalysts offer a great opportunity for the preparation of substituted spiro-chroman structures, as exemplified by the Michael-acetylation cascade reaction of 2-oxocyclohexanecarbaldehyde derivatives (Scheme 31) [212].
Over recent years, asymmetric cascade reactions [200-205] have developed into a powerful tool in the asymmetric synthesis of such compounds; stereogenic quaternary all-carbon centers are especially challenging [206]. Spiro-substituted structures can be found in biologically active natural and synthetic compounds [207,208]. The study of the asymmetric synthesis of spiro centers has recently intensified [209,210], and various asymmetric routes to chromans bearing this structural feature have been found [208,211]. Thiourea-based multivalent organocatalysts offer a great opportunity for the preparation of substituted spiro-chroman structures, as exemplified by the Michael-acetylation cascade reaction of 2-oxocyclohexanecarbaldehyde derivatives (Scheme 31) [212].

A non-spiro stereogenic quaternary all-carbon center was formed in an oxa-Michael addition followed by an intramolecular Michael addition, as reported by Lu and co-workers (Scheme 32) [213].

Thioureas Containing Cinchona Alkaloids

Cinchona alkaloids, a privileged motif of various structures useful in asymmetric synthesis [214,215], were also combined with thioureas, typically by conversion of the hydroxyl group at the C9 position into an amine and its reaction with an appropriate isothiocyanate [17]. Such derivatives, introduced by the groups of Connon and Soós (Figure 15), were first applied in the enantioselective addition of malonates to nitroalkenes [216] and of nitroalkanes to chalcones [217].
Later on, these bifunctional organocatalysts were found to be efficient in a variety of asymmetric transformations, including Michael and combined Michael-Henry reactions, sulfa-Michael and retro-sulfa-Michael reactions [239,240], the aldol reaction [241], the Mannich reaction [242], the Strecker reaction [243,244], hydrophosphonylation [245,246], decarboxylative protonation [247], fluorination of ketoesters [248], arylation of cyclic ketoamides with a quinone monoamine [249], and others. Thiourea was also introduced in place of the methoxy group of quinine by Hiemstra and co-workers (Figure 16), and the resulting catalysts were used in the enantioselective Henry reaction [250], though they were less efficient in other asymmetric transformations [221,243-245]. Bis-alkaloid thioureas were prepared by Song and co-workers and shown to exhibit high enantioselectivity in the dynamic kinetic resolution of a racemic azlactone derived from valine (Figure 16) [251].

One can choose among stereoisomers (quinine/quinidine, cinchonidine/cinchonine and their epimers) and their modified derivatives, which often allows obtaining both optical antipodes of the product of the catalytic reaction [228,232,238,252]. However, other bifunctional catalysts seem to offer more possibilities for fine-tuning the catalytic properties. Certain problems are connected with limited thermal stability [217] and with the dimerization of alkaloid-thiourea conjugates in solution through intermolecular hydrogen bonds, which can limit their activity [253,254].
Gu and co-workers prepared a series of chroman derivatives in a double Michael reaction of commercially available starting materials, chalcone enolates and nitromethane, using Cinchona-alkaloid thioureas as catalysts (Scheme 33) [255]. Three stereogenic centers were formed with high stereoselectivity. In the proposed mechanism, both the Si (not observed) and Re face approaches were considered, which would result in completely different hydrogen bonding interactions in the transition state.

Scheme 33. Asymmetric double Michael reaction catalyzed by a Cinchona alkaloid-derived thiourea and a proposed reaction mechanism [255].

Cinchona alkaloid-based organocatalysts were also applied by Singh and co-workers in their synthesis of chiral chroman derivatives from chalcones and α,β-unsaturated nitroalkenes (Scheme 34) [256]. Reasonable yields were accompanied by modest diastereoselectivity and high enantioselectivity, and the reactions were complete in 8-48 h.
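Since the outcomes throughout this section are compared in terms of enantiomeric excess (ee) and diastereoselectivity, it may help to recall the standard definitions; the formulas below reflect common usage and are not taken from any specific cited paper:

\[ \mathrm{ee} = \frac{[\text{major enantiomer}] - [\text{minor enantiomer}]}{[\text{major enantiomer}] + [\text{minor enantiomer}]} \times 100\%, \qquad \mathrm{dr} = [\text{major diastereomer}] : [\text{minor diastereomer}] \]

For example, a 97:3 mixture of enantiomers corresponds to ee = (97 - 3)/(97 + 3) x 100% = 94%.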
Thioureas Derived from Amino Acids and Peptides

As already demonstrated by Jacobsen's catalyst (described in part 3.2.) [123], naturally occurring amino acids and their derivatives can serve as a chirality source in the construction of chiral thioureas [62,257-260]. Jacobsen's group described the application of an amide-thiourea catalyst, obtained in three steps from valine and N-methylbenzylamine, in the Pictet-Spengler reaction [261,262], and of tert-leucine-derived amides in the iso-Pictet-Spengler reaction (both reactions were co-catalyzed by benzoic acid; Figure 17) [263]. Thioureas bearing a t-leucine arylpyrrolidino amide component catalyzed the stereoselective addition of indoles to γ-pyrone derivatives [264]. t-Leucine-derived thioureas were also efficient in the enantioselective dearomatization of isoquinolines [265] and in the synthesis of furan derivatives bearing a trifluoromethyl group at a stereogenic center [266]. A valine derivative prepared by Pedrosa and co-workers was applied in nitro-Michael additions; a supported catalyst was also prepared [267-269]. Fullerene was also introduced into chiral thioureas derived from valine, phenylalanine and tert-leucine, and the resulting hybrids proved to be efficient, recyclable catalysts for the stereoselective nitro-Michael reaction [270].

Carbohydrate-Based Chiral Thioureas

Chiral thioureas can bear enantiopure carbohydrates, quite frequently combined with other functionalities. For example, Liu et al. prepared five novel D-mannitol-derived thiourea organocatalysts, also containing Cinchona alkaloids, and used them for the asymmetric Henry reaction [40].
Ma and co-workers introduced bifunctional organocatalysts containing multiple stereogenic centers in both a primary amine derived from DACH and various carbohydrate moieties (Figure 21) [288]. With these promoters, the Michael addition of ketones to nitroalkenes proceeded with high yields and stereoselectivities. Similar catalysts (also containing 1,2-phenylenediamine) were applied in other Michael additions [194,289], the Mannich reaction [290,291], and conjugate addition/dearomative fluorination [292,293]. Thioureas containing a glycosyl scaffold were also applied in the aza-Henry reaction between N-Boc imines and nitromethane [294].

Asymmetric tandem Michael-Michael additions of ketones and nitroalkenes were optimized by Miao and co-workers (Scheme 37) [295]. A carbohydrate-based chiral thiourea organocatalyst used together with benzenesulfonic acid (BSA) afforded high yields, excellent enantiomeric excess and high diastereoselectivity. The resulting spiro-compounds are interesting not only for their possible biological activities [296], but also as enantiopure multi-functionalized pyrazole derivatives for further synthesis [211,212,297-299].

Scheme 37. Asymmetric synthesis of various spiro-compounds with a proposed mechanism [295].

Chiral Phosphine-Bearing Thioureas

Bifunctional catalysts containing a thiourea moiety (hydrogen bond donor, Brønsted acid) and a nucleophilic/basic phosphine functionality, both attached to a chiral skeleton, have already found a variety of applications in stereoselective reactions [113].
The first report on their synthesis was published by Shi and Shi in 2007, who prepared three binaphthyl-based derivatives (Figure 22) [300]. An N-phenyl-substituted catalyst was the most efficient in the aza-Morita-Baylis-Hillman reaction of N-sulfonated imines with vinyl ketones or acrolein: the (S)-products were formed in moderate to high yields and stereoselectivities. Shi and co-workers used modified thioureas of the same class in the reaction of MBH adducts with oxazolones [301].

Figure 22. Axially chiral phosphine-containing thiourea [300].

An active catalyst bearing a diphenylphosphine moiety was introduced by Wu and co-workers and tested in the Morita-Baylis-Hillman reaction of aromatic aldehydes with methyl vinyl ketone and acrylates (Figure 23) [302,303]. Slightly higher (and opposite) enantioselectivities were noted for a valine-derived phosphinothiourea; however, the reaction required more time [304].

A phosphinothiourea described by Mita and Jacobsen catalyzed the enantioselective opening of aziridines with hydrogen chloride to yield β-chloroamine derivatives (Figure 24) [305]. A chiral thiourea bearing a phosphine moiety was also efficient in the 1,6-conjugate addition of para-quinone methides to dicyanoolefins [306]. An interesting example of thioureas bearing a stereogenic phosphorus atom, prepared by stereoselective reduction of the corresponding aminophosphine oxides and their reaction with isothiocyanates, was described by Su and Taylor (Figure 24) [307]. The epimeric catalysts exhibited different activities and stereoselectivities in Morita-Baylis-Hillman reactions of methyl acrylate and aromatic aldehydes.
A novel chiral ferrocenyl bis-phosphine thiourea was introduced by Zhang and co-workers, who described its use in the Rh-catalyzed hydrogenation of nitroalkenes with high yields and enantioselectivities, even at low catalyst loading (Figure 25) [309,310]. This ligand, named ZhaoPhos and easily prepared from Ugi's amine, showed high efficiency in various hydrogen bond-assisted catalytic hydrogenations with rhodium and iridium complexes [311-318]. The system exemplifies the idea of synergistic activation via a cooperating transition-metal catalyst and organocatalyst joined in one bifunctional structure [319].
Other chiral phosphorus derivatives of thioureas are also worth mentioning. Two phosphorylamide derivatives were prepared by Juaristi and co-workers and utilized in the stereoselective Michael addition of cyclohexanone to nitrostyrenes and chalcones [42]. Chiral thioureas containing an aminophosphonate moiety were also investigated in the context of their antiviral activity [320]. In a chiral phosphine thiourea-mediated reaction of para-quinone methides with dicyanoolefins described by Yao et al., excellent diastereomeric excess in combination with good yields and enantioselectivities was observed (Scheme 38) [306].

Scheme 38. Asymmetric reaction of para-quinone methides catalyzed by chiral phosphine thiourea catalysts and a proposed transition state [306].

Thioureas and Bis-Thioureas with Axial, Planar or Helical Chirality

The chirality of the majority of enantiopure thioureas used in asymmetric organocatalysis originates from the presence of stereogenic center(s). However, one cannot neglect a group of derivatives characterized by the presence of other stereogenic elements; notable representatives include efficient organocatalysts and exciting examples of molecular motors from Feringa's laboratory. Most compounds belonging to this class exhibit axial chirality/helicity connected with the presence of a biaryl fragment characterized by restricted rotation around the Caryl-Caryl bond. The first examples come from 2005, when, looking for an optimal organocatalyst for the asymmetric Morita-Baylis-Hillman reaction of cyclohexanone with aldehydes, Wang and co-workers decided to combine a thiourea moiety with binaphthylamine (Figure 26) [28]. An axially chiral binaphthyl bis-thiourea was prepared by Connon and co-workers and proved useful in promoting the asymmetric Friedel-Crafts-type addition of indole derivatives to nitroalkenes (Figure 26) [29]. Modified catalysts belonging to this class, obtained in Shi's group, were also efficient in the asymmetric Henry [30] and Morita-Baylis-Hillman reactions [300,301,321].
A multifunctional organocatalyst containing quinine, thiourea and binaphthylamine moieties, capable of the stereoselective formation of three stereocenters in a domino Michael-aldol reaction, was constructed by Barbas III and co-workers [322]. Novel bis-thioureas prepared by Rampalakos and Wulff from commercially available, enantiomerically pure 1,1′-binaphthyl-2,2′-diamine were tested in aza-Henry reactions (Figure 26) [323]. A series of binaphthyl-derived thiourea catalysts was applied by Kim and co-workers in asymmetric Mannich-type reactions of fluorinated ketoesters [324]. For the asymmetric Henry reaction, bis-thioureas connected by 4,4′-bisindanyl [325] and substituted biphenyl- and bianthryl-based linkers (Figure 26) were prepared [326-328]; high yields and stereoselectivities were observed for certain derivatives.

Combined point chirality and axial chirality resulting from restricted rotation were observed for thiourea-oxazoline ligands used in palladium-catalyzed asymmetric reactions (Figure 27) [82,88]. A series of enantiopure atropisomeric thioxothiazol-substituted derivatives prepared by Roussel et al. was tested as enantioselective anion receptors (Figure 27) [329].
Bifunctional, photoswitchable, dual-stereoselective catalysts based on the idea of unidirectional molecular motors were designed by Feringa and co-workers (Figure 28) [330,331]. The cooperation of the thiourea and the tertiary amine in the cis states was found to be a key factor for the stereoselectivity of the Henry reaction of nitromethane with fluorinated ketones and of the Michael reaction of bromonitrostyrene with pentanedione.

Enantiomerically pure planar-chiral thiourea derivatives based on [2.2]cyclophane were first prepared by Paradies and co-workers and showed rather limited efficiency in the stereoselective Friedel-Crafts alkylation of indole and the transfer hydrogenation of a nitroolefin (Figure 29) [31]. However, the performance of cyclophane bis-thioureas in the asymmetric Henry reaction was much better and also led to high induction of chirality (Figure 29) [32].
Other modifications of the system (the introduction of an amino group) were found useful for the catalysis of the aldol reaction of isatins and ketones (Figure 29) [33].

Thioureas were also incorporated into optically active helical polymers. Polymerized enantiopure N-propargylthioureas derived from 1-phenylethylamine formed helices in low-polarity solvents and showed affinity for iron(III) ions [332]. Polyacetylenes bearing pendant thiourea groups were used as chiral catalysts for the asymmetric Michael addition of diethyl malonate to trans-β-nitrostyrene [333].
Thioureas Containing Other Functional Groups

A variety of thiourea organocatalysts have been prepared that contain additional functionalities introduced to provide chirality or/and to actively participate in the catalytic reaction. For example, photochemically active thioxanthone was attached to the thiourea moiety through various chiral linkers, and the resulting catalysts were employed in the photocyclization of 2-aryloxycyclohex-2-enones (Figure 30) [334].

Wang and co-workers reported the preparation of bifunctional rosin-derived catalysts (prepared from chiral dehydroabietic amine) and used them in doubly stereocontrolled addition reactions, including the stereoselective synthesis of chiral cyclic thioureas (Figure 31) [51,171,335-341]. Wang's group also modified the original structure of their catalyst with Cinchona alkaloids to provide double stereocontrol [342-344]. Rosin-derived thioureas were also used by Reddy and co-workers in the asymmetric Michael-hemiketalization of allomaltol [345].

Thioureas can be equipped with additional sulfur functionalities. In our laboratory, chiral amino- and diaminosulfides obtained from cyclohexane-1,2-diol were converted into mono- and bis-thioureas, which were tested in the Morita-Baylis-Hillman reaction (Figure 33) [348]. Bolm and co-workers designed the synthesis of a set of chiral sulfoximine-based thioureas (Figure 33) [41]. Interestingly, the optimal results in the asymmetric Biginelli reaction were obtained for the derivative bearing only sulfur as a stereogenic center, separated from the thiourea moiety.

Krasonvskaya et al. described the preparation of a thiourea-modified doxorubicin as a pH-sensitive prodrug capable of releasing the cytotoxic component, as well as the anticonvulsant albutoin, in a weakly acidic medium [349]. Thioureas containing a chiral isosteviol moiety [350-352], terpenes [77], camphor [353] or a steroid scaffold [354] have also been reported.

Biological Activity of Chiral Thioureas

Thioureas represent an important class of compounds that has attracted a lot of attention due to its bioactivity, e.g., in medicinal chemistry, the pharmaceutical industry, and agriculture. Derivatives have been found to be efficient antiviral [72], antifungal [355], antimicrobial [356] or anticancer agents [357]. The possibility of preparing thioureas containing various scaffolds allows their use as potential inhibitors of numerous molecular targets. Consequently, these compounds have become interesting material for further investigations. In this part of the review, selected examples of chiral thiocarbamides used in biomedical studies are presented.

Antiviral Thioureas

Aminophosphonates are structural analogues of amino acids. Like aminophosphonic acids, they are able to inhibit enzymes involved in amino acid metabolism and thus may elicit various physiological responses, e.g., neuromodulatory activity [358]. Cucumber mosaic virus (CMV) and tobacco mosaic virus (TMV) cause plant diseases that have become a serious problem in agriculture in recent years [359]. The commercially available product Ningnanmycin is commonly used against CMV and TMV, though with serious limitations resulting from its light and moisture sensitivity.
Thus, the design and preparation of novel antiviral agents constitute a significant challenge. In 2009, Chen et al. reported novel anti-TMV chiral thioureas and bis-thioureas bearing an α-aminophosphonate moiety [320]. The evaluation of the tested series revealed that two derivatives show activity against CMV and TMV comparable to that of Ningnanmycin (Figure 34). It was also found that the absolute configuration of the isothiocyanates used to form the corresponding mono- and bis-thioureas considerably affected the antiviral activity of the tested compounds. Generally, the mono-thiourea (S)-enantiomers were more effective against TMV than the corresponding (R)-enantiomers [320], while the results for bis-thioureas against CMV were reversed: derivatives obtained from the (R)-enantiomer of the isothiocyanate were more active against the virus [359].

Figure 34. Ningnanmycin (left) and the most active bis-thiourea derivatives bearing the α-aminophosphonate functional group [320,359].

Liu and co-workers prepared a series of chiral thioureas bearing t-leucine and α-aminophosphonate moieties and evaluated their anti-TMV activity [360].
Two compounds were identified as the most potent antiviral agents (Figure 35). It was shown that derivatives containing electron-withdrawing groups in the para position of the aromatic ring displayed better activity. Stereoisomers with the L configuration were more active than their D counterparts or the racemic form.

Yan et al. also synthesized a new collection of chiral thioureas and evaluated their activity against TMV [361]. The novel compounds were prepared in an ionic liquid, an eco-friendly environment. Among them, an indane derivative (Figure 36) exhibited the best antiviral properties: in vivo protection, inactivation and curative effects against TMV, with inhibitory effects of 57.0%, 96.4% and 55.0%, respectively, at 500 µg/mL. Moreover, it was more active against TMV than the reference compound Ningnanmycin.
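For orientation, inhibitory effects in such TMV assays are typically calculated from lesion counts on control versus treated leaves. A commonly used form, given here for illustration rather than as the exact protocol of [361], is

\[ \text{Inhibition rate } (\%) = \frac{C - T}{C} \times 100 \]

where C and T denote the average numbers of local lesions on the control and treated half-leaves, respectively.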
Besides agriculture, various thiourea derivatives are important for the pharmaceutical industry. Several examples of structurally diverse derivatives acting on human viruses, e.g., human immunodeficiency virus (HIV) or human cytomegalovirus (HCMV), can be found in the literature [362,363]. After extensive studies of the pharmacological influence of novel 1,3-thiazepine-based urea and thiourea derivatives on the animal central nervous system (CNS), Struga and co-workers published the results of their antiviral activity assays [364]. The compounds were regarded as promising for antiviral therapy due to their specific structure: a butterfly-like conformation formed by a hydrophilic center and two hydrophobic moieties. Such a structure is characteristic of non-nucleoside reverse transcriptase inhibitors (NNRTIs), used as anti-HIV agents [364,365]. Furthermore, the 1,3-thiazepine ring is an important system in terms of biological activity. It is part of the structure of Omapatrilat, an antihypertensive drug currently in phase IV of clinical trials [366,367]. Additionally, seven-membered cyclic thiourea derivatives are used as nitric oxide synthase inhibitors [366]. The 1,3-thiazepine-based isothiourea derivatives were tested against diverse virus classes: Retrovirus (HIV-1), Hepadnavirus (HBV) and Flaviviridae (YFV and BVDV, both single-stranded RNA(+) viruses; Figure 37). In spite of the promising pharmacological action on the animal CNS, only three compounds exhibited modest antiviral activity [364].
Chiral naphthyl thioureas were also studied as HIV reverse transcriptase inhibitors (Figure 38) [368]. Eleven chiral naphthyl thioureas were tested in vitro against recombinant reverse transcriptase (RT) [369]. Generally, the (R)-stereoisomers of all eleven compounds were more active than their enantiomers. The five most active compounds were further evaluated for their ability to inhibit HIV-1 replication in human peripheral blood mononuclear cells (PBMC). While the (R)-stereoisomers were active at nanomolar concentrations, their enantiomers were again inactive. Furthermore, the most active compound was much more potent against various NNI-resistant HIV-1 strains than the standard NNI drugs (nevirapine, delavirdine and trovirdine). Molecular modelling studies confirmed that the (R)-isomer fits the target NNI binding pocket on HIV-RT much better than the (S)-enantiomer [368].

In the 1990s, Bell and co-workers reported phenethylthiazolylthiourea (PETT) compounds as potent anti-HIV agents; taking the structure-activity relationship into account, various substituents in their structures were analyzed [370,371]. Later, Venkatachalam's group undertook studies of the influence of stereochemistry on the activity of this class of compounds [372]. A new series of chiral halopyridyl- and thiazolyl-substituted thioureas was synthesized (Figure 39). Molecular modelling suggested that, for both groups, the (R)-stereoisomers fit the target binding pocket of HIV reverse transcriptase better than their (S) counterparts. In vitro tests confirmed this result and also showed that the lead compounds were several orders of magnitude more potent than the standard NNRTI nevirapine.

Anticancer Thioureas

Since cancer constitutes the second most common cause of death globally, the quest for new anti-tumor agents remains a continuous interest of numerous scientific teams. It has been proven that various groups of thioureas exhibit antiproliferative activity [373-375]. Some compounds reveal dual biological effects, e.g., anti-tumor together with anti-oxidant or anti-inflammatory activity [376]. Depending on their structure and the presence of other biologically active fragments, they act on various types of cancer cells.
Interestingly, a significant influence of the absolute configuration of the tested compounds on the antiproliferative activity was observed. Generally, D-isomers revealed higher growth inhibition in comparison with L-isomer derivatives [376]. To enhance antiproliferative activity, another amino acid residue (glycine or a more rigid fragment, e.g., L-proline) was introduced into the pseudo-peptide scaffold [378]. This new series of chiral thiourea derivatives was examined toward BGC-823 (human gastric cancer) and A-549 (human non-small cell lung cancer) cell lines. The basic SAR studies led to the conclusion that the presence of the rigid L-proline fragment in the corresponding dipeptide structure increases antiproliferative activity. Moreover, antitumor properties may be improved by introducing an electron-withdrawing group in the para position of the terminal phenyl group of the dipeptide thioureas [377].

Huang et al. presented the results of anticancer activity assays of thioureas containing the α-aminophosphonate moiety based on the dehydroabietic acid (DHA, Figure 41) skeleton [379]. It was found that DHA increases the antiproliferative action of drugs on various cancer cells [380].
Hence, a new series of thioureas incorporating the α-aminophosphonate moiety and the DHA core was synthesized and tested in vitro against NCI-H460 (lung), A549 (lung adenocarcinoma), HepG2 (liver) and SKOV3 (ovarian) human cancer cell lines. The compounds exhibited moderate to high levels of antiproliferative activity. The most active derivative gave better results against the A549 cell line than 5-fluorouracil, a medicine commonly used in cancer therapy (Figure 41). A preliminary analysis of its mechanism of action proved that the compound is capable of inducing cell apoptosis [379].
Figure 41. Dehydroabietic acid and its most active thiourea derivative [379].

The results of previous research on the impact of the stereochemistry of halopyridyl and thiazolyl thioureas on anti-HIV activity [372] prompted Venkatachalam's group to extend the studies toward anti-leukemic activity. They designed and synthesized five series of new chiral derivatives (Figure 42) [381]. Their anticancer activity was evaluated against human B-lineage Nalm-6 and T-lineage Molt-3 acute lymphoblastic leukemia cell lines. Preliminary studies proved that the stereochemistry was indeed the factor that determined the activity of the tested compounds: the (S)-enantiomers performed better in the tests.

Anti-Allergic Thioureas

Venkatachalam et al. reported the synthesis and evaluation of the anti-allergic activity of novel chiral heterocycle-based thioureas [382]. Referring to the fact that leukotrienes, chemical mediators released by mast cells, play an important role in the pathophysiology of allergy and asthma, they became a new target for potential thiourea-based anti-allergic agents [383]. A set of indolyl-, naphthyl- and phenylethyl-substituted halopyridyl, thiazolyl and benzothiazolyl thiourea derivatives was tested in vitro for mast cell inhibitory activity (Figure 43). Among them, the naphthyl-substituted thiazolyl thioureas were found most promising. Based on the results obtained for the (S)- and (R)-isomers of the naphthyl thioureas, it was concluded that the stereochemistry of the studied thioureas did not greatly influence their activity.

Antimicrobial Thioureas

Pyrazole derivatives have been recognized as versatile compounds with multiple biological properties, e.g., antibacterial, anti-inflammatory or antiproliferative activity [384,385].
Many scientific groups have been involved in the investigation and development of novel pyrazole derivatives, facing the problem of their high toxicity. The combination of the thiourea functional group with these biologically active molecules led to the discovery of new compounds of potential pharmaceutical importance. Bildirici and co-workers synthesized novel chiral pyrazole-based thioureas (Figure 44) [383]. The compounds were examined against three Gram-positive bacteria (Bacillus subtilis, Staphylococcus aureus, Bacillus megaterium) and four Gram-negative bacteria (Enterobacter aerogenes, Pseudomonas aeruginosa, Klebsiella pneumoniae and Escherichia coli) and exhibited the desired activity, one of them even higher than amikacin and rifampicin and similar to penicillin. Additionally, they were evaluated as antifungal agents against three fungal strains (Candida albicans, Saccharomyces cerevisiae and Yarrowia lipolytica).
In the literature, there are several reports concerning the thiourea functional group combined with heterobicyclic fused aromatic scaffolds, e.g., benzothiazole or benzimidazole rings [386,387]. Madabhushi et al. prepared two sets of chiral thioureas bearing a benzimidazole ring, based on natural and non-natural amino acids ((S)-alanine, (S)-phenylalanine, (S)-valine, (S)-leucine and (R)-alanine, (R)-phenylalanine, (R)-valine and (R)-leucine, respectively; Figure 45) [388]. The obtained compounds were studied as potential antimicrobial agents against the Gram-positive strains Staphylococcus aureus, Bacillus subtilis and Micrococcus luteus as well as the Gram-negative strains Klebsiella planticola, Escherichia coli and Pseudomonas aeruginosa. Additionally, antiproliferative activity was examined against A549, MCF7, DU145 and HeLa cell lines. Interestingly, the compounds derived from the natural amino acids exhibited antibacterial activity, whereas their isomers turned out to be inactive against the tested Gram-positive and Gram-negative strains. Thioureas substituted with isopropyl or isobutyl and 3,5-(CF3)2C6H3 groups were the most potent in the series. It was considered that suitable stereochemistry and the presence of lipophilic trifluoromethyl groups, which may increase bioavailability and bio-efficiency, were the factors that determined the antibacterial activity.

Chiral thiophosphorylated thioureas have attracted attention due to their biological activity and their ability to form diverse complexes with transition metals. Based on literature reports, Metlushka et al. synthesized new coordination polymers with chiral thiophosphorylated thioureas in both racemic and enantiopure forms and checked their antimicrobial activity against S. aureus and B. cereus [22]. They examined both enantiopure and racemic derivatives as well as their complexes with Ni(II) (Figure 46). Neither the (R)- nor the (S)-ligands exhibited valuable antimicrobial activity, while the racemic mixture was surprisingly active. In the case of the complexes, the (R)-complex was less active than the (S)-complex against S. aureus, while the racemate apparently was not considered in the research. In turn, both isomers and the racemic mixture performed similarly against B. cereus.
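Several of the series above (for instance, the benzimidazole thioureas built from alanine, valine, leucine and phenylalanine) derive their chirality from a single amino acid stereocenter. The following Python sketch, which assumes RDKit is available, shows how such a scaffold can be encoded and its stereocenter checked programmatically; the molecule is a hypothetical valine-derived N-phenyl thiourea chosen for illustration, not one of the published compounds.

```python
# Minimal sketch (assumes RDKit is installed): encode and inspect the
# stereocenter of a hypothetical chiral thiourea. The SMILES is an
# illustrative valine-derived N-phenyl thiourea, not a reported compound.
from rdkit import Chem

# Intended to encode the (S) configuration (as in L-valine); the printed
# CIP label below lets the reader verify the assignment.
smiles = "CC(C)[C@H](NC(=S)Nc1ccccc1)C(=O)O"
mol = Chem.MolFromSmiles(smiles)

# Assign CIP labels and list stereocenters; includeUnassigned flags any
# center whose configuration is undefined in the input.
Chem.AssignStereochemistry(mol, cleanIt=True, force=True)
print(Chem.FindMolChiralCenters(mol, includeUnassigned=True))

# Inverting the chiral tag ([C@H] -> [C@@H]) gives the enantiomer, which
# is how matched (R)/(S) pairs can be generated for comparison studies.
```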
Summary

A relatively simple, rigid skeleton of thiourea can be combined with chiral moieties, yielding a system that is capable of strong and selective interactions with a variety of chiral molecules, including compounds of biological importance. By appropriate substitution we can suitably modify their properties and, in principle, there are no limitations in the preparation of the desired mono-, di-, tri- or tetrasubstituted, aliphatic or aromatic, unsymmetrical or symmetrical thiocarbamides. Recently published synthetic methods extend the palette of possible reactants and often focus on the modification of conditions: the reaction can be performed in water, with ultrasound assistance, and even without any solvent or catalyst. Chiral thioureas have already proven their utility in various stereoselective reactions, mainly as efficient organocatalysts and chiral ligands. As shown by numerous examples, a proper choice of the chiral component present in the structure of a thiourea, and of its configuration, can also result in a desired biological activity. This is manifested in a considerable interest in the use of these compounds as pharmaceuticals and in agriculture, and biomedical applications of chiral thioureas should become an important and growing area.
2020-01-23T09:07:58.336Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "42e174380b905c6215eb63f0e6894405c8924745", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/molecules25020401", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "898777bec2c1786a4d1d03e7bf497b1226b093a0", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
219047684
pes2o/s2orc
v3-fos-license
Fatigue Properties Estimation and Life Prediction for Steels under Axial, Torsional, and In-Phase Loading

In this study, several estimation methods of fatigue properties based on different monotonic mechanical parameters were first discussed. The advantages and disadvantages of the Hardness Method proposed by Roessle and Fatemi were investigated and improved through the analysis of a total of 92 fatigue test data. A new Segment Fitting Method from Brinell hardness was then proposed for the estimation of fatigue properties, and a total of 96 pieces of fatigue test data under axial, torsional, and multiaxial in-phase loading were collected to verify the applicability of the new proposal. Finally, the prediction accuracy of the new proposal and of three existing estimation methods was compared with predictions based on the experimental fatigue properties. Based on the results obtained, the newly proposed estimation method significantly improves the relation between the fatigue ductility coefficient and Brinell hardness, which consequently improves the fatigue life prediction accuracy within the scatter band of 2, particularly for materials with low Brinell hardness. The present study can provide a simplified analysis for the preliminary fatigue design of engineering structures.

Introduction

The Manson-Coffin equation combined with equivalent strain parameters or critical plane parameters has been routinely applied in strain-life prediction for uniaxial or multiaxial fatigue. The Manson-Coffin equation is expressed as follows:

Δε/2 = Δεe/2 + Δεp/2 = (σf'/E)(2Nf)^b + εf'(2Nf)^c, (1)

where σf', εf', b, and c are the fatigue strength coefficient, fatigue ductility coefficient, fatigue strength exponent, and fatigue ductility exponent, respectively, which are called the fatigue properties. Δε/2, Δεe/2, and Δεp/2 are the total, elastic, and plastic strain amplitudes, respectively, E is the modulus of elasticity, and Nf is the fatigue life.

When the Manson-Coffin equation is adopted to evaluate fatigue performance, it is necessary to first determine the fatigue properties based on uniaxial fatigue tests. In most cases, however, these parameters are not as easy to obtain as the basic mechanical properties of materials, so it is essential to test the fatigue properties of various materials, which requires extensive professional fatigue testing procedures and equipment. For multiaxial fatigue life prediction, further parameters, such as cyclic mechanical parameters and the non-proportional additional hardening coefficient, also need to be tested [1,2]. Considering the diversity of metal materials, many repetitive tests are required to obtain the fatigue properties experimentally, which is time-consuming and expensive. In contrast, the monotonic mechanical properties of steel, such as yield strength, ultimate tensile strength, modulus of elasticity, Brinell hardness, and reduction of area, are generally used as the basic mechanical parameters of metal materials and are usually available in metal materials handbooks. The fatigue properties of steel are usually quite different due to the dispersion of material performance. In recent decades, many researchers have attempted to establish optimal fitting relationships between the fatigue properties and the monotonic mechanical properties in order to reduce complex fatigue testing and simplify fatigue life prediction while ensuring a certain accuracy [3]. Manson et al.
[3,4] expressed the fatigue properties as functions of the ultimate tensile strength, σu, and the reduction of area, RA, and proposed a series of simplified estimation methods, such as the Four-Point Correlation Method and the Universal Slopes Method. Baumel and Seeger [5] proposed a Uniform Material Law, in which the fatigue strength coefficient, σf', and the fatigue ductility coefficient, εf', were represented by the ultimate tensile strength, σu, and the modulus of elasticity, E, while the fatigue strength exponent, b, and the fatigue ductility exponent, c, were expressed as statistical mean values based on a large number of test data. As mentioned in reference [3], Mitchell et al. tried to find a fitting relationship between the fatigue properties and the ultimate tensile strength, σu, and the fracture ductility, εf. In Mitchell's method, the fatigue strength coefficient, σf', and the fatigue strength exponent, b, were functions of the ultimate tensile strength, σu, while the fatigue ductility coefficient, εf', was approximately taken as the fracture ductility, εf, and the fatigue ductility exponent, c, was taken as the statistical mean value of test data. Muralidharan and Manson [6] and Ong [7] modified the Four-Point Correlation Method and the Universal Slopes Method, respectively, and gave different simplified estimation methods for the fatigue properties. In recent years, Roessle and Fatemi [8], Shamsaei and Fatemi [2,9], and Shamsaei and McKelvey [2] studied estimation methods for fatigue properties and non-proportional cyclic mechanical parameters from Brinell hardness, and then investigated multiaxial fatigue life prediction using the critical plane method. These researchers suggested that fatigue life predictions based on reasonably estimated fatigue properties can be guaranteed within certain scatter bands of error. The obvious advantage of the estimation approach is that it can predict fatigue life accessibly and effectively while ensuring a certain accuracy.

In the present paper, the existing estimation methods of fatigue properties were studied, the Hardness Method proposed by Roessle and Fatemi was discussed and improved, and its advantages and disadvantages were analyzed using a total of 92 pieces of fatigue test data. A new Segment Fitting Method from Brinell hardness was proposed to estimate the fatigue properties, and the estimated fatigue properties were adopted for the life prediction of 9 types of steel. A total of 96 pieces of fatigue test data under axial, torsional, and in-phase loading were collected to verify the applicability of the new proposal. Finally, the prediction accuracy of the new proposal and of the existing estimation methods was compared with predictions based on the experimental fatigue properties.

Estimation Methods of Fatigue Properties

As mentioned in the foreword, researchers have proposed several estimation methods of fatigue properties based on different monotonic mechanical parameters, and related reviews are available in [2,3,10]. Kim et al. [3] verified the existing estimation methods using uniaxial fatigue test data for eight types of steel and concluded that the Modified Universal Slopes Method, the Uniform Material Law, and the Hardness Method give relatively good fatigue life prediction results. In the present study, these three estimation methods are briefly discussed below.

Modified Universal Slopes Method. Muralidharan and Manson [6] investigated the Universal Slopes Method previously proposed by Manson et al.
and gave a more effective improvement as follows:

The fracture ductility, εf, in equation (2c) can be calculated by εf = ln[1/(1 - RA)], where RA is the reduction of area.

Uniform Material Law. Baumel and Seeger [5] proposed a Uniform Material Law in which the fatigue properties are estimated according to the material type. For unalloyed and low-alloy steel, the estimation equations were proposed as follows:

Hardness Method. Roessle and Fatemi [8] studied an estimation method from Brinell hardness through least squares fitting analysis of the relationship between the fatigue strength coefficient, σf', and the Brinell hardness, HB, using 69 pieces of fatigue test data. The relationship between the fatigue ductility coefficient, εf', and Brinell hardness, however, was established through the intermediate variable of the transition fatigue life, Nt. To determine the fatigue strength exponent, b, and the fatigue ductility exponent, c, the statistical mean values over the 69 pieces of fatigue test data were taken as approximations of the two exponents. The Hardness Method equations for fatigue property estimation were given as follows:

A New Proposal from Hardness

The Hardness Method was adopted for fatigue life prediction by Shamsaei and McKelvey [2]. The results show that the prediction error of fatigue life using the estimated fatigue properties was relatively large for materials with small Brinell hardness and generally showed a tendency toward non-conservative (dangerous) estimation. This is mainly because the Brinell hardness range of the data samples used for the estimation of the fatigue properties was 150-700. Therefore, the Hardness Method may not be suitable for materials with low Brinell hardness, such as HB < 150, and a large amount of material data is still needed for the Hardness Method to determine the optimal estimation equations. In the present study, the collected test data cover wider ranges of Brinell hardness and fatigue properties: there are 92 pieces of test data for different materials, and the Brinell hardness range of the materials is about 80-660. The statistical results are shown in Table 1. Based on the Hardness Method proposed by Roessle and Fatemi [8], a new fit between the fatigue strength coefficient, σf', and the Brinell hardness, HB, is obtained (equation (5)).

The comparison between equation (5) and equation (4a) from Roessle and Fatemi [8] is shown in Figure 1. It can be concluded from Figure 1 that, although there are some differences in the data samples, the fitting equation of the present study still has a high degree of agreement with equation (4a), which directly shows the reliability of estimating the fatigue strength coefficient from the Brinell hardness of materials.

In contrast, it is relatively difficult to establish an approximate fitting relationship between the fatigue ductility coefficient and the basic mechanical parameters of materials. Compared with the monotonic mechanical properties, the fatigue ductility coefficient, εf', under fatigue loading is analogous to the fracture ductility, εf, under monotonic loading. Therefore, some researchers have suggested approximating the fatigue ductility coefficient, εf', by the fracture ductility, εf. However, the relationship between the fatigue ductility coefficient and the monotonic fracture ductility of materials shows great scatter, and Roessle and Fatemi [8] demonstrated the large prediction error introduced by this approximation. In the Hardness Method, the relationship between the fatigue ductility coefficient and Brinell hardness is instead deduced through the intermediate variable of the transition fatigue life, Nt.
According to the Manson-Coffin equation, at the point of the transition fatigue life the elastic and plastic strain amplitudes are equal, so the transition strain amplitude, Δεt, can be expressed through either term of the strain-life relation (equations (6) and (7)). Substituting equation (6) into equation (7), the fatigue ductility coefficient can be expressed as

εf' = (σf'/E)(2Nt)^(b-c) = St(2Nt)^(-c)/E, (8)

where St = σf'(2Nt)^b is the nominal stress amplitude corresponding to the transition fatigue life. Roessle and Fatemi [8] point out that the transition fatigue life, Nt, and the nominal transition stress, St, both have a good linear relationship with Brinell hardness (equations (9) and (10)). Substituting equations (9) and (10) into equation (8), the fatigue ductility coefficient, εf', can be estimated depending only on the modulus of elasticity, E, and the Brinell hardness, HB (equation (11)). When the Brinell hardness range is 150 < HB < 700, equation (11) can be simplified to equation (12).

Figure 2 shows the relationship between Brinell hardness and fatigue ductility coefficient for the 92 pieces of test data of different materials; the derived equation (11) and the simplified equation (12) are also plotted for comparison. As can be seen from Figure 2, the fits of the fatigue ductility coefficient to Brinell hardness based on equations (11) and (12) are not very accurate even within the hardness range 150 < HB < 700. For materials with HB < 150, both the simplified equation (12) and the derived equation (11) are clearly inconsistent with the trend of the data. It can be inferred from this finding that the prediction error of the Hardness Method for materials with HB < 150 will be large.

Another observation from the distribution of the test data in Figure 2 is that the fatigue ductility coefficient increases with increasing Brinell hardness in the low-hardness zone and decreases with increasing Brinell hardness in the high-hardness zone; more precisely, the trend is approximately a power-exponential increase in the low-hardness zone and a power-exponential decrease in the high-hardness zone. Based on this observation, a Segment Fitting Method is suggested for estimating the fatigue ductility coefficient from Brinell hardness. According to the statistical analysis of the test data in this study, the fitting range is roughly divided into the two segments HB < 350 and HB > 350, and the boundary value of the Brinell hardness range is determined by the intersection point of the fitting equations of the two segments (equations (13a) and (13b)).

Figure 3 shows the fitting results of the newly proposed Segment Fitting Method. The boundary Brinell hardness value of the two segments is about 340 for the data sample used in this study. It can be seen from the figure that the Segment Fitting Method gives a significant improvement in the relationship between the fatigue ductility coefficient, εf', and the Brinell hardness, HB.

For the fatigue strength exponent, b, and the fatigue ductility exponent, c, there is not enough evidence to establish a corresponding relationship between the two exponents and the basic mechanical properties of materials, so researchers usually take the statistical mean values of the two exponents as approximate estimates. For the test data in this study, the statistical mean value of the fatigue strength exponent, b, is about -0.09, and that of the fatigue ductility exponent, c, is about -0.56. These values are very close to the recommended values of Baumel and Seeger [5], Muralidharan and Manson [6], and Roessle and Fatemi [8].
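For orientation, the short Python sketch below implements the original Hardness Method using the coefficient values published by Roessle and Fatemi (σf' = 4.25·HB + 225 MPa; εf' = (0.32·HB² - 487·HB + 191,000)/E; b = -0.09; c = -0.56). These constants are quoted from the cited literature and are assumptions in this context; they are not the refitted equations (5) and (13a)/(13b) of the present study, whose coefficients are not reproduced here.

```python
# Hedged sketch of fatigue-property estimation from Brinell hardness.
# Coefficients follow Roessle & Fatemi's published Hardness Method; they
# are quoted literature values, not this paper's refitted equations.

def estimate_fatigue_properties(hb: float, e_mod: float) -> dict:
    """Estimate Manson-Coffin fatigue properties from Brinell hardness.

    hb    : Brinell hardness (suggested validity roughly 150 < HB < 700)
    e_mod : modulus of elasticity in MPa
    """
    sigma_f = 4.25 * hb + 225.0                               # sigma_f' [MPa]
    eps_f = (0.32 * hb**2 - 487.0 * hb + 191_000.0) / e_mod   # epsilon_f'
    return {"sigma_f": sigma_f, "eps_f": eps_f, "b": -0.09, "c": -0.56}

if __name__ == "__main__":
    props = estimate_fatigue_properties(hb=250.0, e_mod=206_000.0)
    print(props)  # sigma_f = 1287.5 MPa, eps_f ~ 0.43 for HB = 250
```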
Considering that the fatigue ductility coefficient, εf', is estimated using the Segment Fitting Method, the statistical mean value of the fatigue ductility exponent, c, over the corresponding hardness segments is also used in this paper. For the test data listed in Table 1, the statistical mean value of the fatigue ductility exponent, c, is -0.54 for 50 < HB < 700. Thus, the new proposal for estimating the fatigue properties from Brinell hardness can be given as follows:

Results and Discussion

In this section, 96 pieces of fatigue test data for 9 types of steel are used to verify the new estimation method of fatigue properties; the Brinell hardness range of these steels is about 130-600. Some basic mechanical properties and the fatigue properties determined by fatigue testing are listed in Table 2. The fatigue test data include uniaxial, torsional, and in-phase loading paths, as shown in Figure 4. The fatigue test results, such as the applied strain amplitudes and fatigue lives, can be found in [11-17].

For life prediction under axial, torsional, and multiaxial in-phase loading, the von Mises equivalent strain parameter combined with the Manson-Coffin equation can give good prediction results, as verified by a large amount of fatigue test data [18,19]. The equivalent strain model used for the fatigue life prediction in this investigation is given as

Δεeq/2 = (σf'/E)(2Nf)^b + εf'(2Nf)^c,

where Δεeq/2 is the von Mises equivalent strain amplitude.

It can be found from the figures that approximately 95%, 92%, and 90% of all the data fall within scatter bands of 5 based on the three existing estimation methods, respectively, indicating good agreement of the predictions within scatter bands of 5, while the prediction accuracy within the scatter band of 2 is relatively poor, at about 64%, 45%, and 58%, respectively. It is noteworthy in Figure 7 that only 40% of the prediction results for the Haynes 188 steel, with a Brinell hardness of HB = 130, fall within the scatter band of 5. This is mainly because the Hardness Method is most suitable for materials with a Brinell hardness in the range 150 < HB < 700, as shown in Figure 2; for materials with low Brinell hardness, such as the Haynes 188 steel used in the present study, life prediction based on the Hardness Method suffers a decrease in accuracy, as shown in Figure 7.

Figure 8 shows the fatigue life prediction results based on the Segment Fitting Method. The prediction accuracy within the scatter bands of 2 and 5 is 73% and 97%, respectively. It should be pointed out that, for the Haynes 188 steel with low Brinell hardness, all the life prediction results fall within the scatter band of 3. This indicates that the predicted results under axial, torsional, and in-phase loading are significantly improved by using the fatigue properties estimated by the newly proposed method. To compare the prediction accuracy of the estimated fatigue properties with that of the experimental ones, the fatigue life prediction results based on the experimental fatigue properties are also shown in Figure 9; the prediction accuracy within the scatter bands of 2 and 5 is 74% and 94%, respectively, which is in good agreement with the accuracy of the proposed estimation method. The prediction accuracy based on the experimental fatigue properties and the four estimation methods is listed in Table 3.
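To make the evaluation procedure concrete, the hedged sketch below solves the Manson-Coffin relation for life at a given (equivalent) strain amplitude by bisection on the reversal count, and checks whether a prediction falls within a given scatter band of an observed life. The material constants and sample numbers are illustrative placeholders, not data from Tables 1-3.

```python
# Hedged sketch: strain-life solution and scatter-band accuracy check.
# Material constants and sample data below are illustrative placeholders.
import math

def life_from_strain(amp, sigma_f, eps_f, b, c, e_mod):
    """Solve (sigma_f/E)(2N)^b + eps_f(2N)^c = amp for N by bisection."""
    f = lambda n2: (sigma_f / e_mod) * n2**b + eps_f * n2**c - amp
    lo, hi = 1.0, 1e12            # bracket on reversals 2N; strain amplitude
    for _ in range(200):          # decreases monotonically with 2N (b, c < 0),
        mid = math.sqrt(lo * hi)  # so f changes sign exactly once
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi) / 2.0  # convert reversals to cycles N

def within_band(predicted, observed, band=2.0):
    """True if predicted life is within a factor `band` of observed life."""
    ratio = predicted / observed
    return 1.0 / band <= ratio <= band

# Example with placeholder properties (roughly an HB ~ 250 steel, E = 206 GPa):
n_pred = life_from_strain(0.004, 1287.5, 0.43, -0.09, -0.56, 206_000.0)
print(f"predicted life ~ {n_pred:.0f} cycles")
print(within_band(n_pred, observed=1.5 * n_pred, band=2.0))  # True
```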
From Table 3 it can be concluded that the prediction accuracy of the fatigue life under axial, torsional, and in-phase loading shows little difference within the scatter band of 5; the differences in prediction accuracy mainly occur in the smaller scatter bands, such as the scatter band of 2. It should be noted that the experimental fatigue data may be quite scattered, which leads to relatively poor prediction accuracy based on the experimental fatigue properties. For life prediction using an estimation method, by contrast, a large amount of fatigue test data from various materials is adopted for the fitting, which, to some extent, mitigates the negative influence caused by the individual scattered data of a particular material and consequently provides satisfactory fatigue life prediction.

In conclusion, the proposed Segment Fitting Method modifies the Hardness Method previously proposed by Roessle and Fatemi, based on the observed statistical distribution of 92 pieces of test data, and consequently improves the fatigue life prediction results, particularly for materials with low Brinell hardness; this illustrates the feasibility of the new proposal based on the idea of segment fitting of fatigue properties from Brinell hardness. It should be noted, however, that the optimal Segment Fitting Method is also limited by the size of the data sample, the Brinell hardness range included in the data sample, and the scatter of the fatigue test data, and cannot fully cover all cases. The proposed segment fitting equations still need a wider range of test data to refine their form and parameters.

Conclusions

An improved estimation method of fatigue properties from Brinell hardness is proposed, and fatigue life prediction based on the estimated fatigue properties is studied. The new method and three existing estimation methods of fatigue properties are verified against 96 pieces of fatigue test data from the existing literature. The following conclusions can be made from the analyses performed in the present study:

(1) The fatigue strength coefficient, σf', has a fine linear relationship with Brinell hardness. The Segment Fitting Method combined with a power-exponential form is adopted in the present study to estimate the fatigue ductility coefficient, εf', from Brinell hardness, which improves the prediction accuracy. However, it is difficult to estimate the fatigue strength exponent, b, and the fatigue ductility exponent, c, directly from the monotonic mechanical properties, and the statistical mean values of numerous test data are usually taken as approximations of these two exponents.

(2) The fatigue life predictions are in good agreement with the experimental ones within scatter bands of 5 based on the estimation methods of fatigue properties.

Data Availability

The fatigue test data used to support the findings of this study are available upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.
2020-04-30T09:03:58.402Z
2020-04-27T00:00:00.000
{ "year": 2020, "sha1": "cb13aec3a05a449c3665d311579065e0ca3303b2", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/amse/2020/8186159.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cccb472add31aab17896c84f5f0ecebc381eff95", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
62886859
pes2o/s2orc
v3-fos-license
The Effects of Dietary Garlic Powder on the Performance, Egg Traits and Blood Serum Cholesterol of Laying Quails

This study was conducted to determine the effects of dietary garlic powder on the laying performance, egg traits and blood serum cholesterol level of quails. A total of three hundred quails (Coturnix coturnix japonica) aged nine weeks were used. They were allocated to 3 dietary treatments, each comprising 5 replicates of 20 quails. The diets were supplemented with 0, 5 and 10 g/kg garlic powder. The experimental period lasted 21 weeks. The addition of garlic powder did not significantly affect body weight, egg production, feed consumption, feed efficiency, egg shell thickness, egg albumen index, egg yolk index or egg Haugh unit. Adding 5 and 10 g/kg garlic powder to the laying quail diets increased egg weight (p<0.01). Egg yolk cholesterol and blood serum cholesterol concentrations were reduced with garlic powder supplementation. The results of this study demonstrated that garlic powder addition had a significant cholesterol-reducing effect in serum and egg yolk without adverse effects on the performance and egg traits of laying quails.

Previous studies with laying hens and broilers showed controversial results regarding the hypocholesterolemic effect of garlic (Qureshi et al., 1983; Reddy et al., 1991; Konjufca et al., 1997; Birrenkott et al., 2000; Chowdhury et al., 2002; Yalçın et al., 2006). Reddy et al. (1991) reported that body weight, egg production, egg weight, feed efficiency, yolk cholesterol and plasma cholesterol of laying hens were not affected by the supplementation of diets with 0.2 g/kg garlic oil. In the study of Chowdhury et al. (2002), no differences in egg weight, egg mass, feed consumption, feed efficiency or body weight gain were found among groups fed diets containing different levels of sun-dried garlic paste (0, 20, 40, 60, 80 or 100 g/kg), but serum and egg yolk cholesterol concentrations were shown to decrease linearly (p<0.05) with increasing levels of dietary garlic. A reduction of the activities of hepatic 3-hydroxy-3-methylglutaryl-CoA (HMG-CoA) reductase and cholesterol 7α-hydroxylase in birds fed garlic was observed by some researchers (Qureshi et al., 1983; Konjufca et al., 1997). In the study of Yalçın et al. (2006), garlic powder addition at levels of 5 and 10 g/kg increased egg weight (p<0.01) and decreased egg yolk cholesterol concentration as mg/g yolk (p<0.01) as well as serum triglyceride (p<0.05) and cholesterol (p<0.01) concentrations, without adverse effects on the performance and egg traits of laying hens. However, Birrenkott et al. (2000) reported that 30 g/kg garlic powder supplementation had no significant effect on the yolk and serum cholesterol concentrations of hens; they also observed no differences in color or flavor in eggs from hens consuming up to 30 g/kg dietary garlic powder.

To our knowledge, there are no published reports on dietary garlic powder for laying quails. Therefore, the present study aimed to examine the effects of garlic powder on the laying performance, egg traits and blood serum cholesterol level of laying quails.
Animals and diets

A total of 300 Japanese quails (Coturnix coturnix japonica) aged nine weeks were chosen at random from a large flock. They were housed in cages (20 cm × 45 cm × 45 cm) and randomly allocated to 3 dietary treatments. Each treatment comprised 5 replicates of 20 quails; therefore, 3 groups containing 100 quails each were arranged. Feed and water were provided for ad libitum consumption, and the diets were presented in mash form. A photoperiod of 17 h was maintained. The experiment was conducted for 21 weeks.

The ingredient and chemical composition of the diets are given in Table 1. The diets were formulated to be isocaloric and isonitrogenous. The diets of the first, second and third groups were supplemented with 0, 5 and 10 g/kg garlic powder, respectively. Garlic powder was purchased commercially (Arifoğlu Spices and Food Limited Inc., İstanbul, Turkey).

Traits measured

The moisture, crude ash, crude fibre, ether extract, crude protein, calcium and total phosphorus contents of the diets were determined according to the AOAC (1990). The metabolizable energy levels of the diets were estimated using a prediction equation (Leeson and Summers, 2001).

Quails were weighed individually at the beginning and at the end of the experiment. Eggs were collected daily, and egg production was calculated on a bird-day basis. Mortality was recorded as it occurred.

Eggs were weighed individually every week on one day of production. Feed consumption was recorded biweekly and calculated as g/quail/day. Feed efficiency was calculated as kg feed per kg egg and kg feed per dozen eggs.

To determine the egg traits, 15 eggs were collected from each group (3 eggs from each replicate) at the 3rd, 9th, 15th and 21st weeks of the experiment. Individual eggs were weighed and their shell thickness was measured. Then the values of yolk height, albumen height, yolk width, albumen width and albumen length were determined. Using these values, the yolk index, albumen index and Haugh unit were calculated (Card and Nesheim, 1972); an illustrative calculation is sketched after the results discussion below. Egg quality analyses were completed within 24 h of the eggs being collected. At the end of the experiment, 35 eggs per group (7 eggs from each replicate) were randomly chosen to determine yolk cholesterol. The eggs were boiled for 5 minutes, and cholesterol was extracted according to the method of the AOAC (1990).

Blood samples from 15 quails per group (3 from each replicate) were collected randomly at slaughter at the end of the experiment and centrifuged at 3,000× g for 10 min. Serum was collected and stored at -20°C for determination of the serum cholesterol level. Serum cholesterol was analysed on a Hitachi autoanalyser (Serial Number 1238-23, Hitachi Ltd., Tokyo) using a commercial kit.

Statistical analyses

Statistical analyses were done using the SPSS programme (SPSS Inc., Chicago, IL, USA). One-way ANOVA was used to evaluate the effects of garlic powder on the performance, egg traits and blood serum cholesterol level of laying quails among the groups, and the significance of mean differences between groups was tested by Duncan's test. The effect of supplementation on the mortality of laying quails was evaluated by the χ² test (Dawson and Trapp, 2001).
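A minimal sketch of this analysis workflow in Python/SciPy is given below. Duncan's multiple range test is not available in SciPy, so Tukey's HSD is used here as a plainly named stand-in post hoc comparison; all numbers are fabricated placeholders, not the study's data.

```python
# Hedged sketch of the statistical workflow (placeholder data throughout).
# Duncan's test is not in SciPy; Tukey's HSD stands in as the post hoc test
# (scipy.stats.tukey_hsd requires a recent SciPy release).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder egg-weight replicate means for the 0, 5 and 10 g/kg groups.
g0 = rng.normal(11.0, 0.3, 5)
g5 = rng.normal(11.5, 0.3, 5)
g10 = rng.normal(11.6, 0.3, 5)

f_stat, p_anova = stats.f_oneway(g0, g5, g10)   # one-way ANOVA
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")

print(stats.tukey_hsd(g0, g5, g10))             # pairwise post hoc comparisons

# Chi-square test on a placeholder mortality table (dead vs alive per group).
mortality = np.array([[3, 97], [2, 98], [4, 96]])
chi2, p_chi, dof, _ = stats.chi2_contingency(mortality)
print(f"Chi-square: chi2 = {chi2:.2f}, p = {p_chi:.3f}")
```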
Results and discussion

The diets of the first, second and third groups were supplemented with 0, 5 and 10 g/kg garlic powder, respectively. Mortality was not affected by the inclusion of garlic powder; similarly, garlic powder supplementation had no effect on mortality in laying hens (Yalçın et al., 2006). The effects of garlic powder on the performance of laying quails are shown in Table 2.

Body weight, feed consumption, egg production and feed efficiency were not significantly affected by the dietary treatments over the 21-week period. These results demonstrate that the strong odor of garlic does not act as a deterrent to feeding. In agreement with the present study, some researchers (Reddy et al., 1991; Chowdhury et al., 2002; Yalçın et al., 2006) reported that body weight, body weight gain, feed consumption, egg production and feed efficiency of laying hens were not significantly affected by dietary garlic supplementation.

Egg weight increased with garlic powder (p<0.01). Similar results were also reported by Yalçın et al. (2006). However, some researchers (Reddy et al., 1991; Chowdhury et al., 2002; Lim et al., 2006) found that garlic products had no effect on egg weight. These differences may be due to the use of different commercial garlic products and preparation methods of the garlic powder.

The effects of garlic powder on egg traits and blood serum cholesterol of laying quails are shown in Table 3. The addition of garlic powder had no significant effect (p>0.05) on egg shell thickness, egg albumen index, egg yolk index or egg Haugh unit. These results are in agreement with those of a study involving laying hens fed diets supplemented with garlic powder (Yalçın et al., 2006). Lim et al. (2006) also reported that egg shell thickness was not affected by the dietary supplementation of garlic powder.

Adding 5 and 10 g/kg garlic powder to the laying quail diets reduced egg yolk cholesterol and blood serum cholesterol concentrations significantly (p<0.01), as shown in Table 3. The reduction of serum and egg yolk cholesterol when garlic paste was fed to laying hens could be attributable to the reduction of synthetic enzyme activity (Chowdhury et al., 2002). Konjufca et al. (1997) reported a reduction in the activities of HMG-CoA reductase and cholesterol 7α-hydroxylase in broilers fed garlic. The results of the present study are in agreement with several studies (Sharma et al., 1979; Chowdhury et al., 2002; Mottaghitalab and Taraz, 2004). Sharma et al. (1979) observed that egg yolk cholesterol was reduced by 4.1 and 5.5% when laying hens were fed 10 and 30 g/kg garlic powder for 3 weeks, respectively. Supplementation with 20, 40, 60, 80 or 100 g/kg of sun-dried garlic paste (Chowdhury et al., 2002), 5, 10 or 15 g/kg of garlic powder (Mottaghitalab and Taraz, 2004) and 30 g/kg garlic powder (Lim et al., 2006) reduced serum and egg yolk cholesterol concentrations. Similar to the present study, Yalçın et al. (2006) also reported that the levels of serum cholesterol in laying hens were significantly (p<0.01) reduced with 5 and 10 g/kg garlic powder supplementation. However, Reddy et al. (1991) found that 0.2 g/kg garlic oil in the diets of laying hens did not significantly reduce total plasma cholesterol. These inconsistent findings may be due to differences in supplemental levels, feeding period or the preparation method of the garlic products (e.g., organic solution extraction, alcohol extraction, simple drying, etc.) (Lim et al., 2006).
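For reference, the egg-quality indices reported in Tables 2 and 3 can be computed from the measurements described in the Traits measured subsection. The sketch below uses the standard Haugh unit formula, HU = 100·log10(h - 1.7·w^0.37 + 7.6), with albumen height h in mm and egg weight w in g; the yolk and albumen index definitions shown are the commonly used ones and should be checked against Card and Nesheim (1972). All sample values are invented.

```python
# Hedged sketch of the egg-quality indices; formulas follow common usage
# and should be verified against Card and Nesheim (1972). Data are made up.
import math

def haugh_unit(albumen_height_mm: float, egg_weight_g: float) -> float:
    """Standard Haugh unit: 100 * log10(h - 1.7 * w**0.37 + 7.6)."""
    return 100.0 * math.log10(albumen_height_mm
                              - 1.7 * egg_weight_g**0.37 + 7.6)

def yolk_index(yolk_height_mm: float, yolk_width_mm: float) -> float:
    """Yolk index as height/width (often reported multiplied by 100)."""
    return yolk_height_mm / yolk_width_mm

def albumen_index(h_mm: float, length_mm: float, width_mm: float) -> float:
    """Albumen index: height over the mean of albumen length and width."""
    return h_mm / ((length_mm + width_mm) / 2.0)

# Illustrative quail egg: 11 g with 4.2 mm albumen height.
print(f"HU = {haugh_unit(4.2, 11.0):.1f}")
print(f"Yolk index = {yolk_index(10.5, 24.0):.2f}")
print(f"Albumen index = {albumen_index(4.2, 48.0, 36.0):.3f}")
```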
It is concluded that garlic powder can be included in diets for laying quails at levels of 5 or 10 g/kg without any adverse effect on performance and egg traits. The most important result of dietary garlic powder supplementation is the significant cholesterol-reducing effect in both serum and egg yolk in laying quails. Further studies are necessary to evaluate the effects of garlic powder on egg yolk composition and its cholesterol-depressing mechanism of action.

Table 1. Ingredient and chemical composition of the diets (g/kg)

Table 2. The effects of garlic powder on performance of laying quails (mean±standard error). Mean values in the same row having different superscripts are significantly different (p<0.01).

Table 3. The effects of garlic powder on egg traits and blood serum cholesterol of laying quails (mean±standard error). Mean values in the same row having different superscripts are significantly different (p<0.01).
2018-12-21T17:22:02.747Z
2007-05-02T00:00:00.000
{ "year": 2007, "sha1": "b2839491401e9d1e15811499211fb004a3cb4f57", "oa_license": "CCBY", "oa_url": "https://www.animbiosci.org/upload/pdf/20-133.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b2839491401e9d1e15811499211fb004a3cb4f57", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
209751896
pes2o/s2orc
v3-fos-license
Combined virgin coconut oil and tocotrienol-rich fraction protects against bone loss in osteoporotic rat model

Background and Aim: Both virgin coconut oil (VCO) and tocotrienol-rich fraction (TRF) are rich in antioxidants and may protect the bone against bone loss induced by ovariectomy and a high-fat diet. The study aimed to determine the protective effects of combined therapy with VCO and TRF on osteoporosis in ovariectomized (OVX) rats fed a high-fat diet.

Materials and Methods: Thirty-six female Sprague-Dawley rats were divided into six groups: sham-operated (SHAM), OVX control, OVX given Premarin at 64.5 µg/kg (OVX+E2), OVX given VCO at 4.29 ml/kg (OVX+V), OVX given TRF at 30 mg/kg (OVX+T), and OVX given a combination of VCO at 4.29 ml/kg and TRF at 30 mg/kg (OVX+VT). Following 24 weeks of treatment, blood and femora samples were taken for analyses.

Results: There were no significant differences in serum osteocalcin levels between the groups (p>0.05), while the serum C-terminal telopeptide of Type I collagen levels of the OVX+VT group were significantly lower than those of the other groups (p<0.05). The dynamic bone histomorphometry analysis of the femur showed that the double-labeled surface/bone surface (dLS/BS), mineral apposition rate, and bone formation rate/BS of the OVX+E2, OVX+T, and OVX+VT groups were significantly higher than those of the other groups (p<0.05).

Conclusion: A combination of VCO and TRF has potential as a therapeutic agent to restore bone loss induced by ovariectomy and a high-fat diet.

Introduction

Osteoporosis is known as a silent metabolic bone disease characterized by low bone mass and microarchitectural damage, resulting in an increased risk of fractures. It is caused by reduced osteoblastic activity and increased osteoclastic activity, and it commonly occurs in elderly women due to the lack of estrogen following menopause [1]. Osteoporosis induced by estrogen deficiency can be further aggravated by an unhealthy diet. Studies have shown that cholesterol is one of the factors involved in stimulating osteoclast formation and survival [2] by promoting interleukin-1 production. Intake of repeatedly heated palm oil may be detrimental to the bone structure of the OVX rat model [3], and a combination of these two unhealthy dietary components has been shown to worsen the bone deterioration caused by ovariectomy [4]. The effects of the unhealthy diet were thought to be related to oxidative stress [5].

Estrogen has antioxidant effects and is positively correlated with the levels of plasma antioxidants and antioxidant enzymes [6]. Estrogen also enhances the expression of glutathione peroxidase (GPX), an enzyme that degrades hydrogen peroxide [7]. Therefore, estrogen deficiency causes a reduction in GPX, predisposing the bones to hydrogen peroxide damage; the lack of estrogen also removes its protective effect against oxidative stress [8]. A previous study showed that estrogen deficiency stimulated bone loss, which in turn contributed to the development of osteoporosis [9].

Estrogen replacement therapy (ERT) is the primary treatment for and prevention of postmenopausal osteoporosis [10]. Either estrogen alone or in combination with progesterone may prevent bone loss after menopause. These therapies are effective in preventing bone loss but do not reverse it. Long-term ERT may also increase the risk of breast cancer, coronary heart disease, stroke, and dementia [11].
Because of these serious side effects of ERT, researchers are seeking alternative treatments [12] that are effective but have fewer side effects. One of the most popular natural products is virgin coconut oil (VCO), which is extracted directly from fresh, mature coconut kernel without a refining process. This preserves the essential biologically active compounds in the oil, such as tocotrienols, polyphenols, and tocopherols, which possess antioxidant properties [13]. Dietary supplementation with VCO increased antioxidant levels in rats [14], and VCO supplementation provided protection against bone loss in osteoporosis [15]. VCO has also been reported to have anticancer, antimicrobial, and anti-inflammatory properties [16][17][18]. Furthermore, the effects of tocotrienols on bone parameters have been studied in different osteoporosis models, including ovariectomy [19], steroid-induced [20], and nicotine-induced [21] models. All of these studies showed that tocotrienols offered protection against bone loss, and the mechanism of protection is related to their antioxidant properties.

The combined effects of VCO and tocotrienol-rich fraction (TRF) on bone loss in osteoporosis have not yet been explored and may be beneficial given the added oxidative stress of unhealthy diets. Therefore, the current study was designed to determine the effects of VCO and TRF, individually and in combination, on the bone parameters of OVX rats fed a high cholesterol diet and repeatedly heated palm oil.

Ethical approval

The research project was conducted from May 2014 to August 2016 at Universiti Kebangsaan Malaysia Medical Centre (UKMMC), Cheras, Kuala Lumpur, Malaysia. The research was approved by the Research and Ethical Committee, Faculty of Medicine, UKMMC (FP/ANAT/2014/FAIZAH).

Experimental design

Thirty-six female Sprague-Dawley rats, weighing between 250 and 300 g, were obtained from the Laboratory Animals Resource Unit, Faculty of Medicine, UKMMC. The animals were allowed 1 week of acclimatization, during which they were fed commercial rat chow (Gold Coin, Klang, Selangor, Malaysia). The rats were randomized into six groups of six animals each: sham-operated (SHAM), OVX control, OVX and given Premarin 64.5 µg/kg (OVX+E2), OVX and given VCO 4.29 ml/kg (OVX+V), OVX and given TRF 30 mg/kg (OVX+T), and OVX and given a combination of VCO 4.29 ml/kg and TRF 30 mg/kg (OVX+VT). The rats were housed one per cage in stainless-steel cages at 27±2°C with adequate ventilation and a 12-h light/dark cycle. Two weeks after ovariectomy, the SHAM group was fed standard rat chow while the OVX rats were given a high cholesterol diet mixed with repeatedly heated palm oil. All rats had access to food and tap water ad libitum. The treatments were administered to the OVX+E2, OVX+V, OVX+T, and OVX+VT groups daily through oral gavage for 24 weeks. Food intake and body weight were recorded daily and weekly, respectively. Blood was drawn to measure bone biochemical markers (osteocalcin [OC] and C-terminal telopeptide of Type I collagen [CTX]) in the serum before and after treatment. After 24 weeks, the rats were sacrificed, and the left femora were dissected and prepared for dynamic histomorphometric studies. All animal management and procedures were performed in accordance with the recommended guidelines for the care and use of laboratory animals. Care was taken to minimize discomfort, distress, and pain to the animals.
Ovariectomy

Ovariectomy was carried out under anesthesia with a 1:1 mixture of Ketamil and Ilium Xylazil-20 (Troy Laboratories Pty, Australia), injected intramuscularly at a dose of 0.1 ml/100 g body weight. Once anesthetized, the fur on the abdomen was shaved. A vertical incision was made in the abdomen using a sterilized blade, and both ovaries were identified. The fallopian tubes were ligated before the ovaries were removed. The muscular layer under the skin was closed with catgut suture (Serafit, Germany), while the outer layer of skin was closed with Mersilk thread (Seralon, Serag Wiessner, Germany). In the SHAM group, the abdomen was opened and the ovaries were exposed and carefully manipulated but left intact [22]. The rats were left to recover for 2 weeks before treatment commenced.

Preparation of high cholesterol diet and repeatedly heated palm oil

Palm oil (Cap Buruh, Lam Soon Edible Oils, Kuala Lumpur, Malaysia) was purchased from a local manufacturer and heated 5 times. Briefly, 2.5 L of fresh palm oil was used to fry 1 kg of sweet potato slices in a stainless-steel wok. The temperature of the heated oil was maintained at 180°C for 10 min. The oil was then cooled at room temperature for 5 h. The whole frying process was repeated 4 more times with a new batch of sweet potatoes each time, without adding any fresh oil. The 5-times-heated oil (5HO) was then collected and used to prepare the special diet by mixing 15% (w/w) of 5HO into the high cholesterol diet. The diets were made into pellets and dried in the oven at 70°C overnight. The oil-to-high-cholesterol-diet ratio represents the average daily oil intake in humans [23].

Preparation of VCO, TRF, and Premarin

The VCO used in this study was purchased from Bio-Asli Sdn. Bhd., Sungai Besar, Selangor, Malaysia. The VCO was administered through oral gavage using a cannula needle at a dose of 4.29 ml/kg body weight for 24 weeks. This dose is equivalent to the VCO dose used in humans for alternative therapy, which is three tablespoons, or about 45 ml/day [24]. TRF was prepared by Carotech (Tocomin, Selangor, Malaysia) and consisted of alpha-tocotrienol (37.2%), gamma-tocotrienol (39.1%), and delta-tocotrienol (22.6%). It was diluted in olive oil (Bertolli Classico, Italy) and given daily through oral gavage at a dose of 30 mg/kg body weight for 24 weeks. This dose is roughly equivalent to 3 mg/kg in humans, or 210 mg for a 70 kg man. The estrogen dose given in this study was 64.5 µg/kg. Each Premarin tablet, containing 0.625 mg of conjugated estrogens, was crushed and dissolved in 20 ml of distilled water. The solution was mixed with a magnetic stirrer until homogeneous and then stored in the refrigerator at 4°C. Premarin was given daily through oral gavage at a dose of 0.2 ml/100 g body weight for 24 weeks.

Blood collection and bone sampling

For the biochemical study, blood samples were collected at the beginning and after 24 weeks of treatment from the retro-orbital vein under diethyl ether anesthesia. After being left at room temperature for 3 h, the blood was centrifuged at 3000 rpm for 10 min and the serum was stored at −70°C until further use. Following 24 weeks of treatment, the rats were anesthetized with diethyl ether and sacrificed humanely by cervical dislocation. The left femora were dissected, and adhering muscle was cleaned off before fixation in 10% formalin.
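As a quick plausibility check on the dosing regimen described above, the sketch below works through the per-animal daily gavage amounts implied by the stated doses and the Premarin solution strength. It is an illustrative calculation only; the 300 g body weight is an assumed value within the reported 250-300 g range, and the function name is ours.

```python
# Minimal sketch of the per-animal daily gavage arithmetic implied by the
# doses stated in the Methods. The 0.3 kg body weight is an assumed,
# illustrative value within the reported 250-300 g range.

def daily_doses(body_weight_kg: float) -> dict:
    """Return the daily gavage amounts for one rat of the given weight."""
    vco_ml = 4.29 * body_weight_kg             # VCO: 4.29 ml/kg
    trf_mg = 30.0 * body_weight_kg             # TRF: 30 mg/kg
    hundred_g_units = body_weight_kg * 10.0
    premarin_ml = 0.2 * hundred_g_units        # Premarin solution: 0.2 ml/100 g
    # Solution strength: one 0.625 mg tablet dissolved in 20 ml water.
    estrogens_mg = premarin_ml * (0.625 / 20.0)
    return {
        "VCO (ml)": round(vco_ml, 2),
        "TRF (mg)": round(trf_mg, 1),
        "Premarin solution (ml)": round(premarin_ml, 2),
        "conjugated estrogens (mg)": round(estrogens_mg, 4),
    }

print(daily_doses(0.3))
# {'VCO (ml)': 1.29, 'TRF (mg)': 9.0, 'Premarin solution (ml)': 0.6,
#  'conjugated estrogens (mg)': 0.0188}
```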
Biochemical marker parameters (OC and CTX)

The bone biochemical markers OC and CTX were measured before and after treatment by enzyme-linked immunosorbent assay (ELISA) on an ELISA reader (Leica CTR MIC, Germany), using the Rat-MID OC ELISA kit (Nordic Biosciences, IDS, UK) and the RatLaps CTX-1 ELISA kit (Nordic Biosciences, IDS, UK), respectively.

Dynamic histomorphometric bone parameters (single-labeled surface/bone surface [sLS/BS], double-labeled surface/BS [dLS/BS], mineralized surface/BS [MS/BS], mineral apposition rate [MAR], and bone formation rate/BS [BFR/BS])

After fixation, the bones were cut sagittally at mid-shaft using a rotary electric saw (Black & Decker, USA). The distal left femora were then cut in half longitudinally and subsequently dehydrated in graded concentrations of ethanol. The femora were embedded in methyl methacrylate polymer medium according to the manufacturer's instructions (Osteo-Bed Bone Embedding Kit; Polysciences, USA). The samples were then sectioned at 7 µm thickness using a manual rotary microtome (Model 2235, Leica, Germany). For the dynamic parameters, undecalcified and unstained bones were analyzed using an image analyzer running Pro-Plus 5.0 software (Media Cybernetics, Silver Spring, MD, USA) connected to a fluorescence microscope (Nikon Eclipse 80i, Japan). Dynamic parameters were measured using the double fluorescent labeling technique: the rats received intraperitoneal injections of 20 mg/kg calcein 7 days and 2 days before sacrifice. The basic measurements for the dynamic parameters were sLS/BS (%), dLS/BS (%), MS/BS (%), MAR (µm/day), and BFR/BS (µm³/µm²/day). All dynamic measurements were carried out randomly in the metaphyseal region of the distal femora, located between 3 mm and 7 mm from the lowest point of the growth plate and 1 mm from the lateral cortex, excluding the endocortical region [25]. The selected area was the secondary spongiosa, which is rich in high-turnover trabecular bone. Trabecular bone was chosen because its remodeling process is more dynamic than that of cortical bone.

Statistical analysis

The Kolmogorov-Smirnov test was used as a normality test. The paired-sample t-test was carried out to compare the same group before and after treatment. For normally distributed data, one-way analysis of variance followed by Tukey's honestly significant difference post hoc test was used for comparison between treatment groups, while the Kruskal-Wallis and Mann-Whitney tests were used for data that were not normally distributed. Statistical analysis was performed using the Statistical Package for the Social Sciences software version 22.0 (SPSS Inc., Chicago, IL, USA). The results are presented as mean values ± standard error of the mean. Statistical differences were considered significant at p<0.05.
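The derived dynamic indices above follow standard bone histomorphometry conventions; the paper does not spell the formulas out, so the sketch below uses the usual ASBMR definitions (MS/BS = (dLS + sLS/2)/BS; MAR = interlabel width divided by the label interval; BFR/BS = MAR × MS/BS) under the 5-day calcein interval described in the Methods. The input values are illustrative, not measured data.

```python
# Sketch of the derived dynamic histomorphometry indices, assuming the
# standard ASBMR definitions (the paper reports these parameters but does
# not spell out the formulas). Surfaces are the label and bone perimeter
# lengths measured on 2D sections.

def dynamic_parameters(sls, dls, bs, interlabel_width_um,
                       label_interval_days=5.0):
    """sls, dls, bs: single-labeled, double-labeled, and total bone surface
    (same length units); interlabel_width_um: mean distance between the two
    calcein labels; label_interval_days: 7 d minus 2 d before sacrifice = 5 d.
    """
    sls_bs = 100.0 * sls / bs                         # sLS/BS, %
    dls_bs = 100.0 * dls / bs                         # dLS/BS, %
    ms_bs = 100.0 * (dls + 0.5 * sls) / bs            # MS/BS, %
    mar = interlabel_width_um / label_interval_days   # MAR, um/day
    bfr_bs = mar * ms_bs / 100.0                      # BFR/BS, um3/um2/day
    return {"sLS/BS (%)": sls_bs, "dLS/BS (%)": dls_bs, "MS/BS (%)": ms_bs,
            "MAR (um/day)": mar, "BFR/BS (um3/um2/day)": bfr_bs}

# Illustrative values only:
print(dynamic_parameters(sls=120.0, dls=300.0, bs=1000.0,
                         interlabel_width_um=4.5))
# {'sLS/BS (%)': 12.0, 'dLS/BS (%)': 30.0, 'MS/BS (%)': 36.0,
#  'MAR (um/day)': 0.9, 'BFR/BS (um3/um2/day)': 0.324}
```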
Results

Food intake

Figure-1 shows the mean daily food intake of all groups throughout the treatment period. The OVX, OVX+E2, OVX+V, and OVX+T groups had significantly higher mean daily food intake than the SHAM group (p<0.05). All the treated OVX groups (OVX+E2, OVX+V, OVX+T, and OVX+VT) showed significantly lower food consumption than the negative control, the OVX group (p<0.05). Among the treated groups, both the OVX+V and OVX+T groups had significantly higher mean daily food intake than the OVX+E2 and OVX+VT groups (p<0.05).

Weight gain

The body weight gain of all groups after the 24-week study is shown in Figure-2. The body weight gain of the OVX, OVX+V, OVX+T, and OVX+VT groups was significantly higher than that of the control, the SHAM group (p<0.05). The weight gain of the OVX+E2 and OVX+T groups was significantly lower than that of the negative control, the OVX group (p<0.05). Among the treated groups, both the OVX+V and OVX+VT groups had significantly higher body weight gain than the positive control, the OVX+E2 group (p<0.05).

Serum OC and CTX levels

Figure-3 shows the post-treatment serum OC levels for all groups. There was no significant difference in serum OC levels among the groups (p>0.05). Figure-4 shows the post-treatment serum CTX levels for all groups. After 24 weeks of treatment, the serum CTX levels in the OVX+E2, OVX+V, OVX+T, and OVX+VT groups were significantly lower than in the control, the SHAM group (p<0.05). Among the OVX groups, the post-treatment CTX level of the OVX+VT group was significantly lower than that of the negative control, the OVX group (p<0.05).

sLS/BS

The sLS/BS of the SHAM group was significantly higher than that of the OVX groups (p<0.05). The sLS/BS of the OVX+E2 and OVX+VT groups was significantly lower than that of the negative control, the OVX group (p<0.05) (Figure-5).

dLS/BS

The dLS/BS of the OVX+E2, OVX+T, and OVX+VT groups was significantly higher than that of the control (SHAM) and negative control (OVX) groups (p<0.05) (Figure-6).

MS/BS

The MS/BS of the OVX+VT group was significantly higher than that of the control, the SHAM group (p<0.05). The MS/BS of the OVX+E2, OVX+T, and OVX+VT groups was significantly higher than that of the negative control, the OVX group (p<0.05) (Figure-7).

MAR

The MAR of the OVX+E2, OVX+T, and OVX+VT groups was significantly higher than that of the control (SHAM) and negative control (OVX) groups (p<0.05) (Figure-8).

Bone histology of dynamic parameters

The photomicrographs of the trabecular bone of the distal femora were analyzed using a fluorescence microscope. Both the control (SHAM) and negative control (OVX) groups showed greater sLS/BS than dLS/BS, while the OVX+E2, OVX+V, OVX+T, and OVX+VT groups showed greater dLS/BS than sLS/BS (Figure-10).

Discussion

A high cholesterol diet mixed with repeatedly heated palm oil was provided to the OVX rat model to mimic the unhealthy practice of postmenopausal women consuming diets high in cholesterol and containing repeatedly heated palm oil. These conditions lead to oxidative stress, which may require extra antioxidant supplementation to help the body against free radical attack [26]. In this study, two well-known antioxidants, VCO and TRF, were used to protect the bone against free-radical-induced damage.

Estrogen regulates food intake through the action of leptin, a protein that controls food intake [27]. Leptin levels are decreased by ovariectomy [28], resulting in increased food consumption. Estrogen is also directly involved in the regulation of body weight by binding to estrogen receptors in subcutaneous fat tissue [29]. The results showed that the OVX group had significantly higher mean daily food intake and body weight gain than the rest of the groups. This is in agreement with previous reports that OVX rats have higher body weight gain due to fat deposition caused by estrogen deficiency [30,31]. However, combined or individual supplementation of VCO and TRF in OVX rats reduced food intake to levels almost similar to those of the Premarin group (OVX+E2) and significantly lower than those of the OVX group. TRF given alone controlled body weight gain to a degree comparable to Premarin.
The findings on the effects of TRF in reducing body weight gain were similar to previous studies on OVX rats given estrogen [32] and calcium, estrogen, and TRF [33].

OC, a bone formation marker, and CTX, a bone resorption marker, are bone biochemical markers that reflect osteoblast and osteoclast activities, respectively [34,35]. In the estrogen-deficient state, the increase in serum CTX levels is associated with an increased bone turnover rate, leading to bone loss [36]. After 24 weeks of treatment, serum OC levels did not show any significant difference between the groups. This finding is in line with a previous study, which did not find any significant difference in the serum OC levels of groups supplemented with VCO or TRF after an 8-week period of treatment [37]. Hypercholesterolemia is positively correlated with reduced bone formation and bone density and increased bone resorption [38]. The bone mineral density of OVX rats fed a high cholesterol diet was significantly decreased after 7 months of treatment [39]. In this study, the addition of cholesterol and repeatedly heated palm oil to the diet seemed to promote bone resorption. The serum CTX levels were significantly lower in the OVX+E2, OVX+V, OVX+T, and OVX+VT groups than in the SHAM group. This is in agreement with a previous study, which reported that treatment with anti-osteoporotic agents decreased the serum CTX level [40]. The post-treatment CTX level in the OVX+VT group was also significantly lower than in the OVX group. This indicates that the VCO-TRF combination was able to reduce the raised CTX level by decreasing bone resorption activity. The correlation between serum biomarkers and bone microarchitecture has been demonstrated in studies of laboratory animals and postmenopausal women [41,42].

Current clinical diagnostic techniques for osteoporosis are mainly based on either X-rays or ultrasound. Both dual X-ray absorptiometry (DXA) and micro-computed tomography have become standard tools to evaluate bone mineral density and bone architecture, respectively. Among the most commonly used techniques, DXA is considered the current gold standard for osteoporosis diagnosis and fracture risk prognosis. However, as a research method, bone histomorphometry has supported the interpretation of bone biology [43], helped elucidate the potential mechanisms of action of several effective therapies [44], and been essential in identifying the adverse effects of drugs [45]. Thus, dynamic bone histomorphometry was used in this study; it is considered the definitive histomorphometric assessment, as it provides a quantitative measure of the extent of bone formation over a specific period of time.

The lower sLS/BS and higher dLS/BS in both the OVX+E2 and OVX+VT groups verified that Premarin and VCO-TRF supplementation had the potential to overcome impaired bone growth by stimulating bone formation, as also seen in the MS/BS, MAR, and BFR/BS parameters. This indicates that VCO-TRF supplementation was as effective as Premarin in producing newly mineralized bone. In addition, daily VCO-TRF supplementation increased osteoblastic bone formation and decreased osteoclastic bone resorption in the OVX rats, as indicated in the OVX+VT group by the higher dLS/BS, MS/BS, MAR, and BFR/BS values and lower sLS/BS values compared to the OVX and OVX+V groups.
Previous studies have shown that vitamin E supplementation has a positive effect on bone strength and bone mineral density in animal models [46][47][48]. This study therefore suggests that VCO-TRF supplementation in OVX rats may have additive bone-protective effects compared with single supplementation with VCO or TRF, which could be beneficial in treating postmenopausal osteoporosis. This study may help researchers to explore combined antioxidant therapy, a critical area that has so far received little attention, and a new approach to antioxidant therapy may be arrived at through the combined use of VCO and TRF.

Conclusion

The superior osteoprotective effects of VCO-TRF supplementation indicate its worth as an alternative therapy for the prevention of postmenopausal osteoporosis. Further studies are required to explore its potential as an anti-osteoporotic agent for postmenopausal osteoporosis.
A Randomized Controlled Pilot Trial About the Influence of Kyusho Jitsu Exercise on Self-efficacy, Fear, Depression, and Distress of Breast Cancer Patients within Follow-up Care

Introduction: Breast cancer survivors are faced with several psychological issues. We report the influence of a holistically oriented training schedule based on the "Kyusho Jitsu" martial art and explore its effects on self-efficacy, distress, fear, and depression.

Methods: Breast cancer survivors (N = 51) were randomly assigned to an intervention (n = 30) or control group (n = 21). The intervention group participated in a Kyusho Jitsu intervention twice a week over a period of 6 months; the control group received no intervention. Patients from both groups were assessed at baseline and at 3 and 6 months for self-efficacy (German General Self-Efficacy Scale, SWE), stress (Perceived Stress Questionnaire, PSQ-20), and fear and depression (Hospital Anxiety and Depression Scale, HADS).

Results: Analysis of the original data showed a significant difference between the groups regarding the subscale "joy" (P = .018). Several significant results within the intervention group were seen in self-efficacy (P = .014), fear (P = .009), and the overall score for fear and depression (P = .043). Both groups improved significantly in "worries" (intervention P = .006, control P = .019) and the PSQ-20 overall score (both P = .005). The control group also significantly improved in the subscale "demands" (P = .019).

Conclusion: In summary, our pilot study showed that Kyusho Jitsu training is safe and feasible. The intervention alone cannot be considered effective enough to help breast cancer survivors with relevant psychological issues, but it might be an important supplementary offer within follow-up care.

Introduction

Breast cancer is the leading cause of cancer-related death among females worldwide. 1,2 With approximately 69 000 new cases per year in Germany, breast cancer is the most common cancer among women. 3 The absolute 5-year survival rate for 2015/2016 was 79%. 3 A diagnosis of a life-threatening disease such as cancer can challenge a patient's ability to cope, 4 and the stress associated with the diagnosis and treatment of cancer can cause significant psychiatric morbidity. 5 In addition to the struggles of cancer treatment (eg, unexpected body changes caused by breast cancer surgery or hair loss from chemotherapy), systemic side effects such as arthralgia, which is commonly triggered by aromatase inhibitors, 6 foster distress, which can lead to a large psychological burden for the patient. Between 25% and 50% of patients have psychological problems, with at least 25% meeting the criteria for either major depressive disorder or adjustment disorder with depressed mood. 5 Furthermore, a combination of mood disorders occurs in 30% to 40% of oncological patients in hospital settings. 7

Self-efficacy refers to the ability of an individual to control their motivation, behavior, and environment, and is essentially an individual's belief that they can succeed in a given situation. 8 This belief is obviously shaken after a cancer diagnosis, and self-efficacy plays an important role in the recovery process. Interestingly, Tai Chi (a form of Chinese martial art/meditative practice) has been shown to have a beneficial effect on self-efficacy in a variety of patient groups, 9,10 including breast cancer survivors. 11
However, more than one third of all breast cancer survivors experience distress even after completing medical therapy. 12 This distress leads to a higher risk of psychological illness in breast cancer survivors. For example, a significant number of women still had elevated anxiety and depression symptoms 18 months after treatment. 13 Moreover, the majority of the literature indicates that the experience of stress impairs efforts to be physically active. 14 This is especially true for those who receive a diagnosis of breast cancer, as simply receiving the diagnosis itself can reduce the level of physical activity. 15 This leads to a vicious cycle of impaired movement and negative side effects, as those who do exercise suffer from less depression, 16 anxiety, 17 fatigue, 18,19 and cognitive impairment. 20,21

Previous research indicates that physical activity and exercise can directly benefit an individual upon the completion of cancer treatment. [22][23][24] This applies in particular to depression, with both cancer patients 22 and clinicians 23 reporting lower levels of depression among patients who completed exercise training. Conversely, physical inactivity can contribute to worsening health in breast cancer survivors. 24

While many modalities of exercise have been studied with the goal of improving the quality of life of cancer patients, one that has received markedly less attention is martial arts. In recent years, however, both martial arts and Tai Chi have been the focus of various studies, with increasingly positive results. 25,26 One as-yet under-researched type of martial art is Kyusho Jitsu. Kyusho Jitsu focuses on the so-called vital points of the body and on how the manipulation of these vital points can be used to produce neurological or physiological effects. While originally developed for hand-to-hand combat, traditional Chinese medicine has applied the same principles with the goal of alleviating a variety of ailments. For this reason, Kyusho Jitsu could be a feasible physical activity modality to improve the mental well-being of breast cancer patients, while at the same time increasing their overall level of physical activity and thus conferring all of the previously discussed benefits of exercise in general. Therefore, the goal of this study was to assess the impact of a Kyusho Jitsu martial arts training program on the distress experienced by cancer patients, and more specifically its influence on self-efficacy, fear, and depression.

Methods

This pilot study was a prospective, randomized controlled, 2-armed intervention study, which ran for 24 weeks with 51 female breast cancer survivors participating. The participants were randomly assigned to the intervention group or the control group by an uninvolved third party using Microsoft Excel. The intervention group received the exercise intervention from weeks 1 to 24, while the control group received no such intervention and only had contact with the researchers at their assessment appointments. The exercise intervention consisted of a 24-week holistic training program based on "Kyusho Jitsu" martial arts, also known as "the art of vital points." Participants were asked to attend 2 exercise sessions per week of 90 minutes each. The primary outcome was feasibility, which has been analyzed and published elsewhere as part of the main study. 27 This exploratory analysis focuses on the modification of distress in the intervention group.
Further secondary outcomes were self-efficacy, fear, and depression. All outcomes were measured at baseline (t0) and at the end of weeks 12 (t1) and 24 (t2). The study was conducted in accordance with the Declaration of Helsinki. It was approved by the human research ethics committee of the German Sport University Cologne and was registered retrospectively in the German Clinical Trials Register (DRKS-ID: DRKS00011245).

Participants and Procedures

Participants were recruited via local newspaper advertisements between 16 June 2014 and 9 September 2014. Inclusion criteria were: female breast cancer survivors aged 18 years or older who had completed medical treatment (excluding hormone treatment) within the 6 months before enrollment. All participants had to provide written consent before they were enrolled in the study. Exclusion criteria were: metastasis, completion of the last chemotherapy or surgery more than 6 months previously, or physical and/or psychological impairments that would prevent participation in the trial.

Assessments

Patients of both groups filled in an anthropometric questionnaire covering clinical and sociodemographic data. Additional information about tumor stage, chemotherapy, and radiation was collected from patient medical records. All questionnaires were administered by a member of the research team.

The secondary outcome self-efficacy was measured using the German SWE ("Allgemeine Selbst-Wirksamkeits-Erwartung") questionnaire. 28 It is a one-dimensional scale of 10 items, each rated from 1 ("not true") and 2 ("hardly correct") through 3 ("more true") to 4 ("accurate"). Each item expresses an internal, stable attribution of the expectation of success. The individual test value is the sum of all 10 responses, so a score between 10 and 40 is possible.

The level of distress was evaluated using the PSQ-20 (Perceived Stress Questionnaire). 29 The questionnaire analyzes the subjective perception, evaluation, and further processing of stressors. While the original PSQ contains 30 items, the PSQ-20 is the short-form version and comprises 20 items. Items are rated from 1 ("almost never") and 2 ("sometimes") through 3 ("frequently") to 4 ("mostly") and refer to stress events within a defined period of time. The period was set to the last 4 weeks, although the short form also has a variant that asks about the last 2 years. The shortened version includes the following 4 scales, each with 5 items: worries, tension, joy, and demands. 30

The level of fear and depression was quantified using the HADS (Hospital Anxiety and Depression Scale). 31 The questionnaire consists of 14 questions presented alternately from the subdomains "anxiety" and "depression," each with 7 items. Each answer is scored from 0 to 3. By adding the individual scores, separate anxiety and depression scales are formed. The values are interpreted as: 0 to 7, inconspicuous; 8 to 10, suspect; >10, conspicuous.
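To make the scoring rules above concrete, here is a minimal sketch of how the SWE sum score and a HADS subscale with its interpretation bands could be computed; the item values are hypothetical and the function names are ours.

```python
def swe_score(items):
    """German General Self-Efficacy Scale (SWE): 10 items rated 1-4,
    summed to a total between 10 and 40."""
    assert len(items) == 10 and all(1 <= x <= 4 for x in items)
    return sum(items)

def hads_subscale(items):
    """One HADS subdomain (anxiety or depression): 7 items scored 0-3.
    Interpretation: 0-7 inconspicuous, 8-10 suspect, >10 conspicuous."""
    assert len(items) == 7 and all(0 <= x <= 3 for x in items)
    total = sum(items)
    if total <= 7:
        band = "inconspicuous"
    elif total <= 10:
        band = "suspect"
    else:
        band = "conspicuous"
    return total, band

print(swe_score([3, 4, 2, 3, 3, 4, 2, 3, 3, 4]))  # 31
print(hads_subscale([1, 2, 1, 0, 2, 1, 2]))       # (9, 'suspect')
```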
Intervention

The therapeutic intervention consisted of a 24-week holistically oriented exercise training based on "Kyusho Jitsu," the art of vital points. The training was performed twice a week for 90 minutes per session and included several aspects of martial arts, self-defense, and pain cognition, plus stretching and physical invigoration. Kyusho Jitsu uses different techniques from Tai Chi and Qi Gong, such as breathing exercises and meditation, to improve mental health. All training sessions were specially designed for post-treatment breast cancer patients and focused on providing a positive environment for the participants.

The training was split into 2 main parts. The first part was a physical training session beginning with a warm-up phase (about 45 minutes). It consisted of different physical and psychological elements: coordination, mobilization (with an upper body focus), strength and endurance, but also self-efficacy and dependence training for psychological stabilization and the handling of fear (such as blindly walking through the gym). Every training session started and ended with a short round of feedback about feelings, thoughts, and whether the session was considered a personal success.

The second main part was divided into 3 phases: Kyusho Jitsu training (20 minutes), Katha (a continuous fight sequence of approximately 5 minutes), and meditation (15 minutes). In the first phase, the Kyusho Jitsu training, the patients learned new elements (in the group) and reviewed previously learned movement patterns. The exercise time varied for every individual, and when problems arose regarding an individual's physical capability, variations on the movement or exercise were provided. The Kyusho Jitsu training was also accompanied by Tai Chi breathing exercises. In the second phase, the fighting sequence was performed as a group to relaxing music; the goal was to learn one complete Katha by the end of the study. The third and last phase was meditation for relaxing and strengthening the vital points (based on the body "meridians" of traditional Chinese medicine). The training took place at the German Sport University Cologne and was supervised by sport scientists and professional Kyusho Jitsu instructors, who built up the movement patterns from simple to complex routines. During the 24-week training period, the control group received no intervention.

Statistical Analysis

Patient characteristics were described using mean ± standard deviation (SD). Potential baseline differences between groups in age, time since diagnosis, and cancer treatments (surgery, chemotherapy, radiation, hormone treatment, antibodies) were analyzed using Mann-Whitney U tests and chi-square tests, as appropriate. Outcome results were analyzed per protocol and presented as mean ± SD and median. For the between-group comparison of the deltas we used the Mann-Whitney U test, and for the within-group comparison from baseline to 6 months the Wilcoxon test, with a 2-sided significance level of α = 5%. A P-value ≤ .05 was considered statistically significant. Furthermore, missing outcome data were imputed by Last Observation Carried Forward (LOCF) for an intention-to-treat analysis. All analyses were done using SPSS Statistics, Version 27 (IBM Corp., Armonk, NY, USA).
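The analysis plan above (LOCF imputation, within-group Wilcoxon signed-rank tests, between-group Mann-Whitney U tests on the deltas) was run in SPSS; the sketch below shows the same steps in Python with SciPy and pandas. The data frame and its values are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu, wilcoxon

# Hypothetical wide-format scores: one row per patient, baseline (t0)
# and 6-month (t2) values; NaN marks a dropout at t2.
df = pd.DataFrame({
    "group": ["intervention"] * 5 + ["control"] * 5,
    "t0": [28, 31, 25, 30, 27, 29, 26, 32, 28, 30],
    "t2": [33, 35, np.nan, 34, 31, 30, 27, np.nan, 29, 31],
})

# Last Observation Carried Forward: a missing t2 is replaced by t0.
df["t2_locf"] = df["t2"].fillna(df["t0"])

# Within-group change from baseline (Wilcoxon signed-rank test).
for name, sub in df.groupby("group"):
    stat, p = wilcoxon(sub["t0"], sub["t2_locf"])
    print(f"{name}: within-group p = {p:.3f}")

# Between-group comparison of the deltas (Mann-Whitney U test).
df["delta"] = df["t2_locf"] - df["t0"]
u, p = mannwhitneyu(df.loc[df.group == "intervention", "delta"],
                    df.loc[df.group == "control", "delta"])
print(f"between-group p = {p:.3f}")
```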
Results

Between June 2014 and September 2014, 60 breast cancer survivors were enrolled in the study. However, because 9 patients did not attend the first assessment session, a total of 51 patients were randomized, with 30 patients in the intervention group and 21 patients in the control group (Figure 1). Both groups were comparable with regard to the most relevant clinical and socio-demographic data. All patients had completed medical treatment within the 6 months before inclusion in the study. At study entry, the time since diagnosis of breast cancer was 48.3 ± 44.4 months in the intervention group and 39.8 ± 34.5 months in the control group (Table 1). During the study, some patients from both the intervention and control groups dropped out for personal or medical reasons: there were 14 dropouts from the intervention group and 7 from the control group. No adverse events were observed. The average training participation of the intervention group was 67%.

The results of the between-group and within-group comparisons from baseline (t0) to the post-intervention (t2) assessment of self-efficacy, distress, fear, and depression are presented in Table 2. The results of the assessment after 12 weeks are not presented in this publication because an insufficient number of patients completed that assessment.

Self-efficacy

Analysis of the original self-efficacy (SWE) data showed no significant difference between the groups. There was a significant improvement in the overall self-efficacy score within the intervention group from baseline to 6 months (P = .014), while the score remained unchanged within the control group.

Distress

The inter-group comparison showed a significant improvement in the subscale "joy" (P = .018), favoring the intervention group. There were no further significant differences between the groups. Analysis of the intra-group comparison of the original data showed several significant changes from baseline to 6 months. There was a significant reduction in the subscale "worries" in both the intervention (P = .006) and control (P = .019) groups. Neither group showed a relevant change in the subscale "tension." However, there was a significant decrease in "demands" in the control group (P = .019). Both groups showed a significant decrease in the overall score (P = .005).

Fear and Depression

From baseline to the 6-month assessment there were no significant differences between the groups regarding the subscales of fear and depression or the overall score. Analysis of the intra-group comparison showed significant improvements within the intervention group in the "fear" subscale (P = .009) and the overall score (P = .043), both of which decreased from baseline to 6 months. There were no significant changes within the control group.

Discussion

At the start of this randomized controlled, 2-armed intervention pilot study it was hypothesized that a 24-week, holistically oriented training schedule borrowed from the "Kyusho Jitsu" martial art would have a positive effect on psychological well-being in the face of a life-threatening disease such as breast cancer. Feasibility as the primary outcome, as well as secondary outcomes such as quality of life and the level of physical activity, was published elsewhere. 27 Our findings showed that a Kyusho Jitsu intervention with breast cancer survivors is feasible 27 and improved several outcomes for the participants. In addition, we wanted to explore the influences on self-efficacy, distress, fear, and depression.

The Kyusho Jitsu intervention produced a significant rise in the self-efficacy score of the intervention group from baseline to 6 months (P = .014), though the difference was not significant when compared to the control group. This result corroborates the findings of Yeh et al, 10 who found that Tai Chi improved not only self-efficacy and social support but overall empowerment, with additional gains such as an internal locus of control, self-awareness, and stress management.
Some studies have shown that exercise reduces stress, anxiety, and depression among adults, 32,33 but to our knowledge there are no studies investigating the influence of Kyusho Jitsu on distress. Galantino et al 34 reported increased relaxation, reduced stress, and enhanced sleep quality and duration in breast cancer survivors practicing Tai Chi. Our findings only showed a significant difference for the subscale "joy" (P = .018), favoring the intervention group. The differences in the other subscales between the groups were not significant, which is why we cannot conclude that the intervention is effective against overall distress. Nevertheless, both groups significantly reduced their scores for "worries" and the overall score. We assume that the increasing time since diagnosis and therapy led to decreasing levels of distress. 35

Wipfli et al 17 report that exercise interventions support the alleviation of anxiety. Our findings support this by showing a significant reduction in the fear scale and the overall scale from baseline to 6 months in the Kyusho Jitsu intervention group. Furthermore, our findings support those of Galantino et al, 34 who tested Tai Chi for the well-being of breast cancer survivors and found a significant improvement from baseline to follow-up in the HADS anxiety scale. 34 In our study we did not find any significant differences between the groups regarding the HADS; however, we found a significant reduction in the anxiety subscale from baseline to 6 months (P = .009) within the intervention group. This might be due to the improved coping mechanisms learned in the Kyusho Jitsu training routine. Kyusho Jitsu could thus be an effective method to sustainably lower the level of anxiety in post-care breast cancer patients. Unfortunately, we found no significant improvements in the depressive subscale. Therefore, it could be posited that Kyusho Jitsu neither improves nor impairs depressive symptomatology.

Regarding the execution of the study, on the positive side, this was the first randomized controlled, supervised martial arts study in this setting with breast cancer patients in follow-up care. The group was homogeneous (age, type of therapy). Furthermore, the group had a good group dynamic, which is why, after completing the study, 11 participants actually joined a Kyusho Jitsu club. No adverse events or negative physical side effects were observed. However, this study was initially planned as a cross-over design after 24 weeks; due to the high number of dropouts in both groups on the one hand, and the high interest of the eleven participants of the intervention group who ultimately joined a Kyusho Jitsu club on the other, a cross-over design was not realized. Furthermore, we had a low number of cases (51), which meant a small amount of data was gathered; this made the results susceptible to error, in that small variations will have had an impact on the results. Also, because of the small sample size it was not possible to do further analyses regarding, for example, weight status, type of diagnosis, or type of treatment in terms of depression symptoms. Additionally, the exercise intervention was hard to systemize due to the individualized nature of the program itself.

Conclusion

To summarize, our prospective, randomized controlled, 2-armed intervention study did not show differences between the groups large enough to be considered effective.
However, the within-group comparison in the intervention group may suggest that specialized martial arts training can support psychological rehabilitation and may be beneficial in preventing distress, improving self-efficacy, and reducing fear. Further studies are needed to confirm these findings and to investigate the long-term effects of this training method.

Author Contributions

As the principal investigator, F.B. was responsible for the study concept and design. T.N., J.S., and F.B. recruited participants and collected data. J.L.S., M.S., C.-A.M., and F.B. performed the statistical analysis and interpretation of the data. J.L.S., M.S., C.-A.M., and F.B. drafted the manuscript. All authors approved the final manuscript.
PD-L1 (SP142) Expression in Primary and Recurrent/Metastatic Triple-Negative Breast Cancers and Its Clinicopathological Significance

Purpose: The programmed death-ligand 1 (PD-L1) SP142 assay identifies patients with triple-negative breast cancer (TNBC) who are most likely to respond to the anti-PD-L1 agent atezolizumab. We aimed to compare PD-L1 (SP142) expression between primary and recurrent/metastatic TNBCs and elucidate the clinicopathological features associated with its expression.

Materials and Methods: Primary and recurrent/metastatic TNBCs tested with PD-L1 (SP142) were collected, and clinicopathological information for these cases was obtained through a review of slides and medical records.

Results: PD-L1 (SP142) positivity was observed in 50.9% (144/283) of primary tumors and 37.8% (31/82) of recurrent/metastatic TNBCs, a significant difference. The recurrent or metastatic site was associated with PD-L1 positivity, with high PD-L1 positivity in the lung, breast, and soft tissues, and low positivity in the bone, skin, liver, and brain. When comparing PD-L1 expression between primary and matched recurrent/metastatic TNBCs using 55 paired samples, 20 cases (36.4%) showed discordance; 10 cases revealed positive conversion and another 10 cases revealed negative conversion during metastatic progression. In primary TNBCs, PD-L1 expression was associated with a higher histologic grade, lower T category, pushing border, and higher tumor-infiltrating lymphocyte infiltration. In survival analyses, PD-L1 positivity, especially high positivity, was found to be associated with a favorable prognosis.

Conclusion: PD-L1 (SP142) expression was lower in recurrent/metastatic TNBCs, and a substantial number of cases showed discordance in its expression between primary and recurrent/metastatic sites, suggesting that multiple sites may need to be tested for PD-L1 (SP142) when considering atezolizumab therapy. PD-L1 (SP142)-positive TNBCs seem to be associated with favorable clinical outcomes.

Introduction

Triple-negative breast cancers (TNBCs), defined as breast cancers that are negative for estrogen receptor, progesterone receptor, and human epidermal growth factor receptor 2 (HER2), are known to be aggressive and to show poor clinical outcomes [1]. As patients with TNBC cannot benefit from endocrine therapy or HER2-targeted therapy, other therapeutic agents, including poly adenosine diphosphate-ribose polymerase inhibitors such as olaparib [2] and talazoparib [3], and immunotherapy have been developed in attempts to treat TNBC patients.

Generally, breast cancer is known to be a less immunogenic tumor with a low mutational burden [4]. However, the mutational load is higher in TNBCs than in non-TNBCs [5]. PD-L1 expression is also higher, and more tumor-infiltrating lymphocytes (TILs) are found in TNBCs [6,7]. Tumor mutational burden, TILs, and PD-L1 expression are predictive biomarkers of immune checkpoint inhibitor therapy; thus, immunotherapy has been considered in TNBCs [8].
The programmed death 1 (PD-1) receptor is an immune-inhibitory receptor expressed on immune cells (ICs), including activated T cells, B cells, and myeloid cells, and acts by binding to PD-L1 within the tumor immune microenvironment [9,10]. Pembrolizumab (a PD-1 inhibitor) and atezolizumab (a PD-L1 inhibitor) prevent the interaction between PD-1 and PD-L1, resulting in the reversal of T cell suppression. They are immune checkpoint inhibitors that have been approved for the treatment of locally advanced or metastatic TNBCs through the Keynote-355 [11] and IMpassion130 [12] trials. These trials also proved the value of PD-L1 as a predictive biomarker for the efficacy of pembrolizumab or atezolizumab in locally advanced or metastatic TNBC patients, although different antibodies and scoring systems are required. For pembrolizumab, the PD-L1 IHC 22C3 pharmDx assay is used, and a combined positive score of 10 or more is considered positive. For atezolizumab, on the other hand, the VENTANA PD-L1 (SP142) assay is used, and an IC score of 1% serves as the cutoff.

For the samples to be tested, both primary and metastatic tumor samples have been used to evaluate PD-L1 status. However, the IMpassion130 trial and some other studies have shown that PD-L1 IC positivity is higher in primary tumors than in metastatic tumors. In addition, PD-L1 positivity in metastatic TNBCs has been found to vary depending on the site of metastasis [12][13][14]. If these findings are consistent, they can have important clinical implications for the selection of samples to be tested. Moreover, only a few studies have compared PD-L1 expression between primary TNBCs and their matched metastases. Most of them included not only TNBCs but also other subtypes, and even those that included only TNBCs evaluated a limited number (up to 45 cases) of paired samples [13,[15][16][17][18]].

Thus, in this study, we aimed to compare PD-L1 (SP142) expression between primary and recurrent/metastatic TNBCs and evaluate PD-L1 positivity by site of recurrence or metastasis using a large number of cases. In particular, we compared PD-L1 (SP142) expression between paired primary and metastatic TNBC samples. We also attempted to elucidate the clinicopathological features associated with PD-L1 (SP142) expression and to determine the prognostic value of PD-L1 (SP142) expression in TNBCs.

Materials and Methods

Patients and samples

Primary and recurrent/metastatic TNBCs tested for PD-L1 (SP142) at Seoul National University Bundang Hospital between 2019 and 2022 were collected. In total, 310 patients (365 samples) were included in this study. Two hundred eighty-three samples were from primary TNBCs, and 82 samples were from recurrent or metastatic TNBCs. One hundred fifty-seven samples were from surgical resection specimens, and 208 samples were from core needle biopsy specimens. Fifty-five primary and matched recurrent/metastatic samples were used for the paired comparisons.
Clinicopathological information

Clinicopathological information for the selected cases was obtained through a review of the slides and medical records. The following information was collected: age at diagnosis, sex, histologic type, histologic grade, primary tumor size, T category, N category, lymphovascular invasion, perineural invasion, tumor border, TILs, immunohistochemical results for p53, cytokeratin 5/6, and epidermal growth factor receptor, site of recurrence or metastasis, and type of systemic therapy. In patients who received neoadjuvant chemotherapy (NAC), the primary tumor size was measured from imaging studies prior to treatment, and clinical staging for the T and N categories was applied. In patients who underwent immediate surgery without NAC, the microscopic size of the tumor and pathological staging for the T and N categories were used. TILs were scored based on the tutorial and reference images created by the International Working Group for TILs in breast cancer and were categorized as < 10%, ≥ 10% and < 50%, and ≥ 50%. The clinicopathological characteristics of the patients included in this study are summarized in Table 1.

Clinical follow-up data were also collected for each patient. In patients who received NAC before surgery, recurrence-free survival was defined as the period from the start of NAC to the date of clinical detection of recurrence, and cancer-specific survival was defined as the period from the start of NAC to the date of death due to TNBC. The date of surgery was used for patients who had undergone upfront surgery. In cases where recurrence or death did not occur, the follow-up time was calculated from the date of the start of NAC or surgery to the date of the last event-free follow-up.

Immunohistochemical staining for PD-L1

Immunohistochemical (IHC) staining for PD-L1 (SP142) was performed on 4-μm-thick sections from formalin-fixed, paraffin-embedded tissue blocks using the OptiView DAB IHC Detection Kit (Ventana Medical Systems, Tucson, AZ) and the OptiView Amplification Kit (Ventana Medical Systems) on a BenchMark ULTRA platform (Ventana Medical Systems) according to the manufacturer's instructions. When only biopsy specimens were available, IHC was performed on the biopsy sections. For surgically resected samples, one representative tumor block was chosen for PD-L1 staining. To exclude the effect of NAC on the expression of PD-L1, pre-NAC biopsy specimens were used in patients who underwent NAC.

Interpretation of PD-L1 (SP142) staining

Interpretation of the PD-L1 SP142 assay was based on Ventana's interpretation guide for TNBC. PD-L1 staining of tumor-infiltrating ICs was scored. As stated in the interpretation guide, lymphocytes, macrophages, and cells with dendritic or reticular morphology in the intratumoral and contiguous peritumoral stroma are regarded as tumor-infiltrating ICs. The IC score was calculated as the proportion of the tumor area occupied by PD-L1-stained ICs of any intensity. A specimen was considered positive for PD-L1 (SP142) if it showed an IC score of ≥ 1%. In accordance with the IMpassion130 trial [12], specimens with IC scores of ≥ 5% were considered to have high PD-L1 expression.
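The scoring rule above reduces to two thresholds on the IC score. As a plain restatement, here is a minimal sketch of the classification logic; the function name and example scores are ours, not part of the assay documentation.

```python
def pd_l1_ic_category(ic_percent: float) -> str:
    """Classify a PD-L1 (SP142) immune-cell (IC) score, i.e. the percentage
    of the tumor area occupied by PD-L1-stained ICs of any intensity.
    >= 1% is positive; >= 5% is considered high (IMpassion130 cutoff)."""
    if ic_percent >= 5.0:
        return "positive, high PD-L1 expression (IC >= 5%)"
    if ic_percent >= 1.0:
        return "positive (IC >= 1%)"
    return "negative (IC < 1%)"

for score in (0.5, 1.0, 4.0, 7.5):
    print(f"IC {score}% -> {pd_l1_ic_category(score)}")
```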
Statistical analyses

Statistical analyses were performed using IBM SPSS Statistics ver. 21.0 (IBM Corp., Armonk, NY). The chi-square test or Fisher's exact test was used to compare PD-L1 positivity between primary and recurrent/metastatic tumors or between different recurrent or metastatic sites, and to evaluate the clinicopathological features of tumors associated with PD-L1 expression. Differences in TIL levels in recurrent/metastatic tumors were analyzed by the Kruskal-Wallis test among three groups and by the Mann-Whitney U test between two groups. Corrections for multiple testing were made by the Bonferroni method, and adjusted p-values were calculated. The Wilcoxon signed-rank test was used to evaluate the change in PD-L1 expression in paired primary and recurrent/metastatic samples.

Recurrence-free survival and cancer-specific survival were evaluated using survival analysis. Survival curves were drawn using the Kaplan-Meier method, and p-values were calculated using the log-rank test. The Cox proportional hazards model was used for multivariate analysis with the backward stepwise selection method. Hazard ratios (HRs) and 95% confidence intervals (CIs) were calculated for each variable. All p-values were two-sided, and p-values less than 0.05 were considered statistically significant.
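The survival analyses described above were run in SPSS; as an illustration of the same pipeline, here is a minimal sketch in Python using the lifelines package. The data frame, column names, and values are hypothetical, and lifelines has no built-in backward stepwise selection, so in practice variables would be removed from the Cox model manually by p-value.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient data: follow-up time in years and
# event = 1 if recurrence was observed, 0 if censored.
df = pd.DataFrame({
    "time": [1.2, 3.4, 2.1, 0.8, 4.0, 2.7, 1.9, 3.1],
    "event": [1, 0, 1, 1, 0, 0, 1, 0],
    "pdl1_positive": [1, 1, 0, 0, 1, 1, 0, 0],
})

pos = df[df.pdl1_positive == 1]
neg = df[df.pdl1_positive == 0]

# Kaplan-Meier curves per PD-L1 group.
km_pos = KaplanMeierFitter().fit(pos["time"], pos["event"],
                                 label="PD-L1 positive")
km_neg = KaplanMeierFitter().fit(neg["time"], neg["event"],
                                 label="PD-L1 negative")

# Log-rank comparison of the two curves.
lr = logrank_test(pos["time"], neg["time"], pos["event"], neg["event"])
print("log-rank p =", lr.p_value)

# Cox proportional hazards model; HRs and 95% CIs appear in the summary.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()
```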
Results

We also examined TIL levels in the 82 recurrent or metastatic samples. TIL levels were variable but differed according to the recurrent or metastatic site. Median TIL levels were high in the lung (30%), breast (10%), and soft tissue (10%), intermediate in the lymph node (5%), and low in the bone, brain, liver, and skin (1% in all) (Table 2). TIL levels differed among the three groups (high, intermediate, and low TIL groups; p=0.002, Kruskal-Wallis test), with significant differences between the intermediate and low TIL groups (adjusted p=0.045, Mann-Whitney U test) as well as between the high and low TIL groups (adjusted p < 0.001, Mann-Whitney U test). Collectively, TIL levels (< 10% vs. ≥ 10%) showed a close relationship with PD-L1 positivity in these recurrent or metastatic samples (9.1% vs. 71.1%; p < 0.001).

Paired comparison of PD-L1 (SP142) expression in primary and matched recurrent/metastatic TNBCs

A paired comparison of PD-L1 expression between primary and matched recurrent/metastatic TNBCs was performed in 55 cases. When the level of PD-L1 expression was compared as a continuous variable, no significant difference was found (p=0.560, Wilcoxon signed-rank test). In the paired analysis, twelve cases showed high PD-L1 expression in the primary tumor, and five (41.7%) of them showed negative conversion in metastasis. The remaining seven cases were positive for PD-L1 at the metastatic sites, five of which still showed high PD-L1 expression.

Prognostic significance of PD-L1 expression in primary TNBCs

Among the 283 primary TNBCs, 264 were included in the survival analyses. Of the 19 excluded cases, 17 presented with synchronous metastasis, one could not be resected due to deterioration of the patient's condition, and another was non-operable at the time of presentation. In the 264 cases, the median follow-up was 2.3 years (range, 0.4 to 18.2 years). Of these, 156 were treated with NAC before surgery; among them, 27 cases presented with recurrence afterwards, and four patients died. The remaining 108 patients underwent upfront surgery without NAC, 34 of whom experienced recurrence and six of whom died.

The Kaplan-Meier survival curve for recurrence-free survival showed increased survival time in the PD-L1-positive group compared with the PD-L1-negative group (p=0.049, log-rank test) (Fig. 3A). In particular, PD-L1-high tumors (≥ 5% IC) showed favorable survival compared to non-PD-L1-high tumors (p=0.005, log-rank test) (Fig. 3B). The survival curve for cancer-specific survival tended to show increased survival in the PD-L1-positive group, but the difference was not statistically significant (p=0.140, log-rank test).

Discussion

Our study demonstrated that primary TNBCs were more likely to be PD-L1 (SP142)-positive than recurrent/metastatic TNBCs. In this study, 50.9% of primary TNBCs and 36.0% of metastatic TNBCs were PD-L1-positive. This result is concordant with previous studies that showed higher PD-L1 IC positivity in primary TNBCs (44.0%-62.0%) compared to metastatic ones (31.0%-42.2%) [12][13][14]. PD-L1 positivity in TNBC has been reported to be associated with increased stromal TILs [13]. Previous studies have shown that there are some differences in the immune microenvironment between primary and metastatic breast cancer, with lower TILs in metastatic lesions [19][20][21], which might have caused the difference.

It is well known that PD-L1 expression in tumors can be spatially heterogeneous, and it has been reported that PD-L1 (SP142) IC positivity in surgical samples is statistically higher than in biopsied samples [22]. When we analyzed the specimen types of the primary and recurrent/metastatic samples, the primary samples comprised more biopsied samples than the recurrent/metastatic samples did (61.1% vs. 42.7%). Therefore, the lower PD-L1 positivity in metastatic samples might not be related to the specimen type. Another study, using 30 matched paired primary and metastatic TNBC samples, suggested that the time of sample collection may influence PD-L1 positivity, with PD-L1 status agreement being higher in synchronously collected cases (80%) than in asynchronously collected cases (75%) [17]. Of our 55 paired samples, discordance in PD-L1 status was 40% in synchronous metastases and 36% in metachronous metastases, suggesting that the timing of sample collection may not have an influence on changes in PD-L1 status.

In the present study, the recurrent or metastatic site was associated with PD-L1 positivity, with high PD-L1 positivity in the breast, lung, and soft tissue, and low PD-L1 positivity in the bone, brain, liver, and skin. Although PD-L1 expression at other sites varied among studies, low positivity in the liver has consistently been noted, ranging from 13% to 17.4% [12][13][14]. In our study, similar results were found, with 11.1% of liver samples being PD-L1-positive. We wondered whether the varying distribution of TILs in metastatic TNBC across organs could explain the observed differences in PD-L1 expression. In one relevant study on TNBCs, TIL levels were found to be highest in the lung and lowest in the skin, although the difference was not statistically significant [23], and in another study on breast cancer including other subtypes, brain metastases had a lower amount of TILs compared to metastatic breast cancers at other locations [20]. Our study showed similar results to these previous studies, with high levels of TILs in the lung and low levels of TILs in the bone, brain, liver, and skin. Thus, it is possible that the different PD-L1 positivity according to metastatic site is related to TIL infiltration levels at the metastatic sites.
[13] have shown a discordance in PD-L1 (SP142) expression in four out of eight paired primary and second-site TNBCs, although this result was limited by the small number of cases. In their study, all discordant cases showed negative conversion. However, it should be noted that of these four cases, three were from the liver and one was from bone, sites that are both known to show low positivity for PD-L1 [13]. In one systematic review on breast cancer, discordance in PD-L1 expression in paired primary and metastatic tumors was reported to be 39.5%, and the direction of change was more commonly from PD-L1-positive primary tumor to PD-L1-negative metastasis [24]. Recently, Miyakoshi et al. [18] reported the concordance of PD-L1 status between primary tumors and metastases in 160 patients with TNBC, including 45 paired samples. In their study, 16 of the 45 paired samples (35.6%) showed discordance for PD-L1 (SP142), with seven cases of positive conversion and nine cases of negative conversion in metastasis [18], which was consistent with the findings of our study. We used 55 paired samples for matched comparison, and there was a discordance of PD-L1 (SP142) expression in 20 out of 55 cases (36.4%). In addition to the 10 negatively converted cases, there were 10 positively converted cases. These results suggest that PD-L1 status in primary TNBC samples cannot reliably predict its status in recurrent or metastatic samples. We also evaluated changes in PD-L1 status according to the recurrent/metastatic sites, but the number of cases was too small to observe a trend of change. However, it is worth noting that more than half of the recurrent samples in the breast were positively converted, and all three metastatic samples in the skin were negatively converted. The type of recurrent or metastatic site and its immune microenvironment appear to be related to PD-L1 status in recurrent or metastatic tumors.

In the IMpassion130 trial, the addition of atezolizumab showed a clinical benefit in PD-L1 IC-positive TNBC cases regardless of whether the sample was from the primary or a metastatic tumor [12]. Taking these findings into account, the question arises of which tissue should be used for evaluation of PD-L1 (SP142) expression. One expert committee recommended that the primary tumor should be evaluated first for PD-L1 expression and, if the result is negative, at least one metastatic tumor should be tested [25]. They also mentioned that priority in choosing the site for PD-L1 testing should be given according to the expected positivity [25]. Based on the post-hoc analysis of the IMpassion130 trial, in which PD-L1 positivity was highest in the lymph nodes (51%), they designated the lymph node as the sample of choice for PD-L1 evaluation in metastasis [25]. However, only 31.8% of the lymph node samples were positive for PD-L1 in our study. This may be due to differences in delimiting the tumor area and stroma in the lymph nodes, as they are rich in ICs. Despite this difference, we agree that multiple sites of the tumor may need to be tested to maximize the patients' chances of receiving immunotherapy. However, inhomogeneous PD-L1 expression at different metastatic sites may be associated with varied responses to immune checkpoint inhibitors, and further studies are needed.
In the present study, PD-L1 expression was associated with invasive carcinoma of no special type, higher histologic grade, lower T category, pushing border, and a higher percentage of TILs. In addition, high PD-L1 expression was associated with longer recurrence-free survival. The prognostic value of PD-L1 expression in breast cancer patients remains controversial. Some studies have reported that PD-L1 expression in non-metastatic TNBC leads to better recurrence-free survival and overall survival, while another study showed that its expression in TNBC was an independent poor prognostic factor for overall survival [26][27][28]. The reason why PD-L1 expression was related to better prognosis in this study remains unclear. One possible explanation is that PD-L1 expression was related to some favorable features in our study, such as lower T category and a higher percentage of TILs. In particular, TILs are a potential prognostic factor in TNBC, as a high percentage of TILs is associated with a better response to NAC and improved survival in early-stage TNBCs [29,30].

In summary, our study showed that PD-L1 (SP142) expression is lower in recurrent/metastatic TNBCs and that there is a discordance in PD-L1 (SP142) expression between primary and recurrent/metastatic TNBCs, suggesting that multiple sites may need to be tested for PD-L1 when atezolizumab therapy is being considered. PD-L1 (SP142) positivity, especially high PD-L1 positivity, appears to be associated with favorable clinical outcomes in TNBCs.

Fig. 2. Representative cases of positive and negative conversion of programmed death-ligand 1 (PD-L1) (SP142) expression in the metastatic sites. (A, B) A case with negative conversion: the primary tumor (A) shows PD-L1 positivity, while the metastatic tumor to the skin (B) shows PD-L1 negativity. (C, D) A case showing positive conversion: the primary tumor (C) is negative for PD-L1 and the metastatic tumor in the chest wall (D) is positive for PD-L1.

Table 1. Clinicopathological characteristics of the patients.

Table 3. Change of PD-L1 (SP142) status by recurrent or metastatic site in paired comparison of primary and matched recurrent/metastatic samples.

Table 5. Univariate and multivariate analyses of recurrence-free survival. p-values are calculated by the Cox proportional hazards model using the backward stepwise selection method. CI, confidence interval; HR, hazard ratio; IC, immune cell; IC-NST, invasive carcinoma of no special type; IC-ST, special types of invasive carcinoma; LVI, lymphovascular invasion; PD-L1, programmed death-ligand 1; PNI, perineural invasion; TIL, tumor-infiltrating lymphocyte.
2023-12-16T12:45:57.517Z
2023-12-12T00:00:00.000
{ "year": 2023, "sha1": "b8594015a3122eb0180dcd77c552311602d0f1ad", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "875b3e004608dfa56d58987a40d289947f6f75de", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
265498397
pes2o/s2orc
v3-fos-license
SEB: a computational tool for symbolic derivation of the small-angle scattering from complex composite structures

Scattering Equation Builder (SEB) is a C++ library for symbolically deriving form factors for composite structures built by linking sub-units to each other.

Analysis of small angle scattering (SAS) data requires intensive modelling to infer and characterize the structures present in a sample. This iterative improvement of models is a time consuming process. Here we present the Scattering Equation Builder (SEB), a C++ library that derives exact analytic expressions for the form factor of complex composite structures. The user writes a small program that specifies how sub-units should be linked to form a composite structure and calls SEB to obtain an expression for the form factor. SEB supports e.g. Gaussian polymer chains and loops, thin rods and circles, solid spheres, spherical shells and cylinders, and many different options for how these can be linked together. In the present paper, we present the formalism behind SEB and give simple case studies, such as block-copolymers with different types of linkage, and more complex examples, such as a random walk model of 100 linked sub-units, dendrimers, polymers and rods attached to the surfaces of geometric objects, and finally the scattering from a linear chain of 5 stars, where each star is built from four diblock copolymers. These examples illustrate how SEB can be used to develop complex models and hence reduce the cost of analyzing SAS data.

Introduction

Figure 1 SEB workflow: 1) defining a structure, 2) implementing the structure in SEB, 3) obtaining the analytic form factor equation, and 4) evaluating and plotting the form factor for given structural parameters.

Small angle scattering (SAS) is an ideal technique to characterize the size, shape and orientation of nano scale structures in a sample (Guinier et al., 1955; Feigin et al., 1987). In order to infer the structures present in a sample, SAS scattering profiles are often analyzed by fitting models (Pedersen, 1997). Thus SAS data analysis is an iterative process where models for structures are proposed, their corresponding scattering profiles are mathematically derived, and the model scattering profiles are fitted to the experimental scattering profiles. If the fits are poor, the models have to be improved and the process starts over, until a good model has been developed, that is, a model which provides an acceptable fit of the experimental data and is thus the most likely candidate for the structures present in the sample.

SAS scattering spectra contain information about the nano scale structure, but not the detailed atomic scale structure, hence relatively simple geometric models are often used when analyzing SAS data. Fortunately, the scattering from a large number of models has already been derived, see e.g. (Pedersen, 1997). In the case where e.g. objects of similar shape are dispersed in a liquid, the problem of modelling the scattering from a sample can be split into 1) what is the shape of the objects, and 2) what are the spatial correlations of objects due to their mutual interactions (Pedersen, 1997). The first problem is described by the form factor while the latter is described by the structure factor, and in dilute samples the scattering is dominated by the form factor.
Here we present the Scattering Equation Builder (SEB), which is a C++ software library that analytically derives symbolic expressions for the form factor of composite structures built by linking an arbitrary number of sub-units together. Our aim with SEB has been the ability to computationally efficiently derive form factor expressions for arbitrarily complex branched structures. The expressions can be exported in a variety of formats, allowing them to be imported into e.g. C, C++, or Python programs, included in LaTeX documents, or imported into Matlab or Mathematica for further analysis. Finally, if the user specifies the length scales of the various sub-units, SEB can also evaluate the scattering equations to generate the corresponding scattering profile.

Fig. 2 illustrates the sub-units that we have implemented in this initial release. The figure caption states which reference points we have implemented. These sub-units, together with the large number of linkage options offered by the reference points, define a large family of structures for which SEB can analytically derive scattering expressions.

SEB has been written in Object Oriented C++, which allows the expert user to expand SEB, e.g. with additional sub-units and/or linkage options, with relative ease. This choice also makes it possible to embed SEB within other software programs. SEB is Open Source and is freely available for download from GitHub at (Jarrett & Svaneborg, 2023a). SEB depends on the GiNaC library (Bauer et al., 2002) internally for representing symbolic expressions, and on the GNU Scientific Library (Gough, 2009) for evaluating certain special functions.

The paper is structured as follows: in Sect. 2 we briefly introduce the formalism and the logic behind SEB. SEB's design and implementation are presented in Sect. 3. Finally, we present four advanced examples in Sect. 4. Sect. 5 wraps up the article with a summary and outlook.

Formalism

We regard a composite structure as being created by linking sub-units together. For example, the structure of a semi-flexible polymer can be built by linking a sequence of rods end-to-end to form a linear chain of rods. The structure of a block-copolymer or a star-polymer can be built by linking two or more polymers together at one end. The structure of a di-block copolymer micelle can be built by linking polymers to the surface of a solid sphere representing the core. Here both the polymers and the sphere are sub-units. A bottle-brush polymer structure can be built by linking a number of short polymers to a random point along a long polymer chain.

Common for these example structures is that they are composites made of distinct sub-units linked in specific ways. Sub-units come in two varieties: simple geometric sub-units such as rods and spheres, and sub-units with internal conformational degrees of freedom such as polymers. In the latter case, we need to perform conformational averages when predicting their scattering contributions.
For each type of sub-unit, we define specific reference points on a sub-unit where links can be made. For instance, a linear sub-unit such as a polymer or a rod has two distinct ends. These are points where we can link other sub-units. Each link represents the constraint that a reference point on one sub-unit is colocalized with a reference point on another sub-unit. A sphere can be linked to other sub-units at any random point on its surface. We could also imagine linking at any random point along the contour of the polymer or rod. This illustrates that reference points come in two varieties: specific geometric reference points, such as the ends of a polymer or a rod, or distributed reference points, such as random points on a geometric surface or along a polymer chain. When predicting scattering contributions, we also have to perform averages over distributed reference points. Even with e.g. just a polymer sub-unit, we can link it together in many ways, forming many structures, e.g. block copolymers, star polymers, dendrimers or bottle brush structures, or any combination of these.

To calculate the scattering from a composite structure we utilize the formalism of Svaneborg and Pedersen (Svaneborg & Pedersen, 2012a; Svaneborg & Pedersen, 2012b). The formalism is based on three assumptions: 1) a structure does not contain sub-units that are linked into closed loops, 2) links are completely flexible, and 3) pairs of sub-units are mutually non-interacting. These three assumptions ensure that the internal conformation and orientation of all sub-units are statistically independent. Interactions between different sub-units (3) would for instance create conformational correlations; for example, in dense polymers the excluded volume interactions give rise to correlation hole effects in the scattering (Schweizer & Curro, 1988). When e.g. two rods are linked (2), the joint is flexible and can adopt any angle. If this were not the case, the links would create orientational correlations between the two rods. Finally, if a structure contains loops (1), the closure constraint creates long range orientational and conformational correlations between all the sub-units involved in the loop. When the internal conformation and orientation of all sub-units are statistically independent, the scattering from a composite structure can be factorized in terms of contributions from individual sub-units. No assumptions are made on the internal structure of sub-units, and no additional assumptions or approximations are made. In this sense the formalism is exact. SEB is an implementation of this formalism in C++. Below we introduce SEB and the formalism in more detail.

A sub-unit can have any number of specific and distributed reference points depending on its geometry. To keep track of them, SEB has hard coded labels for each reference point. For example, a polymer sub-unit has two specific reference points labeled "end1" and "end2", while it has one distributed reference point labeled "contour" (see Fig. 3a). Hence with just two polymers "P1" and "P2", we can create three different structures by linking "P1.end2" to "P2.end1", which produces a linear structure, "P1.end2" to "P2.contour", which produces a random 3-functional star structure, or "P1.contour" to "P2.contour", which produces a random 4-functional star structure. Fig.
3bcd illustrates these structures. When calculating scattering from structures with distributed reference points, we need to perform an average over random realizations of the link, hence we will obtain slightly different scattering profiles for these structures. Fig. 3e shows the scattering form factor for these structures. In the Guinier regime we observe that the radius of gyration is largest for the linear structure and smallest for the 4-functional star. At small q values the structures produce the same scattering, since they have the same scattering lengths, whereas for large q values we observe the power law scattering due to the internal random walk structure of the polymer, which is the same for all three structures.

Sub-units

A sub-unit is the building block of a structure. It is typically composed of many individual scatterers grouped together. We make no assumptions about the internal structure of a sub-unit. Here and below we use capital Latin letters to denote sub-units. The scattering contributions of the I'th sub-unit are characterized by the following factors. The form factor is defined as

\[ F_I(q) = \frac{1}{\beta_I^2} \sum_{i} \sum_{j} \beta_i \beta_j \, \frac{\sin(q\, r_{ij})}{q\, r_{ij}}, \tag{1} \]

where r_ij = |R_i − R_j| is the spatial distance between the two scatterers, β_i denotes the excess scattering length of the i'th scatterer, and β_I = Σ_i β_i is the total excess scattering length of the sub-unit. The form factor describes the interference contribution from all pairs of scatterers within the I'th sub-unit. Here and below we will use Greek symbols to denote reference points. For each reference point α, the sub-unit has a corresponding form factor amplitude defined as

\[ A_{I\alpha}(q) = \frac{1}{\beta_I} \sum_{j} \beta_j \, \frac{\sin(q\, r_{j\alpha})}{q\, r_{j\alpha}}, \tag{2} \]

where r_jα = |R_j − R_α| is the spatial distance between the j'th scatterer and the reference point. The amplitude describes the phase difference introduced by the spatial distance between scatterers in a sub-unit and a reference point. For each pair of reference points α, ω, the sub-unit has a corresponding phase factor defined as

\[ \Psi_{I\alpha\omega}(q) = \frac{\sin(q\, r_{\alpha\omega})}{q\, r_{\alpha\omega}}, \tag{3} \]

where r_αω = |R_α − R_ω| is the spatial distance between the two reference points. The phase factor describes the phase difference between the two specified reference points. In these expressions, we have already performed the orientational average; however, an additional average potentially has to be made over internal conformations and/or distributed reference points. Consider, for example, a polymer described by Gaussian chain statistics. For the "end1" form factor amplitude, one has to perform an average over the distribution of distances between "end1" and any scatterer along the chain. For the "end1" to "end2" phase factor, one has to perform an average over the conformations of the polymer chain connecting the two ends. For the "contour" form factor amplitude of a polymer, one has to perform a double average over the random position of the reference point along the chain and any scatterer along the chain. Finally, for the "contour" to "contour" phase factor, one has to average over two random positions of the reference points along the chain as well as the Gaussian statistics of the polymer.

In the special case where distributed reference points (e.g. contour) and scatterers are characterized by the same distribution, such as a homogeneous distribution along the polymer, the averaged expressions for the form factor amplitude and phase factor result in the same expression: the Debye expression for the form factor (Debye, 1947). We refer to Ref. (Svaneborg & Pedersen, 2012a; Svaneborg & Pedersen, 2012b) for the specific expressions.

Figure 4 Illustration of (a) how a polymer and its reference points can be represented diagrammatically, and (bcd) how the different linkage options shown in Fig. 3 are represented.
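Returning to the Gaussian polymer example above: these averages evaluate to well-known closed forms. Quoting the classical results (Debye, 1947; see also Svaneborg & Pedersen, 2012b), with x = q²R_g²,

\[ F(q) = \frac{2(e^{-x} - 1 + x)}{x^2}, \qquad A_{\mathrm{end1}}(q) = A_{\mathrm{end2}}(q) = \frac{1 - e^{-x}}{x}, \qquad \Psi_{\mathrm{end1,end2}}(q) = e^{-x}, \]

while the "contour" form factor amplitude and the "contour"-to-"contour" phase factor both coincide with the Debye form factor F(q), as noted above.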
Figure 5 Library of all the possible diagrams and the corresponding factors to use when deriving scattering equations.

Figure 6 Example structure showing one sub-unit (A) with 3 pendant sub-units (BCD). The sub-units are linked at three reference points (η, δ, and σ). Some scatterers within sub-units are illustrated as well (lower case letters). A few distances between scatterers are illustrated (colored dashed lines), together with their representations in terms of paths going through the structure (colored solid lines).

Figure 7 Example structure built of three polymers linked to the surface of a sphere (top), three spheres linked by their center to the contour of a polymer (bottom), and the generic diagram with the same connectivity (center).

Diagrammatic interpretation

A formal derivation of the general scattering expressions for a composite structure can be found in Refs. (Svaneborg & Pedersen, 2012a; Svaneborg & Pedersen, 2012b). Before stating the general equations, we motivate the formalism with a diagrammatic derivation of the scattering from an example.

To abstract from the concrete internal details of the different sub-units, we illustrate all sub-units by ellipses as shown in Fig. 4. Specific reference points are illustrated as dots on the circumference of the ellipse. Distributed reference points are illustrated as a thick line segment on the circumference of the ellipse to indicate that many points contribute. The total library of possible steps and the factors they contribute are shown in Fig. 5. Diagrammatically, form factors are derived from distances between pairs of scatterers within the same sub-unit, and hence they are illustrated as a line inside the ellipse. The form factor is also scaled by the square excess scattering length of the sub-unit. Form factor amplitudes are derived from distances between scatterers and a reference point, and they are illustrated by a line that starts inside the ellipse and ends on the circumference at the reference point. Form factor amplitudes are scaled by the excess scattering length of the sub-unit. Phase factors describe the phase introduced by the distance between two reference points, and hence are illustrated by a line between the two reference points. Since no scatterers are involved, phase factors do not depend on any excess scattering lengths. Finally, when summing over all pairs of sub-units, we note that form factors are counted only once, however, all interference contributions are counted twice, since both the I, J and J, I paths contribute.

Algorithm

To calculate the form factor of a composite structure, SEB has to account for interference contributions between pairs of scatterers, while also keeping in mind that scatterers are grouped into linked sub-units. Fig. 6 shows three illustrative cases: 1) the l, k scatterers belong to the same sub-unit D, 2) the n, m scatterers belong to directly linked sub-units A, C, and 3) scatterers i, j belong to sub-units B, D that are indirectly connected via sub-unit A.
The first case, interference between all scatterers within the same sub-unit, is described by the form factor of the sub-unit F_D; here and below we suppress the dependency on q for the sake of brevity. In the second case, the interference contribution between A and C depends on (the average of) the vector ∆R = R_n − R_m; however, stepping through the structure we note that

\[ \Delta R = R_n - R_m = (R_n - R_\eta) + (R_\eta - R_m), \]

where each parenthesis corresponds to an intra-sub-unit step. Since we have assumed that sub-units are uncorrelated, the spatial probability distribution of pair distances between the scatterers, P_AC(R_nη, R_ηm), can be written as a convolution of the two intra-sub-unit pair-distance distributions relative to the common reference point, P_A(∆R_nη) * P_C(∆R_ηm). In Fourier space, that convolution turns into the product of two sub-unit form factor amplitudes, A_Aη A_Cη, both of which are evaluated relative to the common reference point η. This is the resulting interference contribution for case two.

Finally, the third case generalizes this logic. The interference contribution between scatterers i, j depends on (the average of) the vector ∆R = R_i − R_j. We note again that we can use reference points as stepping stones to write

\[ \Delta R = R_i - R_j = (R_i - R_\delta) + (R_\delta - R_\sigma) + (R_\sigma - R_j). \]

Each of the three parentheses describes an intra-sub-unit step. The distribution P_BAD is a convolution of individual sub-unit contributions, which factorizes into a product of three terms. Since the middle step involves two reference points, the corresponding contribution is a phase factor. Thus the interference contribution becomes A_Bδ Ψ_Aδσ A_Dσ for case three.

Hence the algorithm used by SEB for obtaining the scattering from a composite structure is to analyze all possible pairs of scatterers in the same or different sub-units. The form factor is a double sum over all sub-units, where we encounter three possible types of contributions: 1) a form factor for scattering pairs belonging to the same sub-unit; 2) if a pair of sub-units is directly connected, it contributes the product of their form factor amplitudes relative to the common reference point by which they are linked; and 3) if they are indirectly connected, we find the unique path through the structure connecting the two sub-units. This path uses reference points as stepping stones, and it is unique since the structure is assumed to be acyclic. The path contributes a form factor amplitude for the first and final sub-units relative to the first and final reference points in the path, respectively. Furthermore, each sub-unit along the path contributes a phase factor, which is to be calculated relative to the two reference points used to step across that sub-unit.

Example

Figure 8 Diagrams of all the contributions to the form factor of an ABCD structure, where sub-units BCD are linked to sub-unit A. Fig.
7 shows an example of a block-copolymer micelle modelled as three polymers linked to the surface of a spherical core (Pedersen & Gerstenberg, 1996). The figure also shows an example of three spheres linked by their center to a random position along the contour of a polymer chain. This could be a beads-on-a-string model of a surfactant denatured protein (Giehm et al., 2010). In the center of the figure, we show the diagrammatic representation where three sub-units are linked to a central sub-unit. We note that the generic diagram emphasizes the connectivity of the structure and allows us to write down a generic equation for the form factor independent of the specific sub-units involved. In the figure, π denotes the distributed reference point at which the other sub-units are linked.

For the simple example in Fig. 7 we can enumerate all the possible scattering contributions from pairs of scatterers. This is shown in Fig. 8, where the top, middle, and bottom rows correspond to scattering pairs within the same sub-unit, within directly linked sub-units, and between indirectly connected sub-units, respectively. We note that all interference terms are counted twice, since the IJ and JI interferences contribute the same terms. Form factors only contribute once; the reason is that while both the r_ij and r_ji vectors between two scatterers i, j contribute to the form factor, this is already accounted for by eq. (1). Summing all the scattering terms we get the (unnormalized) form factor of the structure.

To finally derive the expression for a block copolymer micelle, we have to substitute the concrete polymer expressions for sub-units BCD, and the sphere expressions for sub-unit A. To instead derive the expression for the beads-on-a-string model, we substitute the concrete sphere expressions for sub-units BCD, and the polymer expressions for sub-unit A. These expressions can be found in Ref. (Svaneborg & Pedersen, 2012b).

When requesting the form factor of a structure in SEB, the user can either obtain a generic structural equation like the one in Fig. 8, or, as is the default, have SEB perform all the sub-unit substitutions and return a form factor equation for the specific choice of sub-units. For more complex structures, enumerating all the potential scattering contributions by hand is a very tedious and error prone process. SEB automates the process of identifying paths and tallying the corresponding factors.

Just as a sub-unit has form factor amplitudes and phase factors, so does a composite structure that we have built out of sub-units. Using the diagrammatic logic above, we can also draw the diagrams for form factor amplitudes of a structure relative to a reference point (not shown). In this case we have to sum over all sub-units in the structure. We find a path from the reference point to the sub-unit. The path contributes a product of phase factors for each sub-unit it traverses, and a form factor amplitude for the last sub-unit along the path relative to the last reference point. To calculate a phase factor of a structure relative to two reference points, we find the path through the structure connecting the reference points. The phase factor of the structure is the product of all the phase factors of sub-units along that path.
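Written out for the micelle at the top of Fig. 7, i.e. a sphere (tagged s) with three polymers (tagged p) tethered at independent random surface points, this tally gives, after normalizing by the square of the total excess scattering length (the subscripts below simply name the reference points involved),

\[ F_{\mathrm{mic}}(q) = \frac{\beta_s^2 F_s + 3\beta_p^2 F_p + 6\,\beta_s \beta_p A_{s,\mathrm{surf}} A_{p,\mathrm{end1}} + 6\,\beta_p^2\, \Psi_{s,\mathrm{surf\,surf}}\, A_{p,\mathrm{end1}}^2}{(\beta_s + 3\beta_p)^2}. \]

The three form factor terms are counted once, the three sphere-polymer pairs and the three polymer-polymer pairs are each counted twice, and each polymer-polymer pair steps across the sphere through a phase factor between two random surface points. For the beads-on-a-string model at the bottom of Fig. 7, the same expression applies with the roles of the sphere and polymer factors interchanged.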
Generalizing the logic above, we can state the general expression for the form factor of a structure of sub-units. For each sub-unit pair I, J we identify the first and final reference points α and ω and the path P(α, ω) through the composite structure that connects them. The scattering interference contribution is then the product of the form factor amplitudes of the first and final sub-units and of all the phase factors of sub-units along the path. The form factor of the composite structure is given by (Svaneborg & Pedersen, 2012a; Svaneborg & Pedersen, 2012b)

\[ F(q) = \frac{1}{\beta^2} \left[ \sum_I \beta_I^2 F_I(q) + \sum_{I \neq J} \beta_I \beta_J\, A_{I\alpha}(q) \left( \prod_{K \in P(\alpha,\omega)} \Psi_{K\alpha_K\omega_K}(q) \right) A_{J\omega}(q) \right], \tag{4} \]

where β = Σ_I β_I is the total excess scattering length of the structure, and each intermediate sub-unit K along the path is entered and exited through its reference points α_K and ω_K. Having derived the form factor, it is straightforward to apply the same logic to state the equivalent form factor amplitude of a structure relative to any reference point it contains, as well as the phase factor of a structure relative to any reference point pair. These are given by (Svaneborg & Pedersen, 2012a; Svaneborg & Pedersen, 2012b)

\[ A_{\rho}(q) = \frac{1}{\beta} \sum_I \beta_I \left( \prod_{K \in P(\rho,\alpha)} \Psi_{K\alpha_K\omega_K}(q) \right) A_{I\alpha}(q) \tag{5} \]

and

\[ \Psi_{\rho\sigma}(q) = \prod_{K \in P(\rho,\sigma)} \Psi_{K\alpha_K\omega_K}(q), \tag{6} \]

where in eq. (5) the path runs from ρ to each sub-unit I and terminates at the reference point α by which it enters I; for the sub-unit that hosts ρ itself, the product is empty and the term reduces to β_I A_{Iρ}.

Usually the focus is on deriving form factors for different structures, and phase factors and form factor amplitudes are just intermediate expressions in the derivation. However, having all three scattering expressions for a structure allows us to use it as a sub-unit. In terms of mathematics, this corresponds to recursively inserting the left hand sides of eqs. (4)-(6) into the right hand sides of the equations. In terms of SEB, the code for generating scattering expressions makes recursive calls to itself until terminating at the sub-unit level. This hierarchical view of building structures using simpler sub-structures and sub-units as building blocks is a cornerstone of SEB's design.

The logic is illustrated in Fig. 9a-c, where the ABCD structure is wrapped into a single structure of type "star". In this case, we can think of e.g. "P1.end2" and "S.surface" as being the labels of reference points inside a star structure. In Fig. 9d four instances of a star structure (named "star1"-"star4") are linked "P3.end1" to "S.surface". The resulting structure, a linear chain of stars, is shown in Fig. 9e. With SEB, we would write code to link sub-units as in Fig. 9a, write a line to name the structure "star", thus realizing Fig. 9c, and proceed to write code to build the structure in Fig. 9d using stars. Finally, with a line of code we get the form factor of the structure in Fig. 9e. Towards the end of the paper, we give an example where we build a diblock copolymer by joining two polymers. We then build a star by linking 4 diblock copolymers by one end, and proceed to build a chain where five stars are linked "tip" to "tip". This takes just 13 lines of code to do with SEB. Building hierarchical structures from more basic sub-structures vastly reduces the time it takes to derive the scattering expressions.
Expressions for form factor amplitudes are also useful for modelling structure factor effects. If a structure has a reference point that could be regarded as the "center" of the structure, then SEB can also calculate the form factor amplitude relative to the center point, A_C. In that case, an approximate model for the scattering including structure factor effects would be I(q) = F(q) + A_C²(q)(S_CC(q) − 1), where S_CC is a structure factor that describes the distribution of "center" to "center" distances between different structures (Pedersen, 2001; Pedersen et al., 2003). This is analogous to the decoupling approximation (Kotlarchyk & Chen, 1983) for polydisperse or anisotropic particles. The structure factor could e.g. be modelled as that of a hard-sphere liquid (Wertheim, 1963; Thiele, 1963) or a hard-sphere liquid augmented with a Yukawa tail (Herrera et al., 1999; Cruz-Vera & Herrera, 2008). Structure factor effects can also be described using e.g. the RPA approximation (Benoit & Benmouna, 1984) or using integral equation theory, e.g. in the form of PRISM theory (Schweizer & Curro, 1987; Curro & Schweizer, 1987; Schweizer & Curro, 1994; David & Schweizer, 1994; Yethiraj & Schweizer, 1992). pyPRISM is a software package for numerically solving the PRISM equations (Martin et al., 2018). We note that liquid state theories require the form factor of a structure as an input, which can be derived with SEB.

Estimating sizes

While predicting scattering profiles is the main focus of SEB, we can also use analytic Guinier expansions of the scattering expressions to provide expressions for the size of composite structures. The size of a structure or a sub-unit can be gauged by three different measures: the radius of gyration ⟨R_g²⟩, which describes the apparent mean-square distance between unique pairs of scatterers, is obtained when expanding the form factor; the (apparent) mean-square distance between a given reference point and any scatterer, ⟨R²_Iα⟩, is obtained when expanding a form factor amplitude; and finally, the mean-square distance between a pair of reference points, ⟨R²_Iαω⟩, is obtained when expanding a phase factor. We define the three Guinier expansions for a sub-unit I as

\[ F_I(q) = 1 - \frac{2 q^2}{6} \langle R_g^2 \rangle_I + \mathcal{O}(q^4), \tag{7} \]
\[ A_{I\alpha}(q) = 1 - \frac{q^2}{6}\, \sigma_{I\alpha} \langle R^2_{I\alpha} \rangle + \mathcal{O}(q^4), \tag{8} \]
\[ \Psi_{I\alpha\omega}(q) = 1 - \frac{q^2}{6}\, \sigma_{I\alpha\omega} \langle R^2_{I\alpha\omega} \rangle + \mathcal{O}(q^4). \tag{9} \]

Here the coefficient of the q² term in each expansion defines the corresponding size measure. Based on the generic equations (4)-(6), we can derive three similar generic expressions for the size of any composite structure, expressed in terms of the sizes of sub-units and paths through the structure. However, for simplicity we have directly implemented the Guinier expanded scattering terms for all sub-units in SEB, such that SEB explicitly calculates the Guinier expansions above and derives the sizes from the q² terms in the expansions.

Extra care has to be taken with regard to double counting of distances. The form factor includes the distance between any pair of scatterers twice, since both r_ij and r_ji contribute to the form factor. We have made this double counting explicit by the prefactor of two in eq. (7). This has the effect of defining the radius of gyration from the unique set of distances between pairs of scatterers. For the form factor amplitude and phase factor, we occasionally have to account for a double counting. This is done by introducing the double counting factors σ_Iα and σ_Iαω.
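As a worked check of these definitions, consider the Gaussian polymer factors quoted in Sect. 2, with x = q²R_g² (the values of the double counting factors σ used here are derived in the discussion that follows). Expanding to first order in q² gives

\[ F = 1 - \frac{q^2 R_g^2}{3} + \dots, \qquad A_{\mathrm{end1}} = 1 - \frac{q^2 R_g^2}{2} + \dots, \qquad \Psi_{\mathrm{end1,end2}} = 1 - q^2 R_g^2 + \dots, \]

so that ⟨R_g²⟩ = R_g², σ⟨R²_end1⟩ = 3R_g² (the mean-square distance from a chain end to a random scatterer along the chain), and σ⟨R²_end1,end2⟩ = 6R_g² (the classical mean-square end-to-end distance of a Gaussian chain). For "contour" reference points, both the amplitude and the phase factor reduce to the Debye function, so σ⟨R²⟩ = 2R_g² in both cases.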
In cases with specific reference points, pair distances between scatterers and reference points are unique by construction, and the double counting factor is unity. For instance, for the Guinier expansion of the form factor amplitude of a polymer relative to "end1", distances between "end1" and scatterers along the polymer are only summed once, hence σ_polymer,end1 = 1. Similarly, for the Guinier expansion of the phase factor between "end1" and "end2" of the polymer, the distance between the two ends of the polymer is summed only once, hence σ_polymer,end1,end2 = 1.

In cases involving distributed reference points, double counting can occur due to the additional average that has to be performed. For instance, when Guinier expanding the form factor amplitude of a polymer relative to a "contour" reference point, we sum every distance between random points and scatterers twice, because both scatterers and reference points are uniformly distributed along the contour of the polymer. Hence σ_polymer,contour = 2. Similarly, for the Guinier expansion of the phase factor between a pair of random "contour" points, we encounter every distance twice, hence σ_polymer,contour,contour = 2 in this case as well. In fact, the set of distances between a random point on a polymer and a scatterer, or between two random points on a polymer, is exactly the same as the set of distances between pairs of scatterers, i.e. the mean-square distance from "contour" to a scatterer, and between two "contour" points, is exactly the radius of gyration measure ⟨R_g²⟩ of the polymer. If we did not account for double counting in this case, we would have an inconsistency where e.g. the apparent mean-square distance between randomly chosen points on a polymer would be twice the radius of gyration measure of the polymer. Note that SEB is not able to deduce whether double counting occurs in a given structure, hence SEB returns σ_Iα R²_Iα and σ_Iαω R²_Iαω to the user, and it is up to the user to divide the result by two in the rare cases where double counting has occurred.

SEB

In the preceding section, we have illustrated the formalism. While it is entirely possible to use the formalism to write down scattering expressions for complex structures by hand, this rapidly becomes tedious and error prone when many paths through a complex structure have to be enumerated, the various expressions for sub-unit factors inserted, and the resulting expression finally implemented in a SAS analysis software.

The Scattering Equation Builder (SEB) is an Object Oriented C++ library that automates this process. SEB calculates the form factor of a structure by identifying and traversing all the paths between unique pairs of sub-units or sub-structures. SEB can also calculate the form factor amplitude for a given reference point by exploring all the paths connecting that reference point to every other sub-unit or sub-structure. Similarly, the phase factor between any two reference points is obtained by identifying the path between the reference points. In the case of hierarchical structures, the algorithm generates "horizontal" paths at a given structural level, and then evaluates scattering expressions by recursively exploring paths through sub-structures until the level of individual sub-units is reached. Internally, SEB stores a hierarchical graph representation of the structures and uses efficient recursive algorithms to generate paths through the hypergraphs at a specified depth into the structure.
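The key graph operation is easy to illustrate: in an acyclic structure, the path between two sub-units is unique and can be found by a simple depth-first search. The sketch below is not SEB's actual implementation, just a minimal self-contained illustration of the idea, using the connectivity of Fig. 6 with reference points as stepping stones.

#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>
using namespace std;

// Adjacency list of sub-units; each edge carries the label of the
// reference point through which the two sub-units are linked.
map<string, vector<pair<string, string>>> adj;

// Depth-first search for the unique path from node to goal. The path
// alternates between sub-unit names and reference point labels.
bool dfs(const string& node, const string& goal, const string& from,
         vector<string>& path) {
    path.push_back(node);
    if (node == goal) return true;
    for (auto& [next, ref] : adj[node])
        if (next != from) {
            path.push_back(ref);  // record the stepping-stone reference point
            if (dfs(next, goal, node, path)) return true;
            path.pop_back();
        }
    path.pop_back();  // dead end; backtrack
    return false;
}

int main() {
    // B -(delta)- A -(sigma)- D, and A -(eta)- C, as in Fig. 6.
    adj["B"] = {{"A", "delta"}};
    adj["A"] = {{"B", "delta"}, {"C", "eta"}, {"D", "sigma"}};
    adj["C"] = {{"A", "eta"}};
    adj["D"] = {{"A", "sigma"}};
    vector<string> path;
    dfs("B", "D", "", path);
    for (auto& p : path) cout << p << " ";  // prints: B delta A sigma D
    cout << endl;
}

Running the program prints B delta A sigma D, i.e. exactly the path that generates the interference contribution A_Bδ Ψ_Aδσ A_Dσ derived above.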
SEB uses the GiNaC library (Bauer et al., 2002) for representing symbolic expressions. SEB depends on the GNU Scientific Library (Gough, 2009) for evaluating sine integrals, Bessel functions, and Dawson functions. SEB also includes code from J.-P. Moreau (Moreau, 2014) for evaluating Struve functions.

The core functionality of SEB is to allow the user to write a short program that 1) builds structures by linking specific uniquely named sub-units, 2) names a composite structure built from sub-units, such that it can be used as another sub-unit, 3) builds hierarchical structures by linking simpler structures together, 4) obtains analytic expressions characterizing the scattering and sizes of those structures, and/or 5) saves a file with a scattering profile for a chosen set of parameters.

From the user perspective, SEB exposes a very lean interface. Just four methods are available for building structures. The user can choose to obtain generic structural scattering expressions, or expressions with all sub-unit scattering terms inserted, yielding an equation that depends explicitly on q and a set of structural parameters. The user can also obtain an intermediate representation, where scattering terms are inserted but expressed with dimensionless variables, where all structural length scales are scaled by q. Finally, if the user defines the structural parameters and a vector of q values, SEB can evaluate the scattering expressions to provide a vector of scattering intensities that can be saved to a file for plotting.

Before going into detail with implementation and design choices, we start with two simple illustrative examples: a diblock copolymer and a micelle / decorated polymer. These and more examples can be downloaded along with the SEB code from Ref. (Jarrett & Svaneborg, 2023a).

Diblock copolymer

Creating a structure similar to the one seen in Fig. 3b involves a world to host the sub-units, and then creating two polymers and specifying how they are to be linked. The following complete C++ program does that:

1: #include "SEB.hpp"
2: int main()
3: {
4:   World w;
5:   GraphID g = w.Add("GaussianPolymer", "A");
6:   w.Link("GaussianPolymer", "B.end1", "A.end2");
7:   w.Add(g, "DiBlockCopolymer");
8:   cout << latex;
9:   cout << w.FormFactor("DiBlockCopolymer");
10: }

The first line includes the SEB header file, which declares what functions SEB provides. Lines 2-3 and 10 set up the function main, which is executed when the program is run. Line 4 in the program creates an instance w of the World class. This instance provides all of SEB's functionality to the user.

To create a structure in the world, we must first add and link the two polymers. In the fifth line, the user uses the w.Add() method to add a polymer to the world. "GaussianPolymer" refers to the type of polymer described by Gaussian chain statistics. With the second argument, the user assigns the unique name "A" to this sub-unit. The world returns a GraphID to the user in response to adding the sub-unit. The GraphID is a common ID shared by all sub-units linked together forming a graph.
In the sixth line, the user uses the w.Link() method to add and link a second GaussianPolymer sub-unit. With the second argument, the user names this new sub-unit "B". With the second and third arguments, the user defines that the new "B" should be linked by its "end1" reference point to "end2" on the already existing "A" sub-unit. To calculate the form factor and print it out, we must first wrap the graph formed by these two polymers in a structure. This is done in the seventh line with w.Add(), but this time it is called with the GraphID of the structure we want to name, and the string "DiBlockCopolymer". We note that all sub-unit and structure names are case-sensitive and unique. Types of sub-units and their reference point names are hard coded in SEB (see Fig. 2). Reference point names are also case-sensitive.

Having defined a structure in lines 5-7, we now want to print out the equation for its form factor. The eighth line specifies that we want the expression to be printed in the form of a LaTeX expression. With the command w.FormFactor("DiBlockCopolymer") in the ninth line, the user requests the symbolic expression for the form factor. This is printed to the screen (cout <<). The form factor equation will be expressed in terms of the magnitude of the momentum transfer q, the structural parameters Rg_A, Rg_B, as well as the excess scattering lengths β_A, β_B. The names of the sub-units are used as subscripts in the parameters used in the scattering expressions.

Here we chose LaTeX formatted output, but we could also have output the equation in formats compatible with C/C++, Python, or the native GiNaC format, which is compatible with Mathematica / Matlab. GiNaC by default generates equations in expanded form and with a random unpredictable ordering of terms. This makes native LaTeX formatted output lengthy. Most often we would export the scattering expression to a fit program, or to a symbolic mathematics program for simplification, or directly evaluate it to predict the scattering profile.

To change the diblock from end-to-end linking to random linking, as in Fig. 3c, we need to link "A.end2" to a randomly chosen point on "B.contour". Replacing line six with the following code snippet achieves that:

6: w.Link("GaussianPolymer", "B.contour#r1", "A.end2");

Here, simultaneously with specifying the distributed reference point "contour" on the "B" sub-unit, we also label that (now specific) reference point with the arbitrary string "r1". If we instead want to create the structure of Fig. 3d, we need to link one random reference point "B.contour#r2" to a random reference point "A.contour#r3". Replacing line six with the following code snippet achieves that:

6: w.Link("GaussianPolymer", "B.contour#r2", "A.contour#r3");

The scattering profiles corresponding to Fig. 3bcd are shown in Fig. 3e. The difference is not large, but it illustrates the point that even with the same sub-units, different linkage options affect the scattering profile. The reference point name "contour" is hard coded in SEB, but the user is free to choose the labels (here "r1", "r2", "r3"). Having a unique name for each reference point allows us to add more sub-units to the same random point. Having both options for linking allows the user to develop well defined arbitrarily complex branched structures of end-to-end linked polymers, or bottle brush structures where many side chains are randomly attached to a main polymer.
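For the end-to-end linked diblock, the printed expression is equivalent to the classical result, which with the Gaussian polymer factors from Sect. 2 and x_i = q²Rg_i² reads

\[ F_{\mathrm{DiBlockCopolymer}}(q) = \frac{\beta_A^2 F_D(x_A) + \beta_B^2 F_D(x_B) + 2\,\beta_A \beta_B\, A(x_A)\, A(x_B)}{(\beta_A + \beta_B)^2}, \]

where F_D(x) = 2(e^{-x} - 1 + x)/x² is the Debye function and A(x) = (1 - e^{-x})/x is the end-point form factor amplitude. The two random-linking variants above replace A(x_A), or both amplitudes, by the corresponding "contour" amplitudes, i.e. by Debye functions.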
By default, SEB expresses scattering expressions in terms of an explicit q value, a set of structural parameters, and excess scattering lengths. The default option is also to output normalized scattering expressions, such that they converge to unity in the limit of small q values. Replacing w.FormFactor("DiBlockCopolymer") by w.FormFactorAmplitude("DiBlockCopolymer:A.end1") would generate the form factor amplitude expression for the whole DiBlockCopolymer, but expressed relative to the specified reference point. With w.PhaseFactor("DiBlockCopolymer:A.end1", "DiBlockCopolymer:B.end2") SEB would instead generate the phase factor of the DiBlockCopolymer relative to the two specified reference points. With w.FormFactorGeneric("DiBlockCopolymer") we would get the generic form factor of a structure of two connected sub-units without the specific scattering expressions inserted, which is often useful for debugging. Finally, with w.RadiusOfGyration2("DiBlockCopolymer") SEB would generate the expression for the radius of gyration.

Diblock copolymer micelle

SEB is not limited to using one type of sub-unit, as we can use and link all types of sub-units to each other. We can, for instance, model a diblock copolymer micelle as a number of polymer chains attached to the surface of a spherical core (Pedersen & Gerstenberg, 1996). Here we limit the number of polymers to three for the sake of simplicity. To generate the micelle shown in Fig. 7 (top), we need to create a solid sphere ("A") and add three polymers ("B", "C", and "D") to its surface. The following code snippet does that:

1: World w;
2: GraphID g = w.Add("SolidSphere", "A", "s");
3: w.Link("GaussianPolymer", "B.end1", "A.surface#p1", "p");
4: w.Link("GaussianPolymer", "C.end1", "A.surface#p2", "p");
5: w.Link("GaussianPolymer", "D.end1", "A.surface#p3", "p");
6: w.Add(g, "Micelle");

A polymer sub-unit (type GaussianPolymer) has "end1", "end2", and "contour" as reference points; a solid sphere sub-unit (type SolidSphere) has "center" and "surface" as reference points. Just as we needed to add labels for random points on the contour of the polymer above, we also add labels for the random points on the surface of the sphere. If we used the same label in all three Link commands, the three polymers would be linked to the same random point. This would influence the scattering interference between the polymers and is not the structure we are aiming to create.

We also introduce tags in this example, which are an optional parameter of w.Add() / w.Link(). We tag all polymers as "p", and the spherical core as "s". The result is that the scattering expressions are not stated in terms of the unique names A, B, C, and D, but are stated using the radius of gyration of the polymers Rg_p and the radius of the sphere R_s, as well as the two excess scattering lengths β_p and β_s. If a tag is not specified, then the unique name is used in its place, as in the diblock example above. By specifying tags, we can mark a set of sub-units as being identical in terms of their scattering properties and structural parameters.

Decorated polymer

A model of a surfactant denatured protein could be a long polymer with some spherical surfactant micelles along its contour. To generate a polymer decorated by three spheres as in Fig.
7 (bottom), we would use the following code snippet:

1: World w;
2: GraphID g = w.Add("GaussianPolymer", "A", "p");
3: w.Link("SolidSphere", "B.center", "A.contour#p1", "s");
4: w.Link("SolidSphere", "C.center", "A.contour#p2", "s");
5: w.Link("SolidSphere", "D.center", "A.contour#p3", "s");
6: w.Add(g, "DecoratedPolymer");

We note that this is nearly identical to the micelle code above, since we link three sub-units to a single sub-unit in both cases, the only difference being that instead of linking three polymers to a sphere, we link three spheres to one polymer. The three spheres "B", "C", and "D" are tagged with "s", such that the scattering expression depends on the same parameters as described above.

Advanced examples

Having discussed the basics of how to add and link sub-units, create structures, and output GiNaC expressions, here we show how to implement some of the more advanced examples. In particular, we show a complete example of how to write a program that generates the scattering from 100 identical linked sub-units for a variety of sub-units and linkage options, how to generate a dendritic structure of linked sub-units, an example of polymers and rods linked to the surfaces of different solid geometric objects, and finally how to implement a chain of 5 linked diblock copolymer stars using hierarchically defined building blocks.

Figure 10 Scattering from a chain of N = 100 identical linked sub-units for a) "end2" to "end1" linked Gaussian polymers, b) "contour" to "contour" linked Gaussian polymers, c) "end2" to "end1" linked rods, d) "contour" to "contour" linked rods, e) "contour" to "contour" linked polymer loops, and f) "contour" to "contour" linked circles. The structural parameters of the sub-units are chosen such that their radius of gyration is one.

Lines 2-8 create the chain. Initially we add a single polymer "P1", then we use a for loop to add and link 99 more polymers. The polymers have unique names "P(N)", where N denotes the number of the sub-unit. The string variables now and last hold the names of the current and previous sub-units. All polymers are identical, and they are all tagged as "poly". The linkage is "P(N).end1" to "P(N-1).end2" for all polymers, such that they form one long continuous chain. In Line 9, we name this structure "RandomWalkPolymer", and we obtain the symbolic expression for its form factor F in Line 10. In Lines 11-13 we define a list of parameters, setting the excess scattering length "beta poly" to one and the radius of gyration "Rg poly" to one, and we generate qvec, which is a vector of all the q values at which we want to evaluate the form factor. We choose 400 log-equidistant points between q_min = 0.01 and q_max = 50. From the point of view of SEB, units are irrelevant. All scattering expressions depend on dimensionless products of structural length scales and a q value, and as long as both are expressed with a consistent choice of unit, the unit will cancel when evaluating the scattering profile numerically. Finally, in Line 15, we evaluate the symbolic expression by inserting the list of parameters and each of the q values in the expression. The result is saved to a file "chain end2end.q". A plot of that file is shown in Fig. 10a.
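The chain-building part of this program can be collected into the following sketch. The listing is reconstructed from the description above: the loop variables and the exact line layout are assumptions, and Lines 11-15 (defining the parameter list and qvec, evaluating the expression, and saving the file) are omitted, since the corresponding SEB calls are not quoted here.

1: World w;
2: GraphID g = w.Add("GaussianPolymer", "P1", "poly");
3: string last = "P1";
4: for (int N = 2; N <= 100; N++) {
5:   string now = "P" + to_string(N);
6:   w.Link("GaussianPolymer", now + ".end1", last + ".end2", "poly");
7:   last = now;
8: }
9: w.Add(g, "RandomWalkPolymer");
10: auto F = w.FormFactor("RandomWalkPolymer");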
We can now study how the scattering profile changes when we keep the chain structure, but change the sub-unit and/or the linkage. Replacing "GaussianPolymer" by "ThinRod" directly generates a file with the scattering for a chain of rods linked end-to-end. This is shown in Fig. 10c. Replacing "end1" and "end2" by "contour#r(N)" and "contour#s(N-1)" produces the contour-to-contour linkage shown in Fig. 10bdef, where for the latter two curves we chose "GaussianLoop" or "ThinCircle" as sub-units.

In the Guinier regime of Fig. 10, we observe that the end-to-end linked rods have the largest radius of gyration, followed by the end-to-end linked polymers. These form the most loose and extended chain structures. The contour-to-contour linked rods, polymers and loops have the smallest radii of gyration, which is consistent with these chains being the most dense and collapsed structures. Since a chain of 100 end-to-end linked polymers with R_g² = 1 corresponds to a single polymer with R_g² = 100, the scattering is the Debye form factor. At large q values, for all polymer structures, we observe the (qR_g)^{-2} power law consistent with local random walk statistics. For chains built with rods, we see a (qL)^{-1} power law behavior at large q values, which is expected from a rigid rod. The chain-of-circles structure shows oscillations due to the regular distance between scatterers on a circle, but the trend line of the oscillations follows a q^{-1} power law consistent with the local rod-like structure.

Dendrimers

Figure 11 Scattering from a dendrimer with 4 generations and 3-functional links: a) "end1" to "end2" linked polymers, b) "contour" to "contour" linked polymers, c) "end1" to "end2" linked rods, d) "contour" to "contour" linked rods. Structural parameters of the sub-units are chosen so the radius of gyration is always one. The sketches of the dendrimer structures only show the first two generations for the sake of brevity.

1: GraphID dendrimer = w.Add("Point", "center");
2: int count = 0;
3: Attach(4, 3, "center.point", count, w);
4: w.Add(dendrimer, "Dendrimer");

Generating a dendritic structure calls for a recursive function, and the challenge is how to assign names systematically so the links are consistent with a dendritic structure. In line 1 we define a Point, which we call "center". This is an invisible sub-unit with zero excess scattering length, but it is useful as a seed to attach other sub-units to. In line 2, we define a counter which will be counting the number of sub-units added. The recursive function Attach() generates the dendrimer (see the code below), and is called in Line 3.
The argument 4 is the number of generations to generate, and 3 is the functionality of each connection point. The "center.point" is the initial reference point on which to graft additional polymers. The two last arguments are the counter and the world we are adding sub-units into. In the last line we name the resulting structure "Dendrimer". The rest of the code for generating a file with the form factor is identical to the chain example above.

1: void Attach(int g, int f, string ref, int& c, World& w)
2: {
3:   int arms = f - 1;
4:   if (ref == "center.point") arms = f;
5:   for (int i = 0; i < arms; i++)
6:   {
7:     string name = "S" + to_string(c) + ".end1";
8:     w.Link("GaussianPolymer", name, ref, "poly");
9:     string newref = "S" + to_string(c) + ".end2";
10:    c++;
11:    if (g > 1) Attach(g-1, f, newref, c, w);
12:  }
13: }

The recursive function receives "g", the number of generations that remain to be attached, "f", the functionality of each link, and "ref", which is the reference point from the previous generation onto which we link the current generation; "c" and "w" are a global counter and the world, respectively, passed by reference. In lines 3-4 we define the number of arms to attach to this reference point. Usually this is f − 1, since we are linking to the tip of an existing branch; however, in the special case where we are linking arms to "center.point", we need to add f arms instead. This ensures all connection points have the desired functionality.

In lines 5-12 we add the arms and link them to the previous generation. In line 7 we define a name for each new sub-unit, "S(c)", and in line 8 we add GaussianPolymer sub-units and link them to the tip of the previous generation. The links are "S(c).end1" to ref, where "ref" is the tip of the last generation of polymers. In line 9, we define the new reference point on which to add the next generation. This reference point is "S(c).end2". In line 10, we increment the counter of sub-units that have been added so far. In case we are not done building, that is, if g is larger than one, in line 11 we again call the Attach function to attach the next generation to the tip of the current arm, that is, to newref, now with the generation count decremented by one and the same functionality.

The resulting structure contains 45 sub-units (3 from the 1st generation, 6 from the 2nd generation, 12 from the 3rd generation, and 24 from the 4th generation). The code above generates the structure plotted in Fig. 11a. Again, by changing line 8 we can link other sub-units such as thin rods. Changing lines 7 and 9, we can change the reference points from end-to-end to contour-to-contour links. The results are the four curves shown in Fig.
11. Again, we observe in the Guinier regime that dendrimers made by end-to-end linked rods and polymers have the largest radii of gyration. We also observe that at large q vectors the power laws, (qL)^{-1} for rods and (qR_g)^{-2} for polymers, show what sub-units the structures are built with. We also observe that contour-to-contour linked structures have the same radius of gyration independently of their sub-unit structure.

1: GraphID str = w.Add("SolidSphericalShell", "shell");
2: for (int i = 1; i <= 50; i++)
3: {
4:   string name1 = "poly" + to_string(i) + ".end1";
5:   string ref1 = "shell.surfaceo#p" + to_string(i);
6:   w.Link("GaussianPolymer", name1, ref1, "poly");
7:   string name2 = "rod" + to_string(i) + ".end1";
8:   string ref2 = "shell.surfacei#r" + to_string(i);
9:   w.Link("ThinRod", name2, ref2, "rod");
10: }
11: w.Add(str, "Structure");

With SEB we can investigate how different linkage options of sub-units on the surface of solid bodies affect the scattering. In the example code above, we generate a solid spherical shell in line 1. The shell is a homogeneous solid body defined by an exterior radius R_o and an interior radius R_i. In lines 4-6, we add and link a Gaussian polymer. The polymer is named "poly(i)", and linked by "poly(i).end1" to "shell.surfaceo#p(i)", where "surfaceo" denotes distributed reference points on the "outer" or exterior surface of the shell. The unique label "p(i)" ensures that all polymers are linked to different random points on the surface. In lines 7-9, we add and link a thin rod. The rod is named "rod(i)", and linked by "rod(i).end1" to "shell.surfacei#r(i)", where "surfacei" denotes the interior surface. Again, the unique label "r(i)" ensures that the rods are linked to different random points. In line 11, we name the resulting structure "Structure". As in the chain example, we evaluate the form factor and generate a file with the corresponding scattering curve.

Changing line 1, we can change which solid body we are attaching sub-units to, e.g. solid spheres or cylinders. Changing lines 6 and/or 9, we can change what sub-units we link to the surface, and by which reference point the link should be made. Changing the reference points in lines 5 or 8, we can choose different linkage options on the solid body. Fig. 12 shows a comparison of some of the possible linkage options. The code above corresponds to the d curve. Here, we choose to contrast match the solid body, β_shell = 0, and choose β_poly = β_rod = 1. Hence the scattering is due to both the polymers and the rods and their interference contribution, which depends on the shape of the body to which they are attached.

In the Guinier regime of the scattering profiles shown in Fig. 12, we observe that the solid spheres and spherical shells are nearly identical, as are the scattering profiles of the cylinders. This is not surprising, since the scattering between different sub-units is modulated by the phase factor of the solid body on which the sub-units are attached. At very large q values we observe power law behavior with an exponent slightly larger than −1. This is to be expected, since the scattering is dominated by the sub-unit form factors, and asymptotically the rod (qL)^{-1} term will dominate over the polymer (qR_g)^{-2} term unless the polymers vastly outnumber the rods. In the crossover regime, we observe different oscillations for the different linkage options. These oscillations are due to the different distributions of surface-to-surface distances between the tethering points of pairs of rods and/or polymers.
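This modulation can be made explicit. Assuming, as in the code above, that all tethering points are independent random points on the respective surfaces, the orientational average factorizes and the surface-to-surface phase factors of the shell reduce to products of thin-shell amplitudes:

\[ \Psi_{\mathrm{surfaceo,surfaceo}}(q) = \left(\frac{\sin(qR_o)}{qR_o}\right)^2, \qquad \Psi_{\mathrm{surfacei,surfacei}}(q) = \left(\frac{\sin(qR_i)}{qR_i}\right)^2, \qquad \Psi_{\mathrm{surfaceo,surfacei}}(q) = \frac{\sin(qR_o)\sin(qR_i)}{q^2 R_o R_i}. \]

The different oscillation periods set by R_o and R_i are what distinguish the linkage options in the crossover regime of Fig. 12.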
Hierarchical structures

Figure 13: Scattering from a chain of five four-functional stars, where each arm is a diblock copolymer, for three different choices of contrast. The illustrated links are (a) the block copolymer formed by linking "A.end2" to "B.end1", (b) the star formed by linking "diblock2:A.end1" to "diblock1:A.end1" and similarly for the other arms, and (c) the chain formed by linking "star2:diblock1:B.end2" to "star1:diblock3:B.end2" and similarly for the other stars.

In the examples above we have built structures by connecting sub-units to each other. The result was described by a GraphID, which we could name as a type of structure, and then we could use that name to derive various scattering expressions. Since the formalism is complete, any sub-structure can be used as a sub-unit. World has a Link method that takes a GraphID (referring to a type of structure), names it, and links it to an existing structure. This works analogously to Link called with a string denoting a type of sub-unit. The code below illustrates the concept.

In line 1, we add a Gaussian polymer sub-unit "A", and in line 2 we add and link another Gaussian polymer sub-unit "B" to it, as we did several times above. The names "A" and "B" should be thought of as two instantiations of the type of object with an internal structure described by the type "GaussianPolymer". It is important to distinguish between concrete objects of a certain type of structure and the type of structure itself. The type does not exist per se, but is just a generic description. In the case of "A" and "B", these have their own structural parameters and contribute specific terms to scattering expressions. The type GaussianPolymer is a description of the internal chain statistics of a polymer molecule. When creating a new sub-unit or structure in SEB, we instantiate it from a type of structure. GraphID variables are also types of structure; in particular, the GraphID variable d describes a diblock copolymer structure. In line 3, we add a new structure to the world named "diblock1", which is an instantiation of the diblock type. Hence "diblock1" is a concrete structure in the same sense that "A" and "B" are concrete sub-units.

In lines 4-6, we do something new: we call Link(), not with a sub-unit type, but with the diblock type (the GraphID variable d). We name these three new structures "diblock2", "diblock3", and "diblock4", respectively. Each structure is linked by a reference point inside the structure to a reference point that already exists in the world. For the diblock2 structure, we link "diblock2:polyA.end1" to "diblock1:polyA.end1"; since "diblock1" already exists in the world, we can link to it. To link structures, we need to specify the path from the structure level via sub-structures down to the reference point, which is associated with a specific sub-unit. Since all names are unique, so is any path from a sub-structure to a reference point. The resulting structure is a four-armed diblock copolymer star, where the "A" blocks are linked by their "end1" reference points and form the center of the star, while the corona is formed by the four "B" blocks, whose free chain ends are at the "end2" reference points.
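The walkthrough above and the paragraphs below refer to an 11-line listing; a hedged reconstruction is sketched here, assuming Add() and Link() keep the argument patterns of the earlier examples. The fourth argument of the sub-unit-level Link() mirrors the tag seen in those listings, the structure-level calls are shown with three arguments since no tag is visible in the text, and the mixed "A"/"polyA" reference-point naming simply follows the walkthrough; all of these are assumptions. Lines 7-11 are discussed in the following paragraphs.

1:  GraphID d = w.Add( "GaussianPolymer", "A");
2:  w.Link( "GaussianPolymer", "B.end1", "A.end2", "B");
3:  GraphID s = w.Add( d, "diblock1");
4:  w.Link( d, "diblock2:polyA.end1", "diblock1:polyA.end1");
5:  w.Link( d, "diblock3:polyA.end1", "diblock1:polyA.end1");
6:  w.Link( d, "diblock4:polyA.end1", "diblock1:polyA.end1");
7:  GraphID c = w.Add( s, "star1");
8:  w.Link( s, "star2:diblock1:B.end2", "star1:diblock3:B.end2");
9:  w.Link( s, "star3:diblock1:B.end2", "star2:diblock3:B.end2");
10: w.Link( s, "star4:diblock1:B.end2", "star3:diblock3:B.end2");
11: w.Link( s, "star5:diblock1:B.end2", "star4:diblock3:B.end2");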
While we usually define the GraphID by the return value of the first Add() method, all subsequent Link() calls also return the same GraphID value, since this is associated with the whole graph first created by Add() and then grown each time Link() is called. In line 3, we stored the type of graph formed by "diblock1" to "diblock4" in the GraphID variable s, which is now the type of a 4-functional diblock star structure.

In line 7, we now instantiate a star sub-structure and name it "star1". This defines a new GraphID, which we save in a variable c. Then in lines 8-11 we proceed to instantiate four more star sub-structures named "star2" to "star5". Each time, we link "star(n):diblock1:B.end2" to "star(n-1):diblock3:B.end2", since "star(n-1)" already exists and has a "diblock3:B.end2" reference point inside it. The result is a linear chain of stars formed by linking the tips of "diblock1" and "diblock3"; hence "diblock2" and "diblock4" form dangling ends, analogously to a bottle-brush structure. Finally, to calculate the form factor of this type of chain, we must name it to instantiate it in the world. The rest of the code is similar to the chain example above.

This example illustrates the power of building structures using simpler sub-structures as building blocks. With 15 lines of code, we have generated a hierarchical structure with 40 sub-units. Fig. 13 shows an illustration of the resulting structure together with the form factor evaluated for three different contrast options. In the Guinier regime, we observe that the radius of gyration is nearly the same independently of contrast, which we would also expect for such a structure. At large q values we obtain the characteristic power law of polymer sub-units. For intermediate q values, the three curves differ slightly. When the "polyA" blocks are contrast matched (β_A = 0), they play the role of invisible spacers inside the stars. When the "polyB" blocks are contrast matched, they play the role of invisible spacers between different stars.
Besides calculating scattering expressions, SEB can also provide expressions characterizing the size of a structure. For instance, w.RadiusOfGyration2("chain") returns an expression for the radius of gyration by applying a Guinier expansion to all sub-unit scattering terms. After simplification, the result is an expression in terms of R²_gA and R²_gB. While the radius of gyration measures the distances between all pairs of scatterers, we could for instance also ask what the mean-square distance is between the center of the star and all scatterers in the structure. A Guinier expansion of the corresponding form factor amplitude provides the result; "star3:diblock1:polyA.end1" is the reference point at the center of the star, hence this mean-square distance gives an idea of the radial extent of the structure. Calling w.SMSD_ref2scat("chain:star3:diblock1:polyA.end1") returns that result. The method is called SMSD, for sigma mean-square distance, to remind the user to account for a potential symmetry factor.

Finally, we could ask what the length and breadth of the structure are. To calculate the length, we call w.SMSD_ref2ref("chain:star1:diblock1:polyB.end2", "chain:star5:diblock3:polyB.end2"), which returns the mean-square distance between the two reference points at either end of the structure. The result is ⟨R²_length⟩ = 60(R²_gA + R²_gB). To estimate the breadth of the structure, we change the reference points to w.SMSD_ref2ref("chain:star3:diblock2:polyB.end2", "chain:star3:diblock4:polyB.end2"), since "diblock2" and "diblock4" are the two dangling diblocks, and the "polyB.end2" points are the dangling ends of these diblocks. The result is ⟨R²_breadth⟩ = 12(R²_gA + R²_gB).

These results are easy to obtain by hand, noting that for a single polymer R²_g(N) = ⟨R²_end2end⟩/6 = b²N/6, where b is the random-walk step length and N the number of steps in the polymer. Then, to estimate the number of steps along the length of the chain, we note that it has 10 A blocks and 10 B blocks from one end to the other. Hence ⟨R²_length⟩ = b²N_length = b²(10N_A + 10N_B) = 60(R²_gA + R²_gB). For the breadth, note that a star has a breadth of N_breadth = 2N_A + 2N_B steps, so ⟨R²_breadth⟩ = b²(2N_A + 2N_B) = 12(R²_gA + R²_gB). The result is that the chain is five times longer than it is broad (in the mean-square sense), which is what one would expect.

Summary

The main problem in analyzing small-angle scattering (SAS) data is the availability of model expressions for fitting. Here we presented the "Scattering Equation Builder" (SEB), which is an open-source C++ library available at Ref. (Jarrett & Svaneborg, 2023a). SEB automates part of this problem by generating symbolic expressions for complex composite models of structures using the formalism presented in Refs. (Svaneborg & Pedersen, 2012a; Svaneborg & Pedersen, 2012b). The formalism is built on the assumption that sub-units are mutually non-interacting and the assumption that structures do not contain loops. Finally, all links are assumed to be completely flexible. No further mathematical simplifications or approximations are made. In particular, no assumptions are made regarding the internal structure of sub-units.

With SEB, users write short programs that construct a structure using sub-units and simpler structures as building blocks. Much like LEGO, sub-units can be linked at certain points called reference points. These can be either specific geometric points, such as one of the ends of a polymer, or they can be randomly distributed, e.g. on the surface of a sphere. With the building blocks of sub-units and reference points, a large number of complex structures can be built with relative ease. See Fig. 2 for the sub-units and reference points supported by this initial release.
SEB derives analytic symbolic expressions for the form factor, form factor amplitude, and phase factor of a structure. SEB can also derive expressions for the radius of gyration, as well as the mean-square distance between a reference point and all scatterers in a structure. Finally, SEB can derive the mean-square distance between pairs of reference points. The expressions can be evaluated to a number, e.g. when fitting, evaluated to produce a file for plotting, or output in several formats: LaTeX documentation, C/C++- and Python-compatible equations, or Matlab/Mathematica exports.

In the present article, we have given simple illustrative examples, as well as some more complex examples, of what SEB can do. SEB is available at GitHub (Jarrett & Svaneborg, 2023a), and a frozen version related to the present work is deposited on Zenodo (Jarrett & Svaneborg, 2023b). We hope the SEB library will grow as more sub-units become supported, and we welcome contributions from users in developing future versions of the library.

Figure 3: Illustration of a polymer sub-unit. (a) The three different reference points; (b-d) the three ways two polymers can be linked; (e) the scattering form factors for the different linkage options.

Figure 4: Fig. 4a shows a polymer and its diagrammatic representation. To illustrate links, the reference points on two sub-unit ellipses are shown as touching circumferences. The three linkage options shown in Fig. 3b-d are illustrated in Fig. 4b-d. For simplicity, we often show and label only the reference points of interest when showing structures.

Figure 9: Examples of hierarchical descriptions. A bottom-up description: (a) a specific star structure made of sub-units linked to a core, (b) the diagrammatic representation of sub-units in the star structure, and (c) the diagrammatic representation of a star sub-unit. A top-down description: (d) four linked star sub-units, and (e) the detailed structure when inserting the internal structure.

Figure 12: Scattering from various solid bodies with 50 rods and 50 polymers attached to different surfaces. The solid body is contrast matched, β_solid = 0, and β_poly = β_rod = 1. (a) Solid sphere R = 10 with rods and polymers randomly attached to the surface; (b) solid sphere R = 10 with pairs of rods and polymers attached to the same random point; (c) solid spherical shell R_i = 8, R_o = 12 with rods and polymers randomly attached to the interior and exterior surfaces; (d) solid spherical shell R_i = 8, R_o = 12 with 50 rods attached to the interior surface and 50 polymers attached to the exterior surface; (e) cylinder L = 10, R = 5 with rods and polymers randomly attached to the surfaces; (f) cylinder L = 10, R = 5 with rods attached to the two cylinder ends and polymers attached to the hull. For curves c and e, where several surfaces contribute area, we have weighted the scattering terms with their respective area fractions to ensure homogeneous area coverage in the case of random attachment.
The Large Conductance, Calcium-activated K+ (BK) Channel is regulated by Cysteine String Protein

Large-conductance, calcium-activated K+ (BK) channels are widely distributed throughout the nervous system, where they regulate action potential duration and firing frequency, along with presynaptic neurotransmitter release. Our recent efforts to identify chaperones that target neuronal ion channels have revealed cysteine string protein (CSPα) as a key regulator of BK channel expression and current density. CSPα is a vesicle-associated protein, and mutations in CSPα cause the hereditary neurodegenerative disorder adult-onset autosomal dominant neuronal ceroid lipofuscinosis (ANCL). CSPα null mice show 2.5-fold higher BK channel expression compared to wild-type mice, which is not seen with other neuronal channels (i.e. Cav2.2, Kv1.1 and Kv1.2). Furthermore, mutations in either CSPα's J domain or cysteine string region markedly increase BK expression and current amplitude. We conclude that CSPα acts to regulate BK channel expression, and consequently CSPα-associated changes in BK activity may contribute to the pathogenesis of neurodegenerative disorders, such as ANCL.

Large conductance, calcium-activated K+ channels (BK channels) are widely distributed throughout the CNS, and play an important role in regulating neuronal action potential duration, the extent of fast after-hyperpolarization and burst firing frequency [1][2][3]. BK channels are also prominent in the pre-synaptic membrane, where they regulate the magnitude and timing of depolarization-evoked calcium influx, thereby influencing neurotransmitter release at the synapse [4][5][6][7][8]. Genetic deletion of BK channel subunits in mice 9,10 and a gain-of-function channel mutation in humans 11,12 are associated with neurological disorders, such as ataxia and epilepsy. It is thus evident that alterations in neuronal BK channel activity give rise to CNS dysfunction, and for this reason it is critical to understand the mechanisms underlying BK channel regulation. In this study, we provide evidence that the co-chaperone cysteine string protein (CSPα) controls the cell surface density of neuronal BK channels. CSPα is a synaptic vesicle-associated protein that is broadly expressed in the nervous system and displays unique anti-neurodegenerative properties 13,14. It is a member of the large J protein family 15. CSPα contains an N-terminal J domain and a middle region displaying a string of 13-15 cysteine residues 16, which are subject to palmitoylation and critical for anchoring CSPα to synaptic vesicles 17,18. CSPα is active as a trimeric complex, composed of the heat shock cognate protein of 70 kDa (Hsc70) and the small tetratricopeptide protein (SGT) [19][20][21]. While not essential for neurotransmitter release, CSPα is required to maintain synaptic function in mice after 3 weeks of age. Genetically modified mice lacking CSPα appear normal at birth, but around postnatal day 20 they develop progressive motor deficits and CNS degeneration, followed by early lethality between days 40-80 14. The synapse loss in CSPα null mice is activity-dependent, and synapses that fire frequently, such as those associated with photoreceptors and GABAergic neurons, are lost first 22,23. It has been reported that 22 proteins are decreased in CNS synapses from CSPα knockout mice 24, and several other synaptic proteins appear to be putative clients for the CSPα chaperone system, based on association studies [25][26][27][28].
Which client protein(s) is critical for triggering the cascade of events leading to degeneration, and which changes are downstream of the primary event, is a current biological question. Specifically, studies in CSPα null mice reveal that the t-SNARE SNAP25 (synaptosomal associated protein of 25 kDa), which is fundamental for exocytosis, and the GTPase dynamin1, essential for endocytosis, are clients of the CSPα chaperone complex, leading to the prediction that loss of CSPα function would impair vesicle trafficking in neurons 24,[29][30][31][32]. Interestingly, degeneration in CSPα null mice can be rescued by over-expression of wild-type, but not inactive mutants of, SNAP-25 30. Moreover, over-expression of α-synuclein, which does not have homology to either CSPα or SNAP-25, prevents degeneration in CSPα-null mice 29. Exactly how these proteins compensate for loss of CSPα is not currently known. The importance of CSPα in synapse protection is well established, but our knowledge of the mechanistic events underlying this protection is weak and many questions remain.

The activity of BK channels is subject to an elaborate array of regulatory mechanisms including modulatory accessory subunits 33, phosphorylation 34, palmitoylation 35 and alternative splicing 36. In this study, we demonstrate that CSPα regulates cell surface expression of BK channels, which would be expected to modify synaptic function. We report that neuronal BK channel expression is increased in the brain tissue of CSPα null mice. We additionally demonstrate that mutation of a highly conserved motif in the J domain (i.e. HPD to AAA substitution), and deletion of residue 116 or replacement of Leu115 by Arg in the cysteine string region of CSPα, increase BK current by selectively increasing cell surface BK channel expression. These data reveal a novel chaperone-based cellular mechanism that regulates BK channel expression in the CNS. These observations raise the intriguing possibility that CSPα-related dysregulation of BK current density may alter neuronal excitability and contribute to the pathogenic sequence of events in one or more neurodegenerative disorders.

Results

BK channel was identified as a possible target protein of the CSPα chaperone complex based on our initial observation of robust changes in BK channel density in neuroblastoma cells co-expressing CSPα mutant proteins. Members of the J-protein family have been reported to regulate the trafficking/expression of several ion channel types, including the hERG (human ether-a-go-go related gene) K+ channel 37,38, CFTR (cystic fibrosis transmembrane conductance regulator) 39 and the KATP channel (ATP-sensitive K+ channel) 40, leading to the prospect that J-proteins are components of the cellular machinery that make 'triage' decisions regarding whether ion channels are trafficked and/or retained at the cell surface or degraded. To study the relationship between CSPα and BK channel expression, we first measured BK channel expression in wild-type and CSPα knockout mice. CSPα null mice are normal at birth, but after a lag period of 2-3 weeks they stop gaining weight and develop blindness, activity-dependent neuronal degeneration and progressive motor impairments. CSPα null mice typically die prematurely between 40-80 days 14. Heterozygous mutant mice contain less CSPα protein than wild-type controls, while homozygous mutant mice lack detectable CSPα 14.
Figure 1 shows the expression of BK, Cav2.2, Kv1.1 and Kv1.2 channels in whole brain synaptosomal membrane fractions from CSPα−/−, CSPα+/− and CSPα+/+ mice, as assessed by western blotting. BK channel expression is significantly elevated in CSPα null mice, whereas heterozygotes show expression levels similar to the wild-types. Moreover, the observed increase in BK channel is selective, as the tissue levels of Cav2.2, Kv1.1 and Kv1.2 channels are not significantly altered in the brains of either the CSPα−/− or CSPα+/− mice compared with the wild-type control. Interestingly, BK channel α subunit mRNA was not elevated in the brains of CSPα−/− mice, as determined by quantitative RT-PCR (Table 1), suggesting that the increased level of BK channel protein in CSPα−/− brain tissue is not due to a transcriptional event. It is also well known that CSPα is primarily expressed in tissues exhibiting regulated secretion, such as the brain and pancreas 16,41. In murine aortic smooth muscle, which has prominent BK channel levels, we observed no difference in the average level of BKα subunit expression amongst the three genetic backgrounds (Figure 1C). Given that CSPα is not expressed in the aorta, this finding is not unexpected.

Figure 1 | Comparison of protein levels for BK, Cav2.2, Kv1.1 and Kv1.2 channels in whole brain tissue from CSPα wild-type, heterozygous and null mice. (A) Representative western blot data of the pore-forming α subunits of BK, Cav2.2, Kv1.1 and Kv1.2 channels detected in synaptosome-enriched membrane fractions prepared from whole brain (age P23-27). The total protein loaded per lane was 40 µg; detection of β-actin on the same blots was used to verify equal loading amongst the various lanes. The data shown in panel A were selected from full-length western blot images, which are displayed in Supplementary Figure 1. (B) Histogram showing average data for BK, Cav2.2, Kv1.1 and Kv1.2 channel protein in the wild-type, heterozygous and null brain samples, quantified by camera-based detection of emitted chemiluminescence. To perform quantification, the ratio of detected channel protein to β-actin for a given sample was calculated for all three genetic backgrounds. Channel density data for the heterozygote and null tissues were then normalized to the wild-type tissue by dividing all calculated ratios by the wild-type ratio for a given channel species. (C) Histogram showing BK channel expression detected in aorta from CSPα heterozygous and null mice relative to wild-type mice. Quantification of BKα subunit detection was performed as described in panel B. Averaged western blot data were derived from 5-6 animals (panel B) and 3-4 animals (panel C) of each genetic background (2-3 litters). * indicates a statistically significant difference from the heterozygote and wild-type values, as determined by one-way ANOVA and a Tukey's post-hoc test; p < 0.05.

In order to gain mechanistic insight into the influence of CSPα on BK channel proteostasis, we generated a murine CNS-derived catecholaminergic (CAD) cell line 42 stably expressing a murine neuronal BK channel (ref. 43; see Methods) that would enable rigorous electrophysiological and immunocytochemical analyses of BK channel levels. The expression of BK channels in these cells was measured in the presence of myc-tagged CSPα or the myc-tagged loss-of-function mutant CSPα HPD-AAA.
Functionally, mutation of the HPD motif (residues 43-45) in CSPα disrupts the integrity of the highly conserved J domain, thereby preventing CSPα from activating Hsp70/Hsc70 to carry out conformational protein folding 44. Expression of the loss-of-function mutant CSPα HPD-AAA is thus expected to override the activity of endogenous CSPα in CAD cells and act in a dominant-negative fashion 45. Figures 2A and B show western blot analysis and corresponding mean data of BK channel expression in the stable CAD cell line 48 h following transient transfection with myc-tagged CSPα, CSPα HPD-AAA or pCMV plasmid (negative control). As expected, three distinct species of CSPα were identified by western blot in transfected cells: the 26 kDa immature form, the 34 kDa mature palmitoylated protein and the 70 kDa CSPα dimer 42,46. In the presence of CSPα HPD-AAA, BK channel expression was elevated ~6-fold (657.5 ± 175.4%) compared with the plasmid control (100%). In contrast, increased levels of wild-type CSPα did not significantly change BK channel expression.

As genetic knockout of CSPα is associated with elevated BK channel protein in brain tissue (Figure 1), we speculated that CSPα may interact either directly or indirectly with neuronal BK channels under native conditions. However, we were unable to capture stable CSPα-BK channel complexes from wild-type mouse brain using a classic immunoprecipitation strategy (n = 4, data not shown). Such an observation may either reflect a lack of complex formation between native CSPα and BK channels, or indicate that such putative interactions are of a transient, low-affinity nature and not detectable using this analytical approach.

As CSPα is a member of the highly conserved J protein family, we reasoned that other J proteins may also influence BK channel expression. Hsp40 (heat shock protein of 40 kDa) is a cytosolic member of the J protein family that is expressed constitutively, as well as in response to cell stress. Like CSPα, Hsp40 is neuroprotective and acts in concert with Hsc70, but the exact mechanism underlying its cell-protective actions remains poorly characterized 47. In CAD cells stably expressing BK channels, transient expression of either wild-type Hsp40 (99.3 ± 21.7%) or the loss-of-function mutant Hsp40 HPD-AAA (94.7 ± 9.5%) did not alter BKα subunit expression (Figure 2B). Collectively, these results indicate that the loss-of-function mutant CSPα HPD-AAA leads to a selective increase in BK channel levels, whereas an equivalent mutation in Hsp40, a related J protein, has no effect.

It has been reported that BKα subunits can undergo palmitoylation 35, prompting us to examine whether other palmitoylated proteins may also be increased in the presence of the mutant CSPα HPD-AAA. As shown in Figure 2C, CSPα HPD-AAA did not increase expression of the palmitoylated proteins 48,49 SNAP25, syntaxin1, GAP43 (growth associated protein of 43 kDa) and flotillin in BK stable CAD cells. In synaptosomes from CSPα null mice, we also did not observe changes in the expression of Kv1.1 (Figure 1A), another protein reported to be palmitoylated 50.

In order to evaluate the influence of mutant forms of CSPα on cell surface expression of functional BK channels, we carried out whole cell patch clamp electrophysiology to quantify active BK channels in single CAD cells. To distinguish BK channel current from endogenous voltage-gated K+ channels
(i.e. Kv1-type channels), we utilized 4-aminopyridine (4-AP, 5 mM) to block these channels and measured the remaining whole cell currents in the absence and presence of the highly selective BK channel inhibitor, penitrem A (100 nM) 51. Figure 3A shows a current-voltage plot of the penitrem A-sensitive whole cell current density recorded from CAD cells stably expressing BK channels that have been transiently transfected with either wild-type or mutant forms of CSPα. BK current density was significantly greater in the presence of the CSPα HPD-AAA mutant compared with eGFP-expressing control cells, demonstrating higher surface expression of functional BK channels. In contrast, transient expression of wild-type CSPα had no effect on single cell BK current density, which is consistent with western blot data demonstrating very little effect of exogenous wild-type CSPα on the level of BKα subunit protein in BK stable CAD cells (Figure 2B). Figure 3B shows representative families of membrane currents recorded from BK stable CAD cells in the absence and presence of penitrem A under the described transfection conditions. The right hand column displays the penitrem A-sensitive current obtained by digital subtraction (i.e. left hand column minus middle column). These data clearly demonstrate that the CSPα loss-of-function mutant HPD-AAA increases cell surface expression of functional BK channels.

Figure 3 | (A) Electrophysiological analysis of BK stable CAD cells transiently transfected with either wild-type or mutant forms of CSPα. Single CAD cells were voltage clamped as depicted by the test pulse protocol shown on the left, and the selective BK channel inhibitor penitrem A (100 nM) was used to isolate BK channel current. The current-voltage plot summarizes the penitrem A-sensitive whole cell current density recorded from transfected cells, as described on the right hand side of the plot. Data are presented as mean ± SE, and statistically significant differences were determined using a one-way ANOVA and a Dunnett post-hoc test (vs. GFP alone); * (p<0.05), ** (p<0.01). (B) Representative families of whole cell currents recorded from BK stable CAD cells transiently transfected with either wild-type or mutant forms of CSPα in response to voltage steps ranging from −20 to +200 mV. The left hand column displays total whole cell current recorded from cells under the indicated transfection conditions. The middle column shows current remaining in the presence of the BK channel blocker penitrem A, and the right hand column displays the calculated difference currents obtained by digitally subtracting the currents evoked in penitrem A from the total whole cell currents. Scale bars are indicated.

ANCL is a rapidly progressive neurodegenerative disorder in young adults, characterized by psychiatric manifestations, seizures, progressive dementia and motor deficits. Detailed pathogenic mechanisms of these CSPα mutations have yet to be elucidated, but individuals afflicted with ANCL display phenotypes reminiscent of the progressive motor deficits observed in the CSPα KO mouse. Both these mutations interfere with the palmitoylation of CSPα's cysteine string region, which is important for anchoring CSPα to synaptic vesicles. To evaluate the cellular effects of these putative dominant negative mutations, we examined the impact of the CSPα Δ116 and CSPα L115R mutations on the level of functional BK channel in our CAD cells stably expressing neuronal BK channels.
Analysis of BK channel activity in the presence of either CSPα Δ116 or CSPα L115R is shown in Figure 3. BK channel current density is significantly greater in the presence of the CSPα Δ116 and CSPα L115R mutants compared with either wild-type CSPα or eGFP-expressing control cells, but the increases are less than that observed with the CSPα HPD-AAA mutant. Collectively, these data indicate that CSPα dysfunction has a profound effect on BK channel expression in a KO mouse model, as well as in a model neuronal cell line.

To determine if the dysfunctional mutant CSPα HPD-AAA could also affect BK channel activity in a non-neuronal cell, we recorded BK currents from a rat aortic smooth muscle cell line (i.e. A7r5 55) stably transfected with the same BKα subunit cDNA used to create stable CAD cells. The data displayed in Figure 4 demonstrate that transient expression of either wild-type CSPα or CSPα HPD-AAA, as confirmed by immunocytochemistry (Figure 4B), had no effect on BK current density in A7r5 cells stably expressing BKα subunits. These findings are thus consistent with the observed lack of effect of CSPα knockout on native BK channel levels in mouse aorta (Figure 1C). Taken together, our data indicate that CSPα-mediated regulation of BK channels likely involves additional co-factors expressed in secretory cells, such as neurons.

Biochemical confirmation that the CSPα HPD-AAA-mediated increase in BK current density was associated with an increase in BK channel cell surface expression, along with the total cellular pool of channel protein, is shown in Figure 5. For this purpose, CAD cells stably expressing BK channels were first transfected with CSPα variants, followed by labeling of intact cells with biotin. Biotinylated cell surface proteins were then extracted from the total cell lysate by streptavidin pull-down, followed by western blot analysis. The degree of BK channel biotinylation was evaluated in the presence of transfected CSPα, CSPα HPD-AAA or pCMV plasmid as a negative control. Consistent with the data displayed in Figures 2 and 3, the results shown in Figure 5A independently confirm that CSPα HPD-AAA (lane 2) markedly increased the cell surface expression of BK channel compared with wild-type CSPα (lane 1) or the plasmid control (lane 3). These data are complemented by quantifiable changes in the immunofluorescence staining of BKα subunit expression in BK stable CAD cells that were transiently transfected with either wild-type or mutant forms of CSPα (Figures 5B and C). Co-expression of eGFP was used as a marker of transiently transfected CAD cells. The elevated cellular expression of BK channels in the presence of dysfunctional CSPα mutants shown in Figure 5B thus mimics the functional increase in BK current density, as determined by single cell patch clamp recordings (Figure 3A). Whereas transient expression of wild-type CSPα did not significantly alter BK current density, it did reduce BKα subunit staining in BK stable CAD cells (Figure 5B). This difference likely reflects the ability of immunocytochemistry to detect both intracellular and cell surface pools of BK channels, the former of which may be more readily affected by increased levels of wild-type CSPα. Note that only very low background fluorescence was detected in BK stable CAD cells stained with only fluorescently-labeled secondary antibody (Figure 5B, 2° Ab alone).

Discussion

This study identifies the co-chaperone CSPα (DnaJC5) as a critical regulator of BK channel density in the neuronal plasma membrane.
In CSPα null mice, we observed a ~2.5-fold increase in the level of BK channel protein in CNS synaptosomal fractions compared with wild-type littermates (Figure 1); interestingly, the levels of other neuronal channels (i.e. Cav2.2, Kv1.1 and Kv1.2) were unchanged by KO of CSPα. Consistent with this novel finding in mice, we further observed that BK channel levels were dramatically increased in murine CNS-derived neuroblastoma cells following expression of a dominant negative form of CSPα containing a mutated HPD sequence (Figures 2 and 3). The HPD motif is essential for the overall activity of the heterotrimeric CSPα chaperone complex. Collectively, these fundamental observations imply that CSPα normally acts to dampen or limit BK channel density at the cell surface, and that silencing or disruption of CSPα's activity leads to increased channel expression.

Biologically, it is possible that wild-type CSPα limits BK channels by a cellular control mechanism in which CSPα acts to (1) reduce export of BK channels from the endoplasmic reticulum to the cell membrane and/or (2) increase channel removal/exit from the plasma membrane to lysosomes or proteasomes. Inactivation of CSPα, either by mutations or loss of the protein, disrupts this regulatory activity, thereby allowing BK channels to accumulate at the cell surface. Such regulatory steps are not without precedent; earlier studies demonstrate that CSPα and Hsp40 limit the exit of the cystic fibrosis transmembrane conductance regulator (CFTR channel) from the ER 39,56, while DnaJA1 and DnaJA2 similarly reduce ER export of the hERG (human ether-a-go-go related gene) channel 37. Our data showing that BK channels are also targeted by CSPα are consistent with this general paradigm. However, an alternate scenario whereby CSPα targets other cellular proteins (e.g. transport machinery) that in turn regulate BK export to/retrieval from the plasma membrane is also possible. Fernandez-Chacon's group has found a defect in synaptic vesicle recycling at motor nerve terminals in CSPα null mice 32, and Chandra's group has found that CSPα regulates polymerization of dynamin, an essential component of the endocytotic machinery 24, pointing to synaptic vesicle trafficking issues. The defective assembly of the exocytotic machinery due to reduced SNAP25 levels, reported by Südhof and colleagues, makes a further case for changes in neuronal membrane trafficking in the absence of CSPα 30,31. The role that these other CSPα clients may play in BK channel proteostasis remains to be determined. That said, it is clear from the data presented in Figure 1 that the CSPα-related increase in neuronal BK channel expression is selective, based on the absence of change found in Cav2.2, Kv1.1 and Kv1.2 protein levels. In this way, the cellular chaperone system, in addition to making 'decisions' about the folding/misfolding status of cellular ion channels, also monitors BK channel density. An interesting extension of the present study will be to investigate the role that CSPα plays in modulating mutant BK channels (e.g. D434G) associated with human epilepsy and movement disorders 11.

Our data further indicate that the recently identified, human disease-associated CSPα mutations CSPα Δ116 and CSPα L115R are also capable of increasing BK current, suggesting that the neuronal sorting and trafficking of BK channels is altered by these mutations within the cysteine string region.
While CSPα Δ116 and CSPα L115R increase BK channel density at the membrane, the increase is not as large as that observed with CSPα HPD-AAA (Figure 3), suggesting that the CSPα Δ116 and CSPα L115R mutations may result in only a partial loss of CSPα function. These same mutations were recently identified as the cause of ANCL in young adults [52][53][54] and lead to lysosomal accumulation of lipofuscin, along with dementia and motor deficits. Current understanding of CSPα's link to lysosomal dysfunction, ceroid deposition and synapse maintenance is limited.

Is CSPα dysfunction involved in diseases other than ANCL? Although dysfunction of CSPα, through either mutations or genetic deletion, has been linked to synaptic loss, the involvement of CSPα in neurodegenerative diseases other than ANCL is not clear. CSPα is reduced in the frontal cortex of humans with Alzheimer's disease 24, suggesting that a reduction in CSPα's neuroprotective capacity plays a role in Alzheimer's disease progression. Rescue of CSPα null mice with α-synuclein, but not α-synuclein A30P, the mutation associated with Parkinson's disease, implies a link between CSPα and the degenerative cascade in Parkinson's disease 29. In experimental models, mutant huntingtin interferes with CSPα chaperone activity, suggesting that CSPα is compromised in Huntington's disease 57. It is also reported that CSPα is reduced in rats following chronic morphine administration 58,59. Despite these reported connections between reduced CSPα activity and the pathogenesis underlying distinct neurodegenerative diseases, numerous details remain to be established regarding the potential causality between CSPα dysfunction and the loss of synaptic integrity.

Growing evidence suggests that the large J protein family, acting in concert with Hsc70/Hsp70, has links to neural diseases. In addition to the mutations in DnaJC5 (CSPα) associated with ANCL, mutations in the related J protein DnaJC6 (auxilin) result in juvenile Parkinsonism 59. Similarly, mutations in sacsin lead to autosomal recessive spastic ataxia of Charlevoix-Saguenay 60, mutations in DnaJB2 (HSJ1) cause lower motor neuron disease 61, while mutations in DnaJB6 (mrj) are implicated in muscular dystrophy 62, Parkinson's 63 and Huntington's disease 64.

Does BK channel over-expression contribute to degeneration under conditions in which CSPα is absent or dysfunctional? CSPα-mediated changes in BK levels may potentially contribute to the progressive activity-dependent neurodegeneration associated with CSPα loss/dysfunction; however, there is currently no direct evidence supporting this idea. Indeed, high levels of BK channels in nerve terminals would be predicted to alter membrane excitability and could explain the increased synaptic depression in CSPα null mice observed by Südhof and colleagues 14. The severe age-dependent 14 and activity-dependent 22,23 degeneration of synaptic function in CSPα KO mice argues that increased BK channel activity may be intimately involved in the pathogenic sequence of events that leads to synaptic deterioration; however, this scenario has not been directly examined thus far. Future studies will undoubtedly explore the potential link between BK channel-associated neuropathies (i.e. ataxia, epilepsy) and CNS vulnerability to neurodegeneration in the absence of CSPα.

In summary, our data demonstrate that CSPα is capable of regulating BK channel expression in a neuronal cell model and establish key residues within CSPα for this regulatory activity.
Mutations of CSPα in the N-terminal J domain or central cysteine string region lead to an increase in total and cell surface BK channel expression, resulting in greater BK channel current density. In parallel, increased BK channel expression is also found in the brain of CSPα null mice, demonstrating that loss of CSPα function alters BK channel expression in the intact CNS, which may lead to altered neuronal membrane excitability (Figure 6). Based on these findings, we speculate that alterations in the cell surface expression of neuronal BK channels may contribute to the pathogenesis of neurodegeneration associated with either genetic loss or dysfunction of CSPα.

Methods

Cell culture. CAD (CNS catecholaminergic derived) mouse neuroblastoma cells were seeded into 6-well plates and grown in DMEM/F12 medium supplemented with 10% fetal bovine serum and 1% penicillin/streptomycin, as previously described 42,65. The established rat aortic smooth muscle cell line A7r5 was grown in DMEM containing 10% FBS, as described previously 55. Cells were lysed in 40 mM Tris (pH 7.4), 150 mM NaCl, 2 mM EDTA, 1 mM EGTA, 1 mM Na3VO4, 0.1% SDS, 1% (v/v) Triton X-100, 0.5 mM PMSF and protease inhibitor (Sigma) at 4°C for 1 hour. Lysates were centrifuged at 15,000 × g for 5 minutes at 4°C, and the supernatant (soluble fraction) was collected and stored at −70°C. For transient transfection, CAD cells grown on 35 mm dishes were washed in PBS and transiently transfected using ~1.5 µg of cDNA and 7 µl of Lipofectamine-2000 (Invitrogen) per dish. Reagents were mixed in 0.2 ml of Opti-MEM medium and then diluted to a total volume of 1 ml with DMEM. Protein concentration of the soluble CAD cell lysate was determined using the Bradford assay (BioRad).

Immunoblotting. Proteins were electrotransferred from polyacrylamide gels to nitrocellulose membrane (0.2 µm pore size) in 20 mM Tris, 150 mM glycine and 12% methanol. Membranes were blocked in phosphate-buffered saline (PBS) containing 0.1% Tween 20 and 4% (w/v) skim milk powder, and then incubated with primary antibody overnight at 4°C. The membranes were washed and incubated with horseradish peroxidase-coupled secondary antibody for ~2 h at room temperature. Bound antibodies on the membranes were detected by incubation with the West Pico chemiluminescence reagent (Pierce Chemical Co.) and exposure to Kodak X-ray film. The chemiluminescent signals were quantified using a BioRad Fluor-S MultiImager Max and Quantity One 4 software. (Table 1). PCR assay validation was performed by testing serial dilutions of pooled template cDNAs, which indicated a linear dynamic range of 33-0.0033 ng template and yielded standard curves with slopes of −3.53 and −3.31 for KCNMA1 and ACTB, respectively, with R² values >0.99 and percent efficiencies of 91.8 and 100.3% for KCNMA1 and ACTB, respectively. No fluorescence was detected in negative template controls. RNA integrity was confirmed by performing agarose gel electrophoresis of the isolated mouse brain total RNA preparations denatured with glyoxal sample buffer (Ambion). This analysis yielded intensity ratios between the 28S and 18S rRNA bands of 1.85-2.04 for all the samples tested 66.

Biotinylation of cell surface BK channels. CAD cells stably expressing murine brain BKα subunits 43 were transfected with CSPα variants, as described above. 24 h post-transfection, the cells were washed three times with PBS. CAD cells were then incubated with EZ-Link Sulfo-NHS-SS-Biotin (Thermo Scientific) (1 mg/ml) in PBS for 30 min at 4°C.
As a negative control, cells were incubated only with PBS. The reaction was neutralized by addition of 1% (w/v) BSA in PBS for 10 min at 4°C. After neutralization, cells were washed thrice with ice-cold PBS to remove non-reacted biotin, and were harvested in 1 ml of PBS containing 1% (v/v) Triton X-100 and protease inhibitor (cOmplete, EDTA-free; Sigma) by incubation for 2-5 minutes on ice. The lysates were centrifuged at 15,000 × g for 15 min at 4°C, and the soluble protein concentration was determined using the Bradford assay (BioRad). For streptavidin pull-down, 1 mg of the soluble protein lysate was incubated with 100 µl of streptavidin agarose beads (50% slurry) (Thermo Scientific) overnight at 4°C on a rotator. Beads were rinsed thrice with 1% Triton X-100 in PBS. Biotinylated proteins were eluted from the beads by adding 2× Laemmli sample buffer (62.2 mM Tris-HCl pH 6.8, 7.5% v/v glycerol, 2% w/v SDS, 0.015 mM bromophenol blue, 1.2% v/v β-mercaptoethanol and 100 mM DTT) and treating the samples at 37°C for 1 h. Following elution, proteins were separated by SDS-PAGE using a 10% polyacrylamide gel, and BKα subunits were detected by western blot analysis.

Stable transfection of CAD cells. To establish a stable BK channel-expressing cell line, CAD cells were transiently transfected in 100 mm culture plates with pcDNA3.1 Zeo(+) plasmid containing cDNA encoding a BKα subunit originally cloned from murine brain 43. 48 h post-transfection, cells were plated and grown in media containing 0.8 mg/ml Zeocin (Invitrogen). Non-transfected cells were used as a negative control. The selection medium was switched every 3 to 4 days until cell foci were identified. Cell foci were isolated with cloning cylinders and transferred into 12-well plates before expanding colonies in 6-well plates. The stable expression of the BK channel was confirmed by western blot analysis and immunocytochemistry. Individual CAD cell clones stably expressing BKα subunits were subsequently maintained in DMEM/F12 medium containing 0.5 mg/ml Zeocin.

Whole cell patch clamp recordings. Voltage-clamp measurements were performed using conventional, ruptured membrane patch clamp methodology in combination with an Axopatch 200B amplifier, Digidata 1440 series analogue/digital interface and pClamp v10 software. Whole cell electrical signals were typically filtered at 1-2 kHz and sampled at 5 kHz. Glass micropipettes (2-4 MΩ tip resistance) were pulled from thin-walled borosilicate capillaries and contained 100 mM KOH, 30 mM KCl, 1 mM MgCl2, 0.005 mM CaCl2 and 10 mM HEPES, pH 7.3 with methanesulfonic acid. The bath chamber was placed on the stage of a Nikon TE2000 inverted microscope equipped with epifluorescence illumination and perfused with a modified Ringer's saline solution containing 135 mM NaCl, 5 mM KCl, 1 mM MgCl2, 2.5 mM CaCl2, 5 mM 4-aminopyridine and 10 mM HEPES, pH 7.3 with 1 N NaOH. Cells in the bath chamber were constantly superfused at ~2 ml/min, and solution changes were performed by gravity flow from a series of elevated solution reservoirs using manually controlled solenoid valves. All electrophysiological recordings were performed at 35-37°C. CAD cells stably expressing BK channels were transiently transfected with either wild-type or mutant CSPα cDNA constructs, together with eGFP. Transfected cells were identified in the recording bath by their green fluorescence using 488 nm excitation and 510 nm emission filters.

Immunocytochemistry.
Transfected CAD cells were plated on sterile glass coverslips and washed three times with PBS to remove medium and serum, then fixed by incubation for 25 min at room temperature in PBS containing 1% (w/v) formalin. Cells were washed thrice with PBS and permeabilized by incubation for 5 min in PBS containing 0.01% Triton X-100. Cells were then washed three times in PBS and incubated in blocking solution (5% v/v donkey serum, 0.05% v/v Tween-20 in PBS) to reduce non-specific antibody binding. Cells were then incubated with anti-BK channel polyclonal antibody (Millipore, 1:400 dilution) overnight at 4°C in blocking solution, then washed five times with PBS. Cells were incubated at 4°C for 1 hour in blocking solution containing goat anti-rabbit secondary antibody conjugated to Cy3 (Jackson ImmunoResearch, 1:2000). Cells were rinsed once with blocking solution and washed three times with PBS. Coverslips containing stained cells were mounted onto glass slides with a drop of mounting medium (ProLong Gold anti-fade, Invitrogen), and images were taken on an Olympus Fluoview FV10i confocal microscope using a 60× (oil immersion) objective (NA 1.35). Laser intensity and sensitivity were maintained at 40% for all images, with a confocal aperture of 1.35. Fluorescence signal intensities of individual cells were analyzed and quantified using CellSens Dimension digital imaging software (Olympus).

Isolation of tissue fractions from mouse brain and aorta. CSPα+/− mice were obtained from Jackson Labs (Bar Harbor, Maine) and genotyped as previously described 14. All mice were maintained in accordance with an animal protocol approved by the University of Calgary and the Guidelines for Lab Animal Safety (NIH). Mice (age 23-27 days) were anesthetized with isoflurane and euthanized, and intact brain tissue obtained from wild-type, heterozygote and CSPα KO mice was fractionated. Briefly, a mouse brain was homogenized in 7 ml of ice-cold 0.32 M sucrose, 10 mM HEPES, 1 mM EGTA, 0.1 mM EDTA and 0.3 mM PMSF with 15 up-and-down strokes using a Teflon glass homogenizer. The homogenate was centrifuged at 4°C for 7 min at 1,000 × g and the supernatant (S1) collected. The S1 supernatant was then spun for 15 min at 22,000 × g and the resulting supernatant (S2) was discarded. The pellet (P2) was washed by re-suspension in the above sucrose solution and then re-centrifuged at 22,000 × g. The final pellet, representing washed crude synaptosomes, was re-suspended in 4 ml of sucrose buffer. Thoracic and abdominal aorta was removed following euthanasia and cleaned of extraneous tissue. Each isolated aorta was then placed in an Eppendorf tube, immediately frozen on dry ice and stored at −80°C. Thawed aortic tissue was minced into small pieces and then disrupted in the tube using a tight-fitting plastic pestle. Tissue pieces were suspended in 0.2-0.3 ml of ice-cold solubilization buffer (10 mM Tris-HCl pH 7.5, 150 mM NaCl, 1% Triton X-100, 1 mM EDTA, 1 mM EGTA, 1 mM benzamidine, and 5 µg/ml each of leupeptin, aprotinin and pepstatin A) and kept on ice for 45-60 min with occasional mixing. The crude homogenate was then centrifuged at 1,000 × g for 5 min at 4°C and the supernatant was collected. Following determination of protein concentration, the soluble fraction was used for western blot analysis.
The Invisible Toll: Unveiling the Prevalence and Predictors of Depression and Anxiety Among Pulmonary Tuberculosis (TB) Patients and Their Households in Gujarat, India

Background: Tuberculosis (TB) imposes a substantial physical and psychological burden on patients and their families. This study aimed to investigate the prevalence and predictors of depression and anxiety among pulmonary TB patients and their household contacts in Jamnagar, Gujarat, India.

Materials and methods: A cross-sectional study was conducted at TB units (TUs) in Jamnagar, Gujarat. Trained research assistants interviewed 272 pulmonary TB patients and 544 household contacts using structured questionnaires. Depression and anxiety were assessed using the Patient Health Questionnaire-9 (PHQ-9) and Hamilton Anxiety Rating Scale (HAM-A), respectively. Sociodemographic, clinical, and psychosocial factors (stigma and social support) were evaluated. Logistic regression analyses were performed to identify predictors of depression and anxiety. A p-value of < 0.05 was considered statistically significant for all analyses in this study.

Results: Out of 272 TB patients and 544 household contacts, the prevalence of depression was 98 (36.0%) and 135 (24.8%), respectively (p=0.001). Anxiety was present in 85 (31.3%) of TB patients and 112 (20.6%) of household contacts (p<0.001). For TB patients, low household income (AOR=2.1, 95% CI: 1.9-4.3), low social support (AOR=0.84, 95% CI: 0.6-0.9), and high perceived stigma (AOR=2.3, 95% CI: 1.3-4.5) were independently associated with depression. Among household contacts, similar factors were identified, including low household income (AOR=1.7, 95% CI: 1.6-2.9), low social support (AOR=0.88, 95% CI: 0.6-0.9), and high perceived stigma (AOR=1.80, 95% CI: 1.1-2.3).

Conclusion: Depression and anxiety are highly prevalent among pulmonary TB patients and their household contacts in Gujarat, India. Low socioeconomic status, lack of social support, and TB-related stigma emerged as significant predictors of these mental health conditions, underscoring the need for integrated, multidisciplinary interventions to address the psychological impact of TB on patients and their families.

Introduction

Tuberculosis (TB), a formidable adversary in infectious diseases, has long shadowed global public health. While the medical community has made strides in combating the physical manifestations of this ancient scourge, the psychological toll it exacts on patients and their families remains largely uncharted territory. Depression and anxiety, insidious companions to TB, often go unnoticed, yet their impact can be profound, undermining treatment adherence, prolonging recovery, and diminishing quality of life [1,2].

Mortality and incidence of TB decreased across all age groups for both males and females over the period 1990-2019. Both incidence and mortality were higher among males than females [3]. Gujarat, a state renowned for its cultural richness and economic prowess, finds itself grappling with the dual challenge of TB and its psychological ramifications. Despite concerted efforts to address the physical manifestations of TB, the mental health aspects have remained an often-neglected facet, obscured by the urgency of clinical interventions [4].
Previous studies have shed light on the elevated prevalence of depression and anxiety among TB patients, underscoring the intricate relationship between physical and mental well-being [5,6]. However, the general impact and stress associated with a (relatively) chronic disease may also contribute to mental ill-health among children and adolescents with TB and affect their socioeconomic trajectory [7].

This study sought to unravel the intricate tapestry of depression, anxiety, and TB in Gujarat, India, by exploring the prevalence of these conditions among pulmonary TB patients and their household contacts. Furthermore, it aimed to elucidate the sociodemographic, clinical, and psychosocial factors that predispose individuals to these mental health challenges, providing a nuanced understanding of the complex interplay between TB and psychological distress [8,9].

By unveiling the hidden burdens of depression and anxiety in this population, the study endeavored to illuminate a path toward comprehensive care, where the physical and mental aspects of TB are addressed in tandem. Ultimately, the findings of this research endeavor to catalyze a paradigm shift, ushering in a holistic approach to TB management that transcends the boundaries of traditional medical interventions and embraces the multifaceted needs of patients and their families [10].

Study design and setting

This was a cross-sectional analytical study conducted at three TB units (TUs) in the Jamnagar Municipal Corporation area (two urban + one rural) in Gujarat, India. The study was carried out between 23 March 2023 and 23 March 2024.

Eligibility criteria

The study population consisted of pulmonary TB (PTB) patients and their household contacts residing in Gujarat, India. The inclusion criteria required pulmonary TB patients to be aged 18 years or older, diagnosed with active TB, and receiving anti-TB treatment at participating centers. Additionally, household contacts had to be aged 18 years or older and reside in the same household as the enrolled TB patients. Exclusion criteria were applied to exclude patients with extrapulmonary TB or multi-drug-resistant TB, as well as patients or household contacts with cognitive impairments or those unable to provide informed consent. Furthermore, household contacts who were not residing with the TB patient during the current treatment episode were also excluded from the study.

Sample size and sampling technique

For this study, a non-probability sampling approach was employed to recruit the required sample of TB patients and their household contacts. Specifically, a combination of purposive and snowball sampling techniques was utilized.

In the initial stage, the TUs in Jamnagar, Gujarat, India, were purposefully selected. At the selected treatment centers, TB patients meeting the inclusion criteria were purposefully recruited through the coordination of healthcare providers and staff.

A sample of 272 TB patients was enrolled; all patients at the TUs at that time who met the inclusion criteria were included, and a snowball sampling technique was employed to recruit their household contacts. Each enrolled TB patient was asked to provide information about their household members, and those who met the inclusion criteria were invited to participate in the study.
The target sample size for household contacts was set at 544, maintaining a 1:2 ratio with the TB patient sample. This ratio was chosen to ensure a larger sample of household contacts, providing greater statistical power to assess the mental health burden and associated factors specific to this group.

The decision to use non-probability sampling techniques was based on the following considerations: (1) accessibility and feasibility: TB patients and their household contacts can be considered hard-to-reach populations, and non-probability sampling methods allowed for more efficient and targeted recruitment within the available resources and timeframe; and (2) the exploratory nature of the study: as this was an exploratory study investigating the prevalence and predictors of depression and anxiety in this population, non-probability sampling provided a pragmatic approach to gather initial insights and generate hypotheses for future research.

Data collection

Trained research assistants conducted face-to-face interviews with the study participants using structured questionnaires. The following data were collected: (1) sociodemographic and clinical data: age, gender, marital status, education, employment, income, substance abuse, and TB treatment details (for patients); (2) depression and anxiety assessment: depression was assessed using the Patient Health Questionnaire-9 (PHQ-9), with a score ≥10 considered indicative of depression [11], and anxiety was assessed using the Hamilton Anxiety Rating Scale (HAM-A), with a score ≥10 considered indicative of anxiety [12]; and (3) social support and perceived stigma: social support was measured using the Multidimensional Scale of Perceived Social Support (MSPSS) [13,14], while perceived TB-related stigma was evaluated using the perceived TB stigma scale.

Operational Definition

Depression: Depression was defined as a PHQ-9 score of 10 or higher. The PHQ-9 is a widely used and validated self-report measure for assessing the presence and severity of depressive symptoms. It consists of nine items based on the DSM-IV criteria for major depressive disorder, with scores ranging from 0 to 27. A score of 10 or higher has been shown to have high sensitivity and specificity for detecting major depressive disorder in various populations [11].

Anxiety: Anxiety was defined as a HAM-A score of 10 or higher. The HAM-A is a clinician-rated scale widely used to assess the severity of anxiety symptoms. It consists of 14 items, each rated on a scale of 0 (not present) to 4 (severe), with a total score ranging from 0 to 56. A score of 10 or higher is generally considered indicative of clinically significant anxiety [12].

Social support: Social support was measured using the MSPSS, a 12-item self-report instrument that assesses perceived social support from three sources: family, friends, and significant others. The MSPSS has been widely used and validated across various populations and cultures, demonstrating good internal consistency, test-retest reliability, and construct validity [13,14].

Perceived stigma: Perceived stigma related to TB was assessed using the perceived TB stigma scale. This scale comprises 11 items, each rated on a 4-point Likert scale from 1 (strongly disagree) to 4 (strongly agree). Participants were categorized as perceiving stigma if their score equaled or exceeded the mean stigma score. The scale demonstrated good internal consistency in this study, with a Cronbach's alpha of 0.81 [15].
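The scoring rules above are mechanical enough to state in code. The sketch below is a minimal illustration of the cutoffs as operationalized in this study (PHQ-9 total ≥ 10 for depression; HAM-A total ≥ 10 for anxiety); the function names and the example responses are hypothetical and are not part of the study instruments.

```python
# Illustrative scoring of the two screening instruments as operationalized
# above. Function names and example data are hypothetical.

def phq9_score(items):
    """Sum the nine PHQ-9 items, each rated 0-3; total range 0-27."""
    assert len(items) == 9 and all(0 <= i <= 3 for i in items)
    return sum(items)

def hama_score(items):
    """Sum the fourteen HAM-A items, each rated 0-4; total range 0-56."""
    assert len(items) == 14 and all(0 <= i <= 4 for i in items)
    return sum(items)

def classify(phq9_items, hama_items):
    """Apply the study cutoffs: a total >= 10 on either scale flags the condition."""
    return {
        "depression": phq9_score(phq9_items) >= 10,
        "anxiety": hama_score(hama_items) >= 10,
    }

# Example respondent: PHQ-9 total of exactly 10 sits right at the cutoff.
print(classify([2, 2, 1, 2, 1, 1, 1, 0, 0], [1] * 14))
# -> {'depression': True, 'anxiety': True}
```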
Validity and reliability of scales

The PHQ-9, HAM-A, MSPSS, and perceived TB stigma scale are widely used and validated instruments for assessing depression, anxiety, social support, and perceived stigma, respectively. In this study, the internal consistency of these scales was evaluated using Cronbach's alpha, and the results demonstrated good reliability (PHQ-9: α = 0.85, HAM-A: α = 0.89, MSPSS: α = 0.92, perceived TB stigma scale: α = 0.81).

Measures Implemented to Ensure the Quality and Integrity of the Data Collected

Training of research assistants: Research assistants underwent rigorous training in data collection procedures, interviewing techniques, and ethical considerations to ensure standardized and consistent data collection.

Pilot testing: The questionnaires and data collection procedures were pilot-tested with a small subset of participants to identify and address any potential issues or ambiguities before the main data collection phase.

Data entry and validation: All data were double-entered into a secure database by trained data entry personnel. Automated range and consistency checks were performed to identify and resolve potential data entry errors.

Regular monitoring: The principal investigator and supervisors regularly monitored the data collection process, reviewed completed questionnaires, and provided feedback to research assistants to maintain quality standards.

Data security and confidentiality: All data were anonymized and stored securely, with access restricted to authorized personnel only. Appropriate measures were taken to protect the confidentiality of participants' information throughout the study.

Ethical considerations

The study protocol was approved by the Institutional Review Board/Ethics Committee of M.P. Shah Medical College and Guru Gobind Singh Hospital, Jamnagar (protocol no. 257/03/2023). Written informed consent was obtained from all participants before enrollment. Confidentiality and privacy were maintained throughout the study.

Statistical analysis

Descriptive statistics were used to summarize the sociodemographic and clinical characteristics of the study participants. The prevalence of depression and anxiety was calculated as percentages with 95% CIs. Bivariate and multivariate logistic regression analyses were performed to identify factors associated with depression and anxiety among TB patients and household contacts. Odds ratios (ORs) with 95% CIs were calculated. A p-value <0.05 was considered statistically significant.
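As a concrete illustration of this analysis plan, the sketch below computes a prevalence estimate with a normal-approximation 95% CI and a Pearson chi-square comparison between the two groups, using the headline depression counts reported in the Results below (98/272 patients vs. 135/544 contacts). This is our minimal reconstruction of the described pipeline, not the study's actual analysis code; the software used is not stated in the text.

```python
# Minimal sketch of the descriptive analysis: prevalence with a 95% CI
# (normal approximation) and a chi-square comparison between groups,
# using the depression counts reported in the Results.
from math import sqrt
from scipy.stats import chi2_contingency

def prevalence_ci(cases, n, z=1.96):
    p = cases / n
    se = sqrt(p * (1 - p) / n)
    return p, (p - z * se, p + z * se)

for label, cases, n in [("TB patients", 98, 272), ("Household contacts", 135, 544)]:
    p, (lo, hi) = prevalence_ci(cases, n)
    print(f"{label}: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")

# 2x2 table: rows = group, columns = depressed / not depressed.
table = [[98, 272 - 98], [135, 544 - 135]]
chi2, pval, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {pval:.4f}")  # p is on the order of 0.001
```

Running this reproduces the reported group difference for depression (p ≈ 0.001); the anxiety comparison works the same way with the counts 85/272 and 112/544.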
Results

The sociodemographic characteristics of the study participants are presented in Table 1, which provides a comprehensive overview of the demographic profile of the 272 pulmonary TB patients and 544 household contacts (HCCs). The mean age of TB patients was 38.5 years (SD = 12.8), and for HCCs it was 35.2 years (SD = 15.6). Among TB patients, 58.1% (n = 158) were male and 41.9% (n = 114) were female, while in the HCC group, 53.1% (n = 289) were male and 46.9% (n = 255) were female. The majority of TB patients (68.0%, n = 185) and HCCs (64.2%, n = 349) were married. In terms of education level, the highest proportion of TB patients (37.5%, n = 102) and HCCs (35.5%, n = 193) had completed primary education. Most TB patients (68.0%, n = 185) and HCCs (71.5%, n = 389) were employed. Regarding monthly household income, 45.6% (n = 124) of TB patients and 47.2% (n = 257) of HCCs had an income between 10,000 and 20,000 Indian rupees (INR). Substance abuse was reported by 23.9% (n = 65) of TB patients and 16.4% (n = 89) of HCCs. The mean social support score was 22.8 (SD = 6.2) for TB patients and 25.1 (SD = 7.4) for HCCs, while the mean perceived stigma score was 18.4 (SD = 5.8) for TB patients and 14.2 (SD = 6.1) for HCCs.

Table 2 shows the prevalence of depression and anxiety among pulmonary TB patients and their household contacts. Depression, defined as a PHQ-9 score of 10 or higher, was present in 36.0% (n=98) of TB patients and 24.8% (n=135) of HCCs; the prevalence was significantly higher among TB patients (p=0.001). Similarly, anxiety, defined as a HAM-A score of 10 or higher, was present in 31.3% (n=85) of TB patients and 20.6% (n=112) of HCCs; this difference was also significant (p<0.001).

Table 3 presents the results of bivariate logistic regression analyses examining the association between various factors and depression/anxiety among TB patients and HCCs. In the adjusted models, among TB patients, low household income (AOR=2.1, 95% CI: 1.9-4.3), low social support (AOR=0.84, 95% CI: 0.6-0.9), and high perceived stigma (AOR=2.3, 95% CI: 1.3-4.5) were independently associated with higher odds of depression. Among HCCs, low household income (<10,000 INR: AOR=1.7, 95% CI: 1.6-2.9), low social support (AOR=0.88, 95% CI: 0.6-0.9), and high perceived stigma (AOR=1.80, 95% CI: 1.1-2.3) were independently associated with higher odds of depression after adjusting for other factors. For anxiety among HCCs, low social support (AOR=0.79, 95% CI: 0.66-0.89) and high perceived stigma (AOR=2.87, 95% CI: 1.90-7) remained independently associated. Similar patterns were observed for the association between these factors and anxiety among HCCs, although some associations were not statistically significant.

Overall, these tables provide valuable insights into the prevalence of depression and anxiety among pulmonary TB patients and their household contacts, as well as the factors associated with these mental health conditions. The findings highlight the importance of addressing mental health issues and providing appropriate support to TB patients and their families.
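For readers wishing to reproduce the form of these regression analyses, the sketch below fits a multivariate logistic model and converts coefficients into odds ratios with 95% CIs. The data frame here is synthetic and the variable names merely mirror the study's predictors; the study's own models were of course fit on the collected interview data.

```python
# Generic sketch of the multivariate logistic regression used to estimate
# adjusted odds ratios. All data below are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 272
df = pd.DataFrame({
    "low_income": rng.integers(0, 2, n),         # <10,000 INR vs. higher
    "social_support": rng.normal(22.8, 6.2, n),  # MSPSS total score
    "high_stigma": rng.integers(0, 2, n),        # >= mean stigma score
})
# Synthetic outcome loosely shaped like the reported associations.
lin = -1.2 + 0.7 * df["low_income"] - 0.05 * df["social_support"] + 0.8 * df["high_stigma"]
df["depressed"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

X = sm.add_constant(df[["low_income", "social_support", "high_stigma"]].astype(float))
fit = sm.Logit(df["depressed"], X).fit(disp=False)

# Exponentiate coefficients and CI bounds to obtain ORs with 95% CIs.
out = pd.concat([fit.params, fit.conf_int()], axis=1)
out.columns = ["OR", "2.5%", "97.5%"]
print(np.exp(out).round(2))
```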
Discussion

This study aimed to investigate the prevalence and predictors of depression and anxiety among pulmonary TB patients and their household contacts in Jamnagar, Gujarat, India. The findings revealed a high burden of mental health issues in this population, with a significant proportion of TB patients and their household contacts experiencing depression and anxiety.

Prevalence of depression and anxiety

The prevalence of depression among TB patients (36.0%) and their household contacts (24.8%) observed in this study is consistent with previous reports from similar settings. A prospective study by Ambaw et al. found that the prevalence of probable depression and suicidal ideation among TB patients was 53.9% and 17.4%, respectively [16]. Similarly, a meta-analysis reported a prevalence of depression among TB patients of 45.19% (95% CI: 38.04-52.55) [17]. The high prevalence of depression in our study underscores the need for integrating mental health screening and support services into TB care programs.

The prevalence of anxiety among TB patients (31.3%) and their household contacts (20.6%) in our study is also aligned with previous findings. A systematic review found a pooled prevalence of anxiety among TB patients of 32.54% (95% CI: 24.95-41.18) [18]. The high rates of anxiety observed in our study population highlight the psychological distress and emotional burden associated with TB and its impact on family members.

Factors associated with depression and anxiety

The bivariate and multivariate analyses identified several sociodemographic, clinical, and psychosocial factors associated with depression and anxiety among TB patients and their household contacts. Low household income, low social support, and high perceived stigma emerged as independent predictors of depression in both groups after adjusting for potential confounders. These findings are consistent with previous studies that have reported the adverse impact of poverty, lack of social support, and TB-related stigma on mental health outcomes [19-21].

Additionally, substance abuse was found to be associated with increased odds of depression and anxiety among TB patients in the bivariate analysis, although this association did not remain significant in the multivariate model. This finding aligns with previous research suggesting a bidirectional relationship between substance abuse and mental health disorders, where individuals with mental health issues may resort to substance use as a coping mechanism, and substance abuse can exacerbate or contribute to the development of mental health problems [22].

Implications and recommendations

The high prevalence of depression and anxiety among TB patients and their household contacts, coupled with the identified risk factors, underscores the need for a comprehensive and integrated approach to TB care. Mental health screening and support services should be incorporated into TB treatment programs to address the psychological and emotional challenges faced by patients and their families.

Interventions aimed at reducing poverty, enhancing social support networks, and combating TB-related stigma may also play a crucial role in mitigating the risk of depression and anxiety in this population.

Collaborative efforts involving healthcare providers, mental health professionals, community organizations, and policymakers are necessary to develop and implement effective strategies for addressing the mental health needs of TB patients and their household contacts.

Furthermore, targeted interventions focused on substance abuse prevention and treatment may be beneficial in reducing the burden of mental health disorders among TB patients. Integrating substance abuse counseling and rehabilitation services into TB care programs could potentially improve treatment adherence and overall health outcomes.
Strengths and limitations

One of the strengths of this study is its comprehensive assessment of both TB patients and their household contacts, providing insights into the mental health burden and associated factors within the broader family context. Additionally, the use of standardized and validated instruments for assessing depression, anxiety, social support, and perceived stigma enhances the reliability of the findings.

However, the study has some limitations. The cross-sectional design precludes the establishment of causal relationships between the identified factors and mental health outcomes; longitudinal studies are warranted to better understand the temporal associations and potential bidirectional effects, and to capture the long-term effects of interventions on mental health outcomes in TB patients and their families. The findings may also not be directly applicable to other regions or populations with different sociodemographic and cultural contexts; future multi-center or multi-region studies could help address this limitation and provide more broadly applicable results. Likewise, our findings may not be generalizable to important excluded subgroups (patients with extrapulmonary TB or multi-drug-resistant TB, those under 18 years of age, those with cognitive impairments or unable to provide informed consent, and those with a prior mental health history or concurrent medical conditions); future research should aim to include these populations to provide a more comprehensive understanding of mental health issues across all TB patient groups. The 2-week timeframe of the PHQ-9 may not fully capture the chronic nature of depression in TB patients; future studies could consider additional or alternative measures that assess depressive symptoms over longer periods. Finally, future studies could consider alternative methods of identifying and recruiting household contacts to minimize potential bias or incomplete information.

Conclusions

This study highlights the high prevalence of depression and anxiety among pulmonary TB patients and their household contacts in Jamnagar, Gujarat, India. Low household income, low social support, and high perceived stigma emerged as significant predictors of these mental health conditions. The findings underscore the pressing need for integrating mental health screening and support services into TB care programs, as well as addressing the underlying socioeconomic and psychosocial factors that contribute to the burden of mental health disorders in this population.

Questionnaire

1. Age: _____ years
2. Gender: □ Male □ Female □ Other
3. Marital Status: □ Unmarried □ Married □ Widowed □ Divorced
4. Education Level: □ No formal education □ Primary □ Secondary □ Higher secondary and above
5. Employment Status: □ Employed □ Unemployed
6. Monthly Household Income: □ < 10,000 INR □ 10,000-20,000 INR □ > 20,000 INR
7. Do you use any substances (e.g., alcohol, tobacco, drugs)? □ Yes □ No

Patient Health Questionnaire-9 (PHQ-9)

Over the last 2 weeks, how often have you been bothered by any of the following problems? (0 = Not at all, 1 = Several days, 2 = More than half the days, 3 = Nearly every day)

1. Little interest or pleasure in doing things
2. Feeling down, depressed, or hopeless
3. Trouble falling or staying asleep, or sleeping too much
4. Feeling tired or having little energy
5. Poor appetite or overeating
6. Feeling bad about yourself or that you are a failure or have let yourself or your family down
7. Trouble concentrating on things, such as reading the newspaper or watching television
8. Moving or speaking so slowly that other people could have noticed, or the opposite - being so fidgety or restless that you have been moving around a lot more than usual
9. Thoughts that you would be better off dead or of hurting yourself in some way

Hamilton Anxiety Rating Scale (HAM-A)

For each item, please indicate the extent to which you have experienced the following symptoms over the past week.

TABLE 1: Sociodemographic Characteristics of the Study Participants. INR: Indian rupees.
2024-07-24T15:26:39.554Z
2024-07-01T00:00:00.000
{ "year": 2024, "sha1": "81ddd1d5d15ff655cfca1bd9f4e9f54af7d47da1", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "89ef62355f2cbc740f2bde4207a6f135941ba0b1", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
12432108
pes2o/s2orc
v3-fos-license
The UNITAID Patent Pool Initiative: Bringing Patents Together for the Common Good

Developing and delivering appropriate, affordable, well-adapted medicines for HIV/AIDS remains an urgent challenge: as first-line therapies fail, increasing numbers of people require costly second-line therapy; one-third of ARVs are not available in pediatric formulations; and certain key first- and second-line triple fixed-dose combinations do not exist or sufficient suppliers are lacking. UNITAID aims to help solve these problems through an innovative initiative for the collective management of intellectual property (IP) rights – a patent pool for HIV medicines. The idea behind a patent pool is that patent holders - companies, governments, researchers or universities - voluntarily offer, under certain conditions, the IP related to their inventions to the patent pool. Any company that wants to use the IP to produce or develop medicines can seek a license from the pool against the payment of royalties, and may then produce the medicines for use in developing countries (conditional upon meeting agreed quality standards). The patent pool will be a voluntary mechanism, meaning its success will largely depend on the willingness of pharmaceutical companies to participate and commit their IP to the pool. Generic producers must also be willing to cooperate. The pool has the potential to provide benefits to all.

INTRODUCTION

Two pills a day: one in the morning, one at night. This straightforward treatment regimen for HIV/AIDS is currently the mainstay of treatment programmes in many developing countries. Fixed-dose combinations (FDC) that combine two or more medicines into one pill have simplified AIDS treatment protocols, facilitated patient adherence and reduced the risk of drug resistance. Vigorous generic competition has reduced medicines prices to around US$ 87 for the first-line FDC of stavudine, lamivudine and nevirapine - roughly 1% of the price a decade ago. These factors combined have helped make possible a ten-fold increase in access to antiretroviral (ARV) therapy in the developing world within the span of just six years.

Today, however, the treatment landscape is more complex. While some older ARVs have become increasingly affordable, newer, less toxic products are still too expensive. For example, treating a patient for one year with the most affordable improved first-line regimen for HIV, as recommended by the World Health Organization (WHO) [1], today costs between US$ 613 and 1 033 using originator products - at least eight times as much as the older regimen. With increasing numbers of AIDS patients failing on their first-line therapy, there is also an urgent need to find affordable second-line treatments. In addition, about one-third of ARVs are not available in pediatric formulations, making effective treatment of children an even more difficult task. Finally, certain triple FDCs do not exist or sufficient suppliers are lacking for the improved first-line regimen and for second-line treatment. UNITAID aims to help solve these problems through an innovative initiative for the collective management of intellectual property rights - a patent pool for HIV medicines.

WHAT IS UNITAID?

UNITAID obtains its funds primarily from a solidarity tax on airline tickets established by the participating member countries.
The funds are used to provide regular, sustainable, predictable, additional, long-term financing for drugs and diagnostics for AIDS, tuberculosis (TB) and malaria for use in developing countries. The governments of Brazil, Chile, France, Norway and the United Kingdom launched UNITAID at the UN General Assembly in September 2006. To date, 29 countries have committed to contribute, the majority of which are low- and middle-income. Countries levy a tax on flights leaving from their territories, and can adapt the tax according to their individual circumstances. For example, in France the tax is 1 EUR on short-haul economy-class tickets, and up to 40 EUR on long-haul business-class flights. A full flight from Paris to New York raises enough to cover a year's treatment for 60 HIV-positive children. Other countries are committing multi-year budgetary contributions, as has the Bill and Melinda Gates Foundation. UNITAID is an innovative financing mechanism that draws on both industrialized and developing countries to provide sustainable, long-term funding for health [2].

UNITAID partners with other global health actors to decrease drug prices, support quality, and accelerate and expand delivery. For example, UNITAID's implementing partner, the Clinton HIV/AIDS Initiative, has negotiated with drug producers to reduce prices for pediatric ARVs from $200 to $60 per patient/year and stimulated the development of improved formulations. UNITAID has also supported the supply of second-line ARVs for 140,000 patients in 26 countries in 2008, and negotiated to reduce the prices of second-line drugs by 23-49% [3]. In addition, UNITAID has funded WHO to add 61 medicines to its list of prequalified products. It has supported treatment for pediatric and multi-drug resistant TB and helped to prevent TB drug stockouts by establishing a strategic international stockpile. Finally, UNITAID worked with WHO and UNICEF to speed shipments of artemisinin-based combination therapy (ACT) for malaria to Liberia and Burundi in 2007 in order to avert a stockout. Recently, UNITAID has agreed to be a major funder for the first phase of the rolling out of the Affordable Medicines Facility for Malaria (AMFm), which will be hosted by the Global Fund. The UNITAID Constitution directs the initiative to dedicate at least 85% of its spending to products for low-income countries.

To meet its goal of scaling up access to treatment for AIDS, TB and malaria, UNITAID has committed to a pro-health approach to IP: "Where intellectual property barriers hamper competition and price reductions, it will support the use by countries of compulsory licensing or other flexibilities under the framework of the Doha Declaration on the Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement and Public Health, when applicable [4]." In line with UNITAID's mission and principles, the patent pool initiative aims to provide patients in low- and middle-income countries with increased access to more appropriate and affordable medicines. The UNITAID Executive Board in July 2008 approved the plan in principle to create a patent pool. The patent pool will complement other tools that UNITAID uses to achieve these objectives, such as reliable financing and bulk purchasing power.

HOW WILL THE POOL WORK?

The principle of a patent pool is to facilitate the availability of new technologies by making patents and other forms of intellectual property more readily available to entities other than the patent holder.
The pool is intended to avert a "tragedy of the anti-commons" [5] in which people are unable to make use of knowledge because of the tangle of property rights that can block them. Patent pools have been established in other fields, including for Golden Rice in agriculture, for a vaccine for Severe Acute Respiratory Syndrome (SARS), for aircraft to facilitate US military efforts in the First World War, and in multiple areas of information technology; they are formed to overcome barriers to access and innovation that may arise when relevant patents are owned by many different entities [6,7]. The UNITAID pioneer initiative will lead to the first medicines patent pool.

The idea behind a patent pool is that patent holders - companies, governments, researchers or universities - voluntarily offer, under certain conditions, the intellectual property related to their inventions to the patent pool. Any company that wants to use the intellectual property to produce or develop medicines can seek a license from the pool against the payment of royalties, and may then produce the medicines for use in developing countries as defined by the World Bank. Producers that make use of the patents in the pool would need to meet agreed quality standards.

In the absence of a patent pool, a company might need to obtain licenses from at least three different patent holders to be able to develop, produce, export and sell an ARV FDC. A very concrete example is the need for an FDC of the newly WHO-recommended first-line antiretroviral treatment for HIV/AIDS, which would consist of tenofovir (Gilead), lamivudine (GlaxoSmithKline) and either nevirapine (Boehringer-Ingelheim) or efavirenz (Bristol Myers Squibb). An FDC of three of these drugs currently does not exist or is in limited supply. The patents on every compound in this triple therapy are held by a different company, so a generic company seeking voluntary licenses for the development and production of these FDCs would have to obtain licenses from four different patent holders. However, if these patents could be combined in a patent pool, the generic company would only have to deal with the pool, which would considerably decrease transaction costs and risk. Any qualified company that wanted to use the inventions could get a license from the pool. The patent pool would be a one-stop-shop for all parties involved - it would facilitate the legal and bureaucratic processes involved in obtaining licenses, reduce transaction costs and increase access to the intellectual property needed to make important medicines.

The pool will help to speed up the availability of lower-priced, newer medicines because there will be no need to wait out the patent term (usually about 20 years) - time patients can ill afford to lose. In exchange for the payment of royalties to the patent owners through the pool, any producer would be allowed to manufacture the patented medicines and sell them in countries well before the expiration of the patent term. With licenses covering both low- and middle-income countries, the geographical scope of the market would be attractively large, thereby encouraging multiple generic producers to come forward and access the patents. The greater the competition between producers, the more one can expect the price of medicines to fall.
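The transaction-cost argument can be made concrete with a simple count. Under purely bilateral licensing, every generic producer must negotiate with every relevant patent holder, so the number of negotiations grows multiplicatively; with a pool, each party deals with the pool once. The toy calculation below is our own illustration, not an analysis from UNITAID; apart from the four patent holders of the FDC example above, the numbers are arbitrary.

```python
# Toy model: bilateral licensing vs. a one-stop patent pool.
# With G generic producers and H patent holders relevant to a combination,
# bilateral licensing needs up to G * H negotiations, while a pool needs
# G + H (each holder licenses into the pool once, each producer once).

def negotiations(generics: int, holders: int) -> tuple[int, int]:
    return generics * holders, generics + holders

for g, h in [(1, 4), (5, 4), (10, 4)]:
    bilateral, pooled = negotiations(g, h)
    print(f"{g} generics x {h} holders: bilateral={bilateral}, pooled={pooled}")
#  1 x 4: bilateral=4,  pooled=5   (a pool barely helps a single producer)
#  5 x 4: bilateral=20, pooled=9
# 10 x 4: bilateral=40, pooled=14  (the saving grows with competition)
```

The saving compounds as more producers enter, which is exactly the competitive dynamic the pool is designed to encourage.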
The patent pool will be a voluntary mechanism, meaning its success will largely depend on the willingness of pharmaceutical companies to participate and commit their intellectual property to the pool. Generic producers must also be willing to cooperate. The pool has the potential to provide benefits to all: pharmaceutical companies are rewarded for their investments into research and development (R&D); generic companies are able to access the intellectual property more easily and quickly; and patients in developing countries get faster access to better, more affordable treatments.

IS THE POOL FEASIBLE?

The idea of patent pools to facilitate medical research is gaining ground [8]. In addition to the examples listed above, patent pools have been proposed in the field of genetics, particularly for gene-based diagnostic testing [6]. The United States Patent and Trademark Office has explored the potential utility of patent pools for facilitating innovation in biotechnology, particularly for genome-related research. At a panel on the UNITAID patent pool at the 2008 Mexico City AIDS Conference, representatives of drug companies also expressed their openness to the idea [9]. Pharmaceutical giant GlaxoSmithKline (GSK) has announced that it would make available its neglected disease-related patents through a pool and has called on other companies to follow suit [10]; others would be able to access those patents to develop medicines for the world's Least Developed Countries. Notably, GSK so far has not included HIV-related patents in the pool, which has prompted UNITAID to call on GSK to join the UNITAID initiative [11]. The Indian Pharmaceutical Alliance (IPA) endorsed the UNITAID patent pool initiative in its meeting on 5 September 2008. Finally, patent pools were among the innovative approaches to research & development included in 2008 by the World Health Assembly in its Global Strategy and Plan of Action on Public Health, Innovation and Intellectual Property [12].

The adoption of the Global Strategy and Plan of Action by the WHA signals a normative shift in international expectations regarding how the inter-related issues of trade, health and intellectual property ought to be managed. Specifically, there is widespread recognition that a purely market-based system for health R&D suffers from major shortcomings: first, patent monopolies lead to high prices of essential medicines, thereby restricting access; second, priorities are set by the size of the market, not by health needs, which leads to over-investment in some disease areas and neglect of others; and finally, the proliferation of patent monopolies can retard rather than accelerate innovation. A patent pool is one way of managing IP from a public health perspective to counteract high prices, spur needs-driven research, and facilitate innovation.

The potential and hopes for the UNITAID patent pool are high, but key details will determine whether the pool is a success. In order to achieve both vigorous generic competition and economies of scale in production, the size of the potential market must be sufficiently large. While the default geographical scope of the pool will include all non-high-income countries, companies may specify that certain markets are excluded from the patents that they put into the pool. Companies are urged to allow for sufficient scope in the licenses so that medicines production can be efficient and competitive.
Furthermore, it will be critical to obtain licenses for patents relevant to priority medicines so that optimal FDCs can be developed; for example, if two out of three patent owners agree to allow generic production of a triple FDC, but the third one does not, the entire combination could be undermined. These concerns highlight the importance of voluntary contributions to the patent pool.

Why would pharmaceutical companies participate? First, as noted above, companies will receive royalties for the use of their IP. Second, companies can expect a reputational boost from taking pro-active measures to improve the global access-to-medicines situation. Third, they can reduce both the monetary and political transaction costs associated with negotiating licenses and price reductions on a case-by-case basis. Fourth, they may get access to new markets and increased information about those markets. Finally, they can avert the political costs of IP-related conflicts, particularly the risk of compulsory licensing of their patents.

If a workable arrangement for access to intellectual property through mechanisms such as the patent pool cannot be achieved, both patients and companies stand to lose. Not only will the development of needed FDCs become far more difficult, but the prices of second-line and other new drugs are also likely to remain out of reach. Without access to affordable medicines, governments may choose to take advantage of available flexibilities in the World Trade Organization (WTO) Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) to override patents to meet public health needs [13]. Certainly, doing so would provide countries with the benefit of lower-cost generic alternatives to monopoly-priced drugs. However, since compulsory licenses must be granted country-by-country, at a global level this approach is less likely to achieve economies of scale rapidly, would entail higher transaction costs, imply greater uncertainty for generic producers, and require significant political capital. Current WTO rules also make the export of drugs produced under compulsory license a complex, lengthy and cumbersome process [14]. The time is ripe to find new, reliable, sustainable and predictable ways - such as through patent pools - of ensuring widespread access to new essential medicines [12].

WHAT NEXT?

UNITAID is currently meeting with pharmaceutical companies, research institutions, generic manufacturers and other concerned parties to ensure that the patent pool design addresses their requirements and achieves the desired public health outcomes. The patent pool operational plan will be presented to the UNITAID Executive Board in December 2009. The initial focus of the pool will be on AIDS drugs. It will concentrate on urgently needed products that have not yet been developed, such as FDCs and pediatric ARVs, and on existing products with high prices that may decrease with economies of scale, such as many second-line ARVs. UNITAID has worked with the WHO HIV/AIDS Programme and the Department of Essential Medicines and Pharmaceutical Policies to draw up a list of missing essential ARVs [15]. The next steps will be to establish a licensing agency and to work with the relevant patent owners to agree on the specific licensing terms. Once up and running and proven effective, the patent pool could expand to respond to other diseases and health needs in developing countries.
CONCLUSIONS

Despite recent achievements in scaling up access to ARV treatment, the latest estimates indicate that AIDS treatment still reaches less than one-third of those in need [16]. Developing and delivering appropriate, affordable, well-adapted medicines remains as urgent a challenge as ever. It requires new approaches to managing intellectual property in a manner that will support access to medicines for all. UNITAID extends an invitation to all concerned parties - patients, governments, donor agencies, civil society and generic and patent-owning pharmaceutical companies - to collaborate in establishing a patent pool that will broaden access to the knowledge that can save lives and improve health.
2014-10-01T00:00:00.000Z
2010-01-19T00:00:00.000
{ "year": 2010, "sha1": "15e262dad93d5086e9fcb6a1267dec5189f5c1fe", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc2842943?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "15e262dad93d5086e9fcb6a1267dec5189f5c1fe", "s2fieldsofstudy": [ "Business", "Law", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
209416189
pes2o/s2orc
v3-fos-license
Changing perioperative prophylaxis during antibiotic therapy and iterative debridement for orthopedic infections?

Background
Perioperative antibiotic prophylaxis in non-infected orthopedic surgery is evidence-based, in contrast to prophylaxis during surgery for infection. Epidemiological data are lacking for this particular situation.

Methods and findings
This is a single-center cohort study of iterative surgical site infections (SSIs) in infected orthopedic patients. We included 2480 first episodes of orthopedic infection (median age 56 years; 833 immune-suppressed patients): implant-related infections (n = 648), osteoarticular infections (n = 1153), and soft tissue infections (n = 1327). The median number of debridements was 1 (range, 1-15 interventions). Overall, 1617 infections (65%) were debrided once, compared to 862 cases (35%) that were operated on multiple times. Upon iterative intraoperative tissue sampling, we detected pathogens in 507 cases (507/862; 59%), of which 241 (241/507; 48%) corresponded to the initial species at the first debridement. We witnessed 265 new SSIs (11% of the cohort), which were resistant to the current antibiotic therapy in 174 cases (7% of the cohort). In multivariate analysis, iterative surgical debridements performed under current antibiotic administration were associated with new SSIs (odds ratio 1.6, 95% CI 1.2-2.2), mostly occurring after the 2nd debridement. However, we failed to define an ideal hypothetical prophylaxis during antibiotic therapy to prevent further SSIs.

Conclusions
Selection of new pathogens resistant to ongoing antibiotic therapy occurs frequently during iterative debridement of orthopedic infections, especially after the 2nd debridement. The new pathogens are, however, unpredictable. Prevention, if feasible, probably relies on surgical performance and wise indications for re-debridement rather than on new maximal prophylactic antibiotic coverage in addition to current therapeutic regimens.

Introduction
The ideal regimen of perioperative antibiotic prophylaxis for the prevention of surgical site infections (SSIs) is evidence-based for the majority of clean, non-infected orthopedic procedures [1-4]. However, standard prophylaxis protocols do not recommend specific regimens before re-debridement of patients under already implemented curative antibiotic therapy for an established orthopedic infection (SSI or community-acquired). Scientific literature and epidemiological evaluations are lacking, but clinicians acknowledge that the microbiological spectrum may change during the course. The surgical debridement may itself cause a new SSI, or a former SSI may acquire a new bacterial, postoperative SSI. Practically, when performing a second look during ongoing antibiotic therapy, surgeons continue with the current therapeutic antibiotics or, if the clinical evolution is unsatisfactory, empirically broaden the spectrum after obtaining new intraoperative tissue samples. Alternatively, a few colleagues administer the standard perioperative prophylaxis, independently of the pathogens, simply because they lack specific protocols. New intraoperative cultures during reoperation may remain negative because of the influence of systemic antibiotics [5], but they might also grow previously unidentified pathogens, typically resistant to the current antibiotics. These new pathogens pose a dilemma. If the clinical evolution is satisfactory, physicians might interpret them as selection or contamination, and usually continue with the antibiotic treatment in place.
However, besides a pre-planned re-intervention (in order to reduce the bacterial load surgically), the evolution has mostly been unsatisfactory; hence the indication for re-debridement. Consequently, these new pathogens are interpreted as new SSIs, with broadening of the spectrum and prolongation of total antimicrobial therapy [6]. In this study, we aimed to evaluate this missing epidemiology and specifically link the occurrence of new SSIs to the number of iterative re-debridements performed under current therapeutic antibiotic agents. We wondered whether these patients would profit from extended prophylaxis during re-debridement, and whether the nature of possible secondary SSIs would be predictable.

Methods
The Geneva University Hospitals is a tertiary center for septic orthopedic surgery and associated infectiology [7]. For the current study, we used a composite database covering 2004-2017 (Ethical Committee no. 13-178, 08-057 [8], 08-06 [9], and others), including all adult patients hospitalized for clinically moderate and severe orthopedic infections, including the diabetic foot [10]. We did not collect tissue samples and did not contact the patients specifically for this study, but used their existing anonymized data to compose our database. We excluded cases that were amputated in toto [11], cases with antibiotic-free windows before re-debridement [5], and episodes for which the occurrence of newly identified pathogens did not change the antibiotic regimen; we interpreted these as "contamination" because the newly detected bacteria had no clinical impact on further management. In contrast, pathogens sensitive to the original antibiotic therapy and presumably causative of clinical worsening were identified as new pathogens. We defined infection as intraoperative pus together with clinical signs of infection (redness, warmth, pain). SSI definitions were based on the Centers for Disease Control standards [12]. We collected several microbiological samples from deep intraoperative tissues and ignored the results of superficial specimens or sinus tracts. We regrouped coagulase-negative staphylococci [13], micrococci, corynebacteria and propionibacteria as "skin commensals". We assessed the first five pathogens of semi-quantitative cultures and arbitrarily censored thereafter. The Microbiology Laboratory processed all specimens according to Clinical and Laboratory Standards Institute recommendations [14] before switching to the EUCAST (European Committee on Antimicrobial Susceptibility Testing) criteria in 2014 [15].

Of note, except before the very first debridement for orthopedic infection (when antibiotics were first started after intraoperative microbiological sampling), all study patients were under systemic antibiotic therapy. This therapy was either empirical or targeted to previously identified pathogens. In this manuscript, the term "prophylaxis" refers to a true perioperative antibiotic prophylaxis, which is given only as a single dose and is not continued after debridement, independent of the current systemic antimicrobial therapy. In contrast, a clinical change of the antibiotic regimen during or after debridement is a preemptive, or targeted, therapeutic change that continues for several days or weeks.

Statistical analyses
The primary objectives of this study were to determine possible mismatches between current curative antibiotic therapies and newly identified bacterial superinfections after debridement, and to evaluate the need for a prophylactic antibiotic regimen in addition to the ongoing curative antibiotic treatment.
We performed group comparisons using the Pearson χ2 test or the Wilcoxon rank-sum test. An unmatched multivariate logistic regression analysis determined associations with the outcome "SSI resistant to antibiotic therapy". We introduced independent variables from the univariate analysis stepwise into the multivariate analysis, except for the surgical and antibiotic-related parameters, which we forced into the final model. We computed the variables "total number of debridements", "number of debridements before new SSI", and "time interval between consecutive debridements" as continuous and categorical variables. The cut-off values of the strata were chosen so that the middle stratum was positioned around the median value of the variable. We further plotted new SSIs according to the number of prior debridements, and stratified new SSIs according to key pathogen groups. We used STATA software (9.0, STATA™, USA). P values ≤0.05 (two-tailed) were considered significant.

Iterative surgeries under curative antibiotic therapy
All patients were under systemic, curative antibiotic therapy for bacterial infection. We noted 867 different regimens prior to intraoperative samplings, differing in administration route, changes during the course, combination therapies, and drug choices. An allocation of these 867 individual prior antibiotic regimens to the subsequent findings was not feasible in detail (see Limitations).

Overall, 1617 episodes (65%) were debrided once, compared to 862 cases with multiple debridements (35%), of which 510 were debrided a second time and 195 a third time. Formally, the median number of surgical debridements for infection was 1 (total range, 1-15 interventions; interquartile range, 1-2 interventions). The median delay between two consecutive interventions was 16 days. In 420 re-debridements (420/862; 49%), the current antimicrobial agent was continued without additional perioperative prophylaxis. In 90 cases, surgeons or anesthesiologists administered a supplementary standard prophylaxis with a single intravenous dose of cefuroxime 1.5 g [1-3,20] in addition to the ongoing therapeutic antibiotics. Clinicians avoided administering broad-spectrum perioperative prophylaxis and avoided topical antibiotic prophylaxis regimens. Table 1 compares the study population with single vs. multiple debridements. In this comparison, patients with bone and joint infections, implant infections, Gram-negative infections and infections due to skin commensals were operated on significantly more often than others, whereas sex, age, and immune-suppression did not influence the risk of reoperation.

New pathogens and new susceptibility profiles according to the number of iterative surgeries
Among all iterative intraoperative samples taken during re-debridement, 507 were positive (507/862; 59%), but only 241 (241/507; 48%) returned a species already present at the index debridement. We thus witnessed 265 new pathogens (265/507; 52%) in the same patients. These new selections were Gram-positive in 192 cases and Gram-negative in 109 episodes and were interpreted as (new) SSIs because of unsatisfactory evolution. As they were resistant to the current antibiotics in 174 cases (174/507; 34%), clinicians broadened the therapeutic antimicrobial spectrum and prolonged therapy. In contrast, 333 new pathogens were susceptible to the prior antibiotics. To cite an example, the overall proportion of methicillin-susceptible S. aureus among the causative pathogens fell from 38% to 11% and that of streptococci from 16% to 9% [21], while the proportion of methicillin-resistant S.
aureus [8], enterococci [22], and non-fermenting rods [19] rose significantly (Fig 1). Stratified by the groups of bone and joint infections, soft tissue infections, and diabetic foot infections, the overall proportions of resistant new SSIs were 13% (145/1153), 9% (120/1327), and 14% (30/213), respectively.

Table 2 shows clinical variables related to new antibiotic-resistant SSIs. The number of prior surgical debridements (all under current systemic antibiotic therapy) was significantly associated with the occurrence of new pathogens, independent of the initial pathogens. These new resistant SSIs were unpredictable regarding their microbiology and distributed across the entire Gram-positive and Gram-negative spectrum (Table 2; Fig 2A), with, however, a tendency towards more Gram-negatives with increasing numbers of surgical interventions, higher age, and a shorter delay between consecutive debridements (Table 3). Patients' sex, immune-suppression and localization of the orthopedic infection did not influence the epidemiology. Table 4 summarizes these new pathogens. Many are naturally resistant to usual, narrow-spectrum β-lactam antibiotics (e.g., penicillins and 1st- or 2nd-generation cephalosporins). Of note, during the study period there was no specific outbreak in the septic orthopedic ward, with the exception of five cases of vancomycin-resistant enterococci (VRE). The endemicity of methicillin-resistant S. aureus declined throughout the study period [8], while that of extended-spectrum β-lactamase (ESBL) producers is rising [23]. Regarding timing, new SSIs mostly peaked after the 2nd and 3rd debridements. Indeed, the microbiology at the first re-debridement still reveals two-thirds known pathogens and one-third new constellations, but already the second and third re-debridements switch to one-third known pathogens and two-thirds new ones (Fig 3).

Multivariate adjustment
In view of the considerable case mix, we adjusted with logistic regression analysis. We confirmed the association of iterative surgeries under current systemic antibiotic therapy with the occurrence of new antibiotic-resistant SSIs (odds ratio 1.6, 95% confidence interval 1.2-2.2) (Table 5). Of note, since all patients undergoing iterative debridement were already under systemic antibiotic administration, we could not determine the impact of iterative surgeries alone (without concomitant antibiotic therapy) on the occurrence of these new SSIs. Already the second debridement under antibiotic treatment substantially increased the odds ratio of new SSIs, to twelve. In contrast, sex, age, and immune-suppression were unrelated.

Discussion
This study provides insights into the complex epidemiology of iterative SSIs during multiple debridements and current antibiotic therapy for orthopedic infections. It is an original work, with a large number of patients included in an analysis from a retrospective database. Among 2480 adult patients, we re-debrided a third, and a quarter revealed new pathogens. In total, around ten percent of all episodes had new bacterial SSIs, with resistance to the ongoing antibiotic agent in seven percent. From a clinical perspective, among the 862 patients who required a re-debridement, 507 (59%) revealed a positive culture. In 265 (52%) of these, the isolated microorganisms were different from those at the prior debridement. This means that of all episodes that required re-debridement, 30.7% (265) had a different pathogen. This is a major problem, particularly considering that the new microorganisms were often more resistant.
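Because several denominators are in play (the full cohort, the re-debrided subgroup, and the culture-positive subgroup), the headline rates are easy to misread. The short check below recomputes them from the counts reported above; it is our arithmetic restatement, not code from the study.

```python
# Recomputing the headline proportions from the reported counts.
cohort, redebrided = 2480, 862
culture_positive, same_species = 507, 241
new_pathogens, resistant = 265, 174

checks = {
    "re-debrided / cohort": redebrided / cohort,                      # ~35%
    "culture-positive / re-debrided": culture_positive / redebrided,  # ~59%
    "same species / culture-positive": same_species / culture_positive,   # ~48%
    "new pathogens / culture-positive": new_pathogens / culture_positive, # ~52%
    "new pathogens / re-debrided": new_pathogens / redebrided,        # ~31%
    "new SSIs / cohort": new_pathogens / cohort,                      # ~11%
    "resistant new SSIs / cohort": resistant / cohort,                # ~7%
}
for label, value in checks.items():
    print(f"{label}: {value:.1%}")
```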
Since we only included relevant cases with immediate adaptation of the antibiotic therapy, we think that our interpretation of new SSIs is genuine and that we are not facing mere selection or contamination. We think that it is nearly impossible to study our hypotheses in any other, more controlled prospective way. Moreover, the majority of the new microorganisms are undisputed pathogens of orthopedic SSIs [20] in Switzerland.

The available literature is very sparse. We identified only a single Spanish article with a similar study question, but in a very different setting. Ballus et al. published the epidemiology of surgical site peritonitis in an intensive care unit with broad-spectrum antibiotic use [6]. They prospectively observed 162 adult patients. Microorganisms isolated from tertiary peritonitis SSIs (160 cases, after combined surgical and medical treatment of previous episodes) revealed higher antibiotic resistance (65%) than in primary peritonitis. Every clinician would recognize this experience, which parallels our findings. Unfortunately, the authors lacked specific suggestions in terms of prevention of tertiary peritonitis, let alone concerning its optimal perioperative prophylaxis [6].

The legitimate question is how many of these new SSI pathogens could be prevented by a modified or additional single-dose prophylaxis upon iterative debridement. A new SSI could be the consequence of misidentification during the first surgery, of new contamination during a previous surgery for infection, or of superinfection of the wound on the ward despite current therapeutic antibiotic administration. Only the first two options are conceivably preventable; the third is not modifiable by any additional antibiotic administration. Clinically, the observed incidence of 7-11% new SSIs would warrant adaptation of perioperative prophylaxis for the first two conceptual situations. Standard second-generation cephalosporins or vancomycin [1-3] lack the necessary coverage in view of the random nature of the new pathogens. Unfortunately, we equally failed to identify a specific microbiological pattern from which to tailor a specific prophylaxis regimen. New postoperative superinfections appear Gram-positive, Gram-negative or both and include dozens of pathogen combinations, independently of the initial pathogens, initial antimicrobial therapies, type of orthopedic infection, or patient characteristics. An optimal total prophylactic coverage would hence theoretically consist of a combination of glycopeptides with aminoglycosides, or glycopeptides with carbapenems, piperacillin-tazobactam or similar spectra. Also, in selected cases, a partial supplementary prophylaxis may be added; for example, in patients treated with narrow-spectrum penicillin for streptococcal infections and undergoing multiple debridements, the addition of vancomycin might sometimes be indicated, but this is far from maximal coverage and its benefit still needs to be proven. However, unless future clinical trials are published, we advocate against the introduction of such near-maximal prophylaxis for the following reasons. First, perioperative prophylaxis is only one cornerstone of SSI prevention. It must be embedded in a whole bundle of measures [1-3]. Alone, it reduces absolute SSI risk by only a few percent [1].
Second, enhanced antibiotic prophylaxis lacks definitive proof of benefit but might be associated with unnecessary adverse events (even as a single dose [24], or when administered for three days, as in open fractures [4]). Several author groups have proposed different enhancement strategies for non-infected orthopedic surgery: combination with local prophylaxis (e.g., local vancomycin in spine surgery [25]), double prophylaxis against Gram-negative [26] and Gram-positive [27] pathogens, or universal glycopeptide prophylaxis [28]. The majority of these enhancements failed to reduce SSI risk. Exceptions remain rare, very specific, and often not reproducible by other research groups. At the same time, numerous reports have documented transient kidney injury from aminoglycoside [27] or combined vancomycin [28] prophylaxis in orthopedic surgery. Walker et al. reported a 63% decrease in postoperative renal insufficiency following a change in prophylaxis (from flucloxacillin and gentamicin to amoxicillin/clavulanic acid) [29]. Moreover, broadened prophylaxis, if implemented over a long period, could shift the endemicity of septic orthopedic wards towards more resistant and Gram-negative pathogens [23].

Besides being retrospective, our study has five major limitations. First, we do not know the acquisition route or the exact timing of the first appearance of the new SSI pathogens. We do not know whether they colonized the patient from the start, were present in the initial wound and subsequently selected by inactive antibiotics, or were true new acquisitions. Second, the consequences drawn from microbiological findings are arbitrary by nature. Infectious diseases physicians are often absent during surgery [7]. They have to decide on antibiotic changes, but depend on the microbiological laboratory and especially on the surgeons for the interpretation of the clinical and microbiological findings (e.g., hematoma/seroma versus pus). Likewise, even if some new pathogens are clearly pathogenic, others might not be. Thus, in polymicrobial SSIs, it is virtually impossible to judge which of the pathogens is causative and which is contamination. Moreover, new bacteria can also represent a true new SSI that was simply not severe enough to worsen the clinical evolution; when the clinical evolution is good, it is impossible to distinguish between colonization and a new clinical infection. Third, although we analyzed many confounders, some important variables remained unrecorded, such as hand hygiene compliance [1], postoperative non-infectious wound complications [30], or the use of negative-pressure vacuum therapy. Likewise, all patients undergoing iterative surgeries for infection were already under systemic antibiotic therapy during re-debridement; hence, we cannot comment on the possibility of microbiological changes during iterative debridement in the absence of antibiotic treatment. Fourth, our study population included 83 different antibiotic regimens and an occurrence of 273 new pathogens, with a further 867 antibiotic regimen changes throughout the therapeutic course. Such mixed constellations are too detailed to be analyzed or displayed individually; we had to resort to group analysis. Fifth, in line with usual clinical practice, we limited the assessment of pathogens to the five most dominant ones.
It is clear that a full microbiological work-up and a prolongation of the incubation time beyond the standard five days could alter the overall epidemiological results. Conclusions According to our cohort of 2480 adult patients with orthopedic infections, new SSIs occur at a risk of roughly ten percent during iterative debridement and concomitant antibiotic therapy. They already predominate after the second debridement and are often resistant to the administered antibiotics. Their microbial etiology seems unpredictable. We argue nevertheless against a total prophylactic coverage without prior prospective trials, due to potential adverse effects, and call for strict adherence to general infection control policies, evidence-based indications for surgical re-debridement and skilled surgical techniques [1][2][3][4]. The role of partial and selected enhancements of prophylaxis needs to be elucidated separately. Supporting information S1 File. Supporting Information files are uploaded. (XLSX)
The everlasting hunt for new ice phases Water ice exists in hugely different environments, artificial or naturally occurring ones across the universe. The phase diagram of crystalline phases of ice is still under construction: a high-pressure phase, ice XIX, has just been reported, but its structure remains ambiguous. The ordering from ice Ih to XI, from V to XIII, and from XII to XIV needs to be induced through basic or acidic doping, as the ordering temperatures are too low to observe spontaneous ordering on laboratory time scales. However, in the case of the ordering of ice VI to XV, more complex kinetics have been observed. The ordering temperature tails from a relatively high 129 K down to 100 K, and the process depends on the applied pressure. Also, the experimentally found (partial) ordering of hydrogen differs from what is predicted by computer simulation 18. Back in 2018, Gasser et al. reported on a 'β-ice XV' phase 19, obtained, like the hydrogen-ordered phase ice XV, upon cooling ice VI. At that time, experimental evidence was insufficient to attribute a Roman numeral to this phase (as is the convention for crystalline phases of ice) because its structure remained to be elucidated. Now, in a recent paper, Yamane et al. 9 investigate this high-pressure phase and describe it as a hydrogen-ordered phase of ice VI, with a different hydrogen ordering as compared to ice XV, and assign to it a new Roman numeral, XIX. While they can show convincingly that ice XIX is different from ice XV and ice VI, by means of dielectric measurements and in situ neutron powder diffraction, its crystal structure remains ambiguous: several space groups remain possible according to the authors' data on the deuterated in situ high-pressure samples of ice XIX. Ice XIX definitely forms a √2 × √2 × 1 supercell with twice the unit cell volume of ice VI (and ice XV). Neither orthorhombic nor tetragonal unit cells can be ruled out convincingly against each other. As we will learn later, it is not even clear whether it is the hydrogen ordering that is different compared to ice VI and ice XV, although this remains the working hypothesis (and the selling argument, or headline, of their manuscript). The dielectric measurements indicate an ordering of hydrogen as compared to ice VI, but whether the topology of this order is different from ice XV, as the title suggests, may be contested. At the same time, Gasser et al. have investigated the same phase of ice and published a paper on the structure of ice XIX 10, again announced as a second (different from ice XV) hydrogen-ordered polymorph of ice VI. They performed ex situ high-resolution neutron powder diffraction on recovered samples, providing better quality data as compared to in situ high-pressure data. The work also sheds more light on the phase boundaries, such as the order-order transition between ice XV and ice XIX. Yet, by a crystallographically different approach (filtering of eligible subgroups by compatibility with the oxygen lattice), an ambiguity between different possible space groups remains. To reduce the 109 possible solutions in five space groups to a smaller set of likely solutions, they referred to the preprint of Yamane et al. and limited the quantitative tests to Yamane's structure models in the three space groups both groups found to be eligible. Again, even a distinction between orthorhombic and tetragonal solutions remains ambiguous. The oxygen topology is found to be the same in ice VI, XV and XIX.
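A reader's note, not part of any of the three papers' derivations: the factor of two in the cell volume follows directly from the supercell metric. If the tetragonal ice VI cell has lattice parameters a and c, the √2 × √2 × 1 supercell (rotated by 45° about c in the basal plane) has

V_super = (√2 a) × (√2 a) × c = 2 a²c = 2 V_VI,

which is why the additional Bragg peaks index on a doubled cell while the c axis remains unchanged.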
The hydrogen order is only partial in both ice XV and ice XIX. And in both cases, the ordering is found to be anti-ferroelectric (in contrast to initial suspicions of finding a ferroelectric ordering like in ice XI). Two space groups are most likely, tetragonal P-4 or orthorhombic Pcc2. It would be easy to distinguish the solutions, as Pcc2 should exhibit pyroelectricity and non-polar P-4 piezoelectricity, but for this one would need single crystals, which are very likely inaccessible. It is not surprising that the phase, which had previously been called β-ice XV, and which is found to be different from ice XV and therefore called ice XIX, is suspected to show hydrogen ordering, and a different one as compared to ice XV. How H atoms order, and whether an ordered structure is ferroelectric or anti-ferroelectric, remains difficult to predict, a fact which nourished the suspicion that more than one H-ordering could be possible. When, in the ordering of ice VI to ice XV, calorimetry suggested a second, underlying process, and that both ordering processes would have entropy changes corresponding to only partial H-ordering, a different hydrogen ordering became the likely candidate. Yet, this expectation biases the scientist's look at experimental results and may limit the possible interpretations of the experiment. In addition, as for the formation of other hydrogen-ordered phases, certain doping strategies were needed to obtain the new ice phase. And here we eventually run into some danger: as every group has its own recipe to obtain ice XIX, can we really be sure to be looking at the same phase in all three papers? The third paper on ice XIX 8, published in Nature Communications only a few weeks later than the other two, comes from a team, Salzmann et al. 16,17, which has quite some experience in the hunt for hydrogen-ordered ice phases. The structures of ice XIII, XIV and XV had already been determined by Salzmann et al. As in the previous investigations, upon cooling a doped sample at higher pressure, additional Bragg peaks appear which imply an increase of the ice VI unit cell to a √2 × √2 × 1 supercell. However, the authors consider local distortions of the two individual networks of the 'self-clathrate' structure of ice VI. They consider all possible permutations of, respectively, tilting, shearing or 'squishing' the hexameric clusters of water molecules, the characteristic building unit in the ice VI structure, which leads to, respectively, three, two or again three different space groups, all subgroups of the space group symmetry of ice VI and compatible with the supercell. Only one space group, Pbcn, allows for the observed additional Bragg peaks (and all forms of distortion). The space group allows a good fit of the diffraction data, together with continued total hydrogen disorder, in contradiction to the two previous papers. The lower symmetry with respect to that of ice VI justifies the assignment of the Roman numeral 'XIX' to this phase, but concerning the hydrogen ordering, ice XIX remains a deep glassy state of ice VI with pressure-induced distortions, as already suspected earlier by Rosu-Finsen et al. 20. Although some weak hydrogen ordering cannot be excluded, it is not the main structural feature distinguishing it from ice VI (and ice XV). However, slightly different doping strategies have been applied, so are we really looking at the same ice phase in all cases? Are the conclusions of Salzmann et al. transferable to the samples of the other groups? Surely, we have not seen the last paper on this new crystalline polymorph of ice yet.
Additional proof is still needed to support whichever of the proposed solutions ultimately prevails. If the last solution offered, the deep glassy state, is the right one, one may wonder whether a hydrogen-ordered phase of this distorted structure also exists. Also, might a second hydrogen-ordered phase, as claimed by Yamane et al. and Gasser et al. a few weeks ago, exist for any hydrogen-ordered phase? A remarkable common feature of all three papers is that they are based on neutron powder diffraction (at different facilities, J-PARC and ISIS; with different instruments at different resolutions, HRPD and PEARL; and with different experimental approaches, ex situ and in situ). I may conclude with this admittedly biased observation: the role of neutron diffraction in the investigation of ice structures remains crucial.
Scent leaf (Ocimum gratissimum) meal improved the growth performance and lowered blood cholesterol level of cockerels This study was conducted to evaluate the effect of scent leaf meal on the growth performance, blood and serum parameters of growing cockerels. A total of ninety-six (96) Isa-brown day-old cockerel chicks were used in a completely randomized design experiment. The birds were randomly allotted to four dietary treatments consisting of basal diets supplemented with scent leaf meal (SLM) at the rate of 0% (control), 1%, 2% and 3%, representing T1, T2, T3 and T4 respectively. The experiment lasted for 15 weeks, including a one-week acclimatization period. Results showed that scent leaf meal improved feed intake only during the grower phase, but significantly improved body weight, FCR and PI at both chick and grower phases. Treatment had no effect on hematologic parameters, but significantly reduced serum cholesterol levels in a dose-dependent manner. Other serum parameters were not affected by SLM treatment. Overall, inclusion of SLM in the diets of cockerels significantly improved growth performance of the birds, and dose-dependently reduced the blood cholesterol level without any deleterious effect on blood and serum parameters. Introduction Poultry production is one of the major contributors to the growth of agricultural GDP in Nigeria, with the sector estimated at over ₦80 billion and holding up to 165 million birds as of 2013 (USDA, 2013; Oyediji, 2015) [21,19]. This is in addition to the role of poultry products in meeting the protein needs of the populace. In the poultry industry, cocks are important for egg fertilization and are also a good source of poultry meat, which is often preferred to that of other breeds because of its good taste. The development of the sector is, however, constrained by many factors including disease, the outbreak of which sometimes causes serious setbacks in the enterprise. Over the years, synthetic antibiotics have been the mainstay of the industry, and have been very helpful in controlling infections and promoting growth of birds. However, the resistance of microbial pathogens to these antibiotics as a result of continuous usage has recently drawn global attention and prompted a search for alternatives. Of greater concern is the trend of antimicrobial resistance of human pathogens to synthetic drugs arising from residual effects of consuming products from birds raised on synthetic antibiotics. Consequently, various non-synthetic substances, including enzymes, inorganic acids and herbs, have been explored for their medicinal properties (Adams, 2005) [2]. Scent leaf, which is widely grown as a perennial herb in tropical Africa and rich in phytochemicals such as alkaloids, tannins, phytates, flavonoids, oligosaccharides, thymol and saponin, has been studied for its antimicrobial and antioxidant properties. For instance, Ogunleye (2019) [17] reported the efficacy of scent leaf in improving feed utilization and reducing mortality due to coccidiosis in broilers. Odoemelam et al. (2018) [15] similarly compared the effects of scent leaf meal and synthetic antibiotics on the performance of broiler finisher birds, and concluded that scent leaf meal can effectively replace synthetic antibiotics since the effects are similar. Olumide et al. (2018) [18] further observed that scent leaf meal fed to broilers at 400 g/100 kg improved the livability of the birds.
There is, however, limited information on the effect of the herb on the productive performance and hematologic parameters of cockerels. Cocks play a significant role in egg fertilization in poultry production, and thus efforts to improve their production performance cannot be overemphasized. This study aimed at evaluating the effect of scent leaf meal on the growth performance and blood parameters of growing cockerels. Experimental site The experiment was conducted in the livestock section of the Kabba College of Agriculture, Kabba, Kogi State, Nigeria. Kabba is located in the Southern guinea savannah ecological zone of Nigeria, between latitude 07°05′N and longitude 6°08′E, with an elevation of 424 m above sea level. The mean annual rainfall is about 1100 mm per annum with an annual temperature range of 18-32 °C. Test material and management of experimental birds To obtain the scent leaf meal used for the experiment, fresh scent leaf (Ocimum gratissimum) leaves were harvested from the horticultural unit of the College. The leaves were cleaned, air-dried at 25 °C and ground into powder with a hammer mill. The proximate and phytochemical compositions of the scent leaf are shown in Table 1 below. A total of ninety-six (96) Isa-brown day-old cockerel chicks were used for the experiment. The chicks, which were housed in deep litter pens, were provided with adequate lighting, as well as feed and water ad libitum. The housing and management of experimental birds were in accordance with the guidelines of Ahmadu Bello University on animal research. The birds were randomly allotted to four treatments in a completely randomized design experiment. The treatments were scent leaf meal (SLM) added to commercial feed (Vital®) as basal diet at the rate of 0%, 1%, 2% and 3%, representing T1, T2, T3 and T4 respectively. The birds were fed chick mash from day one to 8 weeks, and later fed grower mash from 8-15 weeks of age. Each treatment was replicated four times, with six birds in a pen as one replicate. The experiment lasted for 15 weeks, including a one-week acclimatization period. Data collection The initial body weights of the chicks were recorded on day 8, after which body weight was recorded weekly, whereas feed intake was recorded daily. Daily feed intake was, however, aggregated weekly for data analysis. Feed conversion ratio was calculated as the ratio of feed intake to body weight gain. Performance index (PI) of the birds was calculated as the ratio of live body weight to feed conversion ratio (North, 1984) [10], where PI = performance index and FCR = feed conversion ratio. Performance parameters were measured for both the chick and the grower phases, whereas serum and haematological assessments were done at the end of the experiment. At the 15th week, blood samples were collected from 3 randomly selected birds per replicate in each treatment group (12 birds per treatment), via the brachial vein, into heparinized bottles for haematological assessment, and non-heparinized bottles for serum evaluation. The samples were immediately transported in insulated boxes to the laboratory for analysis. The serum separation was done using standard protocols (Elliot et al., 2008) [7] and the obtained serum samples were stored at −20 °C for biochemical assays. All blood and serum analyses were done at the Department of Animal Production and Health, Federal University of Technology, Akure.
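To make the derived quantities used in this and the following paragraph concrete, here is a minimal Python sketch: FCR and PI as defined above (the PI form, live weight divided by FCR, is our reading of the North (1984) index as stated in the text), plus the erythrocyte indices per the Jain (1986) formulas quoted below. All example values are illustrative and are not study data.

```python
def feed_conversion_ratio(feed_intake_g, weight_gain_g):
    """FCR: feed consumed per unit of body weight gained (lower is better)."""
    return feed_intake_g / weight_gain_g

def performance_index(live_weight_kg, fcr):
    """Performance index (North, 1984), read here as live weight / FCR."""
    return live_weight_kg / fcr

def red_cell_indices(pcv_pct, hb_g_dl, rbc_millions_ul):
    """MCV, MCH and MCHC from PCV (%), haemoglobin (g/dL) and RBC (10^6/uL),
    following the Jain (1986) formulas given in the Methods."""
    mcv = pcv_pct * 10 / rbc_millions_ul   # fL
    mch = hb_g_dl * 10 / rbc_millions_ul   # pg/cell
    mchc = hb_g_dl * 100 / pcv_pct         # g/dL
    return mcv, mch, mchc

# Illustrative bird: 3200 g of feed for 950 g of gain, final weight 1.10 kg
fcr = feed_conversion_ratio(3200, 950)     # ~3.37
print(round(performance_index(1.10, fcr), 3))
print(red_cell_indices(30.0, 10.0, 2.5))   # -> (120.0, 40.0, 33.33...)
```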
The packed cell volume (PCV) was measured using a hematocrit reader, whereas total leucocyte counts, erythrocyte counts and haemoglobin concentrations were measured with an automated cell counter. The mean corpuscular volume (MCV), mean corpuscular haemoglobin (MCH) and mean corpuscular haemoglobin concentration (MCHC) were calculated using the formulas of Jain (1986) [8] as follows: MCV (fL) = PCV (%) × 10/RBC count (millions/µl); MCH (pg/cell) = haemoglobin (g/100 ml) × 10/RBC count (millions/µl); MCHC (g/dl) = haemoglobin (g/100 ml) × 100/PCV (%). Serum parameters such as aspartate transaminase, alanine transaminase, alkaline phosphatase, total protein, albumin, triglyceride, cholesterol and creatinine were measured using standard kits and an automatic analyzer (Humalyzer 300, Merck®, Germany). All tests were performed according to the manufacturer's instructions. Globulin was obtained by calculating the difference between total protein and albumin. Statistical analysis Performance data were subjected to repeated measures ANOVA, whereas blood and serum parameters were analysed by single factor ANOVA, using SAS software (SAS, 2001) [20], as shown in the following models: Yijk = µ + αi + βk + (αβ)ik + εijk, where Yijk = observation ijk; µ = overall mean; αi = the effect of treatment i; βk = the effect of period k; (αβ)ik = the effect of the interaction between treatment i and period k; and εijk = random error with mean 0 and variance σ², the variance between measurements within pens. Yij = µ + αi + Eij, where Yij = observed response; µ = overall mean; αi = treatment effect; Eij = random error. Treatment differences were considered significant at P < 0.05. Tukey's procedure was used to separate treatment mean differences for each response variable. Effect of SLM on growth performance of experimental birds The effect of scent leaf meal on the production performance of cockerel chicks is shown in Table 2. Birds in the control group, and those treated with 1% scent leaf meal (SLM), had the highest feed consumption rate during weeks 2, 3 and 4, but dropped below birds treated with 2% and 3% SLM in week 5. The feed consumption of the birds, however, improved steadily between weeks 5 and 6 of the experiment. The birds treated with 2% and 3% SLM had a near-steady improvement in feed consumption from the 2nd to the 6th week of the experiment. Overall, birds treated with 1% SLM had the highest cumulative feed consumption per bird during the chick phase, although this was not significant. However, during 8 to 15 weeks of age (grower phase), feed consumption increased linearly, though non-significantly, as the level of SLM in the diet increased (Table 3). The reduced feed intake of chicks fed high SLM-supplemented diets may therefore have resulted from the reduced acceptability of the feed, perhaps due to the presence of antinutritional factors such as saponin (9.8%) in SLM. As the birds grow, their ability to handle such factors may increase, hence the improvement in feed intake during the grower phase. The result of this study is in agreement with the observation of Nweze and Ekwe (2012) [12], who reported significant differences in feed intake of broiler finisher birds drenched with scent leaf water extract. Nworgu (2016) [13] and Ogunleye (2019) [17] similarly reported progressive reductions in total feed intake of pullet chicks and broilers, respectively, as the level of SLM increased in their diets. In contradiction, however, Odoemelam et al. (2013) [14] observed no significant difference in the feed intake of broilers fed scent leaf-supplemented diets. Adebayo et al. (2019) [3] also noted no effect of scent leaf, fed at the rate of 10 mg/kg, on feed intake of West African dwarf goats.
The difference in observations could be due to variations in feed quality, the form of test material used, or the age/species of the experimental subjects. Birds fed SLM-supplemented diets had significantly higher body weights, better efficiency of feed utilization and an improved performance index than birds in the control group at both the chick and grower phases (Tables 2 and 3). Furthermore, the improvements in performance parameters were SLM dose-dependent, increasing as the SLM in the diets increased, particularly during the chick phase. The phytochemical properties and the mineral composition of SLM may have been responsible for the enhanced feed utilization and improved performance of the birds. For instance, saponins, which are abundant in the SLM used in the current study, are natural products that have been used to improve the penetration of micromolecules such as proteins through cell membranes (Alexander, 2016) [4]. Furthermore, alkaloids and saponins have antibiotic potential that may have acted as a growth promoter and thus could be responsible for the improvement in growth performance observed in this study. Anugom and Ofongo (2019) [6] similarly noted enhanced final live weight, weight gain and FCR of broiler chickens administered aqueous extract of scent leaf. In catfish, scent leaf meal fed in the diet at the rate of 12 g/kg also enhanced performance and feed utilization (Abdel-Tawwab et al., 2018) [1]. Ogbu and Amafuele (2015) [16] reported better performance of broiler chicks fed scent leaf, whereas Nte et al. (2016) [11] noted improved FCR in broiler chicks administered aqueous extract of scent leaf at the rate of 100 ml/L. Odoemelam et al. (2013) [14] and Adebayo et al. (2019) [3], however, did not observe any difference in weight gain of broilers and West African dwarf goats fed SLM-supplemented diets, respectively. The fact that the latter authors did not observe any difference in feed intake, as aforementioned, could be the reason for the lack of difference in weight gain of the experimental animals. Effect of SLM on hematological parameters of experimental birds Hematological and biochemical indices are significant indicators of the health and nutritional status of an animal. Effects of dietary SLM on haematological parameters of cockerels are shown in Table 4. Except for the mean corpuscular haemoglobin (MCH), no significant effect of treatment was observed on the blood characteristics of treated birds. The MCH and MCV of birds fed SLM-supplemented diets were higher than the control in a non-linear manner, although the mean differences in the latter were not significant. In addition, other blood parameters such as PCV, RBC and WBC of SLM-treated birds were non-significantly lower than the control, although they were all within the normal range for chickens. This observation agrees with the reports of Ogbu and Amafuele (2015) [16] and Ndubuisi-Ogbonna et al. (2016) [9], who also did not observe any significant effect of scent leaf on blood parameters of broilers. Contrariwise, Olumide et al. (2018) [18] noted a steady improvement in percentage packed cell volume, red blood cells, white blood cells and haemoglobin when broiler diets were supplemented with scent leaf at 100, 200, 300 or 400 g/100 kg feed. The difference in the results obtained could be due to the level of inclusion of SLM in the diets.
The high inclusion rate of SLM in the current study may have slightly suppressed the activity of the haemopoietic system, as evidenced by the increased levels of MCH and MCV, which are common signs of macrocytic anemia. Scent leaf possesses antibacterial properties, and some antibiotics are known to suppress the activity of the haemopoietic system, leading to a drop in haematological parameters (Al-Mayah et al., 2005; Odoemelam et al., 2018) [5,15]. Effect of SLM on the biochemical parameters of experimental birds Furthermore, SLM significantly reduced the blood cholesterol level of treated birds in a dose-dependent manner (Table 5), although treatment had no significant effect on other serum parameters. The reduced cholesterol level observed in the current study accords with the report of Olumide et al. (2018) [18]. Nte et al. (2016) [11] similarly reported a significant reduction in abdominal fat of broiler chicks when an aqueous extract of scent leaf was added to their drinking water. Nworgu (2016) [13] indeed suggested that scent leaf could be a potent hypolipidaemic agent. The results of this study therefore indicate that scent leaf is not likely to contribute to any disease associated with hyperlipidemia in chickens. Overall, the addition of scent leaf meal to the diet of growing cockerels did not have any deleterious effect on the haematologic and serum parameters of the birds. Conclusion Inclusion of SLM in the diets of cockerels significantly improved growth performance of the birds at both the chick and grower phases, and also reduced the blood cholesterol level in a dose-dependent manner without any deleterious effect on blood and serum parameters. It is, however, suggested that the herb be used with caution, as it has a tendency to lower some blood parameters, as evidenced by the results of this study.
Short-chain fatty acid producers in the gut are associated with pediatric multiple sclerosis onset Abstract Objective The relationship between multiple sclerosis and the gut microbiome has been supported by animal models in which commensal microbes are required for the development of experimental autoimmune encephalomyelitis. However, observational study findings in humans have only occasionally converged when comparing multiple sclerosis cases and controls, which may in part reflect confounding by comorbidities and disease duration. The study of the microbiome in pediatric-onset multiple sclerosis offers unique opportunities as it is closer to biological disease onset and minimizes confounding by comorbidities and environmental exposures. Methods A multicenter case-control study in which 35 pediatric-onset multiple sclerosis cases were 1:1 matched to healthy controls on age, sex, self-reported race, ethnicity, and recruiting site. Linear mixed effects models, weighted correlation network analyses, and PICRUSt2 were used to identify microbial co-occurrence networks and to predict functional abundances based on marker gene sequences. Results Two microbial co-occurrence networks (one reaching significance after adjustment for multiple comparisons; q < 0.2) were identified, suggesting interdependent bacterial taxa that exhibited association with disease status. Both networks indicated a potentially protective effect of higher relative abundance of the bacteria observed in these clusters. Functional predictions from the significant network suggested a contribution of short-chain fatty acid producers through anaerobic fermentation pathways in healthy controls. Consistent family-level findings from an independent Canadian-US study (19 case/control pairs) included Ruminococcaceae and Lachnospiraceae (p < 0.05). Macronutrient intake was not significantly different between cases and controls, minimizing the potential for dietary confounding. Interpretation Our results suggest that short-chain fatty acid producers may be important contributors to multiple sclerosis onset. Introduction Multiple sclerosis (MS) affects over 2.8 million people worldwide, resulting in chronic disability and high socioeconomic burden. 1,2 Recently, the gut microbiome has been proposed as not only a mediator of the effect of known environmental risk factors 3-5 but also as an independent risk factor for MS. 6 This relationship has been supported by animal models where commensal microbiota were required for the development of experimental autoimmune encephalomyelitis. 7 Observational studies typically have not identified differences in gut microbiome diversity when comparing MS cases and controls. 8,9 Despite occasional convergence in findings suggesting individual taxa differing in their relative abundance, there is still a considerable amount of variability between studies that may reflect confounding. 8,10 The intrinsic interindividual heterogeneity of healthy gut microbiome composition is further broadened by differences in human lifestyle and physiological factors, limiting our ability to identify causal relationships. 10 Studying pediatric-onset MS (POMS), which may be diagnosed closer to the biological onset of the disease, potentially minimizes the effect of comorbidities and environmental exposures, offering a unique opportunity to unravel the microbiome's contribution to MS pathogenesis. 9
Therefore, we aimed to characterize the bacterial taxonomic profile of POMS patients in a multicenter, matched case-control study utilizing 16S rRNA gene sequencing of stool samples collected shortly after MS symptom onset, and to replicate findings in a second dataset. Study population Individuals with MS onset before age 18 years and healthy controls were recruited from seven sites in the U.S. Network of Pediatric MS Centers, including the University of California San Francisco, State University of New York at Buffalo, University of Alabama at Birmingham, Boston Children's Hospital, Stony Brook University Medical Center, Children's Hospital of Philadelphia, and New York University, between 2012 and 2018. Healthy controls attended general pediatric clinics at the participating institutions and did not have a personal or familial history of autoimmune disorders. POMS cases were within 5 years of symptom onset and met the 2010 McDonald criteria for MS. 11 All participants were tested for the presence of myelin oligodendrocyte glycoprotein antibodies at the Mayo Clinic neuroimmunology laboratory (Rochester, MN) using a live cell-based flow cytometry assay. Those with positive results were excluded, and cases with low titers had their final diagnosis ascertained by two MS experts, as previously described. 12 Additionally, individuals exposed to systemic antibiotics, probiotics, or steroids within 30 days before sample collection were excluded. POMS cases on a disease-modifying therapy (DMT) were invited to enroll if on stable dosing for more than three months. POMS patients were 1:1 matched with healthy controls by their age at stool sample collection (±3 years), sex, self-reported race, ethnicity, and recruiting site. Institutional Review Board approval was obtained from all participating institutions. Participants and their parent or legal guardian provided, respectively, assent and informed consent. Gut microbiota profiling The participant's first stool of the day was collected by a parent, shipped overnight on ice to the University of California, San Francisco, and stored at −80 °C before processing. DNA was extracted, and the V4 region of the bacterial 16S ribosomal RNA (rRNA) gene was amplified for sequencing as previously described. 13 16S rRNA sequencing data were processed as previously described. 14 Forward and reverse reads were processed separately and quality filtered using the DADA2 package version 1.9.0 15 in R, version 4.2.1 (the R Foundation). Reads with more than two expected errors were removed, and forward and reverse reads were confirmed to be of at least 150 base pairs in length. Error rates of the filtered and dereplicated reads were estimated using 100,000 sequences. Paired-end reads with a minimum overlap of 25 base pairs were merged to obtain the full denoised sequences. Chimeras and any abnormally short or long sequences (±5 bases from the median number of reads) were removed. Amplicon sequence variants (ASVs), which permit analysis of 16S rRNA variants that differ by as little as one nucleotide, were used for downstream analysis. 16,17 It is important to note that while an ASV has a unique nucleotide sequence, it might not be assigned a unique species due to the limited capacity of short-read 16S rRNA sequencing to differentiate phylogenetically related species or strains. Using the decontam package with previously described parameters, 18 ASVs with a contaminant classification threshold p < 0.1 were removed. 19
ASVs containing <1/1000th of a percent of total reads were removed. Sequencing reads were representatively rarefied to the minimum sequencing depth (84,818 reads/sample) 100 times, and the rarefied sample profile closest to the sample-specific centroid was selected, as described previously. 13 The resulting tables included 1482 ASVs. Covariates Participants' basic demographics and clinical history were collected with standardized forms completed by the parent or legal guardian, assisted by a research coordinator. Patients with POMS had their MS-related data extracted from a prospective pediatric MS registry. The season at stool sample collection variable was defined as winter (December, January, and February), spring (March, April, and May), summer (June, July, and August), and autumn (September, October, and November). Exposure to breastfeeding was deemed present if the individual was ever breastfed, irrespective of duration. The DMT variable was defined as treatment-naive versus ever exposed. Recent intake of foods and nutrients was standardized for all participants based on their responses to the Block Kids Food Screener, a food frequency questionnaire (FFQ) developed by NutritionQuest. 20 All covariates were measured at stool sample collection (i.e., study baseline), except for body mass index (BMI) and the expanded disability status scale (EDSS), which were measured at the visit date closest to when the stool sample was received. Microbial diversity For microbial diversity analysis, α-diversity was measured by the Shannon, 21 Chao1, 22 and Faith's phylogenetic diversity indexes. β-diversity was assessed using weighted UniFrac 23 distances and visualized using principal coordinates analysis (PCoA). The referred analyses were performed with QIIME2. 24 A linear mixed effects model was applied to estimate the difference in α-diversity indexes between individuals with POMS and healthy controls, adjusting for a priori-defined fixed effects of age (continuous), BMI (continuous), and sex, and random effects of recruiting site, season, and matching pairs. The PERMANOVA test 25 was used to assess the effect of host confounding factors on the variation of microbiome abundance and was performed using the "adonis2" function implemented in the R package vegan. 26 Weighted UniFrac distances were compared individually for each reported host factor (disease status (cases/controls), site, age, race, ethnicity, sex, BMI, DMT usage, breastfeeding, birth delivery mode, season at sample collection, macronutrient consumption, HLA-DRB1 genotype, serum vitamin D levels, Epstein-Barr virus (viral capsid antigen) (EBV-VCA), herpes simplex virus type 1 (HSV1), and cytomegalovirus (CMV) serostatus). The variance of microbial abundance between POMS and controls was estimated after adjusting for sample collecting site, season, and matching pairs. The empirical p-value was obtained by running 999 permutations.
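For readers unfamiliar with the α-diversity metrics named above, a minimal sketch of the Shannon index, H = -Σ p_i ln p_i over observed taxa. This is a generic illustration, not the authors' pipeline, which used QIIME2:

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity H = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

# Toy community of four taxa with uneven abundances
print(shannon_index([50, 30, 15, 5]))  # ~1.14
```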
Microbial composition and disease status ASV relative abundances were arcsine square root transformed. 28,29 Linear mixed effects models were applied to the transformed data to identify ASVs associated with disease status. The model had ASV abundance as its outcome and was adjusted for a fixed effect of breastfeeding (after assessing the effect of host confounding factors in PERMANOVA analyses) and a priori-defined random effects of recruiting site, season, and matching pairs. The fixed effects were reassessed after determining the effect of host confounding factors on the variance of microbiome abundance (Adonis R²). The inclusion of season at sample collection as a random effect was done a priori to account for expected variations in dietary habits that could have influenced microbiome composition. Additionally, stratification by treatment status was performed using separate models. Linear regression was performed using the lmer function from the R package "lme4." To reduce the effect of zero-inflation in microbiome data, a variance filtering step was applied to remove species features with very low variance (lower than the median variance). Microbial co-abundance network Co-abundance network inference of ASVs present in at least 20% of samples (resulting in 268 ASVs) was performed using the SparCC method in R using the SpiecEasi package. 30,31 Microbial modules were identified by weighted correlation network analysis (WGCNA) as described. 14,32 Briefly, the SparCC correlation matrix was generated and transformed to an adjacency matrix using soft thresholding, and a topological overlap matrix was generated. The topological overlap matrix was hierarchically clustered using average linkage in hclust. The resulting dendrogram was cut using dynamicTreeCut in the stats package to generate clusters with at least three ASVs. Correlated modules (r ≥ 0.5) were combined, generating a dissimilarity matrix for further hierarchical clustering. The quantitative values of each module were calculated for each participant from module eigengenes, defined as the first principal component of the abundance matrix of the respective module. Each module eigengene was examined for its association with disease status using linear mixed effects models adjusting for a fixed effect of breastfeeding and random effects of recruiting site, season, and matching pairs. Metagenomic prediction Metagenomic functional profiling of conserved 16S rRNA from microbial communities was predicted using PICRUSt2. 33 Significant modules identified from the microbial co-abundance network analysis were grouped into MetaCyc metabolic pathways. 33 Pathway abundances were dichotomized by their respective median abundance. The association between the predicted metabolic pathways and disease status was estimated by conditional logistic regression models adjusted for the a priori-defined covariates BMI and season, and conditioned on matching pairs using the survival package. The predicted metabolic pathways are thought to be mediators of the association between co-abundance networks and disease status; thus, a different set of confounders for this relationship was defined a priori, independent from the set adopted after the PERMANOVA test findings. 34
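A conceptual sketch of the module-detection logic described above (soft-thresholded adjacency from a correlation matrix, then a dissimilarity, then average-linkage clustering). This is not the authors' code (they used SparCC/SpiecEasi and WGCNA in R), it omits the topological overlap step for brevity, and the soft-threshold power beta = 6 is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def detect_modules(corr, beta=6, n_modules=4):
    """Cluster taxa from a correlation matrix: signed soft-thresholded
    adjacency -> (1 - adjacency) dissimilarity -> average-linkage tree."""
    adjacency = ((1 + corr) / 2) ** beta      # soft thresholding
    dissimilarity = 1 - adjacency
    np.fill_diagonal(dissimilarity, 0.0)
    tree = linkage(squareform(dissimilarity, checks=False), method="average")
    return fcluster(tree, t=n_modules, criterion="maxclust")

# Toy data: 50 samples x 10 taxa of random abundances
rng = np.random.default_rng(0)
corr = np.corrcoef(rng.normal(size=(50, 10)), rowvar=False)
print(detect_modules(corr))  # one module label per taxon
```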
Diet analysis Four dietary components (carbohydrate, protein, fiber, and fat) were extracted from the FFQ, and the amount of each dietary component consumed in grams was compared between cases and controls with paired t-tests. Associations between each dietary component and each ASV were estimated using linear mixed effects models adjusted for the a priori-defined fixed effects of age and BMI, as well as random effects of recruiting site, season, and matching pairs. Complementary analysis An external dataset comprising POMS cases and healthy controls recruited from four Canadian sites and one American site was used for assessing the reproducibility of findings. POMS were 1:1 matched on sex, self-identified race (grouped as white/other/missing), site (Canada/United States), and age (±3 years) at sample collection. Gut microbiome profiling was performed as previously described. 9 Analytical procedures were replicated as described under the sections "Microbial composition and disease status" and "Microbial co-abundance network," except for season and breastfeeding adjustment, since these variables were not available. The linear mixed effects models were adjusted for random effects of site and matching pairs. As per privacy and data sharing agreements, these analyses were conducted in Canada. Statistical considerations All statistical analyses were conducted using two-sided tests and the alpha level was set at 0.05, unless otherwise specified. Tests of microbial composition, microbiome networks or metagenomic predictions against disease status or food intake were controlled with the Benjamini-Hochberg false discovery rate (FDR). 35 Given the limited sample size and expected high interindividual variability, a less conservative FDR threshold of 0.2 was used. Results are presented as coefficients when estimated by linear mixed effects models and as odds ratios (OR) when estimated by conditional logistic regression, alongside their respective 95% confidence intervals (CI). Results Stool samples from 60 individuals with POMS and 43 healthy individuals were obtained. The matching procedure identified 35 pairs, including 24 exact matches (age ±3 years, with the same sex, self-reported race, ethnicity, and recruiting site) and 11 close matches (age ±3 years; different site, race, and/or sex). Baseline characteristics stratified by disease status are outlined in Table 1. Cases and controls presented a similar proportion of females (77.1% vs. 74.3%), white race (71.4% vs. 80%), Hispanic/Latino ethnicity (28.6% vs. 28.6%), and median age at stool sample collection (15.7 vs. 16.1 years). POMS cases had a median disease duration of 0.9 years (IQR 0.3-1.4) and 54% were on a stable DMT. POMS cases that were on DMTs at baseline had a similar median age (14.4 vs. 14.2 years) and disease duration (0.86 vs. 0.87 years) as DMT-naive POMS.
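Before turning to the microbiome results, here is a minimal sketch of the Benjamini-Hochberg adjustment described in the Statistical considerations above (a generic implementation, not the authors' code):

```python
import numpy as np

def bh_qvalues(pvals):
    """Benjamini-Hochberg adjusted p-values: q_(i) = min_{j>=i} p_(j) * m / j."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)
    q = np.minimum.accumulate(scaled[::-1])[::-1]  # enforce monotonicity
    out = np.empty(m)
    out[order] = np.clip(q, 0.0, 1.0)
    return out

pv = [0.001, 0.008, 0.039, 0.041, 0.27, 0.60]
print(bh_qvalues(pv))  # features with q < 0.2 would pass the study's threshold
```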
Microbial diversity α-diversity, or within-individual diversity, was measured at both the species and phylogenetic level. Species diversity, the total number of different species, measured by the Shannon index and Chao1, was not significantly different between POMS individuals and controls (respectively, p = 0.23 and p = 0.08). However, using phylogenetic-based distance measures, which quantify the evolutionary relatedness of groups of microbes within a community, the mean diversity became significantly different when measured by Faith's PD index (p = 0.02). POMS cases presented consistently lower levels of phylogenetic diversity than controls after adjustment for potential confounders, study design, and clustering by site and season of sample collection (Fig. 1A). The effect of host confounding factors on the variance of microbiome abundance was significantly different for the study site and larger, but not significantly different, for season of sample collection, use of DMTs, and history of breastfeeding (Fig. 1B). This observation prompted the inclusion of breastfeeding as a fixed effect for model adjustment, as opposed to BMI, when estimating the association between microbial diversity or co-abundance networks and disease status. β-diversity, or between-individual diversity, using a phylogenetic distance-based measure, was not significantly different between POMS cases and controls after adjustment (p = 0.25), and no clustering patterns were observed in the PCoA (Fig. 1C). Microbial composition and disease status Among the 1482 ASVs identified, 31 were associated with disease status after adjusting for confounders and are presented in Figure 2A (p < 0.05). However, none remained significant after adjustment for multiple comparisons. Taxa with regression coefficients greater than zero, such as those observed for Blautia obeum/provencis/wexlerae (ASV 4) and Prevotella (ASV 84), indicate higher abundance of those taxa in POMS cases, while those with coefficients lower than zero, such as Subdoligranulum (ASV 5, ASV 8), represent a lower abundance in POMS cases relative to controls. After stratifying the models by POMS treatment status, Blautia (ASV 4), Faecalibacterium prausnitzii (ASV 75), and Subdoligranulum (ASV 5) remained significantly associated with disease status, but only for those that were DMT-naive (p < 0.05, Fig. 2B). Of note, Blautia (ASV 4) presented a higher mean relative abundance in DMT-naive POMS (n = 16) when compared to controls (p < 0.01, Fig. 2B,C). The opposite was observed for Subdoligranulum (ASV 5), where a lower relative abundance was observed in DMT-naive POMS relative to controls (p < 0.05, Fig. 2B,C). Despite presenting a less pronounced difference, the direction of the association persisted for the difference in relative abundance of Blautia (ASV 4), Faecalibacterium prausnitzii (ASV 75), and Subdoligranulum (ASV 5) between POMS exposed to DMTs and controls. Akkermansia (ASV 208) presented a higher relative abundance in POMS that was driven by patients on DMTs. Due to the low variance of Akkermansia (ASV 208) among DMT-naive cases and controls, this genus was filtered (reducing the effect of zero-inflation, as a priori defined) and no estimates were provided for this subgroup.
Microbial co-abundance network WGCNA clustering of the SparCC correlation matrix generated 17 modules of ASVs for co-abundance analysis. Those modules aim to identify dependencies between species, suggesting natural microbial communities. Each was tested for association with disease status; two were significantly associated with disease status, although only one remained significant after multiple comparisons adjustment. In both networks, POMS cases had lower abundances of the microbes in each cluster: pink (p < 0.05, q < 0.2) and purple (p < 0.05, q = 0.22). The clusters comprised 18 ASVs, some of which were identified above as individually associated with disease status. Of note, Subdoligranulum (ASV 5, ASV 8) was present in one of the significant modules (pink), while Blautia (ASV 177) was in the other (purple) (Table 2; modules were arbitrarily identified using color names and do not reflect colors used in figures). Metagenomic functional pathway prediction The metabolic pathways predicted from the significant gut microbial co-abundance network modules (pink and purple) are depicted in Figure 3. Of the 150 predicted pathways for the ASVs from the pink module, 11 were significantly associated with disease status (p < 0.05, Fig. 3A). All 11 had a higher pathway abundance among controls than POMS cases. The largest effect sizes (point estimates farthest from the null value of 1) were observed in pathways associated with anaerobic fermentation leading to the production of short-chain fatty acids (SCFA) and lower odds of having POMS: namely, pyruvate fermentation to butanoate (OR: 0.10, 95% CI 0.01-0.71, p = 0.02), pyruvate fermentation to acetone (OR: 0.10, 95% CI 0.01-0.80, p = 0.03), the superpathway of Clostridium acetobutylicum acidogenic fermentation (OR: 0.10, 95% CI 0.01-0.71, p = 0.02), L-1,2-propanediol degradation (OR: 0.10, 95% CI 0.01-0.80, p = 0.03), and acetyl-CoA fermentation to butanoate (OR: 0.23, 95% CI 0.06-0.89, p = 0.03). Of the 165 predicted pathways for the purple module, four were marginally significantly associated with POMS status (p < 0.05, Fig. 3B). Diet analysis The mean carbohydrate, fiber, protein, and fat intake was not significantly different between POMS and controls (Fig. 4). The ASVs with relative abundance significantly associated with different levels of macronutrient intake (p < 0.05, q > 0.2) are represented in Figure 5. The larger effects on relative abundance were associated with higher fiber intake, driving lower relative abundances of Ruminococcaceae (ASV 66) and Methanobrevibacter (ASV 86) and higher abundances of Ruminiclostridium (ASV 62), Bacteroides uniformis (ASV 20), and Bifidobacterium (ASV 41). In addition to Ruminiclostridium (ASV 62), Ruminococcaceae (ASV 151) also belonged to the pink module and was associated with fiber intake levels, but with a less pronounced difference. Further ASVs associated with diet in the pink module included Subdoligranulum (ASV 8) and Coprococcus comes (ASV 58); both had higher relative abundances with higher fat and protein intake. From the purple module, only Dorea longicatena (ASV 22) abundance was associated with diet, slightly increasing with carbohydrate consumption.
Complementary analysis The matching procedure identified 19 pairs of POMS and healthy controls in the replication dataset. Cases and controls presented the same proportion of females (78.9%), white race (42.1%), and site origin (84.2% from Canada), and a similar median age at stool sample collection (17.6 vs. 15.5 years) (Table 3). ASVs associated with disease status after adjustment for potential confounders are presented in Figure 6 (p < 0.05). None remained significant after multiple comparisons adjustment. Multiple ASVs from the Ruminococcaceae family (UCG-010, UCG-005, Ruminiclostridium 9, and Anaerotruncus) presented a lower abundance in POMS cases relative to controls. The same was observed for three ASVs from the Lachnospiraceae family (GCA-900066575, Ruminococcus gauvreauii, and Eubacterium ventriosum), while three ASVs from this family presented a higher abundance (Lachnoclostridium, Coprococcus 3, and Blautia). Akkermansia presented a lower relative abundance in POMS. The WGCNA clustering procedure yielded 31 modules of ASVs, none of which were associated with disease status (data not shown). Discussion In this matched case-control study, POMS individuals with recent disease onset had different relative abundances of several gut bacteria in comparison to healthy controls. Stratified analysis by DMT status preserved the directionality of the association of most ASVs with disease status and had minor influence on their effect magnitude. There is no evidence that these results were affected by variation in disease duration or age at stool sample collection when comparing DMT-treated to untreated POMS patients. The two microbial co-occurrence networks identified by WGCNA, suggesting interdependent bacterial taxa, exhibited an association with disease status, one after adjustment for multiple comparisons. This finding, in turn, suggested a potentially protective effect of having more of the bacteria observed in these clusters. The metagenomic predictions from the significant network (pink) suggested a prominent contribution of SCFA production through anaerobic fermentation pathways. Among these known SCFA producers, bacteria from the Ruminococcaceae family were consistently observed in the main and complementary analyses. SCFA are primarily produced in the proximal colon of healthy subjects by bacterial fermentation of nondigestible carbohydrates and have been associated with anti-inflammatory properties. 36 In past studies, microbial diversity, particularly α-diversity, has been frequently reported as not significantly different between MS cases and controls. 8,37 In the present study, only one measure of α-diversity, one that incorporated phylogenetically derived distances, was significantly lower among POMS cases compared to controls. Phylogenetic diversity is expected to predict functional similarity, given that species more closely related are morphologically similar and assume similar functional roles in their respective ecosystems. 38 Our careful matching procedure and the age range of our cohort may have allowed for uncovering this finding, as a prior study in adult MS-discordant twins found no significant differences. 39 Overall, a lower α-diversity characterizes a less healthy gut microbiota community. 40
Given the modest sample size in POMS studies, ASVs associated with disease status that presented large effect sizes but did not reach the multiple comparisons threshold were reported in this study. Among those, a lower relative abundance of taxa from the Lachnospiraceae (unclassified genera) and Ruminococcaceae families (Faecalibacterium prausnitzii, Subdoligranulum, and UCG-004) was observed in POMS cases. These families are known for their important contribution to butyrate and propionate production from carbohydrate fermentation. 41 Our complementary analysis observed a similar and consistent directionality of effect for significant ASVs from the Ruminococcaceae family. A diminished relative abundance of one taxon of Lachnospiraceae relative to healthy controls had previously been reported only by the Canadian group. 9 Our complementary analysis has expanded this finding to six significant taxa from this family, although distinct abundance patterns were observed. More consistently, a lower relative abundance of Faecalibacterium prausnitzii has been reported in case-control studies in relapsing-remitting MS. 8,37,42 Of note, a reduced abundance of this species has also been frequently reported in other immune-mediated disorders such as Crohn's disease. 43 Similarly to another Canadian study, a lower abundance of Subdoligranulum in individuals with MS compared to healthy controls was found in the present study but had not been reproduced until now. 44 Consistent with prior studies, Akkermansia, a mucin-degrading bacterium capable of producing acetate and propionate, was found at a higher relative abundance in POMS than controls. 8,37 Nevertheless, this finding was not significant when comparing the subset of DMT-naive POMS to controls, which, alongside other subgroup analyses, should be interpreted with caution due to the limited sample size. Additionally, a lower relative abundance of Akkermansia in POMS than controls was observed in the complementary analysis. In contrast to a study from the International Multiple Sclerosis Microbiome Study consortium, we found Blautia had a higher relative abundance in POMS relative to controls. 37 Notably, the 1152-subject consortium study enrolled older adults and used household controls (averaging 50.6 years), which, despite being well matched on the intended covariates, resulted in highly sex-discordant pairs, thereby limiting comparability. 10 In line with our findings, which were further reproduced in the complementary analysis, is a study of adults that found higher Blautia abundances in 31 MS patients than in 36 age- and sex-matched healthy controls. 45 However, comparisons were restricted to the genus level, and different species could exert different effects. Regardless of belonging to the colonic dominant phylum Firmicutes, which is composed of many SCFA-producing species, Blautia cannot produce butyrate from carbohydrates. 41
Microbial communities are expected to naturally occur and share functional dependencies. In this study, we have identified clusters of bacteria from the Ruminococcaceae and Lachnospiraceae families associated with disease status that contained some of the individual ASVs highlighted above in the main (species-level) and complementary (family-level) analyses. Among the shared predicted metabolic pathways expressed by those microbes, the more abundant and larger-effect pathways were those leading to the production of SCFA. In preclinical studies, SCFAs have been shown to exert anti-inflammatory effects, modulating the systemic immune response at the gut barrier level and positively affecting the experimental autoimmune encephalomyelitis disease course. 36 As PICRUSt only predicts conserved functional traits and cannot distinguish strain-specific functionality, 33 these data should be viewed mostly as hypothesis-generating. 46 However, in an open-label, non-randomized trial, propionic acid was administered to 97 MS patients for 2 weeks before DMT initiation, and findings suggested lesser disease progression compared to 57 historical MS controls. 47 Baseline levels of propionic acid were lower in serum and stool samples of MS patients compared to healthy controls, and baseline regulatory T cells were also lower in MS patients. 47 Diet can be an important source of interindividual heterogeneity in the gut microbiota profile. 10 All participants filled out a pre-validated FFQ, and the mean intake of macronutrients was similar between cases and controls. Thus, while diet was associated with differences in some taxa, diet was less likely to explain the differences in the gut microbiota between our cases and controls. Interestingly, two of the taxa identified in the significant co-occurrence networks were associated with macronutrients, where fiber intake was associated with higher relative abundances. These findings are aligned with prior observations that SCFA production is determined by the supply of nondigestible carbohydrates. 41 Among the limitations of our study is the modest sample size. However, given the expected high interindividual variability, a thoughtful matching procedure by multiple covariates was performed, which increased the robustness of findings. Nevertheless, the use of MS-discordant siblings sharing the same household could potentially have improved confounding control, except that it may have limited matching for sex and age. Further, despite the minor influence of DMTs observed in the present work, we cannot rule out their contribution to our findings. Additionally, due to the rarity of POMS, a more conservative threshold for the FDR was used, and significant associations that did not reach the multiplicity adjustment threshold were also reported and could represent false-positive findings. Nonetheless, this possibility was minimized by reproducing the study design and analytical procedures in an external cohort. The comparisons between the two datasets could only be made at the genus level, since species-level data were annotated as unnamed or uncultured for most of the significant taxa in the complementary set. Additionally, differences in variable collection resulted in slight differences in model adjustment for each dataset, limiting result comparability. Last, despite having arisen from a large U.S.
nationwide sample of cases and controls, selection bias, a common concern in case-control studies, cannot be excluded. Moreover, samples were collected close to symptom onset, minimizing but not eliminating concerns for reverse causation. Although resource-intensive, population-based approaches with prospective exposure assessment could, in the future, overcome those uncertainties.

Leveraging the unique window of POMS to study disease pathogenesis allowed for the identification of several bacteria that are, either individually or in network clusters, associated with disease status. Known and predicted SCFA-producing taxa had lower abundance in MS cases. Blood and stool measurements of the related metabolites in future studies are warranted to confirm those findings. In summary, our results suggest SCFA producers may be important contributors to MS onset.

Figure 1. Microbial diversity in pediatric MS and healthy controls. (A) Boxplot of microbiome alpha-diversity measured by the Shannon, Chao, and Faith's PD indices in POMS cases and controls; statistical test by mixed linear regression adjusting for fixed effects of age, body mass index, and sex, and random effects of sample-collection season, site, and matching pairs. (B) Bar plot showing the effect size (Adonis R2) of potential confounders and their association with gut microbial variation (weighted UniFrac distance, PERMANOVA *p < 0.05). (C) PCoA of weighted UniFrac community distance between POMS individuals and controls. BMI, body mass index; CMV, cytomegalovirus; EBV-VCA, Epstein-Barr virus viral capsid antigen; HSV1, herpes simplex virus type 1.

Figure 2. Disease status and treatment-associated microbes. (A) Significantly differential ASVs identified between POMS cases (n = 35) and matched controls, examined by a linear mixed-effects model adjusting for a fixed effect of breastfeeding and random effects of sample-collection season, site, and matching pairs (p < 0.05). Regression coefficients are shown with 95% confidence intervals; positive indicates higher microbial abundance in POMS cases than controls, negative indicates lower. (B) ASVs altered in DMT-naive POMS (n = 16) and POMS exposed to DMT (n = 19) versus their controls (linear mixed-effects model adjusting for a fixed effect of breastfeeding and random effects of sample-collection season, site, and matching pairs). *p < 0.05, **p < 0.01. (C) Arcsine-square-root-transformed relative abundance of one species that is higher and one that is lower in DMT-naive POMS versus matched controls.

Table 2. Amplicon sequence variant (ASV) membership for microbial co-abundance network modules associated with POMS disease status. a ASVs significantly different (p < 0.05, bolded) between POMS cases and controls in the single-ASV analysis (1, different; 0, not different). b Microbial co-abundance network modules were identified using arbitrary color names that do not reflect the colors used in figures.
Figure 3. Metagenomic prediction. Odds of having POMS for high versus low levels of each significant (p < 0.05) metagenomically predicted pathway, adjusted by BMI, season, and matching pairs. Values >1 indicate higher odds of POMS compared to controls; values <1 indicate lower odds. (A) Pink module. (B) Purple module. TCA, tricarboxylic acid cycle.

Figure 4. Diet and disease status. Boxplot of macronutrient intake (g) compared between POMS individuals and controls (paired t-test p-values).

Figure 5. Diet and gut microbes. ASVs significantly associated with macronutrients in cases and controls (combined) by a linear mixed-effects model adjusting for fixed effects of age and body mass index and random effects of site, season, and matching pairs. *p < 0.05, **p < 0.01, ***p < 0.001.

Figure 6. Disease status-associated microbes in the complementary analysis. Significantly differential ASVs identified between POMS (n = 19) and matched controls, tested by a mixed linear regression model adjusting for random effects of site and matching pairs (p < 0.05).

Table 1. Baseline characteristics of individuals with pediatric-onset MS and matched healthy control participants.

Table 3. Baseline characteristics of pediatric-onset MS and matched healthy controls in the complementary analysis. a Three subjects were matched on race at the missing category.
Design of Provably Physical-Constraint-Preserving Methods for General Relativistic Hydrodynamics

The paper develops high-order physical-constraint-preserving (PCP) methods for the general relativistic hydrodynamic (GRHD) equations, equipped with a general equation of state. Here the physical constraints, describing the admissible states of GRHD, refer to the subluminal constraint on the fluid velocity and the positivity of the density, pressure and specific internal energy. Preserving these constraints is very important for robust computations; violating any one of them leads to an ill-posed problem and numerical instability. To overcome the difficulties arising from the inherent strong nonlinearity contained in the constraints, we derive an equivalent definition of the admissible states. Using this definition, we prove the convexity, scaling invariance and Lax-Friedrichs (LxF) splitting property of the admissible state set $\mathcal G$, and discover the dependence of $\mathcal G$ on the spacetime metric. Unfortunately, such dependence yields the non-equivalence of $\mathcal G$ at different points in curved spacetime, and invalidates the convexity of $\mathcal G$ in analyzing PCP schemes. This obstacle is effectively overcome by introducing a new formulation of the GRHD equations. Based on this formulation and the above theories, a first-order LxF scheme is designed on a general unstructured mesh and rigorously proved to be PCP under a CFL condition. With two types of PCP limiting procedures, we design high-order, provably (not probably) PCP methods under discretization of the proposed new formulation. These high-order methods include PCP finite difference, finite volume and discontinuous Galerkin methods.

I. INTRODUCTION

In many cases, high energy physics and astrophysics may involve hydrodynamical problems with special or general relativistic effects, corresponding to fluid flow at nearly the speed of light, or situations where the influence of a strong gravitational field on the hydrodynamics cannot be neglected. Relativistic hydrodynamics (RHD) is very important in investigating a number of astrophysical scenarios from stellar to galactic scales, e.g. astrophysical jets, gamma-ray bursts, core-collapse supernovae, formation of black holes, merging of compact binaries, etc. The governing equations of RHD are highly nonlinear, making their analytical treatment extremely difficult. Numerical simulation has become a primary and powerful approach to understand the physical mechanisms in RHD. The pioneering numerical work on the RHD equations may date back to the Lagrangian finite difference code via artificial viscosity for the spherically symmetric GRHD equations [23,24]. Wilson [33] first attempted to solve the multi-dimensional RHD equations by using the Eulerian finite difference method with the artificial viscosity technique. Since the 1990s, the numerical study of RHD has attracted considerable attention, and various modern shock-capturing methods based on Riemann solvers have been developed for the RHD equations. The readers are referred to the early review articles [11,12,20,21] and some more recent works, e.g. [4,34,36], as well as references therein. Most existing methods do not preserve the positivity of the density, pressure and specific internal energy, nor the bound of the fluid velocity, although they have been used to solve some RHD problems successfully.
There exists a big risk of failure when a numerical scheme is applied to RHD problems involving large Lorentz factors, low density or pressure, or strong discontinuities. This is because once a negative density/pressure or a superluminal fluid velocity is obtained during numerical simulations, the eigenvalues of the Jacobian matrix become imaginary, so that the discrete problem becomes ill-posed. Moreover, a superluminal fluid velocity also yields an imaginary Lorentz factor and leads to the violation of relativistic causality. It is therefore meaningful to design high-order numerical schemes whose solutions satisfy the intrinsic physical constraints.

Recent years have witnessed some advances in developing high-order bound-preserving type schemes for hyperbolic conservation laws. Those schemes are mainly built on two types of limiting procedures. One is the simple scaling limiting procedure for the reconstructed or evolved solution polynomials in a finite volume or discontinuous Galerkin (DG) method, see e.g. [10,39,43-46,48]. Another is the flux-corrected limiting procedure, which can be used in high-order finite difference, finite volume and DG methods, see e.g. [6,7,15,17,18,40,41]. A survey of the maximum-principle-satisfying or positivity-preserving high-order schemes based on the first type of limiter was presented in [47]. The readers are also referred to [42] for a review of these two approaches. Recently, by extending the above bound-preserving techniques, two types of physical-constraint-preserving (PCP) schemes were developed for the special RHD equations with an ideal equation of state (EOS), i.e., the high-order PCP finite difference WENO (weighted essentially non-oscillatory) schemes [35] and the bound-preserving DG methods [26]. More recently, high-order PCP central DG methods were proposed in [38] for special RHD with a general EOS. The extension of PCP schemes to ideal relativistic magnetohydrodynamics was studied in [37], where the importance of divergence-free magnetic fields in achieving PCP methods was revealed in theory for the first time.

The aim of this paper is to design high-order, provably PCP methods for the GRHD equations with a general EOS, including PCP finite difference, finite volume and DG methods. Developing provably PCP methods for GRHD with a general EOS is very nontrivial and still untouched in the literature. The technical challenges mainly come from three aspects: (1) the inherent nonlinear coupling between the GRHD equations due to the Lorentz factor, curved spacetime and general EOS, e.g., the lack of explicit expressions for the primitive variables and flux vectors in terms of the conservative/state vector; (2) one more physical constraint on the fluid velocity in addition to the positivity of density, pressure and specific internal energy; (3) the non-equivalence of the admissible state sets defined at different points in curved spacetime. It is noticed in [27] that Radice, Rezzolla and Galeazzi once attempted to extend the flux-corrected limiter of the non-relativistic case [15] to the GRHD equations, but only achieved enforcing the positivity of the density. The importance as well as the difficulty of designing completely PCP schemes were also mentioned in [27,28]. The work in this paper overcomes the above difficulties via a new formulation of the GRHD equations and a rigorous theoretical analysis of the admissible states of GRHD.

The paper is organized as follows.
Sec. II introduces the governing equations of GRHD and the EOS. Sec. III derives several properties of the admissible state set and proposes a new formulation of the GRHD equations, which play pivotal roles in designing provably PCP methods. Sec. IV proves the PCP property of the first-order LxF scheme on a general unstructured mesh. High-order, provably PCP methods are presented in Sec. V with detailed implementation procedures, including PCP finite volume and DG methods in Sec. V A and PCP finite difference methods in Sec. V B. Concluding remarks are presented in Sec. VI. For better legibility, all the proofs of the lemmas and theorems are put in Appendix A.

Throughout the paper, we use a spacetime signature (-, +, +, +) with Greek indices running from 0 to 3 and Latin indices from 1 to 3. We also employ the Einstein summation convention over repeated indices, and the geometrized unit system so that the speed of light in vacuum and the gravitational constant are equal to one.

II. GOVERNING EQUATIONS

The general relativistic hydrodynamic (GRHD) equations [12] consist of the local conservation laws of the baryon number density and the stress-energy tensor,

∇_μ(ρ u^μ) = 0,   ∇_μ T^{μν} = 0,   (1)-(2)

where ρ denotes the rest-mass density, u^μ represents the fluid four-velocity, and ∇_μ stands for the covariant derivative associated with the four-dimensional spacetime metric g_{μν}, i.e., the line element in four-dimensional spacetime is ds² = g_{μν} dx^μ dx^ν. The stress-energy tensor for an ideal fluid is defined by T^{μν} = ρ h u^μ u^ν + p g^{μν}, where p denotes the pressure, and h represents the specific enthalpy defined by h = 1 + e + p/ρ, with e denoting the specific internal energy. An additional equation for the thermodynamical variables, i.e. the so-called equation of state (EOS), is needed to close the system (1)-(2). In general, the EOS can be expressed in the form (3) or (4). The relativistic kinetic theory reveals [38] that a general EOS (4) should satisfy the condition (5), which is weaker than the condition proposed in [31]. This paper focuses on the causal EOS. We also assume that the fluid's coefficient of thermal expansion is positive, which is valid for most compressible fluids, e.g. gases; then the inequality (6) holds [38]. The most commonly used EOS, called the ideal EOS (7), is p = (Γ - 1)ρe, with Γ ∈ (1, 2] denoting the adiabatic index. The ideal EOS (7) and most of the other EOS reported in numerical RHD, see e.g. [22,25,29,38], usually satisfy the conditions (5)-(6), and the function e(p, ρ) is con- for any fixed positive ρ.
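As a concrete illustration of how an EOS module enters a GRHD code, the sketch below implements the ideal Gamma-law EOS quoted above together with its specific enthalpy and sound speed; the function names and the Γ value are illustrative assumptions, and any other EOS satisfying (5)-(6) could sit behind the same interface.

```python
# Minimal ideal-EOS helper, assuming the standard Gamma-law closure; names
# and interface are illustrative, not taken from any code of the paper.
import numpy as np

GAMMA = 5.0 / 3.0  # adiabatic index, Gamma in (1, 2]

def pressure(rho: float, e: float) -> float:
    """Ideal EOS (7): p = (Gamma - 1) * rho * e."""
    return (GAMMA - 1.0) * rho * e

def specific_enthalpy(rho: float, p: float) -> float:
    """h = 1 + e + p/rho, with e = p / ((Gamma - 1) rho) for the ideal EOS."""
    return 1.0 + p / ((GAMMA - 1.0) * rho) + p / rho

def sound_speed(rho: float, p: float) -> float:
    """Relativistic sound speed c_s = sqrt(Gamma p / (rho h)); stays below 1."""
    return np.sqrt(GAMMA * p / (rho * specific_enthalpy(rho, p)))
```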
In the "test-fluid" approximation, where the fluid self-gravity is neglected in comparison to the background gravitational field, the dynamics of the system is completely governed by Eqs. (1) and (2), together with the EOS (4). When such an approximation does not hold, the GRHD equations must be solved in conjunction with the Einstein gravitational field equations, which relate the curvature of spacetime to the distribution of mass-energy. In this paper, we only focus on the numerical methods for the GRHD equations (1), (2) and (4), assuming that the spacetime metric g_{μν} and its derivatives ∂g_{μν}/∂x^δ are given or can be numerically computed by a given solver for the Einstein equations in each numerical time step. All the following discussions only require that the metric tensor g_{μν} is real symmetric with signature (-, +, +, +).

In order to solve the GRHD equations by using modern shock-capturing methods, it is more suitable to reformulate the covariant form (1)-(2) into a conservative Eulerian formulation, see e.g. [3,19]. For this purpose, we adopt the 3 + 1 (ADM) formulation [1] to decompose spacetime into a set of non-intersecting spacelike hypersurfaces with normal (1/α, -β^i/α), where α > 0 is the lapse function and β^i is the shift vector. Within this formalism the spacetime metric g_{μν} is split as in (8), where γ_{ij} denotes the 3-metric induced on each spacelike slice and is symmetric positive definite. With this decomposition, the GRHD equations can be cast into the first-order conservative form (9) for a conservative vector U = (D, m, E)^⊤ with fluxes F_j(U).

A. Definition and equivalent definition

For the GRHD equations (9), it is very natural and intuitive to define the (physically) admissible state set of U as follows.

Definition 1. The set of admissible states of the GRHD equations (9) is defined by

G = { U = (D, m, E)^⊤ : ρ(U) > 0, p(U) > 0, e(U) > 0, v(U) < 1 }.   (11)

Unfortunately, it is difficult to verify the four conditions in (11) for a given value of U, because there is no explicit expression for the transformation U → (ρ, p, e, v). This also indicates the difficulty in studying the properties of G and in developing PCP schemes for (9) with the numerical solution in G, especially for a general EOS (4). In practice, given the value of U, one has to iteratively solve a nonlinear algebraic equation, e.g. the equation (12) for the unknown pressure p ∈ R_+. Once the positive solution of this equation is obtained, denoted by p(U), the other variables ρ(U) and v(U), together with e(U) = e(p(U), ρ(U)), are sequentially calculated.

An equivalent, simple definition of G is given as follows, with the proof presented in Appendix A 1.

Lemma 1. The admissible state set G in (11) is equivalent to the set

G_γ = { U = (D, m, E)^⊤ : D > 0, q_γ(U) > 0 },   (14)

where q_γ(U) := E - sqrt(D² + m Υ m^⊤), and the matrix Υ = (γ^{ij})_{1≤i,j≤3} is positive definite and usually depends on (t, x^i).

Based on Lemma 1, the admissible state sets G and G_γ will not be deliberately distinguished henceforth. However, in comparison with G, the constraints in the set G_γ are explicit and directly imposed on the conservative variables, so that they can be very easily verified for a given value of U.

B. Mathematical properties

With the help of the equivalence between G and G_γ, the convexity of the admissible state set can be proved, see Lemma 2 with the proof displayed in Appendix A 2. The scaling invariance and Lax-Friedrichs (LxF) splitting properties of G_γ can be further obtained, as stated in Lemma 3: (i) (scaling invariance) λU ∈ G_γ for any U ∈ G_γ and λ > 0; (ii) (LxF splitting) U ± ϱ_ξ^{-1} ξ_j F_j(U) ∈ G_γ for any vector ξ = (ξ_1, ξ_2, ξ_3) ≠ 0, where ϱ_ξ is an appropriate upper bound of the spectral radius of the Jacobian matrix ∂(ξ_j F_j(U))/∂U; for a general EOS it can be taken as in (15), while a smaller/sharper bound is available for the ideal EOS.

The results in Lemmas 1, 2 and 3 are consistent with the special relativistic case established in [35,38] if the spacetime is flat, i.e. g_{μν} is the Minkowski metric diag{-1, 1, 1, 1}. However, when Υ is not a constant matrix and changes in spacetime, the admissible state set G or G_γ becomes dependent on spacetime. In other words, the admissible state sets defined at different points in curved spacetime are inequivalent, i.e. generally G_γ ≠ G̃_γ when Υ ≠ Υ̃. This makes it difficult to use the above properties of G_γ to develop PCP methods for the GRHD equations (9). The reason is that most existing techniques for designing bound-preserving type methods, see e.g. [15,26,35,38,45,48], highly depend on rewriting the target schemes into some form of convex combination and then taking advantage of the convexity of the admissible state set. Unfortunately, in the present case the convexity does not hold between inequivalent admissible state sets defined at different points in curved spacetime, making the related techniques invalid.
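The primitive-variable recovery described around Eq. (12) can be pictured with a simple root-find. The sketch below is written for the special-relativistic case (flat 3-metric, Υ = I) with the ideal EOS; the GRHD version would replace |m|² by m Υ m^⊤. Function names are illustrative, not the paper's.

```python
# Hedged sketch of pressure recovery from a conservative state U = (D, m, E),
# specialised to the ideal EOS and a flat 3-metric for readability.
import numpy as np
from scipy.optimize import brentq

GAMMA = 5.0 / 3.0

def q_value(D, m, E):
    """q(U) = E - sqrt(D^2 + |m|^2); Lemma 1 says D > 0 and q > 0
    characterise the admissible states (flat-metric case)."""
    return E - np.sqrt(D**2 + np.dot(m, m))

def recover_pressure(D, m, E):
    """Solve the nonlinear pressure equation f(p) = (Gamma-1)*rho*e(p) - p = 0."""
    assert D > 0.0 and q_value(D, m, E) > 0.0, "state not admissible"
    m2 = np.dot(m, m)

    def f(p):
        v2 = m2 / (E + p) ** 2            # |v|^2 from m = rho h W^2 v
        W = 1.0 / np.sqrt(1.0 - v2)       # Lorentz factor
        rho = D / W
        rho_e = (E + p) / W**2 - rho - p  # rho*e from E = rho h W^2 - p
        return (GAMMA - 1.0) * rho_e - p

    # f is positive near p = 0 for admissible states and eventually negative,
    # so bracket the root and solve with Brent's method.
    lo, hi = 1e-15, max(1.0, E)
    while f(hi) > 0.0:
        hi *= 2.0
    return brentq(f, lo, hi)
```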
C. Spacetime-independent admissible state set

We find an effective solution to the above "spacetime-dependent" problem via a locally linear map. Specifically, we map the admissible states defined at different points in curved spacetime into a common set G* in the sense of W = ΣU with G* := {ΣU : U ∈ G_γ}, see (16)-(17), where the square matrix Σ satisfies Σ^⊤ Σ = diag{1, Υ, 1}. One can take Σ as diag{1, Υ^{1/2}, 1}, but a better choice is to explicitly define Σ via the Cholesky decomposition of Υ, whose first-row entries are Σ_{11} = sqrt(γ^{11}), Σ_{12} = γ^{12}/sqrt(γ^{11}), Σ_{13} = γ^{13}/sqrt(γ^{11}), and so on. It is worth noting that the transformation U → W in (17) is linear in local spacetime. The set G* defined in (16) does not depend on spacetime; in fact, G* is equal to the admissible state set of the special relativistic case [35,38]. Hence it has the following properties, whose proofs are the same as in the special RHD case [35] and are omitted here.

Lemma 4. The function q(W) in (16) is concave and Lipschitz continuous with respect to W. The admissible set G* is an open convex set. Moreover, λW′ + (1 - λ)W″ ∈ G* for any W′ ∈ G*, W″ ∈ G*, and λ ∈ (0, 1].

D. G*-associated formulation of the GRHD equations

The above analysis motivates us to develop PCP schemes for GRHD by taking advantage of the convexity of the spacetime-independent set G*. In particular, we would like to seek a new form of the GRHD equations whose admissible conservative vectors (state vectors) exactly form the set G*. To this end, we multiply Eqs. (9) by the invertible matrix Σ from the left, and then obtain the equivalent form (18), abbreviated as the "W-form" in the later text. For convenience, these notations omit the dependence of H_j and S on the metric g_{μν} and its derivatives ∂g_{μν}/∂x^δ. Based on the relation (17), the properties of G_γ established in Lemma 3 can be directly extended to G*:

Lemma 5. W ± η_ξ^{-1} ξ_j H_j(W) ∈ G* for any W ∈ G* and any ξ ≠ 0, where η_ξ = α ϱ_ξ is a bound of the spectral radius of the Jacobian matrix ∂(ξ_j H_j(W))/∂W, with ϱ_ξ defined in Lemma 3.

The next two sections will utilize the theories established above to design provably PCP methods for the GRHD equations in W-form (18).
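The local map W = ΣU of Sec. III C amounts to assembling a Cholesky factor of the 3-metric block. A small sketch, assuming Υ is supplied as a 3x3 symmetric positive definite array and the 5-component state ordering U = (D, m1, m2, m3, E); names are illustrative:

```python
# Sketch of the local linear map W = Sigma U, so that the transformed state
# lives in the spacetime-independent set G*.
import numpy as np

def build_sigma(upsilon: np.ndarray) -> np.ndarray:
    """Sigma = diag{1, L^T, 1} with Upsilon = L L^T (Cholesky), which gives
    Sigma^T Sigma = diag{1, Upsilon, 1}, as required by Eq. (16)."""
    L = np.linalg.cholesky(upsilon)   # lower-triangular factor of Upsilon
    sigma = np.eye(5)
    sigma[1:4, 1:4] = L.T             # upper-triangular block acting on m
    return sigma

def to_w(u: np.ndarray, upsilon: np.ndarray) -> np.ndarray:
    """Map a conservative state U into G*; note |(Sigma U)_m|^2 = m Upsilon m^T,
    so q_gamma(U) coincides with the flat-metric q evaluated on W."""
    return build_sigma(upsilon) @ u
```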
IV. A FIRST-ORDER PCP SCHEME

This section aims to establish the first theoretical result on PCP methods for GRHD, i.e., to rigorously show the PCP property of the first-order Lax-Friedrichs (LxF) scheme for the GRHD equations in W-form (18) on a general mesh. For convenience, we will also use x to denote (x^1, x^2, x^3) in the following.

Assume that the three-dimensional "spatial" domain is divided into a mesh of cells {I_k}, such as tetrahedral or hexahedral elements. For generality, the mesh can be unstructured. Let N_k denote the index set of all the neighboring cells of I_k. For each j ∈ N_k, let E_kj be the face of I_k shared with its neighboring cell I_j, i.e. E_kj = ∂I_k ∩ ∂I_j, and let ξ_kj = (ξ_{kj,1}, ξ_{kj,2}, ξ_{kj,3}) be the unit normal vector of E_kj pointing from I_k to I_j. The time interval is also divided into a mesh {t_0 = 0, t_{n+1} = t_n + Δt_n, n ≥ 0}, with the time step-size Δt_n determined by a CFL-type condition. Integrating the W-form (18) over the cell I_k and using the divergence theorem give (19). Let W^n_k be the approximation to the cell-average or the centroid value of W over I_k at t = t_n. Approximating the flux in (19) by the LxF flux and discretizing the time derivative by the forward Euler method, one can derive the first-order scheme (20), where |I_k| and |E_kj| respectively denote the volume of I_k and the area of the face E_kj. The adopted LxF flux is given in (21), with the numerical viscosity coefficient a_kj satisfying the condition (22); the readers are referred to Lemma 5 for the definition of η_ξ for any nonzero vector ξ ∈ R³. Here the corresponding cell-centered values of g_{μν} are used to calculate H_ℓ(W^n_k) and S(W^n_k). Theorem 1 shows that the scheme (20) preserves W^n_k ∈ G* under a CFL condition, see Appendix A 4 for its proof.
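To make the convex-combination mechanism behind Theorem 1 concrete, here is a hedged one-dimensional analogue of one forward-Euler LxF step of scheme (20), on a uniform periodic mesh with the source term dropped; the flux function H and the viscosity bound eta are assumed to be provided, e.g. by modules like the sketches above.

```python
# One forward-Euler LxF step, 1D periodic analogue of scheme (20).
import numpy as np

def lxf_step(W, H, eta, dx, dt):
    """W: array of shape (ncells, 5); H(W) -> flux array; eta: scalar bound
    on the spectral radius (Lemma 5). In this 1D analogue the update is a
    convex combination of W and the states W -+ H(W)/eta, which Lemma 5
    places in G*, whenever dt <= dx / eta."""
    Wm = np.roll(W, 1, axis=0)    # left neighbours (periodic)
    Wp = np.roll(W, -1, axis=0)   # right neighbours
    HW, HWm, HWp = H(W), H(Wm), H(Wp)
    # LxF numerical fluxes at the right and left faces of each cell.
    f_right = 0.5 * (HW + HWp - eta * (Wp - W))
    f_left = 0.5 * (HWm + HW - eta * (W - Wm))
    return W - (dt / dx) * (f_right - f_left)
```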
V. HIGH-ORDER PCP SCHEMES

This section is devoted to designing high-order, provably PCP schemes for the GRHD equations in W-form (18). For the sake of convenience, we assume that the spatial domain is divided into a uniform cuboid mesh, with constant spatial step-size Δ_ℓ in the x^ℓ-direction, ℓ = 1, 2, 3, respectively. The time interval is divided into a mesh {t_0 = 0, t_{n+1} = t_n + Δt_n, n ≥ 0}, with the time step-size Δt_n determined by a CFL-type condition. To avoid confusing subscripts, in this section we sometimes use the symbol x or (x, y, z) to replace the independent variables (x^1, x^2, x^3).

A. PCP finite volume and DG schemes

Assume the uniform cuboid mesh consists of cells I_ijk, and let W̄^n_ijk be the numerical cell-averaged approximation of the exact solution W(t, x) over I_ijk at t = t_n. We aim at designing PCP finite volume or DG type methods for the GRHD equations (18) whose solution W̄^n_ijk always stays in G* if W̄^0_ijk ∈ G*. Towards achieving high-order ((K + 1)-th order) spatial accuracy, approximate solution polynomials W^n_ijk(x) of degree K are also built to approximate the exact solution W(t_n, x) within the cell I_ijk. Such a polynomial vector W^n_ijk(x) is either reconstructed from the cell averages in finite volume methods, or evolved in DG methods. The cell-averaged value of W^n_ijk(x) over the cell I_ijk is required to be W̄^n_ijk.

Method

For the moment, the forward Euler method is used for time discretization, while high-order time discretization will be considered later. The main implementation procedures of our high-order (K ≥ 1) PCP finite volume or DG method can be outlined as follows.

Step 0. Initialization. Set t = 0 and n = 0, and compute W̄^n_ijk and W^n_ijk(x) for each cell I_ijk by using the initial data. Note that the convexity of G* ensures W̄^n_ijk ∈ G*.

Step 1. Given admissible cell-averages W̄^n_ijk, perform the PCP limiting procedure. Use the PCP limiter presented later to modify the polynomials W^n_ijk(x) into W̃^n_ijk(x), such that the revised polynomials satisfy (24), where the set S_ijk consists of several important tensor-product quadrature nodes in I_ijk, built from the L-point Gauss-Lobatto and Q-point Gauss-Legendre quadrature nodes in the three coordinate intervals, respectively. For the provably PCP property, L is suggested to satisfy 2L - 3 ≥ K. For the accuracy requirement, Q shall satisfy 2Q ≥ K + 1 for a (K + 1)-th order finite volume method, or Q ≥ K + 1 for a P^K-based DG method [8].

Step 2. For each cell I_ijk, evaluate the limited values of W̃^n_ijk(x) at the Gaussian points on the faces of the cell, for μ, ν = 1, ..., Q.

Step 3. Compute numerical fluxes. First estimate the upper bounds a^(ℓ)_⋆, with ξ_ℓ denoting the ℓ-th row of the unit matrix of size 3, ℓ = 1, 2, 3. Let {ω_μ}^Q_{μ=1} be the associated weights of the Q-point Gauss-Legendre quadrature, satisfying Σ^Q_{μ=1} ω_μ = 1. Then for each i, j, k, compute the numerical fluxes (26) in the x^ℓ-direction, ℓ = 1, 2, 3, with the summation convention employed, and with the numerical flux Ĥ_ℓ(W^-, W^+) taken as the LxF flux (27). The numerical fluxes in (26) can be regarded as high-order approximations to the corresponding face-averaged fluxes, respectively.

Step 4. Update the cell-averages by the scheme (28), where S̄^n_ijk denotes an appropriate high-order approximation to the cell-average of S over the cell I_ijk, e.g. obtained by numerical quadrature, where Einstein's summation convention is used. Eq. (28) is the formulation of the finite volume scheme or the discrete equation for the cell-averaged values in the DG scheme. As shown in Theorem 2 later, the PCP limiting procedure in Step 1 ensures that the computed W̄^{n+1}_ijk ∈ G*, which meets the condition for performing the PCP limiting procedure in the next time-forward step, see Step 6.

Step 5. Build the polynomials W^{n+1}_ijk(x). For a high-order finite volume scheme, reconstruct the approximate solution polynomial W^{n+1}_ijk(x) from the cell averages W̄^{n+1}_ijk; for a P^K-based DG method (K ≥ 1), evolve the high-order "moments" of W^{n+1}_ijk(x), similarly to (28). The details are omitted here, as they do not affect the PCP property of the proposed schemes.

Step 6. Set t_{n+1} = t_n + Δt_n. If t_{n+1} < T_stop, then assign n ← n + 1 and go to Step 1, where the admissibility of {W̄^{n+1}_ijk} has been ensured in Step 4. Otherwise, output the numerical results and stop.

The main difference between the present PCP method and a traditional method is that the former adds a carefully designed PCP limiting procedure (i.e. Step 1).

PCP limiter

We now present the PCP limiter used in Step 1, which is a key ingredient of the above high-order PCP method. Without this limiter, the original high-order schemes are generally not PCP, and may easily break down after some time steps when solving ultra-relativistic problems involving low density or pressure, or very large velocity. The notion of our PCP limiter is extended from the non-relativistic case [45] and the special relativistic case [26,37,38]. To avoid the effect of rounding errors, we define the set G_ε in (29), which is a subset of G* and satisfies lim_{ε→0+} G_ε = G*, where ε is a sufficiently small positive number and may be taken as ε = 10^{-12} in numerical computations. Under the condition W̄^n_ijk ∈ G* in Step 1, our PCP limiting procedure for each cell I_ijk is divided into the following easily implemented steps (for simplicity, we temporarily omit the superscripts n):

- If W̄_ijk ∉ G_ε, then the cell I_ijk is identified as an approximate vacuum region. Set W̃_ijk(x) = W̄_ijk and skip the following steps.

- Enforce the first constraint in G_ε. Let W_{ℓ,ijk}(x) denote the ℓ-th component of W_ijk(x), and let W_{0,min} = min_{x∈S_ijk} W_{0,ijk}(x). If W_{0,min} < ε, blend the first component towards its cell average so that the minimum over S_ijk reaches ε.

- Enforce the second constraint in G_ε. Let q_min = min_{x∈S_ijk} q(W_ijk(x)). If q_min < ε, then W_ijk(x) is limited by scaling the whole polynomial vector towards the cell average W̄_ijk.

By the concavity of q(W), the above PCP limiting procedure yields revised/limited polynomials W̃^n_ijk(x) satisfying (24). In the end, we remark on several features of the proposed PCP limiter, in addition to its easy implementation. The limiter keeps the conservativity, i.e. the cell average is unchanged. It also maintains high-order accuracy when W^n_ijk(x) approximates a smooth solution without vacuum, similarly to [44,45]. The above PCP limiting procedure is performed independently on each cell, making the PCP limiter easily parallelizable. It is worth emphasizing that the PCP limiter does not depend on the reconstruction technique employed in Step 5 of a PCP finite volume scheme. Therefore, the proposed PCP finite volume schemes are very flexible, in cooperation with any appropriate reconstruction techniques for W^n_ijk(x), e.g., the essentially non-oscillatory (ENO) approach [14], the weighted ENO approach [16], the piecewise parabolic method [9], etc.
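A hedged sketch of the scaling limiter just described, acting on the point values of a cell polynomial at the nodes of S_ijk; the array layout and epsilon handling are illustrative assumptions, and the sketch relies on W̄_ijk ∈ G_ε.

```python
# Zhang-Shu-type PCP scaling limiter, schematic version.
import numpy as np

def q_of(W):
    """q(W) = E - sqrt(D^2 + |m|^2); concave in W (Lemma 4)."""
    return W[..., 4] - np.sqrt(W[..., 0] ** 2 + np.sum(W[..., 1:4] ** 2, -1))

def pcp_limit(values: np.ndarray, wbar: np.ndarray, eps: float = 1e-12):
    """values: W at the points of S_ijk, shape (npts, 5); wbar: admissible
    cell average (assumed in G_eps). Returns limited point values."""
    # Stage 1: enforce D >= eps by blending D towards its cell average.
    d_min = values[:, 0].min()
    if d_min < eps:
        t = (wbar[0] - eps) / (wbar[0] - d_min)
        values[:, 0] = wbar[0] + t * (values[:, 0] - wbar[0])
    # Stage 2: enforce q >= eps by blending the whole state vector;
    # concavity of q guarantees one scaling suffices.
    q_min = q_of(values).min()
    if q_min < eps:
        t = (q_of(wbar) - eps) / (q_of(wbar) - q_min)
        values[:] = wbar + t * (values - wbar)
    return values
```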
Provably PCP property

We are now in a position to present the theoretical result on the PCP property of the proposed finite volume and DG methods. Before discussing the high-order case (K ≥ 1), we first present the result for the special case K = 0, i.e., W^n_ijk(x) = W̄^n_ijk. In this special case, the scheme (28) reduces to the first-order LxF scheme, and the PCP limiting procedure is not required. As a direct corollary of Theorem 1, we immediately have the following consequence.

Corollary 1. When K = 0, the scheme (28) is PCP under a CFL-type condition.

Let {ω̂_μ}^L_{μ=1} be the associated weights of the L-point Gauss-Lobatto quadrature, with Σ^L_{μ=1} ω̂_μ = 1 and ω̂_1 = ω̂_L = 1/(L(L-1)). We can then rigorously show the PCP property of the proposed methods in the high-order case K ≥ 1, as stated in Theorem 2, with the proof displayed in Appendix A 5.

Theorem 2. Assume K ≥ 1 and W̄^0_ijk ∈ G* for all i, j, k. Assume that the condition (24) is satisfied by the revised polynomials W̃^n_ijk(x). Then, under the CFL-type condition (31), the scheme (28) preserves W̄^n_ijk ∈ G* for all i, j, k, n. In other words, the scheme (28) with K ≥ 1 is PCP under the condition (31), where λ_S = 0 if q(S̄^n_ijk) ≥ 0; otherwise λ_S is the positive solution of Eq. (30).

Remarks

The scheme (28) is only first-order accurate in time. To achieve a PCP scheme of high order in time, one can replace the forward Euler time discretization in (28) with high-order strong-stability-preserving (SSP) methods [13]. For example, utilizing the third-order SSP Runge-Kutta method gives (32), where L_ijk(W(x)) is the numerical spatial operator, and W̃^n(x), W̃^*(x), W̃^{**}(x) denote the PCP-limited versions of the reconstructed or evolved polynomial vector at each Runge-Kutta stage. Since such an SSP method is a convex combination of forward Euler steps, by the convexity of G* the resulting high-order scheme (32) is also PCP under the CFL condition (31). To enforce the condition (31) rigorously, one needs an accurate estimate of a^(ℓ)_⋆ for all the Runge-Kutta stages in (32) based only on the numerical solution at time level n, which is highly nontrivial. Hence, in practical computations, we suggest taking the value of a^(ℓ)_⋆ slightly larger. Besides, the time step-size selection strategy suggested in [32] may be adopted to improve computational efficiency. A high-order SSP multi-step method can also be used for the time discretization to achieve high-order PCP schemes, cf. [38]; the details are omitted here. The above complication of enforcing the condition (31) does not exist if one uses an SSP multi-step time discretization.
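The convex-combination structure of the SSP Runge-Kutta method in (32) can be made explicit with a short sketch; 'euler' (one PCP forward-Euler update, e.g. the LxF step sketched earlier) and 'limit' (the PCP limiter) are assumed helpers, mirroring the limited stages W̃^n, W̃^*, W̃^** in the text.

```python
# SSP-RK3 as a convex combination of forward-Euler steps; each stage is
# followed by the PCP limiter, so admissibility propagates by convexity.
def ssp_rk3(W, euler, limit, dt):
    W1 = limit(euler(W, dt))                              # stage 1
    W2 = limit(0.75 * W + 0.25 * euler(W1, dt))           # stage 2
    return limit(W / 3.0 + (2.0 / 3.0) * euler(W2, dt))   # final combination
```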
B. PCP finite difference scheme

Assume a uniform cuboid mesh with grid points {(x_i, y_j, z_k)}, and let W^n_{i,j,k} denote the numerical approximation to the value of the exact solution W(t_n, x_i, y_j, z_k) at the grid point. We would like to design PCP finite difference schemes for the GRHD equations in W-form (18) which preserve W^n_{i,j,k} ∈ G* if W^0_{i,j,k} ∈ G*.

Method

We also focus on the forward Euler time discretization first, and consider high-order time discretization later. An r-th order (spatially) accurate, conservative finite difference scheme for the GRHD equations (18) may be written in the conservative flux-difference form (33). There are lots of approaches, e.g. [2,16,30], to obtain high-order (r > 1) numerical fluxes. However, the resulting high-order schemes are generally not PCP, and may easily break down when solving some demanding extreme problems due to nonphysical numerical solutions W^n_{i,j,k} ∉ G*. In order to preserve W^n_{i,j,k} ∈ G*, the numerical fluxes in our high-order PCP finite difference method are carefully designed with a PCP flux limiter. The outline of the implementation procedure is as follows.

Step 0. Initialization. Set t = 0 and n = 0, and use the initial data to assign the value of W^0_{i,j,k} at each grid point. Physically, W^0_{i,j,k} ∈ G*.

Step 1. Compute the high-order fluxes. Use an appropriate traditional technique, e.g. ENO [14], WENO [16], or the monotonicity-preserving approaches [2,30], with ξ_ℓ denoting the ℓ-th row of the unit matrix of size 3.

Step 5. Set t_{n+1} = t_n + Δt_n. If t_{n+1} < T_stop, then assign n ← n + 1 and go to Step 1. Otherwise, output the numerical results and stop.

PCP flux limiter

The PCP flux limiter used in Step 3 is the key point in designing the above PCP finite difference scheme. Its role is to locally modify any appropriate high-order numerical fluxes into high-order PCP fluxes of the form (36). For the sake of convenience, the following notation is introduced. We employ the vector θ_ijk to represent the limiting parameters, whose ℓ-th component is denoted by θ^(ℓ)_ijk, ℓ = 1, ..., 6. We also use the notation W_{i,j,k}(θ_ijk) to explicitly display the dependence of W^{n+1}_{i,j,k} on θ_ijk; W_{i,j,k}(θ_ijk) can then be reformulated as an affine function of θ_ijk involving the vectors {C_ℓ}^6_{ℓ=1} defined in (38). Our goal is to carefully choose the parameters θ_ijk such that W_{i,j,k}(θ_ijk) = W^{n+1}_{i,j,k} ∈ G* provided W^n_{i,j,k} ∈ G*.

Simply taking θ_ijk = 0 in (36) gives a PCP scheme, which is exactly the LxF scheme, and the following corollary directly follows from Theorem 1.

Corollary 2. If W^n_{i,j,k} ∈ G*, then W_{i,j,k}(0) ∈ G* under the CFL-type condition (39), where λ_S = 0 if q(S(W^n_{i,j,k})) ≥ 0; otherwise λ_S > 0 and solves q(W^n_{i,j,k} + λ_S^{-1} S(W^n_{i,j,k})) = 0.

However, such an approach (taking θ_ijk = 0) evidently destroys the original high-order accuracy and deprives the construction of high-order numerical fluxes in Step 1 of its significance. In order to maintain the r-th order accuracy of the original numerical fluxes, each component of the parameters θ_ijk is expected to be 1 - O(max_ℓ{Δ_ℓ}^r) for smooth solutions. There exist in the literature two types of positivity-preserving flux limiters, which can be borrowed and extended to the GRHD case: the cut-off flux limiter [15] and the parametrized flux limiter [6,17,18,40,41]. The extension of the cut-off limiter to the GRHD case is similar to the special RHD case [35]. In the following, we mainly focus on developing a parametrized PCP flux limiter, because the parametrized limiter works well in maintaining the high-order accuracy [41].

The parametrized PCP flux limiter attempts to seek the almost "best" parameters θ_ijk, such that each parameter is as close to 1 as possible while subject to W_{i,j,k}(θ_ijk) ∈ G*. More specifically, such θ_ijk can be computed through the following two sub-steps of Step 3, whose underlying search is sketched schematically below.
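A hedged, simplified picture of the parametrized limiter: since W(θ) is affine in θ and W(0) is admissible (Corollary 2), one can push θ from 1 toward 0 until the state re-enters G_ε. The actual procedure treats the six components separately through the convex sets of Lemma 6; the sketch collapses them into a single scalar for brevity.

```python
# Simplified parametrized flux-limiter search (single common theta).
import numpy as np

def q_of(W):
    return W[4] - np.sqrt(W[0] ** 2 + np.dot(W[1:4], W[1:4]))

def admissible(W, eps):
    return W[0] >= eps and q_of(W) >= eps

def limit_theta(W0, C, eps=1e-12, iters=60):
    """W0: PCP LxF state W(0); C: the six correction vectors from (38).
    Returns theta in [0, 1] with W0 + theta * sum(C) in G_eps; since G_eps
    is convex, the admissible theta-set is an interval containing 0, so
    bisection converges to the largest admissible theta."""
    corr = np.sum(C, axis=0)
    if admissible(W0 + corr, eps):
        return 1.0                 # high-order flux is already PCP
    lo, hi = 0.0, 1.0              # admissible at 0, inadmissible at 1
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if admissible(W0 + mid * corr, eps):
            lo = mid
        else:
            hi = mid
    return lo
```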
In the following, we present the details of Step 3.1. Specifically, we need to determine the hyperrectangle Θ^⋆_{i,j,k} for given values of W_{i,j,k}(0) and {C_ℓ}^6_{ℓ=1} defined in (38). To avoid the effect of rounding errors, we introduce a small positive number ε, taken as the minimum of 10^{-12} and a quantity depending on W_{i,j,k}(0), where W_{0,i,j,k}(θ) denotes the first component of W_{i,j,k}(θ). Under the condition (39), W_{i,j,k}(0) ∈ G* implies ε > 0, and W_{i,j,k}(0) belongs to the set G_ε defined in (29). We then have the following property, whose proof is displayed in Appendix A 6.

Lemma 6. Under the condition (39), the two sets are both convex.

Based on this lemma, Step 3.1 is divided into two sub-steps for each i, j, k.

Provably PCP property

We now study in theory the PCP property of the above high-order finite difference scheme. Based on the computing approach to the parameters {Λ^(ℓ)_{i,j,k}}^6_{ℓ=1} displayed in Step 3.1(a) and Step 3.1(b), one has Θ^⋆_{i,j,k} ⊂ Θ. From the definition of θ_ijk in Step 3.2, we obtain θ_ijk ∈ Θ^⋆_{i,j,k}, and hence W^{n+1}_{i,j,k} = W_{i,j,k}(θ_ijk) ∈ G_ε. We then immediately draw the following conclusion.

Remarks

The scheme (33) is only first-order accurate in time. High-order SSP methods [13] can be used to replace the forward Euler time discretization in (33) to achieve PCP schemes with high-order accuracy in time. If the SSP Runge-Kutta (resp. multi-step) method is employed, the parametrized PCP flux limiter should be used in each Runge-Kutta stage (resp. each time step). The proposed PCP flux limiter does not depend on which numerical fluxes one uses; that is to say, any high-order finite difference scheme for the GRHD equations in W-form (18) can be modified into a PCP scheme by the proposed parametrized PCP flux limiter. Although the parametrized PCP flux limiter is presented here for finite difference schemes, it is also applicable to high-order finite volume or DG methods to preserve the admissibility of the approximate cell-averages.

VI. CONCLUSIONS

The paper designed high-order, physical-constraint-preserving (PCP) methods for the general relativistic hydrodynamic (GRHD) equations with a general equation of state. It was built on a theoretical analysis of the admissible states of GRHD, and on two types of PCP limiting procedures enforcing the admissibility of the numerical solutions. To overcome the difficulties arising from the strong nonlinearity contained in the physical constraints, an "explicit" equivalent form of the admissible state set, G_γ, was derived, followed by several pivotal properties of G_γ, including the convexity, scaling invariance and Lax-Friedrichs (LxF) splitting property. It was discovered that the sets G_γ defined at different points in curved spacetime are inequivalent. This invalidated the convexity of G_γ in analyzing PCP schemes. To solve this problem, we used a linear transformation to map the different G_γ into a common set G*, which is also convex and is exactly the admissible state set of the special RHD case. We then proposed a new formulation (called the W-form) of the GRHD equations to construct provably PCP schemes by taking advantage of the convexity of G*. Under discretization of this W-form, the first-order LxF scheme on a general unstructured mesh was proved to be PCP, and high-order PCP finite difference, finite volume and DG methods were designed via two types of PCP limiting procedures. It is of particular significance to conduct more validations and investigations of the proposed PCP methods via ultra-relativistic numerical experiments. This is our further work, which may be explored together with computational astrophysicists.

Appendix A

1. Proof of Lemma 1

Proof. The proof consists of two parts.

3. Proof of Lemma 3

Proof. The scaling invariance can be directly verified from the definition of G_γ. In the following, we prove the LxF splitting property in two steps.

(1) Show that U ± ϱ_ξ^{-1} ξ_j F_j(U) ∈ G_γ.
We would like to split it as a convex combination involving Ũ± = U ± ϱ̃_ξ^{-1} ξ_j F̃_j(U), where Ũ± ∈ G_γ due to the scaling invariance, and the positive quantity ϱ̃_ξ = ϱ_ξ - |ξ_j β^j|/α equals ξ_j ξ_j for a general EOS, while a sharper ϱ̃_ξ is used for the ideal EOS. With the help of Lemma 2 and the scaling invariance of G_γ, the form in (A3) indicates that it suffices to show (A4). To this end, we denote Ũ± =: (D±, m±, E±)^⊤; then (A5) and (A6) follow. With the formulas (A5)-(A6), we shall prove (A4) in the following for two cases separately, i.e., the general EOS case and the ideal EOS case with the sharper ϱ_ξ. We will always employ the Cauchy-Schwarz type inequality (A7).

First consider a general EOS. Using (A5)-(A7) and (5) gives estimates which immediately imply (A4). Then we focus on the ideal EOS case with the sharper ϱ_ξ. From (A7) and 0 < c_s < 1, we derive the corresponding estimates, and further (A4).

(2) It follows from Lemma 2 and the deduction proved in part (1) that U ± η^{-1} ξ_j F_j(U) ∈ G_γ. The proof is completed.

In the end we give a remark on Lemma 3. For a general EOS, one can also choose another bound to establish the LxF splitting property in Lemma 3, where ς = ((ρh - p)² - ρ² - p²) W²/p² ≥ 0. This choice of ϱ_ξ is smaller/sharper than that in (15), but is generally not an upper bound of the spectral radius of ∂(ξ_j F_j(U))/∂U.

4. Proof of Theorem 1

Before proving Theorem 1, we first introduce a lemma.

Lemma 7. If W^n_k ∈ G* for all k, then for any δ_t satisfying (A10), the combination (A11) belongs to G*.

Proof. Using the identity Σ_{j∈N_k} ∫_{E_kj} (ξ_kj · Z) ds = ∫_{I_k} (∂Z_ℓ/∂x^ℓ) dx ≡ 0, valid for any constant vector Z = (Z_1, Z_2, Z_3), we reformulate W^n_k as (A11) with Π_kj = W^n_j - a_kj^{-1} ξ_{kj,ℓ} H_ℓ(W^n_j). Thanks to Lemma 5 and the condition (21), one has Π_kj ∈ G*. Thus the form (A11) is a convex combination under the condition (A10). The proof is completed by Lemma 4.

Based on this lemma, the proof of Theorem 1 is given as follows.

Proof. The induction argument is used for the time level number n. Assume W^n_k ∈ G* for all k; we then prove that W^{n+1}_k computed by (20) also belongs to G*. The scheme (20) can be rewritten as (A12), where δ_t = Δt_n/ϑ. Under the condition (22), we know that δ_t satisfies (A10), and thus Ξ_H ∈ G* by Lemma 7. We then show Ξ_S ∈ G* as follows.

5. Proof of Theorem 2

Proof. The induction argument is used for the time level number n. Assume W̄^n_ijk ∈ G* for all i, j, k; we then show that W̄^{n+1}_{i,j,k} computed by (28) also belongs to G*. The proof of W̄^{n+1}_{i,j,k} ∈ G* is divided into three parts.

(1) First prove Ξ_H ∈ G*. This part will always employ Einstein's summation convention for the indices μ and ν running from 0 to Q. The exactness of the L-point Gauss-Lobatto quadrature rule and the Q-point Gauss quadrature rule yields a decomposition with Π⋆ defined by a convex combination, which belongs to G* by the hypothesis and the convexity of G*. Furthermore, Ξ_H can be reformulated as
Biomimetic Hierarchical Nanocomposite Hydrogels: From Design to Biomedical Applications

Natural extracellular matrix (ECM) is highly heterogeneous and anisotropic due to the existence of biomacromolecule bundles and pores. Hydrogels have been proposed as ideal carriers for therapeutic cells and drugs in tissue engineering and regenerative medicine. However, most homogeneous and isotropic hydrogels cannot fully emulate the hierarchical properties of natural ECM, including the dynamically spatiotemporal distributions of biochemical and biomechanical signals. Biomimetic hierarchical nanocomposite hydrogels have emerged as potential candidates to better recapitulate natural ECM by introducing various nanostructures, such as nanoparticles, nanorods, and nanofibers. Moreover, the nanostructures in nanocomposite hydrogels can be engineered as stimuli-responsive actuators to realize the desirable control of hydrogel properties, thereby manipulating the behaviors of the encapsulated cells upon appropriate external stimuli. In this review, we present a comprehensive summary of the main strategies to construct biomimetic hierarchical nanocomposite hydrogels, with an emphasis on the rational design of local hydrogel properties and their stimuli-responsiveness. We then highlight cell fate decisions in engineered nanocomposite niches and their recent development and challenges in biomedical applications.

Introduction

The natural ECM network is highly heterogeneous and anisotropic because of the existence of the rigid domains of stiff biomacromolecules and their assemblies, for example, collagen fibers [1-3]. The ECM provides a structural scaffold via a network of protein-protein and protein-proteoglycan interactions. Because multicellularity evolved independently in different multicellular lineages, the composition, as well as the properties, of the extracellular matrix varies with multicellular structures [4]. The biochemical properties of the ECM allow cells to sense and interact with their extracellular environment using various signal transduction pathways. Meanwhile, the physical properties of the ECM, including its rigidity, density, porosity, insolubility and topography (spatial arrangement and orientation), provide physical signals to the cells [5]. These interactions are involved in the formation of supramolecular assemblies such as collagen fibrils and elastic fibers, in tissue architecture, and in cell-matrix interactions that regulate cell growth and behavior [6]. The heterogeneity of the natural ECM network also changes in a spatiotemporally and mechanically dependent manner. However, few designs of existing pure hydrogels can emulate the heterogeneity of the natural ECM network. Trappmann et al. designed a synthetic hydrogel matrix tethered with either stiff or soft ligands to mimic the local mechanics of the natural ECM [7]. They further demonstrated that the local stiffness could be decoupled from the bulk stiffness of the whole hydrogel, where the local stiffness was generated by the doping of collagen fibers. These findings give promise to the use of biomimetic hierarchical nanocomposite hydrogels to recapitulate the heterogeneity of the natural ECM network.
Recent advances in nanobiotechnology, hydrogels, and composition techniques enable the use of well-characterized nanostructures to mimic the heterogeneous topology in 3D [8-10]. Therefore, biomimetic hierarchical nanocomposite hydrogels are of great interest in the biomedical fields. The biophysical and biochemical properties of biomimetic hierarchical nanocomposite hydrogels can be rationally designed and tailored to emulate the cellular microenvironment in natural ECM. Most of the existing nanostructures can be classified into two categories: (i) organic nanostructures, such as polymeric nanoparticles, liposomes, extracellular vesicles, etc., and (ii) inorganic nanostructures, such as silica nanoparticles, gold nanoparticles, and nanorods. Both organic and inorganic nanostructures can be further engineered with various functionalities, such as stimuli-responsive properties, fast in-vivo clearance, and high loading capacity. Biomaterials-based scaffolds, especially hydrogels, hold considerable promise with respect to enhancing the efficacy of tissue engineering and regenerative medicine [11-14]. Previous studies have shown that hydrogels can be rationally designed to mimic the structures and microenvironments of natural ECM [15-17], but they cannot recapitulate the stiff domains due to the soft and deformable nature of polymer networks. Therefore, the combination of nanostructures and hydrogels can maximize the mimicking of the heterogeneous and anisotropic ECM components and organization. In this review, we first summarize the design principles of biomimetic hierarchical nanocomposite hydrogels, with an emphasis on the functionality of the doped nanostructures, including topology manipulation, bioactive reservoir, and ligand presentation. We next classify the various biomedical applications from drug delivery to tissue engineering in the cartilage, bone, skin, and nerve fields. This review may shed light on the better design and wider biomedical applications of biomimetic hierarchical nanocomposite hydrogels in the future.

The Functionality of Nanostructures in Biomimetic Hierarchical Nanocomposite Hydrogels

The nanostructures can be engineered with various functionalities, including topology manipulation, bioactive reservoir, and ligand presentation. The topology of hydrogels influences cell behavior, e.g., rigid hydrogels promote cell adhesion while a soft matrix enables cell spreading. Anisotropic hydrogels lead to focal adhesion formation, while aligned structures guide and organize cell growth. Ligands on hydrogels also affect various cell activities, including adhesion, migration, and differentiation. When nanocomposite hydrogels are used as a bioactive reservoir, they have high loading capacities and are able to release the loaded cargo in response to various stimuli, which further enriches the toolkit of cell regulation. Thus, researchers can pursue one or a combination of those strategies to design hydrogels for different biomedical applications based on their specific demands.

Topology Manipulation

The biophysical properties of nanocomposite hydrogels are among the key factors modulating cell adhesion dynamics. Rigid structures promote the formation and maturation of cell adhesion structures, especially focal adhesions (FAs) [18-20]. Kubow et al. reported that the adhesion size in 3D was related to the existence and alignment of collagen fibers or electrospun fibers [21].
Therefore, the nanostructures in nanocomposite hydrogels could be recruited to manipulate the topology of these hydrogels. Doyle et al. reported a local 3D matrix microenvironment in which the local stiffness could be finely tuned through the change of the type of collagen fibers [22]. The microenvironmental ECM was tailored to be highly heterogeneous and anisotropic to maximize focal adhesion formation and maturation (Figure 1A). Yuan et al. developed a dynamic gelatin-based nanocomposite hydrogel providing local stiffening sites in a soft matrix [23]. The soft matrix enabled matrix remodeling and cell spreading via dynamic and reversible host-guest interactions, whereas the stiffened structures strengthened the cell anchoring on the cell-adhesive motifs. The encapsulated stem cells exhibited enhanced mechanotransduction and osteogenic differentiation, and finally promoted bone regeneration in a bone defect model (Figure 1B). The responsive and on-demand change of hydrogel topology is important to many biomedical applications, especially well-organized and aligned tissues, such as nerves, muscles, and bones. Rose et al. reported an injectable hydrogel doped with magnetoactive objects containing a very small portion of iron oxide nanoparticles [24]. The magnetoactive objects could be induced into an aligned state under an external magnetic field. The aligned magnetoactive objects could further induce the aligned growth and organization of neurons, promoting localized nerve regeneration (Figure 1C).

Figure 1. (A) († Significantly different fibre rigidity in the same ECM condition, p < 0.05, ANOVA.) Reproduced with permission from [22], Copyright 2015, Springer Nature. (B) The schematic illustration of local stiffening in dynamic hydrogels by doping silica nanoparticles. Reproduced with permission from [23], Copyright 2021, Royal Society of Chemistry.
(C) Soft, heterogeneous, and highly water-swollen anisometric hydrogels doped with magnetic-responsive objects. Reproduced with permission from [24], Copyright 2017, American Chemical Society.

Bioactive Reservoir

Biomimetic hierarchical nanocomposite hydrogels contain various types of nanostructures, which offer higher loading capacity, more sustained release of loaded cargos, and better manipulation tools compared with those of conventional pure hydrogel networks. Yao et al. reported a bisphosphonate-based hydrogel loaded with various concentrations of magnesium ions [25]. The developed hydrogel exhibited sustained magnesium delivery to the neuronal tissues and was able to promote the outgrowth of axons, thereby facilitating peripheral nerve regeneration and functional recovery (Figure 2A). Kang et al. developed a layered double hydroxide-based nanohybrid, where adenosine was encapsulated inside the interlayer spacing through electrostatic interactions [26]. The sustained release of adenosine acted as a ligand of the adenosine A2b receptor (A2bR) and effectively activated neo-bone formation through various processes, including calcification, mature tissue morphology, and vascularization (Figure 2B). Apart from the free diffusion mechanism, the release of encapsulated cargos in nanocomposite hydrogels can also be triggered and manipulated by internal (pH, redox, enzyme, etc.) and external (magnetic, light, thermal, etc.) stimuli. He et al. reported a nanocomposite hydrogel (NH) doped with Pluronic F127 and carbon nanotubes for the treatment of infected wounds [27]. The loading of the antibiotic moxifloxacin hydrochloride enabled the pH-responsiveness of the nanocomposite hydrogels, and the loaded drugs could be finely delivered and released in the acidic microenvironment of the infected wounds (Figure 2C). Phuong et al. developed a nanocomposite hydrogel embedded with redox-responsive carbon dots [28]. The IR825-loaded carbon dots showed much better fluorescence and photothermal conversion rates upon receiving GSH stimuli. The enhanced photothermal properties of the IR825@carbon dots under reducing conditions provided an effective tool for potential cancer treatment (Figure 2D). Qin et al. reported injectable superparamagnetic ferrogels containing iron oxide nanoparticles and Pluronic F127, in which indomethacin and the iron oxide nanoparticles were loaded into the Pluronic F127 micelles [29]. The release of indomethacin was relatively slow when the magnetic field was off, as the diffusion coefficient of hydrophobic drugs in aqueous media is quite low. When the magnetic field was switched on, the iron oxide nanoparticles tended to orient and aggregate; the micelles were therefore squeezed, and the release of indomethacin was significantly accelerated (Figure 2E). Han et al. developed a light-responsive nanocomposite hydrogel containing PNIPAM backbones and polydopamine nanoparticles [30]. The developed nanocomposite hydrogel showed phase transitions and volume changes under near-infrared (NIR) light. NIR-induced drug release and NIR-assisted healing could be easily achieved and adapted to various requirements of biomedical applications (Figure 2F).
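Release profiles like those described above are commonly summarized by fitting empirical kinetics models. The sketch below uses the Korsmeyer-Peppas power law as one such analysis choice assumed here for illustration, not one reported by the cited studies; the data points are placeholders.

```python
# Hedged sketch: fit cumulative drug release to the Korsmeyer-Peppas power
# law M_t/M_inf = k * t^n (valid for the early, <60% released, portion).
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1.0, 2, 4, 8, 12, 24])                    # hours (placeholder)
frac = np.array([0.08, 0.12, 0.18, 0.27, 0.33, 0.46])   # cumulative fraction

def peppas(t, k, n):
    return k * t**n

(k, n), _ = curve_fit(peppas, t, frac, p0=(0.1, 0.5))
# n near 0.5 suggests Fickian diffusion; larger n points to anomalous
# transport, e.g. swelling- or stimulus-assisted release.
print(f"k = {k:.3f}, release exponent n = {n:.2f}")
```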
Peng et al. reported a nanocomposite hydrogel carrying a nanoarray of RGD-coated gold nanoparticles [31]. The presentation of the patterned RGD significantly enhanced the adhesion and osteogenic differentiation of mesenchymal stem cells (MSCs) (Figure 3A). Wong et al. developed a soft hydrogel matrix as a cage for RGD-bearing magnetic nanoparticles to reversibly control the presentation of RGD ligands to the seeded stem cells [32]. The "exposed" and "hidden" states of the RGD ligands could be controlled and switched by applying "upward" and "downward" magnetic fields. The cyclic presentation of RGD ligands maximized cell adhesion on the hydrogel substrates and significantly promoted the osteogenic differentiation of the seeded stem cells (Figure 3B). More recently, Sahar et al. reported an RGD-modified alginate-GelMA hydrogel sheet designed for wound healing and soft tissue regeneration [33]. The results confirmed that the encapsulated MSCs remain viable within the hydrogel, with enhanced collagen deposition. In vivo implantation in an excisional wound model in mice confirmed the effectiveness of the GMSC-hydrogel in expediting wound healing by enhancing angiogenesis and suppressing local proinflammatory cytokines.

The Biomedical Applications of Biomimetic Hierarchical Nanocomposite Hydrogels

Nanocomposite hydrogels with certain formulas are potential materials for various biomedical applications [34]. Customized nanocomposite hydrogels can be used as carriers for cells, drugs, or other bioactive molecules, and the incorporation of stimulus-sensitive components allows the designed delivery systems to release specific contents exclusively [35]. Although substantial progress in nanocomposite hydrogels designed for regenerative medicine has been achieved over the past few years, actual clinical uses of nanocomposite hydrogels are rare. From the viewpoint of clinical translation, the major concerns can be summarized as stability and biosafety. Owing to the additional interfacial interactions in nanocomposite hydrogels, the analysis of these interactions and of the mechanisms behind the performance enhancements has become more complicated and more fundamental [36]. In some cases, relatively poor interfacial interactions between the polymer chains and the nanomaterials, and uneven dispersion of nanomaterials in the hydrogel matrix, dramatically affect the mechanical properties and structural stability of the hydrogels in applications. More importantly, the careful choice of nanomaterials and their concentrations will define the type and intensity of the stimuli that control the release of bioactive molecules from nanocomposite hydrogels. Thus, nanomaterials with low or no toxicity should be chosen to minimize or eliminate possible side effects on cells and to ensure safe clinical application. Moreover, hydrogels with self-healing and tunable mechanical properties are able to conformally fill irregular injury sites.
In addition, the hierarchical 3D structure of hydrogels can provide a suitable microenvironment for cell survival, proliferation, and differentiation; hydrogels therefore have clear advantages in tissue regeneration applications [37]. This section summarizes representative biomedical applications of nanocomposite hydrogels, including drug delivery, tissue engineering, and other applications, for a comprehensive overview.

Drug Delivery

With the advantages of good biocompatibility and hydrophilicity, nanocomposite hydrogels can be used to deliver drugs for the treatment of various diseases (Figure 4A). Zhang et al. reported an HA-BP-Mg nanocomposite hydrogel for the controlled and stable release of Mg2+ at bone defect sites to enhance osteogenesis and stimulate bone regeneration [38]. Furthermore, through the combined release of Mg2+ and dexamethasone, the BP-based injectable hydrogel created a positive feedback circuit of drug release regulation, which significantly enhanced bone regeneration at the intended sites. A self-healing hydrogel based on MgSiO3 nanoparticles (NPs) and a BP-grafted polymer (HA-BP) has been described by Shi et al. for anti-cancer purposes [39]. Targeted drug delivery was achieved by the protonation of BP and the breaking of the chelation with Mg2+ within the MgSiO3 NPs; breast cancer cells (MCF-7) were significantly inhibited by the loaded doxorubicin. For the prevention of breast cancer recurrence, Gao et al. prepared a novel nanocomposite hydrogel functionalized with ferromagnetic vortex-domain iron oxide (FVIOs) for controlled release of the anti-cancer drug doxorubicin. In vivo postoperative treatment further confirmed significant suppression of local tumor recurrence with the FVIO-based hydrogels compared with chemotherapy or hyperthermia alone. The development of functionalized nanocomposite hydrogels may help avoid some of the deficiencies of traditional systemic therapies. GO-based nanocomposites have achieved highly targeted synergistic therapy for colorectal cancer: Amini-Fazl et al. reported loading 5-fluorouracil into the nanocomposite hydrogel CS/PAA/Fe3O4, enhancing the stability of long-term drug dosing to the colon and rectum [40]. Apart from the application examples listed above, studies on nanocomposite hydrogels have progressed constantly in recent years in the prevention [41], diagnosis [42], treatment [43], and prognosis [44] of cancer. Also, the incorporation of stimulus-sensitive components into hydrogels has allowed drug release under various triggers, such as light, temperature, pH, and electric and magnetic fields, improving the flexibility of the response as well as the biomedical performance [45,46]. For example, glucose-sensitive hydrogels allow insulin release in response to changes in glucose concentration, thus maintaining a stable blood glucose level [47]. This type of hydrogel delivery system can be applied in various scenarios in cancer therapy through injection in conjunction with hyperthermia and chemotherapy [48]. Xia et al. presented PSiNPs/PEGDA hybrid hydrogels, which achieved effective drug release to cancer cells in response to NIR light (Figure 4B) [49]. The light-responsive hydrogels provide localized inhibition of cancer cells and have shown great potential in localized cancer treatment. pH-responsive hydrogels have also shown good results as drug carriers owing to the difference in microenvironment between tumors and healthy tissues [50]. Wu et al.
prepared an injectable and self-healing hydrogel with pH-responsiveness, which allowed precise drug release control within a 0.2 pH change (Figure 4C) [51]. In recent years, magnetically responsive hydrogels have drawn attention owing to the non-invasive remote control over internal architecture, actuation, and drug release [45]. By incorporating Fe3O4 magnetic nanoparticles (Fe3O4-MNP) into the CS/GP hydrogel, Zhang et al. reported a magnetic thermo-sensitive hydrogel for the prolonged delivery of Bacillus Calmette-Guérin in the treatment of bladder cancer (Figure 4D) [52]. In addition to the single-responsive nanocomposite hydrogels listed above, multi-responsive nanocomposite hydrogels, such as pH/NIR-controlled hydrogels or magnetic/pH/thermo-responsive hydrogels [53], combine several response modes while increasing the response flexibility and accuracy, which enhances their biomedical performance (Figure 4E).

Regenerative Medicine

In the field of regenerative medicine, the main goals are to provide suitable microenvironments for tissue regeneration and to develop functional biological substitutes that can restore, maintain, or improve tissue function [54,55]. Nanocomposite hydrogels present several advantageous properties, including biocompatibility, tunable mechanical properties and porosity, controllable biodegradation, and a drug delivery capability, making them ideal scaffolds for both hard tissue (Figure 5A,B) and soft tissue repair (Figure 5C,D) [56,57].

Cartilage

Cartilage defects and cartilage degeneration are commonly encountered by orthopedic surgeons, especially with the increasing number of sports traumas and the aging population [62]. The combination of soft polymer chains and stiff nanomaterials in nanocomposite hydrogels can improve their mechanical properties, which allows them to serve as advantageous scaffolds for cartilage regeneration [63]. Shen et al. reported an Alg-DA/PDA hydrogel prepared by introducing polydopamine (PDA) NPs into alginate-modified dopamine (Alg-DA) and crosslinking with calcium ions [64]. The Alg-DA/PDA scaffold showed improved mechanical properties, biocompatibility, and an appropriate degradation rate, providing an optimized environment for cartilage regeneration. In another study, Susanna Piluso et al.
developed 3D nanocomposite hydrogels by embedding starch nanocrystals (SNCs) in a gelatin matrix, which presented an increased compressive modulus and good viability of encapsulated chondrogenic progenitor ATDC5 cells, indicating their potential for cartilage tissue engineering [65]. Moreover, hydrogels can be used as biological inks in combination with 3D printing technology to realize uniform or gradient pore structures [66]. In the study of Sahar Sultan and coworkers, a bio-ink was formed by mixing cellulose nanocrystals (CNCs) into a solution of sodium alginate and gelatin, and scaffolds with a uniform, porous structure were fabricated by 3D printing [67]. The printed scaffolds met the requirements for cartilage regeneration, which corroborates that 3D printing is a versatile method for obtaining customized structures for cartilage tissue engineering. More recently, Felipe Olate-Moya et al. reported nanocomposite hydrogels based on photocrosslinkable alginate conjugated with gelatin and chondroitin sulfate (Figure 5A). A graphene oxide (GO) nanofiller was added to the hydrogels for better printability and cell proliferation [58]. The 3D printed scaffolds were shown to be cytocompatible with h-AD-MSCs, making them promising candidates for cartilage regeneration.

Bone

Bone loss caused by trauma, infection, or congenital diseases has significant effects on quality of life. Various biomaterial scaffolds have been designed for bone tissue engineering [68].
Compared with hard scaffolds, softer hydrogels have the advantage of injectability and do not require pre-molding, so they can fill irregularly shaped defects. Distinctively designed hydrogels with functional nanoparticles have potential in cell differentiation and bone regeneration [69]. Mahmoud Azami et al. prepared a gelatin-amorphous calcium phosphate nanocomposite scaffold with a porous microstructure; when implanted in vivo, the scaffold promoted the mineralization process with good biocompatibility [70]. Yang et al. also reported an injectable hyaluronic acid hydrogel system functionalized with cross-linkable hydrazide groups and bisphosphonate ligands (HA-hy-BP) together with a Ca2+ solution (Figure 5B) [71]. The interaction between BP residues and Ca2+ can serve as nuclei for calcium phosphate deposition and further facilitate mineralization. Composite hydrogels containing inorganic nanoparticles such as hydroxyapatite (Ca10(PO4)6(OH)2, HAP), silicate glasses, or montmorillonite (MMT) have enhanced mechanical properties and better bone regeneration performance. J. Barros et al. studied alginate-nanohydroxyapatite hydrogel systems and reported that a 30 wt% nHAP content exhibited the best osteoblastic cell proliferation, trabecular bone formation, and matrix mineralization [72]. Meanwhile, Mani Diba and coworkers mixed bioactive glass particles with bisphosphonate-functionalized gelatin to prepare a composite colloidal gel for the treatment of osteoporotic bone defects [73]. More recently, Zhong-Kai Cui et al. introduced MMT into a photopolymerizable methacrylated glycol chitosan (MeGC) hydrogel system to fabricate an injectable nanocomposite hydrogel for bone tissue engineering [74]. In vitro results showed that the hydrogel could promote cell proliferation and attachment. Furthermore, they applied the hydrogel in vivo to a critical-sized mouse calvaria defect model, demonstrating its potential effectiveness for bone regeneration.

Skin

Skin serves as the largest organ of the human body and the first protective line against infections [75]. Wound healing is sometimes challenging because skin defects often vary in shape, size, and depth, and in some cases, when skin defects are combined with systemic diseases, they may lead to serious health problems. Skin tissue regeneration requires scaffolds that are both soft and anti-infective. Nanocomposite hydrogels with different formulas have been widely used to meet such requirements [76]. Shi et al. reported a nanocomposite hydrogel based on pendant bisphosphonate-modified hyaluronan (HA-BP) and AgNO3, which has unique advantages such as great moldability, anti-bacterial properties, and self-healing properties [77]. In vivo results have also confirmed complete epithelium layer regeneration, smaller remaining wounds, and better vascularization with the application of the BP-Ag nanocomposite hydrogel, suggesting its promising potential for regenerative wound healing. More recently, Rasul Rakhshaei et al. presented chitosan-gelatin/zinc oxide nanocomposite hydrogels (CS-GEL/nZnO) with both antibacterial and drug delivery properties, which could be helpful for the wound healing process [78]. Apart from metal-ion nanoparticles, graphene-based nanomaterials have also been used in the preparation of nanocomposite hydrogels for skin tissue engineering.
In a study reported by Huang et al., functionalized graphene oxide (GO) was introduced into a GelMA/dopamine-grafted hyaluronic acid (HA-DA) hydrogel to form the GelMA/HA-DA/GO-βCD-BNN6 nanocomposite hydrogel (Figure 5C) [60]. Both in vitro and in vivo results demonstrated the antibacterial effect of the hydrogel, together with good mechanical properties and biocompatibility. This nanocomposite hydrogel could promote full-thickness wound healing and could be an ideal candidate for skin tissue regeneration.

Nerve

The goal of nerve tissue engineering is to provide a suitable microenvironment for nerve regeneration. Ideally, nerve grafts should not only match the mechanical properties and porosity of natural nerves but also have biological effects that promote neural growth [79,80]. Nanocomposite hydrogels with certain formulas are potential materials. Several studies have focused on graphene-based hybrid hydrogels for nerve regeneration [81]. Huang et al. prepared graphene-based hydrogels by mixing graphene or graphene oxide with polyurethane (Figure 5D) [61]. With rheological properties suitable for 3D printing and biocompatibility with NSCs, the graphene-polyurethane nanocomposite hydrogel may serve as a viable option to facilitate nerve regeneration. In addition to carbon-based nanomaterials, BP-metal based hydrogels could also be optimized for nerve regeneration. Our team is currently developing BP-Mg nanocomposite hydrogels for peripheral nerve regeneration with good shear-thinning and injectability properties and sustained release of Mg2+. We have obtained encouraging results both in vitro and in vivo, suggesting that these hydrogels can promote peripheral nerve regeneration and functional recovery.

Other Applications

In nanocomposite hydrogels, nanoparticles interlink with polymer chains to form strong bonds, resulting in greater energy dissipation and thus enhanced adhesion (Figure 6A) [82]. Taking advantage of this unique property, adhesive hydrogels have various biomedical applications such as hemostasis [83] and optical lenses (Figure 6B). Liang et al. prepared an adhesive hemostatic nanocomposite hydrogel based on dopamine-modified hyaluronic acid and reduced GO, which possessed good hemostatic capacity in vivo (Figure 6C) [84]. Li et al. reported tough aluminum hydroxide nanocomposite hydrogels (Al-NC gels) with high transparency (over 95%) for fast-tunable focus lenses (Figure 6D) [85].
Summary

Natural ECM consists of a soft matrix and stiff domains that support various cell activities, such as cell growth, proliferation, and differentiation. Nanocomposite hydrogels are highly hydrated soft polymeric networks doped with stiff nanostructures. Biomimetic hierarchical nanocomposite hydrogels are therefore employed to optimize the biophysical and biochemical regulation of seeded or encapsulated cells, thereby manipulating cell fates. Herein, we have given a comprehensive review, from design principles to biomedical applications. This review describes various strategies that can be used to manipulate the biophysical and biochemical properties of nanocomposite hydrogels and then highlights their biomedical applications, especially drug delivery and tissue engineering. From the viewpoint of clinical translation, the major concerns can be summarized as stability and biosafety. Further studies to broaden the construction tools and expand the application scope will require joint efforts from materials and biological scientists.

Author Contributions: Z.Y., J.X., and W.Y. wrote the original manuscript. J.X., J.S., L.Q., and W.Y. proposed the review topic and revised the manuscript. J.X., L.Q., and W.Y. provided the outline and discussion points and supervised the whole drafting process. All authors have read and agreed to the published version of the manuscript.
Dipole Symmetry Near Threshold

In celebrating Iachello's 60th birthday we underline his many seminal contributions to the study of the degrees of freedom relevant for the structure of nuclei and other hadrons. A dipole degree of freedom, well described by the spectrum generating algebra U(4) and the Vibron Model, is a most natural concept in molecular physics. It has been suggested by Iachello, with much debate, to be most important for understanding the low lying structure of nuclei and other hadrons. After its first observation in $^{18}O$ it was also shown to be relevant for the structure of heavy nuclei (e.g. $^{218}Ra$). Much like the Ar-benzene molecule, it is shown that molecular configurations are important near threshold, as exhibited by states with a large halo and strong electric dipole transitions. The cluster-molecular Sum Rule derived by Alhassid, Gai and Bertsch (AGB) is shown to be a very useful model-independent tool for examining such dipole molecular structure near threshold. Accordingly, the dipole strength observed in halo nuclei such as $^6He$, $^{11}Li$, $^{11}Be$, $^{17}O$, as well as in the N=82 isotones, is concentrated around threshold and exhausts a large fraction (close to 100%) of the AGB sum rule, but a small fraction (a few percent) of the TRK sum rule. This is suggested as evidence for a new soft dipole Vibron-like oscillation in nuclei.

Molecular Dipole Symmetry

A molecular degree of freedom is characterized by excitations that involve the relative motion of two tightly bound constituents and not the excitation of the objects themselves. Hence it is associated with a polarization vector known as the separation vector. Such a vector can be described classically in a geometrical model in three dimensions or by using the corresponding group U(4) [1] and the very successful Vibron model of molecular physics [2]. This model has two symmetry limits that correspond to the geometrical description of rigid molecules, the O(4) limit, or soft molecules, the U(3) limit. A most comprehensive discussion of such molecular structure and of the Vibron model can be found in the Iachello-Levine book [2] on "Algebraic Theory of Molecules". In Fig. 1, taken from that book, we show the characteristic dimensions of the Ar-benzene molecule. The argon atom is loosely bound to the (tightly bound) benzene molecule by a van der Waals polarization, and thus this molecular state lies close to the dissociation limit. We note that the relative dimensions, and indeed the very polarization phenomenon, are reminiscent of a halo structure in which the argon atom creates a "halo" around the benzene molecule.

The AGB Cluster Sum Rule

The polarization phenomenon associated with a molecular state implies that such a state supports dipole excitations of the separation vector. In this case expectation values of the dipole operator do not vanish, as the center of mass and the center of charge of the polarized molecular state do not coincide [3,4]. Hence molecular states give rise to low lying dipole excitations. While the high lying Giant Dipole Resonance (GDR) is associated with a Goldhaber-Teller [5] excitation of the entire neutron distribution against the proton distribution, a molecular excitation involves a smaller fraction of the nucleus at the surface and is expected to occur at lower excitation energy than the GDR; i.e. a soft dipole mode [6,7].
The GDR exhausts the Thomas-Reiche-Kuhn (TRK) [8] energy weighted dipole sum rule as applied to nuclei,

$$S_{TRK} = \sum_f (E_f - E_0)\, B(E1; 0 \to f) = \frac{9}{4\pi}\,\frac{\hbar^2 e^2}{2m}\,\frac{NZ}{A}, \qquad (1)$$

and for a molecular state composed of two clusters $(A_1, Z_1)$ and $(A_2, Z_2)$, Alhassid, Gai and Bertsch [9] derived sum rules by subtracting the individual sum rules of the constituents from the total sum rule:

$$S_{AGB} = \frac{9}{4\pi}\,\frac{\hbar^2 e^2}{2m}\left(\frac{NZ}{A} - \frac{N_1 Z_1}{A_1} - \frac{N_2 Z_2}{A_2}\right) = \frac{9}{4\pi}\,\frac{\hbar^2 e^2}{2m}\,\frac{(Z_1 A_2 - Z_2 A_1)^2}{A\, A_1 A_2}. \qquad (2)$$

The molecular sum rule, Eq. (2), was shown to be useful in elucidating molecular (cluster) states in $^{18}$O, where the measured B(E1)'s and B(E2)'s exhaust 13% and 23%, respectively, of the molecular sum rule [10]. Similarly, these molecular states in $^{18}$O have alpha widths that exhaust 20% of the Wigner sum rule. The branching ratios for electromagnetic decays in $^{18}$O were also shown to be consistent with predictions of the Vibron model in the U(3) limit [11]. Indeed the manifestation of a molecular structure in $^{18}$O has altered our understanding of the coexistence of degrees of freedom in $^{18}$O. The dipole strength at approximately 1.2 MeV in $^{11}$Li [14], shown in Fig. 2, exhausts approximately 20% of the molecular sum rule, and the total strength integrated up to 5 MeV exhausts approximately 100% of the cluster sum rule [15,16], but only approximately 8% of the TRK sum rule; see Table 1. We emphasize that the experimental efficiency at, for example, 6 MeV is very large (30%), but no strength is found at higher energies beyond 100% of the molecular sum rule. These two facts strongly suggest the existence of a low lying soft dipole mode in $^{11}$Li. Similar observations are reported in $^{11}$Be [17], the oxygen isotopes [18] and $^6$He [19], all believed to exhibit a halo structure. The N=82 isotones also show dipole strength near threshold, as shown in Fig. 3 [20]. These results are summarized in Table 1. The ratio of the TRK to the AGB sum rule is given by:

$$\frac{S_{TRK}}{S_{AGB}} = \frac{NZ\, A_1 A_2}{(Z_1 A_2 - Z_2 A_1)^2}.$$

Conclusions

In conclusion we demonstrate that molecular configurations play a major role in the structure of light and heavy nuclei. Unlike the Giant Dipole Resonance, which involves oscillation of the entire neutron and proton distributions, these Vibron states involve only oscillations of the surface of the nucleus, and hence they lie at lower energies than the GDR. Similarly, while the GDR exhausts the TRK sum rule, the Vibron states exhaust the AGB cluster sum rule.
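As a numerical footnote to the sum rules above, the short Python sketch below evaluates Eqs. (1) and (2) and their ratio. The cluster decompositions chosen here ($^{18}$O as $^{14}$C + $\alpha$, $^{11}$Li as a $^9$Li core plus a dineutron halo) are illustrative assumptions, not the unique physical splits:

```python
import numpy as np

# hbar^2/(2 m_N) in MeV fm^2; prefactor (9/4pi) hbar^2 e^2 / (2 m_N)
HBAR2_2M = 20.7
PREF = 9.0 / (4.0 * np.pi) * HBAR2_2M   # ~14.8 e^2 fm^2 MeV

def s_trk(N, Z):
    """TRK energy-weighted dipole sum rule, Eq. (1), in e^2 fm^2 MeV."""
    return PREF * N * Z / (N + Z)

def s_agb(A1, Z1, A2, Z2):
    """AGB cluster sum rule, Eq. (2), for the relative motion of two clusters."""
    A = A1 + A2
    return PREF * (Z1 * A2 - Z2 * A1) ** 2 / (A * A1 * A2)

# 18O as a 14C + alpha molecule (illustrative cluster split)
print("18O :  S_TRK = %.1f, S_AGB = %.2f" % (s_trk(10, 8), s_agb(14, 6, 4, 2)))

# 11Li as a 9Li core plus a dineutron halo (illustrative)
trk, agb = s_trk(8, 3), s_agb(9, 3, 2, 0)
print("11Li:  S_TRK = %.1f, S_AGB = %.2f" % (trk, agb))
print("11Li:  S_AGB / S_TRK = %.3f" % (agb / trk))
```

For the $^{11}$Li split the ratio $S_{AGB}/S_{TRK}$ comes out near 0.08, the same scale as the roughly 8% of the TRK sum rule quoted for the near-threshold strength in the text.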
ELASTOGRAPHIC ANALYSIS OF THE SUPRASPINATUS TENDON IN DIFFERENT AGE GROUPS

ABSTRACT Objective: To compare the mechanical properties of the supraspinatus tendon in different age groups using Supersonic Shearwave Imaging (SSI) elastography. Methods: We evaluated 38 healthy individuals of both genders, 20 aged 20 to 35 years and 18 over 60 years. The shear modulus of the supraspinatus tendon was measured by SSI elastography, always on the right side. Means between age groups were compared and statistically analyzed using the Shapiro-Wilk normality test followed by Student's t-test, with p ≤ 0.05 established as statistically significant. Results: A statistically significant difference was observed when the mean shear modulus of the supraspinatus tendon of young adults (23.98 ± 9.94 kPa) was compared with that of older adults (17.92 ± 6.17 kPa). Conclusion: We found a difference between the means of the shear modulus measured by SSI elastography, showing a significant decrease of the shear modulus with advancing chronological age. Level of Evidence III, Diagnostic Studies - Investigating a Diagnostic Test.

INTRODUCTION

Rotator cuff injuries, especially of the supraspinatus muscle tendon (SP), are among the most prevalent injuries of the upper limbs. 1 Their etiology is multifactorial, including degenerative, traumatic and inflammatory causes. 2 Yamamoto et al. 1 performed ultrasonography on 1,366 individuals aged between 22 and 87 years (mean age 57.9 years) and observed a high prevalence of rotator cuff injuries, reaching 20.7%. 1 In addition to being very prevalent, such injuries can be disabling, because the pain can force individuals to withdraw from sports and work activities. 2-4 The prevalence of rotator cuff disease increases with age. Sher et al. 5 performed magnetic resonance imaging in asymptomatic individuals and found rotator cuff injuries in 4% of patients under 40 years old and in 54% of those aged 60 years or older. 5 Tempelhof et al. 6 carried out shoulder ultrasonography on 411 asymptomatic volunteers and found a global prevalence of rotator cuff injury of 23%. This study also reported that the prevalence of this injury increased with age: 13% in individuals in their fifties, 20% in the sixth decade and 31% in the seventh decade. 6 Although magnetic resonance imaging is the most widespread imaging method for assessing changes in the rotator cuff tendons, elastography has been shown to be as effective as MRI in the diagnosis and characterization of these alterations. 7,8 In a wide-ranging literature review, Washburn et al. 9 showed that elastography has been used in studies of various structures, including the calcaneus, patellar and quadricipital tendons and the rotator cuff. 9 There are two main modalities of elastography: compression (EC) and shear (ES). ES provides noninvasive estimation of tissue mechanical properties. The technique involves a mechanical disturbance of the tissue by an impulse of force that generates a shear wave; the tissue displacements are imaged, and the speed of the local shear wave (LSW) is estimated from the "flight time" of this wave. Soft tissue LSW measurements can be interpreted as an indirect evaluation of the shear modulus. 10 (The shear modulus µ is related to the LSW speed c and the tissue density ρ by µ = ρc².)
When compared to isolated ultrasonography, ES potentially increases the sensitivity and diagnostic accuracy for tendinopathies, in addition to detecting pathological changes earlier, enabling the prediction of which tendons are at risk of injury and the evaluation of the recommended treatments. 11

Objective

This study aims to compare the mechanical properties of the supraspinatus tendon in two distinct age groups, using measurement of the tendon shear modulus by elastography.

Sample

The ethical guidelines of this study were analyzed by the Research Ethics Committee of the Hospital, with approval recorded under Embodied Opinion No. 1,674,064 of August 8, 2016. The volunteers were recruited by convenience sampling, and 38 participants were divided into two groups: one of young adults aged between 20 and 35 years (n = 20) and the other of older adults over 60 years of age (n = 18). All subjects agreed to participate in the study by signing a free and informed consent form. The groups are clearly distinct from each other: the studies mentioned above show that the prevalence of rotator cuff ruptures is low in individuals under 40 years of age and high in those over 60. This fact was used as the criterion for defining the age groups. Anamnesis and physical examination were performed on the candidates, and having the right upper limb as the dominant limb was an inclusion criterion. Patients with current or previous shoulder symptoms, those with a history of diseases and/or previous shoulder surgeries, and those with known systemic disease were excluded. Patients with ultrasound evidence of supraspinatus rupture were also excluded.

Elastography

For shear modulus acquisition, the Aixplorer v9 equipment (Supersonic Shearwave Imaging, Aix-en-Provence, France) was used (Figure 1), with a SuperLinear™ SL 10-2 transducer, 40 mm wide, with 256 piezoelectric elements, operating in the 2 to 10 MHz range and with a lateral resolution of 0.3 mm at -6 dB. The participants were placed in the sitting position, with the back of the right hand resting on the lumbar region to expose the supraspinatus tendon, the left upper limb extended along the body, hips and knees flexed at 90°, and feet resting on the ground (Figure 2). 12 The volunteers kept their muscles relaxed throughout the examination. A radiologist experienced in the acquisition of musculoskeletal ultrasound images acquired the images with the transducer placed longitudinally to the fibers, using minimal compression and gel for the best acoustic coupling (Figure 3). A total of three images were acquired to determine the reliability of the method. 12 Before activating the elastography mode, the supraspinatus tendon was assessed for integrity and the best ultrasound image was chosen. The elastographic mode was then activated, with the elastogram in the range of 0-800 kPa. A rectangular mapping area demonstrating the tendon boundaries and surrounding structures was selected, positioned in the central region of the tendon (Figure 4). 13

RESULTS

The descriptive data for both groups are presented in Table 1. A significant difference was found between the mean supraspinatus tendon shear modulus of the young group and that of the older adults (p = 0.033) (Figure 5). The mean age in the young and older adult groups was, respectively, 28.05 and 67.9 years. The group of young individuals was composed of four women and 16 men, while the older adult group was composed of 11 women and seven men. There was no significant difference in shear modulus between women and men (p = 0.891) (Figure 6).
Image processing was implemented in a MATLAB routine (MathWorks, Massachusetts, USA) to estimate the shear modulus, measured in kilopascals (kPa). In this routine, a circle was manually traced in the mapping area, defining the central region of the tendon as the region of interest. The shear modulus was obtained from each region of interest in each image.

Statistical analysis

The intraclass correlation coefficient (ICC 2,1) was applied to evaluate the reliability of the measurements performed on the same day. The Shapiro-Wilk normality test was performed. After confirming the normality of the shear modulus, Student's t-test for independent samples was performed to compare the means of the young and older adult groups, as well as to compare women and men. All statistical treatment was performed with the commercial package GraphPad Prism 5.0 (GraphPad Software Inc., USA) at 5% statistical significance.

DISCUSSION

This study showed that the supraspinatus tendon shear modulus varies with advancing age. In fact, younger patients, between 20 and 35 years of age, presented a mean shear modulus of 23.28 ± 9.94 kPa, higher than that of the group over 60 years of age, which was 17.92 ± 6.17 kPa. Thus, the supraspinatus tendon was shown to have a higher shear modulus, i.e. to be stiffer, among young patients, while in older adults the tendon was less rigid. In fact, with aging, rotator cuff tendons undergo structural changes, such as loss of the fibrillar pattern and microruptures, which decrease their compressive strength. 14 Consequently, the compression exerted by the transducer causes greater tissue deformation, which was measured as a smaller tendon shear modulus in the group of individuals over 60 years of age. This finding agrees with previous studies. In a cadaveric study, Klauser et al. 15 observed a correlation between histological and sonographic findings in calcaneus tendons; the progression of tendon degeneration was accompanied by the "softening" of the tendon on elastography. Studies of the supraspinatus tendon, such as ours, have also revealed that degenerative tendinopathy is associated with a greater capacity for tendon deformation during elastographic evaluation. 16-19 On the other hand, Baumer et al. 20 evaluated the influence of age on the supraspinatus tendon shear modulus in individuals of different ages and observed that older individuals had stiffer tendons. However, unlike our study, the measurement was carried out in the intramuscular portion of the tendon, which does not allow isolated evaluation of the tendon itself, since the muscle belly is also involved. In a study that correlated elastographic results with magnetic resonance findings, Lee et al. 21 also observed greater stiffness in tendons with tendon disease. However, differently from our study, the elastographic technique was based on the strain ratio, in which the tendon elasticity is measured by taking the elasticity of another tissue as a reference, which may generate less accurate results. A study with cadavers showed that aging can alter the biomechanical properties of the myotendinous unit of the rotator cuff. 8 However, the identification of these changes in vivo is not yet well established. Although MRI is the most widely applied method to assess rotator cuff injuries, it is not able to provide accurate information on the mechanical properties of tendons. In this context, elastography can help in the evaluation of such properties.
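The analysis pipeline described above (ROI-averaged moduli, Shapiro-Wilk normality check, then Student's t-test) can be sketched in a few lines of Python. The wave-speed values below are invented for illustration, and the µ = ρc² conversion assumes a nominal soft-tissue density; this is not the study's actual data or MATLAB routine:

```python
import numpy as np
from scipy import stats

RHO = 1000.0  # assumed soft-tissue density, kg/m^3

def shear_modulus_kpa(c_ms):
    """Shear modulus mu = rho * c^2 from shear-wave speed c (m/s), in kPa."""
    return RHO * np.asarray(c_ms, dtype=float) ** 2 / 1000.0

# Hypothetical per-subject ROI-averaged wave speeds (m/s); invented values.
young = shear_modulus_kpa([4.9, 5.3, 4.4, 5.1, 4.7, 5.6, 4.2, 5.0])
older = shear_modulus_kpa([4.1, 4.4, 3.9, 4.6, 4.0, 4.3, 3.8, 4.5])

# Normality check for each group, then independent-samples Student's t-test
for name, group in (("young", young), ("older", older)):
    w, p = stats.shapiro(group)
    print(f"{name}: Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

t, p = stats.ttest_ind(young, older, equal_var=True)
print(f"Student's t-test: t = {t:.2f}, p = {p:.4f}")
```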
Lee et al., 21 in 2015, showed that elastographic findings correlate with MRI findings in patients with rotator cuff tendinosis, but they did not include patients with tendon rupture. In 2014, Seo et al. 22 compared the results of elastography with those of MRI and ultrasonography, finding good correlation between the methods. In 2017, Krepkin et al. 23 conducted a pilot study comparing T2-weighted MRI images with the findings of shear wave elastography, observing good correlation. Compared with MRI, elastography has some additional advantages: besides also being noninvasive, it is easy to perform, relatively inexpensive, and can be carried out with the patient in a comfortable position, making it more easily tolerated. Furthermore, elastography has proved to be a reproducible method in different studies. 20,24 The evaluation of the mechanical properties of tendons can provide important information and has practical implications. Extensive rotator cuff injuries may be irreparable, and some prognostic factors are useful to identify these injuries, among them the patient's age, the size of the injury, the duration of symptoms, an acromiohumeral distance of less than seven millimeters, reduced range of motion, muscle strength of grade 3 or less, intraoperative difficulties, the surgeon's experience, patient expectations, and atrophy and fatty infiltration of the involved muscle belly. Such factors are not definitive in determining the possibility and feasibility of repairing a given lesion, even when combined, as this information does not relate directly to the elasticity and/or stiffness of the ruptured tendon. Therefore, the estimation of the shear modulus can add information in determining the reparability of a rotator cuff injury. 8 Moreover, some patients may present signs of rotator cuff dysfunction even without complete injury of one or more tendons. These are patients whose tendons are still inserted but already show some degree of atrophy and fatty infiltration of the muscle belly. Such patients may suffer dynamic elevation of the humeral head, with secondary subacromial impingement and worsening of the anatomical condition of the tendons. In these cases, by evaluating a mechanical property of the tendon, elastography can also provide valuable information before a tendon rupture occurs. A possible clinical application of this would be the preference for reverse rather than anatomical arthroplasty in a patient with shoulder arthrosis whose rotator cuff tendons are inserted but biomechanically very compromised. 25 Although there is still a lack of standardization to adequately evaluate the reproducibility of the results, elastography is very promising. Notably, tendons may have different elasticity moduli. 26 Therefore, multiple studies like this one would be necessary, in a larger population divided into age groups, so that a shear modulus value could be found for each group, thereby quantifying and qualifying the aging of the tendon based on its stiffness. Negative points and limitations of this study should be highlighted. The heterogeneous gender distribution between the groups may have introduced some bias into the results, since, for example, the lower mean shear modulus in older adults might have been caused by the larger number of female individuals in this group. However, the comparison of the shear modulus between men and women did not show a significant difference.
Another limitation is that SSI elastography measures only the shear modulus in anisotropic tissues, such as tendinous tissue, and does not capture other mechanical properties of the tendon. However, there is an intimate relationship between shear modulus and tissue stiffness, which allows further analysis of this important biomechanical property from elastographic data.

CONCLUSION

The shear modulus of the supraspinatus tendon was significantly higher in young people, suggesting deterioration of the biomechanical properties of the tendon in older adults.
Path integral approach to Asian options in the Black-Scholes model

We derive a closed-form solution for the price of an average price as well as an average strike geometric Asian option, by making use of the path integral formulation. Our results are compared to a numerical Monte Carlo simulation. We also develop a pricing formula for an Asian option with a barrier on a control process, combining the method of images with a partitioning of the set of paths according to the average along the path. This formula is exact when the correlation is zero, and is approximate when the correlation increases.

I. INTRODUCTION

Since the beginning of financial science, stock prices, option prices and other quantities have been described by stochastic and partial differential equations. Since the 1980s, however, the path integral approach, created in the context of quantum mechanics by Richard Feynman [1], has been introduced to the field of finance [2,3]. Earlier, Norbert Wiener [4], in his studies on Brownian motion and the Langevin equation, used a type of functional integral that turns out to be a special case of the Feynman path integral (see also Mark Kac [5], and for a general overview see Kleinert [6] and Schulman [7]). The power of path integration for finance ([6], [8-14]) lies in its ability to naturally account for payoffs that are path-dependent. This makes path integration the method of choice for treating one of the most challenging types of derivatives, the path-dependent options. Feynman and Kleinert [15] showed how quantum-mechanical partition functions can be approximated by an effective classical partition function, a technique which has been successfully applied to the pricing of path-dependent options (see Ref. [6] and references therein, and Refs. [12,16] for recent applications). There exist many different types of path-dependent options. The two types considered in this paper are Asian and barrier options. Asian options are exotic path-dependent options for which the payoff depends on the average price of the underlying asset during the lifetime of the option [11,17-19]. One distinguishes between average price and average strike Asian options. The average price Asian option has been treated in the context of path integrals by Linetsky [20]. The payoff of an average price Asian option is given by $\max(\bar{S}_T - K, 0)$ for a call and $\max(K - \bar{S}_T, 0)$ for a put. Here $K$ is the strike price and $\bar{S}_T$ denotes the average price of the underlying asset at maturity $T$; $\bar{S}_T$ can be either the arithmetic or the geometric average of the asset price. Average price Asian options cost less than plain vanilla options. They are useful in protecting the owner from sudden short-lasting price changes in the market, for example due to order imbalances [21]. Average strike options are characterized by the payoffs $\max(S_T - \bar{S}_T, 0)$ for a call and $\max(\bar{S}_T - S_T, 0)$ for a put, where $S_T$ is the price of the underlying asset at maturity $T$. Barrier options are options with an extra boundary condition: if the asset price reaches the barrier during the lifetime of the option, the option becomes worthless; otherwise it has the same payoff as the option on which the barrier has been imposed (for more information on exit-time problems see Ref. [22] and the references therein). In section II we treat the geometrically averaged Asian option.
In section II A the asset price propagator for this standard Asian option is derived within the path integral framework, in a similar fashion as in Ref. [20] for the weighted Asian option. The underlying principle of this derivation is the effective classical partition function technique developed by Feynman and Kleinert [15]. In section II B we present an alternative derivation of this propagator using a stochastic calculus approach. This propagator then allows us to price both the average price and the average strike Asian option. For both types of options this results in a pricing formula of the same form as the Black-Scholes formula for the plain vanilla option. Our result for the option price of an average price Asian option confirms the result found in the literature [20,23]. For the average strike option no formula of this simplicity exists, as far as we know. Our derivation and analysis of this formula are presented in section II C, where our result is checked against a Monte Carlo simulation. In section III we impose a boundary condition on the Asian option in the form of a barrier on a control process, and check whether the method used in section II remains valid when this boundary condition is imposed on the propagator for the normal Asian option, using the method of images. Finally, in section IV we draw conclusions.

A. Partitioning the set of all paths

The path integral propagator is used in financial science to track the probability distribution of the logreturn $x_t = \log(S_t/S_0)$ at time $t$, where $S_0$ is the initial value of the underlying asset. This propagator is calculated as a weighted sum over all paths from the initial value $x_0 = 0$ at time $t = 0$ to a final value $x_T = \log(S_T/S_0)$ at time $t = T$:

$$K(x_T, T|0,0) = \int_{x(0)=0}^{x(T)=x_T} \mathcal{D}x\; e^{-\int_0^T \mathcal{L}[\dot{x}(t)]\,dt}.$$

The weight of a path, in the Black-Scholes model, is determined by the Lagrangian

$$\mathcal{L}[\dot{x}(t)] = \frac{\left[\dot{x}(t) - m\right]^2}{2\sigma^2},$$

where $\sigma$ is the volatility and $m = \mu - \sigma^2/2$ is the drift appearing in the Wiener process for the logreturn, $\mu$ being the drift of the asset itself [8]. For Asian options, the payoff is a function of the average value of the asset. Therefore we introduce $\bar{x}_T = \log(\bar{S}_T/S_0)$ as the logreturn corresponding to the average asset price at maturity $T$. When $\bar{S}_T$ is the geometric average of the asset price, $\bar{x}_T$ is the algebraic average of the logreturn,

$$\bar{x}_T = \frac{1}{T}\int_0^T x_t\, dt.$$

The key step in treating Asian options within the path integral framework is to partition the set of all paths into subsets, where each path in a given subset has the same average $\bar{x}_T$. Summing over only those paths that have a given average $\bar{x}_T$ defines the conditional propagator $K(x_T, T|0,0|\bar{x}_T)$:

$$K(x_T, T|0,0|\bar{x}_T) = \int_{x(0)=0}^{x(T)=x_T} \mathcal{D}x\;\delta\!\left(\bar{x}_T - \frac{1}{T}\int_0^T x_t\,dt\right) e^{-\int_0^T \mathcal{L}\,dt}.$$

This is indeed a partitioning of the sum over all paths:

$$K(x_T, T|0,0) = \int_{-\infty}^{+\infty} d\bar{x}_T\; K(x_T, T|0,0|\bar{x}_T).$$

The delta function in the sum $\int\mathcal{D}x$ over all paths picks out precisely those paths that have the same payoff for an Asian option. The calculation of $K(x_T, T|0,0|\bar{x}_T)$ is straightforward: when the delta function is rewritten as an exponential, the resulting Lagrangian is that of a free particle in a constant force field in 1D. The resulting integration over paths is found by standard procedures [24],

$$K(x_T, T|0,0|\bar{x}_T) = \frac{\sqrt{3}}{\pi\sigma^2 T}\,\exp\!\left\{-\frac{2}{\sigma^2 T}\left[(x_T - mT)^2 - 3(x_T - mT)\!\left(\bar{x}_T - \frac{mT}{2}\right) + 3\left(\bar{x}_T - \frac{mT}{2}\right)^2\right]\right\}, \qquad (7)$$

and corresponds to the result found by Kleinert [6] and by Linetsky [20].

B. Link with stochastic calculus

The conditional propagator $K(x_T, T|0,0|\bar{x}_T)$ is interpreted in the framework of stochastic calculus as the joint propagator $K(x_T, \bar{x}_T, T|0,0,0)$ of $x_T$ and its average $\bar{x}_T$. The calculation of $K(x_T, \bar{x}_T, T|0,0,0)$ here is similar to the derivation presented in Ref. [25], where this joint propagator is calculated for the Vasicek model. The main point is that in a Gaussian model the joint distribution of the couple $\{x_T, \bar{x}_T\}$ has to be Gaussian too.
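The Gaussian moments quoted in the next paragraph follow from elementary stochastic calculus. A compact derivation, under the convention that the logreturn satisfies $dx_t = m\,dt + \sigma\,dW_t$ with $m = \mu - \sigma^2/2$, is:

```latex
\begin{align*}
\bar{x}_T &= \frac{1}{T}\int_0^T x_t\,dt
           = \frac{mT}{2} + \frac{\sigma}{T}\int_0^T (T-s)\,dW_s
           && \text{(stochastic Fubini)}\\
\mathrm{Var}[\bar{x}_T] &= \frac{\sigma^2}{T^2}\int_0^T (T-s)^2\,ds
           = \frac{\sigma^2 T}{3}
           && \text{(It\^o isometry)}\\
\mathrm{Cov}[x_T,\bar{x}_T] &= \frac{\sigma^2}{T}\int_0^T (T-s)\,ds
           = \frac{\sigma^2 T}{2}\\
\rho(x_T,\bar{x}_T) &= \frac{\sigma^2 T/2}{\sigma\sqrt{T}\,\cdot\,\sigma\sqrt{T/3}}
           = \frac{\sqrt{3}}{2}
\end{align*}
```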
As a consequence this joint distribution is fully characterized by the expectation values and the variances of $x_T$ and $\bar{x}_T$ and by the correlation between these two processes. The expectation value of $\bar{x}_T$ is given by $\left(\mu - \frac{\sigma^2}{2}\right)\frac{T}{2}$, its variance by $\frac{\sigma^2 T}{3}$, and the correlation between the two processes by $\frac{\sqrt{3}}{2}$. The density function of such a Gaussian process is then known to be the bivariate normal density with these moments,

$$K(x_T, \bar{x}_T, T|0,0,0) = \frac{\sqrt{3}}{\pi\sigma^2 T}\,\exp\!\left\{-\frac{2}{\sigma^2 T}\left[(x_T - mT)^2 - 3(x_T - mT)\!\left(\bar{x}_T - \frac{mT}{2}\right) + 3\left(\bar{x}_T - \frac{mT}{2}\right)^2\right]\right\}. \qquad (8)$$

This agrees with Eq. (7) for $K(x_T, T|0,0|\bar{x}_T)$.

C. Pricing of an average strike geometric Asian option

If the payoff at time $T$ of an Asian option is written as $V^{Asian}_T(x_T, \bar{x}_T)$, the price of the option, $V^{Asian}_0$, is the discounted expected payoff,

$$V^{Asian}_0 = e^{-rT}\int_{-\infty}^{+\infty} dx_T \int_{-\infty}^{+\infty} d\bar{x}_T\; K(x_T, \bar{x}_T, T|0,0,0)\, V^{Asian}_T(x_T, \bar{x}_T), \qquad (10)$$

where $r$ is the discount (risk-free) interest rate. Using expression (10) the price of any option which depends on the average of the underlying asset during the lifetime of the option can be calculated. We now derive the price of an average strike geometric Asian call option explicitly. To do so, expression (10) has to be evaluated with the payoff

$$V^{Asian}_T(x_T, \bar{x}_T) = \max\!\left(S_T - \bar{S}_T,\, 0\right) = S_0 \max\!\left(e^{x_T} - e^{\bar{x}_T},\, 0\right). \qquad (11)$$

Substituting (11) in (10) yields

$$V^{Asian}_0 = S_0\, e^{-rT}\int_{-\infty}^{+\infty} d\bar{x}_T \int_{\bar{x}_T}^{+\infty} dx_T\; K(x_T, \bar{x}_T, T|0,0,0)\left(e^{x_T} - e^{\bar{x}_T}\right), \qquad (12)$$

where the lower boundary of the $x_T$ integration now depends on $\bar{x}_T$. When considering an average price call, the payoff (for a call option) is $\max(\bar{S}_T - K, 0)$, leading to a constant lower boundary $\log(K/S_0)$ for the $\bar{x}_T$ integration, and the integrals are easily evaluated. In the present case, however, the integration boundary is more complicated and it is more convenient to express it through a Heaviside function, written in its integral representation,

$$\theta(x_T - \bar{x}_T) = \lim_{\epsilon\to 0^+}\frac{1}{2\pi i}\int_{-\infty}^{+\infty} d\omega\;\frac{e^{i\omega(x_T - \bar{x}_T)}}{\omega - i\epsilon}.$$

Now the two original integrals have been reduced to Gaussians at the cost of inserting a complex term in the exponential. The resulting expression (13) can be split into two terms, denoted $I_1$ and $I_2$, where $I_1$ carries the factor $e^{x_T}$ and $I_2$ has the same form, except with $\bar{x}_T$ instead of $x_T$ in the last term of the argument of the exponential. As a first step, the Gaussian integrals over $x_T$ and $\bar{x}_T$ are calculated. The remaining integral over $\omega$ can then be rewritten by making use of Plemelj's formulae; taking symmetry into account, $I_1$ reduces to an expression involving the cumulative normal distribution, and the second term, $I_2$, is evaluated similarly. Using the cumulative distribution function $N(\cdot)$ of the normal distribution, the result can be written in the more compact form

$$V^{Asian}_0 = e^{-rT} S_0\left[e^{\mu T} N(d_1) - e^{\mu T/2 - \sigma^2 T/12} N(d_2)\right], \qquad (22)$$

with the following shorthand notations:

$$d_1 = \frac{\mu T/2 + \sigma^2 T/4}{\sigma\sqrt{T/3}}, \qquad d_2 = d_1 - \sigma\sqrt{T/3}.$$

Expression (22) is the analytic pricing formula for an average strike geometric Asian call option, obtained in the present work with the path integral formalism (in a risk-neutral world $\mu = r$). To the best of our knowledge, no pricing formula of this simplicity exists. To check this formula, we compared its results to those of a Monte Carlo simulation. The Monte Carlo scheme used is as follows [25]: first, the evolution of the logreturn is simulated for a large number of paths; this evolution is governed by a discrete geometric Brownian motion over a number of time steps. Using the value of the logreturn at each time step, the average logreturn can be calculated for every path. Subsequently the payoff per path is obtained, which is then used to calculate the option price by averaging over all payoffs per path and discounting back in time. The analytical result and the Monte Carlo simulation agree to within a relative error of 0.3% when 500 000 samples and 100 time steps are used. This means that our analytical result lies within the error bars at every point. We also obtained the result for an average price Asian option; in contrast to the new result for the average strike option, this could be compared to the existing formula [20,23], and was found to be the same.
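A minimal sketch of the Monte Carlo check described above, in Python. The parameter values are arbitrary illustrations (and fewer samples are used than in the text), the discrete mean of the logreturn is only an $O(\Delta t)$-biased proxy for the continuous average, and the closed-form benchmark is the reconstructed Eq. (22) with $\mu = r$:

```python
import numpy as np
from scipy.stats import norm

# Monte Carlo vs. closed form for the average strike geometric Asian call.
S0, r, sigma, T = 100.0, 0.05, 0.30, 1.0
n_steps, n_paths = 100, 100_000      # the text uses 500 000 samples
dt = T / n_steps
rng = np.random.default_rng(42)

# Discrete logreturn paths: dx = (r - sigma^2/2) dt + sigma dW
dx = (r - 0.5 * sigma**2) * dt \
     + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
x = np.cumsum(dx, axis=1)
x_bar = x.mean(axis=1)               # discrete proxy for (1/T) int_0^T x_t dt

S_T = S0 * np.exp(x[:, -1])
S_bar = S0 * np.exp(x_bar)           # geometric average price per path
mc = np.exp(-r * T) * np.maximum(S_T - S_bar, 0.0).mean()

# Closed form, Eq. (22) with mu = r
v = sigma * np.sqrt(T / 3.0)
d1 = (r * T / 2.0 + sigma**2 * T / 4.0) / v
d2 = d1 - v
cf = np.exp(-r * T) * S0 * (np.exp(r * T) * norm.cdf(d1)
                            - np.exp(r * T / 2.0 - sigma**2 * T / 12.0) * norm.cdf(d2))
print(f"Monte Carlo: {mc:.4f}   closed form: {cf:.4f}")
```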
A. Derivation of the option price

In this case we consider two stochastic processes,

$$dx_t = m_x\, dt + \sigma_x\, dW_1, \qquad dy_t = m_y\, dt + \sigma_y\, dZ, \qquad (24)$$

which are correlated in the following manner: $dW_1\, dZ = \rho\, dt$. The $x$ process models the logreturn of the asset price underlying the Asian option, and the $y$ process describes the control process. The payoff of an Asian option with a barrier on a control process is the same as for a normal Asian option, with the extra condition that the payoff is zero whenever the value of $y$ surpasses a certain predetermined barrier. This is an example of an up-and-out barrier. There are other types of barrier options, namely down-and-out etc., but since their treatment is analogous we do not consider them here. The payoff of an Asian option with a barrier on a control process is given by

$$V^{AB}_T = \begin{cases} \max\!\left(S_{0x}\, e^{\bar{x}_T} - K,\; 0\right), & y_t < y_B \text{ for all } t\in[0,T],\\ 0, & \text{otherwise}, \end{cases} \qquad (25)$$

where the payoff of an average price Asian option has been used. Here $S_{0x}$ denotes the initial price of the asset corresponding to the logreturn $x$, and $y_B$ is the value of the barrier placed upon the $y$ process. It is difficult to price this option directly from payoff (25) because of the extra barrier condition. However, if this condition can be included in the propagator for these two processes, the payoff reduces to that of a normal (average price) Asian option, making the calculation more tractable. To construct this new propagator, henceforth called the barrier-propagator, a linear combination of propagators for the combined evolution of both processes given in (24) can be taken,

$$K^{B_y}(x_T, y_T, \bar{x}_T, T|0,0,0,0) = K(x_T, y_T, \bar{x}_T, T|0,0,0,0) - C\, K(x_T, y_T, \bar{x}_T, T|x_S, y_S, \bar{x}_S, 0), \qquad (26)$$

where $K^{B_y}$ stands for the propagator for the processes $x$ and $y$ with a barrier condition placed upon the $y$ process, and $K(x_T, y_T, \bar{x}_T, T|0,0,0,0)$, the propagator belonging to the system (24), is the extension of the propagator (7) to include the correlated control process. Furthermore, $C$ is a factor upon which three conditions will be placed, and $x_S, y_S$ represent the initial condition from which the mirror-propagator starts. This mirror-propagator is used to eliminate all paths that cross the barrier, and because the paths represented by the mirror-propagator usually have higher values than the paths represented by $K(x_T, y_T, \bar{x}_T, T|0,0,0,0)$, they have been given another average $\bar{x}_S$. The barrier-propagator must be zero at the boundary:

$$K^{B_y}(x_T, y_B, \bar{x}_T, T|0,0,0,0) = 0. \qquad (28)$$

Using this boundary condition, an expression for $C$ can be derived, which must satisfy three conditions: first, $C$ must be independent of the averages $\bar{x}_T$ and $\bar{x}_S$; second, it may not depend on $x_T$; and finally, it must be time-independent. This eventually leads to the propagator (29) for the total system of correlated stochastic processes $x$ and $y$ with a barrier condition on $y$, valid for $y_T \in\, ]-\infty, y_B[$; the propagator (29) is equal to zero for $y_T \in [y_B, +\infty[$. A graphical representation of propagator (29) is shown in Fig. 1. Using the propagator (29), the price $V^{AB}_0$ of an Asian option with a barrier can be calculated. The general pricing formula is

$$V^{AB}_0 = e^{-rT}\int_{-\infty}^{+\infty} dx_T \int_{-\infty}^{y_B} dy_T \int_{-\infty}^{+\infty} d\bar{x}_T\; K^{B_y}(x_T, y_T, \bar{x}_T, T|0,0,0,0)\, V^{AB}_T(x_T, \bar{x}_T). \qquad (31)$$

This calculation was done for an average price option, $V^{Asian}_T = \max(\bar{S}_T - K, 0)$. The calculation, though rather cumbersome, is essentially the same as for the Asian options in section II. The integral over $x_T$ is a Gaussian integral, and the remaining two integrals can be transformed into a standard bivariate cumulative normal distribution, defined by

$$N_2(a, b;\rho) = \frac{1}{2\pi\sqrt{1-\rho^2}}\int_{-\infty}^{a} dx\int_{-\infty}^{b} dy\;\exp\!\left[-\frac{x^2 - 2\rho\, x y + y^2}{2(1-\rho^2)}\right]. \qquad (32)$$

This eventually leads to the pricing formula (33) for an Asian option with a barrier, expressed in terms of $N_2$.
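Before turning to the results, the knock-out mechanism itself is easy to probe numerically. The Python sketch below simulates the correlated pair (24) and prices the knocked-out average price geometric Asian call by brute force; all parameter values, and the choice of a driftless control process, are illustrative assumptions rather than the setup of the figures:

```python
import numpy as np

# Monte Carlo sketch: average price geometric Asian call, up-and-out
# barrier on a correlated control process y. Illustrative parameters.
S0, K, r, T = 100.0, 100.0, 0.05, 1.0
sigma_x, sigma_y, rho, y_B = 0.20, 0.20, 0.5, 0.25
n_steps, n_paths = 100, 50_000
dt = T / n_steps
rng = np.random.default_rng(7)

# Correlated increments: dZ = rho dW1 + sqrt(1 - rho^2) dW_perp
dW1 = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
dWp = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
dZ = rho * dW1 + np.sqrt(1.0 - rho**2) * dWp

x = np.cumsum((r - 0.5 * sigma_x**2) * dt + sigma_x * dW1, axis=1)
y = np.cumsum(sigma_y * dZ, axis=1)      # driftless control process

alive = (y < y_B).all(axis=1)            # up-and-out knock-out condition
S_bar = S0 * np.exp(x.mean(axis=1))      # geometric average price per path
payoff = np.where(alive, np.maximum(S_bar - K, 0.0), 0.0)
print(f"price = {np.exp(-r * T) * payoff.mean():.4f}, "
      f"knocked out = {1.0 - alive.mean():.3f}")
```

Rerunning with rho = 0.0 shows the uncorrelated case in which, as argued below, the eliminated paths are uninformative about the average and the analytic formula is exact.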
The difficulty in combining these two steps is that mirror paths have a different average than the original paths, and thus belong to a different partition. This difficulty can apparently be overcome by treating the average itself as a separate, correlated process (as proposed in Ref. [25]). This procedure is exact for a plain Asian option. However, from the results shown in Fig. 2 it is clear that this is no longer the case for an Asian option with a barrier on a correlated control process. This is because the exact average of the $x$ process does not truly behave as a separate, correlated process (the average described by such a separate process is henceforth called the approximate average). The approach is exact for a plain Asian option, where all paths contribute, but when a barrier is implemented using the method of images, thus eliminating some of the paths, the following approximation is made. When the $y$ process hits the barrier and is thus eliminated, its corresponding $x$ and $\bar{x}$ processes are eliminated as well. But the $\bar{x}$ process considered in our derivation is only approximate, so the wrong $\bar{x}$ paths are eliminated. The central question is whether this will lead to a difference between the distribution of contributing paths for the exact averages and the corresponding distribution for the approximate averages, once a barrier has been implemented. Figure 3 shows that this is indeed the case, and that this difference increases when the correlation increases. When the correlation is zero, the paths which are eliminated for both the exact and the approximate average are randomly distributed (because the behavior of $\bar{x}$ has nothing to do with the behavior of $y$), which means that both distributions remain the same Gaussian as they would be without a barrier. This is the reason why our result is exact when the correlation is zero. Another source of approximation lies in the use of the Black-Scholes model, which has well-known limitations [14,17]. Several other types of market models propose to overcome such limitations, for example by introducing additional ad hoc stochastic variables [26] or by improving the description of the behavior of buyers/sellers [27]. The extension of the present work to, for example, the Heston model lies beyond the scope of this article.

IV. CONCLUSIONS

In this paper, we derived a closed-form pricing formula for an average price as well as an average strike geometric Asian option within the path integral framework. The result for the average price Asian option corresponds to that found by Linetsky [20], using the effective classical partition function technique developed by Feynman and Kleinert [15]. The result for the average strike Asian option was compared to a Monte Carlo simulation. We found that the agreement between the numerical simulation and the analytical result for an average strike Asian option is such that they coincide to within a relative error of less than 0.3% when at least 500 000 samples and 100 time steps are used. Furthermore, a pricing formula was developed for an Asian option with a barrier on a control process. This is an Asian option with the additional condition that the payoff is zero whenever the value of the control process crosses a certain predetermined barrier. The pricing of this option was performed by constructing a new propagator, consisting of a linear combination of two propagators for a regular Asian option. The resulting pricing formula is exact when the correlation is zero, and becomes approximate as the correlation increases.
The central approximation made in our derivation is that the average logreturn $\bar{x}$ is treated as a separate stochastic process, correlated with the process of the logreturn $x$. This assumption is correct whenever all price paths contribute to the total sum, but becomes approximate when a boundary condition is applied.
Structural changes in lignins isolated using an acidic ionic liquid water mixture†

Recently, acidic ionic liquid water mixtures based on the hydrogen sulfate anion have been shown to effectively extract lignin from lignocellulosic biomass. This study analyses Miscanthus giganteus lignin isolated after extraction with the protic ionic liquid 1-butylimidazolium hydrogen sulfate ([HC4im][HSO4]) followed by precipitation with the antisolvent water. Several analytical techniques were employed, such as quantitative ¹³C NMR, ¹H–¹³C HSQC NMR, ³¹P NMR, Py-GC-MS, GPC and elemental analysis. The analysis shows that the ionic liquid pretreatment breaks lignin-hemicellulose linkages and depolymerizes the lignin through the cleavage of glycosidic, ester and β-O-4 ether bonds. This is accompanied by solubilization of the newly generated lignin fragments. At longer pretreatment times, repolymerization of lignin fragments through condensation reactions occurs. The isolated lignins were carbohydrate-free and had low sulfur contents and low molecular weights. Early stage lignins were structurally similar to ball-milled lignin, while more treated lignins were enriched in p-hydroxyphenyl and guaiacyl units and had a high phenolic hydroxyl group content. We conclude that, depending on the treatment conditions, lignins with a variety of characteristics can be isolated using this type of ionic liquid solution.

Introduction

It is imperative to develop cost-competitive processes for the production of sustainable liquid fuels and chemicals. This is due to the significant contribution of petroleum use to climate change, compounded by the increasing demand for liquid fuels. Lignocellulosic biomass is a sustainable source of organic carbon that is a suitable feedstock for large-scale production of renewable fuels and materials. 1,2 It is mainly composed of the polymers cellulose, hemicellulose and lignin and is naturally resistant to deconstruction by most microbes, enzymes and mechanical stress. This resistance is collectively defined as recalcitrance. 3 The recalcitrance makes fuel and chemical production from lignocellulose more complex, energy-intensive and currently also more expensive than fuel production from starch or sucrose. To date, an array of pretreatment technologies has been explored to enable utilisation of the carbohydrates contained in the lignocellulose. [4][5][6] Many attempts to enhance or simplify the pretreatment have been made, as this step represents a large capital investment (19-22% of total capital) 7 for lignocellulosic biofuel production. However, inefficient deconstruction or the production of downstream inhibitors highlights the need for the development of improved methods. 8 Recently, ionic liquid (IL)-based biomass pretreatments have generated increasing interest due to the unique behavior of ILs toward the component biomass polymers. The most widely investigated IL-based pretreatment is the dissolution of the entire biomass composite with dialkylimidazolium acetate ILs. 9 Unfortunately, this approach suffers significant disadvantages, including limited IL thermal stability 10 and high solvent cost. 11 The "Ionosolv deconstruction" is an alternative IL-based pretreatment that demonstrates high delignification efficiency and high lignin yields. 12,13 The lignin is recovered from the IL solution by reversible addition of water as an antisolvent.
A recent techno-economic analysis of the ILs used in Ionosolv deconstruction indicated that these will have manufacturing costs similar to conventional organic solvents, such as toluene or acetone. 14 Because of these cost advantages, we believe that Ionosolv deconstruction holds significant promise for industrial application. A number of studies have demonstrated that the conversion of cellulose to biofuels benefits from reduced lignin content and alterations in lignin structure. 15,16 Early separation of lignin from the carbohydrates is also desirable for obtaining high quality lignins, which is thought to greatly enhance the economic returns of a biorefinery. 17 Separation of lignin and cellulose can only be achieved by a limited number of pretreatment technologies. The majority of these dissolve the lignin while leaving cellulose, the main carbohydrate component, as a solid. This separation requires a solvent with three capabilities: the ability to break the linkages between carbohydrates and lignin, a high solubility of the lignin fragments in the solvent, and an effective way to recover the lignin from the solution. Lignin is an irregular polyphenolic biopolymer in plants, synthesized from up to three phenylalanine-derived monomers that differ in their ring methoxylation: coniferyl, sinapyl and p-coumaryl alcohol. These monomers assemble into a racemic macromolecule via free radical polymerization, giving rise to guaiacyl (G), syringyl (S) and p-hydroxyphenyl (H) subunits in the lignin polymer. In grasses such as Miscanthus, p-coumaric acid (PCA) is also present in significant amounts. PCA is attached to the lignin via ester bonds with the γ-hydroxyl group of the lignin side chain. 18 The core lignin polymer contains a wide range of linkages, such as β-O-4, β-5, β-β, 5-5, 4-O-5, and β-1, with no regular inter-unit repeating structure observed. [19][20][21] In addition, lignin carbohydrate complex (LCC) linkages occur in grasses. These form during lignification between ferulic acid (FA) containing hemicellulose (feruloylated arabinoxylan) and the nascent lignin. The extent of this crosslinking has been correlated with increased plant cell wall recalcitrance. 22 Miscanthus is a tall perennial grass with high lignin content (typically around 25%) and can be grown as a dedicated energy crop with high yields and low maintenance. Without pretreatment, enzymatic saccharification yields are very low. Miscanthus giganteus lignin has previously been characterized as a G/S/H type lignin (52%, 44% and 4%, respectively) with approximately 0.4 β-O-4 linkages per aromatic ring and ca. 0.1 p-coumaric acid ester linkages per aromatic ring. 23 Currently, detailed characterisation of the lignin precipitate recovered after Ionosolv pretreatment is not available. There is also a lack of understanding of the chemical transformations leading to the lignin extraction. Understanding these will provide important insights for controlling the chemical characteristics of the precipitated lignin and for achieving optimal carbohydrate-lignin separations. Lignin's complexity, as well as the difficulty of isolating native lignin from cell walls in an unaltered state, poses significant challenges to analytical techniques. 24 Nevertheless, technological advances have enabled researchers to collect valuable information about the composition and chemical linkages in isolated lignins.
In the present study, we have examined these transformations through the characterization of lignins isolated from Miscanthus at different time points. We report structural characteristics of Ionosolv lignins using several complementary analytical techniques. Quantitative ¹³C NMR and two-dimensional ¹H–¹³C heteronuclear single-quantum coherence NMR spectroscopy (HSQC NMR) provided information on changes in sub-unit composition and inter-unit linkages. ³¹P NMR spectroscopy was employed to determine the concentrations of hydroxyl functionalities and how these change throughout the course of the pretreatment. Pyrolysis-gas chromatography-mass spectrometry (Py-GC-MS) was applied to examine changes in the lignin subunit composition. The molecular weights of the lignins were measured by gel permeation chromatography (GPC) to elucidate the interplay between depolymerization and repolymerization reactions occurring during the IL pretreatment. Elemental analysis (EA) was used to assess the purity of the lignins and draw conclusions about their fuel value. We chose the representative ionic liquid 1-butylimidazolium hydrogen sulfate ([HC4im][HSO4]) for this study.

Pretreatment

Miscanthus was weighed into tubes at 10 wt% biomass loading; 8 g of [HC4im][HSO4] and 2 g of water (giving an 80 : 20 w/w IL-water mixture) were then added. The tubes were sealed and the samples incubated without stirring in an oven at 120°C for a pre-determined time. After pretreatment, the samples were cooled to room temperature and washed with acetone (20 mL) followed by filtration, giving a carbohydrate rich material (CRM) and a liquor. The liquor was concentrated by evaporating the acetone; water was then added as an anti-solvent to precipitate the lignin, and the recovered lignin was dried under vacuum at 40°C overnight.

Py-GC-MS experiments

The Miscanthus giganteus and Ionosolv lignin samples (approximately 1.1 to 1.4 mg) were pyrolyzed using a chemical data system (CDS) 5200 series pyroprobe pyrolysis unit by heating at 600°C for 20 seconds to fragment macromolecular components. Fragments were analyzed using an Agilent 7890A gas chromatograph, fitted with a HP-5 fused capillary column (J+W Scientific; 5% diphenyl-dimethylpolysiloxane; 30 m length, 0.32 µm internal diameter, 0.25 µm film thickness), coupled to an Agilent 5975 MSD single quadrupole mass spectrometer operating in electron ionization (EI) mode (scanning a range of m/z 50 to 700 at 2.7 scans per second; ionization energy 70 eV). The pyrolysis transfer line and injector temperatures were set at 350°C, the heated interface at 300°C, the EI source at 230°C and the MS quadrupole at 150°C. The GC oven was programmed from 40°C (held for 3 min) to 100°C at 300°C min⁻¹ and then to 300°C at 5°C min⁻¹, and held at this temperature for 15 min. Helium was used as the carrier gas (1 mL min⁻¹) and the compounds were introduced in split mode (split ratio 40 : 1). Prior to analyses, 1 to 3 µl of an internal standard (5α-androstane; 100 µl of a 0.256 mg mL⁻¹ solution in dichloromethane) and, in the case of GC-MS with tetramethylammonium hydroxide (TMAH) thermochemolysis (Py-TMAH-GC-MS), 10 µl of a TMAH solution (25%, w/w, in methanol) were added to each sample. The relative peak areas were obtained by normalization to the total integral of the areas.

NMR spectroscopy

Quantitative ¹³C NMR. Ionosolv lignins (60-80 mg) were dissolved in 0.30 mL deuterated dimethylsulfoxide (DMSO-d6) with slight heating and stirring using a micro stir bar. The solution was transferred to a Shigemi NMR tube. NMR spectra were acquired on a Bruker Avance 500 MHz spectrometer at 50°C in order to reduce viscosity.
An inverse-gated decoupling (Waltz-16) pulse sequence with a 30 degree pulse angle and a 25 s pulse delay was used. The spectra were interpreted according to El Hage et al., counting the aromatic region as 6.12 carbon atoms. 23 Unfortunately, the amount of unsaturated side chains in later stage lignin is not known and could be larger or smaller than in early stage lignin. The ionic liquid signals were subtracted where necessary.

2D NMR. According to the method previously described, around 100 mg of finely divided (ball-milled) extractive-free Miscanthus sample was swollen in 0.7 mL of DMSO-d6/pyridine-d5 and transferred to NMR tubes. 22 For the Ionosolv lignins, around 20 mg of lignin was dissolved in 0.25 mL of DMSO-d6 and also transferred to NMR tubes. Two-dimensional ¹H–¹³C heteronuclear single-quantum coherence (HSQC) spectra were acquired on a Bruker Avance 600 MHz NMR spectrometer equipped with an inverse gradient 5 mm TXI ¹H/¹³C/¹⁵N cryoprobe. The chemical shifts were referenced to the central DMSO solvent peak (δC 39.5 ppm, δH 2.49 ppm). ¹H–¹³C correlation spectra were measured with the Bruker standard pulse sequence "hsqcetgpsisp.2". This experiment provides a phase-sensitive, gradient-edited 2D HSQC spectrum using adiabatic pulses for inversion and refocusing. All the experiments were carried out at 25°C with the following parameters: spectral width of 10 ppm in the F2 (¹H) dimension with 2048 data points (TD1) and 160 ppm in the F1 (¹³C) dimension with 1024 data points (TD2); the scan number was 16 and the interscan delay (D1) was 1 s. The HSQC spectra were used to estimate the relative amounts of aromatic rings (guaiacyl, syringyl, p-hydroxyphenyl and p-coumaric acid) in the lignin by volume integration of relevant peaks using the MestReNova software. Note that the symmetric S, H and PCA rings contribute two C-H pairs per peak.

³¹P NMR. Quantitative ³¹P NMR spectra of all lignin preparations were obtained using published procedures. 26,27 300 μl of a solvent solution made from 1.6 : 1 (v/v) pyridine and deuterated chloroform was prepared. The solvent solution was used to prepare a mixture containing 20.5 mg ml⁻¹ of cyclohexanol (as internal standard) and a second solution containing 5.6 mg ml⁻¹ of chromium(III) acetylacetonate (relaxation reagent). Previously dried Ionosolv lignin (ca. 10 mg) was accurately weighed and dissolved in 100 µl of the anhydrous pyridine/deuterated chloroform solvent solution (1.6 : 1, v/v). 50 μL of the cyclohexanol solution, 50 μL of the chromium(III) acetylacetonate solution and the phosphitylation reagent 2-chloro-4,4,5,5-tetramethyl-1,3,2-dioxaphospholane (TMDP) were added to the lignin, and the sealed vial was vortex mixed intensely until the lignin was completely dissolved. The samples were transferred into Shigemi NMR tubes for subsequent NMR analysis. The NMR experiments were carried out at 298 K on a Bruker Avance 500 MHz NMR spectrometer. To obtain quantitative spectra, a relaxation delay of 25 s was used between 30° pulses, the number of scans was 127, and an inverse gated decoupling pulse sequence was used. The acquisitions were performed at room temperature. Chemical shifts were calibrated relative to the internal standard, i.e. the cyclohexanol peak signal centred at δ 144.2 ppm. Integration regions that were used to assign the signals and the relative signal intensities used to calculate the concentration of hydroxyl groups are tabulated in Table S4 (ESI†).
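The arithmetic behind this internal-standard quantification is compact enough to sketch. The Python snippet below (ours, not part of the original study) reproduces the calculation for the stated amounts of cyclohexanol and lignin; the integral values are hypothetical placeholders.

```python
# Internal-standard quantification of OH groups from 31P NMR integrals.
M_CYCLOHEXANOL = 100.16                  # g/mol
m_is = 0.050 * 20.5                      # mg of cyclohexanol added (50 uL of 20.5 mg/mL)
n_is = m_is / M_CYCLOHEXANOL             # mmol of internal standard (~0.0102 mmol)
m_lignin_g = 0.010                       # ca. 10 mg of lignin

integrals = {                            # hypothetical integrals, relative to I(IS) = 1.0
    "aliphatic OH": 1.8,
    "syringyl/condensed OH": 0.9,
    "guaiacyl OH": 0.8,
    "p-hydroxyphenyl OH": 0.4,
    "COOH": 0.2,
}
for group, intensity in integrals.items():
    conc = intensity * n_is / m_lignin_g  # mmol OH per g lignin
    print(f"{group}: {conc:.2f} mmol/g")
```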
GPC measurements

A JASCO instrument equipped with an LC-NetII/ADC interface, an RI-2031Plus refractive index detector, two PolarGel-M columns (300 × 7.5 mm) and two PolarGel-M guard columns (50 × 7.5 mm) was employed. Dimethylformamide (DMF) with 0.1% lithium bromide was used as the eluent. Samples were prepared at 1 mg ml⁻¹ concentration in DMF with 0.1% LiBr. The flow rate was 0.7 mL min⁻¹ and the analyses were carried out at 40°C. Polystyrene standards (Sigma-Aldrich) ranging from 266 to 70 000 g mol⁻¹ were used for calibration. For the p-coumaric acid model experiments (see below), samples were dissolved in DMSO-d6 and analysed on a Bruker Avance 400 MHz NMR spectrometer. At the end of the experiment, the IL solution was diluted with water and the resulting precipitate washed 3 times with water, followed by drying under vacuum at 40°C. The yellowish powder was submitted to electrospray mass spectrometry analysis (Micromass LCT Premier, Waters) using methanol as a solvent.

Elemental analysis

Elemental analysis was performed in duplicate by Medac Ltd (Chobham, UK). Raw data were processed by calculating the cation and anion contributions, which were subsequently subtracted to obtain the elemental composition of the IL-free lignins. The molar carbon to hydrogen ratio for the IL-free lignin was calculated, as was the molar cation to anion ratio of the ionic liquid portion.

Results and discussion

The Miscanthus giganteus biomass utilised in this study contained 22.4% acid-insoluble and 4.0% acid-soluble lignin (exact composition shown in ESI, Table S1†). The lignin was extracted using a solution of 80 wt% 1-butylimidazolium hydrogen sulfate and 20 wt% water at 120°C for between 1 h and 24 h. The lignin-containing ionic liquid solutions were separated from the cellulose pulp by washing with acetone. The acetone was removed and the lignins precipitated by adding water. The washed and dried lignins were subjected to the analyses detailed below.

Quantitative ¹³C NMR

The composition of lignins isolated after 1 h and 12 h was probed using quantitative ¹³C NMR spectroscopy. Due to the low sensitivity, the acquisition time was long and the signal-to-noise ratio poor (Fig. 1). Some of the signal groups overlap. Nevertheless, the spectra provide useful clues about the composition of lignin linkages. We assigned areas of the spectrum to functional groups and functional group clusters and quantified them by integration, similar to El Hage et al. (Table 1). 23 The NMR spectra of the early stage lignin (1 h) and the lignin isolated after extensive treatment (12 h) differed significantly. The 1 h lignin had significant intensity in the aliphatic region, while in the 12 h lignin almost all carbon atoms were vinylic or aromatic. The 1 h lignin was characterised by alternating high and low intensities in the aromatic region (102-165 ppm), while the 12 h lignin aromatic intensity was more evenly distributed, with only 3 distinct spikes. This suggests that there was a greater heterogeneity in the aromatic structure of the 12 h lignin. A general commonality was that both lignins appeared to be contaminated with a minor amount of butylimidazole. We have performed additional purification experiments that show a substantial reduction in imidazolium content, although it is noteworthy that it was never removed completely from the lignin. The ¹³C NMR spectrum of the 1 h lignin was similar to the spectrum reported for ball milled lignin, 23 suggesting that the early Ionosolv lignin was similar in composition and linkages.
Integration of functional group clusters revealed that the alkyl-O content in the 1 h Ionosolv lignin was high with, around 2.7 alkoxy groups per aryl ring. Strikingly, the alkoxy group content was only 0.3 per aryl ring in the 12 h lignin. This implies that the majority of alkoxy side chains had been modified during extended Ionosolv treatment, most likely by reactions such as dehydration, shortening of the side chain and transformations of alkoxy groups into carbonyl groups such as ketones, aldehydes or carboxylic acid groups (some intensity for these was found at 180-220 ppm). In the 12 h lignin, we observed an increased abundance of aromatic C-C bonds, which we attribute to condensation reactions, leading to replacement of C-H bonds with C-C bonds. Lignin condensation is the formation of non-native bonds between lignin polymer chains and compounds cleaved from the native lignin. The condensation reaction most commonly postulated is in competition with β-O-4 ether hydrolysis. It forms a diphenylmethane structure between electron-rich C-H positions on subunit rings and α carbons on side chains with a non-hydrolyzed β-O-4 linkage (Fig. 2). 28,29 We also observed a slight decrease in C-O bond abundance. This could be due to a reduction in the content of S units, which has three C-O bonds, compared to H and G units which have only one or two C-O bonds. Further evidence for low S unit content in highly treated lignins is provided by the methoxy group content, which decreased from 1.30 to 0.73 methoxy groups per aromatic ring. The integrals of S peaks (151-154, 103-106 ppm) decreased in size, while the integrals of G peaks (119, 115, 145, 148-151 ppm) remained constant. The S/G unit ratio was calculated using the S 2,6 and G 2 peak integrals. It was 0.88 for the 1 h lignin, which agrees well with the literature value of 0.85 for ball milled Miscanthus lignin, 23 but was only 0.32 for the 12 h lignin, again suggesting that the 12 h lignin may have a low S unit content. However, condensation reactions may create a false impression, as they will shift the S resonances to higher frequencies. The degree of the shifts will depend on the exact nature and number of substitutions that have taken place. It is conceivable that the electron-rich S units are more affected by such condensation than are the G units, resulting in an apparent reduction in the S/G ratio. A very notable difference between early stage and late stage Ionosolv lignin was the disappearance of the PCA peaks. The 1 h lignin had a similar or slightly higher PCA content than ball milled Miscanthus lignin, 23 while PCA signals were absent in the 12 h lignin. Conversely, signals for H units (visible in the 127-130 ppm region) were not detected for the 1 h lignin but were noticeable in the 12 h lignin spectrum. The acetyl group content was low in both early and late stage lignins, suggesting that acetyl groups are removed early in the pretreatment. Overall, the quantitative 13 C NMR spectra show that Ionosolv lignins can be structurally very different, depending on the length (severity) of the treatment. HSQC NMR analysis 1 H-13 C HSQC NMR is a two-dimensional NMR technique resolving resonances that overlap in one-dimensional 13 C NMR and 1 H NMR spectra. Fewer signals are seen, as only carbon atoms with one or more protons provide a signal. It is a powerful technique giving equivalent or even better information compared to traditional wet chemistry methods. 22 A number of structural units can be detected by HSQC. 
They have been assigned by comparison with cell wall model compounds in an NMR database. [30][31][32][33][34][35] Examples of substructures that are resolved by HSQC NMR are β-ether (β-O-4, A), resinol (β-β, B), phenylcoumaran (β-5, C) and Hibbert's ketone (H) side chains. Carbohydrates can also be detected. The aromatic rings of the G, S and H units are resolved, as are the rings and unsaturated side chains of p-coumaric acid (PCA) and ferulic acid (FA). The chemical structures of these substructures are shown in Fig. 3. HSQC spectra of cell walls and lignins can be divided into three regions: the aliphatic region, the side chain region and the aromatic region. The aliphatic region does not provide useful structural information, with the exception of the presence of signals at δ C /δ H 20.7/1.7-1.9 ppm, corresponding to acetyl groups attached to the lignin polymer, as well as a peak at 20.6/2.0 ppm corresponding to acetyl groups attached to hemicellulosic components. Acetyl groups were not detected in the isolated lignins. Therefore, the aliphatic region will not be discussed. We recorded HSQC spectra of ball milled whole cell walls of Miscanthus giganteus and the Ionosolv lignins obtained after 1 h, 5 h, 8 h, 12 h and 24 h of treatment. Full spectra are shown in the ESI (Fig. S7-S12 †) and a list of assigned peaks is shown in Table S2. † The annotated side chain regions (δ C /δ H 50-90/2.5-5.8 ppm) and aromatic regions (δ C /δ H 110-130/ 6.0-9.0 ppm) of the 1 h, 5 h and 12 h spectra are shown in Fig. 3 and discussed below. The annotated HSQC spectra of the 8 h lignin is shown in Fig. S6 of the ESI. † Side chain region The side chain region of the HSQC provides valuable information about linkages in the lignin structure (Fig. 3, left column). In addition, the methoxyl substituents on the aromatic ring contribute to the most prominent peak at δ C /δ H 55.6/3.73 ppm. The most common linkage in the 1 h lignin was, as expected, the β-O-4 linkages (structure A). This lignin also contained carbohydrates. The signals correspond to a furanose saccharide, which strongly suggests that the 1 h lignin contained arabinose. Lignin with a significant arabinose content has been previously observed after mild Organosolv extraction. 36 The arabinose correlations disappeared after 1 h of pretreatment. This suggests that the lignin-carbohydrate-complex (LCC) linkages, which are contributing to cell wall recalcitrance, are hydrolyzed rapidly during Ionosolv lignin extraction, and cleavage of the glycosidic bond between the arabinosyl substituent and the xylan core chain is fast. For lignin isolated after short pretreatment times (1 and 5 h), a strong signal for the α methylene group of the β-O-4 linkage was observed, a signal for the C γ -H γ correlation and two C β -H β correlations, one for the side chain connected to a G unit and one for the side chain connected to an S unit. When examining the intensity of the β-O-4 ether peaks over time, it becomes evident that cleavage of this linkage is significant during Ionosolv deconstruction. The disappearance of the β-O-4 linkage signals after 12 hours of pretreatment was nearly quantitative. It is well known that cleavage of β-O-4 linkages is the most important depolymerization reaction in lignin in any pretreatment, although it rarely is as pronounced as seen here. Interestingly, we were able to observe a signal for Hibbert's ketone (H γ ), the hydrolysis product of β-O-4 ether cleavage. 
We detected the γ signal at 67.0/4.2 ppm and also observed a peak for the corresponding α methylene group (at 44.3/3.6 ppm, just outside the range of the side chain region). The Hibbert's ketone resonances were observed at 1, 5 and 8 h, but disappeared at long treatment times (12 h). The side chain region also yields information on resinol (β-β linkage, structure B) and phenylcoumaran (β-5 linkage, structure C) structures. The four correlations of the β-β linkage were observed and the three signals for the β-5 linkages (structure C) were readily identified. In contrast to the β-O-4 linkage, significant changes were not detected for the β-β and β-5 linkages before 8 h. However, their signals reduced significantly between 8 and 12 h of pretreatment. This suggests that the β-β and β-5 linkages were also chemically altered at longer treatment times. It is unclear whether the alterations led to actual cleavage, as it is unlikely that the C-C bonds in these linkages were broken. 37 The most prominent correlation in the side chain region of the 12 h lignin was the methoxyl signal with a peak caused by primary aliphatic hydroxyl groups (X γ ). We called this signal X γ because it occurs in the same location as the signal for the γ-methylol group of the β-O-4 linkage. However, the absence of the corresponding α and β signals means that X γ signal must be caused by other aliphatic structures. At the moment we are not sure what these structures are. Aromatic region The aromatic region of the HSQC spectra contains correlations of the H, S and G aromatic rings and of ferulic and p-coumaric acid (Fig. 3, right column). Residual ionic liquid was detected in this region as C 4 -H and C 5 -H correlations of the imidazolium ring. The S units were represented by one prominent signal for the C 2 and C 6 correlations, whereas G units were represented by three correlations for their C 2 , C 5 , and C 6 positions, respectively. The C 5 -H correlation overlaps with the PCA 3,5 -H and the H 3,5 -H correlation. Ferulic acid (FA) peaks were small. The signals of p-coumaric acid (PCA) were prominent in the early stage lignins and decreased over time. A peak for the C 2,6 -H correlation of the H unit was prominent, in increasing amounts, in late-stage lignins. Estimation of subunit composition using the HSQC aromatic region The HSQC pulse sequence employed in this study is optimised for resolution and signal strength. Quantification is not completely reliable, as signal relaxation following each pulse will not be complete for some correlations, especially for end groups such as H 38 and PCA, which relax more slowly than the bulk. 22 Another limitation of integrating HSQC spectra is the potential for the earlier mentioned condensation reactions which replace aromatic C-H bonds with C-C bonds. These C-C bonded aromatic positions do not produce cross correlations, hence the calculated subunit composition may be skewed when examining condensed lignins. With these restrictions in mind, volume integration was attempted for estimating the subunit composition (Table 2). It should be noted that the G 5 peak cannot be relied on for quantification of the guaiacyl content because of overlap with the PCA 3,5 and H 3,5 correlations. G 6 is para to the methoxy group and hence has an increased potential to participate in condensation reactions (compare with Fig. 2). The G 2 peak was therefore selected for estimating the S/G ratio to minimise these disturbances. 
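Before looking at the numbers, it may help to make the volume-integration arithmetic explicit. The sketch below (ours, not the paper's; the integral values are hypothetical) normalizes to the G2 peak and halves the volumes of the symmetric S, H and PCA correlations, since each of those rings contributes two C-H pairs per peak.

```python
# Estimate apparent subunit composition from HSQC aromatic-region volumes.
volumes = {"G2": 1.0, "S2,6": 1.2, "H2,6": 0.3, "PCA2,6": 0.5}  # hypothetical

units = {
    "G":   volumes["G2"],          # one C-H pair per ring
    "S":   volumes["S2,6"] / 2,    # symmetric rings: two C-H pairs per peak
    "H":   volumes["H2,6"] / 2,
    "PCA": volumes["PCA2,6"] / 2,
}
total = sum(units.values())
for name, v in units.items():
    print(f"{name}: {100 * v / total:.1f}%")
print(f"S/G ratio: {units['S'] / units['G']:.2f}")
```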
HSQC spectra were recorded using the internal standard p-xylene (not shown). They showed that the G2 correlation was indeed the most stable signal per mg of lignin, with only a 7% drop in intensity; hence G2 was used as a reference peak for volume integration (integral set to 1.0). The apparent subunit composition calculated this way is shown in Table 2. It can be seen that the S/G ratio for untreated Miscanthus is 0.69, which is lower than the ratio determined for ball-milled Miscanthus lignin by quantitative ¹³C NMR (0.85 (ref. 23) and our value of 0.88). The data in Table 2 further show that the Ionosolv lignin isolated after 1 h had a similar composition to the ball-milled lignin in untreated Miscanthus. The S/G ratio appeared to drop with increasing pretreatment time. This trend agrees with our quantitative ¹³C NMR data, which provided an S/G ratio of 0.32 for the 12 h lignin.

Evidence of p-coumaric acid conversion and integration into treated lignins

Table 2 also shows that the H content increased by more than 5 times during treatment. Since the H content in ball-milled Miscanthus is generally low (ca. 3%, see Table 2), we speculate that this was not solely because of selective enrichment but because of p-coumaric acid conversion into H type structures. Initial evidence was provided by the interdependence of PCA removal and H accumulation observed in the HSQC, illustrated in Fig. 4. To further verify the chemical incorporation of the hydroxycinnamic acid into isolated lignin as H type units, pure PCA was subjected to Ionosolv pretreatment. Samples were taken at intervals and submitted for ¹H NMR analysis. The NMR spectra (see ESI†) indicate that PCA was consumed and new products were formed. To identify the products, water was added, simulating the lignin recovery procedure. The precipitate was washed, dried and submitted for mass spectrometry. We were able to identify a polymeric product with a repeating unit of 120 g mol⁻¹ (Fig. 5). Such a fingerprint agrees with the loss of the carboxylic acid group of PCA resulting in p-hydroxystyrene, followed by polymerisation to poly(p-hydroxystyrene). The presumed mechanism is shown in Fig. 6. These decarboxylation and polymerization reactions are likely catalyzed by the acidic protons in the IL-water mixture. Hence we conclude that the apparent enrichment of H units in the extensively treated lignin is due to the conversion of PCA into IL-soluble oligomers that precipitate during lignin isolation. It is also possible that PCA co-polymerizes with lignin, but at present we do not have evidence for this. The oligomers appear to be short, as the most abundant oligomer was the trimer. MALDI-ToF analysis (not shown) using a variety of matrices did not detect polymers with a molecular weight above 1000 g mol⁻¹.

³¹P NMR analysis

A quantitative ³¹P NMR method was applied to study the major types of hydroxyl groups in the Ionosolv lignin and how their abundance changes over time. 2-Chloro-4,4,5,5-tetramethyl-1,3,2-dioxaphospholane (TMDP) was employed in order to link hydroxyl groups in lignin arising from aliphatic, phenolic and carboxylic acid groups with phosphorus atoms. The phosphitylated lignin was then quantitatively assessed in a ³¹P NMR spectrum against an internal standard. 39 The original ³¹P NMR spectra of the lignin samples are shown in the ESI,† as well as the numerical values of the concentrations derived from the integrals. Fig.
7 depicts the concentration changes of the lignin hydroxyl groups during Ionosolv pretreatment. The syringyl signal overlaps with signals of condensed G and S structures (β-5, β-β and others). 39 It was observed that the concentration of phenolic hydroxyl groups (syringyl/condensed, guaiacyl and p-hydroxy- phenyl) increased over time, while the aliphatic hydroxyl group content decreased. For the lignin isolated after 1 h, the concentration of phenolic hydroxyl groups was low and the concentrations of G, S and H hydroxyl groups were similar. This suggests that most phenolic oxygen atoms were part of ether linkages. The lower content of S hydroxyl groups relative to G hydroxyls agrees with the observation that S units are preferably internal in native lignin and the relatively high content of p-hydroxyphenyl OH groups confirms that PCA and H units are terminal. 38 The slow increase of phenolic hydroxyl content between 1 h and 5 h pretreatment time suggests that many of the phenolic ends produced by β-O-4 ether cleavage were not present in the isolated lignin. They are presumed to be part of small lignin fragments that were hydrophilic enough to remain in solution upon antisolvent (water) addition. A minimum of PCA and H content at 5 h pretreatment was observed, which also agrees with our HSQC data (Fig. 4). The drop in p-hydroxyphenyl hydroxyl concentration is attributed to the release of PCA into the IL solution. The phenolic hydroxyl content increased after 5 h and particularly so after 8 h. This increase occurred when HSQC NMR indicate that condensation reactions become dominant. Condensation reactions are thought to involve the 2 and 6 positions of aromatic rings and the side chains but not the phenolic OH groups, 40,41 hence phenolic hydroxyl groups become enriched when fragments are added to the lignin polymers via condensation. The hypothesis that soluble fragments rejoin the isolated lignin is in agreement with results previously published by our group, which show a continuing increase in lignin yield after 5 h of pretreatment. 13 The abundance of aromatic OH groups in the 12 h lignin was the following: 42% syringyl/condensed, 37% guaiacyl and 21% PCA/p-hydroxyphenyl. The syringyl/condensed hydroxyl content increased steeply over time and was higher than the syringyl content estimated by quantitative 13 C-NMR (24%) and HSQC NMR (33%), indicating that condensed units become abundant in highly treated Ionosolv lignins. The aliphatic OH content decreased to a lower level with advancing pretreatment time. This is likely due to the loss of the C α hydroxyl groups during β-O-4 ether breakage and during condensation. It is established knowledge that the first step of the β-O-4 ether hydrolysis in acidic media is the formation of a double bond between C α and C β . 42 The reduction in aliphatic hydroxyl content between 1 h and 5 h pretreatments is probably also caused by the removal of arabinose. The loss of γ-methylol groups during β-O-4 ether hydrolysis by splitting off as formaldehyde may also contribute to a reduction in aliphatic hydroxyl content. The carboxylic acid content increased slowly over time. It was low compared to the content of p-coumaric acid, confirming that PCA is esterified to the lignin polymer and that repolymerized PCA products do not contain carboxylic groups. The total hydroxyl content decreased after 1 h pretreatment and increased from 5 mmol g −1 at 1 h to 11 mmol g −1 at 12 h. 
This shows that the condensed lignin has more hydroxyl groups than the early-stage lignin. Increasing amounts of G and S phenolic hydroxyl groups have been observed before for Organosolv pulping under high severity (harsh) conditions, 23 but still to a lesser degree than observed for the 12 h Ionosolv lignin. In summary, our hydroxyl group analysis confirms that smaller lignin fragments are generated in the beginning of the Ionosolv treatment, some of them water-soluble. The solubilised fragments are not inert but rejoin the lignin precipitate, at least partially, through condensation reactions, increasing the over-all content of phenolic hydroxyl groups. Py-GC-MS analysis Pyrolysis-GC-MS was employed to characterize the compositions of untreated Miscanthus giganteus and Ionosolv lignin after 5 h and 12 h pretreatment ( Fig. 8 and Table 3). The pyrolysis step cleaved the lignin polymeric framework into a multitude of fragments, which are separated by gas chromatography and identified by mass spectrometry. It should be noted that some of the lignin may remain as a non-volatile component (char). Hence caution should be used when interpreting the Py-GC-MS data. The Py-GC-MS procedure can distinguish between carbohydrate derived products and the different subunits. However, chemical processes occurring during the pyrolysis can result in misleading results, as PCA and H, and FA and G units cannot be distinguished. Examples are 4-vinylphenol ( peak 11) which may originate from PCA or the H unit, and 4-vinylguaiacol ( peak 12), which may be derived from FA or the G unit. 43,44 Since we detected 38 different pyrolysis moieties, we grouped the compounds according to subunits to facilitate data interpretation (Fig. 9). The aromatic compounds were assigned to three subunit groups using the substitution pattern on the aromatic ring. If the compound had no methoxy substituents ortho to the phenolic hydroxyl group, they were assigned to H or PCA; one methoxy substituent, assigned to G or FA and two methoxy substituents, assigned to S. In the pyrogram obtained for untreated Miscanthus, carbohydrate-derived moieties were detected ( peaks 1, 2, 4 and 5). Interestingly, none of these were present in the 5 h and 12 h lignin pyrograms, indicating that the polysaccharides had been separated from the lignin during Ionosolv deconstruction. The ionic liquid solution is known to solubilize lignin and a large part of the hemicellulose. 25 The Py-GC-MS data show that intact hemicelluloses do not precipitate with the lignin after 5 h. The lack of carbohydrate correlations in Ionosolv lignins isolated after longer pretreatment times agrees with the HSQC NMR analysis of the 5 h and 12 h lignins, which didn't detect carbohydrates either. Fragments released from untreated Miscanthus originated from H/PCA, G/FA and S in roughly equal amounts (32%, 35% and 33%, respectively). In the 12 h lignin, the amounts of released G/FA fragments and H/PCA fragments increased to 38% and 41%, respectively, while the relative quantity of S derived compounds decreased to 22%. The H/PCA group of peaks was more abundant than expected for a typical Miscanthus lignin. The results suggest that the pyrolysis MS procedure may over-represent the H/PCA content. An increased release of simple monomers such as phenol ( peak 3), guaiacol ( peak 8) and syringol ( peak 13) was observed. These simple aromatic compounds comprised 14% of the moieties released from untreated Miscanthus, 22% of the 5 h moieties and 33% of the 12 h moieties. 
This indicates that aromatic rings in the depolymerised and/or condensed lignin have connectivities that facilitate release of simple monomers, and suggests that pyrolytic valorisation of Ionosolv lignin may be more promising than that of whole untreated lignocellulose.

Py-TMAH-GC-MS analysis

Py-GC-MS in the presence of tetramethylammonium hydroxide (TMAH) is a method for determining the composition of lignin with fewer restrictions than conventional Py-GC-MS, since the compounds derived from the H and G units in lignin and the p-hydroxycinnamic acid derivatives of PCA and FA can be clearly differentiated. The resulting derivatives are also more robust, protecting thermolabile compounds and facilitating chromatographic separation. 45 The compound table for the Py-TMAH-GC-MS experiment is shown in Table 4, while Fig. 10 displays the abundance of the released compounds grouped according to source subunits. [Figure caption: compound numbers refer to the compound tables for the two pyrolysis analyses, respectively; * indicates butylimidazole pyrolysis moieties.]

Table 4 shows that negligible amounts of carbohydrate-derived products were detected among the pyrolysis products, even for untreated Miscanthus. This demonstrates that the TMAH procedure is more selective for lignin fragments. In addition, fewer S derived compounds were released with Py-TMAH-GC-MS than with Py-GC-MS, while nearly half of the moieties were derived from either PCA or H (untreated Miscanthus and 5 h Ionosolv lignin). Oxidized compounds, such as 3,4-dimethoxybenzoic acid methyl ester (peak 24), 3,4-dimethoxybenzene acetic acid methyl ester (peak 27) and 3,4,5-trimethoxybenzoic acid methyl ester (peak 32), were also observed (Table 4). Their abundance increased remarkably with treatment time. This agrees with the ³¹P NMR data (hydroxyl group analysis), which saw the abundance of carboxylic acid functionalities increase over time. Using the peaks assigned to S and G in Table 4, the S/G ratio was determined for Miscanthus giganteus, Ionosolv lignin (5 h) and Ionosolv lignin (12 h) as 0.97, 0.78 and 0.43, respectively. While the first two ratios are reasonably close to the S/G ratio of native Miscanthus lignin as determined elsewhere (0.85), 46 the 12 h lignin released high quantities of guaiacyl-derived compounds, confirming that G units are enriched in the Ionosolv lignin. The Py-TMAH-GC-MS further shows that the p-hydroxycinnamic acids, p-coumaric acid and ferulic acid, contribute substantially to the range of pyrolysis products. Miscanthus giganteus and Ionosolv lignin isolated after 5 h released exceptionally high amounts of methylated derivatives of PCA (peaks 23 and 30) and of the methylated derivative of FA (peak 39). In untreated Miscanthus, PCA accounted for >80% of the pyrolysis fragments with a p-hydroxyphenyl ring, while ferulic acid accounted for ca. 50% of the fragments with a guaiacyl ring. These proportions are higher than the abundance of PCA and FA predicted by other techniques, such as ¹³C NMR or HSQC NMR. For example, volume integration of the HSQC estimated that PCA makes up ca. 14% of the 1 h lignin and 7% of the 5 h lignin. This indicates that the Py-GC-MS analyses employed here do not represent the true composition of lignin, at least when it comes to ester-bound components. Release of the PCA and FA derivatives in the pyrograms decreased dramatically after prolonged Ionosolv extraction (6% and 2%, respectively, in the 12 h lignin pyrogram), while fragments assigned to H (28%) and G subunits (44%) increased.
The reduction of PCA and FA content agrees with the ¹³C and HSQC NMR data. The increased abundance of H and G units is due to enrichment, at least partly caused by the chemical transformation of PCA, and possibly also of ferulic acid, into new polymers that give rise to H and G type pyrolysis fragments. This hypothesis is supported by the observation that some moieties with H or G type substitution patterns were abundant in the Ionosolv lignin pyrograms (compounds 3 and 10, for example) while they were negligible in the untreated Miscanthus pyrogram. In summary, the Py-(TMAH-)GC-MS results confirm the relative changes in Ionosolv lignin composition observed by the NMR analyses, such as removal of carbohydrates, a decrease in S unit, PCA and FA content, and an increase in H and G unit content. The technique appears to overestimate the content of p-coumaric acid and ferulic acid and to underestimate the amount of syringyl and guaiacyl subunits. This is likely due to the way the subunits are linked into the polymer.

Molecular weight determination

GPC analysis was performed to investigate the effect of Ionosolv deconstruction on the molecular weight of the resulting lignin. Fig. 11 and Table 5 summarize the drastic changes in molecular weight of the Ionosolv lignins with increasing pretreatment time. The recovered lignin exhibited a substantial decrease in molecular weight, confirming that the lignin macromolecules undergo depolymerization through linkage cleavage. For example, Ionosolv lignin after 1 h pretreatment had a molecular weight of Mn = 2500 g mol⁻¹. The Mn had decreased further by ∼36% to 1580 g mol⁻¹ after 8 h of pretreatment. This represents a >80% reduction in molecular weight relative to milled Miscanthus lignin (8300 g mol⁻¹) 23 and is substantially lower than the reported Organosolv Miscanthus lignin value of 4690 g mol⁻¹, 23 though it is important to note that measured lignin Mn can vary from system to system. The results suggest that lignin solubilization takes place after substantial depolymerization inside the biomass. We conclude this because the 1 h lignin has a similar ether content and S/G ratio to native ball milled lignin, yet has a much lower molecular weight. This suggests that the 1 h lignin was depolymerized before extraction; otherwise, we would expect a higher molecular weight at shorter times (prior to solution-phase depolymerization). Despite the initial hydrolysis in the biomass, further hydrolysis occurred in solution. The estimated average degree of polymerization (DP, obtained by assuming 200 g mol⁻¹ as the average weight of a lignin unit) decreased to ca. 8 by 8 hours. Interestingly, the molecular weight of the lignin increased noticeably after 12 hours, to Mn = 2530 g mol⁻¹. We infer that the condensation reactions (C-C bond formation) overtake the depolymerization reactions (ether cleavage) when the Ionosolv treatment is performed for exceedingly long times. This is supported by the ³¹P NMR results, which show a rapid rise of the phenolic hydroxyl content between 8 and 12 h. The increased polydispersity index (PDI) at 8 h and 12 h (PDI = 4.28 and 7.26, compared with 3.71 at 5 h) is also evidence of repolymerization, as repolymerization has been shown to increase the heterogeneity of the resulting lignins. 47

Elemental composition of Miscanthus Ionosolv lignin

The lignin was further analysed for elemental composition, looking at the carbon (C), hydrogen (H), nitrogen (N) and sulfur (S) content.
The N and S contents indicate residual IL contamination; nitrogen is representative of the imidazolium cation which contains two nitrogen atoms and sulfur is representative of the hydrogen sulfate anion, which contains one sulfur atom. We noticed residual cation in some of the NMR experiments; here we report that anion was also present. The untreated Miscanthus sample was nitrogen and sulfur free (data not shown). Table 6 shows the content of the four elements in the isolated lignins. Carbon was the largest contributor, making up ca. 60 wt% of each sample, followed by H (5 wt%), N and S. For the 1 h, 5 h and 8 h lignin samples, the N and S content was roughly the same (around 1 wt% each), suggesting that the lignin contained 6-9 wt% of residual ionic liquid. At 10 wt% biomass loading and 25% lignin content in the biomass, this equates to a solvent loss of 0.2% or less into the lignin. In contrast, the 12 h lignin sample contained more than three times as much nitrogen and sulfur, pointing to substantial IL contamination, which we believe to be due to incomplete washing. This high IL content is reflected by a large peak for butylimidazole seen in the Py-GC-MS pyrograms of the 12 h lignin (Fig. 8) and particularly prominent signals for butylimidazole in the 13 C-NMR and HSQC NMR spectra. The contamination of this sample highlights the need for thorough washing. A 24 h lignin sample had an IL content similar to the 1-8 h lignins (refer to HSQC NMR spectrum in Fig. S12 of the ESI †), confirming that the 12 h sample was unusual and that the IL content does not rise with increasing treatment time. We used the N and S contents to calculate the molar cation/ anion ratio for the IL portion, which is shown in Fig. 12 (raw EA data and the derived values are shown in Tables S4-6 in the ESI †). It can be seen that the cation-to-anion ratio was typically around 1 : 1. Only the 1 h lignin had an excess of 20% hydrogen sulfate anion, which we cannot explain at present. This, and the presence of intact butylimidazole in the pyrograms, indicates that the residual IL is simply adsorbed. Otherwise we would expect the ion ratio to be imbalanced, with either the cation or the anion being more abundant, depending on which component was more reactive towards the lignin. We note that the amount of IL as found in the 12 h lignin would be considerable; hence this content needs to be minimized on economic grounds. An improved lignin washing protocol or alternative lignin recovery technique may achieve this. The interactions of residual IL with the lignin need to be explored in more detail in the future in order to identify an appropriate solution. Nevertheless, three of the Ionosolv lignins contained substantially lower amounts of sulfur than Kraft or sulfite lignins. 48 The elemental analysis data also suggest that there may be a trend for increasing carbon content, with exception of the 12 h lignin. In order to focus on the lignin portion of the elemental analysis, we calculated the carbon content and the molar ratio of carbon and hydrogen for the 'IL-free' portion of the lignin samples. The carbon content of the IL-free lignin increased substantially with treatment time (Fig. 12), as did the C/H ratio. This indicates that lignins treated for longer contained more C-C bonds and fewer C-O bonds. The overall amount of carbon plus hydrogen increased from 67% at 1 h to 71% at 12 h, indicating a reduction of oxygen content. High C and H contents are beneficial for energy applications where fuel value matters. 
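The ionic-liquid correction applied to the elemental analysis data can be illustrated with a short calculation. The sketch below (ours, not from the paper) assigns all nitrogen to the butylimidazolium cation (two N atoms) and all sulfur to the hydrogen sulfate anion (one S atom), as described above; the measured wt% values are illustrative.

```python
# Subtract the residual-IL contribution from elemental analysis data.
M_N, M_S, M_C = 14.007, 32.06, 12.011
M_CATION, M_ANION = 125.19, 97.07     # [HC4im]+ (C7H13N2+) and [HSO4]-
C_PER_CATION = 7

wC, wH, wN, wS = 60.0, 5.0, 1.0, 1.0  # measured wt% (illustrative)

n_cat = (wN / 100) / M_N / 2          # mol cation per g of sample
n_an  = (wS / 100) / M_S              # mol anion per g of sample
w_il  = n_cat * M_CATION + n_an * M_ANION

print(f"residual IL: {100 * w_il:.1f} wt%")            # ~7.5 wt% for 1 wt% N and S
print(f"cation : anion = {n_cat / n_an:.2f} : 1")

# IL-free carbon content: remove the cation's 7 carbons and renormalize.
wC_free = (wC / 100 - n_cat * C_PER_CATION * M_C) / (1 - w_il)
print(f"IL-free carbon content: {100 * wC_free:.1f} wt%")
```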
Model of Ionosolv delignification and isolation Based on the preceding results, a summary of changes in the lignin properties is presented in Table 7. We believe that the mechanism for Ionosolv delignification is initiated by hydrolysis of the glycosidic bonds between xylose and arabinose in lignin carbohydrate complexes and the shortening of lignin polymers inside the biomass pulp. This is followed by solubilization of the shortened lignin polymers. The solubilized chains continue to fragment inside the IL liquor, but also begin to condense. We hence recovered lignin that is chemically similar to native Miscanthus lignin in the initial stages of the treatment, while at longer times, we recovered a condensed lignin that is rich in G units, H units and phenolic hydroxyl groups. S/G ratios obtained with various techniques (Py-GC-MS, quantitative 13 C-NMR, HSQC NMR) showed that S and G units were present in roughly equal proportions in ball milled Miscanthus and in early stage lignins, while the S/G ratio in highly treated Ionosolv lignins was around 1 : 3-1 : 4 (Fig. 13). We postulate that the enrichment of guaiacyl units is due to preferred C-C bond formation (condensation) at the G 6 ring position, resulting in precipitation of G-rich lignin polymers. However, it is possible that the measured S/G ratios have been obscured by condensation reactions. Further investigations into the fate of the S units and the chemistry of lignin condensation in acidic solvents are urgently required. The more treated lignins obtained from Miscanthus were high in H content, due to p-coumaric acid decarboxylation to p-hydroxystyrene followed by polymerisation with itself and potentially other fragments in solution. 15-20% of the Ionosolv lignin derived from Miscanthus is p-coumaric acid or a PCA derived H-like polymer. A similar conversion may be happening with the less abundant ferulic acid, contributing to the increased G unit content in highly treated lignins. A preliminary analysis of the IL liquor (LC-MS and GC-MS, data not shown) revealed predominantly monomeric syringols, guaiacols and phenols ( plus sugar degradation products), which are water-soluble or liquid and hence do not precipitate. The fate of these solubilized fragments will be explored in a future ionic liquid solvent recycling study. Conclusions In summary, this study illustrates the changes occurring during the Ionosolv delignification through comparison of lignin structures in the plant cell walls of Miscanthus giganteus to the isolated lignins. We have shown that lignin undergoes more than 80% depolymerization during the initial stage of the pretreatment through the cleavage of β-O-4 aryl ether linkages and ester linkages, confirmed by reduction in molecular weight and the β-O-4 ether bond signal in the HSQC NMR spectra. At long pretreatment times, repolymerization (condensation) overtakes the depolymerization, as evidenced by increased lignin molecular weight (GPC), increased content of phenolic hydroxyl groups and increased C/H ratio in the later stage lignins. The composition of the later stage Miscanthus lignin was also modified, as p-hydroxyphenyl and guaiacyl subunits were enriched and p-coumaric acid and ferulic acid were removed. These results suggest that the Ionosolv deconstruction can be tuned to selectively depolymerize lignin during pretreatment by eliminating ether and ester linkages, providing an opportunity to produce small aromatic molecules during the pretreatment itself. 
Alternatively, the pretreatment can be tuned to favour condensation reactions, increasing lignin molecular weight and phenolic hydroxyl content for the production of additives or resins and to maximize fuel value. The Ionosolv deconstruction can therefore produce lignin that appears to be more fragmented or more condensed than Organosolv 23 or other ionic liquid lignins. 37,49 At very short times it can produce lignins that retain most of their native subunit composition and linkages. We hope that this information opens new avenues of scientific investigation in the development of the Ionosolv process as an alternative pretreatment technology and encourages the subsequent valorization of Ionosolv lignins.
Spatial variation in egg polymorphism among cuckoo hosts across 4 continents

Abstract
Although egg color polymorphism has evolved as an effective defensive adaptation to brood parasitism, spatial variations in egg color polymorphism remain poorly characterized. Here, we investigated egg polymorphism in 647 host species (68 families and 231 genera) parasitized by 41 species of Old World cuckoos (1 family and 11 genera) across Asia, Europe, Africa, and Australia. The diversity of parasitic cuckoos differs among continents, reflecting the continent-specific intensities of parasitic selection pressure on hosts. Therefore, host egg polymorphism is expected to evolve more frequently on continents with higher cuckoo diversity. We identified egg polymorphism in 24.1% of all host species and 47.6% of all host families. The common cuckoo Cuculus canorus utilized 184 hosts (28.4% of all host species). Hosts of the common cuckoo and of Chrysococcyx species were more likely to have polymorphic eggs than hosts parasitized by other cuckoos. Both the number of host species and the number of host families targeted by a cuckoo species were positively correlated with the frequency of host egg polymorphism. Most host species, and most hosts exhibiting egg color polymorphism, were located in Asia and Africa. Host egg polymorphism was observed less frequently in Australia and Europe. Our results also suggested that egg polymorphism tends to occur more frequently in hosts that are utilized by several cuckoo species or by generalist cuckoo species. We suggest that selection pressure on hosts from a given continent increases proportionally to the number of cuckoo species, and that this selection pressure may, in turn, favor the evolution of host egg polymorphism.

Bird eggs display innumerable colors and patterns of morphs, not only among taxa, but also among species with similar nesting habitats and structures, and even among individuals within a single population (i.e., polymorphism) (Swynnerton 1916). One possible and persuasive explanation for egg color polymorphism is that the evolution of nest site selection and nest structure effectively reduced egg predation risk, and secondary adaptations have resulted in increased egg color diversity (Kilner 2006). Avian brood parasitism may drive such secondary evolution events, because egg morphs determine the success of parasitism or of anti-parasite defences (Kilner 2006; Yang et al. 2010). That is, the extent to which a parasite egg mimics host egg coloration determines brood parasitism success (Davies 2011), in that the choice to reject an egg, and the precision of that rejection, depend on host recognition of egg coloration and on the differences between the morphs of the parasite and host eggs (Soler 2014). This coevolutionary arms race favors highly divergent parasite egg morphs, allowing the parasite to mimic the eggs of a variety of host species (Davies 2000; Yang et al. 2015a). Correspondingly, the arms race also favors the evolution of diverse egg morphs in hosts, because differences in egg morphs between parasite and host facilitate the correct recognition and rejection of parasite eggs (Yang et al. 2010; but see Hanley et al. 2017, 2019; Abolins-Abols et al. 2019). Polymorphism is defined as the occurrence of 2 or more clearly distinct phenotypes within a single population (Leimar 2005).
Interestingly, 3.5% of all bird species have polymorphic plumage patterns, which may have been generated by selective pressure from prey, predators, and/or competitors, and these patterns may be maintained by disruptive selection (Galeotti et al. 2003). In birds, egg coloration, as well as plumage, may be polymorphic. As biological variations among taxa are obvious (Darwin 1859), egg polymorphism is likely to be less common than egg morph diversity among taxa. However, the former (egg polymorphism) appears to be better characterized in the literature (Yang et al. 2010; Vikan et al. 2011). This may be because egg polymorphism is a more recent secondary adaptation, and, as such, the factors driving these variations are easier to deduce (Swynnerton 1916). Avian brood parasitism is regarded as an important factor driving the evolution of egg polymorphism in hosts (Kilner 2006; Spottiswoode and Stevens 2012). Indeed, host populations that have been under pressure from parasitic cuckoos over evolutionary time are more likely to have evolved egg polymorphism than populations not under pressure from cuckoo parasitism (Yang et al. 2015b). Therefore, host egg polymorphism is associated with temporal coevolutionary interactions with brood parasites such as cuckoos. Although egg color polymorphism in avian brood parasites has received significant attention (Kilner 2006; Liang et al. 2012; Yang et al. 2016b, 2018), few studies have investigated spatial variations in color polymorphism at geographic scales. Furthermore, parasite diversity that varies among different geographic areas may have a significant impact on the occurrence of egg polymorphism in hosts. Generally, host anti-parasite defences specifically target brood parasites due to coevolution (Langmore et al. 2009; Yang et al. 2014; Noh et al. 2018). In geographic areas with higher parasite diversity, parasitism pressure on hosts is generally high, because hosts encounter a variety of parasites that may pose distinct threats (Yang et al. 2014). Therefore, we predicted that hosts from geographic areas with high parasite diversity would more frequently evolve egg polymorphism as an effective defence against brood parasites. To test this prediction, we investigated host egg polymorphism in 647 Old World cuckoo hosts across 4 continents (Asia, Europe, Africa, and Australia). The objective of this study was to characterize the factors that have contributed to the evolution of egg polymorphism in cuckoo hosts. Specifically, we aimed to test (1) whether the number of cuckoo species targeting each host species would predict the frequency of egg color polymorphism in the hosts and (2) whether the frequency of color polymorphism differed among continents with different levels of cuckoo diversity, as would be expected if a higher diversity of brood parasites increased the selection pressure on host defences.

Data extraction
A complete list of the host species of Old World cuckoos was obtained from Lowther (2014). We excluded some obviously unsuitable host species from this list (e.g., precocial birds). The validities of 4 Hierococcyx species (the northern hawk cuckoo, Hierococcyx hyperythrus; the Philippine hawk cuckoo, Hierococcyx pectoralis; the Malaysian hawk cuckoo, Hierococcyx fugax; and Hodgson's hawk cuckoo, Hierococcyx nisicolor) are controversial, and the parasitism records for these taxa are poor. Following Lowther (2014), we therefore considered these taxa an H.
fugax species complex (hereafter referred to as "Cuckoo complex" in the text, table, and figures). Host egg data were obtained from the electronic version of the Handbook of the Birds of the World (http://www.hbw.com; del Hoyo et al. 2013) and Web of Science (http://isiknowledge.com, Clarivate Analytics). Our classification of egg morphs as monomorphic or polymorphic was based on the principle that polymorphic eggs have 2 or more clearly different morphs; all of the other eggs were considered monomorphic (Leimar 2005). Ambiguously described differences among egg morphs were not considered sufficient evidence of egg polymorphism. For example, eggs described as "blue or greenish-blue" or as "white or greyish-white" were not regarded as descriptions of distinct egg phenotypes. Furthermore, eggs with 1 type of continuous macular variation (e.g., the eggs of the great reed warbler Acrocephalus arundinaceus) were not considered polymorphic. However, eggs with different types of maculation (e.g., dots vs. lines) were considered phenotypically different. To eliminate the subjective expectations of human observers that might bias the extracted data (Yang and Liang 2016), we used a blinded method to classify host egg morphs: classifications were performed without knowledge of other host data (i.e., species, size, distribution, and habitat). Furthermore, 3 observers independently classified all of the eggs without communicating with each other. Egg morph classification was highly consistent among observers (intraclass correlation coefficient = 0.989, F649,1298 = 260.928, P < 0.001). The phylogenetic trees of hosts and cuckoos were pruned from the global bird phylogeny (http://birdtree.org) using the option "Hackett All Species: a set of 10,000 trees with 9993 OTUs each" (Jetz et al. 2012). We sampled 5,000 pseudo-posterior distributions and constructed a Maximum Clade Credibility tree with mean node heights using TreeAnnotator v1.8.2 in the BEAST package (Drummond and Rambaut 2007; Ricklefs and Jonsson 2014). We used the resulting host and cuckoo phylogenetic trees (Supplementary Figure S1) for the following phylogenetic regressions.

Statistical analyses
We used MCMCglmm (generalized linear mixed models using Markov chain Monte Carlo techniques) (Hadfield 2010) to estimate the effect of continent (i.e., Asia, Europe, Africa, or Australia) on the incidence of host egg polymorphism. MCMCglmm is a phylogenetic regression within a Bayesian framework that supports binary dependent variables (Ives and Garland 2010), such as the monomorphic and polymorphic egg coloration patterns in this study. To adjust for the phylogenetic dependence of host species in the MCMCglmm analysis, we set the continent as the fixed effect and considered the order, family, and genus of each host species as random effects. We ran the MCMCglmm in 4 parallel Markov chains for 23,000 iterations each, discarding the first 3,000 iterations as burn-in and using a thinning rate of 20. We assessed model convergence using the Gelman-Rubin statistic, with diagnostic values <1.1 (Gelman and Rubin 1992). We considered the effect of continent on host egg polymorphism "significant" when the 95% Bayesian credible intervals of the parameter estimates did not overlap zero (Kéry and Royle 2016). Phylogenetic analyses were performed with R (http://r-project.org), using the MCMCglmm (Hadfield 2010), coda (Plummer et al. 2016), and picante packages (Kembel et al. 2010).
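To make the credible-interval decision rule concrete, here is a minimal sketch (not the authors' code; the posterior draws are a synthetic stand-in for MCMCglmm output):

```python
import numpy as np

def credible_interval_excludes_zero(posterior_draws, level=0.95):
    """Decision rule described above: an effect is treated as 'significant'
    when its credible interval excludes zero."""
    alpha = (1.0 - level) / 2.0
    lo, hi = np.percentile(posterior_draws, [100 * alpha, 100 * (1 - alpha)])
    return (lo, hi), (lo > 0) or (hi < 0)

# Hypothetical posterior draws for one continent contrast (placeholder values)
rng = np.random.default_rng(0)
draws = rng.normal(loc=1.2, scale=0.4, size=4000)  # stand-in for MCMC output
(lo, hi), significant = credible_interval_excludes_zero(draws)
print(f"95% CrI = [{lo:.2f}, {hi:.2f}], excludes zero: {significant}")
```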
Pearson or Spearman correlations were used to test the relationships between values in pairs of categories, depending on whether a 1-sample Kolmogorov-Smirnov test indicated that the data were normally distributed. Two-way ANOVAs were used to test whether hosts producing polymorphic eggs were parasitized by more cuckoo species than hosts producing monomorphic eggs. In this analysis, the number of cuckoo species targeting each host (i.e., cuckoo diversity) was considered the dependent variable, while the host egg morph (i.e., monomorphic or polymorphic) and the continent were considered fixed effects; the interaction between the 2 fixed effects was also tested. All of the statistical analyses were performed using IBM SPSS 20.0 for Windows (IBM Inc., USA).

General information about cuckoos and hosts
Our analyses included 647 host species parasitized by 41 Old World cuckoo species; the host species fell into 68 families and 231 genera, while the cuckoo species fell into 1 family and 11 genera (Table 1). We identified egg polymorphism in 24.1% of the 647 host species and in 47.6% of the 68 host families. Most cuckoo hosts (97.8%) belonged to the order Passeriformes (Supplementary Table S1).

Egg polymorphism across host families
The common cuckoo had the greatest number of hosts: 184 species (28.4% of all host species). Egg polymorphism was common in these host species (37.5% of all species). Hosts of Chrysococcyx cuckoos also had a higher frequency of egg polymorphism than hosts parasitized by other cuckoos. Eight cuckoo species parasitized hosts with egg polymorphism frequencies >30%. The number of host species and the number of host families targeted by each cuckoo species were positively correlated with the number of hosts laying polymorphic eggs (host species: r = 0.84, n = 41, P < 0.001, Pearson correlation; host family: r = 0.77, n = 41, P < 0.001, Spearman correlation). However, the frequency of host egg polymorphism (i.e., the proportion of hosts laying polymorphic eggs) was significantly positively correlated with the number of host species targeted by each cuckoo species, but not with the number of host families targeted by each cuckoo species (host species: r = 0.42, n = 41, P = 0.006; host family: r = 0.16, n = 41, P = 0.324, Spearman correlation; Figure 1).

Host egg polymorphism in the families most commonly parasitized by cuckoos
All of the cuckoo species parasitized species in the Passeriformes, and 19.5% of the cuckoo species also parasitized other bird orders (Supplementary Table S1). All instances of host egg polymorphism identified in this study were recorded in the passerines, with the exception of 1 phoeniculid species in the Bucerotiformes. Within the Passeriformes, most hosts were in the family Muscicapidae (76 species), followed by the Leiothrichidae (48 species) and the Acanthizidae (48 species). The pattern of host utilization by cuckoos differed from the distribution of hosts among families. However, Pearson correlations showed that the number of host species in a family was positively correlated with the number of cuckoo parasites (r = 0.70, n = 68, P < 0.001, Pearson correlation). Furthermore, the number of host species in a family was also positively correlated with the frequency of egg polymorphism in that family (r = 0.86, n = 68, P < 0.001, Pearson correlation).
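A minimal sketch of the correlation and two-way ANOVA tests described in the Statistical analyses section, using hypothetical stand-in data rather than the study's dataset:

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical per-cuckoo counts (placeholders, not the study's numbers)
hosts_per_cuckoo = np.array([184, 40, 25, 12, 8, 30, 5, 17], dtype=float)
polymorphic_hosts = np.array([69, 12, 9, 3, 2, 11, 1, 5], dtype=float)

# Choose the correlation test from a normality check, as described above
if stats.kstest(stats.zscore(hosts_per_cuckoo), "norm").pvalue > 0.05:
    r, p = stats.pearsonr(hosts_per_cuckoo, polymorphic_hosts)
else:
    r, p = stats.spearmanr(hosts_per_cuckoo, polymorphic_hosts)
print(f"r = {r:.2f}, P = {p:.3f}")

# Two-way ANOVA: cuckoo diversity ~ egg morph + continent (+ interaction)
df = pd.DataFrame({
    "n_cuckoos": [3, 4, 1, 2, 2, 3, 1, 1, 2, 1, 1, 1],
    "morph": ["poly", "poly", "mono", "mono"] * 3,
    "continent": ["Asia"] * 4 + ["Africa"] * 4 + ["Europe"] * 4,
})
model = smf.ols("n_cuckoos ~ C(morph) * C(continent)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```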
Host egg polymorphism among continents
The greatest proportions of host species and families with polymorphic eggs were located in Asia (26.8% of all species and 42.2% of all families). The proportions of host species and families with polymorphic eggs in Africa were only slightly lower, but Australia and Europe had relatively few instances of host egg polymorphism. The estimated effect of continent on the occurrence of host egg polymorphism excluded zero, indicating that continent was a significant predictor of host egg polymorphism (Figure 2). The hosts producing polymorphic eggs were parasitized by more cuckoo species than the hosts producing monomorphic eggs (F = 4.447, df = 1, P = 0.035, ANOVA; Figure 3). The number of cuckoo species targeting each host species also differed across continents (F = 21.704, df = 3, P < 0.001, ANOVA; Figure 3). However, no interaction effects between continents and egg morphs were found (F = 0.342, df = 3, P = 0.795, ANOVA).

Discussion
In this study, we identified egg polymorphism in 24.1% of 647 host species and in 47.6% of 68 host families. In addition, both the number of host species and the number of host families targeted by the cuckoo species were positively correlated with the frequency of host egg polymorphism. In cuckoo hosts, polymorphic eggs have most likely evolved as a specific adaptation to parasitism (Kilner 2006; Yang et al. 2010). When targeting hosts with polymorphic eggs, cuckoos are at a distinct disadvantage, because polymorphic host eggs considerably reduce the rate of parasitism success (Yang et al. 2016a). In a cuckoo-host system where the cuckoo lays mimetic eggs and the host rejects dissimilar eggs, the appearance of an additional host egg morph that is laid at a similar rate to the original morph halves the probability that the cuckoo egg morph will match that of the host. The rate of successful parasitism will further decline if the host evolves 3 egg morphs (Yang et al. 2010). Thus, the probability that the cuckoo egg will match the host egg is negatively correlated with the number of host egg morphs. However, it should be noted that this hypothesis assumes that cuckoos do not actively target nests where the host egg matches the cuckoo egg (Yang et al. 2016c). Obviously, if cuckoos only parasitized host nests containing eggs that match the cuckoo eggs, polymorphic host eggs could not have evolved as a specific adaptation to cuckoo parasitism, because cuckoos would always choose host nests containing the eggs most similar to those of the cuckoo in order to maximise the egg acceptance rate (Honza et al. 2014). However, recent experimental studies have provided unambiguous evidence that cuckoos do not select specific host eggs when parasitizing host nests (Yang et al. 2016c, 2017). Therefore, egg polymorphism is an effective defence against cuckoo parasitism, and it occurs more frequently in hosts that are parasitized by multiple families and species of cuckoos. Cuckoos reduce host rejection risk by laying eggs in host nests containing eggs with matching phenotypes (Table 1). Many hosts have evolved egg polymorphism to counter cuckoo parasitism, but only rarely have cuckoos evolved correspondingly polymorphic eggs. Yang et al. (2010) identified this type of egg polymorphism between the common cuckoo and its parrotbill Paradoxornis alphonsianus host, while Vikan et al. (2011) described a similar situation between the common cuckoo and 2 host species: the brambling Fringilla montifringilla and the chaffinch Fringilla coelebs.
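The matching argument above reduces to a simple calculation: if a cuckoo lays a single fixed egg morph and a host's k morphs are equally common, the chance of a phenotype match is 1/k. A short illustration (ours, not from the paper):

```python
# Probability that a cuckoo egg matches the host clutch, assuming the
# cuckoo lays one fixed morph and host morphs are equally common.
def match_probability(n_host_morphs: int) -> float:
    return 1.0 / n_host_morphs

for k in (1, 2, 3):
    print(f"{k} host morph(s): match probability = {match_probability(k):.2f}")
# Two morphs halve the match probability; three reduce it further,
# mirroring the argument of Yang et al. (2010) summarized above.
```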
Recently, polymorphic parasite eggs were also identified in the plaintive cuckoo Cacomantis merulinus, which parasitizes the common tailorbird Orthotomus sutorius (Yang et al. 2016a). In such cases, polymorphism arises in both species as a result of frequency-dependent selection (Majerus 1998; Yang et al. 2010). Therefore, although egg polymorphism has been identified in cuckoos, it remains much more common in hosts than in cuckoos, because cuckoo egg polymorphism cannot increase the overall rate of successful parasitism on hosts that lay polymorphic eggs. Our results indicate that the frequency of host egg polymorphism varies not only in time, as has been shown previously (Lahti 2005; Yang et al. 2015b; Wang et al. 2016), but also in space. Host egg polymorphism occurs more frequently in Asia and Africa than in Australia and Europe, possibly due to the variation in cuckoo diversity among continents: 25 cuckoo species are known from Asia, 14 from Africa, 11 from Australia, and 2 from Europe. Consistent with our prediction that higher cuckoo diversity would lead to a higher frequency of egg polymorphism in hosts, this study found that Asia has the highest diversity of Old World cuckoos (Payne 2005) and, correspondingly, the highest frequency of host egg polymorphism. Although there are substantially fewer cuckoo species in Africa than in Asia, the frequency of host egg polymorphism in Africa was similar to that in Asia. However, this does not necessarily contradict our hypothesis, as there are many avian brood parasites in Africa besides cuckoos (e.g., honeyguides and finches) (Davies 2000; Payne 2005). Therefore, the selection pressure associated with brood parasite diversity in Africa may resemble that in Asia. Furthermore, as we predicted, hosts producing polymorphic eggs were utilized by more cuckoo species than hosts producing monomorphic eggs, and this tendency was consistent across continents. That is, egg polymorphism tends to occur in hosts that are parasitized by multiple cuckoo species. Finally, the number of host species targeted by each cuckoo species was positively correlated with the frequency of host egg polymorphism. Because the number of host species targeted by each cuckoo species might reflect the degree of cuckoo generalization or specialization (i.e., cuckoo species are more generalized when they parasitize multiple host species and vice versa), this result implies that parasitism by generalist cuckoos tends to increase the frequency of egg polymorphism in hosts. In summary, our results suggest that the evolution of egg polymorphism is linked to brood parasitism and that higher brood parasite diversity promotes the frequent evolution of host egg polymorphism through increased parasitic selection pressure.

Author Contributions
C.Y. and W.L. initiated the data collection. C.Y. and X.S. analyzed the data and wrote the manuscript. All authors took part in data collection and improving the manuscript.
Renewable energy sources in the future energy balance of the Republic of Kazakhstan

The article describes the main factors determining the development of renewable energy sources in the world. The applicability of foreign RES development strategies to Kazakhstan's energy system is assessed, and the main tasks facing Kazakhstan's energy system under large-scale implementation of renewable energy are formulated. On the basis of the analysis and the performed calculations, recommendations and basic principles are proposed for a renewable energy development strategy in the Republic of Kazakhstan.

Introduction
Renewable energy is one of the fastest growing sectors, attracting more than $250 billion of capital investment every year worldwide. Many countries, including developing ones, have set ambitious long-term goals for the development of renewable energy sources (RES). According to the UN, by the middle of the century more than half of humanity's energy needs may be provided through renewable sources [1]. According to the report [2], in 2015 investments in the production of electricity from coal and natural gas amounted to about $130 billion, which is less than half of the capital investment in renewable energy sources, which amounted to $286 billion. It is noteworthy that the amount of investment in RES was higher in developing countries than in developed ones. In Brazil, South Africa, Mexico, Chile, Honduras, Morocco, Pakistan, the Philippines and Uruguay, investment ranged from $500 million up to $7 billion. Investment in developing economies rose 19% to $156 billion, while in developed countries it decreased by 8%, to $130 billion (for example, in Germany investments totaled $8.5 billion). Kazakhstan is also creating favorable conditions for the development of renewable energy sources. According to the plans of the government of the Republic of Kazakhstan, the share of renewable and alternative energy sources in total electricity production should be brought to 3% by 2020, 30% by 2030 and 50% by 2050 [3]. The purpose of this paper is to analyze the factors affecting the development and the optimal level of renewable energy sources in the country's energy balance.

The factors in the development of renewable energy sources and the current situation in the world
One of the main factors stimulating RES development is the drive to reduce greenhouse gas emissions from energy production in order to combat global warming. Currently, the probability that a large part of the observed temperature change was caused by human-driven increases in greenhouse gas concentrations is estimated at 90%. The following assessment is given in the reports [2, 4]: "It is extremely likely that human influence was the main cause of warming observed from the middle of the 20th century". However, the significant differences in the proportion between investment in renewable energy sources and the level of CO2 emissions in different countries show that countries pursue different strategies for increasing the volume of RES, environmental issues notwithstanding. Another important factor stimulating the development of renewable energy sources is the desire of countries to reduce their dependence on external energy supplies. For this, energy saving and a more complete use of a country's own energy resources are the main methods. In modern conditions, renewable energy sources also contribute to the solution of this problem.
The main energy-conservation measure chosen was structural reorganization of the economy, with a reduction in energy-intensive industrial production and an increase in the share of services. However, energy conservation, while very effective at the national level, does not have a significant impact on the world energy consumption level, as the reduction in the production of energy-intensive products in developed countries causes an increase in energy consumption for the production of the same products in other countries of the world. Substitution of fossil fuel-based generation by nuclear energy is not always possible, as the use of nuclear power plants is directly connected with state policy and often depends on public opinion. It is expected that, at the global level, interest in renewable energy sources will continue to increase due to the growing demand for energy, the reduction of proven fossil fuel reserves, and environmental problems. Currently, more than 138 countries have identified RES development goals at the national level. The share of RES has continued to increase despite falling global energy prices and continued fossil fuel subsidies. At the current stage of development, renewable energy sources (excluding large hydropower stations and geothermal power plants) in most cases find it difficult to compete economically with traditional energy generation. Therefore, all countries developing renewable energy sources apply a variety of support measures that artificially increase its competitiveness. The most popular support measure is the establishment of special high tariffs for electric power produced from renewable energy sources, which guarantee the payback of the project. It is also common practice to compensate for the cost of connecting RES to electric grids. Currently, preferential tariffs for RES power plants are stipulated in the legislation of many countries, including not only developed European countries, but also China, India, poorer African countries such as Kenya and Tanzania, as well as Iran and Algeria, which are rich in hydrocarbons [5]. The result of all these incentive measures has been the very active development of renewable energy sources in the world over the past decade.

Features of the energy system of Kazakhstan. Limitations and risks for large-scale penetration of renewable energy sources
The development of the energy infrastructure in Kazakhstan was significantly determined by the presence of large reserves of cheap coal and gas, which keeps the share of electricity generated from fossil fuels at a high level. First of all, increasing the share of RES brings the following technical problems to the energy system:
• Complication of dispatching control, due to the increase in the number of generation nodes and the transformation of previously passive distribution networks into active ones [6].
• Negative impact of RES on the reliability of energy system operation and increased complexity of forecasting UPS operating modes.
• The need for reconstruction of networks with mass input of renewable energy sources.
Large volumes of variable RES generation can cause power leaps in interfaces due to drastic changes in the production mode. This requires an analysis of the effect of RES on irregular power fluctuations, as well as consideration of possible power leaps when determining the power transfer limits in the interfaces.
There is a need to maintain large reserves of maneuverable generation or reserve transmission capacity in interfaces. When drawing on foreign experience of introducing RES, it is necessary to take into account the peculiarities of Kazakhstan's energy system. First, under a sharply continental climate, the main part of the energy demand consists of thermal energy, which exceeds the demand for electricity several times over. A significant part of the energy (about 40%) is generated by CHP plants that operate on a thermal schedule and cannot be used to cover unbalances in RES generation. Renewable generation based on solar and wind energy, while partially covering the demand for electricity, is not able to provide heat in the required volumes. Since most Western countries have no centralized heat supply system, it is not appropriate to directly transfer the results obtained in European countries to Kazakhstan. Secondly, for the concentrated energy systems of Western countries the key limitation is current loading, while for Kazakhstan's energy system the determining limitation for the transmission of electricity through the backbone network is steady-state stability. Considering that the total capacity of RES that is to be integrated into the UPS of Kazakhstan significantly exceeds the natural transmission capacity of the transmission lines (even at 500 kV), this will be one of the main problems in ensuring the stability of the UPS of Kazakhstan as RES develop. This will certainly require a fundamentally new approach to load-flow management, primarily automatic control. The IEA report offers counter-arguments to the challenges described above [7]; however, these statements are not accompanied by strong evidence. When power transmission lines are fully loaded to their steady-state stability limits under highly echeloned and complex automation, it cannot be claimed that RES output deviations are safe for the power system. An uncontrolled increase in the share of RES driven by state support can lead to a significant increase in electricity tariffs, which at some point may push consumers to avoid centralized power supply and switch to cheaper small generation, including generation operating in isolation from the UPS. With sufficiently high tariffs, large consumers can start installing their own power plants, abandoning centralized power supply services. This is especially true for areas with significant hydrocarbon reserves and a well-developed gas transportation system. The creation of consumer-owned generation will not leave the UPS unaffected: the grid will lose the large, strong entities of the wholesale market. As a result, the remaining participants of the grid will bear an increased burden of maintaining the operating conditions of the entire power system. In addition, the priority given to renewable energy leads to the displacement of existing generation and a decline in the economic returns of traditional power plants, whose construction becomes less attractive for investors. Meanwhile, RES cannot be built and operated without the support of traditional maneuverable generation. Renewable generation, due to its uncertainty, requires reserve capacity to cover unbalances. Theoretically, the variability of renewable sources can be compensated for by the flexibility and modulation of the fleet of stations. However, in practice this is difficult to implement, because these stations have their own technical and economic limitations.
Power storage significantly increases the capital investment in the construction of RES. Countries with predominantly coal generation, which are most interested in reducing CO2 emissions, are now facing a shortage of maneuverable power plants. In the absence of a sufficient volume of regulating capacity, it will be necessary to conclude contracts for regulating services with adjacent power systems, the costs of which are difficult to forecast and will be included in the tariff. For the energy system of Kazakhstan, the contradiction between the desire to introduce large volumes of renewable energy sources and the need to ensure the reliability of the UPS becomes obvious. Therefore, it can be argued that the spontaneous growth of renewable energy without scientific study is the main problem. The opinion expressed in 1975 by academician P. Kapitsa regarding the prospects of using renewable energy as the main source of electricity is also relevant today [8]. As noted by P. Kapitsa, the generation of electricity from RES for high-power engineering is limited in practice by the value of the energy flux density (1). The density of incoming energy is limited by the physical properties of the medium through which it is transmitted. In a material medium, the energy flux density U is limited by the expression

U = vF, (1)

where v is the speed of propagation of energy and F is the energy density. U is a vector quantity; for stationary processes, div U determines the rate of energy conversion into another form. Direct conversion of solar energy into electricity for high-power engineering is therefore associated with a limited energy flux density: in this case the speed of propagation v is almost equal to the speed of light, but the energy density F is very low.

Impact of large-scale RES penetration. Modeling of influence with respect to the UPS of Kazakhstan
Important results were obtained by the authors of [9]. In the course of that research, a mathematical model was created whose input parameters were material capital, fossil fuel resources and the infrastructure required for RES. In this model, the growth of material capital is possible in the presence of a surplus of electricity, and the fossil fuel resources account for the initial energy costs of creating RES and the cost of production over time. The parameters "fossil fuel resources" and "cost of production" are needed to define a limit on the extraction of fossil fuels. For the parameter "infrastructure required for RES", the most important characteristic is the "renew return" (RE), which characterizes the energy returned by RES relative to the energy spent on their creation. Of the more than 100 simulated scenarios, more than a quarter showed an unstable system state upon the introduction of RES, i.e. the country's economy was unable to meet the demand for electricity as fossil fuel reserves were depleted. Equilibrium was ensured by introducing renewable energy sources in response to signals of an electricity shortage as fossil fuel resources were depleted. The simulation results also showed that the dynamics of RES introduction as the deficit approaches significantly affect the outcome. With a slow introduction of renewable energy sources, the country's economy was not able to provide the material resources and energy for the creation of renewable energy infrastructure, because of the decline in electricity production from traditional sources.
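As a rough numeric illustration of the flux-density limit (1) above (our example, not from the article), one can recover the familiar solar constant from U = vF and see why the energy density of sunlight is so low:

```python
# Illustration of the Kapitsa flux-density limit U = v * F (eq. 1 above).
# Values are rough, for orientation only.
C = 3.0e8                # speed of light, m/s (v for solar radiation)
SOLAR_CONSTANT = 1361.0  # W/m^2, top-of-atmosphere irradiance

# Energy density of sunlight follows from F = U / v:
F_sun = SOLAR_CONSTANT / C
print(f"Energy density of sunlight: {F_sun:.2e} J/m^3")

# Even with v at the speed of light, the tiny F caps the flux density:
U_sun = C * F_sun
print(f"Energy flux density U = vF = {U_sun:.0f} W/m^2")

# A 1 GW plant would thus need ~1 km^2 of perfect collection area:
area_km2 = 1e9 / U_sun / 1e6
print(f"Ideal collector area for 1 GW: {area_km2:.2f} km^2")
```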
Based on the series of stable scenarios, it can be concluded that the maximum amount of renewable energy is determined not by the technical limitations of the energy system, but by the dynamics of the use of the existing infrastructure and the depletion of fossil fuels. For Kazakhstan, the following timeline of RES development can be formulated. Based on the forecast balance of the UPS of Kazakhstan for a seven-year period (until 2024) [10], in the absence of new generating capacity a power shortage of 180 MW occurs by 2022 (excluding reserves). With the scheduled commissioning of traditional generation, there is no electricity deficit in the next seven-year period. If RES are competitive, it is possible to replace part of the traditional generation with renewable energy sources. Given the expected power deficit in 2022 and the lead time of renewable energy projects, the deadline for the beginning of active renewable energy commissioning is 2021. However, given the availability of fossil fuels, the type of energy source (for example, nuclear power plants) should be carefully chosen, and if renewable energy sources are not competitive, their mass introduction should be postponed to a later date. The introduction of renewable energy sources on a competitive basis (without the creation of benefits) can be carried out at any time, since if RES are competitive, traditional generation is by implication inefficient. The Kazakhstan power system consists of three zones: North, South and West (Figure 1). The Southern zone is energy-deficient by about 1500 MW and covers this deficit with flow from the Northern zone. The main stability problems in the UPS of Kazakhstan are observed in the interface connecting the Northern and Southern zones. The planned significant increase in renewable energy in the Southern zone has the potential to exacerbate the existing stability problems. If the current pace of commissioning of wind farms and PVs is maintained, by 2021 the maximum instantaneous output of PVs and wind farms will be about 7% of the total generation of the country and 45% in the Southern zone. The increase in the volume of renewable energy requires greater maneuverable capacity of traditional stations to cover the net load, since solar generation changes the shape of the daily consumption schedule by decreasing noontime demand. Net load is power consumption minus generation from renewable energy sources, i.e. the demand that must be covered by traditional stations. By 2021, on the day of maximum consumption at maximum output from PVs, the required capacity of traditional stations to cover the evening peak of the net load will increase by 800 MW, i.e. by 65% (Figure 2). A situation was simulated in which the growing power consumption in the Southern zone is covered exclusively by introducing RES, without introducing traditional capacity, and the effect of losing part of the variable generation on stability was considered. In the power systems of developed countries with a high degree of RES penetration, the accuracy of predicting variable generation is about 10%. The accuracy of the forecast strongly depends on the amount of historical data and on modern means of climate monitoring. At the initial stages of renewable energy penetration in Kazakhstan, it will be difficult to achieve high forecasting accuracy.
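A minimal sketch of the net-load calculation defined above; the hourly demand and solar profiles are invented placeholders, not the study's data:

```python
import numpy as np

hours = np.arange(24)
# Hypothetical hourly demand (MW) with morning and evening peaks
demand = (4000
          + 800 * np.exp(-((hours - 9) / 3.0) ** 2)
          + 1200 * np.exp(-((hours - 20) / 2.5) ** 2))
# Hypothetical solar output (MW) peaking around midday, zero at night
solar = np.clip(1500 * np.cos((hours - 13) * np.pi / 12), 0, None)

# Net load = demand that traditional (maneuverable) stations must cover
net_load = demand - solar

print(f"Peak demand:        {demand.max():.0f} MW at {hours[demand.argmax()]}:00")
print(f"Peak net load:      {net_load.max():.0f} MW at {hours[net_load.argmax()]}:00")
# The evening ramp that maneuverable plants must follow:
ramp = np.diff(net_load)
print(f"Max hourly up-ramp: {ramp.max():.0f} MW/h")
```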
The loss of part of the RES generation in the Southern zone causes a power leap in the North-South interface. At the 2021 consumption level, the transmission capacity of the North-South interface allows possible generation losses in the Southern zone to be covered by flow from the Northern zone. If the annual growth of consumption in the Southern zone is maintained at 5%, the planned increase in RES is not able to cover the deficit. The growing deficit will lead to an increase in power flow across the North-South interface and a decrease in the interface's stability margin. At the same time, the increasing share of renewable energy in the Southern zone leads to an increase in the possible power leaps in the North-South interface. With growing consumption and a growing share of renewable energy sources, critical power leaps can occur at a smaller percentage of lost RES generation. In 2031, the loss of 80% of renewable energy generation will lead to a violation of transient stability due to power leaps on the North-South interface; in 2035, the loss of 50% will do so (Figure 3). Even if the entire deficit growth in the Southern zone is covered by introducing RES, the stability problems will remain relevant. In this case, it will be necessary to introduce a larger amount of RES, which will also lead to an increase in the possible power leaps (Figure 4). The emergence of powerful stable generation in the Southern zone would relieve the North-South interface and provide a larger capacity reserve, ensuring the safe operation of the Southern power district when large volumes of RES are introduced. Obviously, covering the imbalances of energy-deficient areas exclusively by renewable energy is impossible. Proceeding from this, in parallel with the large-scale commissioning of renewable energy in the Southern zone of the UPS of Kazakhstan, it is necessary to strengthen the links or introduce synchronous generation of commensurate capacity to unload the interfaces.

Conclusion
Many countries have embarked on large-scale implementation of renewable generation. However, when planning the development of the energy system of Kazakhstan, it is inadmissible to simply copy the energy strategies of other countries. When planning a model for the development of renewable energy in a particular country or region, the features of the power system configuration should be taken into account. On the basis of this analysis, the following basic principles can be formulated for introducing renewable energy sources into the energy system of the Republic of Kazakhstan:
• The transmission capacity of the interfaces must be selected taking into account the possible power leaps caused by variable generation.
• When planning operating modes, it is necessary to take into account the volume and rate of change of the maneuverable generation of regulating stations.
• Covering the imbalances of energy-deficient areas exclusively by renewable energy is impossible. In parallel with the large-scale commissioning of renewable energy in energy-deficient areas, it is necessary to strengthen the links or introduce synchronous generation of commensurable capacity to unload the interfaces.
• An increasing volume of variable generation will require additional regulating stations or contracted regulation services with neighboring countries. Kazakhstan currently lacks sufficient maneuverable generation.
Therefore, the development of renewable energy requires modernization of the market model in order to create conditions for the development of maneuverable power plants.
• The penetration of renewable energy in isolated parts of the UPS, or anywhere on a competitive basis (without the creation of benefits), can be carried out at any time, since if renewable energy sources are competitive with traditional energy, the latter is by implication extremely inefficient.
Taking into account the peculiarities of the Kazakhstan power system, it becomes obvious that an unreasoned, constant increase in the share of renewable energy in electricity production is not an optimal option for the development of the energy system. An insufficiently thought-out policy of constantly increasing the share of RES in the power system in pursuit of nominal indicators can disrupt the entire electric power sector. Until the capital costs of creating renewable energy can be justified by the energy obtained from converting solar or wind energy, RES will not contribute to the sustainable development of the energy system and the economy of the country as a whole. To achieve the maximum reduction of harmful emissions with minimal negative impact on the energy system, combinations of different technologies, primarily nuclear power, should now be considered. Thus, the penetration of renewable energy will require a significant change in the entire energy industry of the country, both in terms of technology and in terms of economic and legal relations.
In vitro antioxidant and ex vivo anti-cataract activity of ethanolic extract of Cineraria maritima: a traditional plant from Nilgiri hills

Cineraria maritima has a long history of use in the treatment of cataract and other eye-related problems in the homeopathic system of medicine. High oxidative stress is one of the major underlying causes of cataract, resulting in the precipitation of the natural proteins present in the lens with aging. This research was carried out to determine the anti-cataract activity of C. maritima by performing various antioxidant techniques such as the 1,1-diphenyl-2-picrylhydrazyl, nitric oxide and hydrogen peroxide assays, and by studies in an oxidative stress–induced ex vivo cataract model. The results of the study of the ethanolic extract of the aerial parts (leaves and stems) of C. maritima revealed the presence of various phytoconstituents such as alkaloids, phenols and flavonoids. The total phenol and total flavonoid content were found to be 6.31 ± 0.06% w/w and 2.14 ± 0.09% w/w respectively, which shows that the plant contains a good amount of these compounds and hence possesses good antioxidant activity. Furthermore, the IC50 values obtained with all the methods gave strong evidence of the antioxidant potential of this plant. The anti-cataract activity was also investigated using goat eye lenses, and the promising results speak volumes about its anti-cataract potential and support its widely prescribed use. The results obtained in this study clearly support the significant antioxidant potential and anti-cataract activity of this plant. Further, this plant deserves great attention for the development of suitable novel dosage forms for the effective treatment of cataract.

Background
Cataract, the leading cause of avoidable blindness, is responsible for almost 50% of cases globally [1, 2]. According to a recent report of the World Health Organization (WHO), this percentage will double in the coming two decades if suitable measures are not taken in time [3, 4]. Presently, surgery is the only treatment available, and it is itself associated with severe postoperative complications such as posterior capsule opacity, intraocular lens dislocation, inflammation of the eyes, macular edema, endophthalmitis, and ocular hypertension [5, 6]. Socioeconomic problems like illiteracy, poverty, inaccessibility, and the overall cost of treatment are other major barriers to successful treatment [7]. Studies have revealed that oxidative stress (OS) is one of the main underlying causes of the development of cataract [8, 9]. The formation of reactive oxygen species (ROS) such as hydroxyl radicals, hydrogen peroxide, and superoxide anions, owing to an inefficient defense system and the failure of the natural antioxidant enzymes (glutathione, glutathione peroxidase, superoxide dismutase, and catalase) found in the normal crystalline lens to neutralize them, is the main cause of OS. These antioxidant enzymes play a very significant role in the protection of lens proteins and lens fiber cell membranes [10, 11]. An excessive increase in ROS results in the denaturation of lens proteins, nucleic acids, and lipids, causing the development of cataract. Studies have reported a significant decrease in the activity of these natural antioxidants in the lens due to OS during cataract formation. Consequently, the development of a plant-based pharmaceutical dosage form possessing remarkable antioxidant potential would be a boon in addressing this global challenge.
Successful prevention of ROS using plants seems a new hope on the horizon for a major breakthrough in preventing, or at least delaying, the onset of cataract with few or no side effects [12]. Cineraria maritima L. (Syn. Senecio bicolor Wild) Tod, spp. Cineraria (Syn. S. cineraria DC) has been used for the treatment of cataract and other severe eye-related conditions like conjunctivitis, opacity, and corneal clouding in the homeopathic system since time immemorial. It is an annual exotic medicinal shrub belonging to the Asteraceae family. It is native to the Mediterranean region, but in India this exotic plant is cultivated by the Central Council for Research in Homeopathy in the Nilgiri hills (at an altitude of 1990 m above sea level, 11°18′-11°41′ N latitude and 76°37′-76°49′ E longitude), Tamil Nadu [13]. It is a perennial shrub with a height varying from 0.6 to 1.0 m. It is white in color and woolly throughout, especially under the leaves. The leaves are pinnate with oblong and obtuse segments. The flowers are hermaphrodite, and branching generally starts from the base. It is also cultivated as an ornamental plant and is sometimes called dusty miller [14]. Experimental studies have suggested the presence of potassium, calcium, aluminum, manganese, iron, and phosphorus in C. maritima [15]. Leaves of the plant collected in Egypt were found to possess various flavonoids such as quercetin, dihydroquercetin, quercitrin, apigenin 7-O-glucoside, and luteolin 7-O-glucoside, and their oil contained α-pinene (27.8%), camphene (22.9%), and borneol (7.4%) as major constituents [16]. Other studies suggested the presence of different pyrrolizidine alkaloids and other compounds, with hydrocarbons and caryophyllene oxide being the most abundant [14, 17]. Research has shown that this plant possesses significant antioxidant potential, which might be the reason for its anti-cataract effect [18]. This plant was selected because we believe that, in spite of its high anti-cataract potential, it is underrated and has not yet been explored from the novel drug delivery systems (NDDS) point of view. The development of drug delivery systems using plants demands the identification of the compounds responsible for their activity. In our study, we have qualitatively and quantitatively identified the presence of a significant amount of phenols and flavonoids in the Indian species of C. maritima for the first time. The considerable effect of the plant extract on the concentration of protein and of ions such as sodium and potassium, which are essential for the transparency of the lens, was also estimated. These studies may act as a stepping stone for the development of better delivery systems for this potential candidate as compared to its presently available eye drops. The anatomical and physiological barriers of the eye are the key reasons why drops fail to achieve optimum bioavailability and maximum therapeutic outcomes, thus restricting the potential of C. maritima. Therefore, an attempt was made to determine the antioxidant potential of C. maritima utilizing total phenolic and flavonoid content and different in vitro antioxidant techniques such as the 1,1-diphenyl-2-picrylhydrazyl (DPPH), hydrogen peroxide, and nitric oxide assays. Further, to determine the influence of this plant on lens protein and vital lens elements such as potassium, biochemical testing studies were done in goat eye lenses [18].

Preparation of plant extract
The aerial parts of the plant were dried and powdered carefully.
Then, 10 g of the powdered material was weighed accurately and extracted with 200 ml of 95% ethanol in a Soxhlet apparatus. The solvent was completely evaporated under reduced pressure at 50°C and the extract dried under vacuum. The material thus obtained was filtered and dried, the percentage yield was calculated (11.52%), and this material was used as the extract for the experimental studies [19].

Total phenol content
Preparation of standard curve
For the preparation of the standard curve, 20 mg of accurately weighed gallic acid was carefully transferred into a 100-ml volumetric flask and dissolved in 20 ml of 50% v/v methanol. The final volume was then made up to the 100-ml mark with 50% v/v methanol to obtain a stock solution of 200 μg/ml gallic acid. Samples of different gallic acid concentrations (20, 40, 60, 80, 100, or 120 μg/ml) were prepared by appropriate dilution of the stock solution. After this, a 1-ml aliquot from each sample was taken in a separate test tube and further diluted to 10 ml with 50% methanol, and 1.5 ml of Folin-Ciocalteu reagent was added to each test tube. All the test tubes were then incubated at room temperature for 5 min, 4 ml of 20% w/w aqueous sodium carbonate solution was added to each, and the final volume was adjusted to 25 ml with 50% v/v methanol. Each test tube was agitated vigorously to mix the contents and left aside for 30 min at room temperature, and then the absorbance of all the samples was measured at 765 nm using a UV/VIS spectrophotometer against 50% v/v methanol as a blank [25].

Preparation of test samples
First, 50 mg of extract was accurately weighed on a digital balance and extracted with 50% methanol (3 × 5 ml) by maceration for at least 2 h. The extract was then filtered and the volume made up to 50 ml with 50% methanol in a clean, dry volumetric flask. A 1-ml aliquot was then taken in a test tube, 1.5 ml of Folin-Ciocalteu reagent was added, and the mixture was incubated for 5 min at room temperature. Then, 4 ml of 20% w/w aqueous sodium carbonate solution was added to the test tube and the final volume was made up to 10 ml with 50% v/v methanol. The test tube was shaken vigorously to mix the contents and left aside for 30 min of incubation at room temperature; immediately after the incubation, the absorbance was measured at 765 nm against the blank (50% v/v methanol) with the UV/VIS spectrophotometer.

Quantification of total phenol content
A standard curve for the estimation of total phenol content (TPC) was prepared using gallic acid. Different concentrations of gallic acid (20, 40, 60, 80, 100, and 120 μg/ml) were made and the absorbance of each concentration was measured. A graph was then plotted with concentration on the X-axis and absorbance on the Y-axis to obtain the regression equation of the standard curve. From this equation, gallic acid equivalents were determined and the result was expressed as percentage w/w (mean ± S.D.). The percentage of total phenolic content in a sample of plant extract was determined using the formula:

TPC (% w/w) = (GAE × V × D) / (W × 10^6) × 100

where GAE = gallic acid equivalents (μg/ml); V = total volume of sample (ml); D = dilution factor; W = sample weight (g).
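A minimal sketch of the standard-curve quantification just described (the absorbance readings are invented placeholders; the defaults mirror the procedure above, with V = 50 ml, D = 10 and W = 0.05 g, and the final step applies the GAE formula given above):

```python
import numpy as np

# Hypothetical gallic acid standard curve (concentration in ug/ml vs. absorbance)
conc = np.array([20, 40, 60, 80, 100, 120], dtype=float)
absorbance = np.array([0.15, 0.29, 0.44, 0.58, 0.73, 0.87])  # placeholders

# Linear regression of the standard curve: A = m * C + b
m, b = np.polyfit(conc, absorbance, 1)

def gae_ug_per_ml(sample_abs: float) -> float:
    """Invert the regression to get gallic acid equivalents (ug/ml)."""
    return (sample_abs - b) / m

def tpc_percent(sample_abs, V_ml=50.0, D=10.0, W_g=0.05):
    """% w/w TPC = (GAE * V * D) / (W * 1e6) * 100, per the formula above."""
    return gae_ug_per_ml(sample_abs) * V_ml * D / (W_g * 1e6) * 100

print(f"TPC = {tpc_percent(0.42):.2f} % w/w (placeholder inputs)")
```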
Total flavonoid content
For the determination of total flavonoid content (TFC), the aluminum chloride complex assay method was used. In this method, quercetin was used as the standard and all determinations were made against a quercetin standard curve [25].

Preparation of standard curve of quercetin
A stock solution was first prepared by dissolving accurately weighed 20 mg of quercetin in 20 ml of methanol in a 100-ml volumetric flask, with the final volume made up to the mark with methanol to obtain a 200 μg/ml solution. Samples of various concentrations (30, 60, 90, 120, 150, and 180 μg/ml) were then prepared from the quercetin stock solution in methanol. A 0.5-ml aliquot was taken from each sample, and 1.5 ml of 95% methanol, 0.1 ml of 10% aluminum chloride, 0.1 ml of 1 M potassium acetate, and 2.8 ml of distilled water were added to each; the samples were incubated for 30 min at room temperature. The absorbance of the samples was then determined at 415 nm with a UV/VIS spectrophotometer. For the blank, the 10% aluminum chloride was replaced with the same amount of distilled water in the above procedure.

Preparation of test samples
The test sample was prepared by dissolving accurately weighed 50 mg of plant extract in 50 ml of methanol; for the determination of flavonoid content, a 0.5-ml aliquot of this stock solution was reacted with 10% aluminum chloride as described in the section above. For the blank, the aluminum chloride was replaced with distilled water.

Quantification of total flavonoid content
For the estimation of TFC, a standard curve of absorbance against quercetin concentration was plotted. From the regression equation of this curve, quercetin equivalents were calculated and the result was expressed as percentage w/w (mean ± S.D.). The percentage of TFC in the sample was calculated using the following formula:

TFC (% w/w) = (QE × V × D) / (W × 10^6) × 100

where QE = quercetin equivalents (μg/ml); V = total volume of sample (ml); D = dilution factor; W = sample weight (g).

Evaluation of antioxidant activity by the 1,1-diphenyl-2-picrylhydrazyl assay technique
The standard procedure of the DPPH method was used to carry out this assay [25, 26]. A stock solution of rutin (100 μg/ml) was prepared in methanol. Using the serial dilution method, different concentrations (2, 4, 6, 8, 10, and 12 μg/ml) were prepared. Equal volumes of these aliquots were added to the methanolic DPPH solution. The mixtures were then incubated in the dark for 30 min at room temperature. After the specified time, absorbance was read at 517 nm using a UV spectrophotometer, with methanol as the blank. Test sample solutions were prepared in a similar manner. The free radical scavenging activity of the samples was calculated, and the IC50 value was determined by plotting a graph of sample concentration versus % inhibition [27]:

% scavenging = [(Ac − (As − Ao)) / Ac] × 100

where Ac = absorbance of control (DPPH); As = absorbance of sample/standard + DPPH; Ao = absorbance of sample/standard without DPPH interaction.
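A minimal sketch of the % scavenging and IC50 determination described above (the dose-response values are invented placeholders, not the paper's readings):

```python
import numpy as np

def percent_scavenging(Ac, As, Ao=0.0):
    """% scavenging = ((Ac - (As - Ao)) / Ac) * 100, per the formula above."""
    return (Ac - (As - Ao)) / Ac * 100.0

# Hypothetical DPPH dose-response data (placeholders)
conc = np.array([2, 4, 6, 8, 10, 12], dtype=float)            # ug/ml
inhibition = np.array([18, 30, 44, 55, 68, 79], dtype=float)  # % scavenged

# IC50: concentration giving 50% inhibition, via linear interpolation
ic50 = np.interp(50.0, inhibition, conc)
print(f"IC50 ~= {ic50:.1f} ug/ml")
```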
% inhibition = [Ac − (As − Ao)] / Ac × 100

where Ac = absorbance of the control (DPPH); As = absorbance of the sample/standard + DPPH; Ao = absorbance of the sample/standard without DPPH interaction. The measurements were taken in triplicate, and the scavenging effect was calculated as the percentage of DPPH scavenged. IC50 values of the samples for antioxidant activity were calculated using the standard curve of rutin.

Evaluation of antioxidant activity by the hydrogen peroxide scavenging method
The free radical scavenging activity of the plant extract towards hydrogen peroxide was measured by the procedure given by Dehpour. First, a 40 mM solution of hydrogen peroxide was prepared in freshly prepared phosphate buffer (pH 7.4). The concentration of this solution was measured at 560 nm using a UV spectrophotometer. Then, 0.1 mg/ml of the extract was added to the hydrogen peroxide solution and the absorbance was read at 560 nm against a blank solution of phosphate buffer without hydrogen peroxide. The control was prepared with hydrogen peroxide and phosphate buffer without the plant extract. Ascorbic acid was used as the reference standard and phosphate buffer as the blank for the spectroscopic determinations. Finally, the percentage scavenged was determined [28,29]:

% scavenged = [(Abs control − Abs sample) / Abs control] × 100

where Abs control was the absorbance of the control (without extract) at 560 nm and Abs sample was the absorbance in the presence of the extract at 560 nm.

Evaluation of antioxidant activity by the nitric oxide scavenging method
A 10 mM sodium nitroprusside solution was prepared in phosphate-buffered saline (pH 7.4) and mixed with plant extract at various concentrations (5-200 μg/ml). These reaction mixtures were incubated at 30°C for 2 h. A 0.5-ml aliquot was taken from each mixture and mixed with 0.5 ml of Griess reagent (1% sulfanilamide in 2% phosphoric acid and 0.1% naphthylethylenediamine dihydrochloride). The absorbance was measured at 550 nm, and the percentage inhibition of the nitric oxide radical by the plant extract and by the standard antioxidant ascorbic acid was determined [30-32]:

% inhibition of free radical = [(A Control − A Test) / A Control] × 100

where A Control is the absorbance of the control and A Test is the absorbance of the sample.

Ex vivo evaluation of anti-cataract activity of ethanolic extract of C. maritima
To perform this study, fresh goat eyeballs were collected from a nearby slaughterhouse, immediately carried to the laboratory, and kept at 0-4°C. Lenses were then carefully extracted using the extracapsular extraction technique and incubated in freshly prepared artificial aqueous humor (sodium chloride: 140 mM, potassium chloride: 5 mM, magnesium chloride: 2 mM, sodium bicarbonate: 0.5 mM, sodium dihydrogen phosphate: 0.5 mM, calcium chloride: 0.4 mM, and glucose: 5.5 mM) at room temperature. The pH of the medium was 7.8. To prevent the growth of microorganisms, 100 μg/ml streptomycin and 100 IU/ml penicillin were added to the medium [19].

Morphological examination of the lens
For the determination of the anti-cataract effect of the plant, lenses of each group were incubated for 72 h in their respective culture media. Visual inspection of the lenses for opacity and transparency was then carried out by placing each lens on a wire mesh, with its posterior surface touching the mesh, and counting the number of mesh squares visible through the lens. The following grading system was used [33]:
0: Indicates complete absence of opacity and clear visibility of all the squares covered by the lens.
+: Indicates a slight degree of opacity, in which most squares covered by the lens are visible with minimal clouding
++: Indicates the presence of diffuse opacification, in which mesh squares are faintly visible
+++: Indicates the presence of extensive thick opacification, in which nothing is visible through the lens

Preparation of lens homogenate for the determination of total protein, sodium, and potassium
After 3 days of incubation, 10% lens homogenates from each group were prepared in 0.1 M sodium phosphate buffer (pH 7.4). The homogenates were centrifuged in a refrigerated centrifuge at 10,000 × g for 30 min at 4°C. The supernatant was collected and used for the estimation of the biochemical parameters [34].

Estimation of various biochemical parameters
Biochemical parameters such as total protein content and sodium and potassium levels in the lens homogenates were estimated for the determination of the anti-cataract activity of the ethanolic plant extract. Total protein content was measured using Lowry's method, and the amounts of sodium and potassium were estimated by flame photometry [34].

Statistical analysis
All results are expressed as mean ± S.D. for three parallel measurements (n = 3). Results were processed using Microsoft Excel 2019 and SPSS (student version 16). The statistical significance of differences between groups for the various treatments was analyzed by one-way ANOVA followed by the post hoc Tukey test. p < 0.05, p < 0.01, and p < 0.001 were considered statistically significant.

Results
In this study, the ethanolic extract of the plant C. maritima was used for the determination of TPC, TFC, antioxidant activity, and ex vivo anti-cataract activity using goat eye lenses, to assess its potential for the effective treatment of cataract. So far, in India, experimental studies for the determination of the anti-cataract activity of this plant, attributed to its antioxidant potential, have been conducted by Anitha et al. In the present study, however, the authors have for the first time determined the TPC, TFC, and in vitro antioxidant activity of C. maritima by techniques such as the DPPH, hydrogen peroxide, and nitric oxide assays, together with the estimation of protein and various ions in the lens homogenates. Literature reports reveal the extensive use of this plant in homeopathic systems of medicine for the prevention or delay of the onset of cataract. The anti-cataract activity of this plant is attributed to the presence of antioxidants such as phenols and flavonoids, which help combat the oxidative stress that underlies cataractogenesis in most cases. Therefore, this experimental study was conducted to determine the total phenolic and flavonoid content of this plant along with its experimental antioxidant and anti-cataract activities.

Preliminary phytochemical analysis
The phytochemical analysis of the ethanolic extract of the plant C. maritima revealed the presence of various constituents such as alkaloids, carbohydrates, flavonoids, and others, which are tabulated in Table 1.

Total phenolic and flavonoid content
As shown in Table 2, TPC was found to be 6.31 ± 0.06% w/w, which signifies the high antioxidant potential of this plant. In this study,
the calibration curve of gallic acid (Fig. 1) was plotted to determine the TPC value using the calibration curve equation Y = 0.0053X + 0.0107 with an R² value of 0.9967, where Y represents absorbance and X represents the concentration of gallic acid in μg/ml; the result is expressed as % w/w. TFC was determined spectrophotometrically using quercetin as a standard with the aluminum chloride method. For this purpose, the calibration curve of quercetin (Fig. 2) was drawn to obtain the equation Y = 0.0048X + 0.016 with an R² value of 0.998. With this equation, the TFC value was found to be 2.14 ± 0.09% w/w, which also reflects the antioxidant potential of the plant under consideration.

Antioxidant activity
Antioxidant activity was determined by three different methods: DPPH, hydrogen peroxide, and nitric oxide. In the DPPH assay technique, rutin was used as a standard (Fig. 3) for the determination of the antioxidant potential of C. maritima. Results obtained with the DPPH method are given in Table 3. From the DPPH method, the IC50 value of the standard compound was found to be 5.45 μg/ml and that of the ethanolic extract of the plant was 73.26 μg/ml (Fig. 4). The hydrogen peroxide method was the second method used for the determination of antioxidant potential. In this method, ascorbic acid was used as a standard and showed an IC50 value of 0.89 mg/ml, while the IC50 value of the ethanolic extract of the plant was found to be 1.30 mg/ml. Results obtained with this method are shown in Table 3 and Fig. 5, which clearly reveal the antioxidant potential of this plant. The nitric oxide scavenging method was also used for the determination of antioxidant activity; the results are shown in Table 3 and Fig. 6. Ascorbic acid was used as the standard in this method and showed an IC50 value of 27.03 μg/ml, while the IC50 value of the ethanolic plant extract was found to be 121.85 μg/ml. Thus, the results obtained with all the techniques provide strong evidence of the antioxidant potential of the plant C. maritima.

Ex vivo anti-cataract activity of ethanolic extract of the plant
The results of this study were obtained photographically and are shown in Figs. 10, 11, 12, and 13. A comparison of the photographs obtained for the various study groups clearly revealed the anti-cataract activity of C. maritima.

Estimation of biochemical parameters

Estimation of protein present in lens homogenate
The amount of protein present in group I (normal control lenses) was found to be 3.17 ± 0.01 g/dl, compared with 1.35 ± 0.03 g/dl in the cataract-induced group II, and 2.65 ± 0.02 g/dl and 3.01 ± 0.01 g/dl in the cataract-treated group III (150 μg/ml plant extract) and the cataract-treated group IV (300 μg/ml plant extract), respectively (Table 5 and Fig. 7).

Estimation of sodium and potassium present in lens homogenate
Data obtained from this study are shown in Table 5. The concentration of Na+ ions was lowest in group I (101.2 ± 0.03 μg/ml) compared with 232.5 ± 0.04 μg/ml in group II (Fig. 8). The cataract-treated groups III and IV showed a decrease in the concentration of Na+ ions in a dose-dependent manner. Further, the concentration of K+ ions was estimated at about 11.6 ± 0.01 μg/ml in group I, whereas 5.9 ± 0.03 μg/ml was determined in the cataract-induced group II (Fig. 9). Groups III and IV showed an increase in K+ concentration with increasing dose of the extract.
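As a worked illustration of how these calibration curves are applied, the short Python sketch below inverts the reported gallic acid regression (Y = 0.0053X + 0.0107) to convert a measured absorbance into gallic acid equivalents and then applies the % w/w formula from the Methods. The absorbance, volume, dilution, and weight inputs are assumptions (not data from the study), chosen so the example lands near the reported 6.31% w/w.

```python
# Illustrative TPC calculation from a measured absorbance, using the
# gallic acid calibration curve Y = 0.0053X + 0.0107 reported above.
# Input values are assumptions chosen to land near the reported 6.31% w/w.

SLOPE, INTERCEPT = 0.0053, 0.0107  # gallic acid standard curve

def gallic_acid_equivalents(absorbance):
    """Invert Y = SLOPE*X + INTERCEPT to get GAE in ug/ml."""
    return (absorbance - INTERCEPT) / SLOPE

def tpc_percent_w_w(absorbance, v_ml, d, w_g):
    """TPC (% w/w) = (GAE * V * D * 100) / (W * 10^6)."""
    return gallic_acid_equivalents(absorbance) * v_ml * d * 100 / (w_g * 1e6)

# Assumed inputs: absorbance 0.345, 50 ml extract solution,
# no further dilution (D = 1), 50 mg (0.05 g) of extract.
print(round(tpc_percent_w_w(0.345, v_ml=50, d=1, w_g=0.05), 2))  # ~6.31
```

The same two-step pattern (invert the standard curve, then apply the % w/w formula) carries over to the quercetin curve for TFC.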
Discussion
With the rapid advance of science and technology, the twenty-first century has witnessed extensive use of herbs and herbal-based products for the effective treatment of many severe diseases and disorders that impose a heavy, life-threatening toll on human health. Recognizing the great potential of herbs, nanotechnology, the science dealing with materials at the nanoscale, has also entered the design of herbal-based drug delivery systems. Scientists believe that the development of a single system combining the dual advantages of nanotechnology and herbs, achieving maximum absorption due to its nano size and little or no toxicity because of its herbal origin, could be of great significance in conquering these deadly diseases [35]. Another reason why the use of herbs and herbal-based drugs has been increasing enormously in modern therapeutics compared with their synthetic counterparts is the significant antioxidant potential of herbs. A number of experimental studies indicate that high oxidative stress is the main underlying cause of many serious diseases [36]. Cataract, characterized by opacification of the lens due to the precipitation of the natural crystalline protein present in the lens, is one such disease for which the main underlying cause is OS. Modern surgical techniques are available for the effective treatment of cataract, but severe postoperative complications cannot be neglected, and socioeconomic factors also act as major obstacles to treatment [4]. Thus, the development of an approach based on an antioxidant mechanism, other than surgery, is a pressing need so that the onset of cataract can be stopped or at least delayed. Keeping this in mind, the present study was undertaken; TPC was estimated using gallic acid as a standard. In phenolic compounds, the hydroxyl group is known to play a significant role in the scavenging of free radicals, so this parameter can be used efficiently as a measure of the antioxidant potential of plants. TFC was estimated using quercetin as a standard. The principle of this method is based on the formation of acid-stable complexes by aluminum chloride with the flavones and flavonols present in plants. When aluminum chloride comes in contact with the C-4 keto group and the C-3 and C-5 hydroxyl groups of the flavones and flavonols present in plant samples, it forms acid-stable complexes which can be used for the determination of the total flavonoid content of plant samples. It is also capable of forming acid-labile complexes with the ortho-dihydroxyl groups present in the A or B rings of flavonoids. The experimental studies revealed a high content of these compounds (TPC, 6.31 ± 0.06% w/w and TFC, 2.14 ± 0.09% w/w), which collectively play a significant role in the plant's antioxidant potential. Thus, it can be said that the high antioxidant potential of this plant is due to the presence of a considerable amount of phenolic and flavonoid content, which also plays a profound role in its strong anti-cataract activity. For the antioxidant study, three different techniques, namely the DPPH, hydrogen peroxide, and nitric oxide methods, were used because each technique is based on a different scavenging mechanism and uses a completely different experimental procedure, varying in chemical reaction and reaction time, thus allowing better quantification of the antioxidant potential of the plant under consideration.
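The calculation shared by all three assays can be summarized in a short sketch: percentage inhibition relative to a control absorbance, a linear fit of % inhibition against concentration, and inversion of that fit at 50% to estimate IC50. The sketch below (Python with NumPy) uses invented absorbance values for illustration only; real IC50 estimation may also use a log-dose or nonlinear fit.

```python
import numpy as np

def percent_inhibition(a_control, a_test):
    """% inhibition = (A_control - A_test) / A_control * 100."""
    return (a_control - a_test) / a_control * 100.0

def ic50_from_linear_fit(conc, inhib):
    """Fit inhibition = m*conc + c and solve for 50% inhibition."""
    m, c = np.polyfit(conc, inhib, 1)
    return (50.0 - c) / m

# Illustrative data only (not from the study): a rutin-like dilution series.
conc = np.array([2, 4, 6, 8, 10, 12], dtype=float)        # ug/ml
a_test = np.array([0.62, 0.51, 0.41, 0.32, 0.24, 0.17])    # assumed readings
inhib = percent_inhibition(a_control=0.85, a_test=a_test)
print(f"IC50 ~ {ic50_from_linear_fit(conc, inhib):.1f} ug/ml")
```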
The DPPH assay technique is based on the simple principle that the DPPH molecule behaves as a stable free radical due to a spare electron that remains delocalized over the entire molecule [18]. This delocalization prevents dimerization of the DPPH molecule and gives it a deep violet color with a characteristic absorption band at 517 nm in methanol and 520 nm in ethanol when measured spectrophotometrically. On the addition of a substance that can donate a hydrogen atom to the DPPH molecule, this violet color fades due to the reduction of DPPH and the oxidation of the substance under consideration. In this study, rutin was used as the standard. The % inhibition value of the extract was lower than that of the standard compound, but according to the standard benchmarks for the antioxidant potential of plants it belongs to the active category, which speaks to the high antioxidant potential of the plant. Furthermore, the relationship between the different concentrations of plant extract and percentage inhibition showed a high R² value (Table 4), signifying a proportional relationship between the two variables, i.e., linearity of the curve and an increase in antioxidant activity with concentration. The increase in percentage inhibition with increasing concentration of plant extract reveals its antioxidant potential. The hydrogen peroxide technique was also used; it is based on the principle of neutralization of H2O2 into water when it reacts with an electron donated by the plant extract through its phenolic content. The results of this study showed that the ethanolic extract of C. maritima scavenged H2O2 in a dose-dependent manner, indicating significant antioxidant activity. The nitric oxide method is based on the principle of determination of the nitrite ion using the Griess reagent. At physiological pH 7.2, sodium nitroprusside is known to decompose into nitric oxide in aqueous solution, producing nitrite ions on reaction with oxygen. The results of the antioxidant activity (Table 3), based on the determination of IC50 values by all the applied techniques, clearly provide strong evidence of the great antioxidant potential of this plant owing to its high free radical scavenging action, and lay a strong foundation for its long-standing use in the treatment of cataract via the management of OS, the main culprit in the onset of this disease. The OS-induced ex vivo cataract model was used to determine the anti-cataract potential of this plant. For this purpose, goat eye lenses were collected from a nearby slaughterhouse and various study groups were formed. In this study, a total of 24 lenses were used, divided into four groups. On the basis of thorough visual inspection, the lenses of group I (Fig. 10) were assigned grade 0 because they maintained complete transparency and clarity throughout the incubation period. The lenses of group II (Fig. 11) were graded +++ because sodium selenite leads to severe oxidative stress through excessively high production of reactive oxygen species (ROS), the major cause of lens damage and cataract development. The lenses of group III (Fig. 12) were graded + because they were almost clear with very slight opacity, and those of group IV (Fig. 13) were graded 0 because all the lenses exhibited complete transparency.
The difference in the results between the various groups is attributable to the medium of their incubation, which strongly supports the view that OS is the main cause of cataract. The plant extract decreased OS in groups III and IV in a dose-dependent manner by neutralizing the free radicals generated by sodium selenite, resulting in lens clarity and transparency almost similar to that of group I. These results clearly show that the ethanolic extract of C. maritima possesses promising anti-cataract activity due to the reduction of OS by its high levels of phenolic and flavonoid content, as reported by Grace et al. The reduction in OS through the decrease in selenite-induced free radical generation, which in turn decreases the precipitation of the protein present in the crystalline lens, may be the plausible mechanism of action of C. maritima in the prevention of cataract. Further, the biochemical studies (Table 5) conducted for the determination of protein and various ions present in the lenses incubated under different conditions, as per the study design, strongly support the high anti-cataract potential of C. maritima. The total protein content (TPRC) study showed that the amount of protein in the lenses incubated in artificial aqueous humor (group I) was significantly higher than in the lenses of the cataract-induced group, because OS results in precipitation of lens protein (p < 0.001). The lenses of the cataract-treated groups (group III and group IV) showed an increase in protein content in a dose-dependent manner, revealing the strong anti-cataract effect of C. maritima (p < 0.01 for group III versus group II and p < 0.001 for group IV versus group II). The increase in protein concentration with plant extract may be attributed to the presence of phenols and flavonoids in the extract. The results of the study conducted to estimate the concentration of ions in the lens homogenates showed a high concentration of Na+ and a low concentration of K+ ions in the cataract-induced group II (p < 0.001 versus group I), whereas the lenses of group I and the cataract-treated groups III and IV showed an increase in the concentration of K+ ions (p < 0.05 for group III versus group II and p < 0.001 for group IV versus group II) and a decrease in Na+ ions (p < 0.05 for group III versus group II and p < 0.01 for group IV versus group II). This alteration of the ion ratio relative to normal lenses, with accumulation of Na+ in the cataract-induced group, is probably due to impairment of Na+/K+-ATPase activity, which decreases TPRC in the lenses, resulting in their opacity and the development of cataract. The dose of plant extract had a significant influence on the concentration of ions in the treated groups (group III and group IV): with the increase in the concentration of plant extract, a remarkable decrease in Na+ concentration and an increase in K+ concentration were observed in group IV compared with group III. This substantial increase in K+ concentration with the dose of plant extract may be due to the high amount of potassium present in C. maritima, as determined spectroscopically by J. Burdon-Cooper. One-way ANOVA followed by the post hoc Tukey test was applied for the analysis, and the results were found to be statistically significant at p < 0.05, p < 0.01, and p < 0.001.
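For readers wishing to reproduce this style of analysis, the sketch below (Python with SciPy and statsmodels) runs a one-way ANOVA followed by a post hoc Tukey test on triplicate protein measurements. The individual replicate values are invented around the group means reported above (3.17, 1.35, 2.65, and 3.01 g/dl) purely for illustration; they are not the study's raw data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative triplicates (n = 3) centered on the reported group means;
# these are NOT the study's raw measurements.
protein = {
    "I (control)":   [3.16, 3.17, 3.18],
    "II (cataract)": [1.32, 1.35, 1.38],
    "III (150 ug)":  [2.63, 2.65, 2.67],
    "IV (300 ug)":   [3.00, 3.01, 3.02],
}

# One-way ANOVA across the four groups.
f_stat, p_val = f_oneway(*protein.values())
print(f"ANOVA: F = {f_stat:.1f}, p = {p_val:.2e}")

# Post hoc Tukey HSD for pairwise group comparisons.
values = np.concatenate([np.asarray(v) for v in protein.values()])
labels = np.repeat(list(protein.keys()), 3)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```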
The results of our study also confirm, in a noteworthy way, the ability of this plant to combat cataract. This research elucidates a multipronged strategy supporting the substantial anti-cataract activity of C. maritima, based on the considerable amounts of phenols and flavonoids found, as well as its contribution to increasing protein and potassium concentrations in the cataractous lens, which essentially maintain lens transparency. Thus, the plant has the propensity to prevent or delay the onset of cataract by restricting the insolubilization of lens protein, maintaining the normal activity of the natural lens antioxidants, and preserving the optimum concentration of the ions required for normal lens function, thereby giving credence to this approach to the treatment of age-related or senile cataract. Therefore, our research findings firmly endorse previous investigative studies and the plant's well-prescribed use in homeopathy for treating or delaying the onset of cataract, as suggested by Anitha et al. A study of the global marketing scenario for the presently existing dosage form of this plant clearly revealed the availability of its solution form, which has limitations due to the complex anatomical and physiological functions and barriers of the eye [18,37]. All these shortcomings of simple solution dosage forms limit the outcomes that can be obtained with this potential herbal candidate. Keeping in mind all the abovementioned limitations, we believe that this remedy deserves special attention for the development of novel dosage forms to obtain maximum advantage.

Conclusion
The results of the study clearly show the high antioxidant potential of the plant C. maritima and support its well-prescribed and extensive use in the treatment of cataract. The findings support a protective role of C. maritima in pathologies involving oxidative stress, namely cataract. Our findings further corroborate the substantial presence of phenolic and flavonoid content, signifying the plant to be a promising contender in treating cataract. However, further studies to identify and isolate the main phenolic and flavonoid constituents responsible for the anti-cataract effect, and to develop an appropriate dosage form for maximum benefit from this plant, need to be carried out in the future.
A Hadoop-based Molecular Docking System

Abstract
Molecular docking faces the challenge of managing datasets of tens of terabytes, so it is necessary to improve the efficiency of both storage and docking. We propose a Hadoop-based molecular docking platform for virtual screening that provides preprocessing of ligand datasets and analysis of the docking results. A molecular cloud database supporting mass data management is constructed. With this platform, docking time is reduced, data storage is efficient, and the management of ligand datasets is convenient.

Introduction
Traditionally, laboratory chemical experiments are needed in the field of drug design, which cost much money and manpower. Computer-aided drug design can be used to focus on the set of docking objects, shortening the drug development cycle. With the completion of the Human Genome Project and the rapid development of structural biology and protein purification technologies, the number of suitable targets among receptor molecules has increased dramatically, while commercial small-molecule databases have been updated continuously. It is therefore necessary to use computing technology to optimize the process. Computer-aided drug molecular design [1] focuses on the sets of molecular docking objects. However, the search space of molecular docking is tremendous: a rough estimate of the search space covered by docking includes at least 10^30 solutions, which require a large amount of computation time. Virtual screening procedures search collections of small molecules, seeking those members that contain a set of features matching a defined search goal. With the growth of structural and non-structural ligand data, storing and managing such a large amount of data is an important issue. Cloud computing technology handles the storage and computation of massive data by distributing the data across the computing nodes of a cluster through the network. Tsai [2] constructed a cloud-computing system for traditional Chinese medicine (TCM) intelligent screening (iScreen). Capuccini [3] developed a method to run existing docking-based screening software on distributed cloud resources, and Li [4] developed a publicly accessible web platform called istar. In this paper, we propose a method to construct a molecular database for ligand preprocessing, and we implement a Hadoop-based virtual screening platform using AutoDock Vina [5]. Hadoop [6] is an open-source framework implementing the MapReduce model, originally introduced by Google for parallel processing of many small data items, and is widely used in massive data processing. Hive is an open-source data warehouse tool based on Hadoop; it can map files into data tables and provides HQL statements. AutoDock Vina is a well-known tool for protein-ligand docking built in the same research lab as the popular tool AutoDock4 [7]. Our system can reduce the timeline and cost of drug discovery.

Data Model
Non-drug-like molecules (according to user-defined filters) can be excluded before docking. Ligand datasets that cannot be targets should be eliminated before the docking process, so preprocessing of the data is needed to improve docking efficiency. During ligand preprocessing, users search the datasets according to the properties of the compounds, and the selected ligands are submitted for docking. Two tables are necessary for handling the properties and the mol2 file of each ligand. Table 1 shows the properties of a ligand.
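The paper does not reproduce the exact column definitions of Tables 1-3, so the following Python sketch (using the standard sqlite3 module as a lightweight stand-in for the MySQL store used in the experiments) shows one plausible schema: a properties table for filtering, a structure table holding mol2 text, and a results table keyed by the best docking score. All table and column names here are illustrative assumptions.

```python
import sqlite3

# Illustrative schema only; the paper's actual Table 1-3 definitions
# (column names, types) are not given, so these are assumptions.
conn = sqlite3.connect("ligands.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS ligand_property (   -- cf. Table 1
    ligand_id       TEXT PRIMARY KEY,
    mol_weight      REAL,                      -- used by drug-like filters
    logp            REAL,
    hbond_donors    INTEGER,
    hbond_acceptors INTEGER,
    tag             TEXT DEFAULT 'pending'     -- 'pending' / 'finished'
);
CREATE TABLE IF NOT EXISTS ligand_structure (  -- cf. Table 2
    ligand_id  TEXT PRIMARY KEY,
    mol2_text  TEXT                            -- full mol2 file contents
);
CREATE TABLE IF NOT EXISTS docking_result (    -- cf. Table 3
    ligand_id  TEXT,
    receptor   TEXT,
    best_score REAL                            -- minimum Vina score
);
""")
conn.commit()
```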
Users can retrieve data fields for virtual screening, and the retrieved data are automatically composed into a file in mol2 format. Table 2 stores the data of the ligands, and the results of docking are stored in Table 3. Our platform provides a query service and a preprocessing service that draw on these ligand data tables.

Architecture of the Hadoop-Based Molecular Docking Platform
The architecture of the Hadoop-based molecular docking platform is shown in Figure 1. The platform comprises four layers: the User Layer, the Hive Layer, the MapReduce Layer, and the HDFS (Hadoop Distributed File System) Layer. The User Layer provides services for users, including preprocessing of ligands and handling of docking results. After preprocessing, the drug-like ligand datasets are selected as the input files for AutoDock Vina. The Hive Layer provides data support. MapReduce is the high-performance computing model of Hadoop: an application is executed as many small fragments of work by different nodes in the cluster. The MapReduce framework divides the work into two phases, the map phase and the reduce phase, separated by data transfer between nodes in the cluster. The reduce stage produces another set of key-value pairs as the final output, based on a user-defined reduce function. HDFS is the Hadoop distributed file system running on commodity hardware. In HDFS, data are organized into files and directories; files are divided into uniformly sized blocks and distributed across the cluster nodes. After docking, the minimum score in the output file is stored in the result table. Users can retrieve and analyze this information after docking using the services of the platform.

An Algorithm of the Docking Process
The essential enhancement is the data management around the main jobs in virtual screening, such as docking and ranking. Users can not only control and inspect the computing process but also obtain consistent and logically organized data while computing is in progress or after it has finished.

Preprocessing
All structures and properties of the molecules required for docking and ranking were calculated and stored in the database (Tables 1 and 2). For each molecule in the ZINC library, CHARMm atom types were assigned. The preprocessing system was then applied to calculate the atomic and chemical properties of each molecule. Even though not all of these properties were used in docking and ranking, they were prepared for different kinds of filters. Besides these properties, the mol2 file of each molecule was also stored in the database.

Docking
In order to improve the efficiency of docking, molecules from the library ligand table were selected according to drug-like rules. The poses of each molecule in PDB format, together with their interaction energies with the receptor (electrostatic and VDW terms), were stored in the Pose table of the database. During docking, the computing clients acquired the 3D structures of the molecules directly from the database and stored poses and energies in the database after each docking process finished.

Ranking
We used a developed approach based on quantum mechanics calculations to efficiently rank the poses. The algorithm used for the docking process is as follows:
1) While there are unfinished ligands in the ligand table do;
2) Select a drug-like ligand from the ligand table;
3) Acquire the mol2 structure of the ligand from the database;
4) Dock the ligand to the receptor;
5) Store the resulting poses and energies in the Pose table;
6) Rank the poses;
7) Store the ranking results in the Ranking table;
8) Update the tag of ligand to "finished";
9) End while.
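As a concrete illustration of this loop, the Python sketch below drives AutoDock Vina from a ligand table and records the best score. It assumes the schema sketched earlier, a prepared receptor file, and ligands already converted to PDBQT; the file names and grid-box values are placeholders, and the score parsing (reading "REMARK VINA RESULT" lines from Vina's output) follows Vina's command-line conventions but should be checked against the installed version.

```python
import sqlite3
import subprocess

DB, RECEPTOR = "ligands.db", "receptor.pdbqt"  # assumed inputs

def best_vina_score(out_pdbqt):
    """Take the minimum affinity from 'REMARK VINA RESULT' lines."""
    scores = []
    with open(out_pdbqt) as f:
        for line in f:
            if line.startswith("REMARK VINA RESULT"):
                scores.append(float(line.split()[3]))
    return min(scores)

conn = sqlite3.connect(DB)
while True:
    row = conn.execute(
        "SELECT ligand_id FROM ligand_property WHERE tag='pending' LIMIT 1"
    ).fetchone()
    if row is None:
        break                      # no unfinished ligands remain
    lig = row[0]
    out = f"{lig}_out.pdbqt"
    # Grid-box center/size values below are placeholders for a real target.
    subprocess.run(
        ["vina", "--receptor", RECEPTOR, "--ligand", f"{lig}.pdbqt",
         "--out", out,
         "--center_x", "0", "--center_y", "0", "--center_z", "0",
         "--size_x", "20", "--size_y", "20", "--size_z", "20"],
        check=True,
    )
    conn.execute(
        "INSERT INTO docking_result VALUES (?, ?, ?)",
        (lig, RECEPTOR, best_vina_score(out)),
    )
    conn.execute(
        "UPDATE ligand_property SET tag='finished' WHERE ligand_id=?", (lig,)
    )
    conn.commit()
```

In the paper's setting this driver would run on each computing client, with Hadoop handling job distribution and HDFS holding the bulk data.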
Experiments and performance evaluation
We implemented the Hadoop-based molecular docking platform in order to validate its performance. This section describes the hardware components of the platform and the testing scenarios. Figure 2 shows the deployment of the experimental system.

Figure 2. The deployment of the experimental system based on Hadoop

The experimental system assigns one node as the controller for management and five nodes as slaves of the Hadoop cluster. The CPU of the controller and of each computer is an Intel(R) Core(TM)2 Quad 2.66 GHz processor, with 8 GB of memory and a 160 GB hard drive. We constructed a MySQL database to store the existing chemical databases. We performed docking jobs with 2000 ligands on a target receptor. Figure 3 shows the comparison of execution time as the number of docking jobs increases. We compared three different approaches to executing docking jobs. The first approach is to execute docking jobs on only a single node with the best computing performance. The second approach is to execute docking jobs on a Grid-based molecular docking system [8] with 5 computation nodes. The third approach is to execute docking jobs using our Hadoop-based molecular docking system. In Figure 3, we can see that the performance of our molecular docking system is better than that of the other approaches. When a crash failure happens, in the first case the docking jobs must be restarted, which costs more time. In the second case, the resource manager migrates the remaining jobs to restart docking on another computing node. In the third case, the resource manager performs rescheduling and job migration when a crash failure happens. Figure 4 shows the comparison of the extra time needed in the three cases to deal with a crash failure occurring during the docking of 2000 ligands. According to the performance testing on the experimental platform, the Hadoop-based molecular docking system has good performance, scalability, and fault tolerance. Meanwhile, the platform provides preprocessing and result-processing functions for users.

Conclusion
In this paper, we design and develop a Hadoop-based molecular docking system, which reduces an unmanageable number of compounds to a limited number of compounds for the target of interest. We constructed a ligand database for virtual screening and developed the preprocessing and ranking services for users. Through performance testing on the experimental platform, our system was shown to reduce the timeline and cost of drug discovery. We plan to improve the algorithm's efficiency and to carry out various experiments with large datasets to evaluate the platform in the future.

Acknowledgment
These works are mainly supported by the National Science Foundation under Grants No. 61170168 and No. 61170169.
The Health Innovation Impact Checklist: a tool to improve the development and reporting of impact models for global health innovations

ABSTRACT
Donor financing is increasingly relying on performance-based measures that demonstrate impact. As new technologies and interventions enter the innovation space to address global health challenges, innovators often need to model their potential impact prior to obtaining solid effectiveness data. Diverse stakeholders rely on impact modeling data to make key funding and scaling decisions. With a lack of standardized methodology to model impact, and various stakeholders using different modeling strategies, we propose that a universal innovation impact checklist be used to aid transparent and aligned modeling efforts. This article describes a new Health Innovation Impact Checklist (HIIC), a tool developed while evaluating the impact of health innovations funded under the Saving Lives at Birth (SL@B) program. SL@B, a global health Grand Challenge initiative, funded 116 unique maternal and newborn health innovations, four of which were selected for cost-effectiveness analyses (CEAs) within our evaluation. A key data source needed to complete a CEA was the lives saved estimate. HIIC was developed to help validate draft impact models from the SL@B donors and our own team's additional modeling efforts, to ensure the inclusion of standardized elements and to pressure test assumptions for modeling impact. This article describes the core components of HIIC, including its strengths and limitations. It also serves as an open call for further reviewing and tailoring of this checklist for applicability across global efforts to model the impact of health innovations.

Keywords: global health; health innovation; impact modeling; checklist; cost-effectiveness analysis

Background
Estimating the health and economic impacts of a policy, program or project is critical for informing and scaling innovative healthcare solutions [1]. Donor funding and investments from multilateral, bilateral, and global health initiatives are increasingly relying on performance-based measures [2-5]. In addition, donor demands are shifting from technical and academic outputs to impacts that measurably benefit society [6]. To demonstrate efficiency and garner donor and private-sector interest, global health implementing agencies need to be able to measure and report the impact of their interventions on health outcomes. In the field of global health innovation, impact is not realized unless an innovation is successfully developed, taken to scale and demonstrates effectiveness [7]. However, with rapidly changing health markets, rigorous evaluations of new interventions are often too costly and time-consuming to conduct relative to decision-making timelines [8-10]. It is estimated that less than 5% of drug and/or technology innovations reach scale, while the time needed to achieve scale is 14 years on average [11]. With such high levels of uncertainty and long wait times, innovators need predictive modeling to estimate the efficiency and future impact of their innovations, enabling donors and investors to make key funding and scaling decisions prior to the availability of widespread effectiveness data. Within the field of maternal, newborn, and child health (MNCH), a few impact modeling tools already exist. The Lives Saved Tool (LiST), developed at the Johns Hopkins Bloomberg School of Public Health, has been used to estimate the impact of scaling up interventions in low- and middle-income countries [12].
The tool uses 'coverage', the proportion of the population in need that receives an intervention, as a key input to calculate cause-specific mortality. The default coverage data for the tool come from large-scale, nationally representative surveys such as the Demographic and Health Surveys and the Multiple Indicator Cluster Surveys [12,13]. PATH, a nonprofit global health organization, modeled 11 interventions using LiST under its Innovation Countdown 2030 initiative and found that 6.6 million mothers and children could be saved between 2016 and 2030 if these innovations were scaled up [14]. PATH's modeling strategy also pivoted around 'coverage' to determine how an innovation could help expand access to basic health services [14]. Another non-profit global organization, Population Services International (PSI), developed its own modeling strategy for estimating the health impact of its product distribution and service delivery efforts [15]. Unlike PATH, PSI sought to understand the impact of a single product or service delivered by the organization and wanted cross-country and cross-program comparisons. As a result, it adopted a disability-adjusted life year (DALY) measure to calculate the number of healthy years of life not lost to disability or death due to a PSI service [15]. RTI International, with support from the Bill and Melinda Gates Foundation, has developed the Maternal and Neonatal Directed Assessment of Technology (MANDATE) model. MANDATE is a web-based tool to assess the impact of medical technologies on maternal, fetal and neonatal mortality in low-resource settings [16]. The model allows users to adjust variables related to a technology's availability, appropriate use and efficacy to estimate the potential number of maternal and newborn lives saved [17,18]. Grand Challenges Canada (GCC) and Results for Development (R4D) have yet another impact modeling approach. GCC developed simple spreadsheet models to estimate the number of lives saved and lives improved due to health innovations funded under the Saving Lives at Birth (SL@B) program [7]. These interventions are novel and require consideration of contextual factors and the feasibility of scale-up, as they have not demonstrated effectiveness at scale [19]. GCC could not use LiST or MANDATE to model its innovations' impact, as these tools do not account for context-specific modalities, while PSI's DALY approach was inconsistent with GCC's lives saved estimate [7]. GCC's models are based on each innovation's theory of change, which helps reveal the chain of events that connect the direct effect of the innovation to health outcomes. This chain of events, under various scenario analyses including different assumptions about the effectiveness of an intervention, forms the key parameters included in the model [7]. With an increase in the use of innovation impact modeling and the diverse range of modeling methodologies being adopted by various organizations (including PATH, PSI, GCC, and the MANDATE initiative), there is a growing need for standardization and quality assurance. Compared with clinical studies that report the effectiveness of an intervention, health intervention impact modeling takes into consideration broader, system-level factors such as the baseline health status of the population benefitting from an intervention, local service delivery capacity, and implementation-related issues affecting intervention coverage rates.
This results in a diverse range of modeling approaches with varying impact metrics, which can be challenging to review and compare against one another. For example, PATH and GCC both modeled the projected impact of the same innovation (a new inhaled formulation of oxytocin, a gold-standard therapy for post-partum hemorrhage that currently requires refrigeration and administration by injection) across a similar timeframe but reached diverging estimates: PATH estimated 146,000 maternal lives saved globally between 2022 and 2030 [20], while GCC estimated 27,000 lives saved between 2020 and 2030 [7]. The lack of standardization in modeling makes it difficult to tease out the specific assumptions used by the two organizations that generated the differences in their model outcomes. Promoting transparency and comparability in impact modeling can potentially stem from the use of reporting guidelines and checklists. Evidence suggests that the endorsement of guidelines by journals can facilitate improved reporting [21]. Organizations using intervention impact modeling usually seek to project the impact of their interventions without necessarily having engaged in complex and large studies, and they can face challenges in data quality [22]. Guidelines and checklists can assist organizations in ensuring a minimum standard of reporting. Duke University was engaged in 2018 by the United States Agency for International Development (USAID) and GCC, two SL@B funding partners, to design and conduct an evaluation of the SL@B program to determine whether it was achieving its intended impact. The SL@B program has issued 147 awards representing 116 unique innovations and 92 organizations addressing critical issues in maternal and newborn health (MNH) in low-resource settings [23]. One component of the evaluation required estimating the potential impact of SL@B-funded innovations on maternal and neonatal mortality, which included reviewing impact models developed by GCC and R4D for four interventions. During this review process, Duke University's Evidence Lab team searched the academic literature for pre-existing and widely recognized tools and guidelines that could assist in their validation efforts. Being unable to find one relevant, standardized tool, the team developed its own checklist for health impact models, henceforth referred to as the Health Innovation Impact Checklist (HIIC). Although initially developed to complement our review of GCC's models, HIIC was further developed to provide a generalized reference or guide for various types of innovation impact models in our broader work with global innovators, and it attempts to consolidate and standardize multiple modeling approaches. The following section will introduce and explain the HIIC using some examples from innovations funded by SL@B.

The Health Innovation Impact Checklist (HIIC)
The HIIC is a tool to help review the standardized elements and pressure test the assumptions of impact models. The checklist (see Table 1) is a qualitative tool designed to review quantitative models and can be used by both reviewers and developers of health intervention impact models to help strengthen their analyses. HIIC comprises three sections: 1) Model Description, 2) Assumptions and 3) Scenarios, and each section consists of multiple categories, henceforth referred to as parameters.
Each parameter, i.e. a measurable element or factor that forms part of the checklist, highlights a particular aspect of an impact model, which the HIIC user judges as relevant or applicable to their model or not. A parameter, in the context of this checklist, can consist of a single indicator (e.g. the 'time horizon' parameter can be a single year, such as 2030) or a range of estimates (e.g. the 'efficacy' parameter can comprise multiple studies that demonstrate the effectiveness of the innovation under different settings). HIIC does not dictate how to create an impact model; instead, it enables the user to review their own model against each parameter. HIIC also requires reviewing sensitivities in model estimates. Sensitivity analysis helps determine the robustness of a model by examining to what degree the model results are affected by changes in inputs or assumptions. By requiring the user to identify and explain the various model parameters and the confidence intervals/sensitivities of model outcomes, HIIC promotes transparency in results and comparability across different impact models. The HIIC Model Description section highlights the basic components of intervention impact models, including the theory of change, i.e. the chain of events that connects the direct effects of an innovation to health outcomes. Mapping out this chain of events helps reveal the key measures that determine an innovation's potential impact, and identifying these measures helps the reviewer or modeler gauge the inherent assumptions their model makes. For example, the direct effect of a newborn temperature measurement device, known as the BEMPU TempWatch [24], would be an increase in the number of identified hypothermia cases that would not have been identified and treated in the absence of the TempWatch. The outcome, for example the number of newborn lives saved, will depend on many assumptions, including but not limited to the number of newborns receiving access to the device, newborns using the device, and newborns receiving treatment after identification of hypothermia, all of which form the key measures for gauging the projected impact of BEMPU's TempWatch [24]. The HIIC description section also highlights the following: the time horizon of the model, or the number of years across which impact is being measured (SL@B used 2030 as the end year for projecting impact, corresponding with the Sustainable Development Goals (SDGs) timeline); the target population that will use an intervention or for whom the intervention will have an effect (e.g. the population of interest in the case of the BEMPU TempWatch is low-birth-weight newborns, who are more likely to develop hypothermia than normal-weight babies, particularly in lower-resource settings) [24,25]; and the study perspective, which determines from whose standpoint the modeling exercise is being conducted. The perspective of a model may be one or more of the following: societal, healthcare sector, health practitioners, patients, innovators, funding agencies supporting the development of an innovation, and others. The impact of an intervention is not realized in the same manner across different perspectives, owing to divergent interests, making it crucial to identify from the outset which perspective to model. For example, a narrow perspective, such as that of health providers, will not account for the use of resources outside the health sector or the wider welfare of society, which would be captured under the societal perspective.
Considering that the economic resources and output of any society are limited, improving healthcare via novel innovations will require devoting more resources to health, which may necessitate forgoing benefits or opportunities in other sectors [26]. The Assumptions section is subdivided into demographic factors, efficacy and fidelity of the innovation, and health system factors. Demographic factors refer to the context in which an innovation is being implemented and include the disease burden parameter. The goal of this parameter is to question whether the model considers the baseline demographic factors, such as fertility trends, neonatal mortality or maternal mortality in the beneficiary population, and models how these factors might change in the absence of the innovation. Including trends in demographics and disease burden helps ensure that the model does not overestimate the potential future impact of the innovation. The efficacy and fidelity to treatment sub-section covers the evidence base of the model and the protocols of real-world implementation of an intervention. The efficacy parameter gauges the effectiveness of an intervention during typical and perfect use and compares these against a counterfactual (a hypothetical alternative to actual conditions minus the intervention) [27]. The effectiveness of an innovation is often the single most important factor determining how impactful the intervention will be, making the quality of this parameter's evidence base imperative. Randomized controlled trials and propensity score matching studies in peer-reviewed journals are typically the ideal form of evidence, but other sources such as intervention pilots or long-term studies can also be cited, with additional discounting to account for their limitations. Likewise, reliance on non-peer-reviewed reports for parameter data is often necessary. By requiring the user of the checklist to report their sources of evidence, HIIC promotes transparency and allows internal and external reviewers of impact models to review and compare model data, assumptions and sources across different models. (The corresponding Table 1 rows ask, for real-world fidelity to treatment protocol by intermediaries: does the model include parameters for the provision of and quality of service delivery by intermediaries (e.g. health workers, manufacturing, facility infrastructure)? And, for fidelity to treatment protocol by the beneficiary: does the model include parameters for improper or sub-optimal use of, or engagement with, the innovation? In both cases, the source of information used should be described.) The fidelity to treatment parameter reviews the process of intervention development and administration from a service delivery perspective. This parameter evaluates evidence on the ease of development, implementation and use of the intervention by intermediaries, including manufacturers, health professionals, caregivers, and beneficiaries. For example, modeling the impact of the Pratt Pouch, an innovation that delivers Nevirapine (NVP) (an antiretroviral prophylaxis) in a small sachet to HIV-exposed infants, requires taking into account the 'correct' use of the pouch [28]. Mothers can mistakenly fail to empty the complete contents of the pouch into an infant's mouth, or may not complete the entire six-week NVP regimen, resulting in incorrect use, which can reduce the effectiveness and thus the impact of the innovation [28].
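The multiplicative logic running through these parameters (a coverage chain discounted by fidelity factors such as correct pouch use) can be made concrete with a toy sketch. The Python example below multiplies through a TempWatch-style chain of events under two scenarios; every number in it is an invented placeholder, not a figure from HIIC, SL@B, or the innovations discussed.

```python
# Toy multiplicative impact chain for a hypothermia-detection device.
# All parameter values are invented placeholders for illustration.

def lives_saved(births, coverage, adherence, detection,
                treatment_rate, baseline_mortality, efficacy):
    """Each factor discounts the population that actually benefits."""
    treated = births * coverage * adherence * detection * treatment_rate
    return treated * baseline_mortality * efficacy

scenarios = {
    "conservative": dict(coverage=0.05, adherence=0.7, detection=0.8,
                         treatment_rate=0.6, efficacy=0.2),
    "optimistic":   dict(coverage=0.20, adherence=0.9, detection=0.9,
                         treatment_rate=0.8, efficacy=0.4),
}

for name, p in scenarios.items():
    saved = lives_saved(births=1_000_000, baseline_mortality=0.01, **p)
    print(f"{name}: ~{saved:,.0f} newborn lives saved per year")
```

Running the chain under contrasting scenarios is exactly the kind of sensitivity reporting HIIC asks modelers to make explicit: each discount factor is a stated, reviewable assumption.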
The Health System Factors section identifies key components of the overarching health system in which an intervention is being implemented, which can affect the innovation's utility and potential impact. The treatment of the health system should be inclusive of all potential implementation challenges given current human and resource constraints. If an intervention is in the form of a product, or its functioning depends on the availability of certain equipment or tools at a health facility, then the supply chain parameter gauges whether the model takes into account the availability of the relevant equipment, tools, or products to enable intervention use. If the intervention is a service and requires trained healthcare staff to administer it, then the attrition of health intermediaries parameter checks whether the model incorporates the regular turnover of health workers, which can decrease knowledge and use of the intervention. If the intervention does not cater to severe cases that need referral, or only caters to highly severe referred cases, then the referral parameter requires that the model discount the patients the intervention does not serve. Access to the health intervention may also vary across different segments of the population, such as urban versus rural residents or those in different wealth quintiles (economic status). The equitable access parameter reports whether these differences in access have been incorporated in the model. The Scenarios section identifies the overarching expansion strategy for the innovation and enables the reviewer to report on intervention scale-up. If the innovation is modeled to achieve universal coverage in a certain country (versus, for example, scale in government or private-sector health facilities only), then all other parameters must follow suit. For example, the time horizon should reflect the time needed to scale up across the country, while the disease burden parameter must capture fertility or mortality trends across the entire country's population.

Discussion
One of the key strengths of the HIIC is that it brings consistency and transparency across different kinds of intervention impact models. In the field of global health innovation, where a diverse range of modeling methodologies is being adopted by different organizations, HIIC enables comparison between key elements of the models. Ensuring comparability and standardized approaches to impact modeling should help donors and governments as they seek to invest in, and report on, effective and efficient innovations to achieve targeted health outcomes. For innovators and delivery organizations, comparability in modeling can facilitate a deeper understanding of their innovation's performance by helping them identify parameters against which their innovations may be suboptimal. In the health innovation field, where the effectiveness base for early-stage innovations can be missing or weak, a tool such as HIIC can help strengthen the analysis by enabling modelers to be more systematic and transparent while developing their estimates. HIIC is by no means a replacement for strategies already being used to model impact where effectiveness data exist; rather, it helps to ensure more accurate and comparable estimates prior to the availability of those data.
Using HIIC enables the modeler to state the assumptions that their model makes and to justify the sensitivities used, making it easier for modelers and reviewers studying the models to understand the rationale behind the estimates. HIIC should also aid communication between different organizations as they model impact, so that any divergent estimates can be more easily explained and clarified. Modeling the impact of health innovations is by nature a complex task. Each impact model presents its own unique measurement challenges and requires quantification of diverse input, output and outcome measures. This complexity in design, however, is not replicated in HIIC, which cannot fully encompass all aspects of a model or dictate a particular framework that a model must use. Instead, HIIC only poses questions and highlights key parameters, and the user of the checklist determines whether they apply to their model or not. Some parameters included in HIIC, such as 'efficacy' and 'equitable access', can prove difficult to measure and require in-depth knowledge of an innovation and its implementation. Including these parameters, however, is critical for assessing impact. Even where strong estimates are not easily accessible, reporting on these parameters in HIIC will ensure that the strength of the evidence is captured, which can ultimately inform a reviewer about the robustness (or limitations) of a particular impact model.

Conclusion
To make HIIC more comprehensive and potentially increase its user base, the next steps would entail engaging impact modelers who are using modeling techniques to review and critique HIIC. Field-testing HIIC on a range of impact models, keeping track of the iterations resulting from this exercise, and disseminating key learnings and revised versions of the checklist can contribute significantly to this growing field of innovation impact modeling. Differential weighting of parameters and their evidence base, scoring, and creating a more comprehensive list of essential and elective model components are examples of what future iterations of HIIC might look like. This article serves as an open call for further review and tailoring of this tool for applicability across global efforts to model the impact of health innovations.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding information
The author(s) reported that there is no funding associated with the work featured in this article.

Authors' contributions
JNB and AF conceived the original checklist idea; MS expanded on the checklist and wrote the first draft; all authors reviewed and approved the final draft.

Paper context
There are varying global health innovation impact modeling approaches being used to demonstrate impact. A lack of standardization across these methodologies can create issues of transparency and comparability. The newly developed Health Innovation Impact Checklist (HIIC) is a qualitative tool for reviewing quantitative models, designed for modelers and reviewers to pressure test assumptions and review the standardized elements of health innovation impact models. We invite stakeholders who estimate innovation impact to further refine and test this checklist.
Industrial Engineering for Healthcare Management – Example Lean Management and ICT Tools

Abstract

Industrial engineering is a field dealing with the optimization of complex processes, systems, or organizations by developing, improving, and implementing integrated systems of people, money, knowledge, information, equipment, energy, and materials. Hence, the scope of industrial engineering is wide and includes various fields, from manufacturing, through banking and different types of services, to administration and healthcare. Various industrial engineering tools can be implemented in healthcare settings. The use of such tools is popular in western economies; for example, simulation modelling of services is popular in the US. However, there is still a very limited number of case studies on the application of such tools in healthcare that consider the Polish economy. The aim of this paper is to present selected successful applications of lean management tools in Polish healthcare. This may serve as an inspiration for healthcare organizations to search for and implement methodological approaches to improve their services.

Introduction

Medicine is a dynamically developing field. It can be observed, however, that the organization of healthcare in hospitals and clinics still has significant scope for improvement. This means that the potential of medicine is not yet fully exploited and is constrained by the limitations of healthcare organization. Healthcare is a very dynamically growing sector of the economy and the labor market. The general health condition of society is very important for economic growth and, if neglected, generates significant costs for the national economy (Serwis Rzeczypospolitej Polskiej, 2015). Polish healthcare is assessed rather poorly by the Euro Health Consumer Index (EHCI), i.e. 585 out of 1000 possible points, which places Poland fourth from the bottom in the ranking. Worse results were achieved only by Albania, Romania, and Hungary. The EHCI value is increasing very slowly (511 in 2014, 523 in 2015, 564 in 2016, 584 in 2017) (Björnberg & Phang, 2019).

Industrial engineering offers a variety of methods and tools that could be implemented in order to improve healthcare, specifically when considering such functions as organization, planning, directing (leading), controlling, and motivation. The literature distinguishes two types of mentality, i.e. managerial and clinical. The latter is usually in opposition to organizational improvements in healthcare. Such issues have been noticed at governmental levels, which resulted in the top-down, country-wide computerization of Polish healthcare. Implementations of healthcare information systems and their integration with governmental databases are important factors for the smooth operation of healthcare units. These are, however, insufficient nowadays. Management sciences and industrial engineering offer some interesting approaches for organizational improvements in their toolbox, e.g. lean management focused on the elimination of waste (Jap. muda). Waste results in irregularities (Jap. mura) and less than optimal allocation of resources (Jap. muri) (Imai, 2012). Lean management evolved from the Toyota Production System. There are also numerous information and communication technologies that can be applied in healthcare for organizational improvements, e.g. simulation modelling and auto-identification.
The aim of this paper is to present a subjective review and excerpts from the lean management and ICT toolbox, leading to an analysis of their possible applications in healthcare settings. A case study is presented to show applications of lean management tools in a hospital emergency department (ED). Specifically, 5S, Value Stream Mapping, and standardization are discussed for the chosen ED in order to show how to identify problems and plan improvements methodologically. It is also indicated that other complementary tools may be implemented for the chosen ED. The case study in question may constitute a certain benchmark for other EDs, of which there are 235 public ones in Poland (Serwis Rzeczypospolitej Polskiej, 2019).

Lean Healthcare

Lean management originated from the lean manufacturing concept, which focuses on the identification and elimination of waste occurring in production processes. The author and designer of the Toyota Production System defined 7 types of waste, i.e. transportation, inventory, motion, waiting, overproduction, over-processing, and defects (Ohno, 1998). It should be stressed that the same types of waste can be observed not only in production processes, but in other areas as well, e.g. administration, healthcare, and services in general. In practice, a specific waste usually influences other types of waste, so waste elimination should be planned considering its influence on all other types of waste. For example, over-production usually generates inventory and other types of waste in succeeding operations. All lean methods and techniques focus on the aforementioned identification and elimination of waste. Later publications put the emphasis on people (Liker, 2004).

All activities should be focused on the creation of value as defined by the customer (who could be an internal one). Value-adding, necessary but non-value-adding, and non-value-adding (waste) activities can be observed in manufacturing processes. All processes should be mapped and analyzed in the context of the delivered value: non-value-adding operations should be eliminated, stoppages and backflows should be eliminated, and the process should be executed perfectly and in accordance with the customer's takt time (in a Just-in-Time manner). Problems should be solved immediately.

The lean toolbox can be divided into 3 categories (Gladysz & Buczacki, 2018):
- identification and analysis of waste, e.g. causal diagrams, Value Stream Mapping (VSM),
- implementation of improvements, e.g. Single Minute Exchange of Die (SMED), poka-yoke,
- process monitoring, e.g. andon, supermarket.

One method or tool may serve different purposes, and these tools are not necessarily typical of the lean concept only; usually they are also utilized in other approaches focused on process improvement. Lean management as a concept implemented in healthcare is labeled "lean healthcare" (Kovacevic et al., 2016) or "lean hospitals" (Graban, 2011). Jimmerson (2017) considered the seven mudas for healthcare and listed: 1) confusion, 2) motion/conveyance, 3) waiting, 4) overprocessing, 5) inventory, 6) defects, and 7) overproduction. The first application of lean healthcare was reported in 2000 in the UK, followed by 2002 in the USA, and today it is accepted globally (Radnor et al., 2012).
There is evidence of its positive impacts on healthcare organization, e.g.:
- reduction of the average waiting time for the first visit from 23 to 12 days and of lead time by 48% in Scotland Cancer Center,
- reduction of the time taken to process important categories of blood from 2 days to 2 hours, and reduction of the average turnaround time in pathology from over 24 hours to 2-3 hours in Royal Bolton Hospital,
- reduced staff walking distance, reduced lab space, specimen processing turnaround time cut by 20%, reduced manpower (transferred to other critical points), and a decrease in the average patient's stay from 6.29 to 5.72 days in Nebraska Medical Center,
- intensive care unit cost reduction of almost 0.5 million USD per year and a 90% reduction in the number of recorded infections within 90 days of implementing a changed procedure for intravenous line insertion in Pittsburgh General Hospital.

On the other hand, there are substantial barriers to organizational innovations such as lean healthcare. They are rooted in the well-known concept of clinical mentality (Freidson, 1972). Although this is not a new phenomenon, the concept is still valid nowadays. Clinical mentality is appropriate and required in medical procedures. However, it limits organizational improvements. The reasons lie in the different viewpoints of medical and managerial staff in terms of:
- execution of processes (patient-wise vs. organization-wise),
- responsibility (personal vs. organizational),
- organizational dependencies (horizontal/collegial vs. vertical/hierarchical),
- timeframe (short-term vs. long-term),
- feedback (immediate/concrete vs. delayed/fuzzy),
- tolerance for ambiguity (low vs. high).

Both mentalities (managerial and clinical) appear to be complete opposites. However, both are necessary for the effective and efficient performance of healthcare units in their different aspects. Managers obviously need to understand the medical priorities of medical staff. On the other hand, medical staff should be open to organizational improvements, which would lead to better quality of healthcare processes.

A systematic literature review of lean healthcare, covering 101 papers from the period 2000-2016, has been presented (Antony et al., 2019). The scope of this analysis proves that there are many reports on lean implementations in healthcare, and those papers answer questions concerning their drivers, limitations, motivations, or benefits. The discussed review, however, synthesized knowledge based mainly on US (29), UK (23), and Swedish (13) applications. No paper related to Poland was included. Therefore, a general question was raised: has lean healthcare been applied in Poland? And if so, should this be disseminated and presented as a potential benchmark for other healthcare units? This goal was addressed through the identification of an exemplary lean healthcare application in Poland and a discussion of this case study.

Implementation of the lean approach in the Polish healthcare system is still in its emerging phase. There are some initiatives focused on the development of tools dedicated to Polish hospitals, e.g. the LeanOZ project conducted by the Polish Society on Healthcare Economics, whose results were disseminated during a conference. Although this seems to be an important effort, it is not enough to fully disseminate knowledge about the lean approach among Polish healthcare institutions.
Therefore, the aim of this paper is to start a discussion concerning lean healthcare in Poland through journal channels of communication and knowledge dissemination.

Information and Communication Technologies for Healthcare

A modern organization needs symbiosis and synergy between organizational improvement based on methodological approaches (such as lean management) and ICT, which has become necessary to deliver goods and services. Such is also the case with healthcare services. Hospital Information Systems are the part of informatics in healthcare settings that is mainly focused on administrative issues related to the delivered services; therefore, one may also find them labeled as hospital management systems/software. These systems are built from different modules allowing better patient information management. Example modules include, but are not limited to, registration features and a repository of laboratory analyses (e.g. imaging, blood, etc.). A feature widely employed in such systems are barcodes, used for patient identification in hospitals but also for inventory (asset) management.

Although barcodes are in their mature stage, there are also other auto-identification technologies that can be used in hospitals, such as real-time locating systems (RTLS) based on active radio frequency identification (RFID), often built on standard Wi-Fi infrastructure, although proprietary radio standards also exist. RTLS is used for real-time asset and staff management. Assets equipped with an RTLS tag are located over the air, and their location is accessible via software to anyone concerned. A good example may be infusion pumps, whose number is lower than the number of departments using them. Knowing the location and status of pumps in a hospital, it is possible to order the closest available one, saving time and costs. It is also possible to call the closest available medical worker with the necessary skills in case of an urgent situation; in this case, RTLS tags serve as the staff's personal badges. These tags, distributed to patients (e.g. in the form of a bracelet), may also be equipped with panic buttons or movement and drop sensors, which enable them to send an alarm when an unexpected event occurs (e.g. a patient's fall). Another application of RFID (in this case passive) technology is the tagging of surgical tools: each tool equipped with a unique RFID tag is identified at each step, e.g. entering/leaving the operating room, or entering/leaving the autoclave for sterilization.

Simulation modeling is a tool widely used in industrial engineering, especially in manufacturing and logistics settings. However, simulation models may also be designed for healthcare settings, e.g. to simulate the dispersion of hospital-acquired infections, or the flow of patients through specific processes (e.g. to simulate the necessary capacity of imaging laboratories), etc.

Application of selected IE tools in a Polish Emergency Department

According to the national regulation in Poland, a hospital emergency department (ED) may be formed in a hospital which has:
- a surgical department with traumatology (and pediatric surgery in the case of children's healthcare),
- an anesthesiology and intensive care department,
- a diagnostic imaging laboratory.
The ED must have 24/7 access to:
- services delivered by the medical diagnostic laboratory,
- computed tomography and endoscopic examinations, including gastroscopy, rectoscopy, bronchoscopy, and laryngoscopy,
- equipment for examinations at the patient's bed (critical parameters analyzer, bedside X-ray set, and mobile ultrasound scanner).

The ED must also have 24/7 access to an airport or airstrip in close proximity (reachable without specialized transportation, with transport to the ED taking no longer than 5 minutes). Additionally, the ED must employ at least a head of the department (the doctor in charge of the department), a ward nurse, doctors in the number necessary for the proper functioning of the ward (with at least two doctors simultaneously present in the ED), and nurses or paramedics in the number necessary for the proper functioning of the ward (Rozporządzenie Ministra Zdrowia z dnia 27 czerwca 2019 r. w sprawie szpitalnego oddziału ratunkowego, 2019).

The ED considered in this article is the one within the Copernicus Provincial Multidisciplinary Centre of Oncology and Traumatology in Lodz. The analyzed ED is medium-sized by Polish standards. It consists of the following Points of Service:
- 3 registration desks;
- triage room;
- 3 assessment rooms (with beds);
- 2 tomography stations;
- treatment room;
- 2 X-ray stations;
- 2 resuscitation rooms;
- 8 ED beds.

In the analysed ED, basic elements of lean have been implemented. The staff and management use the PDCA cycle for process improvement. As a basis, 5S was implemented, as well as elements of visual management (Figure 1).

Figure 1. Examples of 5S, visual management and standardization: pre-intubation checklist (left), resuscitation procedure (center), organization of a workstation (right).

In the ED, data about waste (classified in accordance with the lean approach) is collected and analysed. Value stream mapping in a healthcare setting is discussed in detail in (Jimmerson, 2017). The process at the ED where time is of particular importance is the treatment of stroke patients. For this reason, this process was chosen for value stream mapping, and the following aspects were analyzed: the ED layout, alternative paths of the process, interviews with doctors, the database of patients' treatments (year 2016), and the results of treatment based on discharge records. Timestamps of the execution of the different steps of treatment were analyzed. The analysis considered a sample of 24,395 ED patients in 2016, including 577 stroke patients, with 104 cases of thrombolysis among them. Obviously, the presented results reflect the specific characteristics of the analyzed ED. Those results, however, may serve as a benchmark for a methodological approach to analyses and organizational improvements in healthcare. The current state VSM (Figure 2) presents the synthetic parameters and indicators depicting the analyzed process (Buczacki et al., 2017).
It is constructed of six layers (swim lanes), which present the aspects most important from the point of view of the analysis:
- Health status: a diagram based on the standardized EQ-5D-5L survey questionnaire (scale 0-100 points);
- Supporting processes and locations where services are delivered without the presence of patients;
- PoSs (Points of Service): locations where medical services are delivered in the presence of patients;
- Key VSM parameters: depicting the quantitative flow of patients, including resource parameters;
- Process time: divided into service/treatment time, waiting/transport time, and calendar lead time (the sum of calendar days of treatment);
- Costs of the value stream: operational costs for each stage of the process, allocated to room occupancy, equipment use, personnel, medications, materials, and patient-related services.

The current state map was analysed, and a list of critical issues was created:
- The critical indicator for stroke patients is stroke-to-thrombolysis time, which includes door-to-needle time in the ED;
- The critical internal indicator for stroke treatment is door-to-CT time;
- A critical success factor is communication between paramedics and ED staff;
- A critical success factor is a standardized procedure of thrombolysis with a detailed work breakdown among ED personnel;
- A critical success factor is communication with doctors about the planned treatments;
- A critical success factor is communication between employees performing CT and those responsible for thrombolysis.

The current state VSM visualizes all phases of a patient's treatment. A patient can be described by two general states, i.e. in treatment (processed) or not in treatment. Processing time (PT) depicts the duration of value-adding activities. If a patient is not being treated, then two sub-states are possible, i.e. waiting or being transported. Both of those sub-states are represented by the WT parameter (time of non-value-adding activities) on the VSM. The PT/WT ratio was lower than 10%, which is relatively low and indicated the potential for organizational improvements. An analysis of the current state VSM shows very high WT values for the intensive care and the stroke and early rehabilitation departments. Therefore, this is where improvement actions should be implemented first of all.

Based on the current state value stream map, the following actions/improvements were proposed as standards for doctors and ED staff and as a way to address the critical issues listed above:
- introduction of the NIHSS (National Institute of Health Stroke Scale) survey;
- implementation of checklists;
- inclusion of a pharmacist in medical teams;
- inclusion of a social official in the process of identifying witnesses of symptoms (possibly contacted by phone, e-mail, etc.);
- implementation of PoCT (Point of Care Testing) for lab tests.

Several areas of necessary improvement actions, identified through value stream mapping, are listed above. The proposed improvement actions are strictly related to the standardisation of operations in the ED. It seems that a reduced door-to-needle time is a realistic result of standardisation and the design of detailed procedures (with responsibilities, assets listed, forms designed, etc.)
for the following:
- communication between paramedics and ED staff: a questionnaire of data to be collected from paramedics by ED staff;
- communication with doctors about schedules (leading to a decrease in patient waiting times);
- communication between CT and thrombolysis performers, leading to a smooth flow between those locations with reduced delays, waiting, and transport cycle times;
- an optimized and validated procedure for thrombolysis (leading to a decrease in thrombolysis cycle time).

Emergency departments are appropriate units for the application of simulation modelling, as the settings are fuzzy and stochastic in nature. Simulation has not yet been implemented in the considered ED. However, practices and models were identified that could constitute a solid base for simulation modelling in the analyzed ED. Existing experience with discrete-event simulation modelling applied to EDs can be used as a starting point for simulation modelling of the specific case of the ED in question. Figure 3 and Figure 4 present a simplified model (Anylogic, 2019). Many authors, however, have developed more complex models dedicated to door-to-doctor time (Ribeiro et al., 2016), overcrowding (Ahalt et al., 2018), the impact of work pressure (Choi et al., 2019), and many other aspects. Applying agent-based modelling to the analysed ED, one may design a model with three states of the agent (patient), i.e. 1) treated (processed), 2) waiting (not treated), and 3) being transported (not treated); a minimal sketch of such a model is given after the conclusion below.

Conclusion

The lean approach is not very common in Polish hospitals. There are some implementations, but they are rather fragmentary and do not apply to whole processes. This situation should change, as implementation of the lean approach would give much room for improvement of healthcare services. Healthcare process analysis could be supported by IT technology in different roles, for the following:
- data acquisition;
- data processing;
- process analysis, forecasting, and configuration (simulation).

A single reference model implemented by a Minister of Health regulation would make it possible to achieve a scalable effect for implementations of possible solutions in EDs. For emergency departments, it is common to assume time-to-bed and length of stay as output parameters. Input data include resources, such as beds, equipment, cars, doctors, nurses, technicians, etc. The goal might consist of the optimal allocation of cars and rescue bases, paramedics, doctors, nurses, and equipment, as well as the organization of care services, so as to achieve the required service level.

The presented examples, even if simplistic in nature, prove the great potential for improvement and the significant impact of basic lean management tools on the effectiveness of healthcare services in an example Polish ED. It is worth noting that the presented example also points to further possibilities of improvement by enabling continuous flow wherever possible.
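As a purely illustrative complement to the agent-based view sketched above, the following minimal discrete-event model (Python with the SimPy library) tracks the treated/waiting/transported split of patient time and reports the PT/WT ratio discussed in the VSM analysis. All capacities, arrival rates, and service times are assumptions for demonstration only, not figures from the analysed ED.

```python
# A minimal discrete-event sketch of ED patient flow using SimPy.
# All rates and capacities below are illustrative assumptions.
import random
import simpy

N_BEDS, N_CT = 8, 2            # resource capacities (assumed)
ARRIVAL_MEAN = 20.0            # mean minutes between arrivals (assumed)
TREAT_MEAN, SCAN_MEAN = 30.0, 15.0  # mean service times, minutes (assumed)
TRANSPORT_TIME = 5.0           # bed-to-CT transport, minutes (assumed)

def patient(env, beds, ct, totals):
    # Stage 1: wait for and occupy an ED bed (treatment = value-adding PT)
    t0 = env.now
    with beds.request() as req:
        yield req
        totals["waiting"] += env.now - t0
        t0 = env.now
        yield env.timeout(random.expovariate(1 / TREAT_MEAN))
        totals["treated"] += env.now - t0
    # Stage 2: transport to CT (non-value-adding, counted separately)
    yield env.timeout(TRANSPORT_TIME)
    totals["transported"] += TRANSPORT_TIME
    # Stage 3: wait for and use a CT station
    t0 = env.now
    with ct.request() as req:
        yield req
        totals["waiting"] += env.now - t0
        t0 = env.now
        yield env.timeout(random.expovariate(1 / SCAN_MEAN))
        totals["treated"] += env.now - t0

def arrivals(env, beds, ct, totals):
    while True:
        yield env.timeout(random.expovariate(1 / ARRIVAL_MEAN))
        env.process(patient(env, beds, ct, totals))

random.seed(42)
env = simpy.Environment()
beds = simpy.Resource(env, capacity=N_BEDS)
ct = simpy.Resource(env, capacity=N_CT)
totals = {"treated": 0.0, "waiting": 0.0, "transported": 0.0}
env.process(arrivals(env, beds, ct, totals))
env.run(until=24 * 60)  # one simulated day

pt = totals["treated"]
wt = totals["waiting"] + totals["transported"]  # WT = waiting + transport
print(f"PT = {pt:.0f} min, WT = {wt:.0f} min, PT/WT = {pt / max(wt, 1e-9):.2f}")
```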
New York University School of Medicine Drug Development Educational Program: 2-Year Benchmark

Drug development (DD) is a multidisciplinary process that spans the translational continuum, yet remains an understudied entity in medical schools and biomedical science institutes. In response to a growing interest and an unmet need, we implemented a DD course series that details the identification of viable molecular targets, clinical trial design, intellectual property, and marketing. Enrollment is open to faculty, postdoctoral trainees, and MD, PhD, and MS students. After 2 years, 37 students and 23 students had completed the fall and spring courses, respectively. Pre/post-surveys demonstrated gained knowledge across course topics, with mean survey scores increasing by 66% (p < 0.001) after each course. Lectures for each course were consistently rated highly, with a mean course rating of 4.1/5. Through this program, trainees will have a more innovative approach toward the identification of therapeutic targets and modalities. Furthermore, they will learn to integrate technology and biomedical informatics to find creative solutions in the DD process.

Drug discovery and drug development (DD) are often referred to as a pipeline that starts with a clinical question, moves to the laboratory, and, if successful, journeys through trials and production to clinics and patients, in the end affecting practice standards and guidelines. However, this process is rarely straightforward and can take between 10 and 15 years. 1 Additionally, the cost to discover and develop one new drug has been increasing for the past several decades, 1,2 averaging US $2.6 billion. This significant cost and time is due to many newly discovered chemical compounds falling into the translational gap, i.e., failing to make it through clinical trials and become viable products that are available to the public. Only 1 of every 10,000 new chemical compounds identified during the discovery process is developed with the goal of advancing to human trials, and, in the end, only 10-16% of drugs that enter phase I trials are approved. 1

Due to the cost, the risk of failure, and a development time that is often over half the length of a patent term, there is a trend among larger drug companies to invest at later stages of DD, when clinical proof-of-concept has already been established. 1,2 There has also been a trend for companies to focus on specialty medicines or biologics, as opposed to primary care medications, due to a higher unmet medical need and, therefore, more potential for profit. 3 Collaborative partnerships between the pharmaceutical industry and academic institutions have been proposed as one method to make the DD process more efficient and cost-effective. 1,2 To this end, many drug companies have reorganized their infrastructure so that their research and development facilities are located in cities with large universities, creating hotspots for bioscience innovation. 3 However, there is still the issue of overcoming the translational gap. From the industry perspective, pharmaceutical companies have recognized that the translational gap is a significant barrier to productivity and have designed translational medicine guides to improve their research and development process. 4 As DD epitomizes the translational research continuum, we believe that education, specifically at an early career stage, is part of the solution.
At the time of writing, there are no known DD programs within the 62 US active members of the Clinical and Translational Science Award/Clinical and Translational Science Institute consortium. We describe here our Drug Development Educational Program at The New York University-New York City Health and Hospitals Clinical and Translational Science Institute at the New York University School of Medicine.

METHODS

Our DD program was created in an effort to examine drug discovery and development with a fine lens and teach interested students about both successes and failures, with the hope that those trained will go forward and improve the process. We were awarded an educational grant in response to a National Institute of Diabetes and Digestive and Kidney Disease educational program grant announcement (PAR-10-092); the program was created by centralizing, adapting, and expanding existing courses at New York University School of Medicine as well as opening enrollment to a more extensive NYU student base, with future plans to enroll externally. The goals are ultimately to teach a new generation of researchers and drug discovery/development career-minded individuals at the graduate and postgraduate level to harness novel bench-to-bedside empirical knowledge in their research and/or related endeavors using a multidisciplinary approach.

Our educational program is unlike standard pharmacology courses in that our courses highlight the many essential and innovative features of the DD process and integrate all components of the translational continuum, from molecular signaling pathways at the bench to public health policies and practice standards. The curriculum focuses on both the scientific and "nonscientific" core tenets of DD, with speakers from academia, industry, the economic and legal sectors, as well as government agencies. Currently, our DD program consists of two courses, one in the fall and one in the spring semester. There are no prerequisites for either course, and both courses were designed to be appealing to a wide variety of students with different backgrounds and levels of knowledge of pharmacology. There is also no set order to the course series; depending on their program and schedule, students may take the fall course followed by the spring course or vice versa.

Fall course: Drug development in a new era (BMSC-GA 4419)

In our fall course, students learn how a new chemical entity, drug, or device is brought to the consumer market. There are many different avenues of drug discovery and product development, and many aspects of development are focused on satisfying regulatory requirements mandated by the US Food and Drug Administration and other regulatory agencies. The US Food and Drug Administration's Center for Drug Evaluation and Research asserts that its mission is to "protect and promote public health by ensuring that human drugs are safe and effective." 8 As such, preclinical, pharmacokinetic, pharmacodynamic, stability, and toxicity studies, clinical trials, and postmarketing surveillance are all elements that are important for researchers, as well as those who propose to gain expertise in the basic and environmental health sciences, to understand as prerequisites for US/global market approval.
Furthermore, protocol planning, safety monitoring, and data and cost analysis are essential parts of this interdependent and collaborative process, involving individuals from a diverse range of disciplines, including the basic and clinical sciences, statistics, management, legal, and marketing departments. A core tenet of the course is to provide an overview of this innovative, multidisciplinary process. To ensure that an interesting and broad range of topics is covered, invited lecturers are from the academic and private sectors and comprise physicians as well as nonmedical professionals. Presentations range in content from regulatory control to marketing strategies (Table 1) and run for 90 min, followed by a 30-min discussion period.

Spring course: Molecular signaling and drug development (BMSC-GA 4475)

The spring course explores the nature of DD from the biomedical and biochemical perspectives. Students are taught principles of creating therapeutics in a laboratory, ranging over topics as diverse as glycosaminoglycan signaling pathways and RAS and AKT signaling in the therapeutic drug response. In contrast to the fall course, most lectures are taught by people with high-level scientific knowledge on topics with the potential to convey information that would be helpful in the design of new drugs from a bench standpoint (Table 1). As in the fall course, lectures run for 90 min with a 30-min discussion period.

Program assessment

Pre- and post-course surveys

On the first and last day of each course, students completed a seven-item (fall) or eight-item (spring) survey containing questions about their perceived knowledge of course topics (see Supplementary Files S1 and S2). Each survey also had one additional question asking how relevant students thought the course was to their career (Figure 1). Of note, one knowledge question, which asked about identifying and categorizing drugs, was added to the spring course survey during the second year of the program. Response options for each question were: nothing (1); almost nothing (2); some (3); and a great deal (4). Pre/post differences were compared on each individual knowledge question item as well as on a total knowledge score calculated by summing the score of each knowledge question. Pre/post differences in the career relevance question were analyzed separately.

Lecture ratings

On the last day of each course, students rated each lecture on four domains: content, presentation, relevance, and overall (see Supplementary Files S3 and S4). Response options for each domain were: poor (1); lower than expected (2); satisfactory (3); above expectations (4); and superior (5). The mean score of each domain was calculated for each lecture from individual students' scores. Then, a grand mean of each domain for the course was calculated from each individual lecture's mean score.

RESULTS

Course enrollment and career development support

During the first 2 years of our program, 37 students enrolled in the fall course, Drug Development in a New Era, including 11 MD/Masters of Science in Clinical Investigation students (30%), 11 PhD students (30%), 4 faculty (11%), 7 fellows (20%), and 3 MS students (9%). In these same years, 23 students enrolled in the spring course, Molecular Signaling and Drug Development. Of these, 21 were PhD students (91.3%), 1 was an MD/PhD student (4.3%), and 1 was faculty (4.3%).
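As an illustration of the pre/post analysis just described, the following Python snippet applies the same normality check and nonparametric paired test using scipy; the paired scores below are fabricated placeholders, not the program's actual survey data.

```python
# Illustrative pre/post survey analysis with scipy (made-up data).
import numpy as np
from scipy.stats import shapiro, wilcoxon

# Total knowledge scores (sum over knowledge questions), one pair per student.
pre = np.array([12, 14, 10, 15, 13, 11, 16, 9, 17, 12])
post = np.array([17, 20, 17, 23, 22, 21, 27, 21, 30, 26])

# Normality check on each distribution (Shapiro-Wilk, as in the paper).
for label, scores in (("pre", pre), ("post", post)):
    w, p = shapiro(scores)
    print(f"{label}: Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

# Ordinal/non-normal scores -> nonparametric paired comparison
# via the Wilcoxon signed-rank test on the paired differences.
stat, p = wilcoxon(pre, post)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.4f}")
```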
Enrolled students were eligible for support for career development opportunities, such as attendance at conferences, workshops, and fairs ($20,000-$25,000/year).

Pre/post-course surveys

Fall course: Drug development in a new era

Of the 37 students who enrolled in the Drug Development in a New Era course across both years, 33 completed both the pre- and post-course surveys (89.2%). The responses to all of the individual knowledge questions and the career relevance question at both time points were not normally distributed (Kolmogorov-Smirnov p < 0.001; Shapiro-Wilk p < 0.001 for each item); therefore, nonparametric tests were used for data analysis. The pre-course total knowledge score was normally distributed (Kolmogorov-Smirnov p = 0.20; Shapiro-Wilk p = 0.65), but the post-course total knowledge score was not (Kolmogorov-Smirnov p = 0.02; Shapiro-Wilk p = 0.03); therefore, nonparametric tests were used for data analysis.

Pre/post differences in knowledge were analyzed using the Wilcoxon Signed Rank test (Table 2, Figure 1). There were significant differences in each individual knowledge question (Z = -4.22 to -5.03; p < 0.001) as well as in the total knowledge score (Z = -5.02; p < 0.001). Pre/post differences in career relevance were also analyzed using the Wilcoxon Signed Rank test (Table 2, Figure 1). The mean pre-course career relevance score was 3.49 (0.69) and the mean post-course score was 3.48 (0.67), indicating that students thought the course was highly relevant to their careers. There was no difference in perceived career relevance (Z = 0.00; p = 1.00).

Figure 1. Responses to pre- and post-course surveys for the Drug Development in a New Era course. Students were asked to rate their knowledge of course domains and the perceived career relevance of the course on a four-point scale (nothing, almost nothing, some, or a great deal). There were significant self-reported increases in knowledge for all questions; however, there were no significant pre/post-course differences in perceived career relevance.

Spring course: Molecular signaling and drug development

Pre/post differences in knowledge were analyzed using the Wilcoxon Signed Rank test (Table 3, Figure 2). There were significant differences in each individual knowledge question (Z = -2.98 to -3.34; p ≤ 0.003), except for the question about identifying and categorizing drugs that was added during the second year of the course (Z = -1.34; p = 0.18) and, therefore, answered by a small number of students (n = 4). There was also a significant difference in the total knowledge score (Z = -3.41; p < 0.001).

Figure 2. Responses to pre- and post-course surveys for the Molecular Signaling and Drug Development course. As in the fall course, students were asked to rate their knowledge of course domains and the perceived career relevance of the course on the same four-point scale. There were significant self-reported increases in knowledge for all questions except question 6. Again, there were no significant pre/post-course differences in perceived career relevance.

Pre/post differences in career relevance were also analyzed using the Wilcoxon Signed Rank test (Table 3, Figure 2). The mean pre-course career relevance score was 3.65 (0.49) and the mean post-course score was 3.47 (0.70), again indicating the high relevance of the course to students' careers. There was no difference in perceived career relevance (Z = -0.59; p = 0.56).

Lecture ratings

Thirty students in the Drug Development in a New Era course completed the lecture ratings (81.1%). The mean rating for each domain was: content 4.04 (0.82), presentation 3.95 (0.89), relevance 4.12 (0.82), and overall 4.02 (0.84).
These mean scores correspond to a rating of above expectations across all four domains, with most responses ranging from satisfactory to superior. In the Molecular Signaling and Drug Development course, 16 students completed the lecture ratings (69.6%) and, again, the lectures were rated very highly, with mean ratings of: content 4.16 (0.85), relevance 4.24 (0.83), and overall 4.18 (0.81).

DISCUSSION

Limitations

As a newly developed program at a single academic institution, there are a number of limitations to our current program evaluation. First, we did not collect background information about the students who enrolled in either course, in terms of prior experience in DD, prior education in pharmacology, or why they chose to enroll in our DD program. Our sample size was also limited (n = 37 in the fall course and n = 23 in the spring) because only one session of each course is offered per semester. Additionally, neither the pre- and post-course surveys nor the lecture ratings were mandatory for students enrolled in the courses. Although the Drug Development in a New Era course had a relatively high response rate (89.2% completed both pre- and post-course surveys; 81.1% completed lecture ratings), the Molecular Signaling and Drug Development course had lower response rates (65.2% for the surveys, 69.6% for lecture ratings), which could reflect a lower interest in, or opinion of, the spring course. The pre- and post-course surveys were also limited in the type of data collected: they only ask about self-reported knowledge of each course topic, and there are no quizzes or final examination to determine knowledge more objectively. Finally, as our DD program is relatively new, with the first courses running in the 2012-2013 academic year, we have not collected longitudinal data about how our program influences the career trajectory or research interests of students who have taken one or both courses.

Program evaluation and expansion

The course series is in its fourth year, and we continue to collect course surveys and lecture ratings. The next step in evaluating our program is to better characterize the prior experience of students who enroll in our courses in terms of their interest and prior experience with pharmacology and DD. We also plan to follow up with students after graduation to determine their involvement in translational research, especially DD, and to survey them on the perceived impact of our program on their career trajectory and current work.

Due to increasing enrollment, we have expanded the DD program in several ways. First, we introduced another course, entitled "Biotechnology Industry, Structure, and Strategy," in the spring 2015 semester. Second, we developed a New York State approved concentration and certificate program entitled "Health Innovations and Therapeutics," intended for students interested in health entrepreneurship (Table 4). Finally, we plan to extend our educational program to include more postdoctoral trainees by collaborating with NYU's biomedical institutes and Broadening PhD Career Awareness and Preparation, a new model for training scientists for careers outside of academia, whose goal is to transform scientific training into a tailored program that maximizes quality and efficiency.
CONCLUSION

Our DD educational program is brimming with potential, as demonstrated by student-reported increases in knowledge across all course domains, the high relevance of course topics to students' future careers, and consistently high ratings of our multidisciplinary lecturers. These successes and our ongoing expansion make the program poised to produce researchers and clinicians capable of tackling the complex issues in modern DD and narrowing the translational gap. Our program can also serve as a model for like-minded academic institutions that aim to develop innovative, collaborative programs committed to shortening the path to developing new disease-modifying therapies and technologies in order to improve public health.
Structural basis and dynamics of Chikungunya alphavirus RNA capping by nsP1 capping pores

Significance

Here, we present a biochemical and structural characterization of the capping pathway carried out by the Chikungunya virus nonstructural protein 1 (nsP1) capping pores. We provide five cryo-EM structures that represent the different steps of the reaction. These structures reveal the molecular determinants and dynamics associated with the alphavirus capping process. In addition, we biochemically demonstrate RNA capping specificity and the reversibility of the reaction, which allows nsP1 to cap and decap RNAs and to release intermediates of the reaction. These data provide biochemical clues about the enzymatic activity of nsP1 capping pores and a structural landscape that will be instrumental for the design of effective antivirals targeting viral RNA capping to block alphaviral infection.

Abstract

Alphaviruses are emerging positive-stranded RNA viruses which replicate and transcribe their genomes in membranous organelles formed in the cell cytoplasm. The nonstructural protein 1 (nsP1) is responsible for viral RNA capping and gates the replication organelles by assembling into monotopic membrane-associated dodecameric pores. The capping pathway is unique to alphaviruses, beginning with the N7 methylation of a guanosine triphosphate (GTP) molecule, followed by the covalent linkage of an m7GMP group to a conserved histidine in nsP1 and the transfer of this cap structure to a diphosphate RNA. Here, we provide structural snapshots of different stages of the reaction pathway, showing how nsP1 pores recognize the substrates of the methyl-transfer reaction, GTP and S-adenosyl methionine (SAM), how the enzyme reaches a metastable postmethylation state with SAH and m7GTP in the active site, and the subsequent covalent transfer of m7GMP to nsP1 triggered by the presence of RNA, as well as the post-decapping conformational changes inducing the opening of the pore. In addition, we biochemically characterize the capping reaction, demonstrating specificity for the RNA substrate and the reversibility of the cap transfer, resulting in decapping activity and the release of reaction intermediates. Our data identify the molecular determinants allowing each pathway transition, providing an explanation for the need for the SAM methyl donor all along the pathway and clues about the conformational rearrangements associated with the enzymatic activity of nsP1. Together, our results set the ground for the structural and functional understanding of alphavirus RNA capping and the design of antivirals.

structural biology | biochemistry | virology | alphavirus | membrane proteins

Chikungunya virus (CHIKV) is an arbovirus transmitted to the human host by members of the Aedes mosquito family, causing infections that are characterized by fever, rashes, and debilitating joint pain. Although infections are rarely lethal and usually resolve in a few weeks, in some cases symptoms can persist for years (1), and recent outbreaks have resulted in an unprecedented number of infections in populations exposed to the virus (2). Alphaviruses such as CHIKV possess a positive-sense, single-stranded genome that must be replicated following release into the host cell. Genome replication occurs in membranous replication organelles (called "spherules"): invaginations that are derived from remodeling of the host cell membrane during infection (3).
Each spherule houses a replication complex (RC) formed from four virally encoded nonstructural proteins (nsPs) that function cooperatively in RNA synthesis (4). Within the RC, nsP4 is the RNA-dependent RNA polymerase (5, 6), nsP2 has helicase activity (7) and proteolytically cleaves the nsPs from a viral polyprotein precursor for the formation of mature RCs (8, 9), and nsP3 has a role in the recruitment of host factors to the spherule (10). NsP1 is the membrane anchor for the complex (11), forming dodecameric pores that associate monotopically with the membrane in the necks of the spherules to gate their entrance (12, 13). Enzymatically, nsP1 also has a role in the addition of cap0 structures to the 5′ end of the positive-sense viral RNAs (14-16).

Cap structures, minimally formed by covalent linkage of an m7GMP moiety to the first nucleoside of the RNA via a 5′-5′ triphosphate (3p) bond (cap0), are universally found in host messenger RNAs (mRNAs) and are essential to the processing, stability, and translation of transcripts (17). In higher eukaryotes, the 2′O ribose may be further methylated (cap1), along with internal bases of the mRNA. For viruses, capping of viral RNAs is thus often exploited as a means of hijacking the host translation machinery, and it has an additional role in the evasion of host innate immunity by preventing the recognition of terminal RNA phosphates by cytosolic RIG-I and IFIT1 receptors (18).

Chikungunya infection produces two viral RNA species that are capped by nsP1: the full-length 11.8 kb positive-sense genomic RNA (gRNA), and a subgenomic RNA (sgRNA) of 4.3 kb that is transcribed from the second open reading frame at later stages of infection and encodes only the structural polyprotein (19, 20). Intriguingly, despite the central role of RNA capping in viral infection, recent studies suggest that not all alphaviral gRNAs packaged into virions are capped, and that uncapped RNAs may have an important role in modulating the host immune response to infection (21).

NsP1 directs cap synthesis via a mechanism that differs from the conserved capping pathways of most cellular and viral capping enzymes (15). In canonical pathways, a dedicated guanylyltransferase (GTase) enzyme transfers a guanosine monophosphate (GMP) moiety from GTP to the 5′ phosphate of a diphosphate (2p) RNA (GpppA(N)). Methylation of the guanosine by a separate methyltransferase enzyme (or enzymes) yields the cap structure (m7GpppA(N) for cap0). NsP1 possesses both N7 methyltransferase and guanylyltransferase activity and reverses the order of these reactions: a methyl group must be transferred to GTP from a SAM substrate (forming m7GTP) prior to the transfer of the m7GMP cap structure to the enzyme, forming a covalent nsP1-cap0 intermediate on a conserved histidine (CHIKV H37) (15). The cap is finally transferred to the 5′ terminal phosphate of a diphosphate RNA (m7GpppA(N)) (16). It has been proposed that the diphosphate RNA substrate is produced through the triphosphatase activity of nsP2, known to be specific for the removal of γ-terminal phosphates (22).

Recent cryo-electron microscopy (cryo-EM) structures of nsP1 capping pores (12, 13) and of a partial RC (23) have provided important insights into nsP1 function, demonstrating that oligomerization into dodecamers is necessary for capping activity and is driven by underlying interactions with the membrane.
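For orientation, the two pathway orders described above can be summarized schematically (in LaTeX notation; His37 marks the covalent attachment site, pp-RNA a 5′-diphosphate RNA). The two nsP1 transfer steps are drawn as reversible to reflect the decapping activity and release of intermediates reported here; the canonical scheme is simplified to a single GTase plus MTase step.

\begin{align*}
\textbf{Canonical:}\quad & \mathrm{GTP + pp\text{-}RNA \xrightarrow{GTase} GpppRNA + PP_i} \\
& \mathrm{GpppRNA + SAM \xrightarrow{N7\ MTase} m^7GpppRNA + SAH} \\[4pt]
\textbf{nsP1:}\quad & \mathrm{GTP + SAM \longrightarrow m^7GTP + SAH} \\
& \mathrm{m^7GTP + nsP1(His37) \rightleftharpoons nsP1\text{-}m^7GMP + PP_i} \\
& \mathrm{nsP1\text{-}m^7GMP + pp\text{-}RNA \rightleftharpoons m^7GpppRNA + nsP1}
\end{align*}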
However, the structural basis for the noncanonical order of the reaction pathway was not understood, nor was it known how the substrates are recognized or how many of the 12 sites within the ring can be active simultaneously. To address these questions, we provide a suite of cryo-EM structures of detergent-solubilized nsP1 capping pores in complex with substrates from different steps of the capping pathway, in addition to a biochemical characterization of the RNA capping activity. The structures reveal that simultaneous binding of SAM and GTP is necessary for optimal substrate positioning for guanylyltransferase activity, potentially providing a molecular rationale for why methylation precedes guanylylation. We identify residues that are critical for substrate binding and residues playing a pivotal role in the different positioning of the guanosine triphosphate moiety during the capping process, including an arginine from a neighboring protomer in the pore (R275), which further explains the allosteric activation of nsP1 capping by pore formation. Our capping assays demonstrate that nsP1 exhibits sequence and structure specificity in the capping of RNA substrates, suggesting that capping of the viral RNA occurs cotranscriptionally. We demonstrate that nsP1 is capable of releasing significant amounts of reaction intermediates and can decap RNA substrates, resulting in the production of uncapped alphaviral RNAs. Finally, we show that the decapping reaction leads to a repositioning of regions important for the guanylyltransferase reaction in the nsP1 protomers, inducing opening of the pore. Together, our results provide a large body of work for understanding the capping process of alphaviruses, revealing the molecular determinants and protein dynamics associated with this process. We discuss the implications of our findings for infection and host adaptation, considering also recently reported structural findings on nsP1 pores (13).

SAM Binding to nsP1 Is Flexible in the Absence of GTP.

Each nsP1 protomer is built around a methyltransferase fold common to SAM-dependent methyltransferases, with additional insertions (membrane binding and oligomerisation (MBO) loops 1 and 2) and extensions (the ring aperture membrane binding oligomerisation (RAMBO) domain) that contribute to pore formation, oligomerization, and membrane binding (SI Appendix, Fig. S1 A and B). In the context of the ring, the capping domains are located in the crown above the membrane, where a bilobal pocket in each protomer links a SAM binding site facing the exterior of the ring and a GTP binding site facing the interior (Fig. 1 A and C).

To better understand the initiating GTP methylation step of the nsP1 pathway, we solved the cryo-EM structures of detergent-solubilized nsP1 pores in the presence of a 100-fold molar excess of SAM or GTP substrates (SI Appendix, Table S1). As is common to other N7 SAM-dependent methyltransferases (N7 MTases), the SAM ligand binds in a pocket near the switch point of the Rossmann fold in the capping domain (Fig. 1B and SI Appendix, Fig. S1 A and B). The adenosine base and ribose sit within a cavity defined between loops β1-ηA above the SAM (including sequence motif 63DIG65, which is highly conserved in methyltransferases) and loop β2-αB below (including sequence motif 89DPER92) (see sequence alignment and legend in SI Appendix, Fig. S2). The pocket is gated from the solvent exterior of the ring by loop αC-β4 above the zinc binding site.
The SAM binding site is highly exposed to the solvent and largely defined by flexible loops (SI Appendix, Fig. S1 C and D). The density for the SAM ligand is not fully defined in the binding pocket; in maps reconstructed with symmetry, only density for the base and ribose moieties is clearly visible (Fig. 1B). In comparison with the apo form of nsP1, the neighboring helices αB and αC are very poorly defined, and the overall local resolution for this region is worse than in the apo and other bound states (SI Appendix, Fig. S3). Contacts made to the SAM ligand are primarily weak van der Waals (VdW) interactions, involving residues I64-A67 from the DIG motif and P83, R85, T137, and D138 to the base; S86, D89, and R92 from the DPER motif to the ribose; and residues R70, G151, and D152 to the methionine (Fig. 1B and SI Appendix, Table S2). The SAM base forms a hydrogen bond (H-bond) between N6 and the side chain of residue T137 in nsP1, which maintains a second H-bond with R85. Both residues are conservatively substituted among alphaviruses (SI Appendix, Fig. S2). Interestingly, D138, which is highly conserved in other N7 MTases and typically confers SAM binding specificity through an H-bond to the N6 amine, is barely defined in the density of the disordered αC-β4 loop and too distant to contact the amine (~4 Å). The purine base is bound in the anti conformation relative to the ribose, where the 2′ and 3′ hydroxyl groups of the ribose form H-bonds to the side chain of D89 of helix αB. Mutagenesis studies have identified both D89 and R92 as important for SAM binding (24). Although the density for the methionine is quite poorly resolved, its position replaces the side chain of R70 in the apo form of nsP1, which is also poorly defined in the structure. The methionine is surrounded by residues G65, S66, and A67, and by Q151 and D152, which are within H-bonding distance of the amine group of the methionine.

To determine whether differences in protomer conformation or substrate occupancy could underlie the poor definition of the SAM binding site in the maps, we performed focused classification following symmetry expansion, with a mask centered on the capping domain (SI Appendix, Fig. S4). We could identify four classes by focused classification. In 20.6% of the particles, the SAM site was empty (class 3), despite the high molar excess of SAM added to the protein. For a second class (class 1), including another 18% of particles, only density corresponding to the purine and ribose of the SAM is visible, but the remainder of the active site is clearly defined. In a third class (class 2), including 22.9% of the particles, the ribose and methionine are also clearly defined, but density for helices αC and αB above the SAM binding site was barely visible.

Taken together, these data suggest that in the absence of GTP, SAM is not stably bound and does not satisfy some of the contacts required for correct positioning of the substrate for methyl transfer. The correlation between the presence of SAM in the active site and the disordering of helices αC and αB suggests that binding of SAM alone also induces significant destabilization of part of the nsP1 capping domain. This could explain why apo nsP1 does not copurify with significant amounts of SAM or SAH, as is observed for various other SAM-dependent N7 MTases (DENV NS5 (25, 26), human RNMT (27)), and contrasts with N7 MTases that require SAH/SAM cofactors for stabilization and crystallization (vaccinia MTase) (28).
GTP Binding Occurs in a Deeper Pocket Relative to Other N7 MTases and Requires Occupation of the Adjacent SAM Pocket.

The GTP pocket in the capping domain of nsP1 is defined by the tip of strand β4, loop β4-αD, and helix αZ, where loops αC-β4 and β4-αD communicate with the adjacent SAM site (Fig. 1 C and D). Strand β11 of the beta-sheet insertion in the capping domain forms a lid over the pocket. The guanosine base fits in a hydrophobic pocket defined by residues D152, Y248, F241 (β10-β11 lid), Y154 (β4-αD loop), and F178 (β5). Base stacking occurs between D152 and Y248, where the latter residue changes rotamer relative to the SAM-bound and apo structures to become planar with the guanosine. E250 of strand β11 forms H-bonds to the N1 and N2 of the guanosine base, and the carboxylate group of residue D152 (β4-αD loop) maintains a network of H-bonds with GTP O6 and N7 through an intermediate water molecule. Both contacts are conserved in other N7 MTases and confer specificity for the methylation of a GTP substrate over adenine, where the N1 position is unprotonated and interaction with the E250 carboxylate would be unfavorable (29). However, relative to other N7 MTases, the GTP pocket is deeper in nsP1, and the guanosine base and ribose are bound further into the fold (SI Appendix, Fig. S1D). The overall effect of this deeper binding pose compared to canonical N7 MTases is to align the alpha phosphate of the GTP with H37, a catalytic residue identified to form a phosphoramide bond in the m7GMP-nsP1 covalent intermediate (15, 16). Nonetheless, the position of the GTP alpha phosphate is still 4.8 Å from the histidine side-chain Nε and thus too distant to undergo nucleophilic attack.

Density for the secondary structures defining the GTP site is well defined compared to the SAM site, implying greater rigidity of the binding site (SI Appendix, Fig. S3). However, as for the SAM ligand, only the base and ribose of the GTP are clearly defined in the maps, suggesting that the phosphates are flexibly bound. The side chains of the positively charged arginine residues lining the path for the phosphates (R92, R70, and R41) are also disordered, with the exception of R41, which becomes ordered relative to the apo and SAM-bound structures and forms H-bonds to the poorly defined β and γ phosphates. Residues C82-89 that connect to the adjacent SAM site are also disordered, and increasing the contour levels in the map reveals continuous density beyond N7 of the GTP that aligns with the binding path for SAM/SAH ligands but could not be assigned, as if the SAM binding pocket were base-promiscuous to a certain extent and GTP were nonspecifically occupying the site. In conclusion, the structure clearly shows that many of the residues essential for GTase transfer (see next section) are flexible with only GTP bound, and that further stabilization of the GTP by ligand binding in the SAM site is required for the subsequent methyl-transfer and GTase reactions.

The superposition of the SAM- and GTP-bound structures shows that the GTP N7 is apically positioned for an in-line SN2 nucleophilic attack, at 2.8 Å from the SAM methyl group (SI Appendix, Fig. S5A). The arrangement of SAM and GTP bound alone to the active site is thus representative of the premethylation state of the reaction.

Simultaneous Binding of SAH and m7GTP Induces a Metastable Conformation of the Active Site.
To investigate guanylylation of nsP1, the second step of the reaction, we acquired structures of the protein with SAH and m7GTP in the absence of magnesium, a cofactor necessary for nsP1 GTase activity. As expected, we found that both ligands occupy the active site in a state corresponding to the postmethylation reaction but prior to the m7GMP transfer to nsP1. The structure of nsP1 in the postmethylation state shows no overall conformational changes with respect to the SAM- and GTP-bound structures. However, the definition of the ligand and SAM binding site densities drastically improves, suggesting that the ligands are more stably bound together and that this induces an ordering of the active site (Fig. 2A). Many contacts formed to the SAH purine and ribose are shared with the SAM structure (G65, P83, D89, D138, and V156) (Fig. 2B and SI Appendix, Table S2). However, there is a slight rotation in the SAH ribose and base (Fig. 2B), resulting in an ordering of the αC-β4 loop and bringing the side chain of conserved D138 within H-bonding distance of N6 of the SAH base. Additional contacts are made to the methionine, which is now clearly anchored within the cavity. The methionine carboxyl forms VdW/weak H-bonds with the backbone of G65 and the side chain of R70. The side chain of R92 becomes ordered and forms a contact to the methionine sulfur, held in position by an interaction with the N7 methyl group of the m7GTP (SI Appendix, Fig. S5B). The methyl and sulfur moieties are still apically positioned as for an in-line SN2 nucleophilic attack, at a distance of 3.4 Å, slightly longer than in the premethylation state (2.8 Å). This suggests that there is minimal relocation of the substrates following the first methylation reaction, and R92 appears to be important for stabilizing the sulfur leaving group in the methylation reaction. There is significant movement of the m7GTP phosphates and an ordering of the surrounding arginine residues (Fig. 2C) relative to the GTP-bound structure. R41 forms a new hydrogen bond to the oxygen bridging the alpha and beta phosphates, and R70 moves to bridge the beta and gamma phosphates. R92 and R275 of the neighboring protomer move in to directly coordinate the alpha phosphate aligned with H37 (Fig. 2C and SI Appendix, Fig. S5B). This contact provides direct evidence that oligomerization is required to complete the GTP substrate binding site, explaining why nsP1 monomers are inactive for GTase activity (12). However, it is immediately clear that the substrate is not correctly positioned for nucleophilic attack by the catalytic histidine for guanylylation of the enzyme. The histidine Nε is still 4.4 Å away from the alpha phosphate (Fig. 2A), and the beta and gamma phosphate leaving group is not apically positioned to the nucleophile for formation of a pentavalent transition state. Surprisingly, attempts to capture the m7GMP covalently bound intermediate by adding 2 mM MgCl2 and incubating for 2 h yielded exactly the same configuration of the nsP1 active site substrates, with two new densities attributable to Mg2+ metal ions. Mg1 appears to coordinate the oxygen atoms O2B and O3G from the beta and gamma phosphates (Pβ and Pγ), respectively, and a second Mg2 atom appears to coordinate the catalytic H37 and D36 side chains at 2 Å and 2.3 Å distance (Fig. 2A and SI Appendix, Fig. S5B).
In this conformation, Pα is still too far from H37 for nucleophilic attack, and there is no density for a magnesium ion in proximity to Pα, which is usually necessary to increase electrophilicity and promote nucleophilic attack. The structure suggests that Mg2, coordinated in this state by the catalytic histidine and D36, could be playing this catalytic role by coordinating Pα after rotation of the m7GTP phosphates. In conclusion, the presence of m7GTP and SAH in the active site results in a metastable conformation that cannot transition directly to the covalent transfer of m7GMP to the catalytic H37, possibly constituting a regulated checkpoint of the capping reaction during infection. Structural Basis of the Guanylyltransfer Reaction. We finally obtained electron microscopy maps corresponding to the nsP1-m7GMP covalent complex by incubation of the nsP1 protein with the m7GTP and SAH substrates and a 27-mer RNA, suggesting that the presence of RNA is able to bypass the metastable state and stimulate the guanylylation reaction (Fig. 3A). We also obtained a similar structure when substituting the 27-mer with a 15-mer RNA. Here, the position of the guanosine base moves as the ribose and alpha phosphate rotate by 56°, bringing the Pα close enough to H37 to establish a covalent bond (Fig. 3B). As a consequence of the phosphate repositioning, residue R41 is now closer to the alpha phosphate, and a new H-bond is formed between the ribose and the S44 side chain (SI Appendix, Fig. S5C). In parallel, Y248 is now too far from the alpha phosphate for hydrogen bonding, and the side chains of residues R92 and R275 of the neighboring protomer are no longer visible in the maps (Fig. 3B). Intriguingly, the SAH substrate remains bound in the SAM binding pocket, now making extensive VdW contacts with the m7GMP moiety as a consequence of its rotation (from 3 to 14 contacts, see SI Appendix, Table S2). There is no density for the pyrophosphate product, suggesting that it leaves with the magnesium ions. The H-bonds of the base with E250, and of the ribose with Y285, are maintained despite the rotation of the base. E250, Y285, and R41 appear to be the pivotal residues on which the GMP moiety turns to approach the catalytic histidine, R41 being next to the Pα transfer reaction site. The conformation of R41 is supported by the stacking of R70, which reaches the active configuration only after SAM/SAH binding. This indicates that both residues, highly conserved in alphavirus nsP1 but not in other methyltransferases (SI Appendix, Fig. S2), are central for the switch between the methylation and guanylylation steps of the nsP1 capping reaction, in coordination with R92 and R275. Mutation to alanine of any of the arginine residues proposed to coordinate this switch (R92, R70, and R41) resulted in a loss of methyl transfer and guanylyltransfer to nsP1 (SI Appendix, Fig. S8). In all the steps, the binding of SAM or SAH is necessary for the proper configuration of the active site for the binding of the different GTP-derived intermediates, whether by direct VdW contacts between methionine and base or, more importantly, by residues sharing contacts with both GTP and SAH such as D152, R70, or R92. The nsP1 RNA Capping Reaction Is Structure and Sequence Specific and Reversible.
To investigate the final step of the reaction, cap transfer from nsP1 to different RNA substrates was followed using α-32P-labeled GTP in the presence of SAM and autoradiography of the products following their separation on 8 M urea 20% polyacrylamide gel electrophoresis (PAGE) gels. Alphaviral gRNAs and sgRNAs contain highly conserved and stable stem loop structures in the 5′ UTR just downstream of the cap (Fig. 4A and SI Appendix, Fig. S6) that may be important for transcription from the minus strand sequence during the replication cycle. To investigate the role of such stem loop structures in capping, we compared capping of a sequence corresponding to the first 27 nucleotides of the CHIKV gRNA sequence, preserving the first stem-loop structure, with capping of a 15-nucleotide substrate that truncates this loop (Fig. 4 A and B). Only the 15-mer was appreciably capped when di- or tri-phosphorylated at the 5′ end, but not with a free 5′ hydroxyl group (Fig. 4A). The similarity in capping activity for di- and tri-phosphorylated RNA suggests that the same capped forms are achieved and implies the possibility of a concomitant triphosphate hydrolysis during the guanylyltransfer reaction, something that has been observed in a parallel study (30). The 27-nucleotide-long CHIKV RNA was not appreciably capped in any of the phosphorylated states (Fig. 4A and SI Appendix, Fig. S7), and following prolonged incubation times, only lower-molecular-weight RNAs of similar length to the 15-mer were capped, presumably the products of partial RNA degradation or residual products from in vitro transcription of the synthesized RNAs (SI Appendix, Fig. S7). This indicates that the RNA binding cavity is too narrow to accommodate a double-stranded RNA stem loop structure. All arginine mutants tested for GTase activity were also inactive for RNA capping of an identical 15-mer sequence (SI Appendix, Fig. S8). When investigating sequence specificity, nsP1 was also able to cap a 15-nucleotide-long RNA with a sequence derived from the 5′ UTR of Venezuelan equine encephalitis virus (VEEV), a related New-World alphavirus in which the first four nucleotides of the sequence are conserved with CHIKV. However, nsP1 was unable to cap an unrelated control RNA of the same size with a different initiating 5′ RNA sequence (beginning GAG) (Fig. 4A) and with no predicted secondary structure formation. Despite considerable sequence variability in the 5′ UTR sequences of alphaviruses, 3 of the first 4 nucleotides (the first AU and fourth G) in gRNAs and sgRNAs are highly conserved between Old- and New-World viruses and are possibly the determinants of specificity (SI Appendix, Fig. S7). Thus, the capping reaction is specific for alphaviral RNAs short enough to prevent the formation of secondary structures. These results are consistent with the recent identification of the second U as the nucleotide recognized by nsP1 through specific interactions with residues at the N-terminal extension of the capping domain; the authors of that study obtained a structure of nsP1 with a capped dinucleotide RNA by mutating the catalytic His37 to alanine to inactivate guanylyltransferase activity (30). In our hands, incubation of nsP1 with an uncapped diphosphate RNA in the absence of substrates yielded empty pores identical to the apo form in cryo-EM data collection, suggesting that the m7GMP-nsP1 form of the enzyme may be required for enhanced binding of the RNA.
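The structural argument above, that the 27-mer folds a stem loop the RNA cavity cannot accommodate while the 15-mer truncation does not, is the kind of claim that is routinely cross-checked with secondary-structure prediction. The sketch below does this with the ViennaRNA Python bindings; the 27-mer sequence used here is a synthetic palindromic placeholder mimicking only the conserved AU..G start, NOT the real CHIKV 5′ UTR, which would be substituted in practice.

```python
# Minimal sketch: compare predicted secondary structure of a full 27-mer
# (placeholder sequence, not the CHIKV 5' UTR) and its 15-nt truncation.
import RNA  # ViennaRNA Python bindings

seq27 = "AUAGGCGCGCGAAAGGCGCGCAGCUAG"  # synthetic hairpin-forming 27-mer
seq15 = seq27[:15]                     # truncation that removes one stem arm

for name, seq in (("27-mer", seq27), ("15-mer", seq15)):
    structure, mfe = RNA.fold(seq)     # dot-bracket structure + MFE (kcal/mol)
    print(f"{name}: {structure}  MFE = {mfe:6.2f} kcal/mol,"
          f" {structure.count('(')} base pairs")
```

With the placeholder sequence, the 27-mer folds a stable GC-rich hairpin while the 15-mer remains essentially unpaired, which is the qualitative pattern the capping data above imply for the viral sequences.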
Digestion with P1 nuclease and analysis of the RNA capping reaction products by thin-layer chromatography (TLC) was used to confirm the presence of a bona fide m7GpppA cap structure (Fig. 4C). Intriguingly, whenever GTP and SAM were present in the reaction, whether in the presence or absence of RNA, species migrating as a cap structure (m7GpppA) were consistently observed in TLC. This suggests that nsP1 has the capacity to cap the GTP nucleotide or any contaminating GDP, something that has already been observed for nsP1 from VEEV (31). In addition, m7GTP, the product of the methyl transfer reaction, as well as m7GDP and m7GMP, which are not products on the reaction pathway, were identified. The nucleotide capping activity of nsP1 could potentially alter the GTP/m7GTP homeostasis of the cell and thereby affect many cellular GTP-dependent metabolic processes. The m7GMP is a product that could only be released from hydrolysis of the m7GMP-nsP1 covalent complex or from loss of the cap from the RNA in a decapping reaction. To test for the latter possibility, we labeled the triphosphate 27-mer and 15-mer CHIKV RNAs with an α-32P cap0 structure using a commercially available vaccinia capping enzyme. This enzyme is able to cap both the 15-mer and, particularly, the 27-mer RNA sequences with greater efficiency than nsP1. RNAs were isopropanol precipitated to remove excess nucleotides and incubated with increasing ratios of nsP1 in the presence and absence of SAH to test for loss of the cap (Fig. 4D). While the signal for the 27-mer was unchanged, a decrease in radioactivity was observed for the 15-mer as the concentration of nsP1 was increased, whether in the absence or presence of SAH. TLC analysis of the reaction products confirmed that this corresponded mainly to loss of m7GMP, and not m7GDP as in the commercial Schizosaccharomyces pombe decapping enzyme control reaction (Fig. 4E), where increased levels of m7GMP were observed in the presence of SAH. The levels of m7GMP were largely unchanged in a control reaction following digestion of the reaction products with P1 nuclease, confirming that the m7GMP is released from decapping of the RNA. This confirms that the nsP1 capping reaction is reversible, conferring the enzyme with decapping activity. Together, these results suggest that capping of the alphaviral RNA most likely occurs cotranscriptionally, prior to folding of the conserved SL1 loop. This would protect the viral RNA from decapping as the loop folds during synthesis, while other cellular mRNAs beginning AUG could potentially be decapped. The Decapping Reaction Induces Conformational Changes in nsP1 and Opens the Capping Pores. Capping and decapping are two directions of the same reaction. In order to investigate the structure of the pores after the capping/decapping reaction, we analyzed by cryo-EM the structure of nsP1 pores incubated with a cap0 11-nucleotide-long CHIKV RNA. All the previously described structures in this article have the same overall conformation of the pore. However, after the decapping reaction, we could distinguish two different three-dimensional (3D) classes: one similar to the previous structures and a second with significant changes in the first 130 residues and the C-terminal alpha helix k of nsP1 (Fig. 4F). Electron density for the m7GMP base was found in the GTP binding site in both classes. The nsP1 protomers appear tilted outward 8° with respect to the equatorial axis of the ring (Fig. 4G).
The pore opening results in an increase of the inner aperture by 3 Å (from 70 to 73 Å) and of the outer diameter by 12 Å (from 178 Å to 190 Å), together with a change in the surface charge distribution concomitant with a projection of the active site toward the top of the ring (Fig. 4 G and H). This conformation resembles the structure of nsP1 pores when expressed in mammalian cells (Discussion) (13) and was not observed in datasets of m7GMP-nsP1 produced following incubation of the substrates and uncapped RNA (Fig. 3). We also tested whether the presence of an uncapped RNA without substrates would affect the pore aperture; the resulting structure is identical to the apo form (PDB: 6Z0V). Thus, we can conclude that the decapping reaction induces a motion in the ring resulting in an opening of the pore aperture. Discussion Although the nsP1 capping mechanism has been well characterized enzymatically over the years, the structural basis for the noncanonical order of the pathway has remained elusive. The highly symmetrical capping pores of chikungunya virus present an opportunity for detailed structural characterization of the nsP1 capping pathway via analysis of cryo-EM structures that represent the different stages of the pathway. The structures show that although many of the contacts formed to the SAM/SAH and GTP/m7GTP substrates are conserved with other N7 MTases, there are significant differences in the configuration of the active sites that may have allowed the protein to evolve additional GTase activity. Notably, the GTP binds in a deeper pocket in nsP1, aligning the alpha phosphate with the catalytic histidine for cap transfer. It appears that there is minimal movement in the positions of the substrates between the methylation and guanylylation reactions, but several key side chains change positions to mediate the transition between methyltransferase and guanylyltransferase activity. Simultaneous binding of the substrates appears to be necessary for engaging these residues and for correct positioning of the substrates within the active site. NsP1 is unable to robustly form a covalent complex with GTP or m7GTP in the absence of SAM/SAH (12, 15), and we demonstrate that when bound alone, the GTP and SAM substrates exhibit substantial flexibility beyond the purine base. In comparison, the SAH and m7GTP ligands bound together are more stably anchored within the cavity through more stable contacts made to the nsP1 protein. The displacement of R70 by the methionine moiety of the SAM ligand appears to be essential for the positioning of the surrounding residues and the GTP phosphates in the active site, and in transfer of the cap to H37, the guanosine base conformation is stabilized by stacking with the SAH molecule. Overall, these structural details explain why methylation must precede guanylylation in the capping pathway of nsP1. Our study also provides insights into the sequence and structural preferences of nsP1 for RNA substrates. We show that RNA capping is reversible and suggest that capping occurs cotranscriptionally, prior to folding of the SL1 loop, which may have a protective role in preventing decapping of the RNA following its formation, in addition to preventing recognition of the cap0 structure by host IFIT1 (18). Recent studies have suggested that only a small percentage of alphaviral gRNAs packaged into virions are capped (21).
Although our understanding of the roles of these RNAs in infection is still in its infancy, it has been demonstrated that increasing the capping activity of nsP1 in the context of a virus is detrimental to Sindbis virus (SINV) infection, suggesting that capping activity must be finely tuned (32). Finally, future research is required to address whether nsP1 has the capacity to decap cellular mRNA substrates, which are mainly modified with a cap1 structure in higher eukaryotes, contributing to host translational shutdown. Certain viruses, including Poxviridae, encode viral decapping enzymes that are expressed at later stages of infection and remove cap structures on host mRNAs to prevent their recognition by eIF4E at the ribosome (33). Such strategies require that the viral RNA be translated via an alternative ribosomal recognition mechanism, such as an internal ribosome entry site (IRES) in Poxviridae. It has been reported that a conserved stem loop structure downstream of the initial AUG codon of the alphaviral sgRNA may promote translation independently of eIF4G in an infection context (34, 35). The sequence specificity that we find in our capping experiments suggests that nsP1 could target specific cellular mRNAs beginning with AUG. To our knowledge, there are no precedents of enzymes with both capping and decapping activities. Both activities would need to be highly regulated in the context of the RC. While this manuscript was in preparation, Zhang et al. (30) reported an independent study presenting structures of some of the states of the capping reaction provided here, with nsP1 capping pores produced in mammalian cells. The expression and purification of capping pores in mammalian cells deliver complexes with an expanded pore conformation, irrespective of the stage of the reaction pathway. When we express nsP1 in insect cells and purify the pores with the same detergents and conditions used in their study (Methods), we consistently observe a contracted conformation of nsP1 pores. Thus, the expression system and not the purification protocol determines the nsP1 conformation, whether due to different lipid compositions of the inner plasma membrane or different components in the cytoplasm. The methyl transfer and first guanylyltransfer reactions occur without significant conformational changes in the nsP1 structure in both mammalian and insect cell-derived pores. Here, we show how the postdecapping state induces motions that trigger an opening of the pore, similar to the open form of nsP1 complexes expressed in mammalian cells. This open form increases surface exposure of the RNA binding pockets, with a concomitant redistribution of surface charges along a path leading to the internal pore. All of these differences can potentially induce changes in the full replication complex that may determine different stages of RC functioning in the late and early steps of infection. Since differences in capping pore conformations also depend on the expression system, these could result in different behaviors of the RC in the host, reflecting host adaptation of the replication machinery. It will be interesting to determine whether the pore opening we observe for the decapping reaction in a detergent micelle is also observed in the context of a membrane bilayer, where the pore movements may be more constrained. Our structure of the nsP1 complex with SAH and m7GTP shows nsP1 in a postmethylation state and a metastable initial state of the guanylyltransferase reaction.
This unreacted state was also observed by Zhang et al. and is thus found in both the expanded and contracted forms of the nsP1 pores (purified from mammalian and insect cells, respectively), independent of pore conformation. Transfer of the m7GMP to nsP1 is only observed following incubation with RNA, whether in the form of a capped RNA (in this study and in Zhang et al.) or with uncapped RNA in the presence of m7GTP and SAH (in this study). The RNA is thus able to trigger the reaction, even if not stably binding to the complex, by, for instance, changing the exposed arginine distribution that holds the m7GTP in the postmethylation state. Indeed, the hydrogen bonds maintained between R92 and R275 of the neighboring protomer and the gamma phosphate of the m7GTP must be broken to allow for the rotation of the alpha phosphate observed in the nsP1-m7GMP covalently linked structure, bringing the phosphate group within attacking distance of H37 and apically positioning the pyrophosphate leaving group. Intriguingly, despite the difference in overall fold, such a metastable state has also been described for other GTases (36), where an opening and closing of the active site induces phosphate relocation and GMP transfer from a GTP substrate. The loss of the PPi group and the occupancy of SAH within the active site are also observed in Zhang et al.'s structure (PDB: 7FGH). However, our comparison of the structures in the presence or absence of magnesium allowed us to identify a second Mg2+ ion coordinating with residue His37. Interestingly, when we incubate nsP1 in the presence of SAH and m7GTP, we can detect, by radioactivity or western blotting with an antibody specific for m7GMP, the covalent link to nsP1 under denaturing conditions. This is indeed a well-established test for guanylyltransferase activity (12, 16). These data suggest that rather than a particular conformation of the pores (expanded or contracted), some event is required to trigger the reaction, whether the presence of RNA (physiological conditions) or treatment of the sample with denaturing agents. Our data strongly suggest that, in the context of the RC, the covalent transfer of m7GMP to H37 could be a regulated checkpoint of the capping reaction. In conclusion, the different structural snapshots of the capping reaction presented here describe in detail the individual roles of the residues involved in substrate recognition and in the N7 methylation and guanylyltransferase reactions. The results show that SAM and GTP substrate binding is interdependent and essential for ordering of the active site to allow all steps of the reaction. We identify residues R70 and R41, not present in other conventional MTases, as main players for GTP binding and transfer of m7GMP to His37. These results provide a mechanistic explanation for the peculiar alphavirus capping pathway characterized biochemically over decades, paving the way for future research on understanding alphavirus RNA capping and the structure-based design of antivirals against alphavirus infections. Methods Purification of nsP1 Rings. NsP1 was expressed in Hi5 cells (Thermo Fisher) as outlined in the study by Jones et al. (12) using baculovirus technology (37). Protein samples of nsP1 with GTP or SAM substrates were purified as described in the study by Jones et al. (12), in fos-choline 12 detergent. For the SAH, m7GTP, and m7GMP complexes, this method was adapted to obtain single rings using the protocol of Zhang et al.
(13), where the solubilization step was performed with 1% n-dodecyl-β-D-maltoside (DDM) and samples were exchanged into 0.01% glyco-diosgenin (GDN). Briefly, following recovery of the membranes from lysed cells by ultracentrifugation at 100,000 × g, membranes were resuspended at 100 mg/mL in 35 mM Tris, 200 mM NaCl, 1 mM TCEP, and 5% glycerol with 1% DDM for 2 h at 4 °C. The soluble fraction recovered post centrifugation at 100,000 × g was applied to Ni-NTA resin in batch (1 mL of resin per gram of solubilized membrane) and washed with 10 column volumes of wash buffer containing GDN to exchange the detergent (35 mM Tris pH 7.6, 200 mM NaCl, 1 mM TCEP, 5% glycerol, 40 mM imidazole, 0.01% GDN). The samples were eluted in elution buffer (35 mM Tris pH 7.6, 200 mM NaCl, 1 mM TCEP, 5% glycerol, 300 mM imidazole, 0.01% GDN) and concentrated with a centrifugal concentrator with a 100 kDa MW cutoff prior to application to a Superose 6 10/30 column in gel filtration buffer (25 mM HEPES pH 7.6, 150 mM NaCl, 1 mM TCEP, 0.01% GDN). For both fos-choline- and GDN-purified samples, the central peak fraction was selected for cryo-EM. Mutant nsP1 proteins (H37A, R41A, R70A, and R92A) were generated using a Q5 site-directed mutagenesis protocol (New England Biolabs, NEB) and were produced in Hi5 cells using baculovirus technology. Mutant proteins were purified using the same protocol as for wild-type nsP1, where gel filtration profiles and negative-stain EM were used to confirm that intact pores were formed. Sample Preparation for Cryo-EM. All single-particle datasets were collected from nsP1 rings embedded in detergent micelles. For the SAM and GTP nsP1 samples, 0.3 mg/mL of nsP1 was incubated with 0.5 mM of each substrate in gel filtration buffer (25 mM Tris pH 7.6, 150 mM NaCl, 1 mM TCEP, 0.065% fos-choline 12). 3 μL of each sample was applied to Quantifoil R2.2 copper/rhodium grids (mesh size 300) with a homemade carbon coating after glow discharging for 1 min at 100 mA. Samples were vitrified in a Vitrobot Mark IV using blot force 0 for 3 s at 25 °C and 95% humidity. For the SAH and m7GTP complex, nsP1 was incubated at 0.2 mg/mL in gel filtration buffer (25 mM HEPES pH 7.6, 150 mM NaCl, 1 mM TCEP, 0.01% GDN) with 0.5 mM SAH and 0.5 mM m7GTP, supplemented with 2 mM MgCl2, and incubated for 2 h at 30 °C prior to direct application to an EM grid and freezing. For the formation of the m7GMP intermediate, the same protocol was followed, but a fivefold excess of the CHIKV 27-mer RNA was added to the reaction just before freezing. For complexes produced from decapping of RNA, a fivefold molar excess of a 15-mer RNA modified with a 5′ cap0 structure was incubated with the protein (0.2 mg/mL) in gel filtration buffer supplemented with 2 mM MgCl2. For all samples, 3 μL of sample was applied to Quantifoil R2.2 gold grids (mesh size 300) that had been coated with a homemade film of graphene oxide after glow discharging for 10 s at 100 mA. Samples were vitrified in a Vitrobot Mark IV using blot force −3 for 3 s at 25 °C and 95% humidity. To verify that the detergent was not altering the conformation or flexibility of the active site in the protein, a dataset was also collected for the SAH and m7GTP complex purified in fos-choline 12 for direct comparison, where the maps showed no significant differences. Cryo-EM Data Collection.
With the exception of the m7GMP-nsP1 covalent complex, all final datasets were collected on a Krios at CM01 of the ESRF at 300 kV, equipped with a post-column LS/97 energy filter (Gatan), slit width 20 eV. For the SAM and GTP datasets, images were acquired with a K2 Summit camera in counting mode at a nominal magnification of 165,000 (corresponding to a sampling rate of 0.827 Å or 1.06 Å per pixel, see SI Appendix, Table S1) across a defocus range of 1 to 2.5 μm. For the SAM dataset, 4,500 movies were recorded with a dose rate of 7.2 e− per pixel per s for an exposure time of 4 s distributed over 40 frames, yielding a total accumulated dose of 42.4 e− per Å2. The GTP dataset was recorded with a dose rate of 15.6 e− per pixel per s for an exposure time of 3.4 s distributed over 40 frames, yielding a total accumulated dose of 42 e− per Å2. 3,077 movies for the SAH and m7GTP dataset were recorded with a K3 camera operating in superresolution mode, with a superresolution pixel size of 0.42 Å and a nominal magnification of 105,000. The total dose was 38 e− per Å2, distributed over 40 frames, with a dose rate of 14.9 e− per pixel per s for an exposure time of 1.85 s. The m7GMP dataset was recorded on a Talos Arctica microscope operating at 200 kV (Instruct platform, CNB Madrid). 773 movies were recorded with a Falcon III camera operating in counting mode, at a nominal magnification of 120,000 and a corresponding pixel size of 0.855 Å per pixel. The accumulated dose was 32 e− per Å2 in 38 s, distributed over 60 frames with a dose rate of 0.73 e− per pixel per s. Cryo-EM Data Processing. Datasets were analyzed in parallel in Relion (version 3.0) (38) and cryoSPARC (39). Frame alignment and correction for beam-induced motion were performed in MotionCor2 (40) using patch alignment, and CTF correction was performed with CTFFind4 (41) from non-dose-weighted micrographs. Images with poor ice quality, excessive astigmatism, or with no Thon rings beyond 5 Å in Fourier power spectra were discarded from further processing. Particle picking was performed from dose-weighted micrographs using Warp (42) or Relion's template matching method, using templates that had been generated from an initial round of picking and 2D classification. Particles were extracted with a box size of 300 to 360 pixels and were binned twice for initial processing. 2D classification in Relion or cryoSPARC was used for removal of bad particles, and an ab initio model was generated from these particles without imposing symmetry. 3D classification performed with the ab initio models was used for separation of single and double rings for datasets purified in fos-choline detergent, but otherwise did not reveal any differences in conformations or occupancy state between rings. Classification was performed with a soft spherical mask of 290 Å and with coarse alignment sampling (7.5°). For each dataset, the best 3D class was used for autorefinement in Relion or nonuniform refinement in cryoSPARC in C1, following re-extraction of particles to the original pixel size. As no significant differences between protomers were observed, refinements were repeated imposing C12 symmetry and with masking. Maps were sharpened using postprocessing in Relion. Masks used for refinement and sharpening were generated through filtering of the reconstructed volume to 15 Å, extending the mask by 3 voxels, and adding a soft edge of 3 voxels.
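As a side note on the acquisition parameters above, the quoted accumulated doses follow from simple arithmetic: dose rate times exposure divided by the physical pixel area. The sketch below reproduces two of the quoted values; small deviations from the reported numbers are expected because the calibrated pixel sizes differ slightly between datasets.

```python
# Quick arithmetic check of the accumulated doses quoted in the text:
# dose [e-/A^2] = dose_rate [e-/pixel/s] * exposure [s] / pixel_size^2 [A^2].
def accumulated_dose(rate_e_per_px_s: float, exposure_s: float, px_A: float) -> float:
    return rate_e_per_px_s * exposure_s / px_A ** 2

# SAM dataset: 7.2 e-/px/s for 4 s at ~0.827 A/px -> ~42 e-/A^2 (reported 42.4)
print(f"SAM:       {accumulated_dose(7.2, 4.0, 0.827):.1f} e-/A^2")
# SAH/m7GTP dataset: K3 superresolution pixel 0.42 A -> physical pixel ~0.84 A
print(f"SAH+m7GTP: {accumulated_dose(14.9, 1.85, 0.84):.1f} e-/A^2")  # ~39 vs 38
```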
To look for differences between protomers in rings, focused classification of the capping domain was performed following symmetry expansion of the particles. Particle sets from C12 refinement were expanded using relion_symmetry_expand to align all protomers. A mask placed around a single capping domain was used to perform signal subtraction on the remainder of the images, where the subtracted particles were reboxed to 90 pixels on a region centered around the mask coordinates. The capping domain mask was generated as outlined above, using an .mrc map generated from a single capping domain with the molmap command in UCSF Chimera from the nsP1 PDB, or from segmentation of the map in UCSF Chimera (43). The subtracted images were reconstructed without alignment to generate a reference for 3D classification. 3D classification was performed without alignment using between 3 and 8 classes for robustness, with a T value of 25. Resulting maps were sharpened using Phenix auto-sharpen (44). All models were built into the cryo-EM maps using the nsP1 PDB structure 6Z0V (12) and were subjected to iterative rounds of refinement and model building in Phenix (44) and Coot (45). Synthesis and Purification of RNA Substrates. RNA substrates corresponding to the first 27 or 15 nucleotides of the CHIKV genome (strain S27) were synthesized by in vitro transcription, with a type II promoter to yield an AUG starting codon. To obtain substrates with a 5′ diphosphate, a fivefold excess of ADP was added to the other nucleotides for synthesis. RNAs were resolved on an 8 M urea 20% acrylamide gel and extracted using sodium acetate and isopropanol precipitation. After washing of the pellets with 70% ethanol, RNAs were resuspended in water and stored at −20 °C until use. Sample quality was assessed by 8 M urea PAGE and analysis of A260/A280 and A230/A260 ratios. RNA Capping Assays. For RNA capping assays, 2 μM nsP1 was incubated with 100 μM SAM, 1 μM [α-32P]GTP (to a final specific activity of 0.1 μCi/μL in a reaction), and 5 μM RNA at 30 °C for 2 h in capping buffer (50 mM HEPES pH 7.6, 50 mM KCl, 5 mM DTT, and 2 mM MgCl2). Transfer of the m7GMP cap to the RNA was visualized on 8 M urea PAGE (20% gels) using autoradiography. The commercially available vaccinia virus capping system (NEB #M2080S) was used as a positive control. Thin-layer chromatography (TLC) was used to confirm the presence of the cap structure and identify other lower-molecular-weight products. 5 μL of each RNA capping reaction was digested with P1 nuclease and treated with proteinase K (NEB #P8107S), prior to application to a TLC membrane (Macherey-Nagel) preactivated in absolute ethanol. Samples were premigrated in water and then transferred to 0.65 M Li2SO4 or 1 M (NH4)2SO4 as a mobile phase. The membrane was dried and visualized using autoradiography, comparing to migration standards. RNA Decapping Assays. To produce capped RNAs, 20 μM of CHIKV 27-mer or 15-mer RNA was incubated with vaccinia capping enzyme (NEB #M2080S) in the presence of 1 mM SAM and [α-32P]GTP (to a specific activity of 0.3 μCi/μL in the final reaction) for 1 h at 37 °C. The enzyme was heat inactivated at 75 °C for 2 min and then removed with 1 μL StrataClean resin (Agilent). Capped RNAs were precipitated using 2 M ammonium acetate and isopropanol to remove any residual nucleotides, and the pellet was washed twice with 70% ethanol prior to resuspension in the same volume of H2O used for the initial reaction.
Resuspended capped RNA was incubated at a final estimated concentration of 2 μM (calculated assuming 50% recovery from the precipitation reaction) with increasing molar ratios of nsP1 (from 1:1 to 10:1) for 2 h at 30 °C in capping buffer with or without 100 μM SAH. RNAs incubated in the absence of nsP1 or in the presence of the S. pombe mRNA decapping enzyme (NEB #M0608S) served as negative and positive controls, respectively. Loss of the cap from the RNA was followed by autoradiography on 20% acrylamide urea PAGE or by TLC, as described above. For decapping assays, TLC was performed with or without P1 nuclease digestion.
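Assay setups like those above come down to routine C1·V1 = C2·V2 bookkeeping. The sketch below illustrates that calculation; all stock concentrations and the reaction volume are invented for illustration and are not values taken from this study.

```python
# Minimal sketch of the C1*V1 = C2*V2 bookkeeping behind assay setup.
# Stock concentrations and reaction volume are illustrative assumptions only.
def volume_needed(stock_uM: float, final_uM: float, total_uL: float) -> float:
    """Volume of stock required to reach `final_uM` in `total_uL`."""
    return final_uM * total_uL / stock_uM

total_uL = 20.0                       # hypothetical reaction volume
components = {                        # name: (stock, final) in uM
    "nsP1": (20.0, 2.0),
    "SAM":  (1000.0, 100.0),
    "RNA":  (50.0, 5.0),
}
used = 0.0
for name, (stock, final) in components.items():
    v = volume_needed(stock, final, total_uL)
    used += v
    print(f"{name:>5}: {v:5.2f} uL of {stock:g} uM stock")
print(f"capping buffer to volume: {total_uL - used:5.2f} uL")
```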
2022-08-19T13:27:09.154Z
2022-08-13T00:00:00.000
{ "year": 2023, "sha1": "588eb779ffedd76ec2243f3c2f11e19eeb573844", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1073/pnas.2213934120", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "cec855b87e4cbc2873db6e104cc4398eef3dc1ea", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
246363390
pes2o/s2orc
v3-fos-license
Characterizing Scalar Metasurfaces Using Time-Domain Reflectometry Two efficient methodologies for the determination of electromagnetic (EM) constitutive properties of scalar metasurfaces are introduced and discussed. In contrast to the available methods, and in line with the recent increasing interest in time-domain (TD) analyses of metasurfaces, we show that the material parameters of a scalar metasurface can be readily achieved directly in the TD merely from the EM reflected pulse shape. The two methodologies are based on an analytical TD reflectometry (TDR) approach and a modern stochastic optimization technique. A number of illustrative numerical examples demonstrating the validity and properties of the proposed techniques are presented. I. INTRODUCTION Time-domain (TD) reflectometry (TDR) is an efficient nondestructive testing methodology that is based on the detection and subsequent interpretation of a pulsed electromagnetic (EM) field signal reflected by a device under test [1]. It has found a wealth of applications in remote characterization of faulty electric transmission-line systems [2], ultra-wide-band antennas [3], radar targets [4], optical fibers [5] and many others. The present paper aims at proposing new applications of TDR to the determination of EM constitutive properties of a class of metasurfaces. Generally, metasurfaces can be viewed as purposefully created thin screens that hold great promise for designing novel high-performance antennas, absorbers [6] and lenses [7] (see [8] for other relevant references). Their design can be based on analytical models (e.g. [9]), or more generally, on dedicated numerical techniques (see [10]-[13], for example) that incorporate cross-layer transition conditions [14], [15]. The latter approach has been recently pursued in Ref. [16], where the conjugate gradient minimization is applied to achieve the desired source distributions. While being robust and efficient, this approach is limited to harmonic fields, and thus to linear and time-invariant metasurfaces. The same limitation applies to Refs. [17], [18], where the frequency-domain (FD) method of moments is applied to synthesize metasurface holograms. In order to exploit application potentialities offered by space-time metamaterials [19], [20], both direct and inverse modeling procedures have to be formulated in the TD [21]. An initial contribution to this effort is presented in this article, where we introduce two efficient methodologies capable of extracting material properties of a scalar metasurface from the pulse reflected by the metasurface. The first TDR approach is analytic and is inspired by the TDR experiment designed for the detection, localization and characterization of faults in power lines [22]. The second methodology relies on a stochastic optimization approach, the feasibility of which has been demonstrated in [23]. The (forward) mathematical model employed in both inversion approaches is based on the TD saltus-type conditions applying to thin screens with combined magneto-dielectric properties [15]. II. PROBLEM DEFINITION The problem under consideration is shown in Fig. 1. Position in the examined configuration in the 3-D space R3 is specified by coordinates {x, y, z} with respect to an orthogonal Cartesian coordinate system with the origin O and unit vectors {i_x, i_y, i_z} forming the standard basis.
We shall analyze the pulsed EM field scattered by the metasurface, which is located in a homogeneous, isotropic and loss-free medium described by the electric permittivity ε0 and magnetic permeability μ0. We assume that the layer occupies a thin planar region whose surface S lies in the xy plane, with δ denoting its thickness. Indeed, the electric permittivity or/and magnetic permeability of a metasurface in the real-frequency domain may have an imaginary part. The present analysis, however, is carried out entirely in the time domain, where losses manifest themselves through additional time-convolution operators in the EM constitutive relations. Metasurfaces are frequently designed by arranging a relatively large number of small scatterers into a 2-D regular pattern of vanishing thickness. To simplify the modeling of their complex structure, the EM scattering by such surfaces is commonly analyzed using bulk material parameters. This model, in line with the broad definition given in Ref. [20], is also adopted in the present work. In particular, for the sake of simplicity, the metasurface is assumed to be described by the (homogenized) relative electric permittivity and relative magnetic permeability, which are the desired material parameters to be extracted. The (mostly negligible) effect of electric conductivity or/and linear magnetic hysteresis losses can be incorporated through the model introduced in Ref. [15]. III. SOLUTION METHODOLOGIES In this section we shall describe two approaches to achieving the relative electric permittivity εr and relative magnetic permeability μr of the scalar metasurface (see Fig. 1). The employed scattering model relies heavily on the results introduced in [15]. A. ANALYTICAL APPROACH The inverse material characterization methodology described in this section is based on the idea hinted at in [22] regarding a TD reflectometric scheme for characterizing faults on a transmission line. Combining the ideas presented in Refs. [15] and [22], the desired material parameters can be precisely determined with minimal computational complexity. Without any loss of generality, we assume that the examined layer is irradiated by an impulsive, y-independent, TE-polarized EM plane wave of the form E_y^i(x, z, t) = e^i(t - p0 x - γ0 z), where p0 = sin(β)/c0 and γ0 = cos(β)/c0 are the slowness parameters in the x and z directions, β is the angle of incidence and c0 is the corresponding EM wave speed. The TD wave reflected from the scalar metasurface, represented by [15, Eq. (10)], can, with the aid of the Laplace transform, be written in the s-domain as Eq. (2), where s is the Laplace-transform parameter (= complex frequency). Furthermore, Ψ and Ω represent the influence of the desired parameters εr and μr via [15, Eqs. (11) and (12)], restated here as Eqs. (3) and (4), where Y0 = (ε0/μ0)^(1/2) denotes the wave admittance and δ is the thickness of the layer. Adopting further the TDR methodology presented in Ref. [22], the plane-wave signature of the incident EM field is described by the exponential pulse e^i(t) = e_m exp(-αt) H(t) (5), where e_m is the pulse amplitude and α denotes the pulse decay coefficient. It is noted that the decay coefficient corresponds to 1/t_w, where t_w is the pulse time width. The s-domain counterpart of Eq. (5) then immediately follows as ê^i(s) = e_m/(s + α) (6). Combining Eq. (2) with (6) and transforming the result to the TD, we find at once the closed-form reflected pulse of Eq. (7), expressed in terms of t′ = t - p0 x - γ0 z, where H(t) denotes the Heaviside unit-step function. Calculation Procedure The closed-form TD expression (7) specifying the pulse reflected against a scalar metasurface will be next used to determine its constitutive material properties.
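Before stating the two-pulse procedure formally, a compact numerical sketch of its skeleton may help fix ideas. The closed-form reflected pulse of Eq. (7) is not reproduced in this text, so the function `reflected_pulse` below is only a qualitative stand-in for it, and all parameter values are illustrative; the peak extraction via a finite-difference time derivative and the solution of the resulting two-equation system, however, mirror the procedure described next.

```python
# Sketch of the two-pulse TDR inversion (illustrative surrogate model only).
import numpy as np
from scipy.optimize import fsolve

def reflected_pulse(t, psi, omega, alpha):
    # Qualitative stand-in for Eq. (7): unit step, decaying exponential,
    # one interior extremum. NOT the closed-form expression of the paper.
    return ((psi - omega) - psi * omega * t) * np.exp(-alpha * t) * (t >= 0)

def peak_of(E, t):
    """Peak instant/value located from the zero crossing of dE/dt."""
    dE = np.gradient(E, t)                     # finite-difference derivative
    k = np.where(np.sign(dE[1:]) != np.sign(dE[:-1]))[0][0] + 1
    return t[k], E[k]

t = np.linspace(0.0, 20.0, 4001)               # time axis in ps
alpha_A, alpha_B = 1.0, 0.5                    # decay coefficients, 1/ps
psi_true, omega_true = 1.0, 0.5                # "unknown" targets (illustrative)

data = [peak_of(reflected_pulse(t, psi_true, omega_true, a), t)
        for a in (alpha_A, alpha_B)]           # (t_r, E_p) for experiments A, B

def residuals(x):
    psi, omega = x
    return [reflected_pulse(t_r, psi, omega, a) - E_p
            for (t_r, E_p), a in zip(data, (alpha_A, alpha_B))]

psi_est, omega_est = fsolve(residuals, x0=[0.8, 0.3])
print(psi_est, omega_est)                      # recovers ~ (1.0, 0.5)
```

Once Ψ and Ω are recovered in this way, the material parameters εr and μr would follow from Eqs. (3) and (4). We now return to the formal procedure of determining the material properties.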
To that end, we shall conduct two experiments considering two distinct exponential-pulse excitations that differ in their pulse decay coefficients. For the sake of clarity, we denote these experiments as A and B, with corresponding decay coefficients αA and αB (see Eqs. (8) and (9)). Consequently, Eq. (5) can be used to represent the two pulses in experiments A and B as e^i(t|αA,B) = e_m exp(-αA,B t) H(t). Upon carrying out these experiments, we get two reflected pulses from which we can extract their peaks and the corresponding instants. The first step is to calculate the echograms E_y^r(x, z, t′|αA) for experiment A and E_y^r(x, z, t′|αB) for experiment B. For the sake of brevity, we shall further drop the (unit) amplitudes of the incident pulses and consider the normalized reflected pulses. Consequently, using Eq. (7) we obtain the normalized echograms of Eq. (10). Next, taking the time derivative of Eq. (10), we find the instants t_r;αA and t_r;αB at which the derivative vanishes, that is, at which the reflected pulses attain their peaks (Eq. (11)). Pursuing this approach, we end up with the (independent) Eqs. (12a) and (12b). In practice, the time derivative is calculated using a suitable finite-difference approximation. Illustrative example In the following example, the probed metasurface is excited by uniform plane waves with unit amplitudes e_m = 1 V/m and the angle of incidence β = π/4 rad. The interrogation pulses, shown in Fig. 2, are further described by their decay coefficients αA = 1 ps−1 and αB = 0.5 ps−1. For this instance, the layer under consideration is characterized by thickness δ = 50 µm and εr = 18, µr = 6. Figure 3 represents Eq. (10) for the selected parameters of the incident pulses and metasurface. Figure 4 shows a graphical representation of the solutions of Eq. (11) for the selected parameters of the incident pulses and the metasurface. The corresponding peaks, indicated in Fig. 3, are denoted by E_p;αA = E_y^r(x, z, t_r;αA|αA) and E_p;αB = E_y^r(x, z, t_r;αB|αB). Substituting the data extracted from the echograms, (t_r;αA,B, E_p;αA,B), in Eq. (10), we end up with a system of two (non-linear) Eqs. (12a) and (12b) with two unknowns Ψ and Ω. Its solution finally yields, via Eqs. (3) and (4), the desired material parameters εr and µr. B. GLOBAL OPTIMIZATION APPROACH A method for the characterization of thin-sheet metasurfaces was introduced in [23]. The method is based on the cooperation of an arbitrary stochastic optimizer with a relatively new model of the TD EM fields in the vicinity of a metasurface with combined magneto-dielectric properties. This TD solution is used as the forward solver that evaluates the candidate solutions u proposed by the global optimization algorithm, which solves a single-objective optimization problem that can be formulated as minimizing, over u ∈ Γ, the misfit F(u) = Σ_{n=1}^{N} [E_comp(r, t_n|u) - E_true(r, t_n)]² (13). Here, N is the total number of discrete time samples n, and r is the position vector where the electric field is observed. Symbols E_comp and E_true denote the "computed (optimized)" and "true (measured)" observed electric field, respectively. Candidate solutions u are vectors u = {εr, µr} located in the decision space Γ. In principle, the optimization problem defined by (13) can be solved by any single-objective optimization algorithm. However, the comparative study [23] proved that the Particle Swarm Optimization (PSO) algorithm [24] solves that problem in the most effective way among a set also containing four other state-of-the-art algorithms, namely: Genetic Algorithm [25], Differential Evolution [26], Invasive Weed Optimization [27], and Covariance Matrix Adaptation-Evolutionary Strategy [28].
Therefore, only the PSO algorithm is used in this study. PSO is a representative of the so-called swarm intelligence algorithms. A set of particles (decision-space vectors u) cooperates to search for the position with the best value of the objective function. All the particles move in the decision space, each with its own velocity. The velocity of every particle is updated based on a combination of three procedures: 1) the particle is forced to continue its random inertial movement, 2) the particle is attracted to its personal best position (the position visited by the particle with the so-far best value of the objective function), and 3) the particle is attracted to the global best position (the best position among all personal best positions in the swarm). The particle thus uses cognitive learning (it benefits from its own experience) as well as social learning (it benefits from the knowledge of the whole swarm); a minimal code sketch of this update scheme is given below, after the first numerical results. The balance between the exploration and the exploitation of the algorithm is made by balancing the weights of the individual procedures 1)-3). For more details about the principles of PSO, the reader is referred to [24]. The MATLAB implementation of the PSO algorithm in the software package FOPS [29] is used in this study. The trade-off between all three "forces" mentioned above can be set by user-defined controlling parameters. The inertia weight w forces particles to explore the decision space Γ, the cognitive learning factor c1 supports the local exploitation of the area near the personal best, and the social learning factor c2 favors the exploitation of the area near the global best. All the controlling parameters of the PSO algorithm used in this study are summarized in Tab. 1. All the results presented below are based on statistical data collected for 100 repetitions of every optimization run. IV. NUMERICAL EXAMPLES A. ANALYTICAL APPROACH In this subsection, we present the results of the analytical approach for a variety of selected parameters β (= the angle of incidence) or δ (= the layer's thickness). Furthermore, we perform the calculation for two distinct pairs of interrogation pulses illustrated in Fig. 5. Thus, both experiments A and B are carried out twice (see Eqs. (8) and (9)). For the analytical model to apply, the metasurface is assumed to be very thin with respect to the spatial support of the excitation pulse, that is, c0/α >> δ. Therefore, the pulse time width of the excitation pulses is to be chosen with respect to this condition. To find out the influence of β and δ on the desired parameters, we consider two mutually independent cases. Tables 2 and 3 contain the values of the α-coefficients, β and δ. In both cases, the EM constitutive properties of the layer, εr = 50 and µr = 3, remain constant. Based on the parameters shown in Tabs. 2 and 3, the graphical outputs (Figs. 7 to 10) of the obtained echograms E_y^r(x, z, t′|αA,B) and their derivatives ∂t E_y^r(x, z, t′|αA,B), for both experiments, are given below. For the sake of clarity, the zero-crossing regions are zoomed and incorporated in the figures as their insets. From these regions, we can read the key values t_r;αA,B and E_p;αA,B that are next used to extract the desired material parameters. It is interesting to observe that, in contrast to the incident pulse shapes (see Fig. 5), the reflected ones take both positive and negative values. This fully complies with the enforced cross-layer conditions (see Ref. [15]) and the pertaining EM field equations.
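Before turning to the tabulated results, and as promised in Section III.B, here is a minimal sketch of the global-best PSO update described there. The forward model below is a toy placeholder standing in for the TD forward solver, and the controlling parameters (w, c1, c2, N_A, N_I) are illustrative values, not those of Tab. 1 or of the FOPS package.

```python
# Minimal global-best PSO sketch for the Eq. (13)-style misfit minimization.
# Toy forward model and parameter values are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 64)

def forward(u):
    """Toy stand-in for the TD forward solver E_comp(r, t | u)."""
    eps_r, mu_r = u
    return eps_r * np.exp(-t) + mu_r * t * np.exp(-t)

u_true = np.array([50.0, 3.0])                 # searched parameters
E_true = forward(u_true)                       # "measured" field samples

def objective(u):
    return np.sum((forward(u) - E_true) ** 2)  # misfit in the sense of Eq. (13)

lo, hi = np.array([1.0, 1.0]), np.array([100.0, 10.0])  # decision space Gamma
NA, NI = 30, 100                               # agents, iterations
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social

x = rng.uniform(lo, hi, size=(NA, 2))          # particle positions
v = np.zeros_like(x)                           # particle velocities
pbest = x.copy()
pbest_f = np.array([objective(p) for p in x])
g = pbest[np.argmin(pbest_f)]                  # global best

for _ in range(NI):
    r1, r2 = rng.random((NA, 2)), rng.random((NA, 2))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + v, lo, hi)                 # keep particles inside Gamma
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    g = pbest[np.argmin(pbest_f)]

print("recovered (eps_r, mu_r):", g)           # ~ [50.  3.]
```

The three velocity terms correspond one-to-one to procedures 1)-3) above: inertia, attraction to the personal best, and attraction to the global best.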
Tables 4 to 7 summarise the instants t_r;αA,B, normalized to the value of the pulse time width t_w;αA,B, and the corresponding peaks E_p;αA,B for both cases. It is seen from Eq. (2) that if Ψ = Ω, the reflected field response is identically zero and the metasurface is thus electromagnetically transparent. Equivalently, this condition can be written as Eq. (14), where β0 is the angle of incidence at which the metasurface behaves as transparent. For the considered TE-polarized EM wave, Eq. (14) can be satisfied when µr ≥ εr. Figure 6 illustrates the angles β0 satisfying Eq. (14) for three different values µr = {75; 100; 150} and constant εr = 50. Apparently, in this circumstance, the analytical TDR approach fails, which calls for an alternative approach. It will be next demonstrated that a stochastic optimization approach helps to bypass this limitation. B. GLOBAL OPTIMIZATION APPROACH The accuracy of the characterization is quantified by the DER metric evaluated over the decision space, where D is the dimension of the decision space vector u (D = 2 in our case). The superscript 'true' denotes the searched (optimal) values: u_1^true = εr = 50.0 and u_2^true = µr = 3.0 in our case. First, we show the influence of the incidence angle β on the accuracy of the proposed characterization method. The DER values for different values of β from the range 0 ≤ β ≤ 3π/8 are plotted in Fig. 11. The accuracy slightly decreases with increasing β. The figure compares the accuracy of the method for different numbers of objective function evaluations (OFE), determined by the number of agents and iterations: OFE = N_A × N_I. While for N_A = N_I = 30 we get a result with an unacceptable error of order 10^1, all other combinations provide results with DER < 1.0. It is interesting that the curves for the combinations N_A = 50, N_I = 100 and N_A = 100, N_I = 50 almost overlap. This implies that it does not matter whether the available OFE are invested in more agents or in more iterations. Next, we investigate how the DER metric depends on the metasurface thickness δ. This knowledge can be of great importance, as the parameters of the characterization method (namely the width t_w of the EM wave pulse) can be adjusted according to the thickness of the sample under test. Figure 13 shows the DER metric against values of δ from the interval 0.1 ≤ δ ≤ 20.0 (in µm). Overall, the accuracy of the characterization gets better with growing δ. This effect can be explained by the variable shape of the objective function, which depends on the mutual relation between the layer's EM constitutive parameters and its thickness. There is an obvious discontinuity in DER for all combinations of the number of agents and iterations near δ ≈ 8.0 µm. To further investigate the mutual influence of the t_w and δ parameters, we perform a parametric study over them. The resulting average values of the DER metric for combinations of values t_w and δ are shown in Fig. 12. Please note that the colormap in the figure is scaled logarithmically. The figure clearly shows a slice where the method achieves an error below the order of 10^-4. The major advantage of using the global optimization approach is that it works reliably for all combinations of the searched relative permittivity and permeability values. This can be evidenced by the results shown in Fig. 14. Here, a contour plot with a logarithmic scale of DER metric values for different combinations of metasurface parameters εr and µr from the whole space Γ is shown.
It proves that a reasonably low value of the error (under 10^-2) can be achieved for any pair εr-µr, and the error falls below 10^-5 for a significant part of Γ where µr > 2.5. The orange dashed line in Fig. 14 denotes the region of transparency, where the metasurface is not visible to the used EM wave [15]. The methodology based on the global optimization, however, yields the correct material parameters even if the condition of transparency is met. V. CONCLUSION We have presented analytical TDR and stochastic-optimization approaches to characterizing the constitutive properties of a scalar metasurface directly in the TD. It has been shown that the analytical TDR approach makes it possible to obtain the material properties exactly and in a computationally effortless manner. On the other hand, the analytical methodology fails in the region of TD EM transparency. In this circumstance, the stochastic optimization approach lends itself to remedy the issue. This has been shown through a number of illustrative numerical examples demonstrating the high accuracy and robustness of the optimization approach.
2022-01-21T16:03:44.394Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "eb8762f1a4b83f087da5138f94f4d13ad2092338", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/9668973/09686750.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "d1ed1df85e5fd5d736dd422f98040e4da2839de3", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Computer Science" ] }
248290784
pes2o/s2orc
v3-fos-license
Microencapsulation of Dandelion (Taraxacum officinale L.) Leaf Extract by Spray Drying SUMMARY Research background Due to numerous health-promoting properties, dandelion has been used in traditional medicine as a herbal remedy, but also as a food product. Dandelion health benefits are ascribed to the presence of different bioactive compounds in its tissues, among which polyphenols play a significant role. However, the low stability of polyphenols is a critical parameter for their successful implementation into products. Thus, their encapsulation using appropriate carrier vehicles is highlighted as an effective technique for their stabilization and protection. The aim of this study is to microencapsulate dandelion leaf extract using spray drying and different carrier materials for the first time. Experimental approach In spray drying, a low inlet temperature of 130 °C was employed to preserve sensitive dandelion polyphenols, while guar gum, gum arabic, inulin, maltodextrin, pectin and alginate were used as carriers. The influence of different carriers and their content on the physicochemical, morphological and colour properties, polyphenolic content and encapsulation efficiency of polyphenols in dandelion powders was examined. Specific polyphenols were determined using HPLC-PAD analysis. Their release profiles and antioxidant capacity in simulated gastrointestinal conditions were also evaluated. Results and conclusions Compared to plain dandelion powder, carrier-containing dandelion powders have favourably increased solubility, enhanced flow and cohesive properties, reduced particle size and prolonged release of polyphenols under simulated gastrointestinal conditions. Powders were characterized by low moisture content (~2-8%) and high solubility (~92-97%). Chicoric acid was the most abundant compound in dandelion powders. Pectin-dandelion powder proved to be the most effective for microencapsulation of polyphenols, especially for chicoric acid entrapment (74.4%). Alginate-dandelion powder enabled the slowest, most gradual release of polyphenols. Novelty and scientific contribution Spray drying at 130 °C and the applied carriers proved to be effective for microencapsulation of dandelion extract, and the obtained polyphenol-rich dandelion powders, due to their good physicochemical and encapsulation properties, could serve for the enrichment/production of different functional food products. Also, due to the lack of data on dandelion encapsulation, the obtained results could be of great interest for researchers in the encapsulation field, but also for the food industry, especially in the field of instant powders. INTRODUCTION Dandelion (Taraxacum officinale L. Weber ex F.H. Wigg), a nontoxic herb from the Asteraceae family, has for centuries been used in traditional medicine worldwide, mainly due to its antirheumatic, anti-inflammatory, anticarcinogenic, hepatoprotective, antioxidant and hypoglycaemic properties. These properties have been attributed to the large number of bioactive compounds found in dandelion tissues, like terpenes, flavonoids and phenolic compounds (1). Despite numerous benefits for human health, studies dealing with dandelion phytochemicals, especially polyphenols, are still limited. According to our search of the Web of Science databases, in the last few years only 25 scientific articles have studied polyphenols in the dandelion (Taraxacum officinale L.) plant.
In general, most studies focus on the evaluation of the bioactive composition of dandelion flowers and root and their health benefits, rather than of the leaves (2). The stability of bioactive compounds is a critical parameter for their successful incorporation in food products, since they are sensitive to environmental conditions (oxygen, light, heat and water), and therefore their shelf life and bioavailability are affected. After oral consumption, bioactive compounds undergo rapid intestinal and first-pass metabolism, which transforms their chemical structure and changes their bioactivities. Today, scientists are searching for adequate solutions that will ensure the stability of bioactive compounds in the gastrointestinal tract, allow their controlled release at the appropriate target in the organism and protect them during food processing and storage (3). Accordingly, encapsulation represents an effective technique that can ensure protection of a wide range of specific sensitive compounds or whole extracts in adequate carrier systems. There are various techniques to encapsulate extracts: electrohydrodynamic processes such as electrospinning and electrospraying (4-6), phase-change methods such as nanoprecipitation and antisolvent dialysis (7,8), and spray drying (9,10).

Spray drying is a well-established encapsulation technique in the food sector, mainly employed to produce commercial micro-sized powders from liquid feedstocks in a single step (11). Around 90 % of all industrially produced microencapsulated compounds are prepared by spray drying (12). Such wide application of spray drying could be ascribed to its numerous advantages: process simplicity and low operating costs, scale-up capability, control of particle size, shape and morphology, fast and energy-efficient technology, possibility of working with highly viscous feeds through preheating, applicability to both hydrophilic and hydrophobic food ingredients, design of particles with controlled release properties, high encapsulation efficiency and extended shelf life of the obtained powders, etc. (10).

The final powder properties after spray drying are influenced by the feed properties, feed flow rate, gas flow rate, aspiration ratio, inlet air temperature and outlet temperature (13). In general, spray drying implies high temperatures: the inlet air temperature is usually 150-200 °C, while the outlet temperature is usually around 70-90 °C (9). The application of a high inlet temperature reduces the relative humidity of the drying gas and forms particles with lower moisture content, leading to a drier powder that does not stick to the drying chamber (14). The outlet temperature is a result of the air inlet temperature, drying gas flow rate, feed flow rate and feed concentration (9). A higher inlet temperature causes a proportional increase in the outlet temperature (15). In order to obtain a powder with low moisture content, it is preferable to achieve a small temperature difference between the inlet and outlet temperatures and to set a high inlet temperature (9). However, if the temperatures during spray drying are too high, degradation of sensitive and volatile bioactive compounds could occur, followed by reduced effectiveness of the encapsulation process (16).
Although compounds are subjected to this temperature only for a short time, which should not greatly affect the bioactive properties of the encapsulates (3), the application of low inlet and outlet temperatures should be of interest to scientists and industry. Although many scientists are still using high inlet temperatures, some authors have applied low inlet temperatures (~100-130 °C) for spray drying of plant extracts, but they encountered certain challenges. These challenges were mainly related to a high moisture content of the powders after the application of low temperatures, which is not suitable for an adequate shelf life of the powders. Along with a low inlet temperature, which can strongly protect sensitive bioactive compounds, the choice of a suitable carrier in spray drying also plays a crucial role. The employed carriers could strongly determine the physicochemical and morphological properties of the produced microparticles, as well as the encapsulation efficiency of the entrapped compound. The most frequently used biopolymers for spray drying include natural gums, proteins, carbohydrates and lipids (17). Thus, to produce high-quality spray-dried powders with appropriate physicochemical, bioactive and sensory characteristics, many parameters should be considered.

Despite the numerous health-promoting effects of dandelion, to the authors' knowledge there are few studies dealing with spray drying of dandelion extract. Only two studies focused on the encapsulation of dandelion bioactive compounds, using the ionic gelation technique. Bušić et al. (18) immobilized polyphenols from dandelion leaf extract using ionic gelation of alginate and implemented new filler materials (cocoa powder and carob) into the alginate network. On the other hand, Belščak-Cvitanović et al. (19) examined emulsion-templated microencapsulation of dandelion flower polyphenols using ionotropic gelation of alginate and pectin. Both approaches enabled high protection of dandelion polyphenols. Thus, for the first time, this study aims to microencapsulate aqueous dandelion (Taraxacum officinale L.) leaf extract using spray drying and different carrier materials (guar gum, gum arabic, inulin, maltodextrin, pectin and alginate). With the aim of protecting sensitive dandelion polyphenols, a low inlet temperature (130 °C) was employed. The influence of carrier type and content on the physicochemical, morphological and colour properties of the obtained dandelion powders was evaluated. The encapsulation efficiency of dandelion polyphenols (chicoric acid, total polyphenols and hydroxycinnamic acids), the retained antioxidant capacity and the release profiles of polyphenols and antioxidant capacity of dandelion powders in simulated gastrointestinal fluids were examined as well.

MATERIALS AND METHODS

Preparation of dandelion extract and carrier solutions for spray drying

Dry dandelion leaves were ground using a domestic grinder Braun KSM2 (Kronberg, Germany) and sieved to obtain a homogenized fraction (~200 µm). Dandelion extracts were prepared by pouring 200 mL of distilled water (80 °C) over 20 g of the plant material and stirring with a glass rod for 30 min. The water temperature was maintained during the extraction. Afterwards, the obtained dandelion extract was filtered through a tea strainer containing a 4-layer cotton gauze and made up with distilled water to 200 mL.
Carrier/delivery solutions for spray drying were prepared by dissolving different biopolymers in the previously prepared dandelion extract (200 mL) at the following contents (in %, m/V): guar gum 0.5 %; gum arabic, pectin and alginate 4 %; inulin and maltodextrin 10 %. The carrier contents were selected so that the viscosity of the solutions provided adequate liquid atomization during spray drying and yielded free-flowing powders. The prepared carrier solutions were mixed overnight on a magnetic stirrer at 300 rpm (C-Mag HS 7; IKA, Staufen, Germany) and 4 °C, and then 200 g of each solution were spray dried.

Microencapsulation by spray drying

Dandelion extract and carrier solutions were spray dried using a mini Büchi B-290 spray dryer (Büchi Labortechnik AG, Flawil, Switzerland), equipped with a 0.7 mm diameter nozzle. Laboratory-scale operating conditions were chosen according to preliminary tests and set as follows: inlet temperature (130±2) °C, outlet temperature (66±2) °C, air flow rate 600 L/h, liquid feed rate 8 mL/min (30 %), aspiration 100 %, compressed air for liquid atomization (600 kPa) and nozzle pressure drop 55 kPa. The obtained dandelion powders (one plain and six carrier-containing powders: guar gum, gum arabic, inulin, maltodextrin, pectin and alginate) were collected in tight plastic containers and stored at 4 °C until analysis. Plain dandelion powder served as a control sample. The process yield (%) of dandelion powder was calculated as the ratio of the total dry mass of the obtained powder (m2) and the dry mass of the material in the corresponding carrier solution (m1), as follows (20):

Y = (m2/m1) × 100 /1/

Physicochemical properties

The moisture content (%) of dandelion powder was determined gravimetrically by oven drying (oven Tehtnica, Železniki, Slovenia) at 105 °C to a constant mass, according to the modified standard AOAC Method 966.02 (21). Solubility (%) of dandelion powder was determined according to the modified method described in the study of Belščak-Cvitanović et al. (22). A mass of 2.5 g of powder was suspended in 25 mL of distilled water at 30 °C. The suspension was stirred occasionally for 30 min and centrifuged (centrifuge SL 8R; Thermo Fisher Scientific, Suzhou, PR China) for 10 min at 10 867×g. The supernatant was collected, fully drained into an aluminium dish and dried to a constant mass at 105 °C. The mass of the solids recovered after drying was used to calculate the solubility of the powders. Wettability of dandelion powder was measured using the modified method described by Jinapong et al. (23). Briefly, 0.1 g of powder was sprinkled into a 250-mL beaker and then 100 mL of distilled water (25 °C) were added. Wettability was expressed as the time in seconds required for all the powder to become completely wet (when all the particles penetrate the surface of the water). For determination of bulk density (ρB/(g/mL)), dandelion powder was gently loaded into a graduated cylinder and the exact mass of the sample, along with the volume occupied by the sample, was recorded. The ρB was determined by dividing the net mass of the sample by the volume occupied by the sample in the cylinder. The tapped density (ρT/(g/mL)) was calculated by dividing the net mass of the sample by the volume of the sample in the cylinder after it was gently tapped 100 times onto an appropriate rubber mat. Flowability and cohesiveness of dandelion powder were expressed through the Carr index (CI, %) and Hausner ratio (HR) and calculated according to the following equations (23):

CI = ((ρT − ρB)/ρT) × 100 /2/

HR = ρT/ρB /3/
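Eqs. 1-3 are simple ratios and are easy to verify programmatically; a minimal Python sketch (the density values below are hypothetical, not the paper's measurements) could look like this:

```python
def process_yield(m2_g: float, m1_g: float) -> float:
    """Eq. 1: yield (%) as dry mass of powder (m2) over dry mass of feed solids (m1)."""
    return m2_g / m1_g * 100

def carr_index(rho_b: float, rho_t: float) -> float:
    """Eq. 2: Carr index (%) from bulk (rho_b) and tapped (rho_t) density in g/mL."""
    return (rho_t - rho_b) / rho_t * 100

def hausner_ratio(rho_b: float, rho_t: float) -> float:
    """Eq. 3: Hausner ratio; values above ~1.4 indicate a highly cohesive powder."""
    return rho_t / rho_b

# Hypothetical densities (not the paper's data):
ci = carr_index(rho_b=0.39, rho_t=0.61)
hr = hausner_ratio(0.39, 0.61)
print(f"CI = {ci:.1f} %, HR = {hr:.2f}")  # CI = 36.1 %, HR = 1.56
```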
The particle size distribution (PSD) of dandelion powders was measured by a laser diffraction method using a Mastersizer 2000 (Malvern Instruments, Worcestershire, UK) equipped with the Scirocco 2000 dispersion unit. The PSD parameters included d(0.5) and span. The parameter d(0.5), also known as the mass median diameter or the median of the volume distribution, represents the size in µm at which 50 % of the sample is smaller and 50 % is larger. The span is a relative factor that describes the PSD width. The span value was calculated according to the following equation:

span = (d(0.9) − d(0.1))/d(0.5) /4/

where d(0.9) and d(0.1) are the diameters at which 90 and 10 % of the population is below each value, respectively. The closer the span value is to 1, the narrower the PSD (20,24). All analyses were performed in triplicate and presented as mean value±standard deviation (S.D.).

SEM analysis

Scanning electron microscopy (SEM) analysis was applied to evaluate the morphological characteristics of dandelion powders. The analysis was performed using a TESCAN Mira3 microscope (TESCAN ORSAY HOLDING a.s., Brno, Czech Republic). Powders were applied to stubs using two-sided adhesive tape, coated with a layer of gold (50 nm) and analysed at an acceleration voltage of 4-5 kV.

Colour properties

The colour, i.e. L* (lightness), a* (redness and greenness) and b* (yellowness and blueness) values, of dandelion powders was evaluated using a spectrophotometer (CM-3500d; Konica Minolta, Tokyo, Japan). For the analysis, powders were put into adequate Petri dishes ensuring a homogeneous and representative sample. Total colour difference (∆E) was calculated according to the following equation (25):

∆E = √((L* − L0*)² + (a* − a0*)² + (b* − b0*)²) /5/

where subscript 0 refers to the colour values of plain dandelion powder (reference). The colour deviation in comparison to the reference sample was rated according to the following range: ΔE<0.2 (no visible colour difference), ΔE=0.2-1.0 (noticeable colour difference), ΔE=1-3 (visible colour difference), ΔE=3-6 (well visible colour difference) and ΔE>6 (apparent colour deviation) (26). Five replicate measurements were performed and the results were presented as mean value±S.D.

Determination of specific polyphenolic compounds

Specific polyphenols in dandelion powder were identified and quantified on an Agilent 1100/1200 Series HPLC device, equipped with a photodiode array detector (Agilent, Santa Clara, CA, USA) and a reversed-phase column ACE Excel 5 SuperC18 (Advanced Chromatography Technologies, Aberdeen, Scotland, UK) (250 mm×4.6 mm, 5 μm i.d.). For the analysis, 0.25 g of powder was dissolved in 10 mL of distilled water until complete dissolution. In order to eliminate the polysaccharides, the solutions were mixed in a defined ratio with methanol, centrifuged (centrifuge SL 8/8R; Thermo Fisher Scientific) at 10 867×g for 10 min and filtered through 0.45-µm cellulose acetate filters (Nylon Membranes, Supelco, Bellefonte, PA, USA). After precipitation with methanol, 10 μL of the filtered sample were injected into the system. HPLC analysis was performed according to the method of Belščak-Cvitanović et al. (27), using φ(o-phosphoric acid)=0.1 % in water or in methanol as solvents. Specific compounds were identified by comparing the retention times and spectral data with those of standards, while quantification was performed using the calibration curve of each standard. Results were expressed as mg of identified compound per g of sample. Analyses were repeated three times and the results were presented as mean value±S.D.
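Eqs. 4 and 5, together with the colour-deviation bands quoted above, can likewise be expressed in a few lines; a minimal sketch with hypothetical L*, a*, b* readings (not the paper's data):

```python
import math

def span(d10: float, d50: float, d90: float) -> float:
    """Eq. 4: PSD width; the closer to 1, the narrower the distribution."""
    return (d90 - d10) / d50

def delta_e(lab: tuple[float, float, float],
            lab_ref: tuple[float, float, float]) -> float:
    """Eq. 5: total colour difference against the reference (plain) powder."""
    return math.sqrt(sum((x - x0) ** 2 for x, x0 in zip(lab, lab_ref)))

def rate_delta_e(de: float) -> str:
    """Colour-deviation bands used in the paper (ref. 26)."""
    if de < 0.2:
        return "no visible colour difference"
    if de <= 1.0:
        return "noticeable colour difference"
    if de <= 3.0:
        return "visible colour difference"
    if de <= 6.0:
        return "well visible colour difference"
    return "apparent colour deviation"

# Hypothetical readings, not the paper's measurements:
de = delta_e((81.3, -5.0, 15.0), (57.2, -2.0, 20.0))
print(f"dE = {de:.1f}: {rate_delta_e(de)}")
```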
Encapsulation efficiency

The contents of chicoric acid (ChicA), total polyphenols (TP), hydroxycinnamic acids (HCA) and the retained antioxidant capacity in dandelion powders were evaluated by dissolving 0.25 g of powder in 10 mL of distilled water with mixing on a magnetic stirrer (C-Mag HS 7; IKA) until complete dissolution. The encapsulation efficiency (%) was calculated as the ratio between the content of the investigated compound in the aqueous solution of dissolved powders and its respective content in the initial carrier solution. The content of ChicA was determined by HPLC as previously explained in the section Determination of specific polyphenolic compounds. TP were determined spectrophotometrically (model Helios γ; ThermoSpectronic, Cambridge, UK) according to the modified method of Lachman et al. (28), using Folin-Ciocalteu's reagent, while HCAs were evaluated in a reaction with the Arnow reagent (29). Antioxidant capacity was measured with the ABTS radical cation decolorization assay (30). Analyses were repeated three times and the results were presented as mean value±S.D.

Fourier-transform infrared spectroscopy

The attenuated total reflectance Fourier-transform infrared spectroscopy (ATR-FTIR) analysis of the plain carriers, the plain dandelion powder and the carrier-containing dandelion powders was performed using the IRAffinity-1 FTIR spectrophotometer (Shimadzu, Kyoto, Japan). The spectral range was from 4000 to 600 cm⁻¹, while the resolution was 4 cm⁻¹.

In vitro release of polyphenols and antioxidant activity

The release profiles of TP, HCA and antioxidant capacity (ABTS assay) of the obtained dandelion powders were determined in simulated gastric (SGF) and intestinal (SIF) fluids. SGF consisted of sodium chloride and hydrochloric acid (pH=1.2), while SIF comprised phosphate buffer (pH=7.4). For the analysis, 0.3 g of powder was suspended in 30 mL of SGF, previously heated to 37 °C, and mixed for 120 min on a magnetic stirrer (C-Mag HS 7; IKA) at 100 rpm. The suspension was kept constantly at 37 °C. At defined time intervals, an aliquot of 2 mL was withdrawn from the solution and briefly centrifuged in a mini centrifuge (MLX-106; Life Science Products, Inc., Frederick, CO, USA) at 2000×g. The clear supernatant was collected for the analysis. An aliquot of fresh fluid (2 mL, 37 °C) was added back to the solution just after the supernatant was taken out. After 120 min in SGF, the solution was centrifuged (centrifuge SL 8/8R; Thermo Fisher Scientific) for 5 min at 10 867×g and the supernatant consisting of SGF was discarded. The collected microparticles were washed with distilled water and suspended in 30 mL of SIF, previously heated to 37 °C, under the previously defined conditions (37 °C, 100 rpm). Samples were again collected at defined time intervals, as described, until the microparticles were completely disintegrated (240 min in total). The release profile was determined by evaluating the content of TP (mg gallic acid equivalents, GAE, per g of sample), HCA (mg caffeic acid, CaffA, per g of sample) and antioxidant capacity (mmol Trolox per g of sample), as described in the section Encapsulation efficiency, in the SGF/SIF supernatants collected at defined times. Analyses were repeated twice and the results were presented as mean value±S.D.
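The encapsulation efficiency is a single ratio, and the release protocol replaces each withdrawn 2 mL aliquot with fresh fluid, so later concentration readings are slightly diluted by earlier sampling. A sketch of both calculations, assuming a standard cumulative correction for the withdrawn aliquots (the paper does not spell out its exact correction), might look like this:

```python
def encapsulation_efficiency(content_in_powder: float,
                             content_in_feed: float) -> float:
    """EE (%): compound content in the redissolved powder over its content
    in the initial carrier solution."""
    return content_in_powder / content_in_feed * 100

def cumulative_release(concentrations_mg_per_ml: list[float],
                       v_medium_ml: float = 30.0,
                       v_aliquot_ml: float = 2.0) -> list[float]:
    """Cumulative released amount (mg) when every withdrawn aliquot is
    replaced with fresh fluid; the analyte already removed in earlier
    aliquots is added back to each reading."""
    released, withdrawn = [], 0.0
    for c in concentrations_mg_per_ml:
        released.append(c * v_medium_ml + withdrawn)
        withdrawn += c * v_aliquot_ml
    return released

# Hypothetical concentration readings at successive sampling times:
print(cumulative_release([0.10, 0.18, 0.22]))
```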
Statistical analysis

Statistical analysis was performed using Statistica, v. 12 (31), where one-way analysis of variance (ANOVA) with Tukey's post hoc test was run to determine the influence of different carrier materials and their content on the physicochemical and colour properties, polyphenolic content and encapsulation attributes of dandelion powders. The probability level of p<0.05 was considered significant.

RESULTS AND DISCUSSION

The results of the process yield obtained after spray drying of the carrier solutions are not shown in a table. However, the yields followed this descending order (in %): gum arabic-dandelion powder 86.82 > maltodextrin-dandelion powder 75.15 > inulin-dandelion powder 74.74 > guar gum-dandelion powder 68.80 > pectin-dandelion powder 54.75 > alginate-dandelion powder 48.50.

Physicochemical properties of dandelion powders produced by spray drying

The moisture content of plain dandelion powder was 6.1 % (Table 1), and this sample differed insignificantly (p>0.05) only from gum arabic-dandelion powder. The addition of carriers to the delivery solution affected the moisture content differently. Implementation of gum arabic, inulin and maltodextrin in the delivery solution decreased the moisture content of the evaluated dandelion powder, while guar gum, pectin and alginate increased it. The lowest moisture content (1.93 %) was achieved in maltodextrin-dandelion powder, which differed significantly (p<0.05) from all other samples. Guar gum-dandelion powder exhibited the highest moisture content (8.0 %), insignificantly (p>0.05) different from the values obtained for pectin-dandelion powder (7.60 %) and alginate-dandelion powder (7.3 %). These results were expected, since maltodextrin-dandelion powder had the highest content of the carrier (10 %) in the delivery solution and consequently the highest total solid content. The contrary was noted for guar gum-dandelion powder (0.5 % of carrier). The results indicated a greater influence of carrier content than of carrier type on the moisture content of dandelion powder.

In the present study, to preserve the stability of sensitive polyphenols during spray drying, a low inlet temperature (130 °C) was applied. In general, the use of high inlet temperatures (150-190 °C and higher) leads to faster heat transfer between the product and the drying air. Consequently, a higher temperature gradient is achieved between the atomized feed and the drying air, resulting in the greatest driving force for water evaporation and a reduced moisture content of the powders (32). This was confirmed by Sablania and Den Bosco (33), who reported decreased moisture content of powders after increasing the inlet temperature. Moreover, the authors found an inlet temperature of 165 °C to be optimal for spray drying of Murraya koenigii leaf extract using maltodextrin and gum arabic, with a moisture content of 3.03 %. Bhusari et al. (34) reported similar observations for spray-dried tamarind pulp powder.

There are many factors that could influence the solubility of spray-dried powders: properties of the raw materials, carrier systems (type and content), physicochemical properties of the final powder (moisture content, particle size, physical form of the particle) and drying parameters (atomization, inlet and outlet temperatures and feed flow rate) (35). The solubility of the obtained dandelion powders was higher than 90 %, which is in agreement with other studies (20,36). It ranged from 91.8 (plain dandelion powder) to 97.1 % (guar gum-dandelion powder) (Table 1).
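All of the pairwise significance statements in these results rest on the one-way ANOVA with Tukey's post hoc test described above. The authors used Statistica; an equivalent sketch in Python with hypothetical triplicate data (not the paper's measurements) could be:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical triplicate moisture contents (%) for three powders:
plain = [6.1, 6.0, 6.2]
malto = [1.9, 2.0, 1.9]
guar = [8.0, 7.9, 8.1]

f_stat, p_value = f_oneway(plain, malto, guar)  # one-way ANOVA
print(f"ANOVA: F = {f_stat:.1f}, p = {p_value:.4f}")

values = np.array(plain + malto + guar)
groups = ["plain"] * 3 + ["maltodextrin"] * 3 + ["guar gum"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise comparisons
```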
The obtained results implied that the presence of carriers enhanced the solubility of dandelion powder, especially in the case of guar gum-, pectin- and alginate-dandelion powders. These samples had significantly (p<0.05) higher solubility than plain dandelion powder. Among the carrier-containing dandelion powders, the one containing maltodextrin was characterized by the lowest solubility (92.0 %), which was significantly (p<0.05) lower than the solubility of dandelion powders containing guar gum, pectin or alginate. Moreover, the samples containing the highest carrier content (10 %, maltodextrin- and inulin-dandelion powders) had the lowest solubility. On the other hand, the sample prepared with the lowest carrier content (0.5 %, guar gum-dandelion powder) was characterized by the highest solubility. This highlighted the impact of carrier content on the solubility of dandelion powders. However, when observing carrier-containing dandelion powders prepared with the same carrier content (the groups prepared with 4 or 10 %), no significant (p>0.05) difference within each group was found.

The selected delivery materials affected the wettability of the produced dandelion powders differently. Compared to plain dandelion powder, there was an uneven trend after the addition of carriers to the delivery mixture. Plain dandelion powder became completely wet after 238 s, and it was significantly (p<0.05) different from guar gum- and alginate-dandelion powders. The longest time to penetrate the surface of the water was determined for dandelion powders containing guar gum (27 601 s) and alginate (24 218 s) (Table 1). These samples differed significantly (p<0.05) from each other and from all other carrier-containing dandelion powders. The shortest time to become wet was measured for pectin-dandelion powder (98 s), whose wettability values were significantly (p<0.05) different from guar gum-, gum arabic- and alginate-dandelion powders. An uneven trend was observed in the influence of carrier content on the wettability of the prepared powders. Thus, the obtained results indicated that the wettability of the powders was influenced more by the carrier type than by the carrier content.

The bulk density of plain dandelion powder (0.28 g/mL) differed significantly (p<0.05) from the other powders, except for gum arabic-dandelion powder (0.29 g/mL) (Table 1). Alginate-dandelion powder had a significantly (p<0.05) higher bulk density (0.39 g/mL) than the other samples. Bhusari et al. (34) determined similar bulk density values (0.39−0.69 g/mL), where the bulk density of tamarind pulp powders decreased after increasing the carrier content. In the present study, the sample prepared with the lowest carrier content (0.5 %, guar gum-dandelion powder) was characterized by the lowest bulk density (0.20 g/mL), differing significantly (p<0.05) from all other samples. These results could be related to the moisture content of guar gum-dandelion powder, since this sample had the highest moisture content. However, the same trend was not observed for the samples containing the highest carrier content (10 %, maltodextrin- and inulin-dandelion powders). These samples did not have the highest bulk density values. An uneven trend, depending on the carrier content, was observed in the study of Şahin-Nadeem et al. (37). Moreover, Belščak-Cvitanović et al. (16) determined that the content of carriers did not have a marked influence on the bulk density of green tea powders.
On the other hand, the type of carrier exhibited a significant effect on the bulk density of the powders. The present study also revealed a greater impact of carrier type than of its content on the bulk density of dandelion powders.

Carr index (CI), a flowability parameter, ranged from 35.7 (alginate-dandelion powder) to 49.7 % (plain dandelion powder) (Table 1). Compared to plain dandelion powder, the CI of carrier-containing powders decreased, indicating enhanced flowability of the dandelion powder after the addition of carriers to the delivery solution. The flowability of plain and maltodextrin-dandelion powders was characterized as very bad (CI>45), and these samples did not differ significantly (p>0.05) from each other. The flowability of the other samples was ranked as bad (CI=35−45). Hausner ratio (HR), a parameter indicating the cohesiveness of the produced powders, followed the results obtained for CI. The HR of carrier-containing dandelion powders (1.56−1.86) was lower than the HR of plain dandelion powder (2.0), which differed significantly (p<0.05) from all carrier-containing dandelion powders (Table 1). The lower the HR, the more favourable the cohesiveness of the powders. Even though the HR values of carrier-containing dandelion powders decreased, which is favourable, all samples showed high cohesiveness (HR>1.4). Among the delivery systems, the one prepared with alginate could be highlighted as having the best flow and cohesive properties, due to the lowest CI and HR values. The carrier content affected CI and HR unevenly, with no established trend, which again puts the emphasis on choosing a suitable carrier type in the first place.

The PSD parameters of the examined dandelion powders are presented in terms of d(0.5) and PSD width (span) and are shown in Table 1. Particle size is one of the most important quality parameters determined in the food powder industry, since it can strongly affect handling, transportation and shelf life properties (35). It is related to the atomizer type and to the physical properties and content of the carrier solution (38). In the present study, d(0.5) of plain dandelion powder was 71.1 µm. On the other hand, d(0.5) of carrier-containing dandelion powders was significantly (p<0.05) lower, even 3−10 times, and it ranged from 6.7 to 22.4 µm (Table 1). This highlighted the importance of carriers in the delivery solutions. In general, a small particle size below about 50 µm is characteristic of spray-dried powders (39), which is consistent with the results obtained here. Among carrier-containing dandelion powders, the smallest d(0.5) was characteristic of maltodextrin-dandelion powder, while the largest d(0.5) was that of guar gum-dandelion powder. These results suggested that samples prepared with a higher carrier content (10 %, maltodextrin and inulin; insignificant (p>0.05) difference between them) resulted in powders with a smaller particle size, and vice versa. Such results again underlined the impact of carrier content, rather than carrier type, on the d(0.5) of dandelion powders. Hashib et al. (40) also reported this relation, where the particle size of spray-dried pineapple powders decreased after increasing the content of maltodextrin. However, studies differ regarding the relation between particle size and carrier content. Contrary to the previous results, some studies reported that increasing the carrier content in the delivery solution leads to a larger particle size (41).
Furthermore, the powder particle size could influence some other physical properties, such as the bulk density. In general, the bulk density of powders increases with decreasing particle size, since smaller particles reduce the void spaces between particles and pack more closely (42). This is consistent with the present results. Such a relation was observed for alginate-dandelion powder, which had the highest bulk density (0.39 g/mL) and the smallest d(0.5) (7.98 µm). Conversely, guar gum-dandelion powder had the lowest bulk density (0.20 g/mL) and the highest d(0.5) (22.4 µm).

Span factor values were calculated employing the d(0.1), d(0.5) and d(0.9) values, according to Eq. 4. The span of plain dandelion powder (4.3) was significantly (p<0.05) different from all other samples, and depending on the employed carrier, the span of the other samples was lower or higher (Table 1). Among carrier-containing dandelion powders, the samples prepared with gums exhibited the highest span values (guar gum-dandelion powder 12.9 and gum arabic-dandelion powder 28.10), but their values were significantly (p<0.05) different from each other. On the contrary, the lowest span values, insignificantly (p>0.05) different from each other, were determined for alginate- (2.16) and maltodextrin-dandelion powders (2.79). Since the span values of all dandelion powders were higher than 1, the obtained results implied that the samples analysed in this study exhibit a broad size distribution and the high polydispersity typical of spray-dried powders. However, this is not surprising for spray-dried particles. Ćujić-Nikolić et al. (43) also reported span values higher than 1 for spray-dried chokeberry extracts, similar to this study. When observing the influence of carrier content on the PSD width, an uneven trend was reported, indicating here the greater importance of carrier type rather than content.

SEM analysis of dandelion powders produced by spray drying

The SEM micrograph of plain dandelion powder revealed a polydisperse distribution of microparticles, with dented surfaces and visible cracks (Fig. 1a). The improvement of the morphological properties of carrier-containing dandelion powders depended strongly on the carrier selection (Fig. 1). When hydrocolloid gums such as guar gum and gum arabic were used (Figs. 1b and 1c), an extremely inhomogeneous structure was still observable. The particles were not finely dispersed, revealing the irregular and highly compacted structure of these powders. Certain bulges on the surface of guar gum-dandelion powder were also visible. The most favourable morphology among the samples was determined for inulin-dandelion powder (Fig. 1d). This sample was characterized by the most spherically shaped and uniform microparticles, with smooth surfaces and no visible dents or ruptures. When maltodextrin was used as a carrier (Fig. 1e), a wrinkled shape, shrinking and structure cracking were observed. However, the sphericity of the microparticles of maltodextrin-dandelion powder was somewhat improved compared to the powders prepared with guar gum and gum arabic. Araujo-Díaz et al. (44) reported similar morphology when employing inulin and maltodextrin as carriers for spray drying of blueberry juice. They also obtained particles with a spherical shape and smooth surface when using inulin, while a rougher surface was determined for maltodextrin powders, which is in accordance with this study. Furthermore, compared to maltodextrin-dandelion powder, similar morphological characteristics were observed for pectin- and alginate-dandelion powders (Figs. 1f and 1g).
However, a less rough surface and collapsed form, but a more irregular particle shape, were observed for these two samples. In addition, when observing the SEM images of carrier-containing dandelion powders, inulin- and maltodextrin-dandelion powders revealed the most homogeneous and loose structure of the microparticles, with minimal agglomeration. This could be correlated with particle size, since these samples had the lowest d(0.5). However, if correlating the morphology of the powders with the carrier content, an uneven trend is observable. Thus, the obtained results showed a greater importance of the selected carrier type than of its content for the morphology of the dandelion powders.

Colour properties of dandelion powders produced by spray drying

Whether spray-dried powders are used as final products or are intended for incorporation into other products, the determination of colour properties is one of the most important quality parameters, especially considering that coloured powders can be used as colouring agents. The lightness (L*) of plain dandelion powder was 57.2 (Table 2). The addition of carriers to the delivery solution significantly (p<0.05) increased the L* values, resulting in a lighter colour of these samples. The exception was guar gum-dandelion powder, which was insignificantly (p>0.05) darker (L*=56.7) than the plain one (lower L* value). Among carrier-containing dandelion powders, the powder containing maltodextrin was the lightest (L*=81.3), significantly (p<0.05) different from the others. When observing the impact of carrier content on the L* values, the lightness of the samples increased with increasing carrier content. Thus, dandelion powders containing 10 % maltodextrin or inulin were the lightest. On the contrary, the sample prepared with the lowest carrier content (0.5 % guar gum) was the darkest, suggesting a strong effect of carrier content on the lightness of the evaluated powders. Also, an important impact of carrier type on the lightness of dandelion powders was observed, since all samples prepared with an equal carrier content were significantly (p<0.05) different. These results suggest that both the carrier type and its content have a strong effect on the lightness of dandelion powders.

Carrier-containing dandelion powders were significantly (p<0.05) greener (lower a* values) than plain dandelion powder. The exception was the dandelion powder containing guar gum, with a significantly (p<0.05) higher a* value than the others, indicating a redder colour of this sample (Table 2). Among carrier-containing dandelion powders, the greenest was the one with maltodextrin (significantly (p<0.05) different from the others), followed in descending order of greenness by inulin>alginate>gum arabic>pectin>guar gum. This order revealed that the higher the carrier content, the stronger the green colour, and vice versa. Compared to plain dandelion powder, the yellowness (b* values) significantly (p<0.05) increased in guar gum-dandelion powder, while in the other carrier-containing dandelion powders this value significantly (p<0.05) decreased. In general, the greener the powders, the less yellow their colour. As mentioned above, maltodextrin- and inulin-dandelion powders were the greenest and accordingly the least yellow (the lowest b* values). However, maltodextrin- and inulin-dandelion powders had significantly (p<0.05) different values.
The opposite was observed for guar gum-dandelion powder, which was the least green, and accordingly its yellowness was the highest (Table 2). Thus, the lower the carrier content, the stronger the yellow colour of the powder. Also, when observing both the a* and b* values, an inverse relation with the applied carrier content was reported (the higher the carrier content, the lower the a* and b* values). Şahin-Nadeem et al. (37) also reported the same decrease in a* and b* values of spray-dried sage powders after increasing the carrier content, while the L* values increased, as was the case in this study. Moreover, gum arabic- and alginate-dandelion powders (4 % carrier) exhibited no significant difference (p>0.05) in terms of both a* and b* values, while the other samples prepared with the same carrier content were significantly (p<0.05) different. In general, taking into account the results obtained for the a* and b* values of dandelion powders, both the carrier type and content had a high impact on the greenness and yellowness of the examined powders.

Since the L*, a* and b* values can vary with the nature of the raw material, it is important to observe colour differences in terms of ∆E. The lowest ∆E (2.70) was ascribed to guar gum-dandelion powder, while maltodextrin-dandelion powder scored the highest ∆E (27.3), as shown in Table 2. The results indicated that the higher the carrier content, the higher the ∆E, and vice versa. Furthermore, the ∆E among all carrier-containing dandelion powders was significantly (p<0.05) different. These results followed the ones determined for the other colour parameters, where both the carrier type and content had a high influence on the ∆E of dandelion powders. According to the determined ∆E values, a visible colour difference was reported for guar gum-dandelion powder, while the other powders were characterized by an apparent colour deviation.

Polyphenolic composition, encapsulation efficiency of polyphenols and retained antioxidant capacity of dandelion powders produced by spray drying

HPLC analysis of dandelion powders revealed HCAs as the major group of polyphenolics found in dandelion leaves. Among them, caftaric (CaftA), chlorogenic (ChlA), caffeic (CaffA) and chicoric acid (ChicA) were identified in the analysed samples. Such superiority of HCAs among other polyphenolic groups was confirmed in other studies, with chicoric acid representing the main compound found in dandelion (1,18). In the present study, ChicA was also marked as the most abundant specific polyphenol of dandelion powder, followed by CaftA>ChlA>CaffA. As shown in Table 3, the mass fraction of CaftA ranged from 2.03 (inulin-dandelion powder) to 10.50 mg/g (plain dandelion powder). When comparing plain dandelion powder and the other carrier-containing powders, the latter had significantly (p<0.05) lower mass fractions of CaftA. This trend was also seen among the other detected HCAs. This behaviour indicated that the addition of carriers to the delivery solution decreased the HCA content. Belščak-Cvitanović et al. (16) similarly observed that the contents of epigallocatechin gallate and caffeine decreased in green tea powders containing various biopolymers, compared to plain green tea powder. The results obtained in this study could be explained by the potential formation of polyphenol-polysaccharide complexes.
Here, a certain amount of dandelion polyphenols could be attached to and absorbed on the used polysaccharide carrier, which eventually resulted in a reduced content of the detected free dandelion polyphenols. Namely, polyphenols are highly reactive molecules that can establish interactions with different macromolecules and consequently form different polyphenol-macromolecule complexes (45). Studies modelling plant cell walls using cellulose and pectin showed that such materials were able to absorb model phenolic acids and anthocyanins through non-covalent interactions (46). Moreover, guar gum-dandelion powder had the highest mass fraction of CaftA (9.21 mg/g) among the carrier-containing dandelion powders, differing significantly (p<0.05) from the others. This pattern was also observed for the other analysed HCAs. On the other hand, inulin-dandelion powder had the lowest mass fraction of CaftA (2.03 mg/g), also differing significantly (p<0.05) from the others, with the exception of maltodextrin-dandelion powder.

The mass fraction of ChlA in plain dandelion powder was 3.38 mg/g, while among carrier-containing dandelion powders its mass fraction ranged from 0.39 (gum arabic-dandelion powder) to 2.32 mg/g (guar gum-dandelion powder) (Table 3). The mass fractions of ChlA in gum arabic-, inulin- and maltodextrin-dandelion powders were insignificantly (p>0.05) different from each other. Furthermore, CaffA was reported as the least abundant HCA found in dandelion powders (0.11−1.12 mg/g), while ChicA was the most represented one (7.74−39.3 mg/g) (Table 3). As in the other cases, plain dandelion powder had the highest mass fraction of both CaffA and ChicA, differing significantly (p<0.05) from the others. Among the carrier-containing dandelion powders, the one with inulin had the lowest mass fraction of both acids, and this sample was insignificantly (p>0.05) different only from maltodextrin-dandelion powder. Guar gum-dandelion powder had the highest mass fraction of CaffA and ChicA, and it was significantly (p<0.05) different from all others.

In general, the results suggested an inverse relation between the carrier content and the HCA mass fraction in dandelion powders, since the sample prepared with the lowest carrier content (0.5 % guar gum) had the highest mass fraction of the analysed HCAs, and vice versa. This highlighted the impact of carrier content on the HCA mass fraction. Siacor et al. (47) obtained a similar trend, where the polyphenol content in spray-dried mango powders decreased strongly with increasing carrier content. Such a relation in this study could be ascribed to possible interactions between dandelion polyphenols and the employed carriers, where a higher carrier content could mean more material to interact with polyphenols, more polysaccharide-polyphenol complexes and therefore fewer free polyphenols to detect. Furthermore, the samples prepared with the highest carrier content (10 % maltodextrin or inulin) did not differ significantly (p>0.05) in the evaluated HCA mass fractions. On the other hand, gum arabic-, pectin- and alginate-dandelion powders prepared with 4 % of carrier differed significantly (p<0.05) in the HCA mass fractions (the exception being the CaffA content in gum arabic- and alginate-dandelion powders). This revealed that the carrier type also affected the HCA mass fraction. The obtained results suggested that both the carrier type and content influenced the mass fraction of HCAs in dandelion powders.
Since ChicA was the most abundant polyphenol in dandelion leaves, its encapsulation efficiency was examined using HPLC analysis. The highest encapsulation efficiency, significantly (p<0.05) different from the others, was determined for pectin-dandelion powder, with 74.4 % ChicA entrapment (Table 3). It was followed by guar gum-dandelion powder (55.9 %), while the samples containing inulin (23.21 %) and maltodextrin (24.11 %) enabled the lowest ChicA retention. These two samples were insignificantly (p>0.05) different from each other. Considering that guar gum- and pectin-dandelion powders had the highest mass fraction of ChicA, while maltodextrin- and inulin-dandelion powders had the lowest one, such results are not surprising. The spectrophotometrically determined encapsulation efficiency of TP confirmed pectin-dandelion powder as the sample with the highest entrapment rate of TP (63.57 %), differing significantly (p<0.05) from the others. An encapsulation efficiency of around 40 % was quantified for alginate- and gum arabic-dandelion powders, and the difference between them was not significant (p>0.05). The same insignificant trend (p>0.05) was determined for inulin- and maltodextrin-dandelion powders, which had the lowest encapsulation efficiency (~17 %). Similar results were obtained for the encapsulation efficiency of HCA, where again pectin-dandelion powder had the highest ability to encapsulate HCA (67.90 %), significantly (p<0.05) higher than the others (Table 3). An insignificant (p>0.05) difference in the encapsulation efficiency of HCA was obtained only for guar gum- and alginate-dandelion powders. On the other hand, inulin- (18.62 %) and maltodextrin-dandelion powders (20.03 %) showed the lowest ability to entrap HCA, but in this case they differed significantly (p<0.05).

The ABTS analysis again highlighted pectin-dandelion powder as the sample with the highest ability to retain antioxidant capacity (62.9 %), while inulin- and maltodextrin-dandelion powders had the lowest ability (21.5−24.5 %) (Table 3). Also, all samples were significantly (p<0.05) different. Since mainly polyphenols are responsible for antioxidant capacity, such results, which followed those determined for the encapsulation efficiency of polyphenols, are expected. Moreover, it can be observed that the samples prepared with the highest carrier content (10 % maltodextrin or inulin) were the ones with the lowest ability to encapsulate polyphenolics and retain antioxidant capacity. On the contrary, that was not the case with the sample prepared with the lowest carrier content (0.5 % guar gum), since this sample was not marked as the one with the highest encapsulation efficiency. Here, the group that contained 4 % carrier (gum arabic, pectin and alginate) stood out as the group that enabled the highest encapsulation efficiency of polyphenolics and retained antioxidant capacity, with pectin-dandelion powder being the most efficient. Thus, the results revealed a higher influence of the selected carrier type than of its content on the encapsulation parameters of dandelion powders. In addition, there are certain discrepancies among the studies relating the encapsulation efficiency to the carrier content. Arepally and Goswami (48) reported that the encapsulation efficiency of probiotics in spray drying increased after increasing the content of gum arabic. On the contrary, Şahin-Nadeem et al. (37) found that the encapsulation efficiency of sage TP notably decreased after increasing the carrier content, as in this study.
FTIR spectroscopy of dandelion powders produced by spray drying

The chemical properties of the plain carrier materials, plain dandelion powder and carrier-containing dandelion powders were investigated using ATR-FTIR spectroscopy (Fig. 2). The sample spectra predominantly exhibited bands ascribed to characteristic vibrations of carbohydrates at ~1020 cm⁻¹ (C-O stretching vibrations) and ~1410 cm⁻¹ (-CH2 bending vibrations), while the band at around 1600 cm⁻¹ was identified in the spectra of natural gums (49). Also, the spectra showed the presence of O-H groups (~3000-3600 cm⁻¹), and C-H vibrations were identified at ~2900 cm⁻¹ (16,19). It should be pointed out that during the preparation of the initial delivery solutions, no visible interactions (e.g. precipitation) between the ingredients were observed. This enabled satisfactory liquid flow and atomization during spray drying. Potential interactions between dandelion bioactive compounds and carriers may be observed as shifts in the position of the bands related to the O-H vibrations. After spray drying, the bands in the spectrum of guar gum-dandelion powder overlapped with the dandelion bands, indicating higher amounts of dandelion extract attached to the surface of the particles and consequently suggesting a lower protection of the extract when this carrier was applied. The primary reason for this might be that the content of guar gum (0.5 %) was the lowest among the employed carriers, which led to insufficient formation of a protective carrier layer around the extract. However, in some cases the use of a higher carrier content is impossible, due to the high viscosity of the resulting solutions, which cannot be processed by spray drying. In the case of pectin-dandelion powder, overlapping of the carrier bands around 1200−800 cm⁻¹ was identified, suggesting that not only the content, but also the carrier type, is critical for optimal protection of the encapsulated compound. On the other hand, the domination of the bands of gum arabic, inulin, maltodextrin and alginate in the spectra of the corresponding carrier-containing dandelion powders possibly indicated relatively good protection of dandelion polyphenols by these carriers (50). Furthermore, in order to obtain high encapsulation efficiency and protection of the active compounds, we selected carrier materials that are well established as suitable for this type of encapsulates and active ingredients. Chemical interactions between the carrier and the active compounds should be limited in order to facilitate all steps during encapsulation. Potential chemical interactions between the extract components and carriers are most probably molecular interactions, which may explain the changes in the positions of the O-H bands in the spectra of carrier-containing dandelion powders.

Fig. 2. Fourier-transform infrared spectra of: a) plain carriers, and b) plain dandelion powder and carrier-containing dandelion powders produced by spray drying

Release profiles of polyphenols and antioxidant capacity of dandelion powders produced by spray drying

Although the highest content of released TP (as gallic acid equivalents) was reported for plain dandelion powder (70.26 mg/g), this sample provided the fastest release of TP, where up to 100 % was liberated in the first 5 min in SGF (Fig. 3a). On the other hand, an extended release of TP was reported for the other carrier-containing dandelion powders, but in the end lower TP contents were released from these samples.
Gum arabic-, inulin- and maltodextrin-dandelion powders released the TP completely in SGF, while guar gum-, pectin- and alginate-dandelion powders continued to release TP in SIF. Guar gum-dandelion powder showed a gradual release of TP during the first 30 min in SGF, and the release continued in SIF up to 180 min of analysis (51.20 mg/g). However, alginate-dandelion powder showed a gradual release of TP during all 240 min of analysis (33.38 mg/g). A similar pattern was noticed in the release profiles of HCA and antioxidant capacity. After 5 min of analysis, plain dandelion powder showed the fastest release of HCA (w(CaffA)=63.45 mg/g) (Fig. 3b) and antioxidant capacity (n(Trolox)=0.189 mmol/g) (Fig. 3c). However, the released content of HCA and antioxidant capacity from plain dandelion powder was the highest compared to the others. On the contrary, the carrier-containing dandelion powders mostly enabled a longer, gradual release of polyphenols and antioxidant capacity, but ultimately a lower amount was released during the analysis. Such a reduced content of released polyphenols and antioxidant capacity in carrier-containing dandelion powders could be ascribed to the potential interactions between the dandelion polyphenols and the polysaccharides of the applied carriers.

When observing the carrier-containing dandelion powders, the fastest release of both HCA and antioxidant capacity was reported for inulin- and maltodextrin-dandelion powders. They were totally degraded after 10 min (maltodextrin-dandelion powder) and 20 min (inulin-dandelion powder) of analysis in SGF. However, they enabled a gradual release only for the first 5 min, after which the values decreased. Moreover, guar gum-, alginate- and pectin-dandelion powders mainly enabled a prolonged release of HCA and antioxidant capacity in both SGF and SIF. Guar gum-dandelion powder enabled a gradual release of HCA for the first 20 min (w(CaffA)=42.51 mg/g) in SGF (Fig. 3b). Regarding the release of antioxidant capacity, guar gum-dandelion powder allowed a gradual release for the first 30 min (n(Trolox)=0.148 mmol/g) in SGF. After transfer to SIF, guar gum-dandelion powder again released antioxidant capacity until 140 min of analysis (n(Trolox)=0.164 mmol/g), after which the antioxidant capacity values decreased (Fig. 3c). However, this sample enabled the highest content of released HCA and antioxidant capacity among all carrier-containing dandelion powders. Although the content of released HCA and antioxidant capacity from alginate-dandelion powder was lower than that of guar gum-dandelion powder, HCA was gradually released from this sample during all 240 min of analysis (the only exception at 60 min in SGF), and antioxidant capacity during 180 min, which is preferable overall. Moreover, it was noticed that the powders prepared with the highest carrier content (10 % inulin or maltodextrin) enabled the fastest release of bioactive compounds, accompanied by their lowest released content at the end of the analysis. On the contrary, the sample containing the lowest carrier content (0.5 % guar gum) released the highest content of bioactive compounds after 240 min of analysis. However, alginate-dandelion powder, prepared with 4 % carrier in the delivery solution, was characterized as the sample with the longest gradual release of the evaluated compounds.
Thus, considering the released content and a favourable gradual release pattern, both the carrier type and its content affected the release profiles of polyphenols and antioxidant capacity from the examined dandelion powders.

CONCLUSIONS

Dandelion (Taraxacum officinale L.) leaf extract has been successfully microencapsulated for the first time in different carrier materials using spray drying at a low inlet temperature (130 °C). The results showed that the carrier type had a greater influence on the wettability, bulk density, Carr index, Hausner ratio, particle size distribution width, and morphological and encapsulation properties, while the content of the applied carriers had a greater effect on the moisture content, solubility and d(0.5) of the evaluated dandelion powders. However, both the carrier type and its content affected the colour characteristics, hydroxycinnamic acid (HCA) content and release properties of the examined samples. Among the carrier-containing dandelion powders, guar gum-dandelion powder exhibited the highest solubility, the lowest total colour difference (a favourable trait) and the highest HCA content, but also the highest moisture content. Although inulin and maltodextrin are often used as carriers for spray drying of polyphenol-rich extracts, the obtained results implied that they were not suitable carriers for encapsulation of dandelion leaf extract (the lowest encapsulation efficiency and the fastest liberation of polyphenols and antioxidant capacity from the dandelion powders). On the other hand, they had the most desirable morphological properties and the lowest d(0.5) and moisture content, which explains their wide usage in spray drying. Although pectin is not often used as a delivery vehicle for drying-based encapsulation, here pectin-dandelion powder enabled the highest retention of polyphenols and antioxidant capacity. These results could open a new direction for the examination of pectin in spray drying of plant extracts. Furthermore, alginate-dandelion powder provided the longest gradual release of all evaluated compounds in simulated gastrointestinal conditions, which highlighted it as the sample with the best release properties for dandelion polyphenols. Also, this sample was characterized by the most acceptable bulk density, flow and cohesive properties. Depending on the final purpose of the powders, all employed carriers satisfy either the physicochemical or the encapsulation requirements, which ultimately justifies their use in this experiment. The results obtained from this study could be of high importance for scientific researchers in the field of preservation and microencapsulation of sensitive bioactive compounds. Since the produced dandelion powders are rich in polyphenols and are characterized by good physicochemical properties, these results could also be of great value to the functional food industry, where such powders could serve for the production and enrichment of various food products, especially instant powders. Although the study gave good initial results, further studies are required to determine the optimal carriers for the production of dandelion powders with the best physicochemical properties (with emphasis on enhancing flow properties) and even higher polyphenol loading capacities.

S. Karlović performed the colour analysis and validated the data. I. Špoljarić conducted the SEM analysis. G. Mršić enabled the use of the SEM equipment and interpreted the data. K. Žižek ran the statistical analysis. D.
Komes designed the research, supervised the whole process, critically revised the manuscript and approved the final version for submission.
Resistance Spot Welding Parameters Optimal Selection for Automotive Steel Plate

In welding workshops, multi-plate laps are usually joined by resistance spot welding, but the welding process involves many uncertainty factors. In order to meet production requirements with fewer trials and to solve the nugget-formation problem of multilayered panel laps, optimized welding parameters for a three-layer lap of low-carbon steel plates with thicknesses of 0.8 mm + 1.4 mm + 1.8 mm were determined in this paper. Experimental results show that a welding current of 8.8/11.1/9.6 kA, a welding time of 5/19/5 cy and an electrode pressure of 3 kN are suitable for the three-layer lap of low-carbon steel plates with thicknesses of 0.8 mm + 1.4 mm + 1.8 mm; multi-pulse welding parameters are appropriate for multilayered panel laps, and the nugget diameter at each layer can reach 6 mm.

Introduction

Resistance spot welding is the most common connection method in the automobile manufacturing industry [1]. There are more than 5,000 solder joints in a mini-car body, in areas such as the body floor, roof and front body [2-7]; different areas perform energy absorption in the event of a collision, connection, support and other functions [8,9]. Practical applications require welding steel plates of different thicknesses and materials [10-13]. Currently, two-layer and three-layer laps of different strengths and thicknesses occur at the same station, and it is difficult for one set of resistance spot welding process parameters to apply to all of them [14,15]. This article takes the 1st station of the front body area in the Wuling Hongguang S production line (this station welds the left front beam welded part and the front plate welding assembly) as the research object, providing new ideas and methods for the selection of resistance spot welding parameters for the different laps of the Wuling Hongguang S.

Materials and Methods

An Obara integrated resistance spot welding machine (model ST21, rated power 180 kVA) with a matching X-type manual welding gun (model UXH-C9625) was used for welding, as shown in Figure 1. The left front girder and dash panel parts are all made of Baosteel BLD low-carbon steel plate; Table 1 shows the chemical composition of the test steel plate. A total of 11 welding spots are lapped between the front girder and the dash panel; among them, 10 are two-layer laps of 0.8 mm and 1.2 mm thick plates, and one is a three-layer lap with plate thicknesses of 0.8 mm + 1.4 mm + 1.8 mm. As there are both two-layer and three-layer laps, the test proceeded as follows: first, test pieces of the same plate material and thickness were used to simulate the three-layer lap, and different welding parameters were tested on the three-layer pieces in order to obtain welding parameters for the different thicknesses and to define process parameters that meet the technical requirements; then the parameters were verified on the connection area of the left front girder and the dash panel to obtain the final process parameters. According to the leading plate thickness of the overlapping parts, the resistance spot welding process parameters for the test were selected with reference to the "Suggested Values of SAIC-GM-Wuling Resistance Spot Welding Parameters". Table 2 shows the initial process parameters for resistance spot welding.

The Influence of the Initial Welding Process Parameters on Welding

Three test pieces (0.8 mm + 1.4 mm + 1.8 mm) with the same plate material, thickness and process were used.
According to the welding process parameters in Table 2, the welding test was performed with a current of 8.5 kA, an electrode pressure of 3 kN, and a welding time of 9 cy. The heat-affected zone of the solder joint, measured with a vernier caliper, was 8.59 mm, as shown in Figure 2(a). After the welded test piece was torn, there was a welding nugget (size 4.86 mm) at the 0.8 mm + 1.4 mm interface, as shown in Figure 2(b); but no nugget formed at the 1.4 mm + 1.8 mm interface, and after tearing the solder joint the brittle fracture surface was granular, as shown in Figure 2(c). The test results show that the initial welding process parameters are not applicable to the actual lap configuration on site, and that the welding process parameters need to be adjusted one by one so that the welding of the three-layer plate meets the nugget requirements.

Welding Test of the Left Front Girder and Dash Panel Connection Area

The main factors influencing the nugget size of the test-piece solder joints are the welding current (kA), welding time (cy), electrode pressure (kN), and the number of pulses; the other parameters remain unchanged. A spherical electrode cap with an end-face diameter of φ7 mm was used, and the process parameters were compared one by one. The welding process parameters are shown in Table 3.

Experiment 2

While keeping the welding time and electrode pressure constant, the welding current was increased to 10.0 kA (Table 3). After tearing the welded test piece, there was a welding nugget at the 0.8 mm + 1.4 mm interface with a size of φ5.69 mm, as shown in Figure 3(a); but no nugget formed at the 1.4 mm + 1.8 mm interface, and after tearing the solder joint the brittle fracture surface was granular, as shown in Figure 3(b).

Experiment 3

The welding current was maintained at 10.0 kA, the welding time was increased to 14 cy, and the electrode pressure was kept at 3.0 kN (Table 3). After the welded test piece was torn, there was a welding nugget at the 0.8 mm + 1.4 mm interface, but still no nugget formed at the 1.4 mm + 1.8 mm interface; the brittle fracture of the welding spot remained granular, as shown in Figure 4(c). Because the thicknesses of the three layers differ considerably, and nuggets meeting the standard could not be obtained by gradually increasing any single welding parameter, we considered using two welding pulses, adding a preheating pulse before the main welding pulse.

Experiment 4

Based on the results of the first three tests, two pulses were selected for testing. First pulse: welding current 7.8 kA, welding time 7 cy; second pulse: welding current 10.0 kA, welding time 14 cy; electrode pressure unchanged at 3.0 kN (Table 3). After welding, the heat-affected zone of the solder joint became significantly larger, with a diameter of φ9.6 mm, as shown in Figure 5(a). After the welded test piece was torn, there was a welding nugget at the 0.8 mm + 1.4 mm interface with a size of φ6.05 mm, as shown in Figure 5(b); no nugget formed at the 1.4 mm + 1.8 mm interface, and the brittle fracture of the solder joint was granular, but the shear strength was significantly greater than with the single-pulse parameters, as shown in Figure 5(c).
In general, these process parameters do not meet the specified nugget size requirements.

Experiment 5

Based on test 4, the electrode pressure was increased to 3.8 kN while the welding current and welding time remained unchanged. The heat-affected zone of the solder joint was φ10.75 mm, at the upper limit of the standard, as shown in Figure 6(a). After tearing the solder joint, the nugget at the 0.8 mm + 1.4 mm interface increased to φ6.13 mm, as shown in Figure 6(b); but there was still no obvious nugget at the 1.4 mm + 1.8 mm interface, as shown in Figure 6(c). Increasing the welding current, welding time, or electrode pressure separately is thus unlikely to produce an obvious improvement, and carries a risk of overheating the heat-affected zone of the solder joint.

Experiment 6

On the basis of test 5, and taking into account the number of welds at this station and the required production tempo, we chose the soft specification commonly used in welding parameters and moved to three-pulse welding, adding a preheating stage before and a nugget-holding stage after the main welding pulse; the welding parameters are shown in Table 3. First pulse: welding current 8.8 kA, welding time 5 cy; second pulse: welding current 11.1 kA, welding time 19 cy; third pulse: welding current 9.6 kA, welding time 5 cy; electrode pressure maintained at 3.0 kN. After tearing the solder joints, a welding nugget was present at the 0.8 mm + 1.4 mm interface, with a measured size of φ6.19 mm, as shown in Figure 7(a); there was an obvious welding nugget at the 1.4 mm + 1.8 mm interface, with a measured size of φ6.82 mm, as shown in Figure 7(b); and there were obvious nuggets on both sides of the middle plate (1.4 mm thickness), while the outer plates (0.8 mm and 1.8 mm thickness) showed holes with significant tearing at the solder joints, as shown in Figure 7(c). With the welding parameters of test 6, the quality of the solder-joint nuggets was acceptable.

The actual lap conditions of the real parts are more complex than those of the welding test pieces. The parameters of test 6 were therefore used to carry out a welding experiment on a manufacturing vehicle, and a hammer and chisel were used at the three-layer welding point (0.8 mm + 1.4 mm + 1.8 mm) for a non-destructive test: with the chisel lifted 30° up and down, none of the solder joints came loose, and the sheet-metal surface was free from burrs and cracks. The quality of the solder joints on the manufacturing vehicle was acceptable, as shown in Figure 8. According to the resistance spot welding quality requirements of SAIC-GM-Wuling, fully destructive testing of the body joints must be carried out on a regular basis. The joints in the front body area of this model were subjected to a fully destructive test; there were solder nuggets on both sides of the welded joints, cracks appeared on both sides of the sheet, and the quality of the solder joints was acceptable, as shown in Figure 9.

Conclusion

Resistance spot weld nugget dimensions were obtained with 1, 2, and 3 pulses. Considering all the results, the three-pulse welding parameters (welding current 8.8/11.1/9.6 kA, welding time 5/19/5 cy, electrode pressure 3 kN) gave a nugget diameter of ≥φ6 mm between the overlapping plates at each interface.
These parameters are suitable for lap welding of plates with thicknesses of 0.8 mm + 1.4 mm + 1.8 mm. Taking the lap joint of the left front beam and the front panel as an example, this paper shows that when a two-layer lap and a three-layer lap occur at the same station, the welding process parameters should be selected preferentially for the three-layer lap; this solves the nugget-formation problem at the solder joints and provides a new idea and method for other automobile manufacturers.
One-Page Multimedia Interactive Map

The relevance of local knowledge in cultural heritage is by now acknowledged. It helps to shape many community-based projects by identifying the material to be digitally maintained in multimedia collections provided by communities of volunteers rather than by for-profit businesses or government entities. Considering that searching and browsing texts, images, video, and 3D models related to places is more essential than a simple text-based search, an interactive multimedia map was implemented in this study. The map, which is loaded on a single HyperText Markup Language (HTML) page using AJAX (Asynchronous JavaScript and XML), with a client-side control mechanism utilising jQuery components that are both freely available and ad hoc developed, is updated according to user interaction. To simplify the publication of geo-referenced information, the application stores all the data in a Geographic JavaScript Object Notation (GeoJSON) file rather than in a database. The multimedia contents associated with the selected Points of Interest (PoIs) can be selected through text search and list browsing, by viewing their previews one by one in sequence, or all together in a scrolling window (the "Table", "Folder", and "Tile" functions, respectively). PoIs, visualised on the map with multi-shape markers using a set of unambiguous colours, can be filtered through their categories and types, accessibility status, and timeline, thus improving the system's usability. The map functions are illustrated using data collected in a Comenius project. Notes on the application software and architecture are also presented in this paper.

Introduction

The value and significance of local knowledge in cultural heritage, and the importance of preserving it for future generations, are already recognised. Consequently, the process of involving communities has matured gradually over time, overcoming the unwillingness of knowledge holders to reveal information and stories. In this context, maps play a central role in gathering and sharing knowledge, as illustrated, for example, by the recent crowdsourcing initiative to create the Philippines Heritage Map, powered by Arches and maintained by local stakeholders, government units, heritage practitioners, and volunteers [1]. The means of publishing geo-information may differ. They include the following:

• Individuals using easy-to-use internet tools to construct sites that are almost entirely populated by user-generated content, without restriction on the nature of the content [2].
• The public using services, such as Wikimapia and Flickr, that allow citizens to provide descriptions of Points of Interest (PoIs) together with geographic coordinates, an activity described by the term "volunteered geographic information" (VGI), coined in 2007 by Goodchild [3].

Largely missing are "the mechanisms needed to ensure quality, to detect and remove errors, and to build the same level of trust and assurance that national mapping agencies have traditionally enjoyed" [3]. Moreover, it has been estimated that more than half of the cultured population do not have a basic ability with maps, suggesting that interactive maps incorporating multimedia [4] would improve the perception of the information. To date, however, interactive maps do not use multimedia content, or only use it as an aid to understanding the map content, with poor attention to the use of colours and icons in the markers, as in the following examples.
Interactive Map Case Studies

Different approaches to mapping PoIs are illustrated below through an analysis of the interfaces and architectures of the interactive maps of Cyprus, Malta, and Japan. A comparative table of their features, including our map, is also presented.

The Interactive Map of Cyprus

Interface. The interactive map of Cyprus (Figure 1) starts without displaying markers; the user selects/deselects PoI categories to show on the map by clicking on their markers in a list in the "PoIs Categories" window (located on the left side of the window), which is dedicated to user interface interactions. The markers all have the same shape (a shield) with different background colours and different foreground images. The colours are not very useful because different categories sometimes have the same background colour (for example, violet for the "Landmarks", "Museums", "Theatres", and "Info" types). Overlapping markers are replaced with a special circular marker whose colour and diameter are proportional to the number of overlapped PoIs. Clicking on a marker in the map opens a callout with the PoI's name, address, and category; there are five function keys:

• "From Here" and "To Here" replace the "PoIs Categories" window with the "Navigation" window, allowing the user to receive directions to/from the PoI and another PoI or an address.
• "Share" opens a pop-up window to send the PoI's location and a message between two emails.
• "Report" sends a comment and the user's email address to the developer.
• "More" opens another window with additional info (such as a telephone number, if available) and a small zoomable map with the PoI's location.

The other user interface interaction windows are "Search", to show the PoI selected in the text search box (it works irrespective of the activated categories); "Layers", to select the base map (Geomatic/Satellite), the language (GR/EN), and overlays (Nicosia Bike Routes); "Print"; and "Clear Map".
Architecture. The interactive map is a rich web-based Geographic Information System (GIS) platform (gmapi.js). Based on the OpenLayers application programming interface (API), it uses "iframes" to load content from different servers. Today, most interactive maps use iframes (to display a web page within a web page), and most implementations require the use of JavaScript, Cascading Style Sheets (CSS), and HTML5 to build a responsive iframes-based website. However, there are disadvantages to using iframes, for example:

• iframes can make the development of a website complicated.
• It is easy to create badly constructed websites using iframes. The most common mistake is including a link that creates duplicate web pages displayed within an iframe.
• Search engines that reference a web page only give the address of that specific document. This means that search engines might link directly to a page that was intended to be displayed within a frameset.
• Users have become so familiar with normal navigation using tables, the back button, and so on, that navigating through a site that uses iframes can be problematic.
• The use of too many iframes can put a high workload on the server.
The main advantages of HTML5 iframes are the possibility of viewing multiple documents within a single web page and the ability to load pages from different servers in a single frameset. This solution, although evolving with the adoption of new attributes and the deprecation of others, might hinder the evolution and upgrades of the application and its responsiveness. As an alternative, our application uses jQuery with CSS3 for the user interface (for example, to simplify the manipulation of elements such as the navigation menu; see Section 4.1), and AJAX (Asynchronous JavaScript and XML) to expedite user interaction (for example, to update the web page without reloading it).

The Interactive Map of Malta

Interface. The interactive map of Malta (Figure 2) produces optimal results thanks to its good design. It starts without markers; the user selects/deselects PoI types (grouped by categories) to show on the map by checking items in a list, with no connection to the coloured markers on the map. The markers all have the same shape (an upturned and tilted red drop) with a coloured core. Sometimes different types in the same category, and in different categories, have the same colour in the core (for example, black for "Travel Agents" and "Fortification and Towers").
Moreover, the list of categories and types is only given alphabetically, making it impossible to associate the markers with the types they represent. Clicking on a marker in the map opens a callout with the name, image, and short description of the PoI, with a "Read more" link (all links load a page with a zooming satellite map without contents). There is also a non-functional "Get Directions" link. Twelve languages are available for the interface, but categories and types often remain in English.

Architecture. The interactive map uses a "REpresentational State Transfer" (REST) web service and all the features that the ".net" framework provides. REST uses the HyperText Transfer Protocol (HTTP) for all four CRUD (Create/Read/Update/Delete) operations; consequently, this application runs on almost any "online" device or application, but the Malta application is not a responsive web page. By contrast, our application is not only conformant to the REST constraints (RESTful) but also responsive, having adopted the leading framework in this category, jQuery, which provides not only a uniform Ajax API but also a large number of cross-browser helper functions.

Responsive website design is a well-known approach to website design that provides an optimal viewing experience to users while browsing, with easy reading, navigation, simple scrolling, minimal browser resizing, and cross-device compatibility. Website responsiveness has become a prominent search engine-ranking factor and significantly influences Google search results. The primary goal of responsive web design is to provide a consistent browsing experience to everyone, regardless of what sort of device is being used. It fluidly adapts to almost all resolutions and screen sizes and works smoothly on each device. With a seamless experience, content and media are easily digestible while browsing on multiple devices, including iPhones, smartphones, laptops, and desktops.

Responsive websites are becoming the future of website design, as they eliminate the need to have multiple designs for desktop, mobile, and tablets. Moreover, compared with mobile-only websites, they are more suitable for the application of "Search Engine Optimisation" (SEO) methodology: strategies, techniques, and tactics used to increase the number of visitors to a website by obtaining a high-ranking placement in the results page of search engines, including Google, Bing, and Yahoo.
The Interactive Map of Japan

Interface. The interactive map of Japan (Figure 3) provides five categories, using markers with the same shape and different colours. A clustering marker is not provided, and the callout contains only the PoI's name. The map can be visualised by region.

Architecture. The interactive map uses a web service to retrieve a "comma separated value" (CSV) file available at a given Uniform Resource Locator (URL), using an HTTP GET request. It parses the data in the file and then visualises them on the map; some URL parameters customise and filter the CSV content to be returned.

However, this solution can be limiting. When dealing with large volumes of data, or with data that contain hierarchical information (e.g., multiple media for a single PoI), the "JavaScript Object Notation" (JSON) data format is used instead of CSV. Moreover, most modern APIs are RESTful and therefore natively support JSON input and output. Several database technologies support it, and it is significantly easier to use in most programming languages as well.
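To make the difference concrete, the snippet below contrasts a flat CSV record with its nested JSON equivalent for a PoI carrying several media items; the field names and values are purely illustrative and are not taken from any of the services above.

```javascript
// Illustrative comparison (hypothetical fields): a CSV row must flatten a
// one-to-many relation into a delimited string, e.g.
//
//   id,name,lat,lon,media
//   12,"Distilleria Cassano",40.798,16.921,"photo1.jpg;tour1.html"
//
// whereas JSON represents the hierarchy directly:
var poi = {
  id: 12,
  name: "Distilleria Cassano",
  lat: 40.798,                      // approximate, illustrative coordinates
  lon: 16.921,
  media: [
    { category: "image", url: "photo1.jpg" },
    { category: "3d",    url: "tour1.html" }
  ]
};
```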
Comparison of Features

The applications discussed above have different interfaces and a similar architecture (JSON, REST API, AJAX, jQuery, CSS), but they are not as functionally complete as our interactive map (Table 1), which is extensively illustrated in Sections 4 and 5. In a traditional web application, every time the application calls the server, the server renders a new HyperText Markup Language (HTML) page, triggering a page refresh in the browser. This does not happen in our application, which shows basic contents in an initial page that is static for SEO reasons. Afterwards, all UI (User Interface) interactions occur on the client side, through jQuery and CSS; once the initial page is loaded, the server acts purely as a service layer, through AJAX calls that return data (not markup) in a particular JSON format called GeoJSON, dynamically updating the map without reloading it.

Sending the application data as GeoJSON creates a separation between the presentation (HTML5 markup and CSS3) and the application logic (AJAX requests and GeoJSON responses), making it easier to design and evolve each layer. In a well-architected "Single-Page Application" (SPA; RESTful), we can change the HTML5 markup and CSS3 style without modifying the code that implements the application logic.
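The data flow just described can be sketched in a few lines of jQuery. This is a minimal illustration rather than the project's actual code: the file name and the addMarker helper are hypothetical.

```javascript
// After the initial page load, fetch GeoJSON and redraw only the marker
// layer; the browser never performs a full page refresh.
$.getJSON("pois.geojson", function (data) {
  data.features.forEach(function (feature) {
    var c = feature.geometry.coordinates;             // GeoJSON order: [lon, lat]
    addMarker({ lat: c[1], lng: c[0] }, feature.properties); // hypothetical helper
  });
});
```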
Moreover, in none of the maps in the case studies is it easy to recognise the categories and types of PoIs, because they adopt a single marker shape and sometimes use the same colours for different categories/types.

The above examples, mainly centred on PoI mapping, show the need to improve the interface in order to give users better instruments to find results according to their needs. With this objective, a procedure has been designed to simplify the presentation of Cultural Heritage (CH) contents belonging to geo-referenced PoIs through a multimedia map available on the Web with advanced functionalities. It was developed in the framework of the Must See Advisor (Mu.S.A.) project, which aims to give visibility to communities by valuing knowledge from selected stakeholders.

Knowledge Collecting

The Mu.S.A. project began with the aim of giving visibility to lesser-known sites by valuing knowledge from selected communities such as municipality teams, cultural associations, and secondary schools. Applications have been developed for Tirana in Albania [8] and Syracuse in Italy [9], progressively strengthening the interconnections between knowledge, expressed through multimedia objects, and places. More recently, an experimental activity was carried out to test the application in collaboration with the "Ricciotto Canudo" secondary school in the city of Gioia del Colle (a small town in the hinterland of southern Italy).

The students working in "The European Traveller Guide" Comenius project aim to develop, among other things, an awareness of the township's heritage. Engagement of the youth is necessary to ensure that these residents will be committed to the township in the future; as children become more involved in their community, their parents are also enticed to stay involved [10].

The students' activity involved collecting, for each PoI selected in the Comenius project, a number of attributes: name, location (longitude and latitude; address), and category, together with a thumbnail of a representative photo, physical accessibility status, century, age, and short and extended descriptions. In the second phase, the multimedia documents pertaining to each PoI were collected, and the following data stored in another Excel file: reference PoI, document name, description, category (sheet, image, video, or 3D multimedia object), source and its URL, and a preview image. These data are used to generate the one-page multimedia interactive map (Figure 4) through an authorware environment.

Among others, three industrial archaeology PoIs were located, with the aim of overcoming the often negative associations of neglected or abandoned industrial sites, which are frequently demolished, resulting in the loss of an important piece of our history.
The examples reported in the figures refer to "Distilleria Cassano", one of the most important monuments of industrial archaeology in Apulia, included on the list of monumental and environmental heritage. Since its renovation, it has hosted events. For that PoI, the interactive map has a link to a 3D multimedia object: a virtual tour of the building (Figure 5).

Multimedia contents (filterable by category: gallery, sheet, video, and 3D multimedia object) are accessible through their previews, managed by the "Folder", "Tile", and "Table" functions provided in the "Menu" (for all the PoIs) and in the PoI's "Callout" (Figure 6).

The "Menu"

The "Menu" component has the following functions:

• "Best site", to locate the most interesting site according to its ranking (in development).
• "Satellite", to switch the map to/from the earth view.
• "Slideshow", to switch on/off a moving set of one image per PoI. This function can be activated using the command "?photo" in the URL. It is useful for public installations (for example, the installation in Syracuse [9], where a touchscreen totem is used) to capture the user's attention.
• "Time slider", to filter markers according to the selected century.
• "Folder", to show in a window all previews one by one in sequence (manually/automatically); a toolbar allows them to be filtered by category (Figure 7, on the left). All the image previews can be visualised in an "Image viewer" that allows the user to pan and zoom the image, like using a magnifying glass, thereby creating a sense of immersion and stimulating curiosity. An overview panel gives the user full control over details (Figure 7, on the right).
• "Table", to list in a window all items alphabetically ordered by the field selected by the user. A cascade list allows them to be filtered by category. A search function is also provided (Figure 8, on the left).
• "Tile", to display all items in a scrolling window; a toolbar allows them to be filtered by category (Figure 8, on the right).
The "Menu" The "Menu" component has the following functions: • "Best site", to locate the most interesting site according to its ranking (in development). • "Satellite", to switch the map to/from the earth view.• "Slideshow", to switch on/off a moving set of one image/PoI.This function can be activated using the command "?photo" in the URL.It is useful for public installations (as, for example, the installation in Syracuse [9], where a touchscreen totem is used) to capture the user's attention. • "Time slider", to filter markers according to the selected century.• "Folder", to show in a window all previews one by one in sequence (manually/automatically)a toolbar allows them to be filtered by category (Figure 7, on the left).All the previews of images could be visualised in an "Image viewer" that allows the user to pan and zoom the image-like using a magnifying glass-thereby creating a sense of immersion and stimulating curiosity.An overview panel gives the user full control over details (Figure 7, on the right). • "Table ", to list in a window all items alphabetically ordered based on the field selected by the user.A cascade list allows them to be filtered by category.A search function is also provided (Figure 8, on the left).• "Tile", to display in a scrolling window all items-a toolbar allows them to be filtered by category (Figure 8, on the right).The "Menu" component has the following functions: • "Best site", to locate the most interesting site according to its ranking (in development). • "Satellite", to switch the map to/from the earth view.• "Slideshow", to switch on/off a moving set of one image/PoI.This function can be activated using the command "?photo" in the URL.It is useful for public installations (as, for example, the installation in Syracuse [9], where a touchscreen totem is used) to capture the user's attention. • "Time slider", to filter markers according to the selected century.• "Folder", to show in a window all previews one by one in sequence (manually/automatically)a toolbar allows them to be filtered by category (Figure 7, on the left).All the previews of images could be visualised in an "Image viewer" that allows the user to pan and zoom the image-like using a magnifying glass-thereby creating a sense of immersion and stimulating curiosity.An overview panel gives the user full control over details (Figure 7, on the right).• "Table ", to list in a window all items alphabetically ordered based on the field selected by the user.A cascade list allows them to be filtered by category.A search function is also provided (Figure 8, on the left).• "Tile", to display in a scrolling window all items-a toolbar allows them to be filtered by category (Figure 8, on the right).Two function icons are provided on each preview, giving users the possibility to locate the reference PoI (available only in the "Folder" and "Tile" windows called by the "Menu") or to activate in a new window the multimedia connected to the selected preview (for example: "Image viewer" for images-Figure 7, on the right). 
The "Callout" The callout component, which can be opened by clicking on a PoI box in the sidebar or on a marker on the map, shows name, address, and the thumbnail of the PoI.It also shows function icons, a short description, the building period, and status icons.A "more" link allows users to see the extended description (Figure 9).The function icons "Folder", "Tile", and "Table " (as in the "Menu" component) apply only to the multimedia content belonging to the PoI under examination (the locate function icon is not available). To allow map embedding in a page related to a specific PoI, this function can be activated using the command "?n=#&callout=on" in the URL, where # is the PoI code.It is also possible to show the map assigning the zoom factor using the command "?zoom=#", centred on a point with given coordinates ("lat=#&lon=#"), or given code, as previously viewed ("?n=#").On the bottom, two status icons are reported; they show the physical accessibility status of that PoI (through three emoticons: easy, uneasy, and restricted). ISPRS Int.J. Geo-Inf.2017, 6, 34 9 of 16 Two function icons are provided on each preview, giving users the possibility to locate the reference PoI (available only in the "Folder" and "Tile" windows called by the "Menu") or to activate in a new window the multimedia connected to the selected preview (for example: "Image viewer" for images-Figure 7, on the right). The "Callout" The callout component, which can be opened by clicking on a PoI box in the sidebar or on a marker on the map, shows name, address, and the thumbnail of the PoI.It also shows function icons, a short description, the building period, and status icons.A "more" link allows users to see the extended description (Figure 9).The function icons "Folder", "Tile", and "Table " (as in the "Menu" component) apply only to the multimedia content belonging to the PoI under examination (the locate function icon is not available). To allow map embedding in a page related to a specific PoI, this function can be activated using the command "?n=#&callout=on" in the URL, where # is the PoI code.It is also possible to show the map assigning the zoom factor using the command "?zoom=#", centred on a point with given coordinates ("lat=#&lon=#"), or given code, as previously viewed ("?n=#").On the bottom, two status icons are reported; they show the physical accessibility status of that PoI (through three emoticons: easy, uneasy, and restricted).The interactive map (improved version for Albania)-the "Callout" with the extended description (centre).In the "Menu" (opened on the left) the function "Satellite" and "Slideshow" are active, then the map is in earth view and the slideshow runs on bottom.The map shows only the PoI.s belonging to the "Architectural ensemble", the only type switched-on in the sidebar (right). The "Sidebar" The sidebar has two sections: a search box with a dynamic legend (Figure 10, on the left and centre) and a filter panel (Figure 10, on the right). 
The interactive map (improved version for Albania): the "Callout" with the extended description (centre). In the "Menu" (opened on the left), the "Satellite" and "Slideshow" functions are active, so the map is in earth view and the slideshow runs along the bottom. The map shows only the PoIs belonging to "Architectural ensemble", the only type switched on in the sidebar (right).

The "Sidebar"

The sidebar has two sections: a search box with a dynamic legend (Figure 10, on the left and centre) and a filter panel (Figure 10, on the right).

Search Box with Dynamic Legend

The search box allows the user to find PoIs according to the typed letters, while updating the dynamic legend contents accordingly (Figure 10, on the left). The dynamic legend displays an info box of PoIs listed by category and, where available, type (as for the "Basilica" and "Church" types in the "Architectural monuments" category; Figure 10, in the centre), using colours unambiguous to both colour-blind and non-colour-blind people [11]. These constraints limit the total number of different categories to five. However, this is not a problem, because having more than five colours at a time causes colour confusion instead of reducing ambiguity [12].
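One way to satisfy this constraint is to draw the five category colours from a colour-universal-design palette such as the Okabe-Ito set; the pairing of categories and hues below is purely illustrative, and some of the category names are hypothetical.

```javascript
// Five hues from the Okabe-Ito palette, distinguishable by both
// colour-blind and non-colour-blind viewers. The category-to-colour
// assignment is an illustrative assumption.
var CATEGORY_COLOURS = {
  "Architectural monuments": "#E69F00",   // orange
  "Architectural ensemble":  "#56B4E9",   // sky blue
  "Industrial archaeology":  "#009E73",   // bluish green
  "Parks and nature":        "#0072B2",   // blue (hypothetical category)
  "Events":                  "#D55E00"    // vermillion (hypothetical category)
};
```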
Each category/type box contains data according to the map status (i.e., its contents can change from time to time according to the search/filter results):

• On the left, a "marker button" filters all the markers for that category on the map; subsequently, the box colour changes to a grey/default colour.
• In the centre, the "category/type name button" shows/hides its group of PoI info boxes (containing a thumbnail image and the name and address of each PoI).
• A meter on the right shows the number of PoIs available at that moment; the search for "nic" produced one "Architectural ensemble" and two "Architectural monument" PoIs, among them the PoI named "Church of Saint Nicolas".

Filter Panel

The filter panel allows the user to select the following parameters:

• "Data" (default field for text search: title; additional fields: address and description).
• "Period" (default: no filter; any period registered in the database is available).

The multimedia category can be selected directly through a toolbar (in the "Folder" and "Tile" windows) or through a menu (in the "Table" window).

The Architecture

Every software system has its own architecture, but not every software architecture is defined. This is what can make the difference as to whether a system works and how it is received by stakeholders. Sometimes technology decisions mistakenly shape the architecture. The application must support a variety of different clients, including desktop browsers, mobile browsers, and native mobile applications. It might also integrate with other applications via either web services or a message broker. Large, complex software goes through a series of deconstructions at different levels.

At the higher (abstract) level, the architectural pattern used successfully is "Model View Controller" (MVC) [13], which is concerned with the relationships and collaborations among an application's subsystems. It provides a strategy for large-scale components and the global properties and mechanisms of a system. Specifically, our application is an example of a "Single-Page Application" (SPA), which is a different way of building HTML5 applications from traditional web page development.
In traditional web applications, the client initiates communication with the server by requesting a page; the server then processes the request and sends the HTML of the page to the client. In subsequent interactions with the page, for example when the user navigates to a link or submits a form with data, a new request is sent to the server and the flow starts again: the server processes the request and sends a new page to the browser in response to the new action requested by the client. In Single-Page Applications (SPAs), the entire page is usually loaded in the browser after the initial request, but subsequent interactions take place through AJAX (Asynchronous JavaScript and XML or JSON, often used in the AJAJ variant) requests. This means that the browser has to update only the portion of the page that has changed; there is no need to reload the entire page.

At the lower level, various schemes are provided for refining and building smaller subsystems. The MVC pattern defines not only the roles (Model, View, or Controller) that objects play in the application, but also the way objects communicate with each other. The benefits of adopting this pattern are numerous. An application implemented using MVC can be RESTful or not; this application was designed to be RESTful. Many objects in such an application tend to be more reusable, and their interfaces tend to be better defined. Furthermore, the application is more easily extensible than other applications. The multimedia interactive map has a layered architecture (three layers) and consists of different types of components (Figure 11, from the bottom upwards).
Technically, SPA design and the initial development of SPAs were complex. It was necessary to find solutions to overcome long waiting times, allowing SPAs to grow to an optimal size. In this approach, the system client consists of HTML, jQuery, and CSS files that are partially or entirely rendered by the server and sent to the web browser in real time. In this SPA, after the first page loads, all interaction with the server happens through AJAX calls. These AJAX calls return data in JSON format, specifically in the GeoJSON data structure.

The application uses the JSON data to update the page dynamically, without reloading it, and separates the user interface (UI) library from the data (GeoJSON). It communicates with the server only through the JSON REST API (sending/receiving JSON using AJAX), allowing both parts to be developed and tested independently. This separation makes it easier to design and evolve each layer.

In addition, the application was designed to fit into an ordinary processing environment that includes structured programs and libraries (utilities, widgets, and plug-ins), the most important of which is the Google Maps API [14].
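The Google Maps Data layer can ingest GeoJSON directly, which keeps the integration described above to a few lines, as sketched below. The file name and the openCallout helper are illustrative; the API calls themselves (loadGeoJson and the data-layer click event) are part of the public Google Maps JavaScript API.

```javascript
// Create the map, load the PoIs from GeoJSON, and open a callout when a
// feature is clicked.
var map = new google.maps.Map(document.getElementById("map"), {
  center: { lat: 40.798, lng: 16.921 },   // Gioia del Colle, approximate
  zoom: 14
});
map.data.loadGeoJson("pois.geojson");     // illustrative file name
map.data.addListener("click", function (event) {
  openCallout(                            // hypothetical helper
    event.feature.getProperty("name"),
    event.feature.getProperty("address"));
});
```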
The SPA Approach (at the Higher Level)

SPAs are web apps that load a single HTML page and dynamically update that page as the user interacts with the app. Instead of spreading the functionality of the multimedia interactive map across a collection of separate web pages with hyperlinks between them, it is possible to define a single root page on which users land and which they never leave as long as they are using the application. Client-side logic switches out the data and chunks of content within that page, allowing users to navigate logical screens without leaving the page. This means that users never see a full-page refresh while using the application; instead, they see a change in a portion of the screen based on their interaction, and those changes can be made in a more fluid way, with transitions that enhance the user experience.

SPAs are fast, as most resources (HTML pages, CSS files, and scripts) are only loaded once throughout the life span of the application and only data are transmitted back and forth, which also reduces bandwidth usage. SPAs can use caching and local storage effectively, and it is easy to scale and cache resources. SPAs operate and feel more like an application than a web page. A major architectural advantage of an SPA is the huge reduction in the "chattiness" of the application; it was designed to handle most processing on the client and to reduce the number of requests to the server. In fact, an SPA makes it possible to do entirely offline processing, which is significant in this context. A "chatty" application has, as an important performance characteristic, a large number of remote requests and corresponding replies ("application turns" or "app turns" in Transaction Trace terminology). These are also often referred to as network round-trips, especially in developer documentation. The negative performance impact of these application turns increases with path latency, making remote access a challenge for chatty applications. Note that chattiness is not inherently bad; it only becomes a performance problem when coupled with network latency. Other advantages include the following:

• "Easier state tracking": an SPA does not need to use cookies, form submission, local storage, session storage, etc. to remember state between two page loads.
• SPA boilerplate content, which is on every page (header, footer, logo, copyright banner, etc.), only loads once per typical browser session. The application runs without the additional latency caused by switching "pages".

SPAs are distinguished by their ability to redraw any part of the UI without requiring a server round-trip to retrieve HTML. This is achieved by separating the data from the presentation of data via a model layer that handles data and a view layer that reads from the models.
If an SPA grows to a significant size, loading the entire application on page load may be detrimental to the experience, because this is akin to loading all pages of a website when only the home page was requested. The main advantage of this solution is that each component of the template may be included dynamically, based on the inclusion and substitution of template fragments. Page content is downloaded first, along with any CSS and jQuery that may be required for its initial display, ensuring that the user gets the quickest apparent response during page loading. Any dynamic features that require the page to complete loading before being used are initially disabled, and then enabled only after the page has loaded. This causes the jQuery to be loaded after the page contents, which improves the overall appearance of the page load.

The view layer is the most complex part of modern SPAs. After all, this is the whole point of an SPA: to make it easy to have rich and interactive views. Views have several tasks to perform, condensed into the sketch below:

• Rendering a template. A method is needed that takes data and maps/outputs it as HTML5.
• Updating views in response to change events. When model data change, the related view(s) must be updated to reflect the new data.
• Binding behaviour to HTML5 via event handlers. When the user interacts with the view HTML5, a method that triggers behaviour (code) is needed.

For the implementation of the editing environment, characterised by a high degree of interaction, we adopted the jQuery UI library, a jQuery-based library that provides a high level of abstraction for programming interaction and animation, advanced graphic effects, and customisable event handling.
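The three view tasks above reduce to a small sketch: a render function, an update function called on model changes, and a delegated event handler. The markup and helper names are illustrative, not the application's actual code.

```javascript
// 1. Render a template: map model data to HTML5.
function renderPoi(poi) {
  return "<div class='poi-box'><h3>" + poi.name + "</h3>" +
         "<p>" + poi.address + "</p></div>";
}
// 2. Update the view when the model changes.
function updateSidebar(pois) {
  $("#sidebar").html(pois.map(renderPoi).join(""));
}
// 3. Bind behaviour via a delegated event handler.
$("#sidebar").on("click", ".poi-box", function () {
  openCallout($(this).find("h3").text());   // hypothetical helper
});
```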
The adoption of the REST architectural style is aimed at obtaining a "stateless" solution, in which each request from a client to the server contains all of the information necessary to process the request; the server does not store any session data on behalf of the client, so the client must store all session data. Indeed, the reliance of an SPA on REST is perhaps its most immediately apparent characteristic. Furthermore, an SPA is easy to debug with Google Chrome, which allows monitoring of network operations and investigation of page elements and the data associated with them. All the major mobile platforms, including Apple's iOS, Google's Android, and Palm's WebOS, use similar WebKit-based browsers; hence, technologies like HTML5 and CSS3 will continue to be improved and supported. The application works on the majority of mobile platforms and on any HTML5-compliant web browser. A browser compatibility test saw the application rendering without any errors on Google Chrome, the only web browser on which we tested the application because of its widespread usage.

HTML5 enables developers to write truly "responsive" applications that resize automatically according to the browser and the screen size, automatically detecting and adapting the UI to the running platform and the orientation of the device. The combination of SPA and the "responsive web design" [16] pattern appears to have established itself as a significant trend for efficient web application development. An SPA's inherent separation of UI and application logic creates an opportunity to share common application logic and testing assets through a shared RESTful API. This allows browser-based web applications and native mobile applications to share the same application code on the backend.

The "GeoJSON" Data Type

To simplify the publication of geo-referenced information, the application stores all the data in a GeoJSON (Geographic JavaScript Object Notation) file rather than in a database, which would require the installation of a DBMS if one were not already present on the server hosting the system. GeoJSON is a specialisation of the JSON data interchange format that can manage geo-referenced data using a subset of instructions provided by the JavaScript language.

GeoJSON supports cartographic visualisations by facilitating the display of lines, polygons, and other geometry objects. Developed as an open standard and widely adopted, it can be used as the storage format for map layers.

There are several development platforms aimed at representing complex datasets; the best known is probably GitHub, a web-based Git repository hosting service used by over 12 million people, which renders any file with the ".geojson" suffix as a map.

GeoJSON gives users a number of advantages:

• It standardises the method used to pass information. Multiple vendors have subsequently adopted this method, which enables APIs that all operate in the same manner. This allows us to discard the Google Maps APIs and switch to OpenLayers or future GIS platforms; the operation would be the same, thereby preserving the integrity of the data.
• Relative to the client-server computing model, the backend can serve multiple clients in the same manner or, conversely, the client can render maps regardless of how the backend is implemented, as long as it uses GeoJSON. This enables the client to be independent of the map server and simply become a consumer of GeoJSON, irrespective of how it was generated.
• It can be used with modern programming languages and is readily available to run using JavaScript without further parsing. The geo-data are easily browsable, because they form a regular JavaScript object, which facilitates further processing.
• It is easy to read and write. A complete GeoJSON data structure is always an object (in JSON terms). In GeoJSON, an object consists of a collection of name/value pairs, also called members.

The Client Logic of Our Application

The multimedia interactive map uses AJAX, with a client-side control mechanism that utilises jQuery to interact with users and controls. The advantage of this control mechanism is that a specific section or a single object can be updated without reloading the entire page, preventing the invocation of unnecessary page life cycle events, with the following effects:

• Reduction in network latency to minimise the response time.
• Web applications with the feel of desktop applications.
• Updating of data behind the scenes.

Most current SPAs still use the term "Controller layer". However, we believe that SPAs require a better term, because they have more complex state transitions than a server-side application. We clearly need a model to hold data and a view to deal with UI changes, but the glue layer consists of several independent problems: global state changes, like going offline in a real-time application, delayed AJAX results returned at some point by backend operations, and more. The solutions used each have their own terms, such as event bindings, change events, and initialisers.

The Google Maps API is integrated in the application for rendering geospatial data within a web browser and for accessing rich mapping features. It is optimised for smartphones, with a set of APIs developed by Google that allow communication with Google services and their integration with numerous other services. Another advantage is that not all data have to be transferred: only the data one wishes to transfer are transferred; the less data transferred, the faster the transfer and the lower the likelihood of breakdown. The Client logic component automatically synchronises data from the UI (view) with the JSON objects (model) through two-way data binding: from the server (server storage) or from the client (HTML storage, i.e., client-side storage). jQuery fits well in this model because the whole end-user interaction experience is handled with logic on the client as well as on the server, in varying percentages per page.

There are several reasons to use client-side storage (a sketch of this pattern follows the list):

• The multimedia interactive map is available when the user is offline, possibly synchronising data back once the network is reconnected.
• It is a performance booster: it is possible to show a large corpus of data as soon as the user clicks on the application objects (slideshow, tile, table, etc.), instead of waiting for it to download again.
• It is an easy programming model, with no server infrastructure required. Of course, the data are more vulnerable and the user cannot access them from multiple clients; therefore, it should only be used for non-critical data, particularly cached versions of data that are also "in the cloud".
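A minimal sketch of this client-side caching pattern, assuming a hypothetical `poi-cache` storage key and the same illustrative `/api/pois` endpoint as above:

```js
// Illustrative localStorage cache for non-critical, cloud-backed data.
const KEY = 'poi-cache';

function loadCached() {
  const raw = localStorage.getItem(KEY);
  return raw ? JSON.parse(raw) : null; // instantly available, even offline
}

async function refresh() {
  try {
    const res = await fetch('/api/pois');            // hypothetical endpoint
    const data = await res.json();
    localStorage.setItem(KEY, JSON.stringify(data)); // update the local copy
    return data;
  } catch (e) {
    return loadCached(); // offline: fall back to the cached version
  }
}
```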
Conclusions and Future Work

This paper presented a project report on the design and implementation of a multimedia interactive map loaded in a single HTML page, tested in collaboration with the "Ricciotto Canudo" secondary school in the city of Gioia del Colle. At present, the tool provides advanced functionality to easily deal with the manipulation of multimedia objects via a graphical UI, but it will be necessary to enhance the application in several ways.

Interface. The interface will be improved to allow users to share the popularity of each PoI using a simple "yay/nay" approach, because it is unambiguous (everyone generally either likes or dislikes something). The ranking formula will be Rank = ((Like + 1)/(Dislike + 1)) × log10(Like + 1), which avoids division by zero and ranks PoIs with more likes more highly than PoIs with fewer likes [17]. Moreover, an authorware environment for producing the interactive map will be developed. It will have a live preview of the assigned data and uploaded multimedia objects, giving the user direct feedback.

Architecture. We have developed a medium tablet version and a large desktop version of the interactive map; a small mobile version will soon be completed. Up to this point, we have focused our attention on a specific responsiveness strategy, optimising the process that delivers the output by eliminating time-wasting and using idle time to prepare for the operations a user might do next. The interactive map delivers intermediate results before the operation is finished (before all images are loaded), without the user noticing.

Appropriate mechanisms to ensure quality [18] and to detect and remove errors have been developed and assessed. However, establishing some sort of trust in the collected VGI dataset is an important factor to avoid incorrect or malicious geographic annotations. Up to now, we have checked positional accuracy by comparison (a manual approach preferred over an automated approach to avoid any processing errors [19]). Moreover, "Keep Right", "Osmose", or "OSM Inspector" can be used to visualise detected errors in the map.

To ensure the usability of the web application under different situations, test cases will be written covering the different scenarios, including not only functional usage but also technical considerations such as network speeds and screen resolution. We will utilise a questionnaire that measures five components: content, accuracy, format, ease of use, and timeliness [20].

Figure 1. The interactive map of Cyprus: screenshot of a screen session on the website [5].
Figure 2. The interactive map of Malta: screenshot of a screen session on the website [6].
Figure 3. The interactive map of Japan: screenshot of a screen session on the website [7].
Figure 4. The interactive map of "Gioia del Colle"; the map displays all PoIs because the "Time slider" (on the left, detached from the frame) is set on the XX Century. The main components of our interactive map are: "Menu" (left); "Callout" (centre); and "Sidebar" (right).
Figure 5. Virtual tour of "Distilleria Cassano": a 3D multimedia object accessible through the map. It is possible to activate the site's layout, displaying all the viewpoints of the virtual tour (right).
Figure 6. Interconnections between knowledge (expressed through multimedia objects) and PoIs.
Figure 7. The interactive map: "Folder" window (left) with an image preview; it is possible to activate an "Image viewer" with the overview panel (right).
Figure 8. The interactive map: "Table" window (left) with filter and search functions called from the callout (in the background) and "Tile" window (right).
Figure 9. The interactive map (improved version for Albania): the "Callout" with the extended description (centre). In the "Menu" (opened on the left) the functions "Satellite" and "Slideshow" are active, so the map is in earth view and the slideshow runs at the bottom. The map shows only the PoIs belonging to the "Architectural ensemble" type, the only type switched on in the sidebar (right).
Figure 10. The interactive map: dynamic legend with meters showing the number of PoIs according to the search results for "nic" (left); the legend opened at PoI level (centre); filter panel (right).
The main architectural components of the application are:

• Data Source (data services, data access, offline storage): Models/Collections of Models;
• Interaction with the application (server logic: PHP, Hypertext Preprocessor; client logic: jQuery), state capturing and navigation: Events, Routing;
• Markup for presenting data (initial page, user interface): Templates.

Table 1. Comparison of the features of the interactive maps used in the case studies and our map (non-functional and/or incomplete content functions are considered absent). PoI, Point of Interest; SEO, Search Engine Optimisation.

The system publishes multimedia content belonging to geo-referenced PoIs through a multimedia map available on the Web with advanced functionalities. It was developed in the framework of the Must See Advisor (Mu.S.A.) project, which aims to give visibility to communities by valuing knowledge from selected stakeholders.
2017-01-31T08:35:28.556Z
2017-01-24T00:00:00.000
{ "year": 2017, "sha1": "2b105417b3cd1cc0001930a856df78f979984009", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2220-9964/6/2/34/pdf?version=1485244428", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "2b105417b3cd1cc0001930a856df78f979984009", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
226236775
pes2o/s2orc
v3-fos-license
On No-Sensing Adversarial Multi-player Multi-armed Bandits with Collision Communications

We study the notoriously difficult no-sensing adversarial multi-player multi-armed bandits (MP-MAB) problem from a new perspective. Instead of focusing on the hardness of multiple players, we introduce a new dimension of hardness, called attackability. All adversaries can be categorized based on the attackability, and we introduce Adversary-Adaptive Collision-Communication (A2C2), a family of algorithms with forced-collision communication among players. Both attackability-aware and -unaware settings are studied, and the information-theoretic tools of the Z-channel model and error-correction coding are utilized to address the challenge of implicit communication without collision information in an adversarial environment. For the more challenging attackability-unaware problem, we propose a simple method to estimate the attackability, enabled by a novel error-detection repetition code and randomized communication for synchronization. Theoretical analysis proves that asymptotic attackability-dependent sublinear regret can be achieved, with or without knowing the attackability. In particular, the asymptotic regret does not have an exponential dependence on the number of players, revealing a fundamental tradeoff between the two dimensions of hardness in this problem.

I. INTRODUCTION

The decentralized multi-player multi-armed bandits (MP-MAB) problem has received increasing interest in recent years [1]-[5]. In MP-MAB, multiple players simultaneously play the bandit game and interact with each other through arm collisions. When two or more players play the same arm simultaneously, they all get a reward 0 (or, equivalently, loss 1) instead of the true underlying reward of that action. This model is largely motivated by practical applications such as cognitive radio [6]-[9] and wireless caching [10], where standard (single-player) MAB does not fully capture the system complexity and user interactions must be taken into account in conjunction with the bandit game.

Depending on how rewards are generated, the MP-MAB game can be either stochastic or adversarial, as in the single-player bandit problem. Most of the existing works focus on the stochastic setting, in which a well-behaved stochastic model exists for each arm (albeit unknown to the players). On the other hand, the (oblivious) adversarial setting makes no stochastic assumption on the rewards and assigns an arbitrary reward sequence to each arm exogenously. This is a considerably harder problem because of the need to fight the adversary while interacting with other players. Since the MAB problem for a single player is well understood, a predominant approach for both stochastic and adversarial MP-MAB is to let each player play the single-player MAB game while avoiding collisions as much as possible [1], [2], [7], [11], [12]. Recently, a pioneering work [3] proposed to purposely instigate collisions as a way to communicate between players. Such implicit communication is instrumental in breaking the performance barrier and achieving a regret that approaches that of the centralized multi-play MAB [13], [14]. This idea has been extended to several variants in the stochastic setting [15]-[18] as well as to adversarial MP-MAB [5], [19], with improved regret performance for all models. All the aforementioned works make an important assumption of collision sensing: any collision with another player is perfectly known.
Such "collision indicator" plays a fundamental role in both collision avoidance and forced-collision communication. It is widely recognized that a more difficult problem in MP-MAB is the no-sensing scenario, in which players can only observe the final rewards but not collisions. The difficulty lies in that the zero rewards can indistinguishably come from collisions or null arm rewards. Recently, there is some progress on the stochastic no-sensing problem [4], [20]. In particular, the fundamental idea of implicit communication is again proved crucial in achieving regret that approaches the centralized counterpart [9]. Nevertheless, the most difficult setting of no-sensing adversarial MP-MAB in a fully decentralized setting remains wide open. To the best of the authors' knowledge, reference [5] is the only work that achieves a sublinear regret by modifying the collision-sensing algorithm to reserve "safe" arms for players. However, the asymptotic regret O T 1− 1 2M is almost linear in T when M is large, where M is the number of involving players and T is the time horizon of the game. We note that this exponential dependence on M reveals a particular dimension of hardness (multiple players) in the no-sensing adversarial MP-MAB problem. Recent development has repeatedly demonstrated that implicit communication is crucial in achieving lower regret. However, as pointed out in [5], it is unclear how to implicitly communicate without collision information in an adversarial environment. This work makes progress in the no-sensing adversarial MP-MAB problem by addressing the challenges in incorporating implicit communication. This work reveals a novel dimension of hardness associated with the no-sensing adversarial MP-MAB problem: attackability of the adversary, that is orthogonal to the multi-player dimension of hardness. More specifically, we depart from the approach of [5] which always assumes the worst possible adversary while focusing on the multi-player hardness, and study the relationship between the attackability hardness and implicit communications. Notably, all possible adversaries can be classified based on this new concept of attackability, which is defined either by a local view (for a "one-time" attack) or a global view (for the cumulative attacks). The hardness of attackability may or may not be aware by the players, and we develop a suite of Adversary-Adaptive Collision-Communication (A2C2) algorithms under both attackability-aware and attackability-unaware settings, which adaptively adjust the implicit communication by learning the attackability of the adversary in an online manner. All of the A2C2 algorithms utilize some (common) new elements that have not been considered before in no-sensing adversarial MP-MAB, such as an information-theoretical Z-channel model and error-correction coding, to design a forced-collision communication protocol that can effectively fight against the adversary and achieve a non-dominant communication regret in the no-sensing setting. On the other hand, for the more challenging attackability-unaware setting, we show that a simple "escalation" estimation of the attackability, a novel error-detection repetition code, and randomized synchronizations are crucial to handle the unknown attackability. A key idea behind algorithms in the attackability-unaware setting is that communication error is not bad if it happens to all players, as such error does not affect player synchronization. 
The regret analysis of the A2C2 algorithms shows that they can achieve attackability-dependent sublinear regrets asymptotically, without an exponential dependence on the number of players as in [5]. This benefit, however, does not lead to a universally lower regret. In fact, we may view A2C2 of this paper and the method of [5] as operating in two different regimes of the two-dimensional hardness space (multi-player and attackability). As a preview of the analytical results, Fig. 1 and Fig. 2 numerically illustrate the theoretical dependency of the asymptotic regret (i.e., its scaling) of the A2C2 algorithms and the no-sensing algorithm from [5] on the two dimensions of hardness. When fixing the attackability, i.e., fixing the local attackability parameter α to be 0.7 or the global attackability parameter β to be 0.7, the regrets of the α-unaware A2C2 algorithm and the β-unaware A2C2 algorithm rise only slowly with $M$ in Fig. 1, since their $T$ terms are oblivious to $M$ and the overall dependency on $M$ is only a multiplicative factor. However, the regret of [5] increases sharply with more players due to the exponential dependency. When $M$ is large (larger than 4 in Fig. 1), the advantage of the A2C2 algorithms is obvious. On the other hand, when fixing the number of players, the regret performance of [5] is immune to the change of attackability, whereas the A2C2 algorithms have exponential dependencies on the attackability. As a result, their performance is very good when the adversary's attackability is weak or medium, but degrades quickly when the attackability is extremely strong. Philosophically speaking, this result shows that one can trade off the multi-player dimension of hardness with the attackability dimension of hardness, which may provide insight into other relevant adversarial bandit problems. A comparison of the regret bounds is given in Table I for both collision-sensing and no-sensing adversarial MP-MAB algorithms. (In Table I, α denotes the local attackability (see Corollary 1), β the global attackability (see Corollary 2), and the $\tilde{O}(\cdot)$ notation ignores logarithmic factors of $T$ and $K$.)

The rest of the paper is organized as follows. Related works are surveyed in Section II. The no-sensing adversarial MP-MAB problem is formulated in Section III. The general algorithm structure is presented in Section IV, followed by algorithms for known (Section V) and unknown (Section VI) attackability. The regret analysis of all algorithms is given in Section VII. Finally, Section VIII concludes the paper.

II. RELATED WORK

Collision-sensing stochastic and adversarial MP-MAB. As stated in Section I, initial approaches for collision-sensing MP-MAB adopt single-player MAB algorithms with various collision-avoidance protocols. Examples include Explore-then-Commit [11], UCB [1], [2], and ε-greedy [7] for stochastic MP-MAB, and EXP3 [12] for adversarial MP-MAB with a regret of $O(T^{3/4})$. Although these strategies achieve sublinear regret, their performance cannot approach the centralized counterparts. In particular, for the stochastic environment, there is a multiplicative factor-$M$ increase in the regret coefficient of $\log(T)$ compared with the natural lower bound of centralized MP-MAB [13], [14], which has long been considered fundamental due to the lack of communication among players. The idea of implicit communication with forced collisions was introduced by the SIC-MMAB algorithm [3], where bits 1 and 0 are transmitted by collision and no collision, respectively.
The theoretical analysis of SIC-MMAB shows, for the first time, that the regret of decentralized MP-MAB can approach the centralized lower bound in the stochastic environment. The DPE1 algorithm [15] further improves the regret by combining the KL-UCB algorithm [22] with implicit communication. Similar ideas have also been extended to other stochastic variants, such as the heterogeneous setting [16], [18], where rewards are player-dependent. For the adversarial environment, implicit communication also proves to be effective. In particular, the C&P algorithm [19] achieves a regret of $O(T^{2/3})$ by invoking forced collisions to let players coordinately perform a centralized EXP3 algorithm. The performance for two players ($M = 2$) has been improved to $O(\sqrt{T\log(T)})$ in [5] by applying a filtering strategy with bandit-type information supported by implicit communication, which approaches the lower bound of $\Theta(\sqrt{T})$ [21].

No-sensing stochastic and adversarial MP-MAB. No-sensing MP-MAB represents a more challenging scenario, and the progress has been limited. A collision-avoidance scheme is investigated in [4] for the stochastic environment, which cannot approach the centralized lower bound. Some initial attempts to incorporate implicit communication in the no-sensing stochastic setting, e.g., sharing arm indexes instead of statistics, are discussed in [3]. The EC-SIC algorithm proposed in [9] proves that it is possible to approach the centralized lower bound even without information about collisions. For the most difficult case of no-sensing adversarial MP-MAB, progress is extremely limited. To the best of our knowledge, [5] is the only work studying this problem. The idea is to design a collision-avoidance approach by reserving "safe" arms for players, which results in a regret of $O(T^{1-\frac{1}{2M}})$ that has an exponential dependency on $M$ due to the limited coordination.

Cooperative MP-MAB. This is another line of MP-MAB research where explicit communications are allowed (under certain constraints) and players do not collide with each other. Such scenarios have been studied in both stochastic and adversarial environments [23]-[27], which are under a completely different framework than this work.

III. PROBLEM FORMULATION

A. The no-sensing adversarial MP-MAB problem

We focus on the following decentralized no-sensing adversarial MP-MAB model. There are $M$ players and $K$ arms. At time $t$, player $m$ pulls arm $\pi_m(t)$; if no other player pulls the same arm, she observes the adversarially assigned loss $l_{\pi_m(t)}(t)$; otherwise, she always receives loss 1 regardless of $l_{\pi_m(t)}(t)$. With $\eta_{\pi_m(t)}(t)$ denoting the collision indicator (1 if a collision occurs, 0 otherwise), the actual loss $s_{\pi_m(t)}(t)$ received by player $m$ at time $t$ can be written as

$$s_{\pi_m(t)}(t) = \eta_{\pi_m(t)}(t) + \big(1 - \eta_{\pi_m(t)}(t)\big)\, l_{\pi_m(t)}(t).$$

If the players have access to both $s_{\pi_m(t)}(t)$ and $\eta_{\pi_m(t)}(t)$, it is a collision-sensing problem; if the information of $\eta_{\pi_m(t)}(t)$ is unavailable and players only know $s_{\pi_m(t)}(t)$, the problem is a no-sensing one, as considered in this paper. In this no-sensing setting, a loss 1 can indistinguishably come from a collision or be exogenously generated by the adversary, and player $m$ makes decisions based solely on the observed losses. The lack of information on the collision indicators complicates the MP-MAB problem in general [3], [9], and this challenge is more significant in the adversarial setting [5]. Note that if $l_k(t) \neq 1$, $\forall k, t$, the no-sensing setting is equivalent to collision-sensing. In the adversarial MP-MAB model, the notion of regret can be generalized with respect to the best allocation of players to arms as follows [19]:

$$R(T) = \sum_{t=1}^{T}\sum_{m=1}^{M} s_{\pi_m(t)}(t) \;-\; \min_{\{k_1,\dots,k_M\}} \sum_{t=1}^{T}\sum_{m=1}^{M} l_{k_m}(t),$$

where the minimum is over allocations of $M$ distinct arms. We are interested in the expected regret $\mathbb{E}[R(T)]$, where the expectation is with respect to the algorithm randomization. As shown in [5], one cannot obtain any non-trivial regret guarantees facing an adaptive adversary.
This work thus focuses on the oblivious adversarial MP-MAB, where the reward generation of the adversary is independent of the actions of players. Equivalently, the loss sequence is chosen by the adversary at the beginning of the game.

B. Attackabilities of the adversary

To explore the idea of forced-collision communication in the no-sensing adversarial setting, the overall horizon $T$ is divided into exploration and communication phases, similar to the approaches in collision-sensing settings [3], [19]. Information is shared by purposely created collisions in the communication phases to maintain synchronization and coordination between players in the subsequent exploration phases. However, in the no-sensing setting, a loss 1 assigned by the adversary can be viewed as a certain "attack", since players have no knowledge of whether it comes from the adversary or from a collision. Such a loss-1 attack has very different impacts on the regret of the two phases:

• Exploration phase. Under the assumption that the preceding communication phase was successful, no negative influence occurs when the adversary attacks (assigns loss 1) in the exploration phase, since the regret is measured by the gap from the optimal choice.
• Communication phase. A loss-1 attack in a communication phase may lead to communication errors for the players, which jeopardize the essential coordination among them and lead to a potentially linear regret due to collisions in the subsequent exploration phase, as illustrated in Fig. 3.

From the previous studies in the stochastic setting [3], [9], it is clear that any bandit policy attempting to enable forced-collision communication in the no-sensing setting will have a dependency on the environment's ability to "attack" such communications. This naturally requires bounding the worst-case loss in communications. In the stochastic settings, such ability is characterized by a positive lower bound $\mu_{\min}$ such that $0 < \mu_{\min} \le \min_{k\in[K]} \mu_k$, where $\mu_k$ is the mean of arm $k$'s rewards. For example, such a lower bound is assumed to exist and be known to all players in [3], [9]. Analogous to the role of $\mu_{\min}$ in the stochastic MP-MAB models, we propose a new concept to characterize the adversarial environment, called the adversary's attackability, which represents an upper bound on the adversary's mechanism for generating loss 1's. This is a notable distinction to [5], where no communication is utilized in the no-sensing setting and thus modeling the adversary's attackability is not necessary. More specifically, in this work we define two types of attackability, the local attackability and the global attackability, which provide two different ways to categorize all adversaries, as detailed in this section.

First, the local attackability aims at modeling the one-time worst-case attack. In a communication phase, as shown in Fig. 3, the worst case is for this phase to see all loss 1's from the adversary, because no information can be reliably shared in such a situation. In other words, the local attackability is captured by the maximum length of contiguous loss 1's assigned on the loss sequence, since it represents the longest duration over which no reliable communication can happen. Without loss of generality, the local attackability parameter α characterizes how this maximum length of contiguous loss 1's scales with the horizon $T$. The local attackability captures the one-time "budget" for the adversary's attack, and adversaries sharing the same parameter α can be viewed as being in the same category. Another perspective is to consider the overall attacks over $T$ as the global attackability.
The global attackability captures the total amount of loss 1's assigned on one arm; the parameter β characterizes how this total attack budget scales with the horizon $T$. Similar to the local attackability, adversaries sharing the same parameter β can be viewed as being in the same category, where the overall "budgets" for the adversary's attacks are of the same order. However, note that since the local attackability parameter α in Corollary 1 does not provide any bound on the overall attack budget, it is more stringent than the global attackability parameter β in Corollary 2.

It is important to keep in mind that Corollaries 1 and 2 represent two ways of categorizing adversaries rather than imposing constraints or requirements on them. Each category still contains many adversaries, as long as their scalings of attackability are the same, and every possible adversary is in a certain category. In addition, as shown in the subsequent sections, such categorization does not even need to be known by the players. In this sense, we do not impose more assumptions than [5]. Rather, the attackability view represents a different angle on the same no-sensing adversarial MP-MAB problem, and the proposed algorithms can adapt to the varying attackability in an automatic way, based on the perceived category of the adversary that they face.

IV. ALGORITHM OUTLINE

All the algorithms proposed in this paper have two different phases, exploration phases and communication phases, and share a common leader-follower structure [18], [19]. Player 1 (the leader) determines arm assignments for the remaining players (the followers). The arm assignment is transmitted to each follower in the communication phases. Then, in the following exploration phases, all the players keep sampling the assigned arms. This section introduces the arm assignment procedure for the exploration phases and gives a brief discussion of the communication phases, which are treated separately in the following sections for the different attackability scenarios.

A. Exploration Phase

Assuming explicit communications are allowed, i.e., in the centralized model, the challenge of the exploration phases is how to choose $M$ arms to explore for all the $M$ players. We note that this is similar to the adversarial multi-play problem, where the leader (the centralized agent) chooses $M$ distinct arms at each time step, and player $m$ is assigned arm $A_m(t)$. As commonly adopted in the multi-play setting [28], each subset of $M$ distinct arms $\{A_1, \dots, A_M\}$ is viewed as a single meta-arm $A$ to be chosen by the leader. The set of all meta-arms is defined as $\mathcal{K} = \{A \subseteq [K] : |A| = M\}$.

An arm assignment policy that builds on [19] is designed in this paper. At time $t-1$, the players first explore the assigned arms in $A(t-1)$. Then, the leader updates an unbiased loss estimator $\hat{l}_k(t-1)$ based on $l_{A_1(t-1)}(t-1)$, the loss that the leader observes on her arm $A_1(t-1)$ at time $t-1$. Note that the update only requires past observations from the leader, which is designed to reduce the communication burden and to facilitate the generalization to the decentralized setting. For time $t$, the cumulative loss estimator $\hat{L}_A(t)$ for each meta-arm $A \in \mathcal{K}$ is first updated as the sum of the loss estimations of its elementary arms up to time $t-1$, i.e., $\hat{L}_A(t) = \sum_{\upsilon=1}^{t-1} \sum_{k\in A} \hat{l}_k(\upsilon)$. Then, the EXP3 algorithm [29] is applied to the meta-arm MAB problem, so that each meta-arm $A \in \mathcal{K}$ is sampled with a probability $P_A(t)$ proportional to $\exp(-\eta \hat{L}_A(t))$, as the exploration meta-arm $A(t)$ for time slot $t$. The loss estimator $\hat{l}_k(t)$ is then again updated after pulling the chosen meta-arm. At time $t+1$, the same procedures are performed to get $\{\hat{L}_A(t+1)\}_{A\in\mathcal{K}}$, $\{P_A(t+1)\}_{A\in\mathcal{K}}$ and $A(t+1)$.
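As a naive sketch of this sampling step (enumerating all meta-arms explicitly, which is exponential in general; the K-DPP machinery discussed next is what makes it efficient in the paper; all variable names are illustrative):

```js
// Naive EXP3 over meta-arms: P_A proportional to exp(-eta * Lhat_A).
// metaArms: array of M-subsets of arm indices; Lhat: per-arm cumulative
// loss estimates, so Lhat_A is the sum over the arms in A.
function sampleMetaArm(metaArms, Lhat, eta) {
  const weights = metaArms.map((A) =>
    Math.exp(-eta * A.reduce((s, k) => s + Lhat[k], 0))
  );
  const total = weights.reduce((a, b) => a + b, 0);
  let r = Math.random() * total; // inverse-CDF sampling over meta-arms
  for (let i = 0; i < metaArms.length; i++) {
    r -= weights[i];
    if (r <= 0) return metaArms[i];
  }
  return metaArms[metaArms.length - 1];
}
```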
As shown in [19] and Appendix B, this algorithm guarantees a regret bound of $2M\sqrt{K\log(K)T}$ when $\eta = \sqrt{\log\binom{K}{M}/(MKT)}$. Furthermore, since there are $|\mathcal{K}| = \binom{K}{M}$ meta-arms, computing the probability $P_A(t)$ for each meta-arm and the marginal probability $\sum_{A\in\mathcal{K}: k\in A} P_A(t)$ for an arm $k$ that is to be updated would lead to an exponential complexity if done naively. However, with a concept called K-DPPs [30], sampling and marginalization can be made more efficient. As shown in [19], the complexity of sampling a meta-arm and computing the marginal probability for a fixed arm can both be reduced to $O(KM)$ with K-DPPs, which makes the scheme less complex to implement.

Lastly, with the centralized algorithm described above, the key adjustment to the decentralized setting is to notify followers of their assigned arms by forced-collision communications. However, to avoid a linear communication regret due to frequent updating, the exploration phase is extended from one time slot to τ slots. This means each player is fixated on one arm for at least τ slots, and the update happens only after each exploration phase. The leader then uses her samples of losses observed during this entire exploration phase as the feedback to assign arms for the next phase. Note that although this infrequent switching reduces the communication burden, it also degrades the regret guarantees [31]; we will elaborate on this aspect in the analysis. When there is no ambiguity, the time variables in $A(t)$, $P_A(t)$, $\hat{l}_k(t)$ and $\hat{L}_A(t)$ are replaced by the corresponding phase index, as $A(p)$, $P_A(p)$, $\hat{l}_k(p)$ and $\hat{L}_A(p)$, under the decentralized setting for the $p$-th phase.

B. Communication Phase

In the communication phases, arm assignments are transmitted from the leader to the followers with forced collisions. Functions Send() and Receive() are used in the algorithm descriptions for the sending and receiving procedures with forced collisions. Every player is first assigned a unique communication arm corresponding to her index, i.e., arm $m$ for player $m$. In the collision-sensing setting, players only need to take predetermined turns to communicate: the "receive" player samples her own communication arm, and the "send" player either pulls (creating a collision; bit 1) or does not pull (creating no collision; bit 0) the receive player's communication arm to transmit one bit of information. A player that is not engaged in the current peer-to-peer communication keeps pulling her own communication arm, to avoid interrupting other ongoing communications. Since the collision indicator is perfectly known in the collision-sensing setting, player $m$ receives error-free information after implicit communication. In the more challenging no-sensing setting, there is no information about the collision indicator, which means attacks, i.e., loss 1's assigned by the adversary, may cause communication errors and incur a linear regret in the subsequent exploration phase, as shown in Fig. 3. The no-sensing setting is discussed under four different scenarios (two with knowledge of the attackability and two without) in the following sections:

• α-aware and β-aware. Players know the local (resp. global) attackability parameter; these settings are reported in Section V.
• α-unaware and β-unaware. Similar to the above, but players have no knowledge of α (resp. β). These settings are reported in Section VI-A and Section VI-B, respectively. This is the more challenging case, and the main focus of this work.
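As a toy illustration of the collision-sensing one-bit protocol just described (the function and arm names are illustrative, not the paper's Send()/Receive() routines):

```js
// Toy model of one-bit implicit communication in the collision-sensing
// setting. The receiver samples her own communication arm; the sender
// pulls it to signal bit 1, or stays on her own arm to signal bit 0.
function communicateBit(bit, senderArm, receiverArm) {
  const senderPulls = bit === 1 ? receiverArm : senderArm;
  const collision = senderPulls === receiverArm; // receiver's collision indicator
  return collision ? 1 : 0; // with perfect sensing, the indicator is the bit
}
```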
V. ATTACKABILITY KNOWN TO PLAYERS

In this section, the attackability is assumed to be perfectly known by all players, but such information does not tell the players how and when the attacks will happen. The two definitions of attackability lead to two algorithms, which also serve as the building blocks of the subsequent algorithm designs for the attackability-unaware setting.

A. α-aware

Although the local attackability is theoretically more stringent than the global attackability, it is relatively easier to handle. The α-aware A2C2 algorithm is presented in Algorithm 1 (leader) and Algorithm 2 (followers). In the no-sensing communication, bit 1 (collision) is always received correctly, while bit 0 (no collision) can potentially be corrupted by a loss 1 from the adversary. In other words, the adversary's attack is asymmetric: she can attack bit 0 but not bit 1. From an information-theoretic point of view, this corresponds to a Z-channel model [32], as shown in Fig. 4. We note that this connection to the Z-channel model was first utilized in [9] to study stochastic no-sensing MP-MAB. A key challenge in the adversarial setting compared with [9], however, is that a fixed crossover probability does not exist.

Error-correction code with long blocklength. The idea of utilizing an error-correction code naturally arises under the formulation of a Z-channel model. With the knowledge of α, the key idea to overcome the full attackability is to "overpower" the adversary via codes that have sufficient error-correction capabilities [33]. Different error-correction codes, ranging from the simple repetition code to more complex (and powerful) algebraic and nonlinear codes, can be adopted in the proposed algorithms. To facilitate the regret analysis, we choose to use the repetition code [34], and functions rEncoder() and rDecoder() are used in the algorithms as the encoder and decoder. At the encoder, each information bit is expanded to a repeated string whose length scales with the worst-case attack run.

B. β-aware

For the β-aware setting, a codeword length of $k(T, \nu) = \Theta(T^{\nu})$, where $\nu = \max\{\frac{3\beta-1}{2}, 0\}$, is sufficient for achieving a sublinear regret that is better than that of α-aware A2C2. In fact, a closer look reveals a very surprising result: for $\beta \le \frac{1}{3}$, we have $\nu = 0$, which means there is no need for coding at all in the communication phases (see Section VII for details).

VI. ATTACKABILITY UNKNOWN TO PLAYERS

In this section, the assumption of knowledge of the attackability is removed: no information on the parameter α or β is revealed to the players. The α-unaware and β-unaware settings are tackled separately in the subsequent subsections.

A. α-unaware

With no information on α, the main difficulty lies in how to prepare for the worst case without incurring a linear loss. All the key features of α-aware A2C2 still apply in the adaptive algorithm, called α-unaware A2C2, but several new ideas are needed: an error-detection code to estimate α, and a synchronization procedure with randomized length to synchronize the estimation updates among players. The algorithms for the leader and followers are presented in Algorithms 3 and 4, respectively.

Estimation of α. Without the knowledge of α, no effective prevention is possible for communications in the worst-case scenario. We propose to adaptively estimate α in an escalation fashion. The interval [0, 1] (the support of α) is uniformly divided into sub-intervals of length ε, where ε > 0 is an arbitrarily small constant. The estimated value α′ starts with α′ = 0, and increases with a step size of ε whenever a communication failure is observed, until the upper limit is reached. As we see in the regret analysis, this seemingly naive estimation works very well.
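As a sketch of the repetition scheme over this Z-channel, using the paper's rEncoder()/rDecoder() names but an illustrative array representation: since a transmitted 1 is never corrupted, a single observed 0 already identifies the sent bit as 0, and decoding fails only if the adversary attacks every slot of a transmitted 0.

```js
// Repetition code over the Z-channel: a 0 can be flipped to 1 by an
// adversarial loss, while a 1 is always received correctly.
function rEncoder(bit, n) {
  return new Array(n).fill(bit); // repeat each information bit n times
}

function rDecoder(block) {
  // Any observed 0 proves the sent bit was 0; decoding errs only if the
  // adversary attacks every slot of a transmitted 0.
  return block.includes(0) ? 0 : 1;
}
```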
Error-detection repetition code. The aforementioned escalation mechanism to estimate α relies on knowing when a communication failure happens, which is non-trivial. This leads to the second idea of utilizing a special kind of error-detection code for the Z-channel, called the constant-weight code [35]. Codewords in a constant-weight code share the same Hamming weight, which enables error detection. As noted in [35], a constant-weight code can detect any number of asymmetric errors, and the maximal number of constant-weight codewords of length $n$ can be attained by taking all codewords of weight $\lceil n/2 \rceil$ or $\lfloor n/2 \rfloor$. Thus, a codeword length of $O(\log(K))$ is theoretically sufficient to enable a constant-weight code with $O(K)$ codewords. To facilitate the discussion, a specific kind of constant-weight code is adopted. As shown in Fig. 5, if there is only one index as the decoder output, the communication is successful (example 'S' in Fig. 5), which indicates that it is sufficient to maintain the current estimation α′. Functions eEncoder() and eDecoder() are used in the algorithms for the corresponding error-detection encoder and decoder, respectively.

Synchronization with randomized length. The error-detection repetition code allows each follower to decide whether the α-estimation needs to be updated. However, the leader does not have access to this information, and the estimation across players may not be synchronized, which poses a significant challenge that calls for communication for synchronization.

B. β-unaware

Similar ideas from the previous design can be applied to the β-unaware setting, and the resulting β-unaware A2C2 algorithm is presented in Appendix F. Some important differences from α-unaware A2C2 are explained in this section. Unlike the α-estimation starting from α′ = 0, the estimation β′ starts from 1/4, as analyzed in Section VII. In each communication phase, the arm assignment is similarly encoded with the error-detection code, but the codeword length for each bit is adjusted to $k(T, \nu') = \Theta(T^{\nu'})$, where ν′ is defined from β′ in the same way as ν is from β. The increased coding rate is due to the same reason described in Section V-B, i.e., a certain amount of communication errors is tolerable within the global attackability bound. As for the feedback from the followers, two different "uplink" operations are performed: one to maintain the unbiased loss estimations and the other to keep players synchronized with the estimated β′. First, after each arm assignment, followers only immediately notify the leader of communication errors, for her to maintain an unbiased loss estimation. This uplink does not indicate the need to update β′, since β′ is an estimation of the overall budget and there is no following downlink. Since communication errors in this uplink can only influence the subsequent exploration phase, it is performed with a repetition code of length $k(T, \nu') = \Theta(T^{\nu'})$. Then, for the update of β′, each player keeps counting the overall number of attacks on her communication arm and reports to the leader when the estimated budget is exceeded. To reduce the communication burden, the update of β′ and the synchronization procedure are performed only at potential updating time slots rather than after each communication phase.
Specifically, similar iterations of uplink and downlink for synchronization happen every $T^{\beta'}/k(T,\nu')$ phases, with length $k(T, \beta') = \Theta(T^{\beta'})$ in each round and a random number of rounds $N(\xi) \in [0, \lceil T^{\xi} \rceil]$. The choice of length $k(T, \beta')$ is made because a synchronization error may influence the entire remainder of the time slots. Similar to the analysis in Section VI-A, the adversary must attack exactly the last round of synchronization to succeed in breaking the coordination among players.

VII. PERFORMANCE ANALYSIS

This section is devoted to the theoretical analysis of all proposed A2C2 algorithms. Detailed proofs can be found in Appendices C to F.

α/β-aware. The regrets of the α-aware A2C2 and β-aware A2C2 algorithms are first presented in Theorems 1 and 2, respectively. With $\eta = \sqrt{\log\binom{K}{M}/(MKT/\tau)}$, the expected regret of the α-aware A2C2 algorithm is bounded by an attackability-dependent sublinear term, where ε is an arbitrarily small constant. It is worth noting that for $\beta \le \frac{1}{3}$, we have ν = 0, which means there is no need for coding at all. Another note is that for α = 0 or $\beta \le \frac{1}{3}$, the known regret of the collision-sensing setting, $O(T^{2/3})$, is recovered.

α/β-unaware. Without knowledge of the attackability, the performance of the α-unaware A2C2 and β-unaware A2C2 algorithms is guaranteed in Theorems 3 and 4, and analyzed subsequently. Under the estimation α′, the expected regret of the α-unaware A2C2 algorithm is bounded by an α-dependent sublinear term, where ε > 0 is an arbitrarily small constant. Under the estimation β′ starting from 1/4, the expected regret of the β-unaware A2C2 algorithm is bounded analogously, where ε > 0 is an arbitrarily small constant.

Similar to the β-aware case, we have ν′ = 0 for $\beta \le \frac{1}{4}$ in the β-unaware A2C2 algorithm, which indicates there is no need of coding for assigning arms and reporting communication errors. It is also worth noting that Theorems 2 and 4 provide better dependencies on $T$ than Theorems 1 and 3, respectively, which reinforces the intuition that the local attackability parameter α in Corollary 1 is more stringent than the global attackability parameter β in Corollary 2. Compared with the $O(T^{1-\frac{1}{2M}})$ regret in [5], it can be observed that the regret results of A2C2 have an exponential dependence on the attackability rather than on the number of players $M$, which can be an advantage when dealing with a large number of players. From another perspective, these two different dependencies reveal two orthogonal "dimensions of hardness" in the no-sensing adversarial MP-MAB problem: multiple players and attackability. As no information sharing among players is utilized in [5], the coordination is limited and the difficulty of the problem grows exponentially with the number of players. In our work, forced collisions are used for communications, and coordination among players is established. As a result, the regret shifts the exponential dependence from the number of players ($M$) to the attackability (α or β), and the dependence on $M$ is only a multiplicative factor.

A. Proof sketch: α-unaware

The regret of the α-unaware A2C2 algorithm can be decomposed as $R_3(T) = R^{\mathrm{expl}}_3(T) + R^{\mathrm{comm}}_3(T) + R^{\mathrm{sync}}_3(T)$.

Lemma 1. Denoting ζ = 1{successful preceding communication}, the expected exploration regret of the α-unaware A2C2 algorithm is bounded as follows.

Lemma 2. The expected communication regret of α-unaware A2C2 is bounded as follows.

Finally, the risk of losing synchronization during updates incurs a regret of $R^{\mathrm{sync}}_3(T)$, which is analyzed in the following lemma.

Lemma 6. The expected communication regret of β-unaware A2C2 is bounded as follows.
Lemma 7. The expected regret caused by potential synchronization errors of β-unaware A2C2 is bounded as follows.

In addition to similar optimizations over τ and ξ as in the analysis of α-unaware A2C2, the choice of ν′ is also optimized so that the exploration regret caused by the global attackability, i.e., $\mathbb{E}[R^{\mathrm{err}}_4(T)]$, does not dominate the total regret.

C. Discussions

As shown in Theorems 1 to 4, the regrets of the A2C2 algorithms are sublinear as long as the adversary cannot (asymptotically) attack all time slots, i.e., α < 1 or β < 1, which parallels the no-sensing stochastic setting, where sublinear regrets can be achieved only when $\mu_{\min} > 0$. Further, as stated in Section III-B, it is possible that β = 1 while α < 1, and in such cases α-(un)aware A2C2 can still achieve a sublinear regret.

B. The centralized algorithm

For the completeness of this work, we first provide a regret analysis of the algorithm described in Section IV-A, which is based on [19].

Proof. First, we show that $\hat{l}_A(t) = \sum_{k\in A} \hat{l}_k(t)$ is an unbiased estimation of the true loss from meta-arm $A$ at time $t$, i.e., of $l_A(t) = \sum_{k\in A} l_k(t)$. Denoting $P(t) = \{P_A(t)\}_{A\in\mathcal{K}}$, we first show that for any arm $k \in [K]$, $\hat{l}_k(t)$ is a conditionally unbiased estimation of $l_k(t)$, where the two steps of the derivation follow from the definitions of $\hat{l}_k(t)$ and $P_A(t)$, respectively. From the law of total expectation, we can derive $\mathbb{E}[\hat{l}_k(t)] = \mathbb{E}[\mathbb{E}[\hat{l}_k(t) \mid P(t)]] = l_k(t)$. Finally, with $\hat{l}_A(t) = \sum_{k\in A}\hat{l}_k(t)$ and the linearity of expectation, $\hat{l}_A(t)$ is an unbiased estimation of $l_A(t)$.

With the standard EXP3 regret guarantees, the centralized regret [19], [29] is bounded in terms of $\mathbb{E}[\hat{l}_A(t)^2 \mid P(t)]$. This term can be simplified using the fact that $\hat{l}_k(t) \neq 0$ holds for at most one arm; together with $l_k(t) \le 1$ and the choice of η, this yields the stated bound.

The following result establishes a regret bound for the blocked version of the centralized algorithm, which is important for the ensuing analysis in the decentralized setting.

Theorem 6 ([19], [31]). Let Π be a bandit algorithm with an expected regret upper bound of $R(T)$. Then the blocked version of Π with a block size τ has a regret upper bound of $\tau R(T/\tau) + \tau$. In the multi-player case, the last term τ, which represents the additional regret when $T$ is not divisible by τ, becomes $M\tau$.

C. α-aware

The overall regret of α-aware A2C2 can be decomposed into exploration and communication regret terms.

D. β-aware

The same error-detection code of Section VI-A is also used in the β-aware A2C2 algorithm. However, it is not used for the update of estimations, but rather only to maintain an unbiased loss estimation. The β-aware A2C2 algorithm is presented in Algorithms 5 (leader) and 6 (follower).

E. α-unaware

The estimation α′ = jε is denoted as $\alpha_j$ for an integer $j$ for simplicity. The overall number of exploration time steps when $\alpha_j$ is used as the α-estimation is denoted as $T_{e,j}$, and the corresponding τ and ξ under $\alpha_j$ are denoted as $\tau_j$ and $\xi_j$, respectively.

Proof of Lemma 1. Denote $T_{s,j}$ and $T_s = \sum_{j=0}^{w} T_{s,j} \le \sum_{j=0}^{w} T_{e,j} \le T$ as the lengths of exploration with successful preceding communications under estimation $\alpha_j$ and in total, respectively. The exploration under each estimation $\alpha_j$ with successful preceding communications can be viewed alone as a bandit game with horizon $T_{s,j}$ and block $\tau_j$. Thus, by applying Theorem 6 to Eqn.
(1), for all $T > T^*$, the first term in the exploration regret can be bounded accordingly.

Proof. Before the estimation of α′ is completed, each communication for synchronization has a failure probability of $\frac{1}{\lceil T^{\xi} \rceil}$, which in the worst case incurs a linear regret $MT$. With a union bound over all communication phases with estimation less than $\alpha_w$, the resulting regret is at most $\sum_{j=0}^{w} \frac{T_{e,j}}{\tau_j} \cdot \frac{1}{T^{\xi_j}} \cdot MT$.

F. β-unaware

The β-unaware algorithms for the leader and followers are presented in Algorithms 7 and 8, respectively. The following proofs focus on $\beta \ge \frac{1}{4}$; the case of $\beta \le \frac{1}{4}$ can be obtained as a special case in which the estimation β′ is kept at 1/4. Under the estimation $\beta' = \frac{1}{4} + j\epsilon$, denoted as $\beta_j$ for simplicity, similar notations $T_{e,j}$, $\tau_j$, $\xi_j$ and $\nu_j$ are applied, referring to the overall time that the current estimation holds and the corresponding parameters.

Proof of Lemma 5.
2020-11-03T22:45:52.152Z
2020-11-02T00:00:00.000
{ "year": 2020, "sha1": "361844cfc05b9f89488777a2e1f25f360f897f36", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2011.01090", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "361844cfc05b9f89488777a2e1f25f360f897f36", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
244276953
pes2o/s2orc
v3-fos-license
Effect of annealing temperature and time on recrystallization behavior of Mg-Gd-Y-Zn-Zr alloy

In this paper, the effect of annealing treatment on the microstructure and hardness of an extruded Mg-9Gd-4Y-2Zn-0.5Zr (wt.%) alloy is discussed. The microstructure evolution of the alloy under different annealing conditions was studied by optical microscopy (OM), scanning electron microscopy (SEM) and electron backscatter diffraction (EBSD), and the variation of hardness was analyzed. With increasing annealing temperature, the large deformed grains first break into small recrystallized grains. When the temperature increases further, the recrystallized grains grow abnormally, accompanied by the precipitation of a chain phase and the fragmentation of the lamellar long-period stacking ordered (LPSO) phase. The alloy does not recrystallize at low temperature, and the recrystallized grains grow abnormally at high temperature. An increase in annealing time also leads to abnormal growth of recrystallized grains. The texture gradually diffuses from the classical extrusion texture toward the extrusion direction (ED). The results show that under the condition of 430 °C × 5 h, the recrystallized volume of the alloy is the largest, the recrystallized grain distribution is uniform, and the hardness value is the highest.

Introduction

Magnesium (Mg) alloys have the characteristics of low density, high specific strength, good electromagnetic shielding and light weight, making them prime materials for lightweight equipment. They are widely used in mechanical vehicles, military aerospace and other fields, and are of great strategic significance for national defense security and civil scientific and technological progress [1,2]. However, due to the hexagonal close-packed structure of Mg alloys, their strength and ductility at room temperature are poor [3,4], which hinders the development of Mg alloys. Therefore, how to improve the strength of Mg alloys has attracted wide attention. Alloying is considered to be one of the most effective methods to improve mechanical properties [5,6]. Zn, Al, Mn and other elements have been widely used in the development of new Mg alloys [7,8]. The use of simple Mg alloys at high temperatures is also limited, because the microstructure of fine-grained alloys is often unstable and exhibits extensive grain growth. Attempts have been made to improve their thermal stability by adding different alloying elements. In this regard, it is reported that the addition of Gd and other rare earth (RE) elements significantly improves the mechanical properties at high temperatures due to solid-solution and precipitation strengthening [9,10]. Therefore, in the past decades, Mg-RE alloys containing LPSO phases have attracted wide attention due to their excellent properties at high temperatures [4,5,11]. In addition, grain refinement is an effective way to improve the mechanical properties of alloys and metals [12,13], and is considered to be the only way to improve both the strength and ductility of Mg alloys [2,14]. Subsequently, many new technologies have been developed to produce ultra-fine-grained Mg alloys, such as equal-channel angular extrusion and reciprocating upsetting extrusion [15-17], which refine grains through dynamic recrystallization (DRX) induced by large plastic deformation [2,15].
In addition to dynamic recrystallization, recrystallized grains can also nucleate and grow during the intermediate annealing between deformation passes or in the subsequent annealing of the deformed alloy, namely the static recrystallization (SRX) process [10,18,19]. Annealing can refine the grains, form new microstructures, and soften deformed materials, restoring their ductility and workability [2]. In recent years, many studies have analyzed the static recrystallization of cold-rolled Mg-Gd-Y alloys after deformation. L Y Zhao et al [17,[20][21][22] analyzed the nucleation orientation, grain orientation evolution and preferential growth during static recrystallization of cold-rolled Mg-Gd-Y alloy [15,23]. Many studies have also analyzed the effect of twinning on recrystallization behavior. For example, S H Lu et al [17] tracked the microstructure evolution of forged samples during annealing at 450 °C and analyzed tensile twinning and its contribution to SRX in detail. The few studies on Mg-Gd-Y-Zn-Zr alloys have mainly focused on the formation and growth of the LPSO phase during heat treatment. Jianxiong Xiao et al [7] studied the effect of annealing time on the grain size and LPSO phase of Mg-6.9Gd-3.2Y-1.5Zn-0.5Zr alloy, demonstrating the morphology change of the phases inside recrystallized grains and the influence of the LPSO phase on recrystallization behavior during annealing. However, the effect of annealing temperature and time on the microstructure and mechanical properties of extruded Mg-9Gd-4Y-2Zn-0.5Zr alloy is still unclear. In this study, Mg-Gd-Y-Zn-Zr alloy after solution treatment and extrusion was peak-aged and then annealed at 350 °C-450 °C for 1 h-20 h, both for the same time at different temperatures and at the same temperature for different times. The effects of time and temperature on the grain size and precipitates during annealing were observed, and the evolution of texture during annealing and the promotion or inhibition of recrystallized grain nucleation and growth by precipitation were analyzed. The effect of annealing time and temperature on the microstructure evolution and mechanical properties of the alloy was clarified by comparing the hardness changes after secondary aging treatment. Experiment In this experiment, Mg-9Gd-4Y-2Zn-0.5Zr alloy was used. The semi-cylindrical as-cast sample with the size of Φ5×5 mm was solution treated at 520 °C for 24 h and then extruded with an extrusion ratio of 16:1. The sample was then subjected to peak aging at 200 °C for 9 h and cooled in water [24][25][26]. In order to observe the recrystallization behavior of the alloy at different temperatures and times, together with the precipitation phenomena and texture changes during recrystallization, samples annealed at different temperatures are denoted T350, T420, T430 and T450. The experimental design is shown in table 1. All annealing experiments were completed in a resistance furnace, with the furnace temperature variation controlled within 5 °C during annealing. The extruded sample was cut along the direction parallel to the extrusion and then characterized and analyzed. After sandpaper grinding and mechanical polishing, etching was carried out with a solution of acetic acid, alcohol, water and picric acid, and the microstructure was observed under OM (Zeiss).
The polished samples were ion-milled to remove the surface stress layer and placed in a scanning electron microscope (SEM, SU-5000) for EBSD testing. The EBSD data were analyzed with Channel 5 software. After opening the OSC file, the data were cleaned in three steps: first, Grain CI Standardization was selected in the cleanup data option, with "Cleanup and add to project as new dataset" selected in the Operation option; second, Grain Dilation was selected, with "Cleanup in place" in the Operation option; third, Neighbor Orientation Correlation was selected, again with "Cleanup in place". Map, Chart and Texture were then used to analyze the cleaned file. For the IPF maps, the {none} option was selected for Grayscale in the Map Style option and Inverse Pole Figure for Color; high-angle grain boundaries (15°-100°) and low-angle grain boundaries (0°-15°) were added in the Boundaries option, drawn as coarse black lines and fine white lines, respectively. For the grain size (GS) charts, Grain Size (diameter) was selected in the Type option of Chart, and a Log scale was selected for the X-axis in the Parameters of the Edit option (a minimal scripted analogue of this step is sketched below). For the pole figure (PF) plots, the Add PF option was selected in the Texture option, the (0001) plane texture was added, and the New Texture Plot was opened. The observation directions of all samples were parallel to the ED. In order to compare the effect of annealing on the mechanical properties of the alloy, the alloy with the best comprehensive mechanical properties after annealing was selected for a second peak aging treatment at 200 °C for 47 h, and the hardness changes were compared. Microstructure analysis of alloy in initial state Taking the sample state after peak aging as the initial state, the initial OM microstructure of the alloy is shown in figure 1. It is mainly composed of the block LPSO phase (blue arrow) and large, long strip-shaped deformed grains (red arrow) [27,28]. The deformed grains are compressed perpendicular to the extrusion direction and elongated parallel to it. The alloy exhibits an obvious bimodal microstructure, with the coarse deformed grains surrounded by fine dynamically recrystallized grains, as shown by the green arrow in figure 1 [24,29,30]. Figure 2 shows the KAM map of the aged alloy. There is a high dislocation density at the boundaries of the strip-shaped deformed grains, decreasing gradually toward the grain interiors, which indicates that the residual stresses generated by extrusion are mostly concentrated at the deformed grain boundaries. Effect of annealing temperature on recrystallization behavior In order to observe the effect of annealing temperature on the static recrystallization behavior of Mg-9Gd-4Y-2Zn-0.5Zr alloy, annealing experiments were carried out in the temperature range of 350 °C-450 °C.
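As an aside, the grain-size chart step described in the EBSD procedure above (Grain Size (diameter) with a logarithmic X-axis) can be mimicked outside Channel 5. The following Python sketch uses synthetic placeholder diameters (the bimodal numbers are illustrative only, not measured values):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Placeholder data: fine DRX grains (~2 um) plus coarse deformed grains (~40 um).
diameters_um = np.concatenate([
    rng.lognormal(mean=np.log(2.0), sigma=0.4, size=800),
    rng.lognormal(mean=np.log(40.0), sigma=0.3, size=60),
])

# Log-spaced bins reproduce the "Log" X-axis choice in the Chart dialog.
bins = np.logspace(np.log10(diameters_um.min()),
                   np.log10(diameters_um.max()), 30)
plt.hist(diameters_um, bins=bins, edgecolor="black")
plt.xscale("log")
plt.xlabel("Grain size (equivalent diameter, um)")
plt.ylabel("Count")
plt.show()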
The OM microstructures of samples annealed at low, medium and high temperatures for 5 h are shown in figure 3. Figure 3(a) shows the alloy annealed at 350 °C for 5 h. Compared with the initial microstructure in figure 1, there are no obvious recrystallized grains; the structure is mainly composed of large deformed grains (red arrow) and fine granular grains (green arrow). The fine grains are presumed to be recrystallized grains that have just nucleated, which indicates that an obvious nucleation process occurs but that static recrystallization proceeds very slowly at low temperature [19,31]. Figures 3(b) and (c) show the microstructures of the alloy annealed at 420 °C and 430 °C for 5 h under OM. It can be seen that the recrystallization rate of the alloy is obviously faster at higher temperature. At 420 °C, large strip-shaped deformed grains that have not recrystallized still remain. In contrast, recrystallization at 430 °C is more complete: relatively complete static recrystallization has occurred, there are essentially no large deformed grains, and only a small part of the matrix remains undissolved. The growth of the SRX grains is not isotropic, that is, growth is delayed in some regions. A large number of chain-like phases precipitated on the recrystallized grains during annealing, as shown by the yellow arrow in figure 3(c) [32][33][34][35]. Figure 3(d) shows the microstructure of the alloy annealed at 450 °C for 5 h. Most of the deformed grains have statically recrystallized, but because of the high temperature most of the recrystallized grains exhibit abnormal growth, and some undissolved phases remain. Compared with 420 °C or 430 °C, although much more of the coarse matrix dissolves, the grain size is too large, which degrades the comprehensive mechanical properties of the alloy, so 450 °C is not the most suitable annealing temperature. Figure 4 shows SEM images of the alloy in the initial state and after annealing at 420 °C × 5 h and 430 °C × 5 h. According to the SEM images, a large number of chain-like phases (yellow arrows) precipitated in the upper layer of the matrix during annealing, most of which gathered at the recrystallized grain boundaries parallel to the ED [36]. The energy spectrum analysis is shown in table 2: the main components of the precipitates are the Mg matrix and the rare earth elements Gd and Y. The number of chain-phase precipitates at 430 °C is significantly greater than at 420 °C; that is, recrystallization is accompanied by phase precipitation, and the precipitation rate and amount of the chain phase increase with temperature. In the 500× magnification of the red frame, a large number of block phases are found to precipitate in the upper layer of the lamellar LPSO phase, in addition to the chain-like granular precipitates formed on annealing. Table 2 also gives the energy spectrum analysis of the block phase; its composition is mainly Y and Gd, so the block phase is presumed to be a RE-rich phase. Effect of annealing time on recrystallization behavior According to the annealing pre-experiments, the recrystallization effect is relatively optimal at annealing temperatures of 420 °C and 430 °C within the 350 °C-450 °C range. Figure 5 shows the OM microstructures after annealing at 420 °C and 430 °C for 8 h and 10 h.
By comparison with figures 3(b) and (c) (annealing at 420 °C and 430 °C for 5 h), it is found that with increasing annealing time the size and number of coarse deformed grains gradually decrease and they are replaced by finer statically recrystallized grains. The recrystallized volume fraction at 420 °C for 8 h is higher than that at 5 h, and the large block-shaped deformed grains are reduced. After annealing at 430 °C for 8 h, the original deformed grains have basically disappeared, but the grain size has increased significantly. In addition, comparing 430 °C × 8 h with 420 °C × 8 h, shown in figures 5(a) and (b), the effect of temperature on the recrystallization behavior is much greater than that of time. Longer annealing can promote the uniformity of the recrystallized grains when recrystallization is incomplete, but further increasing the annealing time only makes the grains grow abnormally once recrystallization is relatively complete. At 10 h, precipitation behaviors of different degrees were observed at both temperatures. At 420 °C, mostly fine lamellar LPSO phases (blue arrows) and a small number of deformed grains were mixed among the recrystallized grains. At 430 °C, only a few narrow recrystallized grain strips (black arrows) remained untransformed, and a large number of chain phases had precipitated on the recrystallized grain strips. This shows that a larger recrystallized volume fraction is not obtained by annealing beyond a certain time; the precipitation may even hinder further recrystallization. From the magnified images after annealing at 420 °C and 430 °C for 10 h, a large number of short lamellar phases with different tilt angles, precipitated on or at the boundaries of the recrystallized grains, can be observed under OM at both temperatures. It is presumed that this phase precipitation occurs after recrystallization is complete and the growth of the recrystallized grains has reached its limit under continued heating [37,38]. By comparing the OM microstructures of the alloy at the different annealing temperatures and times, it is concluded that the optimal annealing condition is 430 °C × 5 h. Under this condition, the recrystallized volume fraction of the alloy is large, the grain refinement effect is good, the size distribution is uniform, and there are no abnormally grown grains [39,40]. The grain size is relatively uniform at 420 °C × 8 h, but slightly larger than at 430 °C × 5 h. A lower annealing temperature and shorter annealing time cannot complete the recrystallization, while a higher temperature and longer annealing time lead to abnormal grain growth. Analysis of EBSD The microstructure of the alloy was characterized by EBSD. Figure 6 shows the grain orientation maps and grain size charts of the alloy in different states. Figure 6(a) is the overall orientation image of the initial alloy, which is composed of elongated grains distributed along the extrusion direction. The orientation within a single grain is uniform, and neighboring grains have similar orientations. A large number of low-angle grain boundaries are distributed at the grain boundaries, as shown in the white part of figure 6(a).
During the extrusion process, the grains are elongated along the extrusion direction, and a large number of fine dynamically recrystallized grains are distributed at the grain boundaries. The grain size shows a bimodal distribution: as shown in figure 6(c), the percentages of small grains and of large grains are both high, producing two peaks in the distribution. The orientation image of the small grains in the initial state is shown in figure 6(b); the volume fraction of small grains is 34.3%. Figures 6(d) and (e) are the orientation images of all grains and of the fine grains after annealing at 420 °C for 5 h. Grains with a size of less than 10 μm are defined as recrystallized grains [41,42], and the recrystallized volume fraction reaches 49.4%. However, because of the relatively low temperature and short heating time, the stored distortion energy in the crystal is insufficient, and some large deformed grains remain unrecrystallized. There is still a high dislocation density at the boundaries of the deformed grains, and low-angle grain boundaries remain dense within them. Figure 6(f) is the grain size distribution after annealing at 420 °C for 5 h. Due to the occurrence of recovery, some medium-sized grains have undergone static recrystallization, and the bimodal structure is more obvious. Figures 6(g), (h) and (j), (k) are the orientation images of all grains and of the recrystallized grains after annealing at 420 °C for 8 h and 430 °C for 5 h, respectively. It can be seen that with increasing annealing time and temperature, the recrystallized volume fraction increases to more than 60%, and the large deformed grains almost disappear. After annealing at 430 °C for 8 h, the deformed grains are basically transformed into small recrystallized grains; few deformed grains remain, and the low-angle grain boundaries have basically disappeared [43]. However, some grains exhibit abnormal growth. The grain size distribution changes from the initial bimodal structure to one dominated by small grains, and the recrystallized volume fraction reaches 70%. Comparing the several states, the grain size distribution of the alloy at 430 °C × 5 h is the most uniform, the recrystallized volume fraction is high, and there is no abnormal growth. Discussion In this paper, the effects of temperature and time on the recrystallization behavior during annealing are mainly studied. In the previous section, we mainly analyzed the changes of grain size and precipitated phases during annealing. However, the property changes caused by these microstructural changes are not yet clear. Texture is a key factor affecting the mechanical properties of wrought magnesium alloys [44]. In hexagonal close-packed metals, preferential growth of basal-oriented grains has been reported in magnesium alloys, and this preferential growth is believed to be the main reason for the formation of basal texture [43]. Figure 7 shows the basal-plane PF diagrams under different annealing conditions. The alloy has an obvious extrusion texture, and the peak texture intensity reaches 11.56 after aging. With increasing annealing time, the texture spreads toward the extrusion direction, and the maximum basal texture intensity decreases. The results show that with annealing, the basal texture is weakened and the HCP lattice deflects in all directions. At 430 °C × 5 h, the texture shows a relatively uniform distribution, and the peak texture intensity is reduced to 3.31. This may be due to texture weakening caused by grain refinement during annealing [45].
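Incidentally, the recrystallized volume fractions quoted in the EBSD analysis above (34.3%, 49.4%, more than 60%, 70%) amount to an area-weighted count of grains against the 10 μm size criterion. A minimal Python sketch of that bookkeeping, with synthetic placeholder grain sizes standing in for a measured EBSD grain list (the function name and data are illustrative, not from the paper):

import numpy as np

def recrystallized_area_fraction(diameters_um, threshold_um=10.0):
    """Area-weighted fraction of grains below the size threshold.

    Grains are idealized as circles, so each grain is weighted by d**2
    (the pi/4 prefactor cancels); the 10 um cutoff follows the text.
    """
    d = np.asarray(diameters_um, dtype=float)
    areas = d ** 2
    return areas[d < threshold_um].sum() / areas.sum()

# Placeholder bimodal data: fine recrystallized grains plus coarse deformed grains.
rng = np.random.default_rng(1)
d = np.concatenate([rng.lognormal(np.log(3.0), 0.4, 900),
                    rng.lognormal(np.log(35.0), 0.3, 80)])
print(f"Recrystallized fraction: {recrystallized_area_fraction(d):.1%}")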
Figure 8 describes the occurrence of static recrystallization during annealing. Figure 8(a) shows that the original structure consists mainly of the lamellar LPSO phase and the original deformed grains. After low-temperature or short-time annealing, the lamellar LPSO phase begins to break up, and static recrystallized grains first appear at the grain boundaries, where the residual stress is higher [46]. As the annealing temperature or time increases, the recrystallization at the grain boundaries gradually spreads toward the middle of the deformed grains, and static recrystallization occurs at the broken lamellar LPSO phase, as shown in figure 8(c). As annealing continues and enough energy accumulates, static recrystallization gradually completes. This is similar to the findings of S H Lu [17]. Figure 9(a) shows the hardness curves of the alloy after primary aging and annealing at 420 °C and 430 °C for different times. Figure 10 shows the standard deviation and coefficient of variation of the two aging states before and after annealing [47]. As shown in figure 9, the hardness of the alloy in the aged state is higher, and the hardness decreases significantly after annealing. The hardness changes at the two temperatures are similar: the hardness of the alloy decreased continuously over the first five hours and then increased from 5 h to 8 h. According to the microstructure analysis, the hardness increase caused by grain refinement after 5 h outweighed the softening caused by annealing, while the abnormal growth of recrystallized grains beyond 8 h degraded the properties and hardness of the alloy [48,49]. Figure 9(b) shows the hardness curves of the alloy annealed at 420 °C and 430 °C and then subjected to secondary aging. It can be seen that the hardness of the alloy is improved after annealing for 5 h followed by secondary aging, and begins to decrease beyond 5 h, consistent with the hardness trend during annealing. Comparing the two figures, the hardness of the alloy at the two temperatures is similar, and the static recrystallization produced by annealing can improve the mechanical properties of the alloy [24]. After secondary aging, the hardness at 430 °C × 5 h is slightly higher than that at 420 °C × 5 h, in line with the grain refinement effect at 430 °C × 5 h shown above. Conclusion In this paper, the effects of static recrystallization of Mg-Gd-Y-Zn-Zr alloy annealed at different temperatures and times on the microstructure and hardness of the alloy were systematically discussed. The main conclusions are as follows: 1. The RE Mg alloy extruded with a large extrusion ratio was annealed. When the annealing temperature was 430 °C and the annealing time was 5 h, the recrystallized grains were fine and uniform, and the volume fraction of recrystallized grains was large. 2. A large number of blocky and granular phases precipitate during annealing of the RE Mg alloy, and within a certain range the hardness of the alloy can be significantly improved. 3. The static recrystallization produced by annealing weakens the texture, and the texture component parallel to the ED is deflected toward directions perpendicular to the ED.
2021-11-18T16:16:11.528Z
2021-11-15T00:00:00.000
{ "year": 2021, "sha1": "1e2812446ad5c60f3c5ad36a8d3d75f772fe896d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/2053-1591/ac39c1", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "e04daadceebb8dfa1bc5e20a4abec59108d6b908", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
123322338
pes2o/s2orc
v3-fos-license
Upper bound of the time-space non-commutative parameter from gravitational quantum well experiment Starting from a field-theoretic description of the gravitational well problem in a canonical non-commutative spacetime, we have studied the effect of time-space non-commutativity in the gravitational well scenario. The corresponding first-quantized theory reveals a spectrum with a leading-order perturbation term of non-commutative origin. GRANIT experimental data are used to estimate an upper bound on the time-space non-commutative parameter. Introduction Non-commutative (NC) spacetime [1], where the coordinates x̂^μ satisfy a non-commutative algebra, has regained prominence in the recent past, and field theories defined over NC spacetime are currently a subject of very intense research [2,3]. Various gauge theories including gravity are being studied in an NC perspective, both formally [4,5,6,7,8,9,10] and phenomenologically [11,12,13,14]. Part of this endeavour is to determine the order of the NC parameter and to explore its connection with observations [15,16,17,18,19,20,21]. In particular, GRANIT experimental data [22], which show the quantum states of neutrons trapped in the earth's gravitational field, have been used to set an upper bound on the momentum-space NC parameters [23,24] by analyzing the gravitational well problem within NC quantum mechanics, where non-commutativity is introduced among the phase-space variables at the Hamiltonian level. In such analyses, non-commutativity in the time-space sector, i.e., θ^{0i} ≠ 0, is not accounted for. Time-space non-commutativity poses certain difficulties regarding unitarity and causality [25,26,27], which can be avoided by a perturbative approach [28,29,30,31]. Therefore, a search for any possible upper bound on the time-space NC parameter is very much desirable. In this paper, we have studied the effect of time-space NC (if any) on the spectrum of a cold neutron trapped in a gravitational quantum well, starting from an NC Schrödinger field theory. The NC Schrödinger action To model a non-relativistic particle in a constant background gravitational field in NC spacetime, we start with the NC Schrödinger action in the deformed phase space, where the ordinary product is replaced by the star product [31,32,33]. Here, the fields are defined as functions of the phase-space variables, and the redefined (Moyal star) product of two fields φ̂(x) and ψ̂(x) is (φ̂ ⋆ ψ̂)(x) = φ̂(x) exp[(i/2) Θ^{μν} ∂⃖_μ ∂⃗_ν] ψ̂(x). The action Ŝ for the system in the vertical x-y plane (i = 1, 2), with the gravitational background along the x-direction, is written accordingly. Under this composition, the Moyal bracket between the coordinates is [x̂^μ, x̂^ν] = iΘ^{μν}, with non-trivial time-space and space-space components. Since the effect of non-commutativity is expected to be small, we have expanded the star product and kept only the first-order correction terms. A physically irrelevant re-scaling¹ of the field variable, together with a re-definition of the observable mass and of the partial derivative ∂_y, then yields the final effective NC Schrödinger action, which gives the equation of motion for the field ψ̂(x). Reduction to first quantized theory In a field-theoretic setting, we have imposed non-commutativity and found that the only non-trivial change in the Schrödinger equation indeed originates from time-space non-commutativity. Specifically, it shows up only in the direction of the external gravitational field g = −g e_x.
Since the first- and second-quantized formalisms are equivalent as far as Galilean systems are concerned, we can hereafter reinterpret ψ̂, the basic field, as a wave function and carry out an equivalent NC quantum-mechanical analysis. From Eq. (5), we easily read off the Hamiltonian of Eq. (6). The last term in Eq. (6) represents a perturbation H₁ of the gravitational quantum well problem described by H₀, which we now briefly review. Ordinary gravitational quantum well The first two terms in Eq. (6) describe the quantum states of a particle of mass m̃ trapped in a gravitational well. Since the particle is free to move in the y-direction, its energy spectrum is continuous along y, and the corresponding wave function is a superposition of plane waves, ψ(y) = ∫_{−∞}^{+∞} g(k) e^{iky} dk, where g(k) determines the shape of the wave packet in phase space. The analytical solution in the x-direction is well known.¹ The eigenfunctions can be expressed in terms of the Airy function φ(z) as ψ_n(x) = A_n φ(z), with eigenvalues determined by the roots α_n (n = 1, 2, ...) of the Airy function as E_n = −α_n (m̃g²ħ²/2)^{1/3} (7). The dimensionless variable z is related to the height x by z = (2m̃²g/ħ²)^{1/3} (x − E_n/(m̃g)). The normalization factor A_n for the n-th eigenstate is given by Eq. (8). The wave function of a particle with energy E_n oscillates below the classically allowed height x_n = E_n/(m̃g), and above x_n it decays exponentially. This was realized experimentally by Nesvizhevsky et al [22] by letting cold neutrons flow with horizontal velocity 6.5 m s⁻¹ through a horizontal slit formed between a mirror below and an absorber above. The number of transmitted neutrons as a function of absorber height was recorded, and the classical dependence was observed to change into a stepwise quantum-mechanical dependence at small absorber heights. The experimentally found value of the classical height for the first quantum state is x₁^exp = 12.2 ± 1.8 (syst.) ± 0.7 (stat.) μm, and the corresponding theoretical value determined from Eq. (7) with α₁ = −2.338 is x₁ = 13.7 μm. This value is contained within the error bars, which allow for a maximum absolute shift of the first energy level with respect to the predicted value. The values of the constants used in these calculations are ħ = 10.59 × 10⁻³⁵ J s, g = 9.81 m s⁻², and m̃ = 167.32 × 10⁻²⁹ kg. [Footnote 1: Such re-scalings are only viable in a region of spacetime where the variation of the external field is negligible. Since the results we have derived are to be compared with the outcome of a laboratory-based experiment, we can safely assume a constant external gravitational field throughout.] Analysis of the perturbed energy spectrum Returning to the effective NCQM theory, we now analyze the perturbed system of Eq. (6). The perturbative potential H₁ = (η m̃²g²/ħ) x is a direct manifestation of time-space non-commutativity. Following [23], we work out an upper bound on the time-space NC parameter by demanding that the correction to the energy spectrum be smaller than or equal to the maximum energy shift allowed by the experiment [22]. The theoretical value of the leading-order energy shift of the first quantum state was worked out numerically: it is just the expectation value of the perturbation potential, involving the integral I₁ ≡ ∫_{α₁}^{+∞} φ(z) z φ(z) dz. The first unperturbed energy level E₁ is determined from Eq. (7) with α₁ = −2.338, and the normalization factor A₁ is calculated from Eq. (8). The integrals in A_n and I₁ are determined numerically for the first energy level, giving A₁ = 588.109 and I₁ = −0.383213.
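The numbers above, and the first-order shift quoted below, can be cross-checked with a short script. This is a sketch, not the authors' code: it uses the quantum-bouncer eigenvalues of Eq. (7) and, instead of the Airy integral I₁ evaluated in the text, the virial-theorem mean height ⟨x⟩₁ = 2E₁/(3m̃g), which holds exactly for a linear potential:

import numpy as np
from scipy.special import ai_zeros

hbar = 10.59e-35   # J s, as quoted in the text
g = 9.81           # m s^-2
m = 167.32e-29     # kg (the redefined neutron mass m-tilde)

alpha, _, _, _ = ai_zeros(4)                        # zeros of Ai: -2.338, -4.088, ...
E = (m * g**2 * hbar**2 / 2) ** (1 / 3) * (-alpha)  # eigenvalues E_n, J
x = E / (m * g)                                     # classical heights x_n, m
print(f"x_1 = {x[0] * 1e6:.2f} um")                 # ~13.7 um, as quoted above

x_mean = 2 * E[0] / (3 * m * g)        # <x> for the first state (virial theorem)
dE1 = m**2 * g**2 / hbar * x_mean      # coefficient of eta in Delta E_1, J
print(f"Delta E_1 = {dE1:.3e} * eta J")
# ~2.3e-23, consistent with the 2.316e-23 * eta J quoted below.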
The first-order correction to the energy level is ΔE₁ = 2.316 × 10⁻²³ η J. Comparing this with the maximum energy shift allowed by the experiment yields an upper bound on the time-space NC parameter η. Interestingly, the value of the upper bound derived here can be shown [18] to be consistent with the results of [23,24]. Conclusions In this paper, we have obtained an effective NC quantum mechanics (NCQM) for the gravitational well problem, starting from an NC Schrödinger action coupled to an external gravitational field. We have re-interpreted this one-particle field theory as a first-quantized theory and obtained an effective NCQM. The outcome of our calculation shows that the time-space sector of the NC algebra introduces non-trivial NC effects into the energy spectrum of the system. We have demanded that the calculated perturbation of the energy level be less than or equal to the maximum energy shift allowed by the GRANIT experiment [22]. This comparison leads to an upper bound on the time-space NC parameter. However, one should keep in mind that this value is only an upper bound, and not the value of the parameter itself.
2019-04-20T13:11:49.202Z
2014-03-05T00:00:00.000
{ "year": 2014, "sha1": "f36042343bf9caf06efb01577bd37c407684e15f", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/484/1/012071", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "0ac2046e41fbf8c8e73fa97594c9566800691de2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
21658260
pes2o/s2orc
v3-fos-license
Dicyanomethylene Substituted Benzothiazole Squaraines: The Efficiency of Photodynamic Therapy In Vitro and In Vivo The lack of ideal photosensitizers limits the clinical application of photodynamic therapy (PDT). Here we report the PDT efficiency of dicyanomethylene substituted benzothiazole squaraine derivatives. This class of squaraine derivatives possesses strong absorption and long excitation and emission wavelengths (ex/em, 685/720 nm). They show negligible dark toxicity, but can generate singlet oxygen under irradiation, resulting in the apoptosis and necrosis of cells (phototoxicity). Changing the side chains of these compounds greatly influences their albumin-binding rate, cellular uptake and phototoxicity. One of the squaraine derivatives, with two methyl butyrate side chains, shows high PDT efficiency in a mouse subcutaneous xenograft model under irradiation by a 690 nm laser. These results show the great potential of dicyanomethylene substituted benzothiazole squaraines to serve as lead compounds for near-infrared photosensitizers in PDT. Introduction Photodynamic therapy (PDT), as a noninvasive and precisely directed method for cancer treatment, has attracted increasing attention recently. This therapy involves the combination of nontoxic photosensitizers and visible light. Under irradiation with light, photosensitizers in tumor cells produce a variety of reactive oxygen species (ROS). ROS attack intracellular structures (e.g., plasma membrane, mitochondria, lysosomes, and nuclei) through irreversible enzyme inactivation, lipid peroxidation, protein denaturation and crosslinking, as well as other structural changes, which damage the structure and function of cells (Bugaj, 2011; Lucky et al., 2015; Velema et al., 2014). In PDT, the photosensitizer plays a key role. The reported photosensitizers are mainly porphyrin-based derivatives (Lucky et al., 2015; Ormond and Freeman, 2013). Several classes of non-porphyrin derivatives, such as cyanines (Fekrazad et al., 2015; Gluth et al., 2015), xanthenes (Gianotti et al., 2014; Wang et al., 2013; Yao et al., 2014), phenothiazines (Samy et al., 2015; Yu et al., 2015), and anthraquinones (Sharma and Davids, 2012; Theodossiou et al., 2009) were investigated recently. Most of the reported photosensitizers absorb light in the visible region. An ideal photosensitizer should possess strong absorption in the wavelength range of 600-800 nm, where the tissue penetration by light is higher, with good yields of ROS and minimal dark toxicity (toxicity in the absence of light) (Avirah et al., 2012; Lucky et al., 2015; Ormond and Freeman, 2013). Additionally, good photosensitizers should possess favorable biodistribution, such as easy penetration into and retention in target tissue. With these disparate requirements, however, few compounds qualify as ideal photosensitizers. Squaraines belong to a class of cyanine dyes composed of a central electron-deficient four-membered ring core flanked by electron-rich aromatic moieties. They possess good light stability, sharp and intense absorption bands (ε ~ 10⁵ M⁻¹ cm⁻¹) in the red to near-infrared (NIR) region, and high fluorescence quantum yields (Mayerhoffer et al., 2012, 2013; Song et al., 2009; Sreejith et al., 2008; Wang et al., 2010a,b).
These photochemical and photophysical properties make squaraines highly suitable for biological applications, such as fluorescent sensors and fluorescent labels (Jisha et al., 2010; Oswald et al., 1999; Patonay et al., 2004), and photosensitizers (Ramaiah et al., 1997, 2002, 2004; Santos et al., 2004). Although some squaraines have been designed and synthesized as potential photosensitizers for PDT (Avirah et al., 2012), only a few reports describe the photosensitizing effects of these squaraines in vivo (Devi et al., 2008) or in cultured cells (Ramaiah et al., 2002, 2004). Most recently, we synthesized a new dicyanomethylene substituted benzothiazole squaraine derivative (CSTS) (Scheme 1), which possesses a strong absorption band (ε₆₈₅ nm = 1.65 × 10⁵ M⁻¹ cm⁻¹) and a high fluorescence quantum yield (0.47). Because of the increased steric congestion resulting from the dicyanomethylene substitution on the central four-membered ring, CSTS possesses a rigid π-conjugated planar structure (Jin et al., 2014). The dicyanomethylene substitution also results in long ex/em wavelengths for CSTS (ex/em, 685/720 nm). These properties make dicyanomethylene substituted benzothiazole squaraines potential near-infrared photosensitizers for PDT. However, CSTS cannot cross cell membranes, limiting its further application in vivo. In order to investigate the efficiency of dicyanomethylene substituted benzothiazole squaraines for PDT, in this paper we designed and synthesized three CSTS analogues (CSBE, CSME, and CSBM) (Scheme 1) and compared their photosensitizing effects in solution and in cultured cells. Additionally, we tested the PDT efficiency of CSBE in a tumor xenograft mouse model. Cell Lines MCF-7 (breast cancer), PC-3 (prostate cancer) and A549 (non-small cell lung cancer) were bought from the Cell Resource Center of Shanghai Institute for Biological Sciences (Chinese Academy of Sciences, Shanghai); A549T (Taxol-resistant A549 subline) from Shanghai Aiyan Biological Technology Co., Ltd. (Shanghai, China); LoVo (colon cancer), HCT-8 (colon cancer) and K562 (chronic myelogenous leukemia) from the Cell Culture Center of the Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences (Beijing, China). H1299 (non-small cell lung cancer) cells were purchased from ATCC (Manassas, VA). Cells were grown in DMEM (Hyclone) medium containing 10% fetal bovine serum (FBS, Gibco) in a humidified incubator with 5% CO₂ at 37 °C. Instruments ¹H NMR spectra were collected with a Bruker Avance 400 MHz spectrometer using DMSO-d₆ as solvent. ESI mass spectra (MS) were recorded with an LC-MS 2010A (Shimadzu) instrument. High-resolution MALDI-TOF MS were collected with Bruker Daltonics FlexAnalysis. Fluorescence spectra were measured on a Hitachi F-4500 fluorescence spectrofluorometer (Kyoto, Japan). UV-Vis spectra were collected with a UH5300 spectrophotometer (HITACHI, Japan). General Procedures The synthesis route is illustrated in Scheme 2. General Procedure for the Synthesis of Compound 2 Compound 2 was prepared according to a previously reported method (Gromov et al., 1997). A mixture of 2-methylbenzothiazole (1.25 mL, 9.83 mmol), methyl γ-bromobutyrate (0.70 mL, 4.89 mmol), and iodomethane (0.70 mL) was heated for 6 h at 120 °C. After cooling to room temperature, the solution was added dropwise to an excess amount of diethyl ether to form a precipitate.
The product was collected by filtration and then purified on a silica gel column using ethyl acetate/petroleum ether as eluent. General Procedure for the Synthesis of CSBE, CSME, and CSBM Functionalized squaric acid derivative 1 (291 mg, 1 mmol) and compound 2 (2 mmol) were heated to reflux in a 5:5:1 mixture of toluene, n-butanol, and quinoline (20 mL) using a Dean-Stark trap for 8 h. After cooling to room temperature, the solution was added dropwise to an excess amount of diethyl ether to form a precipitate. The residue was obtained after centrifugation and then purified on a silica gel column using DCM/methanol as eluent to afford the crude product. After drying under reduced pressure, the final compound was obtained. Measurement of Singlet Oxygen The singlet oxygen (¹O₂) generation from the squaraines upon irradiation was studied using 1,3-diphenylisobenzofuran (DPBF) as the ¹O₂ indicator (Yuan et al., 2015; Zimcik et al., 2009). DPBF reacts irreversibly with ¹O₂, which causes a decrease in the intensity of the DPBF absorption band at 418 nm. In a typical experiment, equal volumes of DPBF in DMSO (Abs₄₁₈ nm = 1) and squaraine in DMSO (Abs₆₉₀ nm = 0.5) were mixed; the solutions were irradiated with a 690 nm laser at 6 mW/cm², and their absorbance at 418 nm was recorded with a UH5300 spectrophotometer after irradiation for 0, 5, 10, 15, 20, 25, 30, 35, and 40 s. Confocal Imaging Cells were seeded in a confocal dish (35 × 12 mm, Φ20 mm glass bottom) and grown for 24 h, then incubated with fresh medium (1 mL) containing a squaraine (0.5 μM) and Lyso-Tracker Blue or Mito-Tracker Green for 1 h. After being washed with PBS (pH 7.4) three times, cells were imaged under a confocal microscope (OLYMPUS FV1000-IX81, Olympus Corporation, Japan) using a 100× objective lens. The fluorescence of the squaraines was excited at 635 nm and collected within 650-750 nm. The fluorescence of Lyso-Tracker Blue and of Mito-Tracker Green was excited at 405 and 488 nm, respectively. Photo-/Dark-Cytotoxicity The dark cytotoxicity of these squaraines was tested on seven cancer cell lines (PC-3, MCF-7, HCT-8, A549, A549T, K562, and LoVo) using the Cell Counting Kit-8 (CCK-8) kit (Dojindo Laboratories, Kumamoto, Japan). Briefly, cells (5 × 10³/well) were seeded into 96-well plates and grown for 18 h, then treated with the compounds at different concentrations and incubated for a further 48 h. Finally, the media were replaced with fresh media (100 μL, without FBS) containing 10 μL of CCK-8 reagent and incubated for 1 h. The absorbance at 450 nm was collected on a plate reader (SpectraMax M5, Molecular Devices, CA, USA). (Scheme 2. Synthetic route of CSBE, CSME, and CSBM.) The cell survival percentage (SP) was calculated according to Eq. (1), SP(%) = (A − A₀)/(A_s − A₀) × 100, where A, A_s and A₀ are the absorbances of the experimental group, the control group and the blank group (no cells), respectively. For photo-cytotoxicity, seven cancer cell lines (MCF-7, PC-3, H1299, A549, A549T, K562, and LoVo) were used. Briefly, cells (1 × 10⁴ cells/well) grown in 96-well cell culture plates were incubated for 24 h with compound concentrations from 2 to 10 μM. The plates were then washed twice with warm PBS containing 2.5% FBS and incubated for 30 min. The cells were irradiated in a dark room with a 690 nm diode laser at 150 mW/cm² to achieve a total light dose of 3.6 J/cm². The plates were then further incubated at 37 °C for an additional 24 h, and the cell viability was measured with the CCK-8 kit as described above.
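In code, Eq. (1) is a one-liner. The following Python sketch illustrates the computation; the absorbance values are hypothetical, not from the paper:

def survival_percentage(a_exp, a_ctrl, a_blank):
    # SP(%) = (A - A0) / (As - A0) * 100, per Eq. (1):
    # A = experimental well, As = control well, A0 = blank (no cells).
    return (a_exp - a_blank) / (a_ctrl - a_blank) * 100.0

print(f"SP = {survival_percentage(0.95, 1.20, 0.10):.1f} %")  # -> SP = 77.3 %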
Cell Apoptosis Assay The apoptosis of MCF-7 cells after treatment with CSBE was assessed by flow cytometry with an Annexin V-FITC Apoptosis Kit according to the vendor's instructions. This method is based on detecting differences in phosphatidylserine exposure on cell membranes using Annexin V and detecting membrane damage with propidium iodide (PI). Briefly, MCF-7 cells treated with CSBE (10 μM) for 24 h were irradiated with a total light dose of 3.6 J/cm² and further cultured for 0, 1, 3, 6, 12, and 24 h. Cells were then double-stained with 200 μL of labeling solution containing FITC-labeled Annexin V and PI for 15 min at room temperature. Subsequently, the samples were analyzed with a BD FACSCalibur with Cell Quest software (BD Bioscience, San Jose, CA) by counting 1 × 10⁴ events with an excitation wavelength of 488 nm. NIR Fluorescence Imaging In Vivo and Tissue Biodistribution Athymic nude mice were obtained from Shanghai SLRC Laboratory Animal Co., Ltd. and maintained in a controlled environment with free access to standard food and water. Prior to initiation of the experiments, mice were acclimatized to husbandry conditions for 1 week to eliminate stress. All experiments were performed in strict compliance with the guide for the care and use of laboratory animals of Hunan University Laboratory Animal Center. The Hunan University Animal Study Committee approved the experiments. Xenografted tumors were established by subcutaneous injection of 5 × 10⁶ H1299 cells into the right shoulder of female nude mice (4-5 weeks old). Tumor size was measured daily using calipers, and tumors were allowed to grow to 5-7 mm in diameter. The tumor-bearing nude mice were injected intraperitoneally or intratumorally (at the bottom of the tumor) with 50 μL of CSBE (3.25 mg/mL, 5 mM), or injected via the tail vein with 50 μL of CSBE (0.325 mg/mL, 0.5 mM). The mice were sacrificed at the indicated times, and the major organs, including the tumor, liver, heart, lung, spleen, and kidney, were dissected and imaged with an IVIS Lumina XR (Caliper Life Sciences) with an excitation band-pass filter at 675/30 nm and emission at 740/80 nm. A tumor-bearing mouse intratumorally injected with CSBE was anesthetized and imaged at pre-injection and at 10 min, 1, 2, 6, 12, 24, 48, and 96 h post-injection with the imaging system described above. Photodynamic Treatment In Vivo The H1299 cell xenografted tumors were established as described above. When the tumors had grown to an average size of 5-6 mm in diameter, the mice were randomly divided into four groups (n = 6) and administered by intratumoral injection (50 μL at the bottom of the tumor) with a) PBS, b) PBS + light, c) CSBE (3.25 mg/mL, 5 mM in PBS), or d) CSBE + light. Tumors in the mice of groups b and d were irradiated with an NIR laser (690 nm, 500 mW/cm²) for 15 min at 24 h post-injection. Two days later, the therapeutic agents were injected again, and the mice were again subjected to NIR light irradiation for 15 min at 24 h after this further injection, as above. The body weight of the mice and the tumor size were measured daily. The tumor volume was calculated according to the formula V = 1/2 × (larger diameter) × (smaller diameter)². When the tumor size in the control groups reached 1600 mm³, tumors and major organs were collected from the tumor-bearing mice, embedded in OCT (an embedding medium used for frozen tissue), and frozen at −80 °C.
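The tumor-volume formula above is straightforward to apply. A small Python sketch with hypothetical caliper readings (the numbers are illustrative, not measured values):

def tumor_volume_mm3(larger_mm, smaller_mm):
    # V = 1/2 * (larger diameter) * (smaller diameter)^2
    return 0.5 * larger_mm * smaller_mm ** 2

print(tumor_volume_mm3(6.0, 5.0))    # 75.0 mm^3, near the size at enrollment
print(tumor_volume_mm3(16.0, 14.1))  # ~1590 mm^3, near the 1600 mm^3 endpoint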
Histological examination was performed using a Pannoramic MIDI (3DHistech Ltd., Budapest, Hungary) after staining the tissue sections with hematoxylin and eosin (H&E). Statistical Analysis All data are presented as mean ± SD. The statistical analysis was performed by one-way analysis of variance (ANOVA). Statistical significance was accepted at the level of p < 0.05. Design and Synthesis of CSBE, CSME, and CSBM Previously, we synthesized a new squaraine derivative, CSTS, which possesses a crescent-shaped π-conjugated planar core (Jin et al., 2014). The photophysical properties of CSTS suggest that it holds great promise as a photosensitizer in PDT. However, our recent study indicated that CSTS cannot enter cells, which limits its further application in PDT. CSTS carries two sulfonic groups, which may hinder its penetration through the negatively charged cell membrane. Because the photophysical properties of CSTS are mainly contributed by the dicyanomethylene substituted benzothiazole squaraine core, it is possible to obtain new dicyanomethylene substituted squaraines for PDT investigation by changing the side chains of CSTS. In order to improve the cellular uptake, three CSTS analogues (CSBE, CSME, and CSBM) were designed and synthesized by displacing the negatively charged sulfonic acid groups with electrically neutral methyl or methyl butanoate groups, and the PDT effects of these compounds were investigated. Absorption and Emission Spectra of CSBE, CSME, and CSBM Cyanine dyes usually self-aggregate in aqueous solution via van der Waals forces and π-π stacking interactions. CSTS was reported to form mainly H-aggregates in aqueous solution (Jin et al., 2014). Thus, the absorption and emission spectra of CSBE, CSME, and CSBM were measured in mixed solutions of water and DMSO (Fig. 1). These three compounds showed a very broad, weak absorption band around 600 nm in water, suggesting aggregation. With increasing DMSO content (V_DMSO/V_H2O from 0/10 to 10/0), the absorption band of these compounds increased, red-shifted, and finally became a sharp band around 670-690 nm when the DMSO content reached 60% (CSBE), 70% (CSME), or 80% (CSBM) (see Fig. 1a), suggesting a transformation of these compounds from the H-aggregate form to the monomer form. The emission spectra showed almost no fluorescence from these compounds at low DMSO content, and then a fluorescence band (710-745 nm) increasing with DMSO content (see Fig. 1b), which corresponded well to the increase of the absorption band around 670-690 nm. These results indicate that these compounds have the same absorption and fluorescence characteristics as CSTS. Based on the DMSO content at which the absorption and emission changed, the aggregation tendency of these compounds was CSBE < CSME < CSBM. Binding of CSTS, CSBE, CSME, and CSBM to HSA Binding to plasma proteins can significantly influence the therapeutic, pharmacodynamic, and toxicological action of drugs, such as their absorption, distribution, cellular uptake, and clearance properties. It is generally accepted that only the free drug in plasma is available to elicit a pharmacological effect. Because human serum albumin (HSA) is the most abundant protein in human blood plasma, the interaction of CSBE, CSME, CSBM, and CSTS with HSA was investigated. Fig. 2 shows the absorption and emission spectra of these compounds in the presence of different concentrations of HSA.
In the absence of HSA, the absorption spectrum of CSTS in PBS shows a broad band with a peak around 600 nm and a shoulder around 690 nm, while CSBE, CSME, and CSBM show a very broad band with a maximum around 600 nm (broken line), suggesting heavier aggregation of these compounds than of CSTS. The addition of HSA caused a remarkable spectral change only for CSTS (Fig. 2): the absorption peak red-shifted to around 690 nm and increased with rising HSA concentration, whereas the addition of HSA did not affect the absorption spectra of CSBE, CSME, and CSBM. Furthermore, the addition of HSA did not change the fluorescence spectra of CSBE, CSME, and CSBM, but greatly enhanced the fluorescence of CSTS. These results suggest that only CSTS bound to HSA and was located in a hydrophobic environment, which may be due to its negatively charged sulfonate groups. The negligible HSA binding of CSBE, CSME, and CSBM implies their potential to act as photosensitizers in PDT. Singlet Oxygen Measurements The generation of ¹O₂ upon irradiation of a photosensitizer with light plays a key role in photodynamic therapy. The ¹O₂ generation from these four squaraines upon irradiation was measured using 1,3-diphenylisobenzofuran (DPBF) as the ¹O₂ indicator (Yuan et al., 2015). In the absence of these compounds, the absorbance of DPBF at 418 nm decreased only slightly, by less than 10%, after irradiation under a 690 nm laser beam for 40 s. However, in the presence of these four compounds, the DPBF absorbance decreased with increasing irradiation time, by 65% (CSTS) and 75% (CSBE, CSME, and CSBM) after irradiation for 40 s (Fig. 3). Furthermore, the addition of the ¹O₂ scavenger vitamin C almost totally inhibited the decrease of the DPBF absorbance. These results suggest that these compounds can generate ¹O₂ under irradiation at 690 nm. Cellular Uptake and Intracellular Localization of CSBE, CSME, CSBM, and CSTS The cellular uptake and subcellular localization of photosensitizers play a key role in PDT (Noodt et al., 1999). In order to evaluate the feasibility of these compounds as photosensitizers, we compared their cellular uptake and localization. Confocal imaging showed that CSTS could not enter living cells, while the other three compounds could. CSBE and CSME, with one and two methyl butanoate side chains, respectively, exhibited higher cellular uptake than CSBM based on the fluorescence intensity in cells. Co-staining with Mito-Tracker Green (mitochondria probe) and Lyso-Tracker Blue (lysosome probe) showed that these three dyes were mainly located in lysosomes (Fig. 4). Flow cytometry further showed that the order of cellular uptake of these compounds was CSBE > CSME > CSBM > CSTS (Fig. S1). The low uptake of CSTS may be due to its negatively charged sulfonate groups, which do not facilitate penetration through cell membranes. These results suggest that the side arms on squaraines greatly influence their cellular uptake. Dark- and Photo-Cytotoxicity The dark and photo-cytotoxicity of these compounds were assessed by the Cell Counting Kit-8 (CCK-8) assay. For the dark-cytotoxicity assay, MCF-7, A549, A549T, PC-3, HCT-8, K562, and LoVo cells were incubated with different concentrations (2 to 100 μM) of these compounds for 48 h without irradiation. The survival of all these cells was >95% even when the concentration of these compounds was as high as 100 μM (Fig. 5b), indicating the low dark cytotoxicity of these compounds.
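Returning to the DPBF bleaching data above: the percentage decrease of the 418 nm absorbance, and an apparent first-order bleaching rate for comparing photosensitizers, can be extracted as in the following sketch. The absorbance values here are placeholders shaped to mimic the reported ~75% decrease, not the measured data:

import numpy as np

t_s = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40])
abs_418 = np.array([1.00, 0.88, 0.77, 0.66, 0.57, 0.48, 0.41, 0.33, 0.26])

decrease_pct = (abs_418[0] - abs_418) / abs_418[0] * 100
print(f"Decrease after 40 s: {decrease_pct[-1]:.0f} %")  # ~74 %, cf. ~75% reported

# A first-order fit, ln(A/A0) = -k*t, gives an apparent bleaching rate
# that can be compared between photosensitizers.
k = -np.polyfit(t_s, np.log(abs_418 / abs_418[0]), 1)[0]
print(f"apparent rate k = {k:.3f} s^-1")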
The photo-cytotoxicity assay was first performed on MCF-7 cells. MCF-7 cells were incubated with different concentrations of these compounds for 24 h and then irradiated under a 690 nm laser beam to achieve a total light dose of 3.6 J/cm²; cell viability was measured after further incubation for 24 h. As shown in Fig. 5a, CSTS did not show significant photo-cytotoxicity, while CSBE, CSME, and CSBM showed dose-dependent photo-cytotoxicity. CSBE showed the strongest photo-cytotoxicity, with an IC₅₀ value of 1.135 μM. The order of the photo-cytotoxicities of these compounds was CSBE > CSME > CSBM > CSTS, consistent with their cellular uptake. Further photo-cytotoxicity assays of CSBE and CSME on the other six cell lines showed results similar to those on MCF-7 (Fig. S2). The negligible dark cytotoxicity but remarkable photo-cytotoxicity of CSBE, together with its activation by a long-wavelength laser beam, gives it significant photochemotherapeutic potential. Cell Apoptosis and Necrosis Induced by CSBE After Irradiation To further investigate the possible photo-cytotoxicity mechanism of the dicyanomethylene substituted benzothiazole squaraines, the Annexin V-fluorescein/PI assay was performed to explore the apoptotic profiles of MCF-7 cells after treatment with CSBE and irradiation. As shown in Fig. 6, most cells were alive immediately after the irradiation treatment (the 0 h result). However, with further culture of the treated cells, the numbers of early apoptotic cells (lower right quadrants, Annexin V⁺ and PI⁻) and of late apoptotic/dead cells (upper right quadrants, Annexin V⁺ and PI⁺) increased with culture time. After culturing for 24 h, 70% of the cells were apoptotic or dead. This set of results suggests that treatment with CSBE and irradiation did not cause cell damage immediately; rather, the attack by ¹O₂ generated by photo-activated CSBE caused the apoptosis and necrosis of the cells. In Vivo Study The excellent photodynamic effect of CSBE in vitro motivated us to investigate its photodynamic therapy in vivo. To reveal the in vivo distribution, we injected mice intraperitoneally, via the tail vein, or intratumorally with CSBE and examined the ex vivo fluorescence distribution in the organs. After tail vein or intraperitoneal injection, the fluorescence of CSBE was observed in tumor, liver, kidney, lung, and spleen tissues but not in heart tissue (Fig. S3a). After intratumoral injection, the fluorescence of CSBE was also found in tumor, liver, kidney, lung, and spleen tissues, suggesting that CSBE not only reaches tumors but also other organs in vivo. With intratumoral injection (at the bottom of the tumor), strong fluorescence in the tumor was observed at ten minutes; the fluorescence reached its maximum at 1 h, lasted to 6 h, and then gradually decreased up to 72 h (Fig. S3b). To ensure the same amount of CSBE in the tumors, we used intratumoral injection for the subsequent photodynamic therapy. When the tumors had grown to 5-6 mm in diameter, 24 mice were randomly divided into four groups (6 mice/group) and treated with: a) PBS, b) PBS + light, c) CSBE, or d) CSBE + light. The mice in control groups (a) and (c) were not irradiated, while the mice in control group (b) and experimental group (d) were exposed to a 690 nm laser (500 mW/cm²) as described in section 2.11 (Fig. 7a). Body weights were recorded and tumor sizes measured daily after the above treatments.
The mean body weights of all the groups did not change much (~20 g) within 14 days (Fig. S4). In the PBS, PBS + light, and CSBE groups, the tumor sizes of all mice increased significantly, reaching ~1.6 cm³ within 14 days after treatment (Fig. S5). The tumor volumes of all mice in these groups changed as a function of time (Fig. 7b and c) and showed indistinguishable growth rates among the groups, suggesting that laser irradiation alone or CSBE injection alone did not affect tumor development. However, in all 6 mice of the CSBE + light group, tumor sizes were gradually reduced compared with those of the PBS group (p < 0.05). These results demonstrate the innocuity and the excellent photodynamic effect of CSBE in vivo. In addition, histological examination of tumor, liver, and kidney slices was performed after treatment for 14 days. Compared with the tumor sections from the three control groups, cell damage was observed in the tumors receiving both CSBE injection and laser irradiation (Fig. 7d). (Fig. 6. Apoptosis and necrosis of MCF-7 cells after treatment with CSBE and irradiation. Viable cells are Annexin V⁻/PI⁻. Annexin V⁺/PI⁻ cells are early in the apoptosis process, whereas Annexin V⁺/PI⁺ cells represent late-apoptotic or necrotic cells.) The treatment in the CSBE + light group resulted in severe cellular damage, such as cell lysis, shrinkage, and nuclear condensation. Thus, PDT of cancer combining CSBE injection with 690 nm laser irradiation is highly effective and feasible. It is worth noting that histological examination of the liver and kidney slices did not find any cell injury (Fig. S6), further confirming the innocuity of CSBE. Discussion On the basis of the results of the in vitro study, we confirmed the innocuity and high PDT efficacy of CSBE using a nude mouse xenograft model in vivo. One limitation of CSBE is its lack of tumor targeting. However, the in vitro results showed that the high PDT efficiency of CSBE is mainly determined by its dicyanomethylene substituted benzothiazole squaraine core structure; the side chains on this squaraine core mainly affect the pharmacokinetic properties of its derivatives. Therefore, through modification with different side chains and immobilization of targeting units (e.g., folic acid, gefitinib, Gleevec, RGD peptide, antibodies, and even nanoparticles) on this dicyanomethylene substituted benzothiazole squaraine core, it should be possible to obtain ideal photosensitizers for PDT. In summary, three CSTS analogues (CSBE, CSME, and CSBM) with electrically neutral side chains were synthesized. Compared with CSTS, these compounds exhibited similar optical properties and singlet oxygen yields under irradiation, and much lower HSA binding rates. All these compounds showed very low dark cytotoxicity but different photo-cytotoxicities, in the order CSBE > CSME > CSBM > CSTS, consistent with their cellular uptake efficiencies. In vivo experiments demonstrated that under irradiation with a 690 nm laser, CSBE could totally inhibit tumor growth by damaging cells, suggesting its high efficacy as a photosensitizer in PDT.
Exploitation of multi-walled carbon nanotubes/Cu(ii)-metal organic framework based glassy carbon electrode for the determination of orphenadrine citrate

Metal organic frameworks (MOFs), with structural tunability, high metal content and large surface area, have recently attracted the attention of researchers in the field of electrochemistry. In this work, an unprecedented use of a multi-walled carbon nanotubes (MWCNTs)/copper-based metal–organic framework (Cu-BTC MOF) composite as an ion-to-electron transducer in a potentiometric sensor is proposed for the determination of orphenadrine citrate. A comparative study was conducted between three proposed glassy carbon electrodes: Cu-MOF, MWCNTs and MWCNTs/Cu-MOF composite based sensors, where Cu-MOF, MWCNTs and their composite were utilized as the ion-to-electron transducers. The sensors were developed for accurate and precise determination of orphenadrine citrate in pharmaceutical dosage form, spiked real human plasma and artificial cerebrospinal fluid (ACSF). The sensors employed β-cyclodextrin as a recognition element with the aid of potassium tetrakis(4-chlorophenyl)borate (KTpClPB) as a lipophilic ion exchanger. The sensors were assessed based on the guidelines recommended by IUPAC and demonstrated a linear response within the concentration ranges of 10⁻⁷ M to 10⁻³ M, 10⁻⁶ M to 10⁻² M and 10⁻⁸ M to 10⁻² M for the Cu-MOF, MWCNTs and MWCNTs/Cu-MOF composite based sensors, respectively. The MWCNTs/Cu-MOF composite based sensor showed superior performance over the other sensors regarding lower limit of detection (LOD), wider linearity range and faster response. The sensors demonstrated their potential as effective options for the analysis of orphenadrine citrate in quality control laboratories and in different healthcare activities.

Introduction

The use of solid-contact ion-selective electrodes (SC-ISEs) as wearable sensors has become a topic of significant interest for monitoring human health conditions. These sensors allow for real-time, non-destructive, and non-invasive analysis of ions in biological fluids. By removing the inner filling solution and inner reference electrode, SC-ISEs can be designed with more flexibility and require simpler production processes, making them compatible with modern planar processing technologies. 1 The glassy-carbon electrode is a new-generation solid-contact ion-selective electrode (SC-ISE) that features a layered device architecture. This electrode comprises an electrical contact coated with an ion-to-electron transducer, followed by an ion-selective membrane (ISM). Its primary goal is to deliver efficient analytical performance with a stable and robust design that can be applied for long-term analysis without any deterioration in performance. Any SC-ISE comprises two primary components: the ion recognition element and the transducer layer. The function of the transducer layer is to convert the ionic current to electronic current and to stabilize the potential at the interface between the membrane and the substrate. Meanwhile, the recognition element (e.g., an ionophore) is used to impart selectivity towards a particular ion, which is achieved through various interactions depending on the target's nature (charge and size), the ability to form weak-interaction-based supramolecular (host-guest) assemblies, and/or hydrophobic/hydrophilic forces.
2 A variety of solid-contact functional materials have been introduced into SC-ISEs as ion-to-electron transducers, such as conducting polymers, carbon nanotubes, graphene and, recently, metal–organic frameworks (MOFs). 3 MOFs represent an intriguing class of porous and crystalline materials constructed from the assembly of metal ions and functional organic ligands. 4 They were first studied and investigated in 1965 by Tomic. 5 Their unique properties, such as large surface area, tailored pore size, high stability, and high porosity, make them good candidates for various applications including sensing, 6 gas storage, 7 catalysis, 8 chiral separation, 9 and other interesting analytical applications. 10 The use of MOFs in electrochemical sensing may be restricted by their low electronic conductivity and instability in aqueous solutions. [15][16][17][18] To address the limitations of MOFs, the incorporation of highly conductive materials has been identified as an effective strategy. Carbon-based materials, such as multi-walled carbon nanotubes (MWCNTs), have garnered significant interest in the electrochemical field owing to their remarkable physical and chemical properties, such as excellent electrical conductivity, high stability, and good mechanical strength. 17 The inclusion of MWCNTs in the sensor design not only reduces electrical impedance but also enhances the electrochemical reactivity of analytes when compared to a single metallic environment.

Basolite® C 300, Cu-BTC MOF or copper benzene-1,3,5-tricarboxylate is a member of the MOF family with a rigid crystal structure, space group P1. 19 The chemical and crystal structures of Cu-BTC MOF are shown in Fig. 1. It has a characteristic pyramidal skeleton with prominent edges. The surface area of Cu-BTC MOF is 343.32 m² g⁻¹ and it has significant thermal stability. 20 Carbon nanotubes (CNTs) are a class of sp²-hybridized carbon nanomaterials first discovered by Iijima in 1991. 21 MWCNTs are formed of multiple layers of graphene wrapped concentrically. MWCNTs are characteristically metallic, and electronic transfer occurs along the carbon nanotube, allowing the passage of current with a minimal heating effect. The surface area of MWCNTs is approximately 10-20 m² g⁻¹. 22 They have outstanding physicochemical characteristics, for instance excellent electrical conductivity, high thermal stability and high surface area. 23 Owing to these promising properties, MWCNTs have recently been exploited in many fields, either alone or in nanocomposites, especially in electroanalysis. 24,25

In this work, a comparative study was carried out between three GCEs proposed for the accurate, precise and sensitive determination of orphenadrine citrate (ORPH) in different matrices, including real human plasma samples, pharmaceutical dosage form and ACSF samples. The developed sensors were based on the incorporation of Cu-MOF, MWCNTs and the MWCNTs/Cu-MOF composite as different transducers, with a β-cyclodextrin ionophore as the recognition element and potassium tetrakis(4-chlorophenyl)borate (KTpClPB) as a lipophilic ion exchanger, and were investigated as facile, non-invasive and rapid sensors for monitoring the concentration of orphenadrine citrate without applying time-consuming extraction methods.
To the best of our knowledge, there is no potentiometric method in the literature that has explored the incorporation of either Cu-BTC MOF or MWCNTs/Cu-MOF in a potentiometric sensor for the determination of orphenadrine citrate. Moreover, this is the first time Cu-BTC MOF has been harnessed as an ion-to-electron transducer for the analysis of a pharmaceutical drug in different matrices. The proposed sensors hold promise for the analysis of orphenadrine citrate in real-life applications.

Orphenadrine citrate, or (RS)-(dimethyl-2-(2-methylbenzhydryloxy)ethyl)amine citrate (Fig. 2), is an anticholinergic drug that is commonly used to treat muscle spasm owing to its potent central and peripheral effects. 26 Muscle spasms significantly affect the quality of life of patients suffering from liver cirrhosis. Orphenadrine citrate represents a very effective drug with a prolonged therapeutic effect. 27 It can be used as an analgesic with different co-administered drugs such as paracetamol, ibuprofen and diclofenac potassium. 28,29 Orphenadrine citrate is a member of the centrally acting skeletal muscle relaxants, whose use is limited by somnolence and the potential for abuse and dependency. The drug's effects on the central nervous system (CNS) may include dizziness, confusion, blurred vision, agitation, hallucinations, and headaches. In cases of excessive dosage, significant toxicity may occur, leading to CNS depression, which can manifest as stupor, respiratory depression, coma, and even death. 30 Therefore, an accurate, facile and precise method of analysis is needed for the rapid analysis of orphenadrine citrate in plasma and in cerebrospinal fluid, as the drug can pass the blood–brain barrier, which is very critical in cases of drug abuse. 30

A review of the literature indicates that various techniques have been used for the quantitative analysis of orphenadrine citrate, including potentiometry, [31][32][33] voltammetry, 34 chromatography, [35][36][37][38][39] and spectrophotometry. 40,41 The potentiometric sensors reported in the literature were based on the use of either conventional liquid-contact ISEs or coated wire electrodes (CWs), with the application of a plasticized membrane containing the orphenadrine-tetraphenylborate/reineckate ion-pair complex as the electroactive material. The absence of an intermediate layer between the membrane and the inner electrode in these sensors could result in potential instability owing to the formation of a water layer. Furthermore, the sensors described in the literature were only utilized for detecting orphenadrine citrate in bulk, tablets, and spiked human plasma.

Apparatus

The equipment used included a CLEAN digital ion analyzer PH 600, model 007747 (China), a model 900201 Ag/AgCl double junction reference electrode (Thermo-Orion), and a Heidolph MR Hei-Standard magnetic stirrer, model 100818877.

Standard solutions

A standard solution of orphenadrine citrate (1 × 10⁻¹ M) was freshly prepared by dissolving the necessary quantity of pure orphenadrine citrate in 100 mL of acetate buffer (pH 5). Working solutions of orphenadrine citrate were then generated by diluting the stock solution with acetate buffer (pH 5) to obtain concentrations ranging from 1 × 10⁻² M to 1 × 10⁻¹⁰ M.
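The working-solution preparation above is a routine tenfold serial dilution; as a quick illustration of the C₁V₁ = C₂V₂ arithmetic behind it (the 10 mL step volume below is illustrative, not taken from the procedure):

```python
def dilution_volume_ml(c_stock: float, c_target: float, v_final_ml: float) -> float:
    """C1*V1 = C2*V2: volume of stock needed to reach c_target in v_final_ml."""
    if c_target > c_stock:
        raise ValueError("target concentration exceeds stock concentration")
    return c_target * v_final_ml / c_stock

# Tenfold serial dilution from the 1e-1 M stock down to 1e-10 M,
# assuming (illustratively) a 10 mL final volume at each step.
for k in range(1, 10):
    c_from, c_to = 10.0 ** -k, 10.0 ** -(k + 1)
    v = dilution_volume_ml(c_from, c_to, 10.0)
    print(f"{c_from:.0e} M -> {c_to:.0e} M: take {v:.1f} mL stock, dilute to 10 mL")
```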
Procedures

2.4.1. Preparation of the transducer dispersions. Cu-BTC MOF and MWCNT dispersions were prepared by separately suspending 50 mg of Cu-BTC MOF or carboxylated MWCNTs in 50 mL of N,N-DMF and sonicating for 8 h at 25 °C to obtain homogeneous dispersions. The carboxylation of the pristine MWCNTs was performed as described in detail in our previous work. 42 For the MWCNTs/Cu-MOF composite, 25 mg each of Cu-BTC MOF and carboxylated MWCNTs were dispersed in 50 mL of N,N-DMF and ultrasonicated for 10 h at 25 °C to obtain a homogeneous dispersion.

2.4.2. PVC membrane preparation. The PVC sensing paste was prepared by mixing 0.19 g PVC, 0.4 g NPOE, 50 mg KTpClPB and 50 mg β-CD. The components of the mixture were dissolved in 5 mL THF and mixed thoroughly to obtain a homogeneous paste.

2.4.3. Fabrication of the proposed sensors. The glassy carbon electrodes were polished with alumina slurry, cleaned with ethanol and deionized water, and dried at room temperature. The ion-to-electron transducer layers of Cu-MOF, MWCNTs, and MWCNTs/Cu-MOF composite were prepared by separately drop-casting 5 μL, 7 μL, and 7 μL of each dispersion onto the glassy carbon electrodes. The electrodes were then allowed to dry at room temperature for 4 hours. Once fully dried, 20 μL of PVC paste was drop-cast on each conductive layer, and the electrodes were left to dry overnight at room temperature before being conditioned in a 1 × 10⁻² M orphenadrine citrate aqueous solution for 1 hour prior to measurements.

Sensor calibration

An electrochemical cell was assembled, and the potential of the proposed sensors was measured against an Ag/AgCl double junction reference electrode (Thermo-Orion). Aliquots of about 20 mL of orphenadrine citrate solutions, ranging in concentration from 1 × 10⁻² to 1 × 10⁻¹⁰ M, were transferred into a series of 50 mL beakers. The emf readings were recorded by immersing each of the proposed sensors, together with the reference electrode, in each solution with continuous stirring until a constant potential reading was attained. Calibration graphs were created by plotting the electrode potential readings against the negative logarithm of the orphenadrine citrate concentration. A graphical depiction of the sensor assembly is presented in Fig. 3.

Molecular docking

Molecular docking and visualization were conducted in silico for orphenadrine as the guest and the selected CD ionophore as the host using the Molecular Operating Environment (MOE; 2019.0102). 43 The canonical SMILES of orphenadrine was obtained from the PubChem database (https://pubchem.ncbi.nlm.nih.gov/; accessed on 8 June 2023). The 3D structure of orphenadrine was constructed from its 2D structure and then energy-minimized using the EHT forcefield with a 0.1 kcal mol⁻¹ Å⁻² gradient RMS in MOE. The three-dimensional structure of each selected CD ionophore was extracted from the appropriate protein complex retrieved from the Protein Data Bank (https://www.rcsb.org/; accessed on 8 June 2023). The α-CD was extracted from B. thetaiotaomicron SusE with alpha-cyclodextrin (PDB: 4FEM, 2.50 Å), 44 the β-CD was extracted from B. thetaiotaomicron SusD with beta-cyclodextrin (PDB: 3CK8, 2.10 Å) 45 and the γ-CD was extracted from E. coli branching enzyme with gamma-cyclodextrin (PDB: 5E70, 2.33 Å). 46
In order to prepare the CD-ionophore structures for the docking process, the QuickPrep panel in MOE was utilized. This preparation involved energy minimization, protonation at pH = 5, fixing and tethering atoms, deleting unnecessary water molecules, and initial refinement at a gradient RMS of 0.1 kcal mol⁻¹ Å⁻². Following this, the docking of orphenadrine with the chosen CD ionophores was conducted using alpha-triangle placement with the Amber10:EHT forcefield. The resulting docked structures were then refined using the forcefield and scored using the Affinity dG scoring system.

Pharmaceutical sample analysis

To determine the average weight of one Norex® tablet, twenty tablets were weighed. A precise amount of finely ground tablet powder, equivalent to 0.461 g of orphenadrine citrate, was then transferred into a 100 mL volumetric flask, which was filled to the mark with acetate buffer at pH 5 to prepare a 1 × 10⁻² M stock solution. Appropriate dilutions were made from the prepared stock to obtain different concentrations of orphenadrine samples.

Determination of orphenadrine citrate in real human plasma samples

For plasma sample preparation, 0.5 mL of plasma was spiked with different concentrations of standard orphenadrine citrate, followed by the addition of 0.5 mL of acetonitrile to precipitate the plasma proteins. Following centrifugation at 10 000 rpm for 3 minutes, 0.5 mL of the resulting supernatant was transferred into a 10 mL volumetric flask and diluted with acetate buffer at pH 5 to generate samples of different concentrations. The developed sensors were then employed to determine the orphenadrine citrate concentrations using the corresponding regression equation.

Determination of orphenadrine citrate in ACSF

ACSF solution was prepared according to a previously reported procedure. 47 Two distinct solutions, A and B, were combined to create the prepared solution. Solution (A) was prepared by mixing 738.66 mg of D-(+)-glucose, 7012.8 mg of NaCl, 155.4 mg of CaCl₂, 162.6 mg of MgCl₂·6H₂O, and 337.34 mg of sodium acetate in 1 liter of acidic electrolyte solution with a pH of 3.9. Solution (B) was prepared by mixing 2184.3 mg of NaHCO₃, 223.65 mg of KCl, and 62.4 mg of NaH₂PO₄ in 1 liter of alkaline electrolyte solution with a pH of 8. The two solutions were individually filtered and then mixed in equal parts at 25 °C to form ACSF. Orphenadrine citrate standard solutions of various concentrations were prepared by combining 1 mL of each standard solution with 1 mL of ACSF in a 10 mL volumetric flask. The resulting solutions were then diluted with acetate buffer pH 5 up to the mark to obtain samples with concentrations of 1 × 10⁻³ M, 1 × 10⁻⁴ M, 1 × 10⁻⁵ M, and 1 × 10⁻⁶ M.

Molecular docking

To gain insight into the orphenadrine-cyclodextrin ionophore (guest-host) interactions, molecular docking studies were performed. The docking results showed that orphenadrine fits well within each of the α-CD, β-CD and γ-CD ionophores, developing an inclusion complex with reasonable binding affinity (Table 1). Furthermore, the average molecular diameters of α-CD, β-CD and γ-CD are 8.51, 10.32 and 13.91 Å, respectively, and the average diameter of orphenadrine is 9.10 Å, indicating that β-CD is ideally suited for a snug fit with orphenadrine (Fig. 4).
In terms of molecular interactions, orphenadrine showed an H-bond interaction with the α-CD pocket. With β-CD, the amino group of orphenadrine formed an H-bond interaction, while both phenyl rings showed π-H interactions, indicating good binding. Finally, the amino group and one phenyl ring exhibited H-bond and π-H interactions, respectively, with the γ-CD pocket.

Practically, PVC-coated graphite electrodes were fabricated utilizing the PVC membrane prepared in Section 2.4.2 with the incorporation of α-CD, β-CD or γ-CD, and each was applied separately for the determination of ORPH in aqueous solutions. The α-CD, β-CD and γ-CD based electrodes exhibited Nernstian responses of 48.78, 55.76 and 52.06 mV per concentration decade, respectively, over the concentration range of 1 × 10⁻² M to 1 × 10⁻⁵ M. Intriguingly, the docking results correlated with the Nernstian responses, revealing that the orphenadrine-β-CD inclusion complex is the most stable one.

Sensor performance characteristics

The sensing of the potentiometric sensor towards the target analyte is governed by the presence of β-CD as a recognition element: the orphenadrine molecule can access the β-CD cavity and form selective hydrogen bonds within it, resulting in stable host-guest complexes. The potentiometric response is produced by the generation of a phase-boundary potential due to the formation of such inclusion complexes. Charge separation, whose magnitude is concentration-dependent, forms at the interface between the electrode membrane and the aqueous sample. This generates the potential difference (emf) between the reference electrode (whose potential is concentration-independent) and the SC-ISE. In addition to the molecular docking study, a conductometric measurement was performed to confirm the effective interaction between β-CD and orphenadrine. Fig. 5 illustrates a plot of the conductance (Λm) vs. the β-CD/ORPH mole ratio. The conductance gradually decreased with the addition of β-CD and leveled out at a β-CD-to-orphenadrine molar ratio of around one, indicating the formation of a rather stable 1:1 complex of orphenadrine and β-CD. This complex appears to have lower mobility than free, uncomplexed orphenadrine, which would limit its ability to transfer charge and reduce the solution's conductivity.

The studied GCEs were constructed by drop-casting the PVC membrane containing the β-CD recognition element over the transducer film. To study the effect of the transducer layer thickness on the electrode response, three different electrodes were fabricated with different volumes of the drop-cast transducer layer (5 μL, 7 μL and 10 μL). The optimum potentiometric response was attained with transducer layer volumes of 5 μL, 7 μL and 7 μL for the Cu-BTC-MOF, MWCNTs, and MWCNTs/Cu-MOF based sensors, respectively, as represented in Fig. 6. When the drop-casting technique is used to increase the thickness of the transducer layer, the material may form "islands"; these islands can negatively impact the electrical contact between the transducer and the electrode, ultimately hindering it. 11 The proposed sensors' performance characteristics were assessed in line with the IUPAC recommendations 48 and the results are compiled in Table 2.
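A calibration like the one compiled in Table 2 boils down to a linear fit of emf against −log C and a comparison of the slope to the ideal Nernstian value for a monovalent cation (about 59.2 mV per decade at 25 °C). A minimal sketch with illustrative numbers (not the raw data behind Table 2):

```python
import numpy as np

# Illustrative calibration data: emf (mV) at concentrations from 1e-2 M to 1e-8 M.
conc = np.array([1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7, 1e-8])
emf_mv = np.array([350.0, 290.0, 230.0, 170.0, 110.0, 50.0, -10.0])

p_conc = -np.log10(conc)  # x-axis: -log[ORPH]
slope, intercept = np.polyfit(p_conc, emf_mv, 1)

# For a cation-selective electrode the emf falls as -log C rises, so the slope
# vs. -log C is about -59 mV/decade; its magnitude is what papers usually quote.
print(f"slope = {slope:.2f} mV per decade of -log C, intercept = {intercept:.1f} mV")
r2 = np.corrcoef(p_conc, emf_mv)[0, 1] ** 2
print(f"r^2 = {r2:.4f}")
```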
The MWCNTs, Cu-BTC-MOF and MWCNTs/Cu-MOF based GCEs exhibited ideal Nernstian monovalent-cation responses of 57.19 ± 0.33, 58.85 ± 0.45 and 60.05 ± 0.16 mV per concentration decade over the concentration ranges of 1 × 10⁻² M to 1 × 10⁻⁶ M, 1 × 10⁻³ M to 1 × 10⁻⁷ M and 1 × 10⁻² M to 1 × 10⁻⁸ M, respectively. This reveals the superior sensitivity of the MWCNTs/Cu-MOF based sensor, with an LOD value of 4 × 10⁻⁹ M as measured by the intersection of the two extrapolated linear portions of the calibration curve. The potentiometric behavior of the proposed sensors is represented in Fig. 7. The MWCNTs/Cu-MOF based sensor exhibited better linearity (r² = 0.996) and a faster response time (5 ± 1.3 s) compared to the MWCNTs based sensor (r² = 0.991, response time 7 ± 2.1 s) and the Cu-BTC MOF based sensor (r² = 0.992, response time 5 ± 1.8 s). The faster response times of the Cu-BTC MOF and MWCNTs/Cu-MOF based sensors are owing to the large surface area of Cu-BTC MOF (343.32 m² g⁻¹) compared to that of MWCNTs (10-20 m² g⁻¹), which allows greater contact between the GCE and the ISM and thereby enhances ion-to-electron transduction at the interface.

The lifetime and stability of the studied sensors were monitored by continuously measuring their linearity range, calibration slope, response time and LOD to ensure their precision within ±2% of the original values. [Table 2 footnotes: (a) the mean of five measurements taken at five different concentration levels; (b) the mean of five determinations of three QC samples; (c) the mean of five determinations of three QC samples using three independently fabricated sensors.] The MWCNTs/Cu-MOF based sensor showed the maximum stability and the longest lifetime, 69 days. The reversibility of the proposed sensors was investigated by measuring the potential values of ORPH samples from high to low concentrations and from low to high concentrations, as shown in Fig. 8. The response of all three proposed sensors was found to be reversible, and the time taken to reach equilibrium from high to low concentration is longer than from low to high concentration. Comparing the dynamic responses of the three sensors revealed the superior response of the MWCNTs/Cu-MOF based GCE relative to the others: the time needed to attain equilibrium from high to low concentration was about 22 ± 1.5 s, and that from low to high concentration was about 8 ± 1.3 s.

Water layer test

The water layer test is used to identify any possible drift in the response of SC-ISEs due to the formation of a water layer between the transducer and the ISM. For this test, the potential reading of 1 × 10⁻⁴ M orphenadrine citrate (the primary ion) was monitored for 2 hours, followed by 1 × 10⁻⁴ M melitracen hydrochloride (an interfering ion) for another 2 hours, and then 1 × 10⁻⁴ M orphenadrine citrate again for 2 hours. Fig. 9 shows that the response of the proposed sensors did not change after conditioning with the interfering ion for 2 hours, indicating the absence of a water layer in all sensors. This can be attributed to the hydrophobic nature of the prepared transducer layers, which prevents the formation of a water layer at the interface with the ISM. 49,50

Effect of pH

The impact of pH on the response of the proposed sensors was studied using a 1 × 10⁻⁴ M orphenadrine citrate solution. The pH values of the investigated solutions were adjusted over the range 2 to 10 using aliquots of dilute hydrochloric acid or sodium hydroxide solutions. The proposed Cu-BTC-MOF, MWCNTs and MWCNTs/Cu-MOF based GCEs showed stable, constant readings over the ranges 3-7, 3-8 and 3-7, respectively, as shown in Fig. 10. Therefore, pH 5 (acetate buffer) was adopted as the working pH for the proposed sensors, at which orphenadrine citrate is protonated. Above pH 8, the potential readings decreased owing to the presence of orphenadrine citrate in its non-protonated form, whereas below pH 3, the sensors are saturated with hydrogen ions, which disturbs their performance.
Sensor selectivity

To evaluate the selectivity of the proposed sensors in the presence of interferents and co-administered drugs, the matched potential method 48 was employed. This involved adding a known amount of orphenadrine citrate solution (activity $a_A'$) to a reference solution (1 × 10⁻⁴ M orphenadrine citrate, activity $a_A$) and measuring the resulting potential change ($\Delta E$). Next, the reference solution was supplemented with a solution of an interfering ion of activity $a_B$ so as to generate an equivalent potential change ($\Delta E$), and the selectivity coefficient ($\log K^{\text{pot}}_{\text{orphenadrine,interferent}}$) was calculated using the standard matched potential method expression:

$K^{\text{pot}}_{\text{orphenadrine,interferent}} = \dfrac{a_A' - a_A}{a_B}$

Table 3 lists the selectivity coefficients of the tested samples, which demonstrate the high selectivity of the sensors towards orphenadrine citrate. Table 4 compares the response characteristics of the proposed GCEs with those of previously reported selective potentiometric sensors for orphenadrine citrate. The results demonstrate that the suggested sensors exhibit better response characteristics and stability than the previously reported ones. The MWCNTs/Cu-MOF based GCE showed a wider linearity range, shorter response time, longer stability and higher sensitivity than the other reported sensors.

Analytical applications

The proposed sensors were applied for the determination of orphenadrine citrate in Norex® tablets and in spiked samples of human plasma and ACSF without any sample extraction or pretreatment steps. As reported in Table 5, the proposed electrodes exhibited high recovery values for the determination of orphenadrine in the different matrices. The results indicate the high efficiency and accuracy of the proposed GCEs, especially the MWCNTs/Cu-MOF based GCE. The results of the proposed sensors for the quantitation of orphenadrine citrate in pharmaceutical tablets and human plasma were statistically compared to the official USP method and another reported method, 35 respectively. As shown in Table 5, no significant differences were detected between the reported methods and the proposed sensors using Student's t-test and the F-test at p = 0.05.

Conclusion

In this study, three GCEs were evaluated for their ability to detect orphenadrine citrate in pharmaceutical formulations, real human plasma, and ACSF solutions. Cu-BTC MOF was used for the first time as an ion-to-electron transducer in a potentiometric sensor. To overcome the transducer's limited conductivity, it was mixed with MWCNTs. The MWCNTs/Cu-MOF-based sensor outperformed the Cu-MOF and MWCNTs-based sensors in terms of linearity range, response time, sensitivity, and stability. All sensors provided precise and accurate recovery values, detecting orphenadrine citrate at concentrations as low as 1 × 10⁻⁷ M, 1 × 10⁻⁶ M, and 1 × 10⁻⁸ M for the Cu-MOF, MWCNTs, and MWCNTs/Cu-MOF-based sensors, respectively. The investigated sensors exhibited high selectivity and can be considered suitable candidates for orphenadrine citrate analysis in quality control laboratories. The high sensitivity of the proposed sensors in biological matrices recommends them for quantitation of the drug in bioavailability and bioequivalence studies. A future direction of our research is the chemical synthesis of more sensitive and stable ion-to-electron transducer composites to be applied in different electrochemical measurements.

Conflicts of interest

There are no conflicts of interest to declare.
[Figure and table captions: Fig. 6 — The effect of the ion-to-electron transducer layer on the potentiometric response of the GCE: (a) Cu-MOF, (b) MWCNTs and (c) MWCNTs/Cu-MOF composite. Fig. 7 — Profile of the potential (mV) versus −log concentration of orphenadrine citrate (M) for the proposed sensors. Fig. 8 — The dynamic response time from low-to-high and high-to-low concentrations for (a) the Cu-MOF based GCE, (b) the MWCNTs based GCE and (c) the MWCNTs/Cu-MOF based GCE. Fig. 10 — The influence of pH on the potentiometric response of the proposed sensors. Table 1 — Docking results of orphenadrine with the α-CD, β-CD and γ-CD pockets. Table 2 — The electrochemical properties of the suggested sensors. Table 4 — Comparison between the proposed sensors and electrochemical sensors documented in the literature for detecting orphenadrine citrate. Footnotes to the statistical comparison (Table 5): (a) the official USP 2023 method for the pharmaceutical tablets was RP-HPLC with UV detection at 257 nm, with a mobile phase of acetonitrile : phosphate buffer (pH 3.6) (50:50) at a flow rate of 2 mL min⁻¹; the average recovery of six concentrations of orphenadrine was 99.57 ± 0.85. (b) the reported method for the human plasma matrix was RP-HPLC with UV detection at 215 nm, with a mobile phase of acetonitrile : water (50:50), pH 2.6, using propylparaben sodium as internal standard; the average recovery of six concentrations was 100.07 ± 1.14. (c) figures in parentheses are the theoretical values of t and F at p = 0.05.]
Towards Optimal Correlational Object Search

In realistic applications of object search, robots will need to locate target objects in complex environments while coping with unreliable sensors, especially for small or hard-to-detect objects. In such settings, correlational information can be valuable for planning efficiently. Previous approaches that consider correlational information typically resort to ad-hoc, greedy search strategies. We introduce the Correlational Object Search POMDP (COS-POMDP), which models correlations while preserving optimal solutions with a reduced state space. We propose a hierarchical planning algorithm to scale up COS-POMDPs for practical domains. Our evaluation, conducted with the AI2-THOR household simulator and the YOLOv5 object detector, shows that our method finds objects more successfully and efficiently compared to baselines, particularly for hard-to-detect objects such as scrub brush and remote control.

I. INTRODUCTION

Object search is a fundamental capability for robots in many applications, including domestic services [1,2], search and rescue [3,4], and elderly care [5,6]. In realistic settings, the object being searched for (e.g., a pepper shaker) will often be small, outside the current field of view, and hard to detect. In such settings, correlational information can be of crucial value. Specifically, suppose the robot is equipped with a prior about the relative spatial locations of object types (e.g., stoves tend to be near pepper shakers). Then, it can leverage this information as a powerful heuristic to narrow down or "focus" the search space, by first locating easier-to-detect objects that are highly correlated with the target object (Fig. 1). Doing so has the potential to greatly improve search efficiency; unfortunately, previous approaches to object search with correlational information tend to resort to ad-hoc or greedy search strategies [7,8,2,9] or assemble a collection of independent components [10], which may not scale well to complex environments.

[Fig. 1 caption: We study the problem of object search using correlational information about spatial relations between objects. This example illustrates a desirable search behavior in an AI2-THOR scene, where the robot leverages the detection of a large StoveBurner to more efficiently find a small, hard-to-detect PepperShaker.]

We follow a long line of work that models the object search problem as a partially observable Markov decision process (POMDP) [7,11,12,13]. This formalization is useful because object search over long horizons is naturally a sequential, partially observed decision-making problem: the robot must (1) search for the target object by visiting multiple viewpoints in the environment sequentially, and (2) maintain and update a measure of uncertainty over the location of the target object, via its belief state. However, existing POMDP-based approaches assume object independence for scalability of maintaining and reasoning about belief states, and do not consider correlational information between objects in the environment during the search process [13,14,15].

We introduce COS-POMDP (Correlational Object Search POMDP), a general planning framework for optimal object search with given correlational information. Critically, COS-POMDPs avoid the exponential blow-up of naively maintaining a joint belief about all objects while preserving optimal solutions to this exponential formulation. COS-POMDPs model correlational information using a correlation-based observation model.
The correlational information is given to the robot as a factored joint distribution over object locations. In practice, this distribution can be approximated by learning it from data [8,16] or by interpreting human speech [17,18]. We address scalability by proposing a hierarchical planning algorithm, where a high-level COS-POMDP plans subgoals, each fulfilled by a low-level planner that plans with low-level (i.e., primitive) actions; both levels plan online based on a shared and updated COS-POMDP belief state, enabling efficient closed-loop planning.

We evaluate the proposed approach in AI2-THOR [19], a realistic simulator of household environments, and we use YOLOv5 [20,21] as the object detector. Our results show that, when the given correlational information is accurate, COS-POMDP leads to more robust search performance for target objects that are hard to detect. In particular, for target objects with a true positive detection rate below 40%, COS-POMDP significantly outperforms a POMDP baseline that does not use correlational information by 42.1%, and a greedy, next-best view baseline [2] by 210%, in terms of SPL (Success weighted by inverse Path Length) [22], a recently developed metric that reflects both search success and efficiency.

II. RELATED WORK

Object search involves a wide range of subproblems (e.g., perception [7,12], planning [23,11], manipulation [24,25]) and different types of target objects (moving [26] or static [23]). We consider static objects and an environment where the set of possible object locations is given, but we assume no object location is known a priori. Garvey [27] and Wixson and Ballard [9] pioneered the paradigm of indirect search, where an intermediate object (such as a desk) that is typically easier to detect is located first, before the target object (such as a keyboard). More recently, probabilistic graphical models have been used to model object-room or object-object spatial correlations [2,7,8,28,29]. In particular, Zeng et al. [2] proposed a factor graph representation for different types of object spatial relations. Their approach produces search strategies in a greedy fashion by selecting the next-best view to navigate towards, based on a hybrid utility of navigation cost and the likelihood of detecting objects. In our evaluation, we compare our sequential decision-making approach with a greedy, next-best view baseline based on that work [2].

Recently, the problem of semantic visual navigation [30,31,32,33,34] has received a surge of interest in the deep learning community. In this problem, an embodied agent is placed in an unknown environment and tasked to navigate towards a given semantic target (such as "kitchen" or "chair"). The agent typically has access to behavioral datasets for training, on the order of millions of frames, and the challenge is typically in generalization. Our work adopts the standard evaluation metric (SPL [22]) and task success criteria (object visibility and distance threshold [31]) from this body of work. However, our setting differs fundamentally in that the search strategy is not a result of training but a result of solving an optimization problem. Finally, our hierarchical planning algorithm for COS-POMDPs differs from prior work in not limiting the POMDP to local use [10] and in not assuming navigation tasks for the low-level macro-actions [35].
III. PROBLEM FORMULATION

We present a general formulation of correlational object search as a planning problem, where a robot must search for a target object given correlational information with other objects in the environment. We begin by describing the underlying search environment and the capabilities of the robot, followed by the problem definition.

A. Search Environment and Robot Capabilities

The search environment contains a target object and $n$ additional static objects. The set of possible object locations is discrete, denoted as $\mathcal{X}$. The locations of the target object $x_{\text{target}} \in \mathcal{X}$ and the other objects $x_1, \dots, x_n \in \mathcal{X}$ are unknown to the robot and follow a latent joint distribution $\Pr(x_1, \dots, x_n, x_{\text{target}})$. The robot is given as input a factored form of this distribution, defined later in Sec. III-B.

The robot can observe the environment from a discrete set of viewpoints, where each viewpoint is specified by the position and orientation of the robot's camera. These viewpoints form the necessary state space of the robot, denoted as $S_{\text{robot}}$. The initial viewpoint is denoted as $s^{\text{init}}_{\text{robot}}$. By taking a primitive move action $a$ from the set $A_m$, the robot changes its viewpoint subject to transition uncertainty $T_m(s'_{\text{robot}}, s_{\text{robot}}, a) = \Pr(s'_{\text{robot}} \mid s_{\text{robot}}, a)$. Also, the robot can decide to finish the task at any timestep by choosing a special action Done, which deterministically terminates the process.

At each timestep, the robot receives an observation $z$ factored into two independent components, $z = (z_{\text{robot}}, z_{\text{objects}})$. The first component $z_{\text{robot}} \in S_{\text{robot}}$ is an estimate of the robot's current viewpoint, following the observation model $O_{\text{robot}}(z_{\text{robot}}, s_{\text{robot}}) = \Pr(z_{\text{robot}} \mid s_{\text{robot}})$. The second component $z_{\text{objects}} = (z_1, \dots, z_n, z_{\text{target}})$ is the result of performing object detection. Each element $z_i \in \mathcal{X} \cup \{\text{null}\}$, $i \in \{1, \dots, n, \text{target}\}$, is the detected location of object $i$ within the field of view, or null if not detected. The observation $z_i$ about object $i$ is subject to a limited field of view and sensing uncertainty, captured by a detection model $D_i(z_i, x_i, s_{\text{robot}}) = \Pr(z_i \mid x_i, s_{\text{robot}})$. Here, a conditional independence assumption common in object search is made [2,14]: $z_i$ is conditionally independent of the observations and locations of all other objects given its own object's location and the robot state $s_{\text{robot}}$. The set of detection models for all objects is $\mathcal{D} = \{D_1, \dots, D_n, D_{\text{target}}\}$. In our experiments, we obtain the parameters of the detection models based on the performance of the vision-based object detector (Sec. VI-B).

B. The Correlational Object Search Problem

Although the joint distribution of object locations is latent, the robot is assumed to have access to a factored form of that distribution, that is, $n$ conditional distributions $\mathcal{C} = \{C_1, \dots, C_n\}$, where $C_i(x_i, x_{\text{target}}) = \Pr(x_i \mid x_{\text{target}})$ specifies the spatial correlation between the target and object $i$. We call each $C_i$ a correlation model. This model can be learned from data or, in our case, be given by a domain expert.

Formally, we define the correlational object search problem as follows. Given as input a tuple $(\mathcal{X}, \mathcal{C}, \mathcal{D}, s^{\text{init}}_{\text{robot}}, S_{\text{robot}}, O_{\text{robot}}, A_m, T_m)$, the robot must perform a sequence of actions $a_1, \dots, a_t$, where $a_1, \dots, a_{t-1} \in A_m$ and the last action is Done. The success criteria depend on the robot state and the target location at the time of Done, and the robot should minimize the distance traveled to find the object.
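To make the problem input concrete, here is a minimal container mirroring the tuple above. The type names and representations are illustrative assumptions for the sketch, not taken from the authors' code.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Tuple

# Illustrative representations (assumptions):
Location = Tuple[int, int]          # an element of X, e.g., a 2D grid cell
RobotState = Tuple[int, int, int]   # a viewpoint in S_robot: (x, y, heading)

@dataclass
class CorrelationalSearchProblem:
    """The problem tuple (X, C, D, s_robot_init, S_robot, O_robot, A_m, T_m)."""
    locations: List[Location]                                    # X
    correlation_models: Dict[str, Callable[[Location, Location], float]]          # C_i(x_i, x_target)
    detection_models: Dict[str, Callable[[Optional[Location], Location, RobotState], float]]  # D_i(z_i, x_i, s_robot)
    s_robot_init: RobotState
    robot_states: List[RobotState]                               # S_robot
    obs_model_robot: Callable[[RobotState, RobotState], float]   # O_robot(z_robot, s_robot)
    move_actions: List[str]                                      # A_m
    transition_model: Callable[[RobotState, RobotState, str], float]  # T_m(s', s, a)
```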
In our evaluation in AI2-THOR, we use the success criteria recommended by Batra et al. [31], defined in Sec. VI-A.

IV. CORRELATIONAL OBJECT SEARCH AS A POMDP

In this section, we introduce the COS-POMDP, a POMDP formulation that addresses the correlational object search problem, followed by a discussion of its optimality. We begin with a brief review of POMDPs [36,37,38].

A. Background: POMDPs

A POMDP is formally defined as a tuple $(S, A, Z, T, O, R, \gamma)$, where $S, A, Z$ denote the state, action, and observation spaces, $T(s', a, s) = \Pr(s' \mid s, a)$ and $O(z, s', a) = \Pr(z \mid s', a)$ are the transition and observation models, and $R(s, a) \in \mathbb{R}$ is the reward function. At each timestep, the agent takes an action $a \in A$, the environment state transitions from $s \in S$ to $s' \in S$ according to $T$, and the agent receives an observation $z \in Z$ from the environment according to $O$. The agent typically maintains a belief state $b_t : S \to [0, 1]$, a distribution over the states and a sufficient statistic for the history of past actions and observations $h_t = (az)_{1:t-1}$. The agent updates its belief after taking action $a$ and receiving observation $z$: $b^{z,a}(s') = \eta\, O(z, s', a) \sum_{s} T(s', a, s)\, b(s)$, where $\eta$ is the normalizing constant [36]. The solution to a POMDP is a policy $\pi$ that maps a history to an action. The value of a POMDP at a history under policy $\pi$ is the expected discounted cumulative reward following that policy, $V^{\pi}(h) = \mathbb{E}\big[\sum_{k=0}^{\infty} \gamma^{k} R(s_k, a_k) \mid h, \pi\big]$, where $\gamma$ is the discount factor. The optimal value at the history is $V^{*}(h) = \max_{\pi} V^{\pi}(h)$.

B. COS-POMDP Definition

Given an instance of the correlational object search problem defined in Sec. III-B, we define the Correlational Object Search POMDP (COS-POMDP) as follows:

• State space. The state space $S$ is factored to include the robot state $s_{\text{robot}} \in S_{\text{robot}}$ and the target state $x_{\text{target}} \in \mathcal{X}$. A state $s \in S$ can be written as $s = (s_{\text{robot}}, x_{\text{target}})$. Importantly, no other object state is included in $S$.

• Observation space. The observation space $Z$ is factored over the objects, and each $z \in Z$ is written as $z = (z_{\text{robot}}, z_{\text{objects}})$, where $z_{\text{objects}} = (z_1, \dots, z_n, z_{\text{target}})$.

• Transition model. The objects are assumed to be static. Actions $a_m \in A_m$ change the robot state from $s_{\text{robot}}$ to $s'_{\text{robot}}$ according to $T_m$, and taking the Done action terminates the task deterministically.

• Observation model. By the definition of $z$, $\Pr(z \mid s) = \Pr(z_{\text{robot}} \mid s_{\text{robot}}) \Pr(z_{\text{objects}} \mid s)$, where $\Pr(z_{\text{robot}} \mid s_{\text{robot}})$ is defined by $O_{\text{robot}}(z_{\text{robot}}, s_{\text{robot}})$. Under the conditional independence assumption in Sec. III, $\Pr(z_{\text{objects}} \mid s)$ can be compactly factored:

$\Pr(z_{\text{objects}} \mid s) = \Pr(z_{\text{target}} \mid x_{\text{target}}, s_{\text{robot}}) \prod_{i=1}^{n} \Pr(z_i \mid x_{\text{target}}, s_{\text{robot}})$  (2)

The first term in Eq. (2) is defined by $D_{\text{target}}$, and each $\Pr(z_i \mid x_{\text{target}}, s_{\text{robot}})$ is called a correlational observation model, written as:

$\Pr(z_i \mid x_{\text{target}}, s_{\text{robot}}) = \sum_{x_i \in \mathcal{X}} \Pr(z_i, x_i \mid x_{\text{target}}, s_{\text{robot}})$  (3)

$= \sum_{x_i \in \mathcal{X}} \Pr(z_i \mid x_i, s_{\text{robot}}) \Pr(x_i \mid x_{\text{target}})$  (4)

where the two terms in Eq. (4) are the detection model $D_i \in \mathcal{D}$ and the correlation model $C_i \in \mathcal{C}$, respectively.

• Reward function. The reward function $R(s, a) = R(s_{\text{robot}}, x_{\text{target}}, a)$ is defined as follows. Upon taking Done, the task outcome is determined based on $s_{\text{robot}}$ and $x_{\text{target}}$: it is successful if the robot's orientation is facing the target and its position is within a distance threshold of the target. If successful, the robot receives $R_{\max} \gg 0$, and $R_{\min} \ll 0$ otherwise. Taking a move action from $A_m$ incurs a negative reward corresponding to the action's cost. In our experiments, we set $R_{\max} = 100$ and $R_{\min} = -100$; each primitive move action (e.g., MoveAhead) receives a step cost of $-1$.
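The correlational observation model in Eqs. (3)-(4) is just a marginalization over the never-tracked location of the correlated object. A minimal sketch, assuming the same illustrative type aliases as before (not the authors' code):

```python
from typing import Callable, Iterable, Optional, Tuple

Location = Tuple[int, int]

def correlational_obs_prob(
    z_i: Optional[Location],
    x_target: Location,
    s_robot: object,
    locations: Iterable[Location],
    detection_model_i: Callable[[Optional[Location], Location, object], float],
    correlation_model_i: Callable[[Location, Location], float],
) -> float:
    """Pr(z_i | x_target, s_robot) = sum over x_i of
    D_i(z_i, x_i, s_robot) * C_i(x_i, x_target).

    Marginalizing x_i out of the detection model is what lets the COS-POMDP
    belief cover only (s_robot, x_target) instead of all object locations.
    """
    return sum(
        detection_model_i(z_i, x_i, s_robot) * correlation_model_i(x_i, x_target)
        for x_i in locations
    )
```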
C. Optimality of COS-POMDPs

The state space of a COS-POMDP involves only the robot and target object states. A natural question arises: have we lost any necessary information? In this section, we show that COS-POMDPs are optimal in the following sense: if we imagine solving a "full" POMDP corresponding to the COS-POMDP, whose state space contains all object states, then the solutions to the COS-POMDP are equivalent. Note that a belief state in this "full" POMDP scales exponentially in the number of objects.

We begin by precisely defining the "full" POMDP, henceforth called the F-POMDP, corresponding to a COS-POMDP. The F-POMDP has an identical action space, observation space, and transition model to the COS-POMDP. The reward function is also identical, since it depends only on the target object state, the robot state, and the action taken. The F-POMDP differs in the state space and observation model:

• State space. The state space contains all object states, so a state can be written as $s = (s_{\text{robot}}, x_1, \dots, x_n, x_{\text{target}})$.

• Observation model. Under the conditional independence assumption stated in Sec. III, the model for observation $z_i$ of object $x_i$ involves just the detection model: $\Pr(z_i \mid s) = \Pr(z_i \mid x_i, s_{\text{robot}})$.

Since the COS-POMDP and the F-POMDP share the same action and observation spaces, they have the same history space as well. We first show that, given the same policy, the two models have the same distribution over histories.

Theorem 1. Given any policy $\pi : h_t \to a$, the distribution of histories is identical between the COS-POMDP and the F-POMDP.

Proof: (Sketch) We prove this by induction. When $t = 1$, the statement is true because both histories are empty. The inductive hypothesis assumes that the distribution $\Pr(h_t)$ is the same for the two POMDPs at $t \geq 1$. Then, by definition, $\Pr(h_{t+1}) = \Pr(h_t, a_t, z_t) = \Pr(z_t \mid h_t, a_t) \Pr(a_t \mid h_t) \Pr(h_t)$. Note that $\Pr(a_t \mid h_t)$ is the same under the given $\pi$. We can show that the two POMDPs also have the same $\Pr(z_t \mid h_t, a_t)$; the full proof is available in Appendix A.

Using Theorem 1, we are equipped to make a statement about the value of following a given policy in either the COS-POMDP or the F-POMDP.

Corollary 1. Given any policy $\pi : h_t \to a$ and history $h_t$, the value $V^{\pi}(h_t)$ is identical between the COS-POMDP and the F-POMDP.

Proof: By definition, the value of a POMDP at a history is the expected discounted cumulative reward with respect to the distribution of future action-observation pairs. Theorem 1 states that the COS-POMDP and the F-POMDP have the same distribution of histories given $\pi$. Furthermore, the reward function depends only on the states of the robot and the target object. Thus, this expectation is equal for the two POMDPs at any $h$.

Finally, we can show that COS-POMDPs are optimal in the sense described above.

Corollary 2. An optimal policy $\pi^*$ for either the COS-POMDP or the F-POMDP is also optimal for the other.

Proof: Suppose, without loss of generality, that $\pi^*$ is optimal for the COS-POMDP but not the F-POMDP. Let $\pi'$ be the optimal policy for the F-POMDP. By the definition of optimality, for at least some history $h$ we must have $V^{\pi'}(h) > V^{\pi^*}(h)$. By Corollary 1, for any such $h$ the COS-POMDP also has value $V^{\pi'}(h)$, meaning $\pi^*$ is not actually optimal for the COS-POMDP; this is a contradiction.
V. HIERARCHICAL PLANNING

Despite the optimality-preserving reduction of the state space in a COS-POMDP, directly planning over the primitive move actions does not scale to practical domains, even for state-of-the-art online POMDP solvers [38]. Nevertheless, planning POMDP actions at the primitive level has the benefit of controlling fine-grained movements, allowing goal-directed behavior to emerge automatically at this level. Therefore, we seek an algorithm that can reason both about searching over a large region and about searching carefully in a local region. To this end, we propose a hierarchical planning algorithm to apply COS-POMDPs to realistic domains (Fig. 2). The pseudocode and a detailed description are provided in Appendix B. As an overview: (1) A topological graph is first dynamically generated to reflect the robot's belief about the target location. Nodes are places accessible by the robot, and edges indicate navigability between places [39]. (2) Then, a high-level COS-POMDP is instantiated which plans subgoals that can be either navigating to another place or searching locally at the current place. Both types of subgoals can be understood as viewpoint-changing actions, except that the latter keeps the viewpoint's location the same. (3) At each timestep, a subgoal is planned using a POMDP solver, and a low-level planner is instantiated corresponding to the subgoal. This low-level planner then plans to output an action from the action set $A = A_m \cup \{\text{Done}\}$, which is used for execution. In our implementation, an A* planner is used for navigation subgoals; for the subgoal of searching locally, a low-level COS-POMDP is instantiated whose action space is the primitive movements $A_m$, and it is solved using a POMDP planner [40]. (4) Upon executing the low-level action, the robot receives an observation from its on-board object detector. This observation is used to update the belief of both the high-level COS-POMDP and the low-level COS-POMDP (if it exists). (5) If the cumulative belief captured by the nodes in the current topological graph is below a threshold (50% in our experiments), then the topological graph is regenerated to better reflect the belief. (6) Finally, the process starts over from step (3). If the high-level COS-POMDP plans a new subgoal different from the current one, the low-level planner is re-instantiated.

Our algorithm plans actions for execution in an online, closed-loop manner, allowing for reasoning about viewpoint changes at the level of both places in a topological graph and fine-grained movements. This is efficient in practice because typical mobile robots can be controlled both at the low level of motor velocities and at the high level of navigation goals [41,42].

VI. EXPERIMENTAL SETUP

We test the following hypotheses through our experiments: (1) leveraging correlational information about easier-to-detect objects can benefit the search for hard-to-detect objects; (2) optimizing over an action sequence improves performance compared to greedily choosing the next-best view.

A. AI2-THOR

We conduct experiments in AI2-THOR [19], a realistic simulator of household rooms. It has a total of 120 scenes divided evenly into four room types: Bathroom, Bedroom, Kitchen, and Living room. For each room type, we use the first 20 scenes for training a vision-based object detector and learning object correlation models (used in some experiments), and the last 10 scenes for validation. The robot can take primitive move actions from the set {MoveAhead, RotateLeft, RotateRight, LookUp, LookDown}. MoveAhead moves the robot forward by 0.25 m. RotateLeft and RotateRight rotate the robot in place by 45°. LookUp and LookDown tilt the camera up or down by 30°. The robot observes the pose of its current viewpoint without noise. To be successful, when the robot takes Done, it must be within a Euclidean distance of 1.0 m of the target object while the target object is visible in the camera frame. The maximum number of steps allowed is 100.
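These action semantics are simple enough to write down directly. Below is a minimal, deterministic sketch of the primitive move actions as stated above; the ±30° camera tilt limits are an assumption for illustration, not something the text specifies.

```python
import math
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Viewpoint:
    x: float     # meters
    y: float     # meters
    yaw: int     # heading in degrees, multiple of 45
    pitch: int   # camera tilt in degrees, multiple of 30

def apply_move(s: Viewpoint, action: str) -> Viewpoint:
    """MoveAhead = 0.25 m forward; Rotate* = 45 deg in place; Look* = 30 deg tilt."""
    if action == "MoveAhead":
        rad = math.radians(s.yaw)
        return replace(s, x=s.x + 0.25 * math.cos(rad), y=s.y + 0.25 * math.sin(rad))
    if action == "RotateLeft":
        return replace(s, yaw=(s.yaw + 45) % 360)
    if action == "RotateRight":
        return replace(s, yaw=(s.yaw - 45) % 360)
    if action == "LookUp":
        return replace(s, pitch=min(s.pitch + 30, 30))   # tilt limit assumed
    if action == "LookDown":
        return replace(s, pitch=max(s.pitch - 30, -30))  # tilt limit assumed
    raise ValueError(f"unknown action: {action}")
```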
B. Object Detector

We use YOLOv5 [21], a popular vision-based object detector. This contrasts with previous works evaluated using a ground-truth object detector [33] or detectors with synthetic noise and detection ranges [2,15]. We collect training data by randomly placing the robot in the training scenes.

Detection Model. Since vision detectors can sometimes detect small objects from far away, we consider a line-of-sight detection model with a limited field-of-view angle, parameterized by: TP, the true positive rate; FP, the false positive rate; $r$, the average distance between the robot and the object for true positive detections; and $\sigma$, the width of a small region around the true object location within which a detection, though not exactly accurate, is still accepted as a true positive. We set $\sigma = 0.5$ m. The notation $\mathcal{N}(\cdot)$ denotes a Gaussian distribution. $V(s_{\text{robot}})$ denotes the line-of-sight field of view with a 90° angle, and $V_E(r)$ denotes the region inside the field of view that is within distance $r$ of the robot. The weight $\delta = 1$ if the detection is within $V_E(r)$, and otherwise $\delta = \exp\!\big(-(\lVert z_i - s_{\text{robot}} \rVert - r)^2\big)$.

C. Target Objects

We choose the target and correlated object classes based on detection statistics. The list of target object classes and other correlated classes for each room type is as follows (in no particular order): [class lists not recoverable from the extraction]. For detection statistics, please refer to Table I and Table IV.

D. Correlation Model

We consider a binary correlation model that takes into account whether the correlated object and the target are close or far. Note that our method is not specific to this model; we use it because it is applicable between arbitrary object classes and can be easily estimated from object instances. The model is expressed through Close(·, ·) and Far(·, ·), opposite class-level predicates judged by comparing the Euclidean distance $\lVert \cdot \rVert$ between the two locations against $d(\cdot, \cdot)$, the expected distance between the two object classes. In Sec. VII, we conduct an ablation study where $d(\cdot, \text{target})$ is estimated under different scenarios: accurate, based on ground-truth object locations in the deployed scene; estimated (est), based on instances in training scenes; and wrong (wrg), the same as accurate except that we flip the close/far relationship between the objects so that it does not match the scene.

E. Evaluation Metrics

We consider three metrics: (1) Success weighted by inverse Path Length (SPL) [22]; (2) success rate (SR); and (3) discounted cumulative rewards (DR). The SPL for each trial $i$ is defined as $\text{SPL}_i = S_i \frac{\ell_i}{\max(p_i, \ell_i)}$, where $S_i$ is the binary success outcome of the search, $\ell_i$ is the length of the shortest path between the robot and the target, and $p_i$ is the length of the actual search path; the reported SPL is the average over trials. The SPL measures search performance by taking into account both the success and the efficiency of the search. As a stringent metric, $\ell_i$ uses information about the true object location, but SPL does not penalize excessive rotations [31]. Therefore, we also include the discounted cumulative rewards (DR) metric with $\gamma = 0.95$, which takes rotation actions into account.
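To make the SPL metric concrete, here is a minimal implementation of the per-trial formula and its average (the example trial numbers are made up):

```python
def spl(successes, shortest_lengths, path_lengths) -> float:
    """Success weighted by inverse Path Length [22]:
    SPL = (1/N) * sum_i S_i * l_i / max(p_i, l_i)."""
    n = len(successes)
    return sum(
        s * l / max(p, l)
        for s, l, p in zip(successes, shortest_lengths, path_lengths)
    ) / n

# Example: two trials; the second succeeds but travels twice the shortest path.
print(spl([0, 1], [4.0, 5.0], [7.0, 10.0]))  # -> 0.25
```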
F. Baselines

Baselines are defined in the caption of Table I. Note that Greedy-NBV is based on [2], where a weighted particle belief is used to estimate the joint state over all object locations. During planning, the robot selects the next best viewpoint to navigate towards based on a cost function that considers both navigation distance and the probability of detecting any object. This provides a baseline that contrasts with the sequential decision-making paradigm of COS-POMDPs and their modeling of only the robot and target states.

G. Implementation Details

Objects exist in 3D in AI2-THOR scenes. Since the robot can tilt its camera within a small range of angles, all methods (except Random) estimate the target object's height among a discrete set of possible height values, Above, Below, and Same, with respect to the camera's current tilt angle. POMDP-based methods are implemented using the pomdp_py [43] library with the POUCT planner [40]. The rollout policy uniformly samples from move actions towards the target or move actions that could lead to a non-null observation about an object.

[Table I caption, beginning truncated: "... the YOLOv5 [21] vision detector and are given accurate correlational information. Target-POMDP uses hierarchical planning but only the target object is detectable. Greedy-NBV is a next-best view approach based on [2]. Random chooses actions uniformly at random. The highest value of each metric per room type is bolded. Parentheses contain 95% confidence intervals. Ablation study results are bolded if they outperform the best result from the main evaluation."]

VII. RESULTS AND DISCUSSIONS

Our main results by room type are shown in Table I; results over all room types are in the appendix. The performance of COS-POMDP is more consistent than that of the other baselines, ranking either best or second best for all metrics in the four room types. The performance is broken down by target class in Table II. Greedy-NBV performs well for AlarmClock in Bedroom; it appears to experience less instability in the particle belief as a result of particle reinvigoration. COS-POMDP appears to be the most robust when there is significant uncertainty about whether the target object will be detected correctly, as for ScrubBrush, CreditCard, Candle, RemoteControl, Knife, and CellPhone. An example search trial for CreditCard is visualized in Fig. 3. For target objects with a true positive detection rate below 40%, COS-POMDP improves over the POMDP baseline that ignores correlational information by 42.1% in terms of the SPL metric (p = 0.028), and it is more than 2.1 times better than the greedy baseline (p = 0.023); both results are statistically significant. Indeed, when the target object is reliably detectable, such as Television, the ability to detect multiple other objects may actually hurt performance compared to Target-POMDP, due to noise from detecting those other objects and its influence on search behavior.

Ablation Studies. We also conduct two ablation studies. First, we equip COS-POMDP with a ground-truth object detector, as done in [33], henceforth called COS-POMDP (gt). This shows the performance when the detections of both the target and the correlated objects involve no noise at all. We observe better or competitive performance from using ground-truth detectors across all metrics in all room types; the gain over COS-POMDP in terms of SPL is not statistically significant (p = 0.069). Additionally, we use correlations estimated from training scenes (COS-POMDP (est)) as well as incorrect correlational information that is the reverse of the correct one (COS-POMDP (wrg)). Indeed, using accurate correlations provides the most benefit, while correlations estimated through this naive method can offer a benefit over incorrect correlations in some cases (Bathroom and Bedroom), but can also backfire and hurt performance in others.
Therefore, properly learning correlations is important, while leveraging a reliable source of information, for example, from a human at the scene, may offer the most benefit. VIII. CONCLUSION AND FUTURE WORK In this paper, we formulated the problem of correlational object search as a POMDP (COS-POMDP) and proposed a hierarchical planning algorithm to solve it in practice. Our results show that, particularly for hard-to-detect objects, our approach offers more robust performance compared to baselines. Directions for future work include investigating different correlation models, searching in more complex settings that involve, e.g., container opening and dynamic objects, and evaluating on a real robot platform. APPENDIX A. Proof of Theorem 1 Theorem 1. Given any policy π : h_t → a, the distribution of histories is identical between the COS-POMDP and the F-POMDP. Proof: We prove this by induction. When t = 1, the statement is true because both histories are empty. The inductive hypothesis assumes that the distribution Pr(h_t) is the same for the two POMDPs at t ≥ 1. Then, by definition, Pr(h_{t+1}) = Pr(h_t, a_t, z_t) = Pr(z_t | h_t, a_t) Pr(a_t | h_t) Pr(h_t). Since Pr(a_t | h_t) is the same under the given π, we can conclude that Pr(h_{t+1}) is identical if the two POMDPs have the same Pr(z_t | h_t, a_t). We show that this is true as follows. B. Hierarchical Planning Algorithm Below, we describe in detail the hierarchical planning algorithm proposed in Sec. V. The pseudocode for this algorithm is presented in Algorithm 1 and illustrated with a legend in Fig. 5. To enable planning over a large search region, we first generate a topological graph, where nodes are places accessible by the robot and edges indicate navigability between places. This is done by the SampleTopoGraph procedure (Algorithm 2). In this procedure, the nodes are sampled based on the robot's current belief b_t over the target location, and edges are added such that the graph is connected and every node has an out-degree within a given range, which affects the branching factor for planning. An example output is illustrated in Fig. 2. Then, a high-level COS-POMDP P_H is instantiated. The state and observation spaces, the observation model, and the reward model are as defined in Sec. IV-B. The move action set and the corresponding transition model are defined according to the generated topological graph. Each move action represents a subgoal of navigating to another place, or the subgoal of searching locally at the current place. Both types of subgoals can still be understood as viewpoint-changing actions, except that the latter keeps the viewpoint at the same location. For the transition model T(s', g, s), where g represents the subgoal, the resulting viewpoint (i.e., s_robot ∈ s') after completing a subgoal is located at the destination of the subgoal with orientation facing the target object location (x_target ∈ s). The Done action is also included as a dummy subgoal to match the definition of the COS-POMDP action space (Sec. IV-B). At each timestep, a subgoal is planned using an online POMDP planner, and a low-level planner is instantiated corresponding to the subgoal. This low-level planner then plans Algorithm 1: OnlineHierarchicalPlanning Input: P = (X, C, D, s_robot^init, S_robot, O_robot, A_m, T_m). Parameter: maximum number of steps T_max. Output: Action sequence a_1, ..., a_t (Sec. III-B).
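The node-sampling step of SampleTopoGraph can be sketched in a few lines of Python; the parameter names follow the paper, but the sampling details here are an assumption (capping the maximum out-degree and repairing connectivity, which the paper also requires, are omitted for brevity):

```python
import math
import random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def sample_topo_graph(belief, M=10, d_sep=1.0, deg_min=3, tries=500):
    """Draw node positions with probability proportional to the
    target-location belief, keep them at least d_sep apart, then link
    each node to its deg_min nearest neighbors."""
    positions = list(belief)                  # belief: {(x, y): prob}
    weights = [belief[p] for p in positions]
    nodes = []
    for _ in range(tries):
        if len(nodes) == M:
            break
        cand = random.choices(positions, weights=weights)[0]
        if all(dist(cand, n) >= d_sep for n in nodes):
            nodes.append(cand)
    edges = set()
    for u in nodes:
        nbrs = sorted((v for v in nodes if v != u), key=lambda v: dist(u, v))
        for v in nbrs[:deg_min]:
            edges.add((u, v))
    return nodes, edges
```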
while t ≤ T_max do
    (V, E) ← SampleTopoGraph(b_t);
    P_H ← HighLevelCOSPOMDP(P, V, E, b_t);
    subgoal ← plan POMDP online for P_H;
    if subgoal is navigate to a node in V then
        s_robot ← argmax_{s_robot} b_robot(s_robot);
        a_t ← A*(subgoal, s_robot, A_m, T_m);
    else if subgoal is search locally then
        P_L ← LowLevelCOSPOMDP(P, b_t);
        a_t ← plan POMDP online for P_L;
    else if subgoal is Done then
        a_t ← Done;
    end
    z_t ← execute a_t and receive observation;
    b_{t+1} ← BeliefUpdate(b_t, a_t, z_t);
    t ← t + 1;
end
to output an action a_t from the action set A = A_m ∪ {Done}, which is used for execution. In our implementation, for navigation subgoals, an A* planner is used, and for searching locally, a low-level COS-POMDP P_L is instantiated with the primitive movements A_m in its action space. (We use PO-UCT [40] as the online POMDP solver in our experiments.) Upon executing the low-level action a_t, the robot receives an observation z_t ∈ Z from its on-board perception modules for robot state estimation and object detection. This observation is used to update the belief of the high-level COS-POMDP, which is shared with the low-level COS-POMDP. Finally, the process starts over from the first step of sampling a topological graph. If the high-level COS-POMDP plans a new subgoal different from the current one, then the low-level planner is re-instantiated. This algorithm plans actions for execution in an online, closed-loop fashion, allowing reasoning about viewpoint changes both at the level of places in a topological graph and at the level of fine-grained movements. Algorithm 2 is the pseudocode of the SampleTopoGraph algorithm, implemented for our experiments in AI2-THOR. We set M = 10, d_sep = 1.0 m, ζ_min = 3, ζ_max = 5. In our implementation, the topological graph is resampled only if the cumulative belief captured by the nodes in the current topological graph, Σ_{s_robot ∈ V} p(s_robot), is below 50%. Otherwise, the same topological graph is returned. C. Additional Results and Discussions The performance over all scenes and target classes is shown in Table III. In summary, COS-POMDP outperforms the baselines across all three metrics. Based on Table II, we also observe an advantage for COS-POMDP for objects that are detected at a closer distance on average (r). In particular, the performance gain over Greedy-NBV is statistically significant (p < 0.001) in terms of SPL, and the performance gain over Target-POMDP is statistically significant (p = 0.04) in terms of discounted cumulative rewards. In addition, COS-POMDP is significantly better than COS-POMDP (est) (p = 0.002) and COS-POMDP (wrg) (p = 0.012) in terms of SPL. COS-POMDP (gt) outperforms COS-POMDP in SPL, but the difference is not significant (p = 0.069). COS-POMDP (est) performs worse than COS-POMDP (wrg) in Kitchen and Living room. Our observation is that scenes in those room types have greater variance in size and layout, making estimated correlations less reliable in validation scenes. Those scenes may also contain multiple instances of some object classes, so that search by COS-POMDP (wrg) may actually benefit from belief updates using the reverse of the correct correlational information, since this may in fact increase the probability over one of the true target locations. [Fig. 5 caption: This is an enlarged version of Fig. 2 with a legend. A high-level COS-POMDP plans subgoals that are fed to a low-level planner to produce low-level actions. The belief state is shared across the levels. Both levels plan with updated beliefs at every timestep.] D.
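The BeliefUpdate step in Algorithm 1 is a standard discrete Bayes filter over the COS-POMDP state; a minimal sketch, assuming enumerable states and callable transition and observation models T and O, looks like:

```python
def belief_update(b, a, z, T, O, states):
    """b: {s: prob}. T(s2, a, s) gives Pr(s2 | s, a); O(z, s2, a) gives
    Pr(z | s2, a). Returns b'(s2) ∝ O(z | s2, a) * sum_s T(s2 | s, a) b(s)."""
    b_new = {}
    for s2 in states:
        pred = sum(T(s2, a, s) * p for s, p in b.items())
        b_new[s2] = O(z, s2, a) * pred
    total = sum(b_new.values())
    # If the observation has zero likelihood everywhere, keep the old belief.
    return {s: p / total for s, p in b_new.items()} if total > 0 else b
```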
Detection Statistics for Correlated Object Classes The correlated object classes are chosen to have, in the validation scenes, a true positive rate of at least 60% (and generally above 70%), a false positive rate around or below 5%, and an average true-positive detection distance of around 2 m or more. This contrasts with the target classes, where either the true positive rate is below 60% (many below 50%), the false positive rate is around 5-10%, or the average true-positive detection distance is around or less than 2.5 m. Table IV shows the detection statistics for correlated object classes. The detection statistics of target object classes can be found in Table II.
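These selection criteria translate directly into a filter over per-class detection statistics; a sketch with the thresholds from the text (the field names are assumed):

```python
def is_good_correlated_class(stats):
    """stats: dict with true-positive rate 'tp', false-positive rate 'fp',
    and mean true-positive detection distance 'r' in meters, measured on
    validation scenes."""
    return stats["tp"] >= 0.60 and stats["fp"] <= 0.05 and stats["r"] >= 2.0
```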
2021-10-20T01:16:08.898Z
2021-10-19T00:00:00.000
{ "year": 2021, "sha1": "5d46a094d0add44e2d00fca9cba4867b51f1957a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "8b98e0171bbda9fda5ecd9c1ed36c6d8a0f1e8a7", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
27378030
pes2o/s2orc
v3-fos-license
TwiSe at SemEval-2017 Task 4: Five-point Twitter Sentiment Classification and Quantification The paper describes the participation of the team "TwiSE" in the SemEval-2017 challenge. Specifically, I participated at Task 4, entitled "Sentiment Analysis in Twitter", for which I implemented systems for five-point tweet classification (Subtask C) and five-point tweet quantification (Subtask E) for English tweets. In the feature extraction steps the systems rely on the vector space model, morpho-syntactic analysis of the tweets, and several sentiment lexicons. The classification step of Subtask C uses a Logistic Regression trained with the one-versus-rest approach. Another instance of Logistic Regression combined with the classify-and-count approach is trained for the quantification task of Subtask E. In the official leaderboard the system is ranked 5/15 in Subtask C and 2/12 in Subtask E. Introduction Microblogging platforms like Twitter have lately become ubiquitous, democratizing the way people publish and access information. This vast amount of information that reflects the opinions, news or comments of people creates several opportunities for opinion mining. Among other platforms, Twitter is particularly popular for research due to its scale, representativeness and ease of access to the data it provides. Furthermore, to facilitate the study of opinion mining, high-quality resources and data challenges are organized. Task 4 of the SemEval-2017 challenge, entitled "Sentiment Analysis in Twitter", is among them. The paper describes the participation of the team Twitter Sentiment (TwiSE) in two of the subtasks of Task 4 of SemEval-2017. Specifically, I participated in Subtasks C and E. Both of them assume that sentiment is distributed across a five-point scale ranging from VeryNegative to VeryPositive. Subtask C is a sentiment classification task, where, given a tweet, the aim is to assign one of the five classes. Subtask E is a quantification task, whose aim, given a set of tweets referring to a subject, is to estimate the prevalence of each of the five classes. The tasks are described in more detail at (Rosenthal et al., 2017). The rest of the paper is organized as follows: Section 2 describes the feature extraction steps performed in order to construct the representation of a tweet, which is the same for both Subtasks C and E. Section 3 details the learning approaches used and Section 4 summarizes the achieved performance. Finally, Section 5 concludes with pointers for future work. Feature Extraction In this section I describe the details of the feature extraction process performed. My approach is heavily inspired by my previous participation in the "Twitter Sentiment Analysis" task of SemEval-2016, which is detailed at Balikas and Amini (2016). Importantly, the code for performing the feature extraction steps described below is publicly available at https://github.com/balikasg/SemEval2016-Twitter_Sentiment_Evaluation. There are three sets of features extracted: 1. Word occurrence features, 2. Morpho-syntactic features like counts of punctuation and part-of-speech (POS) tags, 3. Semantic features based on sentiment lexicons and word embeddings. For the data pre-processing, cleaning, and tokenization (I adapted the tokenizer provided at http://sentiment.christopherpotts.net/tokenizing.html), as well as for most of the learning steps, I used Python's Scikit-Learn (Pedregosa et al., 2011) and NLTK (Bird et al., 2009). Word occurrence and morpho-syntactic features Following (Kiritchenko et al., 2014; Balikas and Amini, 2016), I extract features based on the words that occur in a tweet.
The aim is to describe the lexical content of the tweets as well as to capture part of the word order. The latter is achieved using N-grams, with N > 1. To reduce the dimensionality of the representations when using N-grams, especially with noisy data such as tweets, I use the hashing trick. Hashing is a fast and space-efficient way of vectorizing text spans. It turns arbitrary features into vector indices of a predefined size (Weinberger et al., 2009). For example, assume that after the vocabulary extraction step one has a vocabulary of dimensionality 50K. This would result in a very sparse vector space model and longer training for a classifier. Feature hashing can be seen as a dimensionality reduction process where a hash function, given a textual input (vocabulary item), associates it to a number j within 0 ≤ j < D, where D is the dimension of the new representation. The word-occurrence and morpho-syntactic features I extracted are:
• N-grams with N ∈ [1, 4], projected to a 20K-dimensional space using the hashing function (the signed 32-bit version of the Murmurhash3 function, implemented as part of the HashingVectorizer class of scikit-learn),
• character m-grams of length m ∈ {4, 5}, that is, sequences of characters of length 4 or 5, projected to a 25K-dimensional space using the same hashing function. The sizes of the output of the hashing function for N-grams and character m-grams (20K and 25K, respectively) were decided using the validation set. Also, I applied the hashing trick only to these two types of features,
• # of exclamation marks, # of question marks, sum of exclamation and question marks, a binary feature indicating if the last character of the tweet is a question or exclamation mark,
• # of capitalized words (e.g., GREAT) and # of elongated words (e.g., coool), # of hashtags in a tweet,
• # of negative contexts. Negation is important as it can alter the meaning of a phrase. For instance, the meaning of the positive word "great" is altered if the word follows a negative word, e.g., "not great". We have used a list of negative words (like "not") to detect negation. We assumed that words after a negative word occur in a negative context, which ends at the end of the tweet unless a punctuation symbol occurs first. Notice that negation also affects the N-gram features by transforming a word w in a negated context to w_NEG,
• # of positive emoticons, # of negative emoticons, and a binary feature indicating if emoticons exist in a given tweet, and
• The distribution of part-of-speech (POS) tags (Gimpel et al., 2011) with respect to positive and negative contexts, that is, how many verbs, adverbs, etc., appear in a positive and in a negative context in a given tweet.
Semantic Features With regard to the sentiment lexicons, I used:
• manual sentiment lexicons: Bing Liu's lexicon (Hu and Liu, 2004), the NRC emotion lexicon (Mohammad and Turney, 2010), and the MPQA lexicon (Wilson et al., 2005),
• # of words in positive and negative contexts belonging to the word clusters provided by the CMU Twitter NLP tool, # of words belonging to clusters obtained using skip-gram word embeddings,
• positional sentiment lexicons: the Sentiment140 lexicon (Go et al., 2009) and the Hashtag Sentiment Lexicon (Kiritchenko et al., 2014).
I now make more explicit the way I used the sentiment lexicons, using Bing Liu's lexicon as an example.
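Returning to the hashed n-gram features above, a minimal scikit-learn sketch of that part of the pipeline (dimensions from the text; the exact preprocessing of the original code may differ):

```python
from scipy.sparse import hstack
from sklearn.feature_extraction.text import HashingVectorizer

# Word 1-4-grams hashed to 20K dims; character 4-5-grams hashed to 25K dims.
word_vec = HashingVectorizer(analyzer="word", ngram_range=(1, 4), n_features=20000)
char_vec = HashingVectorizer(analyzer="char", ngram_range=(4, 5), n_features=25000)

tweets = ["not great :(", "coool day, GREAT news!"]
X = hstack([word_vec.transform(tweets), char_vec.transform(tweets)])
print(X.shape)  # (2, 45000)
```

Because HashingVectorizer is stateless, transform can be called without fitting, which keeps the pipeline fast and memory-light, as the text emphasizes.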
I treated the rest of the lexicons similarly; the approach is inspired by (Kiritchenko et al., 2014). For each tweet, using Bing Liu's lexicon I generated a 104-dimensional vector. After tokenizing the tweet, I count how many words (i) in positive/negative contexts belong to the positive/negative lexicons (4 features), and I repeat the process for the hashtags (4 features). To this point one has 8 features. I repeat the generation process of those 8 features for the lowercase words and the uppercase words. Finally, for each of the 24 POS tags the (Gimpel et al., 2011) tagger generates, I count how many words in positive/negative contexts belong to the positive/negative lexicon. As a result, this generates 2 × 8 + 24 × 4 = 104 features in total for each tweet based on the sentiment lexicons. With respect to the features from text embeddings, I opt for cluster-based embeddings, inspired by (Partalas et al., 2016). I used an in-house collection of ∼40M tweets collected using the Twitter API between October and November 2016. Using the skip-gram model as implemented in the word2vec tool (Mikolov et al., 2013), I generated word embeddings for each word that appeared in the collected data more than 5 times. Therefore, each word is associated with a vector of dimension D, where I set D = 100, which I did not validate. Then, using the k-means algorithm I clustered the learned embeddings, initializing the cluster centroids with k-means++ (Arthur and Vassilvitskii, 2007). Having the result of the clustering step, I produced binary cluster-membership features for the words of a tweet. For instance, assuming access to the results of k-means with k = 50, each tweet's representation is augmented with 50 features, denoting whether words of the tweet belong to each of the 50 clusters. The number of clusters k in the k-means algorithm is a hyperparameter, which was set to 1,000 after tuning over k ∈ {100, 250, 500, 1000, 1500, 2000}. The Learning Approach This section describes the learning approach for Subtasks C and E. For each of them, I used a Logistic Regression optimized with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm from the quasi-Newton family of methods, and in particular its limited-memory (L-BFGS) approximation (Byrd et al., 1995) (from scikit-learn: LogisticRegression(solver='lbfgs')). Fine-grained tweet classification The output of the concatenation of the representation learning steps described in Section 2 is a 46,368-dimensional vector, out of which the N-grams and character m-grams correspond to 45K elements. We normalize each instance using the ℓ2 norm, and this corresponds to the vector representation of the tweets. I train a Logistic Regression as implemented in Scikit-learn (Pedregosa et al., 2011) using L2 regularization. The hyper-parameter C that controls the importance of the regularization term in the optimization problem is selected with grid search from C ∈ {10^-4, 10^-3, ..., 10^4}. For the grid search I used a simple train-validation split, which is described in the next section. Once the C parameter is selected, I retrain the Logistic Regression on the union of the instances of the training and validation sets. In addition, as shown in Figure 1 ("Class Distribution: Training data"), the classification problem is unbalanced, as the distribution of the examples across the five sentiment categories is not uniform. To account for this, I assigned class weights to the examples when training the Logistic Regression.
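A sketch of the cluster-membership features using gensim and scikit-learn; tokenized_tweets stands in for the tokenized ~40M-tweet collection, and the exact training settings beyond those stated in the text are assumptions:

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Skip-gram embeddings (D = 100) for words seen more than 5 times.
model = Word2Vec(tokenized_tweets, vector_size=100, sg=1, min_count=5)

words = model.wv.index_to_key
kmeans = KMeans(n_clusters=1000, init="k-means++", n_init=1).fit(model.wv[words])
word2cluster = dict(zip(words, kmeans.labels_))

def cluster_features(tokens, k=1000):
    """Binary vector: entry j is 1 if any token of the tweet is in cluster j."""
    v = np.zeros(k)
    for t in tokens:
        if t in word2cluster:
            v[word2cluster[t]] = 1.0
    return v
```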
The goal is to penalize more the misclassification errors in the less frequent classes. The weights are inversely proportional to the number of instances of each class. This is also motivated by the fact that the official evaluation measure is the macro-averaged Mean Absolute Error (MAE^M), which is averaged across the different classes and accounts for the distance between the true and the predicted class. More information about the evaluation metrics used can be found at (Rosenthal et al., 2017). Fine-grained tweet quantification While the aim of classification is to assign a category to each tweet, the aim of quantification is to estimate the prevalence of a category in a set of tweets. Several methods for quantification have been proposed; see, for instance, the work of G. Forman on classify-and-count and probabilistic classify-and-count (Forman, 2008) and the recently proposed ordinal quantification trees (Da San Martino et al., 2016). In this work, I focus on a classify-and-count approach, which simply requires classifying the tweets and then aggregating the classification results. The official evaluation measure is the Earth Mover's Distance (EMD) averaged over the subjects of the data, described in detail at (Rosenthal et al., 2017). The classification and quantification methods I use rely on operations that are efficient in terms of memory (hashing) and computational resources (linear models). The feature extraction and learning operations are naturally parallelizable. I believe that this is an important advantage, as the end-to-end system is robust and fast to train. The Experimental Framework The data Table 1 shows the data released by the task organizers. To tune the hyper-parameters of my models, I used a simple validation mechanism: I concatenated the "Train2016", "Development2016", and "DevTest2016" sets (9,070 tweets in total) to use as training data and left "Test2016" as validation data. I acknowledge that using the "Test2016" part of the data only for validation purposes may be limiting in terms of the achieved performance, since these data could also have been used to train the system. I also highlight that by using more elaborate validation strategies, like splitting on the subjects of the tweets, one should be able to achieve better tuning results. Official Rankings Table 2 shows the performance the systems achieved. There are two main observations. For Subtask C, where TwiSE is ranked 5th, I note that the system is a slightly improved version of the system of (Balikas and Amini, 2016), which was ranked first in this Subtask in the 2016 edition. The only difference is the addition of the extra features from clustering the word embeddings. This entails that significant progress was made on the task, due either to the extra data (the "Test2016" data, which I used only for validation) or to more efficient algorithms. On the other hand, TwiSE is ranked 2nd in Subtask E. This, along with the simplicity of the approach used, which is based on aggregating the counts of the classification step, suggests that there is more work to be done in this direction. Five-Scale Classification: Error Analysis Analyzing the classification errors, one finds that the (macro-averaged) mean absolute error per sentiment category is distributed as follows: VeryNegative: 0.836, Negative: 0.566, Neutral: 0.584, Positive: 0.771, VeryPositive: 0.443. The system performed best in the VeryPositive class (lowest error) and worst in the VeryNegative class. Interestingly, the system did not do as well in the Positive class.
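A minimal sketch of the two learning steps just described, assuming feature matrices and a macro_mae helper are already available (scikit-learn's class_weight="balanced" implements weights inversely proportional to class frequency; the -2..2 label encoding is an assumption):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_with_grid_search(X_tr, y_tr, X_val, y_val, macro_mae):
    """Select C on a train/validation split by macro-averaged MAE;
    the caller then refits on train+validation with the chosen C."""
    best = (None, np.inf)
    for C in [10.0**k for k in range(-4, 5)]:   # C in {1e-4, ..., 1e4}
        clf = LogisticRegression(solver="lbfgs", C=C, class_weight="balanced",
                                 multi_class="ovr", max_iter=1000)
        clf.fit(X_tr, y_tr)
        err = macro_mae(y_val, clf.predict(X_val))
        if err < best[1]:
            best = (C, err)
    return best[0]

def classify_and_count(clf, X_topic, labels=(-2, -1, 0, 1, 2)):
    """Quantification: the estimated prevalence of each class is simply
    its share among the predicted labels for the topic's tweets."""
    pred = clf.predict(X_topic)
    return [float(np.mean(pred == c)) for c in labels]
```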
To better understand why, Figure 1 plots the distribution of the instances across the five sentiment classes for the training data we used and the test data. Notice how the Positive class is dominant in the training data, while this changes in the test data. I believe that the distribution drift between the training and test data is indicative of why the system performed poorly in the "Positive" class. Five-Scale Quantification: Error Analysis I repeat here the error analysis process for the quantification task. The best performance was achieved for the subject "leonard cohen", whose EMD was 0.029, while the worst performance was for the topics "maduro" (EMD = 0.709) and "medicaid" (EMD = 0.660). The distribution of sentiment for "leonard cohen" is very similar to the distribution of sentiment in the training set, with a Kullback-Leibler divergence of 0.140. On the other hand, the Kullback-Leibler divergences for "maduro" and "medicaid", which are both skewed towards negative sentiment, are 1.328 and 0.896, respectively. Although a more detailed error analysis is required in order to improve the performance of the system, I believe that the distribution drift between the training examples and the examples of a subject plays an important role. This may be further amplified by the fact that I used a classify-and-count approach, which does not account for drifts. Conclusion The paper described the participation of TwiSE in Subtasks C and E of the "Twitter Sentiment Evaluation" Task of SemEval-2017. Importantly, my system was ranked 2nd in Subtask E, "Five-point Sentiment Quantification", using a simple classify-and-count approach on top of a Logistic Regression. An interesting future work direction towards improving the system is to better handle distribution drifts between the training and test data.
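For distributions over an ordered scale like the five sentiment points, the EMD values reported above reduce to a simple cumulative-sum computation; a sketch (the official scorer may normalize differently):

```python
import numpy as np

def emd_ordinal(p, q):
    """Earth Mover's Distance between two distributions over an ordered
    scale; for 1-D ordinal histograms it equals the summed absolute
    difference of the CDFs."""
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())

# e.g., predicted vs. true prevalence for one topic:
print(emd_ordinal([.10, .20, .40, .20, .10], [.05, .15, .40, .25, .15]))
```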
2017-08-13T01:23:28.510Z
2017-08-01T00:00:00.000
{ "year": 2017, "sha1": "11b65832aa32cfda869d75aea38d245d4a7c067d", "oa_license": "CCBY", "oa_url": "https://www.aclweb.org/anthology/S17-2127.pdf", "oa_status": "HYBRID", "pdf_src": "ACL", "pdf_hash": "11b65832aa32cfda869d75aea38d245d4a7c067d", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
267100163
pes2o/s2orc
v3-fos-license
Impact of Intermittent Fasting and/or Caloric Restriction on Aging-Related Outcomes in Adults: A Scoping Review of Randomized Controlled Trials Intermittent fasting (IF) and caloric restriction (CR) are dietary strategies to prevent and attenuate obesity-associated conditions and aging-related outcomes. This scoping review examined the cardiometabolic, cancer, and neurocognitive outcome differences between IF and CR interventions among adults. We applied a systematic approach to scope published randomized controlled trials (databases: PubMed, CINAHL Plus, PsychInfo, Scopus, and Google Scholar) from inception through August 2023. The initial search provided 389 unique articles, which were critically appraised. Thirty articles met the eligibility criteria for inclusion: 12 were IF, 10 were CR, and 8 were combined IF and CR interventions. IF and CR were associated with weight loss; however, IF studies tended to report greater adherence compared with CR. Overall, IF and CR were equivalently effective across cardiometabolic, cancer, and neurocognitive outcomes. Our findings suggest that IF has health benefits in a variety of conditions and may be better accepted and tolerated than CR, but more comparative research is required. Introduction Dietary fasting and energy deprivation have been ever-present evolutionary and historical experiences among human populations. Modern society has afforded food surplus and the potential for overconsumption, both associated with the rising prevalence of obesity and aging-related chronic diseases. Humans have evolved to undergo periods of food scarcity and involuntary fasting (i.e., periods of time without food) [1,2]. Food restriction and voluntary fasting have been widely practiced across many cultures throughout history for religious, medicinal, and traditional purposes [3,4]. Within the context of modern Western society, adapted practices of these recurrent human experiences have been formed, chiefly under the practice of intermittent fasting (IF) and/or calorie restriction (CR) [4,5]. While there are instances of irregular food availability and food insecurity, these are considered involuntary circumstances and do not necessarily involve caloric restriction [6]. Intermittent fasting, centered on eating time and frequency, involves voluntary abstinence from caloric consumption over specific periods of hours and/or days and does not necessarily involve calorie restriction [7,8]. CR entails an overall reduction in daily caloric intake, generally >20% less than a normative energy intake, and does not necessitate that intake occur during any specific time domain [5,[9][10][11]. While CR has long held a prominent standing in the fields of longevity and obesity prevention [5], CR at the recommended reduction in energy intake is likely not sustainable in the long term for most humans. IF has emerged as a viable and rapidly moving field, recognized as a dietary strategy and potential alternative to CR [12]. Therefore, the more recent exploration of IF in humans warrants significant transdisciplinary attention and evaluation in comparison with CR for similarities and differences in health and longevity domains. Such evaluations are particularly relevant as more randomized controlled trials (RCTs) are available to collate for systematic synthesis. The conceptual framework and mechanistic rationale of IF regimens differ substantially from CR.
IF establishes a predetermined timeframe of caloric consumption rather than counting or tracking calorie intake. While IF models can include a secondary emphasis on calories or macronutrients, it is not a requirement [7]; rather, it is the eating behavior (i.e., the timing of eating and fasting) that is paramount. While not appropriate for all populations (e.g., individuals with active/history of eating disorders, frailty, pregnancy, or advanced age), both IF and CR strategies are generally well-tolerated and demonstrate acceptable safety profiles [13,14]. However, IF may afford increased adherence and long-term sustainability [15]. IF is an umbrella term that includes many different regimens. Three general examples are (1) Prolonged nightly fasting (PNF), which promotes food intake during a specific interval of time that is in alignment with biological cycles of circadian rhythm, i.e., calorie consumption during waking hours and abstinence during the nighttime; (2) Alternate day fasting (ADF), which supports ad libitum energy intake on alternating days coupled with fasting days, i.e., no caloric consumption; and (3) Time-Restricted Eating (TRE), which dictates windows of specific time lengths allotted each day for eating and fasting (see Table 1, Review of Terminology). An individual typically has flexibility in selecting eating timeframes in a TRE protocol as long as the eating window is restricted and consistent. Other IF protocols have been more recently termed 'periodic fasting' (PF) and may involve fasting for several days (e.g., two to seven days) repeated once per month or heavy restriction of a specific macronutrient (i.e., protein) [8]. Collectively, these IF regimens have been implemented in RCTs that form a growing body of research suggesting that IF supports the modulation of favorable shifts in health outcomes.
Caloric restriction: Overall reduction in calories compared with normative energy intake, generally involving a reduction in energy intake >20% daily without malnutrition.
Intermittent fasting: Voluntary abstinence of caloric consumption over periods of hours and/or days.
Prolonged nightly fasting: Daily eating within a timeframe that is in alignment with the biological circadian rhythm (i.e., food/beverage caloric consumption during the active waking hours and abstinence during the nighttime).
Alternate day fasting: Ingestion of ad libitum energy intake on alternating days coupled with fasting days (i.e., no food/beverage caloric consumption).
Time-restricted eating: A specific, although flexible, window for daily timing restrictions on eating and fasting.
Periodic fasting: Fasting for several consecutive days (e.g., two to seven), repeated periodically (e.g., once per month), or heavy restriction of a specific macronutrient.
Both IF and CR have been reported as geroprotective and have been employed to buffer against cardiometabolic perturbation, cancer, neurocognitive decline, and various other ailments associated with obesity and aging, and may promote life extension [5,8]. For example, these practices may better maintain blood glucose and lipid metabolism [16,17], induce neurotrophic and autophagic responses [18,19], and increase the production of important metabolites (e.g., ketones and brain-derived neurotrophic factor) [20,21] that may promote reductions in oxidative stress and inflammation [22]. Such changes, over time, are expected to improve cardiometabolic status, cancer, and neurocognition.
As the number of RCTs investigating IF and CR has recently increased, the current review sought to better contextualize the current body of literature, combining outcomes into aging-related domains and mapping key comparisons for IF and CR. We implemented a scoping review approach with the aim of investigating the geroprotective domains of cardiometabolic, cancer, and neurocognitive outcomes. As a guide for study element mapping and data collation, our main questions were:
➢ What aging-related outcomes have been examined in RCTs of IF and CR?
➢ What are the within-study effects of IF and CR on cardiometabolic, cancer-specific, and neurocognitive outcomes compared with controls in adults?
➢ What are the differences in the effects of CR versus IF RCT interventions on neurocognitive, cardiometabolic, and cancer domain-specific outcomes in adults?
The limitations of this broad approach notwithstanding, utilizing a systematic process for both IF and CR provides a high-level perspective and an encompassing grasp of the current state of the human RCT literature. Such efforts are important in moving beyond narrative synthesis and establishing a more formalized framework for future systematic methods (i.e., meta-analyses) and future RCT construction. Protocol and Registration This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension for scoping reviews guidelines [23] (see Figure 1) and the recommendations of the Cochrane Collaboration in the preparation of this scoping review [24]. All methods used in relation to the research questions, search strategy and process, inclusion/exclusion criteria, and risk of bias assessment have been deposited and registered prior to the literature search on the Open Science Framework (https://osf.io/ns8am/, accessed on 25 April 2022). Data Search This comprehensive literature search was performed by a librarian on 12 and 13 May 2022 and was conducted using the following English-language databases: PubMed, Cumulative Index to Nursing and Allied Health Literature (CINAHL) Plus with full text, PsycInfo, Scopus, and Google Scholar. An updated search was also performed on 17-18 August 2023. The search included literature published from the time of journal indexing onward and was not limited by language; subject headings were substituted as appropriate. Listed below is the search strategy for Scopus; all other searches were based on these keywords: (TITLE-ABS-KEY ({healthy aging} OR {cognitive aging} OR aging OR "age associated disease" OR "age associated diseases" OR "age related disease" OR "age related diseases") AND TITLE-ABS-KEY ("intermittent fasting" OR {alternate day fasting} OR {time restricted feeding} OR {time restricted eating} OR {alternate day fasting} OR {prolonged overnight fasting} OR {periodic fasting} OR {metabolic switching} OR omad OR {one meal a day} OR {restricted diet} OR {calorie restriction} OR {caloric restriction} OR "low calorie diet" OR {restricted diet} OR "restrictive diet") AND TITLE-ABS-KEY ("randomized controlled trial" OR "randomised controlled trial" OR rct) AND NOT TITLE-ABS-KEY (animal* OR rats OR rat OR mouse OR mice OR rodent* OR dogs OR dog OR cat OR cats)). In addition, where appropriate, selected reviews and included RCTs were hand-searched to capture any articles missed during the database searches. All citations were sent to Zotero version 6.0.30 to check for retracted papers and then saved into Covidence™ (https://www.covidence.org; accessed on 24 June 2022) for future screening. At the full-text screening stage, any references without attached PDFs were searched and attached to Covidence by the librarian. Each citation was blind-screened by two reviewers at each screening level. Conflicts were decided by a third reviewer.
Eligibility Criteria Briefly, the population of interest was human adults. The interventions included in the current scoping review were any form of IF, CR, or a combination and/or comparison of both. Control groups were required to continue their regular eating and exercise habits. Finally, the outcomes of interest included cardiometabolic, cancer, and neurocognitive factors. Articles were required to be RCTs. Screening and Data Extraction Abstracts found during the search process were exported to Covidence, a web-based platform for managing systematic reviews, and screened for eligibility [D.L.J., A.E.M., and J.H.]; potentially eligible articles were read in full text and examined independently to determine if a given study met the predetermined criteria [D.L.J., N.A.H., A.E.M., and J.H.]. Any disagreements were discussed with the intent to resolve the issue(s) and reach a consensus; as needed, an additional author [D.D.S.] was consulted for further discussion and a final decision. Data were independently extracted [D.L.J., N.A.H., A.E.M., and J.H.] using a data table developed a priori. Data extracted included subject characteristics, study duration, intervention, study design, and outcomes. In line with the full-text selection process, the authors [D.L.J., N.A.H., A.E.M., and J.H.] double-checked that these extracted data were correct, and, further, any disagreements were discussed with the intent to resolve the issue(s) and reach consensus. Search Results A total of 30 articles were included in the current review, as outlined in the search and screening process in Figure 1. Briefly, the initial search provided 449 records from the PubMed, CINAHL Plus with full text, PsycInfo, Scopus, and Google Scholar databases; the updated search provided 129 records. After removing duplicate records from both searches (n = 189), 389 unique records were then screened in Covidence by title and abstract. Based on the defined inclusion criteria, 304 records were excluded. The remaining 85 articles were then full-text screened, excluding 39 and leaving 46 for critical appraisal. Finally, these articles were evaluated by critical appraisal, ultimately leaving a total of 30 articles for inclusion in the current scoping review. CR Interventions Overall, 10 of the included studies investigated CR with the use of a parallel-group design (Table 2). Of the ten included studies, eight reported cardiometabolic findings related to weight or fat loss following CR [28,29,[32][33][34]37,39,43], with the majority of studies reporting a reduction in body weight [28,[32][33][34]37,39,43]; three studies reported a reduction in overall body fat [28,29,32], four studies reported a reduction in adiposity or visceral fat deposits targeted in the abdomen [28,32,37,39], and one study reported a reduction in fat mass [37]. In brief, Weiss et al. (2006) conducted a 12-month CR study examining healthy, non-obese, sedentary individuals (n = 18) compared with an exercise training intervention (n = 18) and a healthy lifestyle control group (n = 10). Study results demonstrated that both the CR and the exercise groups had increased levels of adiponectin, a hormone released by adipocytes that aids with insulin sensitivity [40]. Tam et al.
(2012) conducted a 6-month RCT among healthy, overweight individuals comparing a CR intervention (i.e., 25% reduction in energy intake; n = 12) to a CR + exercise group (i.e., 12.5% reduction in energy intake + 12.5% increase in exercise energy expenditure; n = 12) and a control group (n = 11). Study results showed that the CR and CR + exercise groups had reduced circulating levels of leptin, another adipokine secreted exclusively by adipocytes [37]. Of the ten studies included, three had primary aims related to circulating concentrations of glucose and insulin [29,37,40]. A 12-month study employing CR among healthy, sedentary individuals found a decrease in glucose levels and insulin area under the curve (AUC) [40]. An increase in insulin sensitivity was reported in two CR studies [37,40]. A decrease in resting metabolic rate was found after a 3-month period of CR [51]. In a study including healthy, non-obese individuals, a decrease in C-reactive protein (CRP) was found after a 3-month period of CR [29]. A decrease in blood pressure was found in two studies with healthy individuals following a 3-month period of CR [28,39]. Pierce et al. (2008) conducted a 12-week RCT among 40 non-diabetic men and women (ages 21-69) who were overweight or obese, comparing a CR intervention group to an attention control group. An increase in brachial artery flow-mediated dilation was found in the CR intervention group [32]. Dengo et al. (2010) conducted a 4-week RCT among midlife and older adults (n = 16) who were overweight or living with obesity, comparing a weight loss intervention group to a control group on measures of arterial stiffness. Outcomes included a decreased β-stiffness index and carotid-femoral pulse wave velocity in the CR (i.e., weight loss) group [28]. Only one study that employed a CR regimen investigated cancer outcomes, using a fasting-mimicking diet (FMD) (i.e., low in calories, sugar, and protein; high in unsaturated fats) in 100 generally healthy individuals [39]. Participants in the intervention group were asked to consume an FMD for 5 consecutive days for a period of 3 months; comparatively, the control group consumed an unrestricted diet. Investigators found that the regimen reduced levels of insulin-like growth factor 1 (IGF-1), a hormone associated with several types of cancer [56,57], following three FMD cycles within a 3-month period. In relation, only one study investigated neurocognitive-related outcomes. Prehn et al. (2017) conducted an RCT among postmenopausal women with obesity (n = 19) comparing a CR intervention (i.e., low-calorie diet and negative energy balance) to a control group (i.e., no dietary changes) on primary outcomes of neurocognition. Study results indicated improved recognition memory, paralleled by functional connectivity to parietal areas through increased gray matter volume in the inferior frontal gyrus and hippocampus, and augmented hippocampal resting state in the CR group compared with the control group [43]. IF Interventions Of the IF studies, seven included a form of TRE, or time-restricted feeding (TRF) (Table 3). In brief, in an 8-week RCT, Cienfuegos et al. (2020) compared the effects of a 4-h feeding window to a 6-h feeding window and a control group (i.e., no mealtime parameters) on body weight and cardiometabolic risk factors. Post-intervention outcomes were comparable across the two TRF regimens with respect to reduced body weight, insulin resistance, and oxidative stress compared with the control group [27]. Lowe et al.
(2020) conducted a 12-week RCT with 116 adults who were randomized to either a TRE group eating ad libitum from 12:00 p.m. to 8:00 p.m. and abstaining from caloric intake outside that window, or a consistent-meal-timing group instructed to eat three meals per day. Results demonstrated that the TRE group had a significant decrease in weight and an increased lean mass index compared with the control group [30]. Hajek et al. (2021) also implemented a parallel-group design with three arms consisting of the 5:2 diet with and without behavioral support (target reduction of 500-600 kcal/day) and a control over a 12-month period. Both intervention groups achieved similar weight loss, though they suffered from high levels of attrition, with 56% completing the 5:2 diet with self-help and 45% completing the 5:2 diet only. Among a group of healthy, non-obese midlife and older adults, Martens et al. (2020) conducted a pilot randomized crossover trial where participants were randomly assigned to engage in 6 weeks of TRF (i.e., self-selected starting time for 10-11 h, required to maintain the same 8-h feeding window each day) or normal feeding [31]. Among the 24 study participants, TRF was evaluated to be highly adherent, safe, and well-tolerated. In a crossover RCT, Stote et al. (2007) reported a reduction in cortisol in participants who fasted apart from the consumption of one meal per day over a 6-month period (which did not include caloric restriction) compared with a "control diet" that included the same number of calories divided across three daily meals (i.e., breakfast, lunch, dinner) [35]. In a controlled feeding clinical trial, Sutton et al. (2018) randomized prediabetic men to an eTRF regimen (6-h feeding window; dinner before 3:00 p.m.) or a control group with a feeding schedule over a 12-h period (selected by the participant) [36]. Results from this study indicate that the eTRF group showed improved insulin sensitivity, blood pressure, oxidative stress, and appetite compared with the control group. In a 5-week RCT in 2022, Xie et al. examined two different TRE protocols (i.e., early TRF [eTRF], n = 30; and mid-day TRF [mTRF], n = 30) compared with a control group (n = 30) among healthy individuals living without obesity [49]. Study results indicate that eTRF was more effective at improving insulin sensitivity and fasting glucose while reducing body mass and adiposity, ameliorating inflammation, and increasing gut microbial diversity, as opposed to mTRF and/or the control group. In a 6-week study of 45 women ages 60 and older, Domaszewski et al. (2020) explored a 16:8 TRF intervention group (i.e., abstinence from food intake 16 h/day, from 8:00 p.m. to 12:00 p.m. (noon) the next day; n = 25) compared with a control group (n = 20) that was asked to follow an eating plan based on their previous habits [53]. Study results demonstrated that body weight in the TRF group decreased by ~4.4 pounds. Ezpeleta et al. (2023) conducted a 3-month RCT comparing ADF combined with exercise to ADF alone and exercise alone among adults (n = 80; ages 23-65; 81% female) living with obesity and nonalcoholic fatty liver disease [42]. Post-intervention results demonstrated that intrahepatic triglyceride, body weight, fat mass, and waist circumference were all significantly reduced in the ADF + exercise combination group compared with the control group. In a 2019 pilot RCT, Cho et al.
examined the effects of ADF and exercise on cholesterol among a group of adults (n = 112) living with overweight or obesity [54]. Study findings indicate that exercise, with or without ADF, improved cholesterol. A 2019 study by Stekovic et al. demonstrated that 4 weeks of ADF improved general health markers among middle-aged adults while also producing a 37% reduction in caloric intake. Further, ADF improved cardiovascular markers and reduced fat mass; these results indicate that ADF may have positive physiological impacts and is safe as a non-pharmacological intervention. No IF studies directly assessed cancer outcomes, though Bartholomew et al. (2021) examined neurocognitive outcomes but did not find a significant difference in BDNF or GCPi levels in participants who underwent a 24-h water-only fasting intervention one to two times per week over a 6.5-month period, compared with a control group that ate ad libitum [26]. The participants in the included IF studies were overweight or obese, except for a few instances where they were a healthy normal weight [31,35,55]. IF and CR Interventions Combined and Compared Overall, 8 studies combined and/or compared IF with CR (Table 4) [38,41,[44][45][46][47][48]50] and investigated cardiometabolic outcomes, with the exception of one study that validated mood and quality-of-life questionnaires, implemented by Teng et al. (2011). Notably, the two articles published by this research group employed Muslim sunnah fasting with a 300-500 kcal caloric restriction over the course of 3 months in a Malaysian population [45,46]. The participants were non-obese (i.e., BMI 23.0-29.9 kg/m²) and reported decreases in feelings of depression and increases in energy levels compared with the ad libitum control group [45]. In a larger study, Teng et al. (2013) noted overall cardiometabolic improvement (e.g., blood pressure, lipid profile, and decreased body fat) with significant increases in total rejoining of DNA cells and a decrease in damage to DNA cells and lipid peroxidation. Of these studies, two assessed various forms of IF (i.e., ADF) in patients with nonalcoholic fatty liver disease, reporting improvements in liver steatosis and fibrosis [47] and body weight [48]. Further, four studies assessed IF and CR regimens for 12 months or longer [38,41,44,50]. Briefly, Trepanowski et al. (2017) looked at ADF vs. CR (75% of energy needs every day) vs. a control group over a 12-month period. The first 6 months were a weight loss period, while the last 6 months were a weight maintenance period. Weight loss and other outcomes, such as blood pressure, heart rate, triglycerides, fasting glucose, fasting insulin, insulin resistance, C-reactive protein, and homocysteine concentrations at months 6 and 12, were similar between intervention groups, though the dropout rate was highest in the ADF group (38%). Schübel et al. (2018) used the 5:2 model, reducing energy intake two days per week to promote a 20% reduction in energy, compared with continuous calorie restriction and a control group (no advice to restrict energy). While both intervention groups lost body weight, the IF group did not exert stronger effects on the adipose tissue transcriptome, circulating biomarkers (of glucose metabolism, lipid metabolism, and inflammation, as well as adipokines and steroid hormones), body weight, or VAT and SAT volumes. Moreover, dropout rates were two times lower in the IF group compared with the CR group (i.e., ~8% vs. ~16%). Finally, Lin et al.
(2023) compared TRE (eating between noon and 8:00 p.m.) to CR (25% energy restriction) over a 12-month period. Notably, the TRE group had similar attrition (TRE: 13% vs. CR: 17%) and reduced energy intake and body weight compared with CR (TRE: −4.6 kg vs. CR: −5.4 kg). None of the included studies directly reported on cancer or neurocognitive outcomes. Discussion The current scoping review aimed to explore and understand the existing body of literature on IF and/or CR interventions by investigating their effects on aging-related domains, including cardiometabolic, cancer-specific, and neurocognitive outcomes. The review included 30 articles. Both CR and IF interventions demonstrated significant effects on body weight and fat reduction. IF interventions, including TRE, ADF, and other forms of IF, showed positive impacts on body weight and fat reduction [27,30,35,49,[52][53][54][55]. Similarly, studies implementing CR reported consistent reductions in body weight and fat mass [28,29,32,34,37,39,43]. These findings are in line with previous research suggesting the efficacy of both IF and CR interventions in promoting weight loss and managing adiposity [45][46][47][48]50]. Weight loss is a common goal for many individuals, and dietary strategies such as IF and CR have gained popularity due to their potential effectiveness in achieving weight loss. In addition, these dietary strategies may have other potential benefits, such as improving metabolic health and lifespan. It is important to note that, while not appropriate for all populations (e.g., individuals with active/previous eating disorders, frailty, pregnancy, or advanced age), both IF and CR strategies are generally well-tolerated and demonstrate acceptable safety profiles [13,14]. However, IF may support decreased attrition and long-term sustainability, especially when CR is employed at reductions used, for example, by members of the CR Society International, a nonprofit organization promoting CR to extend human longevity (~30% below calorie maintenance requirements) [15]. As an approach to weight loss, CR is far more well-recognized and is generally understood as the simple equation of consuming fewer calories than one normally would. As an alternative dietary strategy, IF may offer an approach to improve aging-related outcomes, including cardiometabolic, cancer, and neurocognitive outcomes. The majority of studies populated by the current review focused on cardiometabolic outcomes. Both IF and CR interventions exhibited improvements in blood lipids, glucose metabolism, and insulin sensitivity. IF interventions showed positive effects on blood lipids and glucose metabolism [31,35,49,52,54,55], while CR demonstrated reductions in LDL-cholesterol and the total cholesterol/HDL ratio [29]. These findings are promising, indicating the potential benefits of both IF and CR in managing cardiovascular risk factors. Moreover, several studies reported reductions in oxidative stress and inflammation following select IF and CR interventions. Lower levels of CRP were observed in participants engaging in IF or CR [29]. The neurocognitive effects of IF and CR were assessed in a limited number of studies. Both interventions may have potential benefits for neurocognitive health. One study found that CR led to improved recognition memory and altered functional connectivity in specific brain regions [43]. These preliminary findings warrant further investigation into the impact of IF and CR on brain health, cognitive function, and the potential attenuation of neurodegenerative diseases.
Adherence to long-term CR regimens was generally reported to be low, indicating that CR may not be sustainable for most individuals over extended periods. On the other hand, IF interventions demonstrated increased adherence and long-term sustainability, suggesting that IF may be more feasible as a continued and lasting dietary strategy for some populations. Sustainable dietary approaches are crucial for long-term health benefits, and the potential of IF in this regard warrants additional attention. Indeed, as the classic CR study conducted by Ancel Keys in the 1940s reported, depression increases following substantial restriction (i.e., ~45% from baseline maintenance) [58], which may contribute to reduced adherence and long-term sustainability. Studies combining and/or comparing IF and CR interventions demonstrate promising results on cardiometabolic outcomes, especially in populations with specific health conditions such as NAFLD [47,48]. However, more research is needed to fully understand the feasibility and efficacy of combined IF and CR approaches for varied populations and select health outcomes. The current study provides a comprehensive scoping review of the existing RCT literature on IF and/or CR interventions in adult populations, including cardiometabolic, cancer, and neurocognitive outcomes. However, there are several acknowledged limitations. First, the broad approach adopted in this review may have introduced potential bias in the study selection process and could have influenced the heterogeneity of interventions and outcomes studied. To address these limitations, future research should focus on high-quality RCTs to support the number of studies per outcome and the diversity in methods. Additionally, longer-term studies with larger sample sizes and more diverse populations are necessary to assess the generalizability and sustainability of these dietary strategies. Furthermore, a notable paucity of data in the current scoping review limits the overall findings related to cancer and neurocognitive outcomes. Despite the comprehensive nature of this scoping review, some recent relevant studies may not have been included, and there is a possibility of publication bias affecting the results. Limited data availability resulted in highly heterogeneous data, which is reflective of the current state of the science regarding studies with human participants engaged in IF and/or CR interventions. Moreover, the scarcity of RCTs comparing IF and/or CR interventions to appropriate control groups imposed limitations on the number of applicable studies for the current review. To overcome the noted limitations, future research should focus on conducting additional RCTs that directly compare IF and CR interventions with appropriate control groups. These studies should aim to include larger and more diverse populations, and longer-term follow-up should be incorporated to assess the adherence, sustainability, and long-term effects of these dietary strategies. Furthermore, efforts should be made to reduce heterogeneity among interventions and outcomes through standardized protocols.
Conclusions In conclusion, the current scoping review highlights the potential geroprotective effects of IF and CR on cardiometabolic, cancer, and neurocognitive outcomes. Both IF and CR protocols show promise in improving weight loss, blood lipids, glucose metabolism, and insulin sensitivity and in reducing oxidative stress and inflammation. While CR has been extensively studied in obesity, IF is a relatively new and understudied dietary strategy that warrants further attention. The findings of this review emphasize the need for more RCTs and robust methodological approaches to better understand the potential mechanisms and long-term effects of IF and/or CR on aging-related outcomes. Ultimately, this research …
Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram for the scoping review screening process [25].
[Table 2 excerpt: Fontana et al. (2007) conducted a 1-year RCT in 48 non-obese individuals who were assigned to one of three study groups: (1) 20% CR (n = 18); (2) 20% increase in energy expenditure through exercise; and (3) a healthy lifestyle guideline control group. Results from this study indicated that both intervention groups yielded a decrease in the HOMA-IR index and levels of LDL-cholesterol and the total cholesterol/HDL-cholesterol ratio [29]. Coutinho et al. (2017) conducted a 12-week RCT among 35 individuals with obesity comparing two CR groups: (1) continuous energy restriction (n = 18) and (2) intermittent energy restriction (n = 17), on outcomes of body composition and weight-loss-induced compensatory responses.]
Table 1. Review of Terminology.
Table 2. Characteristics of included studies that implemented calorie restriction interventions.
Table 3. Characteristics of included studies that implemented intermittent fasting interventions.
Table 4. Characteristics of included studies that implemented calorie restriction and intermittent fasting interventions.
Comparison of methods to estimate water-equivalent diameter for calculation of patient dose

Abstract Modern CT systems seek to evaluate patient-specific dose by converting the CT dose index generated during a procedure to a size-specific dose estimate using conversion factors that are related to patient attenuation properties. The most accurate way to measure patient attenuation is to evaluate a full-field-of-view reconstruction of the whole scan length and calculate the true water-equivalent diameter (D_w) using CT numbers; however, due to time constraints, less accurate methods that estimate D_w from patient geometry measurements are more widely used. In this study we compared the accuracy of D_w values calculated from three different methods across 35 sample scans and compared them to the true D_w. These three estimation methods were: measurement of the patient lateral dimension from a pre-scan localizer radiograph; measurement of the sum of anteroposterior and lateral dimensions from a reconstructed central slice; and using CT numbers from a central slice only. Using the localizer geometry method, 22 out of 35 (62%) samples estimated D_w within 20% of the true value. The middle-slice attenuation and geometry methods gave estimations within the 20% margin for all 35 samples.

| INTRODUCTION

The volumetric Computed Tomography Dose Index (CTDI_vol) provided by CT scanners is a calculated quantity representing the dose delivered to a standardized, homogeneous calibration phantom of a specified size based on the CT parameter settings used during the scan. 1 Because the CTDI_vol does not account for an individual patient's size or attenuation properties, it is not a direct measurement of the absorbed dose delivered to a patient. 2 To address this, the size-specific dose estimate (SSDE), which modifies CTDI_vol using a factor related to patient size, was introduced by the American Association of Physicists in Medicine (AAPM) in 2011. 3 As part of this effort, the AAPM Task Group 204 developed size-specific conversion factors (k) to better estimate patient radiation absorption properties and size-specific doses. These conversion factors are multiplied by CTDI_vol to obtain the SSDE. Members of AAPM Task Group 220 4 further developed the technique by using the attenuation of x rays through the body, as measured by the CT scanner, to calculate the patient water-equivalent diameter (D_w), the diameter of a cylindrical volume of water with equivalent mean attenuation. D_w is a more precise metric of body size for the selection of a conversion factor because it accounts for radiation absorption directly by using attenuation information. 5,6 D_w may be calculated directly from a full-field-of-view reconstruction, 7 or estimated using the geometric measurement methods of TG204. Geometric estimation requires the use of additional corrections based on the body region scanned to account for differences in attenuation of abdominal and thoracic anatomy. Calculation of D_w using reconstructed attenuation values is more patient-specific and uses data directly relevant to the metric of interest. It is therefore the preferred method for determining the appropriate conversion factor. 4 The reconstructed region is ideally the full scan range, though Leng et al. showed that D_w calculation from a central slice can be an acceptable substitute. 8 Anam et al.
demonstrated that a fully automated image processing and D_w calculation method can match manual calculation across a range of scan regions in both phantoms and human patients. 9 The purpose of this paper is to compare D_w values from three estimation methods to a reference standard D_w value calculated using a full-field-of-view, full scan range reconstruction. The first uses the patient lateral dimension measured from a pre-scan localizer radiograph; the second uses the sum of the anteroposterior and lateral dimensions measured from a reconstructed central slice; and the third uses CT numbers from a central slice only.

2.A | Selection of image data sets

CT scan data from 35 sets of anonymized patient scan images were used to test each calculation method. Of these data, 18 were abdomen scans and 17 were thorax scans. The selected scans had a localizer radiograph with no truncation of tissue. Sets of axial "noncontrast" or "soft tissue" slices were used for attenuation analysis. Slices were mostly from full-field-of-view reconstructions, with a few exceptions that had a small amount of skin truncation.

2.B | Calculation and comparison of D_w values

Calculation of D_w values using the whole scan range and center-slice attenuation methods, as well as the center-slice geometry method, was carried out by scripts written in MATLAB (Natick, MA). D_w values from a localizer geometry method developed by Philips Healthcare (Andover, MA) were also compared. Each method involved the use of an edge detection image analysis algorithm to separate patient anatomy from background structures such as the table and padding. In each MATLAB script, Sobel edge detection was used, as shown in Fig. 1. The threshold for determining the pixel value difference that defined the outside of the patient was varied by data set, and edge detection was inspected visually to confirm that it matched the visual border. The localizer analysis process was not available, and proper edge detection could not be confirmed.

2.B.1 | Implementation of attenuation measurement methods

Each pixel in a reconstructed image contains information on the attenuation (attenuation coefficient μ) of x rays through the corresponding volume in the form of a CT number. AAPM TG220 4 outlines a method for using these data to calculate D_w, which is an ideal metric for estimating a patient's radiation absorption properties because it uses the patient's attenuation information directly. CT numbers are defined relative to the attenuation coefficient of water, so they can be used to calculate the cross-sectional area (A_w) of a cylinder of water with average attenuation equivalent to that of the body in the analyzed slices [eqs. (1) and (2)]:

A_w = Σ_ROI [μ(x, y)/μ_water] · A_pixel = [mean(CT(x, y))/1000 + 1] · A_ROI    (1)

D_w = 2 · sqrt(A_w/π)    (2)

In these equations, μ_water and μ(x, y) are the attenuation coefficients of water and of the tissue in the voxel denoted by the coordinates (x, y) of the slice, respectively. A_pixel is the area of one pixel, recorded in the DICOM data, and A_ROI is the area of the region of interest, determined by image analysis. CT(x, y) is the CT number of voxel (x, y), and mean(CT(x, y)) is the average CT number value of the slice. D_w is then related to a correction factor using tables from AAPM Task Group 220. 4 The reported patient D_w is the average D_w of all slices of the desired analysis range.

Full scan range attenuation measurement

The reference method, to which the three other estimation methods were compared, used attenuation data from all slices along the full scan length. Using complete attenuation information from the whole scan provided the most accurate D_w value. The D_w of each slice was calculated as above, and the reference patient D_w was taken as the average over all slices.

The processing of each image using the localizer geometry method was not observed in this study. Only the initial localizer image and the final D_w value were available.
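As an illustration of eqs. (1) and (2), a minimal sketch of the per-slice attenuation calculation, together with a TG204-style geometric effective diameter, is given below (in Python rather than the MATLAB used in the study). The fixed boolean mask here stands in for the Sobel edge detection described above and is an assumption of the sketch, not a reproduction of the study's image analysis.

import numpy as np

def slice_dw(slice_hu, pixel_area_mm2, mask):
    """Water-equivalent diameter of one reconstructed slice, per eqs. (1)-(2).

    slice_hu: 2D array of CT numbers (HU); mask: boolean array marking pixels
    inside the patient boundary; pixel_area_mm2: pixel area from the DICOM data.
    """
    mean_hu = slice_hu[mask].mean()         # average CT number over the ROI
    a_roi = mask.sum() * pixel_area_mm2     # area of the region of interest
    a_w = (mean_hu / 1000.0 + 1.0) * a_roi  # water-equivalent area, eq. (1)
    return 2.0 * np.sqrt(a_w / np.pi)       # D_w, eq. (2)

def patient_dw(slices, pixel_area_mm2, masks):
    """Reported patient D_w: the average of the per-slice values."""
    return float(np.mean([slice_dw(s, pixel_area_mm2, m)
                          for s, m in zip(slices, masks)]))

def effective_diameter(ap_mm, lat_mm):
    """TG204-style geometric size metric from AP and LAT dimensions. Note the
    study's central-slice geometry method used the AP + LAT sum to index the
    TG204 tables; the root-product effective diameter is the related metric."""
    return np.sqrt(ap_mm * lat_mm)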
2.C | Comparison of methods

Estimation method D_w values and the reference D_w obtained using the full scan attenuation data were compared directly using the mean difference (signed) and mean absolute difference (positive). The distribution of each set of differences was compared using a nonparametric two one-sided test of equivalence (TOST) adopted from Mara et al. 11 and carried out using Microsoft Excel. A TOST does not assume no difference as the null hypothesis, but rather presents the burden of proving equivalence. TOSTs can also indicate whether a method has a bias upward or downward compared to the reference method. The TOST provides left-side and right-side z-scores, for which values greater than 2.58 correspond to significant results (P < 0.01). Both sides must show significance to conclude equivalence within a specified margin. Task Group 220 stated that D_w calculation using attenuation data from a localizer radiograph should be within 20% of the reference value. 4 We use this 20% margin as the basis for comparison between estimated D_w values and the reference value, as well as for the equivalence margin of the two one-sided test (a sketch of this test appears at the end of this paper).

4.A.2 | Availability of complete attenuation information

In clinical situations, a physician may only require reconstruction of a small internal region. When a reconstruction of the patient along the whole scan range is not available, the method of calculating D_w from attenuation data will not yield an accurate result. Using the attenuation data from an incomplete reconstruction will produce a smaller D_w and overestimate the final SSDE value. A localizer radiograph image of the full scan area is always available, so a method that accurately estimates patient size using this image would not be subject to truncation issues. Any D_w estimation method that averages data or uses a smaller data set to represent the whole is susceptible to inaccuracy when analyzing a patient with large variations in anatomical shape along the scan range. A method that measures one to three central slices is especially vulnerable to this error, so caution should be taken when dealing with such cases.

4.B | Impact of measurement inaccuracy on SSDE

Error in SSDE increases with the error of the measured value used to determine the CTDI_vol-to-SSDE conversion factor. However, conversion factors given by tables in TG220 scale differently depending on the dimension that is considered. Figure 6 shows how the percent difference between consecutive conversion factors changes with the value of the measured dimension. These differences accumulate when measurement error is >1 cm. Potential errors under 10% may not be worth considering in a clinical setting, but methods that can incorrectly measure patient dimensions by many centimeters introduce substantial error in dose estimation.

| CONCLUSIONS

This study demonstrates that D_w estimation methods using the geometry or attenuation data from a central, full-field-of-view reconstructed slice consistently produce results within 20% of their reference values and comply with the guidelines set by TG220. However, the localizer radiograph geometry method resulted in a considerable number of scans (13 of 35, 37%) that deviated by more than 20% of the reference value. The authors suggest that edge detection methods employed by localizer radiograph geometry methods should be fully evaluated prior to implementation in a clinical setting.
Although the methods using reconstructed images are more accurate, full-field-of-view reconstructions are not always available, and incomplete reconstructions can lead to SSDE overestimation. Localizer geometry-based methods do not calculate D_w exactly, but a proper implementation could produce sufficiently accurate dose estimates using the localizer radiograph, which is already available, while avoiding outliers caused by varying reconstruction practices. This is especially true for smaller patients, since there is less variation in SSDE conversion factors at low values of the lateral dimension when it is the sole metric for patient size. When both AP and LAT localizer images are available, a localizer geometry analysis method could meet or exceed the accuracy of a central-slice geometry method, which is already highly accurate for patients with uniform anatomy.

CONFLICTS OF INTEREST

Andrew Daudelin was a paid intern of Philips Healthcare (Andover, MA), which provided data from their localizer radiograph analysis method, during this study. Chris Martel is an employee of Philips Healthcare.
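As a supplementary illustration, the equivalence-testing logic of Section 2.C can be sketched in a few lines. This is a simplified parametric analogue on relative differences, assuming they are approximately normal; the study itself used a nonparametric TOST adapted from Mara et al. and implemented in Excel, so this sketch is illustrative rather than a reproduction of that analysis.

import numpy as np
from scipy import stats

def tost_equivalence(estimated, reference, margin=0.20):
    """Two one-sided test (TOST) of equivalence on relative D_w differences.

    estimated, reference: paired arrays of D_w values (one pair per scan).
    margin: equivalence margin as a fraction of the reference (20% per TG220).
    Returns left- and right-side z-scores and the overall one-sided p-value;
    both z-scores must exceed 2.58 (P < 0.01) to conclude equivalence.
    """
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(reference, dtype=float)
    rel = (est - ref) / ref                       # relative differences
    se = rel.std(ddof=1) / np.sqrt(rel.size)      # standard error of the mean
    z_left = (rel.mean() + margin) / se           # H1: mean difference > -margin
    z_right = (margin - rel.mean()) / se          # H1: mean difference < +margin
    p = max(1.0 - stats.norm.cdf(z_left), 1.0 - stats.norm.cdf(z_right))
    return z_left, z_right, p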
Short-term disability progression in two multiethnic multiple sclerosis centers in the treatment era

Background: Short-term disease progression is well documented in clinical trials, but there are limited published data on disease course in real-life practice. Methods: Patient-derived Multiple Sclerosis Severity Score (PMSSS), a disease severity rank score, was computed at each visit for consecutive MS patients attending two large, ethnically diverse MS centers in the New York metropolitan area. Disability was assessed via Patient-Determined Disease Steps (PDDS). Clinicians recorded disease subtype and relapse status at each visit, but did not rate disability. PMSSS change from the first to the last visit was calculated for the cohort as a whole and for subgroups of interest. Multivariable regression models were constructed for predicting final PMSSS based on readily available predictor variables collected at the initial visit and relapse history during follow up. Results: A total of 1740 consecutive patients from New York University (n = 1079) and Barnabas (n = 661) MS Care Centers were included. During follow up (mean 2.4 ± 0.82 years, range 1–4 years), mean PDDS score increased from 1.9 ± 2.2 to 2.3 ± 2.2 (p < 0.0001), while PMSSS remained roughly unchanged (initial PMSSS = 3.71 ± 2.73, last PMSSS = 3.81 ± 2.76, paired t test, p = 0.28). The only major predictor of final PMSSS was the initial PMSSS. Demographic variables (age, sex, race) and relapse status did not predict final severity score. Conclusions: Baseline disability in two MS clinics was much lower than in the reference population from which PMSSS was derived. We observed no discernable slowing of disability accumulation during the short-term follow up in our cohort compared with the reference cohort. Overwhelmingly, the most important predictor of final disease severity rank score was the initial disease severity rank score.

Introduction

Short-term multiple sclerosis (MS) progression in clinical trials is well documented, 1,2 but there is a paucity of published data on disease progression in real-life contemporary practice. These data are needed to answer the question of whether there has been a discernable slowing of disability progression in the contemporary MS clinic population. To address this question, one needs a practical tool for assessing disease progression in the clinical setting. The Patient-derived Multiple Sclerosis Severity Score (PMSSS) is such a tool. 3 PMSSS is a decile rank of the Patient-Determined Disease Steps (PDDS) among patients with similar disease duration in the North American Research Committee on Multiple Sclerosis (NARCOMS) registry. Determining PMSSS places minimal demands on the patient or the clinician: the patient records their disability on a single-question PDDS questionnaire, while the clinician reads out the PMSSS corresponding to the patient's PDDS and disease duration from the published reference table. 3 PMSSS could thus be realistically obtained for nearly all patients with MS without compromising clinical operation. We used PMSSS to track disease progression in consecutive patients with MS attending two large, ethnically diverse MS centers and identified predictors of the final PMSSS based on readily available variables collected at the initial visit and relapse history during follow up.
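Operationally, deriving the PMSSS is a table lookup, as in the minimal sketch below. The table values shown are illustrative placeholders, not the published NARCOMS reference data, and the sextile grading assumes the rank scale runs from 0 to 10 and is split into six equal intervals, as described in the Methods.

# (disease duration in years, PDDS score) -> severity rank; placeholder values.
REFERENCE_TABLE = {
    (5, 1): 3.4,   # placeholder, not a published value
    (10, 1): 2.7,  # placeholder, not a published value
    (10, 4): 6.1,  # placeholder, not a published value
}

def pmsss(disease_duration_years: int, pdds: int) -> float:
    """Look up the severity rank score for a PDDS score and disease duration."""
    if not 0 < disease_duration_years <= 45:
        raise ValueError("PMSSS is defined only for disease durations up to 45 years")
    return REFERENCE_TABLE[(disease_duration_years, pdds)]

def severity_grade(score: float) -> int:
    """Sextile severity grade (1-6), assuming a 0-10 rank scale divided into
    six equal intervals."""
    return min(int(score // (10 / 6)) + 1, 6)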
Methods

We included consecutive patients from the urban New York University (NYU) MS Care Center in New York, NY, USA and the suburban Barnabas MS Care Center in Livingston, NJ, USA who were evaluated between June 2010 and November 2016. All patients were diagnosed with MS by their treating neurologists (2010 McDonald criteria) 4 and completed two or more self-rated disability assessments (PDDS) more than one year apart. PDDS is a freely available, self-reported eight-point scale that measures global neurological impairment in MS. 5 PDDS correlates strongly with the Expanded Disability Status Scale (EDSS), the 'gold standard' of disability assessment in MS. 6,7 We required that the patient's disease duration at the time of the last visit be less than 45 years, as PMSSS can only be calculated for disease durations up to 45 years. At each visit, baseline and follow up, the treating neurologist documented disease subtype and recorded whether the patient had a relapse within 3 months of the visit. The study received an exemption determination from the institutional review boards (IRBs) of NYU Langone Medical Center (New York) and Barnabas Medical Center (Livingston, NJ). No informed consent was required by the IRBs as this was a retrospective study. In order to meet the IRB exempt review status, we excluded patients younger than 18 years old and those who could not follow written instructions in English. PMSSS was computed for each patient visit using the published reference table. 3 In addition, we assigned each patient to their respective 'severity grade', as described previously. 8 In brief, the PMSSS scale is divided into six equal grades, and each grade (sextile), by design, comprises around one sixth of the reference NARCOMS population. The six-grade classification allows for an easy comparison between distributions of severity scores in our clinic populations and the reference population. Initial and final PMSSS scores for the cohort as a whole and subgroups of interest were compared using t tests. Linear multivariable regression was conducted predicting final PMSSS from age, sex, race (white versus African American versus Hispanic versus other), duration of follow up, relapse status (yes/no relapse during follow up), and interaction terms of initial PMSSS × disease duration and of relapse status (yes/no) × age × sex. Since our aim was to assess longer-term effects of relapses on PMSSS rather than their immediate impact, we excluded from the model any PMSSS measurements taken within 3 months of a relapse. All analyses were carried out using JMP and SAS software; p < 0.05 was considered statistically significant.

Results

Baseline characteristics

A total of 1740 consecutive patients from the NYU (n = 1079) and Barnabas (n = 661) MS Care Centers met our inclusion criteria. Demographic characteristics for each center as well as for the combined cohort are shown in Table 1. Compared with Barnabas, NYU patients were slightly younger (mean age 44 versus 46 years), less likely to be female (72% versus 77%) and more ethnically diverse (white patients comprised 52% of those at NYU versus 74% at Barnabas). The two centers were similar with respect to disability: median PDDS in each center was 1, corresponding to 'mild disability', and the percentage of patients with ambulatory assistance (PDDS >3) was around 25% in both centers.
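For illustration, the regression specified in the Methods can be sketched with statsmodels formula syntax; the study itself used JMP and SAS, and the data-frame column names here are assumptions of the sketch, not the authors' actual dataset fields.

import pandas as pd
import statsmodels.formula.api as smf

# One row per patient with baseline variables and the final severity score.
df = pd.read_csv("pmsss_cohort.csv")  # hypothetical file name

# Final PMSSS predicted from baseline PMSSS, demographics, follow-up duration,
# relapse status, and the interaction terms named in the Methods.
model = smf.ols(
    "final_pmsss ~ initial_pmsss + age + C(sex) + C(race) + followup_years"
    " + C(relapse) + initial_pmsss:disease_duration + C(relapse):age:C(sex)",
    data=df,
).fit()
print(model.summary())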
Identities of disease-modifying therapies (DMTs) at baseline were available for patients from the NYU MS Center only and were as follows: 20% infusible medications (natalizumab, rituximab, alemtuzumab); 29% oral agents (fingolimod, dimethyl fumarate); 27% first-line injectables (interferon β and glatiramer acetate); 22% no DMTs; and the remaining 3% nonapproved or 'unknown' therapies. DMTs were not recorded on subsequent visits in the database, so duration on therapy could not be estimated. Initial DMTs were not collected for Barnabas patients, but would be expected to parallel the NYU experience, as practice patterns were similar among physicians in the two centers. The percentage of patients in each of the sextile severity grades at the initial visit is shown in Figure 1 (the dotted line represents the expected percentage based on the NARCOMS population). Distributions of patients across the severity grades were similar at NYU and Barnabas. Both centers had notable overrepresentation of patients in the two milder sextile grades (1 and 2): the combined total in our clinics was 56% (versus 33% in NARCOMS); and underrepresentation of patients in the two most severe grades (5 and 6): a combined total of 20% (versus 33% in NARCOMS).

Longitudinal follow up

Mean duration of follow up was 2.4 ± 0.82 years, and over 99% of patients had follow up between 1 and 4 years. Mean PDDS score at the initial visit was 1.9 ± 2.2. The final PDDS score was higher than the initial score, 2.3 ± 2.2, p < 0.0001. Mean PMSSS rank score for the cohort was similar at baseline (3.71 ± 2.73) and last follow up (3.81 ± 2.76; t test, p = 0.28). Figure 2 shows the distribution of final severity grades stratified by the initial severity grade. In total, 51.3% of patients stayed in their original severity grade at the last follow up, 86.9% of patients were within one grade of their original severity grade and 96.1% were within two grades of their original grade. Of the patients in the 'mild MS' group (first sextile) at baseline, 68% remained in the mild sextile at last follow up and 20% moved up to the second severity sextile. Of the patients with 'aggressive MS' (sixth sextile), 76% remained in the sixth sextile and 17% moved down to the fifth severity sextile. We also compared patients whose PMSSS had increased over the period of observation (accelerated accumulation of disability, N = 607) with patients whose PMSSS had decreased (slowing in accumulation of disability, N = 1133). The two groups were similar with respect to age, percent female, and percent white, but relapses were more frequent in those whose PMSSS increased (21.6%) compared with those whose PMSSS declined (15.9%, rate ratio 1.36, p < 0.0001).

Predictors of final PMSSS: a multivariate regression model

We constructed a multivariable ordinal regression model with final PMSSS as an outcome variable, with age, sex, race, initial PMSSS, relapse status and duration of follow up as predictor variables. The single most important predictor of final PMSSS in our model was the initial PMSSS (p < 0.0001). Initial PMSSS by far dominated all other predictor variables, explaining 66% of the variance, while the additional variables contributed less than 1%. Neither age nor relapse during follow up (and <3 months of the assessment) was predictive of the final PMSSS. However, the age × relapse interaction term was a significant predictor of the final score (p < 0.0025). The impact of a relapse was greater the younger the patient.
This effect was driven by women (the interaction term age × sex × relapse was only significant for women and not for men). The interaction term initial PMSSS × duration of follow up was not significant (p = 0.3139), implying that the lack of change in PMSSS was unlikely to be due to differential follow-up times among our patients. We also modeled 'two-grade increase' (33.3% increase in PMSSS) as a categorical outcome variable in a logistic regression model that used the same predictor variables as the linear model of the PMSSS ordinal data. Initial PMSSS remained the most important predictor of two-grade increase. Age, relapse status, and the relapse × age term were not significant predictors, while African-American race was associated with higher odds of a two-grade increase.

Discussion

Distributions of patients into sextile grades in the two MS centers were similar to each other but milder compared with the reference MS population. 3 Only one in four patients in our centers needed an assistive device for ambulation. These data are in line with contemporaneous reports that document low disability in patients attending MS clinics. 9,10 The reference population, on the other hand, derives from a longitudinal registry and is subject to cohort effects from earlier decades of diagnosis. During a mean follow up of 2.4 years, the average PDDS disability score for the cohort increased from 1.9 to 2.3, while the severity score remained largely unchanged for both centers. The lack of change in PMSSS implies that disability accumulation in our patients proceeded as would be expected for patients with similar baseline scores in the reference NARCOMS population. 11 How do we reconcile the fact that baseline severity scores in our cohort were much lower than in NARCOMS and yet there was no evidence of a slowing of disability accumulation over the short term compared with the NARCOMS population? One plausible explanation is that the follow-up period in our study was insufficient to detect a downward beneficial change in disease trajectory, which would only become apparent over a longer timescale. Indeed, a recent model showed lower disability (EDSS) scores in a treated cohort compared with what would be expected in natural history studies, but the effect was apparent only after 6 years of treatment. 12 Milder disability at baseline also makes it more challenging to detect potential treatment benefit due to a 'floor effect'. Milder disability in a contemporary setting could also be partly due to 'stage migration', wherein people who would not have been classified as having MS based on the clinical Poser criteria are now so classified using the less restrictive McDonald criteria at an earlier stage of the disease. 13 Finally, a learning effect with the PDDS scale is a potential bias, with greater accuracy on subsequent administrations, but this seems unlikely as PDDS data collected in our clinics yielded expected results; for example, higher scores in patients with progressive disease versus relapsing disease, and higher scores in patients of African descent compared with white patients, as shown previously. 14 A lack of improvement in severity rank scores in our cohort contrasts with decreases in severity rank scores seen in clinical trials of two highly effective DMTs, natalizumab 15 and alemtuzumab. 16 These discrepant results may be partly due to differences between patients enrolled in these trials versus an unselected clinic population.
Patients who participated in the trials of these high-efficacy agents had relapsing disease and above-average inflammatory disease at enrollment. Our clinic patients, however, are much more like the MS population as a whole: they represent all disease subtypes; have had disease for variable time periods (on average, for a decade or more); and most were receiving a variety of DMTs. Moreover, patients enrolled in the clinical trials are required to have recent disease activity, so some diminution of disease activity in the trials is expected due to the regression-to-the-mean phenomenon, which is independent of drug effect. 17 The large sample size of our cohort allowed us to compare baseline disease severity and disability progression in several subgroups of interest. As expected, baseline severity rank scores were highest in patients with progressive disease subtype and those needing assistance to ambulate. Among racial/ethnic subgroups, African Americans had the highest baseline severity scores, followed by Hispanic Americans, followed by white Americans, in agreement with our prior analyses. 18 Interestingly, no change in severity scores was observed for any of the subgroups of interest, including patients with relapses 3 months or more from the last visit. Patients with progressive disease showed a trend for worsening PMSSS with time. Multivariable regression analysis identified initial PMSSS as by far the most robust predictor of final PMSSS. This may be due, in part, to statistical considerations: regression to the mean and the fact that the change in PMSSS, and therefore the final PMSSS, is determined, in part, by the initial PMSSS. From a clinical point of view, it is remarkable that the initial PMSSS explained 66% of the variance in the final PMSSS, while all additional factors accounted for less than 1% of the variance. Age and other demographic factors were not significant predictors of final PMSSS in our regression model, but if the initial PMSSS was omitted from the model then older age and duration of follow up became significant predictors of higher final PMSSS (data not shown). These data imply that the known, modest predictors of MS course, such as male sex, older age at onset, early sphincter involvement, or even progressive-from-onset forms of the disease, 19 may not be nearly as important for prognosis as the current severity score, at least for short-term prognostication. We observed that the interaction of age × relapse status had a small, but significant impact on the final PMSSS score, implying that when relapses occurred in younger persons with MS, they tended to have a greater impact on the severity rank score. This is in line with a prior study that found that relapses occurring later in the disease course, especially after the onset of the progressive phase, have little or no impact on the accumulation of disability. 20 Interestingly, the age × relapse interaction term was only significant for women and not for men. Perhaps this was partly due to the fact that a smaller proportion of older men experienced relapses during follow up (8.6% of those who were >45 years old versus 10.7% of women), and relatively more men were in the progressive phase. The strength of our study is the use of two large clinic-based patient populations, which allowed us to check for reproducibility of findings. Notwithstanding, the clinic population may underrepresent some subgroups of patients, such as untreated patients, older patients or bedbound patients.
Moreover, though we made an effort to collect data from every patient, we inevitably missed some patients who did not wish to, or were not able to, respond to the disability questionnaire. Another limitation of the study is the exclusive reliance on patients for disability rating; the clinician's rating of disability was not recorded. We, and others, have shown that PDDS correlates highly with EDSS, 6,7 yet patients' self-assessment may not always agree with that of the physician due to cognitive impairment or misattribution of debility from non-MS causes to MS. Finally, PDDS, like the EDSS, is a scale that is heavily weighted toward ambulation and does not reflect
Heavy Metal (Pb, Hg) Contained in Blue Swimming Crab (Portunus pelagicus Linnaeus, 1758) in Cengkok Coastal Waters, Banten Bay, Indonesia

The increasing number of industries and settlements in Banten Bay has been followed by an increase in the amount of waste, whether solid, liquid or gas, that can pollute the environment. One group of toxic pollutants is heavy metals. The heavy metals mercury (Hg) and lead (Pb) enter the body of the crab (Portunus pelagicus) through the digestion of food. This study was conducted for 6 months, from March to August 2019, and aimed to analyze the heavy metal content levels (Pb and Hg) and the safe consumption level of the blue swimming crab (Portunus pelagicus) in these waters. The heavy metal concentration in the meat was measured using an AAS (Atomic Absorption Spectrophotometer) AA 7000 series Shimadzu. The analysis showed that the Pb and Hg contained in the blue swimming crab were still under the quality standards. Also, the bio-concentration factors of the blue swimming crab were low (<100). Water quality data observed, namely temperature, salinity, TSS, pH, dissolved oxygen, turbidity, and transparency, stayed in the range of tolerable limits for the survival of marine organisms. The maximum weekly intake calculation refers to the tolerable limits issued by the Joint FAO/WHO Expert Committee on Food Additives (JECFA). The JECFA recommends a provisional tolerable weekly intake (PTWI) for each heavy metal that accumulates in the human body: 1.6 μg.kg bw-1.week-1 for methylmercury and no more than 25 μg.kg bw-1.week-1 for lead. The safe consumption level of blue swimming crab from Cengkok Coastal waters was 2.3 kg of meat.week-1 (for adults) and 0.6 kg of meat.week-1 (for children).

Introduction

Banten Bay is a large, shallow (<20 m) embayment of about 10 x 15 km on the north-west coast of Java, Indonesia. It is located 60 km west of Jakarta along the northern coastline of Java. The Bay has a diameter of about 15 km and a depth of about 12 m. It is characterized by relatively high turbidity (Lestari et al., 2017). The threats from human pressure and changes in the natural conditions in Banten Bay have increased over time. These conditions were further aggravated by increasingly intense economic development, especially industrial activities. Increases in coastal pollutants, largely due to human activity on land, have an impact on seagrass ecosystems and degrade water quality, for example through increased turbidity. It is estimated that 60% of the seagrass beds have been destroyed, which has an impact on the cover and density of seagrass species on Pulau Panjang in Banten City (Sugianti et al., 2018). The increase in the number of industries and residential areas in Banten Bay has resulted in an increase in the amount of waste (i.e., solid, liquid and gas) that pollutes the environment. Among the pollutants requiring special attention are heavy metals, which are often used as raw or supporting materials in various industrial activities. The entry of these contaminants, especially into sea waters, can reduce the quality of the waters. Besides, changes in water quality due to heavy metals deposited in the sediments can also cause the transfer of toxic chemicals from sediments to organisms (Permanawati et al., 2013). This result is alarming since heavy metals are accumulative elements, both in sediments and in the bodies of aquatic organisms. Organisms will be contaminated as a result of high levels of heavy metals in these sea waters (Wahyuningsih et al., 2015).
Various activities taking place in Banten Bay are thought to have an impact on the biota of the waters. One of the most abundant biotas in the waters of Banten Bay is the blue swimming crab (Portunus pelagicus). The blue swimming crab is a species that has significant economic value among other species such as P. trituberculatus, P. gladiator, P. sanguinus, and P. hastatoides. Changes in the ecological conditions of Banten Bay due to heavy metals are thought to affect both the biological characteristics and the sustainability of the crab, as they will have an impact on catching activities, such as a reduction in catches and fishermen's income (Prabawa et al., 2014). The blue swimming crab is an active organism. However, when not moving, the blue swimming crab will stay at the bottom of the water at a depth of 35 meters and live immersed in the sand or muddy coastal areas, mangroves, and rocks (Indriyani, 2006). Adult crabs feed on mollusks, crustaceans, fish, or carcasses at night. Blue swimming crab habitats are sandy beaches, muddy sand beaches, and rocky islands, and they swim from near sea level (about 1 m) to a depth of 65 meters (Moosa, 1985). The blue swimming crab is a typical benthic organism. It may also be a good indicator reflecting contamination levels in surface sediment, because crabs are known to reside in the surface sediment and feed on benthic prey items living among contaminated sediments. The bioaccumulation of heavy metals in the blue swimming crab occurs via transfer through sediments and seawater (Zao et al., 2012). Heavy metal research in the waters of Banten Bay was conducted by Suwandana et al. (2011) and Irnawati et al. (2014), who observed that, in general, the conditions of the Banten Bay waters were reasonably stable (ecosystem), classified as not polluted to moderately polluted. This research is considered crucial because the blue swimming crab is one of the biotas with high economic value often utilized by the community around Cengkok Coastal waters (Banten Bay). Data regarding the heavy metal contents (Pb and Hg) in blue swimming crab meat in Cengkok Coastal waters (Banten Bay) are still limited and need to be updated. Over time, the input of waste continues to grow along with ecological changes in the waters. Thus, research related to heavy metals needs to be carried out on an ongoing basis. This research is expected to add information about the status of heavy metal pollution (Pb, Hg) in the crab fishery (Portunus pelagicus) and the aquatic environment, and to determine the safe level of consumption of crab meat by humans.

Materials and Methods

The research was conducted in Cengkok Coastal waters (Banten Bay) (Figure 1). Blue swimming crab sampling was carried out monthly from March to August 2019. Blue swimming crab samples were taken at three different stations. Crab samples were taken using gill nets and traps. The number of samples taken at each site was 25-35 individuals, divided into two size groups based on carapace width, namely large (>10 cm) and small (<10 cm). The samples were dissected, and the meat was stored in the freezer to maintain its quality before analysis of heavy metal content. Water sampling was carried out at the surface using a 250 mL polyethylene bottle, sterilized; nitric acid (HNO3) was then added as a preservative, and the samples were stored in a coolbox before analysis in the laboratory. Sampling, sample handling, and analysis in the laboratory followed APHA (2012).
Measurement of water physical and chemical parameters was carried out in situ and by laboratory analysis. In situ observations and parameter measurements involved temperature, transparency, salinity, pH, and DO. The blue swimming crab was analyzed morphometrically before dissection; each crab sample was weighed and the carapace width measured. Then, the blue swimming crab was dissected to take the meat. The meat was taken from the abdomen. The weight of meat taken monthly from the whole sample was 50-60 grams. The type of digestion carried out in this study was the wet digestion method. Blue swimming crab samples (P. pelagicus) were digested using the Nitric Acid-Perchloric Acid Digestion method, i.e., samples were oxidized using nitric acid (HNO3) and perchloric acid (HClO4) to dissolve metals (Eviati, 2012). The wavelengths used in AAS to detect the heavy metals (Pb and Hg) in blue swimming crab meat were 283.3 nm and 253.7 nm, respectively (SNI, 2009). The analysis of Pb used the Direct Air-Acetylene Flame method, while that of Hg used the Cold Vapor-Atomic Absorption Spectrometry method. The wavelengths used for heavy metal analysis (Pb and Hg) in water samples were 217.0 nm and 253.7 nm. The working principle of AAS is that the amount of energy absorbed is proportional to the concentration of heavy metals in the sample (APHA, 2012). In the wet digestion process, 30 g of blue swimming crab meat was put into an erlenmeyer flask, and 10 mL of HNO3 solution and 2 mL of HClO4 solution were added. The sample was allowed to stand for one night. The sample was heated at 100°C for 1 h 30 min, after which the temperature was increased to 130°C for 1 h. Next, the temperature was increased again to 150°C for 2 h 30 min. The heating process was carried out until the yellow vapor disappeared. If there was still yellow vapor, the heating process was carried out again. Afterward, the temperature was increased to 170°C for 1 h. The extract was then filtered into a 50 mL volumetric flask, diluted by adding distilled water, and homogenized. The results of the analysis were compared with the Indonesian National Standard for Limits of Metal Contamination in Food (SNI, 2009). The analysis of bio-concentration factors was based on the content of heavy metals in biota divided by the heavy metals contained in the seawater or sediment. The bio-concentration coefficients were calculated using the following formula (Yunasfi et al., 2019): BCF = concentration of heavy metal in biota (mg.kg-1) / concentration of heavy metal in seawater (mg.L-1). The maximum amount of heavy metal in food that may be consumed weekly (maximum weekly intake, MWI) was determined using threshold figures published by the World Health Organization (WHO) and the Joint FAO/WHO Expert Committee on Food Additives (JECFA) (Nuraini et al., 2017). After determination of the maximum weekly intake value and the concentration of heavy metals, the maximum tolerable intake (MTI) value can be calculated through the following formula: MTI (kg.week-1) = maximum weekly intake (MWI) (mg.week-1) divided by the concentration of heavy metals in blue swimming crab meat (mg.kg-1) (Turkmen et al., 2009).

Results and Discussion

Heavy metals are pollutants that can endanger the aquatic environment and contaminate the biota therein. Their toxicity is a significant problem that affects ecology, evolution, nutrition, and the environment. Heavy metals generally have atomic density values greater than 4 g.cm-3, or 5 times greater than the density of water. They enter the environment both naturally and through human activities.
Various sources of heavy metals include soil erosion, natural weathering, industrial waste, urban waste, and pesticides or insecticides. The heavy metals most often contained in waste are arsenic, cadmium, chromium, copper, lead, nickel, and zinc. All of these heavy metals threaten both human health and the environment (Nagajyoti et al., 2010). The results of the analysis of the Pb content are presented in Figure 2. Lead (Pb) concentration in blue swimming crab (P. pelagicus) meat taken from Banten Bay ranged from 0.005-0.071 mg.kg-1. In most of the analysis results, the Pb content in blue swimming crab meat (P. pelagicus) was below the AAS detection limit, which was <0.005 mg.kg-1. The average Pb content from March to August was still below the SNI 2009 quality standard, which is 0.5 mg.kg-1. Statistical test results showed that the Pb content analyzed in each sampling period did not differ significantly by size group (p≥0.05). Lead (Pb) is sourced from industrial activities and vehicle exhaust air pollution. The solubility of lead is low, and its presence in water is relatively small (Connell and Miller, 2006). Based on research conducted by Wahyuningsih et al. (2015), the Pb content in P. pelagicus caught around Jakarta Bay, near Banten Bay, in the east and west monsoons is generally not detectable (<0.042 mg.kg-1); this suggests an increase in heavy metal content in the blue swimming crab around Banten Bay. In addition, based on the analysis conducted by Hapsari et al. (2017), the lead content in the meat (muscle) of threadfin bream (Nemipterus sp.) in Banten Bay amounted to 19.098 ± 7.949 mg.kg-1. The results of the Hg analysis in blue swimming crab meat (P. pelagicus) are presented in Figure 3. Mercury (Hg) concentration in the blue swimming crab meat (P. pelagicus) was, on average, below the AAS detection limit of <0.002 mg.kg-1. This value did not exceed the quality standard set by SNI 2009, which is 1 mg.kg-1. Statistical test results showed that the mercury content analyzed in each sampling period did not differ significantly by size group (p≥0.05). Mercury is emitted from both anthropogenic and natural sources, primarily as elemental mercury (Hg(0)). Mercury pollution in the water can occur through two channels, namely direct deposition from the atmosphere or liquid waste, and run-off of surface water (Fisher et al., 2012). The water quality parameters observed (Table 2), i.e., temperature, salinity, TSS, pH, dissolved oxygen, turbidity, and transparency, were within the seawater quality standards for biota listed in the Decree of the State Minister for the Environment No. 51 of 2004 (KNLH, 2004). The results were not significantly different and remained within the range of tolerable limits for the survival of marine organisms. Bio-concentration factors describe the extent to which chemicals are concentrated in organisms within an environment. Bio-concentration factor (BCF) values are needed to classify and label the persistence, bioaccumulation, and toxicity (PBT) of chemicals present in the environment (Lombardo et al., 2010). Blue swimming crab bio-concentration factors are presented in Table 3. Bio-concentration is a process in which organisms absorb chemicals from the surrounding environment during respiration, through the gills or the surface of the skin.
Bio-concentration factors can be influenced by the content of organic matter found at the sampling location (Mackay et al., 2018). The BCF results showed that the average bio-concentration factor of the blue swimming crab (P. pelagicus) is low (<100). According to Amriani (2011), JECFA currently recommends calculating the provisional tolerable weekly intake (PTWI) of individual heavy metals, instead of the acceptable daily intake (ADI), to compare pollution levels with regard to the toxicity accumulated in the human body; the tolerable weekly intakes are 1.6 μg.kg bw-1.week-1 for methylmercury and not exceeding 25 μg.kg bw-1.week-1 for lead (Table 1) (Kim et al., 2012). The maximum amount of heavy metals in foodstuffs that may be consumed weekly (maximum weekly intake) uses the threshold corresponding to the smallest limit value among the heavy metals present, so that metals do not build up in the body to levels that can cause death. The MWI must respect the tolerable limits issued by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) (Nuraini et al., 2017). The allowed weekly consumption of blue swimming crab meat (P. pelagicus) is presented in Tables 4 and 5. Based on Tables 4 and 5, it can be seen that the lowest consumption limits for adults and children are 2.3 kg of meat.week-1 and 0.6 kg of meat.week-1, respectively. Meanwhile, the highest consumption was 30.6 kg of meat.week-1 for adults and 7.7 kg of meat.week-1 for children. These values were still safe for public use. The maximum amount of heavy metal in food that may be consumed weekly (maximum weekly intake) is useful for knowing the limit of consumption of contaminated meat and preventing adverse effects on human health (Hidayah et al., 2014). The maximum consumption value was used as a reference to avoid the harmful effects of heavy metals entering the body (Prastyo et al., 2017). The relative strength of metal uptake into tissue is Cd > Hg > Pb > Cu (Darmono, 1995). However, based on the results of this study, the Pb contained in blue swimming crab meat (Portunus pelagicus) in Cengkok Coastal waters was higher than the Hg, which does not indicate a linear relationship between the strength of uptake into tissue and the content of heavy metals in meat. From the results of the observations, both the Pb and Hg contents were still in the safe category for consumption because they did not exceed the quality standard set by SNI 7387:2009, namely, for the crustacean category, 0.5 mg.kg-1 (for Pb) and 1 mg.kg-1 (for Hg). Although the results showed that the content of heavy metals in blue swimming crab meat in Banten Bay had not exceeded the quality standard, this must still be monitored. Seasonal factors and sampling station locations can cause the differences in heavy metal content between months. The waters of Banten Bay are affected by two seasons, namely the west season, which is the rainy season, and the east season, which is the dry season. The statistical test results showed that the heavy metal contents analyzed in each sampling period from March to August did not show significant differences (p≥0.05). The increased levels of lead (Pb) and mercury (Hg) in June were allegedly due to crab samples taken at Station 1 (Figure 1). The small crab sampling location (Station 1) is located at the mouth of the Cengkok River, which is close to residential and industrial areas.
According to Suhartono (2011), lead (Pb) originating from the burning of motor fuel generally comes from tetraethyl lead additives. Meanwhile, according to Riani (2019), mercury (Hg) comes from industrial waste and coal mining. Various activities that take place around Banten Bay include industrial activities (Putri et al., 2012). Also, the increase in the Pb and Hg contents in blue swimming crab meat in June, July, and August was allegedly due to the dry season. According to Makarim et al. (2012), in June-August the waters of Banten Bay experience a dry season. Various factors influence the difference in the concentration of Pb and Hg in the crab between the dry and rainy seasons. According to Younis et al. (2015), seasonal differences influence changes in environmental conditions in water bodies, wherein, in the rainy season, water inputs and flow rates tend to be higher, resulting in dilution. Differences in heavy metal concentrations by season also occur in research by Nurhayati and Putri (2019) in Cirebon waters, where the concentration of heavy metals in green mussels increased in the dry season. The season also affected the levels of heavy metals, especially lead (Pb), which was strongly correlated. The heavy metal content in large blue swimming crabs was, on average, higher than in small crabs. However, a statistical test of heavy metal content by size group revealed no significant differences (p≥0.05). This is presumably because the size of the biota reflects its age, so that prolonged exposure to metals means older biota accumulate more heavy metals (a positive correlation between crab size and the ability to accumulate heavy metals). The strength of metal accumulation by biota increases as the biota grows (Sari et al., 2017).

Conclusion

The Pb content in crab meat (Portunus pelagicus) taken from the waters of Banten Bay has a range of 0.005-0.071 mg.kg-1, and the Hg content of crab meat (P. pelagicus) ranges from 0.002-0.042 mg.kg-1. The lead (Pb) and mercury (Hg) contents in Banten Bay blue swimming crab meat were mostly under the SNI quality standard (2009). Based on the results of observations of water quality parameters, the condition of Banten Bay waters is still within the quality standard range stipulated by the Minister of Environment Decree No. 51 of 2004 to support marine biota that live in mangroves. The content of heavy metals in the blue swimming crab meat in each sampling period showed no significant effect of size differences. The bio-concentration factor of the blue swimming crab was low (<100). Blue swimming crab meat (P. pelagicus) taken from the Cengkok Coastal waters of Banten Bay can be consumed as long as the safe level is not exceeded. The safe level of blue swimming crab consumption from Cengkok Coastal waters and the surrounding area is 2.3 kg of meat.week-1 (for adults) and 0.6 kg of meat.week-1 (for children).
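The consumption-limit arithmetic summarized above can be reproduced in a few lines. The PTWI values are the JECFA limits cited in the text; the body weights (60 kg for an adult, 15 kg for a child) are assumptions of this sketch, chosen because they are consistent with the reported 2.3 and 0.6 kg of meat.week-1 limits for Hg.

def bcf(conc_biota_mg_per_kg, conc_water_mg_per_l):
    """Bio-concentration factor: metal content in biota divided by that in seawater."""
    return conc_biota_mg_per_kg / conc_water_mg_per_l

# JECFA tolerable weekly intakes cited in the text (ug per kg body weight per week).
PTWI_UG_PER_KG_BW_WEEK = {"Pb": 25.0, "Hg": 1.6}

def max_tolerable_intake_kg(metal, meat_conc_mg_per_kg, body_weight_kg):
    """Kilograms of crab meat per week that keep intake at or below the PTWI."""
    mwi_mg_per_week = PTWI_UG_PER_KG_BW_WEEK[metal] * body_weight_kg / 1000.0
    return mwi_mg_per_week / meat_conc_mg_per_kg

# Using the highest Hg concentration observed in the study (0.042 mg.kg-1)
# and the assumed body weights:
print(max_tolerable_intake_kg("Hg", 0.042, 60.0))  # ~2.3 kg.week-1 (adult)
print(max_tolerable_intake_kg("Hg", 0.042, 15.0))  # ~0.6 kg.week-1 (child)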
Transcriptome Profile Analysis on Ovarian Tissues of Autotetraploid Fish and Diploid Red Crucian Carp

Polyploidization can significantly alter the size of animal gametes. Autotetraploid fish (RRRR, 4nRR = 200) (4nRR) possessing four sets of chromosomes were derived from whole-genome duplication in red crucian carp (RR, 2n = 100) (RCC). The diploid eggs of the 4nRR fish were significantly larger than the eggs of RCC. To explore the differences between the ovaries of these two ploidies of fishes at the molecular level, we compared the ovary transcriptome profiles of 4nRR fish and RCC and identified differentially expressed genes (DEGs). A total of 19,015 unigenes were differentially expressed between 4nRR fish and RCC, including 12,591 upregulated and 6,424 downregulated unigenes in 4nRR fish. Functional analyses revealed that eight genes (CDKL1, AHCY, ARHGEF3, TGFβ, WNT11, CYP27A, GDF7, and CKB) were involved in the regulation of cell proliferation, cell division, gene transcription, ovary development and energy metabolism, suggesting that these eight genes were related to egg size in 4nRR fish and RCC. We validated the expression levels of these eight DEGs in 4nRR fish and RCC using quantitative PCR. The study results provided insights into the regulatory mechanisms underlying the differences in crucian carp egg sizes.

INTRODUCTION

Polyploidy is a very common phenomenon. In vertebrate evolution, polyploidy is considered to have led to the evolution of more complex forms of life by providing the opportunity for new functions to evolve (Ohno, 1970; Epstein, 1971). Polyploidy, including allopolyploidy and autopolyploidy, is both widespread and evolutionarily important (Van de Peer et al., 2017). Allopolyploids contain genomes from distinct taxa, while autopolyploids are formed from genomes of the same species (Van Drunen and Husband, 2018). Phenotypic changes induced by chromosome duplications have been reported since the early 20th century (Stebbins, 1947). A well-known effect of polyploidy in plants and animals is cell enlargement (Knight and Beaulieu, 2008), but less evident effects can also occur (Maherali et al., 2009). In plants, for example, polyploidy often modifies physiological traits such as transpiration, and rates of photosynthesis or growth (Levin, 2002). Following such changes in physiology, shifts in ecological tolerance have been demonstrated for some taxa (Levin, 2002). Polyploidy can also induce phenotypic modifications in reproductive traits, but surprisingly, these effects have received less attention. Sometimes, polyploids have reproductive organs that are larger than those of their diploid counterparts (Robertson et al., 2011). Following their instantaneous multiplication in DNA content, polyploids can experience processes that either expand or shrink their genomes (Leitch et al., 2008). This increase in DNA has great potential to induce phenotypic variation (Chen, 2007). The relationships between genome size and phenotypic traits have been discussed in comparative studies at broad phylogenetic levels (Muenzbergova, 2009), but few studies have analyzed how and whether genome size or polyploidy can modify phenotypic traits at the microevolutionary scale (Lavergne et al., 2009). In fish, polyploidization can obviously alter egg size. For example, allotetraploid hybrids of red crucian carp × common carp can produce diploid eggs that are obviously larger than those of their parents (Liu et al., 2004, 2007). Forés et al.
(1990) studied egg activity in Scophthalmus maximus and found that these eggs reached a higher fertilization ratio when the diameter of the egg was 0.9-1.1 mm, but when the diameter was 1.1-1.2 mm, the fertilization ratio was lower. Thus, egg diameter is an important parameter that can reflect the quality of the eggs. Autotetraploid fish (4nRR) (Qin et al., 2014) were derived from genome duplication in RCC. Autopolyploids often differ ecologically and phenotypically from their low-ploidy parents (Husband et al., 2016), but because studies are commonly performed on long-established cytotypes (Comai, 2005), it is unclear whether differences are due to instantaneous changes associated with the whole-genome duplication (WGD) event or to divergences through selection after the fact (Weiss-Schneeweiss et al., 2013). In this process, we found that the egg diameters of 4nRR fish are larger than those of RCC. Meanwhile, through self-crossing RCC and 4nRR fish, we found that the fertilization ratio of RCC (96.70%) was higher than that of 4nRR fish (65.36%). In livestock and wildlife, egg quality is affected by a number of factors, including egg size, and is highly variable (Chapman et al., 2014). Egg size plays an important role in the heredity and reproduction of fish. In this study, we examined the transcriptomes of mature ovarian tissues from 4nRR fish and RCC using RNA-seq. The purposes of this research were to expand the genetic resources available for crucian carp, analyze differentially expressed genes (DEGs) between 4nRR fish and RCC, and identify genes related to egg diameter. Overall, our results provide valuable genomic information and insight into the molecular mechanism of ovarian development in 4nRR fish and RCC. In addition, this study helps establish a foundation for polyploid evolution and molecular breeding in crucian carp and other closely related species.

Ethics Statement

Fish researchers were certified under a professional training course for laboratory animal practitioners held by the Institute of Experimental Animals, Hunan Province, China (Certificate No. 4263). All fish were euthanized using 2-phenoxyethanol (Sigma, United States) before dissection. This study was carried out in accordance with the recommendations of the Administration of Affairs Concerning Experimental Animals for the Science and Technology Bureau of China. The protocol was approved by the Administration of Affairs Concerning Experimental Animals for the Science and Technology Bureau of China.

Sample Collection and Preparation

One-year-old female 4nRR fish and RCC were obtained from the State Key Laboratory of Developmental Biology of Freshwater Fish, Hunan Normal University, China. The ploidy status of the 4nRR fish and RCC was tested by flow cytometry as described by Qin et al. (2014). Three one-year-old mature female 4nRR fish and RCC were chosen. Ovarian tissues were removed from the 4nRR fish and RCC after euthanasia using 2-phenoxyethanol (Sigma, United States). In this experiment, 4nRR fish were used as the treatment group, while RCC was used as the control group.
The ovarian tissues of the 4nRR fish and RCC were then divided into three parts: the first part was used to measure egg diameters to test differences in egg size between 4nRR fish and RCC using multiple-contingency-table analyses (Sokal and Rohlf, 1981); the second part was fixed in 4% paraformaldehyde solution for histological observation as described in Cao and Wang (2009); the third part was promptly frozen in liquid nitrogen, stored at −80 °C, and then used for RNA-seq and real-time quantitative PCR (qPCR) analysis. Total RNA was extracted from 4nRR fish and RCC ovarian tissues using a Total RNA Kit II (TaKaRa, China) according to the instructions of the manufacturer. For each ploidy, equal amounts of RNA from three 4nRR fish or three RCC were pooled to provide templates for construction of the RNA-seq library (Supplementary Figure 1). Measurement of the Size of the Eggs and the Histology Observation of Ovary Ten female 4nRR fish and ten female RCC were sorted into two groups producing "high-quality" or "low-quality" eggs as described by Chapman et al. (2014). The diameters of 167 4nRR fish eggs and 167 RCC eggs were measured with a Vernier caliper. We used analyses of variance (ANOVA) (Osterlind et al., 2001) and multiple comparison tests (LSD method) (Williams and Abdi, 2010) to test for differences in egg size between 4nRR fish and RCC using SPSS Statistics 21.0 (an illustrative sketch of this comparison is given below). Values were expressed as the mean ± SD. The gonads of 4nRR fish and RCC were fixed in Bouin's solution for the preparation of tissue sections. The paraffin-embedded sections were cut and stained with hematoxylin and eosin. Gonadal structure was observed with a light microscope and photographed with a Pixera Pro 600ES. RNA Sequencing Library Construction and Illumina Sequencing The cDNA library was constructed using high-quality RNA. Poly(A) mRNA was isolated using oligo-dT beads (Qiagen, Dusseldorf, Germany). Fragmentation buffer was added to break the mRNA into short fragments. Random hexamer-primed reverse transcription was used for first-strand cDNA synthesis. Second-strand cDNA synthesis was subsequently performed using DNA polymerase I and endonuclease. A quick PCR extraction kit was used to purify the cDNA fragments. These purified cDNA fragments were rinsed with EB buffer for end repair and poly(A) addition and then ligated to sequencing barcodes. The fragments with a size suitable for the sequencing criteria were isolated from the gels and enriched by PCR amplification to construct the final cDNA library. The cDNA library was sequenced on the Illumina sequencing platform (Illumina HiSeq 2500) using paired-end technology in a single run by Novogene Technologies (Beijing, China). The Illumina GA processing pipeline was used to analyze the images and for base calling. De novo Assembly and Functional Annotation Raw reads were filtered using FastQC software (Babraham Bioinformatics) (Davis et al., 2013) to obtain paired-end clean reads. All clean reads were used for assembly with Trinity software (Grabherr et al., 2011) with the following parameters: (1) minimum assembled contig length to report = 100 bp; (2) maximum length expected between fragment pairs = 250 bp; and (3) k-mer size for Inchworm assembly = 25. After assembly, contigs longer than 200 bp were used for analysis. The contigs were connected to obtain sequences that could not be extended further at either end, and the sequences of the unigenes were generated.
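To make the egg-size comparison described in the Measurement subsection above concrete, the following is a minimal sketch (not the authors' code) of how such a two-group comparison could be run in Python with SciPy. The simulated diameter arrays are placeholders standing in for the 167 measurements per group; only the group means are taken from the reported values, and the standard deviations are assumed.

```python
# Illustrative sketch: comparing egg diameters between 4nRR fish and RCC with
# a two-sample t-test, mirroring the reported comparison (t = -33.370, p < 0.05).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rcc_diameters = rng.normal(loc=13.67, scale=1.0, size=167)    # hypothetical values (mm)
fourn_diameters = rng.normal(loc=17.71, scale=1.0, size=167)  # hypothetical values (mm)

# Mean +/- SD per group, as tabulated in the paper's Table 1
for name, d in [("RCC", rcc_diameters), ("4nRR", fourn_diameters)]:
    print(f"{name}: {d.mean():.2f} +/- {d.std(ddof=1):.2f} mm")

# Welch's t-test; a negative t-statistic means the RCC eggs are smaller
t_stat, p_value = stats.ttest_ind(rcc_diameters, fourn_diameters, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3g}")
```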
The unigenes were further spliced and assembled to acquire maximum-length non-redundant unigenes using TGICL clustering software (J. Craig Venter Institute, Rockville, MD, United States). Finally, BLASTx was used to compare the unigenes, based on an E-value < 10⁻⁵ (Altschul et al., 1997), against the non-redundant protein (Nr), SwissProt, Kyoto Encyclopedia of Genes and Genomes (KEGG) and Clusters of Orthologous Groups (COG) databases (E-value < 10⁻³). Gene ontology (GO) annotation of the unigenes was completed using Blast2GO based on the results from the NCBI Nr database annotation. BLASTn was used to align the unigenes to the Nr database and search for proteins with the highest sequence similarity to the given unigenes, accompanied by their protein functional annotations. A heat map grouping genes according to FPKM values was generated in Cluster 3.0 (De Hoon et al., 2004). Identification of Differentially Expressed Genes (DEGs) The mapped reads were normalized according to the FPKM for each unigene, which facilitated the comparison of unigene expression between the 4nRR and RCC fish (McCarthy and Smyth, 2009). The DEGs were identified with the DEGseq package by applying the MA-plot-based method with a random sampling model. DEGs between 4nRR and RCC fish were selected based on the following filter criteria: (1) false discovery rate (FDR) < 0.05; and (2) |log2(fold change)| > 1 (Storey and Tibshirani, 2003; Lv et al., 2013); an illustrative filtering sketch is given below. Validation of RNA-Seq Results by qPCR To verify the reliability of the RNA-seq results, eight DEGs (CDKL1, CKB, AHCY, ARHGEF3, TGFβ, SCP1, WNT11 and CYP27A) involved in the development of ovarian tissues were selected for validation using quantitative real-time PCR (qPCR) on a Prism 7500 Sequence Detection System (Applied Biosystems, United States) with a miScript SYBR Green PCR Kit (Qiagen, Germany). The reaction mixture (10 µL) comprised 2.5 µL cDNA (1:3 dilution), 5 µL SYBR Premix Ex Taq II (TaKaRa), 0.5 µL specific forward primer, 0.5 µL reverse primer, and 1.5 µL water. Real-time PCR was performed on biological replicates in triplicate. The amplification conditions were as follows: (1) 50 °C for 5 min, (2) 95 °C for 10 min and (3) 40 cycles at 95 °C for 15 s, followed by 60 °C for 45 s. The average threshold cycle (Ct) was calculated for 4nRR fish and RCC using the 2^−ΔΔCt method (Pfaffl, 2001) and normalized to that of β-actin. Finally, a melting curve analysis was completed to validate the specific generation of the expected products. Comparison of Egg Size One-year-old 4nRR and RCC fish were used in this research. The ovaries of one-year-old 4nRR and RCC fish developed well and contained stage II, III, and IV oocytes. Furthermore, large numbers of eggs were stripped from one-year-old 4nRR fish and RCC, respectively. The results showed that 4nRR and RCC fish had reached sexual maturity by one year of age (Figures 1A,B). The average egg diameters of the RCC and 4nRR fish were 13.67 and 17.71 mm, respectively (Table 1). Eggs from 4nRR fish were significantly larger than those from RCC fish (Figures 1C,D) (t = −33.370, p < 0.05). Sequencing, de novo Assembly and Functional Annotation RNA-seq (Feng et al., 2012) was conducted on 4nRR and RCC fish ovarian tissue. A total of 118.1 million 150 bp paired-end reads were generated. After removing low-quality reads and short read sequences, a total of 108.1 million clean reads (91.54%) were obtained (Supplementary Table 1), and these reads were used for the following analyses.
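As an illustration of the DEG selection criteria referenced above (FDR < 0.05 and |log2(fold change)| > 1), a minimal pandas sketch follows. The column names and the toy table are assumptions chosen for clarity; the actual DEGseq output format is not given in the text and may differ.

```python
# Minimal sketch of the DEG filter: FDR < 0.05 and |log2(fold change)| > 1.
import pandas as pd

deg_table = pd.DataFrame({
    "unigene": ["c1_g1", "c1_g2", "c2_g1"],
    "log2_fold_change": [2.3, -0.4, -1.7],   # 4nRR relative to RCC (hypothetical)
    "fdr": [0.001, 0.30, 0.02],
})

is_deg = (deg_table["fdr"] < 0.05) & (deg_table["log2_fold_change"].abs() > 1)
degs = deg_table[is_deg]

# Split into up- and downregulated sets in 4nRR fish, as in the Results
up_in_4nRR = degs[degs["log2_fold_change"] > 0]
down_in_4nRR = degs[degs["log2_fold_change"] < 0]
print(len(up_in_4nRR), "upregulated;", len(down_in_4nRR), "downregulated")
```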
Ovarian tissues from RCC and 4nRR fish were used to generate 212,573 transcripts and 149,851 unigenes (Altschul et al., 1997; Lv et al., 2013). Among these unigenes, 38,140, 26,510, 21,296, 51,507, 135,474, 36,236, and 40,008 were identified in the GO, KO, KOG, NR, NT, PFAM and SwissProt databases, respectively (Supplementary Figure 3). Clean RNA sequencing reads were deposited in the NCBI Sequence Read Archive (SRA) under accession numbers SAMN07418623 and SAMN07418624. The Differentially Expressed Genes Between the Two Kinds of Crucian Carp A total of 19,015 unigenes were differentially expressed between the RCC and 4nRR fish. In total, 12,591 unigenes were upregulated in 4nRR fish, while 6,424 unigenes were downregulated in 4nRR fish compared with RCC. Some upregulated genes in 4nRR fish, such as vitellogenin (Vtg), Meiotic nuclear division 5 homolog B (Md5b), Mediator of RNA polymerase II transcription subunit 25 (Mpts), Transcription factor TFIIIB component (Tfc), Cell division cycle-associated protein 3 (Cdc3), S-phase kinase-associated protein (Skp1), Bcl-2-related ovarian killer protein homolog A (Bokp), Ovarian cystatin (Oct), Dynein regulatory complex protein 1 (Drc1) and Cyclin-dependent kinase-like 1 (CDKL1) (Table 2), were mainly involved in the regulation of cell proliferation and cell division, gene transcription, ovary development and energy metabolism, showing that these genes might be related to egg diameter in crucian carp. Using log ratio values, we performed hierarchical clustering of 16,581 DEGs based on their expression. Expression levels during the stages of ovarian development were divided into 24 categories based on K-means clustering. Detailed expression profile clusters between 4nRR and RCC fish are shown in Supplementary Figure 4. The expression patterns not only indicate the diverse and complex interactions among genes, but also suggest that unigenes with similar expression patterns may have similar functions in the development of the ovary. The expression patterns of the eight genes measured by qPCR ranged from significantly different to similar to those indicated by the RNA-seq analysis (Figure 5). Significance of Polyploidization Polyploidization of chromosomes is thought to be one of the most important mechanisms in species evolution (Masterson, 1994). Polyploidization is a major factor that drives plant genome evolution (Stupar et al., 2007) and fish evolution (Finn and Kristoffersen, 2007). Polyploidization has not only significantly shaped genomes but also affected other genetic aspects, including gene expression (Cheung et al., 2009). Polyploids may contain genomes from different parental species (allopolyploidy) or multiple sets of the same genome (autopolyploidy). Many studies have revealed that polyploid genomes undergo major chromosomal, genomic, and genetic changes (Doyle et al., 2008; Buggs et al., 2011, 2012; Ainouche et al., 2012). Despite the great progress in clarifying the genomic and transcriptomic changes that accompany polyploidization, few studies have explicitly correlated these changes with phenotype alterations (Gaeta et al., 2007). The changes in the characteristics of polyploids were mainly caused by differences in gene expression (Stupar et al., 2007; Chen et al., 2010), and thus, RNA-seq technologies can now be used in a high-throughput manner to investigate such phenotypic changes
(Cui et al., 2013; Qiao et al., 2013; Zhang et al., 2017). [Figure 5 caption: In each panel, 4n denotes 4nRR fish and 2n denotes RCC fish. Different lowercase letters indicate significant differences (p < 0.05); values are means ± SD of relative expression, n = 9 for each group.] Here, we showed that autotetraploidization causes increased egg size in 4nRR fish compared to RCC fish. We established a 4nRR fish lineage to better understand the genetic impact imposed by autopolyploidization. The 4nRR fish were derived from a whole-genome duplication of RCC and possessed four sets of chromosomes derived from RCC (Qin et al., 2014). However, phenotypic changes were present in the 4nRR fish, including increased blood cell and germ cell sizes compared with RCC fish. Notably, the phenotypic and molecular data reported here were due to autopolyploidy rather than cultivar influence, as similar effects on the RCC and 4nRR fish cultivars were found. Significance of Egg Size Study Autopolyploidy is traditionally considered to cause reduced fertility or sterility compared with diploid progenitors (Cifuentes et al., 2013). However, recent research showed that 4nRR fish can produce unreduced diploid eggs and exhibit dual reproductive modes of sexual reproduction and gynogenesis (Qin et al., 2015). In this research, the histological features of the gonads revealed that the 4nRR and RCC fish both possessed normal gonadal structure and could reach maturation. In the breeding seasons, large numbers of eggs were harvested from one-year-old 4nRR and RCC fish. These results showed that autotetraploidization did not cause reduced fertility or sterility. Previous studies suggested that polyploid formation could induce various types of genomic changes. Comparative analysis based on egg size measurements revealed that the average diameter of diploid eggs from 4nRR fish was 17.71 mm, which was significantly larger than the average haploid egg diameter of 13.67 mm in RCC, suggesting that genetic factors were likely to be the cause of this difference in ovary development and egg diameter. In mature ovaries, the increased oocyte volume was mainly due to the incorporation of vitellogenin (Santos et al., 2007; Schilling et al., 2015). This process requires a range of enzymes to provide hormonal and energy support for the synthesis and breakdown of vitellogenin (Williams et al., 2014). We found that the egg diameters of 4nRR fish were markedly larger than those of RCC fish. Developing oocytes are thought to be largely non-transcribed and to serve as a repository for specific maternal RNAs, proteins and other molecules important for fertilization, initiation of zygotic development, and the transition to embryonic gene expression (Santos et al., 2007; Reading et al., 2013; Chapman et al., 2014). Through self-mating experiments between 4nRR and RCC fish, we found the fertilization ratio of 4nRR fish to be lower than that of RCC. This result showed that variation in the size of fish eggs is associated with polyploidization. Among the 19,015 DEGs identified in this study, most of the key genes were involved in protein processing, fat and energy metabolism, the cytoskeleton, steroidogenic activities and cell division. Cluster analysis of the genes differentially expressed between 4nRR and RCC fish identified a list of genes, of which 12,591 were more highly expressed in 4nRR fish and 6,424 were more highly expressed in RCC fish.
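For readers who want to recompute relative expression values of the kind shown in Figure 5, the 2^−ΔΔCt calculation described in the Methods can be sketched as follows. The Ct values in the example are hypothetical, as the raw Ct data are not reported in the text.

```python
# Sketch of the 2^(-ddCt) relative-expression calculation used for the qPCR
# validation, normalized to beta-actin as in the Methods.
def relative_expression(ct_target_sample: float, ct_actin_sample: float,
                        ct_target_control: float, ct_actin_control: float) -> float:
    """Return fold change of a target gene (sample vs. control) by 2^-ddCt."""
    d_ct_sample = ct_target_sample - ct_actin_sample      # normalize to beta-actin
    d_ct_control = ct_target_control - ct_actin_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: a gene amplifying ~2 cycles earlier in 4nRR than in RCC
# (after normalization) corresponds to roughly 4-fold upregulation.
print(relative_expression(22.0, 18.0, 24.0, 18.0))  # -> 4.0
```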
With reference to the relevant literature (Heringstad et al., 2000; Dubrac et al., 2005; Menges et al., 2005; Knoll-Gellida et al., 2006; Santos et al., 2007), we screened eight key genes (CDKL1, AHCY, ARHGEF3, TGFβ, WNT11, CYP27A, GDF7 and CKB) related to egg development. Compared with RCC, several genes in 4nRR fish (specifically CDKL1, AHCY, ARHGEF3, TGFβ, WNT11, CYP27A, GDF7 and CKB) showed marked upregulation, which might account for the differences in egg diameters between RCC and 4nRR fish. CDKL1 is a member of the cyclin-dependent kinase-like (CDK) protein family, a group of serine/threonine kinases (Santos et al., 2007). The cyclin-dependent kinase CDKL1 controls the cell cycle, a process best understood in the model organism Saccharomyces cerevisiae. AHCY (S-adenosylhomocysteine hydrolase) is a cellular enzyme that cells rely on for replication (Heringstad et al., 2000). ARHGEF3 is a guanine nucleotide exchange factor that mediates small GTPase signal transduction (Mullin et al., 2008) and is related to energy metabolism. Transforming growth factor β (TGFβ) and its signaling effectors act as key determinants of carcinoma cell behavior and play a key role in steroid hormone and vitellogenin synthesis during ovary development (Knoll-Gellida et al., 2006). WNT11 regulates cell fate and patterning during embryogenesis. In many different tissues, CYP27A plays an important role in cholesterol, bile acid and fatty acid metabolism (Dubrac et al., 2005). In our previous study, obvious expression differences of gnrh2, gthb and gthr were found in the 4nRR fish (Qin et al., 2018). Altogether, our results provide a foundation for the further characterization of gene expression in 4nRR and RCC fish with respect to egg size. DATA AVAILABILITY The datasets generated for this study can be found in the National Center for Biotechnology Information, SAMN07418623 and SAMN07418624. AUTHOR CONTRIBUTIONS SL and QQ conceived and designed the study. YW and YP contributed to the experimental work and wrote the manuscript. MZ, XH, and LC performed most of the statistical analyses. YW and WL designed the primers and performed the bioinformatics analyses. MT and CZ collected the photographs. All authors read and approved the final manuscript.
Platelet-Therapeutics to Improve Tissue Regeneration and Wound Healing—Physiological Background and Methods of Preparation Besides their function in primary hemostasis, platelets are critically involved in the physiological steps leading to wound healing and tissue repair. For this purpose, platelets have a complex set of receptors allowing the recognition, binding, and manipulation of extracellular structures and the detection of pathogens and tissue damage. Intracellular vesicles contain a huge set of mediators that can be released to the extracellular space to coordinate the action of platelets and other cell types for tissue repair. Therapeutically, the most frequent use of platelets is the intravenous application of platelet concentrates in case of thrombocytopenia or thrombocytopathy. However, there is increasing evidence that the local application of platelet-rich concentrates and platelet-rich fibrin can improve wound healing and tissue repair in various settings in medicine and dentistry. For the therapeutic use of platelets in wound healing, several preparations are available in clinical practice. In the present study, we discuss the physiology and the cellular mechanisms of platelets in hemostasis and wound repair, the methods used for the preparation of platelet-rich concentrates and platelet-rich fibrin, and highlight some examples of the therapeutic use in medicine and dentistry. Introduction In case of vascular injury, platelets detect the presence of subendothelial structures (extracellular matrix components), leading to the adhesion and aggregation of this anuclear cell type and primary hemostasis [1,2]. Secondarily, platelets induce the activation of plasmatic coagulation and hemostasis. For the description of this complex temporal and spatial sequence of events occurring on the platelets' surface, the cell-based coagulation model was developed [3]. Subsequent to clot formation, a further important mechanism of platelets takes place: activated GPIIb/IIIa receptors located on filopodia bind to fibrin, resulting in outside-in signaling of platelets and the activation of the contractile apparatus [4][5][6][7]. As a result, the platelets' filopodia pull at the fibrin fibers to retract the clot [8]. Interestingly, the clot volume is reduced by the platelets' action to at least 50% of its original volume, and retraction is inhibited when the red blood cells are tightly packed [9]. The strong effect of clot retraction has important physiological consequences: the size of the wound is reduced, the firmness of the clot is increased, densely packed polyhedral red blood cells form an impermeable membrane, and ischemia due to thrombosis of a vessel can be resolved by reperfusion [9][10][11]. In addition to their function in hemostasis, the clot consisting of platelets, leukocytes, fibrinogen, and erythrocytes coordinates inflammation and wound healing [12,13]. Inflammation is an important consequence of hemostasis, as injury can potentially be accompanied by the entry of pathogens. Besides leukocytes, platelets are an important and early component of the immune response to danger and infection and constitute an important cell type of both innate and adaptive immunity [2,14]. Platelets can detect pathogens and danger via pathogen recognition receptors, bind bacteria and viruses, induce the release of bactericidal NETs from neutrophils, and release platelet microbial proteins when stimulated with thrombin or lipopolysaccharides [14,15].
A further step initiated by platelets is wound repair, propagated by the release of various growth factors from alpha-granules attracting the different cell types necessary for wound healing. In the following, we will highlight the different functions of platelets, from vascular injury to wound healing, and will discuss their involvement in a platelet-centered approach. Moreover, we will discuss the preparation techniques and the current use of platelet-rich plasma for local therapies in humans. Platelet Physiology Platelets are small anuclear cells with a short half-life of 10 days derived from megakaryocytes [16]. Platelets have a uniform discoid shape and a complex intracellular structure with a cytoskeleton maintained by dynamic action, can synthesize proteins, are capable of dividing, and can respond to various stimuli with shape change, adhesion, and aggregation as well as exposure of phosphatidylserine on the surface to induce clot formation and hemostasis [17][18][19]. For a long time, the importance of these cells in hemostasis has been acknowledged, and platelet concentrates are nowadays a valuable treatment option in bleeding patients with thrombocytopenia or thrombocytopathy [20]. Besides this function, their involvement in wound healing and immune system function has been recognized. Thus, platelets have been demonstrated to affect inflammation, thrombosis, atherosclerosis, and metastasis [2,21,22]. For their diverse functions, platelets are equipped with multiple receptors to recognize their environment [23]. Physiological responses are achieved by inside-out signaling, leading to the important activation of the GPIIb/IIIa receptor (αIIbβ3) and to changes of the cytoskeleton, as well as the release of hundreds of mediators stored in intracellular granules. Moreover, platelets can communicate with other cells via extracellular vesicles (EVs), and 90% of bloodstream vesicles, which were named platelet dust in earlier times, derive from this cell type [24]. Alpha-granules are described to release more than 300 proteins responsible for coagulation, anticoagulation, and fibrinolysis as well as being involved in inflammation, immunity, cell adhesion, and growth [14]. Among the mediators involved in coagulation are the factors V, XIII, and IX, fibrinogen, and the von Willebrand factor. Anticoagulant proteins include antithrombin, protein S, and tissue factor pathway inhibitor. Moreover, plasminogen and plasminogen activator inhibitor, proteins involved in fibrinolysis, can be released. Mediators involved in the recruitment of immune cells are the chemokines CXCL1, epithelial neutrophil activating peptide-78, platelet factor 4, monocyte chemoattractant protein-1, macrophage inflammatory protein 1 alpha, thymus- and activation-regulated chemokine (TARC), and regulated on activation, normal T cell expressed and secreted (RANTES = CCL5), as well as the integral membrane proteins GPIIb/IIIa, GPIbalpha-IX-V, GPVI, TLT-1, and P-selectin. Moreover, many growth factors including platelet-derived growth factor (PDGF), connective tissue growth factor (CTGF), stromal-derived factor-1 alpha, vascular endothelial growth factor (VEGF), transforming growth factors (TGFalpha, TGFbeta), and the fibroblast growth factor FGF-1, as well as the microbicidal proteins thymosin-beta4 and thrombocidin 1 and 2, can be released.
In contrast to alpha-granules, both the content and the function of delta-granules (dense bodies) are far less diverse, and the bioactive amines (serotonin, histamine), nucleotides, and poly- and pyrophosphates are all involved in clot formation and coagulation. Lambda-granules are comparable to lysosomes in other cell types, are responsible for the degradation of proteins, lipids, and carbohydrates, and are thus involved in the removal of cell debris [28]. Extracellular Vesicles An interesting new area of research is platelet-derived extracellular vesicles. They are classified into micro-vesicles (100-1000 nm) and exosomes (30-100 nm), which are generated by different mechanisms (fusion of multi-vesicular bodies vs. budding from the plasma membrane) and can be differentiated by cell surface markers [29]. They contain proteins, lipids, metabolites, miRNA, and nucleic acids, are involved in cell crosstalk, and thus participate in coagulation, inflammation, immunoregulation, and angiogenesis. Moreover, EVs have important functions in tissue repair and may exert the beneficial effects of platelet-rich plasma used in humans. In this regard, this beneficial effect can be "hijacked" by cancer cells for development and progression. Platelets in Hemostasis Hemostasis can be divided into three stages. Initially, vasoconstriction occurs at the site of vessel injury. Thereafter, a platelet plug is formed at the place of injury, a phenomenon called primary hemostasis. Secondary hemostasis leads to activation of the coagulation system and the typical thrombus formation. Under physiological conditions, resting platelets circulate in the blood stream to detect disturbances of vascular integrity. In case of vascular injury, platelets bind to von Willebrand factor via the glycoprotein GP Ibalpha of the GPIb-IX-V complex and GPVI. In turn, these events activate platelet GPIIb/IIIa, inducing aggregation and stable binding of platelets to the injury via fibrin and von Willebrand factor [1,4,30]. Moreover, platelet activation leads to the release of granules with pro-aggregatory and pro-coagulatory content, and tissue factor exposition on extravascular cells initiates further activation of platelets. The Classical View on the Coagulation System In an initial attempt to understand the cooperative function of coagulation factors leading to hemostasis, the coagulation factors were described as an enzyme cascade leading to the generation of fibrin and hemostasis [31,32]. The coagulation factors were grouped into an intrinsic and an extrinsic pathway, which converge to the common pathway. The details are shown in Figure 1. While this classical model is very suitable to classify the effects of drugs and coagulation defects, it does not reflect the in vivo conditions and the important contribution of platelets. The Cell-Based Coagulation Model To describe the concerted action of coagulation factors and platelets more exactly, the cell-based coagulation model has been proposed [3], which is shown in Figure 2. Under physiological circumstances, coagulation factors are restricted to the vascular space by the endothelium and have no contact with extravascular cells, which commonly express tissue factor on their surface.
Vascular injury leads to the exposure of tissue-factor-bearing cells to the coagulation factors. Factor VII binds to tissue factor, is activated, and in turn activates the coagulation cascade. The minute amounts of thrombin generated are capable of activating platelets in the amplification phase. In the third step, the propagation phase, large amounts of thrombin are generated on the surface of the activated platelets, sufficient to form a fibrin clot. The cell-based coagulation model is a simplification which does not include the function of red blood cells in hemostasis [12]. Red blood cells improve hemostasis through an increase of blood viscosity, cause the margination of platelets in the blood stream, can adhere to the endothelium, and can thus favor thrombotic events under certain pathological conditions (e.g., diabetes). Moreover, red blood cells are involved in NO metabolism, can release thromboxane A2 and ADP, and thus affect platelet aggregation and adhesion. Hemolysis of red blood cells and the release of hemoglobin, generating ROS, can further activate coagulation and platelet activation. In addition to red blood cells, leukocytes have been shown to be involved in coagulation, thrombosis, and tissue damage [33]. Control of Hemostasis There are several important pathways controlling coagulation. Thrombin, formed during coagulation at the site of vascular injury, binds to endothelial thrombomodulin. The complex activates protein C which, in turn, inactivates the activated coagulation factors V and VIII:C. Protein S, which is the cofactor of protein C, assists the downregulation of coagulation [34]. Antithrombin is the most important circulating inhibitor of coagulation and binds and inactivates thrombin as well as several coagulation factors [35].
In case of clot formation, tissue plasminogen activator is released at the site of injury and converts plasminogen to plasmin [36]. Plasmin, in turn, degrades fibrin to fibrin degradation products. The activity of the fibrinolytic system is modulated by plasminogen activator inhibitors I and II, plasmin inhibitor, and thrombin-activatable fibrinolysis inhibitor. Platelets in Clot Retraction Subsequent to the formation of fibrin fibers, GPIIb/IIIa receptors located on the filopodia of the activated platelets bind to fibrin fibers and induce, via outside-in signaling, the activation of the contractile apparatus [9][10][11]. As a result, platelets pull at the fibrin fibers and lead to the contraction of the clot. Clot retraction is limited when red blood cells are compacted [37]. While the cellular mechanisms of signal transduction and the contractile mechanisms have been investigated in detail, only sparse information in humans is available. It is generally accepted that clot retraction is an important mechanism to (i) improve wound healing and to (ii) enable the reperfusion of a vessel in case of thrombosis [38]. Moreover, Glanzmann-Naegeli thrombasthenia and Bernard-Soulier syndrome are characterized by a bleeding phenotype associated with disturbed clot retraction and defective GPIIb/IIIa receptors [8]. Only limited information on altered clot retraction in acquired diseases is available: increased clot retraction has been demonstrated in coronary heart disease, while a decrease was shown in uremic patients [39,40]. Platelets in Immunology and Wound Healing Wound healing starts with hemostasis, the formation of a fibrin scaffold, and an inflammatory response as a first line of defense with the recruitment of neutrophils and monocytes. Platelets are involved in both innate immune system and adaptive immunity responses [41]. For this purpose, platelets are equipped with several pattern recognition receptors such as Toll-like receptors and C-type lectin receptors, which can detect pathogen-associated molecular patterns and danger-associated molecular patterns [42]. TLR4-induced activation of platelets results in the release of proinflammatory mediators, the recruitment of leukocytes, and the formation of aggregates of platelets with leukocytes or monocytes [15]. Leukocytes, in turn, release cytokines and chemokines to modulate inflammation. Moreover, neutrophils can release reactive oxygen species as well as their nuclear content to form neutrophil extracellular traps (NETs) to fight pathogens [43]. Besides their function in innate immunity, platelet CD40-ligand, expressed upon activation, can bind to many cell types, including B-cells, T-cells, and endothelial cells, via their CD40 receptors [44]. In this way, platelets can modulate cytokine release and immunoglobulin production. Subsequent to this initial phase necessary to eliminate pathogens, angiogenesis occurs, which includes endothelial cell proliferation, migration, and branching of vessels [45]. Moreover, pericytes as well as other cell types of the perivascular space proliferate. In addition, circulating progenitor cells from the bone marrow support new vessel formation. During the development of blood vessels, fibroblasts proliferate and invade the clot, shifting the cellular environment from the inflammatory to a growth state. Differentiation of some fibroblasts to myofibroblasts leads to a further retraction of the wound and finally to scar formation, in parallel to re-epithelialization.
Platelets have been involved in the whole process of healing via the release of multiple growth factors in their secretome [46]. Increasing the platelet concentration in the wound using platelet-rich plasma has regularly improved wound healing in several animal models and is FDA approved. However, thrombocytopenia in a mouse model did not affect wound healing as judged by angiogenesis, collagen synthesis, and re-epithelialization [47]. Preparation of Platelet-Rich Plasma and Platelet-Rich Fibrin The rationale for the use of blood-based local therapy relies on the finding that a blood clot improves wound healing by the release of growth factors, chemokines, and antibiotic agents from platelets [14,48]. An increase of the platelet count in the wound was shown to enhance wound healing in an experimental setting. While platelets are shown to be an important source of mediators, there is ample evidence that the other constituents of blood, including leukocytes and fibrin, can also contribute to wound healing. Leukocytes are important for local defense and are capable of releasing growth factors (similar to platelets). However, the potential harm due to their inflammatory action is a matter of debate. Moreover, the fibrin mesh serves as a scaffold for immune cells, fibrocytes, and stem cells. Different preparation methods of platelet concentrates are shown in Figure 3. Figure 3. Methods for the preparation of platelet-rich concentrates. Initial steps for the preparation of platelet-rich plasma, platelet-rich lysate, and platelet vesicles are identical. Note that both classical platelet-rich fibrin and injectable platelet-rich fibrin can be obtained by a single centrifugation without the necessity of anticoagulation; leu: leukocytes; rbc: red blood cells; bc: buffy coat.
Platelet Rich Plasma In 1997, autologous platelet-rich plasma was first used in oral and maxillofacial surgery [49]. For the preparation, anticoagulated whole blood is centrifuged to separate red blood cells from platelets [27]. Red blood cells sediment at the bottom of the vial and plasma is located at the top. Between the red blood cells and the plasma there is a small layer called the buffy coat containing leukocytes and platelets. Depending on the conditions of centrifugation, the plasma contains a variable number of platelets. The plasma containing platelets is collected, sedimented in a second centrifugation step, and reconstituted in a defined amount of plasma. Thereafter, coagulation is initiated by the addition of Ca2+ or thrombin [50,51]. The aim of the centrifugation steps is the enrichment of platelets (as well as leukocytes) to enhance the therapeutic effect. Typically, an enrichment of platelets from 150-350 × 10⁹/L in whole blood to about 1000 × 10⁹/L in platelet-rich plasma is judged to be advantageous [50]. Meanwhile, more than 40 different procedures for the preparation of platelet-rich plasma are described [50]. Platelet Rich Fibrin In 2006, Choukroun et al. described the use of another platelet preparation obtained in a single centrifugation step and called it platelet-rich fibrin [52][53][54][55][56]. The procedure includes a single centrifugation (400 × g; 12 min) of whole blood without anticoagulation in a glass vial. After this single centrifugation step, the red blood cell sediment is separated from a clot formed during centrifugation containing enriched platelets, as well as leukocytes at the boundary layer. The simpler handling, with only one centrifugation step and without the need for thrombin addition, is the important advantage of this method. Indeed, this method is widely used in clinical medicine. Meanwhile, several different centrifugation protocols are available, which show different enrichment in platelet and leukocyte counts, as recently reviewed [48]. For the preparations enriched in platelets, centrifugation of whole blood samples is used. In order to accelerate or delay coagulation and thus the formation of the fibrin mesh, centrifugation is performed in either glass or plastic tubes. Centrifugation at 60-1200 × g for 3 to 15 min leads to a separation of cell types due to their different physical properties [57]; a sketch for converting between rpm and relative centrifugal force follows below. Red blood cells are sedimented at the bottom of the vial. Located above the red blood cell sediment, there is a small layer named the buffy coat which consists of platelets and leukocytes. On top of the buffy coat, the plasma is located. Depending on the centrifugation conditions, the platelet concentration of the plasma varies. The harsher the centrifugation conditions, the more platelets will sediment in the buffy coat. According to the different protocols, leukocyte-poor platelet-rich fibrin, leukocyte-rich fibrin, and liquid platelet-rich fibrin can be differentiated [48,58]. Platelet Lysates A third method, termed platelet lysate, has recently been developed [59]. For this platelet product, platelet-rich plasma is treated by either freeze-thaw cycles or sonication. The resulting platelet lysates can be stored frozen for an extended time.
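As referenced above, the protocols here are specified in multiples of g rather than rpm; a small sketch based on the standard relation RCF = 1.118 × 10⁻⁵ × r × rpm² (with r the rotor radius in cm) can translate between the two for a given rotor. The rotor radius in the example is an assumed value, not taken from the text.

```python
# Sketch converting rotor speed to relative centrifugal force (RCF, in g) and
# back, useful when reproducing protocols such as "400 x g for 12 min" on a
# centrifuge whose display shows rpm.
def rcf_from_rpm(rpm: float, radius_cm: float) -> float:
    return 1.118e-5 * radius_cm * rpm ** 2

def rpm_from_rcf(rcf: float, radius_cm: float) -> float:
    return (rcf / (1.118e-5 * radius_cm)) ** 0.5

radius_cm = 10.0  # assumed rotor radius (example value)
print(f"{rpm_from_rcf(400, radius_cm):.0f} rpm gives 400 x g")   # ~1892 rpm
print(f"{rcf_from_rpm(3000, radius_cm):.0f} x g at 3000 rpm")    # ~1006 x g
```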
While the use in clinical medicine is relatively new, the preparation is often used as a source of growth factors in cell cultures. Platelet Extracellular Vesicles In the future, a fourth method for the use of platelets in clinical medicine might build on the finding that activation of platelets leads to the mass release of extracellular vesicles, which could serve as the source of growth factors in local therapies [60]. Two types of extracellular vesicles can be differentiated: exosomes (30-100 nm) and microvesicles (100-1000 nm) [61]. There is no information available on the advantages and disadvantages of each preparation regimen in human use. However, different compositions have been demonstrated and may lead to different characteristics. Therapeutic Use of Platelet-Rich Concentrates in Human Diseases Countless in vivo and in vitro studies exist on the clinical use of platelet concentrates in medicine, and there are more than three hundred reviews and meta-analyses on the clinical use in different settings. It is therefore beyond the scope and the possibilities of a single review to give in-depth information on the use in all subspecialties. According to most reviews and meta-analyses, the level of evidence is somewhat limited by the fact that small studies with few patients and a high risk of bias dominate the literature. Moreover, the use of many different platelet-rich concentrates complicates the interpretation and comparison of the results, and the need for studies with standardized platelet-rich concentrates has been emphasized [62]. In this regard, the evidence for the use of other commonly used blood products, including red blood cell concentrates, platelet concentrates, fresh frozen plasma, and factor concentrates, is often limited in different settings, reflecting the fact that research most often consists of investigator-initiated studies with limited financial support. Thus, further well-designed prospective studies may be valuable to better judge the advantages and disadvantages of platelet-concentrate-based therapies and to find the best platelet preparation. However, despite these limitations, there seems to be sufficient evidence in the literature for the use of platelet-rich plasma and platelet-rich fibrin in many clinical settings. Platelet-Rich Concentrates in Medicine and Dentistry Platelet concentrates are applied in many medical subdisciplines including sports medicine, plastic surgery, dermatology, otolaryngology, gynecology, urology, and diabetology, among others. Moreover, platelet-rich concentrates are widely used in dentistry and oral and maxillofacial surgery [63][64][65]. The use of platelet-rich concentrates in wound healing is well established [66,67]. Platelet-rich fibrin and platelet-rich plasma are used in chronic diabetic wounds as an efficient, economical, and simple adjuvant method to support tissue regeneration [68,69]. They can also be injected to treat scars such as acne scars or striae distensae [70,71]. Moreover, platelet-rich concentrates are increasingly used in androgenetic alopecia [72,73] as well as skin rejuvenation and skin augmentation [66,74]. Another area is their use in chronic, mostly degenerative pain conditions. In knee arthrosis, degenerative disc disease (intradiscal treatment), facet pathologies (intra-facet injection), and sacroiliitis, reduced pain scores and increased functionality were observed after platelet-rich plasma therapy [75].
Improvement can also be observed in severe temporomandibular joint disorders through platelet-rich fibrin injection into the upper joint space after arthrocentesis or with adjuvant platelet-rich plasma insertion during arthroscopy procedures or arthrocentesis [76,77]. In dentistry, the application of platelet preparations is widely used for bone repair; significantly faster healing of bony defects was determined radiologically after adjunctive use of platelet-rich fibrin [78]. Wide ranges of possible applications are also known in the treatment of periodontitis [65]. Bone defects can be filled with bone substitutes previously inoculated with platelet-rich fibrin, which leads to a significant reduction in probing depths [63,79,80]. In addition, the application of a platelet-rich fibrin membrane can prevent the ingrowth of epithelial cells into the treated bony defect and can promote the ingrowth of osteogenic and angiogenic cells [81,82]. The use of platelet-rich fibrin in the surgical treatment of Class II furcation defects appears to improve periodontal regeneration. In combination with bone graft substitutes, vertical clinical attachment loss was significantly reduced [83]. Moreover, improved regeneration of the periodontal attachment has been demonstrated [63,64,67]. Platelet-rich fibrin membranes are also considered a promising alternative for covering recessions; compared to subepithelial connective tissue grafts (the gold standard), no significant difference in gingival recession, clinical attachment level, or probing depths was observed [63,84]. Thus, the results of these studies indicate that the use of invasive procedures and the necessity of a graft can be avoided by platelet-rich fibrin. Another application of platelet concentrates is their use in vitality-preserving endodontics. A positive influence of platelet-rich fibrin and platelet-rich plasma preparations on healing after vital amputation was observed. However, platelet-rich plasma seems to lead to less coronal discoloration. The combined use of platelet-rich plasma and mineral trioxide aggregate (MTA) showed a better prognosis compared to the use of MTA alone for apexogenesis [85,86]. Similarly, positive effects were observed with the use of platelet-rich plasma or platelet-rich fibrin, compared to the most commonly used therapeutic method of blood clot revascularization, for the regeneration of immature permanent teeth [87]. Conclusions Platelet-rich plasma and platelet-rich fibrin are used as a source of various mediators which favor hemostasis, wound healing, and tissue repair. The effects can be explained by the application of supernormal concentrations of various platelet-derived mediators, including many growth factors, chemokines, and hemostatic and antibiotic peptides, as well as the fibrin mesh serving as a scaffold for repair. While there is much evidence that platelet concentrates are effective in many clinical settings, the advantages and disadvantages of the individual preparations remain to be determined.
Qualitative Identification of Roseburia hominis in Faeces Samples Obtained from Patients with Irritable Bowel Syndrome and Healthy Individuals: Various products coded by genes recognized in the microbiota are involved in many biochemical pathways in the human body. The bacterial composition of the gastrointestinal tract may be an important aspect of the pathogenesis of selected diseases, including irritable bowel syndrome (IBS). Traditional research methods based on classical microbiology, using selective media for bacterial growth, have proven to be ineffective. The use of genetic methods allows the identification of previously unidentified microbiota, including anaerobes. Roseburia hominis is a flagellated, anaerobic, commensal gut bacterium producing short-chain fatty acids. The knowledge about the microbial components of the intestinal ecosystem is still very limited, including Roseburia hominis. The study aimed to identify Roseburia hominis in faeces samples obtained from IBS patients and healthy individuals using PCR techniques. Differences between the studied groups were observed; R. hominis may play a role in IBS etiology. Introduction All the microorganisms inhabiting a particular human body region or organ are collectively called a microbiome. The human body encompasses several various microbiomes that include specific populations of microorganisms [1]. Currently, intensive research is underway on microbiomes and their influence on human health. The knowledge about microbiomes is still very limited, including the intestinal microbiome [2]. Nowadays, the gut microbiome is of great interest to researchers because of its potential. The bacterial composition of the gastrointestinal tract may be an essential aspect of the pathogenesis of selected diseases. Gut microbial imbalance (dysbiosis) may lead to various diseases, including irritable bowel syndrome (IBS) [3]. Because human microbiomes are abundant in unculturable bacteria, and traditional research methods based on classical microbiology have proved ineffective, it is necessary to characterize their composition in detail to further evaluate the function of particular microbiota [4]. Roseburia hominis is a flagellated, anaerobic, commensal gut bacterium producing short-chain fatty acids. This property is important for gut motility, the maintenance of immunity, and anti-inflammatory effects. There are reports suggesting that Roseburia spp. may play a role in the pathogenesis of IBS [5]. The knowledge about the microbial components of the intestinal ecosystem is still very limited, including Roseburia hominis. Methods The study protocol was approved by the Institutional Review Board at Poznan University of Medical Sciences. All individuals provided informed consent after the possible consequences of the study were explained, in accordance with the Declaration of Helsinki. The aim of the study was to identify Roseburia hominis in faeces samples using PCR techniques. The study was conducted on samples obtained from IBS patients (women, n=70, and men, n=50) and individuals without any intestinal symptoms (women, n=28, and men, n=23). After bacterial DNA extraction using the spin-column method (ZymoBIOMICS DNA Miniprep Kit, Zymo Research, USA), DNA concentrations were measured using a DeNovix Spectrophotometer (DeNovix, USA) and samples were stored at −20 ± 2 °C for further analyses.
Qualitative identification of Roseburia hominis, based on the amplification of RHOM_14625 and RHOM_14635 gene fragments, was performed. PCR products were purified using ExoSAP-IT for PCR Product Clean-Up (Affymetrix, USA), and the specificity was confirmed by Sanger sequencing (sequence reading was performed at Genomed, Poland). Then, a statistical analysis of the obtained data using the chi-square test was conducted. Results and Discussion The human digestive tract, especially its distal segment, is colonized by numerous bacteria that create an intricate community called the gut microbiome. Its presence is crucial for maintaining health by preventing gut colonization by pathogens, producing nutrients, and maintaining the integrity of the intestinal mucosa [6]. Nowadays, the human gut microbiota is under research to better understand its vast influence on the human body. Some species likely play an essential role in the gut microbiome, especially in the pathogenesis of some diseases. R. hominis is a relatively newly recognized probiotic bacterial species [7], among the most proficient butyrate producers [8], and is considered one of the most motile species in the gut microbiome [9]. These bacteria occur predominantly in the colon. R. hominis has the ability to penetrate the mucus layer and stick to the surface of host intestinal epithelial cells, which supports the probiotic potential of these gut bacteria [10]. Roseburia spp. may play a role in gut diseases. Roseburia spp. was observed to be reduced in the gut in individuals affected by inflammatory bowel diseases (IBD) [11,12]. Machiels et al. found a lack of R. hominis in the gut microbiome among patients with ulcerative colitis (UC) [13]. What is more, Chassard et al. hypothesized that IBS symptoms were correlated with Roseburia spp. dysbiosis [14]. In our study, the assessed RHOM_14625 gene fragment of R. hominis was recognized in samples derived from 9 (13%) female and 21 (42%) male IBS patients, and in 15 (54%) and 7 (30%) female and male control individuals, respectively. The difference in the presence of the evaluated gene between healthy individuals and IBS patients was statistically significant, with a p-value of 0.0001 (Fig. 1). Considering the presence of the second evaluated gene fragment, RHOM_14635, the PCR-amplified fragment was detected in 35 (50%) and 33 (66%) samples obtained from female and male patients, respectively, and in 18 (64%) and 7 (30%) samples from female and male control individuals. The difference in the analyzed gene distribution was statistically significant (p-value = 0.02) (Fig. 2). Previously, Rigsbee et al. reported that Roseburia spp. abundance in stool samples was the same among healthy children and children with diarrhea-predominant IBS [15]. However, Chassard et al. reported in detail that Roseburia spp. was reduced among patients with the constipation-predominant IBS subtype compared to healthy individuals [12]. Interestingly, after IBS treatment, Roseburia spp. abundance in the gut microbiome was found to be comparable to that of healthy individuals [16,17]. These and other scientific reports suggest considerable complexity of bacterial composition and function in various diseases, including IBS. Conclusions Further molecular studies are necessary to evaluate the role of Roseburia hominis in the intestinal microbiome of IBS patients. Taking the obtained results into consideration, it can be assumed that R. hominis might play a role in IBS etiology.
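For illustration, the kind of chi-square analysis reported above can be reproduced from the published counts with SciPy. This is a sketch using a 4 × 2 contingency table of detection status by group for RHOM_14625; the exact test layout the authors used may differ.

```python
# Chi-square test of independence between group and detection status for the
# RHOM_14625 fragment, using the counts given in the text (positives / group
# sizes: IBS women 9/70, IBS men 21/50, control women 15/28, control men 7/23).
from scipy.stats import chi2_contingency

groups = ["IBS women", "IBS men", "control women", "control men"]
positive = [9, 21, 15, 7]
total = [70, 50, 28, 23]
table = [[p, t - p] for p, t in zip(positive, total)]  # [detected, not detected]

chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```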
Figure 1. (A) Histograms representing the occurrence of the RHOM_14625 gene fragment among the studied groups. (B) Interaction plot showing the difference between the frequencies of RHOM_14625 gene fragment occurrence in the studied groups. Associations are seen between: (i) females with IBS and males with IBS; (ii) females with IBS and females without IBS; (iii) females with IBS and males without IBS. Figure 2. (A) Histograms representing the occurrence of the RHOM_14635 gene fragment among the studied groups. (B) Interaction plot showing the difference between the frequencies of RHOM_14635 gene fragment occurrence in the studied groups. The interaction is seen between females without IBS and males with IBS.
Evaluation of Branding as a Tool for Entrepreneurship Success: This study evaluated branding as a tool for entrepreneurship success. The study is delimited to bakeries in the Ikeji-Arakeji Area in Ori-Ade Local Government of Osun State, Nigeria. The study is guided by three research questions and seeks to determine the effects of packaging, customers' taste and product price on the growth of bakeries, the rate of patronage, the volume of sales and customers' choice. Five companies producing bread and two hundred bread customers are considered. A descriptive survey research design is used to determine the effects of branding on entrepreneurial success among companies producing bread products. The data collected are analyzed using simple frequency counts, percentages, means and standard deviations. The results from the study revealed that unique packaging has the ability to increase customer patronage and that an increase in patronage results in an increase in sales, which in turn impacts business growth. The results also revealed that customers buy more of the bread from which they derive a highly satisfying taste. It is also shown in the results that the quality of bread is not determined by price and that people tend to buy more of the bread carrying lower prices irrespective of quality. Going by the results obtained, the study concluded that adequate and unique branding influences entrepreneurship success. The study therefore recommends that firms should endeavor to improve their branding systems, as this will enable them to remain relevant in the market and withstand the pace of competition. However, as competitive turbulence becomes a constant in the business environment, the goal itself is not to predict the future, which Drucker (1998) regarded as a lofty idea in the first place, but to anticipate a future made possible by the effects of changes already taking place, as existing processes intertwine with the present and stimulate the future. Within the framework of this study, the author examines brand management as a tool for success in entrepreneurship. The study begins with an appraisal of branding as a competition policy, followed by an explanation of the key elements of branding as a marketing principle that serves as the basis for anticipating future demand amidst 'permanent change'. It then highlights relevant guidelines that entrepreneurs could follow in formulating and implementing branding policy. Statement of the Problem It can be said categorically that "the marketplace isn't what it used to be." It is changing radically as a result of major forces such as technological advances, globalization and deregulation. These forces have created new behaviors and challenges: customers increasingly expect higher quality and service and some customization; brand manufacturers are facing intense competition from domestic and foreign brands, which is resulting in rising promotion costs and shrinking profit margins; and store-based retailers are suffering from an oversaturation of retailing. Enterprises of today aspire to achieve increased customer patronage, since an increase in patronage results in an increase in sales, and an increase in sales impacts business growth. In our competitive market, everyone is struggling to improve and satisfy the needs of customers. An enterprise cannot survive without constant patronage, which leads to sales growth.
Previous works have established how branding has changed customers' perceptions of different products and the effects on their loyalty to the companies. Many enterprises today are failing because of low levels of patronage, and that is why branding has to be taken more seriously. Branding basically has to do with creating a new name, logo or symbol for a product, thereby increasing the level of patronage that leads to an increase in sales (Keller, 2013). Today, a number of enterprises are trying to gain a high market share in order to survive the ever-competitive market. Many of them believe that this can be achieved through effective branding. The challenge many firms face today is that the cost of branding shrinks their profits, which has led a lot of firms to neglect branding (Tether, 2013). Hence, this research evaluates branding as a tool for entrepreneurial success.

Objectives of the Study
The aim of this study is to investigate branding as a tool for success in entrepreneurship, using companies producing bread in Ikeji-Arakeji in Ori-Ade LGA of Osun State, Nigeria. The objectives this research work intends to achieve are:
- To assess how the packaging techniques adopted by the bread industries enhance the growth of bakeries.
- To ascertain the effect customer taste has on the rate of patronage of bread among bakeries.
- To identify the role price plays in determining the volume of sales of bread.

Research Questions
- In what way can the packaging techniques adopted by the bread industries enhance the growth of bakeries?
- What effect does customer taste have on the rate of patronage of bread among bakeries?
- In what way can price determine the volume of sales of bread?

Literature Review
Farhana (2014) defines a brand as a strategic asset; it could be seen as a promise that constantly needs to be delivered upon in order to gain brand equity. The substance of what a brand is, or whom it is for, is found in the brand identity. Svedberg (2014) argues that the identity is the substance of what the brand is. It is through the identity that the venture can create its promises. The identity of a new venture is characterized by the culture, design, behavior and communication the venture has, and it is the very essence of a brand. Bresciani and Eppler (2010) argue that establishing a successful brand enhances the possibilities for customer acquisition and customer retention, which in turn enhances the chances of building a favorable reputation. Brand equity is defined as a set of assets and liabilities directly connected to the brand name and logo that add value for the customer (Aaker, 2004). Within this view, as Keller (2003) says, technically speaking, whenever a marketer creates a new name, logo, or symbol for a new product, he or she has created a brand. He recognizes, however, that brands today are much more than that. As can be seen from these definitions, brands originally had a simple and clear function as identifiers. Bresciani and Eppler (2010) argue that branding is crucial for establishing a new company. However, the entrepreneur often underestimates the company's ability to succeed and therefore fails to "think big". Branding is based on general considerations of the possible future of the business. Establishing a successful brand enhances the possibilities for customer acquisition and retention and builds a favorable reputation.
However, many entrepreneurs lack awareness of the importance of building a strong brand during the business creation process (Andersson, 2012). The reason for developing a strong brand is to highlight the uniqueness of an organization, thereby improving the company's chances of succeeding in the market (Rode and Vallaster, 2005). When branding a venture, the company faces many challenges. New ventures lack an established identity, their reputation and internal structures are unformed, and there is a lack of knowledge about what branding is and how important it is (Bresciani and Eppler, 2010; Rode and Vallaster, 2005). One challenge, according to Rode and Vallaster (2005) and Boyle (2003), is the lack of resources, the low know-how and the little time that can be spent on these types of issues. However, marketing, and especially branding, can be seen as the interface between a small company and its external environment. Hence, it can make the difference between surviving and failing start-up companies (Stokes, 2006). In the process of building a strong brand, both interactions with consumers (Gardner, 2014) and strategizing are important (Bresciani and Eppler, 2010); however, theories concerning both are difficult to find. As companies focus extensively on their service offering and their customers in the marketplace of today, the theoretical framework poses problems for service logic and entrepreneurial marketing. Furthermore, two schools of brand building are outlined (Isaksson, 2005). Chan (2004), in a study on "A Survey-Based Method for Measuring and Understanding Brand Equity and Its Extendibility", developed a new survey-based method for measuring and understanding a brand's equity in a product category and evaluating the equity of the brand's extension into a different but related product category. It uses a customer-based definition of brand equity as the added value endowed by the brand to the product as perceived by a consumer. It measures brand equity as the difference between an individual consumer's overall brand preference and his or her brand preference on the basis of objectively measured product attribute levels. To understand the sources of brand equity, the approach divides brand equity into attribute-based and non-attribute-based components. The study conducted by Arjun and Morris (2001) on "The Chain of Effects from Brand Trust and Brand Affect to Brand Performance: The Role of Brand Loyalty" examined two aspects of brand loyalty, purchase loyalty and attitudinal loyalty, as linking variables in the chain of effects from brand trust and brand affect to brand performance (market share and relative price). The results indicate that when the product- and brand-level variables are controlled for, brand trust and brand affect combine to determine purchase loyalty, linked to high market share, and attitudinal loyalty, linked to high relative price. James et al. (2001), in their research paper titled "The Effect of Brand Attitude and Brand Image on Brand Equity", operationalized brand equity and empirically tested a conceptual model adapted from the work of Aaker (1991) and Keller (1993) considering the effect of brand attitude and brand image on brand equity. The results indicate that brand equity can be manipulated at the independent construct level by providing specific brand associations or signals to consumers, and that these associations will result in images and attitudes that influence brand equity.
The results suggest that focusing on the constructs that create brand equity is more relevant to managers than trying to measure it as an aggregated financial performance outcome. In their paper "A Comparison of Online and Offline Consumer Brand Loyalty", Peter et al. (2003) compared consumer brand loyalty in online and traditional shopping environments for over 100 brands in 19 grocery product categories. They compared the observed loyalty with a baseline model, a new segmented Dirichlet model, which has latent classes for brand choice and provides a very accurate model for purchase behavior. The results show that observed brand loyalty for high market share brands bought online is significantly greater than expected, with the reverse result for small share brands. In contrast, in the traditional shopping environment, the difference between observed and predicted brand loyalty is not related to brand share. Yi Zhang (2015) reviewed extant studies on the impact of brand image on consumers from the perspective of customer equity, presented the shortcomings of current research and pointed out trends for future study. Huakuai and Ying (2011) studied brand management problems in SMEs, using Gävle Vandrarhem and Chailease International Finance Corporation-Shenzhen Branch as case studies. They used the Funnel model and empirical studies to solve the brand management problems encountered in SMEs, and identified three main problems in SMEs: a narrow interpretation of brand management, a lack of resources and time, and the limited attention given to brand management. The majority of previous works have established how branding has changed consumers' perceptions of different products and the effects of this on their loyalty to the companies. Some have tried to measure brand equity, while others have examined customer satisfaction through brand management. However, no effort has been made to establish how branding can help entrepreneurs achieve success and sustain entrepreneurial activities in the business organization. This study intends to bridge that gap by establishing the effect of branding on entrepreneurial activity in companies producing similar products, and also offers suggestions on areas that need to be covered in future research.

Methodology
In this study, a descriptive survey research design was used to determine the effects of branding as a tool for entrepreneurial success among companies producing similar products, using companies producing bread in Ikeji-Arakeji, Ori-Ade LGA of Osun State, Nigeria as a case study. A sample of 200 bread consumers and 5 managers of the bakeries in Ikeji-Arakeji was selected using a simple random sampling technique and a total enumeration technique respectively. The simple random sampling method gives members of the population (customers) an equal chance of being selected to take part in the study, without influencing the chance of selecting the others, while total enumeration was used to include the bakery managers of all the bread producing firms in Ikeji-Arakeji, Ori-Ade LGA, Osun State, Nigeria. Primary data were collected using sets of well-structured questionnaires for the managers and the customers. These questionnaires were constructed to enable the respondents to indicate their opinions about the problem of the study; this was supported by personal interviews where necessary, to have face-to-face interaction with the respondents. The questionnaires were constructed in two parts of five sections: part one captured the demographic information of the respondents, while part two was constructed to give answers to the research questions.
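To illustrate the kind of frequency-count and percentage analysis the methodology describes, the following minimal Python sketch tallies agree/disagree answers for a single questionnaire item; the helper name and the response data are hypothetical, not taken from the study.

def agree_breakdown(responses):
    """Count Agree/Disagree answers for one item and express them as percentages."""
    n = len(responses)
    agree = sum(r == "Agree" for r in responses)
    disagree = n - agree
    return agree, 100.0 * agree / n, disagree, 100.0 * disagree / n

# Example: an item on which all five managers agree, as in Table 2 below.
a, a_pct, d, d_pct = agree_breakdown(["Agree"] * 5)
print(f"Agree: {a} ({a_pct:.0f}%), Disagree: {d} ({d_pct:.0f}%)")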
Demographic Characteristics of the Respondents
This section analyses the socio-economic characteristics of the bakery managers and those of their customers respectively. The responses of the managers of the five (5) selected bakeries were analyzed and tabulated in Table 2.

Table 2: Managers' responses on packaging (Disagree | Agree)
Well packaged products help in attracting more sales: - | 5 (100%)
Packaging is a way of adding value to products/services: - | 5 (100%)
The organisation's image can be promoted through branding: - | 5 (100%)
Packaging is a good way of differentiating products from competitors: - | 5 (100%)

Analysis of Research Question 1: In what way can the packaging techniques adopted by the bread industries enhance the growth of bakeries in Ikeji-Arakeji?
The results in Table 2 reveal that all the managers 5 (100%) agreed that well packaged products help in attracting more sales, and all of them 5 (100%) agreed that packaging is a way of adding value to products/services. The results also show that all of the bakery managers (100%) agreed that good packaging not only enhances customer retention but also promotes the organization's image in the market. The results further show that all the managers (100%) agreed that packaging is a good way of differentiating products from competitors, that packaging is a means of increasing a company's output, and that effective packaging promotes patronage. This implies that all the managers are aware of the importance of packaging to the growth of their businesses.

[Table 3b: customers' responses on the effect of taste on the rate of bread patronage; only a fragment of this table survives, and the figures are reported in the text below.]

Analysis of Research Question 2: What effect does customer taste have on the rate of patronage of bread among bakeries in Ikeji-Arakeji?
The analysis of customers' opinions on the effects of taste on the rate of bread patronage, as tabulated in Table 3b, reveals that the majority of the customers 196 (98.5%) agreed that they often eat bread 1-2 times a week, while 190 (95%) of them said they often eat bread every day. These results show an inconsistency in the customers' opinions as to the rate at which they consume bread. The majority of the customers 175 (87.5%) agreed that they usually buy another bread product whenever their favourite bread brand is not available. Almost all the customers 198 (99%) agreed that they could taste differences among the breads produced in Ikeji-Arakeji. While almost half of the customers 98 (49%) preferred Bakery A and identified it as the bread with the best taste, 27 (13.5%) love to eat Bakery B bread, 25 (12.5%) love to eat Bakery C bread, 28 (14%) preferred buying Bakery D bread, and 22 (11%) of the customers like buying Bakery E bread, with the same percentage identifying it as the bread with the best taste.
The result implies that among all the bread makers in Ikeji-Arakeji, Bakery A bread is the most preferred and the bread with the best taste, and customers are ready to pay a higher price to obtain it.

Table 4: Customers' responses on price (Disagree | Agree)
My bread choice is determined by the price in the market: 2 (1.0%) | 198 (99.0%)
The quality of a bread justifies its price: 1 (0.5%) | 199 (99.5%)
Consumers buy more of a bread based on their prices: 1 (0.5%) | 199 (99.5%)
I would be willing to pay somewhat higher prices in order to appreciate the benefits provided by the product: …

Analysis of Research Question 3: In what way can price determine the volume of sales of bread in Ikeji-Arakeji?
Table 4 shows the customers' opinions on the effects of price on sales and performance. The table reveals that almost all the customers 198 (99%) agreed that they buy bread based on its price in the market, and 199 (99.5%) of the respondents agreed that the quality of a bread justifies its price and that consumers buy bread based on the price in the market. This is consistent with what the managers said about the prices of their products. The implication of this is that bread products are sold based on the quality of the bread: high quality bread products are sold at high prices, while low quality bread products are cheaper in the market. The results also revealed that all the customers 200 (100%) buy bread that does not lose its taste easily. Almost all the customers 199 (99.5%) admitted that the prices put on their bread choice are commensurate with the satisfaction they derive from the products. Moreover, 183 (91.5%) of the respondents agreed that bread products with higher prices give higher satisfaction. However, all the customers 200 (100%) agreed that bread products with lower prices attract more patronage irrespective of their quality.

Conclusion
Going by the results obtained, product branding can take the form of packaging, taste and price. Depending on how they are adopted, all of these are special tools that, if well managed, can influence the entrepreneurship success of bread producers. Each element of branding plays a different and unique role in production, the level of sales and customers' satisfaction. The study found that customers' taste, their purchasing power and the level of satisfaction derived from the bread products influence their decision to buy and, consequently, impact company sales, the level of output and the success of the firm. Since packaging, taste, price and product size enhance an increase in customer patronage and consequently influence the growth of the organization, firms should therefore endeavor to improve their branding systems, as this will enable them to remain relevant in the market and in the face of competition. Bread makers should therefore devise strategies to better integrate these elements into their production processes.
The cluster relic source in A521

We present high sensitivity radio observations of the merging cluster A521, at a mean redshift z=0.247. The observations were carried out with the GMRT at 610 MHz and cover a region of ~1 square degree, with a sensitivity limit of 1σ = 35 µJy b^-1. The most relevant result of these observations is the presence of a radio relic at the cluster periphery, at the edge of a region where the infall of a group into the main cluster is taking place. Thanks to the wealth of information available in the literature in the optical and X-ray bands, a multi-band study of the relic and its surroundings was performed. Our analysis is suggestive of a connection between this source and the complex ongoing merger in the A521 region. The relic might be "revived" fossil radio plasma, through adiabatic compression of the magnetic field or shock re-acceleration due to the merger events. We also briefly discuss the possibility that this source is the result of induced ram pressure stripping of radio lobes associated with the nearby cluster radio galaxy J0454-1016a. Allowing for the large uncertainties due to the small statistics, the number of radio emitting early-type galaxies found in A521 is consistent with the expectations from the standard radio luminosity function for local (z ≤ 0.09) cluster ellipticals.

1. Introduction
Radio observations reveal that a number of galaxy clusters host diffuse synchrotron radio emission, not obviously associated with cluster galaxies, extended on cluster scales and referred to as radio halos and relics. These sources probe the presence of magnetic fields and relativistic particles mixed with the hot gas in the intracluster medium (ICM). A promising approach in our understanding of the nature of these sources is the possibility that turbulence and shocks induced by cluster mergers may be able to re-accelerate pre-existing electrons in the ICM, producing the emission from radio halos and relics (see the recent reviews by Brunetti 2003, Sarazin 2004). Both halos and relics are characterised by very low surface brightness. They lack an obvious optical counterpart and can reach sizes ≳ 1 Mpc. The class of radio halos is at present well defined (see Giovannini & Feretti 2002 for a review). Halos are detected in the central regions of galaxy clusters, show a fairly regular shape, and are usually unpolarized. They are characterised by steep integrated radio spectra, i.e. α ≳ 1 for S ∝ ν^-α, although the spectral index distribution may show small scale inhomogeneities. High frequency spectral steepening is present in a few cases (Feretti 2005, Giacintucci et al. 2005). Cluster relic sources are less homogeneous and more difficult to classify, possibly due to our still limited knowledge and understanding of their formation and evolution. They are usually located in peripheral cluster regions, and show many different morphologies (sheet, arc, irregular, toroidal). At present ~20 relics and candidates are known (Kempner & Sarazin 2001), however the observational information is still limited. Their radio emission is usually highly polarized (up to 30%). For those few sources with multifrequency imaging, a steep integrated spectrum is found (α ≳ 1, up to ultra-steep regimes). All clusters known to host a radio halo and/or a relic source are found to show some degree of disturbance in the distribution of the hot gas and of the optical galaxies (Buote 2001, Schuecker et al. 2001, Sarazin 2002).
Some well studied and impressive examples are for instance the radio halo in A 2163 and the double relics in A 3667 (Roettiger et al. 1999) and A 3376 (Bagchi et al. 2005). It is interesting to point out that the observational link between cluster halos, relics and the merging phenomena has been outlined a posteriori. A different and promising approach is the a priori selection of clusters experiencing well studied merging events, in order to determine the occurrence of halos and relics. As an example, deep radio observations of the merging chain of clusters in the core of the Shapley Concentration led to the discovery of a radio halo at the centre of A 3562, the faintest radio halo known to date (Venturi et al. 2000, Giacintucci et al. 2004); given its very low surface brightness, it would not have been detected in a "blind" radio survey. In this paper we present Giant Metrewave Radio Telescope (GMRT) 610 MHz observations of the galaxy cluster A 521. This cluster was selected for the search for extended cluster scale radio emission on the basis of the wealth of optical (Maurogordato et al. 2000, hereinafter M00; Ferrari et al. 2003, hereinafter F03) and X-ray (Arnaud et al. 2000, hereinafter A00; Ferrari et al. 2005, hereinafter F05) information available in the literature, suggesting a very complex dynamical state. Furthermore, thanks to the amount of photometric and spectroscopic data available, A 521 is also an ideal environment to study the effects of cluster mergers on the radio emission properties of the member galaxy population, which are still controversial (see e.g. Venturi et al. 2000, Giacintucci et al. 2004, Owen et al. 1999). The observations presented here are part of a much larger project, a deep GMRT radio survey at 610 MHz (Venturi et al. in prep.), whose aim is the search for radio halos and relics in clusters at intermediate redshift (z=0.2÷0.4), to test current statistical expectations from models for the formation of cluster radio halos (Cassano et al. 2004; Cassano & Brunetti 2005; Cassano et al. in prep.). The outline of the paper is as follows: in Section 2 we give a summary of the main properties of A 521 and its complex dynamical state of merging; the 610 MHz observations and the data reduction are described in Section 3; in Section 4 we present the radio source sample in the A 521 region; in Section 5 we give the radio-optical identifications and present the nuclear radio activity of cluster early-type galaxies; the relic radio source is presented in Section 6; the discussion is carried out in Section 7; our conclusions are summarized in Section 8.

2. The cluster Abell 521
Abell 521 is a rich galaxy cluster, located at a mean redshift z=0.247. Its general properties are given in Table 1. Note that the cluster coordinates in the table are those of the ROSAT/HRI X-ray centre (A00); the X-ray luminosity (0.1-2 keV band) is from Böhringer et al. 2004 (REFLEX galaxy cluster catalogue); the velocity dispersion σ_v is from F03; the temperature value is taken from A00; the virial mass M_V was computed from the L_X-M_V relation in Reiprich & Böhringer (2002), adopting the cosmology used in this paper; R_V is the corresponding virial radius. Detailed X-ray (A00, F05) and optical (M00, F03) studies of A 521 revealed that the dynamical state of this cluster is very complex, since it is still undergoing multiple merging events. Figure 1 sketches the scenario proposed by these authors.
From the X-ray analysis A00 concluded that the main merger episode is occurring along the North-West/South-East direction (arrow in Fig. 1), between the main cluster (G11) and a northwestern compact group (G111), whose X-ray emissions are centered on RA=04h54m08.6s, DEC=−10°14′39.0″ and RA=04h54m05.8s, DEC=−10°13′00.4″ respectively. The gas mass ratio between the G11 and G111 components is M_gas,main/M_gas,sub ∼ 7 (A00). In their more recent analysis F05 reported a misalignment between the X-ray and optical merger axes. The study of the galaxy distribution and the substructure analysis carried out by F03 revealed the existence of many optical subclumps aligned along the merger direction. In particular, these authors identified the following velocity groups along this axis (see Figure 1):
− G11: the main cluster, with a mean velocity <v> = 73965 km s^-1 and a velocity dispersion σ ∼ 930 km s^-1;
− G111: a group dynamically bound to the brightest cluster galaxy (BCG), with a very low velocity dispersion (σ ∼ 250 km s^-1) and a slightly higher mean velocity (<v> = 74290 km s^-1) compared to the main cluster G11. This group is associated with the compact X-ray group detected by A00, which is probably falling onto the cluster from the NW direction. The small difference in mean velocity between G111 and G11 suggests that the merging is likely to take place in the plane of the sky.
− G112: a compact group bound to the cluster, whose velocity (<v> = 74068 km s^-1, σ ∼ 570 km s^-1) is similar to that of the infalling group G111. The virial masses estimated for G111 and G112 in F03 on the basis of the optical information are much smaller (∼ one order of magnitude) than that of the main cluster G11. We note that the mass ratios between G11 and the infalling groups estimated from the optical information are even larger than those derived from the X-ray data.
− G2: a group South-East of A 521, at a projected distance of ∼ 900 kpc from the X-ray centre of the main cluster G11. This group has a mean velocity of <v> = 78418 km s^-1 (σ ∼ 500 km s^-1), which is much higher than the cluster velocity. On the basis of the two-body criteria, F03 concluded that this group is probably not bound to A 521.
Furthermore, F03 also found evidence of a filamentary structure of galaxies in the central region of the cluster, extending along the NE-SW direction, with velocity <v> = 73625 km s^-1 and high velocity dispersion. This structure has been interpreted by F03 as evidence of an older merger, which occurred along a direction orthogonal to the axis of the presently ongoing merger.

3. Radio observations
We observed the cluster A 521 at 610 MHz with the GMRT on 7 and 8 January 2005, using simultaneously the R (USB) and L (LSB) bands of 16 MHz each, for a total frequency band of 32 MHz. Table 2 gives the details of the observations. The observations were carried out in spectral line mode, with 128 channels in each band and a spectral resolution of 125 kHz. The four data sets (7 and 8 Jan 2005, USB and LSB) were analysed individually. The data calibration and reduction were performed using the NRAO Astronomical Image Processing System (AIPS) package. The sources 3C 147 and 3C 48 were observed as primary calibrators at the beginning and at the end of the observations, to determine and correct for the bandpass shape and for the initial amplitude and phase calibration. The source 0447-220 was used as secondary phase calibrator and was observed every 20 minutes throughout the observation.
In order to reduce the size of the data set, after the bandpass calibration the central channels of each data set were averaged to 6 channels of ∼ 2 MHz each. For each data set, images were produced using the wide-field imaging technique, with 25 facets covering a total field of view of ∼ 1 × 1 square degree. After a number of phase self-calibration cycles, a final step allowing for both phase and amplitude corrections was performed in order to improve the quality of the final images. The residual errors in the estimated flux density are ≲ 5%. The four self-calibrated data sets were then averaged from 6 channels to a single channel and finally combined together using the AIPS task DBCON. We note that bandwidth smearing is relevant only at the edge of the wide field, and does not affect the region presented and analysed here, i.e. the inner 30′ × 30′ (see Section 4). The images from the combined data set were obtained using again the wide-field imaging technique, combined with the task FLATN, and finally corrected for the primary beam appropriate to the GMRT antennas at 610 MHz. We produced images over a range of resolutions, reaching rms noise levels of the order of 1σ ∼ 35-40 µJy b^-1. For the purpose of the present paper we show only the image tapered to a resolution of 13.1″ × 8.1″, in p.a. 56°. The 5σ detection limit in this image, 0.20 mJy b^-1, corresponds to a radio power limit of 3.5×10^22 W Hz^-1.
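As a cross-check of the quoted power limit, the short Python sketch below converts the 5σ flux density limit into a monochromatic radio power at the cluster redshift. This is our own illustration: the flat ΛCDM parameters (H0 = 70 km s^-1 Mpc^-1, Ωm = 0.3) are an assumption, since the adopted cosmology is not restated here, and no k-correction is applied.

# Minimal sketch: flux density -> radio power at z = 0.247, assuming a flat
# LCDM cosmology (H0 = 70, Om0 = 0.3); no k-correction is applied.
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
d_L = cosmo.luminosity_distance(0.247).to(u.m)        # ~1.2 Gpc
S_lim = 0.20e-3 * u.Jy                                # 5-sigma limit, 0.20 mJy/beam
P_lim = (4 * np.pi * d_L**2 * S_lim).to(u.W / u.Hz)
print(f"log P_610MHz = {np.log10(P_lim.value):.2f}")  # ~22.6, close to the quoted 3.5e22 W/Hz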
4. The sample of radio sources
The total field of view of our observations is ∼ 1° × 1°, which is much larger than the cluster size. Here we present only a portion of the whole field, of size 30′ × 30′, covering RA = 04h53m00s to 04h55m00s and DEC = −09°55′00″ to −10°25′00″. At the cluster distance this corresponds to a region as large as ∼ 7 × 7 Mpc^2. The 610 MHz radio emission from this region is shown in Figure 2. In the figure we also plotted a solid circle with a radius corresponding to the virial radius of A 521 (Tab. 1), and a dashed circle representing the 5′ radius region covered by the redshift catalogues in M00 and F03. The cross marks the X-ray centre of the cluster (Tab. 1). The radio emission from the A 521 region is dominated by point-like and marginally resolved sources. However, the whole field is characterised by the presence of three extended radio sources. Two of them (see Section 5.2) are located in the northern part of the field and are associated with early-type galaxies without redshift information. The elongated structure located South-East of the cluster centre is the relic source discussed in Section 6. We used the AIPS task SAD to identify sources in the final 13.1″ × 8.1″ image of A 521 (1σ = 40 µJy b^-1). Given a radio image, this task (1) finds all potential sources whose peaks are brighter than a given level; (2) fits Gaussian components; and (3) returns positions and flux density values. The task also produces a residual image, which can be inspected to identify both extended sources not well fitted by Gaussians, and sources with peak flux density lower than the previous threshold. As a first step, we used SAD to find all sources with peak flux density greater than 0.32 mJy b^-1, i.e. 8 times the rms noise level in the field. Then, on the residual image we searched for all sources with peak flux densities in the range 5σ-8σ (i.e. 0.20-0.32 mJy b^-1). On this image we also identified the extended sources. Each radio source in the list was then carefully inspected, and the flux density values given by SAD for the unresolved or marginally resolved sources were checked using the task JMFIT. For the extended sources the flux density was obtained by means of TVSTAT. The final list of radio sources (over the whole ∼ 1° × 1° field) contains a total of 101 radio sources above the peak flux density limit of 0.20 mJy b^-1; 52 of these are located in the 30′ × 30′ region shown in Figure 2, and are presented in Table 3, where we give:
− columns 1, 2 and 3: respectively name (GMRT-) and J2000 position;
− columns 4 and 5: respectively peak and integrated flux density at 610 MHz, corrected for the primary beam attenuation. Note that the flux density given for the relic source (J0454-1017a) does not include the embedded point sources, whose flux density was estimated from the full resolution image (8.6″ × 4.0″, see Table 1) and subtracted;
− column 6: radio morphology. We classified the sources as unres. = unresolved and ext. = extended. Moreover, we indicated with WAT a wide-angle-tailed morphology, D a double structure and Rel the relic source. For the double sources we give the position of the radio barycentre, and for the extended sources we give the position of the radio peak.
We detected all 20 radio sources found by A00 in their analysis of a portion of the 1.4 GHz NRAO VLA Sky Survey (NVSS, Condon et al. 1996) image, with a size similar to the field shown in Figure 2. We note that their sources labelled 12, 14 and 15 are part of the diffuse radio relic (see Section 6), and their sources labelled 7, 11 and 13 are part of the wide-angle tail described in Section 5.2. The remaining radio sources listed in Table 3 are either undetected or only marginally visible in the NVSS, due to the different resolution and the sensitivity limit of the NVSS survey (1σ = 0.45 mJy b^-1).

5. Optical Identifications
The sample of 52 radio sources presented in the previous section was cross-correlated with the SuperCOSMOS/UKST Southern Sky Object Catalogue (Hambly et al. 2001) and the APM Catalogue (Maddox et al. 1990) to search for optical counterparts. Radio/optical overlays (using the DSS-1) were visually inspected for all the candidate identifications, and for the remaining radio sources in the sample, in order to find possible optical counterparts missed owing to the incompleteness of these catalogues. We estimated the reliability of the optical identifications on the basis of the parameter R, which takes into account the uncertainties in the radio and optical positions: R = ∆_r-o / (σ_r^2 + σ_o^2)^(1/2), where ∆_r-o is the offset between the radio and optical coordinates, and σ_o and σ_r are the optical and radio position errors respectively. We adopted a mean positional uncertainty of σ_o = 1.5 arcsec for the optical catalogues (Unewisse et al. 1993), and with the parameters of our observations we estimated an average radio positional error of 1 arcsec both in right ascension and declination (Prandoni et al. 2000). We considered as reliable identifications all matches with R ≤ 3, i.e. we assumed that for R ≤ 3 the radio-optical offset is consistent with the random distribution of the positional errors, while for R > 3 the difference between the optical and radio positions is significant. For two sources we found R > 3, and we therefore considered them uncertain identifications (see notes to Table 4). We found 21 radio-optical identifications (including the 2 uncertain cases), which correspond to 40% of our radio source sample.
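A minimal sketch of this reliability criterion follows. The explicit form of R above is the standard normalized offset, reconstructed from the text's description; the helper name and the example offset are ours.

def match_reliability(delta_arcsec, sigma_opt=1.5, sigma_radio=1.0):
    """R = radio-optical offset divided by the combined 1-sigma positional error."""
    return delta_arcsec / (sigma_opt**2 + sigma_radio**2) ** 0.5

# A 2-arcsec offset, with the positional errors adopted in the text:
R = match_reliability(2.0)
print(f"R = {R:.2f} -> {'reliable' if R <= 3 else 'uncertain'}")  # R = 1.11 -> reliable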
In order to find the sources associated with A 521 member galaxies, we cross-correlated our sample of identified sources with the redshift catalogues in M00 and F03. We note that these two catalogues do not cover the full 30′ × 30′ region analysed in the present paper, therefore our redshift search is actually restricted to a region of ∼ 5′ (1.1 Mpc) radius from the cluster centre (Fig. 2). This region includes 17 radio sources, 11 of which have an optical counterpart. Among these, 8 radio galaxies are located within the velocity range 70000-80000 km/s. One of them, however (J0454-1016b), is located in the group G2 (see Sect. 2), considered to be unbound in F03, and therefore it will not be considered in the analysis presented in Sect. 5.3. The list of the radio-optical identifications is reported in Table 4, where we give:
− column 1: radio and optical name, where FMC and MBP stand for optical counterparts from F03 and M00 respectively;
− columns 2 and 3: J2000 radio and optical coordinates;
− column 4: integrated flux density at 610 MHz and I magnitude given by the SuperCOSMOS or the APM catalogue when available, otherwise determined from the R magnitude adopting the (R-I)=0.77 colour for early-type galaxies at z=0.2 (Fukugita et al. 1995). The I magnitudes are corrected for a galactic absorption of A_I = 0.146 (Schlegel et al. 1998); if they were derived from the R magnitudes, we first corrected the latter using an absorption A_R = 0.201;
− column 5: radio morphology and (R-I) colour from the SuperCOSMOS or the APM catalogue;
− column 6: radio power at 610 MHz and radial velocity;
− column 7: information on the galaxy type and spectral features of the optical counterpart from M00 and F03, and the R parameter. Note that for the extended radio galaxies discussed in Section 5.2 no value of R is given, due to their extended and complex radio morphology.
Notes to Table 4. (*): this identification is uncertain since the candidate optical counterpart is misplaced with respect to the radio emission peak; (**): for the optical counterpart of this source only the magnitude b_J = 21.71 is available; this galaxy falls within the radio contours of the source, but it is dislocated with respect to the emission peak.

5.1 Unresolved radio galaxies in the A 521 region
Among the 8 cluster radio galaxies (see Table 4), we found three late-type and four early-type galaxies. One of them, J0454-1013b, is associated with the Brightest Cluster Galaxy (FMC65). For the remaining source, J0454-1016a, no colour information is available; however, on the basis of its featureless spectrum (F03) and of its (R-I)=1.26 colour (taken from the SuperCOSMOS catalogue), we can bona fide classify it as an early-type galaxy. For the cluster radio galaxies we searched for segregation effects both in the plane of the sky and in velocity space. In Figure 3 we show the distribution of the radio galaxies within 1.5 Mpc from the cluster centre, overlaid on the DSS-1 optical image and the X-ray isophotes from an archival ASCA observation (∼ 45 ks). This X-ray image was extensively analysed in A00; the purpose of its use here is to illustrate the relative distribution of the hot gas and the positions of the cluster radio galaxies. The 8 radio galaxies of A 521 are represented by circles, while the two radio galaxies at a redshift different from A 521 are indicated by x-points. Squares show the location of radio sources identified with objects without redshift information, and diamonds are radio sources with no optical identification.
Moreover, the cross indicates the X-ray (ROSAT/HRI) centre of the cluster (Tab. 1), and the large blue circles and ellipses represent the dynamical groups of optical galaxies found by F03 (Section 2, Fig. 1). The misplacement between the ROSAT cluster centre and the peaks of the ASCA isophotes can be explained by the lower resolution of the ASCA image, combined with the large uncertainties in the positional accuracy (A00). It is clear that the radio galaxy distribution in A 521 is not random. In particular, all the radio galaxies within the cluster X-ray emission are aligned along the NW-SE axis, which is the suggested direction of the ongoing merger (A00, F05). On the other hand, no radio galaxy was detected along the old merger axis (roughly perpendicular to the NW-SE axis), where F03 found a ridge of high optical density. We note that a similar situation (at least in projection onto the plane of the sky) is found in A 2255, where the radio galaxies are distributed along the merger axis. In Figure 4 the velocity distribution of the radio galaxies in A 521 is compared to the distribution of the whole velocity sample in M00 and F03 with secure velocity measurements. Objects corresponding to late-type radio galaxies are marked as black bins, while the early-type radio galaxies are indicated by the dashed bins. The early-type objects are all found in the bins containing the bulk of the cluster galaxies, while the late-type radio galaxies are at the edge of the velocity distribution, even excluding the G2 member from these qualitative considerations. In particular, the three radio galaxies in the northern part of the G111 group (the BCG group, associated with the group detected in the X-rays) and the source belonging to G112 (see Figure 3) are all early-type galaxies, and lie within a narrow range in velocity, i.e. 74282÷75146 km s^-1. F03 already noted that G111 and G112 are essentially at the same velocity within ∼ 200 km s^-1 (Sect. 2). The remaining early-type radio galaxy of A 521 is the westernmost optical identification in Figure 3 and has a lower velocity (73044 km s^-1), consistent in redshift with the main cluster G11 (Fig. 1). Among the late-type radio galaxies, one is located within the G111 group boundary (see Figure 3), but has a significantly lower velocity with respect to the group. The remaining two sources are embedded in the relic emission (Section 6) and projected within the G2 group. Of these two, J0454-1016b (the northwestern one) has a velocity consistent with the group, which is probably unbound to A 521, while the velocity of the other is significantly lower and consistent with the main cluster. All the radio galaxies in A 521 have low radio power, with the exception of J0454-1016a, located in projection close to the radio relic (see Table 4 and Figure 5), with logP_610MHz (W Hz^-1) = 24.69. Interestingly, two late-type radio galaxies are more powerful than the early-type ones, suggesting enhanced star formation. Unfortunately, the data available in the literature do not allow us to confirm this possibility.

Fig. 3. Location of the radio sources within 1.5 Mpc from the A 521 centre, overlaid on the DSS-1 optical frame and the X-ray ASCA contours. The cross indicates the ROSAT/HRI centre of the cluster (Tab. 1).
Circles represent the 8 radio galaxies belonging to A 521; the x-points represent the positions of the two radio galaxies located at a redshift different from A 521; squares are radio sources identified with an optical object without redshift information; diamonds are radio sources with no identified optical counterpart. Large blue circles and ellipses indicate the dynamical groups of optical galaxies. The X-ray contours are 3.0×10^-5 ÷ 2.4×10^-4 cts s^-1 and are spaced by 1.5×10^-5 cts s^-1.

Fig. 4. Velocity distribution of the 8 radio galaxies of A 521 compared to the redshift distribution of cluster galaxies. Black bins represent late-type galaxies, dashed bins indicate early-type galaxies. The five galaxies with the highest velocity belong to the unbound group G2.

5.2 Extended radio galaxies in the A 521 region
The radio galaxy population in A 521 is dominated by point-like low power objects. However, two extended radio galaxies are well visible in the northern region of the 30′ × 30′ field of Figure 2. Their radio emission is overlaid on the DSS-1 in Figs. 6 (J0454-1006) and 7 (J0453-0957). Unfortunately, no redshift information is available for the two optical counterparts, whose apparent magnitudes are similar to those of the brightest cluster galaxies. Their (B-I) colours, derived from the SuperCOSMOS catalogue, are in very good agreement with the red sequence of the elliptical galaxies of the cluster, i.e. <B-I> ∼ 2.6 (see Fig. 15 in F03). In particular, for J0454-1006 (B-I)=2.47 and for J0453-0957 (B-I)=2.64, therefore they might be part of A 521, despite the large distance from the cluster centre. If we assume that they are located at the average cluster distance, their total radio powers are logP_610MHz = 25.26 ± 0.08 and logP_610MHz = 24.95 ± 0.08 for the wide-angle-tail J0454-1006 and the source J0453-0957 respectively. These values are consistent with their radio structures, typical of intermediate power radio galaxies. The tailed morphology of J0454-1006 is suggestive of interaction between the radio plasma and the external medium. Furthermore, even though this source is outside the boundary of the cluster X-ray emission (Figs. 3 and 8), it is projected at a distance within the virial radius of A 521 (Tab. 1, Fig. 2). Therefore its distorted radio structure may be the result of a recent accretion at the cluster virial radius. On the contrary, the morphology of J0453-0957, which is outside the virial radius, appears undisturbed.

5.3 AGN activity in A 521
In order to understand if the ongoing merger event in A 521 has a significant effect on the radio emission of the AGN cluster population, it is useful to compare the number of observed radio galaxies in this merging environment with the number expected from the radio luminosity function (RLF) for early-type galaxies in normal clusters and in the field. As reference we used the RLF at 1.4 GHz computed by Ledlow & Owen (1996, hereinafter LO96) for early-type galaxies in a sample of local Abell clusters. We are aware that the reference luminosity function was computed with a sample of clusters at lower redshift, i.e. z ≤ 0.09. Unfortunately, no statistical information on the RLF of cluster radio galaxies at the redshift of A 521 is available at present. This point will be further addressed in Section 7. The analysis carried out by LO96 includes all radio galaxies within 0.3 Abell radii (R_A), with logP_1.4GHz (W Hz^-1) ≥ 22.03 and optical counterparts brighter than M_R = −20.5.
This magnitude limit corresponds to m_R = 19.8 at the distance of the cluster A 521, and to the limit I_lim = 19.0, adopting the (R-I) = 0.77 colour for early-type galaxies at redshift z=0.2 reported in Fukugita et al. (1995). Three early-type radio galaxies of A 521 match the above constraints on radio power and optical magnitude, i.e. ∼ 4.7% of the total. The total number of early-type galaxies belonging to A 521 can be derived using the information in F03. A fraction of 57.6% of their spectroscopic sample is composed of early-type galaxies, and for I < I_lim, 85.5% of these objects are cluster members (see Figure 2 in F03). We corrected the number of early-type cluster galaxies for the incompleteness of the F03 spectroscopic sample, which is ∼ 55% up to I_lim. We obtained that the total number of galaxies in the A 521 region brighter than I_lim is 131, of which 112 belong to the cluster. Taking all these constraints into account, we end up with 64 early-type members in the inner 0.3 R_A of A 521. On the basis of LO96, the expected number of radio emitting early-type galaxies is 6, i.e. ∼ 10% of the total. Allowing for the large poissonian uncertainties given by the small numbers we are dealing with, our 3 detections are consistent with the expectations from LO96 well within 1σ. In Sect. 5.1 we pointed out the preferred location of the radio galaxies along the NW-SE axis. Unfortunately, the incomplete optical information (i.e. the non-uniform coverage of A 521, see F03) does not allow any consideration of the connection between the distribution of the radio galaxies and that of the early-type objects in the cluster.

6. The relic source in A 521
The most remarkable feature of A 521 is the presence of a region of diffuse radio emission (Fig. 5) in the south-eastern peripheral part of the cluster, at a projected distance of ∼ 4 arcmin (i.e. 930 kpc) from the A 521 centre, and at the border of the X-ray emission of the cluster (Figure 8). The morphology of the source (labelled J0454-1017a in Table 3) is arc-shaped and highly elongated. Its total angular size along the major axis is ∼ 4′, corresponding to a linear size of ∼ 930 kpc, and its largest transversal angular size is only ∼ 50″, corresponding to 200 kpc. This source was first detected at 1.4 GHz with the Very Large Array (VLA) by Ferrari (2003), and an image is also given in the appendix of F05. The resolution of the radio image in Figs. 5 and 8 is high enough to rule out the possibility that this object is a blend of different radio sources. If we exclude the cluster radio galaxies embedded in the diffuse emission (the point sources A, B and C in Fig. 5 and Table 4), the extended radio source does not appear to be associated with any optical counterpart. The size and the radio morphology, as well as the lack of an optical identification, suggest that the diffuse radio source located at the outskirts of A 521 can be included in the class of cluster relics. Fig. 5 also shows that the cluster radio galaxy A (J0454-1016a, the most powerful source in A 521) is located only 1.5′ away from the relic (in the plane of the sky), and a faint bridge of radio emission connects the two sources. Even though projection effects in A 521 should be taken into account, we note that this situation is similar to what is found in the Coma cluster, where a bridge of radio emission connects the tails of the radio galaxy NGC 4789 and the prototype relic source 1253+275 (Giovannini et al. 1991).
In order to properly determine the total flux density of the relic at 610 MHz, we integrated over the whole region covered by its emission and subtracted the flux density of the embedded point sources (see Section 4). The flux density is S_610MHz = 41.9 ± 2.1 mJy, which gives a radio power logP_610MHz (W Hz^-1) = 24.91. Using the NVSS information, we estimated the total spectral index of the relic between 610 MHz and 1400 MHz. The NVSS flux density at 1400 MHz is S_1400MHz = 16.2 ± 1.5 mJy (after subtraction of the embedded point sources), and therefore α = 1.14 ± 0.16. A zero-order estimate of the energy density of the relativistic plasma and magnetic field associated with the relic can be obtained under the assumption of minimum energy conditions (e.g. Pacholczyk 1970). Assuming a power law spectrum for the electrons with slope δ = 2α + 1 (α = 1.14), and the classical minimum energy equations (normally computed in the frequency range between ν_1 = 10 MHz and ν_2 = 100 GHz), we obtained an equipartition magnetic field B_eq = 0.4 µG. However, we note that for this value of B_eq the electrons with Lorentz factor γ ∼ 2.5×10^3 emit at 10 MHz, and thus the contribution of electrons with γ < 2.5×10^3 to the total energy density is not taken into account. A more accurate approach is to adopt equipartition equations with a low energy cut-off γ_min in the electron energy distribution (not in the emitted synchrotron spectrum). Using the equations given in Brunetti et al. (1997), we derived the corresponding value B_eq′ of the magnetic field, expressed in µG. The parameters of the relic source are given in Table 5 (B_eq′ given for γ_min = 50). Both values of the equipartition magnetic field are in agreement with the estimates found in the literature for radio relic sources and cluster radio halos, i.e. in the range 0.1-1 µG (see the review by Govoni & Feretti 2004).

7. Discussion
The main results of our 610 MHz GMRT study of A 521 can be summarized as follows. i) We detected a relic source, whose projected location is just at the boundary of the X-ray emission from the intracluster gas (Section 6); ii) We compared the number of detected radio loud AGN with the expectations from the radio luminosity function of LO96 (inner 0.3 R_A and logP_1.4GHz (W Hz^-1) ≥ 22.43) and found 3 objects, compared to the 6 expected (Section 5.3). Point (i) is by far the most relevant. In the following we discuss these results in the light of the assessed ongoing merger in this galaxy cluster.

Cluster merger and AGN/starburst radio activity
Our analysis of the cluster radio galaxies showed that the number of radio emitting early-type galaxies in A 521 is consistent with the expectations from the standard RLF (3 against 6), if we allow for the large uncertainties due to the very small number statistics. Such a comparison should be taken with care, since it is made with the local (z ≤ 0.09) radio luminosity function for cluster ellipticals (see Section 5.3), while A 521 is at redshift 0.247. However, this result is striking if we take into account the positive evolution of the RLF for X-ray selected high redshift clusters (Stocke et al. 1999, Branchesi et al. 2005), and leads us to safely conclude that the multiple merger events in A 521 are not increasing the probability of an early-type galaxy developing a nuclear radio source compared to other less extreme environments, as already found in the complex merging environment of A 3558, in the central region of the Shapley Concentration (Venturi et al. 2000; Giacintucci et al. 2004).
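The small-number consistency claim can be checked with elementary Poisson statistics; the sketch below is our own illustration, not part of the original analysis, and gives the probability of detecting 3 or fewer sources when 6 are expected.

# Sketch: Poisson consistency of 3 detected vs 6 expected radio-loud
# early-type galaxies.
from scipy.stats import poisson

expected, observed = 6, 3
print(f"P(N <= {observed} | mean {expected}) = {poisson.cdf(observed, expected):.2f}")
# ~0.15: the deficit of detections is not statistically significant.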
The optical analysis of F03 showed that the late-type star forming galaxies are preferentially located on the axis perpendicular to the direction of the merger (see Fig. 1). The radio power limit of our observations (logP_610MHz (W Hz^-1) = 22.54) favours the detection of AGNs, so very little can be inferred on the role of the ongoing merger in the starburst activity in this cluster. However, if we consider logP_610MHz (W Hz^-1) = 23 as a reasonable upper limit for the radio emission from starburst galaxies (e.g. Condon 1992), we can conclude that no major starburst emission is detected. Unfortunately, no information on the infrared fluxes of the late-type radio galaxies in our sample is available in the literature to support our conclusions.

The merging events in A 521 and the formation of the relic
The most important result of this paper is the detection of the relic source, which gives further observational support to the hypothesis of a close connection between cluster mergers and relic radio emission. The relic in A 521 is located in projection at the border of the cluster X-ray emission. It is slightly inclined with respect to the outer ring of the ASCA X-ray isophotes (see Fig. 8). A number of models have been proposed to explain the origin of radio relics. All these models invoke a connection between these sources and the presence of a shock within the X-ray gas driven by a merging episode (see for instance Markevitch et al. 2005). Simulations of cluster mergers (Ricker & Sarazin 2001) show that the merging of two subclusters leads to the formation of two shocks (a front and a back shock). A 521 has a complex dynamics. The main cluster has a mass of the order of 10^15 M⊙ and has been undergoing multiple minor merging events with groups whose masses are a factor of ∼ 10 lower. Using the dynamical analysis of M00, F03 and F05, and from inspection of Figure 1, a possible scenario is that the group G111 is falling onto the main cluster G11, coming from the North-West. Furthermore, the presence of the two optical groups G12 and G112 south of the central part of the cluster (see Figs. 1 and 3) suggests that the southern part of the cluster region may also be dynamically active.

Merger shock
One possibility is that relativistic electrons are accelerated from the thermal pool by the passage of a strong merger shock (Ensslin et al. 1998, Röttgering et al. 1997). In this case the spectrum of the emitting electrons is related to the Mach number M of the shock by δ = 2(M^2 + 1)/(M^2 − 1) + 1 (e.g. Blandford & Eichler 1987). Here we also include the effect of particle ageing, ∆(δ) = 1, which results from the combined effect of inverse Compton energy losses and continuous injection. In the case of A 521 the spectral index of the relic is α = 1.14, which gives δ = 3.28, and the required Mach number of the shock is M ∼ 3.9.
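A short numerical sketch (ours, not the authors' code) reproduces this chain from the measured flux densities to the implied Mach number:

# Sketch: relic spectral index -> electron slope -> shock Mach number,
# using the flux densities quoted in the text (point sources subtracted).
import math

S610, S1400 = 41.9e-3, 16.2e-3                        # Jy
alpha = math.log(S610 / S1400) / math.log(1400.0 / 610.0)
delta = 2 * alpha + 1                                  # aged electron slope
# Invert delta = 2(M^2 + 1)/(M^2 - 1) + 1 for the Mach number M:
M = math.sqrt((delta + 1) / (delta - 3))
print(f"alpha = {alpha:.2f}, delta = {delta:.2f}, M = {M:.1f}")
# Prints alpha = 1.14 and M = 3.9; delta prints 3.29 from the unrounded
# alpha, while the text quotes delta = 3.28 from alpha = 1.14 exactly.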
Adiabatic compression
A second possibility is adiabatic compression of fossil radio plasma by a merger shock (Ensslin & Gopal-Krishna 2001, hereinafter EG-K01). In this case, the numerical 3-D MHD simulations by Ensslin & Brüggen (2002, hereinafter EB02) predict a variety of radio morphologies and polarization properties, which may be reasonably well matched by the available high sensitivity radio images. Another important requirement in this scenario is the presence of an active radio galaxy in the proximity of the relic. This constraint is satisfied in A 521, where the radio galaxy J0454-1016a is located only 1.5′ from the relic, i.e. a projected distance of ∼ 350 kpc. A previous cycle of activity of this radio galaxy could have provided the fossil radio plasma in the ICM, revived by the shock compression.

Ram pressure stripping
Finally, another appealing possibility is that the relic in A 521 is the result of ram pressure stripping of the radio lobes of J0454-1016a, (a) by a group merger in the southern cluster region, or (b) by the infall of J0454-1016a itself through G11. This scenario requires that the internal pressure of the lobes P_int be significantly smaller than the external ram pressure, i.e. P_int << ρ_ICM v_merg^2, where v_merg is the infall velocity in units of 1000 km s^-1, ρ_ICM is the density of the intracluster medium, and n is the number density in units of 10^-4 particles cm^-3. The projected distance of J0454-1016a from the relic (∼ 350 kpc) requires a time of the order of t_cross ≈ 3.5×10^8 (v_merg / 10^3 km s^-1)^-1 yr to be crossed by the ICM of any merging group (case (a)), or by J0454-1016a itself (case (b)). We note that in (a) we assume that the core position of J0454-1016a is not affected by the group merger dynamics. In order to allow the electrons in the radio lobes to still emit in the radio band, the time t_cross should be smaller than the lifetime of the radiating electrons. This implies v_merg ≳ 3000 km/s. Such a velocity leads to a Mach number M ≳ 2 for the merging group, or for J0454-1016a.

Is there a shock in the external region of A 521?
All the possibilities given above to explain the formation of a radio relic require the presence of a shock in the external region of A 521, with Mach number in the range 2 ≲ M ≲ 4. In search of an observational signature of a shock in the relic region, we re-analysed the public archive Chandra ACIS-I (39 ksec, OBSID 901) and ACIS-S (39 ksec, OBSID 430) observations, also analysed in F05. We point out that the work in F05 is mainly restricted to the central and northern parts of the cluster, and does not include the region relevant to the present discussion. We processed the data using CIAO 3.2 and the newest calibration database, CALDB 3.1.0. On the (0.5-5) keV background-subtracted and exposure-corrected image of the cluster, we extracted the radial X-ray surface brightness profile perpendicular to the relic source, using an 80° sector centered on the cluster centre and containing the relic. The X-ray brightness profile does not show any evidence in support of the existence of a shock front at the projected location of the relic. This result is consistent with what we found in re-analysing the archival ASCA data (see also Fig. 8). We point out, however, that this is not enough to rule out a connection between the relic and the presence of a shock. Two more issues should be considered. In particular: (a) the relic is very peripheral, therefore the cluster X-ray surface brightness is very low there, and deeper X-ray imaging is necessary to investigate the presence of a shock; (b) projection effects should also be taken into account in the analysis. We defer a detailed discussion of this point to a future paper.
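As a quick numerical check of the timing argument in the ram-pressure scenario above (again our own sketch, using the ~350 kpc projected separation quoted in the text):

# Sketch: crossing time of the ~350 kpc projected separation for different
# infall velocities.
KM_PER_KPC = 3.086e16
SEC_PER_YR = 3.156e7

d_km = 350 * KM_PER_KPC
for v_kms in (1000.0, 3000.0):
    t_cross_yr = d_km / v_kms / SEC_PER_YR
    print(f"v = {v_kms:.0f} km/s -> t_cross = {t_cross_yr:.1e} yr")
# v = 1000 km/s gives ~3.4e8 yr, matching the quoted scaling; v >~ 3000 km/s
# is needed to bring t_cross below the lifetime of the radiating electrons.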
Summary

In this paper we presented high-sensitivity observations of the galaxy cluster A 521, carried out at 610 MHz with the GMRT. The cluster is known to have complex dynamics, and its radio emission was analysed in detail using the multiband (X-ray and optical) information available in the literature. We found that the AGN activity in the cluster is consistent with the local RLF for cluster ellipticals within the large Poissonian errors, i.e. we have 3 detections out of 6 expected sources. This result suggests that the multiple merging events in A 521 are not increasing the radio AGN activity of the early-type population compared with other environments. A radio relic was detected at the cluster periphery, and a few possible scenarios for its presence were discussed. One possibility is that the relic is connected to the presence of shock waves induced by the merger. Such shocks may have accelerated relativistic electrons or "revived" fossil radio plasma through adiabatic compression of the magnetic field or shock re-acceleration. The presence of an active cluster radio galaxy in the proximity of the relic suggests that the "revived" plasma might be connected to previous cycles of activity in this object. The radio properties of the relic require a high Mach number for such a shock.
Chemogenetic activation of histamine neurons promotes retrieval of apparently lost memories

Memory retrieval can become difficult over time, but memories that appear to be forgotten might still be stored in the brain, as shown by their occasional spontaneous retrieval. Histamine in the central nervous system is a promising target for facilitating the recovery of memory retrieval. Our previous study demonstrated that histamine H3 receptor (H3R) inverse agonists/antagonists, which activate histamine synthesis and release, enhance activity in the perirhinal cortex and help in retrieving forgotten long-term object recognition memories. However, it is unclear whether enhancing histaminergic activity alone is enough for the recovery of memory retrieval, considering that H3Rs are also located on other neuron types and affect the release of multiple neurotransmitters. In this study, we employed a chemogenetic method to determine whether specifically activating histamine neurons in the tuberomammillary nucleus facilitates memory retrieval. In the novel object recognition test, control mice did not show a memory-based preference for objects 1 week after training, but chemogenetic activation of histamine neurons before testing improved memory retrieval. This selective activation did not affect locomotor activity or anxiety-related behavior. Administering an H2R antagonist directly into the perirhinal cortex inhibited the recovery of memory retrieval induced by the activation of histamine neurons. Furthermore, we utilized the Barnes maze test to investigate whether chemogenetic activation of histamine neurons influences the retrieval of forgotten spatial memories. Control mice explored all the holes in the maze equally 1 week after training, whereas mice with chemogenetically activated histamine neurons spent more time around the target hole. These findings indicate that chemogenetic activation of histamine neurons in the tuberomammillary nucleus can promote retrieval of seemingly forgotten object recognition and spatial memories.

Introduction

Memory retrieval can become challenging over time, a process often accelerated by various neurological and psychiatric disorders [1-3], negatively impacting quality of life. However, even memories that seem forgotten may still be stored latently in the brain, as evidenced by their occasional spontaneous recollection. Thus, enhancing positive modulators of memory retrieval might help recover these seemingly lost memories. While some studies have reported retrieval recovery in animals and humans [4-6], the underlying mechanisms remain largely unexplored.
Histamine in the central nervous system represents a potential target for restoring memory retrieval [7]. Brain histamine is produced mainly in tuberomammillary nucleus (TMN) neurons, is released across various brain regions, and plays a role in learning and memory, wakefulness, motivation, and energy balance [8-10]. Histamine H3 receptor (H3R) inverse agonists/antagonists stimulate the histaminergic nervous system by increasing histamine synthesis and release [11]. We have previously shown that H3R inverse agonists/antagonists enhance perirhinal cortex (PRh) activity and restore the retrieval of forgotten long-term object recognition memories in mice [4]. Similar effects on recognition memory retrieval have been observed in humans [4]. Other studies have also reported that H3R inverse agonists/antagonists enhance memory retrieval [12, 13]. However, it remains unclear whether heightened histaminergic activity alone is sufficient for restoring memory retrieval, as H3Rs are located on other neuron types and influence the release of various transmitters (e.g., γ-aminobutyric acid (GABA), glutamate, acetylcholine, and noradrenaline) [14-16]. Indeed, H3R inverse agonists/antagonists can affect brain functions through modulation of dopamine [17].

In this study, we selectively activated histamine neurons using a chemogenetic approach [18] to determine whether this selective activation is sufficient to induce recovery of memory retrieval. We also investigated the necessity of activating histamine H2 receptors in this process. Finally, we employed the Barnes maze test to assess the effectiveness of chemogenetic activation of histamine neurons in retrieving spatial memories.

We utilized the novel object recognition test as a measure of memory, which examines whether the mouse can distinguish between novel objects and objects previously encountered during the training session. Our previous study indicated that mice fail to discriminate novel objects from familiar objects 3 days after training [4]. However, administering thioperamide, an H3R inverse agonist/antagonist, during the test period between 3 days and 1 month after training restored this ability to discriminate. Therefore, in this study, the test session was conducted 1 week after the training session (Fig. 1G). When the hM3Dq mice were given an intraperitoneal injection of CNO during the test session, they showed increased exploration of the novel object compared with the familiar object (Fig. 1H). As controls, mice with hM3Dq-mCherry in histamine neurons were given saline, and mice with mCherry in histamine neurons were given either saline or CNO. These control groups did not show a preference for the novel object. To quantify this effect, we calculated the discrimination score, reflecting the ability to discriminate between novel and familiar objects. The score for the hM3Dq-CNO group was greater than those of the control groups (Fig. 1I). Additionally, the distance traveled during the test session was comparable across all 4 groups (Fig. 1J). Furthermore, we evaluated anxiety-related behavior using the elevated plus maze test, as anxiety levels might affect memory retrieval. The time spent in the open arms and the number of visits to the open arms were consistent across all groups (Fig. 2). These results indicate that chemogenetic activation of histamine neurons promotes the retrieval of forgotten object memories.
We investigated whether the activation of histamine receptors plays a role in the recovery of memory retrieval. The PRh is a critical region for the novel object recognition test. The H2 receptor (H2R) is expressed in the PRh [19], and our previous study indicated that H2R activation in the PRh is critical for the improvement in retrieval of forgotten object recognition memories produced by H3R inverse agonists/antagonists [4]. Therefore, we aimed to determine whether H2R activation in the PRh is necessary for memory retrieval prompted by chemogenetic activation of histamine neurons. Mice with hM3Dq in their histamine neurons were given a local administration of either ranitidine, an H2R antagonist, or saline via infusion cannulas in the PRh 30 min before the test session (Fig. 3A-C). This was followed by intraperitoneal administration of CNO to all the mice. Mice treated with saline exhibited a preference for exploring the novel object, consistent with previous findings (Fig. 1D). In contrast, mice treated with ranitidine showed reduced exploration of the novel object, similar to their interaction with the familiar object (Fig. 3D). The discrimination score was lower in the mice administered ranitidine than in those receiving saline. These results suggest that H2R activation in the PRh is required for the successful recovery of memory retrieval.

Finally, we examined whether chemogenetic activation of histamine neurons influences the retrieval of forgotten spatial memories using the Barnes maze test. Over 4 days of training, mice learned which hole on the platform had an escape box beneath it. The probe test, conducted either 1 day or 1 week after the training with the escape box removed, assessed their memory retention. Results from the 1-day test showed that mice spent more time around the hole where the escape box was originally located and its adjacent holes (Fig. 4A). In contrast, in the 1-week test, the mice did not show a preference for any specific hole. Based on these results, we chose a 1-week interval between training and testing for subsequent experiments.

The mice with hM3Dq-mCherry in their histamine neurons underwent 4 days of Barnes maze training. One week after the final training session, they were subjected to the probe test. Either saline or CNO was administered 30 min before this test. The mice given CNO spent more time at the target hole than those treated with saline (Fig. 4B). These findings indicate that chemogenetic activation of histamine neurons promotes the retrieval of forgotten spatial memories.

Fig. 1 (caption, panels H-J): H Mice with hM3Dq in histamine neurons treated with CNO showed a preference for exploring the novel object (**P < 0.0001, Sidak's test after two-way repeated-measures ANOVA (interaction, F(3, 34) = 8.60, P = 0.0002)). I The discrimination score, a measure of distinguishing novel objects from familiar objects, was greater in the hM3Dq-CNO group than in the control groups (**P < 0.01, Tukey's test after two-way ANOVA (interaction, F(1, 34) = 6.35, P = 0.0166)). J The distance traveled during the test session was comparable across groups. mCherry-Saline: N = 9 mice; mCherry-CNO: N = 9 mice; hM3Dq-Saline: N = 10 mice; hM3Dq-CNO: N = 10 mice. Values are reported as mean ± SEM.
Discussion

In our study, we employed a chemogenetic method to selectively activate histamine neurons and found that this selective activation alone is sufficient to restore the retrieval of object recognition memories. It is important to note that the novel object recognition test can be influenced by various internal states, such as anxiety and activity levels. However, our findings indicate that activating histamine neurons did not affect anxiety-related behavior or locomotor activity in these mice. Furthermore, we discovered that administering an H2R antagonist directly into the PRh hindered the recovery of memory retrieval. Given the role of the PRh in processing object recognition memories, it is likely that the activation of histamine neurons positively affects the PRh neuronal activity underlying memory traces through H2R activation, thereby facilitating the recovery of memory retrieval.

The improved memory retrieval observed in our study may be attributed to the excitatory effects of histamine through H2R activation [7]. H2R activation leads to an increase in intracellular cyclic AMP (cAMP) and activation of protein kinase A (PKA). This sequence of events reduces afterhyperpolarization by suppressing Ca²⁺-dependent K⁺ channels, thereby increasing neuronal excitability. Additionally, cAMP directly influences the hyperpolarization-activated cation channel HCN2, causing depolarization [20]. Moreover, H2R activation also diminishes the activity of inhibitory interneurons via Kv3.2-containing K⁺ channels, further influencing the activity within neural networks [21]. These excitatory impacts of histamine may play a role in enhancing memory retrieval. In fact, inducing depolarization in PRh neurons replicates the recovery of memory retrieval caused by H3R inverse agonist/antagonist injections [4] or by the chemogenetic activation of histamine neurons. Given that the reactivation of neurons integrated into memory traces and synchronized neuronal activity are crucial for memory retrieval [22-27], histamine could facilitate this reactivation and/or synchronized activity for enhanced retrieval. Indeed, histamine has been shown to increase the reactivation of behavior-relevant PRh neuronal populations in brain slices [4], and an H3R inverse agonist/antagonist enhances synchronized activity in the PRh in vivo [28]. However, further studies employing in vivo neuronal recordings [29] are needed for a more precise understanding of how histamine induces the recovery of memory retrieval.

Fig. 4 (caption): Chemogenetic activation of histamine neurons promotes retrieval of forgotten spatial memories. A Mice underwent 4 days of training in the Barnes maze. Memory testing occurred either 1 day or 1 week after the last training day. The mice showed a preference for the target and adjacent holes in the 1-day test, but no hole preference was observed in the 1-week test (**P < 0.001, Sidak's test after two-way repeated-measures ANOVA (interaction, F(19, 114) = 4.53, P < 0.0001)). N = 4 mice. B Mice with hM3Dq underwent the same training and a probe test 1 week later. The mice with CNO spent more time around the target hole than those treated with saline (**P < 0.0001, Sidak's test after two-way repeated-measures ANOVA (interaction, F(19, 285) = 2.87, P < 0.0001)). Saline: N = 8 mice; CNO: N = 9 mice. Values are reported as mean ± SEM.
Determining whether the enhancement of memory retrieval through histamine neuron activation is a common feature across various memory tasks is crucial for identifying the mechanisms underlying retrieval enhancement. Previous studies have explored the relationship between pharmacological activation of the histamine system and memory retrieval. However, these studies were limited to specific memory tasks, such as object and social recognition memory tasks and inhibitory avoidance [12, 13, 30]. Focusing on a single memory task makes it challenging to eliminate confounding factors and to determine whether an intervention modulates a neural basis specific to that memory task or common to memory retrieval. Therefore, in this study, we utilized the Barnes maze test, a method not previously employed in studies of histamine-induced retrieval enhancement, to test whether the activation of histamine neurons facilitates the retrieval of spatial memories. Our findings suggest that histamine does not act on processes specific to object recognition memory but may activate a common neural foundation for memory retrieval. Considering the potential involvement of distinct neural circuits in spatial and novel object recognition memories [31], further research is needed to clarify the significance of H2R signaling in boosting spatial memory retrieval.

Our study shows that activating histamine neurons does not affect locomotor activity during the test session of the novel object recognition test. This finding contrasts with a previous study that observed an increase in locomotor activity following the same manipulation [32]. This discrepancy could stem from differences in the behavioral task and the timing of the experiments. We presented objects to mice to assess memory, and the test was performed during the light-on phase, whereas Yu et al. performed their test in an object-free open field during the light-off phase [32].

Although our results indicate that histamine neuron activation does not influence anxiety-related behavior, this does not rule out a potential association between histamine and anxiety. Previous studies demonstrated that histamine may be involved in anxiety. Lesioning the TMN decreases anxiety-related behavior [33], while elevating histamine levels through thioperamide increases anxiety-related behavior only in animals pretreated with zolantidine, an H2R antagonist [34]. In addition, mice lacking Hdc exhibit more anxiety-related behavior [35]. These findings suggest a complex role of histamine in modulating anxiety.

In conclusion, we demonstrated that chemogenetic activation of histamine neurons promotes the retrieval of object recognition and spatial memories. Future circuit and molecular analyses will determine the mechanisms underlying the histamine-mediated recovery of memory retrieval.

Drugs

CNO (Enzo Life Sciences) was prepared in a solution of 0.5% DMSO in saline and administered to the mice via intraperitoneal injection at a dose of 0.01 ml/g body weight. The control treatment consisted of an identical volume of 0.5% DMSO in saline. The chosen CNO dose (1 mg/kg) was based on prior studies [37]. Ranitidine hydrochloride (Tokyo Chemical Industry, Tokyo, Japan) was dissolved in saline and directly administered into the PRh. The control group received a comparable volume of saline. The dose of ranitidine was selected based on our previous study [4].
To infuse the H2R antagonist into the PRh, guide cannulas were implanted bilaterally 1 mm above the PRh (A/P: -3.05 mm, M/L: ±4.55 mm, D/V: -2.8 mm) and secured with a self-curing adhesive resin cement (Super-Bond, SUN MEDICAL, Moriyama, Japan). Dummy cannulas (33-gauge) were then inserted into each guide cannula to prevent clogging. Mice were given at least 7 days for postoperative recovery.

Novel object recognition test

The test was conducted as in our previous study [4], with minor modifications. Mice first underwent habituation sessions for three consecutive days, during which they explored an open field (32 cm × 32 cm × 35 cm) for 15 min each day. During the training session, they were placed in the field with two identical objects and allowed to explore for 15 min. In the test session, they explored for 5 min in the presence of one familiar object and one novel object. These objects were similar in texture and size but distinct in shape. The roles of familiar and novel objects were counterbalanced among the mice. A discrimination score was calculated for each mouse as the ratio (T2 - T1)/(T1 + T2), where T1 is the time spent exploring the familiar object and T2 is the time spent exploring the novel object. The test area and objects were cleaned with 70% ethanol solution between trials. All sessions were recorded by a camera, and the video was analyzed using either Noldus Ethovision XT 10 software (Fig. 1) or DeepLabCut [39] (Fig. 3). Exploration was defined as the mouse's nose being within 4 cm of an object's center, excluding sitting on the object.

Elevated plus maze test

The test was conducted as in our previous study [40], with minor modifications. Mice were positioned at the center of the elevated plus maze, which consisted of a central area (8 cm by 8 cm) and four extending arms. Two of these arms were open, each measuring 8 cm wide and 25 cm long, while the other two were enclosed, having the same dimensions but with 25 cm-high walls on the sides and end. At the start of each test, the mice were placed in the central section, facing one of the enclosed arms. The animals' movements were tracked over 5 min using a camera fixed above the maze's center. The duration the animal spent in the open and closed arms and the number of entries into the arms were calculated using Noldus Ethovision XT 10 software. An entry into any arm was counted when the animal placed all four paws into that arm.
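The discrimination score defined in the novel object recognition section above is a simple normalized difference. A minimal sketch follows; the exploration times are invented illustrative values, not data from this study:

```python
def discrimination_score(t_familiar: float, t_novel: float) -> float:
    """(T2 - T1)/(T1 + T2): +1 means only the novel object was explored,
    0 means no preference, -1 means only the familiar object was explored."""
    return (t_novel - t_familiar) / (t_familiar + t_novel)

print(discrimination_score(t_familiar=10.0, t_novel=10.0))  # 0.0, no memory-based preference
print(discrimination_score(t_familiar=8.0, t_novel=16.0))   # ~0.33, novel-object preference
```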
Barnes maze test

The Barnes maze test was conducted using a circular platform, brightly lit at 360-390 lx, 90 cm in diameter, and elevated 76 cm above the floor. The platform featured 20 holes, each 4.5 cm in diameter, positioned 5 cm from its edge. On the first day, the mice were familiarized with an escape box measuring 15 × 9 × 6 cm for 3 min. From the second to the fifth day, they underwent training sessions. During these sessions, the platform had an escape box placed beneath one of the holes. The location of the escape box was fixed across the 4 days of training and differed among the mice. The mice, initially placed at the platform's center and covered by a holding chamber, were given 10 s before the chamber was removed. They then had 180 s to freely explore and find the escape box. The session concluded when a mouse fully entered the escape box. These sessions were conducted three times daily, at 20-min intervals. On either the sixth or twelfth day, a probe test was carried out, during which the escape box was removed from the platform. The procedure mirrored the training sessions but lasted only 90 s. The mice's behavior was captured by a camera, and the amount of time spent within 5 cm of the center of each hole was analyzed using Noldus Ethovision XT 10 software.

Microinfusions

For the microinfusions, 0.5 µL of solution was administered to each side using 28-gauge infusion cannulas. These cannulas extended 1 mm below the guide cannulas and were operated with a pump for 2 min. To ensure effective diffusion of the solutions, the infusion cannulas remained in place for at least 2 min following the infusion.

Immunohistochemistry

Immunohistochemistry was carried out following tissue processing. Tissue sections were incubated in PBST (0.1% Triton X-100 in PBS) for 15 min at room temperature (RT). For HDC immunostaining, they were then blocked using PBS-BX (3% BSA, 0.25% Triton X-100 in PBS) for 1 h at RT. The sections were incubated with a rabbit polyclonal antibody against HDC (dilution 1:800, POG, Cat. #16045) overnight at 4 °C. Following antibody incubation, the sections were washed three times with PBS-BX for 15 min each, then incubated with AlexaFluor 488 goat anti-rabbit IgG (dilution 1:1000, Invitrogen, Cat. #A32731) for 2 h at RT. This was followed by 5-min staining with DAPI in PBS. For c-Fos immunostaining, the sections were blocked using PBST containing 10% normal goat serum (Abcam, Cat. #ab7481) for 1 h at RT. They were treated with a rabbit polyclonal antibody against c-Fos (1:1000, Millipore, Cat. #ABE457) overnight at 4 °C. Following antibody incubation, the sections were washed three times with PBST for 15 min each, then incubated with AlexaFluor 488 goat anti-rabbit IgG for 2 h at RT. This was followed by 5-min staining with DAPI in PBS. After two additional 5-min washes in PBS, the sections were mounted on glass slides with a mounting medium (20 mM Tris, 0.5% N-propyl gallate, 90% glycerol, pH 8.0). Imaging was performed using a laser-scanning confocal microscope (A1RS+, NIKON, Tokyo, Japan). The proportion of HDC- or c-Fos-positive cells among mCherry-positive cells in the TMN was calculated.

Statistical analyses

Values are reported as mean ± SEM (standard error of the mean). Statistical analysis was performed using two-way analysis of variance (ANOVA), repeated-measures ANOVA, Tukey's test, Sidak's test, and two-sided unpaired t-test, where appropriate.
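The probe-test readout described in the Barnes maze section above (time spent within 5 cm of each hole center) is straightforward to compute from tracked coordinates. A minimal sketch with a hypothetical data layout; the trajectory array, sampling rate, and random test data below are assumptions for illustration, not this study's tracking output:

```python
import numpy as np

def hole_dwell_times(traj_xy, hole_xy, radius_cm=5.0, fps=30.0):
    """traj_xy: (T, 2) tracked positions in cm; hole_xy: (20, 2) hole centers.
    Returns seconds spent within radius_cm of each hole center."""
    # distance of every frame to every hole: shape (T, 20)
    d = np.linalg.norm(traj_xy[:, None, :] - hole_xy[None, :, :], axis=2)
    return (d < radius_cm).sum(axis=0) / fps

# 20 holes, 5 cm from the edge of a 90-cm-diameter platform (45 cm radius -> 40 cm)
angles = np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False)
holes = 40.0 * np.column_stack([np.cos(angles), np.sin(angles)])

# fake 90-s probe trajectory sampled at 30 frames per second
traj = np.random.default_rng(0).uniform(-45.0, 45.0, size=(90 * 30, 2))
print(hole_dwell_times(traj, holes))
```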
Fig. 2 (caption): No impact of chemogenetic activation of histamine neurons on anxiety-related behavior. A Time spent in the open arms of the elevated plus maze was consistent across the 4 behavioral groups. B The frequency of visits to the open arms was similar across all groups. mCherry-Saline: N = 11 mice; mCherry-CNO: N = 10 mice; hM3Dq-Saline: N = 12 mice; hM3Dq-CNO: N = 13 mice. Values are reported as mean ± SEM.

Tests were conducted during the light phase of the light/dark cycle. Animal experiments were performed with the approval of the Institutional Animal Care and Use Committees of Hokkaido University (approval number: 16-0043) and Nagoya City University (approval number: 22-018). The study adhered to the Hokkaido University and Nagoya City University guidelines for the care and use of laboratory animals and complied with several national guidelines: the Guidelines for Proper Conduct of Animal Experiments (Science Council of Japan), the Fundamental Guidelines for Proper Conduct of Animal Experiments and Related Activities in Academic Research Institutions (Ministry of Education, Culture, Sports, Science and Technology, Notice No. 71 of 2006), and the Standards for Breeding and Housing of and Pain Alleviation for Experimental Animals (Ministry of the Environment, Notice No. 88 of 2006).
Hormone replacement therapy increases levels of antibodies against heat shock protein 65 and certain species of oxidized low density lipoprotein

L. Uint, O.C.E. Gebara, L.B. Pinto, M. Wajngarten, P. Boschcov, P.L. da Luz and M. Gidlund
Unidades de Aterosclerose e Cardiologia Geriátrica, Instituto do Coração, Faculdade de Medicina, Universidade de São Paulo; Departamento de Biofísica, Universidade Federal de São Paulo; Departamento de Imunologia and Instituto de Medicina Tropical, Instituto de Ciências Biomédicas, Universidade de São Paulo, São Paulo, SP, Brasil

Macrophages, activated T cells, B cells and immunoglobulins are found in atherosclerotic lesions, suggesting that autoimmunity plays a significant role in the pathogenesis of coronary artery disease (CAD) (1). Low density lipoprotein (LDL) and oxidative modifications of LDL particles contribute to the genesis and progression of atherosclerosis by promoting endothelial damage and amplifying the inflammatory response within the vessel wall. Highly oxidized LDL (oxLDL) is cytotoxic and may cause endothelial damage; however, less modified oxLDL is an immune-stimulatory molecule that may up-regulate class II antigen molecules found on monocytes and thereby stimulate T cells (2). Oxidized LDL is also an antigen evoking antibodies against oxLDL, and higher levels of antibodies against oxLDL have been found in patients with coronary atherosclerosis than in normal controls (3).

Another antigen that may play an important role in atherogenesis belongs to the heat shock protein (Hsp) family. Hsp are highly conserved proteins synthesized when cells are exposed to stressful stimuli such as inflammation, infection and oxidizing agents. Increased expression of Hsp 60 has been reported on endothelial cells, macrophages and smooth muscle cells in human atherosclerotic plaques. Mean antibody titers against Hsp 60 were higher in CAD patients than in controls and were also related to disease severity (4). It has also been shown that oxLDL induces Hsp in monocytes, and therefore both antigens may be important for atherogenesis. However, little is known about how these antigens vary during the atherosclerotic process or how antiatherogenic therapy could alter their serum titers.
We studied 20 postmenopausal women (62 ± 6.9 years) subjected to hormone replacement therapy (HRT), before and 90 days after receiving daily oral capsules of 0.625 mg conjugated equine estrogen plus 2.5 mg medroxyprogesterone acetate. Serum samples were collected before and after HRT with written informed consent from patients recruited from the Geriatric Cardiology Unit, Heart Institute, University of São Paulo. The study was approved by the Ethics Committee of the Heart Institute. Patients had no previous record of CAD, all were nonsmokers and nondiabetic, and none had received previous HRT. Serum samples were aliquoted and stored at -70 °C until analysis. Total cholesterol and high density lipoprotein (HDL) were determined by enzymatic and colorimetric methods (cholesterol oxidase phenol aminoantipyrine and phosphatidic acid/magnesium chloride). Plasma triglyceride levels were determined with an enzymatic commercial kit (Abbott, South Pasadena, CA, USA). LDL was estimated according to the Friedewald equation. All samples were tested at the same time, and separately, for antibodies against LDL with a low (LoxLDL) and high (HoxLDL) degree of oxidative modification and against a highly purified recombinant Hsp 65 by ELISA.

The results for the 20 participants in the study are shown in Figure 1 and Table 1.

Figure 1 (caption): Reactivity of serum IgG antibodies against low density lipoprotein with a high (HoxLDL) and low (LoxLDL) degree of oxidative modification and against heat shock protein (Hsp) before and after hormone replacement therapy. LDL was purified by ultracentrifugation and oxidized with 10 µM CuSO4 as described by Frostegard et al. (5). ELISA was performed with LoxLDL (TBARS: absorbance at 532 nm = 0.051), HoxLDL (absorbance at 532 nm = 0.71) or Hsp (6). Plates were coated with 1 µg/ml antigen and blocked with 5% fat-free milk. Samples were diluted 1:500 in phosphate-buffered saline. Bound IgG was detected by adding a sorbent-purified rabbit peroxidase-labeled polyclonal anti-IgG antibody (1:3000, Dako A/S, Carpinteria, CA, USA). After incubation, the reaction was developed by the addition of orthophenylenediamine (Sigma, St. Louis, MO, USA). The assay was read at an absorbance of 490 nm in a microplate reader (BioRad, Hercules, CA, USA).

We found an increase in plasma levels of antibodies against Hsp 65 (0.316 ± 0.03 vs 0.558 ± 0.11, P = 0.047, ANOVA) and against LoxLDL (0.100 ± 0.01 vs 0.217 ± 0.02, P < 0.0001, ANOVA) after 90 days of HRT; however, no change was found in serum antibodies against HoxLDL (0.171 ± 0.05 vs 0.217 ± 0.08). When reactive antibodies were determined pre- and post-therapy for each individual patient, no correlation was found between levels of antibodies against Hsp 65 and those against LoxLDL or HoxLDL after HRT, strongly suggesting that they represent independent antibody reactivities (Table 1). However, a significant correlation was found between antibodies against Hsp 65 and HoxLDL at baseline, before HRT (P = 0.02).
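The Friedewald estimate used above for LDL is a one-line formula (total cholesterol minus HDL minus an approximation of VLDL as triglycerides divided by five, all in mg/dL, valid for triglycerides below roughly 400 mg/dL). A minimal sketch; the input values are illustrative, not patient data from this study:

```python
def ldl_friedewald(total_chol: float, hdl: float, triglycerides: float) -> float:
    """Friedewald LDL estimate (mg/dL): VLDL is approximated as TG/5."""
    return total_chol - hdl - triglycerides / 5.0

print(ldl_friedewald(total_chol=220.0, hdl=55.0, triglycerides=150.0))  # 135.0 mg/dL
```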
The exogenous administration of sex hormones to this relatively homogeneous group of women after a prolonged postmenopausal period provides a unique opportunity to analyze the systemic effects of hormonal stimuli. In contrast to the majority of reports on HRT, which use a prolonged study period, our aim was to assess a more acute response to this treatment, i.e., 2-3 months. We studied lipoprotein profile changes and also antibody formation against two antigens implicated in atherosclerosis. HRT improved the lipid profile, reducing total cholesterol and LDL and increasing HDL, in agreement with previous reports (7).

The simultaneous measurement of antibodies against different antigens related to CAD yielded several interesting results. The analysis of two oxLDL preparations with different degrees of oxidative modification, derived from the same LDL, revealed a change in antibody reactivity against LoxLDL but not HoxLDL during the time of observation. The role of antibodies against oxLDL in atherogenesis remains unclear. Antibodies against oxLDL have been correlated with the severity of CAD. However, in experimental atherosclerosis, animal immunization with oxLDL antigens prevented foam cell formation. Our data showed the presence of at least two classes of antibodies against oxLDL and a significant increase of only one class after treatment. This finding supports previous data on the heterogeneity of the oxLDL antibody response and its dual beneficial and detrimental effects. No correlation was found between LDL and cholesterol levels. The entire LDL particle and/or cholesterol could harbor oxidation-induced antigenic determinants and thus constitute two potential sources of antigens. In a recent study, Heikkinen et al. (8) demonstrated that HRT did not alter serum levels of anti-oxLDL antibodies. However, in the cited study women were younger (mean age, 52 years) and were treated for a longer period of time. The antibodies studied were similar to those against the HoxLDL used in the present investigation, in agreement with our own findings, since plasma levels of antibodies against HoxLDL were similar before and after HRT.

The same serum samples were tested simultaneously for Hsp antibodies. We found a significant increase in the antibody response against Hsp 65, although the mechanism involved remains unclear. Nevertheless, this result agrees with a recent report demonstrating that estrogen is able to regulate the expression of Hsp (both the 90 and 70 subclasses) in human endometrium and several human cell lines (9). How antibodies against Hsp, or cross-reactive antibodies against Hsp, could influence the genesis and progression of CAD requires further investigation. However, the linkage between borderline hypertension, carotid intimal thickness and carotid atherosclerosis in response to higher serum titers of Hsp antibodies, and the demonstration that these antibodies are cytotoxic to endothelial cells, strongly suggest that Hsp may be involved in the genesis and progression of atherosclerosis. The correlation between HoxLDL and Hsp antibodies before treatment could indicate the existence of cross-reactivity between antibodies. This potential antigenic cross-reactivity may lead to sizable apo B degradation and further antigenic stimuli.

Table 1 (caption): Pre- and post-treatment (three months after HRT) samples were tested as described in Figure 1, and the correlation coefficient was determined using the Excel statistical program (Microsoft, San Diego, CA, USA).
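A correlation coefficient of the kind reported in Table 1 can be reproduced in a few lines. A minimal sketch with invented absorbance values, not the study's data:

```python
import numpy as np

def pearson_r(x, y) -> float:
    """Pearson correlation coefficient between two 1-D arrays."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

anti_hsp65  = [0.28, 0.35, 0.31, 0.40, 0.25, 0.33]   # hypothetical OD490 readings
anti_hoxldl = [0.15, 0.22, 0.18, 0.25, 0.12, 0.20]
print(pearson_r(anti_hsp65, anti_hoxldl))
```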
We have shown that HRT can rapidly alter an ongoing humoral antibody response against potentially cross-reactive antigens or autoantigens. The link between oxLDL and Hsp may induce Hsp in monocytes and an inflammatory response within the vessel wall. In patients with known CAD, HRT may trigger the humoral response and increase antibodies against Hsp, potentially aggravating an established lesion. Taken together, these findings suggest that initial HRT may be potentially deleterious. Whether the immune response is temporary, or sustained and deleterious, requires further investigation.
The electrical properties of NiFe2O4-PVDF nanocomposite prepared by the sol-gel method

NiFe2O4 is a soft magnetic material with low coercivity and saturation magnetization but high electrical resistivity, which makes it suitable for electric-field applications. The purpose of this study was to determine the effect of NiFe2O4 composition on the electrical properties of NiFe2O4-PVDF nanocomposites prepared by the sol-gel method. Nickel ferrite-polyvinylidene fluoride (NiFe2O4-PVDF) nanocomposites were made using the sol-gel method and then grown on a glass substrate using a spin coater. The NiFe2O4-PVDF nanocomposites were characterized using X-ray diffraction (XRD) to determine the crystal system, while a Four Point Probe (FPP) was used to determine the electrical (resistivity) properties. The resistivity values of the NiFe2O4-PVDF nanocomposite with NiFe2O4:PVDF compositions of 10:10, 10:20, and 10:30 were 36.66913333, 55.63242667, and 20.85797333 Ω·m, respectively, while for compositions of 10:10, 20:10, and 30:10 they were 36.69613333, 12.07853333, and 9.616773333 Ω·m. It can be concluded that as the NiFe2O4 and PVDF fractions increase, the resistivity tends to decrease.

Introduction

Nanocomposite materials are a very promising line of development in nanostructure research [1, 2]. Nanostructured materials attract many scientists because of their very small size and large surface-to-volume ratio. Size affects the chemical and physical properties, which can differ greatly from those of the bulk material of the same chemical composition [3]. NiFe2O4 is a soft magnetic material with low coercivity and saturation magnetization but high electrical resistivity, making it suitable for electrical applications [4]. A nanocomposite is a material formed from two components: a matrix, which binds and protects the filler, and a filler, which reinforces the matrix [5]. One of the conductive polymers currently being developed is polyvinylidene fluoride, abbreviated PVDF [6]. PVDF is a polymer with great thermal stability; because of its chemical resistance to aggressive reagents, it is widely used in the preparation of nanofiltration (NF) and ultrafiltration (UF) membranes [7].

Science and technology are developing rapidly in all applied fields, including nanoparticle technology. One of the advantages of nanoparticle technology is that the resulting material can have better characteristics than previously available materials. Recently, much research has been devoted to spinel ferrite nanoparticles, motivated by their excellent electrical and magnetic properties and very broad fields of application in storage systems, ferrofluid technology, magnetocaloric refrigeration, and medical diagnostics [8]. In this research, NiFe2O4-PVDF nanocomposites were made and their electrical properties were tested. The method used to synthesize the spinel ferrite nanocomposite is the sol-gel method followed by spin coating. The sol-gel method is one of the most successful methods for preparing nano-sized metal oxide materials.
The sol is a colloidal suspension in which the dispersed phase is solid and the dispersing phase is liquid. Spin coating is a method of making thin layers using rotation. The spin coating method is quite simple, can be done at room temperature, and is effective for making thin layers. In spin coating, the coating material, as individual particles, is driven by a pressurized gas flow toward the substrate surface; these particles hit the substrate, stick, and form a suitable thin layer [9].

Experimental

This research is experimental and uses XRD, FTIR, and Four Point Probe (FPP) characterization. XRD is used to check for the peaks of the samples used in this study, i.e., whether both the NiFe2O4 and PVDF peaks are present; if both are observed, FTIR characterization follows. FTIR is used to determine the chemical bonds formed in the NiFe2O4 and PVDF materials; once the bound NiFe2O4-PVDF compounds are evident in the wavenumber region, characterization continues with the FPP, from which the resistivity values are obtained. The tools used in this study were spatulas, permanent magnets, mortars and pestles, 100-mesh sieves, plates, beakers and measuring cups, drop pipettes, glass substrates, digital scales, an HEM-E3D mill, magnetic stirrers, a furnace, an oven, an ultrasonic cleaner, a spin coater, XRD, FTIR, and a Four Point Probe (FPP). The materials used to make the nickel ferrite-polyvinylidene fluoride (NiFe2O4-PVDF) nanocomposite were iron ore, aquabidest, flour, antiseptic alcohol, nickel oxide (NiO), nitric acid (HNO3), citric acid, polyethylene glycol (PEG), oxalic acid, ethylene glycol, polyvinylidene fluoride (PVDF) and tetrahydrofuran (THF).

Implementation

This research was carried out in stages, starting with iron ore purification: the ore was crushed as finely as possible using a mortar and pestle, sieved with an ordinary sieve, pulled with a permanent magnet 20 times, then washed using aquabidest, dried, and pulled again with a permanent magnet 30 times to separate it from residual impurities. The sample was then refined using the HEM-E3D for 30 hours to obtain nanoparticles. The Fe3O4 sol-gel was made by mixing 17.4 g of the milled iron sand and 4.5 g of oxalic acid using a magnetic stirrer at 110 °C for 15 minutes, then adding 55 ml of ethylene glycol and stirring for 2 hours at 80 °C. NiFe2O4 was made by weighing out 1.25 g of furnace-treated NiO on a digital scale, 4.35 g of Fe3O4, 5.55 g of citric acid, and 11.1 g of PEG. All the weighed ingredients were mixed in a beaker with a magnetic bar and stirred with a magnetic stirrer for 2 hours at 90 °C at a rotation speed of 250 rpm until a gel formed. The gel was left to stand for a while, then dried in the oven for 24 hours at 110 °C and fired in the furnace at 400 °C for 2 hours. Once the NiFe2O4 was formed, the NiFe2O4 precursor was prepared by weighing 3 g of NiFe2O4 and dissolving it in 70 ml of THF in a beaker, then treating it in an ultrasonic cleaner for 2 hours (until dissolved). The PVDF precursor was prepared by weighing 3 g of PVDF and dissolving it in 70 ml of THF in a closed measuring flask fitted with a thermometer, using a magnetic stirrer for 2 hours [10].
In the manufacture of the NiFe2O4-PVDF nanocomposites, a tetrahydrofuran (THF) solvent system was used, with five composition variations of the NiFe2O4 precursor relative to the PVDF precursor. The sol-gel NiFe2O4 solution was dissolved into THF using an ultrasonic cleaner for 2 hours; PVDF and THF (in the ratio 3 g PVDF : 70 ml THF) were then mixed using a magnetic stirrer at 70 °C until the PVDF was fully dissolved in the solvent. The PVDF+THF solution was combined with NiFe2O4 in volume ratios of 30 ml:10 ml, 20 ml:10 ml, 10 ml:10 ml, 10 ml:20 ml, and 10 ml:30 ml. The THF-NiFe2O4 solution was added to the PVDF solution and left for 1 day. The nanocomposite was dripped onto a glass substrate and rotated at 3000 rpm for 60 seconds using a spin coater. The thin film thus formed was then dried in an oven for 30 minutes at 60 °C. The NiFe2O4-PVDF nanocomposite thin films were characterized using X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR) to determine the nanocomposite's chemical bonds, and the Four Point Probe (FPP) to determine the resistivity.

Results and discussion

The results of this study comprise the identification of the structure and grain size from XRD, the resistivity values measured with the FPP, and the FTIR spectra used to examine the chemical compound structure of the NiFe2O4-PVDF nanocomposite. XRD characterization showed that the NiFe2O4-PVDF nanocomposite thin films grown on glass substrates exhibit peaks in the X-ray diffraction patterns for the various NiFe2O4 and PVDF compositions, as can be seen in Figure 1. For the NiFe2O4-PVDF nanocomposite with a composition of 30:10, the IR spectrum of the sol-gel-synthesized nanocomposite was recorded by FT-IR spectroscopy using the KBr method. The NiFe2O4-PVDF nanocomposite IR spectrum can be observed at wavenumbers of 600-4000 cm⁻¹. As can be seen in the FTIR graph in Figure 2, the band in the region of 1194.32 cm⁻¹ indicates the types of NiFe2O4-PVDF compounds that have been bound.

Analysis of the electrical characterization of the NiFe2O4-PVDF nanocomposite using the FPP method gave the resistivity of the material. The cross-sectional length used in this study is 1 cm, because the glass substrate used measures 1 cm. The relationship between the composition of the NiFe2O4-PVDF nanocomposite and the resistivity for varying NiFe2O4 content is shown in Figure 3. As shown in Figure 3, the resistivity values of the NiFe2O4-PVDF nanocomposite with NiFe2O4:PVDF compositions of 10:10, 20:10, and 30:10 are 36.69613333, 12.07853333, and 9.616773333 Ω·m, respectively. The greater the NiFe2O4 composition, the smaller the resistivity value; the 30:10 variation has the smallest resistivity, 9.616773333 Ω·m. This is influenced by the large crystal size of that sample variation. The relationship between the composition of the NiFe2O4-PVDF nanocomposite and the resistivity for varying PVDF content is shown in Figure 4. As can be seen in Figure 4, the greater the PVDF composition, the smaller the resistivity value.
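Four-point-probe readings are typically reduced to resistivity via the thin-film sheet-resistance formula. A minimal sketch; the paper does not report its raw voltage, current, or film thickness, so the numbers below are placeholders, and geometric correction factors for a finite sample are omitted:

```python
import math

def film_resistivity(voltage_v: float, current_a: float, thickness_m: float) -> float:
    """Resistivity (Ohm.m) of a thin film from four-point-probe V and I.

    Sheet resistance R_s = (pi / ln 2) * V / I, then rho = R_s * t.
    """
    sheet_resistance = (math.pi / math.log(2.0)) * voltage_v / current_a
    return sheet_resistance * thickness_m

# placeholder readings chosen to land near the tens-of-Ohm.m values reported above
print(film_resistivity(voltage_v=0.9, current_a=1.0e-6, thickness_m=9.0e-6))  # ~36.7 Ohm.m
```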
These results are consistent with Ref. [11], which reported that the resistivity decreases as the matrix fraction increases, and with Ref. [12], which stated that the more PVDF in the solution, the smaller the resistivity value. This indicates that the experiment is in accordance with theory.
Measurement of the B0_s semileptonic branching ratio to an orbitally excited D_s** state, Br(B0_s -> D_s1(2536) mu nu)

In a data sample of approximately 1.3 fb⁻¹ collected with the D0 detector between 2002 and 2006, the orbitally excited charm state $D_{s1}(2536)$ has been observed with a measured mass of 2535.7 ± 0.6 (stat) ± 0.5 (syst) MeV via the decay mode $B^0_s \to D_{s1}(2536)\,\mu\,\nu\,X$. A first measurement is made of the branching ratio product $\mathrm{Br}(\bar{b} \to D_{s1}(2536)\,\mu\,\nu\,X)\cdot\mathrm{Br}(D_{s1}(2536) \to D^{*}K^0_S)$. Assuming that $D_{s1}(2536)$ production in semileptonic decay is entirely from $B^0_s$, an extraction of the semileptonic branching ratio $\mathrm{Br}(B^0_s \to D_{s1}(2536)\,\mu\,\nu\,X)$ is made. PACS numbers: 13.25.Hw, 14.40.Lb

Semileptonic $B^0_s$ decays into orbitally excited P-wave strange-charm mesons ($D_s^{**}$) are expected to make up a significant fraction of $B^0_s$ semileptonic decays and are therefore important when comparing inclusive and exclusive decay rates, extracting CKM matrix elements, and using semileptonic decays in $B^0_s$ mixing analyses. For B meson semileptonic decays to heavier excited charm states, more of the available phase space is near zero recoil, increasing the importance of corrections in heavy-quark effective theory (HQET) [1], which are effectively tested here.

$D_s^{**}$ mesons (also denoted $D_{sJ}$) are composed of a heavy charm quark and a lighter strange quark in a state of orbital momentum L = 1. In the heavy-quark limit, the spin $s_Q$ of the heavy quark and the total angular momentum $j_q = s_q + L$ of the light degrees of freedom (quark and gluons) are separately conserved, and the latter has possible values of $j_q = 1/2$ or $3/2$. The surprisingly light masses of the $j_q = 1/2$ states, $D^*_{s0}(2317)$ and $D_{s1}(2460)$ [2], plus the observation of new $D_{sJ}$ states [3], deepen the need for a better understanding of these $D_s^{**}$ systems, since they may be quark molecular states, a new and very different arrangement of quarks. In our decay of interest, the $j_q = 3/2$ angular momentum can combine with the heavy-quark spin to form the $J^P = 1^+$ ($D_{s1}$) state, which must decay through a D-wave to conserve $j_q = 3/2$. The $D^{\pm}_{s1}(2536)$ is expected to decay dominantly into a $D^*$ and a K meson to conserve angular momentum.

In this Letter we present the first measurement of semileptonic $B^0_s$ decay into the narrow $D^{\pm}_{s1}(2536)$ state. This state is just above the $D^* K^0_S$ mass threshold and has been observed previously [4]. Events compatible with the decay chain $\bar{b} \to D^-_{s1}(2536)\,\mu^+\nu\,X$, with $D^-_{s1}(2536) \to D^{*-}K^0_S$, were selected. (Charge conjugate modes and reactions are always implied in this Letter.) Assuming that $D^-_{s1}(2536)$ production in a semileptonic decay is entirely from $B^0_s$, the branching ratio $\mathrm{Br}(B^0_s \to D^-_{s1}(2536)\,\mu^+\nu_\mu\,X)$ can be determined by normalizing to the known value of the branching fraction $\mathrm{Br}(b \to D^{*-}\mu^+\nu_\mu\,X) = (2.75 \pm 0.19)\%$ [5] to avoid uncertainties in the b-quark production rate. This semileptonic branching ratio includes any decay channel or sequence of channels resulting in a $D^*$ and a lepton (a muon in our case), and all b hadrons, and therefore includes the relative production of each b-hadron species starting from a $\bar{b}$ quark. Since the final state of interest, $D^-_{s1}(2536) \to D^{*-}K^0_S$, is reconstructed from a $D^*$ and a $K^0_S$, the selection is broken into two parts: one to reconstruct the $D^*$ with an associated muon, coming dominantly from B meson decays and resulting in a number of candidates $N_{D^*\mu}$; and then the addition, and subsequent vertexing, of a $K^0_S$ with the $D^*$ and muon, resulting in $N_{D_{s1}}$ candidates.
To find the branching ratio, the following formula is used:

$$\mathrm{Br}(B^0_s \to D^-_{s1}(2536)\,\mu^+\nu\,X)\cdot\mathrm{Br}(D^-_{s1} \to D^{*-}K^0_S) \;=\; \frac{N_{D_{s1}}}{N_{D^*\mu}}\cdot\frac{\mathrm{Br}(b \to D^{*-}\mu^+\nu\,X)}{f(b \to B^0_s)\;\epsilon_{K^0_S}\;R^{\mathrm{gen}}_{D^*}}. \qquad (1)$$

The input $f(b \to B^0_s)$ [5] is the fraction of decays in which a b quark hadronizes to a $B^0_s$ hadron. $\epsilon_{K^0_S}$ is the efficiency in the signal decay channel to reconstruct a $K^0_S$ and form a vertex with it to make a $D_{s1}(2536)$, given that a $D^*$ and a muon have already been reconstructed. Later we will identify the ratio of efficiencies as $R^{\mathrm{gen}}_{D^*}$.

The D0 detector [6] and the following analysis [7] are described in more detail elsewhere. The main elements relevant to this analysis are the silicon microstrip tracker (SMT), the central fiber tracker (CFT), and the muon detector systems. This measurement uses a large data sample, corresponding to approximately 1.3 fb⁻¹ of integrated luminosity collected by the D0 detector between April 2002 and March 2006. Events were reconstructed using the standard D0 software suite. To avoid lifetime biases relative to the MC simulation, the small fraction of events that entered the sample only via triggers with requirements on track impact parameters was removed.

To evaluate the signal mass resolution and efficiencies, Monte Carlo (MC) simulated samples were generated for signal and background. The standard D0 simulation and event reconstruction chain was used. Events were generated with the pythia generator [8], and decay chains of heavy hadrons were simulated with the evtgen decay package [9]. The detector response was modeled by geant [10]. Two background MC samples were also generated: a $c\bar{c}$ sample, and an inclusive b-quark sample containing all b-hadron species with forced semileptonic decays to a muon. In both cases, all events containing both a $D^*$ and a muon were retained.

B mesons were first selected using their semileptonic decays, $B \to D^{*-}\mu^+X$. At this point in the selection, the $D^*\mu$ sample is dominated by $B^0_d \to D^{*-}\mu^+\nu_\mu X$ decays. For this analysis, muons were required to have hits in more than one muon layer, to have an associated track in the central tracking system, and to have transverse momentum $p_T^\mu > 2$ GeV/c, pseudorapidity $|\eta^\mu| < 2$, and total momentum $p^\mu > 3$ GeV/c. Two oppositely charged tracks with $p_T > 0.7$ GeV/c and $|\eta| < 2$ were required to form a common $\bar{D}^0$ vertex, which was then combined with a muon candidate to form a common decay point following the procedure described in Ref. [11]. For each $D^0\mu^+$ candidate, an additional soft pion was searched for, with charge opposite to that of the muon and $p_T > 0.18$ GeV/c. The $K^-$ and $\pi^+$ from the decay of the $D^0$ were both required to have more than five CFT hits. To reduce the contribution from prompt $c\bar{c}$ production, the transverse decay length significance of the $D^*\mu$ vertex was required to satisfy $L_{xy}/\sigma(L_{xy}) > 1$. After these cuts, the total number of $D^*$ candidates in the mass-difference, $M(D^*) - M(D^0)$, peak of Fig. 1 defines $N_{D^*\mu}$.

$D^{\pm}_{s1}(2536)$ candidates were formed by combining a $D^*$ candidate with a $K^0_S$. $D^*$ candidates were first selected by requiring the mass difference $M(D^*) - M(D^0)$ to be in the range 0.142-0.149 GeV/c². The two tracks from the decay of the $K^0_S$ were required to have opposite charge and more than five hits each in the CFT detector. The $p_T$ of the $K^0_S$ was required to be greater than 1 GeV/c to reduce the contribution of background $K^0_S$ mesons from fragmentation. A vertex was then formed using the reconstructed $K^0_S$ and the $D^*$ candidate of the event. The decay length of the $K^0_S$ was required to be greater than 0.5 cm.
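The kinematic selection just described amounts to boolean cuts on candidate quantities. A minimal sketch, applied to hypothetical candidate arrays; the field names and example values are illustrative, not the D0 analysis framework's:

```python
import numpy as np

def muon_mask(mu):
    """Muon quality and kinematic requirements described above."""
    return ((mu["n_layers"] > 1) & mu["has_central_track"] &
            (mu["pt"] > 2.0) & (np.abs(mu["eta"]) < 2.0) & (mu["p"] > 3.0))

def k0s_mask(k0s):
    """K0_S requirements for forming the D_s1(2536) vertex."""
    return ((k0s["pt"] > 1.0) & (k0s["decay_length_cm"] > 0.5) &
            (k0s["trk1_cft_hits"] > 5) & (k0s["trk2_cft_hits"] > 5) &
            (k0s["trk1_charge"] * k0s["trk2_charge"] < 0))

# example: a dict of numpy arrays standing in for two muon candidates
mu = {"n_layers": np.array([2, 1]), "has_central_track": np.array([True, True]),
      "pt": np.array([3.1, 2.5]), "eta": np.array([0.4, 1.1]), "p": np.array([5.0, 2.9])}
print(muon_mask(mu))  # [ True False ]: the second candidate fails the layer and p cuts
```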
To compute the $D^{\pm}_{s1}(2536)$ invariant mass, a mass constraint was applied using the known $D^{*\pm}$ mass [5] instead of the measured invariant mass of the $K\pi\pi$ system. Finally, the invariant mass of the reconstructed $D^{\pm}_{s1}(2536)$ and muon was required to be less than the mass of the $B^0_s$ meson [5].

The signal model employed for the fit to the $D^* K^0_S$ invariant mass spectrum was a relativistic Breit-Wigner convoluted with a Gaussian function, with the resonance width fixed to the value 1.03 ± 0.05 (stat) ± 0.12 (syst) MeV/c² measured by the BaBar Collaboration [12] and a Gaussian width of 2.8 MeV/c² determined from MC simulation of the signal. The MC width value was scaled up by a factor of 1.10 ± 0.10 to account for differences between data and MC resolution estimates. The unbinned likelihood fit used an exponential function plus a first-order polynomial to model the background, with a threshold cutoff at $M(D^*) + M(K^0_S)$. The fit, shown in Fig. 2, gives a central value for the mass peak of 2535.7 ± 0.7 (stat) MeV/c², a yield of $N_{D_{s1}} = 45.9 \pm 9.1$ (stat) events, and a significance of 6.1σ against the background fluctuating up to or above the observed number of signal events.

The efficiencies used in Eq. 1 are estimated using the MC simulation, after implementing suitable correction factors to ensure proper modeling of the underlying b-hadron $p_T$ spectrum, as well as trigger effects. An event-by-event weight, applied as a function of the generated $p_T$ of the $B_s$, was determined by comparing the generated $p_T(B)$ in MC with the $p_T$ distribution of fully reconstructed $B^+ \to J/\psi K^+$ candidates in data collected primarily with a dimuon trigger [13]. Most events for this analysis were recorded using single-muon triggers, and an additional weight was applied as a function of $p_T(\mu)$ to further improve the simulation of trigger effects. The reweighted MC events were used in the determination of the efficiencies described below, and the indicated uncertainties are due to MC statistics.

Using the MC sample of inclusive $\bar{b} \to D^*\mu X$ events, specific major decay modes were identified. Efficiencies for each of these decay modes to pass the $D^*\mu$ selection, including the efficiency to reconstruct the soft pion from the $D^*$, were then determined. The predicted fraction $F_i$ of each channel contributing to the $D^*\mu$ sample before further cuts was found following a procedure similar to that given in Ref. [14]. The efficiency $\epsilon_i$ for each channel was found and a weighted sum was calculated, giving an estimated total reconstruction efficiency of $\epsilon(b \to D^*\mu) = (5.88 \pm 0.80)\%$, where the uncertainty is dominated by the MC statistics used to find $\epsilon_i$ and by uncertainties on the external inputs [5] used to estimate $F_i$. Applying the same cuts for reconstructing the $D^*\mu$ in the signal channel gives an efficiency $\epsilon(B^0_s \to D_{s1}\mu \to D^*\mu) = (3.20 \pm 0.02)\%$, resulting in a ratio of efficiencies of $R^{\mathrm{gen}}_{D^*} = 0.547 \pm 0.075$.

The signal MC sample was used to determine the efficiency to reconstruct $D^-_{s1}(2536) \to D^{*-}K^0_S$ given a reconstructed $D^*\mu$ as a starting point. This efficiency is hence effectively that of reconstructing a $K^0_S \to \pi^+\pi^-$ and forming a vertex with the $D^*\mu$, and includes the branching ratio $\mathrm{Br}(K^0_S \to \pi^+\pi^-)$ [5] for ease of use in calculating the branching ratio product. The reconstruction efficiency was found to be $\epsilon_{K^0_S} = (10.3 \pm 0.4)\%$, where the uncertainty is due to MC statistics.
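Returning to the signal model above, the line shape can be sketched numerically as a relativistic Breit-Wigner smeared by a Gaussian. This is an illustration with the widths quoted in the text, not the analysis code; normalization and the background model are omitted:

```python
import numpy as np

def rel_breit_wigner(m, m0, gamma):
    """Relativistic Breit-Wigner line shape (unnormalized)."""
    return m0**2 * gamma**2 / ((m**2 - m0**2) ** 2 + m0**2 * gamma**2)

def signal_shape(m_grid, m0=2.5357, gamma=1.03e-3, sigma=2.8e-3 * 1.10):
    """BW on m_grid (GeV) convolved with a Gaussian resolution of width sigma."""
    dm = m_grid[1] - m_grid[0]
    kernel_x = np.arange(-5.0 * sigma, 5.0 * sigma + dm, dm)
    kernel = np.exp(-0.5 * (kernel_x / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(rel_breit_wigner(m_grid, m0, gamma), kernel, mode="same")

grid = np.arange(2.50, 2.58, 1e-4)  # GeV, above the D* + K0_S threshold
shape = signal_shape(grid)
print(grid[np.argmax(shape)])       # peaks near 2.5357 GeV
```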
The process $c\bar{c} \to D^{*-}\mu^+\nu_\mu X$ can contribute to $N_{D^*\mu}$, since a $D^*$ meson can come from the hadronization of the $\bar{c}$ quark and the muon can come from the semileptonic decay of the hadron containing the $c$ quark. To determine the number of events in our signal reconstructed from a prompt $D^*$, the decay length significance distribution observed in the data was compared with the same distribution predicted by MC for $b \to D^*\mu X$, and any excess at shorter significances was interpreted as a $c\bar{c}$ contribution. For the decay length significance cut used in the analysis, $L_{xy}/\sigma(L_{xy}) > 1$, the fraction of $N_{D^*\mu}$ from $c\bar{c}$ production was estimated to be $(3.9 \pm 2.5)\%$. A check using a prompt $c\bar{c}$ MC sample gives a consistent estimate. The value of $N_{D^*\mu}$ was corrected downward accordingly. The contribution to $N_{D_{s1}}$ from $c\bar{c}$ production, in which one charm quark hadronizes directly to a $D_{s1}(2536)$ and the other decays directly to a muon, was estimated to be negligible using relative production ratios and spin-counting arguments [15]. Systematic uncertainties for the branching ratio product are summarized in Table I and discussed below. The uncertainty in the normalizing branching ratio [5] $\mathrm{Br}(b \to D^*\mu X)$ was taken as a systematic uncertainty. For determining $N_{D^*\mu}$, the signal and background model parameters were varied in a correlated fashion and a systematic uncertainty was assigned. The estimated $c\bar{c}$ production contribution was varied by the indicated uncertainty. In the determination of $N_{D_{s1}}$, the functional forms of the signal and background models were varied in a number of ways to determine the sensitivity of the candidate yield. In addition, the scaling of the widths was varied by $\pm 10\%$ to check the sensitivity to the uncertainty in the mass resolution. By comparing the $p_T(\mu)$ distribution for the signal using the default ISGW2 decay model [16] to that of the HQET semileptonic decay model [9], a weighting factor was found and applied to the fully simulated signal MC events, and the efficiency was determined again. The difference observed was assigned as a contribution to the systematic uncertainty of $\epsilon_{K_S^0}$ and $R_{D^*}^{\mathrm{gen}}$. When estimating $\epsilon_{K_S^0}$, the uncertainty due to modeling of the $b$-hadron $p_T$ spectrum was derived by using an alternate weighting technique. The cuts on the $p_T$ and decay length of the $K_S^0$ were varied, and a systematic uncertainty on the efficiency due to this source was also assigned. Discrepancies in track reconstruction efficiencies between data and MC for low-$p_T$ tracks were accounted for by assigning a systematic uncertainty to each of the pion tracks in the $K_S^0$ reconstruction [17,18]. The uncertainty in $R_{D^*}^{\mathrm{gen}}$ is due to a combination of MC statistics and uncertainties in PDG branching ratio values and production fractions, $f(b \to b\ \mathrm{hadron})$. The uncorrelated systematic uncertainty is given in Table I. The estimated systematic uncertainties were added in quadrature to obtain a total estimated systematic uncertainty on the branching ratio product of 16.8%. The branching ratio product was determined to be: To assess the systematic uncertainty on the mass measurement, the same variations of the $D_{s1}(2536)$ mass signal model, as well as of the background functional form, were applied as described above. The mass values used for the mass constraints on the decay products were varied within their PDG uncertainties and were also set to the D0 central fit values. Ensemble tests indicated that the statistical uncertainty is estimated correctly.
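The quadrature combination of the independent systematics works as in this toy Python sketch (the entries are placeholders, since Table I itself is not reproduced in this extraction):

import math

# Toy sketch: independent fractional systematic uncertainties are combined
# in quadrature, as described above. These entries are placeholders, NOT the
# actual Table I values; the paper quotes a 16.8% total.
systematics = [0.10, 0.08, 0.07, 0.05]
total = math.sqrt(sum(s**2 for s in systematics))
print(f"total systematic = {total:.1%}")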
From the observed variations, a total systematic mass uncertainty of 0.5 MeV/c$^2$ was taken, for a mass measurement of: $m(D_{s1}) = 2535.7 \pm 0.6\,\mathrm{(stat)} \pm 0.5\,\mathrm{(syst)}$ MeV/c$^2$. This measured mass value is in good agreement with the PDG average value of $2535.34 \pm 0.31$ MeV/c$^2$ [5]. To allow comparison of this measurement with theoretical predictions, the semileptonic branching ratio alone, as shown in Table II, is extracted by taking the hadronization fraction into $B_s^0$ as $f(b \to B_s^0) = 0.103 \pm 0.014$ [5] and also assuming that $\mathrm{Br}(D_{s1}(2536) \to D^*K_S^0) = 0.25$ [9]. This is the first experimental measurement of this semileptonic branching ratio, and it is compared to a number of theoretical predictions [1,19,20] of the exclusive rate in Table II. The systematic uncertainty on this quantity is as described earlier, and the error labeled "(prod. frac.)" is due to the current uncertainty on $f(b \to B_s^0)$. The first two theoretical predictions include relativistic and $1/m_Q$ corrections, while the third does not. The result is found to be consistent within uncertainties with the first two theoretical predictions, demonstrating the need for such corrections.
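The Table II extraction amounts to a one-line division; the Python sketch below uses a placeholder for the measured product, whose value is elided in this copy of the text:

# Toy sketch of the Table II extraction described above. 'product' is a
# placeholder because the measured branching-ratio product is elided here.
f_Bs = 0.103                  # f(b -> B0s) [5]
br_ds1_to_dstar_k0s = 0.25    # assumed Br(Ds1(2536) -> D* K0S) [9]
product = 2.5e-4              # placeholder, NOT the paper's measured value
br_semileptonic = product / (f_Bs * br_ds1_to_dstar_k0s)
print(f"Br(B0s -> Ds1(2536) mu nu X) ~ {br_semileptonic:.2%}")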
2019-04-21T13:05:30.039Z
2009-02-03T00:00:00.000
{ "year": 2007, "sha1": "7cb4f887e2b7bbf4ddd465b53d6483373ef59f9e", "oa_license": null, "oa_url": "https://eprints.lancs.ac.uk/id/eprint/63761/2/PhysRevLett.102.051801.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "94c3884f0aa74b508777cf14b72c75fdda15d676", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Chemistry", "Medicine" ] }
241189142
pes2o/s2orc
v3-fos-license
Electrocardiographic Predictors of Mortality: Data from a Primary Care Tele-Electrocardiography Cohort of Brazilian Patients Computerized electrocardiography (ECG) has been widely used and allows linkage to electronic medical records. The present study describes the development and clinical applications of an electronic cohort derived from a digital ECG database obtained by the Telehealth Network of Minas Gerais, Brazil, for the period 2010–2017, linked to the mortality data from the national information system, the Clinical Outcomes in Digital Electrocardiography (CODE) dataset. From 2,470,424 ECGs, 1,773,689 patients were identified. A total of 1,666,778 (94%) underwent a valid ECG recording for the period 2010 to 2017, with 1,558,421 patients over 16 years old; 40.2% were men, with a mean age of 51.7 [SD 17.6] years. During a mean follow-up of 3.7 years, the mortality rate was 3.3%. ECG abnormalities assessed were: atrial fibrillation (AF), right bundle branch block (RBBB), left bundle branch block (LBBB), atrioventricular block (AVB), and ventricular pre-excitation. Most ECG abnormalities (AF: Hazard ratio [HR] 2.10; 95% CI 2.03–2.17; RBBB: HR 1.32; 95% CI 1.27–1.36; LBBB: HR 1.69; 95% CI 1.62–1.76; first degree AVB: Relative survival [RS] 0.76; 95% CI 0.71–0.81; 2:1 AVB: RS 0.21; 95% CI 0.09–0.52; and third degree AVB: RS 0.36; 95% CI 0.26–0.49) were predictors of overall mortality, except for ventricular pre-excitation (HR 1.41; 95% CI 0.56–3.57) and Mobitz I AVB (RS 0.65; 95% CI 0.34–1.24). In conclusion, a large ECG database established by a telehealth network can be a useful tool for facilitating new advances in the fields of digital electrocardiography, clinical cardiology and cardiovascular epidemiology. Introduction Cardiovascular diseases are the main cause of mortality both worldwide and in Brazil, and are responsible for 31.2% of total deaths and an age-standardized mortality rate of 256.0 per 100,000 inhabitants [1]. The electrocardiogram (ECG) is a low-cost, easy-access and non-invasive exam used for cardiovascular assessment, and possesses both diagnostic and prognostic value. Epidemiological studies using the ECG began in the 1940s with the first cardiovascular cohorts [2]. However, ECG reports were very heterogeneous due to the lack of an established coding system appropriate to epidemiological and population-based studies [3]. The Minnesota Code [4] was created in 1960 to standardize ECG classification and enable comparison between different populations. In the following decades, many papers were published on the use of the ECG in population-based studies, showing the prognostic value of different electrocardiographic abnormalities [5][6][7][8][9][10][11]. Simultaneously, the evolution of computerized ECG and automated interpretation had a great impact on cardiovascular epidemiological studies [12,13]. Systems that are capable of transmitting electrocardiographic tracings over the Internet and software packages that enable automatic analysis and coding of tracings have revolutionized the electrocardiography of population-based studies, enhancing its applications and facilitating the study of large populations [14][15][16][17].
The identification of new electrocardiographic variables as predictors of cardiovascular events is an important research objective for electronic cohorts, especially since performing ECG for population screening remains controversial [18,19] and the benefit of combining traditional ECG markers with cardiovascular risk scores for discrimination and reclassification is questionable [18,20]. New technologies such as artificial intelligence (AI) are promising tools in this field for the recognition of potential non-traditional electrocardiographic risk factors. Although many studies on ECG abnormalities and their prognostic value have been published, their data usually come from cohorts that include hundreds or thousands of patients, or from secondary care or an inpatient setting, resulting in very specific populations. Big data sets with over one million patients are relatively new, especially in the outpatient setting, and can provide more precise estimates of the risk related to each ECG abnormality in the community setting. This information should be useful for physicians in the primary care setting, and may help to support clinical decisions. Thus, the present study aims to describe the development and clinical applications of an electronic cohort, entitled the Clinical Outcomes in Digital Electrocardiography (CODE) study [21]. This cohort is derived from a digital ECG database obtained by the Telehealth Network of Minas Gerais (TNMG), Brazil [22], from 2010 to 2017, and linked to the mortality data from the national information system, with more than 1.5 million patients. Study Design This study is based on a retrospective cohort of primary care patients from Minas Gerais, Brazil, whose ECGs were analyzed by the Telehealth Network of Minas Gerais (TNMG) cardiologists between 2010 and 2017. TNMG currently covers 817 of the 853 counties in Minas Gerais and nearly 400 in other Brazilian states. It has already acquired more than five million ECGs since its implementation [23]. Inclusion Criteria Patients older than 16 years with 12-lead ECGs performed at TNMG between 2010 and 2017 were included in the study. For the specific analysis of ventricular pre-excitation, all age groups were included. Exclusion Criteria Isoelectric recordings and those with interference, reversal or poor positioning of electrodes, which compromised the analysis, were excluded (6.03%). For the analysis of electrocardiographic changes, patients who underwent more than one ECG had only the first exam analyzed; subsequent recordings were excluded (28.20%). Data Collection ECGs were performed by the local primary care professional using digital electrocardiographs manufactured by Tecnologia Eletrônica Brasileira, model ECGPC (São Paulo, Brazil), or Micromed Biotecnologia, model ErgoPC 13 (Brasília, Brazil). Clinical data (age, sex and comorbidities) were collected using a standardized questionnaire. Clinical conditions included self-reported smoking, hypertension, diabetes, dyslipidemia, Chagas disease, previous myocardial infarction and chronic obstructive pulmonary disease. Specific software, developed in-house, was able to capture an ECG tracing, upload the ECG and the patient's clinical history, and then transmit the data to the TNMG analysis center via the internet. The clinical information, ECG tracings and reports were stored in a specific database. All data management and transfer followed the national law for security and protection of the database.
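The eligibility rules above can be restated as a simple filter; here is a minimal Python sketch (the record field names are hypothetical, not the TNMG database schema):

# Minimal sketch of the cohort eligibility rules described above; all field
# names are hypothetical illustrations, not the TNMG schema.
def eligible(record: dict, for_preexcitation_analysis: bool = False) -> bool:
    if record["is_isoelectric"] or record["has_artifacts"]:   # ~6.03% excluded
        return False
    if not record["is_first_ecg_of_patient"]:                 # ~28.20% excluded
        return False
    # All age groups are kept for the ventricular pre-excitation analysis only
    return for_preexcitation_analysis or record["age_years"] > 16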
For the purpose of the present study, the Glasgow 12-lead ECG analysis program (license 28.4.1, approved for use on 16 June 2009) was used to automatically interpret all ECGs available in the database, exporting the diagnoses as interpreted by both the Glasgow and Minnesota codes. Data Analysis Major Electrocardiographic Abnormalities The major electrocardiographic abnormalities included were atrial fibrillation (AF), right bundle branch block (RBBB), left bundle branch block (LBBB), first, second and third degree atrioventricular blocks (AVB) and ventricular pre-excitation [24]. ECGs were analyzed by a team of fourteen trained cardiologists using standardized criteria [24]. Each ECG was interpreted by only one cardiologist. The ECG report was recorded as unstructured free text. To recognize ECG abnormalities among these million reports, a computational linguistics program was used. First, the cardiologist's text was preprocessed by removing "stop words" (such as: the, is, at, which and on) and generating n-grams, defined as contiguous sequences of n items from a given sample of text or speech. Then, we used a self-supervised learning classification model based on artificial intelligence, using a recurrent neural network as a classifier [25,26], which was built with a 2800-sample dictionary manually created by specialists based on text from real diagnoses. The final report with the ECG abnormalities was obtained by imputing the classifier results for recognition of each ECG abnormality. The classification model was tested on 4557 medical reports manually labeled by two cardiologists, with 80.7% positive predictive value, 94.3% sensitivity and 87.0% F1 score for AF; 86.1% positive predictive value, 95.4% sensitivity and 90.9% F1 score for RBBB; 91.4% positive predictive value, 86.0% sensitivity and 88.6% F1 score for LBBB; 75.6% positive predictive value, 93.5% sensitivity and 83.6% F1 score for AVB; and 96.7% positive predictive value, 96.7% sensitivity and 96.7% F1 score for ventricular pre-excitation [27]. The F1 score is a measure of the model's accuracy and is calculated from the positive predictive value and the sensitivity of the test. The diagnosis of an electrocardiographic abnormality was accepted, without manual review, when the cardiologist's report agreed with one of the automatic systems (Minnesota or Glasgow). The ECGs in which the abnormality was reported by the cardiologist only, or by the two automatic systems only, were manually reviewed by trained staff (Figure 1). For LBBB and RBBB, 17,903 ECGs were reviewed, while for AVB, AF and ventricular pre-excitation, 9038, 4343 and 1090 tracings, respectively, were reviewed. This represents 1.3% of the approximately 2.4 million ECGs processed. Figure 1. Diagram for ECG abnormality diagnosis. Concordance between the cardiologist's report and one of the automatic systems (Glasgow or Minnesota) was required for a diagnosis to be accepted without manual revision. Probabilistic Linkage The electronic cohort was obtained by linking data from the ECG exams (name, sex, date of birth, city of residence) with those from the national mortality information system, using standard probabilistic linkage methods (FRIL: Fine-grained record linkage software, v.2.1.5, Atlanta, GA, USA) [21,28]. Statistical Analysis Qualitative variables were described by frequency distribution. Data obtained from continuous quantitative variables were expressed as mean and standard deviation or median with interquartile range.
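As a check on the classifier metrics quoted above, the F1 score can be reproduced from the positive predictive value and sensitivity alone (a quick Python sketch):

def f1_score(ppv: float, sensitivity: float) -> float:
    # Harmonic mean of positive predictive value (precision) and sensitivity (recall)
    return 2 * ppv * sensitivity / (ppv + sensitivity)

# Reproduces the AF figures quoted above: PPV 80.7%, sensitivity 94.3% -> F1 ~ 87.0%
print(round(100 * f1_score(0.807, 0.943), 1))   # 87.0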
For the analysis of the electrocardiographic abnormalities, the time elapsed between the date of the electrocardiogram (index event) and the event of interest (date of death) was considered the dependent variable. The presence of the electrocardiographic abnormality was an independent variable, along with the clinical characteristics of the population. The comparison group was patients without major electrocardiographic changes, which included both those with a normal ECG and those with all other abnormalities. Patients who did not present with an event of interest by the end of follow-up were censored, but were included in our analysis with follow-up time until the study's end date (September 2017). The non-parametric Kaplan–Meier method was used to calculate survival. The level of statistical significance was defined as p values less than 0.05, calculated by the log-rank test. The Cox proportional hazards multivariate regression model was used for all analyses, except for AVB, for which we used the log-normal model, since the assumptions of the Cox model could not be met. The hazard ratio (HR) with 95% confidence interval was used for the ECG abnormality analyses, except for the AVB survival analysis, in which relative survival risk (RS) was used. RS under 1 means a lower survival rate, while RS over 1 means a higher survival rate. Analyses were adjusted for age, sex and comorbidities. The R statistical program (version 3.4.3, Vienna, Austria) was used for all analyses. CODE Cohort From 2,470,424 ECGs, 1,773,689 patients were identified. A total of 1,666,778 (94%) underwent a valid ECG recording from 2010 to 2017, with 1,558,421 patients over 16 years old. Most patients were women (60.8%), and the mean age was 51.6 (SD ±17.6) years. The overall mortality rate was 3.31% over a mean follow-up of 3.7 years. The clinical conditions of all adult patients and the prevalence of the studied abnormalities are described in Table 1. Survival Analysis: ECG Abnormalities All ECG abnormalities, with the exception of ventricular pre-excitation and second degree AVB Mobitz I, were associated with higher mortality from all causes. Patients with AF and LBBB were also at higher risk of cardiovascular mortality (Table 2, Figure 2).
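As an illustrative sketch of this survival workflow (not the authors' R code; toy data and hypothetical column names, using the open-source lifelines package in Python):

import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Toy cohort: follow-up time in years, death indicator, and covariates.
df = pd.DataFrame({
    "followup_years": [3.5, 4.0, 2.1, 3.7, 1.2, 5.0, 2.9, 4.4],
    "died":           [0, 1, 0, 0, 1, 0, 1, 0],
    "af":             [0, 1, 0, 1, 0, 0, 1, 1],
    "age":            [62, 71, 55, 72, 63, 60, 77, 66],
    "male":           [1, 0, 0, 1, 1, 0, 1, 0],
})

km = KaplanMeierFitter().fit(df["followup_years"], df["died"])  # survival curve
cph = CoxPHFitter(penalizer=0.1)          # small penalizer stabilizes the toy fit
cph.fit(df, duration_col="followup_years", event_col="died")
cph.print_summary()                       # hazard ratios with 95% CIs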
Discussion The resulting dataset has several potential applications, both for technical and clinical-epidemiological studies. Previous studies from our group showed that ECG abnormalities that are considered important, such as pre-excitation syndrome, have no prognostic impact in a community setting. On the other hand, the risk of dying for a person with RBBB is almost as high as with LBBB, the latter being considered a much stronger marker of risk in general cardiology practice [28][29][30]. Patients with AF were at a higher risk of mortality compared with the other abnormalities. First degree AVB was a more severe ECG abnormality than Mobitz I, which had a benign prognosis in this population. A 2:1 AVB on the 12-lead ECG was associated with a 79% reduction of relative survival, probably indicating an infranodal block. According to the World Health Organization, primary health care is an integral part of a country's health system, with a main focus on the social and economic development of the community [31]. Its essence is to treat people, not specific diseases and conditions. Actions related to health promotion and both primary and secondary prevention of cardiovascular diseases are necessary to improve collective health. In this context, the search for new features that are capable of predicting individual cardiovascular risk and, therefore, stimulating the development of cost-effective preventive actions is a matter of great importance. Several tests, such as the coronary calcium score, carotid and vertebral Doppler ultrasound, and serum measurement of high-sensitivity C-reactive protein, have already been recommended for re-stratification of cardiovascular risk [32], although their cost-effectiveness is questionable [32], especially in the context of public health. On the other hand, an inexpensive and widely available exam such as the ECG can diagnose abnormalities such as AF, RBBB, LBBB and AVB that imply a higher risk of mortality regardless of age, sex or previous comorbidities. Stratification of cardiovascular risk by ECG could be a potentially useful tool for clinical practice, especially in primary health care. Identifying the patient who will benefit most from tighter control of blood pressure, diabetes, and cholesterol levels may prevent cardiac events in the future. Electrocardiographic abnormalities draw attention to the potential severity of the patient's condition and the importance of more intensive treatment. In addition, they may help to rationalize and prioritize referrals to secondary or tertiary referral centers. Electronic cohorts with a large amount of data are powerful sources for the development of population-based studies and, therefore, provide stronger evidence to be used in healthcare. Information on ECG parameters or abnormalities from big data sets [33] may have a major impact by distinguishing between benign and potentially life-threatening cardiac conditions. Each population has specific features, such as social, racial and lifestyle characteristics, that have an impact on its health [34]. Chagas disease, for example, is prevalent in Brazil and is associated with major ECG abnormalities [35], while it is very rare in the United States and Europe. AI in healthcare is the future pathway to managing big data from electronic cohorts.
The development of machine learning (ML) models for disease prediction and diagnosis is in a state of exponential growth. In electrocardiography, AI algorithms have been extensively studied both for the automatic diagnosis of electrocardiographic alterations [36] and for the prediction of cardiovascular events and the identification of new cardiovascular risk factors [37]. Estimation of age and sex from the electrocardiographic tracing alone has also been demonstrated [38]. Furthermore, the isolated analysis of the 12-lead ECG can predict mortality within one year with good accuracy, even in tracings reported as normal [39]. AI can extract information from the electrocardiogram that is undervalued and/or unrecognized by conventional methods of analysis, adding diagnostic and prognostic value. The CODE study is now also working with ML techniques. We found good performance of a deep neural network in the recognition of six ECG abnormalities [36]. In the field of prognosis and health promotion, the concept of an AI-estimated electrocardiographic age, compared with the patient's biological age, is promising [40]. This promising new cardiac biomarker can summarize the individual's electrocardiographic characteristics simply and intuitively. It has the potential to provide patients with accessible and understandable information about their cardiovascular risk. More of our results will soon be available and should highlight the importance of ECG epidemiological studies with both traditional and AI methods. Our study has limitations. Data on comorbidities were self-reported, and thus might have been under-reported. The clinical data came from a predetermined questionnaire not tailored for this study. Therefore, some important variables with an impact on cardiovascular prognosis, such as heart failure, were unavailable and not considered as comorbidities in the multivariate analysis. The AI classifier used for ECG report classification had good accuracy, sensitivity and positive predictive value, but can make errors. To minimize this problem, we included the automatic classifications of Glasgow and Minnesota in the diagnostic algorithm. Furthermore, manual revision was done for more than 30,000 ECGs to confirm the presence of the ECG abnormality. The quality of the data from the national mortality information system varies among the regions of the state of Minas Gerais, such that misclassification of the underlying cause of death can occur. The probabilistic linkage also has some issues, such as less-than-perfect sensitivity and the possibility of false pairs. We defined a high cut-off point (94 of 100) for true pairs and made manual revisions in doubtful cases. Conclusions Electrocardiographic markers are predictors of mortality in the TNMG population. AF, LBBB, RBBB and AVB are associated with a higher risk of death from all causes, regardless of age, sex and associated comorbidities. AF and LBBB are independent predictors of higher cardiovascular mortality. Ventricular pre-excitation and Mobitz I second-degree AVB are not associated with higher overall mortality. An electronic cohort with a large amount of ECG data can be a useful prognostic tool and provide a stimulus for future developments in the fields of digital electrocardiography, clinical cardiology and cardiovascular epidemiology. Institutional Review Board Statement: This study complied with all relevant ethical regulations.
The CODE Study was approved by the Research Ethics Committee of the Universidade Federal de Minas Gerais, protocol 49368496317.7.0000.5149. Since this is a secondary analysis of anonymized data stored in the TNMG, informed consent was not required by the Research Ethics Committee for the present study. All researchers who deal with datasets signed terms of confidentiality and data utilization. Informed Consent Statement: Not applicable. Data Availability Statement: Researchers affiliated with educational or research institutions can make requests to access the datasets. Requests should be made to the corresponding author of this paper. They will be forwarded and considered on an individual basis by the Telehealth Network of Minas Gerais. The estimated time needed for data access requests to be evaluated is three months. If approved, any data use will be restricted to non-commercial research purposes. The data will only be made available on the execution of appropriate data use agreements.
2021-10-15T15:17:08.019Z
2021-09-29T00:00:00.000
{ "year": 2021, "sha1": "2f2d1d35e26096874e74559d60038b6930174a07", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2673-3846/2/4/35/pdf", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "0c2ccd99c45e14f3c4454e0ed8d7bfec43fc080e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
41010060
pes2o/s2orc
v3-fos-license
Development of ulcerative colitis under the immunosuppressive effect of cyclosporine Summary In recent studies, cyclosporine has been used for the treatment of both ulcerative colitis and Crohn's disease. The results of these studies were variable. We report on a patient who was treated for 6 years with cyclosporine after kidney transplantation. He developed chronic distal colitis with all the features of ulcerative colitis. An infectious etiology of the colitis was carefully excluded. High-dose treatment with methylprednisolone was required to induce remission. This report shows that immunosuppressive therapy with cyclosporine did not prevent the development of ulcerative colitis in this patient. There is increasing evidence that immunoregulatory abnormalities play a central role in the pathogenesis of the inflammatory bowel diseases, ulcerative colitis and Crohn's disease [12]. It has been suggested that hyperreactivity and increased activation of intestinal T cells are important factors contributing to the ongoing mucosal inflammation in these diseases [12,13]. A recent case report of a patient with severe Crohn's disease who became infected with human immunodeficiency virus (HIV) later in the course of his disease could indicate a role of CD4+ T cells in disease progression. With the loss of CD4+ cells and the development of manifest immunodeficiency, the patient went into stable remission of the intestinal disease [5]. Thus, patients with inflammatory bowel disease may benefit from immunosuppressive treatment such as cyclosporin A (CsA), which preferentially inhibits T cell function. Abbreviations: CsA = cyclosporin A; HIV = human immunodeficiency virus; IgM = immunoglobulin M; RIA = radioimmunoassay. This drug is widely used in organ transplantation, and initial studies suggested a beneficial effect when given to patients with certain diseases of autoimmune origin. CsA has also been used in the treatment of severe Crohn's disease, but with variable success [1,2,7-9,11]. Recently, Lichtiger and Present [6] reported favourable results for CsA in the treatment of refractory ulcerative colitis. We report on a patient who developed ulcerative colitis after 6 years of CsA treatment. Case report A 60-year-old man with end-stage renal disease of unknown origin had received a cadaver kidney transplant in 1984 after 2 years on chronic hemodialysis. The immunosuppressive therapy consisted of methylprednisolone and CsA. Methylprednisolone was withdrawn 6 months after transplantation, followed by monotherapy with CsA. The CsA dosage was adjusted according to blood levels (median values around 140 µg/l, as measured by the CsA RIA kit, Sandoz, Basel, Switzerland, using CsA-specific antibodies). The kidney transplant function was normal, and there had been no rejection episodes. There were no apparent complications of immunosuppressive therapy with CsA from 1984 to 1990. In May 1990, the patient complained of intermittent rectal bleeding, which he had first noticed in January 1990. When the patient was seen in our hospital, he appeared well, and the head, neck, lungs, heart, and extremities were normal on physical examination. Abdominal and rectal examinations were negative. The hemoglobin concentration was 12.1 g/dl, and the white blood cell count was 8.1 × 10⁹/l with a normal differential count. Serum creatinine and serum urea levels and tests for liver enzymes were normal.
Repeated stool and rectal biopsy cultures were negative for Salmonella, Shigella, Yersinia, Campylobacter, pathogenic Escherichia coli, Clostridium difficile, and others. No parasites were found on repeated microscopical examinations of stool specimens. Colonoscopy showed an edematous, friable mucosa with a granular appearance and adherent mucus, some spontaneous bleeding, and multiple submucosal punctate bleedings in the distal colon (rectum to the mid-part of the descending colon). The colonic mucosa proximal to the descending colon was normal. On histological examination, rectal and sigmoidal biopsy specimens showed mucosal hyperplasia with active and chronic infiltrates. Typical mucosal crypt abscesses were found, the crypts were partly destroyed, and there were epithelial regenerates (Fig. 1). These findings confirmed the endoscopically suspected diagnosis of ulcerative colitis. X-ray studies of the small intestine were normal. After an observation period of 2 months, the patient was treated with sulfasalazine enemas, but the symptoms persisted. Since the patient's abdominal symptoms did not improve and endoscopy showed worsening of the distal colitis, hydrocortisone enemas were given 2 weeks later. However, no clinical improvement was apparent, and the endoscopic appearance persisted. Therefore, 6-methylprednisolone was given orally at an initial dose of 48 mg per day and tapered over a 6-week period to a maintenance dose of 8 mg daily, in addition to 1 g sulfasalazine 3 times daily. Under this treatment, the patient's symptoms resolved quickly. Repeat sigmoidal and rectal biopsy specimens showed a marked regression of the previous chronic inflammatory infiltrates. Oral medication with 8 mg 6-methylprednisolone and 3 g sulfasalazine was continued, and the patient showed no symptoms for about 1 year. Then he again observed rectal bleeding. Colonoscopy revealed friable mucosa with a granular appearance and punctate mucosal hemorrhage from the rectum up to the descending colon. Hydrocortisone enemas were given, but the symptoms only resolved when mesalazine enemas were introduced. After 3 weeks of topical treatment, 500 mg mesalazine 3 times per day was given instead of sulfasalazine. The remission has now been stable for 3 months (March 1992), and oral 6-methylprednisolone has been reduced to 8 mg every other day. CsA treatment has been continued for the whole observation period, adjusted according to blood levels, and the kidney transplant function is still normal. Discussion To our knowledge, this is the first report of a patient who developed ulcerative colitis during continuous therapy with the immunosuppressive agent CsA. In immunocompromised patients, chronic intestinal infections are found which are macroscopically and histologically indistinguishable from inflammatory bowel disease [10]. Therefore, special efforts were made, including multiple stool and biopsy examinations, electron microscopy, immunohistology, and serology, to rule out an infectious cause of the chronic colitis in our patient. Our observation may be important since CsA has been applied as a therapeutic agent in severe inflammatory bowel disease [1,2,4,6-9,11]. Most treatment studies so far have been done in Crohn's disease. In one randomized, placebo-controlled study, a significant improvement was described in comparison with placebo [1]. However, this study has been criticised with respect to the grade of improvement [3].
From the available data, it seems possible to reduce disease activity for a relatively short period, but an early flare-up was observed in a considerable percentage of patients. In the one study on the treatment of ulcerative colitis with CsA [6], there was a favorable outcome in 11 of 15 patients, but the period of observation was relatively short, and the study was not placebo-controlled. Data on the effect of CsA on the highly specialized intestinal immune system are lacking, except for one study: in an animal model of intestinal inflammation, Chlamydia trachomatis proctitis of nonhuman primates, it was shown that CsA can inhibit the primary antibody response to C. trachomatis and the C. trachomatis-specific proliferation of peripheral blood lymphocytes after rectal infection with this agent. However, the C. trachomatis-specific proliferation of spleen and mesenteric lymph node lymphocytes was not inhibited [15]. These results indicate that during CsA administration, antigen-reactive lymphocyte populations stimulated in the mucosal environment may expand in tissue sites, even when the antibody and peripheral cellular immune responses are inhibited [14]. This may be of importance for the treatment of inflammatory bowel diseases with CsA: while the systemic immune responses are suppressed by CsA, the imbalanced immune reaction at the level of the mucosa may still persist. The clinical finding of an early flare-up soon after, or still under the influence of, CsA is consistent with this hypothesis. Our observation of the onset of ulcerative colitis in a CsA-treated patient shows that this disease can develop under the immunosuppressive effect of CsA. It indicates that CsA cannot prevent the initial event leading to the chronic intestinal inflammation seen in ulcerative colitis. Interestingly, only high-dose treatment with methylprednisolone was able to induce remission in our patient. Final conclusions on the role of CsA in the treatment of ulcerative colitis cannot be drawn from our observation. It may, however, serve as a note of caution regarding the introduction of this agent in the treatment of ulcerative colitis.
2017-07-28T21:19:50.194Z
1992-07-01T00:00:00.000
{ "year": 1992, "sha1": "c57bec4bb2498d54dba66effd876cd0f92e83ba8", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc7087535?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "c57bec4bb2498d54dba66effd876cd0f92e83ba8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
314275
pes2o/s2orc
v3-fos-license
Stimulation MatrixFlo II breaker can be used at temperatures between 100° and 180°F. It decreases the pH of borate-crosslinked fracturing fluids, reducing fluid viscosity and "yield stress" to provide better fluid flowback. When used with enzymes, MatrixFlo II breaker will also lower the pH of the system and initiate enzyme breaker activity to degrade the polymer backbone further. MatrixFlo II breaker and enzyme breakers combine to create a dual-mechanism breaker system that completely degrades polymer gels. Comment Many patients do not achieve freedom from seizures even with appropriate drug therapies and surgical treatments. In recent years, intracranial electrical stimulation therapy with implanted electrodes has attracted attention as a treatment for these patients. However, as of December 2017, intracranial electrical stimulation therapy with implanted electrodes has not been approved in Japan. Stimulation of the anterior nucleus of the thalamus is performed by stimulating the bilateral anterior nuclei of the thalamus intermittently using an implanted stimulator. For partial seizures in adults, the median seizure reduction rate is 40% after three months of treatment 1). The effect may last for 5 years 2). Adverse events include subjective depressive symptoms and memory impairment. Responsive stimulation of the seizure onset zone is performed by implanting deep or subdural electrodes at 1–2 epileptogenic zones, which automatically detect seizure onset and initiate stimulation. For partial seizures in adults, the mean seizure reduction rate is 38% after 3 months of treatment 3). The effect may last for 5 years 4). Adverse events include intracranial hemorrhage and wound infection. Multiple institutes have reported the efficacy of hippocampal stimulation for temporal lobe epilepsy, but the number of cases is limited [5][6][7][8][9]. C 10-1 Should vagus nerve stimulation therapy be added to drug therapies for drug-resistant temporal lobe epilepsy? Recommendation We suggest adding vagus nerve stimulation to drug therapies (GRADE 2D) (weak recommendation, very low level of evidence). • Supplementary note: In principle, vagus nerve stimulation is considered for patients with no indication for curative surgery. Implantation of the vagus nerve stimulation device involves surgery under general anesthesia in an experienced hospital. After implantation, the patients need to be followed in the hospital where the operation was performed, or at other facilities, by experts with experience in stimulator control. Background, priority of the problem In patients with drug-resistant epilepsy in whom seizures are not controlled even after trials of two appropriate antiepileptic drugs, further addition of drugs has only a limited effect. Vagus nerve stimulation added to antiepileptic drug therapy is expected to provide an additive effect of seizure frequency reduction. Because vagus nerve stimulation is less invasive and has a lower seizure control effect as compared with brain surgery with craniotomy, it may be selected as a treatment option in patients with no indication for curative neurosurgery.
Comment Evidence summary Only one randomized controlled trial (RCT) examined the effectiveness of vagus nerve stimulation adjunct to best medical practice (BMP) (intervention group) versus BMP alone (control group) for drug-resistant epilepsy 1). We therefore also considered using observational studies. However, because the outcomes of those studies, such as reduced seizure frequency and mood change, are susceptible to placebo effect, we decided to use the single RCT. Regarding efficacy, the relative risk for 50% seizure frequency reduction was 1.34 (95% confidence interval 0.59–3.04), and the NNT (number needed to treat: the number of persons needed to be treated to achieve the outcome in one person) was 25. As for mood changes, there were no significant differences between the intervention group and the control group in the scores for several scales: QOLIE-89 (89-item Quality of Life in Epilepsy Inventory), CES-D (Center for Epidemiologic Studies Depression scale), and NDDI-E (Neurological Disorders Depression Inventory in Epilepsy scale). Regarding mood changes, the only scale showing a statistically significant difference was the 7-point evaluation scale CGI-I (Clinical Global Impression – Improvement), but the difference was only 0.5 (95% confidence interval 0.01–0.99), showing a small effect. For serious adverse events, vocal cord paralysis and brief respiratory arrest occurred only in the intervention group, but these were transient with no sequelae. There was no significant difference in the adverse event of dysphonia between the intervention group and the control group. It should be noted that the selected RCT was prematurely terminated by the sponsor due to a low recruitment rate, because many study candidates did not accept randomization of the treatment. Therefore, the study may be underpowered for detection of the outcomes. 3-1. What is the overall quality of evidence across outcomes? In the study reviewed, the risk of bias was high overall, which was judged as serious for all the outcomes, and the evidence was downgraded by one rank. The inconsistency of results was not downgraded because only one study was used. The indirectness was judged as not serious and without any problems. As for imprecision, the confidence intervals in many analyses crossed the clinical decision threshold, and the evidence was hence downgraded by one or two ranks. As for publication bias, there was only one study, and therefore the evidence was not downgraded. Consequently, the level of evidence for the outcomes was as follows: "very low" for seizure frequency ≤ 50%, serious adverse events, and dysphonia; and "low" for the other outcomes. The overall level of evidence was "very low". 3-2. What is the balance between benefits and harms? Since there was only one RCT, the certainty of the effect estimate was low, and it was difficult to assess the balance between benefits and harms. 3-3. What about patients' values and preferences? The importance of outcomes shows great inter-individual differences and should be expected to be diverse. It should be noted that some patients place importance on the reduction of seizure frequency, while others regard the risk of adverse effects as more important. 3-4. What is the balance between net benefit and cost or resources?
The electrode implantation for VNS is conducted under general anesthesia. Vagus nerve stimulation is covered by medical insurance; the medical insurance fee scale for implantation is 24,350 points, and that for exchange is 4,800 points (as of January 11, 2018). Reoperation should be done once every few years for replacement of the generator because of battery depletion. Considering the effectiveness for refractory epilepsy and the above-mentioned factors, the cost was judged to be moderate. 3-5. Recommendation grading During the discussions at the panel meeting, considering the moderate burden and cost, and the few alternative treatment options available, the panelists concluded that it was reasonable to use this treatment method despite a certain amount of harm, burden and cost. The unanimous decision was "to propose implementing vagus nerve stimulation for drug-resistant epilepsy". As an additional consideration, the patients' families at the panel meeting expressed the following opinion: "We desire to overcome social constraints. If there is any method to solve this, please include it as one of the options." Descriptions in other related guidelines In Japan, the "Practice guideline of vagus nerve stimulation therapy for epilepsy" 2) was published by the Japan Epilepsy Society in 2012, which states that "VNS has an accommodative effect on drug-resistant epileptic seizures [recommendation grade A]". Also, the American Academy of Neurology released a guideline update entitled "Vagus nerve stimulation for the treatment of epilepsy" in 2013. This guideline update describes the possibility that the effectiveness of vagus nerve stimulation appears several years after the VNS operation, the effectiveness in children [rate of > 50% seizure reduction: 55% (95% confidence interval 50–59%)], and an increased risk of infection in children compared to adults [odds ratio 3.4 (95% confidence interval 1.0–11.2)]. According to the guidelines in Japan and overseas and the recommendation from the ILAE, the indication for vagus nerve stimulation is, in principle, patients who have no indication for curative neurosurgery [2][3][4]. Treatment monitoring and evaluation Vagus nerve stimulation treatment requires adjustment of the stimulation conditions, management of complications, and solving of equipment troubles. Epilepsy specialists, or doctors trained by such specialists, should perform monitoring and evaluation after the operation based on expert knowledge. Possibility of future research The RCT reviewed for this CQ had a high risk of bias. Therefore, it is desirable to have more RCTs of better quality. In addition, further research focusing on how to identify good responders and the effects on status epilepticus is needed in the future. • Supplementary note: Adjustment of stimulation conditions should be conducted in the hospital where the electrode implantation was performed or in a hospital/institution where a VNS specialist is present. Background, priority of this issue The efficacy of vagus nerve stimulation is known to depend on the stimulation conditions. The intensity of stimulation should be adjusted while monitoring its therapeutic effect and adverse effects. Therefore, it is necessary to clarify whether high intensity stimulation or low intensity stimulation is superior when conducting VNS.
In addition, as mentioned in CQ 10-1, "Should vagus nerve stimulation therapy be added to drug therapies for drug-resistant temporal lobe epilepsy?", it is difficult to perform a comparison between real VNS and sham VNS (with no stimulation). Therefore, there is an increase in randomized controlled trials (RCTs) using low intensity stimulation as sham stimulation (placebo or pseudo-stimulation) for comparison with high intensity stimulation. There is one Cochrane Review 1) on a similar clinical question. This review shows that high intensity stimulation has a superior therapeutic effect, while treatment withdrawal is rare with both high and low intensity stimulation. Comment Evidence summary There were 4 RCTs that examined the efficacy of vagus nerve stimulation therapy for drug-resistant epilepsy [2][3][4][5]. For efficacy, the relative risk for seizure frequency ≤ 50% was 1.74 (95% confidence interval 1.14–2.65), and the NNT (number needed to treat: the number of persons needed to be treated to achieve the outcome in one person) was 10. For adverse events, low level stimulation was significantly superior for dysphonia and hoarseness (relative risk 2.06, 95% confidence interval 1.34–3.17) and dyspnea (relative risk 2.43, 95% confidence interval 1.29–4.57). Treatment withdrawal, cough, and pain did not differ significantly between high level and low level stimulation. 3-1. What is the quality of evidence across the overall outcomes? In all the studies collected, the risk of bias was low overall, and the level was not downgraded for any of the outcomes. For inconsistency of the results, I² was 32% for dysphonia/hoarseness only. Since the effect estimate differed between studies, heterogeneity was considered high. Inconsistency was thus considered serious and was downgraded by one rank. There was no problem with indirectness, which was judged not serious. As for imprecision, the confidence intervals in many analyses crossed the clinical decision thresholds, and hence the evidence was downgraded by one or two ranks. Regarding publication bias, there were only four studies, and therefore the evidence was not downgraded. Consequently, the level of evidence for the outcomes was as follows: "moderate" for seizure frequency ≤ 50%, cough, and dyspnea; "low" for treatment withdrawal, dysphonia/hoarseness, and pain. The overall level of evidence was "low". 3-2. What is the balance between benefits and harms? High level stimulation was superior to low level stimulation for the outcome of seizure frequency ≤ 50%. Among the adverse events, dysphonia/hoarseness and dyspnea showed lower rates with low level stimulation, but since there was no significant difference in treatment withdrawal between the two groups, there must be few adverse events serious enough to cause treatment withdrawal. According to expert opinion, many adverse events are reversible and can be controlled by adjusting the stimulation current intensity. Taken together, we decided that high level stimulation is probably superior in terms of the balance between benefits and harms. 3-3. What about patients' values and preferences? We concluded that there is probably no significant uncertainty or variability in patients' values and preferences, because high level stimulation is more effective than low level stimulation and, although adverse events are more prevalent with high level stimulation, they are reversible and can be controlled by adjusting the stimulation current. 3-4. What is the balance between net benefit and cost or resources?
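For readers unfamiliar with these summary measures, the relative risk and NNT can be computed from 2×2 trial counts as in this short Python sketch (the counts below are hypothetical, chosen only to reproduce an RR close to the 1.74 quoted above; they are not the pooled trial data):

# Hypothetical 2x2 counts, NOT the pooled trial data.
def rr_and_nnt(events_tx: int, n_tx: int, events_ctl: int, n_ctl: int):
    risk_tx, risk_ctl = events_tx / n_tx, events_ctl / n_ctl
    rr = risk_tx / risk_ctl                # relative risk
    nnt = 1.0 / (risk_tx - risk_ctl)       # patients treated per extra responder
    return rr, nnt

rr, nnt = rr_and_nnt(40, 150, 23, 150)     # -> RR ~ 1.74, NNT ~ 9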
Adjustment of the stimulation intensity can be done by placing the programming wand over the subcutaneously implanted generator; thus resources and costs are negligible. However, reoperation is needed every few years to replace the generator when the battery runs out of power. Battery consumption is higher for high level stimulation than for low level stimulation. Based on these considerations, it was decided that high level stimulation costs moderately more than low level stimulation. 3-5. Recommendation grading In the discussions at the panel meeting, high level stimulation was considered superior in efficacy, and the adverse effects were acceptable because most of them were presumably at a level that would not cause treatment withdrawal. As for burden and cost, high level stimulation was expected to consume more battery power, requiring more frequent generator exchange. Based on the above arguments, despite considerable adverse events that did not cause treatment withdrawal, as well as the increased burden and cost, we finally unanimously recommended using high level stimulation, considering the highly anticipated seizure control effect. Descriptions in other related guidelines In Japan, the "Guideline on implementation of vagus nerve stimulation therapy for epilepsy" 6) was published by the Japan Epilepsy Society in 2012, which states that "In principle, initiate VNS two weeks after implantation. Start with low stimulation intensity and then gradually increase the intensity while monitoring the adverse effects [recommendation grade C]". In 2013, the American Academy of Neurology released a guideline update entitled "Vagus nerve stimulation for the treatment of epilepsy" 7). There is no recommendation for high level or low level stimulation in that guideline. However, it states that whether stimulation at a higher frequency is more likely to reduce seizures than usual stimulation remains unknown. Treatment monitoring and evaluation For adjusting the stimulation intensity, we need a system capable of managing complications and coping with equipment troubles. Future research issues Further research on the optimal intensity of stimulation is needed. In addition, other than stimulus intensity, there is no RCT on supplementary techniques such as magnet stimulation, which will be a future research subject. It is also desirable to elucidate the mechanisms underlying the subgroup with a high response and to develop evaluation methods to identify these subjects. RCT reports reviewed for this CQ: Michael 1993 2), VNS Study Group 1995 3), Handforth 1998 4), Klinkenberg 2012 5). The efficacy of stimulation of the anterior nucleus of the thalamus and of responsive stimulation of the seizure onset zone has been shown for partial seizures. Although a limited number of reports have also indicated the long-term efficacy of these methods and the effectiveness of other intracranial stimulation methods (hippocampus, paracentral thalamic nucleus, and cerebellum), the evidence is not sufficient and further verification is required. Recommendation: In vagus nerve stimulation (VNS) for drug-resistant epilepsy, we suggest using high intensity stimulation rather than low intensity stimulation (GRADE 1C) (strong recommendation, low level of evidence). *: At high-level stimulation, the current was set at the highest tolerable level for each patient. At low-level stimulation, the current was set at the lowest level that could be sensed by the patient. **:
2014-10-01T00:00:00.000Z
2020-02-08T00:00:00.000
{ "year": 2020, "sha1": "99dcd5d5ef84e8899738b09a4621817e26b09572", "oa_license": "CCBY", "oa_url": "https://doi.org/10.32388/erb851", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "1bf80f0d5d5be5f645b8f557057b0698eedc2ef1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
4769355
pes2o/s2orc
v3-fos-license
Genetics Variants and Serum Levels of MHC Class I Chain-related A in Predicting Hepatocellular Carcinoma Development in Chronic Hepatitis C Patients Post Antiviral Treatment Background/aims A genome-wide association study has shown that MHC class I chain-related A (MICA) genetic variants were associated with hepatitis C virus (HCV)-related hepatocellular carcinoma (HCC). The impact of the genetic variants and of MICA serum levels in a post-treatment cohort is elusive. Methods MICA rs2596542 genotype and serum MICA (sMICA) levels were evaluated in 705 patients receiving antiviral therapy. Results Fifty-eight (8·2%) patients developed HCC, with a median follow-up period of 48·2 months (range: 6–129 months). The MICA A allele was associated with a significantly increased risk of HCC development in cirrhotic non-SVR patients but not in patients who were non-cirrhotic and/or achieved an SVR. For cirrhotic non-SVR patients, high sMICA levels (HR/CI: 5·93/1·86–26.38·61, P = 0·002) and the MICA rs2596542 A allele (HR/CI: 4·37/1·52–12·07, P = 0·002) were independently associated with HCC development. The risk A allele, or the GG genotype with sMICA > 175 ng/mL, provided the best accuracy (79%) and a negative predictive value of 100% in predicting HCC. Conclusions Cirrhotic patients who carry MICA risk alleles, and those without risk alleles but with high sMICA levels, possessed the highest risk of HCC development once they failed antiviral therapy. Introduction Hepatitis C virus (HCV) infection is one of the leading causes of hepatocellular carcinoma (HCC) worldwide. Successful HCV eradication reduces the risk of HCC in patients with all stages of liver disease (Yu et al., 2006a, 2006b; Huang et al., 2014; Morgan et al., 2013). Preexisting liver cirrhosis before treatment has been recognized as the most critical factor for HCC in patients receiving antiviral therapy (Huang et al., 2014). A recent meta-analysis has demonstrated an incidence of 1·05% per person-year for HCC development in patients with advanced liver disease, even if they achieved a sustained virological response (SVR) (Omata et al., 2010). Beyond the determinants of viral eradication and liver cirrhosis, several simple biochemical markers, such as α-fetoprotein (AFP) (Asahina et al., 2013), alanine transaminase (ALT) (Asahina et al., 2013), γ-glutamyltransferase (γ-GT) (Huang et al., 2014), and the aspartate aminotransferase (AST)-to-platelet ratio index (APRI) (Yu et al., 2006b), have been used to predict HCC occurrence in the post-treatment cohort. Notably, HCC still develops in a substantial proportion of non-cirrhotic patients who have successfully eradicated HCV by antiviral therapy (Huang et al., 2014). In addition to the impact of the virus and fibrogenesis, host genetics play a role in HCV-related hepatocarcinogenesis. Interleukin 28B (IL-28B) genetic polymorphisms are by far the most important genetic determinant of anti-HCV treatment efficacy (Huang et al., 2012, 2013a, 2013b). These polymorphisms also influence liver-related clinical outcomes, including HCC development (Noureddin et al., 2013). A genome-wide association study (GWAS) demonstrated that the single nucleotide polymorphism (SNP) rs2596542 of MHC class I chain-related A (MICA) and its serum level (sMICA) were associated with HCV-related HCC in a cross-sectional study (Kumar et al., 2011). Genetic variants of epidermal growth factor (EGF) at rs4444903 are also associated with HCC (Abu Dayyeh et al., 2011; Tanabe et al., 2008).
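The combined predictive rule reported in the abstract can be stated compactly; here is a minimal Python sketch of my reading of it (illustrative only, not the authors' analysis code; parameter names are hypothetical):

# Sketch of the combined rule from the abstract: carriers of the rs2596542
# risk A allele, or GG-genotype patients with sMICA above 175 ng/mL, are
# flagged as high HCC risk among cirrhotic non-SVR patients.
def high_hcc_risk(genotype: str, smica_ng_ml: float, cutoff: float = 175.0) -> bool:
    carries_a_allele = "A" in genotype           # AA or AG
    return carries_a_allele or (genotype == "GG" and smica_ng_ml > cutoff)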
In addition, alcoholic cirrhotic patients who harbor the rs738409 GG genotype of patatin-like phospholipase domain-containing 3 (PNPLA3) are at increased risk of HCC (Guyot et al., 2013). Notably, the role of host genetics in HCC development in chronic hepatitis C (CHC) patients after antiviral therapy has rarely been explored. Herein, we conducted a longitudinal follow-up study of a well-characterized HCV cohort who had received antiviral therapy and determined the association of the above-mentioned candidate SNPs and sMICA with HCC development after weighing potential confounders, including viral eradication and preexisting cirrhosis.

Methods
CHC patients receiving antiviral therapy were consecutively recruited into a prospective follow-up cohort at one tertiary hospital and two core regional hospitals from 2002 to 2012. All participants received peginterferon alpha-2a or peginterferon alpha-2b plus ribavirin. Patients were excluded if they were co-infected with HIV or hepatitis B virus, exhibited alcohol abuse (≥20 g daily) or had evidence of HCC before, during or within 6 months after antiviral therapy. Patients with or without an SVR, defined as seronegativity for HCV RNA throughout a 24-week post-treatment follow-up period, were further evaluated for the risk of HCC development. Serum HCV RNA was detected using qualitative real-time polymerase chain reaction (PCR) (COBAS AMPLICOR Hepatitis C Virus Test, ver. 2.0; Roche, Branchburg, NJ, USA; detection limit: 50 IU/mL) or a quantitative branched-DNA assay (Versant HCV RNA 3.0; Bayer, Tarrytown, New Jersey, USA; quantification limit: 615 IU/mL) if evaluated before 2011. HCV genotypes were determined using the Okamoto method before 2011 (Okamoto et al., 1993). After 2011, both HCV RNA and genotype were determined using a real-time PCR assay (RealTime HCV; Abbott Molecular, Des Plaines, IL, USA; detection limit: 12 IU/mL) (Vermehren et al., 2011). The definition of cirrhosis was based on liver biopsies performed within 6 months before starting antiviral therapy; liver histology was graded and staged according to the scoring systems described by Knodell and Scheuer (Scheuer, 1991). The post-treatment follow-up strategy was based on cirrhotic status and treatment outcome, as previously described (Huang et al., 2014). Briefly, patients were followed every 3 months if they were cirrhotic or did not have an SVR, and every 6 to 12 months if they were non-cirrhotic and had an SVR. The diagnosis of HCC was confirmed by histology or by imaging and laboratory evidence, in accordance with the American Association for the Study of Liver Diseases (Bruix & Sherman, 2011) and Asian Pacific Association for the Study of the Liver (Omata et al., 2010) guidelines. All patients provided written informed consent. The institutional review boards at the participating hospitals approved the protocols, which conformed to the guidelines of the International Conference on Harmonization for Good Clinical Practice.

Genetic Testing and sMICA Measurement
Four candidate single nucleotide polymorphisms (SNPs), namely MICA rs2596542, IL-28B rs8099917, EGF rs4444903 and PNPLA3 rs738409, were selected for the current study. The IL-28B rs8099917 and PNPLA3 rs738409 genotypes were determined using previously described methods (Lawitz et al., 2014; Huang et al., 2015a).
SNP rs2596542 of MICA and SNP rs4444903 of EGF were determined by ABI TaqMan® SNP genotyping assays (Applied Biosystems, Foster City, CA, USA) using pre-designed commercial genotyping assays (ABI Assay IDs: C__27301153_10 and C__27031637_10, respectively). Briefly, PCR primers and two allele-specific probes were designed to detect the specific SNP targets. The PCR assays were performed in 96-well microplates on an ABI 7500 real-time PCR system. Allele discrimination was achieved by detecting fluorescence using SDS software version 1.2.3. All allele and genotype frequencies were in Hardy–Weinberg equilibrium. Pretreatment sMICA levels were measured by sandwich enzyme-linked immunosorbent assay using DuoSet MICA ELISA kits (R&D Systems, Minneapolis, MN, USA).

Statistical Analyses
Frequencies were compared between groups using the χ² test, the Yates correction, or Fisher's exact test. Group means (presented as the mean ± standard deviation) were compared using analysis of variance and Student's t-test, or the nonparametric Mann–Whitney test when appropriate. The aspartate aminotransferase (AST)-to-platelet ratio index (APRI), representing the severity of liver fibrosis, was determined by the following equation: (AST level/upper limit of normal)/platelet count (10⁹/L) × 100 (Yu et al., 2006b). Kaplan–Meier analysis and the log-rank test were used to compare the cumulative incidence of HCC between determinants. Risk factors independently associated with HCC development were evaluated using Cox regression analysis. Bonferroni multiple-test correction was used to adjust the P value, reducing the chance of obtaining false-positive results (type I errors) when multiple pairwise tests are performed on a single set of data. The area under the curve (AUC) from receiver operating characteristic (ROC) analysis was used to determine the cut-off value of sMICA for predicting HCC. Statistical analyses were performed using the SPSS 12.0 statistical package (SPSS, Chicago, IL, USA). All statistical analyses were based on two-sided hypothesis tests with a significance level of p < 0·05.

Influence of Host Genetics on HCC Development in Patients Stratified by Liver Cirrhosis and Treatment Response
Given that preexisting liver cirrhosis and failure to achieve an SVR were the major determinants of HCC, we further analyzed the association of host genetics with HCC by stratifying patients on these two factors. We observed that the IL-28B rs8099917, EGF rs4444903 and PNPLA3 rs738409 genetic variants did not correlate with HCC development in any subgroup of patients stratified by cirrhotic and SVR status (Supplementary Figs. 2–4). However, the MICA rs2596542 genotype was associated with HCC development in cirrhotic non-SVR patients, but not in the other three subgroups. Among the cirrhotic non-SVR patients, those who developed HCC had a significantly higher proportion of the MICA rs2596542 A allele (68·4% vs. 32·0%, P = 0·017), and patients carrying the risk A allele had a significantly increased incidence of HCC development compared with those without it (HR: 3·4, P = 0·01) (Fig. 2).

Impact of MICA SNP and sMICA on HCC Development in Non-SVR Patients
The basic characteristics, follow-up period and incidence of HCC development in cirrhotic and non-cirrhotic HCV patients who failed antiviral therapy are shown in Table 2.
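The APRI formula above is simple enough to compute directly. A minimal sketch in Python follows; the patient values are hypothetical and serve only to illustrate the formula:

def apri(ast_iu_l: float, ast_uln_iu_l: float, platelet_10e9_l: float) -> float:
    """APRI = (AST / upper limit of normal) / platelet count (10^9/L) x 100."""
    return (ast_iu_l / ast_uln_iu_l) / platelet_10e9_l * 100

# Hypothetical patient: AST 80 IU/L, ULN 40 IU/L, platelets 90 x 10^9/L
print(round(apri(80.0, 40.0, 90.0), 2))  # 2.22 -- higher APRI reflects more severe fibrosis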
We further analyzed the effect of the MICA SNP and sMICA on HCC development among non-SVR patients stratified by cirrhotic status. Among the non-cirrhotic, non-SVR patients, those who developed HCC were older and had lower platelet counts and higher levels of AST, AFP and APRI (Table 3). Cox regression analysis revealed that APRI was the single factor associated with HCC development (HR/CI: 2·67/1·42–5·01, P = 0·002) in non-cirrhotic patients without an SVR (Table 4). Among cirrhotic patients, those with HCC had lower platelet counts, higher ferritin, significantly higher AFP and sMICA levels, and a higher proportion of the MICA rs2596542 A allele (Table 3). Forty of the 44 cirrhotic patients without an SVR had sMICA measurements available. The best cut-off value of the sMICA level for predicting HCC was 175·4 ng/mL (AUROC 0·70, P = 0·002). Compared with patients with low sMICA, patients with high sMICA levels (>175 ng/mL) were more likely to develop HCC among cirrhotic patients without an SVR (HR 4·3, P = 0·001), but not among non-cirrhotic or SVR patients (Supplementary Fig. 5). Cox regression analysis revealed that the factors independently associated with HCC development among cirrhotic patients without an SVR were high sMICA levels (HR/CI: 5·93/1·86–26·38, P = 0·002) and the MICA rs2596542 A allele (HR/CI: 4·37/1·52–12·07, P = 0·002). Among the non-SVR cirrhotic patients with the GG genotype, the proportion with high sMICA levels was significantly greater in subjects who developed HCC than in those who did not (100% vs. 6·7%, P < 0·001). In contrast, there was no difference in the proportion of high sMICA levels between patients with and without HCC development if the patients were non-cirrhotic, exhibited an SVR, or harbored the risk A allele (Table 5).

Combined Effect of sMICA Levels and MICA rs2596542 Genetic Variants in Predicting HCC
sMICA levels and the MICA rs2596542 SNP were the two independent factors associated with HCC development in cirrhotic non-SVR patients. We evaluated the combined effect of the two factors in predicting HCC in this subpopulation. As shown in Table 6, the risk A allele, or the GG genotype with sMICA > 175 ng/mL, provided the best accuracy, at 79%, and a negative predictive value of 100% for predicting HCC. Nineteen of the 28 patients (67·9%) who carried either risk factor developed HCC, with an annual incidence of 23·5%. In contrast, none of the 14 GG genotype carriers with sMICA < 175 ng/mL developed HCC over a median follow-up period of 58·9 months (range: 6–107 months). The incidence of HCC did not differ between patients with or without the risk factors, in terms of the MICA SNP and sMICA, among the other three subpopulations (Fig. 5).

Discussion
Host genetic predispositions are associated with anti-HCV treatment efficacy (Huang et al., 2012, 2013a, 2013b), HCV-related liver fibrosis (Huang et al., 2015a; Urabe et al., 2013), clinical outcome (Noureddin et al., 2013) and HCC (Kumar et al., 2011; Abu Dayyeh et al., 2011; Tanabe et al., 2008; Guyot et al., 2013). However, whether host genetic variants play important roles in HCC development after antiviral therapy is unclear. By testing the candidate SNPs in a large treatment cohort, we demonstrated that MICA rs2596542 genetic variants predicted HCC occurrence, and that this influence was restricted to cirrhotic patients who failed antiviral therapy. Interestingly, we demonstrated that high sMICA was also predictive of HCC occurrence in this population.
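A cut-off such as the 175·4 ng/mL reported above is conventionally derived from the ROC curve, for example by maximizing Youden's J statistic. The following is a minimal Python sketch using scikit-learn; the patient-level sMICA values and outcomes are hypothetical, since the study's raw data are not reproduced here:

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical pretreatment sMICA levels (ng/mL) and outcomes (1 = developed HCC)
smica = np.array([60.0, 80.0, 90.0, 120.0, 150.0, 210.0, 180.0, 240.0, 300.0, 400.0])
hcc = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1])

fpr, tpr, thresholds = roc_curve(hcc, smica)
print(f"AUROC = {roc_auc_score(hcc, smica):.2f}")

# Youden's J = sensitivity + specificity - 1; take the threshold that maximizes it
j = tpr - fpr
print(f"optimal cut-off = {thresholds[np.argmax(j)]} ng/mL")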
Most importantly, cirrhotic non-responders were at the highest risk for HCC development, with an annual incidence of 23·5%, if they carried the MICA risk A allele and had high pretreatment sMICA levels. Preexisting liver cirrhosis is the most critical factor associated with HCC in CHC patients (Lee et al., 2014; Goto & Kato, 2015). Once cirrhosis has evolved, 1 to 4% of patients develop HCC per year (Goto & Kato, 2015). Although successful HCV eradication could reduce the risk of HCC occurrence by 75%, SVR patients remain at risk of HCC development, with an average risk of 1·05% per person-year if they have advanced liver disease (Yu et al., 2006a; Morgan et al., 2013). As shown in the current study and others, cirrhosis carries a higher hazard ratio than failed viral eradication for HCC development in treatment cohorts (Lee et al., 2014; Goto & Kato, 2015). It is therefore imperative to identify the risk of HCC in patients with a cirrhotic background, with and without an SVR. MICA, a ligand for NKG2D, exerts its anti-tumor effect by activating natural killer cells and CD8+ T cells. A GWAS demonstrated that patients with HCV-related HCC had a higher rate of the MICA rs2596542 A allele (Kumar et al., 2011). That evidence was based on cross-sectional observations; whether the genetic predisposition increases the long-term risk of HCC development was unclear. In the current study, we observed that the MICA SNP does not increase the risk of HCC development in patients who had successful viral eradication or mild liver disease. In contrast, among the cirrhotic non-responders, who were at the extreme end of liver disease, patients who developed HCC had a significantly higher proportion of the MICA rs2596542 A allele. Carriers of the risk allele had a four-fold risk of HCC development after anti-HCV therapy. Similar to a previous study, we concordantly observed significantly reduced sMICA production in patients with the risk A allele compared with those with the GG genotype (Kumar et al., 2011). The pathophysiological mechanism may be the potentially low production of membrane-bound MICA in risk A allele carriers responding to HCV infection, leading to poor or absent activation of immune cells, including NK cells (Kumar et al., 2011; Goto & Kato, 2015). On the other hand, high expression of MICA is associated with a variety of malignancies, including melanoma and breast, colon and hepatocellular cancers (Goto & Kato, 2015; Kumar et al., 2012; Groh et al., 1999, 2002). This is likely because high levels of soluble MICA in the circulation down-regulate NKG2D expression in immune cells and disrupt NKG2D-mediated antitumor immunity. Recently, we also demonstrated that high sMICA levels were associated with HCV-related HCC recurrence after curative treatment for HCC and antiviral therapy for HCV, with and without an SVR (Huang et al., 2015b). We observed that cirrhotic non-responders with high sMICA levels (>175 ng/mL) had a five-fold risk of HCC development compared with those with low sMICA levels. Notably, in the current study, among cirrhotic non-responders who harbored the low-risk MICA GG genotype, six of seven (85·7%) patients with high sMICA levels developed HCC during a mean follow-up period of approximately 5 years. In contrast, none of the 14 patients with low sMICA levels developed HCC.
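The combined-marker performance discussed here (79% accuracy, 100% negative predictive value) can be reproduced from the counts given in the Results: 19 of 28 risk-factor carriers developed HCC, and 0 of 14 non-carriers did. A minimal sketch:

# 2x2 table for the combined predictor in cirrhotic non-SVR patients:
# "positive" = carries the risk A allele, or the GG genotype with high sMICA
tp, fp = 19, 28 - 19   # predictor positive: with / without HCC
fn, tn = 0, 14         # predictor negative: with / without HCC

sensitivity = tp / (tp + fn)                 # 1.00
specificity = tn / (tn + fp)                 # 14/23, about 0.61
npv = tn / (tn + fn)                         # 1.00 -- the reported 100% NPV
accuracy = (tp + tn) / (tp + fp + fn + tn)   # 33/42, about 0.79 -- the reported 79%
print(sensitivity, specificity, npv, accuracy)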
Subsequently, we identified two risk factors associated with HCC development in cirrhotic non-responders: carriage of the A allele, or the GG genotype with high sMICA levels. Two-thirds of the cirrhotic non-responders who carried either factor developed HCC, with an annual incidence of 23·5%. In contrast, the NPV for HCC development was 100% in this clinical setting after approximately 5 years of follow-up. Patients with unfavorable EGF genetic polymorphisms have an increased risk of HCC in a Western cohort with advanced liver fibrosis (Abu Dayyeh et al., 2011). PNPLA3 genetic variants are associated with HCV-related liver fibrosis (Guyot et al., 2013; Huang et al., 2015a); however, no association between this SNP and HCV-related HCC has been found (Singal et al., 2014). In the current study, we demonstrated that neither the EGF rs4444903 nor the PNPLA3 rs738409 genotype determined HCC development in this post-treatment Asian population, regardless of treatment response or liver fibrosis. IL-28B genetic variants are by far the most powerful host genetic factors in predicting the efficacy of IFN-based therapy for HCV genotype 1 (Huang et al., 2012, 2013a, 2013b) and spontaneous clearance (Yu et al., 2013). However, their influence on liver fibrosis and hepatocarcinogenesis remains unclear (Noureddin et al., 2013; Kumar et al., 2012; Fabris et al., 2011). Patients with an unfavorable IL-28B genotype had a higher likelihood of HCC development in univariate analysis, but the association became insignificant after weighing treatment responses. This finding may be attributed to the confounder of viral clearance: patients with a favorable IL-28B genotype were prone to achieve an SVR, which subsequently reduced the risk of HCC. Once patients were stratified by SVR status, the IL-28B genotype no longer influenced HCC development in the cohort (data not shown). Several potential confounders that may influence HCC occurrence were taken into account in this study; a limitation, however, is that associations between HCC development and host genetics may have remained masked during the follow-up period. In the era of direct-acting antiviral agents (DAAs), SVR rates above 95% can be achieved. However, some concerns remain. DAAs lack the anti-neoplastic and immunomodulatory effects of interferon, and their impact on HCC occurrence or recurrence remains controversial (Nault & Colombo, 2016). Secondly, a critical issue is that only a small proportion of patients have access to DAAs because of their cost; a huge gap remains between clinical efficacy and community effectiveness in the management of HCV infection. The current study demonstrated the important role of the MICA SNP and sMICA in non-SVR patients, who represent the majority of the untreated and persistently viremic population in the real world. We believe the current study provides important information regarding risk prediction and surveillance of HCC. In conclusion, non-responders who carry the MICA risk A allele or have high pretreatment sMICA levels are at high risk of HCC development after antiviral therapy. Combining the two surrogate markers greatly enhanced the predictive power in this high-risk population, which provides insight for closer follow-up strategies and re-treatment priority in the era of direct-acting antiviral agents.

Author Contributions
Conception and design: Ming-Lung Yu and Wan-Long Chuang.
Compound osteoderms preserved in amber reveal the oldest known skink

Scincidae is one of the most species-rich and cosmopolitan clades of squamate reptiles. Abundant disarticulated fossil material has also been attributed to this group; however, no complete pre-Cenozoic crown-scincid specimens have been found. A specimen in Burmite (99 MYA) is the first fossil that can be unambiguously referred to this clade. Our analyses place it as nested within extant skinks, supported by the presence of compound osteoderms formed by articulated small osteodermites. The specimen has a combination of dorsal and ventral compound osteoderms and overlapping cycloid scales that is limited to skinks. We propose that this type of osteoderm evolved as a response to an increased overlap of scales, and to reduced stiffness of the dermal armor. Compound osteoderms could be a key innovation that facilitated diversification in this megadiverse family.

Within Scincoidea, there is a remarkable pattern of distribution of body armor; while osteoderms are common in Scincidae, Cordylidae and Gerrhosauridae, they are absent in Xantusiidae, including some amber-embedded lizards with excellent integumentary preservation 22. The distribution and structure of ossified dermal armor among Scincoidea is highly variable 47, but cycloid, compound osteoderms encircling the body occur only in Scincidae 14,27,48; ventral compound osteoderms are also found in gerrhosaurs (two gerrhosaurid species, Gerrhosaurus skoogi and Cordylosaurus subtessellatus, also possess dorsal compound osteoderms). Among fossil taxa, only Parmeosaurus scutatus shows some ventral compound or duplex osteoderms, formed by two units 41, but these are more rectangular and arranged in a grid-like pattern, similar to the dorsal osteoderms of gerrhosaurs and quite unlike the cycloid, imbricate, staggered arrangement of osteoderms in extant skinks. In this paper we describe a fossil in Burmite that preserves imbricating cycloid scales with compound osteoderms formed by small articulated osteodermites, structurally identical to those of modern skinks. Although this fossil is missing most of the vertebral column and skull, making it difficult to compare with other fossil taxa available from the Mesozoic, integumentary similarities with extant members of Scincidae allow us to place this fossil as the oldest known representative of Pan-Scincidae. Currently, over 100 specimens of squamates are known from Burmite. In this large sample, this new fossil is the only one that preserves this osteodermal morphology, which makes it diagnosable to Scincidae and differentiates it from all known fossil squamates from the Cretaceous. The specimen is incomplete, but it does retain both postcranial skeletal elements and integumentary structures and, though less than ideal, it provides a basis for comparison with any putative scincid material that might be found in the future. Given both the significance of its osteodermal condition and the scarcity of substantially complete Burmite fossil lizards (and therefore the low likelihood of finding a more osteologically complete fossil that also retains osteoderms), we here opt to describe this taxon despite its incompleteness.

Diagnosis
A small lizard with an estimated snout–vent length (SVL) of 30 mm. Electroscincus zedi differs from all other known squamates from the Mesozoic by the presence of imbricate, compound osteoderms arranged in a staggered pattern around the body, supporting its placement in Scincidae (Figs.
2, 3). Its inclusion within Scincidae is also supported by its possession of cycloid scales around the body 28 overlying compound osteoderms (in some Cordyliformes, compound osteoderms are present only in scales of the ventral surface 49,50). The osteoderms are very different from the rectangular, imbricated osteoderms of paramacellodids.

When compared with temporally relevant fossil taxa from the Cretaceous that have been associated with Scincoidea, SVL estimates allow us to distinguish Electroscincus from many taxa. We estimated the SVL of Electroscincus zedi to be no longer than 30 mm, based on the distance between the pectoral girdle and the cloaca, and assuming the specimen is an adult or a subadult (see the description of the skeleton for comments on ossification). Estimating SVL in fossil lizards represented by isolated material is difficult, but using the lengths of bones of a typical living, fully-limbed, non-attenuate skink species (Emoia pallidiceps, Smithsonian National Museum of Natural History USNM 166276), we can calculate a rough estimate of their size. Tepexisaurus, Myrmecodaptria, Carusia, Globaura, and Eoxanta are twice or more the SVL of E. zedi; Eoscincus (North America), Microteras (North America) and Paramacellodus (England and France) are approximately 17 mm larger than the new species. Retinosaurus and E. zedi have a similar estimated SVL, and both have a cruciform interclavicle (clavicles are not preserved in the other taxa, except in Tepexisaurus, in which they are not well defined). Electroscincus may be distinguished from Retinosaurus and Tepexisaurus by the presence of imbricate, cycloid scales on the dorsum, rather than tuberculate, juxtaposed scales.

Electroscincus, Tepexisaurus, and Retinosaurus have the same primitive lepidosaurian manual phalangeal formula, and Electroscincus and Tepexisaurus share the same primitive lepidosaurian pedal phalangeal formula (posterior limbs are not preserved in Retinosaurus), but Electroscincus and Retinosaurus may be distinguished from Tepexisaurus by having narrower ungual phalanges (the unguals of Tepexisaurus are about three times the width of the penultimate phalanges, while in Electroscincus these phalanges are subequal in diameter).

Recent time-calibrated phylogenies report that crown scincids began diversifying in the mid-Cretaceous, so chronologically Electroscincus may belong within crown-group Scincidae, though the huge diversity of living skinks, the current lack of consistent diagnostic features for its major groups, and the limited material present in this fossil currently preclude any further diagnosis of Electroscincus within Pan-Scincidae. Nonetheless, Electroscincus can be differentiated from many extant groups that are limbless or limb-attenuated, including all Acontinae and many genera of the other recognized subfamilies, particularly Scincinae and Sphenomorphinae, which have a high proportion of limb-reduced or attenuate taxa.

Phylogenetic placement
Our phylogenetic analyses (Fig. 5) recover Electroscincus zedi in different positions depending on the optimality criterion used, but in all analyses it is recovered as a member of Pan-Scincoidea. While many fossil pan-scincoid taxa are deep stem lineages, Electroscincus zedi is instead consistently recovered grouped within scincids: as sister to acontine skinks in the Bayesian timetree analysis (Fig. 5a) with moderate nodal support, as sister to Brachymeles in the Maximum Likelihood analysis with high support (Fig.
5b), or nested among non-acontine skinks in the parsimony analysis, although with a Bremer support of 1 (Fig. 5c). The parsimony results are less well resolved, but include five synapomorphies shared with other members of Scincidae: (1) compound osteoderms in the dorsal scales (known also in two gerrhosaurids), (2) compound osteoderms in the ventral scales (shared with gerrhosaurids and the fossil Parmeosaurus scutatus), (3) dorsal body scales cycloid (rounded and overlapping), (4) lateral body scales imbricate and overlapping, and (5) ventral scales the same size as adjacent laterals. In the Bayesian results, there are nine morphological synapomorphies for crown-group Scincidae; unfortunately, none of them are observable in Electroscincus. In the Maximum Likelihood analyses, there are also no morphological synapomorphies uniting Electroscincus and the non-acontine skinks. All the synapomorphies Electroscincus shares with crown-group Scincidae are from the integument and cannot, therefore, be evaluated in the many fossil taxa in which skeletal features, but not osteoderms, are preserved.

The timetree analysis estimates the divergence between Electroscincus and extant scincids to have occurred in the Cretaceous, approximately 125 Ma. Scincidae (including Electroscincus) is recovered as most closely related to a clade formed by the extinct taxa Hymenosaurus, Globaura, and Eoxanta, but with only moderate nodal support.

In addition to Pan-Scincoidea, other major squamate clades recovered as monophyletic include Anguimorpha, Gekkota, Iguania, Lacertoidea, and Serpentes. Scinciformata sensu Brownstein et al. 21 is not monophyletic, as we recover Lacertoidea to be more closely related to Toxicofera (Anguimorpha + Iguania + Serpentes) than to Pan-Scincoidea. Our combined analysis of molecules and morphology recovers a pattern of relationships that matches previous studies based on molecules 15,18,19.

Holotype. Peretti Museum Foundation/GRS GemResearch Swisslab AG (GRS-Ref-51036).

Type locality. The specimen comes from mid-Cretaceous (Late Albian/early Cenomanian) outcrops in the Myitkyina District, Hukawng Valley, Kachin Province, northern Myanmar, approximately 100 km west of the town of Myitkyina. The precise location of these mines, the history of excavations, and the stratigraphy of the Burmese amber deposits are summarized elsewhere 51.

Etymology. The generic name is a combination of the Latin word for amber (electrum) and skink (scincus). The species epithet zedi refers to the bell-shaped stupas that house relics at Burmese Buddhist temples.

Description of the skeleton
The piece of amber includes two disconnected parts of the lizard, containing the scales and mostly appendicular bones, but these are clearly parts of a single individual (Figs. 2, 3). The degree of ossification of the limb bones indicates that it is an adult or subadult; the leg is more completely ossified than the arm, as evidenced by the morphology of the phalanges, potentially indicating differential timing of limb formation (Fig. 3), as occurs in subadults of living species of lizards (Fig.
6). The right scapulocoracoid is large, well preserved, has a well-defined glenoid and a short, wide scapula, with a dorsal expansion nearly identical to its width at the level of the glenoid fossa. The scapulocoracoid is surmounted by a greatly expanded cartilaginous suprascapula (as indicated by its reduced radio-opacity), with a maximum width approximately twice that of its junction with the scapula. The coracoid has a well-defined posterior coracoid emargination. The interclavicle is cruciform, having a short anterior process extending cranial to the lateral processes. Each lateral process is as long as the posterior process.

The autopod-zeugopod-stylopod ratios are 6:5:6 (anterior limb) and 10:5:7 (posterior limb). The right humerus has a well-defined head and is expanded distally. The radius and ulna are preserved, but hard to distinguish from one another. The ulna possesses a distinct olecranon process. The manual metacarpals and phalanges exhibit incomplete ossification at the metacarpal–phalangeal and inter-phalangeal joints. The metacarpals are longer than the proximal phalanges. The phalangeal formula of the manus is difficult to determine, as there is not a complete set of phalanges in either hand, but the phalangeal complement seems to be 2:3:4:5:?. Given that the primitive pedal formula is conserved, we expect the manual formula to be the primitive squamate one, 2:3:4:5:3. The penultimate phalanges of the hand are longer than the antepenultimate. The right leg is better preserved than the left, including the femur, tibia and fibula, and all the tarsals and phalanges. The femur has a visible internal trochanter. Many skinks, for example mabuyine lineages, exhibit short zeugopodal segments compared with other parts of the limb; this is also the case in E. zedi. The metatarsals are longer than the proximal phalanges. The phalangeal formula of the pes is 2:3:4:5:4, with the pedal digits straight, the ungual phalanges narrow, and the claws not strongly recurved.

The tail segment includes four vertebrae; the first three have long transverse processes (about three times longer than the centrum width) and are autotomic. The first vertebra preserves only the posterior segment, the second and third are complete and show some faint indication of the fracture plane, and the fourth has a well-defined fracture plane. The transverse processes are formed by two lateral laminae that join distally, leaving a gap near the centrum; the fracture planes pass through the transverse process, as has been reported for the skink Plestiodon fasciatus 52. The size of the transverse process is reduced abruptly between the third (three times the centrum width) and fourth (same length as the centrum width) vertebrae. The centrum is notochordal, and there is one complete "V"-shaped chevron articulated between the third and fourth preserved vertebrae.
Description of osteoderms
In Electroscincus, the osteoderms take the form of the strongly imbricating cycloid scales of the body in which they are embedded, and the overlap of adjacent contacting scales results in a corresponding overlap of the osteoderms that is clearly revealed by the CT scans. The osteoderms covering the body are ovoid (wider than long) and patterned in a staggered arrangement, so that the longest part of each osteoderm overlies the two osteoderms posterior to it. The osteoderms are compound, being formed by several smaller bony plates or osteodermites. The arrangement of osteodermites varies across the body of the fossil: the trunk osteoderms have five anterior and three to five posterior osteodermites, the caudal osteoderms have six to eight anterior and six posterior osteodermites, and the precloacal osteoderms have nine anterior, 11 posterior and 13 interior osteodermites. Since each osteoderm has multiple rows of osteodermites, the posterior osteodermites of the anterior osteoderm overlap the anterior osteodermites of the posterior osteoderm. The anterior osteodermites are slightly thickened anteriorly, forming a flattened ridge that articulates with the posterior osteodermites of the preceding osteoderm. The osteoderms on the limbs (Fig. 3) are smaller (mean length 0.253 mm, n = 5) than the body osteoderms (mean length 1.028 mm, n = 8), averaging 24.6% of the length of the body osteoderms.

Scincid compound osteoderms vary significantly in both the number of osteodermite rows and the number of osteodermites per row (Fig. 4). Osteoderms from the nuchal region usually comprise an anterior and a posterior row of osteodermites. Of all the species observed in this study, only egernine skinks such as Tribolonotus novaeguineae, Corucia zebrata and Cyclodomorphus celatus deviate from this pattern. Caudal osteoderms often comprise three rows of osteodermites, with the anterior row having fewer osteodermites than the posterior rows 53.

Regarding flexibility, it has been demonstrated that squamates with fused osteoderms have stiffer skin than squamates with compound osteoderms 54. In Electroscincus, the area anterior to the cloaca is covered by osteoderms formed by more numerous and smaller osteodermites than in the rest of the body; it is therefore very likely that in life the skin covering this area was more pliable than that of other regions.
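The size comparison above follows directly from the stated means; a trivial check in Python (the text gives only the means, not the underlying individual measurements):

limb_mean_mm = 0.253  # mean limb osteoderm length (n = 5)
body_mean_mm = 1.028  # mean body osteoderm length (n = 8)
print(f"{limb_mean_mm / body_mean_mm:.1%}")  # 24.6% -- the figure quoted above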
Discussion
The Cretaceous period is an important time for the diversification of squamates 20,55,56; squamates, other tetrapods, arthropods, and angiosperms may all have been affected by a large-scale macroevolutionary process known as the Cretaceous Terrestrial Revolution, although it has been determined that, at least for squamates, phylogenetic diversification and the ecological roles of the main constituent clades were established in the Jurassic 57. During the mid-Cretaceous, there was probably an expansion of ecological guilds among squamates, as indicated by an increase in the diversity of fossil groups 58, dentitional disparity 59, and cranial diversity 60. The mines of northern Myanmar, the world's largest amber deposit of squamates, preserve this diversity 61, documenting a critical period of diversification of many of the major extant squamate clades 58, consistent with molecular clock estimates 18,62. Alternatively, the increase in the diversity of fossil groups during this period could be due to a sampling bias, especially in localities that facilitate the preservation of delicate structures, such as the amber mines of Myanmar or the Yixian Formation of Liaoning, China 68.

To be useful, the phylogenetic position of fossils used to calibrate modern time trees must be linked to modern groups, either directly by phylogenetic analyses or by apomorphy-based diagnoses 63. With adequate calibration, time trees provide estimates of historical diversification processes in groups that may fossilize poorly. In the case of skinks, previous crown-age estimates for the family range from 75 to 118 MYA, averaging 94 MYA 64. Although our phylogenetic analyses were inconsistent with respect to the position of E. zedi within crown-group Scincidae, its Cenomanian age is squarely at the center of the estimated range for the diversification of crown Scincidae 2,65.

Electroscincus is the only known squamate fossil that possesses unambiguously cycloid osteoderms, and its external appearance, especially the feet and body, resembles modern skinks (Fig. 7). Cycloid osteoderms were reported in the Early Cretaceous Scandensia cervensis 66, in which they vary from large and somewhat angular dorsally to smaller and more ovoid ventrally. Although similar in shape to the cycloid osteoderms of skinks, including E.
zedi, they are not compound. Some other fossils in amber from Myanmar have been considered potential scincomorphs 58, including one specimen (JZC Bu269) having cycloid scales in the postocular region. However, the scales of this specimen are multicarinate, juxtaposed and surrounded by tiny granular scales rather than imbricate ones, and CT scans of this specimen did not reveal any mineralized material associated with, or underlying, these scales. Among extant squamates, the presence of compound osteoderms on the dorsum is diagnostic for the family Scincidae and two species of gerrhosaurids. Compound osteoderms are exceedingly rare in the fossil record, since they comprise small, delicate osteodermites, which disarticulate easily and may become lost during fossilization, or may be inadvertently removed in specimen preparation 66. Compound osteoderms require special taphonomic conditions to be preserved, in a manner analogous to other equally diagnostic integumentary structures of archosaurs, such as feathers 67. Amber is one of the best natural preservation media for integument, preserving even fine structures such as compound osteoderms or the tiny scales of miniaturized geckos 58,68. Amber offers conditions (quick burial and anoxia/hypoxia) similar to those of the lagerstätten and aeolian deposits of the Campanian Gobi Desert that have preserved remarkably complete skeletons with associated fine integumentary structures 69–71.

Osteoderms have evolved independently at least eight times in squamates 72, and some of the oldest squamates with osteoderms covering most of their bodies are the paramacellodids, known from the Middle Jurassic. Although their phylogenetic placement is uncertain, it has been suggested that they represent stem Scincoidea 20,50. It is likely that osteoderms evolved once in basal scincoideans, and that their absence in Xantusiidae 22 represents a rare instance of secondary loss of osteoderms in the squamate tree of life (complete loss of osteoderms has also been reported in some species of varanids 73). The order and timing of the evolution of compound and simple osteoderms in this group is less clear; simple osteoderms are reported from all major modern and fossil scincoid lineages except Xantusiidae, and are likely plesiomorphic for Pan-Scincoidea. Ventral compound osteoderms either evolved twice, in Scincidae and Gerrhosauridae, or, equally parsimoniously, compound osteoderms may be plesiomorphic for Scincoidea, completely lost in Xantusiidae and retained only in the ventral regions of gerrhosaurids. Dorsal compound osteoderms are widely present in Scincidae, but known from only two species of gerrhosaurids (and the arrangement of osteodermites in these taxa is quite different from that seen in the compound osteoderms of other modern scincoids; Fig. 2) 74,75.
Molecular clock estimates predict the presence of Scincidae during the Cretaceous 18,50, and the presence of a lizard with compound osteoderms confirms this estimate and reinforces the idea of an early diversification of armor among squamates. Osteoderms are generally assumed to have evolved principally for protection, but other functions, such as thermoregulation, lactate sequestration, and calcium storage, have been suggested (ref. 50 and references therein). Biomechanical experiments also indicate that different configurations of osteoderms yield different loading properties, and the presence of compound osteoderms likely reduces overall stiffness 54. The reduction in the size and number of simple osteoderms in the inguinal, axillary and cloacal regions of heavily armored cordylids (e.g. Cordylus, Namazonurus, and Smaug) 49 and anguimorphs (Celestus, Elgaria, Pseudopus, and the fossil Ophisauriscus) 76,77 suggests that these structures may impede the free movement of the limbs and the cloacal region. The same regions in skinks retain their imbricate compound osteoderms, without any apparent reduction in flexibility. The evolution of extensively overlapping compound osteoderms in early skinks like Electroscincus produced a flexible dermal armor and may have been a key innovation that facilitated the remarkable diversification of the family.

Although diagnoses should ideally employ autapomorphic characters, the material preserved imposes a limitation in this regard. However, Electroscincus is allocated to Scincidae on the basis of a combination of integumentary characters and is easily differentiated from other Cretaceous taxa previously referred to Scincoidea by its morphology and geographical distribution. It is certainly more complete than many fossil taxa described on the basis of fewer bones or footprint impressions. Given that it retains at least some of the postcranial skeleton and skin impressions, in addition to the osteoderms, it is also more complete than many amber inclusions, which are often represented by skin impressions only 58. It is uncertain whether more specimens of skinks in amber will become available in the near future (see the Ethics statement), but if so, Electroscincus possesses a number of features that could be meaningfully compared with them. It also serves as a critical fossil documenting the early origin of dermal armor in this megadiverse group of squamates.

Ethics statement and specimen chain of custody
The specimen described in this study was purchased by Dr. Ru Smith in March 2016, more than one year prior to the June 2017 military takeover of the Myanmar mining regions, which has been established as the cutoff date for conflict amber in the guidelines of the Society of Vertebrate Paleontology (SVP) Myanmar Working Group. Dr. Smith gifted this fossil to Juan D. Daza in 2017, and the specimen was donated to the Peretti Museum Foundation, where it is now cataloged and available for examination.
We followed the SVP recommendations for researchers, research institutions and publishers, available at https://vertpaleo.org, particularly the recommendations for material acquired before 2017. Detailed confidential information about the history of the specimen is available upon request; given the situation in Myanmar, the information about the vendor and previous owners is confidential but is associated with the specimen in the catalogue of the Peretti Museum Foundation (PMF.org). We fulfill four of the recommendations of the SVP working group: (1) the material was acquired from a seller that is not on the list of persons banned by the United Nations Human Rights Council (42nd session, 2019, A/HRC/42/L.21/Rev.1); (2) the material is housed at the Peretti Museum Foundation, which is a repository registered under the Swiss Government; (3) the material is available to researchers at the PMF and freely accessible as a digital copy at www.morphosource.org; (4) the PMF catalogue includes proof of current and previous ownership with specific dates and is supported by photographic metadata.

Figure 1. Comparison among higher-level classifications of skinks and relatives as used in recent studies.
Figure 2. Electroscincus zedi. Fossil in ventral (a) and dorsal (b) views. Detail of the right foot (c, e) and osteoderms (d). X-ray of the whole specimen showing the skeletal remains and several articulated and scattered osteoderms (f). Scale bar applies to the entire amber piece.
Figure 4. Schematic representations of scincoid osteoderms, depicting the osteodermite arrangement in Electroscincus zedi, the gular osteoderms of gerrhosaurs and the nuchal osteoderms of representative genera from all scincid subfamilies. Scale bar equals 0.5 mm.
Figure 5. (a) Bayesian timetree of Squamata based on a combined molecular/morphological data set. Posterior probability and bootstrap support values are given at key nodes. Phylogenetic position of Electroscincus zedi in the Maximum Likelihood (b) and parsimony (c) analyses. Bremer support values are given on the nodes of the parsimony analysis.
Figure 6. Cleared and stained subadult specimen of the sphaerodactylid gecko Sphaerodactylus townsendi from the University of Puerto Rico, Río Piedras Collection (UPRRP 006400). Note that the pedes exhibit proportionally less cartilage (blue) than the manūs. Photograph courtesy of Elyse Howerton.
Figure 7. Life reconstruction of Electroscincus zedi. Areas of the lizard not represented in the available material are depicted as blurred. Illustration by Stephanie Abramowicz.
The impact of diabetes mellitus on health-related quality of life in Saudi Arabia

Objective: To evaluate the effect of different demographic, clinical and social factors on diabetic patients' quality of life (QOL). Research design and methods: A cross-sectional study conducted on patients with type 2 diabetes who attended King Abdulaziz University Hospital outpatient clinics between February and March 2017. The patients were asked about sociodemographic data, including age, sex, educational level, exercise history and marital status, in addition to clinical data such as duration of diabetes, presence of comorbidities and medication history. The patients' QOL was assessed using the Arabic version of the EQ-5D-5L. Results: 131 participants were included in the study, with a median age of 55 years. Forty-five percent of participants were male. Regarding the EQ-5D scores, there were significant associations with gender, exercise, hypertension, heart disease, marital status, educational level and duration of diabetes, while there were significant differences in EQ-VAS scores with respect to heart disease, level of education and duration of diabetes. Conclusion: More attention needs to be given to assessing the QOL of diabetic patients and to evaluating the effect of different treatment modalities on improving patients' QOL.

Introduction
Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia due to impaired insulin secretion, impaired insulin action, or both (Kerner and Brückel, 2014). The number of diabetic patients is increasing rapidly; it is expected that by 2030 there will be a 69% increase in the number of adults with diabetes in developing countries and a 20% increase in developed countries (Shaw et al., 2010). Diabetes mellitus is associated with microvascular and macrovascular complications, including retinopathy, nephropathy and cardiovascular and cerebrovascular events (Chawla et al., 2016). The increased prevalence within a progressively aging population and the presence of chronic complications will significantly increase the use of healthcare products and services and will have a negative impact on healthcare costs and patients' quality of life (QOL) (Mata-Cases et al., 2016). Patients who perceived higher levels of QOL have been shown to have better social support, greater acceptance of the seriousness and consequences of the disease, and less difficulty in managing their diabetes (Chew et al., 2015). There is rising attention toward improving patients' QOL rather than merely prolonging life (Amelia et al., 2018; Brown, 2015). For diabetic patients, the development of special psychometric tools to measure QOL has received particular attention, and several tools have been developed to evaluate the effect of diabetes, along with its complications, on patients' lives (Bradley and Speight, 2002; Trikkalinou et al., 2017). This study aimed to evaluate the QOL of Saudi patients living with diabetes using the EuroQol-5 Dimensions 5 Levels (EQ-5D-5L) instrument, and to assess the association of several social and clinical factors with their QOL scores.

Assessment of health-related QOL
Data collection was carried out through face-to-face interviews with the patients. Patients were asked for permission before they were interviewed.
During the interview, patients were asked about sociodemographic data such as age, sex, educational level, exercise history and marital status, in addition to clinical data such as duration of diabetes, presence of comorbidities and medication history. The patients' QOL was assessed using the Arabic version of the EuroQol-5 Dimensions 5 Levels (EQ-5D-5L), a standardized instrument for measuring generic health (Herdman et al., 2011). The instrument captures patients' perception of their health status in terms of five dimensions: mobility (MO), self-care (SC), usual activities (UA), pain/discomfort (P/D) and anxiety/depression (A/D). Participants rate their level of severity for each dimension using 5 levels that range from not having a problem to having an extreme problem (Sakthong et al., 2015). A patient's responses are combined to form a five-digit health state, which corresponds to an index value in an EQ-5D-5L value set. As there is no population reference value set for Saudi Arabia, the value set of Thailand was used, based on geographic proximity (van Reenen and Janssen, 2015). The instrument also includes a visual analogue scale (VAS), on which participants rate their general health status on the day of the interview on a scale from 0 to 100, with zero being the worst health status and 100 the best.

Statistical analysis
Statistical analysis was done using SPSS version 21. Categorical data were summarized as percentages, while numerical data were summarized as medians and ranges. Normality testing using the Kolmogorov–Smirnov (K-S) test and the Shapiro–Wilk test showed that the EQ-5D and EQ-VAS scores were not normally distributed. Associations between QOL scores and gender, smoking, exercise and comorbidities were tested using the Mann–Whitney test, while associations with marital status, duration of diabetes and educational level were tested using the Kruskal–Wallis test. P-values less than 0.05 were considered significant.

Results
The baseline characteristics of the participants are presented in Table 1. Overall, 131 diabetic patients participated in the questionnaire. The median age was 55 years. Forty-five percent of participants were male and only 15.3% were smokers. Most of the patients (76.3%) were married. Eighty-four patients reported having hypertension, while 26 reported having heart disease. Others reported having asthma, osteoarthritis and kidney impairment. The percentage of responses for each EQ-5D-5L dimension by age group is summarized in Table 2. The associations of EQ-VAS and EQ-5D scores with baseline characteristics and disease-specific questionnaires are presented in Tables 3 and 4. With respect to the EQ-VAS, there was no significant difference in responses with respect to gender, smoking, exercise or hypertension, while there was a significant difference in EQ-VAS scores between participants who had heart disease and those who did not. EQ-VAS scores of illiterate participants were significantly different from those of participants with general and high education. Also, EQ-VAS scores of patients who had had diabetes for more than 5 years were significantly different from those more recently diagnosed with the disease (1–5 years). Regarding the EQ-5D scores, there were significant associations with gender, exercise, hypertension, heart disease, marital status, educational level and duration of diabetes. The relationships of EQ-VAS and EQ-5D scores with educational level and duration of diabetes are represented in Figs. 1 and 2, respectively.
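A sketch in Python of how a five-digit EQ-5D-5L health state is converted to an index value. The decrement table below is a placeholder that illustrates the general structure of a value set; it is not the actual Thai value set used in the study:

# Placeholder decrements per dimension and level (level 1 = no problems).
# A real value set publishes one decrement per dimension/level; these
# numbers are illustrative only.
DECREMENTS = {
    "MO": [0.0, 0.05, 0.10, 0.20, 0.30],  # mobility, levels 1-5
    "SC": [0.0, 0.04, 0.09, 0.18, 0.27],  # self-care
    "UA": [0.0, 0.04, 0.08, 0.16, 0.25],  # usual activities
    "PD": [0.0, 0.06, 0.12, 0.24, 0.35],  # pain/discomfort
    "AD": [0.0, 0.05, 0.11, 0.22, 0.33],  # anxiety/depression
}

def eq5d_index(state: str) -> float:
    """Convert a five-digit state such as '21132' to an index value."""
    assert len(state) == 5 and all(c in "12345" for c in state)
    index = 1.0  # start from full health and subtract per-dimension decrements
    for dim, level in zip(("MO", "SC", "UA", "PD", "AD"), state):
        index -= DECREMENTS[dim][int(level) - 1]
    return round(index, 3)

print(eq5d_index("11111"))  # 1.0 -- no problems in any dimension
print(eq5d_index("21132"))  # some problems in mobility, pain and anxiety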
Discussion
Increasing attention is being given by healthcare professionals to evaluating the QOL of diabetic patients (Bradley and Speight, 2002). Diabetes is a chronic disease that is known to cause considerable morbidity and mortality and has been reported to result in a lower QOL compared with non-diabetic individuals (Golicki et al., 2015). In fact, the decline in the QOL of diabetic patients over a 5-year period was found to be twice the decline among those without diabetes (Grandy and Fox, 2012). In our study, diabetic females had lower QOL than males. This is in agreement with studies in other populations (Corrêa et al., 2017; Rodríguez-Almagro et al., 2018) and with studies using the EQ-5D-3L (Cardoso et al., 2016; Hassali et al., 2016; Jin et al., 2018; Mata-Cases et al., 2016; Sakamaki et al., 2006). In addition, in Saudi Arabia, similar findings were obtained in Riyadh, Alkhobar and Alqassim (Abdel-Gawad et al., 2002; Al-Shehri et al., 2008; Al Hayek et al., 2014; Alshayban and Joseph, 2020). A review of gender differences in diabetic patients found that male patients were less depressed and anxious and, overall, lived more effectively with the disease than females (Siddiqui et al., 2013). This is likely true for our population as well. The current study demonstrated a significant association between marital status and the QOL of diabetic patients, where divorced, single and widowed patients had lower QOL scores than married patients. A systematic review of the effect of marital life on the QOL of diabetic patients reported similar findings, with a better QOL reported for married compared with non-married (single, widowed) diabetics (Kiadaliri et al., 2013). This is in agreement with a study in Riyadh (Abdel-Gawad et al., 2002). However, another study in Riyadh reported that married diabetic patients had a lower QOL than unmarried ones (Al-Shehri, 2014). This was explained by the greater responsibilities associated with marriage, which could add a burden to managing the disease itself. While this has not been examined in our study, the quality of marriage has been shown to play a role as well. Indeed, a study published by the American Diabetes Association examined the effect of marital status and marital quality measures on glycemic control in insulin-treated diabetics and found that the quality of the marriage affected health-related quality of life and adaptation to the disease itself (Trief et al., 2001). In addition, other studies reported that uncontrolled diabetes affected patients' everyday relationships and social experiences, with many patients expressing negative impacts on their social well-being (Vanstone et al., 2015).

Table notes: *p < 0.05, significant; pairwise comparisons revealed significant differences between illiterate and general-education participants and between illiterate and high-education participants. **p < 0.05, significant; pairwise comparisons revealed significant differences between participants with a more than 5-year history of diabetes and those with a 1- to 5-year history. ***p < 0.05, significant; pairwise comparisons revealed significant differences between widowed–married, widowed–single, divorced–single and married–single participants. ****p < 0.05, significant; pairwise comparisons revealed significant differences between illiterate and general-education and between illiterate and high-education participants. *****p < 0.05, significant; pairwise comparisons revealed significant differences between participants with a more than 5-year history of diabetes and those with a 1- to 5-year history.
Regarding the level of education, the current study has shown that low educational levels adversely affect patients' QOL: illiterate patients had the worst QOL scores compared with those with higher educational levels. This is in agreement with numerous studies worldwide demonstrating that increased education among diabetic patients improves overall health outcomes, including QOL (Alshayban and Joseph, 2020; Nielsen et al., 2016; Powers et al., 2016; Rodríguez-Almagro et al., 2018). In fact, Diabetes Self-management Education and Support (DSME/S) is recognized by the American Diabetes Association, the American Association of Diabetes Educators, and the Academy of Nutrition and Dietetics as a means of providing the knowledge and skills necessary for patient self-care (Powers et al., 2016). Other factors that contributed to poor QOL among diabetic patients in our study include comorbidities such as heart disease. Previous studies have reported that cardiovascular disease negatively impacts the QOL of diabetic patients (de Visser et al., 2002; Wändell, 2005). Indeed, the consequences of cardiovascular disease in diabetic patients go well beyond a negative impact on quality of life, extending to an increased risk of morbidity and mortality (Bauters et al., 2003; Savarese and Lund, 2017). Physical activity is known to be key to the prevention and management of type 2 diabetes (Colberg et al., 2010). Our study, as well as many others, demonstrated that it also has a great impact on the QOL of diabetic patients (Cardoso et al., 2016; Çolak et al., 2015; Corrêa et al., 2017). It has been reported that exercising more than 3 h per week for one year significantly improved patients' QOL (Jin et al., 2018). A longer duration of diabetes was also associated with poor QOL in our study. Not surprisingly, patients who have had diabetes for more than 5 years tend to have lower health-related QOL scores, especially in the physical health domain (Corrêa et al., 2017; Jin et al., 2018). This is likely a result of increased disease severity, which is known to negatively impact the QOL of diabetic patients (Alshayban and Joseph, 2020; Scollan-Koliopoulos et al., 2013). Despite its importance, the study was limited by the small sample size. In addition, data collection was done at a single site; multicenter data collection would provide a more comprehensive evaluation of the factors affecting Saudi diabetic patients. Also, the EQ-5D-5L is not a diabetes-specific tool and hence can be affected by the presence of diseases and comorbidities other than diabetes.

Conclusion
Several factors can affect the QOL of diabetic patients, including age, gender, marital status, physical activity, presence of comorbidities and duration of diabetes. More attention needs to be given to assessing the QOL of diabetic patients and to evaluating the effect of different treatment modalities on improving patients' QOL.
2020-10-19T18:08:52.337Z
2020-09-30T00:00:00.000
{ "year": 2020, "sha1": "db462ad7925336dfe1fdf6e8429de1bd2a6741f6", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.jsps.2020.09.018", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6e0791530393ede97d9752b50284e57131aebe34", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236472039
pes2o/s2orc
v3-fos-license
Compartment syndrome following intramuscular self-injection of kerosene and rodenticide: A case report Introduction and importance Kerosene and rodenticides are used in many households in developing countries. This case report aims to discuss the progression and management of a patient with intentional kerosene and rodenticide poisoning. To our knowledge, this is the first documented case of blended kerosene-rodenticide poisoning in the medical literature. Case presentation This report describes a 23-year-old man who survived after intramuscular self-injection of 5 ml of kerosene mixed with a rodenticide into his left upper limb, with intent to commit suicide. He was admitted to our hospital following a convulsion and brief loss of consciousness. Compartment syndrome developed within 24 h of admission, necessitating urgent fasciotomy, repeated surgical debridement, limb elevation, and wound cleaning and dressing, in addition to intravenous fluids, antibiotics, and close observation. Blood transfusion, phytomenadione (vitamin K1), tetanus toxoid, and analgesics were administered. The patient also received physiotherapy, and was treated for depression. The limb healed completely, with contractures at the left wrist joint. Clinical discussion Injected kerosene and rodenticide may result in compartment syndrome and variable local and systemic complications, which require multifaceted care and a prolonged follow-up period. Conclusion Seemingly minor injuries at presentation may quickly progress into considerable complications such as compartment syndrome. It is imperative that physicians comprehensively investigate patients with poisoning for multiorgan dysfunction. Anticipation of local and systemic complications of injected poisons and timely medical and surgical intervention is life-saving. Introduction Kerosene is a liquid hydrocarbon that is mainly used as a solvent and fuel for cooking and lighting in many households in developing countries. Experience has been gained with ingested and inhaled hydrocarbons, these being the common routes of accidental or intentional poisoning. Intentional poisoning is mostly observed among adolescents and adults [1], whereas accidental injection may occur as part of recreational abuse or in an industrial setting [2]. Injection of kerosene is an uncommon occurrence, often linked to suicide attempts, and mostly involving the upper extremities [1,3]. Males are more commonly affected than females [1]. In the same manner, rodenticides (rat poisons) are widely used to control rodent populations, increasing the risk of intoxication and suicide. Besides oral intake, rodenticides are easily absorbed through the skin because they are lipophilic [4]. According to the American Association of Poison Control Centers, the majority of human rodenticide exposures are due to long-acting anticoagulants, bromethalin, phosphides, cholecalciferol, and warfarin [5]. In this report, we describe a case of intentional administration of kerosene mixed with an unidentified rodenticide, and also review the related literature. This report was written in compliance with the SCARE 2020 criteria [6]. Case presentation A 23-year-old male presented to a referral hospital following attempted suicide by intramuscular self-injection of approximately 5 ml of kerosene blended with rodenticide into two sites in his left upper limb. This was his first such act, and it was preceded by a relationship conflict.
There was no history of poison ingestion, previously diagnosed mental health disorder, underlying chronic medical illness, or regular medication use, and no significant previous surgical history. He subsequently developed a seizure, lost consciousness, and was brought by his neighbours to the accident and emergency unit within 2 h of poisoning. At admission to the medical ward, he was conscious, alert, and had no focal neurological signs. He experienced abdominal pain and generalized body aches. Local examination revealed needle-entry marks over the left wrist and cubital fossa. Examination of the other extremities was normal. Respiratory and cardiovascular system assessment was normal; he was afebrile; but abdominal assessment revealed generalized tenderness. His blood pressure was 113/86 mmHg, pulse rate 115 bpm, and oxygen saturation 96% in room air, with a random blood glucose level of 8.0 mmol/L. At admission, the attending physician recommended nil per os for at least 24 h; 3 L of intravenous normal saline and dextrose were administered over 24 h, and analgesics were given. A complete blood count revealed a white blood cell count of 3880 cells/μl, a haemoglobin level of 17.4 g/dl, and a platelet count of 177,000/μl. The following day, he had developed marked swelling of the left forearm and hand; the skin had blisters and was shiny; and cold fingers with limited movement were noted. On the basis of these signs, a diagnosis of compartment syndrome of the left upper extremity was reached, which warranted urgent fasciotomy and surgical debridement. This prompted his transfer to the surgical department, followed by fasciotomy extension and repeat surgical debridement on day-8 and day-17, with removal of devitalized skin and subcutaneous tissue. Fig. 1A shows a photograph of the left upper limb on day-19 post-fasciotomy. Surgical procedures were accomplished using intramuscular pethidine, and were not associated with prolonged bleeding. Limb elevation was performed, and wound dressing was done using vinegar and vaseline gauze. Following the third session of debridement, the patient developed hemorrhage originating from the anterior proximal forearm. A pressure dressing was applied, intramuscular phytomenadione (vitamin K1) was administered, and haemostasis was achieved. Blood grouping and crossmatching were done (O rhesus D positive), and he was subsequently transfused with 2 units of whole blood. The pretransfusion haemoglobin level was 4.4 g/dl. No other laboratory or radiological investigations were requested due to financial challenges, and maintenance therapy with vitamin K1 was not recommended. During the course of hospitalization, no further episodes of unexplained bleeding were documented. Intravenous antibiotics were administered (Ampiclox 1 g 8-hourly, Metronidazole 500 mg 8-hourly, and Meropenem 1 g 8-hourly), along with intravenous crystalloids, analgesics (intravenous tramadol 100 mg 8-hourly, diclofenac 75 mg 8-hourly, oral morphine 10 ml 8-hourly, and paracetamol tablets 1 g 8-hourly), bisacodyl tablets 10 mg nocte, and haematinics, and oral fluid intake was encouraged. He also received evaluation by a physiotherapist, and mental health assessment by a psychiatrist. A diagnosis of depression was made, and oral amitriptyline was prescribed. Gradually, granulation tissue formed, and he was discharged 42 days after admission. Home-based follow-up on day-63 revealed granulation tissue formation with sloping edges of the wound (Fig. 1B).
Wound care continued, and on day-155 after fasciotomy, the wounds had healed, but with contractures at the left wrist (Fig. 1C). The patient had no other concerns, and was able to perform routine activities, including riding a motorcycle. Continuation of physiotherapy was recommended. Discussion While ingestion of hydrocarbons and rodenticides is common, parenteral administration of a mixture of a hydrocarbon and a rodenticide has not previously been documented in the medical literature. Attempted suicide by tissue injection of hydrocarbons is often linked to mental disorders, including depression, schizophrenia, and drug addiction [3,7]. The most common site is the anterior forearm, through the subcutaneous, intramuscular, intravenous, or mixed routes [3,8]. In this case, the intramuscular route was used. The clinical presentation of hydrocarbon poisoning depends on the route of exposure. However, clinical manifestations do not differ significantly for parenteral administration of hydrocarbons [3]. Ingestion and inhalation often result in symptoms of chemical pneumonitis, whereas injection of hydrocarbons can result in significant local and systemic morbidities such as cellulitis, thrombophlebitis, compartment syndrome, necrotizing fasciitis, abscess formation, arrhythmia, and reversible pulmonary oedema [1,3,[7][8][9]. The neurovascular status of affected extremities should therefore be monitored for early signs of compartment syndrome [3]. Injection of kerosene or its derivatives may also cause agitation, drowsiness, lethargy, and fever [1,8,10]. Kósa [11] described a patient who developed tonic-clonic convulsions, lost consciousness, and died shortly after accidental intravenous injection of petrol by a medical doctor. Similarly, Amiri and colleagues [1] described 10 intravenous drug addicts, 8 of whom lost consciousness after intravenous administration of kerosene. Our patient developed a convulsion and lost consciousness, but never manifested respiratory or other systemic symptoms. He was diagnosed with compartment syndrome, which developed within 24 h of hospitalization. There is no specific antidote for hydrocarbon toxicity. Therefore, symptom-based and goal-directed supportive care remains the mainstay of therapy. Elevation and immobilization of the affected extremity, intravenous antibiotics, and careful monitoring for local and systemic complications are key aspects of management. Some authors [8] have reported the use of intravenous steroids, although this has not been extensively studied. Researchers have suggested surgical intervention by fasciotomy and repeated debridement as the definitive treatment for patients who develop compartment syndrome and necrotizing fasciitis, with or without the need for skin grafting. In addition, abscesses should be drained [3]. As in many other reported cases, this aggressive approach averts the need for limb amputation [3,12]. Survival after intravenous administration of up to 30 ml of kerosene or its derivatives has been documented in several case reports [3,8,10], but administration of more than 5 ml of kerosene is associated with a significantly higher likelihood of mortality [1]. Long-term follow-up is required, because these patients may develop contractures, neurological deficits following nerve entrapment or ischemia, and oedema following vascular insufficiency [3]. In the present case, contractures developed, requiring reconstructive surgery and physiotherapy.
Mental health evaluation and subsequent follow-up should be emphasized, since repeat suicide attempts may occur [3,12]. We were unable to determine the specific type and quantity of rodenticide that the patient added to the kerosene. When used to commit suicide, rodenticides such as cholecalciferol, metal phosphides, anticoagulants, and bromethalin, among others, are frequently administered orally [4,[13][14][15], and thus literature on parenterally administered rodenticides is scarce. The wide array of clinical manifestations that are unique to different types of rodenticides makes it possible for physicians to narrow down the potential rodenticide. First-generation (warfarin) and second-generation anticoagulant rodenticides (brodifacoum, bromadiolone, difenacoum, flocoumafen and difethialone) can cause epistaxis, haematuria, haematochezia, massive pulmonary hemorrhage, intracranial hemorrhage, easy bruising, and anaemia, due to inhibition of the enzyme vitamin K epoxide reductase, which results in inactivation of the vitamin K-dependent clotting factors II, VII, IX, and X [4,15,16]. Prolonged patient monitoring is warranted, given that the plasma half-life of second-generation anticoagulant rodenticides (AR) lies between 16 and 220 days, and that coagulopathy may be observed several days after intoxication [4,16]. However, our patient did not develop coagulopathy during the 5-month period of follow-up, perhaps because he used a less potent rodenticide, or a lower dose of rodenticide. Nonetheless, some rodenticides such as arsenic and fluoroacetamide may lead to neurological toxicity, manifesting with seizures and coma, while others such as zinc phosphide may cause renal failure, cardiorespiratory problems (shock, congestive heart failure), and pulmonary oedema, all of which were absent in this patient [14]. The multiorgan toxic effects of metal phosphides are a result of toxic phosphine gas liberated after hydrolysis by gastric acid [14], a process which requires oral ingestion. Non-life-threatening injection of up to 5 ml of zinc phosphide has been documented [17]. Considering that the anticoagulant effects of AR can be reversed by administration of vitamin K1 and fresh frozen plasma [16], our patient received vitamin K1 and whole blood transfusion only after he started bleeding. Meanwhile, we did not attribute his bleeding episode to AR exposure, because it occurred following debridement, and we did not determine the prothrombin time (PT) and international normalized ratio (INR). We therefore recommend that physicians attending to patients with rodenticide poisoning evaluate and monitor for coagulopathy and organ dysfunction. Conclusion Deliberate self-injection with a hydrocarbon-rodenticide mixture is a previously undocumented phenomenon, which calls for evaluation for an underlying mental health problem. Physicians should be aware of the potentially devastating complications of hydrocarbon and rodenticide intoxication, and may need to contact an expert toxicologist. In consideration of the diversity of rodenticides, physicians should make every effort to identify the specific poison, in order to guide treatment decisions. Limb salvage and recovery were possible following urgent fasciotomy, repeated surgical debridement, and meticulous supportive care. Sources of funding None. Ethical approval Not required. Single case reports are exempt from ethical approval in our institution.
Consent for publication Written informed consent to publish this case report and accompanying images was obtained from the patient. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request. Authors' contributions DA participated in patient care. DA, WIE, WMW, DK, and PKK collected clinical information and followed up the patient. WIE drafted the original manuscript. DA and WMW reviewed and edited the manuscript. All authors read and approved the final manuscript. Research registration number Not applicable. Provenance and peer review Not commissioned, externally peer-reviewed. Declaration of competing interest There are no conflicts of interest.
2021-07-29T06:17:51.909Z
2021-07-22T00:00:00.000
{ "year": 2021, "sha1": "264b41ab9e7b5e7ab4973db0612823662f307731", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.ijscr.2021.106233", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e5a963593a769974acbd61c53868975c6ccaa60e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
218673855
pes2o/s2orc
v3-fos-license
Detectability of radio afterglows from Fast Radio Bursts produced by Binary Neutron Star Mergers Binary neutron star (BNS) mergers are one of the proposed origins for both repeating and non-repeating fast radio bursts (FRBs), which associates FRBs with gravitational waves and short gamma-ray bursts (GRBs). In this work, we explore the detectability of radio counterparts to an FRB by calculating the radio afterglow flux powered by the two components: a relativistic jet and a slower isotropic ejecta from a BNS merger. The detection probability of a radio afterglow for an FRB is calculated as a function of the source redshift, observing time, and flux sensitivity, assuming that FRBs are not strongly beamed. The model parameter distributions inferred from short GRB afterglows are adopted. We find that the detection probability for an FRB at $z=0.5$ is 3.7 and 4.1% for the jet and isotropic components, respectively, when observed at the timing of their peak flux ($\sim$10 days and 1 year) with a typical sensitivity of 10 $\mu$Jy. The probability increases to 10 and 14%, respectively, with $\sim$1 $\mu$Jy sensitivity achievable with future facilities (e.g. SKA). In particular, for the repeating FRB 180916.J0158+65, we find a high chance of detection (60% at 10 $\mu$Jy sensitivity) for the isotropic component, which would peak around $\sim$10 years after the merger, as a natural consequence of its close distance ($z=0.03$). Therefore long-term radio monitoring of persistent radio emission from this object is important. The detection probability is similar for the jet component, though the peak time ($\sim$200 days) has likely already passed for this FRB. INTRODUCTION Fast radio bursts (FRBs) are radio transients with an intrinsic pulse width of milliseconds, whose origin is still enigmatic (Thornton et al. 2013). Nearly 100 FRBs have been discovered (Petroff et al. 2016) since the first event, archived in 2001 and reported in 2007 (Lorimer et al. 2007), with an all-sky rate of ~10^3-10^4 sky^-1 day^-1 above a 1 Jy ms fluence threshold (Keane & Petroff 2015; Lu & Piro 2019). Their large dispersion measures (the integrated column density of free electrons along the line of sight), exceeding the contribution by electrons in the Milky Way Galaxy, as well as the direct localization of the host galaxies of some FRBs, point to an extragalactic origin and imply source redshifts in the range z = 0.1-1 (Chatterjee et al. 2017; Marcote et al. 2017; Tendulkar et al. 2017; Bannister et al. 2019; Prochaska et al. 2019; Ravi et al. 2019; Cordes & Chatterjee 2019; CHIME/FRB Collaboration et al. 2019; Lu & Piro 2019). While some FRBs are found to be repeating sources (Spitler et al. 2016; CHIME/FRB Collaboration et al. 2019), most seem to appear only once despite intensive searches for possible repeating signals (Lorimer et al. 2007; Petroff et al. 2015; Shannon et al. 2018). The distribution of intrinsic pulse width is different for repeating and non-repeating FRBs, as observed by CHIME, which implies different origins for these two populations (CHIME/FRB Collaboration et al. 2019). The high power and short timescale of FRBs naturally associate them with compact stars (see reviews by Cordes & Chatterjee 2019; Platts et al. 2019), especially neutron stars, because of their enormous gravitational and electromagnetic energies and short characteristic time scales of O(GM/c^3) ~ 10 μs. One scenario is that a non-repeating FRB is produced at the time of merger of a binary neutron star (BNS) (Totani 2013; Zhang 2014; Wang et al.
2016; Yamasaki et al. 2018), while repeating FRBs may also be powered by the surviving remnant from the merger (Margalit et al. 2019). The BNS merger scenario is particularly interesting because of the potential relation to gravitational waves (GWs) and short gamma-ray bursts (GRBs). The high end of the currently estimated BNS merger rate, 110-3840 Gpc^-3 yr^-1 (90% confidence intervals) by LIGO and VIRGO during the O1 and O2 observations (Abbott et al. 2019), is within an order of magnitude of (although perhaps lower than) the FRB event rate, which can be translated into a local volumetric rate of ~10^3-10^4 Gpc^-3 yr^-1 by assuming a maximum redshift of z ~ 1 (Totani 2013). Another hint is the recent localization of FRB 180924 and 190523 to massive, weakly star-forming galaxies (Ravi et al. 2019; Bannister et al. 2019), which suggests that the progenitors of non-repeating FRBs belong to an old stellar population. The possible relation between FRBs and BNS mergers leads to potential multi-wavelength electromagnetic counterparts, including short gamma-ray burst afterglows and r-process kilonovae like those observed in the BNS merger event GW170817 (see Abbott et al. 2017a,b, and references therein). Such a hypothesis has been tested by deep targeted searches for FRBs from GRB remnants (Madison et al. 2019; Men et al. 2019) and optical follow-up of FRBs (Niino et al. 2014, 2018; Tominaga et al. 2018), but no significant candidate has been identified. Identification of any electromagnetic counterpart to an FRB will not only help pinpoint its host galaxy, but also provide crucial information on the progenitor system and ambient environment. Throughout this paper we consider BNS mergers as the progenitor model for (both repeating and non-repeating) FRBs, and focus on the detectability of radio afterglows, lasting for years, produced by a relativistic outflow interacting with ambient matter (Nakar & Piran 2011; Troja et al. 2019; Hajela et al. 2019). As observed in GW170817, a BNS merger produces a relativistic structured jet that contains an energetic core whose energy falls off exponentially outward, and also produces a quasi-isotropic neutron-rich ejecta that powers a kilonova (e.g. Troja et al. 2019; Villar et al. 2017, and references therein). Both the jet and the isotropic ejecta result in observable afterglows with different brightness and timescales, and we discuss their detectability by radio observations in order to test the BNS merger model of FRBs. For the calculation of radio afterglows from BNS mergers, we use the model developed in our previous work (Lin et al. 2019). A unique point of this model is that it treats the efficiency of electron acceleration and the minimum electron energy as independent parameters, while previous models made the oversimplified assumption that all electrons in the shock are accelerated as nonthermal particles. As a result of this additional degree of freedom, we found a quantitatively different fit to the observed light curve of GW170817. The paper is organized as follows: in Section 2, we describe the models of outflow dynamics and afterglow emission. In Section 3, we present the radio afterglow light curve calculation and the Monte Carlo simulation of the flux distribution, and determine the detection probability as a function of source redshift and detector sensitivity. The discussion and conclusions are given in Sections 4 and 5. Table 1. Median and 1σ scatter of the Gaussian fits to the parameter cumulative distributions from short GRB afterglow observations (Fong et al. 2015).
MODEL We apply the same model as in Lin et al. (2019) for both the outflow dynamics and the afterglow emission, as described below. Two outflow components are considered to power the afterglow emission: an axisymmetric, angularly structured jet with a Gaussian energy/Lorentz-factor profile, and a (quasi-)isotropic ejecta with power-law radial velocity stratification. The relevant model parameters include the isotropic energy (E_k,iso), the initial Lorentz factor (Γ_0), the ambient ISM hydrogen number density n, the angular width of the Gaussian jet (θ_j), and the power-law index of the ejecta stratification (α). Here E_k,iso and Γ_0 refer to the isotropic-equivalent energy and Lorentz factor of the jet head, and to the total energy and common Lorentz factor of the isotropic ejecta, respectively. The evolution of the shock in any direction can then be solved for as the shock velocity β(R) as a function of the shock radius R. For the electron distribution developed by shock acceleration, we approximate the solution by a broken power law (see also Sari et al. 1998). The relevant parameters include the fractions of internal energy transferred to electrons and to the magnetic field (ε_e, ε_B), the power-law index of electron injection (p), and the number fraction of injected electrons in the downstream (f). Using the electron distribution as input, the comoving synchrotron energy spectrum P(ν, R) is calculated as a function of the comoving photon frequency ν and the shock radius R using the public Python package naima (Zabalza 2015). The effect of synchrotron self-absorption is ignored, since it does not affect our results significantly. Finally, the energy flux received by a distant observer is obtained by integrating the Doppler boost of P(ν, R)/(4πD_L^2) over the photon equal-arrival-time surface, which can be solved from β(R) in all directions (see also Granot et al. 1999), given the following observer parameters: the observed photon frequency ν_obs, the observed time t_obs, the viewing angle θ_obs, and the luminosity distance D_L. Note that while the jet afterglow appears brighter than the ejecta afterglow in Fig. 1, viewing angles in the presented range constitute only 13% of the whole population, assuming random binary orientation. For the median value (θ_obs = 60°), the jet produces an afterglow flux comparable to that of the ejecta, which results in the comparable detection probabilities (but different peak times) shown in Figs. 2 and 3. Detection probability from simulated events Here we present an estimate of the radio afterglow detection probability, assuming the following model parameter distributions. For the jet component, we assume the orientation of the BNS system to follow a uniform spherical distribution, i.e., a differential θ_obs distribution of dP/dθ_obs = sin(θ_obs) for 0° < θ_obs < 90° and 0 otherwise. The initial Lorentz factor is fixed at Γ_0 = 100. We adopt the Gaussian fits (Table 1) to the cumulative distributions of the isotropic-equivalent on-axis energy E_k,iso, the ambient density n, the opening angle θ_j and the electron index p inferred from short GRB afterglow observations (Fong et al. 2015, F15), with fixed values of ε_e = 0.1, ε_B = 0.01, f = 1 and a lower bound of n_min = 10^-4 cm^-3 for the n distribution imposed by F15. The distribution of θ_j is estimated using only GRBs showing jet breaks in their afterglows. For the isotropic ejecta component, the same distributions of n and p are used, but E_k,iso = 10^51 erg and Γ_0 = 1.05 are fixed, because no observational constraint is available for their scatter. We assume α → ∞ (i.e.
single velocity structure) for simplicity, also considering the observational constraint α > 6 suggested by late-time radio monitoring of GW170817 (Hajela et al. 2019). Producing many model parameter sets by the Monte Carlo method, we calculated radio light curves for each parameter set, and determined the detection probability as a function of a given detection threshold F_lim, source redshift z, and observation time t_obs. The result is shown in Fig. 2, where the observed frequency is fixed at 1.4 GHz. We further calculate the maximum detection probability at the best timing in t_obs. The maximum probability and the corresponding best observation time are shown in Fig. 3 as functions of redshift and detector sensitivity. FRBs with reported upper limits on radio afterglows (Scholz et al. 2016; Shannon & Ravi 2017; Mahony et al. 2018; Bannister et al. 2019; Ravi et al. 2019; Prochaska et al. 2019; Marcote et al. 2020) are shown by plus marks and also listed in Table 2. [Table 2 notes: * repeating FRB sources. (a) Redshifts inferred from localized host galaxies, except those of FRB 180814, 131104 and 171020, which were inferred as maximum values from their dispersion measures (see corresponding references). (b) Upper limits on the possible persistent radio emission and the corresponding observation times after FRB detection (or after detection of the first burst in the case of repeaters). VLA limits for FRB 180814, 180916 and 190523 were obtained based on the non-detection in the VLA Sky Survey performed in 2017 (https://science.nrao.edu/vlass), i.e., prior to the FRB detection. (c) The maximum detection probability for the flux limit F_lim and the corresponding best observation time, for the jet and isotropic components (the latter in parentheses).] Detection prospects At the typical sensitivity of current radio telescopes (10-100 μJy), the detection probability is estimated to lie between 1 and 10% for a source located at z < 1 (which includes the majority of the present FRB sample), and increases to >10% at z < 0.1. At 1 μJy sensitivity, which is the design level of the SKA, all FRBs with z < 1 will have >10% detection prospects. Due to the comparable radio flux density levels produced by the jet (with the median viewing angle θ_obs = 60°) and the isotropic afterglow components, the maximum detection probabilities at the best observing time have a similar appearance for the two components on the redshift-sensitivity plane (Fig. 3). However, the peak time is much earlier for the jet afterglow (~10 days after the merger), due to the relativistic velocity of the jet, than for the mildly relativistic isotropic ejecta (~1 year after the merger). Note that Fig. 3 shows a tendency of later peak time t_p towards closer distance and higher sensitivity. The key to interpreting this result is the scaling relation of the afterglow peak flux and peak time: F_p ∝ E_k,iso n^((p+1)/4), t_p ∝ (E_k,iso/n)^(1/3). If the scatter in E_k,iso is negligible, the scatter in the ambient density n leads to a negative correlation F_p ∝ t_p^(-(3p+3)/4), i.e., fainter events have later peak times. This negative correlation remains true as long as E_k,iso is less scattered than n, which is the case for short GRBs (Table 1), though the correlation is weaker due to the finite scatter of E_k,iso. Consequently, a higher sensitivity or a closer source will allow much fainter detections, which eventually leads to the tendency of later peak times.
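To illustrate this scaling argument concretely, the short Python sketch below draws E_k,iso and n from log-normal distributions and applies the two scaling relations. The medians and scatters used here are illustrative placeholders rather than the Table 1 (F15) values, so only the sign and qualitative behaviour of the resulting F_p-t_p correlation are meaningful.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder log-normal parameters standing in for the Table 1 fits
# (the medians and 1-sigma scatters in dex are illustrative, not F15's).
log_E_med, log_E_sig = 51.0, 0.6   # log10 E_k,iso [erg]
log_n_med, log_n_sig = -2.0, 1.5   # log10 n [cm^-3]
n_min, p = 1e-4, 2.2

E = 10 ** rng.normal(log_E_med, log_E_sig, 100_000)
n = np.clip(10 ** rng.normal(log_n_med, log_n_sig, 100_000), n_min, None)

# Peak flux and peak time scalings (arbitrary normalization):
# F_p ~ E * n^((p+1)/4),  t_p ~ (E/n)^(1/3)
F_p = E * n ** ((p + 1) / 4)
t_p = (E / n) ** (1 / 3)

# Correlation of log F_p and log t_p: negative when the scatter of n
# dominates over that of E, i.e. "fainter events peak later".
r = np.corrcoef(np.log10(F_p), np.log10(t_p))[0, 1]
print(f"corr(log F_p, log t_p) = {r:+.2f}")  # negative for these inputs
```

With a larger scatter in E than in n the sign flips, which is exactly why the relative widths of the two distributions control the trend seen in Fig. 3.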
Consistency with radio afterglow limits on FRBs We discuss whether the available limits on persistent radio emission imposed on some FRBs by follow-up observations are consistent with our calculated detection probabilities. From Table 2, we found that the detection probabilities for most FRBs are typically in the range of 1-10%. Considering that the total number of the available sample is less than 10 FRBs, the non-detection of any associated radio afterglow does not strongly constrain the BNS progenitor model. The repeating FRB 180916.J0158+65 is the only exception, with a promising detection prospect (~60% at 20 μJy sensitivity) for the isotropic component, which peaks around ~10 years after the merger, as a natural consequence of its close distance (z = 0.03). Yet the probability does not reach 100%, mainly because of the low ambient densities that may appear in the adopted parameter distribution. We further note that the repeating FRB activity would start 1-10 years after a BNS merger (Yamasaki et al. 2018). Therefore there is a reasonable chance to detect a radio afterglow for this source in the near future, though a non-detection would not give a strong constraint. The ambient density distribution adopted in this work from F15 is widely spread, from 10^-4 to 10 cm^-3. However, some FRBs listed in Table 2 are well localized to a type-specified host galaxy. Therefore, the detection probability is underestimated if the source is localized in a high-density environment such as a star-forming region (e.g., FRB 180916.J0158+65), and overestimated if localized in a more diffuse environment such as an early-type galaxy or outskirt regions (e.g., FRB 180924, 190523). Caveats on the jet opening angle The distribution of the jet opening angle θ_j assumed in this work significantly affects the detection probability, as it scales with θ_j^2. We note one important caveat: while we assumed the same parameter distributions for short GRB and FRB afterglows in the BNS merger progenitor model, the parameter space in which an FRB is produced may not coincide with that for successful GRB jet formation. One such scenario is that FRBs may appear in a BNS merger with a "choked" or failed jet in the parameter space of smaller (E_k,iso)_jet and larger θ_j, e.g., (E_k,iso)_jet < 0.05 (E_k,iso)_ej θ_j^2 (Duffell et al. 2018), which may produce a bias against afterglow detections at larger viewing angles (≳23°). However, the resulting detection probability does not necessarily increase, because the expected flux is fainter at a larger viewing angle. Another caveat is that we have used the θ_j distribution of F15, based on 4 short GRBs with jet break measurements, which is narrowly peaked at 6° with a 1σ scatter of 1°. F15 also provided another distribution including 7 more samples with lower bounds on θ_j, assuming equal weighting over all 11 events, which extends the cumulative distribution almost linearly to an ad hoc maximum angle (30°, 90°). Considering the uncertainty in the treatment of these additional events, and the fact that the narrow θ_j distribution is well consistent with the jet width (θ_j ~ 5°) seen in GW170817 (Troja et al. 2019; Hajela et al. 2019), we only adopt the narrow distribution model in this work as a conservative estimate for the detection probability (since it scales with θ_j^2). A final remark is that the θ_j values in F15 are measured as the jet edge of a top-hat jet, which differs from the definition of the jet width in our Gaussian jet model.
For the same jet break time observed in a short GRB afterglow, the Gaussian jet model measures a slightly smaller θ_j (roughly by a factor of 0.7). However, the resulting difference in detection probability is at most a factor of 2, and hence we ignored it. CONCLUSION In this work we examined the prospects of detecting radio afterglows of FRBs, in the scenario that FRBs are produced by BNS mergers. We considered two outflow components: a relativistic jet and a mildly relativistic, isotropic ejecta. Detection probabilities were calculated as a function of sensitivity, source redshift and observation time (Fig. 2), assuming random viewing angles from the jet axis (i.e., assuming that FRBs are not strongly beamed) and adopting the model parameter distributions inferred from short GRB observations. As a result, we found that the detection probability for FRBs at z < 1 is between 1 and 10% for the typical sensitivity (10-100 μJy) of current radio telescopes, which is enhanced to >10% if a 1 μJy sensitivity can be achieved by future facilities (e.g., SKA). The expected flux peaks typically at ~10 days for a jet afterglow and ~1 year for an isotropic afterglow. We also found a tendency toward later peak times for closer sources and higher sensitivities, which can be attributed to the contribution of low-luminosity events whose afterglows peak at later times (Fig. 3). For individual FRBs with reported upper limits on persistent radio flux, we listed their maximum detection probabilities for the two components at the best observation time in Table 2. The probability is less than 10% for most of these FRBs, and hence a non-detection does not give a strong constraint on the BNS merger scenario of FRBs. However, a future larger sample would give a meaningful constraint or lead to a detection of a radio afterglow. In particular, for the repeating FRB 180916.J0158+65, we found a 60% chance of detection for the isotropic component, whose flux peaks at about 10 years after the merger and remains detectable for a few decades, as a natural consequence of its close distance (z = 0.03). The time scale of 10 yr is also comparable to the lifetime of repeating FRBs formed by a BNS merger. Though the detection probability is not close to 100% because of the distribution of model parameters, long-term radio monitoring of this object is thus interesting.
2020-05-19T01:01:07.622Z
2020-05-16T00:00:00.000
{ "year": 2020, "sha1": "6fa563396ee806dfa76cd983571d59ab4c9ba5ad", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "6fa563396ee806dfa76cd983571d59ab4c9ba5ad", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
4782977
pes2o/s2orc
v3-fos-license
Registered report: Coadministration of a tumor-penetrating peptide enhances the efficacy of cancer drugs The Reproducibility Project: Cancer Biology seeks to address growing concerns about reproducibility in scientific research by conducting replications of 50 papers in the field of cancer biology published between 2010 and 2012. This Registered report describes the proposed replication plan of key experiments from 'Coadministration of a tumor-penetrating peptide enhances the efficacy of cancer drugs' by Sugahara and colleagues, published in Science in 2010 (Sugahara et al., 2010). The key experiments being replicated include Figure 2 and Supplemental Figure 9A. In Figure 2, Sugahara and colleagues presented data on the tumor penetrance of doxorubicin (DOX) when co-administered with the peptide iRGD, as well as the effect of co-treatment with DOX and iRGD on tumor weight and cell death. In Supplemental Figure 9A, they tracked the body weight of mice treated with DOX and iRGD to provide evidence that iRGD does not increase known DOX toxicity. The Reproducibility Project: Cancer Biology is a collaboration between the Center for Open Science and Science Exchange, and the results of the replications will be published by eLife. DOI: http://dx.doi.org/10.7554/eLife.06959.001 Introduction αvβ3 integrin is a marker of tumor blood vessels and is targeted by the family of RGD peptides that mimic its natural ligand. This family of small peptides has been widely shown to promote the targeting of a variety of therapeutics to tumor blood vessels (for review see Danhier et al., 2012; Feron, 2010). Sugahara and colleagues previously presented data showing that a novel cyclized form of this peptide, iRGD, coupled to a motif that binds the Neuropilin-1 receptor, helped increase the tissue penetrance of anti-cancer drugs beyond the vasculature when it was directly conjugated to those chemotherapies (Sugahara et al., 2009; Feron, 2010). In their 2010 study, they showed that iRGD can increase penetrance simply through co-administration with therapies, including peptide-based therapeutics, small-molecule drug compounds, and nanoparticle-based therapeutics. This replication attempt will focus on the finding that iRGD co-treatment with doxorubicin (DOX) in orthotopic xenograft prostate tumors increased drug penetrance and accumulation, increased TUNEL staining for apoptotic cells, and decreased tumor volume and weight. In Figure 2B, Sugahara and colleagues showed increased tissue permeability to DOX when co-injected with iRGD, demonstrating increased accumulation of DOX in the prostate after intravenous co-injection with iRGD. They documented little change in DOX penetrance in non-prostate tissues of the mouse. This figure is evidence of the first key point for replication: that iRGD increases the tissue penetrance of the co-injected drug. This experiment will be replicated in Protocol 3. In Figure 2C, Sugahara and colleagues showed how treatment with varying doses of DOX, with or without iRGD, affected tumor weight. They showed that treatment with 1 mg/kg DOX with iRGD caused a decrease in tumor weight similar to treatment with 3 mg/kg DOX alone. Addition of iRGD to 3 mg/kg DOX further decreased tumor weight. In Supplemental Figure 9A, they also showed that the overall body weight of mice treated with DOX and iRGD did not change as compared to mice treated with DOX alone, indicating that DOX-related weight loss was not exacerbated by the addition of iRGD. These experiments will be replicated in Protocol 4.
In Figure 2D, Sugahara and colleagues showed that treatment with DOX in combination with iRGD increased the number of TUNEL-positive (i.e., apoptotic) tumor cells as compared to DOX alone. They also showed that iRGD did not increase the number of TUNEL-positive cells in the heart, indicating that DOX-related cardiotoxicity is not exacerbated by co-treatment with iRGD. This experiment will be replicated in Protocol 5. To date, the closest direct replication of these experiments has been performed by Akashi and colleagues, who tested the effects of co-administration of iRGD with the drug gemcitabine in xenograft models of pancreatic cancer. Relative tumor volume in two pancreatic cancer cell line-derived xenografts treated with gemcitabine in combination with iRGD was significantly reduced compared with tumor volume in xenografts treated with gemcitabine alone. Additionally, Akashi and colleagues assessed compound penetration by co-administering iRGD and the dye Evans Blue. They showed that Evans Blue dye penetrated further into tumor tissue when co-administered with iRGD than alone (Akashi et al., 2014). Additionally, Gu and colleagues showed that co-administration of iRGD with encapsulated paclitaxel increased nanoparticle extravasation across the blood-brain barrier, and co-administration of paclitaxel nanoparticles and iRGD led to an increase in the mean survival of mice carrying intracranial gliomas (Gu et al., 2013). Furthermore, Pang and colleagues extended the work done in Sugahara and colleagues' 2010 paper by showing that binding of iRGD to plasma albumin through an additional cysteine residue prolonged the half-life of iRGD and increased tumor penetrance (Pang et al., 2014). In 2011, Ruoslahti and colleagues extended their findings to show that iRGD was also effective in increasing the tumor penetrance of a novel therapeutic consisting of a nanoparticle-encapsulated peptide that homed to tumor vasculature and disrupted mitochondrial membranes, causing cell death; this co-treatment increased the survival of mice bearing glioblastomas as compared to mice treated with the nanoparticle alone (Agemy et al., 2011). Materials and methods Unless otherwise noted, all protocol information was derived from the original paper, references from the original paper, or information obtained directly from the authors. An asterisk (*) indicates data or information provided by the Reproducibility Project: Cancer Biology core team. A hashtag (#) indicates information provided by the replicating lab. Protocol 1: synthesis of iRGD This summary describes the synthesis of the peptide iRGD based on information from Sugahara and colleagues (Sugahara et al., 2009, 2010). The iRGD peptide (H-Cys-Arg-Gly-Asp-Lys-Gly-Pro-Asp-Cys-NH2) will be chemically synthesized using Fmoc (9-fluorenylmethoxycarbonyl) chemistry. The fully synthesized crude peptide will be cleaved from the resin with trifluoroacetic acid (TFA). The crude peptides will then be precipitated with diethyl ether, drained, and washed. The peptides will then be amide-blocked at the C-terminus and cyclized by a disulfide bridge between C1 and C9. The peptides will be isolated and purified by high-performance liquid chromatography (HPLC). Fractions of greater than 95% purity will be used for the investigation. The purity and molecular weight of the peptide will be confirmed by matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry.
Exact synthesis specifications were not originally specified; the lab will follow standard procedures for synthesis. All data obtained from the experiment (synthesis specifications, materials, raw data, data analysis, control data, and quality control data, including the MALDI-TOF mass spectrometry data) will be made publicly available, either in the published manuscript or as an open access data set available on the Open Science Framework (https://osf.io/xu1g2/). i. Use sterile drapes, gloves, and instruments. c. Make a 5 mm midline incision over the bladder. d. Grasp the bladder with blunt forceps and move it aside to expose the ventral prostate gland. e. Inject cells (10 μl) into the ventral prostate gland. f. Close the incision with sutures. g. Inject mice subcutaneously with 0.1 mg/kg buprenorphine immediately postoperatively. h. Observe mice until they are awake, ambulatory, and drinking. i. Check mice again at the end of the day (around 5 pm). j. The next morning, inject mice subcutaneously with 0.1 mg/kg buprenorphine. k. Remove sutures on Day 7. Deliverables c Data to be collected: 1. Mouse health records (age of mice at time of injection). c Samples delivered for further analysis: 1. Orthotopic tumor-bearing mice for use in Protocols 2-4. Confirmatory analysis plan c Statistical analysis of the replication data: 1. None applicable. Provisions for quality control The cell lines used in this experiment will undergo STR profiling to confirm their identity and will be sent for mycoplasma testing to ensure there is no contamination. Additionally, cells used for xenograft injection will be screened against a Rodent Pathogen Panel to ensure no contamination prior to injection. All data obtained from the experiment (raw data, data analysis, control data, and quality control data) will be made publicly available, either in the published manuscript or as an open access data set available on the Open Science Framework (https://osf.io/xu1g2/). c A lab with experience in prostate gland tumor xenografts will perform the experiment. Protocol 3: quantifying the amount of Dox present in tumor tissue and major organs in mice treated with Dox with or without iRGD This protocol describes how to treat mice bearing human 22Rv1 prostate tumors from Protocol 2 with DOX and/or iRGD, harvest the tumors, and assess DOX penetrance by measuring absorbance at 490 nm (OD490), as seen in Figure 2B. Sampling c This experiment will analyze at least 3 mice per group for a final power of 97.2%. 1. See power calculations section for details. c The experiment consists of two cohorts: 1. Cohort 1: mice treated with DOX and PBS. A. N = 4. c To buffer against unexpected mouse deaths, 4 mice bearing tumors will be treated. 2. Cohort 2: mice treated with DOX and iRGD. A. N = 4. c To buffer against unexpected mouse deaths, 4 mice bearing tumors will be treated. 3. Cohort 3: untreated mice. A. N = 2. 3. Inject mice with drugs in combination: a. On the day of injection, randomly assign the 10 mice to the two treatment groups and the untreated group. i. Assign each mouse a number 1 through 10. ii. After mice have been assigned numbers, enter the treatment labels (4 labels as Negative control, 4 labels as Experimental, and 2 labels as Untreated), and randomize 10 subjects into 1 block using www.randomization.com. Record the seed number. b. Negative control: inject mice intravenously with 10 mg/kg Dox suspended in 100 μl PBS. c.
Experimental: inject mice intravenously with 10 mg/kg Dox and 4 μmol/kg iRGD suspended in 100 μl PBS. d. Untreated mice receive no injections. 4. One hour later, sacrifice the mice and excise tissues: a. Deeply anesthetize the mice with isoflurane. b. Perfuse through the heart with PBS + 1% BSA #. i. Place the deeply anesthetized mouse in a heated cage for 10 min. ii. Secure the mouse in the supine position by taping the paws to a Styrofoam work surface. iii. Make an incision through the skin with surgical scissors along the thoracic midline from just beneath the xiphoid process to the clavicle. Make two additional skin incisions from the xiphoid process along the base of the ventral ribcage laterally. iv. Reflect the two flaps of skin rostrally and laterally, making sure to expose the thoracic field completely. v. Grasp the cartilage of the xiphoid process with blunt forceps and raise it slightly to insert pointed scissors. Cut through the thoracic musculature and ribcage between the breastbone and the medial rib insertion points and extend the incision rostrally to the level of the clavicles. vi. Separate the diaphragm from the chest wall on both sides with scissors. vii. Pin the reflected ribcage laterally with 18 G needles to expose the heart. viii. Gently grasp the pericardial sac with blunt forceps and tear it fully. ix. Secure the beating heart with blunt forceps and make a 1-2 mm incision in the left ventricle. Immediately insert a 24 G × 25.4 mm animal feeding needle. The tip is bulbous and will not damage the heart. Thread the feeding needle into the base of the aortic arch using a dissecting microscope. Clamp the needle base to the left ventricle above the incision site using a hemostat. x. Cut the right atrium with scissors and, at the first sign of blood flow, begin infusion of DMEM containing 1% BSA (stage 1 perfusate). 1. Use gravity-driven perfusion at a rate of 3 ml per minute. xi. Continue perfusing the body until the fluid exiting the right atrium is entirely clear. xii. Ensure that the organs of interest become pale; if an organ does not become pale, exclude the organ from further analysis. c. Excise prostate tumor tissue, liver, spleen, pancreas, heart, lung, kidneys, and brain. 5. Homogenize each tissue separately in 1% sodium dodecyl sulfate and 1 mM H2SO4 in water. a. # Place tubes on dry ice for 1 min to freeze, then thaw for 5 min in a 25°C waterbath. 8. Centrifuge samples at 14,000×g for 15 min at # room temperature. a. Store samples at 4°C until ready for Step 9. b. For the two untreated mice, combine their samples to create the blank reference for each tissue; that is, combine tumor with tumor, liver with liver, etc. 9. Measure the OD490 of the organic phase (the lowest phase). a. For each measurement, blank with the appropriate tissue homogenate (tumor, liver, spleen, etc.) from the untreated control mice. b. Calculate the fold change in Dox level for mice treated with iRGD by dividing by the absorbance reading of mice treated with Dox alone. c. Graph the fold change by tissue. Deliverables c Data to be collected: 1. Raw readings of the OD490 absorbance of each sample. 2. Graph of DOX accumulation with iRGD or PBS per organ (compare to Figure 2B). Confirmatory analysis plan c Statistical analysis of the replication data: 1. At the time of analysis, we will perform the Shapiro-Wilk test and generate a quantile-quantile (q-q) plot to attempt to assess the normality of the data, and also perform Levene's test to assess homoscedasticity.
If the data appear skewed, we will attempt a transformation in order to proceed with the proposed statistical analysis listed below and possibly perform the appropriate non-parametric test. A. Compare the level of Dox + iRGD to the level of Dox alone in tumor tissue. c Unpaired two-tailed Student's t-test. 1. Original analysis. c Meta-analysis of original and replication attempt effect sizes: 1. This replication attempt will perform the statistical analysis listed above, compute the effect sizes, compare them against the reported effect size in the original paper, and use a meta-analytic approach to combine the original and replication effects, which will be presented as a forest plot. Known differences from original study c Details noted with a hashtag (#) were provided by the replicating lab. Provisions for quality control Tissue homogenate from untreated mice will be used to blank the spectrophotometer. Mice will be randomly assigned to treatment groups. All data obtained from the experiment (raw data, data analysis, control data, and quality control data) will be made publicly available, either in the published manuscript or as an open access data set available on the Open Science Framework (https://osf.io/xu1g2/). c A lab with experience in prostate gland tumor xenografts will perform the experiment. Protocol 4: effect of Dox alone or Dox in combination with iRGD on tumor growth and total body weight This protocol describes how to treat mice bearing human 22Rv1 prostate tumors from Protocol 2 with DOX and/or iRGD, monitor body weight, and then assess tumor weight, as seen in Figure 2C and Supplemental Figure 9A. Sampling c This experiment will analyze at least 6 mice per group for a final power of 93.5%. 1. See power calculations section for details. c The experiment consists of three cohorts: 1. Cohort 1: mice treated with PBS alone. A. N = 7. c To buffer against unexpected mouse deaths, 7 mice bearing tumors will be treated. 2. Cohort 2: mice treated with 1 mg/kg Dox and PBS. A. N = 7. c To buffer against unexpected mouse deaths, 7 mice bearing tumors will be treated. ii. After mice have been assigned numbers, enter the treatment labels (Cohort 1, Cohort 2, and Cohort 3) and randomize 3 subjects into 7 blocks using www.randomization.com. Record the seed number. b. Cohort 1: mice treated with PBS alone. c. Cohort 2: mice treated with 1 mg/kg Dox and PBS. d. Cohort 3: mice treated with 1 mg/kg Dox and 4 μmol/kg iRGD. 4. Repeat the injection every other day for 24 days. 5. Weigh mice every 4 days, starting on Day 0. 6. After 24 days of treatment, harvest tissue. a. Perfuse mice as outlined in Protocol 3, Steps 4a-b. b. Excise prostate tumor tissue and heart tissue. 7. Weigh tumor tissue. 8. Process, embed, and section tissue. a. Fix tumor and heart tissue in 4% paraformaldehyde overnight at 4°C. b. Cut each tumor and each heart in half. c. # Dehydrate tissue and infiltrate with paraffin. d. # Embed in paraffin. i. Use one half of each tumor or heart to perform the sectioning below. Hold the other half in reserve in case more sections are needed later. e. Cut at least seven 5-μm-thick sections spaced throughout the tumor or heart halves and mount them on glass slides (i.e., the sections should not be serial). i. Tumor and heart sections to be used in Protocol 5. Deliverables c Data to be collected: 1. Record of the drug treatment regimen and the weight of each tumor for each mouse. 2. Raw values for mouse body weight at time points during treatment. 3.
Graph of tumor weight by drug treatment in grams (compare to Figure 2C). 4. Graph of change in body weight as a percentage of body weight on day 0 (compare to Supplemental Figure 9A). c Samples delivered for further analysis: 1. Tumor and heart tissue processed to sections for further analysis (see Protocol 5). Confirmatory analysis plan c Statistical analysis of the replication data: 1. At the time of analysis, we will perform the Shapiro-Wilk test and generate a quantile-quantile (q-q) plot to attempt to assess the normality of the data, and also perform Levene's test to assess homoscedasticity. If the data appear skewed, we will attempt a transformation in order to proceed with the proposed statistical analysis listed below and possibly perform the appropriate non-parametric test. A. Tumor weights in each cohort (as seen in Figure 2C). c One-way ANOVA followed by Fisher's LSD t-tests for the following comparison: 1. 1 mg/kg Dox vs 1 mg/kg Dox and 4 μmol/kg iRGD. A. Body weight shift (as seen in Supplemental Figure 9A). c One-way ANOVA on Day 24 time points. 1. As seen in the original analysis. c Additional analysis: one-way ANOVA of the calculated area under the curve of mouse body weight from each cohort, followed by Fisher's LSD corrected t-tests for the following comparison: 1. 1 mg/kg Dox vs 1 mg/kg Dox and 4 μmol/kg iRGD. c Meta-analysis of original and replication attempt effect sizes: 1. This replication attempt will perform the statistical analysis listed above, compute the effect sizes, compare them against the reported effect size in the original paper, and use a meta-analytic approach to combine the original and replication effects, which will be presented as a forest plot. Known differences from original study c The replication study will be restricted to examining the following groups: 1. No dox/no peptide. 2. 1 mg/kg Dox. 3. 1 mg/kg Dox/iRGD. c The tumor tissue will be embedded in paraffin for paraffin sectioning rather than in OCT for cryosectioning. Provisions for quality control Mice will be randomly assigned to treatment groups. All data obtained from the experiment (raw data, data analysis, control data, and quality control data) will be made publicly available, either in the published manuscript or as an open access data set available on the Open Science Framework (https://osf.io/xu1g2/). c A lab with experience in prostate gland tumor xenografts will perform the experiment. Protocol 5: assessment of TUNEL staining of tumor and heart tissue after drug treatment This protocol describes how to assess cell death via TUNEL staining in prostate tumors derived from 22Rv1 xenografts treated with DOX and/or iRGD, as seen in Figure 2D. Sampling c This protocol uses tissues derived from Protocol 4. 1. This experiment will analyze 6 tumors per group, for a final power of 88.8%. Materials and reagents Procedure Note: This protocol uses tumor and heart tissues derived from Protocol 4. ii. Negative control: incubate with micrococcal nuclease prior to the labeling procedure. b. Stain a total of 7 slides for each tissue: 5 for analysis, one negative control, and one positive control. 2. Scan the stained sections with a Scanscope CM-1 scanner and quantify areas of TUNEL-positive staining with ImageJ software. a. *Image 5 random fields at 40× per section, and image 5 sections per tumor and per heart. b. If sections are unable to be imaged due to autofluorescence or damage during the staining procedure, take images and exclude them from analysis with the reason indicated.
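As a concrete sketch of the confirmatory analysis sequence described in Protocols 3-5 (Shapiro-Wilk and Levene's tests, one-way ANOVA, then the planned Fisher's LSD comparison of the 1 mg/kg Dox and 1 mg/kg Dox + iRGD cohorts), the following Python fragment illustrates the intended order of tests. The tumor-weight arrays are hypothetical placeholders, and the LSD step is approximated here by an unprotected pairwise t-test; the registered analysis itself is specified by the protocols above, not by this code.

```python
import numpy as np
from scipy import stats

# Hypothetical tumor weights (grams) for the three cohorts of Protocol 4.
pbs = np.array([1.10, 0.95, 1.20, 1.05, 0.98, 1.15])
dox = np.array([0.80, 0.72, 0.90, 0.85, 0.78, 0.88])
dox_irgd = np.array([0.55, 0.60, 0.48, 0.52, 0.58, 0.50])

# Assumption checks: normality within each group, equal variances across groups.
for name, g in [("PBS", pbs), ("Dox", dox), ("Dox+iRGD", dox_irgd)]:
    w, p_sw = stats.shapiro(g)
    print(f"{name}: Shapiro-Wilk p = {p_sw:.3f}")
w_lev, p_lev = stats.levene(pbs, dox, dox_irgd)
print(f"Levene p = {p_lev:.3f}")

# One-way ANOVA across all cohorts.
f, p = stats.f_oneway(pbs, dox, dox_irgd)
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

# Fisher's LSD: if the omnibus ANOVA is significant, test the planned pair
# (1 mg/kg Dox vs 1 mg/kg Dox + iRGD) with an unprotected t-test.
if p < 0.05:
    t, p_lsd = stats.ttest_ind(dox, dox_irgd)
    print(f"Dox vs Dox+iRGD: t = {t:.2f}, p = {p_lsd:.4f}")
```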
Protocol 4 Summary of original data c Note: values estimated from original graph. Test family c One-way ANOVA followed by Fisher's LSD t-tests for the following comparison: 1. Mice treated with 1 mg/kg Dox and PBS vs mice treated with 1 mg/kg Dox and iRGD. Power calculations c F statistic and partial η² performed with R software (3.1.2) (R Core Team, 2014). c Power calculations performed using G*Power software (Faul et al., 2007). c α = 0.05. *Due to power calculations for Figure 2D, we will be using 6 tumors per group, for an achieved power of 99.9%.
T-test: Group 1: 1 mg/kg Dox and PBS; Group 2: 1 mg/kg Dox and iRGD; Effect size d: 2.230140; A priori power: 86.9%*; Group 1 sample size: 5*; Group 2 sample size: 5*.
*Due to power calculations for Figure 2D, we will be using 6 tumors per group, for an achieved power of 93.5%.
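For reference, the a priori power values in the table above can be checked programmatically. The following sketch uses statsmodels' TTestIndPower as a stand-in for G*Power (the registered calculations were performed in G*Power), with the effect size and sample sizes taken from the table; small numerical differences from G*Power are possible.

```python
# Recompute the a priori power for a two-sided, two-sample t-test with
# effect size d = 2.230140, n = 5 per group, alpha = 0.05.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power_n5 = analysis.power(effect_size=2.230140, nobs1=5, alpha=0.05, ratio=1.0)
power_n6 = analysis.power(effect_size=2.230140, nobs1=6, alpha=0.05, ratio=1.0)
print(f"power at n=5 per group: {power_n5:.3f}")  # ~0.87, cf. the reported 86.9%
print(f"power at n=6 per group: {power_n6:.3f}")  # ~0.94, cf. the reported 93.5%
```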
2017-06-26T21:28:21.259Z
2015-05-22T00:00:00.000
{ "year": 2015, "sha1": "4883c662c4859000c817c067389511f6f0a492fa", "oa_license": "CCBY", "oa_url": "https://elifesciences.org/download/aHR0cHM6Ly9jZG4uZWxpZmVzY2llbmNlcy5vcmcvYXJ0aWNsZXMvMDY5NTkvZWxpZmUtMDY5NTktdjEucGRm/elife-06959-v1.pdf?_hash=67/C72g71Z7gdYcNyPPs85Ck5/6KblkHeRtVpm/Obig=", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4883c662c4859000c817c067389511f6f0a492fa", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
788439
pes2o/s2orc
v3-fos-license
Robust Gait Recognition by Integrating Inertial and RGBD Sensors Gait has been considered as a promising and unique biometric for person identification. Traditionally, gait data are collected using either color sensors, such as a CCD camera, depth sensors, such as a Microsoft Kinect, or inertial sensors, such as an accelerometer. However, a single type of sensor may only capture part of the dynamic gait features and makes gait recognition sensitive to complex covariate conditions, leading to fragile gait-based person identification systems. In this paper, we propose to combine all three types of sensors for gait data collection and gait recognition, which can be used for important identification applications, such as identity recognition to access a restricted building or area. We propose two new algorithms, namely EigenGait and TrajGait, to extract gait features from the inertial data and the RGBD (color and depth) data, respectively. Specifically, EigenGait extracts general gait dynamics from the accelerometer readings in the eigenspace and TrajGait extracts more detailed sub-dynamics by analyzing 3-D dense trajectories. Finally, both extracted features are fed into a supervised classifier for gait recognition and person identification. Experiments on 50 subjects, with comparisons to several other state-of-the-art gait-recognition approaches, show that the proposed approach can achieve higher recognition accuracy and robustness.
I. INTRODUCTION
USING gait, or the manner of walking, for person identification has been drawing more and more attention in recent years [1]-[3], due to its capability to recognize a person at a longer distance than the traditional biometrics based on face, fingerprint and iris recognition. However, in practice gait biometrics usually suffer from two issues. First, the data collected by a single type of sensor, e.g., a CCD camera, may only capture part of the gait features, and this may limit the gait recognition accuracy. Second, gait biometrics are usually sensitive to hard-covariate conditions, e.g., walking with hands in pocket or with loadings. In this paper, we propose to combine gait data collected by different types of sensors to improve gait recognition accuracy and robustness. In previous research, three types of sensors have been used for gait data collection and gait recognition: color sensors, depth sensors and inertial sensors. Using color sensors, e.g., CCD cameras, a walking person can be captured into a video, in which each frame is a 2D RGB (color) image of the person and the surrounding environment. Gait recognition on such a video is usually achieved by segmenting, tracking, and analyzing the silhouette of the walking person on each frame [4]-[11]. The silhouette segmentation and tracking can be difficult when the color of the person is similar to the color of the surrounding environment in the video. In addition, color sensors generally capture the dynamic gait features in a 2D space. Using depth sensors, such as structured-light devices, it is usually easier to segment a walking person from the surrounding environment when there are no other moving objects around. In addition, from the depth data, 3D dynamic gait features can be derived for gait recognition [12]-[14]. However, in practice depth data may contain noise and errors, especially at spots with strong reflectivity, e.g., on reflective clothing, where the depth value is totally invalid.
Such errors may lead to incorrect gait features and gait recognition results. Different from color and depth sensors, which are installed to capture the walking person at a distance to collect gait data, inertial sensors such as accelerometers and gyroscopes collect gait data by attaching to and moving with the person [15]-[21]. Inertial-sensor based gait recognition mainly benefits from the extensive use of smart phones: people always carry their smart phones, and almost all smart phones have integrated inertial sensors, namely accelerometers and gyroscopes. Considering the usability, the smart phone must be allowed to be placed in any pocket, with any orientation, when we use its inertial sensors for gait recognition. Such different placements and orientations of the sensors may vary the inertial data and affect the gait recognition accuracy [18]. In general, each type of the above-mentioned sensors can capture part of the gait features, with different kinds of errors and incompleteness. For example, depth and inertial sensors capture 3D gait features and color sensors capture 2D gait features. Meanwhile, the inertial data, such as the accelerometer readings, portray the motion pattern of the whole body and provide a general description of the gait dynamics, while the color and depth data can be used to infer the motion of many body parts and provide more detailed sub-dynamics of the gait. It is natural to assume that the gait features derived from different sensors can complement each other. This motivates the proposed approach to integrate the color, depth and inertial sensors for more accurate gait recognition. Sensitivity to complex covariate conditions is another main issue in gait biometrics [6]. For example, gait data from a sensor may look different when the same person walks with hands in pocket or with loadings. Such a difference increases the variance of a person's gait features and reduces the gait recognition accuracy. In this paper, through carefully designed experiments, we show that the proposed approach of integrating different sensors can also improve the robustness of gait recognition under complex covariate conditions. As a practical application scenario, the proposed approach of integrating different sensors for gait recognition can be used for person identification to access a restricted area or building. As illustrated in Fig. 1, at the entrance of a restricted area, a user simply walks on a force platform to get his identity verified. During his walk, a pre-installed client application in his smart phone sends real-time inertial-sensor readings to the server by wireless communication. At the same time, color and depth sensors, mounted over the ceiling and facing the platform, collect the RGBD (color and depth) data and send them to the server. On the server, the proposed approach can integrate all the data and perform gait recognition to identify whether he is an authorized user or not. Other than a higher gait recognition accuracy, such an identification system also has good security: even if the smart phone is hacked to send forged inertial data to the server, it is difficult to forge the RGBD data since the color and depth sensors are not controlled by the user. Following the scheme of identification illustrated in Fig. 1, in this paper we use the accelerometer in the smart phone to collect inertial data and a Microsoft Kinect to collect the RGBD (color and depth) data.
We develop a new EigenGait algorithm to capture the general gait dynamics by analyzing the inertial data in the eigenspace, and a new TrajGait algorithm to capture more detailed gait sub-dynamics based on the 3D trajectories extracted from the RGBD video. The extracted features on general dynamics and sub-dynamics of gait are then integrated and fed into a supervised classifier for gait recognition and person identification. In the experiments, we collect three sets of inertial and RGBD data from 50 subjects and evaluate the proposed approach under various covariate conditions. Comparison results with other approaches confirm that the gait recognition accuracy and robustness can be improved by integrating different types of sensors. The main contributions of this paper are four-fold.
• First, a multi-sensor integration method is proposed for gait recognition, in which an inertial sensor, a color sensor and a depth sensor are integrated to capture gait dynamics. The multi-sensor data fusion leads to more robust gait-recognition performance.
• Second, an EigenGait algorithm is developed to describe the general gait dynamics by analyzing the time-series acceleration data in the eigenspace. The extracted features are more effective than those produced by Fast Fourier Transforms (FFT) or Wavelet Transforms.
• Third, a TrajGait algorithm is proposed to describe the detailed sub-dynamics of gait by analyzing the RGBD videos. In TrajGait, 3D dense trajectories are derived from the RGBD videos and used for representing the gait features. We found that such gait features are more discriminative than the depth- or skeleton-based features in gait recognition.
[Fig. 1. The application scenario of the proposed approach: a gait-based person identification system for accessing a restricted area.]
• Finally, three new datasets, with both RGBD and accelerometer data, are collected on 50 subjects. They can be used to quantitatively evaluate and compare the performance of different gait recognition methods.
The remainder of this paper is organized as follows. Section II reviews the related work. Section III introduces the proposed approach, including sensor setting, data collection, gait feature extraction, and integrated gait recognition. Section IV reports the experiments and results. Section V concludes our work and briefly discusses possible future work.
II. RELATED WORK
The ideas and experiments of gait recognition can be traced back to Cutting and Kozlowski's work [22], in which the manner of walking, i.e., the gait, was found capable of identifying a person. Since then, gait-based person identification has attracted extensive attention in both academia and industry [23], and a number of gait recognition methods have been proposed. In these methods, three types of sensors are mainly used for gait data collection, namely the color sensor, the depth sensor, and the inertial sensor. Hence, the gait recognition methods can be classified into the color-based, the depth-based, and the inertia-based. In this section, we briefly review them, as well as other action-based biometrics. Color-based methods. The color-based methods developed rapidly in the early days [4]-[6], [24]-[31]. These methods can be classified into the model-free methods and the model-based methods.
In the model-free methods, gait features are often extracted by analyzing the shapes, or contours, of the silhouettes in successive frames. In addition, features on velocity, texture and color are also examined. One important work among them is the GEI (gait energy image) method [5], which represents gait dynamics by aligned and normalized silhouettes over a gait cycle. The GEI provides a compact representation of the spatial occupancy of a person over a gait cycle. However, partitioning gait cycles from a color video is not easy. In [25], [32], silhouettes were produced by background subtraction, and gait features were extracted by principal component analysis. In [33] and [34], statistical methods were employed to analyze the gait characteristics on a sequence of binary silhouette images. Motion has also been exploited for gait representation [35]-[37]. In [35], motion is described by local binary patterns, and an HMM (Hidden Markov Model) is then applied to distinguish the gait dynamics of different persons. In [38], gait motions were encoded based on a set of spatio-temporal interest points from a raw gait video. These interest points were detected using the Harris corner detector from the regions with significant movements of the human body in local video volumes. In [36], motions were computed based on a sequence of silhouette images. In [37], motions were computed on multi-view color videos, and the trajectories were encoded by Fisher vectors for gait representation. The model-based approaches commonly use an a priori model to match the data extracted from a video [39], [40], and the parameters of the model are then used for gait recognition. For example, in [40], a pendulum model is used to describe the leg movement of the body. Similar to [37], in this paper we also extract gait features from trajectories. However, we develop a new algorithm that is totally different from [37], given the availability of other sensors and a goal to extract more accurate gait dynamics. First, we segment the walking person from the background by using a depth sensor. This way, we can extract the human silhouette more accurately and reliably than many human detection algorithms [41], which only generate rectangular bounding boxes around the person. Second, we compute dense trajectories rather than sparse interest points, and the use of dense trajectories can encode more detailed gait dynamics. Depth-based methods. With the development of depth sensors, e.g., the Microsoft Kinect, it is easier to segment the human body from the background, and many depth-based gait recognition methods have been proposed recently [12], [14], [42]-[44]. Under the assumption that body movements can be described by the trajectories of body joints, Munsell et al. [42] proposed a full-body motion-based method for person identification. It examines the motion of skeletons, i.e., a number of joints tracked by the Kinect, and constructs a position matrix based on the locations of the joints. All the position matrices are then processed by an SVD (singular value decomposition) operation for feature extraction. Following the idea of GEI, Sivapalan et al. [12] proposed the use of GEV (gait energy volume) to represent gait dynamics with a sequence of gait energy images, in which reasonably good recognition accuracy can be achieved based only on the frontal depth information of gait. However, these depth-based methods characterize the gait dynamics using only the depth information and neglect the more detailed gait dynamics implied in the human appearance.
In [14], PDV (pose depth volume) was used to improve GEV by extracting accurate human silhouettes, in which color information is used to improve the segmentation of the human mask from the depth video. But PDV does not use color information for gait representation. In [45], depth features on body joints were obtained from the Kinect depth camera, and GEI features were extracted from color images. The combined RGBD features were then used for frontal gait recognition. Different from [45], the proposed method uses color images to compute 2D dense trajectories, which are then combined with the depth data to build dense 3D trajectories for extracting more detailed gait sub-dynamics. Inertia-based methods. Early research on inertia-based gait recognition can be found in [15] and [16]. In [15], a portable tri-axial accelerometer device is used, and the gait is represented by the correlation of acceleration curves and the distribution of acceleration signals in the frequency domain. In [16], a template matching strategy is used for gait-based person identification, in which the acceleration signals are divided by gait cycles, and then dynamic time warping is applied to check the similarity of two gait curves. In [46] and [47], gait cycles were detected and cycle matching was performed to improve the accuracy of gait recognition in the context of authentication or identification. In recent years, smart phones equipped with accelerometers and gyroscopes have been widely used, which makes it easier and cheaper to conduct inertia-based gait recognition [17], [18], [21], [48]. In [18], a Mexican-Hat wavelet transform is applied to the acceleration data to analyze the gait patterns, and the most discriminative features are selected based on a Fisher-ratio value. In [49], large-scale data were collected for gait recognition, in which the accelerometer is fixed on the human body. In [50], to avoid the complications in gait-cycle detection, signature-meaningful points (SPs) on the acceleration curve were detected, and gait features extracted on SPs were used for gait recognition. In [21], the gyroscope is used to rectify the orientation of the accelerometer. The acceleration signals with orientations are calculated with autocorrelation and converted into the frequency domain using FFT. However, the gyroscope commonly has a cumulative-error problem, which may lead to unreliable rectification and difficulty in determining the similarity of two gait curves. Another limitation is that the detection accuracy of previous approaches highly relies on a very accurate placement of the accelerometer sensor on the human body. This strict requirement would greatly affect the usability and flexibility of the identification system. Other action-based biometrics. Also related to our work is action- or activity-based person identification [51]-[59]. Besides gait, many other actions such as jump, run and skip are also found to be capable of identifying a person. Kobayashi and Otsu [51] proposed to identify persons from a sequence of motion images using an auto-correlation-based method. By incorporating more types of human actions, Gkalelis et al. [52] presented a multi-modal method for person identification, and enhanced it by using a multi-camera setup to capture the human body from different viewing angles [53]. Recently, sparse-coding-based methods were developed for human identification based on the activities captured by videos [55]-[57].
In [55], a metric learning procedure was performed on the sparse-coded features to get discriminative features. In [56], [57], the discriminative power was further improved by performing a discriminative sparse projection and learning a low-dimensional subspace for feature quantization. In [58], multiple Kinects were found to improve the performance of gesture-based authentication. In [59], a generative model was presented to describe the action instance creation process, and an MAP-based classifier was used for identity inference on 3D skeletal datasets captured by Kinect.
III. PROPOSED METHOD
A. System Overview
Following the application scenario of person identification shown in Fig. 1, we let the user walk straight along a corridor for gait feature collection. The inertial sensors are with the user, while the color and depth sensors are placed at the end of the corridor. In this paper, we use the accelerometer in the smart phone as the inertial sensor and a Microsoft Kinect as the color and depth sensors. This way, we collect the accelerometer readings and RGBD data for gait feature extraction and gait recognition. Note that the color (RGB) data and depth data collected by the Kinect are temporally synchronized. The flowchart of the proposed system is illustrated in Fig. 2. After data pre-processing, gait features are extracted from the inertial data and RGBD data by using the proposed EigenGait and TrajGait algorithms, respectively. Finally, the gait features are combined as an input to the machine learning component for person identification. The proposed system can be installed at the entrance of any restricted area for person identification, such as banks, financial towers, and military bases. The proposed gait recognition combining multiple sensors is not fully non-invasive. The inertial sensors move with the user and send the accelerometer data to the server. Therefore, the user should be notified in advance and may need to show a certain level of cooperation in data collection. But from the application perspective, most, if not all, existing person-identification systems for accessing a restricted area cannot be fully non-invasive: many of them work as a verification system where the user needs to provide his identity to the server for verification at the entrance. For such a person-identification system, the goal is to achieve good usability instead of full non-invasiveness. For better usability, a person-identification system should require as few human interactions and as little strict cooperation as possible. For the proposed system, with appropriate settings and client applications in each user's smart phone, the data collection (including sending the inertial and RGBD data, and possibly the user's identity, to the server) and the whole process are fully automatic, without additional human interactions. In addition, as shown in the later experiments, by combining multiple sensors, the proposed system shows higher robustness against covariate conditions. This also improves the usability by requiring less strict cooperation from the user. In the following, we first introduce the data collection and data pre-processing, and then elaborate on the EigenGait algorithm for inertia-based gait representation and the TrajGait algorithm for color- and depth-based gait representation.
B. Data Collection and Pre-processing
In this paper, we use an accelerometer to collect inertial data and a Kinect to collect RGBD data.
1) Acceleration data: We utilize a tri-axial accelerometer sensor in the smart phone to collect the acceleration data of a walking person. First, we build an application on the Android platform. Given the APIs provided by the Android SDK, we use the android.hardware.SensorManager package and attach an event listener to the Sensor.TYPE_ACCELEROMETER sensor to collect acceleration data. The sensor is registered with SensorManager.SENSOR_DELAY_GAME and is set to a sampling rate of 50 Hz on each axis. Considering the usability, in data collection we simply ask the user to put the smart phone, installed with our application, in his/her pocket with any orientation. Each user is required to walk at his/her normal pace and at a fast pace. Since the accelerometer is placed in the pocket with a random orientation, which varies over time during the walking, the acceleration values on each axis are collected in a time-varying direction. Therefore, the acceleration values along each axis are actually not comparable from time to time. To address this issue, we fuse the acceleration values on all three axes into one compound value. Let $Acc_x$, $Acc_y$ and $Acc_z$ be the acceleration values on the X, Y, and Z axes, respectively; we compute the compound acceleration value by $Acc_c = \sqrt{Acc_x^2 + Acc_y^2 + Acc_z^2}$, which is more robust against the pose change of the accelerometer over time. Figure 3(a) shows an acceleration data sample on the X, Y and Z axes collected by a smart phone; the periodical property of the acceleration data reflects the walking pace of the user. Figure 3(b) shows the compound acceleration curve, which has been partitioned at local maxima. Specifically, we sequentially consider a point as a partitioning point if it satisfies three conditions: 1) it is a local maximum (peak) along the curve, 2) its distance to the previous partitioning point is no less than 700 ms, and 3) its value is greater than 4 m/s². Each segment of the partitioned acceleration curve corresponds to one step in the walking. Note that, in our study, one step denotes a full step cycle consisting of a left-foot move and a right-foot move. 2) Color and depth data: A Kinect 2.0, assisted with Kinect SDK v2.0 (http://www.microsoft.com/en-us/kinectforwindows/develop/downloadsdocs.aspx), is applied for color and depth data collection. The Kinect is placed about 0.5 m up from the ground. The RGB video stream is in 24-bit true color format with a resolution of 1280×1024 pixels. The depth video stream is in VGA resolution of 640×480 pixels, with 13-bit depth values. The depth sensor has a practical ranging limit of 1.2-3.5 m distance when using the Kinect SDK. The sampling rate is 15 fps. Figure 5 shows a sequence of color images and depth images collected by the Kinect. The depth images shown in Fig. 5 have been normalized since a single VGA channel has only 8 bits to represent a pixel. For the computation in all the experiments, the original 13-bit depth value is used, which provides a high precision to describe the motion in the depth channel. 3) Three datasets: Using the sensor settings described above, we collect three datasets consisting of both RGBD data and accelerometer readings. We use these data for evaluating the performance of the proposed method, as well as the comparison methods, in the later experiments. • Dataset #1. This dataset is collected on 10 subjects, containing 1,000 groups of acceleration data and 1,000 groups of RGBD data: 100 groups of acceleration data and 100 groups of RGBD data are collected for each subject, with half in normal pace and half in fast pace.
The acceleration data and RGBD data are collected separately. In collecting acceleration data, each subject is required to walk along a hallway with a length of about 60 feet. A group of acceleration data is defined as the sequence of acceleration values resulting from the entire walk from one end of the hallway to the other end. We partition the acceleration data into steps as illustrated in Fig. 3(b). For all the one-step acceleration data, we temporally interpolate them into a data sequence of length 50. Based on the temporal partitioning, we create 5 sub-datasets, containing one-, two-, three-, four- and five-step long data samples, respectively. In RGBD data collection, each subject is required to walk towards the Kinect 100 times, from about 5 m away to 1 m away from the Kinect. The sequences of frontal color and depth images of the subjects are captured. A group of RGBD data is defined as the sequence of RGBD images resulting from one full walk toward the Kinect. • Dataset #2. This dataset contains 500 data samples of 50 subjects, with 10 data samples for each subject. Each data sample consists of a sequence of acceleration data and a sequence of RGBD data, which are collected simultaneously for one full walk of a user. For each RGBD video, a frame is preserved only if the present person is recognized with all the body joints by the Kinect SDK. Each acceleration data sample covers about 2 steps or more. We uniformly partition each acceleration data sample and generate a two-step data sample. • Dataset #3. This dataset contains 2,400 data samples of 50 subjects, with 48 data samples for each subject. These data are collected under different covariate conditions. In particular, in collecting Dataset #3, each subject is required to walk under eight different conditions, i.e., natural walking, left hand in pocket, right hand in pocket, both hands in pocket, left hand holding a book, right hand holding a book, left hand with loadings, and right hand with loadings, as shown in Fig. 6. For each subject, 6 data samples are collected under each condition, with 3 in fast pace and 3 in normal pace. Acceleration data and RGBD data are collected simultaneously in each data sample. The information of the above three datasets is summarized in Table I.
C. EigenGait: eigenspace feature extraction for gait representation
A sequence of (compound) acceleration values resulting from a walk can be plotted as a 2D curve, as illustrated in Fig. 4, and we call it a gait curve in this paper. Inspired by the Eigenface algorithm [60] used for image-based face recognition, we propose an EigenGait algorithm for gait recognition based on gait curves. Let $A = \{S_i \mid i = 1, 2, ..., N\}$ be a set of gait curves of N subjects, where $S_i$ denotes the gait curves collected for the ith subject. Treating a gait curve as a vector, we can compute an average gait curve for the ith subject as $\hat{S}_i = \frac{1}{M_i}\sum_{j=1}^{M_i} S_i^{(j)}$, where $M_i$ is the total number of gait curves collected for the ith subject, and $S_i^{(j)}$ is the jth gait curve of the ith subject. Further, the overall average gait curve over all the N subjects can be calculated by $\hat{S} = \frac{1}{N}\sum_{i=1}^{N} \hat{S}_i$. Then, a gait-curve difference can be calculated by $O_i = \hat{S}_i - \hat{S}$. To better illustrate the meaning of $O_i$, we compute them on real data. Without loss of generality, let us consider the 2-step acceleration data collected in Dataset #1. In Fig. 7, the last figure in the middle row shows the gait-curve differences of ten subjects in Dataset #1.
It can be seen from Fig. 7 that the gait-curve differences also preserve the periodic property of the original gait curves, as shown in the top row of Fig. 7, and different subjects have different gait-curve differences. Then the covariance matrix can be calculated by $C = \frac{1}{N}\sum_{i=1}^{N} O_i O_i^T$. We can perform an eigen-decomposition as $(\lambda, U) = \mathrm{Eigen}(C)$, where $\lambda$ denotes the eigenvalues and $U$ denotes the corresponding eigenvectors. Suppose the eigenvalues in $\lambda$ have been sorted in descending order; we select the first r elements that fulfill $\sum_{i=1}^{r} \lambda_i \geq 0.85 \cdot \sum_i \lambda_i$, and hence get the r corresponding eigenvectors $\{u_1, u_2, ..., u_r\}$. In the bottom row of Fig. 7, the seven curves show the top seven eigenvectors of the two-step sub-dataset in Dataset #1. It can be seen from Fig. 7 that more distinctiveness can be observed in the gait-curve differences than in the original gait curves. We can also see that these eigenvectors preserve the shape appearance of some of the original gait curves, as shown in the top row of Fig. 7, and we call them EigenGaits in this paper. When a new gait curve s comes, we can project it into the eigenspace defined by the r eigenvectors as $\omega_k = u_k^T (s - \hat{S}),\ k = 1, ..., r$, and obtain an EigenGait feature vector $(\omega_1, \omega_2, ..., \omega_r)$. As the acceleration data reflect the whole-body motion in the walking, the extracted EigenGait features can capture the general gait dynamics.
D. TrajGait: dense 3D trajectories based gait representation
The gait data captured by the color sensor and the depth sensor can be represented by a sequence of color images and depth images, respectively. These images provide useful information to describe the details of body movements, e.g., the movement of each body part. We combine the color and depth data and develop a TrajGait algorithm for extracting 3D dense trajectories and describing more detailed gait sub-dynamics. The TrajGait algorithm is summarized in Algorithm 1, which contains the following four key operations: • computMotion On each RGB color frame, we compute the dense optical flow by the algorithm proposed by Färneback [61]. This algorithm makes a good compromise between accuracy and speed. • segmentMask To focus on the walking person, we segment the person from the background and take it as a mask in later operations. Since the Kinect SDK has provided functions for efficient human detection and joint tracking [62], we apply these functions to extract a raw human mask in each frame, and then apply some image processing techniques, including hole filling and noise removal, to get the final mask. [Fig. 9: Illustration of the trajectories in 3D space.] Figure 8(c) displays a human mask segmented from the depth image in Fig. 8(b). Note that, while segmenting persons from a confusing background can be very challenging on RGBD data, it is not a serious issue in the proposed application scenario of person identification: the environment is highly controlled (e.g., a hallway) and the sensors are well set, without any other moving objects around. In this paper, the following steps are taken to obtain the human mask: (i) produce a human-oriented depth image using the body-segmentation function provided by the Kinect SDK (i.e., IBodyIndexFrame::AccessUnderlyingBuffer), (ii) resize the depth image to the size of the color image and interpolate the resized image using bi-cubic interpolation, (iii) binarize the depth image with a threshold t = 113, and (iv) fill the holes and remove segments that are smaller than 1,000 pixels, as sketched below.
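A minimal sketch of mask post-processing steps (ii)-(iv) above, assuming NumPy, SciPy and scikit-image; the depth input is a hypothetical stand-in for the Kinect-SDK body-index image the paper describes.

```python
# Sketch of steps (ii)-(iv): resize, binarize at t = 113, fill holes,
# and drop connected components smaller than 1,000 pixels.
import numpy as np
from scipy import ndimage
from skimage.transform import resize

def human_mask(depth, color_shape, t=113, min_size=1000):
    # (ii) Resize to the color-image size with bi-cubic interpolation.
    d = resize(depth.astype(float), color_shape, order=3)
    # (iii) Binarize with the fixed threshold.
    mask = d > t
    # (iv) Fill holes, then remove components below min_size pixels.
    mask = ndimage.binary_fill_holes(mask)
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep_ids = 1 + np.flatnonzero(sizes >= min_size)
    return np.isin(labels, keep_ids)
```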
• calcTrajectories Suppose (x, y) is the coordinate of a point in a frame of the collected color data and z is the depth value of that point in the depth video; then we can locate that point with a coordinate $(x_t, y_t, z_t)$ in the RGBD space. In this way, we can treat each point in the RGBD data as a 3D point. Figure 9 illustrates the trajectories in the 3D space. The shape of a trajectory encodes the local motion patterns, which we use for gait representation. Based on the 2D dense trajectories extracted by [63] in the RGB channels, we can compute the corresponding 3D trajectories. Let us further suppose point $P_t = (x_t, y_t, z_t)$ at frame t is tracked to frame t+1 at the point $P_{t+1}$; then, with a given trajectory length L, we can describe its shape by a sequence of displacement vectors $(\Delta P_t, \ldots, \Delta P_{t+L-1})$, where $\Delta P_t = P_{t+1} - P_t = (x_{t+1} - x_t,\ y_{t+1} - y_t,\ z_{t+1} - z_t)$, and L is empirically set to 15. Since the gait may be collected at various walking speeds [64], the resulting vector has to be normalized to reduce deviations. [Algorithm 1: TrajGait. For each of the N subjects and each of their X_i RGBD sequences: compute the motion on the color video V, calculate the 3D trajectories of all RGBD data, compute K centers by clustering (Y ← kMeans(T, K)), and compute a trajectory histogram for each RGBD data sequence.] As the metric in the color image is different from that in the depth image, we separately normalize them by the sums of the magnitudes of the displacement vectors. We take a normalized displacement vector as a 3D trajectory descriptor. An example of 3D trajectory descriptors derived from an RGBD data sequence is shown in Fig. 10. • histTrajectory We apply a bag-of-words strategy to encode the 3D trajectory descriptors. Specifically, we generate a codebook with a number of K codes using a clustering technique. The standard K-means algorithm is employed here for clustering. To reduce the complexity, we cluster a subset of 1,000,000 randomly selected training samples. To increase the precision, we run K-means 10 times and keep the result with the lowest K-means clustering cost. For each RGBD sequence, the extracted 3D trajectory descriptors are quantized into a histogram by hard assignment. The resulting trajectory histograms are then used for gait representation.
E. Gait Recognition
We achieve gait recognition using a supervised classifier. We combine the gait features extracted by EigenGait and TrajGait and feed them into a machine learning component for training and testing. The trained model can then be used to recognize new unseen data samples for gait recognition and person identification. For feature combination, we simply concatenate the EigenGait features and the TrajGait features into one single feature vector. In the machine learning component, a multiclass Support Vector Machine (SVM) classifier implemented by libSVM (www.csie.ntu.edu.tw/~cjlin/libsvm/) is used for both training and testing [65]. A one-vs-all classification strategy is applied. To investigate the potential relation between classification accuracy and computation efficiency, we try both linear and non-linear SVMs. For the soft-margin constant C in the SVM, we consistently set it to 1,000 through all the experiments.
IV. EXPERIMENTS AND RESULTS
In this section, we use three datasets to evaluate the performance of the proposed method, as well as the comparison methods.
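The histTrajectory encoding and the EigenGait+TrajGait fusion evaluated in these experiments can be sketched compactly. A minimal illustration with scikit-learn standing in for libSVM; the descriptor matrices and feature vectors are hypothetical inputs, and the L1 normalization follows the description given later in Section IV-B.

```python
# Sketch of the bag-of-words trajectory encoding and feature fusion.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

K = 1024  # codebook size, as in the paper

def build_codebook(descriptors, k=K):
    # Cluster 3D trajectory descriptors into k codewords; n_init=10
    # mirrors running K-means 10 times and keeping the lowest-cost result.
    return KMeans(n_clusters=k, n_init=10).fit(descriptors)

def traj_histogram(codebook, descriptors, k=K):
    # Hard-assign each descriptor to its nearest codeword and histogram.
    codes = codebook.predict(descriptors)
    hist = np.bincount(codes, minlength=k).astype(float)
    return hist / max(hist.sum(), 1.0)

def fuse(eigengait_vec, traj_hist):
    # L1-normalize each part, concatenate, then L1-normalize the result.
    e = eigengait_vec / max(np.abs(eigengait_vec).sum(), 1e-12)
    t = traj_hist / max(traj_hist.sum(), 1e-12)
    f = np.concatenate([e, t])
    return f / np.abs(f).sum()

# One-vs-all linear SVM with C = 1000, as in the paper.
clf = LinearSVC(C=1000.0, multi_class='ovr')
```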
First, we examine the effectiveness of the proposed EigenGait algorithm and the TrajGait algorithm using Dataset #1, separately. Then, we evaluate the performance of the proposed method, i.e., the one fusing EigenGait and TrajGait, on Dataset #2 by comparing its accuracy with several state-of-the-art gait recognition methods. Finally, we test the robustness of the proposed method on Dataset #3. In particular, we try to answer the following questions: -How effective are the EigenGait algorithm and the TrajGait algorithm for gait recognition? How do the parameters influence their performances? -What is the overall performance of the proposed method? Does it work better than the state-of-the-art color-based methods, depth-based methods, and inertia-based methods? -How robust is the proposed method in handling gait data collected under hard-covariate conditions? In the experiments, we mainly evaluate gait recognition as a classification problem. At the end of the section, we will also evaluate the proposed method on an identification problem. As a classification problem, we use the classification accuracy as a metric for performance evaluation. The classification accuracy is defined as $\mathrm{Accuracy} = \frac{\#\,\mathrm{correctly\ classified\ samples}}{\#\,\mathrm{total\ testing\ samples}}$. In the classification, each testing sample is classified by the pre-trained SVM and receives a score vector containing n score values, where n is the number of subjects used in training the SVM. A score value in the score vector indicates the likelihood of this sample being from a specific subject. The sample will be recognized as being from subject i if the ith element is the maximum in the score vector. Compared with the ground-truth subject for the test sample, we can decide whether it is correctly classified and compute the accuracy.
A. Effectiveness
We use Dataset #1 to evaluate the performance of the EigenGait algorithm and the TrajGait algorithm. Dataset #1 is collected for 10 subjects, including five sub-datasets of the acceleration data and one sub-dataset of the RGBD data. 1) EigenGait: There are five acceleration sub-datasets, i.e., the one-, two-, three-, four- and five-step sub-datasets. Each sub-dataset contains 5,000 acceleration data sequences, with half in normal pace and half in fast pace. The resulting EigenGait features are of dimension 43, 85, 128, 170 and 213 for the one-, two-, three-, four- and five-step data, respectively. Note that data in the same sub-dataset have no overlaps with each other. The EigenGait algorithm is evaluated under the normal pace, the fast pace and the two paces mixed, i.e., normal+fast, and the results are shown in Fig. 11(a)-(c), respectively. From Fig. 11(a)-(c), we can see that EigenGait obtains good classification accuracy in all three cases, e.g., over 0.95 in normal pace, 0.92 in fast pace, and 0.90 in normal+fast, using 30% of the data for training. Moreover, EigenGait shows higher accuracy under normal pace than under fast pace. This is because a large speed variation would occur when a person walks at a fast pace, which would increase the complexity of the gait data. Decreased performances of EigenGait can be observed in Fig. 11(c), because the mixed-pace data further increase the data complexity. As can be seen from Fig. 11(b) and (c), on a dataset with large speed variations, e.g., in fast pace or in normal+fast, EigenGait holds lower performances on the 1-step dataset than on the two-or-more-step datasets.
This is because a one-step data sample is less capable of representing the gait than a sample of two or more steps. Surprisingly, EigenGait obtains comparable performances when varying the data length from 2 to 5 steps. Considering that a 2-step data sample can be easily captured and efficiently computed compared to longer data, in our later experiments we always choose a length of 2 steps for EigenGait features, including the experiments on Dataset #2 and Dataset #3. Further, we evaluate EigenGait's performance using linear and non-linear SVMs. Typically, the 'KL1' and 'KCHI2' kernels are employed, respectively. Table II lists the classification accuracy under different walking paces and varied data lengths, where 50% of the data are used for training. It can be seen that EigenGait generally shows higher performance using a linear SVM than using a non-linear one. This is because EigenGait extracts gait features in the eigenspace, which makes the features more linearly classifiable. 2) TrajGait: We use the RGBD data in Dataset #1 to evaluate the TrajGait algorithm. Specifically, we evaluate TrajGait under different K for K-means clustering, and under linear and non-linear SVMs. In K-means clustering, 1,000 trajectories are randomly selected for each training sample. In feature quantization, all trajectories of each data sample are used, which may span from about 8,000 to 15,000 in our experiments. Figure 11(d) shows the TrajGait accuracy when K = 256, 512 and 1024, respectively. We can see that TrajGait achieves classification accuracies higher than 0.98 when using over 20% of the data for training. A higher performance can be achieved with a larger K, i.e., the size of the codebook. It can also be observed that, under the same K, a non-linear SVM produces slightly higher accuracies than the linear one. Considering that the linear SVM performs better in EigenGait and has a lower computation cost, we choose the linear SVM in the proposed gait recognition by combining the EigenGait and TrajGait features.
B. Accuracy
We evaluate the overall performance of the proposed method, i.e., EigenGait+TrajGait, by comparing it with several other inertia-based, color-based and depth-based methods. Specifically, the following methods are included in the comparison:
• Acc Fourier [21]: An autocorrelation operation is first applied to the acceleration data, which is then converted into the frequency domain using FFT. The top half of the coefficients are selected as the gait features.
• Acc Wavelet [18]: The Mexican Hat wavelet transform is used to analyze the gait patterns from the acceleration data.
• Acc EigenGait: The proposed EigenGait algorithm handles the acceleration data (a sketch of this pipeline follows this list).
• D Skeleton [42]: The position matrices on 20 joints are decomposed by SVD, and the resulting 220-dimensional vectors are used for gait representation.
• D GEV [12]: The GEV is computed on the human masks extracted from depth data. Principal component analysis is then performed the same way as in our EigenGait for gait features.
• D TrajGait: The displacement of a trajectory is calculated only on the depth channel, with a codebook size K = 1024.
• RGB TrajGait: The displacement of a trajectory is calculated on the RGB channels, with K = 1024.
• RGBD TrajGait: The full TrajGait algorithm, i.e., trajectories extracted from the RGBD channels, with K = 1024.
• Acc EigenGait+RGBD TrajGait: The full version of the proposed method, combining EigenGait and TrajGait features.
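As referenced in the Acc EigenGait item above, the acceleration pipeline of Section III-C can be sketched compactly: compound acceleration, per-subject mean curves and differences, eigen-decomposition retaining 85% of the variance, and projection of a new curve. A minimal NumPy illustration under those assumptions; the input arrays are hypothetical.

```python
# Sketch of the Acc_EigenGait pipeline (Section III-C).
import numpy as np

def compound_acceleration(ax, ay, az):
    # Acc_c = sqrt(Acc_x^2 + Acc_y^2 + Acc_z^2), robust to phone orientation.
    return np.sqrt(ax**2 + ay**2 + az**2)

def eigengait_basis(curves_per_subject):
    # curves_per_subject: list of (M_i x D) arrays of fixed-length gait curves.
    means = np.stack([c.mean(axis=0) for c in curves_per_subject])  # S_i hat
    diffs = means - means.mean(axis=0)                              # O_i
    cov = diffs.T @ diffs / len(curves_per_subject)                 # C
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]                                  # descending
    vals, vecs = vals[order], vecs[:, order]
    # Smallest r whose leading eigenvalues reach 85% of the total.
    r = int(np.searchsorted(np.cumsum(vals) / vals.sum(), 0.85)) + 1
    return vecs[:, :r], means.mean(axis=0)

def eigengait_features(curve, basis, overall_mean):
    # Project a new gait curve: w_k = u_k^T (s - S_hat).
    return basis.T @ (curve - overall_mean)
```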
We normalize the EigenGait feature and the TrajGait feature independently before concatenating them together. Afterwards, we normalize the concatenated feature as an input for the SVM. The normalization is performed using an L1-norm measure. For clarity, we use Figs. 12(a) and (b) to show the results of the acceleration-based methods and the RGBD-based methods, respectively. In Fig. 12(a), the proposed EigenGait is observed to have a clearly higher performance than the wavelet-based or FFT-based methods in handling acceleration data. In Fig. 12(b) we can see that RGBD TrajGait obtains an accuracy over 0.90 when using 30% of the data for training, which is much higher than that of D Skeleton and D GEV. TrajGait has a higher performance on the RGB channels than on the depth channel, which indicates that color is more effective than depth in representing gait sub-dynamics. Meanwhile, RGBD TrajGait outperforms RGB TrajGait and D TrajGait, which demonstrates that the color information and depth information can complement each other in characterizing the gait. It can also be seen from Fig. 12(b) that a boosted performance can be achieved by fusing EigenGait (handling acceleration data) and TrajGait (handling RGBD data) features, i.e., EigenGait+TrajGait, which validates the effectiveness of the proposed multi-sensor data fusion strategy.
C. Robustness
We evaluate the robustness of the proposed method with Dataset #3, which contains 2,400 data samples of 50 subjects under 8 hard-covariate conditions, as introduced in Section III-B3. Figure 12(c) shows the results of the proposed method and the comparison methods. We can see that TrajGait+EigenGait, TrajGait, and EigenGait achieve the top three performances among all the methods. The proposed method, i.e., TrajGait+EigenGait, stably holds a classification accuracy over 0.90 when varying the amount of training data from 10% to 90%, which indicates that the proposed method can better handle these hard covariates. Moreover, we investigate the detailed performance of the proposed method by figuring out the classification accuracies on each kind of hard covariate. [Fig. 13 legend: 1, left hand in pocket; 2, right hand in pocket; 3, both hands in pocket; 4, left hand holding a book; 5, right hand holding a book; 6, left hand with loadings; 7, right hand with loadings; 8, natural walking.] As shown in Fig. 13, for EigenGait, the hard covariate 'both hands in pocket' leads to the lowest accuracy. This is because the acceleration would heavily vary from normal when a person walks with both hands in the pockets. For TrajGait, 'a hand with loadings' increases the difficulty of gait recognition. This is because the loadings may bring unexpected motions in the color space, as well as in the depth space, e.g., a bag is used to carry the loadings in our case. For the skeleton-based and wavelet-based methods, the average classification accuracy is about 30% and 10% lower than the proposed method, respectively. Compared with the turbulent performances of the comparison methods on different hard covariates, the proposed method performs rather stably.
D. Person-Identification Performance
Finally, we evaluate the proposed method in the application scenario of person identification, as shown in Fig. 1. Half of the data samples in Dataset #3 are used for training, and the remaining half are used for querying and identification. The average ROC curve [66], [67] is employed for performance evaluation, as sketched below.
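A minimal sketch of this one-vs-all average-ROC evaluation, assuming scikit-learn; the score matrix and ground-truth labels are hypothetical inputs, and the per-subject curves are averaged on a common FPR grid.

```python
# Sketch of the average-ROC computation over one-vs-all classifiers.
import numpy as np
from sklearn.metrics import roc_curve

def average_roc(y_true, scores, n_subjects, grid=np.linspace(0, 1, 101)):
    # scores: (n_samples x n_subjects) SVM score matrix;
    # y_true: ground-truth subject label per sample.
    tprs = []
    for i in range(n_subjects):
        fpr, tpr, _ = roc_curve((y_true == i).astype(int), scores[:, i])
        tprs.append(np.interp(grid, fpr, tpr))  # resample on a common grid
    return grid, np.mean(tprs, axis=0)          # FPR grid, averaged TPR
```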
For each subject, an ROC curve is computed on the results of a one-vs-all binary classification. The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at varying threshold settings. The TPR and FPR are defined by $TPR = \frac{TP}{TP + FN}$ and $FPR = \frac{FP}{FP + TN}$, where TP, FP, TN and FN denote true positives, false positives, true negatives and false negatives, respectively. Then the average ROC curve is computed based on all the ROC curves of the 50 subjects. The larger the area under the ROC curve, the better the person-identification performance. The average ROC curves for the proposed method and the comparison methods are plotted in Fig. 14. [Fig. 14: ROC curves of person identification on Dataset #3 using 50% of the data samples for training.] We can see that the proposed method combining EigenGait and TrajGait achieves the best performance. In addition, EigenGait and the wavelet-based method produce competing performance, but the former achieves a higher TPR than the latter when the FPR is below 0.05. Thus, EigenGait would outperform the wavelet-based method since a lower FPR is often required in a strict identification system. It can also be observed from Fig. 14 that TrajGait uniformly outperforms EigenGait, which may simply indicate that the TrajGait features are more discriminative by describing the detailed gait sub-dynamics.
V. CONCLUSION
In this paper, the inertial, color and depth sensors were integrated for accurate gait recognition and robust person identification. Specifically, the accelerometer of a smart phone and the RGBD sensor of a Kinect were employed for data collection. An EigenGait algorithm was proposed to process the acceleration data from the inertial sensor in the eigenspace and capture the general dynamics of the gait. A TrajGait algorithm was proposed to extract gait features on the dense 3D trajectories from the RGBD data and capture the more detailed sub-dynamics. The extracted general dynamics and detailed sub-dynamics were fused and fed into a linear SVM for training and testing. Datasets collected from 50 subjects were used for experiments, and the results showed the effectiveness of the proposed method against several existing state-of-the-art gait recognition methods. In the experiments, there are several other interesting findings. First, for acceleration-based gait recognition, the walking pace has a potential influence on the accuracy of the system. A uniform walking pace at a normal speed produces better gait recognition than mixed walking paces. Second, for RGBD-based gait recognition, motion can be better captured when subjects wear textured clothes, with which we can more accurately infer the detailed gait sub-dynamics for gait recognition. Third, the proposed construction and encoding of the 3D dense trajectories can provide more discriminative and robust gait features under different hard-covariate conditions than sparse joint-based trajectories. In the future, we plan to further enhance the gait recognition system by configuring more sensors and building more effective classifiers. For example, more Kinects may be installed to capture multiple views of a walking person. For the classifier, other proven techniques in classification, such as fuzzy-reasoning strategies [68]-[70], may be integrated into the SVM to improve the recognition accuracy and robustness.
2016-10-31T08:06:50.000Z
2016-10-31T00:00:00.000
{ "year": 2016, "sha1": "26ceceb2f571ff8e0230755d7fd18f804da2a4bd", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1610.09816", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "23bf74ede0f19f93b9ab5274a9313edb2962799b", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
137653788
pes2o/s2orc
v3-fos-license
Synchrotron radiation X-ray microtomography and histomorphometry for evaluation of chemotherapy effects in trabecular bone structure Three-dimensional microtomography has the potential to examine complete bones of small laboratory animals with very high resolution in a non-invasive way. One of the side effects caused by some chemotherapy drugs is the induction of amenorrhea, temporary or not, in premenopausal women, with a consequent decrease in estrogen production, which can lead to bone changes. In the present work, the femur heads of rats treated with chemotherapy drugs were evaluated by 3D histomorphometry using synchrotron radiation microcomputed tomography. Control animals were also evaluated for comparison. The 3D tomographic images were obtained at the SYRMEP (SYnchrotron Radiation for MEdical Physics) beamline at the Elettra Synchrotron Laboratory in Trieste, Italy. Results showed significant differences in morphometric parameters measured from the 3D images of the femur heads of rats between the two analyzed groups.
Introduction
Chemotherapy often causes significant bone loss, marrow adiposity and haematopoietic defects. One of the side effects caused by some chemotherapy drugs is the induction of amenorrhea, temporary or not, in premenopausal women, with a consequent decrease in estrogen production. This leads to bone changes similar to those presented in osteoporosis [1]. Osteoporosis is defined as a systemic skeletal disease characterized by low bone mass, microarchitectural deterioration and changes of bone tissue, resulting in increased bone fragility and susceptibility to fracture. It is generally accepted that trabecular bone strength depends not only on bone volume but also on its structures, which consist of connected bony plates. The investigation of a rat model for osteoporosis with X-ray microtomography has already been described previously [2,3,4]. Imaging techniques based on microtomography (µCT) enable three-dimensional (3D) non-destructive analysis of bone microarchitecture. The µCT technique can be used to image and quantify trabecular bone, and this quantification has the capability to address the role of trabecular architecture in the mechanical properties of bone. The use of rats requires even higher spatial resolution because they have thinner bone structures: the trabecular diameter in rats is less than 100 µm, while in humans it is greater than 120 µm. To this end, the combination of synchrotron radiation (SR) with the µCT technique yields notable advantages, including high spatial resolution and coherence, which can be achieved due to the high intensity of the synchrotron facility and the natural collimation of the beam; in addition, the wide spectral bandwidth allows continuous tuning of the energy from a few keV to several tens of keV [5,6]. For quantification, a volume of interest (VOI) has to be chosen in such a way that it comprises only trabecular bone. For this VOI, the following parameters can be computed: bone volume to total tissue volume ratio (BV/TV), trabecular thickness (Tb.Th (mm)), trabecular separation (Tb.Sp (mm)), trabecular number (Tb.N (mm⁻¹)) and structure model index (SMI). Different parameterization methods may be applied to extract quantitative architecture parameters from the sample, such as conventional histology. But this technique is somewhat tedious and time-consuming, as it includes sample sectioning (the sample is destroyed), and the parameters are visually assessed in two dimensions, so the third dimension has to be added on the basis of stereology.
In 3D computed microtomography, the histomorphometric parameters are assessed from 3D images of the bone structures, non-destructively, quickly, and precisely [7,8,9]. This technique was pioneered by Feldkamp and co-workers [10]. This work aims to evaluate changes in the trabecular bone architecture of rats treated with a combination of chemotherapy drugs, using 3D computed microtomography and histomorphometry.
Specimens
In this study, the experimental animals were ten adult female Wistar rats at 90 days of age, divided randomly into two groups, with n = 5 each. The treated group (G1) received doses of docetaxel and cyclophosphamide, while the control group (G0) was sham treated. At the beginning of the study, the animals weighed an average of 200 g. The rats were acclimatized under standard conditions of temperature (25 °C) and a light-controlled environment (12 h light-dark cycles). Food and water were provided ad libitum. The treatment lasted for 1 month, in cycles with intervals of 4 and 7 days between them. At the same time, untreated animals of the same age were sham treated as controls. The rats were sacrificed by direct heart KCl injection at 150 days post-treatment, at 240 days of age, and the femurs were excised, cleaned and left to air dry for at least 72 hours. Ethics permission to utilize the animals for the research described in this paper was obtained from the Ethics Committee on Animal Research of the State University of Rio de Janeiro (Process CEA/010/2012). Figure 1 illustrates the location from where the images were acquired.
Synchrotron radiation computed microtomography (SR-µCT)
All the specimens were imaged using the new high-resolution microCT setup which has recently become available at the Synchrotron Radiation for Medical Physics (SYRMEP) beamline of the Elettra Synchrotron Light Laboratory (Trieste, Italy). The SYRMEP light source is one of the bending magnets of ELETTRA. The horizontal acceptance covered by the front-end light-port is 7 mrad. The useful energy range is 8.5-35 keV. The typical flux measured at the sample position at 17 keV is about 1.6×10⁸ ph/mm²s with a stored electron beam of 300 mA when ELETTRA is operated at 2 GeV, while it is 5.9×10⁸ ph/mm²s with 140 mA at 2.4 GeV. A custom-built ionization chamber, placed upstream of the sample, is used to determine the exposure on the sample, and hence to calculate the delivered dose [11]. In order to optimize the performance of the µCT setup for high-resolution experiments at the SYRMEP beamline, a 25-µm thick single crystal of cerium-doped lutetium-aluminum garnet (Lu3Al5O12) scintillator screen (Crytur, Czech Republic) was coupled to an air-cooled 16-bit CCD camera (Photonic Science, KAI 4022M CCD, 2048 × 2048 full frame) via a visible-light microscope optics (LEICA), with an effective pixel size of 1.03 µm. This system, designed to achieve up to 2 µm spatial resolution, was used in white/pink X-ray beam mode, which provides a nearly parallel, laminar-section X-ray beam with a maximum area of 100 mm (horizontal) × 6 mm (vertical) at a distance of about 15 m from the source. For each bone sample, 1800 radiographic images were acquired over an angular range of 360° with an angular step of 0.2°. The sample-detector distance was set to 9 cm and the scanning time was approximately 1 h for each sample. The sample was positioned so that the femur head region lay in the field of view of the detector, making it unnecessary to cut the sample.
Palladium (0.047 mm) and silicon (1.5 mm) filters were used to cut the low-energy X-ray components; therefore, the average energy was around 24 keV. The 2D radiographs are normalized by using flat (images without the samples) and dark (background) images. This procedure allows one to take into account incident beam non-uniformities and to correct the fixed noise due to the efficiency of the detector elements. After radiographic data acquisition, the SYRMEP Tomo Project software was used to reconstruct the slices [12]. The reconstruction was performed using filtered back projection with a Shepp-Logan filter. The quantitative analysis was performed in a VOI of 200 × 158 × 500 pixels, starting 0.5 mm below the top of the femur head. The ImageJ® software [13] was used to render the 3D volumes, and the BoneJ software (an ImageJ plugin) [14] was employed for quantifying the samples.
Morphologic parameters
Prior to quantification of the morphologic parameters, the images had to be binarized so that voxels corresponding to bone could be distinguished from those of the background. To distinguish bone tissue from marrow and background, an optimal threshold for each specimen was determined. The method used is implemented in ImageJ and called IsoData, also known as Iterative Intermeans [15] (a code sketch of this quantification appears below). The morphologic parameter BV/TV was calculated as the number of foreground (bone) voxels divided by the total number of voxels in the image. The 3D model-independent parameter Tb.Th was computed based on the calculation of the local thickness volume [16]. The local thickness $\tau(n)$ at any point $n \in \Omega \subset R^3$ is defined as the diameter of the largest sphere that contains the point n and is completely inside the structure $\Omega$, i.e., $\tau(n) = 2 \cdot \max\left(\{ r \mid n \in \mathrm{sph}(x, r) \subseteq \Omega,\ x \in \Omega \}\right)$, where $\mathrm{sph}(x, r)$ denotes the sphere centered at x with radius r. This definition was given in the context of the continuous space utilizing the Euclidean distance. The trabecular separation Tb.Sp was computed using the same process but applied to the complement of the binary trabecular bone structure. The number of trabeculae, Tb.N, was determined using $Tb.N = \frac{BV/TV}{Tb.Th}$. BS, the surface of the trabecular bone, was calculated using a triangular surface mesh generated by marching cubes, and the bone surface area can be taken as the sum of the areas of the triangles making up the mesh [17]. An estimation of the plate-rod characteristic of the structure is achieved using the Structure Model Index, SMI [18]. This parameter is calculated by a differential analysis of a triangulated surface of a structure and is defined as $SMI = 6 \cdot \frac{BV \cdot (dBS/dr)}{BS^2}$, where dBS/dr is the surface area derivative with respect to a linear measure r, relative to the half-thickness. For an ideal plate and an ideal rod structure, the SMI value is 0 and 3, respectively. For a structure with both plates and rods of equal thickness, the value is between 0 and 3, depending on the volume ratio between rods and plates.
Results
The femoral head samples were scanned to reveal the two-dimensional (2D) and three-dimensional (3D) trabecular microstructure. Figure 2 presents the 2D and 3D microstructure of a typical specimen from the G0 and G1 groups. The 3D rendered images, together with the 2D sections of the G1 group (Figure 2), reveal the possible changes in the microarchitecture of the cancellous femoral head. It can be seen that the microstructure of the femoral head in the chemotherapy group showed a greater spacing between the trabeculae and a loss of interconnections, compared with the control group. On the original 3D image, morphometric indices were determined directly from the binarized volume of interest (VOI). Statistical differences among groups were evaluated with Student's t-test.
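A minimal sketch of the binarization and volume-fraction step described above, assuming NumPy and scikit-image (whose threshold_isodata implements the same IsoData/Iterative Intermeans rule); the input volume is a hypothetical grayscale array, and the Tb.N line follows the plate-model relation used above.

```python
# Sketch of the slice quantification: IsoData thresholding, BV/TV,
# and the derived trabecular number.
import numpy as np
from skimage.filters import threshold_isodata

def quantify(volume, tb_th_mm):
    t = threshold_isodata(volume)      # IsoData / Iterative Intermeans
    bone = volume > t                  # binarized VOI
    bv_tv = bone.sum() / bone.size     # BV/TV: bone voxels over all voxels
    tb_n = bv_tv / tb_th_mm            # Tb.N under the relation used above
    return bv_tv, tb_n
```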
P-values less than 0.05 were considered significant. Table 1 indicates the mean values and respective standard deviations of the different trabecular parameters for both groups. The statistical significance between the two groups is reported in the last line through the p-value. The results reveal a significant decrease in BV/TV, as well as an increase in spacing (Tb.Sp) that did not reach significance, for rats treated with chemotherapeutic drugs. This can be clearly seen in the microtomographic images of the samples of this group (Figure 2). The trabecular thickness (Tb.Th) increased by 7% in the G1 group compared to the control one, but not significantly. The number of trabeculae per millimeter (Tb.N) decreased by 13% in the G1 group compared with the control one (P = 0.095). The bone volume to total volume ratio (BV/TV) decreased by 31% (P = 0.006), with a consequent increase of 36% (P = 0.063) in trabecular spacing (Tb.Sp). The Structure Model Index (SMI) increased significantly, by 154% (P = 0.02), in the G1 group compared to G0, with values varying between 0.244 and 2.240, indicating that the trabeculae appear to have mixed plate-rod structures. The lowest value obtained (0.244) is closer to plate-like, while the highest value (2.240) resembles rod-like structures. The deterioration of cancellous bone structure due to aging and diseases such as osteoporosis is characterized by a conversion from plate elements to rod elements. Consequently, the terms "rod-like" and "plate-like" are frequently used for a subjective classification of cancellous bone [19]. The group treated with chemotherapy presented a mean SMI of 1.840, indicating structures closer to rod-like, while the control group presented a mean SMI of 0.724, i.e., more plate-like structures, which reflect higher mechanical strength.

Conclusions

The use of synchrotron sources with considerably higher flux opens up new possibilities in the analysis of 3D images of bone samples, in particular for quantitative analysis of trabecular microarchitecture. Three-dimensional SR-µCT was used to investigate bone architecture at the femur head site. After image acquisition, 3D image processing techniques were used to fully exploit the 3D data. The advance in three-dimensional imaging techniques allows the organization of trabeculae to be studied in great detail. Comparing the control and treated bones from the same skeletal site, a clear decline in bone volume fraction was observed. The relative bone loss (decreased BV/TV) between the chemotherapy group and the control group in the femur head, accompanied by lower Tb.N and higher Tb.Sp, seems to indicate that chemotherapy induced bone loss, manifested through decreased connectivity and the loss of thin trabecular elements. The SMI seems to indicate that structures tend to turn from plate- to rod-like as chemotherapy is applied, suggesting lower mechanical strength in the bones of chemotherapy-treated animals. In conclusion, the results obtained could form the basis for comparisons of bone microarchitecture, and the resulting high-resolution 3D images, together with histomorphometric quantification, can be a valuable tool for predicting bone fragility.
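For completeness, a minimal sketch of the group comparison used above (unpaired Student's t-test, significance at P < 0.05) is given below; the numbers are hypothetical placeholders, not the measured values of Table 1.

```python
# Unpaired Student's t-test between control (G0) and treated (G1) groups,
# with P < 0.05 as the significance criterion used in the text.
from scipy import stats

bv_tv_g0 = [0.42, 0.39, 0.44, 0.41, 0.40]   # hypothetical G0 BV/TV values
bv_tv_g1 = [0.29, 0.27, 0.31, 0.28, 0.30]   # hypothetical G1 BV/TV values

t_stat, p = stats.ttest_ind(bv_tv_g0, bv_tv_g1)
print(f"t = {t_stat:.2f}, P = {p:.4f}",
      "(significant)" if p < 0.05 else "(not significant)")
```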
2019-04-28T13:13:57.537Z
2014-04-03T00:00:00.000
{ "year": 2014, "sha1": "3d2c4ab2db36b9234c5df49a42a079a06789d168", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/499/1/012019", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "6def0b3c9330a838b05fe0cb5c3711f4b5882131", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
119508049
pes2o/s2orc
v3-fos-license
Re-entrant ferromagnetism in a generic class of diluted magnetic semiconductors

Considering a general situation where a semiconductor is doped by magnetic impurities leading to a carrier-induced ferromagnetic exchange coupling between the impurity moments, we show theoretically the possible generic existence of three ferromagnetic transition temperatures, T_1 > T_2 > T_3, with two distinct ferromagnetic regimes existing for T_1 > T > T_2 and T < T_3. Such an intriguing re-entrant ferromagnetism, with a paramagnetic phase (T_2 > T > T_3) between two ferromagnetic phases, arises from a subtle competition between indirect exchange induced by thermally activated carriers in an otherwise empty conduction band versus the exchange coupling existing in the impurity band due to the bound carriers themselves. We comment on the possibility of observing such a re-entrance phenomenon in diluted magnetic semiconductors and magnetic oxides.

The large class of materials comprised of diluted magnetic semiconductors and magnetic oxides (DMS) has been widely studied in recent years for their potential in spintronic applications as well as for the fundamental physics of carrier-mediated ferromagnetism in semiconductors. They have ferromagnetic (FM) critical temperatures T_C ranging from below room temperature (e.g. (Ga,Mn)As [1]) to above it (e.g. doped magnetic oxides such as TiO2 [2]), and some others never show long-range FM order (e.g. most Mn-doped II-VI semiconductor alloys are spin glasses) [3]. FM is usually ascribed to carrier-mediated mechanisms and depends on many different parameters (e.g. carrier density n_c, magnetic impurity density n_i, magnetic coupling J between the ion moment and the electron (or hole) spins, and details of disorder), which vary greatly from system to system and sometimes from sample to sample. When the concentration of magnetic impurities is larger than a few percent, direct exchange (i.e. not carrier-mediated) can also be significant. This exchange is antiferromagnetic in many cases, like the Mn-doped III-V [4] and II-VI semiconductors [5], and ferromagnetic in others, like Co-doped TiO2 [6]. The accepted effective theoretical models [7,8,9,10,11] for FM in DMS can be roughly divided into two broad categories depending on whether the carriers mediating the FM interaction between the magnetic impurities are itinerant free carriers (i.e. holes in the valence band or electrons in the conduction band) or localized (bound) carriers (in an impurity band, for example). For the itinerant free carrier case, e.g.
Ga₁₋ₓMnₓAs in the optimally doped (x ≈ 0.05) situation with the nominally high T_C (∼ 100-170 K), the effective magnetic interaction producing FM is thought to be of the Ruderman-Kittel-Kasuya-Yosida (RKKY) type [7,8,9,10,11,12], leading to a mean-field FM transition by virtue of there being many more effective impurity magnetic moments than free carriers. Such RKKY carrier-mediated mean-field FM seems to describe well [7,10,11] the optimally doped (Ga,Mn)As regime. For the localized case, where disorder should play an important role, the FM transition is thought to be caused by the temperature-driven magnetic percolation transition of bound magnetic polarons (BMP) [13], objects in which one localized carrier is magnetically strongly correlated with a few neighboring magnetic impurities through the local exchange interaction. An example of such BMP-percolation-induced FM is thought to be the localized impurity band Ge₁₋ₓMnₓ DMS system [14]. Typically, the RKKY (BMP percolation) ferromagnetic T_C is relatively high (low). In this Letter, we argue that it is possible, perhaps even likely in some impurity band systems, for these two mechanisms to operate together in (at least some) DMS materials, leading to an intriguing situation where the RKKY mechanism, mediated by thermally excited free carriers, dominates at high T, whereas the low-T regime (where thermal carrier activation is exponentially suppressed) is dominated by the polaron percolation mechanism in the impurity band. By theoretically analyzing such an interesting multi-mechanism DMS situation using simple physical models, we show that it may be generically possible for DMS materials to have re-entrant FM with a paramagnetic state sandwiched between the high-temperature 'activated' RKKY-type and the low-temperature BMP percolation-type FM. The system we consider is a localized impurity band system which is insulating at T = 0. In this scenario, the density of band carriers is a function of temperature T and activation energy Δ,

n_c(T, Δ) = n_c0 exp(−Δ/k_B T).  (1)

As a consequence, the RKKY mechanism is inefficient at low T (typically for k_B T < Δ), when the free carrier density is too low, while the activated carriers can mediate FM at higher temperatures. Therefore, this 'activated' RKKY mechanism leads, by itself, to a FM phase between two disordered phases at high (T > T_1) and low (T < T_2) temperatures. At low T, other mechanisms not requiring free carriers (e.g. magnetic polaron percolation) can come into play, producing a further magnetic transition with critical temperature T_3. When T_3 < T_2 there are two distinct ferromagnetic phases and the system exhibits re-entrant FM. The Hamiltonian of the exchange interaction between magnetic impurities and the carriers is H = Σ_i J a₀³ S_i · s(R_i), where J is the local exchange coupling between the impurity spin S_i located at R_i and the carrier spin density s(r), and a₀³ is the unit cell volume. As an illustration, we first discuss the hypothetical case of an intrinsic semiconductor with magnetic impurities which do not contribute carriers (so there exists no impurity band where BMPs could form). In this case, the RKKY mechanism can only be mediated by carriers thermally activated from the valence to the conduction band. Another mechanism that could play a role in this intrinsic case is the Bloembergen-Rowland mechanism [15], mediated by virtual electron excitations across the band gap.
This mechanism was considered in the context of II-VI DMS [3] and was finally dismissed as being too weak compared to superexchange [5]. Therefore, we do not further include this mechanism in our discussion. We now consider this 'activated' RKKY mechanism within a mean-field theory in the limit of nondegenerate carriers. In the activated carrier scenario we are studying, the Fermi energy is smaller than the temperature, and hence the limit of nondegenerate carriers is appropriate. Within mean-field theory, the impurity spins act upon the carrier spins as an effective magnetic field ∝ J a₀³ n_i ⟨S_z⟩, while the carrier spins act upon the impurity spins with an effective field ∝ J a₀³ n_c ⟨s_z⟩. As a result, the magnetization of the magnetic impurities ⟨S_z⟩/S is calculated self-consistently, giving

⟨S_z⟩/S = B_S(S J a₀³ n_c ⟨s_z⟩ / k_B T), with ⟨s_z⟩ = s B_s(s J a₀³ n_i ⟨S_z⟩ / k_B T),  (2)

where B_S(y) is the standard Brillouin function. The magnetization is plotted in Fig. 1(a) for three different values of Δ. The Δ = 0 curve has a non-standard concave shape, as expected for a low density of carriers [12]. The critical temperatures are given by the points at which the magnetic susceptibility diverges. The susceptibility is essentially that of the magnetic impurities, since usually n_c << n_i due to carrier compensation, and in the paramagnetic regions it is given by χ(T) ≡ n_i ∂(g_i µ_B ⟨S_z⟩)/∂B. For Δ = 0, there is only one critical temperature, given by the standard result k_B T_c0^RKKY = (1/3) J a₀³ √(n_c0 n_i S(S+1) s(s+1)). For Δ ≠ 0, the model gives two critical temperatures T_1 and T_2, shown in Fig. 1(b): at low T, n_c gets exponentially suppressed and the localized moments are independent of each other; as T increases, the band gets populated and the carrier-mediated FM kicks in at T_2, which increases with Δ. At even higher T, thermal disorder produces the standard ferro-to-paramagnetic transition with critical temperature T_1, which decreases as Δ increases due to the reduction of the carrier density in the band. The T_1 and T_2 curves meet at Δ ≈ 0.73 T_c0^RKKY and, for Δ > 0.73 T_c0^RKKY, the density of carriers is always too low to mediate FM, so the 'activated' RKKY model gives paramagnetism at all T. The curves for different parameters scale with T_c0^RKKY ≡ T_1(Δ = 0). We consider now the more realistic case in which there is an impurity band. The carriers in the impurity band can come from magnetic impurities acting as dopants, as in the III-V semiconductors, or from other dopants, like the oxygen vacancies which act as shallow donors in the magnetic oxides. We assume that at T = 0 the conduction (or valence) band is empty, so all the free carriers in the band are produced by thermal activation from the impurity band. At low T, the carriers in the impurity band can be strongly localized and mediate FM through the formation of BMPs, which grow as T is lowered, finally overlapping into clusters and producing a FM transition at percolation [13]. In our model of thermally activated carriers, the density of carriers in the impurity band is n_c*(T, Δ) = n_c0 − n_c = n_c0[1 − exp(−Δ/k_B T)], where n_c is the density of carriers in the valence or conduction band. Δ is the smallest energy gap for electron excitations, which is given by the activation energy of a carrier at the impurity level. The critical temperature T_c^perc in the BMP percolation model is given by the expression of Ref. [13] (Eq. (3)), where a_B is the carrier localization radius and n_c* is the carrier density at T_c^perc. Eq. (3) is valid in the low carrier density limit a_B³ n_c* << 1.
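To make the self-consistent construction of Eqs. (1)-(2) concrete, the following is a minimal numerical sketch (units with k_B = 1 and the product J a₀³ folded into a single coupling constant; all parameter values are illustrative assumptions, not fitted to any material):

```python
# Fixed-point solution of the 'activated' RKKY mean-field equations:
#   n_c(T) = n_c0 * exp(-delta/T)                         (Eq. 1)
#   m = B_S(S*J*n_c*sz/T),  sz = s*B_s(s*J*n_i*S*m/T)     (Eq. 2)
import math

def brillouin(j, y):
    """Standard Brillouin function B_j(y) for y >= 0."""
    y = max(y, 1e-12)
    a = (2 * j + 1) / (2 * j)
    return a / math.tanh(a * y) - 1.0 / (2 * j * math.tanh(y / (2 * j)))

def impurity_magnetization(T, delta, J=1.0, n_i=1.0, n_c0=0.1, S=1.0, s=0.5):
    """<S_z>/S at temperature T; here J stands for the product J*a0^3."""
    n_c = n_c0 * math.exp(-delta / T)      # thermally activated carriers
    m = 0.9                                # start from a magnetized guess
    for _ in range(2000):                  # iterate Eq. (2) to a fixed point
        sz = s * brillouin(s, s * J * n_i * S * m / T)   # carrier response
        m_new = brillouin(S, S * J * n_c * sz / T)       # impurity response
        if abs(m_new - m) < 1e-10:
            break
        m = m_new
    return m                               # decays to ~0 outside the FM window

for T in (0.01, 0.05, 0.10, 0.20):
    print(f"T = {T:.2f}: m = {impurity_magnetization(T, delta=0.05):.3f}")
```

Scanning T for a fixed Δ with such a solver reproduces the qualitative behavior described in the text: a magnetized window bounded by T_1 above and T_2 below.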
As the density of carriers involved in polaron formation, n_c*, increases with Δ, T_c^perc also increases, saturating for large values of Δ, when n_c* ≈ n_c0, as shown in Fig. 2 (solid line). The value of T_c^perc is mainly dominated by the value of a_B³ n_c*, on which it depends exponentially. Another mechanism that could arise when there are no free carriers is the Bloembergen-Rowland-type mechanism proposed in Ref. [16] in the context of Mn-doped III-V semiconductors, where the virtual electron excitations take place between the impurity levels and the band. We do not expect, however, this mechanism to be stronger than BMP percolation, and therefore it does not affect our conclusions. In Fig. 2 we show two typical phase diagrams resulting from the interplay of the 'activated' RKKY and BMP percolation models. The parameters chosen here are representative of real systems. We have fixed the magnetic impurity concentration to x = 0.1, J = 1 eV, and S = 1. The other two main parameters left are n_c0 and a_B. n_c0 is chosen so that n_c0/n_i ≤ 0.1, as is usually found in DMS systems due to strong carrier compensation. Finally, a_B is chosen to fulfill the applicability conditions of Eq. (3). There are two qualitatively different phase diagrams that can occur depending on the value of a_B³ n_c0. For the lowest values of a_B³ n_c0, T_3 ≡ T_c^perc is relatively small, and we get the novel scenario depicted in Fig. 2(a): in region I, corresponding to Δ/T_c0^RKKY ≲ 0.32 for the parameter values chosen in this figure, the system is FM with critical temperature T_1; in region III, with Δ/T_c0^RKKY ≳ 0.73 for any set of parameters, the system is ferromagnetic with T_c = T_3; finally, in region II, which corresponds to intermediate values of Δ (0.32 ≲ Δ/T_c0^RKKY ≲ 0.73), T_3 < T_2 and the system shows re-entrant FM with three distinct critical temperatures T_1, T_2, and T_3. For larger values of a_B³ n_c0 (but still << 1), as in Fig. 2(b), T_3 > T_2 for all values of Δ and the system is ferromagnetic for T < max(T_1, T_3). Although our theoretical analysis, being physically motivated, is formally correct, the question naturally arises about the observability of our predicted re-entrant FM behavior in DMS materials. There are three general conditions that should be met: (i) at T = 0 there should be no carriers in the conduction (or valence) band, so all the carriers mediating RKKY come from thermal activation; (ii) for the scenario illustrated in Fig. 2(a) to occur we need to fulfill the condition T_c^perc(n_c0) < T_2^max ≈ 0.3 × T_c^RKKY(n_c0); and (iii) we also have to ensure that the system is in region II of Fig. 2(a) to actually observe re-entrant FM. Condition (i) basically implies that the system has to have activated-like resistivity (decreasing as T increases), limiting the suitable systems to those with so-called 'insulating behavior', for example very lightly doped (Ga,Mn)As, (In,Mn)As, and magnetic oxides. Condition (ii) puts restrictions on the relative values of a_B and n_c0. The condition in (ii) is independent of J and n_i and reduces to the inequality a_B³ n_c0 < 0.00824 for S = 1. This result is illustrated in Fig. 3, where the curve is given by the maximum allowed value of the product a_B³ n_c0. Below this curve we get the scenario in Fig. 2(a), while the scenario in Fig. 2(b) occurs for a_B and n_c0 values above the curve.
The carrier confinement radius is given by a_B = ε(m/m*)a, where a = 0.52 Å, ε is the semiconductor dielectric constant, and m* is the polaron mass (usually m*/m > 1). For diluted magnetic semiconductors and magnetic oxides we generally expect 3 Å < a_B < 10 Å. Hence, as shown in Fig. 3, which is the important materials phase diagram to keep in mind when searching for a suitable system to observe our predicted re-entrant FM, we need to keep a relatively low density of carriers, but well within experimentally achievable values. Condition (iii) can be fulfilled, in general terms, when Δ is comparable to T_c. This rules out Ga₁₋ₓMnₓAs, with Δ/T_c ∼ 10 (T_c ∼ 100 K and Δ ≃ 110 meV [17]), which would rather be in region III (for very low carrier density samples). On the other hand, the re-entrant FM scenario might be applicable in In₁₋ₓMnₓAs, which has very low activation energies (see, for example, Fig. 2 in Ref. [18]) and low T_c ∼ 50 K. However, the most likely candidate for observing our predicted re-entrant FM is possibly a doped magnetic oxide material, e.g. Ti₁₋ₓCoₓO₂ (with 30 meV < Δ < 70 meV [19] and very similar critical temperatures T_c ∼ 700 K), which is an insulator at low T but exhibits essentially T-independent Drude transport behavior, due to thermally activated carriers, at room temperature and above. In such a system, it is quite possible that the high-T FM is mediated by thermally activated carriers, whereas at low T a BMP percolation FM takes over [20], as in region I of Fig. 2(a). We have neglected the direct exchange interaction between the magnetic impurities, which is short-ranged and could also play a role, particularly when the density of impurities n_i is large, completely destroying the lower FM phase [21] and leading to a spin-glass phase. These spin-glass low-T phases have been observed in II-VI DMS [3] and (Ga,Mn)N [22], in samples that never show FM order. More relevant to our case, a suppression of magnetization at low T in zero-field-cooled curves has been reported in the magnetic oxide V-doped ZnO, and possibly in other magnetic oxides [23]. In principle, therefore, direct antiferromagnetic exchange between the magnetic dopant impurities could compete with (or even suppress) our predicted low-T re-entrant FM phase; but since the origin of this direct exchange is completely different from the carrier-mediated mechanisms producing the re-entrant FM behavior itself, we find it difficult to believe that such a suppression of re-entrance can be generic. We suggest detailed T-dependent magnetization studies in the shaded region of our Fig. 3 to search for our predicted re-entrant FM behavior. We have considered, using physically motivated effective models of FM in DMS materials, the intriguing possibility of generic re-entrant FM in an insulating class of DMS systems. The models we use are considered to be successful minimal carrier-mediated models [7,8,9,10,11] for understanding FM behavior and predicting ferromagnetic T_C in itinerant and localized DMS materials. The new idea in this work has been to point out that these 'competing' FM mechanisms, e.g. 'activated' RKKY and BMP percolation, could in principle exist together in a single sample, where thermal activation leads to a high-T effective free-carrier mechanism mediating the RKKY ferromagnetism, and the low-T FM, where thermal activation of free carriers is exponentially suppressed, is mediated by localized bound carriers in an impurity band through the polaron percolation mechanism.
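As a quick numerical illustration of condition (ii) and the a_B estimate above, here is a minimal sketch; the dielectric constant, mass ratio and carrier density are illustrative assumptions chosen within the ranges quoted in the text:

```python
# Check of condition (ii): a_B^3 * n_c0 < 0.00824 (for S = 1), with
# a_B = eps * (m/m*) * a. All input values are illustrative assumptions.
eps = 8.0            # assumed semiconductor dielectric constant
m_star_over_m = 1.0  # assumed polaron-to-free-electron mass ratio
a = 0.52             # Angstrom, as quoted in the text

a_B = eps * a / m_star_over_m        # carrier confinement radius, Angstrom
n_c0 = 1.0e18                        # assumed carrier density, cm^-3
product = (a_B * 1e-8) ** 3 * n_c0   # dimensionless a_B^3 * n_c0

print(f"a_B = {a_B:.2f} A, a_B^3 n_c0 = {product:.2e}")
print("Fig. 2(a) scenario (re-entrance possible)" if product < 0.00824
      else "Fig. 2(b) scenario (single FM region)")
```

With these assumed values a_B ≈ 4.2 Å and a_B³ n_c0 ≈ 7×10⁻⁵, comfortably below the 0.00824 threshold, i.e. inside the shaded region of Fig. 3.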
We show that, depending on the materials parameters, such a situation with competing high-T and low-T FM mechanisms generically allows for re-entrant FM with an intermediate-temperature anomalous paramagnetic phase intervening between the higher temperature RKKY FM phase and the lower temperature BMP percolation FM phase. More experimental work in DMS materials is needed to confirm the existence of re-entrant FM, but the phenomenon should exist on firm theoretical grounds. This work is supported by the NSF and the NRI SWAN program.
2019-04-14T02:12:52.030Z
2006-11-14T00:00:00.000
{ "year": 2006, "sha1": "8064334a2f3741a847c39ae9fc25a98d4eadbbda", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0611384", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8064334a2f3741a847c39ae9fc25a98d4eadbbda", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
261074628
pes2o/s2orc
v3-fos-license
Lipidomic signature of stroke recurrence after transient ischemic attack

While TIA patients have transient symptoms, they should not be underestimated, as they could have an underlying pathology that may lead to a subsequent stroke: stroke recurrence (SR). The involvement of lipids in different vascular diseases has been described previously. The aim of the current study was to perform a lipidomic analysis to identify differences in the lipidomic profile between patients with SR and patients without. Untargeted lipidomic analysis was performed in plasma samples of 460 consecutive TIA patients recruited < 24 h after the onset of symptoms. 37 (8%) patients suffered SR at 90 days. Lipidomic profiling disclosed 7 lipid species differentially expressed between groups: 5 triacylglycerides (TG), 1 diacylglyceride (DG), and 1 alkenyl-PE (plasmalogen) [specifically, TG(56:1), TG(63:0), TG(58:2), TG(50:5), TG(53:7), DG(38:5) and PE(P-18:0/18:2)]. Six of these seven lipid species belonged to the glycerolipid family, and the seventh was a plasmalogen, pointing to bioenergetics pathways as well as the oxidative stress response. In this context, PE(P-18:0/18:2) was proposed as a potential biomarker of the SR condition. The observed changes in lipid patterns suggest pathophysiological mechanisms associated with lipid droplet metabolism and antioxidant protection that are translated to the plasma level as a consequence of a more intensive or high-risk ischemic condition related to SR.
Stroke is an important cause of disability and death globally, resulting in more than 6 million deaths per year 1,2. A transient ischemic attack (TIA) is a form of stroke characterized by transient episodes of neurological deficits due to brain ischemia 3. Despite the temporary nature of their symptoms, TIA patients are at significant risk of suffering a definitive ischemic stroke with persistent symptoms (stroke recurrence [SR]), particularly during the first three months of follow-up 4,5. Interestingly, the risk of SR in TIA patients is heterogeneous, with some individuals having a high risk while others have a lower risk 6. It is known that patients with intracranial or extracranial stenosis [6][7][8], cardioembolism 9, diffusion-weighted imaging abnormalities 7,9, and patients with repeated events 10 or motor weakness 4,9 have a higher risk. Furthermore, these differences in SR can vary based on sex as well 6. At the same time, there has been long-standing interest in the development of biomarkers, which could provide valuable prognostic information 11. It is important to note that, because TIA is a prevalent condition 2, patients may be attended in centers without expertise or lacking the necessary technology. Therefore, identifying high-risk patients solely on the basis of a blood test can be of interest. In this line, previous studies by our team pointed to a significant role for lipids and their metabolites 12. We observed that specific lysophosphatidylcholines (LysoPC [16:0] and LysoPC [20:4]) were significantly associated with SR. Lipids are involved in cardiovascular diseases and acute myocardial infarction, not only as a result of the retention of LDL-cholesterol and other cholesterol-rich apolipoprotein B-containing lipoproteins within the arterial wall, but also as targets of oxidative damage and through the adaptation of lipid metabolism to ischemic processes 13. Lipidomics, a subfield of metabolomics, involves the identification and quantification of the lipidome in biological systems. Lipidomics provides specific insight into the pathophysiologic mechanisms underlying ischemic stroke and offers a new strategy for describing biomarkers 14. The aim of the current study was to perform a lipidomic analysis among consecutive TIA patients to find differences in the plasma lipidomic profile between patients with SR after 90 days and patients without.

Results

As shown in Table 1, a total of 460 consecutive TIA patients upon arrival to the medical facility were included in the analysis, with a mean age of 71.4 (SD 13.6) years; 221 (48.0%) patients were female. A total of 37 (8%) patients suffered SR over the 90-day follow-up (Fig. 1), of whom 23 were female (62.2%). In parallel with the lipidomic study, we identified a higher proportion of female sex, previous ischemic stroke, duration of symptoms > 10 min, motor impairment, LAA and DWI abnormality in the SR group. No significant differences were observed in the standard clinical lipid profile between the two groups.
Discussion

We observed differences in the plasma lipidomic profiles of TIA patients who suffered a subsequent SR compared to TIA non-SR patients. The lipidomic profile of patients with SR consisted of a very restricted set of lipids made up of 5 TG, 1 DG, and 1 plasmalogen. The observed changes in these lipid classes require special attention because the metabolic pathways and cell mechanisms behind them can be crucial in the physiopathology of SR. There are two functional categories associated with the different lipid classes identified: bioenergetics and antioxidant protection. Thus, TG are bioenergetic compounds that compose the lipid droplets, and they are also present in neural cells 15. DG are components of cell membranes and lipid mediators, but also precursors for the biosynthesis of TG 15. Finally, plasmalogens are structural components of cell membranes 16 and of the phospholipid monolayer of LDs 17, and they also have antioxidant properties 18 that help to maintain lipid layer integrity. Our results indicate a significantly low abundance of these particular lipid species in SR patients compared to non-SR subjects. The observed low abundance of particular DG and TG lipid species in plasma from SR patients points to a low accumulation/formation of cerebral LDs, indicating a patient-specific response to stress conditions and suggesting a defective ischemia-associated stress response in SR patients. On the other hand, the detected differential plasmalogen also requires special attention. In the human brain, phosphatidylethanolamines (PE) are quantitatively the major phospholipid 19,20, and the predominant form is the alkenyl-PE. Plasmalogens play a key role in neural membrane properties such as membrane trafficking, cell signalling and antioxidant protection, and are a preferential component of the phospholipid monolayer present in lipid droplets (LDs), the lipid storage organelles composed of a core of TG and sterol esters surrounded by a phospholipid monolayer and different associated proteins 21, found predominantly in glial cells and to a lesser degree in neurons 22. Consequently, the low plasma abundance of PE(P-18:0/18:2) in SR patients reinforces the suggested idea of alterations of LDs in SR patients, as well as an impairment in antioxidant capacity 23. However, more studies are needed to validate this concept, as well as the biological relevance of this particular lipid species rather than other plasmalogens. Importantly, these findings are in line with previous observations in animal models of ischemia-reperfusion 17,24 and in ischemic stroke patients 25, suggesting that this lipid set expresses a condition of impaired stress response in SR patients compared to TIA non-SR patients.
Recent studies analyzing different biofluids (serum and urine) from a metabolomic approach have demonstrated, comparing stroke patients with healthy controls, the presence of specific metabolic profiles ascribed to changes in fatty acids, amino acids, choline metabolism, phospholipids, sphingolipids, and the folate one-carbon cycle [25][26][27][28][29]. These few works collectively reveal the complexity of analyzing and discerning metabolic events associated with stroke and of identifying unambiguous biomarkers. Brain ischemia occurs when there is a blockage of blood flow to the brain tissue, resulting in a decreased supply of energy to the affected area that alters the membrane ionic balance, depolarizes the neuronal membrane, increases intracellular Ca²⁺ concentrations and activates calcium-dependent proteases, which ultimately leads to neuronal death 15,25. Additional cell-damaging mechanisms include alterations of the blood-brain barrier and a subsequent increase in cerebral oxidative damage and the neuroinflammatory response 30, as well as metabolic alterations affecting lipid metabolism 12. Effectively, hypoxic stress (and other cerebral pathological states) induces an increased cerebral content of LDs, predominantly in glial cells and to a lesser degree in neurons 22. This accumulation of LDs is suggested to act as a support for energy supply, as well as a neuroprotective mechanism against stress-induced lipotoxicity 22. Remarkably, diverse studies using animal models of ischemia-reperfusion demonstrated that the limited regenerative ability of the injured brain is associated with the formation of inhibitory lipids in the damaged region 17. Consistent with this hypothesis, it seems that TIA patients who are at a higher risk of SR also exhibit a more pronounced initial ischemic insult, as indicated by a greater proportion of DWI lesions. Therefore, TIA patients with lower bioenergetic or antioxidant capacity will be more susceptible to experiencing recurrent ischemia or may have a reduced ability to recover from new ischemic episodes. The clinical applicability of our results may be limited primarily by the inherent complexity of the lipidomic analysis technique, which does not provide rapid results. However, the use of blood biomarkers that support stroke diagnosis and the early identification of subjects at high risk of recurrence is currently of interest 31. Given the high prevalence of cerebrovascular disease worldwide 2, and considering the heterogeneous risk of SR among TIA patients 6,32, the use of biomarkers related to SR could help in the assessment of the individual risk of SR and in management decisions 12, especially in places without direct access to brain and/or vascular imaging. We believe that our results, despite the limitations of the study listed below, are reproducible and representative of the clinical reality of TIA patients, as we included a considerable number of patients and identified variables previously described as associated with SR, such as motor weakness 32,33, LAA 6,8,9,34,35 and DWI abnormality 7,9,10.
This work has several limitations that must be considered: (1) High-throughput lipidomic techniques have inherent handicaps, such as a high variable-to-sample ratio and high variability in metabolite levels and results. Therefore, they require large sample sizes and efficient dimensionality reduction techniques, as well as the use of validation cohorts to improve the robustness and replicability of the results. In this sense, due to the small incidence of SR, the statistical power of the results obtained was limited. In addition, we admit that the lipidomic analysis could be influenced by many uncontrolled conditions. We highlight that only two species passed the FDR test. Therefore, our results should be confirmed in other independent cohorts. (2) In this work we have only analyzed those lipid classes that ionize in positive mode. Therefore, the metabolites that ionize better in negative mode (such as free fatty acyls) may be underrepresented. (3) The annotation of the compounds is a well-known limitation of untargeted lipidomic approaches. In the present work we were able to annotate 100% of the differential lipid species, but 3 of 7 were not confirmed by MS/MS spectra because they were not available in the databases. A future confirmation of these identities could change or modify the conclusions drawn at the biological and mechanistic levels. (4) Finally, it is important to acknowledge the absence of a prior sample size calculation; although the number of events was enough to perform multivariate analysis of the clinical variables, the lipidomic analysis could be underpowered.

In conclusion, the lipidomic profiles of TIA subjects without and with SR were different, with minor but significant changes. The observed changes in lipid patterns, especially PE(P-18:0/18:2), suggest pathophysiological mechanisms associated with LD metabolism and antioxidant protection that are translated to the plasma level as a consequence of a more intensive or high-risk ischemic condition related to early SR. The determination of these differential metabolites, which are related to bioenergetics pathways and oxidative stress, could improve the assessment of the individual risk of SR and management decisions. In addition, our findings encourage the investigation of new potential pharmacological interventions.
Material and methods

Design and study population. We developed a registry-based cohort study following the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement 36. We included consecutive TIA patients attended by a stroke neurologist at the emergency department of a hospital during the first 24 h after the onset of symptoms, from January 2006 to January 2015. TIA was defined according to the World Health Organization criteria as a reversible episode of neurological deficit of ischemic origin that fully resolved within 24 h 37. In all cases, the nature of the transient symptoms was evaluated for the final diagnosis of TIA after neuroimaging assessment. If patients had fully recovered from symptoms on arrival at the hospital, the precise neurologic symptoms and their duration were determined by interviewing the patients, family members or other caregivers. A structured questionnaire, in accordance with the Reduction of Atherothrombosis for Continued Health (REACH) Registry 38, was used to record the following variables: age, sex, vascular risk factors (hypertension, diabetes mellitus, hyperlipidemia, current smoking habit), and previous vascular disease, including documented coronary artery disease and peripheral artery disease 35. In patients who underwent magnetic resonance imaging (MRI), a trained radiologist with access to clinical information but blinded to patient outcomes analyzed the presence of diffusion-weighted imaging (DWI) abnormalities. Peripheral venous samples were obtained within the first 24 h from symptom onset.

Outcomes and follow-up. The primary outcome was the occurrence of SR. It was defined as a new symptomatic neurologic deficit not attributable to a nonischemic cause, accompanied by neuroimaging evidence of a new brain infarction. Structured clinical visits were performed by a stroke physician during the 90-day follow-up period. All patient events, death records, electronic medical records and hospital admission records were reviewed, and when needed the primary care physician was consulted 35.

Classification of stroke subtypes. Patients were classified etiologically based on the TOAST classification of stroke subtypes (SSS-TOAST, an evidence-based causative classification system for ischemic stroke) 39 at the 90-day follow-up visit, after the evaluation of all available test results by a stroke neurologist. The identified etiologies were large-artery occlusive disease (LAA), small-vessel disease, cardioembolic, and uncommon or undetermined causes. Patients were classified as LAA if they exhibited a symptomatic, moderate to severe, intracranial or extracranial stenosis 7. We applied the small-artery disease classification to patients with no evidence of LAA or cardioembolic TIA who reported a classic lacunar syndrome (pure motor, pure sensory, or sensorimotor syndrome involving at least 2 out of 3 specific body parts: face, arm, and leg), ataxic hemiparesis or dysarthria-clumsy hand syndrome 40.

Lipidomics approach. Untargeted lipidomic analysis was performed using an Agilent 1290 LC system coupled to an electrospray-ionization quadrupole time-of-flight mass spectrometer (Q-TOF 6520 instrument, Agilent Technologies, Barcelona, Spain).
Plasma lipid species were extracted using an MTBE-based methodology as described previously 41. For protein precipitation, 5 μl of Milli-Q water and 20 μl of methanol were added to 10 μl of plasma sample and shaken for 2 min; then 50 μL of methyl tert-butyl ether (MTBE) containing internal lipid standards (Table S1) was added. Samples were immersed in a water bath (ATU Ultrasonidos, Valencia, Spain; ultrasound frequency 40 kHz, power 100 W) at 10 °C for 30 min. Then, 75 μL of Milli-Q water was added to the mixture, and the organic phase was separated by centrifugation (1400 g) at 10 °C for 10 min. The upper phase, containing all the extracted lipid species, was collected and subjected to analysis. A pool of all lipid extracts was prepared and used as QC, as previously described 42. Internal isotopically labeled lipid standards for each class were used for signal normalization 43. Ten μl of lipid extract was applied onto a 1.8 μm particle, 100 × 2.1 mm id Waters Acquity HSS T3 column (Waters, Milford, MA, USA) heated at 55 °C. The flow rate was 400 μl/min, with solvent A composed of 10 mM ammonium acetate in acetonitrile-water (40:60, v/v) and solvent B composed of 10 mM ammonium acetate in acetonitrile-isopropanol (10:90, v/v). The gradient started at 40% B, reached 100% B in 10 min and was held for 2 min. Finally, the system was switched back to 40% B and equilibrated for 3 min, as previously described 44. Data were collected in positive electrospray mode, with the TOF operated in full-scan mode at 50-3000 m/z in extended dynamic range (2 GHz), using N₂ as the nebulizer gas (5 L/min, 350 °C). The capillary voltage was 3500 V with a scan rate of 1 scan/s. The ESI source used a separate nebulizer for the continuous, low-level (10 L/min) introduction of the reference mass compounds 121.050873 and 922.009798, used for continuous online mass calibration. Mass Hunter Data Analysis Software (Agilent Technologies, Barcelona, Spain) was used to collect the results, and Mass Hunter Qualitative Analysis Software (Agilent Technologies, Barcelona, Spain) to obtain the molecular features of the samples, as described 12. We selected features with a minimum of 2 ions (adducts) to ensure that each feature corresponds to a specific metabolite. MassHunter Mass Profiler Professional Software (Agilent Technologies, Barcelona, Spain) was used to select, align, and filter molecular features. Multiple charge states were considered. Compounds from different samples were aligned using a retention time window of 0.1% ± 0.25 min and a mass window of 30.0 ppm ± 2.0 mDa. We selected only those features that were present in 100% of QCs and had a maximum RSD among QCs of 20%. Samples were normalized using a LOESS-based approach 45. After outlier analyses, 452 individuals (415 non-SR vs.
37 SR) were selected for both multivariate and univariate statistics. Baseline correction, peak picking and peak alignment were performed on the acquired data. After quality control assessment, filtering (we chose only those features present in 100% of quality controls (QC) and with a maximum robust standard deviation (RSD) among QCs of 20%) and signal correction, 152 features remained (supplementary dataset), which were used for multivariate and univariate statistical analysis. Identities were confirmed based on exact mass, retention time, isotopic distribution, and MS/MS spectra using public databases such as Metlin 46, HMDB 47, and LipidMatch 48. Because we applied a semiquantitative approach, the results are reported as relative abundances (MS counts).

Statistical analysis. We compared the baseline characteristics, etiology, and presence of acute lesions in DWI between non-SR and SR patients. Quantitative variables were compared using either Student's t-test or the Mann-Whitney U test. Qualitative variables were compared using the chi-squared test, or Fisher's exact test when the expected frequency was less than 5. The statistical analysis of the data was carried out using the SPSS statistical package, version 24.0 (SPSS, Chicago, IL, USA). Statistical significance was considered when p < 0.05. In addition, to find differences in the plasma lipidomic profiles between patients with and without SR, the MetaboAnalyst platform 49 was used to perform univariate and multivariate statistics (PCA) on the extracted features.

Standard protocol approvals, registrations, and patient consents. The local ethics committee approved the TIA registry. Written informed consent was obtained from all participants or their designated representative 35. Informed consent was obtained from all subjects involved in the study. The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of Hospital del Mar-Parc Sanitari Mar (protocol code 2008/3084/I).

Figure 1. Kaplan-Meier event curves at 90 days. Proportion of patients with stroke recurrence over a period of 90 days.

Figure 2. Multivariate statistics reveal small changes in the plasma lipidome between stroke recurrence (SR) and non-SR patients. A. Two-dimensional Principal Component Analysis (PCA) for the different analyzed groups.

Table 1. Clinical characteristics associated with stroke recurrence (SR) after 90 days of follow-up. Plus-minus values are means ± SD. Percentages may not total 100 because of rounding. Significant values are in italics. a: Student's t-test. SR: stroke recurrence; IHD: ischemic heart disease; TIA: transient ischemic attack; ABCD2: age, blood pressure, clinical features, symptom duration, and diabetes mellitus risk score; DWI: diffusion-weighted imaging; LAA: large-artery occlusive disease; CE: cardioembolism; SV: small vessel disease; LDL: low-density lipoprotein; HDL: high-density lipoprotein; TG: triglycerides.

Table 2. Identification of differentially expressed lipids between non-stroke recurrence (non-SR) and stroke recurrence (SR) patients. RT, min: retention time in minutes; FDR: false discovery rate; TG: triglyceride; DG: diglyceride; PE: phosphoethanolamine. a Student's t-test. b Confirmed by MS/MS spectra.
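As an illustration of the univariate workflow described above (QC-based feature filtering, per-feature testing, FDR control), a minimal sketch is given below. The file names, column layout and use of SciPy/statsmodels are assumptions for illustration; the study itself used MetaboAnalyst and SPSS.

```python
# Minimal sketch: QC filtering (present in 100% of QCs, RSD <= 20%),
# per-feature t-test (SR vs non-SR) and Benjamini-Hochberg FDR correction.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

features = pd.read_csv("lipid_features.csv", index_col=0)  # hypothetical: rows = samples
groups = pd.read_csv("groups.csv", index_col=0)["SR"]      # hypothetical: 1 = SR, 0 = non-SR
qc = pd.read_csv("qc_features.csv", index_col=0)           # hypothetical QC injections

# Keep features detected in every QC injection with RSD <= 20%.
rsd = 100 * qc.std() / qc.mean()
features = features.loc[:, qc.notna().all() & (rsd <= 20)]

# Unpaired t-test per lipid feature.
sr, non_sr = features[groups == 1], features[groups == 0]
pvals = np.array([stats.ttest_ind(sr[c], non_sr[c]).pvalue for c in features.columns])

# Benjamini-Hochberg correction at 5% FDR.
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{int(reject.sum())} features pass FDR out of {len(pvals)}")
```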
2023-08-24T06:17:37.290Z
2023-08-22T00:00:00.000
{ "year": 2023, "sha1": "ff26d919db8dc278df3096004a0693c822a16bef", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-023-40838-7.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5d8efa01b4d9d405b3422f88ca84154b74bcb65e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
240599523
pes2o/s2orc
v3-fos-license
Carbon supported Pd–Cu nanoalloys: support and valence band structure influence on reduction and oxidation reactions

The present study has tracked the changes in the electronic and structural properties of Pd–Cu nanoalloys that were influenced by the composition and chosen support. Carbon supported Pd–Cu nanoalloys (PdxCu1−x/C for x = 1, 0.7, 0.5, 0.3 and 0) were subjected to sequential thermal treatments (up to 450 °C) to induce reduction and oxidation reactions. Valence band photoemission data and in situ XAS results showed that stronger oxygen–metal bonds are formed in Cu-richer samples. A regeneration process assisted by the support was observed during the oxidation reaction, and its reduction efficiency was found to be dependent on the distribution of occupied electronic states near the Fermi level.

One scan was collected every 3.6 minutes. Thus, during the heating processes from RT to 450 °C, each spectrum covers a temperature range of about 27 °C. The XANES data obtained during the whole experiment were analyzed to extract information on the behavior of the PdxCu1−x/C samples during the reactions. Linear combinations (LC) of each spectrum were made using the ATHENA program of the IFEFFIT package 1, which was also used for the standard XAS reduction and normalization procedures. Different internal reference spectra were used for the LC of the data collected in each step. For the first step (heating under CO flow), the following equation was used for the LC of each PdxCu1−x/C case: µ_LC = C_i µ_i + C_f µ_f, where µ_i and µ_f are the as-prepared (first) and the reduced (last) spectra of each sample. The corresponding linear coefficients were normalized (C_i + C_f = 1) and their values were limited between 0 and 1. µ_LC is the calculated absorption spectrum that best fits the experimental data. All the spectra used as internal references for the LC of the second step are highlighted in Figure 3, while those for steps (i) and (iii) are indicated in Figures SI1 and SI6. For the second and third steps, slightly different data analysis approaches were used for the mono- and bimetallic samples. Three internal references were used for the LC of sample Cu/C. Since µ_0 is the last spectrum collected before the air flow was started, it was used as the initial condition of the (reduced) sample before its exposure to air and heating. The component µ_2+ corresponds to the spectrum that presented the highest degree of oxidation; this is the one collected at 450 °C and, by comparison with XANES spectra of reference compounds and in view of the EXAFS analysis results, it is related to Cu²⁺ in CuO. For the component µ_+, the last spectrum collected during the exposure to air was selected, which was confirmed to be related to Cu⁺ in Cu₂O. Hence, for the second and third steps of the experiment carried out with the sample Cu/C, we used the following expression: µ_LC = C_0 µ_0 + C_+ µ_+ + C_2+ µ_2+, where the linear coefficients were normalized and limited between 0 and 1. For the bimetallic cases, we used just two reference spectra: the last one collected before exposure to air, µ_m, and the spectrum presenting the highest degree of oxidation, µ_ox. The following expression was used for the LC of the XANES data collected for Pd0.5Cu0.5/C and Pd0.7Cu0.3/C during the last two steps of the experiment: µ_LC = C_m µ_m + C_ox µ_ox, where the linear coefficients were subject to the same restrictions as in the previous cases.
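The constrained linear combinations above amount to a bounded least-squares problem; a minimal sketch of the two-component case (µ_LC = C_i µ_i + C_f µ_f with C_i + C_f = 1 and 0 ≤ C ≤ 1) is given below. The synthetic spectra are placeholders; in practice the normalized XANES would come from ATHENA.

```python
# Constrained two-component LC fit: with C_i + C_f = 1 there is a single
# free coefficient C_f, bounded to [0, 1], found by bounded minimization.
import numpy as np
from scipy.optimize import minimize_scalar

def lc_fit(mu_exp, mu_i, mu_f):
    """Return (C_i, C_f) minimizing ||mu_exp - (C_i*mu_i + C_f*mu_f)||^2."""
    def residual(cf):
        model = (1.0 - cf) * mu_i + cf * mu_f   # enforces C_i + C_f = 1
        return np.sum((mu_exp - model) ** 2)
    res = minimize_scalar(residual, bounds=(0.0, 1.0), method="bounded")
    return 1.0 - res.x, res.x

# Hypothetical normalized XANES spectra on a common energy grid (eV).
energy = np.linspace(8960, 9060, 200)
mu_i = np.tanh((energy - 8985) / 5)     # stand-in for the as-prepared spectrum
mu_f = np.tanh((energy - 8979) / 3)     # stand-in for the fully reduced spectrum
mu_exp = 0.3 * mu_i + 0.7 * mu_f        # synthetic "measured" spectrum

print(lc_fit(mu_exp, mu_i, mu_f))       # ~ (0.3, 0.7)
```

The three-component Cu/C case generalizes this to two free coefficients with the same normalization and bound constraints.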
XANES data at the Pd-K edge (24350 eV) were collected for the Pd/C sample during the first step of the reaction (heating under CO flow). The XANES spectra were collected in the range between 24320 eV and 24440 eV with a 2 eV step and an acquisition time of 2 s/point. Considering the acquisition time and the delay due to monochromator movements, one scan was collected every 3.4 minutes. Thus, during the heating processes from RT to 450 °C, each spectrum covers a temperature range of about 26 °C. The XANES data were analyzed following the procedure described for the Cu-K edge XANES data. Figure SI2 shows the XANES spectra evolution of Pd/C collected at the Pd-K edge during the first step of the experiment. Three spectra are highlighted in the graph: the initial and final ones, which were used in the linear combinations, and the spectrum collected at 450 °C. Additionally, EXAFS data were collected before and after each experimental step. Each scan was acquired at the Cu-K edge (8979 eV) in the range of 8880 to 9970 eV with a 2 eV step and 2 s/point, and four scans were merged in order to improve the signal-to-noise ratio. The EXAFS signals χ(k) were extracted and then Fourier transformed (FT) using a Kaiser-Bessel window over a Δk range of 8.7 Å⁻¹. The FEFF 9.6 code 2 was used to obtain the phase shifts and scattering amplitudes. For the monometallic case, a Cu fcc (lattice parameter a = 3.9239 Å) cluster with a radius of 10 Å was used in the FEFF calculations. For the bimetallic cases, a 10 Å cluster of Pd-Cu fcc alloy (lattice parameter a = 3.75 Å) was used. The S₀² (amplitude reduction term) was fixed for all samples at 0.8, which is the value obtained from the fitting of a standard Cu foil. During the EXAFS signal fitting, the number of free parameters was always kept lower than the number of independent points in the fitted region. For the bimetallic samples, the distances and Debye-Waller factors for the Cu-Cu and Cu-Pd pairs were adjusted independently, but they assumed similar values, as can be seen in Tables SI1-3.

Table SI1. Cu/C: structural parameters obtained from the EXAFS analyses (columns: experiment step, pair, N^a, R (Å)^b, σ² (10⁻² Å²)^c).

Figure SI5. Cu/C: XANES spectra at the Cu-K edge collected at selected steps during the exposure to synthetic air.

Figure SI6. XANES spectra evolution of each sample collected at the Cu-K edge during the third step of the experiment (exposure to CO at 450 °C).
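As an illustration of the χ(k) → χ(R) step (Kaiser-Bessel-type apodization over the Δk range, then Fourier transform to R-space), the following is a minimal sketch on a synthetic single-shell signal; the window parameter, k-weight and normalization convention are assumptions, not the exact settings used in this analysis.

```python
# Minimal sketch: window a synthetic chi(k) over Delta-k = 8.7 A^-1 and
# Fourier transform it to |chi(R)|. A real chi(k) would come from the
# standard background-subtraction step (e.g. in ATHENA).
import numpy as np

k = np.linspace(2.3, 11.0, 436)                # A^-1, Delta-k = 8.7 A^-1
# Toy single-shell EXAFS signal peaking near R ~ 2.55 A.
chi = np.sin(2 * k * 2.55) * np.exp(-2 * 0.008 * k**2) / k

window = np.kaiser(k.size, beta=3.0)           # Kaiser(-Bessel) apodization
kw = 2                                         # assumed k-weighting
integrand = chi * k**kw * window

R = np.linspace(0.0, 6.0, 300)                 # Angstrom grid for chi(R)
dk = k[1] - k[0]
phase = np.exp(2j * np.outer(R, k))            # e^{2ikR} on the (R, k) grid
chi_R = (phase * integrand).sum(axis=1) * dk / np.sqrt(np.pi)

print("peak of |chi(R)| near R =", R[np.argmax(np.abs(chi_R))], "A")
```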
2021-11-04T00:09:02.129Z
2021-08-26T00:00:00.000
{ "year": 2021, "sha1": "7feba9d632f0564e377a7fed6ffb5c4bf3666e49", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2021/na/d1na00537e", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cb72abd5612be4df4dbb3a6540b7b505b8f79dcc", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
129520360
pes2o/s2orc
v3-fos-license
Structural and Stratigraphical Correlation of Seismic Profiles between Drigri Anticline and Bahawalpur High in Central Indus Basin of Pakistan

Publicly available seismic and well data are used to study the subsurface structure and stratigraphy of an area on the southern margin of the Central Indus Basin (CIB), Pakistan. The study area includes the southern parts of the Punjab Platform and Sulaiman Foredeep tectonic units of the CIB. A regional-scale East-West depth cross-section is prepared south of the hydrocarbon-bearing Safed Koh Trend towards the Punjab Platform. It gives the structural configuration of various formations of Paleozoic-Cenozoic times. Reflectors are marked and correlated with the help of the wells Drigri-01 and Bahawalpur East-01, located on seismic lines 914-RPR-03 and 916-YZM-05 respectively. These reflectors/formations are correlated with respect to their ages to avoid confusion, as there are many truncations in the area. Average velocities are used for the depth computation. The depth cross-section (AB) shows that the Punjab Monocline is a stable area with a shallow basement. In the Punjab Platform all the formations dip gently to the West; they then attain steep dips in the Sulaiman Foredeep/Depression area. The depth cross-section along the Drigri anticline, which lies to the SE of the Sakhi Sarwar anticline, reveals that it extends E-W over approximately 17 km and that reverse faults are present on both flanks of the fold, forming a pop-up structure. It is a low-amplitude fold, marking the southern end of the Safed Koh Trend (the first line of folding of the folded flank of the Sub-Sulaiman Foredeep). Subsurface structural variations at Bahawalpur show a buried high of Jurassic-

Introduction

The study area comprises the frontal fault-propagation fold zone of the Sulaiman Range, the Sulaiman Depression and the Punjab Monocline of Pakistan (Figure 1 & Figure 2). It is bounded by longitudes 70°06′E-72°16′E and latitudes 29°17′N-29°23′N. The seismic profiles from West to East are 914-RPR-03, 954-FZP-09, C95-LMT-05, W16-AT, B-01, PSPD-5085, PSPD-5340 and 916-YZM-05. Seismic line 914-RPR-03 lies in the NW of the Rajanpur area of District D. G. Khan, with the Drigri-01 and Kotrum-01 wells located at SP-270 and to the south of the profile, respectively. The Drigri and Kotrum anticlines mark the southern limit of the Safed Koh Trend. The Bahawalpur East-01 well is located at SP-300 on seismic line 916-YZM-05. An East-West depth cross-section AB shows the structural and stratigraphical variations in the formations. The surface geology is made up of alluvium and loose material mainly brought by rivers and from the desert in the East.
Tectonic and Depositional Setting

Tectonically, Pakistan comprises two domains of large landmasses, i.e. the Tethyan and Gondwanian domains, continued by the Indo-Pakistan crustal plate. The northernmost and western regions of Pakistan fall in the Tethyan Domain, which has complicated geology and a complex crustal structure, while the Indus Basin belongs to the Gondwanian domain [1]. The early rifting of micro-continents away from the northern margin of Gondwanaland can be discussed in terms of the development of the Paleo- and Neo-Tethys, with a spreading ridge in between [2]. The Cretaceous was a period of tectonic instability; the spreading rate was high, ~20-30 cm/a during 80-53 Ma ([3]-[5]). There was convergence between the Indian and Eurasian plates in Tertiary time. Along with the prominent convergence and the late Paleocene collision between the Indian and Eurasian plates in northern Pakistan, the area was also affected by the translation between the Indian plate and the Afghan Craton in the northwest [6] and by convergence between the Arabian Plate and the Afghan Craton ([7]-[11]). These studies suggest that the oblique collision of the Eurasian and Indo-Pakistan plates caused the development of large-scale, N-S-running, left-lateral strike-slip faults in the basement, which are responsible for the segmentation of the Indo-Pakistan Plate. Pakistan lies on the northwestern corner of the Indian Plate. The collision zone in northern Pakistan has been subdivided into the Main Karakoram Thrust (MKT), Main Mantle Thrust (MMT), Main Boundary Thrust (MBT) and Salt Range Thrust (SRT) [12]. Pre-Cambrian basement rocks are exposed along the Sargodha High. A lithospheric flexural bulge developed due to northward underthrusting of the Indian Plate and loading by south-verging thrust sheets [13]. This was also suggested by [12], who explained the tectonic configuration of the Sargodha Ridge as an outer "swell" due to loading of the Indian Shield by the Himalayan thrusts.

Sedimentary Basins

The two major sedimentary basins of Pakistan are the Indus Basin and the Baluchistan Basin. The Indus Basin is the largest basin in Pakistan, oriented in a NE-SW direction and including about 25,000 square kilometers of the SE part of Pakistan. Tectonically, the Indus Basin is a much more stable area compared to the other tectonic zones of Pakistan [1]. The main feature which controlled sedimentation in the proto-Indus Basin up to the Jurassic was the Precambrian Indian Shield, whose topographic highs exist in the form of the Kirana Hills (Sargodha High) and Nagar Parkar. It is the Sargodha High which is considered to be the divide between the Upper Indus Basin and the Lower Indus Basin.
The classification of the Indus Basin is: Upper Indus Basin (Kohat Sub-basin and Potwar Sub-basin); Lower Indus Basin (Central Indus Basin and Southern Indus Basin) (Figure 1). Another major feature of the basement topography is the Khairpur-Jacobabad High and its associated structures, which grew through the Jurassic and Cretaceous/Paleocene and divided the Lower Indus Basin further into two basins, namely the Southern and Central Indus Basins [14]. The Punjab Platform is the eastern part of the Middle Indus Basin in Pakistan, with the Sulaiman Depression and fold belt in the West, the Sargodha High in the North and the Pokhran High in the South [15]. The Jacobabad and Mari Kandhkot highs are together termed the Sukkur Rift. The Central Indus Basin is also named the Sulaiman Sub-basin [16]. It is subdivided (Figure 2) into the Punjab Platform, the Sulaiman Depression and the Sulaiman Fold Belt.

Punjab Platform: This unit marks the eastern segment of the Central Indus Basin. Tectonically it is a broad monocline dipping gently towards the Sulaiman Depression. The Punjab Platform is tectonically the least affected area because of its greater distance from the collision zone.

Sulaiman Depression: This depression is a longitudinally oriented area of subsidence; it becomes arcuate and takes up a transverse orientation along its southern rim. The Sulaiman Foredeep/Depression is a broad syncline with a very gentle, undisturbed eastern limb and a steeper western limb. The eastern limb has monoclinal dips and is over 200 km wide. The western flank of the depression includes the Zindapir Inner Folded Zone, while the Murri Bugti Inner Folded Zone lies in the South; to the East the depression merges into the Punjab Platform. The depth of the basement beneath this syncline is about 8 km.

Sulaiman Fold Belt: This is a major tectonic feature in the proximity of the collision zone and therefore contains a large number of disturbed anticlinal features. The trends of the structures are mainly East-West.

The general stratigraphy of the Central Indus Basin is given in Table 1 (Kadri, 1995).

Source Rocks
Shales of the Shinawari Formation and the Chichali Formation of Mesozoic age act as source rocks in the Punjab Platform and Sulaiman Depression. These source rocks have acquired sufficient maturity to generate large volumes of gas.

Reservoir Rocks
Proven reservoir rocks of Mesozoic age are the Lumshiwal, Samanasuk, Shinawari and Datta formations of the Punjab Platform area, with discoveries at the Tal, Chanda, Dhulian, Panjpir, Nandpur, Toot and Meyal fields.

Sealing Mechanism
The Drigri and Kotrum structures are gentle anticlines, and the seal is provided by Ranikot shales (Paleocene) on the Late Cretaceous Pab Sandstones.

Work Flow
All of the previous work done by respected authors has been very helpful in the East-West correlation of the seismic profiles. The boundaries are determined by the Drigri-01 well in the West and the Bahawalpur East-01 well in the East. The area in between is correlated by seismic lines, keeping in view the previously made cross-sections. On the basis of previous work and the well and seismic data, a few shortcomings are addressed: either the Paleozoic in the area was not correlated before, or the Pre-Cambrian, Paleozoic and Mesozoic (Cretaceous, Jurassic, Triassic) were not discussed separately. Also, depths were not mentioned in some cases, and the early cross-sections were made in the time domain, because depth conversion is very sensitive to the velocities used. Reflection seismology is a remote imaging method used in petroleum exploration [21].
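Because the depth conversion step is sensitive to the velocities used, it helps to see how simple the computation itself is. Below is a minimal sketch of the average-velocity time-to-depth conversion applied to picked reflectors; the horizon names, two-way times and velocities are illustrative placeholders, not values read from the surveys used in this study.

```python
# A toy time-to-depth conversion for picked reflectors. All horizon names,
# two-way times (TWT) and average velocities below are illustrative
# placeholders, not values from the surveys used in this study.

picks_twt = {"Base Tertiary": 1.20, "Top Jurassic": 1.85, "Basement": 2.60}  # TWT in s
v_avg = {"Base Tertiary": 2800.0, "Top Jurassic": 3400.0, "Basement": 4100.0}  # m/s

def depth_from_twt(twt_s: float, v_avg_ms: float) -> float:
    """Depth (m) = average velocity x one-way time, where one-way time = TWT / 2."""
    return v_avg_ms * (twt_s / 2.0)

for horizon, twt in picks_twt.items():
    depth_m = depth_from_twt(twt, v_avg[horizon])
    print(f"{horizon}: TWT {twt:.2f} s -> depth {depth_m:.0f} m")
```

Because the average velocity varies laterally, the same multiplication is repeated at each shot point along every line before the depth sections are plotted along profile AB.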
Seismic lines used in this study are 914-RPR-03, 954-FZP-09, C95-LMT-05, W16-AT, B-01, PSPD-5085, PSPD-5340 and 916-YZM-05. Reflectors are picked by matching the seismic with synthetics (Figure 3 & Figure 4) and using well tops (Table 2 & Table 3) for the parts where logs were not run or are absent. Reflectors are then correlated across the remaining seismic lines, keeping in mind the different survey parameters, datum planes and jump correlation. The next stage is reading the average velocities given on each seismic section, in order to account for lateral changes in velocity (Figure 5). After that, depth is computed by multiplying the average velocities by the one-way time of the reflectors. The depth cross-sections are plotted side by side along profile AB (Figure 6).

Results and Discussion
From the representative depth cross-section (Figure 6), prepared with the help of the seismic profiles, it is seen that all the formations dip gently to the West. They dip more steeply in the Sulaiman Foredeep. In the East, formations are successively truncated at the Base Tertiary. These stratigraphic traps offer a good possibility of hydrocarbon presence, provided an adequate seal is present.

There is a thrust fault along Drigri that cuts the Paleocene and Mesozoic sediments, with a vertical throw of about 10 msec. This anticline may have been formed at the expense of flow of Eocene shales; the flow of shales of Mesozoic age is also very prominent in the Drigri Anticline. Fault-propagation and fault-bend folds are the most important structural features, and they form important traps for hydrocarbons in foreland fold-and-thrust belts. The frontal Sulaiman and Khirthar ranges are the most prospective and productive line of folding compared with the middle Indus. The Sakhi Sarwar structure in the eastern Sulaiman Range is a fault-propagation fold. The Domanda, Dhodhak, Rodho, Zindapir, Fort Munro, Pirkoh, Loti, Uch and Mazarane structures in the Sulaiman and Khirthar Ranges are thrusted anticlines [19]. The Nagri and Chinji Formations, almost 1700 m thick, have been deposited in the Drigri Anticline; the Siwaliks are 800 m thick in the East. The Nari Formation (Oligocene) is overlain by the Gaj Formation in the Drigri Anticline, which onlaps (unconformably) onto Eocene strata in the East. The Nari Formation truncates in the East. The Eocene, Paleocene and Cretaceous are approximately 1300 m, 800 m and 150 m thick respectively in the West. The Eocene strata thin out to the East but still attain a thickness of more than 250 m there, suggesting the presence of more accommodation space for their sedimentation in the West. The oldest rocks encountered in the Punjab Platform through drilling are of the Infracambrian Salt Range Formation. Pre-Himalayan orogenic movements have resulted in prolonged uplifts/sea regressions causing unconformities. As a result, several salt-cored anticline structures are expected in the southern portion of this monocline ([14] [19]). Paleocene and Cretaceous strata diminish and onlap onto Jurassic sediments in the East. This suggests that the Base Cretaceous is an unconformable surface and that basement uplift occurred before the Cretaceous. In the subsurface, the Punjab Platform contains marine Paleozoic, Mesozoic and Neogene sediments. The zone is characterized by regional unconformities [16]. A series of small highs are present from North to South: Budhuana, Panjpir, Sarai Sidhu, Nandpur, Tola, Karampur, Bahawalpur East and Marot.

Conclusion
The depth cross-section AB shows a high in the subsurface formations at the Bahawalpur East-01 well location. In the East, Jurassic and Triassic sediments show their presence. The Jurassic and Triassic are 300 m thick in the East and thicken to the West, achieving thicknesses greater than 1000 m along the Drigri and Kotrum anticlines. The Permian, Cambrian and Pre-Cambrian formations, which are correlated from the Bahawalpur East-01 well, show thick sedimentation in the East: the Permian is 346 m, the Cambrian 414 m and the Pre-Cambrian (Salt Range Formation) 817 m thick. Pre-Cambrian and Paleozoic sediments may be present below the thick pile of Mesozoic sediments in the West, but they have not been drilled owing to the greater depths. The basement is uplifted in the East. It dips gradually westward and is more than 9 km deep beneath the deformation front in the eastern Sulaiman Range.

Figure 1. Research area lies in the Punjab Platform in the middle Indus Basin [22].
Figure 2. The profile covers the Punjab Platform, Sulaiman Foredeep/Depression and the edge of the frontal fault-propagation fold zone (Main Thrust Front) [12].
Figure 5. Average velocities are used for depth computation.
Figure 6. Depth cross-section AB in East-West direction.
Table 1. Stratigraphy of the Central Indus Basin.
Table 2. Well tops of Drigri-01, which lies in the West of depth cross-section AB.
Table 3. Well tops of Bahawalpur East-01, which lies in the East of depth cross-section AB.
2019-04-24T13:12:58.334Z
2014-10-13T00:00:00.000
{ "year": 2014, "sha1": "ca31da3f45cdfeb99d13433c0a57b7493359cf4f", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=50407", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "ca31da3f45cdfeb99d13433c0a57b7493359cf4f", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology" ] }
235747727
pes2o/s2orc
v3-fos-license
A study on factors affecting privacy risk tolerance to prevent the spread of COVID-19 in South Korea

South Korea has been evaluated as a country that is responding well to COVID-19. The Government of the Republic of Korea discloses where, when, and by which means of transportation people confirmed to have the virus have visited. Although disclosure of movement has contributed to flattening the curve and providing timely medical service, concerns about privacy infringement have also been raised. This article determines what factors influence privacy risk tolerance, looking specifically at threat severity, vulnerability, response efficacy, and response cost. We also provide implications for the preparation of better countermeasures for the government to implement.

COVID-19 has brought unprecedented challenges around the world. The World Health Organization recently said that roughly 1 in 10 people worldwide may have been infected by the coronavirus (Tuemmler et al., 2020). Since the pandemic started, each country has made efforts to prevent the further spread of COVID-19. According to a UN report, South Korea has managed COVID-19 most effectively (Sachs et al., 2020). One of the country's practices that made it possible to suppress transmission was isolating infected individuals and tracing the people who came in contact with them (Ministry of the Interior and Safety, 2020). To improve the accuracy of epidemiological investigation, contact tracing can be conducted by tracking credit card transaction records, CCTV footage, and mobile phone GPS data, all within the scope permitted by the Infectious Disease Control and Prevention Act (Government of the Republic of Korea, 2020a). Through this investigative process, the South Korean government discloses where, when, and by which means of transportation confirmed cases have visited (Government of the Republic of Korea, 2020a). This movement record helps the general public know whether they have been in contact with confirmed cases and take the necessary steps to protect themselves and minimize further spread of the virus (Government of the Republic of Korea, 2020b). Close contacts identified by epidemiological investigations are subject to self-quarantine, and their compliance with guidelines and health status is monitored (Government of the Republic of Korea, 2020a). With the GPS-based self-quarantine safety protection app developed by the government, people who are under quarantine are monitored to ensure they remain in their preregistered quarantine area (Ahn, 2020). This app automatically alerts both users and government officers when people under quarantine leave their preregistered areas (Government of the Republic of Korea, 2020a). Although the tracking and disclosure of movement records have contributed to flattening the curve and providing timely medical service (Government of the Republic of Korea, 2020b), concerns about privacy infringement have also been raised (Government of the Republic of Korea, 2020a; Joo & Shin, 2020; Zastrow, 2020). Although personally identifiable information is not made public (Government of the Republic of Korea, 2020a), quite a lot of information about confirmed cases is floating around on the internet (Zastrow, 2020), and some are trying to connect the dots and identify confirmed cases (Kim & Denyer, 2020).
Some people say they are more afraid of the psychological distress and social stigma caused by the disclosure of information regarding the people, times, and places they met than of the physical suffering from the infection (BBC, 2020). At the same time, according to a survey by Statistics Korea (2020), 78.2% of people think that human rights protection should be treated as a subordinate priority when preventive measures against the pandemic need to be strengthened. There is clearly disagreement among South Korean people regarding the extent to which privacy may be sacrificed. Such a circumstance can be explained from the perspective of risk tolerance, which indicates an individual's willingness to engage in behaviors that have a desirable goal, where achievement of the goal is uncertain and accompanied by the possibility of loss (Kogan & Wallach, 1964). Previous studies have examined risk tolerance concerning an individual's susceptibility to financial, environmental, and health risks (Csicsaky, 2001; Grable, 2008; MacCrimmon & Wehrung, 1986; Slovic, 2004). Disclosure of the movement of confirmed cases also has a desirable goal of protecting the public's health, but it is not guaranteed to achieve this goal and may involve personal privacy invasions (Joo & Shin, 2020; Zastrow, 2020). Therefore, this article aims to examine privacy risk tolerance for the disclosure of movement records suggested as a countermeasure for COVID-19 in South Korea. In particular, by analyzing factors influencing privacy risk tolerance, we provide implications for the preparation of better countermeasures for the government to implement.

Protection motivation theory
Protection motivation theory (PMT) uses a cost-benefit analysis to explain how precautionary measures are initiated or maintained. PMT was developed by Rogers (1975) to explain the effects of fear-inducing messages on behaviors and attitudes toward health. PMT has been applied and verified in several studies, in most cases on health-related topics. It has been applied in various ways to health-degrading behaviors such as smoking (Thrul et al., 2013) and drinking (Murgraff et al., 1999), to diseases such as cancer (McMath & Prentice-Dunn, 2005), and even to infectious diseases such as influenza A (Kim, 2010) and MERS (Yoo et al., 2016). According to the original PMT, protection motivation and attitude change are mediated by three cognitive components: the appraised severity of the depicted event (i.e., severity), the expectancy of exposure to the event (i.e., vulnerability), and belief in the efficacy of the recommended adaptive response in protecting self or others (i.e., coping efficacy; Rogers, 1975). Rogers (1983) developed a revised PMT that added the variables of cost, reward, and self-efficacy. Response costs refer to all costs that can be incurred by taking the adaptive coping response, and rewards include both intrinsic rewards (e.g., bodily pleasure) and extrinsic rewards (e.g., social approval) earned by not engaging in the adaptive response. Self-efficacy is an evaluation of whether an individual can perform an adaptive response well. The focus of PMT is on the cognitive mediating processes, and sources of information trigger two appraisal processes: the threat appraisal process and the coping appraisal process (Rogers, 1983). Threat appraisal evaluates the adaptive or maladaptive responses, while the coping appraisal process evaluates the ability to cope with the threat (Floyd et al., 2000).
In the threat appraisal process, rewards increase the probability of the maladaptive response, whereas the severity of the threat and vulnerability to the threat decrease the probability of selecting the maladaptive response (Rogers, 1983). In the coping appraisal process, response efficacy and self-efficacy increase the probability of selecting the adaptive response, whereas response costs decrease the probability (Floyd et al., 2000; Rogers, 1983). Regarding confirmed cases of COVID-19, disclosure of movement records is not performed by individuals but by government agencies. Therefore, the concept of self-efficacy is difficult to apply in this article, but we did include the variables of severity and vulnerability of COVID-19, the efficacy of disclosing movement records to prevent the spread of COVID-19, and threats to privacy from disclosing movement records as a response cost. It is also difficult to apply the concept of rewards obtained by not engaging in the adaptive response, because individuals cannot refuse to disclose movement records. Instead, response benefits that could be gained from the disclosure of movement records were included in the coping appraisal process. In a pandemic, the health of individuals and others is strongly interwoven due to contagiousness (Giritli Nygren & Olofsson, 2020), so the benefit variable was defined as a personal or social benefit from sharing movement records. Following the PMT, we hypothesized that threat severity (H1) and threat vulnerability (H2) would have a positive effect on privacy risk tolerance. In the coping appraisal process, response efficacy (H3) and response benefits (H5) were predicted to have a positive effect on privacy risk tolerance, while response costs (H4) were predicted to have a negative impact.

Institutional trust and social consensus
When it comes to preventing the spread of COVID-19, responses and outcomes at the social level are expected to be derived. In this context, we considered social-level variables, institutional trust and social consensus, which we term the collective appraisal in this study. Institutional trust can be defined as the confidence with which citizens assess how institutions will perform a particular action in a determined context, independent of whether they can monitor the action or not (Gambetta, 2000). Institutional trust allows individuals to follow the actions of the institution or the people associated with it (Lahno, 2002). Without institutional trust, citizens may not voluntarily comply with government demands and regulations (Nye & Zelikow, 1997; Smith, 2010). In a setting in which people trust the government or people in general, tolerance is expected to emerge (Berggren & Nilsson, 2014). Therefore, we hypothesized that institutional trust would have a positive impact on personal privacy risk tolerance (H6). Social consensus refers to the degree to which social agreement is obtained that a potential issue, here sharing the movement records of those who tested positive for COVID-19, is good (Jones, 1991). We hypothesized that social consensus would have a positive impact on the privacy risk tolerance of individuals (H7). The overall research model is described in Figure 1.

Methodology
To examine the research model, an online survey was conducted throughout South Korea by a professional online survey agency, Macromill Embrain. The survey targets were selected through a stratified sampling method based on the proportions of age, gender, and residential area groups in South Korea.
The final analysis included a total of 500 survey responses, consisting of 255 males (51%) and 245 females (49%) ranging in age from 20 to 69. Measurement items were derived from the relevant previous literature with slight modifications to fit the context of the research. All items were measured on a 7-point Likert scale, ranging from "1 = strongly disagree" to "7 = strongly agree." The measurement items related to threat severity and threat vulnerability were mainly adopted from Ifinedo (2012) and Park and Woo (2013). The measurement items for response efficacy, response costs, and response benefits were derived from Ifinedo (2012), Yan et al. (2014), and Vance et al. (2012). For institutional trust, measurement items used by Ervasti et al. (2019) and Turow and Hennessy (2007) were considered. The questionnaires about privacy risk tolerance were adopted from Bannier and Neubert (2016). Reliabilities, zero-order correlations, means, and standard deviations of the variables are reported in Table 1.

To examine the research model, regression analysis was conducted using SPSS 19. Before conducting the analysis, gender was dummy-coded, with males coded 0 as the reference group and females coded 1 as the comparison group. Continuous independent variables were mean-centered to protect against multicollinearity (Cohen et al., 2003). A hierarchical regression analysis was used: because gender and age as demographic variables are frequently reported to affect information privacy concerns in the previous literature (Paine et al., 2007; Sheehan, 1999; Smith et al., 2011), they were entered in the first block, and variables related to threat appraisal (threat severity and vulnerability), coping appraisal (response efficacy, costs, and benefits), and collective appraisal (institutional trust and social consensus) were put into the second block.

Results
Our hypotheses predicted that threat severity, threat vulnerability, response efficacy, response costs, response benefits, institutional trust, and social consensus would be predictors of privacy risk tolerance related to the disclosure of the movement records of those who tested positive for COVID-19. Table 2 shows the results of the multiple regression analysis. The overall model was significant, F(10, 489) = 77.04, p < .001, adjusted R2 = .60. When gender, age, and the extent to which one searches for information related to COVID-19 were entered into the regression equation in the first block as control variables, age (b = .13, t = 3.01, p = .003) and information searching (b = .14, t = 3.17, p = .002) were statistically significant. Among the predictors in the second block, response costs (b = -.24, t = -7.79, p < .001), response benefits (b = .11, t = 2.34, p = .020), social consensus (b = .42, t = 7.30, p < .001), and institutional trust (b = .20, t = 6.20, p < .001) were statistically significant predictors of privacy risk tolerance, while threat severity (b = -.07, t = -1.75, p = .081), threat vulnerability (b = .00, t = 0.07, p = .944), and response efficacy (b = .05, t = 0.97, p = .332) were not. While H4 (response costs), H5 (response benefits), H6 (institutional trust), and H7 (social consensus) were accepted, H1 (threat severity), H2 (threat vulnerability), and H3 (response efficacy) were rejected. Unlike in most previous studies, which have shown that the severity and vulnerability of the threat and the efficacy of the response facilitate adaptive intentions or behaviors (Floyd et al., 2000), these variables did not significantly predict privacy risk tolerance. (Note to Table 2: sr = semipartial correlation; IS = information searching for COVID-19; the remaining abbreviations are the same as in Table 1. Regarding multicollinearity, the predictors had variance inflation factors (VIF) ranging from 1.00 to 4.15, below the traditional rule-of-thumb threshold of 10 (Cohen et al., 2003). *p < 0.05. **p < 0.01. ***p < 0.001.)
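For readers who want to reproduce this kind of two-block hierarchical regression outside SPSS, a minimal sketch using Python's statsmodels is shown below. The synthetic data, variable names, and reduced predictor set are illustrative stand-ins, not the study's dataset or full model.

```python
# A sketch of a two-block hierarchical OLS regression. The data below are
# randomly generated stand-ins and the predictor set is abbreviated; the
# study itself used SPSS 19 with seven predictors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),        # dummy-coded: 0 = male, 1 = female
    "age": rng.integers(20, 70, n).astype(float),
    "info_search": rng.normal(4.0, 1.5, n),
    "severity": rng.normal(6.0, 0.8, n),    # illustrative 7-point Likert scores
    "resp_costs": rng.normal(4.0, 1.2, n),
    "inst_trust": rng.normal(4.0, 1.3, n),
    "tolerance": rng.normal(4.0, 1.4, n),   # DV: privacy risk tolerance
})

# Mean-center the continuous predictors to protect against multicollinearity.
continuous = ["age", "info_search", "severity", "resp_costs", "inst_trust"]
df[continuous] -= df[continuous].mean()

# Block 1: demographic control variables only.
block1 = ["gender", "age", "info_search"]
m1 = sm.OLS(df["tolerance"], sm.add_constant(df[block1])).fit()

# Block 2: add threat, coping, and collective appraisal predictors.
block2 = block1 + ["severity", "resp_costs", "inst_trust"]
m2 = sm.OLS(df["tolerance"], sm.add_constant(df[block2])).fit()

print(f"Adjusted R2: block 1 = {m1.rsquared_adj:.3f}, block 2 = {m2.rsquared_adj:.3f}")
```

Comparing the adjusted R2 of the two blocks mirrors the hierarchical logic of entering control variables first and the theoretical predictors second.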
The result could be explained by the mean scores and standard deviations of the severity and vulnerability of COVID-19 and the efficacy of disclosing movement records. As can be seen in Table 1, only these three variables have a mean score of more than 6, and their standard deviations are also less than 1. Most respondents were aware of the dangers of COVID-19 and agreed on the effectiveness of the countermeasures.

Discussion and conclusion
This article aimed to examine the factors influencing privacy risk tolerance for the disclosure of movement records suggested as a countermeasure for COVID-19 in South Korea. Unlike in the existing literature, because the majority of the public recognized the specificity and significance of COVID-19 and the effectiveness of disclosing movement records as a countermeasure, it was found that threat severity, vulnerability, and response efficacy did not affect privacy risk tolerance significantly. On the other hand, response costs and benefits were found to be significant predictors. In addition, institutional trust and social consensus, the variables devised for collective appraisal in this article, were found to have a positive effect on privacy risk tolerance. In the face of the global threat of COVID-19, countries around the world are taking various actions to prevent its spread. South Korea is often mentioned as an exemplary case of controlling the spread of the virus by disclosing the movement records of confirmed cases. However, there are scant academic studies responding to the privacy concerns caused by the disclosure of this information. In addition, because the guidelines on information disclosure have been continuously changed even in South Korea, it is necessary to investigate privacy risk tolerance for movement record disclosure and to establish a clear agreement on the degree of disclosure. Since other countries are showing great interest in South Korea's best practices, this study is expected to provide very timely and important implications. Another contribution of this research is the examination of collective appraisal within the PMT framework. Institutional trust and social consensus had a strong positive impact on privacy risk tolerance. This finding is particularly important as the influence of institutional trust and social consensus was relatively greater than that of the other variables in this study, although the collective appraisal process has not been considered in the existing research on protection motivation theory. In a pandemic, the information accessible to individuals may be insufficient or inaccurate (Kim & Kreps, 2020), so the government plays a critical role in effective crisis management and control (Ou et al., 2014). Also, because infectious diseases are highly contagious, the problem cannot be solved by individual caution alone; all members of society must work together (WHO, 2020). Accordingly, we show that a collective appraisal process is necessary to evaluate privacy risk tolerance related to the disclosure of the movement records of confirmed cases as a measure to prevent the spread of infectious diseases.
Accordingly, our findings suggest that a collective appraisal process needs to be included in the PMT when a threat is societal or a collective effort is required to respond to it. Including the collective appraisal process in the PMT meaningfully expands the scope of theoretical discussion.
2021-07-07T13:08:07.652Z
2021-07-06T00:00:00.000
{ "year": 2021, "sha1": "9f29d9fd9f381ae5509e6d885a33f61d1a2feb49", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.bushor.2021.07.002", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "995fb0a34708c0d2c96406106f41794671983280", "s2fieldsofstudy": [ "Political Science", "Law" ], "extfieldsofstudy": [ "Medicine" ] }
227152501
pes2o/s2orc
v3-fos-license
Short stature and SHOX (Short stature homeobox) variants—efficacy of screening using various strategies

Background
SHOX mutations have previously been described as causes of Léri-Weill dyschondrosteosis (LWD), Langer mesomelic dysplasia (LMD), and idiopathic short stature. The loss of an X chromosome (Turner syndrome, or mosaic 45,X/46,XX or 45,X/46,XY) also leads to the heterozygous loss of SHOX in patients with short stature only or with features similar to LWD. The aim of this study was to assess the efficacy of targeted screening for SHOX variants, involving different laboratory methods, in the analysis of short stature. We determined the significance and positive predictive value of short stature for the detection of SHOX variants.

Methods
Targeted screening for variants in SHOX involving MLPA, sequencing, karyotyping and FISH was performed in a short stature cohort (N = 174) and a control cohort (N = 91). The significance of short stature and particular characteristics for the detection of SHOX variants was determined by Fisher's exact test, and the probability of SHOX mutation occurrence was calculated using a forward/stepwise logistic regression model.

Results
In total, 27 and 15 variants influencing SHOX were detected in the short stature and control cohorts, respectively (p > 0.01). Sex chromosome aberrations and pathogenic CNVs resulting in a diagnosis were detected in eight (4.6%) and five (2.9%) patients of the short stature group and in three (3.3%) and one (1.1%) individuals of the control group. VUS variants were discovered in 14 (8.0%) and 11 (12.1%) individuals of the short stature and control groups, respectively. MLPA demonstrated a detection rate of 13.22% and can be used as a frontline method for the detection of aberrations involving SHOX. However, only mosaics of monosomy X with a higher frequency of monosomic cells could be reliably discovered by this method. Karyotyping and FISH can compensate for this limitation; their detection rates in the short stature group were 3.55% and 13.46% (N = 52), respectively. FISH proved to be more effective than karyotyping in this study, as it could reveal cryptic mosaics in some cases where karyotyping initially failed to detect such a clone. We suggest adding FISH on a different tissue than peripheral blood to verify the sex-chromosome constitution, especially in cases with the karyotypes 45,X, mosaic 45,X/46,XX or 45,X/46,XY, or 46,X,idic(Y) detected from blood, and in children where mosaic 45,X was detected prenatally but was not confirmed from peripheral blood. The correlation of short stature with the occurrence of SHOX mutations was insignificant, and short stature demonstrates a low positive predictive value (15.5%) as a unique indicator of SHOX mutations.
The typical skeletal signs of LWD, including Madelung deformity and disproportionate growth, positively correlate with the findings of pathogenic SHOX variants (p < 0.01) by Fisher's exact test, but not with the findings of VUS variants in SHOX, which are more prevalent in individuals with idiopathic short stature or in individuals with normal height.

INTRODUCTION

Background
Growth retardation, a common condition leading to reduced height, is defined as a deviation of an individual's height of more than two standard deviation scores (SDS) below the mean for the population or the estimated familial target height (Amin, Mushtaq & Alvi, 2015). Short stature can be caused by non-genetic factors, such as nutrition, chronic systemic disorders, and emotional or psychosocial deprivation. Most forms of short stature, however, have genetic causes (Turner syndrome, Léri-Weill dyschondrosteosis, Langer mesomelic dysplasia) (Seaver, Irons & American College of Medical Genetics Professional Practice and Guidelines Committee, 2009). SHOX gene mutations have previously been described as causes of Léri-Weill dyschondrosteosis (LWD), Langer mesomelic dysplasia (LMD) and idiopathic short stature (ISS), and SHOX haploinsufficiency is described as the cause of growth restriction in Turner syndrome (TS) (Belin et al., 1998; Rao et al., 1997; Benito-Sanz et al., 2005; Benito-Sanz et al., 2006; Benito-Sanz et al., 2011; Hirschfeldova et al., 2012; Zinn et al., 2002; Campos-Barros et al., 2007; Rappold et al., 2002). A wide spectrum of SHOX variants has been identified so far; however, not all can be directly associated with short stature in patients. ISS is a condition in which the height of the individual is more than 2 standard deviations (SD) below the corresponding mean height for a given age, sex, and population, and in which no identifiable disorder is present (Wit et al., 2008). Heterozygous mutations of SHOX and/or its regulatory elements are detected in approximately 70% of LWD patients and involve 70-80% large deletions, 2-6% partial deletions, and 20-25% point mutations (Benito-Sanz et al., 2005; Benito-Sanz et al., 2006; Benito-Sanz et al., 2011; Binder, 2011; Caliebe et al., 2012). Homozygous or compound heterozygous mutations of SHOX and/or its downstream enhancers are detected in 75% of LMD patients (Benito-Sanz et al., 2006; Benito-Sanz et al., 2012; Chen et al., 2009; Huber et al., 2006). In patients with ISS, the prevalence of SHOX mutations varies from 2-15% depending on other clinical features and the technologies used (Benito-Sanz et al., 2006; Benito-Sanz et al., 2012; Chen et al., 2009; Huber et al., 2006). However, the number of detected variants, including intronic mutations influencing the splicing of SHOX, with either unambiguous or unclear significance for the linear growth of individuals, is still growing in the group of children with short stature (Thomas et al., 2009; Alharthi et al., 2017; Sandoval et al., 2014; Durand et al., 2011). Chromosomal abnormalities of the sex chromosomes that lead to the heterozygous deletion of SHOX are a cause of short stature in patients with TS or patients with ISS (Oliveira & Alves, 2011). There are also similarities in the skeletal markers of TS and LWD (Soucek et al., 2013). Duplications have also been reported in LWD and ISS patients (Benito-Sanz et al., 2011).

Clinical significance
GH therapy is recommended for patients with LWD/TS (Blum et al., 2013; Lebl & Zapletalova, 2011; Amin, Mushtaq & Alvi, 2015).
That is why the diagnosis of these syndromes is of great importance, especially in early childhood. Finding an optimal balance between the cost and effectiveness of testing in the population of children with short stature is still under debate, especially in children with ISS (Cohen et al., 2008; Sisley et al., 2013; Collett-Solberg et al., 2019). This group of patients is heterogeneous, and the growth restriction is often isolated. Determining the causes of growth failure in these patients is a challenge for clinicians, as other symptoms typical of LWD might manifest later in childhood or during puberty, e.g., Madelung deformity (Binder, 2011; Fukami et al., 2004). However, GH treatment should start before the initiation of puberty. Several clinical prediction rules based on multiple anthropometric measurements (Rappold et al., 2007) or the sitting height-to-standing height ratio (Binder, 2011) have been suggested to select ISS patients who have a higher probability of having SHOX variants. However, none of these criteria has a high positive predictive value, and the clinical utility of these systems is limited by the highly variable clinical presentation of SHOX deficiency (Dauber, Rosenfeld & Hirschhorn, 2014). Thus, exclusion of mutations in SHOX is usually indicated in clinical settings based only on short stature. Most textbooks and the previous GRS consensus on the topic of short stature recommend routine laboratory screening for occult disease in asymptomatic short children (Cohen et al., 2008). Karyotyping is the standard technique for the exclusion of sex chromosome aberrations, namely TS (including mosaics), and MLPA and sequencing are tools for the detection of subtle changes in the SHOX gene. However, even among patients with mosaic 45,X/46,XX/46,XY we can find very mild phenotypes which do not actually suggest an aberration of the sex chromosomes. Our hypothesis was therefore that height might be a good biomarker for the detection of any variants in SHOX. We used a testing algorithm in children presenting primarily with short stature for three years. We performed extensive screening involving the above-mentioned methods to assess how effective this screening is, whether short stature is a "stand-alone" predictor of SHOX mutations, and how effective the particular methods are.

Aim of study
The study design is retrospective. The aim of the study was to assess the efficacy of screening using karyotyping, FISH, MLPA, and Sanger sequencing to detect mutations of the SHOX gene in children with short stature (cut-off -2 SDS). We assessed the positive predictive value of short stature for the detection of SHOX aberrations. We also assessed the significance of particular characteristics (facial dysmorphia; Madelung deformity; skeletal LWD markers, i.e., disproportionate growth/mesomelia/rhizomelia/wrist changes/shortening of the fourth and fifth metacarpals/tibial bowing/muscular hypertrophy; heart malformations; other congenital malformations; hypospadia and hypogonadism; micrognathia; neurodevelopmental disorders; microcephaly; macrocephaly; history of IUGR/SGA; and family history of short stature) in the detection of SHOX mutations/variants.

Participants
Patients with short stature (deviation more than -2 SDS) (N = 174) (age: 2-19 years; mean = 8.6; F = 101, M = 73) as well as a control cohort (deviation less than -2 SDS) (N = 91) (age: 3-19 years; mean = 8.2; F = 55; M = 36), all of Caucasian ethnicity, were recruited from the Department of Medical Genetics and the Department of Pediatrics, University Hospital Olomouc.
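A minimal sketch of the -2 SDS inclusion criterion is given below; the reference mean and SD are hypothetical placeholder values, not the growth-reference data used at the clinic.

```python
# A toy check of the -2 SDS inclusion criterion. The reference mean and SD
# are hypothetical placeholders, not the growth-reference values used at
# the clinic.

def height_sds(height_cm: float, ref_mean_cm: float, ref_sd_cm: float) -> float:
    """Standard deviation score against an age- and sex-matched reference."""
    return (height_cm - ref_mean_cm) / ref_sd_cm

# Example: height 115 cm against an assumed reference of 128 cm (SD 5.5 cm).
sds = height_sds(115.0, 128.0, 5.5)
print(f"SDS = {sds:.2f}")  # -2.36
print("short stature cohort" if sds < -2.0 else "control cohort")
```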
Endocrine and metabolic disorders were excluded before genetic diagnostic testing was performed. Height measurements and calculations of Z-scores, as well as anthropometric measures to determine disproportionate short stature (based on the sitting height/height ratio), mesomelia, rhizomelia, etc., were performed at the first visit. Bone age was estimated from X-rays. Disproportionate growth was based on measurement of sitting height. Whenever family members were tested, the results were assigned to the short stature group or the control cohort according to their height. Signed informed consent was obtained from all participants. Clinical data of the patients were collected from medical records. General observations of the characteristics/comorbidities in patients were made by a geneticist or paediatrician. Apart from growth restriction, the following characteristics/comorbidities were scored: facial dysmorphia; Madelung deformity; skeletal LWD markers (disproportionate growth/mesomelia/rhizomelia/wrist changes/shortening of the fourth and fifth metacarpal bones/tibial bowing/muscular hypertrophy); microcephaly; macrocephaly; heart malformations; other congenital malformations (heart/renal/urogenital/brain); micrognathia; neurodevelopmental disorders; history of IUGR/SGA; and family history of short stature.

The control cohort consisted of children and teenagers with deviations smaller than -2 SDS, patients tested by MLPA, FISH, or karyotyping for different conditions (in whom a SHOX variant was detected as an incidental finding), and adults (parents; volunteers, including laboratory staff and students) with deviations smaller than -2 SDS. As we primarily focused on height as a biomarker for the detection of SHOX variants, we used the same criterion for the control and tested groups regardless of other symptoms. In volunteers and children tested for different conditions, the DNA samples were anonymized. Signed informed consent was obtained from all participants or their legal representatives. All procedures were conducted in accordance with the Declaration of Helsinki. The study was approved by the ethical committee of University Hospital Olomouc (NU20-07-00042 and SUG 87-82).

Methods
Screening for SHOX aberrations involved karyotyping (169 short stature group and 49 control), MLPA (174 short stature group and 86 control), Sanger sequencing (154 short stature group and 25 control), and FISH (52 short stature group and 11 control) on buccal smears in relevant cases. The flow chart shows the scheme of the screening (Fig. 1). Cytogenetic analysis was performed on cultured lymphocytes by conventional G-banding with a resolution of 550 bands per haploid set. I-FISH on buccal smears was performed with alpha-satellite X/Y Satellite Enumeration Probes (Cytocell, Cambridge, UK) on interphase nuclei to exclude mosaic monosomy X in relevant cases. Confirmation of idic(Y) and i(X) was performed with SHOX probes (Cytocell, Cambridge, UK). At least 100 cells were checked. A cut-off value was calculated from healthy controls for the particular tissues, genders, and probes as the mean + 3 SD. Where suitable, a microarray (Agilent, Affymetrix) was added to determine the extent of gains and losses. Genomic DNA was isolated from peripheral blood using the saline method. MLPA (Multiplex Ligation-dependent Probe Amplification) was performed with the probemix SALSA MLPA P018-G1 (SHOX) (MRC Holland). Fragment analysis was performed using an ABI Prism 3130 automated sequencer. Peak areas were assessed with GeneMapper software (ABI Prism) and the resulting ratios were calculated using Coffalyser software (MRC Holland). We double-checked (two runs) the variants or verified them with another probemix (P070, P036).
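The dosage logic behind the MLPA analysis (performed here by Coffalyser) can be sketched as follows: each probe's peak area is normalised within its sample against reference probes, and the normalised value is then divided by the corresponding value in control samples. The probe names and peak areas below are invented for illustration; the actual software applies additional normalisation and quality checks.

```python
# A toy illustration of the dosage-ratio logic behind MLPA analysis. Probe
# names and peak areas are invented; the actual Coffalyser pipeline applies
# further normalisation and quality control.
patient = {"SHOX_ex2": 480.0, "SHOX_ex4": 455.0, "REF_1": 1000.0, "REF_2": 980.0}
control = {"SHOX_ex2": 960.0, "SHOX_ex4": 940.0, "REF_1": 1010.0, "REF_2": 990.0}
ref_probes = ["REF_1", "REF_2"]

def dosage_ratio(sample, reference, probe, refs):
    """Normalise a probe against reference probes within each sample, then
    take the patient/control ratio (~1.0 normal, ~0.5 deletion, ~1.5 duplication)."""
    norm_sample = sample[probe] / sum(sample[p] for p in refs)
    norm_reference = reference[probe] / sum(reference[p] for p in refs)
    return norm_sample / norm_reference

for probe in ("SHOX_ex2", "SHOX_ex4"):
    r = dosage_ratio(patient, control, probe, ref_probes)
    call = "deletion" if r < 0.7 else "duplication" if r > 1.3 else "normal"
    print(f"{probe}: ratio {r:.2f} -> {call}")  # both ~0.5, a heterozygous deletion
```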
Sequence analysis of the coding regions was performed by Sanger sequencing: exons 2-6a and 6b and the nearest intronic flanking codons. The sequences of the primers were as follows: 2R GTGCACAGCGAGGGGC, 2F ACGGGCCGTCCTCTCC, 3R CGTCTCCAAAAGTCCAGGAACC, 3F GAGTATCCTCCTCGGCTTTTGC, 4 and 5R AGGGACTAGGAGTGTCAGGATG, 4 and 5F CAAAGTGCTTGGTTCAGCCTC, 6aR GAAGGAGCTCCAGGCGGGGTTG, 6aF TAGGGGAGAAGAGGCACGTTG, 6bR GGATCACCTGAGGTCAGGAGTT, 6bF TTCACCGTGTTAGCCAGGA. Capillary electrophoresis was performed using a 3130 automated sequencer (ABI Prism) and analysis using GeneMapper software (ABI Prism). We used NCBI RefSeq NG_009385.2, NM_000451.3, and NM_006883.2 for variant calling, and the found variants were described according to the HGVS recommendations for the description of DNA changes. The clinical significance of the variants was determined using the genomic databases ClinVar, Varsome, LOVD, and Ensembl. We double-checked (two runs) the pathogenic variants. The origin of the variants was determined in the parents whenever possible. In cases of chromosomal aberrations, the status of the SHOX gene was always verified by MLPA and/or FISH. The significance of particular co-morbidities for the detection of SHOX variants was determined by Fisher's exact test, and the probability of SHOX mutation occurrence with particular co-morbidities was calculated using a forward/stepwise logistic regression model. All calculations were performed by the analytical company Acrea using PS Imago Pro software. SHOX-negative patients were further tested by CMA and targeted sequencing of relevant genes (e.g., RASopathy genes, NPR2, etc.) consistent with their further symptoms.

RESULTS
Aberrations of SHOX caused by structural or numerical changes in the sex chromosomes were revealed by karyotyping in a total of 6 out of 169 (3.55%) patients with short stature (Fig. 2, Table S1). All of these cases were also confirmed and specified by MLPA in our study (Fig. 3). Of these, we also obtained concordant results in 3 cases with FISH on a different tissue (buccal smear). However, we also had discordant findings on the basis of FISH performed on buccal swabs: using FISH on buccal smears we could discover the presence of chromosomally different cell clones (45,X or 47,XXX) which were not detected by karyotyping in 4 cases overall in the short stature group (see the cases below; Table S1). The overall detection rate using FISH was higher than that of karyotyping: 7 out of 52 (13.46%) in short stature patients (Fig. 3).

Case 1 (Table S1): A small mosaic of 47,XXX (9%) was found by FISH in the buccal swab of a girl (ID 1255/15) manifesting the phenotype of TS, with karyotype 45,X from peripheral blood. The patient was a TS mosaic.

Case 2 (Table S1): A normal male karyotype was detected in a patient (ID 1758/18) with short stature (-3.52 Z-score) and IUGR. Through MLPA we discovered a SHOX duplication, which finally proved to be an idic(Y) by FISH. Moreover, we confirmed the presence of a minor 45,X clone in the patient's buccal swab. We also verified the presence of such a cell clone through karyotyping, 45,X[3]/46,X,idic(Y)(q11.223)[35], by subsequent checking of additional specimens. This mosaic and the idic(Y) would not have been detected without FISH and MLPA, respectively. The case was further delineated by CMA (Table S1, Fig. 4).
The conclusion of the investigation was a mosaic of monosomy X and idic(Y), which could better explain the short stature in the patient.

Case 3 (Table S1): A large deletion of Yq was suspected on the basis of karyotyping (Table S1) in a boy with unexplained growth restriction (-3.0 Z-score) (ID 83/14) and IUGR. A SHOX duplication was detected by MLPA. The finding was concluded to be an idic(Y) based on the results of MLPA and FISH with the SHOX probe. A minor 45,X clone was also detected in the buccal smear of the patient by FISH (Table S1), which could explain the growth restriction in the patient.

Case 4 (Table S1): Aberrant karyotypes were recorded prenatally in a female, from her amniotic fluid and cord blood respectively, but these were not detected in peripheral lymphocytes postnatally. We assume that X-aneuploid cells were suppressed from mitotic division during development in at least some tissues.

CNVs of SHOX were discovered in an additional 16 individuals with normal karyotypes and normal FISH results (Figs. 3 and 4). Of these, 4 were pathogenic deletions spanning the whole SHOX gene and 1 was a pathogenic deletion of the regulatory elements of SHOX (Table 1, Fig. 4). In one case the deletion resulted from a balanced translocation t(X;13) in the mother. CNVs detected in a further 11 patients were classified as VUS (Fig. 2). The overall detection rate of MLPA was 13.2% (23 out of 174 tested). Duplications of SHOX regulatory elements were the most frequent mutation detected in the short stature cohort (Table 1).

Figure 3. Contribution of the particular methods to the detection of SHOX variants in the short stature cohort (N = 27 detected CNVs). Pathogenic findings were made by three methods in 5 samples* and by two methods in one sample**; however, each method contributes differently to clarifying patients' phenotypes. By FISH on a buccal smear we were able to detect a different clone (47,XXX in 1 of 5 samples, 45,X in 1 of 5 samples). In one sample** we detected a duplication of SHOX by MLPA, but a 45,X clone was discovered by FISH on the buccal smear. One case (a mosaic of 45,X) was detected exclusively by FISH. The method helped to specify the findings in 3 cases altogether (arrows), and a result concordant with MLPA and karyotyping was achieved in a further 3 cases by FISH. (The detection rate for the particular methods relative to the number of performed tests is stated in the text.) Full-size DOI: 10.7717/peerj.10236/fig-3

The frequency of sequence variants in the coding regions and the flanking areas of these regions by Sanger sequencing was 1.95% (3 out of 154 tested) (Fig. 3). However, only intronic VUS were detected: the heterozygous nucleotide change NM_000451.3(SHOX):c.545-10T>C (rs370327147) in the 4th intron and NM_000451.3(SHOX):c.486+45G>A in the 3rd intron, both inherited (Fig. 4). The same spectrum of SHOX aberrations was also found in the control cohort (Figs. 2 and 3, Table 1). Although we detected a higher rate of SHOX variants in females than in males in the short stature group, there was no significant difference between the variant frequencies in the two genders. However, a reversed ratio of SHOX variant frequencies between females and males was detected in the control group compared with the short stature group (Table 1). SHOX variants in the male control cohort were more than twice as frequent as in males of the short stature group (p = 0.0519; p < 0.10).
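A minimal sketch of the kind of 2x2 Fisher's exact comparison used in this study is shown below, using the overall cohort counts from the abstract (27 of 174 vs 15 of 91); the authors' per-subgroup contingency tables may differ.

```python
# A sketch of the 2x2 Fisher's exact comparison of variant frequencies
# between cohorts, using the overall counts from the abstract (27 of 174
# vs 15 of 91); the authors' per-subgroup contingency tables may differ.
from scipy.stats import fisher_exact

#        variant found, variant absent
table = [[27, 174 - 27],   # short stature cohort
         [15, 91 - 15]]    # control cohort

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, two-sided p = {p_value:.4f}")
```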
Short stature as a stand-alone marker was insignificant (p = 0.8621; p > 0.01) for the detection of SHOX variants, either pathogenic or VUS, using Fisher's exact test (Table 2). The typical skeletal signs of LWD, including Madelung deformity and disproportionate growth, positively correlate with the findings of pathogenic CNV SHOX variants (p < 0.01; Table 2), but not in the group of patients with detected VUS variants (Table 2). A correlation with increased BMI was observed in this subgroup (Table 2). Hypospadia, hypogonadism, and Madelung deformity showed increased significance in our study due to sex chromosome aberrations (Table 2). The positive predictive value of the screening (karyotyping, MLPA, FISH) was low (15.5%) if we depend only on short stature defined as a deviation larger than -2 SDS. SHOX variants, especially VUS involving duplications and intronic variants in SHOX and duplications and deletions of enhancers, were also recorded in children with smaller deviations (Table S1). A broad range of phenotypes, from variable non-specific symptoms to asymptomatic, accompanied these variants. The further scored co-morbidities and characteristics were insignificant for the detection of SHOX variants (Table 2). By logit regression, short stature increases the likelihood ratio for the detection of relevant SHOX aberrations leading to a diagnosis, provided it is accompanied by disproportionate growth, markers of skeletal dysplasia, or Madelung deformity (p = 0.000 and 0.011).

DISCUSSION
Screening for SHOX variants in the short stature group yielded variants in 15.5% of patients in our study. This shows that short stature as a stand-alone indicator of SHOX abnormalities has a low positive predictive value. Only VUS were detected in the nonsyndromic short stature group. The correlation of short stature with findings of SHOX aberrations was not significant in any of the particular variant subgroups by Fisher's exact test if we take -2 SDS as the cut-off value. However, the small size of the particular subgroups (with detected variants) means that this conclusion might be biased. Despite the fact that we were not able to prove a significant difference between the short stature and control groups, it was evident that the heterozygous loss of SHOX (large deletions), consistent with LWD and TS diagnoses in patients, has an impact on linear growth, whereas other variants showed a milder impact. Moreover, we observed a reversed ratio of the frequencies of SHOX variants in females and males when comparing the short stature group and the control group. A higher frequency of SHOX mutations has been reported in short stature females than in males (Rappold et al., 2007). As we detected a higher rate of SHOX variants in males of the control cohort than in the short stature group, we assume that the impact of these variants on the height of females might be more pronounced than in males. An adverse effect of estrogen at the SHOX-deficient growth plate in LWD females has been hypothesized previously (Fukami et al., 2004). It is questionable whether males with SHOX variants have an advantage over females in achieved final height because of their later maturation. More study is necessary to test this hypothesis. Disproportionate growth/other skeletal markers for LWD (p < 0.01) and Madelung deformity (p < 0.01) were significant features for the relevant SHOX CNV aberrations, which has been described previously (Wolters et al., 2013; Dávid et al., 2017; Rappold et al., 2007; Zapletalova et al., 2010).
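As a quick arithmetic check, the 15.5% positive predictive value quoted above follows directly from the cohort counts; a minimal sketch:

```python
# How the 15.5% positive predictive value quoted above follows from the
# cohort counts: variants detected among all short stature patients screened.
variants_detected = 27     # SHOX variants found in the short stature cohort
patients_screened = 174    # short stature patients tested

ppv = variants_detected / patients_screened
print(f"PPV of short stature as a stand-alone indicator: {ppv:.1%}")  # 15.5%
```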
The likelihood ratio for SHOX CNV aberrations increases when Madelung deformity and/or disproportionate stature/skeletal markers for LWD manifest in patients. These markers reflected the haploinsufficiency of SHOX in the case of a complete deletion without short stature in the mother of a boy with short stature (Table S1, cases 1437 and 1438/14). On the contrary, we have not confirmed a significant correlation between sex chromosome aberrations and skeletal markers except for Madelung deformity, which might support the conclusions of the study by Soucek et al. (2013). Significantly higher BMI and a correlation with dysmorphic features have been observed in some studies (Dávid et al., 2017; Rappold et al., 2007; D'angelo et al., 2018). We observed a correlation with higher BMI in the group of patients with VUS variants, but we did not prove a correlation with skeletal markers in this group. However, we should take into account that the mean age in our short stature cohort was 8.6 years, so the majority of patients probably did not manifest skeletal deformities at the time. Hypospadia, hypogonadism, and ambiguous genitalia are more obligate signs of mosaic 45,X/46,XY or 45,X/46,X,idic(Y)(q11.2) than short stature (Hes et al., 2009; Kaprova-Pleskacova et al., 2013; Nomura et al., 2015). Thus we detected a significant correlation in patients with aberrations of the sex chromosomes (p < 0.01), but it was not detected in patients with CNV variants of SHOX. Hypospadia was detected in a single case: in a male without short stature with mosaic 45,X/46,XY in peripheral lymphocytes, buccal smear, and tissue from a testicular biopsy; it did not manifest in the cases with minor mosaic 45,X and idic(Y) and short stature in our study. We also assessed additional characteristics, but without significant results.

This study was performed to assess short stature as a predictive marker for SHOX aberrations and to assess the contribution of screening (consisting of karyotyping, MLPA, sequencing, and FISH) to the detection of SHOX abnormalities. Aneuploidies of the sex chromosomes (including TS) were also included in this study, as mosaic monosomy X might be hidden among patients with isolated short stature. Mosaics of TS might manifest with isolated short stature in early childhood, and the correct diagnosis might not be made until reproductive age; however, GH treatment should be administered before pubertal initiation (Shankar & Backeljauw, 2018). Phenotypic data were also compared for children with TS, in whom haploinsufficiency of SHOX is thought to be responsible for the height deficit, as targeted exclusion of both syndromes (TS and LWD) is usually performed in children with growth restriction below the 3rd percentile (Shankar & Backeljauw, 2018). Despite the opinion that routine FISH on a different tissue is not necessary in short stature girls (Shankar & Backeljauw, 2018), we found that occult mosaicism might be a factor that influences the growth of individuals. A great number of TS cases seem to be mosaic (Hook & Warburton, 2014). Some of these mosaics might be missed if we investigate a limited number of cells and only one tissue. Moreover, some studies have shown that the number of disomic (XX or XY) cells increases during the lifetime, which might complicate the detection of cryptic monosomic clones (see the cases in the Results section) (Denes et al., 2015).
The discovered cryptic mosaics of normal cell clones, or of cell clones with extra copies of the X chromosome, might explain the mechanism through which seemingly monosomic pregnancies are rescued from adverse outcomes (Hook & Warburton, 2014). Males with mosaic 45,X/46,XY karyotypes with isolated short stature and an otherwise normal male phenotype have also been reported (Richter-Unruh et al., 2004). The prevalence of this disorder in cohorts of ISS children is not known (Dauber, Rosenfeld & Hirschhorn, 2014). We assume that investigating minor mosaics in different tissues might bring to light new aspects of unexplained isolated growth restriction. We were able to explain short stature in at least 3 (out of 174) cases where the initially normal finding, or the variants found (by MLPA or karyotyping), were not directly associated with short stature (duplication of SHOX). People with a triple dose of SHOX due to sex chromosome trisomy usually show tall stature (Ogata et al., 2000; Upners et al., 2017). However, this is not always the rule in individuals with microduplications of PAR1 involving SHOX or its regulatory sequences (Benito-Sanz et al., 2011; Valetto et al., 2016; Thomas et al., 2009; Roos, Nielsen & Tümer, 2009). We would have attributed short stature to the duplication of SHOX resulting from idic(Y) if we had not performed FISH on buccal swabs to detect cryptic 45,X clones. On the contrary, in the cases where a mosaic of 45,X/46,X,idic(Y)(q11.22) was revealed in the blood and no monosomic cells were detected in buccal smears, we could assume that cells with an extra copy of SHOX are present in the growth plates of chondrocytes, which compensate for the heterozygous loss in some cells in individuals without short stature (control 1759/14) (Fig. 4, Table S1). This offers at least a possible explanation for normal growth in such individuals. However, the phenotypes of patients might be variable and probably reflect the distribution of monosomic cells (Hes et al., 2009; Kaprova-Pleskacova et al., 2013). We have observed that only the small 56 kb duplication of the upstream regulatory elements segregated with proportionate short stature (Fig. 5). On the contrary, duplications of the downstream regulatory elements showed variable patterns of segregation, including tall stature. The significance of such findings has been commented on by several authors (Hirschfeldova & Solc, 2017; Benito-Sanz et al., 2011). Duplication of SHOX and its enhancers may represent one of the susceptibility factors influencing human height (Roos, Nielsen & Tümer, 2009). We have also observed that deletion of the SHOX enhancers did not always segregate with short stature and the LWD phenotype. This was especially pronounced in a family with a recurrent 47.5 kb deletion in the SHOX downstream region detected in male members of the family in the control cohort (deviation smaller than -2 SDS). The deletion of the SHOX enhancers was located on the Y chromosome (Fig. 5). Because of the frequent recombination between both PAR1 regions, it is sometimes difficult to determine the origin of such a mutation, but we confirmed in this family that the deletion was paternally inherited on the Y chromosome. A variable impact of enhancer deletions on the height of individuals, even within families, has been described previously (Kant et al., 2013). All male family members with the variant were without features suggesting LWD at the time of investigation. This deletion of the enhancers was the only one for which we confirmed transmission on the Y chromosome.
It would be interesting to investigate whether there is a difference in patient phenotype depending on the type of sex chromosome carrying the CNV. MLPA and FISH showed the highest detection rates of SHOX aberrations. FISH is a convenient method for detecting mosaicism in tissues other than peripheral blood, and it is a complementary method to MLPA or karyotyping for the detection of mosaic X monosomy. The detection rate of sequencing was the lowest compared with the other methods in patients with isolated short stature, which raises the question of whether sequencing is meaningful, especially in patients with short stature without any other clinical symptoms. We detected only intronic VUS variants, which are not causal for LWD, but we suppose they might modify linear growth. Among SHOX-negative patients in the short-stature cohort, we have so far further discovered 1 patient with a duplication 17q12 (34,822,283,612)x3 by CMA, and mutations causing Cornelia de Lange syndrome (1), Noonan syndrome (2), cardiofaciocutaneous syndrome (1), Legius syndrome (1), and hypophosphatemic rickets (1), and a mutation in ANKRD11 (1).

CONCLUSIONS
Although we could not prove a significant correlation between SHOX variants and growth restriction using Fisher's exact test, the logit regression implies that short stature is a significant predictor of the risk of pathogenic SHOX variants, provided it is accompanied by either Madelung deformity or disproportionate stature/other skeletal signs of LWD. This indicates that a very careful physical examination, including measurement of body proportions, is essential before testing; it increases the effectiveness of the testing by discovering the relevant SHOX variants that lead to an unambiguous diagnosis. The majority of the variants detected in the short-stature cohort were classified as VUS, and we assume that they only modify the growth of individuals. Most of them were inherited, very often from parents with normal height; these cases are inconclusive. On the other hand, the results of our extensive screening showed that cryptic or tissue-specific mosaics of monosomy X might be missed if we do not test children with no obvious markers of SHOX deficiency. SHOX variants in the male control cohort were more than twice as abundant as in the males of the short-stature group. In females, the frequency of SHOX variants was higher in the short-stature group than in the control group. This might imply that the impact of SHOX variants on height is more pronounced in females; however, further studies should be carried out. We assume an influence of other genes that cooperate in the growth and differentiation of chondrocytes, together with other internal factors (hormonal, etc.), including differences in the timing of maturation between the sexes. MLPA is reliable as a frontline method for the detection of SHOX mutations, including sex chromosome aberrations. Moreover, it is more convenient than time-consuming karyotyping for the fast targeted exclusion of SHOX variants and sex chromosome aberrations in the screening of large groups of short-stature patients. However, the detection of mosaic monosomy X is limited by the proportion of aberrant clones.
We also suggest adding FISH on a tissue other than peripheral blood to verify the sex-chromosome constitution, especially in cases with the karyotypes 45,X; mosaic 45,X/46,XX or 45,X/46,XY; or 46,X,idic(Y) detected from blood, and in children in whom mosaic 45,X was detected prenatally but was not confirmed in peripheral blood.
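To make the statistical claim in the Conclusions concrete, the following is a minimal sketch of a logit regression of the kind described above, testing short stature and skeletal markers (and their interaction) as predictors of pathogenic SHOX variants. It is not the code used in this study; all column names and values are hypothetical placeholders.

```python
# A minimal sketch (not the authors' code) of a logit regression of the kind
# described in the Conclusions: short stature and skeletal markers, plus their
# interaction, as predictors of pathogenic SHOX variants.
# All column names and values below are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "short_stature":   [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],  # 1 = height below -2 SDS
    "skeletal_signs":  [1, 1, 1, 0, 0, 1, 0, 0, 1, 0],  # 1 = Madelung deformity or other LWD signs
    "pathogenic_shox": [1, 1, 0, 0, 0, 0, 0, 0, 1, 0],  # outcome: pathogenic SHOX variant found
})

# The interaction term encodes the conclusion that short stature is predictive
# provided it is accompanied by skeletal markers.
df["ss_x_skeletal"] = df["short_stature"] * df["skeletal_signs"]

X = sm.add_constant(df[["short_stature", "skeletal_signs", "ss_x_skeletal"]])
result = sm.Logit(df["pathogenic_shox"], X).fit(disp=0)
print(result.summary())
```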
Origin of the ease of association of color names: Comparison between humans and AI

Rapid evolution of artificial intelligence (AI) based on deep neural networks has resulted in artificial systems such as generative pre-trained transformer 3 (GPT-3), which can generate human-like language. Such a system may provide a novel platform for studying how human perception is related to knowledge and the ability of language generation. We compared the frequency distribution of basic color terms in the answers of human subjects and GPT-3 when both were asked similar questions regarding color names associated with the letters of the alphabet. We found that GPT-3 generated basic color terms at a frequency very similar to that of human non-synaesthetes. A similar frequency was observed when color names associated with numerals were tested, indicating that simple co-occurrence of letters and color words in the training dataset cannot explain the results. We suggest that the proposed experimental framework using the latest AI models has the potential to explore the mechanisms of human perception.

The rapid evolution of artificial intelligence (AI) based on deep neural networks (NNs) has resulted in the development of artificial systems that can generate natural texts whose source of generation (humans or AI systems) is difficult to distinguish. Generative pre-trained transformer 3 (GPT-3) (Brown et al., 2020a, 2020b) is one of the most advanced examples of such systems that can understand and generate natural language. Briefly, GPT-3 is a massive NN that inputs and outputs "tokens," the smallest units that constitute a sentence, such as words and symbols. Given a token sequence and various control parameters, GPT-3 predicts the next token based on the token type and the token's position in the sequence. The predicted tokens are appended to the token sequence to generate sentences, and the process is executed recursively. GPT-3's predictions were learned from approximately 300 billion tokens of Internet text and digital archives written in English, covering large domains of human knowledge. Such a system may provide a novel platform to study how human perception is related to knowledge and the ability of language generation, because the responses of artificial systems such as GPT-3 are based on these two factors. As a first attempt, in this study, we tested the responses of GPT-3 to simple questions regarding color names and examined the frequency distribution of basic color terms in the answers. The basic color terms correspond to the 11 irreducible English color names of Berlin and Kay (1969): black, white, red, yellow, green, blue, brown, orange, purple, pink, and gray. When people are asked to provide color names, some colors are provided earlier and more frequently than others (Battig and Montague, 1969). Importantly, such variations in the ease of generation of basic color names correspond to neither the color word frequency in the corpus (Simner et al., 2005) nor the order of typology (Berlin and Kay, 1969; Kay and Regier, 2003), and the origin of the difference in the ease of generation across basic color terms is unclear. We speculated that, if this phenomenon originates from the general knowledge of humans and the ability to generate language, GPT-3 would provide basic color terms in an order comparable to that provided by humans. This problem was examined in the present study.
Methods
To obtain the frequency distribution of basic color terms from GPT-3, we employed a simple question-and-answer test previously used to study the association of graphemes with colors in human subjects (Simner et al., 2005). In that study, the experimenters presented a questionnaire that asked the subjects to give any color association for the 26 letters of the alphabet. We used this procedure because, in their study, the order of the frequency of basic color terms in non-synaesthetes was found to be approximately comparable to the ease of generation of color terms in human subjects (Battig and Montague, 1969), whereas it corresponded neither to the color word frequency in the corpus (Simner et al., 2005) nor to the order of typology (Berlin and Kay, 1969) of basic color terms. Similarly, in the present study, we asked GPT-3 to give a color name associated with each of the 26 letters of the alphabet. Figure 1 shows an example of the text for the case of letter "a." We used the Chat function at http://beta.openai.com/examples and placed the question in the Playground. The first to third sentences in Figure 1A are the default texts provided by the Chat function of GPT-3. The fourth line is the question that we inputted into GPT-3 for the case of letter "a," and the fifth line is an example of the answer of GPT-3. After recording the answer of GPT-3, we erased the fifth line and then inputted the next question. In the main experiment, we used the "Davinci" engine of GPT-3, which, although slow, outputs the most accurate and fluent texts, and tested four values of the "temperature" parameter: 0.3, 0.5, 0.7, and 0.9. The temperature parameter in GPT-3 controls the randomness/variation of the model output. Other parameters of GPT-3 include top_P, frequency_penalty, and presence_penalty. The top_P parameter determines how much of the top probability mass of the predicted tokens is targeted for output. Since the top_P and temperature parameters are expected to have similar effects, only the value of temperature was varied in the present study, and top_P was left at its default value (1). Both frequency_penalty and presence_penalty are parameters that suppress token repetition. As the present study did not need to control token repetition, we left those parameters at their default values (0 and 0.6, respectively). At each temperature, we repeated the question-and-answer test 50 times for each letter of the alphabet; therefore, a total of 1,300 answers from GPT-3 were obtained at each temperature. We counted the number of times each of the 11 basic color terms appeared in the answers and obtained the frequency distribution of the basic color terms. In the present study, we compared the frequency distributions obtained from GPT-3 with those reported for human subjects. We used an answer for analysis only when it specified one of the 11 basic color terms. We also tested the performance of GPT-3 using a different engine (Ada), which, although fast, has low accuracy. For this supplementary test, we repeated the test 20 times for each letter at only one temperature value (0.7). We also tested the frequency distribution of basic color terms associated with Arabic numerals (0 to 9) by GPT-3 (Davinci engine at temperature 0.9) using a question-and-answer test similar to the one used for the main experiment.
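For illustration, the following is a minimal sketch of the kind of question-and-answer loop described above, written against the legacy openai Python client (openai.Completion, with the "engine" parameter) as it existed in 2021-2022. Our actual queries were entered manually in the Playground; the exact prompt wording, the API key placeholder, and the ask_color helper are assumptions of the sketch.

```python
# Hypothetical automation of the question-and-answer test; we entered the
# questions manually in the Playground. This sketch uses the legacy openai
# Python client (openai.Completion, "engine" parameter) as it existed in
# 2021-2022.
import string
from collections import Counter

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

BASIC_COLOR_TERMS = {"black", "white", "red", "yellow", "green", "blue",
                     "brown", "orange", "purple", "pink", "gray"}

def ask_color(letter: str, engine: str = "davinci", temperature: float = 0.9) -> str:
    prompt = (f"Please give a color name that you will associate "
              f"with a letter '{letter}'.")
    response = openai.Completion.create(
        engine=engine,
        prompt=prompt,
        temperature=temperature,
        max_tokens=8,
    )
    return response["choices"][0]["text"].strip().lower()

counts = Counter()
for letter in string.ascii_lowercase:
    for _ in range(50):                      # 26 letters x 50 repetitions
        answer = ask_color(letter)
        if answer in BASIC_COLOR_TERMS:      # keep only single basic color terms
            counts[answer] += 1

# Frequencies are normalized by all 1,300 answers, so they sum to less than 1
# whenever some answers were discarded (cf. the note to Table 1).
frequencies = {color: counts[color] / 1300 for color in BASIC_COLOR_TERMS}
print(frequencies)
```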
In this test, the color name associated with each numeral instead of each letter was asked; e.g., for the case of "0," the question was "Please give a color name that you will associate with a number '0'." We repeated this test 40 times for each numeral. Tests using GPT-3 were conducted between September 2021 and February 2022.

Results
The frequency distributions of the basic color terms that appeared in the GPT-3 responses at each temperature are summarized in Table 1. Some color terms appeared more frequently than others in the answers, and there were some differences in the distribution across temperatures. We compared the responses of GPT-3 with those of human subjects tested using a similar procedure (Simner et al., 2005). In that study, the human subjects consisted of individuals with and without grapheme-color synaesthesia. Non-synaesthetes were tested under two conditions: forced- and free-choice. In the forced-choice condition, the subjects were forced to answer a color name for each letter, while in the free-choice condition, they were asked to note a color only if one easily came to mind. Table 2 summarizes the frequency distributions of the basic color terms for the three conditions reported by Simner et al. (2005).

Figure 1. An example of the texts of question and answer with GPT-3 for the case of letter 'a'. We used the Chat function at https://beta.openai.com/examples and placed the question at the Playground. The first to the third sentences show the default texts given by GPT-3, which we did not touch. The fourth line is the question which we gave to GPT-3 for the case of letter 'a', and the fifth line shows an example of the answer of GPT-3. Numbers at the left are added for the purpose of explanation.

We quantitatively evaluated the similarity of the frequencies of the color names used by GPT-3 and those used by human subjects. Because the frequencies of the color names were skewed, we first log-transformed the frequency value of each color. Before log-transformation, we added the minimum non-zero value (0.000769, that of brown in GPT-3 at temperature 0.5) to avoid zero values (3 cases in GPT-3: gray and brown at temperature 0.3, gray at 0.5). Then, a Shapiro-Wilk test was performed for each data set, and none showed evidence of non-normality (W = 0.93, p = .41 for GPT-3 at t(temperature) = 0.3; W = 0.92, p = .30 for GPT-3 at t = 0.5; W = 0.95, p = .60 for GPT-3 at t = 0.7; W = 0.95, p = .67 for GPT-3 at t = 0.9; W = 0.86, p = .0507 for synaesthetes of Simner et al. (2005); W = 0.86, p = .07 for non-synaesthetes, forced choice; W = .90, p = .17 for non-synaesthetes, free choice). Based on this, we computed Pearson's correlation coefficient between the log-transformed values of the answers of GPT-3 and those of human subjects. The left side of Figure 2 shows the correlation coefficients computed at the four temperatures of GPT-3 using the Davinci engine with human synaesthetes and non-synaesthetes (forced- and free-choice). The correlation is low (<0.1) between human synaesthetes and GPT-3 (insignificant at all temperatures); in comparison, that between human non-synaesthetes and GPT-3 is higher. The correlation was significant between GPT-3 and humans (both forced- and free-choice groups) at all temperatures, and it tended to increase with temperature. The significance levels are shown in Figure 2.
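A short sketch of the statistical comparison just described (constant offset, log transform, Shapiro-Wilk normality check, Pearson correlation); the frequency vectors here are random placeholders, not the values of Tables 1 and 2.

```python
# Sketch of the statistical comparison: constant offset, log transform,
# Shapiro-Wilk normality check, and Pearson correlation. The frequency
# vectors are random placeholders, not the Table 1/2 values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gpt3_freq = rng.dirichlet(np.ones(11)) * 0.9   # 11 basic color terms, sums to < 1
human_freq = rng.dirichlet(np.ones(11))

eps = 0.000769                                  # minimum non-zero value, avoids log(0)
log_gpt3 = np.log(gpt3_freq + eps)
log_human = np.log(human_freq + eps)

for name, x in [("GPT-3", log_gpt3), ("human", log_human)]:
    W, p = stats.shapiro(x)                     # large p: no evidence of non-normality
    print(f"{name}: W = {W:.2f}, p = {p:.2f}")

r, p = stats.pearsonr(log_gpt3, log_human)
print(f"Pearson r = {r:.3f} (p = {p:.3g})")
```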
The correlations for the free-choice group were slightly higher than those for the forced-choice group, though there were no significant differences between the two groups at any temperature. These results show that GPT-3 provides basic color terms in an order comparable to that of humans when the temperature is high. To examine whether the ability to generate natural language affects the frequency distribution of basic color terms, we tested the performance of GPT-3 with the Ada engine at a temperature of 0.7, at which the correlation was quite high for the Davinci engine. The frequency distributions of the basic color terms that appeared in the answers of the Ada engine are summarized in Table 3. As was done for the data obtained with the Davinci engine, we first log-transformed the frequency values after adding the same constant value (0.000769), tested the normality of the data (W = 0.97, p = 0.92, Shapiro-Wilk test), and then computed Pearson's correlation coefficient.

Note (Table 1). In some cases, GPT-3 gave a color name (e.g., cyan) that was not a basic color term, or more than two color names (e.g., blue, purple, green). In other cases, GPT-3 gave a word that is different from a color name (e.g., banana) or uninterpretable words. This is the reason why the overall frequency was less than one. Temp = temperature; GPT-3 = generative pre-trained transformer 3.

Table 2. Frequency distribution of basic color terms in the answers of human subjects reported by Simner et al. (2005).

As shown on the right side of Figure 2, the correlation coefficients between the answers of the Ada engine and human non-synaesthetes are comparable to those of the Davinci engine at the same temperature (0.7). The correlation with human synaesthetes was quite low, as observed for the Davinci engine. These results suggest that, as far as the simple question-and-answer task is concerned, the performance of GPT-3 does not clearly depend on the engine employed. We verified whether the high correlation in performance between GPT-3 and human non-synaesthetes observed in the present study is specific to English speakers and the Roman alphabet. Simner et al. (2005) conducted the same test on German non-synaesthetic speakers. The correlation between German and English speakers (forced-choice non-synaesthetes) was quite high (r = 0.811), whereas that with English speakers in the free-choice condition was not as high (r = 0.617). Similarly, the performance of GPT-3 in the present study was not highly correlated with the results of the German subjects (r = 0.413-0.461).

Figure 2. Correlation coefficients computed between the log-transformed frequencies of basic color terms given by GPT-3 and those given by human synaesthetes and non-synaesthetes. Human data were adopted from Simner et al. (2005). Left: results obtained by the Davinci engine at four temperatures of GPT-3. Right: results obtained by the Ada engine at a temperature of 0.7. * p < 0.05, ** p < 0.01, *** p < 0.001.

Nagai et al. (2016) examined color associations with graphemes in a non-synaesthetic Japanese population. The frequency distributions of basic color terms associated with graphemes (kana characters, alphabet letters, and Arabic and kanji numerals) are shown in Figures S2 and S3 of their study. The same test was conducted twice, and the results of the two tests were similar.
When we computed the correlation coefficients between the frequency distributions of their results (average of the first and second tests) and the performance of GPT-3 (Davinci engine), we found that the correlation was quite high for all graphemes (r = 0.974 and 0.979; for kana characters, r = 0.907 and 0.934; for alphabet letters, r = 0.887 and 0.929; for Arabic numerals, r = 0.889 and 0.900; for kanji numerals, r = 0.861 and 0.798; temperature = 0.7 and 0.9, respectively). We also tested the frequency distribution of basic color terms associated with Arabic numerals (0-9) by GPT-3 (Davinci engine at temperature 0.9) (see "Methods" section), and the results (Table 4) were compared with the frequency distribution of the basic color terms associated with the letters of the alphabet obtained from GPT-3 under the same conditions. We found that the correlation between the two results was quite high (r = 0.972 and 0.963 with the Davinci engine; temperature = 0.7 and 0.9, respectively). In these additional analyses, again, we first log-transformed the frequency values after adding the same constant value (0.000769), tested the normality of the data (p > .05, Shapiro-Wilk test), and then computed Pearson's correlation coefficient. These results suggest that the frequency distribution of color names generated by GPT-3 is not specifically related to a certain language (e.g., English) or to a certain index (e.g., the alphabet), although there are variations in the performance of human subjects owing to unknown factors.

Discussion
In this study, using a procedure analogous to that used for human subjects, employing natural language questions, we observed that GPT-3 can generate basic color terms at a frequency very similar to that of human non-synaesthetes. The similarity was more distinct when GPT-3 was allowed a larger degree of variability (high temperature). Presumably, an increase in the temperature value increased the likelihood that minor color names weakly associated with a letter would appear. Importantly, we did not ask GPT-3 to answer the frequency of color names. We simply asked for the color name associated with each letter, and the frequency of color names was indirectly evaluated from the statistics of the answers. It is highly unlikely that the present results can be explained by simple co-occurrence of letters and color names in the training data of GPT-3, because similar results were obtained when numerals instead of letters were used. We also directly examined this problem by N-gram analysis (bigram, trigram, 5-gram, and 10-gram) and unigram analysis using a large-scale dataset, WikiText-103, which contains over 100 million tokens (Supplemental Tables S1 and S2). We found that the co-occurrence probability was very similar for all cases tested, and it was highly correlated with the unigram frequencies of the basic color terms (Supplemental Table S3). On the other hand, these co-occurrence probabilities were quite different from the frequency distribution of the basic color terms generated by GPT-3 in response to either the letters or the numerals: the correlations between the frequency distribution of the color terms generated by GPT-3 and the co-occurrence probabilities between the letters/numerals and color terms are very low (Supplemental Table S4). These results support our assumption that the present results cannot be explained by simple co-occurrence of letters and color names in the training data of GPT-3.
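As an illustration of the co-occurrence analysis, the following sketch counts how often basic color terms occur within an n-token window containing a target letter token. Loading WikiText-103 itself is omitted, and this is not the script behind Supplemental Tables S1-S4.

```python
# Sketch of the co-occurrence check: within an n-token window containing a
# target letter token, count occurrences of each basic color term.
from collections import Counter

BASIC_COLOR_TERMS = {"black", "white", "red", "yellow", "green", "blue",
                     "brown", "orange", "purple", "pink", "gray"}

def cooccurrence(tokens, target="a", window=5):
    """Count color terms within `window - 1` tokens of each `target` token."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok.lower() != target:
            continue
        for neigh in tokens[max(0, i - window + 1): i + window]:
            if neigh.lower() in BASIC_COLOR_TERMS:
                counts[neigh.lower()] += 1
    return counts

tokens = "the letter a was printed in red ink on a blue card".split()
print(cooccurrence(tokens, target="a", window=5))  # Counter({'red': 2, 'blue': 1})
```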
For the human subjects, the determinants of the order of the frequency of generation of color names are not completely understood. In the study by Simner et al. (2005), the order of the frequency of basic color terms in non-synaesthetes was approximately comparable to the ease of generation of color terms in human subjects (Battig and Montague, 1969). However, it corresponded neither to the color word frequency in the corpus (Simner et al., 2005) nor to the order of typology (Berlin and Kay, 1969) of basic color terms. Because the results of the present study are highly correlated with those of Simner et al. (2005), the order of frequency of color-term generation by GPT-3 corresponds to the ease of generation of color terms in human subjects, but not to the frequency in the corpus or the order of typology. The ease of generation is related to exemplar typicality (Simner et al., 2005), and we consider that this should be related to the structure of the general knowledge humans have of the world. GPT-3 is trained with a huge amount of text data from the web and digital archives, which is not restricted to the knowledge of a specific domain but covers every aspect of knowledge of the natural and artificial world (general knowledge). GPT-3 is highly capable of handling natural language, which is useful for extracting meaningful information from the text dataset. In addition, ease of generation is directly related to the function of lexical retrieval. Therefore, we speculate that the similarity in the generation of basic color terms between human subjects and GPT-3 stems from the general knowledge and language ability that are shared by human subjects and GPT-3. The high correlation with the human association of basic color terms with numerals is in line with this interpretation. However, how the natural language ability of GPT-3 is related to the acquisition of the ease of generation is still an open question. Examining whether an AI without natural language ability can acquire the human-like ease-of-generation ordering of colors may be useful for answering this question in future experiments. So far, we have discussed the cause of the similarity between the frequency distributions of the basic color terms generated by GPT-3 and by human non-synaesthetes considering only the summary statistics of the frequency of color names. However, this problem can be considered from another perspective, taking into account how the color names were associated with each letter or each numeral. Because the sum of the frequency distribution over all combinations of letter and color name (e.g., "a/red," "b/red," "c/red," ..., "a/blue," "b/blue," ...) should yield the summary statistics, this measure may also provide useful information when considering the mechanism generating the frequency distribution of color terms. The present experiment employed a question-and-answer test for each letter, which was very similar to that reported by Simner et al. (2005). They analyzed the effects of various factors on the generation of color terms for each letter and observed that the initial letter of a color term (e.g., "r" for red) tended to be associated with the corresponding color term in both the forced- and free-choice groups of non-synaesthetes, although this effect could explain only a small part of the entire frequency distribution.
We also observed that GPT-3 exhibits a tendency toward initial-letter matches (e.g., 16 cases of blue for "b" and 20 cases of red for "r" at a temperature of 0.9 for the Davinci engine); nonetheless, the removal of this effect did not affect the overall result (data not shown). In a more recent paper, Mankin and Simner (2017) showed that the letter-color association in non-synaesthetes (as well as synaesthetes) is influenced by letter-word associations (e.g., apple for A) and color-word associations (e.g., red for apple). This suggests that the letter-color association is mediated by two separate associations: one is the association of a prototypical word with a particular letter (e.g., apple for A), and the other is the association of the prototypical color of that word (e.g., red for apple). Both of these associations should be part of the general knowledge of the English-speaking population. Their paper thus suggests a potential mechanism connecting letters and color names. Although their results can explain only a part of the specific associations between letters and color names, and it is not clear how a similar explanation can be applied to the associations between numerals and color names, their study suggests a potentially effective direction for future studies on the mechanisms of association between specific color names and specific graphemes. GPT-3 should be a useful tool to examine such possibilities and may contribute to elucidating the mechanism of letter-color association in future studies. In contrast to human non-synaesthetes, only a weak correlation was observed between the performance of GPT-3 and grapheme-color synaesthetes. Synaesthetes associate letters and colors in specific ways that differ from those of non-synaesthetes, which should have resulted in the low correlation. The procedure used in this study will be useful for estimating the answers of the general population, who share common knowledge with GPT-3. However, it will be difficult to apply this method to infer idiosyncratic responses that are based on specific traits or experiences, such as the grapheme-color associations of synaesthetic subjects. In the present study, we used an answer of GPT-3 for analysis only when it specified one of the 11 basic color terms. As noted in the Table 1 legend, the overall frequency of basic color terms in the answers of GPT-3 was less than one. Although a similar procedure was used by Simner et al. (2005), their overall frequency was nearly one. We think this difference in the overall frequency is caused by a difference in how strongly the form of the answer is controlled. For human subjects, the task demand is easily understood, and this gives strong control over the way the subjects answer. In the present study, on the other hand, we made the question to GPT-3 as simple as possible. This necessarily sacrificed the context information given to GPT-3, which may have yielded answers that could not be included in the analysis. When we used the answers of GPT-3 that fit the intended question, the frequency distributions of the basic color names were highly correlated with those of human non-synaesthetes. Because of this, we believe it is unlikely that the difference in the overall frequency between GPT-3 and human subjects is due to a difference in color knowledge.
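The answer-filtering step and the initial-letter-match check described above can be sketched as follows; the function names are ours, and the counts at the end are toy values, not the reported ones.

```python
# Sketch of the filtering step and the initial-letter-match check.
import re
from collections import Counter

BASIC_COLOR_TERMS = {"black", "white", "red", "yellow", "green", "blue",
                     "brown", "orange", "purple", "pink", "gray"}

def extract_basic_color(answer: str):
    """Return the single basic color term in an answer, or None to discard it."""
    words = re.findall(r"[a-z]+", answer.lower())
    hits = [w for w in words if w in BASIC_COLOR_TERMS]
    return hits[0] if len(hits) == 1 else None

assert extract_basic_color("Red.") == "red"
assert extract_basic_color("blue, purple, green") is None   # multiple terms
assert extract_basic_color("banana") is None                # not a color term

# Initial-letter-match tendency, e.g., "blue" answered for the letter "b".
counts = Counter({("b", "blue"): 16, ("r", "red"): 20, ("a", "red"): 30})  # toy values
matches = sum(v for (letter, color), v in counts.items() if color.startswith(letter))
print(f"initial-letter matches: {matches}/{sum(counts.values())}")
```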
Although the test conducted in the present study is very simple, it shows the potential of AI systems with high language capability to be applied as a platform for studying how human perception is related to knowledge and the ability of language generation. AI systems that can generate natural language are still evolving, and they will become useful tools for exploring the mechanisms of perception.

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was supported by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Number 20H05955 (Grant-in-Aid for Transformative Research Areas (A) "Deep Shitsukan") to HK, and JST A-STEP, JSPS KAKENHI Grant Numbers JP21H04426 and JP21K12006 to EW.
Solvable Model for the Linear Separability of Structured Data

Linear separability, a core concept in supervised machine learning, refers to whether the labels of a data set can be captured by the simplest possible machine: a linear classifier. In order to quantify linear separability beyond this single bit of information, one needs models of data structure parameterized by interpretable quantities, and tractable analytically. Here, I address one class of models with these properties, and show how a combinatorial method allows for the computation, in a mean field approximation, of two useful descriptors of linear separability, one of which is closely related to the popular concept of storage capacity. I motivate the need for multiple metrics by quantifying linear separability in a simple synthetic data set with controlled correlations between the points and their labels, as well as in the benchmark data set MNIST, where the capacity alone paints an incomplete picture. The analytical results indicate a high degree of "universality", or robustness with respect to the microscopic parameters controlling data structure.

Introduction
Linear classifiers are quintessential models of supervised machine learning. Despite their simplicity, or possibly because of it, they are ubiquitous: they are building blocks of more complex architectures, for instance, in deep learning and support vector machines, and they provide testing grounds for new tools and ideas in learning theory and statistical mechanics, in both the study of artificial neural networks and in neuroscience [1][2][3][4][5][6][7][8][9]. Recently, interest in linear classifiers was rekindled by two outstanding results. First, deep neural networks with wide layers can be well approximated by linear models acting on a well-defined feature space, given by what is called the "neural tangent kernel" [10,11]. Second, it was discovered that deep linear networks, albeit identical to linear classifiers for what concerns the class of realizable functions, make it possible to reproduce and explain complex features of nonlinear learning and gradient flow [12]. In spite of the central role that linear separability plays in our understanding of machine learning, fundamental questions still remain open, notably regarding the predictors of separability in real data sets [13]. How does data complexity affect the performance of linear classifiers? Data sets in supervised machine learning are usually not linearly separable: the relations between the data points and their labels cannot be expressed as linear constraints. The first layers in deep learning architectures learn to perform transformations that enhance the linear separability of the data, thus providing downstream fully-connected layers with data points that are more adapted for linear readout [14,15]. The role of "data structure" in machine learning is a hot topic, involving computer scientists and statistical physicists, and impacting both applications and fundamental research in the field [16][17][18][19][20][21][22]. Before attempting to assess the effects of data specificities on models and algorithms of machine learning, and, in particular, on the simple case of linear classification, one should have available (i) a quantitative notion of linear separability and (ii) interpretable parameterized models of data structure. Recent advances, especially within statistical mechanics, mainly focused on point (ii).
Different models of structured data have been introduced to express different properties that are deemed to be relevant. For example, the organization of data as the superposition of elementary features (a well-studied trait of empirical data across different disciplines [23][24][25]) leads to the emergence of a hierarchy in the architecture of Hopfield models [26]. Another example is the "hidden manifold model", whereby a latent low-dimensional representation of the data is used to generate both the data points and their labels in a way that introduces nontrivial dependence between them [19]. An important class of models assumes that data points are samples of probability distributions that are supported on extended object manifolds, which represent all possible variations of an input that should have no effect on its classification (e.g., differences in brightness of a photo, differences in aspect ratio of a handwritten digit) [27]. Recently, a useful parameterization of object manifolds was introduced that is amenable to analytical computations [28]; it will be described in detail below. From a data science perspective, these approaches are motivated by the empirical observation that data sets usually lie on low-dimensional manifolds, whose "intrinsic dimension" is a measure of the number of latent degrees of freedom [29][30][31]. The main aims of this article are two: (i) the discussion of a quantitative measure of linear separability that can be applied to empirical data and generative models alike; and (ii) the definition of useful models expressing nontrivial data structure, and the analytical computation, within these models, of compact metrics of linear separability. Most works concerned with data structure and object manifolds (in particular, Refs. [8,27,28]) focus on a single descriptor of linear separability, namely the storage capacity α_c. Informally, the storage capacity measures the maximum number of points that a classifier can reliably classify; in statistical mechanics, it signals the transition, in the thermodynamic limit, between the SAT and UNSAT phases of the random satisfiability problem related to the linear separability of random data [32]. Here, I will present a more complete description of separability than the sole storage capacity (a further motivation is the discovery, within the same model of data structure, of other phenomena lying "beyond the storage capacity" [33]).

Linear Classification of Data
Let us first review the standard definition of linear separability for a given data set. In supervised learning, data are given in the form of pairs (ξ^µ, σ^µ), where ξ^µ ∈ R^n is a data point and σ^µ = ±1 is a binary label. We focus on dichotomies, i.e., classifications of the data into two subsets (hence, the binary labels); of course, this choice does not exclude data sets with multiple classes of objects, as one can always consider the classification of one particular class versus all the other classes. Given a set of points X = {ξ^µ}_{µ=1,...,m}, a dichotomy is a function φ : X → {−1, +1}. A data set {(ξ^µ, σ^µ)}_{µ=1,...,m} is linearly separable (or, equivalently, the dichotomy φ(ξ^µ) = σ^µ, µ = 1, ..., m, is linearly realizable) if there exists a vector w ∈ R^n such that

$$\operatorname{sgn}\left(\sum_{i=1}^{n} w_i\, (\xi^\mu)_i\right) = \sigma^\mu, \qquad \mu = 1, \ldots, m, \tag{1}$$

where (ξ^µ)_i is the ith component of the µth element of the set. In the following, I will simply write w · ξ^µ for the scalar product appearing in the sgn function when it is obvious that w and ξ^µ are vectors.
In machine learning, the left-hand side of Equation (1) is the definition of a linear classifier, or perceptron. The points x such that w · x = 0 define a hyperplane, which is the separating surface, i.e., the boundary between points that are assigned different labels by the perceptron. Viewing the perceptron as a neural network, the vector w is the collection of the synaptic weights. "Learning" in this context refers to the process of adjusting the weight vector w so as to satisfy the m constraints in Equation (1). Because the sgn function is invariant under multiplication of its argument by a positive constant, I will always consider normalized vectors, i.e., both the weight vector w and the data points ξ will lie on the unit sphere. A major motivation behind the introduction of the concept of data structure, and of the combinatorial theory related to it (reviewed in Sections 5 and 6 below), is the fact that the definition of linear separability above is not very powerful per se. Empirically relevant data sets are usually not linearly separable. Knowing whether a data set is linearly separable does not convey much information on its structure: crucially, it does not allow quantifying "how close" to being separable or nonseparable the data set really is. To fix the ideas, let us consider a concrete case: the data set MNIST [34]. MNIST is a collection of handwritten digits, digitized as 28 × 28 greyscale images, each labelled by the corresponding digit ("0" to "9"). I will use the "training" subset of MNIST, containing 6000 images per digit. To simplify the discussion, I will mainly focus on a single dichotomy within MNIST: that expressed by the labels "3" and "7". The particular choice of digits is unimportant for this discussion; I will give an example of another dichotomy below, where subtle differences between the digits can be observed. One may ask whether the MNIST training set, as a whole, is linearly separable. However, the answer is not particularly informative: the MNIST training set is not linearly separable [34]. But how unexpected is this answer? Can we measure the surprise of finding out that a given training set is or is not linearly separable? Intuitively, there are three different properties of a data set that facilitate or hinder its linear separability: size, dimensionality, and structure.

• Size. The number of elements m of a data set is a simple indication of its complexity. While a few data points are likely linearly separable, they convey little information on the "ground truth", the underlying process that generated the data set. On the contrary, larger data sets are more difficult to classify, but the information that is stored in the weights after learning is expected to be more faithful to the ground truth (this is related to the concept of "sample complexity" in machine learning [35]).

• Dimensionality. There are two complementary aspects when considering dimensionality in a data-oriented framework. First, the embedding dimension is the number of variables that a single data point comprises. For instance, MNIST points are embedded in R^784, i.e., each of them is represented by 784 real numbers. The embedding dimension is n in Equation (1); therefore, n is also the number of degrees of freedom that a linear classifier can adjust to find a separating hyperplane. Hence, one expects that a large embedding dimension promotes linear separability. Second, the data set itself does not usually occupy the embedding space uniformly.
Rather, points lie on a lower-dimensional manifold, whose dimension d is called the intrinsic dimension of the data set. The concept of general position discussed below is related to the intrinsic dimension; however, beyond that, I will not explicitly consider this type of data complexity in this article (for analytical results on the linear separability of manifolds of varying intrinsic dimension, see [27]).

• Structure. As I will show in a moment, the effects of size and dimensionality on linear separability are easily quantified in a simple null model. Data structure, on the other hand, has proved more challenging, and it is the main focus of the theory described here. There is no single definition of data structure; different definitions are useful in different contexts. A common characterization can be given like this: data have structure whenever the data points ξ^µ and their labels σ^µ are not independent variables. I will specify a more precise definition in Section 5. Intuitively, data structure can either promote or preclude linear separability. If points that are close to one another tend to have the same label, then linear separability is improved; if, instead, there are many differently labeled points in a small region of space, then linear separability is obstructed.

Let us get back to the question "how surprising is it that MNIST is not linearly separable?". This question should be answered by at least taking into account the first two properties described above, the size of the data set and its dimensionality, which are readily computed from the raw data. In fact, the surprise, i.e., the divergence from what is expected based on size and dimensionality, may be interpreted as a beacon of the third property: data structure. I will show in the next section that the answer to our question is "exceedingly unsurprising". Yet, a slightly modified question will reveal that MNIST, albeit unremarkable in its not being linearly separable, is exceptionally structured.

Null Model of Linear Separability
Let us consider a null model of data that fixes the dimension n and the size p. I use a different letter (p instead of m), because it will be useful below to have two different symbols for the size of the whole data set (m) and for the size of its subsets. Consider a data set Z_p = {(ξ^µ, σ^µ)}_{µ=1,...,p}, where the vectors ξ^µ are random independent variables, uniformly distributed on the unit sphere, and the labels σ^µ are independent Bernoulli random variables (also independent of every ξ^µ). These choices are suggested by a maximum entropy principle, when only the parameters m and n are fixed. What is the probability that a data set generated by this model is linearly separable? This problem was addressed and solved more than half a century ago [36][37][38]; in Section 6, I will describe an analytical technique that allows this computation. The fraction of dichotomies of a random data set that are linearly realizable is

$$c_{n,p} = 2^{1-p} \sum_{k=0}^{n-1} \binom{p-1}{k}, \tag{2}$$

where $\binom{\cdot}{\cdot}$ is the binomial coefficient. Thus, a random (uniform) dichotomy has probability c_{n,p} of being linearly realizable. In this article, I will refer to the probability c_{n,p} as the separability, or probability of separation. A related quantity is the number of dichotomies C_{n,p} = 2^p c_{n,p} (here, 2^p is the total number of dichotomies of p points). Figure 1 shows the sigmoidal shape of c_{n,p} as a function of p at fixed n.
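As a numerical illustration of Equation (2) (a sketch, not the code behind the figures), the following snippet evaluates the separability exactly and recovers the behavior described next: c_{n,p} = 1 at p = n, c_{n,p} = 1/2 at the storage capacity p = 2n, and vanishingly small values beyond.

```python
# Numerical illustration of Equation (2). Exact rational arithmetic
# avoids floating-point underflow at large p.
from fractions import Fraction
from math import comb

def separability(n: int, p: int) -> Fraction:
    """Fraction of linearly realizable dichotomies of p random points in R^n."""
    return Fraction(2 * sum(comb(p - 1, k) for k in range(n)), 2 ** p)

n = 100
print(float(separability(n, n)))       # 1.0: p = n, the VC dimension
print(float(separability(n, 2 * n)))   # 0.5: at the storage capacity alpha_c = 2
print(float(separability(n, 4 * n)))   # ~1e-23: deep in the non-separable region
```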
The separability is exactly equal to 1 up to p = n (which pinpoints what is known as the Vapnik-Chervonenkis dimension in statistical learning theory [35]), and it stays close to 1 up to a critical value p_c, which increases with n. At p_c, the curve steeply drops to asymptotically vanishing values, the more abruptly the larger n is. Rescaling the number of points p by the dimension n yields the load α = p/n. As a function of α, the probability of separation has the remarkable property of being equal to 1/2 at the critical value (known as the storage capacity) α_c = p_c/n = 2, independently of n. Such an absence of finite-size corrections to the location of the critical point is an unusual feature, which will be lost when we consider structured data below. In the large-n limit, c_{n,αn} converges to a step function that transitions from 1 to 0 at α_c. How large is the probability of separation c_{n,m} given by Equation (2) when one substitutes the sample size m = 12,000 and the dimensionality n = 784, i.e., those of the dichotomy "3"/"7" in the data set MNIST? The probability, as anticipated, is utterly small, less than 10^{-2000}: it should be no surprise that MNIST is not linearly separable. This comparison is not completely fair, because of the assumption, underlying Equation (2), of general position. The concept of general position is an extension of that of linear independence, which is useful for sets larger than the dimension of the vector space. A set X of vectors in R^n is in general position if there is no linearly dependent subset X′ ⊆ X of cardinality less than or equal to n. MNIST is quite possibly not in general position. To make sure that it is, I downscaled each image to 10 × 10 pixels and only considered 1000 images per class (to allow for faster numerical computations), and applied mild multiplicative random noise, by flipping 5% of the pixels around the middle grey value (see Figure 2); I will refer to this modified data set as "rescaled MNIST". Running the standard perceptron algorithm on rescaled MNIST did not show signs of convergence after 10^5 iterations, which indicates that the data set is likely not linearly separable. For m = 2000 and n = 100, the separability c_{n,m} is less than 10^{-400}. The null model provides a simple, concise interpretation of the linear separability of a given data set, given its size m and dimensionality n, in terms of 5 possible outcomes (see Figure 1, bottom panel):

1. The set is linearly separable and it lies in the region where c_{n,m} ≈ 1. Separability here is trivial: almost all data sets are separable in this region, provided that the points are in general position.

2. The set is not linearly separable and it lies in the region where c_{n,m} ≈ 1. The only way this can happen for m ≤ n is if the points are not in general position. For m > n, but still in this region, the lack of separability could also be attributed to a nontrivial data structure.

3. The set is not linearly separable and it lies in the region where c_{n,m} ≈ 0. Almost no dichotomy is linearly realizable in this region; therefore, the lack of separability is trivial here.

4. The set is linearly separable and it lies in the region where c_{n,m} ≈ 0. This situation is the hallmark of data structure. The fact that the data set happens to represent one of the few dichotomies that are linearly realizable in this region indicates a non-null dependence between the labels and the points in the data set.
5. The set lies in the region where c_{n,m} is significantly different from 0 and 1. Here, knowing that a data set is linearly separable or not is unsurprising either way.

The location and the width of this "transition region" are the two main parameters that summarize the shape of the separability curve. In Section 6, I will show how to compute these quantities within a more general model that includes data structure.

Figure 2. (Left panel) The separabilities of two representative dichotomies in the data set (digits "4" versus "9", and digits "3" versus "7") are far removed from the null model, as is apparent from the location (and the width) of their transition regions (green areas). The shaded areas denote the 95% variability intervals. (Right panel) By increasing the distance δ between the means of the two Gaussian distributions that define the synthetic data set (here in n = 20 dimensions), the separability increases. For δ = 0 (squares), one recovers the prediction of the null model (blue line). Error bars (not shown) are approximately the same size as the symbols.

Quantifying Linear Separability via Relative Entropy
In order to make a step further in the characterization of the linear separability of (rescaled) MNIST, we can consider its subsets. While there is only one subset with m = 2000 points (focusing on the dichotomy "3"/"7"), and only one yes/no answer to the question of its linear separability, there are many subsets of size p < m, which can provide more detailed information. To quantify such information, let us formulate a more precise notion of surprise with respect to a model expressing prior expectation [39]. Let us again fix an empirical data set Z_m = {(ξ^µ, σ^µ)}_{µ=1,...,m} and fix p ≤ m. Now, consider the set N_p of all subsets ν = {ν_1, ..., ν_p} of p indices ν_i ∈ {1, ..., m}, with ν_i ≠ ν_j for i ≠ j. Additionally, consider the set Σ_p = {−1, +1}^p of all dichotomies σ̃ = {σ̃^1, ..., σ̃^p} of p elements. (I use curly braces for both sets and indexed families.) For each pair ν ∈ N_p, σ̃ ∈ Σ_p, we can construct the corresponding synthetic data set

$$Z_{\mathrm{syn}}(\nu, \tilde{\sigma}) = \{(\xi^{\nu_1}, \tilde{\sigma}^1), \ldots, (\xi^{\nu_p}, \tilde{\sigma}^p)\};$$

similarly, for each ν ∈ N_p, we can construct the corresponding subset Z_emp(ν) of the empirical data set Z_m:

$$Z_{\mathrm{emp}}(\nu) = \{(\xi^{\nu_1}, \sigma^{\nu_1}), \ldots, (\xi^{\nu_p}, \sigma^{\nu_p})\}.$$

The main tool for defining the surprise will be probability distributions on a space Ω_p, which is defined as the union of all synthetic data sets:

$$\Omega_p = \{Z_{\mathrm{syn}}(\nu, \tilde{\sigma}) : \nu \in N_p,\ \tilde{\sigma} \in \Sigma_p\}.$$

The empirical space Ω_p^emp ⊆ Ω_p can be defined similarly:

$$\Omega_p^{\mathrm{emp}} = \{Z_{\mathrm{emp}}(\nu) : \nu \in N_p\}.$$

Essentially, |Ω_p^emp| is the number of subsets of size p in the data set. Interpreted as a probability distribution on Ω_p, the empirical data are uniformly distributed on Ω_p^emp; likewise, the null model defined above induces, by conditioning on the points {ξ^µ}, the uniform distribution on the whole Ω_p. In general, not every data set in Ω_p (nor in Ω_p^emp) is linearly separable. Let us define the subsets for which this property holds:

$$\Lambda_p = \{z \in \Omega_p : z \text{ is linearly separable}\}, \qquad \Lambda_p^{\mathrm{emp}} = \Lambda_p \cap \Omega_p^{\mathrm{emp}}.$$

Let us call Q_p and Q_p^emp the uniform probability distributions on Ω_p and Ω_p^emp, respectively, and Q̃_p and Q̃_p^emp those on Λ_p and Λ_p^emp. The Kullback-Leibler (KL) divergence

$$D_{\mathrm{KL}}\big(\tilde{Q}_p^{\mathrm{emp}} \,\big\|\, \tilde{Q}_p\big) = \sum_{z} \tilde{Q}_p^{\mathrm{emp}}(z)\, \log \frac{\tilde{Q}_p^{\mathrm{emp}}(z)}{\tilde{Q}_p(z)} \tag{8}$$

then measures the surprise carried by the data with respect to the prior belief regarding its linear separability expressed by Q_p. Because Q_p and Q_p^emp are defined on sets (Ω_p and Ω_p^emp) of different cardinality, I define the (signed) surprise S_p by subtracting the reference KL divergence between the uniform distributions on these spaces:

$$S_p = D_{\mathrm{KL}}\big(\tilde{Q}_p^{\mathrm{emp}} \,\big\|\, \tilde{Q}_p\big) - D_{\mathrm{KL}}\big(Q_p^{\mathrm{emp}} \,\big\|\, Q_p\big).$$

Notice that the summand in the definition of KL divergence, Equation (8), is only nonzero for z ∈ Ω_p^emp; one then obtains

$$S_p = \log \frac{c_{n,p}}{c_{n,p}^{\mathrm{emp}}},$$

where I have defined the empirical separability c_{n,p}^emp as the fraction of linearly separable subsets of size p in Z_m:

$$c_{n,p}^{\mathrm{emp}} = \frac{|\Lambda_p^{\mathrm{emp}}|}{|\Omega_p^{\mathrm{emp}}|}.$$

The signed surprise S_p is positive (respectively negative) when the fraction of linearly separable subsets of size p is smaller (respectively larger) than expected in the null model.
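The definitions above suggest a straightforward Monte Carlo estimator, sketched here (this is not the code used for the figures): sample subsets of size p, test each for linear separability via a feasibility linear program, which is equivalent to strict separability because w can be rescaled, and compare the empirical fraction with Equation (2). The sketch also includes a generator for the Gaussian mixture used in the next section.

```python
# A Monte Carlo sketch (not the paper's code) of the estimator for the
# empirical separability c^emp_{n,p} and the signed surprise S_p.
import numpy as np
from math import comb, log
from scipy.optimize import linprog

def is_linearly_separable(X, y):
    """A strictly separating w exists iff sigma^mu (w . xi^mu) >= 1 is feasible."""
    m, n = X.shape
    A_ub = -y[:, None] * X                 # encodes -sigma^mu (xi^mu . w) <= -1
    res = linprog(np.zeros(n), A_ub=A_ub, b_ub=-np.ones(m),
                  bounds=[(None, None)] * n, method="highs")
    return res.status == 0                 # 0 = a feasible w was found

def empirical_separability(X, y, p, n_samples=200, seed=0):
    """Fraction of linearly separable subsets of size p (Monte Carlo)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_samples):
        idx = rng.choice(len(y), size=p, replace=False)
        hits += is_linearly_separable(X[idx], y[idx])
    return hits / n_samples

def null_separability(n, p):
    return 2.0 ** (1 - p) * sum(comb(p - 1, k) for k in range(n))  # Equation (2)

def sample_mixture(m, n, delta, rng):
    """Gaussian mixture of the next section: means a distance delta apart.
    (Normalizing the points to the unit sphere would not change separability.)"""
    y = rng.choice([-1.0, 1.0], size=m)
    mu = np.zeros(n)
    mu[0] = delta / 2.0
    return rng.normal(size=(m, n)) + y[:, None] * mu, y

rng = np.random.default_rng(1)
X, y = sample_mixture(m=60, n=10, delta=0.0, rng=rng)   # delta = 0: no structure
c_emp = empirical_separability(X, y, p=25)
S_p = log(null_separability(10, 25)) - log(max(c_emp, 1e-12))
print(c_emp, S_p)                                        # S_p ~ 0 for delta = 0
```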
Because Q p and Q emp p are defined on sets (Ω p and Ω emp p ) of different cardinality, I define the (signed) surprise S p by subtracting the reference KL divergence between the uniform distributions on these spaces: Notice that the summand in the definition of KL divergence, Equation (8), is only nonzero for z ∈ Ω emp p ; one then obtains where I have defined the empirical separability c emp n,p as the fraction of linearly separable subsets of size p in Z m : The signed surprise S p is positive (respectively negative) when the fraction of linearly separable subsets of size p is smaller (respectively larger) than expected in the null model. Separability in a Synthetic Data Set and in MNIST The discussion above encourages the use of the empirical separability c emp n,p as a detailed description of the linear separability of a data set in an information theoretic framework. Despite being one of the simplest benchmark data sets used in machine learning, MNIST is already rather complex; its classes are known to have small intrinsic dimensions and varied geometries [15]. Therefore, before turning to MNIST, let us consider a simple controlled experiment, where the data are extracted from a simple one-parameter mixture distribution, defined, as follows. Let σ ∈ {−1, +1} be a Bernoulli random variable with parameter 1/2, which generates the labels. The data points ξ ∈ R n are extracted from a multivariate normal distribution with σ-dependent mean. The joint probability distribution of each point-label pair is where f N (µ,I) is the probability density function of the multivariate normal distribution with mean µ and identity covariance matrix. The parameter δ measures the distance between the two means: Figure 2 shows the empirical separability c emp n,p , as a function of the size p of the subsets, for such a data set containing m = 200 data points in n = 20 dimensions. When δ = 0, all of the data points are extracted from the same distribution, regardless of their labels: the data have no structure and the separability follows the null model, as in Equation (2). While δ increases, equally labelled points start to cluster, and the separability at any given p > n increases, as expected from the qualitative discussion in Section 2. It is interesting to note that the width of the transition region (∆p in Figure 1) is also an increasing function of δ. This dependence was not expected a priori; In Section 7, I will show that the theory of structured data presented below allows for explaining this behavior. Let us now compute c emp n,p for the rescaled MNIST data set. Figure 2 shows the results of three numerical experiments, as compared with the null model prediction (2), and elicits four observations. (i) MNIST data are significantly more separable than the null model. For instance, the signed surprise, with respect to the null model, of the empirical dichotomies separating the digits "3" and "7" takes the values S 400 ≈ −55, S 500 ≈ −100, S 600 ≈ −150. (ii) Even within the same data set, different classifications can have different probabilities of separation; the dichotomy separating the digits "4" and "9" in rescaled MNIST is closer to the null model than the dichotomy of "3" and "7" (e.g., S 400 ≈ −48). (iii) Destroying the structure by random reshuffling of the labels makes the separability collapse onto that of the null model; the surprise S p in this case is, at most, of order 10 −1 for all p. 
(iv) Similarly to what happens in the more controlled experiment with the synthetic data above, the separability curve of the "3"/"7" dichotomy, which has its transition point at a larger value of p than the "4"/"9" dichotomy, also has a wider transition region. This analysis shows that, contrary to what appeared by looking solely at the whole data set, the dichotomies of rescaled MNIST are much more likely to be realized by a linear separator than random ones. In relation to the separability as a function of p, the null model has a single parameter, the dimension n. Is it possible to interpret the empirical curves as those of the null model with an effective dimension n_eff? Increasing n has the effect of proportionally increasing the value p_c, because the storage capacity is fixed at α_c = 2. However, while fixing n_eff ≈ 280 indeed aligns the critical number of points p_c with the empirical one, it yields a much smaller width of the transition region (∆p ≈ 80 for the null model and ∆p ≈ 300 in the data). Furthermore, notice that the values of the surprise for the "3"-vs.-"7" and "4"-vs.-"9" experiments are not very different. The reason is the naivety of the null model, which hardly captures the properties of the empirical sets, and whose term c_{n,p} therefore dominates in S_p. These observations, together with the motivations discussed above, are a spur for the definition of a more nuanced and versatile model of the separability of structured data.

Parameterized Model of Structured Data
Fixing a model of data structure in this context means fixing a generative model of data. Here, I use the model first introduced in [28]. This should not be considered a realistic model of real data sets. It is useful as an effective, or phenomenological, parameterization of data structure. It has two main advantages: (i) it allows the analytical computation, within a mean-field approximation, of the probability of separation c_{n,p}; and (ii) it naturally points out the relevant geometric-probabilistic parameters that control linear separability. The model is expressed in the form of constraints between the points and the labels. The synthetic data set is constructed as a collection of q "multiplets", i.e., subsets of k points {ξ^1_µ, ..., ξ^k_µ} with prescribed geometric relations between them, and such that the labels are constant within each multiplet:

$$Z_q = \{(\xi^i_\mu, \sigma_\mu)\}_{i=1,\ldots,k;\ \mu=1,\ldots,q}. \tag{13}$$

The total number of point/label pairs is p = qk. Observe that, if one considers the set of all points X = {ξ^i_µ}, not every dichotomy of X is admitted by the parameterization of Z_q in Equation (13). If a dichotomy assigns different labels to two elements of the same multiplet, it cannot be written in this form. The dichotomies that agree with the parameterization of Equation (13) are termed admissible. The relations between the points ξ^i_µ within each multiplet can be fixed, for instance, by prescribing that the k(k − 1)/2 overlaps ρ_{i,j} = ξ^i_µ · ξ^j_µ be fixed and independent of µ (remember that |ξ^i_µ| = 1). The statistical ensemble for Z_q, as specified by the probability density dp(Z_q), is chosen in accordance with the maximum entropy principle: it is the uniform probability distribution on the points and the labels independently, given the constraints:

$$\mathrm{d}p(Z_q) = \frac{1}{Z_{n,q,\{\rho_{i,j}\}}} \prod_{\mu=1}^{q} \Bigg[ P(\sigma_\mu) \prod_{i=1}^{k} \mathrm{d}\xi^i_\mu\, \delta\big(|\xi^i_\mu| - 1\big) \prod_{i<j} \delta\big(\xi^i_\mu \cdot \xi^j_\mu - \rho_{i,j}\big) \Bigg], \qquad P(\sigma_\mu = \pm 1) = \frac{1}{2},$$

where Z_{n,q,{ρ_{i,j}}} is the partition function, fixed by the normalization condition ∫ dp(Z_q) = 1. The null (unstructured) model of Section 3 is recovered in this parameterization in two different limits.
First, if k = 1, each multiplet is composed of a single point, and no constraints are imposed other than the normalization. Second, for any k, if all overlaps are fixed to 1, then all points in each multiplet coincide, ξ^1_µ = ξ^2_µ = ... = ξ^k_µ, and the model is equivalent to the null model with p = q. The theory that will be described below depends on a natural set of parameters ψ_m, with m = 2, ..., k. These quantities are conditional probabilities of geometric events related to single multiplets. They characterize the properties of the multiplets that are relevant for the linear separability of the whole set. Consider a multiplet X = {ξ^1, ..., ξ^k}. ψ_m is a measure of the likelihood that a subset X′ ⊆ X of m ≤ k points is classified coherently by a random weight vector. More precisely, ψ_m is the probability that the scalar product w · ξ has the same sign for all ξ ∈ X′, conditioned on the event that w · ξ has the same sign for all ξ ∈ X′ \ {ξ′}. This probability is computed in the ensemble where the vector w is uniformly distributed on the unit sphere S^{n−1}, X′ is uniformly distributed on the subsets of X with m points, and ξ′ is uniformly distributed on the elements of X′. This is coherent with the mean-field nature of the combinatorial theory, which assumes uniformly distributed and uncorrelated quantities (see below). In a few cases, ψ_m can be computed explicitly. For instance, for a doublet {ξ, ξ̄} at fixed overlap ρ = ξ · ξ̄,

$$\psi_2(\rho) = 1 - \frac{\arccos \rho}{\pi}. \tag{16}$$

This is the probability that a random hyperplane does not intersect the segment connecting two points at overlap ρ. It is an increasing function of ρ, from ψ_2(−1) = 0 to ψ_2(1) = 1. If k > 2, the quantity entering the equations will be the mean of ψ_2(ρ) over all the pairs in the multiplet. It can be shown that ψ_m, as a function of the overlaps ρ_{i,j}, does not explicitly depend on the dimensionality n [28]; this property greatly simplifies the analytical computations. In summary, the parameters of the model are the following: the dimensionality n, the multiplicity k, and the k − 1 probabilities ψ_m. Actually, only two special combinations of the parameters ψ_m emerge as relevant from the theory presented in the next sections; I will call them structure parameters. Other functions of the probabilities ψ_m are relevant for other purposes, for instance, when considering the large-p asymptotics of c_{n,p}, which relates to the generalization properties of the linear separator [32].

Combinatorial Computation of the Separability for Structured Data
Cover popularized a powerful combinatorial technique to compute the number of linearly realizable dichotomies in an old and highly cited paper [38]. Despite its appeal, the combinatorial approach (while certainly not extraneous to contemporary statistical physics, both theoretical and applied [40][41][42][43]) remained somewhat confined to very few papers in discrete mathematics, and it was only very recently extended to more modern questions, when it was used to obtain an equation for Ĉ_{n,q}, the number of admissible dichotomies of q multiplets, for structured data of the type defined in the previous section. Ref. [28] first presented the arguments and computations leading to this equation. To make this article as self-contained as possible, I repeat most of the derivation here.

Exact Approach for Unstructured Data (k = 1 Points per Multiplet)
First, I recall the classic computation for unstructured data (k = 1 in our notation).
Combinatorial Computation of the Separability for Structured Data

Cover popularized a powerful combinatorial technique to compute the number of linearly realizable dichotomies in an old and highly cited paper [38]. Despite its appeal, the combinatorial approach (while certainly not extraneous to contemporary statistical physics, both theoretical and applied [40-43]) remained somewhat confined to a few papers in discrete mathematics, and it was only very recently extended to more modern questions, when it was used to obtain an equation for $C_{n,q}$, the number of admissible dichotomies of $q$ multiplets, for structured data of the type defined in the previous section. Ref. [28] first presented the arguments and computations leading to this equation. To make this article as self-contained as possible, I repeat most of the derivation here.

Exact Approach for Unstructured Data (k = 1 Points per Multiplet)

First, I recall the classic computation for unstructured data ($k = 1$ in our notation). The idea is to write a recurrence relation for the number of linearly realizable dichotomies $C_{n,p}$ and, consequently, for the probability $c_{n,p}$, by considering the addition of the $(p+1)$th element $\xi_{p+1}$ to the set $X_p = \{\xi_1, \ldots, \xi_p\}$ composed of the first $p$ elements. Consider one of the dichotomies of $X_p$, call it $\phi_p$; how many linearly realizable dichotomies of $X_{p+1} = \{\xi_1, \ldots, \xi_p, \xi_{p+1}\}$ agree with $\phi_p$ (i.e., take the same values) on the points of $X_p$? When the point $\xi_{p+1}$ is added to the set, two different things can happen: (i) $\mathrm{sgn}(w \cdot \xi_{p+1})$ is the same for all possible weight vectors $w$ that realize $\phi_p$; or (ii) there is at least one weight vector $\hat w$ realizing $\phi_p$ such that $\hat w \cdot \xi_{p+1} = 0$. These two cases lead to different contributions to $C_{n,p+1}$. In the first case, there is only one dichotomy of $X_{p+1}$ agreeing with $\phi_p$, as the value assigned to $\xi_{p+1}$ is fixed. In the second case, the value assigned to $\xi_{p+1}$ can be either $+1$ or $-1$; therefore, the number of dichotomies of $X_{p+1}$ agreeing with $\phi_p$ is 2. Let us call $M_{n,p}$ the number of those dichotomies, among the $C_{n,p}$ dichotomies of $X_p$, such that (ii) holds for the new point; the number of those satisfying (i) is then $C_{n,p} - M_{n,p}$. The reasoning above leads to $C_{n,p+1} = (C_{n,p} - M_{n,p}) + 2 M_{n,p} = C_{n,p} + M_{n,p}$. Here lies the keystone that allows for the closure of the recurrence equation: $M_{n,p}$ is the number of dichotomies conditioned to satisfy a linear constraint; therefore, it is equal to the number of dichotomies, of the same number of points $p$, in $n - 1$ dimensions: $M_{n,p} = C_{n-1,p}$. Finally, the recurrence relation is $C_{n,p+1} = C_{n,p} + C_{n-1,p}$, which translates into the following equation for the probability $c_{n,p} = C_{n,p}/2^p$:

$$c_{n,p+1} = \frac{1}{2}\left(c_{n,p} + c_{n-1,p}\right). \qquad (19)$$

The boundary conditions of the recurrence (19) are

$$c_{1,p} = 2^{1-p}, \qquad c_{n \ge 1, 1} = 1, \qquad (20)$$

which come from the conditions $C_{1,p>0} = 2$ (there are only two normalized weight vectors in one dimension) and $C_{n>0,1} = 2$ (there is always a weight vector $w$ such that $\pm w \cdot \xi = \pm 1$). The solution of Equation (19) is Equation (2), as can be checked directly. However, the more complicated equations satisfied by the probabilities for structured data are not as easily solvable. For this reason, in Section 7 below, I will show a method to compute useful quantities related to the shape of $c_{n,p}$ directly from the recurrence relations, with no need for a closed solution.
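Equation (19) and its boundary conditions are easy to iterate numerically, and the result can be checked against the closed-form solution, Equation (2). A minimal sketch (Python; variable names are ours):

```python
import numpy as np
from math import comb

def c_recurrence(n_max, p_max):
    """Iterate c[n, p+1] = (c[n, p] + c[n-1, p]) / 2 with c[n>=1, 1] = 1, c[0, p] = 0."""
    c = np.zeros((n_max + 1, p_max + 1))
    c[1:, 1] = 1.0
    for p in range(1, p_max):
        for n in range(1, n_max + 1):
            c[n, p + 1] = (c[n, p] + c[n - 1, p]) / 2
    return c

c = c_recurrence(n_max=10, p_max=40)
# Cover's closed form: c_{n,p} = 2^{1-p} * sum_{j=0}^{n-1} binom(p-1, j)
n, p = 7, 20
closed = 2.0 ** (1 - p) * sum(comb(p - 1, j) for j in range(n))
print(np.isclose(c[n, p], closed))   # True
print(c[10, 20])                     # exactly 1/2 at the transition p = 2n
```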
Mean-Field Approach for Pairs of Points (k = 2 Points per Multiplet)

The simplest non-trivial extension of Cover's computation to structured data is $k = 2$. From here on, I will use $\hat c_{n,q}$ and $\hat C_{n,q}$ to denote the fraction and number of linearly realizable admissible dichotomies of $q$ multiplets, because the symbols $c_{n,p}$ and $C_{n,p}$ are reserved for the fraction and number of linearly realizable dichotomies of $p$ points. Notice that all the quantities appearing above are written with no explicit dependence on the points $\xi$. This is because the unstructured case enjoys a strong universality property (as proved in [38]): $C_{n,p}$ is independent of the points of $X_p$, as long as they are in general position. Such generality breaks down for structured data. In this case, the recurrence equations that will be obtained are not valid for all sets $X_p$; rather, they are satisfied by the ensemble averages of $\hat C_{n,q}$ and $\hat c_{n,q}$, in the spirit of the mean-field approximation of statistical physics. The set of points is now $X_q \cup \bar X_q$, where $X_q$ is a set of $q$ points $\{\xi_1, \ldots, \xi_q\}$ and $\bar X_q$ is a set of partners $\{\bar\xi_1, \ldots, \bar\xi_q\}$, with $\xi_\mu \cdot \bar\xi_\mu = \rho$ for all $\mu = 1, \ldots, q$ (remember that all of the points are on the unit sphere). Consider the addition of the points $\xi_{q+1}$ and $\bar\xi_{q+1}$ to $X_q$ and $\bar X_q$, respectively. By repeating the reasoning described above for $k = 1$ with respect to the point $\bar\xi_{q+1}$, one finds a formula for the number $Q_{n,q}$ of dichotomies of the set $\{\xi_1, \bar\xi_1, \ldots, \xi_q, \bar\xi_q, \bar\xi_{q+1}\}$ that are admissible on the first $q$ pairs (and are unconstrained on $\bar\xi_{q+1}$): $Q_{n,q} = \hat C_{n,q} + \hat C_{n-1,q}$. These dichotomies can be separated into two classes, similarly to the two cases (i) and (ii) above: those that can be realized by a weight vector orthogonal to $\xi_{q+1}$ (let us denote their number by $R_{n,q}$) and those that cannot (their number is then $Q_{n,q} - R_{n,q}$). For each dichotomy $\phi$ of the first class, there exists one and only one admissible dichotomy of the full set $X_{q+1} \cup \bar X_{q+1}$ that agrees with $\phi$ and can be realized linearly. In fact, thanks to the orthogonality constraint, there is always, among the weight vectors realizing $\phi$, one vector $w$ such that

$$\mathrm{sgn}(w \cdot \xi_{q+1}) = \phi(\bar\xi_{q+1}), \qquad (21)$$

thus satisfying the admissibility condition on the pair $\{\xi_{q+1}, \bar\xi_{q+1}\}$. The remaining $Q_{n,q} - R_{n,q}$ dichotomies do not allow this freedom. How many of them are realized by weight vectors $w$ such that the admissibility condition (21) is satisfied can be estimated, at the mean field level, by the probability that, given a random weight vector $w$ chosen uniformly on the unit sphere, the scalar products $w \cdot \xi_{q+1}$ and $w \cdot \bar\xi_{q+1}$ have the same sign. This probability does not depend on the actual points, but only on their overlap $\rho$, and it is exactly the quantity $\psi_2(\rho)$ defined in the previous section, Equation (16). I will denote it by $\psi_2$ in the following, with the dependence on $\rho$ understood. The foregoing argument brings the following equation:

$$\hat C_{n,q+1} = R_{n,q} + \psi_2 \left( Q_{n,q} - R_{n,q} \right). \qquad (22)$$

Similarly to what happens in the unstructured case, the unknown term $R_{n,q}$ can be expressed in terms of the variables $\hat C_{\bullet,q}$ by considering the same problem in a lower dimension. In fact, remember that $Q_{n,q}$ above was computed by applying Cover's argument for $k = 1$, because it counts how the number of dichotomies is affected when the single point $\bar\xi_{q+1}$ is added to the set. $R_{n,q}$ must be computed in the same way, since it, again, counts the number of dichotomies that are admissible on the first $q$ pairs and free on $\bar\xi_{q+1}$. However, these dichotomies must satisfy the additional linear constraint $w \cdot \xi_{q+1} = 0$; therefore, the whole argument must be applied in $n - 1$ dimensions. This leads to $R_{n,q} = \hat C_{n-1,q} + \hat C_{n-2,q}$. Finally, substituting this expression of $R_{n,q}$ into Equation (22) yields

$$\hat C_{n,q+1} = \psi_2\, \hat C_{n,q} + \hat C_{n-1,q} + (1 - \psi_2)\, \hat C_{n-2,q}.$$

As above, this translates to a similar equation for the probability $\hat c_{n,q}$:

$$\hat c_{n,q+1} = \frac{1}{2}\left[ \psi_2\, \hat c_{n,q} + \hat c_{n-1,q} + (1 - \psi_2)\, \hat c_{n-2,q} \right]. \qquad (25)$$

The boundary conditions of this recurrence are slightly different than for $k = 1$. They are discussed in Appendix A, together with those for the general case.
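A minimal numerical implementation of this recurrence is sketched below, assuming the reconstructed form of Equation (25) and the approximate boundary conditions $\hat c_{n \ge 1, 1} = 1$ and $\hat c_{n \le 0, q} = 0$ (exact only for $n \ge k$, as discussed in Appendix A; the function name is ours).

```python
import numpy as np

def c_hat_k2(n_max, q_max, psi2):
    """Fraction of admissible dichotomies of q pairs that are linearly realizable,
    iterating c[n, q+1] = (psi2*c[n,q] + c[n-1,q] + (1-psi2)*c[n-2,q]) / 2."""
    c = np.zeros((n_max + 1, q_max + 1))
    c[1:, 1] = 1.0                       # approximate boundary condition at q = 1
    for q in range(1, q_max):
        for n in range(1, n_max + 1):
            below2 = c[n - 2, q] if n >= 2 else 0.0
            c[n, q + 1] = (psi2 * c[n, q] + c[n - 1, q] + (1 - psi2) * below2) / 2
    return c

c = c_hat_k2(n_max=50, q_max=300, psi2=0.9)
# With psi2 = 1 the two points of each pair coincide and the null model is
# recovered (q_c = 2n); with psi2 = 0.9 the transition in q sits below 2n,
# while in terms of points p = 2q it sits above 2n.
print([round(c[50, q], 3) for q in (60, 80, 100, 120)])
```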
General Case Parameterized by k

It is possible to extend the method described above to all $k$. I will only sketch the derivation; the details can be found in [28]. Just as the case $k = 2$ can be treated by making use of the recurrence formula for $k = 1$, the idea here is to construct the case $k$ recursively by using the formula (yet to be found) for $k - 1$, therefore obtaining a recurrence relation in $k$ as well as in $n$ and $q$. To this aim, the $(q+1)$th multiplet $\{\xi^1_{q+1}, \ldots, \xi^k_{q+1}\}$ is split into the two subsets $\{\xi^1_{q+1}\}$ and $\bar\xi_{q+1} = \{\xi^2_{q+1}, \ldots, \xi^k_{q+1}\}$. The formula for $k - 1$ allows for applying the argument to the set $\bar\xi_{q+1}$, thus obtaining the number $Q_{n,q}$ of dichotomies of the set $X_{q+1} \setminus \{\xi^1_{q+1}\}$ that are admissible on the first $q$ complete multiplets and are admissible on the $(q+1)$th incomplete multiplet $\bar\xi_{q+1}$. More formally, $Q_{n,q}$ is the number of linearly realizable dichotomies $\phi$ satisfying the admissibility conditions stated in Equation (26). Now the argument goes exactly as for the case $k = 2$: some of these $Q_{n,q}$ dichotomies (their number being $R_{n,q}$) can be realized by a weight vector orthogonal to the point $\xi^1_{q+1}$; therefore, each of them contributes a single admissible dichotomy of the whole set $X_{q+1}$; the remaining $Q_{n,q} - R_{n,q}$ contribute with probability $\psi_k$. Again, $R_{n,q}$ can be expressed by applying the same argument in $n - 1$ dimensions. Finally, one finds that the probability $\hat c_{n,q}$ satisfies a recurrence equation in $n$ and $q$,

$$\hat c_{n,q+1} = \sum_{l \ge 0} \theta^k_l\, \hat c_{n-l,q}, \qquad (27)$$

where the coefficients $\theta^k_l$ are constants (independent of $n$ and $q$) satisfying a recurrence equation in $k$ and $l$, Equation (28). The boundary conditions for Equation (28) are given in Equation (29); the conditions at $k = 1$ are those that reproduce Equation (19).

Computation of Compact Metrics of Linear Separability

The model of data structure leading to the foregoing equations is very detailed, in that it allows for the independent specification of a large number of parameters. However, the influence of each parameter on the separability $\hat c_{n,q}$ is not equal, with some combinations of parameters being more relevant than others. In this section, I compute two main descriptors of the shape of $\hat c_{n,q}$ as a function of $q$ at fixed $n$: the transition point $p_c$ (equivalently, the capacity $\alpha_c$) and the width $\Delta p$ of the transition region; they are defined more precisely below. We will see that only the structure parameters $\Psi_1$ and $\Psi_2$, the special combinations defined in Section 5, are needed to fix $p_c$ and $\Delta p$.

Diagonalization of the Recurrence Relation

Notice that, while the quantity $\hat c_{n,q}$ given by the theory is expressed as a function of the number of multiplets $q$, the definition of separability discussed in Section 5 is given in terms of the number of points $p = kq$. This is not really a problem in the thermodynamic limit, whereby the separability is expressed as a function of the load $\alpha$. In the following, I will define the location $q_c$ and the width $\Delta q$ of the transition region in the parameterization by the number of multiplets $q$; the corresponding quantities parameterized by $p$ are obtained by rescaling:

$$p_c = k\, q_c, \qquad \Delta p = k\, \Delta q. \qquad (31)$$

Let us consider the discrete derivative of $\hat c_{n,q}$ with respect to $n$: $\gamma_{n,q} = \Delta_n \hat c_{n,q} \equiv \hat c_{n+1,q} - \hat c_{n,q}$. As will be clear momentarily, working with $\gamma_{n,q}$ is convenient because it is normalized, as I will prove below. $\gamma_{n,q}$ satisfies the same recurrence relation as $\hat c_{n,q}$ (Equation (33)). The boundary conditions, in accordance with (20), are

$$\gamma_{n,1} = \delta_{n,0}, \qquad \gamma_{n<0,q} = 0. \qquad (34)$$

The right hand side of Equation (33) has the form of a discrete convolution between $\theta^k_\bullet$ and $\gamma_{\bullet,q}$:

$$\gamma_{n,q+1} = \sum_{l \ge 0} \theta^k_l\, \gamma_{n-l,q}. \qquad (35)$$

The convolution is diagonalized in Fourier space, by defining the characteristic functions

$$\tilde\gamma_q(t) = \sum_n \gamma_{n,q}\, e^{int}, \qquad \tilde\theta^k(t) = \sum_l \theta^k_l\, e^{ilt}. \qquad (36)$$

Multiplying both sides of Equation (35) by $e^{int}$ and summing over $n$ yields $\tilde\gamma_{q+1}(t) = \tilde\theta^k(t)\, \tilde\gamma_q(t)$. From the definition (36) and the boundary conditions (34), one gets $\tilde\gamma_1(t) = 1$; hence, the solution of the recurrence equation is

$$\tilde\gamma_q(t) = \left[\tilde\theta^k(t)\right]^{q-1}. \qquad (39)$$
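The diagonalized form has a direct numerical reading: $\gamma_{\bullet,q}$ is the $(q-1)$-fold convolution of $\theta^k_\bullet$ with itself. For $k = 1$, the coefficients reproducing Equation (19) are $\theta^1_0 = \theta^1_1 = 1/2$, so $\gamma_{\bullet,p}$ is a Binomial$(p-1, 1/2)$ distribution and its cumulative sum recovers Equation (2). A quick check in Python:

```python
import numpy as np
from math import comb

# gamma_{.,q} is the (q-1)-fold convolution of theta^k_. (Equation (39)).
theta = np.array([0.5, 0.5])      # k = 1 coefficients, reproducing Equation (19)
p = 12
gamma = np.array([1.0])           # gamma_{n,1} = delta_{n,0}
for _ in range(p - 1):
    gamma = np.convolve(gamma, theta)

c = np.cumsum(gamma)              # c_{n,p} = sum_{m < n} gamma_{m,p}, for n = 1, 2, ...
cover = [2 ** (1 - p) * sum(comb(p - 1, j) for j in range(n)) for n in range(1, p + 1)]
print(np.allclose(c, cover))      # True: the binomial tail is Cover's formula
```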
Defining the Location and Width of the Transition Region

As mentioned above, $\gamma_{n,q}$ is normalized, which means that

$$\sum_n \gamma_{n,q} = 1,$$

or, equivalently, $\tilde\gamma_q(0) = 1$. To prove this, it suffices to show that $\tilde\theta^k(0) = 1$, i.e., that $\theta^k_\bullet$ is normalized. Summing both sides of Equation (28) in $l$ from 0 to $\infty$ shows that $\tilde\theta^k(0)$ is constant in $k$; therefore, $\tilde\theta^k(0) = \tilde\theta^1(0) = 1$, as can be computed from the boundary conditions (29). Because it is normalized, $\gamma_{\bullet,q}$ can be interpreted as a probability distribution, whose cumulative distribution function is $\hat c_{\bullet,q}$. The $a$th moment of the distribution is

$$\overline{n^a}_q = \sum_n n^a\, \gamma_{n,q}.$$

The same holds for $\theta^k_\bullet$, whose moments $\overline{\theta^a}_k$ can be obtained from its characteristic function $\tilde\theta^k(t)$. Let us focus on the mean $\mu_q$ and the variance $\sigma_q$. Equation (39) allows for expressing these quantities in terms of the mean $\mu_\theta = \overline{\theta}_k$ and variance $\sigma^2_\theta$ of $\theta^k_\bullet$, as $\mu_q = (q-1)\,\mu_\theta$ and $\sigma^2_q = (q-1)\,\sigma^2_\theta$, as can be checked by using Equation (42). We can now define the two main descriptors, $q_c$ and $\Delta q$, which summarize the separability as a function of $q$ (Equations (45)-(47)).

Expression in Terms of the Structure Parameters

To compute these quantities, all we need is $\mu_\theta$ and $\sigma_\theta$, or $\overline{\theta}_k$ and $\overline{\theta^2}_k$. Solving Equation (45) for $q_c$ gives $q_c = n\,\mu_\theta^{-1} + 1$. Solving Equations (46) and (47) for $\Delta q$ gives an explicit but cumbersome expression; the corresponding expressions to leading order in $n$ are collected in Equation (50). The moments of $\theta^k_\bullet$ satisfy an equation, obtained by multiplying both sides of Equation (28) by $l^a$ and summing over $l$ (Equation (51)). The boundary conditions are $\overline{\theta^0}_k = 1$ (computed above) and $\overline{\theta^a}_1 = 1/2$, as given by Equation (29). In particular, for $a = 1$, we obtain a recurrence whose solution is Equation (53), where the structure parameter $\Psi_1$, as defined in Equation (17), implicitly depends on $k$. For $a = 2$, the recurrence Equation (51) becomes a relation that, by substituting $\overline{\theta}_{k-1}$ as given by Equation (53) and solving the recurrence, yields, after some algebra, Equation (55), where $\Psi_2$ is the second structure parameter, defined in Equation (18). Finally, by combining the leading order expansions (50) and the moments (53) and (55), and by rescaling as in Equation (31), we have explicit expressions, Equations (56) and (57), for the two main metrics of separability as functions of the multiplicity $k$ and the structure parameters $\Psi_1$ and $\Psi_2$. For data structured as pairs of points, $k = 2$, Equation (56) gives the storage capacity of an ensemble of segments; this special result was first obtained, by means of replica calculations, in [44], and it was then rediscovered in other contexts in [8,45].
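Since Equations (56) and (57) involve only the structure parameters, a model-free numerical check is possible: run the $k = 2$ recurrence (reusing the function c_hat_k2 from the earlier sketch) and read the transition location and width directly off the curve. The 1/2 and 1/4-3/4 levels below are a practical proxy for the moment-based definitions (45)-(47), not the paper's exact estimator.

```python
import numpy as np

def crossing(curve, level):
    """First q at which the curve drops below the given level (array starts at q = 1)."""
    return int(np.argmax(curve < level)) + 1

n, k, psi2 = 50, 2, 0.9
curve = c_hat_k2(n_max=n, q_max=300, psi2=psi2)[n, 1:]
q_c = crossing(curve, 0.5)
dq = crossing(curve, 0.25) - crossing(curve, 0.75)
print("p_c ~", k * q_c, "  Delta p ~", k * dq)   # rescaling p = k*q, Equation (31)
```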
Dependence on the Structure Parameters and Scaling

The two structure parameters $\Psi_1$ and $\Psi_2$, which control the two main metrics of linear separability, belong to $k$-dependent ranges. The two quantities are not independent, since they are constructed from the same set of $k - 1$ quantities $\psi_m \in [0, 1]$. When conditioned on a fixed value of $\Psi_1$, $\Psi_2$ has a lower bound $\Psi_2^-$ and an upper bound $\Psi_2^+$ that can be computed by considering the two following extreme cases. First, the supremum of $\Psi_2$ is realized in the maximum entropy case, where the value of $\Psi_1$ is uniformly distributed among the $\psi_m$. Second, the infimum of $\Psi_2$ corresponds to the minimum entropy case, where $\Psi_1$ is distributed on the fewest possible $\psi_m$'s. These extreme cases yield the explicit expressions in Equations (59) and (60). The definition of $\Psi_2$, Equation (18), can be rewritten as Equation (61); substituting (59) and (60) into (61), we obtain the bounds quoted in Equations (62) and (63). Figure 3 shows the location of the transition, $p_c$, and the width of the region, $\Delta p$, as functions of $\Psi_1$ and $\Psi_2$ for a few values of $k$. Notice that the range of $\Delta p$ at fixed $k$ and $\Psi_1$ is itself bounded, because of the limited range $[\Psi_2^-, \Psi_2^+]$ of $\Psi_2$. There is an interesting observation to be made on a semi-quantitative level. At fixed $k$ and $n$, $p_c$ is an increasing function of $\Psi_1$. The width $\Delta p$ depends on both structure parameters but, since the range of $\Psi_2$ at fixed $\Psi_1$ is so limited, one expects that, in practice, $\Delta p$ will be approximately an increasing function of $\Psi_1$. Therefore, $\Delta p$ will be, in most cases, an increasing function of $p_c$. This is exactly the phenomenology observed in Figure 2, in both the synthetic data and MNIST. The rescaled location of the transition $p_c/n$, Equation (56), does not depend on $\Psi_2$, and it depends on $\Psi_1$ only through the rescaled value $\Psi_1/k$. For large $k$, it takes the scaling form of Equation (64). The width $\Delta p$, on the contrary, depends on both $\Psi_1$ and $\Psi_2$. Because it is a monotonically increasing function of $\Psi_2$, its upper bound $\Delta p^+$ and lower bound $\Delta p^-$ at fixed $\Psi_1$ can be obtained by substituting (62) and (63) into Equation (57). Expressing $\Delta p^+$ again as a function of the rescaled parameter $\Psi_1/k$, and keeping only the leading term as $k \to \infty$, one obtains the scaling form of Equation (65). Doing the same for $\Delta p^-$ yields a complicated function, which is plotted in Figure 3. A simpler expression for the bound can be obtained by observing that $\Psi_2^- \ge (\Psi_1^2 - \Psi_1)/2$; using this more regular bound yields, at leading order in $k$, Equation (66). Figure 3 shows the large-$k$ scaling behavior of $p_c$, $\Delta p^+$, and $\Delta p^-$. The two metrics are insensitive to most of the microscopic parameters of the theory, and they depend only on the two structure parameters, as shown analytically above. In addition, they display a large degree of robustness even as functions of $\Psi_1$ and $\Psi_2$: measuring $p_c/n$ from the data fixes (up to corrections in $k$) the quantity $\Psi_1/k$, which, in turn, significantly narrows down the range of values attainable by $\Delta p$, the more so the smaller $k$ is.

Discussion

The discussion above focused on the quantification of linear separability within a model that encodes simple relations between data points and their labels, in the form of constraints. Such a model has the advantage of being analytically tractable, and it allows the explicit expression of $p_c$ and $\Delta p$ in terms of model parameters. Moreover, the parameters appearing in the theory have direct interpretations as probabilities of geometric events, thus suggesting routes for further generalization. In the face of its convenience for theoretical investigations, the definition of data structure used here does not aim at a realistic description of any specific data set. It must be interpreted as a phenomenological or effective parameterization of basic features of data structure that have a distinct effect on linear separability. The limited numerical experiments on MNIST data reported above are a proof of concept, showing a real data set with unexpectedly high linear separability, and they serve as a notable motivation for the investigation of data structure. The main goal of this article is the theoretical analysis; therefore, I postpone any comparison of theory and data. Moreover, MNIST is a relatively simple and clean data set. The numerical analysis signals the highly constrained nature of these data, where points that are close with respect to the Euclidean distance in $\mathbb{R}^n$ are more likely to have the same label. However, more complex data sets, such as ImageNET, are expected to be less constrained at the level of raw data, due to the higher variability within each category, and due to what are referred to as "nuisances", i.e., elements that are present but do not contribute to the classification.
Yet, even in these cases, the aggregation of equally-labelled points emerges in the feature spaces towards the last layers of deep neural networks, which improves the efficacy of the linear readout downstream, as empirically observed [14,15]. An interesting, and perhaps unexpected, outcome of the theory concerns the universal properties of the probability of separation $c_{n,p}$. Here, I use the term "universality" in a much weaker sense than is usually intended in statistical mechanics: I use it to denote (i) the qualitative robustness of the sigmoidal shape of the separability curve to the details of the model, and (ii) the quantitative insensitivity of the separability metrics to all but a few special combinations of parameters [46]. Importantly, the two metrics of data structure computed for the model, $p_c$ and $\Delta p$, are the only two important parameters that fix $c_{n,p}$ in the thermodynamic limit, apart from the rescaling by $k$. The central limit theorem suggests this universality property. In fact, $\gamma_{n,q}$ is the probability distribution of the sum of $q - 1$ independent and identically distributed variables, as expressed by Equation (39). Therefore, $\gamma_{n,q}$ converges to a Gaussian distribution with linearly increasing mean and variance. This indicates that $\mu_q$ and $\sigma_q$ are the only two nonzero cumulants in the thermodynamic limit and, thus, $q_c$ and $\Delta q$ are the only two nontrivial metrics related to $\hat c_{n,q}$. This does not, by any means, imply that the model of data structure itself can be reduced to only two degrees of freedom. In fact, the phenomenology is richer if one considers the combinatorial quantity $C_{n,q}$ instead of the intensive one $\hat c_{n,q}$, see [32]; still, regarding the probability of separation, the relevant metrics are the location and width of the transition region.

Appendix A. Boundary Conditions

The boundary conditions of the recurrence Equation (27) require some care. When a single ($q = 1$) multiplet is considered in dimension $n \ge k$, both its admissible dichotomies are linearly realizable. This is because all dichotomies of $k$ points can be realized in $n \ge k$ dimensions, as I mentioned above. Therefore,

$$\hat c_{n \ge k, 1} = 1. \qquad (A1)$$

The boundary conditions for $n < k$ are not simply the same as for $k = 1$. To see this, consider, for instance, what happens in $n = 1$ dimensions when dealing with a single ($q = 1$) multiplet of $k = 2$ points, $\xi$ and $\bar\xi$. Two problems arise: (i) if the two points lie on opposite sides of the origin, a linearly realized dichotomy $\phi$ will always assign them different signs, $\phi(\xi) = -\phi(\bar\xi)$; (ii) there are not enough degrees of freedom to fix the overlap $\rho = \xi \cdot \bar\xi$ while keeping $\xi$ and $\bar\xi$ normalized. These obstructions are problematic when trying to define the value of $\hat c_{1,1}$ for $k = 2$. This quantity appears in the right hand side of the recurrence Equation (25) when $n = 2$ and $q = 1$, where it is needed, alongside $\hat c_{2,1}$, to compute $\hat c_{2,2}$. Retracing the derivation for $k = 2$ shows that $\hat c_{1,1}$ in this context occurs when imposing a linear constraint in 2 dimensions, where it represents the fraction of admissible dichotomies of the doublet $\{\xi, \bar\xi\}$ that can be realized by a weight versor $w$ constrained to be orthogonal to the previously added point. In 2 dimensions, the orthogonality condition fixes $w$ up to its sign. If this constrained $w$ is such that

$$\mathrm{sgn}(w \cdot \xi) = \mathrm{sgn}(w \cdot \bar\xi), \qquad (A2)$$

then exactly 2 admissible dichotomies of $\{\xi, \bar\xi\}$ are realizable; otherwise, the only realizable dichotomies are not admissible.
Therefore, $\hat c_{1,1}$ expresses the probability that Equation (A2) is satisfied; in the mean field approximation, this is $\psi_2(\rho)$. The foregoing argument actually applies for all $k \ge 1$: the probability that all $k$ points in a multiplet lie in the same half-space with respect to the hyperplane realized by a random versor fixes the first non-trivial boundary condition $\hat c_{1,1}$. For $k = 2$, this fixes everything. Let us now consider $k = 3$. In this case, Equation (A1) omits $\hat c_{2,1}$. What should its value be? Again, going back to the argument in Section 6.3 is helpful. $\hat c_{2,1}$ appears in the recurrence when $n = 3$ and a linear constraint is imposed on $w$. This fixes $w$ up to rotations around an axis, identified by a versor $v$. Now, whether the multiplet $\{\xi^1, \xi^2, \xi^3\}$ allows 2 or 0 admissible dichotomies depends on whether there exists a vector $w$ satisfying the constraint and such that $\mathrm{sgn}(w \cdot \xi^1) = \mathrm{sgn}(w \cdot \xi^2) = \mathrm{sgn}(w \cdot \xi^3)$. This happens if and only if the axis of rotation $v$ lies outside the solid angle subtended by the three vectors $\xi^1, \xi^2, \xi^3$. This characterization allows one to compute $\hat c_{2,1}$ by elementary methods of solid geometry: one finds an explicit expression for $\hat c_{2,1}$ as the probability that the axis $v$ falls outside this solid angle. For larger values of $k$, the same reasoning allows one to express the non-trivial boundary conditions $\hat c_{n<k,1}$ as geometric probabilities. Fortunately, the hassle of computing all these probabilities can be bypassed by using the boundary conditions (20), which are approximate for $k > 1$ but still provide asymptotically correct results [28]. In fact, as is evident from the discussion in Section 7, if one takes the thermodynamic limit (30), the contribution of the $k - 1$ approximate values of $\hat c_{n,1}$ becomes negligible. Other ways of taking the thermodynamic limit (e.g., if $k$ is extensive in $n$) may not enjoy this simplification, and may require a different analysis of the boundary conditions.
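The geometric probabilities entering these boundary conditions can also be estimated directly. The sketch below builds a multiplet of $k$ unit vectors at common pairwise overlap $\rho$ (one convenient construction, valid for $\rho \ge 0$; it is ours, not prescribed by the model) and estimates the probability that a random versor classifies all of them coherently, the quantity that fixes $\hat c_{1,1}$. For $k = 2$, it should reproduce $\psi_2(\rho)$ from Equation (16).

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, rho, trials = 40, 3, 0.5, 100_000

# k unit vectors at pairwise overlap rho:
# xi_i = sqrt(rho)*u + sqrt(1-rho)*e_i with u, e_1, ..., e_k orthonormal.
basis = np.linalg.qr(rng.standard_normal((n, k + 1)))[0].T
u, es = basis[0], basis[1:]
xi = np.sqrt(rho) * u + np.sqrt(1 - rho) * es      # shape (k, n), unit rows

w = rng.standard_normal((trials, n))               # random directions on S^{n-1}
signs = np.sign(w @ xi.T)                          # shape (trials, k)
coherent = np.all(signs == signs[:, [0]], axis=1)
print("P(all k points on the same side) ~", coherent.mean())
```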
Activity of right premotor-parietal regions dependent upon imagined force level: an fMRI study

In this study, we utilized functional magnetic resonance imaging (fMRI) to measure blood oxygenation level-dependent (BOLD) signals. This allowed us to evaluate the relationship between brain activity and imagined force level. Subjects performed motor imagery of repetitive right hand grasping with three different levels of contractile force: 10%, 30%, and 60% of their maximum voluntary contraction (MVC). We observed a common activation among each condition in the following brain regions: the dorsolateral prefrontal cortex (DLPFC), ventrolateral prefrontal cortex (VLPFC), supplementary motor area (SMA), premotor area (PM), insula, and inferior parietal lobule (IPL). In addition, the BOLD signal changes were significantly larger at 60% MVC than at 10% MVC in the right PM, the right IPL, and the primary somatosensory cortex (SI). These findings indicate that during motor imagery, right fronto-parietal activity increases as the imagined contractile force level is intensified. The present finding that right brain activity during motor imagery is clearly altered depending on the imagined force level suggests that it may be possible to decode intended force level during the motor imagery of patients or healthy subjects. In voluntary motor execution, accurate control of the appropriate force level is needed for precise motor performance. To date, numerous studies have investigated the relationship between brain activity and the level of contractile force. Single-cell recordings in animals indicate that there is a direct relationship between force level and the discharge rate of cortical neurons in BA 4 (Evarts, 1968), primary somatosensory cortex (SI; Wannier et al., 1991), and PM (Werner et al., 1991). In humans, an electrophysiological study by Perez and Cohen (2009) assessed corticospinal excitability by monitoring the magnitude of motor evoked potentials (MEPs) elicited by transcranial magnetic stimulation (TMS). They found a graded modulation of corticospinal excitability during voluntary contractions. Studies utilizing neuroimaging have also found that brain activity in BA 4, SI, SMA, cingulate cortex, and cerebellum is correlated with contraction force level (Dettmers et al., 1995; Thickbroom et al., 1998; Dai et al., 2001; Ehrsson et al., 2001; Cramer et al., 2002). Taken together, the findings of the above studies indicate that the magnitude of contractile force can be validly assessed by monitoring brain activity via single-cell discharge rate, MEPs, cerebral blood flow, and the blood oxygen level dependent (BOLD) signal. However, during voluntary contractions, brain activity must reflect not only the motor command but also the somatosensory afferent signals from the periphery. Therefore, it remains unclear whether brain activity reflects somatosensory input or motor command. Since there is no somatosensory input during motor imagery, brain activity during motor imagery would be expected to reflect only the motor command. In a study evaluating whether corticospinal excitability during motor imagery is dependent upon imagined force level (Mizuguchi et al., 2013b), subjects practiced generating isometric forces of 10%, 30%, and 60% maximum voluntary contraction (MVC) before MEPs were recorded. Then, MEPs were measured during motor imagery of the same force generations.
The MEP amplitudes recorded in the agonist muscles in the 60% MVC condition were significantly greater than those in the 10% MVC condition. However, since the TMS study assessed only corticospinal excitability, it is still unclear whether activity in brain regions responsible for motor imagery other than BA 4 correlates with the imagined force level. In the present study, we utilized fMRI to quantify BOLD signals and thereby establish a relationship between brain activity and imagined force level. We hypothesized that activity in motor regions such as the PM and SMA correlates with imagined force level.

SUBJECTS

Sixteen normal subjects (three females and thirteen males; mean age 22.9 ± 2.6 years) participated in this study. All of them were right-handed according to the Edinburgh Inventory (Oldfield, 1971). The subjects did not have a previous history of neurological or psychiatric disorders. Before the experiment, informed consent was obtained from all subjects. The study was approved by the Human Research Ethics committee of Waseda University.

PROCEDURE

The subjects performed three motor imagery conditions: (1) 10% MVC; (2) 30% MVC; and (3) 60% MVC. First, grip strength of the right hand was measured using an electronic hand dynamometer (EH101, Hata Sporting Goods Ind., Ltd., Osaka, Japan) outside the MRI room. The dynamometer was adjusted to best fit the grip of the subject's right hand. Then, the subject was placed in a standing position and asked to squeeze the grip as hard as possible for 3 s without moving their arm. The subject was verbally encouraged to maximize the force. The subject performed this action twice, with a 2-min rest between the trials. The mean value of the two trials was adopted as the subject's MVC. After this determination, 10, 30, and 60% MVC of grip strength were calculated. Second, the subjects were instructed to match the grasping force at one of the three force levels for ten trials, with at least a 1-min rest between trials. After each trial, the subjects were given feedback regarding the difference between the performed and the target values. After ten trials of pre-training, the subjects moved immediately to the MRI room, where they performed one of the three conditions for 5 min 12 s. After the fMRI measurements, the subjects were moved outside of the MRI room, where they were again asked to match their force of grasping to the required % MVC for five trials with a hand dynamometer. They received no feedback about their performance. These procedures were repeated for the three different force levels. Thus, the subjects completed three different fMRI scans. The order of the three force-level conditions was randomized for each subject and counterbalanced across all subjects. Before performing the MVC measurement, the difference in motor imagery between the first person perspective and third person perspective (Stevens, 2005) was explained to the subjects. They were subsequently instructed to "imagine repeatedly grasping with the right hand using a first person perspective at your own pace in the fMRI experiment". After each condition, we confirmed that the subjects conducted the instructed imagery. Approximately 5 min of rest was provided between conditions. In total, it took about 90 min for one subject to complete the entire experiment. For the MRI scan, a run for each condition consisted of five alternate repetitions of the task and rest periods. The durations of the task and rest period were both 30 s.
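To illustrate this block design (five alternating 30 s task and rest blocks at TR = 3 s), the sketch below builds the corresponding boxcar regressor and convolves it with a double-gamma haemodynamic response function. The HRF parameters are conventional SPM-style defaults assumed here for illustration, not taken from the paper.

```python
import numpy as np
from scipy.stats import gamma

TR, block_s, n_blocks = 3.0, 30.0, 5
vols_per_block = int(block_s / TR)                  # 10 volumes per 30 s block

# Boxcar: task (1) and rest (0) alternating five times -> 100 volumes (300 s).
boxcar = np.tile(np.r_[np.ones(vols_per_block), np.zeros(vols_per_block)], n_blocks)

# Double-gamma HRF sampled at the TR (peak ~6 s, undershoot ~16 s).
t = np.arange(0, 32, TR)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.sum()

regressor = np.convolve(boxcar, hrf)[: boxcar.size]  # predicted BOLD time course
print(regressor.round(2)[:20])
```

With the first four volumes (12 s) discarded, as described below, the 100 modeled volumes are consistent with the 5 min 12 s run length.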
In the scan, the subjects were presented with a blue-filled or red-filled circle cue via a PC and projector system (VisuaStimDigital, Resonance Technology Co, USA). Each circle was presented on a black background through non-magnetic goggles. When the blue cue was presented, the subjects were instructed to mentally reproduce the requested force with the right hand, without any muscle activation, at a natural and comfortable self-paced rhythm. In addition, they were asked not to change the pace during the experiment. When the red cue was shown, the subjects were asked to relax and not to imagine. The subjects were also asked to keep their muscles relaxed and not to think about anything throughout the entire procedure. Any communication between the experimenter and the subject was made through an intercom.

BEHAVIORAL DATA ANALYSIS

The grasping forces produced by each subject were normalized with reference to the MVC of that particular subject. We then averaged the forces of the last five trials in the pre-fMRI period and all five trials in the post-fMRI period. Differences in the grasping force between the pre- and post-fMRI periods were evaluated with paired t-tests.

fMRI DATA ACQUISITION AND ANALYSIS

All images were acquired using a 1.5 T MR scanner (Signa, General Electric, Wisc., USA). BOLD contrast functional images were acquired using T2*-weighted echo planar imaging (EPI) free induction decay (FID) sequences with the following parameters: TR 3000 ms, TE 50 ms, FOV 22 cm × 22 cm, flip angle 90°, slice thickness 5 mm, and gap 1 mm. The orientation of the axial slices was parallel to the AC-PC line. For anatomical reference, T1-weighted images (TR 30 ms, TE 6 ms, FOV 24 cm × 24 cm, flip angle 90°, slice thickness 1 mm, and no gap) were also obtained for each subject. The first four volumes (12 s) of each fMRI session were discarded because of unstable magnetization. Raw data were analyzed utilizing Statistical Parametric Mapping (SPM8, Wellcome Department of Cognitive Neurology, London, UK) (Friston et al., 1994, 1995a) implemented in MATLAB (Mathworks, Sherborn, Massachusetts, USA). Realigned images were normalized to the standard space of the Montreal Neurological Institute brain (MNI brain). Then, smoothing was executed with an isotropic three-dimensional Gaussian filter with full-width at half-maximum (FWHM) of 8 mm. High-pass filters (128 s) were also applied, and low frequency noise and global changes in the signals were removed. We confirmed that the subjects' head movements were less than the size of one voxel. Statistical analysis was performed on two levels. A first-level analysis was performed for each subject using a general linear model. We constructed a statistical parametric map of the t-statistic for the three contrasts: (1) 10% MVC vs. rest; (2) 30% MVC vs. rest; and (3) 60% MVC vs. rest. Subject-specific contrast images of the estimated parameters were used for a second-level analysis (random-effect model; Friston et al., 1999). The second-level analysis was performed to extend the inference from individual activation to the general population. One-sample t-tests were used with a voxel-wise threshold of p < 0.001 (uncorrected) to generate the cluster images. Then, we set the threshold at p < 0.05 for the cluster level after correction by the false discovery rate (FDR) for the whole brain space. The locations of brain activity were transformed from MNI coordinates into Talairach standard brain coordinates (Talairach and Tournoux, 1988).
If significant activation was found in the white matter, the result was excluded from description in the results section and tables. We also calculated the BOLD signal changes that occurred during tasks in order to allow for the identification of activation peaks for each individual (Nakata et al., 2008). Eight regions were selected based on activation in the 60% MVC condition (Table 3), and each datum was collected from all subjects using the "Plot" option in SPM8. The peak activities of the three conditions in each region were analyzed by repeated-measures analyses of variance (ANOVAs) with condition (10% MVC, 30% MVC, and 60% MVC) as a within-subjects factor (SPSS for Windows version 21; IBM SPSS, Tokyo, Japan); an illustrative code sketch of this analysis is given below. For the repeated-measures factor, we tested whether Mauchly's sphericity assumption was violated. In all cases, sphericity was maintained, and it was not necessary to use the Greenhouse-Geisser correction. When significant effects were identified, post hoc analyses were performed utilizing paired t-tests with the Bonferroni correction in each region. Statistical significance was set at p < 0.05 after Bonferroni correction (equivalent to p < 0.0166 uncorrected).

BOLD SIGNAL CHANGES

The BOLD signal changes in eight regions were compared among conditions. There was a significant main effect of condition in the right PM (F (2,30) = 6.216, p < 0.01), right IPL (F (2,30) = 3.944, p < 0.05), and right SI (F (2,30) = 4.946, p < 0.05). Post hoc analysis showed that activity was significantly larger at 60% MVC than at 10% MVC in the right PM (p < 0.05), in the right IPL (p < 0.05), and in the right SI (p < 0.05). No statistically significant differences among conditions were found for other regions (Figure 3).

DISCUSSION

In the present study, we demonstrated that, for certain areas, brain activity during motor imagery was dependent upon imagined force level. We utilized force levels of 10%, 30%, and 60% of the MVC. We observed a common activation for the three force conditions in several brain regions. Such regions included the DLPFC, VLPFC, SMA, PM, insula, and IPL. This finding is consistent with previous neuroimaging studies examining motor imagery (Hanakawa et al., 2003, 2008; Lacourse et al., 2005; Lotze and Halsband, 2006; Imazu et al., 2007; Guillot et al., 2009; Mizuguchi et al., 2013a). Although we did not find significant differences in any voxels using a whole brain voxel-based analysis, the BOLD signal changes were significantly larger in the 60% MVC condition than in the 10% MVC condition in the right PM, the right IPL, and SI. This discrepancy might be explained by the difference in statistical power between the two analyses. The PM receives a strong input from the IPL (Rizzolatti and Luppino, 2001). Therefore, activity in the right PM and IPL is likely to be part of a fronto-parietal network. However, these regions are known to be engaged even in the absence of actual motor responses. Thus, it is likely that both the PM and IPL are linked to the imagination of force generation, but that these two regions function at different stages in the processing of motor imagery.
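Returning to the repeated-measures analysis described in the Methods, an equivalent computation on extracted BOLD signal changes can be sketched as follows (Python/statsmodels rather than SPSS; the data frame layout, variable names, and simulated values are illustrative only).

```python
import numpy as np
import pandas as pd
from itertools import combinations
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(3)
n_subj, conditions = 16, ["10%MVC", "30%MVC", "60%MVC"]

# Illustrative per-subject BOLD signal changes (%), one row per subject x condition.
df = pd.DataFrame([
    {"subject": s, "condition": c, "bold": rng.normal(loc, 0.3)}
    for s in range(n_subj)
    for c, loc in zip(conditions, (0.4, 0.5, 0.7))
])

# Repeated-measures ANOVA with condition as the within-subjects factor.
print(AnovaRM(df, depvar="bold", subject="subject", within=["condition"]).fit())

# Post hoc paired t-tests with Bonferroni correction (alpha = 0.05 / 3 = 0.0166).
for a, b in combinations(conditions, 2):
    x = df.loc[df.condition == a, "bold"].to_numpy()
    y = df.loc[df.condition == b, "bold"].to_numpy()
    t, p = ttest_rel(x, y)
    print(a, "vs", b, f"t = {t:.2f}, p = {p:.4f}", "(sig)" if p < 0.05 / 3 else "")
```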
Studies utilizing fMRI and positron emission tomography (PET) during motor execution have provided evidence that activity in various regions such as BA 4, SI, PFC, SMA, PM, PPC, cingulate cortex, and cerebellum is correlated with the level of contracting force (Dettmers et al., 1995; Thickbroom et al., 1998; Ehrsson et al., 2000, 2001; Dai et al., 2001; Cramer et al., 2002). In addition, a non-human primate study demonstrated that neuronal activity in the PM contralateral to the active muscle is associated with the level of contractile force (Werner et al., 1991). In the current study, we showed that brain activity during motor imagery of force generation with the right hand was correlated with the imagined force level in the PM on the right (ipsilateral) side, and not on the left (Figure 3). One major difference between actual execution and motor imagery is the presence or absence of muscle contraction. That is, during muscle contraction, the contralateral BA 4 sends signals to motoneurons, and the contralateral SI receives afferent feedback from muscle spindles and cutaneous receptors. The lack of a relationship between activity and imagined force level for the right hand, except in the right fronto-parietal region, might be due to the absence of afferent feedback from the periphery. What are the functions of the right fronto-parietal region? A recent study utilizing diffusion tensor imaging demonstrated that the anatomical connection from the PM to other regions was different for the right and left hemispheres (van der Hoorn et al., 2014). According to this study, the function of the right PM differs from that of the left PM. For example, the right PM has stronger connections to the occipito-parietal region of the opposite hemisphere, while the left PM has stronger connections to the prefrontal area and anterior parietal cortex. Indeed, previous studies suggest that the right fronto-parietal region plays an important role in the integration of somatosensory input and motor command, or in the induction of kinesthesia from somatosensory input (de Jong et al., 2002; Naito et al., 2005). Since the somatosensory cortices might receive efference copy from the motor cortices during motor imagery (Grush, 2004), we infer that the activity in the right fronto-parietal region seen in the present study is related to the amount of kinesthesia or efference copy during motor imagery. Other studies suggest that the right PM is related to motor awareness and the sense of agency (Berti et al., 2005; Tsakiris et al., 2010). Since the subjects were required to imagine at a higher effort level for the higher forces in the present study, activity in the right PM would be expected to also reflect a stronger motor command or greater effort to produce motor imagery. However, to clarify differences in the functions of the fronto-parietal region between the "right" and "left" hemispheres in more detail, future studies will need to perform the same analysis during motor imagery using the left hand at different force levels. A previous study demonstrated that functional connectivity during motor imagery is different for kinesthetic motor imagery and visual imagery (Solodkin et al., 2004). In addition, functional connectivity during motor imagery differs between healthy subjects and stroke patients (Sharma et al., 2009). These results suggest that motor imagery ability and/or imagery modality affect the functional connectivity during motor imagery.
Since the amount of brain activity was dependent upon imagined force levels, functional connectivity might also differ between higher and lower imagined force levels. In the future, we need to clarify whether functional connectivity during motor imagery is altered by different imagined force levels. Recently, we reported that excitability of the corticospinal tract of the left hemisphere increased when the imagined force directed to the right hand increased (Mizuguchi et al., 2013b). Therefore, the left BA 4 would be expected to be associated with the imagined force level. Previous studies utilizing paired-pulse TMS demonstrated that activity of the PM modulated excitability in the contralateral corticospinal tract via the transcallosal pathway (Mochizuki et al., 2004; Ni et al., 2009; Duque et al., 2012; Uehara et al., 2013). In addition, the right PM has an anatomical connection to the left precentral gyrus, which includes a motor representation of the hand (van der Hoorn et al., 2014). Therefore, during motor imagery, activity changes in the right PM might affect excitability in the left corticospinal tract via the transcallosal pathway. In the future, we need to clarify how the right PM or parietal region increases left corticospinal excitability during motor imagery with the right hand. However, we did not see an activation of BA 4 during motor imagery in the present study. While some researchers did find BA 4 activation during motor imagery (Porro et al., 1996; Lotze et al., 1999; Sharma et al., 2008; Chen et al., 2009; Guillot et al., 2009), others did not (Decety et al., 1994; Naito et al., 2002; Kuhtz-Buschbeck et al., 2003; Higuchi et al., 2007; Szameitat et al., 2007). This discrepancy might be associated with such factors as the degree of muscle activity, the type of tasks, and/or differences between the subjects (Munzert et al., 2009). Kuhtz-Buschbeck et al. (2003) investigated brain activity and corticospinal excitability during motor imagery using fMRI and TMS. They found a significant enhancement of corticospinal excitability with TMS, but no significant activation in BA 4 when utilizing fMRI. These findings suggest the possibility that the sensitivity for detecting neural activation, especially in BA 4, was higher for TMS than for fMRI. Therefore, although we did not detect BA 4 activity in the present study, BA 4 might be active during motor imagery. Motor imagery ability is known to be correlated with the enhancement of corticospinal excitability during motor imagery (Williams et al., 2012). In the present study, we did not assess the motor imagery ability of each subject. Therefore, had we screened for motor imagery ability, we might have been able to detect BA 4 activity. Another limitation of the present study was that we did not record an electromyogram for each muscle during motor imagery in the fMRI scan. Subjects might have contracted some of their muscles during motor imagery. However, the lack of left BA 4 activation would indicate that actual muscle activity was minimal or absent during the motor imagery tasks. We believe that brain activity in the present study reflects motor imagery and not intended or unintended muscle activity. In this study, the relationship between neuronal activity and imagined force level was investigated. Our findings suggest that during motor imagery, activity of the right fronto-parietal region increases as the imagined contractile force level with the right hand is intensified.
Motor imagery can be utilized not only for rehabilitation and sports training but also as a means of creating a brain-computer interface (e.g., Neuper et al., 2009). Thus, the main finding of this study, namely that right brain activity during motor imagery is clearly altered depending on the imagined force level, suggests that it may be possible to decode "intended force level" by monitoring activity in the right fronto-parietal region during the motor imagery of patients or healthy subjects.
Prevalence and associated risk factors for mental health problems among young adults in Fiji Island during COVID-19: a cross-sectional study

Introduction: The COVID-19 pandemic has had a significant impact on mental health globally. To understand the impact of the pandemic on mental health in Fiji, this study aimed to investigate the prevalence of anxiety disorder and depression among young adults. Method: An online survey was conducted to assess the prevalence of anxiety disorder and depression among the general population in Suva, Fiji during the COVID-19 pandemic. A total of 1,119 Fijian adults participated in the study. The study was conducted between May 20 and June 30, 2022, using snowball sampling via social media platforms. The Generalized Anxiety Disorder (GAD-7) and Patient Health Questionnaire (PHQ-9) scales were used to measure anxiety and depression, respectively. COVID-19-related stressors were evaluated using an adapted SARS stressor assessment. Univariate and multivariate logistic regression analyses were performed to determine the factors influencing mental health among respondents. Results: The results show that a significant portion of individuals experienced each of the stressors, with the highest prevalence seen for hearing information about the severity of COVID-19. The prevalence of anxiety and depression was found to be 45% and 49%, respectively. Being female, having a pre-existing illness, and experiencing COVID-19 stressors were risk factors for developing anxiety and depression. On the other hand, being employed and having a high BMI were protective factors against developing depression during the COVID-19 lockdown. Conclusion: These findings highlight the importance of addressing the mental health needs of the Fijian population during the COVID-19 pandemic and beyond.

Introduction

The emergence of the Coronavirus disease (COVID-19) outbreak in late 2019 marked the beginning of a global health crisis that rapidly spread across the world (1). The virus was highly contagious, leading to a significant increase in cases and deaths outside of China in March 2020. The World Health Organization declared the outbreak a public health emergency of international concern in January 2020, with over 200 countries and territories reporting cases (2). COVID-19 can be transmitted through various routes, including direct physical contact and the inhalation of respiratory droplets expelled when an infected person coughs or sneezes (3). Governments around the world implemented various measures, such as home confinement, quarantine for infected individuals, social distancing, and the use of personal protective equipment, such as face masks and gloves, in an attempt to control the spread of the virus (4). However, these containment strategies, including isolation and physical confinement, have reportedly had negative impacts on mental health (5). Frustration, loneliness, and worry about the future are frequent emotional responses and established risk factors for various mental health disorders, including anxiety, affective, and post-traumatic stress disorders (6).
The COVID-19 pandemic has had a disproportionate impact on various groups, leading to increased mental health difficulties (7). In particular, healthcare workers are at higher risk of contracting the virus and experiencing heightened stress (8). Individuals with low income and precarious employment face job insecurity and live in overcrowded conditions, contributing to increased stress and depression. Marginalized communities experience systematic inequalities and limited access to healthcare and social support, amplifying the psychological impact of the crisis (9). The COVID-19 pandemic has also affected children and school-going students, placing them at risk of mental health difficulties by disrupting their learning process and hindering their acquisition of knowledge, skills, and structured routines (10). Minority and ethnic groups, facing challenges such as key worker roles, overcrowded living conditions, poverty, and discrimination, are more susceptible to mental health issues. Individuals with pre-existing physical or mental health conditions are also at higher risk of mental health difficulties during COVID-19 because the pandemic can worsen their existing symptoms (7). Fiji reported its first case of COVID-19 in Lautoka on 19 March 2020, and as of 1 June 2023, the country had recorded a total of 68,921 cases and 882 deaths across all divisions (11). The COVID-19 pandemic and the measures implemented to control its spread have disrupted daily routines, caused financial stress, and increased the risk of mental health problems in Fiji, including anxiety, depression, and substance abuse. The isolation and confinement resulting from the pandemic can worsen existing mental health issues, while limited access to mental health care services exacerbates the challenges due to cultural stigma, a shortage of trained professionals, and resource constraints. The economic consequences of the pandemic have also contributed to financial stress and further impacted mental health in Fiji (12). Ensuring the mental well-being of the population in Fiji will be crucial for the country's overall recovery from the pandemic.
There is growing evidence that COVID-19 has caused a substantial impact on mental health (13). The virus's rapid and unprecedented transmission has fueled widespread fear, uncertainty, and anxiety, intensified by constant news updates and the perceived threat of infection (6, 14). The disruptions to routine life, including work, education, and daily activities, have resulted in psychosocial challenges, causing a loss of structure, purpose, and normalcy for many individuals. Financial hardships and economic uncertainties, compounded by job losses, have further heightened stress levels and distress (15). A meta-analysis that examined 68 studies from 19 countries during the pandemic found that approximately 33% of the general population experienced symptoms of anxiety, while 30% reported symptoms of depression (15). Early in the pandemic, a study in China noted alarming figures, with 29% of the population experiencing anxiety and 37.1% grappling with depression (16). A broader international study covering 78 countries reported that 50% of individuals experienced moderate mental health effects due to COVID-19 lockdowns (17). The impact extends beyond high-income countries, with a study spanning 40 European countries reporting a 17.80% prevalence of distress during the pandemic (18). Importantly, lower-income countries have not been spared, as evidenced by studies highlighting a higher prevalence of mental health issues during COVID-19 in low and lower-middle-income countries (19). The Asia-Pacific region, a diverse area with varied socio-economic landscapes, has consistently reported elevated levels of anxiety, depression, and stress during the pandemic (20). Specific attention has been drawn to Pacific Island countries, such as New Zealand, where a couple of studies have reported a higher proportion of psychological disturbances during COVID-19, shedding light on the unique challenges faced by these communities (21-23). Notably, the significant gap in studies on mental health during COVID-19 in Fiji underscores a broader issue of underrepresentation of certain regions, limiting the holistic understanding of the pandemic's mental health impact.
While a substantial body of research has examined the impact of the COVID-19 pandemic on mental health globally (31-33), there is a significant dearth of systematic assessments specifically tailored to the Pacific Island context, particularly in Fiji. Much of the existing literature predominantly stems from studies conducted in diverse cultural and socioeconomic settings, potentially limiting its applicability to the unique circumstances of Pacific Island nations. The experiences of the researcher reveal a critical gap in understanding the mental health outcomes of individuals in Fiji during the ongoing pandemic. Unlike many developed nations, Fiji faces distinctive challenges, including limited mental health resources, unique cultural contexts, and vulnerability to external stressors (34). While one study conducted among 300 physical education and sports teachers in Fiji found that 50% of them were negatively affected by the pandemic (12), no research had been conducted on the broader population. Further, there is a pressing need for research that delves into the nuanced mental health experiences of the broader Fijian population during COVID-19, as existing interventions and findings may not be directly translatable to this specific cultural and regional context. Our study aims to address this gap by providing a focused examination of mental health outcomes in Fiji during the pandemic, contributing vital insights to the broader understanding of the pandemic's impact on mental health in Pacific Island nations. Nevertheless, it is important to recognize that negative psychological consequences were already evident in the Pacific region (35) and in Fiji prior to the pandemic (34), and the COVID-19 situation has likely exacerbated these challenges. Fiji, like many other countries, has faced social and economic disruptions, health concerns, and increased stress levels due to the pandemic. The lack of systematic assessment of mental health outcomes among the general population in Fiji highlights the need for further research to understand the specific impact of the pandemic on mental health in the country. Therefore, this study investigated mental health problems among different population groups during the early stage of the COVID-19 pandemic in Fiji. We also aimed to examine the risk factors associated with developing these psychological problems among the general population in Fiji.

Study design and data collection

An online survey was conducted to gather data on the impact of the COVID-19 pandemic on mental health in Suva, Fiji. The survey was conducted from May 20 to June 30, 2022, after the second wave of the lockdown. The target participants for the study were aged 18 or over and living in Fiji at the time of the survey. A total of 1,119 Fijian adults participated in the study. The sample was collected using the snowball sampling method, in which the survey was distributed through social networks such as Facebook, WhatsApp, LinkedIn, and Instagram. To design the questionnaire, we used Kobo Toolbox to create an online survey that could be accessed through a link. This allowed for easy data collection and visualization. The link for the survey was shared with the target group in two primary ways: (1) by emailing an invitation to participate to all students attending the University of the South Pacific and (2) by using field assistants to distribute the survey link to the target group.
Measurement instruments

COVID-19 stress

The COVID-19-related stressor measure utilized in this study was adapted from the 10-item SARS stressors scale (36). It included seven questions related to COVID-19 infection, quarantine status, the severity of contagiousness, and vacation and financial loss. Respondents answered each item as either yes or no, with a score of 1 or 0, respectively. The scores for all items were then summed to calculate an overall score, with higher scores indicating a greater amount of stress related to COVID-19.

Mental health measures

To assess anxiety levels, we used the Generalized Anxiety Disorder 7-item (GAD-7) scale (37). This scale has excellent validity and reliability, with a Cronbach's alpha coefficient of 0.911. Respondents indicate the frequency of symptoms over the past two weeks on a 0 (not at all) to 3 (almost every day) scale. A summary score is calculated by summing all items, with a range of 0 to 21. Respondents are categorized as having minimal/no anxiety (summary scores of 0-4), mild anxiety (5-9), moderate anxiety (10-14), or severe anxiety (15-21). In addition to these four levels of anxiety, we used a cutoff score of 9 or higher to identify clinical levels of generalized anxiety disorder (38). To measure respondents' levels of depression over the past two weeks, we used the Patient Health Questionnaire (PHQ-9). This well-validated tool has a Cronbach's alpha coefficient of 0.89 and includes nine items that are rated on a 0 (not at all) to 3 (almost every day) scale (39). Scores are calculated by summing the items, with a range of 0 to 27. Scores of 0-4 indicate minimal to no depression, 5-9 mild depression, 10-14 moderate depression, and 15-21 severe depression (40). We used these four levels of depression, as well as a cutoff score of 10 or higher, to identify clinical levels of major depressive disorder (41).

Risk factors

Sociodemographic variables, including age, gender, level of education attained, living region, area of residence, current living status, occupation, and monthly income, were self-reported. Gender was determined by asking whether respondents identified as male, female, or other. Age was used as a continuous variable. The education level of the participants was categorized into the following groups: college, undergraduate, and postgraduate. Living region was assessed by identifying the respondent's region as Central, Northern, Eastern, or Western. Area of residence was classified as urban or rural. Three questions were used to assess respondents' living status, including whether they lived with or without family members or alone. Occupations were classified as unemployed, student, government job, private job, healthcare worker, teacher, business, daily labor worker, and housewife. Monthly income was classified as 0-2,000, 2,001-4,000, or > 4,000 FJD. In terms of health-related variables, the presence of a pre-existing illness, smoking habit, habit of drinking alcohol and kava, body mass index (BMI), self-reported health status, and daily time spent searching for COVID-19 information were considered. Participants were asked if they had any long-standing illness or disability. BMI and daily time spent searching for COVID-19 information were recorded as continuous variables. Smoking and alcohol/kava consumption were determined by asking participants to indicate "yes" or "no".
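The scoring and categorization rules for the two scales described above can be expressed compactly. The sketch below (Python; the function names are ours) applies the severity bands from the text and the binary >= 10 cutoffs used as outcome variables in the analysis section.

```python
def score_scale(item_responses):
    """Sum item responses, each coded 0 (not at all) to 3 (almost every day)."""
    assert all(0 <= r <= 3 for r in item_responses)
    return sum(item_responses)

def severity_band(score):
    """Severity bands shared by GAD-7 (range 0-21) and PHQ-9 (range 0-27)."""
    if score <= 4:
        return "minimal"
    if score <= 9:
        return "mild"
    if score <= 14:
        return "moderate"
    return "severe"

gad7 = score_scale([2, 1, 3, 2, 1, 2, 1])         # 7 GAD-7 items -> 12
phq9 = score_scale([1, 2, 1, 0, 2, 1, 2, 1, 1])   # 9 PHQ-9 items -> 11
print(gad7, severity_band(gad7), gad7 >= 10)      # binary anxiety outcome
print(phq9, severity_band(phq9), phq9 >= 10)      # binary depression outcome
```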
Data analysis

In our study, we used descriptive statistics to summarize the demographic characteristics of the respondents. We reported categorical data as percentages and continuous data as means and standard deviations. To check for data normality, we used the Shapiro-Wilk test. Since our data were not normally distributed, we used non-parametric tests to investigate the relationships between the respondents' general characteristics and their mental health during the COVID-19 pandemic. To identify potential predictors of psychological outcomes, we conducted univariate (unadjusted) and multivariable (adjusted) logistic regression analyses, adjusting for sociodemographic factors. In the univariate analysis, we employed chi-squared tests or the Kruskal-Wallis test to assess the association between potential risk factors and psychological outcomes. Since we used cut-off values for the outcome variables anxiety (≥10) and depression (≥10), we conducted multivariable logistic regression analysis after adjusting for sociodemographic, health, and COVID-19 stressor variables. We included only statistically significant predictors from the univariate analysis in the multivariable logistic regression models and calculated adjusted coefficients and their 95% confidence intervals for the independent variables. A two-tailed test with a significance level of p < 0.05 was considered statistically significant. SPSS statistical software (version 26) was used to analyse the data.
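The analysis was run in SPSS, but the multivariable step might look as follows in Python with statsmodels. This is a hedged sketch: the file name and column names (gad7_total, gender, preexisting_illness, covid_stress) are hypothetical stand-ins for the survey variables, which are not public.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract; one row per respondent.
df = pd.read_csv("fiji_covid_survey.csv")
df["anxiety_case"] = (df["gad7_total"] >= 10).astype(int)  # outcome cutoff used above

# Adjusted model: only predictors significant in the univariate screen are retained.
fit = smf.logit(
    "anxiety_case ~ C(gender) + C(preexisting_illness) + covid_stress",
    data=df,
).fit(disp=False)

# Adjusted odds ratios with 95% confidence intervals (exponentiated coefficients).
summary = pd.concat(
    [np.exp(fit.params).rename("OR"),
     np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
    axis=1,
)
print(summary.round(2))
```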
Ethical approval consideration

The study followed the Declaration of Helsinki and maintained the highest possible ethical standards. The study included a clear description of the procedures followed to obtain informed consent from participants, including the purpose of the study, the voluntary nature of participation, and the necessary ethical approvals obtained from the relevant institutional review boards or ethics committees. Electronic consent to participate was obtained from all respondents before they took part in the study, with the consent form attached to the questionnaire. The study was approved by the ethical clearance committee of the School of Information Technology, Engineering, Mathematics and Physics (STEMP) Academic Unit Research Committee, University of the South Pacific, Fiji.

Demographic and health-related characteristics

Table 1 presents the sociodemographic and health-related characteristics of the respondents. The majority of the respondents were female (59.8%), with a mean age of 26.01 years. In comparing our convenience sample to the known population parameters from the Fiji census, we observed slight variations in the age and gender distribution. The 2017 Fiji census data indicated that the median age of the population was 27.5 years, meaning that half of Fiji's population was below that age. Additionally, the gender distribution was approximately 49% male and 51% female (42). Our sample had a slightly lower mean age (26.01 years) than the population parameters and a higher percentage of females (59.8%).

Most of the respondents were single (79.0%) and lived with their family members (81.0%). The majority of the respondents were students (63.0%) and enrolled in undergraduate education (59.0%). Most of the respondents lived in urban areas (81.9%) and had a monthly income of 0-2,000 FJD (55.1%). In terms of health-related variables, only 9.1% of the respondents reported having a pre-existing illness. A small percentage of respondents reported smoking (13.2%), drinking alcohol (17.4%), or drinking kava (17.1%). The mean body mass index (BMI) of the group was 25.77. In terms of perceived health status, 42.7% of the respondents reported being in good health, while 25.5% reported being in fair or poor health. The mean time spent searching for COVID-19-related information was 3.45 h per day.

A significant portion of respondents experienced each of the COVID-19-specific stressors (Figure 1), with the highest prevalence seen for hearing information about the severity of COVID-19 (92%), knowing someone who has COVID-19 symptoms (84%), cancelling a vacation trip because of COVID-19 (73%), and having a close friend recently diagnosed with COVID-19 (72%).

Overall prevalence of poor mental health

Figure 2 presents the prevalence of mental health issues among the general population in Fiji during the COVID-19 pandemic. For generalized anxiety, 28% of respondents had minimal levels of anxiety, 26% had mild levels, 24% had moderate levels, and 21% had severe levels. For depression, 28% reported minimal depression, 23% mild depression, 16% moderate depression, and 33% severe depression.

Prevalence of poor mental health by profession

Table 2 shows the prevalence of poor mental health among different occupational groups. The prevalence of depression differed significantly between the unemployed (41.2%), students (47.2%), and the employed (42.4%). No other differences between groups were detected.

Factors influencing the prevalence of poor mental health

Table 3 presents the univariate and multivariate results for the risk factors for anxiety and depression. Gender, having a pre-existing illness, and COVID-19 stress were significant predictors of anxiety. In particular, female respondents had a significantly increased risk of experiencing anxiety disorder (OR = 1.88, 95% CI = 1.42-2.48, p < 0.001) compared with their counterparts. Respondents with a pre-existing illness were more likely to experience anxiety disorder (OR = 1.89, 95% CI = 1.19-3.01, p < 0.01). Further, respondents who had experienced higher COVID-19 stress during the pandemic were more likely to experience anxiety disorder (OR = 1.22, 95% CI = 1.13-1.32, p < 0.001).
Summary of main findings

In Fiji, mental health issues have been a growing concern for many years, with limited resources and access to mental health services for the general population (43). The COVID-19 pandemic exacerbated this issue, with a significant portion of the population experiencing mental health challenges such as anxiety and depression. The pandemic also resulted in unprecedented changes in daily life, including lockdowns, social distancing, and travel restrictions. It not only posed a significant threat to physical health but also had a profound impact on mental well-being.

Our findings suggest that a significant portion of young adults in Fiji experienced mental health issues, with higher levels of severity seen for depression than for generalized anxiety. Given the scarcity of national studies on the impact of COVID-19 on mental health in Fiji, we have compared our findings with those from other countries. A similar finding was observed in young adults in New Zealand during the COVID-19 outbreak (44). Studies from around the world have consistently reported an increase in mental health issues, including depression, anxiety, and stress, during the pandemic due to various stressors such as isolation, fear of infection, financial difficulties, and loss of loved ones (45, 46). Interestingly, we found that students consistently showed the highest rates of depression, followed closely by the unemployed and then the employed. Previous studies also reported that students were more susceptible to developing mental health problems than working professionals during COVID-19 (47). Students faced significant disruptions in their education during the COVID-19 pandemic, including the shift to online learning, which often introduced new challenges and stressors (48). The uncertainty of the academic environment and the need to adapt to remote learning can contribute to increased anxiety and depression among students (49). Students also often faced uncertainty about the future, including concerns about job prospects, internships, or the continuation of their education, which could lead to depression (50). Our study also suggests that COVID-19 had a significant impact on individuals and caused a range of related stressors among the adult population in Fiji. Similar findings were observed in other developing countries such as India (51) and Bangladesh (47). It is important to consider the potential impact of these stressors on mental health and to address the needs of individuals who have experienced them.
Our findings suggest that female respondents had a significantly increased risk of experiencing anxiety disorder and depression compared with their counterparts. This finding is in line with previous research showing that women are more likely to experience anxiety disorders and depression than men (52-54). Such gender differences may correspond to women being, on average, more affected by the social and economic consequences of the pandemic than men (55). For instance, school closures and family members becoming unwell may result in additional caregiving responsibilities for women. Women are also more likely to be financially disadvantaged during the pandemic due to lower salaries, less savings, and less secure employment than men (56, 57). As a result, women may be more vulnerable to financial stress and insecurity, which can increase the risk of developing mental health disorders (58). Furthermore, the prevalence of domestic violence increased during periods of lockdown and stay-at-home orders, with women more likely to be the victims of such violence (59). The pandemic has exacerbated existing gender inequalities and increased the burden of caregiving and household responsibilities, which may contribute to the higher prevalence of mental health disorders among women (57). Biological mechanisms may also play a role, as hormonal differences between males and females affect the way they respond to stress. Research has shown that women tend to have higher levels of stress hormones such as cortisol and may be more sensitive to the effects of these hormones on their bodies and brains (60).

Having a pre-existing illness was a risk factor for developing anxiety disorder during the pandemic in Fiji. These findings were consistent with previous studies (61-63). People with pre-existing health conditions may be at higher risk of developing severe COVID-19 symptoms, which can increase anxiety and fear about their health and well-being. They may worry about the potential consequences of contracting COVID-19 and the impact it could have on their health and ability to manage their existing illness (64). Further, people with chronic illnesses may have more limited access to healthcare services during the pandemic, which can lead to increased anxiety about their ability to manage their illness and access the care they need. The pandemic disrupted healthcare systems and forced many people to delay or forego medical appointments, which can exacerbate feelings of uncertainty and anxiety (65).
Our study suggests that higher COVID-19 stress was a risk factor for developing anxiety and depression. In this study, having family members, a close friend, or an acquaintance suspected of having COVID-19, hearing information about the severity of COVID-19, cancelling a vacation trip because of COVID-19, and experiencing income loss because of COVID-19 were the major COVID-19 stressors that elevated respondents' risk of anxiety and depression. These findings are consistent with earlier studies (66, 67). Unemployment, for instance, can lead to financial insecurity and a sense of loss of control over one's life, both of which are known risk factors for depression and anxiety (68). The death of a loved one or friend due to COVID-19 can also cause intense grief and distress, leading to the development of these mental health conditions. Receiving a positive COVID-19 diagnosis can cause fear and uncertainty about one's health and the health of others, which can lead to anxiety symptoms. Further, individuals may have been worried about their family members becoming infected with the virus. This was especially true for working professionals who had to return to their workplaces during the pandemic, such as healthcare workers, who were more exposed and could transmit the virus to their families (69). Research found that participants whose family members worked in healthcare were 44% more likely to develop mental illness (70). The perceived risk of contracting or transmitting the virus to family members contributed to increased stress and anxiety among workers.

Interestingly, our findings indicate that employment was protective against depression during lockdowns. Employed individuals typically have a source of income and financial stability. This financial security can reduce the stress associated with economic uncertainty, which is a common trigger for depression, especially during economic downturns (71, 72). One study conducted in Turkey found that state employees experienced lower levels of anxiety and depression compared to those in the private sector (73). Further, employees in Fiji may have had access to better resources and support systems during the pandemic, which could have contributed to their better mental health outcomes. For example, government employees may have had access to mental health services through employee assistance programs, as well as job protections and financial support during the pandemic. However, further research is needed to confirm this finding and explore potential explanations for this relationship.

High BMI was associated with less depression in our study. High BMI being considered a protective factor might seem counterintuitive, especially given the common perception of high BMI as an adverse health outcome. It is crucial to note that the relationship between BMI and health outcomes can be complex and context-dependent. While high BMI is generally associated with increased health risks (74), in certain populations or contexts it may be linked to better health outcomes. One theoretical rationale for high BMI being perceived as protective, particularly in some low-income populations, could be related to the "obesity paradox."
This phenomenon suggests that, in certain conditions such as chronic diseases or in older age groups, individuals with a higher BMI might have a survival advantage compared to those with a lower BMI. This paradox has been observed in a previous study in which overweight groups exhibited the lowest prevalence of depression (75). It is often attributed to factors such as better nutritional reserves, increased energy stores, and potential protective effects in the face of certain health challenges. Further, it is well established that physical activity is beneficial for mental health, and individuals with a higher BMI may be more likely to engage in weight-management behaviors such as physical activity (76). Physical activity has also been shown to have a positive impact on the immune system, which may be particularly relevant during a pandemic.

Implications of the study

Our study findings have important implications. The high prevalence rates of anxiety disorder and depression underscore the critical need for strengthened mental health support services in Fiji, especially during public health crises like the COVID-19 pandemic. Investing in accessible and culturally sensitive mental health resources can aid in addressing the immediate mental health needs of the population (77). Our findings also underscore the need for targeted public health interventions to address mental health challenges in Fiji, particularly among vulnerable groups such as students, females, and individuals with pre-existing illnesses. Implementing support programs that address the specific needs and challenges faced by these populations can contribute to more effective mental health outcomes. Community-based mental health programs should be considered to foster a supportive environment. These programs can engage community leaders, local organizations, and individuals to create a network of mental health support, reducing stigma and promoting open conversations about mental well-being (78). Understanding the generational impact of mental health, policymakers should focus on long-term resilience-building measures. Incorporating mental health education in schools, workplaces, and community settings can contribute to a more resilient and mentally healthy future population (79). However, our findings should be interpreted with caution. Generalizing to rural populations or those with different socioeconomic and cultural backgrounds should be approached carefully, as Fiji's diversity may result in varied responses to stressors and different prevalence rates. Extrapolating the findings to populations outside Fiji, especially in vastly different cultural and socioeconomic contexts, may not be appropriate. The uniqueness of Fiji's circumstances necessitates careful consideration when applying these findings to dissimilar settings.
Limitations of the study

The participants were recruited through convenience sampling, which may limit the generalizability of the findings to the larger population in Fiji. The participants were also recruited through social media, which may result in a self-selection bias. The data collected in this study were based on self-reported measures, which may be subject to recall bias, social desirability bias, and other sources of response bias. The study used a cross-sectional design, which limits the ability to draw causal inferences between the COVID-19 pandemic and mental health outcomes; longitudinal studies would be necessary to assess the temporal relationships between exposure to the pandemic and mental health outcomes. Further, our adaptation of the SARS-10 scale for assessing COVID-19 stressors was influenced by the historical context and the need for a validated instrument at the onset of our study: at that time there was no well-established COVID-19 stressor scale, whereas the SARS-10 scale was a well-established and widely recognized tool for evaluating stressors related to infectious disease outbreaks. The study relied on self-reported symptoms of depression and anxiety rather than clinical diagnoses made by healthcare professionals. Finally, the study did not collect information on COVID-19 exposure, such as whether participants had contracted the virus or had close contacts who did, which may be an important factor in understanding mental health outcomes during the pandemic.

Conclusion

In conclusion, it is imperative to delve deeper into the implications of the study findings and consider the broader context for mental health interventions in Fiji. The substantial prevalence rates of anxiety disorder and depression uncovered in this study underscore the urgent need for targeted mental health interventions tailored to the unique challenges faced by the general population in Suva, Fiji. The prevalence rates, particularly the high prevalence of hearing information about the severity of COVID-19, emphasize the pervasive impact of pandemic-related stressors on mental well-being. Moreover, the identified risk factors, such as being female, having a pre-existing illness, and exposure to COVID-19 stressors, provide critical insights into the specific demographic and contextual elements that amplify vulnerability to anxiety and depression. These risk factors should guide the development of interventions that address the distinct needs of these at-risk groups. Conversely, the protective factors identified, including employment status and higher BMI, present valuable opportunities for targeted mental health support strategies. Recognizing the potential resilience conferred by employment and certain health characteristics can inform interventions designed to bolster mental well-being in these specific segments of the population. These findings serve as a foundation for evidence-based mental health initiatives in Fiji during future public health crises. The identified risk and protective factors should be integrated into public health strategies, with a focus on proactive and accessible mental health support systems. Policymakers and healthcare professionals can leverage this knowledge to implement interventions that not only address the current challenges but also fortify mental health resilience for the future.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Figure 1: Prevalence of COVID-19-specific stressors (N = 1,119), covering seven stressors: family members suspected of having COVID-19, a close friend recently diagnosed with COVID-19, knowing someone with COVID-19 symptoms, fear of being quarantined, hearing information about the severity of COVID-19, cancelling a vacation trip because of COVID-19, and experiencing income loss because of COVID-19.
Figure 2: Overall prevalence of poor mental health (N = 1,119).
Table 1: Demographic characteristics of the respondents (N = 1,119).
Table 2: Prevalence of poor mental health by profession (N = 1,119).
Table 3: Factors influencing mental health in Fiji during COVID-19.
The Impact of COVID-19 on Stock Market Returns in Vietnam

This paper studies the impacts of COVID-19 on the performance of the Vietnamese stock market, a rapidly growing emerging market in a country that has to date successfully controlled the disease outbreak. The study uses a random-effects model (REM) on panel data of stock returns of 733 listed companies on both HOSE (the Ho Chi Minh Stock Exchange) and HNX (the Hanoi Stock Exchange) from 2 January 2020 to 31 December 2020. The study shows that the number of daily COVID-19 confirmed cases in Vietnam has a negative impact on the stock returns of listed companies in the market. The impacts were more severe in the pre-lockdown and second-wave periods than in the lockdown period. The impacts also differed across sectors, with the financial sector being the most affected. With significant government control and influence over the bank-dominated financial system, the financial sector was expected to absorb some of the negative shocks hitting the real sector. Such expectations were reflected in the stock market's movement during the pandemic.

Introduction

Vietnam is currently one of the most dynamic emerging countries in the world, with a rapidly growing economy and stock market (The World Bank 2020). The stock market of Vietnam is made up of two principal stock exchanges: the Ho Chi Minh Stock Exchange (HOSE), listing companies with charter capital above VND 120 billion, and the Hanoi Stock Exchange (HNX), listing companies with charter capital above VND 30 billion (State Securities Commission of Vietnam 2012). The number of listed companies increased from 5 in 2000 to 743 in 2019, and the stock market of Vietnam has turned from a frontier market into an emerging market. Market capitalization increased almost threefold during the 2014-2019 period, from USD 52.43 billion to USD 149.82 billion (The World Bank 2020).

The coronavirus disease (COVID-19), which commenced at the end of 2019, has had a very serious impact on many fields of social and economic life in all countries across the world in unprecedented ways. On 11 March 2020, the WHO (World Health Organization) declared it a pandemic, and as of 28 December 2020, 222 countries were affected, with over 79 million confirmed cases and nearly 2 million deaths (WHO 2020). COVID-19 has caused substantial negative effects on the performance of stock markets around the world (Zhang et al. 2020; Alfaro et al. 2020; Al-Awadhi et al. 2020; He et al. 2020; Ahmar and Val 2020; Baker et al. 2020; Ding et al. 2020). The Vietnamese stock market is no exception, and the VN-Index declined dramatically between 31 December 2019 and 30 March 2020 (Giang and Yap 2020). Market capitalization declined by USD 37.4 billion in absolute value, or 28% in relative value, in this period. Directive No. 16/CT-TTg implemented a nationwide lockdown, spanning the period 1 April 2020 to 15 April 2020, to curb community transmission of the virus. As a result, the Government successfully controlled COVID-19 infection rates and the stock market began showing signs of recovery, becoming one of the four best-performing stock markets in the world (Nguyen 2020). This paper investigates the response of stock market returns during the COVID-19 outbreak in Vietnam. First, our results show that the number of confirmed COVID-19 cases is negatively correlated with stock returns.
Our contributions add to the growing body of literature on the interplay of pandemics and financial markets, including research on the SARS pandemic (Nippani and Washer 2004; Chen et al. 2007; Chen et al. 2009), the Ebola virus (Del Giudice and Paltrinieri 2017; Ichev and Marinc 2018) and, recently, the COVID-19 pandemic (Onali 2020; Takyi and Bentum-Ennin 2020; Al-Awadhi et al. 2020; Alfaro et al. 2020; etc.). Our study is one of the few that test the effect of COVID-19 on stock returns using emerging market datasets and, to the best of our knowledge, the only one that uses a Vietnamese stock market dataset. Our results show that the negative correlation between disease cases and stock returns exists across markets, regardless of the degree of financial development. Second, our results show a heterogeneous response of stock returns to COVID-19 across sectors, with the financial sector being the most affected. This finding is in line with Demirguc-Kunt et al. (2020), which compares the responses of banks' versus non-bank firms' stock returns using data from 53 countries and finds that bank stocks underperformed non-bank stocks during the COVID-19 outbreak. We show that those results still hold even when one classifies non-bank firms into smaller sector-specific sub-groups. Our results highlight the importance of financial sectors developing sound financial policy so as to best absorb negative shocks hitting the real sector during pandemics.

Literature Review

The stock market often reacts to major events in the environment (Lorraine et al. 2004; Ramiah et al. 2013), as well as to sports events (Edmans et al. 2007; Gopane and Mmotla 2019), natural disasters (Teitler-Regev and Tavor 2019), and changes in politics and current affairs (Burggraf et al. 2020; Hillier and Loncan 2019). It also responds to infectious pandemic diseases such as SARS and the Ebola virus. Epidemics can cause economic losses, trigger anxiety, and create pessimism among investors about future incomes, leading to volatility in stock markets (He et al. 2020; Liu et al. 2020; Jiang et al. 2017). Investors act more optimistically when the stock market is in an upwards trend and there is less potential risk; if the stock market is in a downward trend, investors tend to wait until the market begins to recover before investing (Burns et al. 2012; Liu et al. 2020). During such periods many investors turn to "safe-haven assets" to mitigate risk during volatile economic times (He et al. 2020). As a result, stock prices often fall and market performance worsens.

Nippani and Washer (2004) studied eight Asian and Canadian stock exchanges using the Mann-Whitney test and t-test statistics. They found that stock markets in China and Vietnam were significantly negatively affected by the SARS epidemic. The stock market in Taiwan was also affected by SARS, with a negative correlation between the outbreak of the disease and stock returns in the tourism, hotel, retail, and wholesale sectors, while returns in the biotechnology sector demonstrated a positive correlation (Chen et al. 2007, 2009). The travel industry was hit the hardest and experienced the biggest drop in stock value (sharply falling by 29%) within a month of the SARS outbreak. The relationship between the Avian influenza A virus (H7N9) epidemic and the stock market in China was assessed by Jiang et al. (2017).
They found that, in the market index, stock prices in certain industries, including traditional Chinese medicine and the biomedical industries, were adversely and dramatically influenced by the number of daily infection cases. Del Giudice and Paltrinieri (2017) looked into the effect of the Ebola virus on investors' decisions in the equity mutual funds of Africa. They examined 78 mutual funds in African countries, tracking monthly mutual fund transactions and results from 2006 to 2015, and discovered that Ebola had a large impact on mutual fund flows. Retail investors overreacted to the event: the more heavily Ebola featured in the press, the more investors withdrew their investments from African mutual funds (Del Giudice and Paltrinieri 2017). In the US securities market, Ichev and Marinc (2018) also pointed out the influence of the Ebola pandemic on stocks. They revealed significant effects of the Ebola-infected cases that occurred in the US and West Africa on companies with operations located there, and the stock returns of small companies were more affected by Ebola than those of larger ones. A positive relationship between this epidemic and the food and beverage, pharmaceutical, healthcare supplies, and biotechnology sectors was also shown by Ichev and Marinc (2018); conversely, the other sectors were negatively affected by Ebola.

Recognizing the possible economic impacts of epidemics, when COVID-19 occurred, although some publications found no significant impact of COVID-19 on financial markets (Onali 2020; Takyi and Bentum-Ennin 2020), a number of recent academic studies investigated the influence of this pandemic on the performance of stock markets in various countries using various econometric methods. According to Al-Awadhi et al. (2020), the stock returns of all companies in China responded negatively to both confirmed infected cases and deaths per day. The consequences of COVID-19 for the performance of 64 countries' stock markets were examined by Ashraf (2020), who found that the number of reported infection cases had a negative association with the stock market and that markets seemed to react more strongly to confirmed cases than to deaths. Alfaro et al. (2020) also showed that US stock returns were negatively influenced by COVID-19. Baker et al. (2020) discovered that while previous epidemics, such as the Spanish Flu, had only a minor effect on the US stock market, COVID-19 has had a significant impact. Yilmazkuday (2021) also found that a 1% rise in daily COVID-19 cases in the United States resulted in a fall of about 0.01 percent in the S&P 500 Index after a day, and of around 0.03 percent after a month. Negative impacts on Nigeria's stock market returns were also found by Adenomon and Maijamaa (2020) through Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models. In research by Zhang et al. (2020), it was shown that the epidemic had negative effects on the stock markets of Singapore, Japan, and Korea, as well as the ten countries with the highest numbers of confirmed infected cases in March 2020. In February, the stock market of China demonstrated the biggest standard deviations, but in March it showed the smallest. Additionally, according to Zhang et al. (2020), during the study period the US stock market had the sharpest rise in standard deviation of the nations examined. Liu et al.
(2020) and He et al. (2020) analyzed the influence of COVID-19 on worldwide stock markets and observed that the epidemic had a negative effect on stock returns, with a spreading impact from the pandemic across Asian, European, and American nations (He et al. 2020). COVID-19's extreme effect on capital markets has caused policymakers around the world to enact prohibitions (such as short-selling bans) to mitigate the likelihood of market losses, reduce uncertainty, and protect the stability of the market (Kodres 2020). Moreover, the impact of the COVID-19 outbreak on stock returns varied across industries (To and Bui 2020; Czech and Wielechowski 2021). The sectors most heavily influenced on the stock market by the epidemic include petroleum and gas, machinery, transport, automobile, garment, and hospitality (Schoenfeld 2020). The financial sector (including financial institutions and banks) was heavily affected by COVID-19, according to Goodell (2020), as it witnessed a rise in non-performing loans owing to a large number of withdrawals by depositors within a short period of time and the depletion of borrowers' income. COVID-19, however, did not have a negative impact on all industries. Alam et al. (2020) delineated the positive effect of COVID-19 on the stock returns of the telecommunications and technology industries in Australia, a result of the soaring demand for utilities for learning and working from home (Ramelli and Wagner 2020). During the pandemic, Chinese stocks in the IT and pharmaceutical industries dramatically outperformed the economy more generally (Al-Awadhi et al. 2020).

There have also been some studies on the Vietnamese market in the COVID-19 period. Duc et al. (2020) addressed how Vietnam's oil and gas industry was experiencing its third oil price crisis in 12 years due to the impacts of COVID-19, and how it was proving difficult to return to a period of strong development. Other research pointed out that COVID-19 caused heavy damage to the economy, putting great pressure on production capacity and the global supply chains that directly affect the logistics activities of enterprises in general, and Vietnam's tourism industry was also hit hard by the government's shutdown policy. However, most of the data used by these studies were obtained from the beginning of the year to May 2020, before the second wave of the epidemic had occurred in Da Nang, Vietnam. Furthermore, these studies only provided scenarios for industries to prepare for a future wave of disease and focused mainly on tourism, logistics, supply chains, and the economy in general. Overall, these prior studies concentrate on the stock markets of developed or emerging markets such as China, the US, Japan, South Korea, Germany, France, Spain, and Italy. There was a lack of research on countries with a rapidly growing emerging market that had successfully controlled the disease outbreak, and there is little research on economies demonstrating signs of recovery in the stock market, such as Vietnam. This research was inspired by this gap in the literature and the future growth of the stock market in Vietnam.

Data Source

This research examines how COVID-19 has impacted the daily stock returns of 733 listed Vietnamese stock market companies, including 345 companies on HNX and 388 companies on HOSE.
Daily stock data begin on 2 January 2020, the first day of operation of the stock market in Vietnam in 2020, and end on 31 December 2020. The updated data on the number of daily COVID-19 confirmed cases in Vietnam were obtained from the website of the Ministry of Health of Vietnam (https://ncov.moh.gov.vn/). The stock prices and detailed data on listed companies are from the Hanoi Stock Exchange (https://hnx.vn/) and the Ho Chi Minh Stock Exchange (https://www.hsx.vn/). There are 184,716 observations in total. The websites were accessed on 30 April 2021.

Research Methods

In research examining the influence of the COVID-19 outbreak on worldwide stock market returns, Ashraf (2020) and Al-Awadhi et al. (2020) explained that the spread of a pandemic lasts for a long time (several days or months) rather than being a specific event that occurs at one point in time. Therefore, the panel data regression methodology is more fitting than the classical MacKinlay event-study methodology (cross-correlation problems can arise in abnormal returns when stocks' study periods overlap, and thin trading due to stocks that do not trade every day can be a problem when applying the Fama-French three-factor model or the market model). Furthermore, panel data regression is better at identifying time-series variation in the relationships between independent and dependent variables, as well as at reducing errors such as heteroskedasticity, estimation bias, and multicollinearity (Hsiao 2014; Wooldridge 2010; Baltagi 2008). Therefore, a quantitative method is employed.

Regression Models

Following the panel data regression approach, this research analyzes the impact of COVID-19 on the Vietnamese stock market's performance while controlling for firm-specific characteristics. Based on the study of Anh and Gan (2020), two dummy variables (D_before, the pre-lockdown period between 1 January 2020 and 31 March 2020, and D_lockdown, the lockdown period from 1 April 2020 to 15 April 2020) are generated to estimate the differences in stock returns between these two periods. In addition, based on the specific circumstances in Vietnam, with different periods of the COVID-19 pandemic in 2020 (Pham 2021), the authors generated one more dummy variable (D_second, the second wave of the COVID-19 period in Vietnam from 6 July 2020 to 30 August 2020) to examine the impact of COVID-19 in this more serious period, which saw the first deaths. In the studies of Ashraf (2020), Anh and Gan (2020), and Al-Awadhi et al. (2020), the price-to-book ratio, daily market capitalization, and sector-specific factors (being in the financial, energy, industrial, consumer goods, communications and technology, or healthcare sector) all significantly affect stock returns. Thus, these variables are included in the models of this study. Furthermore, the authors also wanted to know whether the impacts of COVID-19 differed between the two exchanges of Vietnam, which have different scales. Accordingly, the models study HOSE and HNX separately.
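The explicit regression equations appear to have been lost during text extraction; only the variable definitions survive. Based on those definitions and the cited specifications (Al-Awadhi et al. 2020; Anh and Gan 2020), the baseline model was plausibly of the following form; this is a reconstruction, not the authors' verbatim equations:

$$
stockreturn_{i,t} = \alpha_0 + \alpha_1\, case_t + \alpha_2\, D\_before_t + \alpha_3\, D\_lockdown_t + \alpha_4\, D\_second_t + \alpha_5\, marketcap_{i,t} + \alpha_6\, pb_{i,t} + \varepsilon_{i,t}
$$

with the lettered variants (1a)-(1c) presumably estimated for the full market, HOSE, and HNX samples, and with models (2) and (3a)-(3c) adding the sector dummies $D\_industry_j$ (interacted with the period dummies in (3a)-(3c)) and using $\theta_{j,t}$ as the corresponding error term.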
Here, D_industry_j is a vector of dummy variables representing the sector of a company. The sectors included are financial (D_financial), energy (D_energy), industrial (D_industrial), consumer goods (D_consumergoods), communications and technology (D_comtech), and healthcare (D_healthcare); each dummy equals 1 if the company belongs to the respective industry and 0 otherwise. ε_{i,t} and θ_{j,t} are error terms.

The various panel-data regression approaches (FEM/REM/Pooled OLS) are checked to see which one is best for analyzing the COVID-19 impacts. The authors use Stata 14 software to assess the data. The following steps are used to interpret the data: (1) reporting the quantitative data's characteristics using descriptive statistics; (2) examining the normal distribution of variables and calculating Pearson correlation coefficients; and (3) performing regression analysis. There are different panel models depending on α_1, β_1, and γ_1. The Hausman test is used to determine which estimator is more suitable for the models, selecting between the fixed-effects model (FEM) and the random-effects model (REM); the null hypothesis is that the REM is the suitable model (Hausman 1978). α_1, β_1, and γ_1 are viewed as regression parameters in the FEM, whereas in the REM they are considered components of the random disturbance. Furthermore, the Breusch-Pagan Lagrangian multiplier test is used to choose between Pooled OLS and REM, and the F-test is used to check whether FEM or Pooled OLS is more suitable. Following the selection of the best model for the research, it is evaluated again using an autocorrelation test for panel data (Wooldridge 2010) and the VIF (Variance Inflation Factor) test for multicollinearity.
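The paper runs this selection step in Stata; a minimal sketch of the same logic in Python with the linearmodels package is shown below. The panel file and column names are hypothetical, and the Hausman statistic is computed from its textbook definition rather than a built-in routine.

```python
import numpy as np
import pandas as pd
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

# Hypothetical long-format panel indexed by (firm, trading day).
panel = pd.read_csv("vn_stock_panel.csv", parse_dates=["date"])
panel = panel.set_index(["ticker", "date"])
X = panel[["case", "d_before", "d_lockdown", "d_second", "marketcap", "pb"]]
y = panel["stockreturn"]

fe = PanelOLS(y, X, entity_effects=True).fit()  # fixed-effects model (FEM)
re = RandomEffects(y, X).fit()                  # random-effects model (REM)

# Hausman test: H0 = REM is consistent and efficient, so keep REM if p > 0.05.
b = fe.params - re.params
V = fe.cov - re.cov
h_stat = float(b.T @ np.linalg.inv(V) @ b)
p_value = stats.chi2.sf(h_stat, df=len(b))
print(f"Hausman chi2 = {h_stat:.2f}, p = {p_value:.3f}")
```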
Hypothesis Development

Given the consistent empirical results in recent research, the authors expect that the increase in daily confirmed COVID-19 cases in Vietnam has a negative relationship with stock returns; that is, the higher the number of infections, the greater the fall in stock returns. Moreover, given doubts about the differing levels of significance among different periods of COVID-19 (Eleftheriou and Patsoulis 2020; Baig et al. 2020), the authors also examine whether the pre-lockdown, lockdown, and second-wave periods had significant, negative impacts on Vietnam's stock market performance. Different industries are also expected to be adversely affected by COVID-19, especially the financial sector, which is easily affected by economic downturns with unusual deposit withdrawals and the risk of a rise in bad loans (Goodell 2020; Ashraf 2020). However, the communications and technology and healthcare sectors are expected to be positively affected in line with the rising number of confirmed cases (Ramelli and Wagner 2020; Alam et al. 2020; Al-Awadhi et al. 2020). Accordingly, the authors construct the hypotheses with expected signs for the relationships between stock returns and COVID-19 confirmed cases in different industries in Vietnam; Table 1 lists these hypotheses with expected signs based on previous research.

Descriptive Statistics

Table 2 presents the descriptive statistics for the 733 listed companies on Vietnam's stock market over the period 2 January 2020 to 31 December 2020. The average number of daily confirmed COVID-19 infection cases in Vietnam is 4, with the highest number at 81 cases per day. Listed companies on HOSE have an average price-to-book value of 1.20; on HNX this figure is 0.93. During the research period, the average stock return of all listed companies in the market is positive. The stock returns of listed companies on HOSE range from −6.478% to 4.861%, while the returns of stocks on HNX range from −6.655% to 5.401%.

Based on the results of Table 3, the Pearson correlation coefficients between the independent variables in the regression models are lower than 0.5 (50%); thus, the moderate correlation between variables may be expected to eliminate multicollinearity issues in the regression analysis. The variables stockreturn_{i,t}, case_t, marketcap_{i,t}, and pb_{i,t} are checked for unit roots. The Levin-Lin-Chu test is performed on the panel data for each of stockreturn_{i,t}, marketcap_{i,t}, and pb_{i,t}, and the Augmented Dickey-Fuller test is performed on case_t. The results are reported in Table 4: the null hypothesis that each of these variables contains a unit root is strongly rejected.

Regression Model and the Errors

The authors conducted several tests to determine whether FEM (Fixed Effects Model), REM (Random Effects Model), or Pooled OLS was most suitable for the models above; all three approaches were run in Stata 14. For models (1a), (1b), and (1c), Pooled OLS gave p-values for marketcap and pb higher than 0.05; for model (2), it gave p-values for D_finance, D_industrial, and D_comtech higher than 0.05; and for models (3a), (3b), and (3c), four of the six independent variables had p-values higher than 0.05. As a result, Pooled OLS is not suitable for these research models. Furthermore, after running FEM and REM for all the research models, the authors conducted the Hausman test for each. All results show that Prob > Chi-square is greater than 0.05, meaning the null hypothesis cannot be rejected. In short, the random-effects model (REM) is the most appropriate model for this research. After choosing the REM, the authors implemented the VIF test to inspect multicollinearity among the independent variables; the VIF values are less than 5, indicating no multicollinearity. The Breusch-Pagan Lagrangian multiplier test for heteroskedasticity is also used to check the REM; its null hypothesis (H0) is homoscedasticity, and all research models gave Prob > χ2 higher than 0.05, so the null hypothesis cannot be rejected and there is no heteroskedasticity in these random-effects models. With the results of the Wooldridge test for autocorrelation, there is also no autocorrelation error in any model, with Prob > F above 0.05.
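The unit-root and collinearity checks reported above could be reproduced along the following lines; this is again a sketch with hypothetical names (statsmodels ships an ADF test, while the Levin-Lin-Chu panel test is not in the core library).

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Same hypothetical panel as in the previous sketch.
panel = pd.read_csv("vn_stock_panel.csv", parse_dates=["date"]).set_index(["ticker", "date"])

# Augmented Dickey-Fuller test on the daily confirmed-case series
# (one national value per day, so take the first observation per date).
cases = panel.groupby(level="date")["case"].first()
adf_stat, adf_p = adfuller(cases)[:2]
print(f"ADF on case_t: stat = {adf_stat:.2f}, p = {adf_p:.4f}")  # reject unit root if p < 0.05

# Variance inflation factors; values under 5 taken to rule out multicollinearity.
Z = panel[["case", "marketcap", "pb"]].dropna()
vif = pd.Series(
    [variance_inflation_factor(Z.values, i) for i in range(Z.shape[1])],
    index=Z.columns,
)
print(vif.round(2))
```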
Regression and Analysis

The following tables present the regression results for the models described above. Table 5 shows the results of models (1a), (1b), and (1c) via the REM. As can be seen, the number of daily COVID-19 confirmed infection cases in Vietnam is substantially negatively correlated with both HOSE and HNX stock returns, at various significance levels, in all three models. This finding supports the conclusions of other research (Ashraf 2020; Liu et al. 2020; He et al. 2020; Adenomon and Maijamaa 2020; Zhang et al. 2020; Alfaro et al. 2020; etc.) that COVID-19 has significantly harmed stock market returns. For both HOSE and HNX, the dummy variable D_before is negative and significant at 1% and 0.1%, respectively, showing that the COVID-19 pre-lockdown period had a negative impact on the stock returns of all listed companies in the Vietnamese market.

The Impact of COVID-19 on the Vietnamese Stock Market during Pre-Lockdown, Lockdown and Second-Wave Periods

The D_second dummy variable is also negative and significant at 5% on both exchanges. The reasons behind this result are investor concerns about the risk that the second COVID-19 wave would make the production and business activities of enterprises difficult for a long time, especially during this period; it was in this phase that Vietnam recorded its first deaths caused by the epidemic. However, in this declining phase there were positive signals suggesting that investors could expect a recovery of the market. Firstly, investors were reassured by the experience and determination of the authorities in controlling the epidemic. It was difficult for investors to predict the scale and extent of the spread, or when the epidemic would end, but given the success of the first anti-epidemic wave and the drastic zoning measures put in place by the authorities, it was deemed likely that the spread of this second outbreak would soon be controlled. Secondly, the bottom of the VN-Index and HNX-Index in March created a reference point for this decline and triggered active cash flow to buy in and catch the bottom earlier. Thirdly, between these two epidemic outbreaks, the savings interest rate at banks had decreased by around 1% per year, a sign that cash flow was shifting to more profitable investment channels. Securities were the preferred choice when two other popular investment channels, gold and real estate, became less attractive, as gold prices had risen and real estate had poor liquidity (Lam 2020).

Noteworthy is the dummy variable D_lockdown, with a positive value significant at 1% on both HOSE and HNX, implying that the COVID-19 lockdown had a positive impact on the stock performance of listed companies in the market. This finding contradicts the results of Eleftheriou and Patsoulis (2020), who found that COVID-19 lockdowns had a negative effect on international stock markets, as well as the conclusion of Baig et al. (2020) about the detrimental impact of the lockdown period on the U.S. stock market. Vietnam's stock market performance in April 2020 was quite positive, with both the VN-Index and HNX-Index gaining compared to March 2020. Specifically, at the end of the session on 29 April 2020, the VN-Index reached 769.11 points, up 16.08%, and the HNX-Index reached 106.84 points, an increase of 15.32% compared to the session ending on 31 March (Lam 2020). The faith and trust of investors in the efforts of the Vietnamese government to fight the outbreak were the underlying reasons for the stock market's outstanding performance during the lockdown (Giang and Yap 2020). Furthermore, investors returned to the Vietnamese stock exchanges due to the attraction of fair prices for their favored stocks at the time.

In terms of company characteristics, the price-to-book ratio (pb) on both HOSE and HNX during the COVID-19 pandemic is significantly and negatively linked to stock returns at 1%. This finding suggests that HOSE- and HNX-listed companies with weak financial performance or overvalued stocks tended to have lower stock returns in the pandemic period.

The Relationship between Stock Returns and Various Industries on the Vietnamese Stock Market during Pre-Lockdown, Lockdown, and the Second-Wave Period

Table 6 presents the results of models (2), (3a), (3b), and (3c) via the random-effects panel-data regression models.
The findings of model (2) show that, during the COVID-19 pandemic in Vietnam, the various market sectors (financial, energy, industrial, consumer goods, communications and technology, and healthcare) had different relationships with stock returns. COVID-19 had the greatest impact on the financial sector on both HOSE and HNX, followed by the consumer goods sector and the industrial sector. The communications and technology sector was the least influenced by COVID-19 on both HOSE and HNX. In Vietnam, telecommunications are expanding, and during the pandemic the internet became the primary means of linking people for work, education, and other purposes. Unlike in previous research, the energy industry is shown to be even less affected by COVID-19 than the healthcare and consumer goods sectors. The key reasons for this are investors' long-term expectations for the energy market regarding the recovery of oil prices, as companies around the world return to a regular production cycle, and the government's effectiveness in controlling price changes, particularly given that the Vietnamese government performed exceptionally well in disease control. Meanwhile, despite a spike in the healthcare sector (the dramatic rise in the use of masks, dry hand sanitizer, antiseptic liquid, and so on) and in the consumer goods sector (panic buying and hoarding) since the outbreak, these are thought to be short-term effects of the pandemic. Such non-consumption stockpiling is likely to significantly decrease the revenues of businesses in the relevant sectors in the following months; as a consequence, the phenomenon is not really advantageous to them.

The regression results of model (3a) show that stock returns in Vietnam's various sectors were negatively affected during the pre-lockdown period of the COVID-19 pandemic. The pandemic hit the financial sector hardest on HOSE, while energy was the most impacted sector on HNX before the government's nationwide lockdown. As can be seen in model (3b), the nationwide lockdown had a positive impact on the stock performance of all the sectors selected in the research. This finding confirms that investors' increased confidence in the Vietnamese government's efforts to combat the pandemic during the lockdown produced favorable results for all stock market sectors. During the lockdown, the financial sector gained the most of all sectors on HOSE, while on HNX it was communications and technology. The results of model (3c) show that the stock returns of the selected sectors were negative during the second wave of COVID-19 in Da Nang, Vietnam. On both HOSE and HNX, the healthcare sector was the most affected by the pandemic in this period. Vietnam recorded its first deaths due to COVID-19 in this phase, which made investors more concerned about the dangers of the disease, and healthcare companies faced fierce competition from international firms as well as suffering from a lack of investment in expansion and R&D.

Models (2), (3a), (3b), and (3c) show that the effects of the number of daily confirmed COVID-19 infection cases in Vietnam (case), market capitalization (marketcap), and price-to-book ratio (pb) on Vietnamese stock returns are consistent with the results from models (1a), (1b), and (1c). As a result, stock returns in Vietnam were most affected on days when the number of reported cases was highest, confirming COVID-19's significant effect on the Vietnamese stock market.
Moreover, during the COVID-19 pandemic outbreak, the stock performance of listed companies in Vietnam with high price-to-book values was also adversely affected.

(Table note: standard errors in parentheses; * p < 0.10, ** p < 0.05, *** p < 0.001. Source: Authors' calculation and compilation.)

Our results, demonstrating heterogeneous effects of COVID-19 across real sectors, are expected and in line with previous findings. The pharmaceutical and telecommunication sectors were the least affected and, in some cases, gained during the pandemic (Ichev and Marinc 2018; Al-Awadhi et al. 2020; Ramelli and Wagner 2020). Pharmaceutical firms increased sales and profits by selling vaccines and medical supplies, and the pandemic also increased demand for the equipment and software needed to work from home. The consumer goods and tourism sectors, on the other hand, usually suffered due to travel bans and the precautionary saving motives of consumers (Ercolani et al. 2021).

The strong negative effect of COVID-19 cases on financial sector stock returns is unexpected at first. One might expect banks to suffer some losses due to rising non-performing loans made to the real sector; however, one would not expect the banking sector to do worse than the consumer goods sector or the industrial sector, both of which are directly affected by COVID-19. Our results, however, show that financial sector stock returns in Vietnam suffered the most from COVID-19. This is likely because Vietnam's financial system is heavily bank-based. Counter-cyclical lending practices of banks, compelled by the government through direct or indirect intervention, force the banking sector to absorb negative shocks hitting the real sector (Demirguc-Kunt et al. 2020). This is reflected in banking and financial sector stock returns during the pandemic. As predicted by the market, in April 2021 Circular 03/2021/TT-NHNN was issued by the State Bank of Vietnam, requesting credit institutions to restructure loans and reduce or exempt interest charged to borrowers. Ashraf (2020) and Goodell (2020) argue that financial systems are unstable during economic crises or pandemics due to the possibility of large deposit withdrawals and excessive bad loans occurring in a short period of time. One more possible reason for this finding is the stimulus packages and incentives of the Vietnamese government aimed at non-financial industries, such as agriculture, food processing, textile production, manufacturing, aviation, automotive, and tourism. COVID-19 harmed non-financial firms' stocks less than financial firms' stocks as a result of market anticipation of central bank credit policy, government stimulus packages, and the vulnerability of the financial sector during crises and pandemics.

Conclusions

This paper studies the impact of COVID-19 on Vietnam's stock market performance during the pre-lockdown, lockdown, and second-wave periods, looking at 733 companies listed on both HOSE and HNX from 2 January 2020 to 31 December 2020. Via a random-effects model (REM) of panel data regression, the research shows that the rise in the number of daily confirmed COVID-19 infection cases in Vietnam has a negative effect on the stock returns of listed companies in the market. During the pre-lockdown and second-wave periods of COVID-19, stock returns were negatively affected by the pandemic.
Contrary to the negative impact on stock returns in the COVID-19 pre-lockdown and second-wave periods in Vietnam, as well as the negative impact of lockdowns that occurred for other countries' stock markets (such as seen in research of Eleftheriou and Patsoulis 2020; Baig et al. 2020), the COVID-19 lockdown period in Vietnam had a positive impact on stock performance in Vietnam. The underlying reasons for this result were investor belief in the preventative measures taken by the Vietnamese government as well as undervalued stock prices which attracted capital inflow. These measures had the effect of resurrecting the performance of Vietnamese stock markets in the period of COVID-19 lockdown. During the pre-lockdown, lockdown, and second-wave periods of COVID-19 in Vietnam, different market industries were impacted differently by the pandemic. The financial sector, which during economic downturns was described as a vulnerable sector with the risk of an uptick in unusual withdrawals of deposits and bad loans, was the hardest impacted sector during the COVID-19 pandemic on Vietnam's stock markets. In the second-wave period of the pandemic, the healthcare sector was the most impacted despite their services being in high demand. The reason behind this result could be that they have faced fierce competition from international firms as well as suffering from a lack in terms of investment in expansion or R&D. This research poses some recommendations for governments and investors based on findings from the Vietnam stock exchange. Firstly, the undeniable negative impacts of the COVID-19 as well as the daily rise in the number of reported infections cases on stock returns has suggested that proactive and timely responses and containment measures would be needed for nations as well as governments to shield stock markets from serious degradation in future pandemics or epidemics. From the start, the Vietnamese government has taken decisive steps to contain the spread of COVID-19 proactively, including issuing national guidance to all citizens about the pandemic's severity, explicit emergency response guidelines, medical measures, school blockades, travel bans, social distancing, and nationwide lockdown, as well as providing financial assistance and other measures to protect the stock market. These steps taken by the Vietnamese government have raised confidence and trust among investors, leading to a positive relationship between the lockdown period and stock returns at that time. Market performance would deteriorate if investors continued to be concerned and fearful of the future. As a result, the measures to deal with the pandemic should be followed consistently and strictly in order to sustain the positive outcomes in preventing and countering the impact of the pandemic, as well as to improve customer and investor trust and enhance economic growth. Additionally, this research's empirical results show that the impacts of COVID-19 vary depending on the industry. Financial companies' stocks are one of the most severely impacted during an unexpected event such as a pandemic, due to: (i) the high risk of increasing abnormal large-scale withdrawals and bad debts, which could spark business crises or even bankruptcy; (ii) market expectation of counter-cyclical lending practices implemented by the government and the central bank; and (iii) stimulus packages often prioritizing non-financial sectors. Further research on how characteristics of banks (stateowned vs. 
privately owned, leverage ratio, capital adequacy, asset size, etc.) affect their stock returns during pandemics would be an interesting question for future research.
Photo Cross‐Linking Probes Containing ϵ‐N‐Thioacyllysine and ϵ‐N‐Acyl‐(δ‐aza)lysine Residues†

Abstract Posttranslational modifications (PTMs) are important in the regulation of protein function, trafficking, localization, and marking for degradation. This work describes the development of peptide activity/affinity-based probes for the discovery of proteins that recognize novel acyl-based PTMs on lysine residues in the proteome. The probes contain surrogates of ϵ-N-acyllysine, introduced as either hydrazide or thioamide functionalities, to circumvent hydrolysis of the modification during the experiments. In addition to the modified PTMs, the developed chemotypes were analyzed with respect to the effect of peptide sequence. The photo cross-linking conditions and subsequent functionalization of the covalent adducts were systematically optimized by applying fluorophore labeling and gel electrophoresis (in-gel fluorescence measurements). Finally, selected probes, containing the ϵ-N-glutaryllysine and ϵ-N-myristoyllysine analogues, were successfully applied for the enrichment of native, endogenous proteins from cell lysate, recapitulating the expected interactions of SIRT5 and SIRT2, respectively. Interestingly, the latter was able to pull down two different splice variants of SIRT2, which has not been achieved with a covalent probe before. Based on this elaborate proof-of-concept study, we expect that the technology will have broad future applications for pairing of novel PTMs with the proteins that target them in the cell.

The chosen amide bond analogues were either the thioamide functional group (X = CH2, Y = S), which has been employed in mechanism-based SIRT inhibitors, [20] or the δ-aza-lysine-based hydrazides described by Cole and co-workers (X = NH, Y = O). [21] Both were selected due to their ability to form so-called stalled intermediates with the NAD+ co-substrate in the sirtuin active site, resulting in decreased rates of hydrolysis. [20g] Encouragingly, Cen and co-workers recently reported on an ABP containing an ϵ-N-thioacetyllysine that could successfully label overexpressed SIRT2 in HEK293 cells, providing further impetus for this concept. [22]

Results and Discussion

Design and synthesis of photo cross-linking probe collection

For a proof-of-concept probe collection, decameric peptides were chosen to allow several amino acids to flank the modified lysine residue. To assess the effect of the peptide sequence together with the PTM of interest, the four selected sequences were based around lysine residues that are known to be decorated with the different PTMs investigated in this study. Thus, in order to investigate Kac and Kcr, the histone sequences around histone 4 Lys12 (H4K12) and histone 3 Lys9 (H3K9) were chosen, as both have been shown to carry Kac [4a,b, 23] and Kcr. [9] To date, only two proteins have been shown to carry the Kmyr modification on ϵ-amino groups of lysine residues, namely tumor necrosis factor-α (TNF-α) [11a] and interleukin 1α (IL-1α), [24] and we decided to base the probe sequence around TNF-α K20. Glutarylation of numerous lysine residues has been identified on multiple proteins. Carbamoyl phosphate synthetase-1 (CPS-1) is among the proteins that have been most extensively investigated in this context, [8a] and we therefore chose to design our probe sequence around CPS-1 K1356. Previous investigations by Sieber and co-workers showed that the commonly used benzophenone photo cross-linker gave rise to high levels of non-specific binding.
[25] Furthermore, Li and co-workers found that using the diazirine photo cross-linker "photo-Leu" in close proximity to their modified lysine was preferred. [19d] In light of this insight regarding the choice and position of the photo cross-linker, as well as preliminary results from our own laboratory (unpublished), we designed the collection of probe sequences outlined in Scheme 1A. In addition to the mentioned ϵ-amide bond analogues, we also prepared the native oxoamide versions of the probes for comparison (motifs A-C in Scheme 1B). Finally, four well-described acyl-based PTMs to lysine were chosen (Kac, Kcr, Kmyr, and Kglut) (Scheme 1C) and a collection of 36 probes out of the full matrix of 48 combinations was synthesized (Table S1, Supporting Information (SI)). For control experiments and validation of specificity, we also synthesized a non-acylated probe based on sequence 1 and a number of competitor probes that contain PTMs but are devoid of photo cross-linker and click handle (sequences 1^C, 3^C, and 4^C; Scheme 1D and Table S2, SI). Briefly, a versatile α-N-Fmoc-δ-N-Boc-ϵ-N-Teoc-(δ-aza)lysine building block was designed for introduction of hydrazide lysine-mimicking side chains on solid support (Scheme S1, SI), and thioamide-containing building blocks were prepared using Lawesson's reagent [26] (Scheme S2, SI). After several failed attempts to prepare the thiocrotonylated building block, potentially due to instability of the α,β-unsaturated thioamide functionality, [27] it was discarded from our investigation. A small selection of additional probe constructs was furthermore omitted from the series based on the results of initial screens (see explanations vide infra). Thus, 36 different probes, systematically covering peptide sequences, lysine mimics, and PTMs (see Table S1), were synthesized by standard automated Fmoc solid-phase peptide synthesis (SPPS), although coupling of the photo-leucine building block and introduction of the acyl modifications in the hydrazide-based probes were performed manually (Schemes S3 and S4, SI). The competitor sequences were similarly synthesized (Schemes S5 and S6) and all peptides were purified to >95% homogeneity by preparative reverse-phase HPLC separation and lyophilization of the fractions that contained pure product, according to MALDI-TOF MS.

Optimization of ABP methodology using in-gel fluorescence

With the collection of probes in hand, we first optimized the conditions for cross-linked adduct formation and subsequent click chemistry using the native amide-containing control probes based on H4K12 (1A sequences). We optimized the enzyme-probe conjugate formation by applying sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and in-gel fluorescence measurements (SI, Figures S1-S5). For the photo cross-linking we applied UV irradiation (365 nm) for 10 min at 0 °C. [19d] First, we assessed the choice of ligand for the CuI-catalyzed azide-alkyne 3+2 cycloaddition (CuAAC). In our hands, the more polar ligands exhibited performance superior to TBTA. Although different ligand effects may be observed depending on the buffers and reducing agents used, this finding was in agreement with a recent study of hydrophilic ligands, [28] and we therefore chose BTTAA for further experiments (see SI, Figure S1 for western blot analysis, IUPAC names, and chemical structures). We then investigated the efficiency of different copper-ligand ratios on the CuAAC.
Based on labeling of sirtuins 2 and 3 (Figure S2), the optimal ratio of CuSO4-BTTAA was selected to be 1:2. Next, we addressed the amount of probe and the probe-fluorophore ratio, using recombinant enzymes (1 μM) in the presence of HeLa whole cell lysate (2 mg mL−1) to better mimic a cellular environment for the labeling reaction. The experiments revealed dose-dependent and selective labeling of SIRT1, SIRT2, and SIRT5 (Figure 2A and Figure S3). Based on those results, it was decided to continue with 25 μM of probe and 20 equivalents of fluorophore relative to this concentration, except when using the Kmyr probes, where 12.5 μM of probe was considered sufficient. As evident from Figure 2A, the labeling experiments using the acetylated probe (1A-Kac) led to significant labeling of multiple proteins in the lysate in addition to the recombinant SIRT3. After this optimization, we performed a series of control experiments to confirm that the fluorescent labeling of the enzymes relied on addition of probe, UV irradiation, CuSO4, and fluorophore (Figure 2B and Figures S4 and S5). Furthermore, it was confirmed that the non-cross-linking peptides Ac-H4(8-17)K12(myr)-NH2 (1^CA-Kmyr) and Ac-H4(8-17)K12(glut)-NH2 (1^CA-Kglut) do not lead to labeling, which allows for the use of these peptides as competitors. Finally, the importance of the lysine modification was further confirmed by applying a probe without a PTM but still containing the photo cross-linker and alkyne moieties (1-free amine, Table S2), which did not lead to fluorescent bands.

Addressing the importance of PTM and peptide sequence for sirtuin labeling

Encouraged by the above control experiments, we then systematically screened for the effect of peptide sequence (1-4) and modified lysine motif (A-C) on the ability of the probes to interact with the different sirtuins. It was gratifying to observe that both thioamide (1-4B)- and hydrazide-based probes were functional; the hydrazide version of the acetylated probe performed similarly to the native 1A-Kac probe, while the 1B-Kac thioacetylated probe significantly enhanced the labeling efficiency. Therefore, it was decided to focus on thioacetylated probes rather than the hydrazide versions of the acetylated probes. Somewhat surprisingly in light of the recent work by Cen and co-workers, [22] however, the thioacetylated probes appeared to substantially label most sirtuins, including SIRT4 and SIRT5, which are not believed to target Kac [6a, 8a, 12] (Figures S10-S17). Although the observed differences in selectivity between our more elaborate screening of probes and the previous report may be explained by differences in experimental conditions, we conclude that probes containing this modification may be of limited utility and speculate that perhaps the hydrazide versions could rather be revisited after all. Nevertheless, we decided to focus the present investigation on non-Kac modifications. Interestingly, the ϵ-N-glutaryl-(δ-aza)lysine-containing probe (1C-Kglut) exhibited decreased labeling efficiency of SIRT5 compared to 1A-Kglut (Figure S14). This observation was presumably due to zwitterion formation or intramolecular hydrogen bonding between the basic NH group of the hydrazide and the terminal carboxylate. Based on this finding, we decided not to pursue the remaining hydrazide versions of the glutarylated probes. The thioglutarylated probes, on the other hand, showing increased labeling compared to the glutarylated probes, are highly promising for the study of SIRT5, and therefore potentially also of other proteins that may recognize Kglut (Figures S14-S17).
The crotonylated hydrazide-based probes (1-3C-Kcr) robustly labeled SIRT4, which is an interesting observation because SIRT4 has never been connected with this PTM before; this therefore requires further investigation. In addition, future work employing these chemotypes will include investigation of recognition domains such as YEATS [29] as well as the class I zinc-dependent HDACs, which are believed to be the main regulators of Kcr. [16] The ϵ-N-myristoyllysine-containing probe series efficiently labeled SIRT1-3, as expected, [30] and also gave rise to substantial fluorescent bands for SIRT7 (Figures S20 and S21), which is in agreement with a recent study. [31] In general, we found that the PTMs affect probe efficacy more significantly than the peptide sequence, albeit with some exceptions. Therefore, we speculated that adding the hydrazide-based series of probes with sequence 4 would provide limited information and decided not to include 4C probes in the collection. Interestingly, the Kmyr-containing peptide sequences based on TNF-α, which has been shown to be targeted by sirtuins, [11b] appeared to label SIRT2 with some degree of selectivity over the other sequences. This was particularly pronounced when using the probes containing the thioamide or hydrazide Kmyr mimics (Figure 3).

Stability of probes containing amide bond analogues and competition experiments

Based on the above screening results, and because the PTMs in question have been shown to regulate the respective enzymes, [8a, 11b] we selected the TNF-α sequence (3) for probes containing Kmyr residues and the CPS-1 sequence (4) for glutarylated probes for further experiments. First, we challenged the performance of probes containing the different lysine residues (A-C) by addition of NAD+, the co-substrate of sirtuins. This clearly resulted in a significant decrease in fluorescence intensity for the oxoamide-containing probes and, in the case of Kmyr, a complete abolishment of sirtuin labeling (Figure 4). However, the experiment gratifyingly showed retention of fluorescent bands for the thioamide bond-containing probes 3B-Kmyr and 4B-Kglut as well as the δ-aza-lysine mimic 3C-Kmyr (Figure 4A,B and Figure S22). Interestingly, the glutarylated probe 4B-Kglut even exhibited more intense labeling of SIRT5 in the presence of NAD+, which could be explained by extended residence time in the enzyme active site due to formation of a stalled intermediate between probe and co-substrate. Taken together, this strongly indicates that compromised structural integrity of probes that are based on native PTMs (type A) could give rise to ambiguous results if applied in a biological environment, and shows that this challenge can be solved by applying non-cleavable amide bond analogues (i.e., thioamides or hydrazides). Importantly, we further showed by competition experiments that the novel PTM mimics indeed bind to the sirtuin enzymes through specific recognition that can be outcompeted by non-cross-linking, PTM-modified peptides (Figure 4C, Figure S23, and supporting discussion). The thioamide probe (3B-Kmyr) was more efficiently outcompeted than the hydrazide probe (3C-Kmyr) and was therefore selected for the further experiments.

Employing photo cross-linking probes for the enrichment of native enzymes from cell lysates

With these encouraging proof-of-concept results from in-gel fluorescence experiments, we next ventured into trapping of native enzymes from whole cell lysates. Here, we employed coupling of biotin to the protein-ABP conjugate instead of the
fluorophore, followed by enrichment on streptavidin-coated beads and protein identification using western blot analysis. Most ABPs developed for investigation of sirtuins have only been evaluated using recombinant enzymes or with overexpressing cell lines thus far. [19b, 22] A notable exception to this is work from Li and co-workers, who have demonstrated enrichment of endogenous SIRT3 and SIRT5 from HeLa whole cell lysate using probes based on Kac and Kmal, respectively. [19d] We show here the first examples of successful pull-down of endogenous interaction partners for the Kmyr and Kglut posttranslational modifications. Thus, probes 3B-Kmyr and 4B-Kglut were incubated with native HEK293 whole cell lysate and different amounts of the corresponding competitor probes. After elution from the streptavidin-coated beads, the samples were analyzed by western blotting (Figure 5). Importantly, an extensive number of experiments proved necessary to optimize this protocol, which eventually showed that the choice of beads was of tremendous importance. Thus, at first sight, we achieved robust enrichment of several enzymes using both agarose beads and Dynabeads. However, the control experiments revealed this to be accompanied by substantial non-specific binding to the beads, which we unfortunately failed to eliminate, even after extensive washing steps (see Figure S24 for details). Finally, a type of streptavidin magnetic beads that could be washed properly was identified and successfully employed for enrichment (Figure S24C). Satisfyingly, both probes efficiently trapped and pulled down the reported interaction partners SIRT2 and SIRT5, respectively, when employing the optimized conditions (Figure 5). With the myristoylated probe (3B-Kmyr) we enriched two isoforms of SIRT2, and the labeling was successfully abolished by competition using 3^CB-Kmyr for both bands. This constitutes the first demonstration of a versatile photo cross-linking probe that is able to enrich two different endogenous and physiologically relevant splice variants of SIRT2 [i.e., isoform 1 (43 kDa) and isoform 2 (39 kDa)], [32] which underscores the utility of our new probe design (Figure 5A). Affinity purification of both splice variants of SIRT2 from HL60 cell lysate has previously been demonstrated; however, this was achieved by using beads coated with immobilized SirReal2, which is a selective inhibitor of SIRT2 with a long residence time in the active site. [33] We also attempted to enrich HDAC11 from MCF-7 cell lysate, where this enzyme is abundantly expressed. Unfortunately, these efforts failed, which we suspect may be due to the low binding affinities (high KM values) recorded for Kmyr substrates with HDAC11. [17] Finally, the glutarylated probe (4B-Kglut) was able to form a cross-linked adduct with and enrich endogenous SIRT5, and this adduct formation was also outcompeted by the corresponding 4^CB-Kglut (Figure 5B).

Conclusions

In summary, we have developed a new strategy for photo cross-linking probes to address the members of the proteome that target ϵ-N-acyllysine posttranslational modifications, which are highly prevalent and constitute a continuously expanding collection of chemical groups.
We have harnessed design principles from mechanism-based inhibitors of the sirtuin enzymes, because this may allow for extended residence time in the targeted enzyme's pocket and, importantly, will preserve the integrity of the probe during the course of the experiment. Our results are highly encouraging, as they corroborate the hypothesis that we can install enzymatically stable amide bond analogues while retaining the ability of native interaction partners in cell lysates to recognize the modification with high selectivity and specificity. In addition, we provide the first investigation of the importance of peptide sequence for the performance of ϵ-N-acyllysine-containing photo cross-linking probes, screening the efficiency of a combined series of 36 probe constructs against all 7 human sirtuins. The developed methodology provides a basis for detailed chemoproteomics studies by combining our protocol with stable isotope labeling with amino acids in cell culture (SILAC) tandem mass spectrometry-based methods. [34] This will broaden the potential of the technology beyond identification of the endogenous "eraser" enzymes (i.e., HDACs or sirtuins) demonstrated in this proof-of-concept study. With the enhanced structural integrity of the probes, we expect that potential reader domains can be identified as well. Finally, the relatively simple design allows for easy preparation of novel probes containing different PTMs. Thus, with the continuously expanding landscape of acyl modifications to lysine residues in the human proteome [35] discovered by tandem mass spectrometry proteomics methods, we expect this technology to find broad applications in the future.
Cluster simulation of relativistic fermions in two space-time dimensions

For Majorana-Wilson lattice fermions in two dimensions we derive a dimer representation. This is equivalent to Gattringer's loop representation, but is made exact here on the torus. A subsequent dual mapping leads to yet another representation in which a highly efficient Swendsen-Wang type cluster algorithm is constructed. It includes the possibility of fluctuating boundary conditions. It also allows for improved estimators and makes interesting new observables accessible to Monte Carlo. The algorithm is compatible with the Gross-Neveu as well as an additional Z(2) gauge interaction. In this article numerical demonstrations are reported for critical free fermions.

Introduction

Occasionally in talks or papers about dynamical fermions it is mentioned - more or less as a joke - that the computer has no data-type Grassmann and one hence can simulate fermions only via the nonlocal effective theory after integrating them into the determinant. Of course, this is plagued by the well-known inefficiencies. In this article, based on Gattringer's loop representation [1], we show that in two space-time dimensions one actually can get pretty close to 'simulating Grassmann numbers'. We here expand on [1] and its recent numerical implementation [2] in several ways. First, the loop representation is re-derived starting from Majorana fermions in what we think is a particularly natural way. The new connection includes definite boundary conditions on the torus and does not only work in the thermodynamic limit as before. In particular, we can then also approach the finite volume continuum limit. Furthermore, we propose a cluster algorithm that is (practically) free of critical slowing down and allows for improved estimators. In this formulation we can simulate fluctuating boundary conditions, which is necessary to allow for fixed (anti)periodic boundary conditions in the original fermion system. It also makes ratios of partition functions accessible as observables in Monte Carlo simulations. They constitute interesting quantities in the continuum limit.

The original Gross-Neveu model of self-coupled fermions in two dimensions [3] is most naturally written in terms of N species of Majorana fermions. In the lattice discretization with Wilson fermions the euclidean action is given by [4]

S = a² Σ_x { ½ ξ^⊤ C ( γ_μ ∂̃_μ − (ar/2) ∂*∂ + m ) ξ − (g²/4) ( ½ ξ^⊤ C ξ )² }.   (1.1)

The Grassmann-valued field ξ ≡ ξ_{αi}(x) has a spin index α = 1, 2 and a flavor index i = 1, . . . , N that we leave implicit. We denote by ∂, ∂*, ∂̃ the forward, backward and symmetric nearest neighbor differences on our cubic T × L lattice. The charge conjugation matrix C obeys C^⊤ = −C and C γ_μ C^{−1} = −γ_μ^⊤. For even N each pair of Majorana fermions may be considered as one Dirac fermion with its independent ψ, ψ̄. In the Majorana form the full global symmetry group O(N) is manifest, beside (without Wilson term) the discrete γ_5 symmetry whose spontaneous breaking was studied in [3] in the N → ∞ limit. The model is renormalizable in the strict sense: there is no other O(N) invariant scalar 4-fermion interaction term. For N = 2 we have the Thirring model [5], [6]; the cases N ≥ 3 are expected to be asymptotically free. In the remainder of this paper we set the Wilson parameter to r = 1 and work in lattice units a = 1. The discrete chiral symmetry ξ → γ_5 ξ of the massless continuum theory is broken by the Wilson term and is only expected to be recovered in the continuum limit at the critical mass m = m_c.
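To make the discrete derivatives concrete, here is a small illustrative sketch, ours and not the paper's MATLAB code, of the forward, backward and symmetric differences as periodic matrices in one direction, together with the combination ∂*∂ entering the Wilson term. Conventions (sign and the unit lattice spacing) are assumptions.

```python
# One-dimensional lattice difference operators as dense matrices (a = 1).
import numpy as np

def shift(L, bc=+1):
    """Shift matrix implementing f(x) -> f(x+1); bc=-1 gives antiperiodic wrap."""
    S = np.zeros((L, L))
    for x in range(L - 1):
        S[x, x + 1] = 1.0
    S[L - 1, 0] = bc
    return S

L = 8
Sp = shift(L)                 # maps f(x) to f(x+1)
Sm = shift(L).T               # maps f(x) to f(x-1)
I = np.eye(L)
d_fwd = Sp - I                # forward difference
d_bwd = I - Sm                # backward difference
d_sym = 0.5 * (Sp - Sm)       # symmetric difference
laplace = d_bwd @ d_fwd       # equals Sp + Sm - 2I, the piece the Wilson term uses
print(np.allclose(laplace, Sp + Sm - 2 * I))   # True
```

Summed over both directions, this ∂*∂ combination is the lattice Laplacian that the Wilson term subtracts from the naive action.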
On the torus we consider four conceivable combinations of periodic or antiperiodic boundary conditions in the two directions. Periodicity angles different from 0, π - as sometimes used for Dirac fermions - would not lead to a periodic action density for Majorana fermions. We label the possible boundary conditions by a bit-vector ε_μ,

ε_0, ε_1 ∈ {0, 1},   (1.4)

where 0 stands for periodic and 1 for antiperiodic boundary conditions in the corresponding direction. Often the interaction term is factorized by the introduction of an auxiliary bosonic field. For us it will be more convenient to think of m → m(x) as an x-dependent mass for a while. If Z_ξ^ε[m] is the partition function of one free Majorana fermion with boundary condition ε_μ in this background field, then the partition function of the interacting theory is written as a Gaussian average of the N-th power of Z_ξ^ε[m] over the fluctuating mass field. Integrating the fermions in the remaining Gaussian problem yields the Pfaffian of the corresponding antisymmetric lattice operator. In appendix A one can find a reminder of the definition of Pfaffians. For even N we may replace the Pfaffians by the N/2 power of the determinant. In this form, and with a factorizing field, the model can be simulated by standard methods like HMC, as carried out in [4]. For larger g this compute-intensive task became rather difficult due to singularities developing in the operator under the Pfaffian. Of course, the model itself remains completely well-defined on any finite lattice. Fermionic and compact bosonic variables are safe in this respect. As a first step we now introduce the loop representation [1] of Z_ξ^ε[m]. It may in fact also be looked upon as a dimer ensemble similar to those derived in [7] for strong coupling QCD.

2 External field fermion partition function

As a building block for the Gross-Neveu models we consider, for a single Majorana field, the external field action (2.1), consisting of a site term with coefficient ϕ(x) and nearest-neighbor Wilson hopping terms weighted by τ(x, μ). We assume a lattice with T sites in the time direction (μ = 0) and L sites in the space direction (μ = 1). The variables ϕ(x) = 2 + m(x) and τ(x, μ) are external commuting fields. The link field τ is introduced for completeness. It will be dropped at some point. The lattice derivatives in (1.1) combine to Wilson projectors P(n) = ½(1 + n_μ γ_μ) for r = 1, defined for arbitrary lattice unit vectors n = ±μ̂ (2.2). These projectors are one-dimensional, and the last identity for C implies

C P(n)^⊤ C^{−1} = P(−n).   (2.3)

While the field ξ has torus periodicity ε, the external fields ϕ, τ are continued periodically to obtain a periodic action density. Defining the 'covariant' projecting hop operator H_μ, we may, with the help of (2.3), also write the action in a manifestly antisymmetric short-hand form, where H_μ^⊤ is transposed with respect to both spin and space indices. Now we are ready for the partition function Z_ξ^ε[ϕ, τ] with the standard Grassmann measure; the integration yields a Pfaffian, which is a nonlocal expression in the external fields, as in the case of the usual Dirac fermion determinant. In appendix A we evaluate the Pfaffian for τ ≡ 1 and constant ϕ for all four boundary conditions.

3 Equivalent statistical systems

Dimer representation

The Grassmannian Boltzmann factor may be expanded, with a factor {1 + ϕ(x) ξ_1(x) ξ_2(x)} at each site (3.1). All fields in the curly bracket are at x and this factor is best considered as part of the measure. We have here chosen C such that ½ ξ^⊤ C ξ = ξ_1 ξ_2, which just amounts to a phase convention. Note that the square of the hop-term vanishes due to the one-dimensional projectors. There is only one linear combination of the two Grassmann numbers contributing from each site, which squares to zero.
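As a quick numerical reminder of the Pfaffian (its formal definition is recalled in appendix A), the following self-contained sketch, ours and not from the paper, checks the identity Pf(A)² = det(A) for a small random antisymmetric matrix via a Laplace-type expansion along the first row:

```python
import numpy as np

def pfaffian(A: np.ndarray) -> float:
    """Pfaffian of a real antisymmetric matrix of even size (recursive expansion)."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2 == 1:
        return 0.0
    result = 0.0
    for j in range(1, n):
        # Remove rows/columns 0 and j; alternating sign of the expansion.
        keep = [k for k in range(n) if k not in (0, j)]
        result += (-1) ** (j - 1) * A[0, j] * pfaffian(A[np.ix_(keep, keep)])
    return result

rng = np.random.default_rng(0)
B = rng.normal(size=(6, 6))
A = B - B.T                                    # antisymmetric 6 x 6
print(pfaffian(A) ** 2, np.linalg.det(A))      # the two numbers agree
```

The recursive expansion is exponentially expensive and only meant for such small checks; for the full T × L lattice operator one would instead work with the determinant together with a sign convention.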
We now introduce one-bit-valued dimer or bond variables [7] on each link, k(x, μ) = 0, 1, whose values are used to organize the expansion as in (3.2). As in [7], the goal now is to integrate out the fermions to yield a Boltzmann weight ρ[k] for each dimer configuration. By asking how the Grassmann integrations can be saturated site by site, it is clear that a non-zero weight only arises if at each site there are either two dimers adjacent from different links or none at all. In the latter case the integration is saturated by the measure term and a factor ϕ(x) appears for this site. We also call these contributions monomers. Due to the above constraint, the dimers have to form closed non-intersecting and non-backtracking loops. We choose randomly a starting point and an orientation on such a loop, such that along the loop one visits the sites (x_1, x_2, . . . , x_l). Consecutive sites differ by lattice unit vectors, x_{i+1} = x_i + n_i, including x_1 = x_l + n_l in the last step. For such a loop the product of bilinears has to be considered together with the integrations on the l sites involved. Note that here (2.3) is relevant on the links that are traversed in the negative direction. The trivial key formula (3.4) carries out the Grassmann integration at a single site. The resulting expression (3.3) then integrates to

X = − tr[ P(n_1) P(n_2) · · · P(n_l) ].   (3.5)

Here appears the very important minus sign for a closed fermion loop, well known for instance from Feynman diagrams. It is not difficult to see that this result is independent of the starting point and orientation chosen. The evaluation of the spin factor follows [8]. Let us introduce eigenspinors of the projectors,

P(n_i) = |n_i⟩⟨n_i|,  ⟨n_i|n_i⟩ = 1,   (3.6)

so that

X = − ⟨n_1|n_2⟩ ⟨n_2|n_3⟩ · · · ⟨n_l|n_1⟩.   (3.7)

A spinor is rotated by an angle θ by the unitary spin matrix R(θ) = exp[(θ/2) γ_0 γ_1]. This allows us to write |n_{i+1}⟩ = R(Δθ_i) |n_i⟩ with n_{l+1} = n_1 (3.8). Using

⟨n_j| R(Δθ_i) |n_j⟩ = cos(Δθ_i / 2)   (3.9)

we can evaluate the product of overlaps (3.10). The rotation accumulated in steps (3.8) is Θ = Σ_i Δθ_i (3.11). For closed paths we have Θ = 2πν and R(Θ − Δθ_l) = cos(πν) R(−Δθ_l). For the nonzero lattice angles, cos(Δθ_i/2) = 1/√2. Altogether the final result is

X = − (−1)^ν (1/√2)^{N_c},   (3.12)

where N_c is the number of ±π/2 angles ('corners') occurring along the loop and ν = 0, ±1 is the number of complete rotations the loop makes. The extra minus sign for |ν| = 1 is the one associated with fermions under 2π-rotations. If we include a non-trivial τ field, then the product of its Wilson loops over all dimer loops appears in addition in the weight. After this remark we set τ ≡ 1 until further notice. Although we are on a lattice here, we can define homotopy classes of loops. Two loops are homotopic to each other if they can be transformed into each other by a sequence of steps where dimers are only changed around a single plaquette. We see from the above that X is positive for all configurations containing only loops that are homotopic to the trivial loop, just a point. The two minus signs characteristic of fermions compensate each other in this class! Loops can however wind around the torus in either direction, as noted in [2]. A pair of loops winding around the same direction is still in the trivial homotopy class. An odd number of windings leads however to a new class. This may also happen in both directions at the same time, and hence there are the four classes L_00, L_10, L_01, L_11. In figure 1 we show a representative of each class. They are equilibrium configurations of free fermions at m = 0 and T = L = 10. The meaning of the +/− signs in the plots will become clear later.
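As an illustration of the corner-and-winding counting in (3.12), here is a hypothetical helper (our construction; the function name and the encoding of steps as unit-vector tuples are not from the paper) that evaluates the spin factor of one closed, non-backtracking loop:

```python
import numpy as np

def loop_spin_factor(steps):
    """steps: list of 2D unit vectors (tuples) traversing the loop once.
    Returns X = -(-1)**nu * (1/sqrt(2))**N_c as quoted in (3.12)."""
    n_corners = 0
    total_angle = 0.0
    for a, b in zip(steps, steps[1:] + steps[:1]):
        cross = a[0] * b[1] - a[1] * b[0]   # +-1 for a 90-degree turn, 0 if straight
        if cross != 0:
            n_corners += 1
        total_angle += np.arctan2(cross, a[0] * b[0] + a[1] * b[1])
    nu = round(total_angle / (2 * np.pi))   # winding number: 0 or +-1
    return -((-1.0) ** nu) * (1.0 / np.sqrt(2.0)) ** n_corners

# Elementary plaquette loop: nu = 1 and N_c = 4, hence X = +1/4.
print(loop_spin_factor([(1, 0), (0, 1), (-1, 0), (0, -1)]))
```

The positive result for the plaquette loop is the smallest instance of the cancellation between the two fermionic minus signs for topologically trivial loops.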
Only configurations from L_00 have a positive weight, while in the other cases there is an odd number of closed loops with zero total rotation angle, each of which contributes a factor −1. By introducing antiperiodic boundary conditions in some direction, the loops closing around that direction receive yet another sign without changing the topologically trivial ones. With the local weight ρ[k] of (3.13) we now define the positive dimer partition functions Z_k^{αβ}, one for each homotopy class L_{αβ} (3.14). From what was said above, the linear connections between the fermionic partition functions Z_ξ^ε and the dimer partition functions arise, which can be inverted. If we want to realize the boundary conditions ε_μ = (1, 0) of [4] (or actually any other definite choice for the fermions), we have to sum over all dimer classes including negative weight contributions (3.20). All these relations between partition functions can be turned into relations between expectation values of the scalar fermion density and monomer densities by differentiating with respect to ϕ(x). One example based on (3.20) is (3.21). The observable K(x) is one if there is a monomer at x and zero otherwise. With the help of τ(x, μ) as a source one could establish further relations. For free fermions (ϕ = 2 + m) in the thermodynamic limit at fixed m > 0, the various Z_ξ^ε differ only by exponentially small amounts and Z_k^00 dominates among the dimer ensembles. Taking the finite volume continuum limit (L → ∞ with Lm fixed, see appendix A), and in particular for m = 0, this is not so. In the latter case we have an exact zero fermion mode for ε_μ = (0, 0) and Z_ξ^00 = 0 holds. This implies the exact sum rule Z_k^00 = Z_k^10 + Z_k^01 + Z_k^11 at m = 0.

Spin representation

In this subsection we transform the dimer system to yet another representation by Ising spins. This will allow us to design a global cluster algorithm. A clue that this may be possible is given by the idea that a natural way to manage and modify the closed loops in the dimer formulation is to consider them as boundaries of domains of up-spins surrounded by down-spins (Peierls contours). The spins that we introduce live on the lattice dual to the one carrying the fermionic and the dimer variables. Its sites, labelled by underlined x, are dual to plaquettes of the original lattice and are imagined to be located at their centers. Analogously, the sites of the original lattice are dual to plaquettes in the new one. Links of the original lattice and links of the dual lattice are dual to each other if they cross, see figure 2. The idea is now to put an Ising field s(x) on the dual lattice and to identify configurations via

k(x, μ) = [1 − s(x_1) s(x_2)] / 2,   (3.23)

where x_1, x_2 are the two dual sites connected by the dual link crossing (x, μ). In other words, dimers are located where nearest neighbor spins on the dual lattice are antiparallel. In a first stage we restrict ourselves to the class L_00 of dimer configurations. We first prove that for each admissible dimer configuration there are exactly two spin fields obeying (3.23) that differ by a global spin-flip. In a first step we define a Z(2) lattice gauge field on the dual lattice in terms of the dimers on the original lattice, σ = 1 − 2k on each dual link (3.24). Because of the constraints on k, this gauge field is unity when multiplied around any plaquette (on the dual lattice). As we restrict ourselves to L_00 here, loops around the torus are also unity for this gauge field. Thus σ is a pure gauge on the torus with a periodic gauge function. The spin-field is this gauge function that we construct now. We choose a site y (the origin, for example) and set s(y) = +1. Now this value is parallel-transported with σ to all other sites, for instance along a maximal tree rooted at y. Due to the absence of curvature in σ, the result is path-independent, consistent and unique.
Starting from s(y) = −1 we obtain the other configuration associated with {k(x, μ)}. The signs in figure 1 are just these spins. While we now have exactly two spin fields for each admissible dimer configuration in L_00, not all conceivable spin fields are reached in this way. Obviously, spin configurations that on any plaquette alternate around the four corners,

s_1 = −s_2 = s_3 = −s_4,   (3.25)

do not occur in the image; here s_1, s_2, s_3, s_4 are just simple labels for the four spins around the plaquette. They would correspond to crossing loops not allowed by the original Grassmann variables. If this is however excluded on all plaquettes, then we can reconstruct admissible {k(x, μ)} configurations from {s(x)}. The total Boltzmann factor in the spin representation is now a big product with one factor for each plaquette. These weights w are given in table 1, which lists only 8 of the 16 configurations and is completed by using w(s_1, s_2, s_3, s_4) = w(−s_1, −s_2, −s_3, −s_4); the first entry, for example, is w(+, +, +, +) = ϕ(x). ϕ is taken here at the site x of the original lattice sitting at the center of the dual plaquette considered. The derivation of this representation resembles the construction of the dual formulation for generalized Ising models [9]. We summarize it (giving the ordinary self-dual two-dimensional Ising model as an example in brackets): One first introduces new variables living on the bonds making up the Hamiltonian of the original theory (a link field). Then the original spins are summed over, producing a constraint in the new variables (vanishing plaquettes of the links interpreted on the dual lattice). This constraint is then solved on the dual lattice (links given as a pure gauge by a site field). The extension of the concept here is that we change from Grassmann elements to bosonic variables, have an additional constraint (3.25) to fulfill, and that there can be minus signs. One could talk of Fermi-Bose- or super-duality. The plaquette weight can also be written in terms of pairwise nearest neighbor bond-interactions of the form (3.26), with coefficients p, q, r whose x-dependence is suppressed. To match table 1, a linear system of equations has to hold; the solution of this system is (3.27). We remark here that all coefficients are positive for 0 ≤ m ≤ 2√2 ≈ 2.83. In the free case, this clearly covers the relevant range of bare masses. If we now include dimer configurations in the other sectors by using (3.23), the only difference is that the resulting s(x) is antiperiodic in the direction orthogonal to those where dimer loops run around the torus. Thus L_10 corresponds to spin-fields antiperiodic in space, L_01 to those in time, and L_11 to spins with both directions antiperiodic. The mechanism here is that antiperiodic boundary conditions of the spins force an interface into their configurations, which leads to the nontrivial dimer loop topology. Introducing further partition functions Z_s for the spin ensembles, we relate them to the dimer partition functions (3.28). The factor 1/2 cancels the global spin-flip symmetry. Again, by differentiation, we may relate expectation values, where the presence (absence) of a monomer, yielding K(x) = 1 (0), in the spin language translates into maximally 'polarized' plaquettes where all four spins are parallel.

4 Simulation algorithms

Local algorithms

A local algorithm to simulate any one of the above dimer ensembles with all weights taken positive was recently described and tested by Gattringer et al. [2]. The simplest case to consider is a free Majorana fermion of mass m with ϕ(x) = 2 + m.
In the updates one actually performs only changes that are local in the homotopy sense by proposing dimer-flips k(x, μ) → 1 − k(x, μ) around plaquettes. The move is accepted with the Metropolis probability corresponding to the ratio of ρ in (3.13) for the new and the old configuration, which is a locally computable quantity. Of course, it vanishes if the new configuration would violate one of the constraints. This update stays in the homotopy class fixed by the starting configuration (see figure 1) and is ergodic within it. Thus the various ensembles corresponding to combinations of periodic and antiperiodic Pfaffians can be simulated. In [2] it was demonstrated that already such simulations are vastly more efficient than HMC type simulations. The entirely equivalent update in the Ising form consists of local spin-flips. The Metropolis decision in this case depends on the eight nearest and next-to-nearest (diagonal) neighbors that share plaquettes with the spin in focus. The numerical efficiency in both forms is very similar.

Cluster algorithm

The plaquette interaction in (3.26) is now written as a superposition of 10 different terms, schematically given by w = Σ_i P_i Δ_i(s_1, s_2, s_3, s_4) with P_i ∈ {p, q, r} and Δ_i(s_1, s_2, s_3, s_4) ∈ {0, 1}. In complete analogy to [10] we now introduce ten-valued bond variables b(x), with each value corresponding to one of these terms, to obtain the joint weight P_{b(x)} Δ_{b(x)}(s_1, s_2, s_3, s_4) per plaquette. To avoid too clumsy a notation, here again s_1, s_2, s_3, s_4 are the spins around each of the plaquettes. The celebrated trick of [10] consists of Monte Carlo sampling both the b and the s variables. As the first part of an update cycle one chooses new b at fixed s by a local heatbath procedure. Of course, in general, the choice is between fewer than 10 possibilities, as some of the Δ_{b(x)}(s_1, s_2, s_3, s_4) vanish. Then, for given bonds b, any of the 2^{TL} spin configurations has either weight zero or a constant nonzero weight, depending on the constraints given by the product of all factors Δ_{b(x)}. Just as in the standard Ising model case, we now construct the percolation clusters defined by the active bonds in all {Δ_{b(x)}}. By flipping the spins in each cluster as a whole with probability 1/2, we sample one of the allowed and equally weighted spin configurations. The overall procedure amounts to a global independent sampling of spins (at fixed bonds) and will be numerically demonstrated to almost eliminate critical slowing down in section 5. At the stage of selecting a new spin field one may also construct improved cluster estimators. This is achieved if, for some observable, one is able to analytically average over all conceivable spin-assignments, of which only one is taken as the next configuration. While the above procedure is analogous to the well-known Swendsen-Wang algorithm [10], we could also study a single cluster variant [11]: one spin is chosen at random, and then only the one cluster connected to it is constructed by investigating the plaquette terms touched in the growth process until it stops. Then spins on this cluster are always flipped. This may well be even more efficient as large clusters are preferred. We end this subsection with a remark on the global spin flip symmetry. At first one may think that it is a (slightly) annoying redundancy in the new representation. However, it is in fact essential to be able to grow clusters whose energy (action) is associated with the surface (in our case the loops) and not with the bulk.
Loosely speaking, as one grows a (single) cluster, there is always an energetic 'way out' by flipping the whole lattice. Of course, if this is all that happens, the algorithm will not be efficient. The auxiliary percolation problem allows one to find nontrivial clusters.

Fluctuating boundary conditions

In (3.20) we saw that it is desirable to also be able to simulate enlarged ensembles where one sums, besides configurations of spins, also over several possible boundary conditions. If in conventional simulations one proposes a change of the boundary conditions at fixed spins, one generates energy (action) proportional to T or L and the proposal will practically always be rejected. It was noticed however in [12] that with cluster algorithms the situation can be different. In the step where we pick new spins at fixed bonds, the search among possible equally weighted new configurations can be enlarged to also include changed boundary conditions. If we label them by ε_μ again (for the spins now), the four possible ε become a dynamical variable. In [12] these changes were introduced in a single-cluster/Metropolis spirit, which would also be possible - in fact less involved - here. In view of the future construction of improved estimators we stick however to the many-cluster view and design now a correspondingly generalized cluster algorithm. It consists of the following steps:

• We throw bonds on the links as discussed before.
• We determine by some percolation algorithm (e.g. tree search) the independent spin clusters connected by bonds, but ignore two layers of links such that the torus is cut open. We take {(x, 0) | x_0 = T−1} and {(x, 1) | x_1 = L−1}. We call these clusters preclusters; their connectivity is determined in the 'interior'. Each of them carries a unique cluster label as a result.
• Now the remaining links are examined as far as they have been activated. We call these bonds clamps. They have the effect of sewing up (some of) the preclusters. This is done by the pointer technique described in [13]. We may visualize the process as a graph with the preclusters as blobs, some of which get connected by lines.
• In this process à la [13] one can detect when closed loops in the graph are formed. We set one of four types of flags whenever a loop is closed. They distinguish whether an odd or an even number of temporal or spatial clamps are met around the loop. We end with flags f_00, f_10, f_01, f_11, each being zero or one.
• Among the compatible ε (1, 2 or 4 values) one is chosen with equal probability.
• Now flips for all preclusters are determined and executed. Connected components of the graph flip together, but preclusters within these components can flip relative to each other if the boundary condition has changed.

The above construction guarantees that the orientations thus propagated do not depend on the path that is taken on the graph. Although rather short and compact in the end, this is not a trivial code to write. It is helpful to organize it under a geometric point of view, focussing on parallel transport between preclusters with a Z(2) group, where the boundary conditions are gauge variables on the clamps. Of course, as we can solve the free fermion ensembles exactly (appendix A) and easily get high accuracy with cluster simulations, many significant checks by short simulations (taking seconds) were available. A very good monitor for debugging at every stage is to set traps for the occurrence of illegal plaquettes (3.25).
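One possible realization of the pointer technique with Z(2) parallel transport is a union-find structure in which each site stores its flip parity relative to its cluster root. This is our sketch, not the paper's MATLAB code: the class and method names are invented, and the four separate temporal/spatial flags are collapsed here into a single returned loop parity.

```python
class Z2UnionFind:
    """Union-find with a Z(2) charge: parity[i] is the relative orientation of
    node i with respect to its parent (0 = same, 1 = flipped)."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.parity = [0] * n

    def find(self, i):
        if self.parent[i] == i:
            return i, 0
        root, p = self.find(self.parent[i])   # compress path recursively
        self.parent[i] = root
        self.parity[i] ^= p                   # accumulate parity to the root
        return root, self.parity[i]

    def union(self, i, j, rel):
        """Sew preclusters i and j with relative parity rel (one clamp).
        Returns None on a merge, else the Z(2) parity of the closed loop."""
        ri, pi = self.find(i)
        rj, pj = self.find(j)
        if ri == rj:
            return pi ^ pj ^ rel              # a loop in the graph just closed
        self.parent[ri] = rj
        self.parity[ri] = pi ^ pj ^ rel
        return None
```

Each activated clamp triggers one union call; a non-None return value signals a closed loop in the precluster graph, which is exactly the point at which the flags f_αβ would be set.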
An alternative strategy to move boundary conditions would be to turn the loop structure of the above graph into a system of linear equations in the Galois field of two elements (addition isomorphic to logical xor). Following [14] and [15] this can be solved by Gauss elimination. The above scheme contains some nested pointer operations. One could worry in principle whether the execution time grows more than linearly with the lattice volume. In practice there was found to be absolutely no problem of this kind. This is in fact the same for cluster simulations of the standard Ising model using the algorithm of [13]. In our case the problem is even less severe, as the number of clamps is smaller than T + L, not proportional to the volume.

Negative mass

For free fermions one could content oneself with the parameter range m ≥ 0 = m_c. On the other hand, all results in appendix A can be taken at arbitrary m. After all, the partition function on the finite lattice is just a polynomial. When we later come back to the interacting theory, it will also turn out that negative mass fermions will be required because m_c < 0 due to renormalization. The local algorithm [2] works for negative m; a sign problem only arises if ϕ = 2 + m changes sign. The bond probabilities (3.27) however restrict the cluster algorithm so far to m ≥ 0. Luckily, there is an alternative decomposition of the plaquette interaction into bonds that comes to the rescue when m(x) is negative. The r-term in (3.26) is replaced by

(r̃/2) [ δ_12 δ̄_14 + δ_21 δ̄_23 + δ_32 δ̄_34 + δ_41 δ̄_43 + δ̄_12 δ_14 + δ̄_21 δ_23 + δ̄_32 δ_34 + δ̄_41 δ_43 ],

where we also introduced antibonds δ̄_ij ≡ 1 − δ_ij. The two other terms are unchanged, but the coefficients are now p̃, q̃. Using δ_12 δ̄_14 = δ_12 − δ_12 δ_14 etc. one sees that the new weight coincides with the old one if we identify r̃ = −r and p = p̃ + r̃. The matching equations are now solved by (4.3). Now all weights are positive for −m ≤ 2(2 − √2) ≈ 1.17. The presence of antibonds is compatible with the cluster search including fluctuating boundary conditions. With an m(x) that changes sign over one lattice, one actually decomposes some plaquettes with (3.27) and others with (4.3). One must however not make the mistake of thinking of the preclusters as ferromagnetic (Weiss) domains; they contain in general both up and down spins. This is why we took care to talk about distributing flips to clusters rather than assigning new spin orientations to them as a whole.

5 Numerical applications

We now report on numerical experiments. In this first publication on the method we stick to free fermions and the observable K(x). It corresponds to the scalar fermion density and is mainly used to diagnose the algorithm. Hence, with the results of appendix A, every computed mean value is known exactly and was verified to be reproduced by the Monte Carlo simulations within errors. We do not plot results for K. They are too boring: errors not visible on the graph and exact results agreeing with the data within 1 and occasionally up to around 2 sigma. For the algorithm, the non-interacting case does not seem to be fundamentally different from the interacting Gross-Neveu model. All details necessary for this extension are given, but the numerical implementation is deferred to a future investigation. What remains to be seen is how the correlation between monomers of different flavor influences the Monte Carlo dynamics at stronger coupling.

Critical slowing

We performed a series of simulations of one species of free Majorana fermions at the critical value m = 0.
In this case, the only infrared scale is given by the system size T = L = 8 . . . 128. We simulated the trivial ensembles corresponding to the loop class L_00. Results are summarized in table 2. Each run with the local algorithm, passing through the lattice in lexicographic order, consists of 10^6 sweeps, of which a small fraction is discarded for thermalization. The autocorrelation time τ_int,K has been defined and measured as described in [16].

Table 2: Monomer density and its integrated autocorrelation time for local and cluster simulations at m = 0 and lattice sizes T = L.

As one expects for local algorithms, we see a steeply rising autocorrelation time hinting at a dynamical exponent not too far from two - we have no ambition here to determine it precisely, which would be very costly. One notices that the error (at fixed sweep number) is almost independent of L. This means the variance just compensates the growing autocorrelation time and decays roughly proportionally to 1/(TL). This is in fact implied by scaling and the canonical dimension of the (connected) 2-point function of the scalar density. Although the integral over the autocorrelation function at L = 128 does not look too unconvincing, one may suspect that our number for τ_int,K may only be a lower bound for this case. The cluster simulations in the last columns consist of 0.6 × 10^6 sweeps, which in our implementation takes about the same time as the local runs on a single PC. We see small, slowly rising autocorrelation times. From the two largest lattices one would estimate an effective dynamical exponent z_eff ≈ 0.30, which is a typical value for cluster algorithms. In total only about 15 CPU hours went into these demonstrations. All codes have been programmed in MATLAB and the update routine has about 100 lines (50 without fluctuating boundary conditions). The next series of runs to be reported is on T = L = 128 lattices at several positive and negative masses. Again each data point is produced by 0.6 × 10^6 sweeps. These simulations included fluctuating boundary conditions. Recorded observables were the monomer density K, the boundary conditions ε_μ and the topological flags f_αβ. From these data the distribution of boundary conditions can be deduced, and we checked their correctness. As an example we consider here (3.21) and compute its right-hand side as a ratio of expectation values in the Ising ensemble with fluctuating boundary conditions ε. The result at m = 0 is S_10 = 0.76987(6), with the exact value being 0.769800361... Errors for this combination of observables are estimated as discussed in [16], where the definition of τ_int,S10 from the fluctuations relevant for this quantity can be found. We show these autocorrelation times in figure 3. There seems to be a steep rise by about one unit close to m = 0. The second plot shows a better resolution of its vicinity. All these numbers stay comfortably small. The combination measured here has no serious sign problem. One could however construct positive cluster estimators for numerator and denominator. A simple example of the use of an improved estimator exploiting the topological information is given by two observables with equal mean; the exact answer is 0.186455866.... Of course, the two estimates are strongly correlated. It is clear that the left estimate is 'more stochastic', using the actually picked boundary conditions in the run, and τ_int is hence smaller.
This pattern is typical for cluster estimators, with the reduced variance usually overcompensating this effect. In the interacting theory the monomer weight ϕ receives corrections in the coupling g²; the first few terms are

ϕ|_0 = 2 + m,  ϕ|_1 = 2 + m + g²/(2 + m),  ϕ|_2 = 2 + m + 2g²(2 + m)/[(2 + m)² + g²],  . . .   (5.5)

In interacting theories the mass of Wilson fermions undergoes additive renormalization. Both in the Thirring model (N = 2) and in the higher N Gross-Neveu model, perturbative and nonperturbative calculations [4] as well as large N approximations yield negative values for the critical mass m_c(g²) close to which one wants to simulate. This is why it was crucial to extend the cluster algorithm to also accommodate ϕ < 2.

Additional gauge coupling

An amusing exercise is to add to the Majorana 'flavor' group a 'color' Z(2) gauge interaction. We describe it here only very briefly. We now make use of the gauge links in (2.1) and consider the standard Z(2) plaquette action with the plaquette field τ_p made from the links τ(x, μ) = ±1. The self-interaction can always be added, changing only the monomer weights. We suppress it here. Each dimer loop receives as an additional factor the Wilson loop made of τ. By Stokes' theorem these can be replaced by a product of τ_p either on all plaquettes where a + spin resides or on those where the negative spins sit, see figure 1. Let us choose +. In the case of antiperiodic boundary conditions, additional loops around the torus appear where the torus closes (where the clamps were). As a two-dimensional gauge theory is rather trivial, the τ sum can be carried out. We need to know how many plaquettes get tiled with τ_p an odd number of times. Let us introduce the 'composite' flavor spin-field S(x) = ∏_i s_i(x), the product over all N flavors. Then the values of S contain this information, and the extra weight from summing over τ can be written in closed form. This is equivalent to a magnetic field coupled to S and fluctuating in sign. In addition, we only get contributions if an even number of flavors is antiperiodic in each direction separately (due to Z(2) confinement). The fluctuating magnetic field can presumably be included in the cluster simulation without problems. As we update a given flavor, each spin can in addition bond to one exterior ('phantom') spin.

Conclusion and Outlook

It seems that with the new cluster algorithm the Gross-Neveu model (and the Thirring model) are wide open for high precision simulation. Further observables, in particular cluster estimators, remain to be constructed. The O(N) Noether currents may be accessible, for example. Also ratios of partition functions like Z_s^00/Z_s^10 etc. are expected to have a continuum limit and may serve as renormalized finite volume couplings. Majorana fermions are prominent in supersymmetry. Maybe some studies of two dimensional supersymmetry become possible with simulations very close to the continuum limit. An obvious question that every reader will have is whether any of this carries over to higher dimension and/or more complicated gauge interactions. Let us first caution here: fermions in one space dimension are very special. This has manifested itself in other so-called Fermi-Bose equivalences in two dimensions. At the heart of this, in operator language, is the fact that the Jordan-Wigner [17] transformation transforms anticommuting to commuting degrees of freedom without generating non-localities. This has no obvious generalization to higher dimension (see however [18]). The fact that we find positive weights for all topologically trivial loops in a way seems to be the euclidean counterpart.
Also that Majorana fermions are in some sense equivalent to Ising spins is not new, of course, see [19], [20]. The main achievement here is that we simulate in a standard (lattice) euclidean fermion formulation and can get really critical. In higher dimension a dimer representation for fermions can probably be constructed along similar lines, but the weight will be sign-fluctuating in a more essential fashion. The slight hope may be that one could be able to handle this sign problem with cluster estimators. In addition, the coupling to gauge fields contributing fluctuating Wilson loops is an open problem. There are ongoing efforts to simulate discrete models dual to nonabelian gauge theories (spin foam) (see [21] and references therein). This may be an interesting view on gauge theories in the context of the approach to fermions developed here. In any case, the goal is attractive enough to warrant further thought. We would like to acknowledge discussions about the Gross-Neveu model on the lattice with Francesco Knechtli, Björn Leder, Rainer Sommer and most of all Tomasz Korzec. We also thank the Deutsche Forschungsgemeinschaft (DFG) for support in the framework of SFB Transregio 9.

A Exact Majorana partition functions

The Pfaffian is defined for an antisymmetric matrix A_ij of even size, i, j = 1, . . . , 2n, by

Pf(A) = ∫ dξ_1 dξ_2 · · · dξ_2n exp( ½ ξ^⊤ A ξ ),

where we integrate over 2n Grassmann variables and a sign convention for the measure is implied. By a change of variables ξ → F ξ one sees the well known identity Pf(F^⊤ A F) = det(F) Pf(A). The factor det(F) is just a phase that we fix later. The matrix Ã consists of antisymmetric blocks where momenta p, −p get paired. If they are different (modulo 2π), such a block contributes one factor (p² + M(p)²) to the Pfaffian. For simplicity we now restrict our discussion to both T and L even. Then all momenta get paired non-trivially except p = (0, 0), (π, 0), (0, π), (π, π) in the all periodic case ε_μ = (0, 0). We may summarize this result by a product formula over momenta (A.14). All these exact results can be easily evaluated for any finite lattice that is simulated. For ε_μ = (0, 0), m → 0 the product z ⟨ξ^⊤Cξ⟩^{(0,0)} remains finite but requires precaution numerically. The continuum limit of z can presumably be computed analytically. We here content ourselves with the numerical construction of the Symanzik expansion in a few cases. We set T = L (aspect ratio one) and

z(ε, m)|_{mL=κ} ≃ Σ_{k≥0} d_k(κ, ε) L^{−k},   (A.15)

and compile some values in table 3. The analysis of the asymptotic series was carried out as described in appendix D in [22]. The ε missing in the table can be computed from those given. Digits are quoted such that there is at most an uncertainty of one in the last digit. For κ = 0 odd corrections vanish.

Table 3: Symanzik expansion coefficients for ratios of partition functions at different boundary conditions in the finite volume continuum limit at fixed κ = mL.
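As an indication of how coefficients like those in table 3 can be obtained, here is a minimal sketch of fitting the expansion (A.15) by least squares in 1/L. The z values below are synthetic stand-ins, not the table's data, generated from an assumed polynomial so the example runs.

```python
# Fit d_0..d_3 of z ~ sum_k d_k * L**(-k) from finite-L values (synthetic data).
import numpy as np

L = np.array([16, 24, 32, 48, 64, 96, 128], dtype=float)
x = 1.0 / L
rng = np.random.default_rng(2)
z = 0.50 + 0.20 * x + 0.05 * x**2 + 1e-8 * rng.normal(size=x.size)  # stand-in

d = np.polynomial.polynomial.polyfit(x, z, deg=3)   # coefficients d_0, d_1, ...
print("d_0..d_3 estimates:", np.round(d, 6))
```

In practice one varies the fit degree and the range of L to control the systematic error of the truncated asymptotic series, in the spirit of the analysis of [22].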
2007-09-06T16:53:25.000Z
2007-07-19T00:00:00.000
{ "year": 2007, "sha1": "083c4fbf5d5090749d4819e51aa63fccdc4327a5", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0707.2872", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "083c4fbf5d5090749d4819e51aa63fccdc4327a5", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
109216280
pes2o/s2orc
v3-fos-license
THE REMOVAL OF SINGLE AND BINARY BASIC DYES FROM SYNTHETIC WASTEWATER USING BENTONITE CLAY ADSORBENT

ABSTRACT In order to broaden the application of bentonite clay, an easily obtainable and bio-available low-cost adsorbent, it was employed for the decolourization of synthetic wastewater containing single and binary basic dyes (malachite green and rhodamine B). The sorbent was used as obtained, without any further modification, and was characterized for its specific surface area and point of zero charge; its surface functional groups before and after dye sorption were determined using Fourier Transform Infrared Spectroscopy (FTIR). Batch adsorption methods were employed to study the effects of pH, ionic strength and contact time in the single-solute system. The sorption parameters of rhodamine B (RDB) and malachite green (MG) were obtained and fitted to three isotherm models: Freundlich, Langmuir and Temkin. The Freundlich plot analysis indicated that the process occurred via heterogeneous coverage of the adsorbent by both dyes. The kinetics of adsorption data were analyzed using the pseudo-first order, pseudo-second order, intraparticle diffusion, film diffusion and Boyd kinetic models. Across these analyses, the film diffusion mechanism was found to predominate in the sorption of the dyes. Competitive sorption studies were carried out by using each dye as either the sorbate of interest or the interfering species, and the competition coefficient values obtained for interfering MG in RDB removal were significantly lower than those obtained for interfering RDB in MG removal, indicating that the presence of RDB in the aqueous matrix had an antagonistic effect on MG adsorption by bentonite.

INTRODUCTION Years of increased industrial activity have resulted in the generation of large amounts of wastewater containing a number of coloured toxic pollutants, which continuously pollute the available fresh water. Having realized that these coloured pollutants in aqueous systems adversely affect human and animal life, the control of such pollution is now a high-priority task. The availability of clean water for various activities is becoming the most challenging task for researchers and practitioners worldwide. Dye pollutants are a major source of environmental contamination, and colour is the first contaminant recognized in wastewater (Banat et al., 1996). These water-soluble dyes are recalcitrant, carcinogenic and offer considerable resistance to biodegradation due to their complex structures and high thermal/photo-stability; hence their removal is paramount. Industries such as textile industries, dye manufacturing industries, paper and pulp mills, tanneries, electroplating factories, distilleries, food companies, and a host of others generate and discharge this coloured wastewater (McKay et al., 1998). The first synthetic dye was encountered in 1856 by William Henry Perkin in the accidental synthesis of Mauveine, but presently over 100,000 dyes have been made and more than 7 × 10⁵ tonnes are produced annually (McMullan et al., 2001). More than 10,000 dyes are commercially available, and about 5-10% of these are discharged as wastewater by these industries, directly or indirectly, into our water bodies (Gong et al., 2005). The two conventional methods for the treatment of coloured wastewater are the biological method and the physical/chemical method.
Amongst the numerous techniques of dye removal, adsorption, which is a physical/chemical method, is the procedure of choice and gives the best results, as it can be used to remove different types of colouring materials (Derbyshire et al., 2001; Ho and McKay, 2003; Jain et al., 2003). The major advantage of adsorption treatment over other methods is its cost-effectiveness and sludge-free, clean operation (Gupta et al., 2009). Over the past decade, bentonite clay has been successfully employed for the adsorption of dye molecules and metal ions (Tahir et al., 2006; Tahir et al., 2010). To date, several conventional and non-conventional sorbents have been employed in the removal of these dyes from wastewater, while limited attention has been given to the fact that such wastewater usually contains more than one dye constituent; hence, in the removal of one dye constituent, the other may exert either an antagonistic or a non-interactive effect on the constituent of interest. This study takes as a case study two different basic dyes (RDB and MG) in a binary system and examines the nature of the effect of one on the other. The effects of pH, ionic strength and contact time were further evaluated and reported for each dye in its single-solute system.

EXPERIMENTAL

Adsorbent characterization The pH point of zero charge (pHPZC) of the adsorbent was determined via the solid addition method as described by Balistrieri and Murray (1981). This determines the pH at which the surface of the sorbent exhibits total surface electrical neutrality. Sears' method (1956) was employed in determining the surface area of the adsorbent, as surface area is one of the major factors that affect adsorption processes. The surface functional groups present on the adsorbent, and their possible involvement in the sorption process, were examined from the Fourier Transform Infrared (FTIR) spectrum of the bentonite, obtained on a Fourier Transform Infrared Spectrophotometer (Buck Scientific) between 400 and 4000 cm⁻¹.

Sorbate preparation and quantification The dyes used in this work, malachite green (chemical formula C23H25N2Cl, λmax = 621 nm) and rhodamine B (chemical formula C28H31ClN2O3, λmax = 543 nm), were accurately weighed and dissolved in double distilled-deionized water to prepare a stock solution (100 mg/L); working solutions ranging between 2.5 mg/L and 30 mg/L were prepared from the stock via serial dilution.

Single-solute system adsorption experiment: determination of equilibrium isotherm and kinetic parameters Batch adsorption experiments were carried out as described by Nadi et al. (2012). 0.1 g of the adsorbent was contacted with 50 ml of dye (RDB or MG) solution at each desired initial dye concentration of 2.5-30 mg/L in a 100 ml capped conical flask, and the mixture was agitated on a temperature-controlled magnetic stirrer at a constant speed of 1000 rpm until equilibrium was attained. Samples were withdrawn after the equilibrium time (3 h) and centrifuged at 4000 rpm for 20 min for complete separation of the bentonite particles from the solution, and the residual dye concentrations were calculated from the calibration curve. The kinetics of sorption of the dyes was studied by monitoring the uptake of the dyes from aqueous solution at different time intervals (1, 3, 5, 10, 20, 30, 60, 90, 120 and 180 min) (Olaseni et al., 2014).
The amount of dye removed per unit mass of the adsorbent was calculated as

qe = (Co − Ce) V / m,

where Co (mg/L) is the initial concentration of the dye, Ce (mg/L) the equilibrium concentration, m the mass of the adsorbent, and V the volume of the solution. The percentage of dye removed (%DR) was calculated using the equation

%DR = 100 (Co − Ce) / Co.

Influence of pH The effect of pH was examined between pH 4 and pH 12. The pH (4-12) of the aqueous solution was adjusted by dropwise addition of either 0.1 M HCl or 0.1 M NaOH solution, when necessary, to the dye solutions before the introduction of the adsorbent. The adsorbent-adsorbate ratio was fixed at 0.1 g to 50 ml, while the initial concentration was fixed at 30 mg/L.

Influence of ionic strength The effect of the ionic strength of the dye solution was also examined by using NaCl solutions of varying concentrations (0%, 0.1%, 0.5% and 1%), equivalent to ionic strengths of 0.0 mol/L, 0.017 mol/L, 0.085 mol/L and 0.17 mol/L. The adsorbent-adsorbate ratio was again fixed at 0.1 g to 50 ml and the initial concentration at 30 mg/L.

Bi-solute system adsorption experiment Competitive sorption was studied in a synthetic wastewater containing only the two solutes (RDB and MG), with the initial concentration of the sorbate of interest varied between 2.5 mg/L and 30 mg/L while the interfering dye was kept constant at 30 mg/L (Sheindorf et al., 1981). The competitive sorption equilibrium parameters were obtained by contacting 50 ml solutions of known concentrations of the sorbate of interest and the interfering dye in a binary system. The mixture was agitated for 3 h; samples were withdrawn at the end of the sorption process and centrifuged, and the supernatant concentrations of both dyes were determined using a UV-VIS spectrophotometer at their respective wavelengths.

Error analysis In order to examine the fit of the different kinetic models to the observed experimental data, an error function is required in the optimization procedure. In the present study, the kinetic models were examined using the linear coefficient of determination, r², and the non-linear chi-squared statistic, χ². The coefficient of determination, r², represents the percentage of variability in the dependent variable that is explained by the regression line (Oladoja and Akinlabi, 2009). Its value may vary from zero to one and was calculated as

r² = (Sxy)² / (Sxx Syy),

where Sxx is the sum of squares of x, Syy is the sum of squares of y, and Sxy is the sum of products of x and y. To evaluate which kinetic model best fits the experimental data, the sorption process was also examined using the non-linear chi-squared (χ²) statistical test. The chi-squared statistic is the sum of the squares of the differences between the experimental data and the theoretical data calculated from the models, with each squared difference divided by the corresponding theoretical value:

χ² = Σ (q_exp − q_theo)² / q_theo.

If the model and experimental data are similar, the χ² value will be small; if they differ, χ² will be large.
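To make the data-reduction and error-analysis formulas above concrete, the following minimal Python sketch computes qe, %DR and χ² for a set of hypothetical batch measurements (the numbers are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical batch data: initial and equilibrium concentrations (mg/L)
Co = np.array([2.5, 5.0, 10.0, 20.0, 30.0])
Ce = np.array([0.4, 1.1, 3.0, 8.5, 14.0])
V = 0.050   # solution volume (L)
m = 0.1     # adsorbent mass (g)

qe = (Co - Ce) * V / m          # amount adsorbed per unit mass (mg/g)
DR = 100.0 * (Co - Ce) / Co     # percent dye removed

# Chi-squared between experimental and model-predicted qe values
qe_theo = qe * 0.97             # stand-in for a kinetic/isotherm model prediction
chi2 = np.sum((qe - qe_theo) ** 2 / qe_theo)
print(qe, DR, chi2)
```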
Results and Discussion

Characterization of adsorbent The pH point of zero charge (pHPZC) is used to determine the pH at which the surface of the sorbent exhibits total surface electrical neutrality. The functional groups present on the sorbent surface may reside in a positive or negative charge state, depending on the pH of the system. A strong relationship exists between the amount of sorbate adsorbed on the surface of the sorbent and the pHPZC of the adsorbent: adsorption of positively charged sorbates is favoured at pH values higher than the pHPZC, when the surface of the adsorbent is predominantly negatively charged, while adsorption of negatively charged sorbates is favoured at pH values lower than the pHPZC, when the surface of the adsorbent is predominantly positively charged (Nomanbhay and Palanisamy, 2005). The determination of the surface charge of the bentonite used in this study showed that it possesses a pHPZC of 2.14, consistent with literature values between 2.0 and 3.0 (Hashen, 2012; Akpomie and Dawodu, 2015; Kosmulski, 2009). This signifies that the sorbent is suitable for the removal of basic dyes over a large pH range above 2.14. Its specific surface area was determined to be 21.5 m²/g; BET surface areas of 20 m²/g and 31.5 m²/g were reported by Shu-li et al. (2009) and Xifang et al. (2007), respectively. The difference in the reported surface area values could be ascribed to the method of determination, purity, method of sample preparation and type of bentonite. The FTIR spectra of the sorbent before and after dye uptake are illustrated in FIG 1. The FTIR spectrum of bentonite before adsorption of the dyes showed an absorption peak at 3456.80 cm⁻¹ linked to the stretching vibration of H2O; the peak at 1649.60 cm⁻¹ corresponds to the bending of H2O; the peak at 1047.20 cm⁻¹ corresponds to the stretching of Si-O; and the peak at 920.80 cm⁻¹ denotes the bending vibration of Al-OH-Al. Furthermore, the FTIR peak at 620.00 cm⁻¹ is associated with Al-O + Si-O out-of-plane vibrations, while the peak at 537.00 cm⁻¹ denotes Al-O-Si bending vibration. For the FTIR peaks of malachite green and rhodamine B, the peaks at 1576 cm⁻¹ and 1594 cm⁻¹, respectively, indicate the presence of C=N+. The peaks at 1467 cm⁻¹ and 1178 cm⁻¹ for malachite green, and at 1468.8 cm⁻¹ and 1176.8 cm⁻¹ for rhodamine B, denote the vibration of the heterocyclic skeleton of the dye molecules. The peaks at 1178 cm⁻¹ and 1176.8 cm⁻¹ for malachite green and rhodamine B, respectively, are assigned to -CH3. It should be noted that the intensity of the peak at 1047.20 cm⁻¹ present in the FTIR spectrum of bentonite (denoting Si-O vibrations) in the absence of adsorbed dyes (FIG 1a) decreases considerably and consistently after the adsorption of each of the dyes to its surface, and also in the binary system. A shift in this peak from 1047.2 cm⁻¹ to 1046.40 cm⁻¹ for the adsorption of MG, to 1052.80 cm⁻¹ for the adsorption of RDB, and to 1033.60 cm⁻¹ in the presence of both dyes was observed. This may be due to electrostatic attraction between the Si-O group of the bentonite and the positively charged nitrogen moiety present on both dyes (malachite green and rhodamine B), signifying the possible involvement of the Si-O group in the sorption process, while the shift in peaks confirms the occurrence of adsorption.

Sorption kinetics of MG and RDB onto bentonite from synthetic wastewater To investigate the adsorption process, monitoring of the adsorption kinetics is an important step.
It was observed that equilibrium was reached at about 60 min, and the amount of the dyes congregating onto the surface of the adsorbent was highly dependent upon the initial concentration. The kinetic parameters of the adsorption of MG and RDB onto bentonite were evaluated by employing the pseudo-first order and pseudo-second order kinetic models.

(a) Pseudo-first order kinetic model. In cases where adsorption is preceded by diffusion through the boundary layer, the kinetics most likely follow the pseudo-first order mechanism described by Lagergren (Lagergren, 1898; Ho, 2004). The linearized form of the pseudo-first-order equation of Lagergren is generally expressed as

ln(qe − qt) = ln qe − k1 t,

where qe and qt are the sorption capacities at equilibrium and at time t, respectively (mg/g), and k1 is the rate constant of pseudo-first-order adsorption. The plot of ln(qe − qt) versus t should give a linear relationship, from which k1 and qe can be deduced from the slope and intercept, respectively. The pseudo-first-order constants k1, the correlation coefficients r², and the chi-square statistics χ² were deduced from the plot of ln(qe − qt) versus time t, and the respective values obtained are presented in Tables 1 and 2. The correlation coefficients for the pseudo-first order plots ranged between 0.812 and 0.976 for RDB (Table 1), while those for MG ranged between 0.871 and 0.994 (Table 2). The calculated qe values (qe,theo) show a large disparity from the experimental qe values for both dyes. To assess the applicability of this kinetic model for describing the mechanism of sorption of these dyes onto bentonite, the non-linear chi-square test was used to correlate the qe values obtained from the theoretical prediction (qe,theo) with the experimental qe values (qe,exp) (Tables 1 and 2). The results presented in Tables 1 and 2 show that the χ² values were very high, an indication that the pseudo-first order model cannot be employed to describe the adsorption process. Hence, the process does not follow the pseudo-first order adsorption rate expression of Lagergren.

(b) Pseudo-second order kinetic model. The linearized pseudo-second order model is expressed as

t/qt = 1/(k2 qe²) + t/qe,

where qe, qt and t have the same meanings as above, and k2 is the pseudo-second order rate constant (g mg⁻¹ min⁻¹), determined from the intercept, with qe obtained from the slope of the plot. The plot of t/qt versus t should give a linear relationship if the pseudo-second order kinetic model is applicable. The initial sorption rate h = k2 qe² can be obtained from the pseudo-second-order linear plots as t approaches zero. The initial sorption rate h, the pseudo-second order rate constant k2, the amount of RDB and MG sorbed at equilibrium qe, the linear coefficient r², and the chi-square statistic χ² obtained are presented in Tables 3 and 4. The values of h and qe increased with increasing initial concentration of both dyes, while the sorption rate constants decreased with increasing initial concentration. The large values of r² and the negligible χ² values indicate that the mechanism of adsorption of the dyes follows the pseudo-second order model; Muhammad et al. (2015, 2016) have also reported this mechanism. Table 4: Malachite green pseudo-second order kinetic parameters.
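The two linearized kinetic fits described above can be reproduced with a short script; the sketch below uses hypothetical (t, qt) data purely for illustration:

```python
import numpy as np

# Hypothetical kinetic data: time (min) and uptake qt (mg/g)
t = np.array([1, 3, 5, 10, 20, 30, 60, 90, 120, 180], dtype=float)
qt = np.array([2.1, 3.8, 4.9, 6.5, 8.0, 8.9, 9.8, 10.1, 10.2, 10.25])
qe_exp = qt[-1]

# Pseudo-first order: ln(qe - qt) = ln(qe) - k1 * t  (use points with qt < qe)
mask = qt < qe_exp
slope1, ln_qe = np.polyfit(t[mask], np.log(qe_exp - qt[mask]), 1)
k1, qe1 = -slope1, np.exp(ln_qe)

# Pseudo-second order: t/qt = 1/(k2*qe^2) + t/qe
slope2, intercept2 = np.polyfit(t, t / qt, 1)
qe2 = 1.0 / slope2
k2 = 1.0 / (intercept2 * qe2 ** 2)
h = k2 * qe2 ** 2                       # initial sorption rate

print(f"PFO: k1={k1:.3f}, qe={qe1:.2f}; PSO: k2={k2:.4f}, qe={qe2:.2f}, h={h:.2f}")
```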
Intraparticle diffusion model Based on the theory proposed by Weber and Morris, the intraparticle diffusion model, which has been widely applied in adsorption studies, was employed to investigate the adsorption mechanism of RDB and MG onto bentonite. This model is mathematically expressed as

qt = Kid t^0.5 + C . . . . . . (8)

where C (mg/g) is the intercept and Kid (mg/g min^0.5) is the intraparticle diffusion rate constant. The value of Kid was obtained from the slope of the linear plot of qt versus t^0.5 for both dyes (FIG 4 and 5). The value of C, given in Tables 5 and 6, represents the boundary layer effect (Kannan and Sundaram, 2001). The significance of the deviation from this theory is the difference between the rates of mass transport in the initial and final stages of adsorption (Panday and Singh, 1985). Furthermore, deviation of the straight lines from the origin indicates that pore diffusion is not the sole rate-controlling step (Poots et al., 1978). The results presented in Figures 4 and 5 show two separate regions in the plot: the first straight portion is attributed to macropore diffusion (surface sorption) and the second linear portion to micropore diffusion (intraparticle diffusion) (Lakshmi et al., 2009). The rate-controlling step between these two distinct stages can be identified by comparing the rates of intraparticle diffusion (Kid) obtained from the slopes of the intraparticle diffusion plots; this comparison shows that the rate-limiting step is the macropore diffusion stage for the adsorption of both dyes (Tables 5 and 6).

Liquid film diffusion model Given the inability of the intraparticle diffusion model to adequately describe the sorption mechanism of rhodamine B and malachite green by bentonite, the adsorption dynamics was investigated using the liquid film diffusion model. The possible migration of dye molecules from the bulk solution to the exterior surface of the sorbent through the liquid film was investigated with the liquid film diffusion model (equation 9) in order to determine the rate-controlling step (Boyd et al., 1947):

ln(1 − F) = −Kfd t . . . . . . (9)

where Kfd (min⁻¹) is the liquid film diffusion rate constant and F = qt/qe. Linear plots of ln(1 − F) versus t were tested for both dyes (RDB and MG) at concentrations between 2.5 and 30 mg/L. These plots were linear, with coefficients of correlation (R²) ranging between 0.8121 and 0.9768 and intercepts between −1.7581 and −3.1278 for rhodamine B, and R² between 0.8717 and 0.9946 with intercepts ranging between −0.2403 and −1.0645 for malachite green. The rate constant for liquid film diffusion, Kfd, was between 1.81 × 10⁻² and 2.27 × 10⁻¹ for rhodamine B and between 6.14 × 10⁻² and 8.87 × 10⁻² for malachite green. Since these plots do not pass through the origin (intercepts less than zero for both dyes), the applicability of this model in describing the mechanism of adsorption of these basic dyes by bentonite is limited.

Boyd kinetic model In order to identify the rate-controlling step involved in the sorption of rhodamine B and malachite green onto the bentonite, the kinetic data were further analyzed using the Boyd kinetic model (Boyd et al., 1947):

F = 1 − (6/π²) exp(−Bt) . . . . . . (10)
where Bt is a mathematical function of F, and F is the fraction of solute adsorbed at time t:

F = qt / q∞ . . . . . . (11)

where qt is the amount adsorbed at time t and q∞ represents the amount adsorbed at infinite time (in this study, at 180 min). A linear form of the Boyd kinetics is written as

Bt = −0.4978 − ln(1 − qt/qe) . . . . . . (12)

Hence, the values of Bt can be estimated for each value of F using equation 12. The calculated Bt values were plotted against time (Figures 6 and 7), and the linearity of the Bt versus t plot for different initial RDB and MG concentrations was used to distinguish between particle-diffusion and film-diffusion controlled adsorption. If the Boyd plot is a straight line passing through the origin (zero intercept), the particle diffusion mechanism predominates; if the plots do not pass through the origin (intercept greater than zero), the film diffusion mechanism predominates. Figures 6 and 7 show that the Boyd plots for the removal of rhodamine B and malachite green by bentonite do not pass through the origin, indicating that the film diffusion mechanism predominates in the adsorption process.

Adsorption isotherms An adsorption equilibrium isotherm relates the amount of adsorbate sorbed per gram of sorbent, qe (mg/g), to the equilibrium solution concentration, Ce (mg/L), at fixed environmental conditions (Baek et al., 2010). In the present study, the Temkin, Langmuir and Freundlich isotherm models were employed to describe the adsorption process, and the isotherm parameters obtained using these models are presented in Table 9. The linear form of Langmuir's isotherm model is expressed as

Ce/qe = 1/(KL qm) + Ce/qm . . . . . . (13)

where qm is the maximum adsorption capacity (mg/g) and KL is the Langmuir constant (L/mg) related to the energy of adsorption (Alshabanat et al., 2013; Soni et al., 2012). The constants qm and KL were calculated from the slope and intercept of the linear plot of Ce/qe versus Ce. From Table 9, the low correlation coefficient values (r²) obtained for the Langmuir isotherm model, 0.465 for RDB and 0.042 for MG, indicate that this isotherm model does not describe the experimental data for either dye, suggesting that the surface of bentonite is heterogeneous rather than homogeneous in nature. The essential characteristics of the Langmuir isotherm, used to predict the adsorption efficiency, are expressed in terms of a dimensionless equilibrium parameter RL, defined by

RL = 1 / (1 + KL Co),

where Co (mg/L) is the highest concentration of adsorbate. The value of RL indicates whether the Langmuir isotherm is unfavourable (RL > 1), linear (RL = 1), favourable (0 < RL < 1) or irreversible (RL = 0). The values of RL at Co = 30 mg/L in the present study were found to be 1.6058 × 10⁻³ for rhodamine B and 0.718 for malachite green at 298.15 K, indicating that the adsorption of these basic dyes onto bentonite is favourable. The Freundlich adsorption isotherm is the first mathematical fit to an isotherm published (Freundlich, 1906). It is a purely empirical isotherm which describes adsorption onto a heterogeneous surface, with the linear equation

ln qe = ln Kf + (1/n) ln Ce . . . . . . (14)
where qe is the adsorptive capacity, the quantity of material adsorbed per unit gram of adsorbent (mg/g); Ce is the equilibrium adsorbate concentration (mg/L); Kf is the Freundlich isotherm constant related to adsorption capacity (indicating the quantity of dye adsorbed onto the adsorbent); and n is the Freundlich isotherm constant related to adsorption intensity (indicating the favourability of the adsorption process). Kf and n can be obtained from the intercept and slope of the plot, respectively (Alshabanat et al., 2013; Runping et al., 2008). The plot of ln qe versus ln Ce therefore gives a straight line of slope 1/n and intercept ln Kf. It can be observed from Table 9 that the Freundlich isotherm provides a very good fit to the experimental data, with r² of 0.980 for RDB and 0.905 for MG, suggesting that the sorption of the dyes occurred via heterogeneous coverage of the surface of the adsorbent. The values of Kf, which denote the sorption capacities of the adsorbent towards the dyes, were found to be 0.386 mg/g for RDB and 0.8055 mg/g for MG. Rhodamine B and malachite green adsorption onto bentonite clay is considered to be unfavourable because the Freundlich exponent values of n (which determine the intensity and feasibility of the adsorption process) were less than 1 (0.73 for rhodamine B and 0.98 for malachite green). Generally, a value of 1/n less than 1 and a value of n in the range of 2-10 indicate that the adsorption process is favourable; n between 1 and 2 indicates a moderately difficult process; and n below 1 indicates poor adsorption characteristics (Rahman et al., 2012). The Temkin isotherm is based on the assumption that the free energy of adsorption depends on the surface coverage, and it takes into account the interactions between the adsorbent and the dye molecules. The linear form of the Temkin isotherm model equation is expressed as (Temkin and Pyzhev, 1940)

qe = B ln KT + B ln Ce,

where B is related to the heat of adsorption and KT is the equilibrium binding constant. The low correlation coefficients obtained (Table 9) show that the Temkin isotherm model does not describe the mechanism of sorption of these dyes onto bentonite.
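The three linearized isotherm fits above can likewise be carried out with simple linear regressions; the sketch below uses hypothetical equilibrium data (not the study's values):

```python
import numpy as np

# Hypothetical equilibrium data
Ce = np.array([0.4, 1.1, 3.0, 8.5, 14.0])    # mg/L
qe = np.array([1.05, 1.95, 3.5, 5.75, 8.0])  # mg/g

# Langmuir: Ce/qe = 1/(KL*qm) + Ce/qm
s, i = np.polyfit(Ce, Ce / qe, 1)
qm = 1.0 / s
KL = s / i                          # since intercept i = 1/(KL*qm)
RL = 1.0 / (1.0 + KL * Ce.max())    # dimensionless separation factor

# Freundlich: ln(qe) = ln(Kf) + (1/n) ln(Ce)
inv_n, lnKf = np.polyfit(np.log(Ce), np.log(qe), 1)
Kf, n = np.exp(lnKf), 1.0 / inv_n

# Temkin: qe = B*ln(KT) + B*ln(Ce)
B, B_lnKT = np.polyfit(np.log(Ce), qe, 1)
KT = np.exp(B_lnKT / B)

print(f"Langmuir qm={qm:.2f}, KL={KL:.3f}, RL={RL:.3f}")
print(f"Freundlich Kf={Kf:.3f}, n={n:.2f}; Temkin B={B:.2f}, KT={KT:.2f}")
```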
Effect of pH It must be emphasized that pH is one of the most important variables to be considered in adsorption studies. The pH of the aqueous medium alters the net charge present on the surface of the adsorbent, the degree of ionization of the sorbate molecules, and the extent of dissociation of the functional groups present on the adsorbent (Nandi et al., 2009). The variations in the total percentage of the dyes removed at different pH and an initial concentration of 30 mg/L for RDB and MG are shown in FIG 8. For these dyes, the maximum percentage removal was attained at pH 4 (in excess of 70% for rhodamine B and in excess of 90% for malachite green). The trend in the plot also shows that the percentage removal peaked at pH 4 and declined steadily until pH 12. This variation in adsorption behaviour can be explained on the basis of the point of zero charge of the adsorbent (pHPZC = 2.14) as well as the structures of the dyes. At pH below the point of zero charge of the adsorbent (2.14), the surface of the sorbent has a high positive charge density, and under this condition the uptake of positively charged dyes would be low. With increasing pH, i.e. beyond the point of zero charge, the negative charge density on the surface of the adsorbent increases, resulting in an enhancement of the removal of these positively charged dyes. In addition, the acidic groups on these dyes dissociate with increasing pH, giving rise to a negative charge on the dyes, which accounts for the reduction in the percentage of dye removed at higher pH.

Effect of ionic strength In solutions of high ionic strength, the electrostatic attraction mechanism is suppressed due to competition between the cationic dye molecules and the Na+ present for the active sites on the adsorbent surface, leading to a reduction in electrostatic attraction between the dyes and the adsorbent surface. High ionic strength also enhances hydrophobic-hydrophobic interactions by compressing the electrical double layer, which moves particles much closer together and leads to an increase in dye adsorption, as observed in FIG 9. Hu et al. (2013) reported a similar observation for the sorption of methyl orange, methylene blue and neutral red. It has been reported that ionic strength greatly influences particle aggregation by altering electrostatic interactions between sorbates and sorbents; an increase in the ionic strength of a solution has been described to suppress electrostatic repulsion, thereby promoting particle aggregation (Mercer and Tobiason, 2009; Piret and Su, 2008).

Competitive sorption in the bi-solute system For the malachite green/rhodamine B system,

qM = KM CM (CM + aMR CR)^(1/nM − 1) . . . . . . (18)

where malachite green is the sorbate of interest and rhodamine B is the interfering ionic species. The linear form of the bicomponent isotherm equation can be written accordingly. R/M system: as expressed above, the competition coefficients were determined from the intercept of a straight-line plot of CR against βR in equation 17, which is presented below. FIG 10: Experimental results presented in the linear form of the multicomponent adsorption isotherm: rhodamine B in the presence of malachite green at constant concentration. Fig 10 shows the extended Freundlich plot for the rhodamine B-malachite green system. The correlation coefficient was found to be 0.993, which shows very good linearity of the plot and indicates the applicability of the extended Freundlich equation to describe the competitive adsorption behaviour of the binary mixture. The values of the competition coefficients were determined from the intercept of the above plot (Fig. 10). However, it should be noted that since by definition a12 = 1/a21, as described by Sheindorf et al. (1981), only one of the coefficients is required. Table 10: Summary of competition coefficient values for the binary-solute system of rhodamine B and malachite green in the synthetic wastewater system. Deb et al. (1967) proposed that the value of the competition coefficient ranges from zero (complete lack of competition) to values greater than zero (normally less than 10) for a high degree of competition. An overview of the values obtained showed that the competition coefficients for MG interfering in RDB removal (Table 10) were far lower than those for RDB interfering in MG removal in the synthetic wastewater system: the former ranged between 0.251 and 0.318, while the latter ranged between 3.15 and 3.99. This shows that the competitive effects of malachite green on rhodamine B sorption by bentonite were insignificant, since the values of the competition coefficient were below unity (Wu et al., 2002), whereas the values of the competition coefficient of rhodamine B in malachite green sorption by bentonite were very high (Table 10).
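To illustrate the extended (Sheindorf-Rebuhn-Sheintuch-type) competitive Freundlich model discussed above, the sketch below evaluates how a trial competition coefficient suppresses MG uptake in the presence of RDB; all parameter values are hypothetical, not fitted values from this study:

```python
import numpy as np

def srs_uptake(K, n, a_ij, C_i, C_j):
    """Sheindorf-Rebuhn-Sheintuch competitive Freundlich uptake of species i
    in the presence of interfering species j (competition coefficient a_ij).
    With C_j = 0 this reduces to the single-solute Freundlich form K*C^(1/n)."""
    return K * C_i * (C_i + a_ij * C_j) ** (1.0 / n - 1.0)

C_MG = np.linspace(2.5, 30, 12)   # sorbate of interest (mg/L)
C_RDB = 30.0                      # interfering dye held constant (mg/L)

# Hypothetical single-solute Freundlich parameters for MG and a trial a_MR
q_alone = srs_uptake(K=0.8, n=2.0, a_ij=0.0, C_i=C_MG, C_j=0.0)
q_binary = srs_uptake(K=0.8, n=2.0, a_ij=3.5, C_i=C_MG, C_j=C_RDB)
print(np.round(q_alone - q_binary, 3))  # uptake suppression caused by RDB
```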
The values observed for the competition coefficients of both malachite green and rhodamine B, as interfering species in their respective systems, signify that the presence of rhodamine B in the aqueous matrix had an antagonistic effect on the sorption of malachite green by the sorbent, while the presence of malachite green had a non-interactive effect on the sorption of rhodamine B by bentonite (Oladoja et al., 2016).

CONCLUSIONS This study evaluated the use of readily bio-available and inexpensive bentonite as a low-cost adsorbent for the removal of basic dyes in single and binary systems. The FTIR characterization revealed the presence of silanol and aluminol groups on the surface of the bentonite, which were responsible for the sorption of these basic dyes. The sorbent was found to possess a higher adsorption potential for MG than for RDB, possibly due to the presence of the acidic -COOH group on RDB, which MG does not possess; the -COOH group induces electrostatic repulsion between RDB and bentonite. The sorption of both dyes was found to be highly dependent on various environmental conditions such as pH, ionic strength, contact time and initial dye concentration. The equilibrium data analysis revealed the heterogeneous nature of the bentonite, confirmed by the fit of the experimental data to the Freundlich isotherm model. The extremely high values of R², together with predicted equilibrium sorption capacities qe in close agreement with the experimental data for all initial RDB and MG concentrations, confirm that the sorption process follows a pseudo-second-order mechanism. The competitive effect of the presence of malachite green on the sorption of rhodamine B by bentonite was insignificant (competition coefficient less than unity); thus, it can be concluded that malachite green has a non-interactive effect on the sorption of rhodamine B by bentonite. In contrast, the competitive effect of the presence of rhodamine B on the sorption of malachite green was very high (greater than unity); thus, rhodamine B is antagonistic to the sorption of malachite green by bentonite.
2019-04-12T13:50:44.714Z
2019-03-21T00:00:00.000
{ "year": 2019, "sha1": "c43df49637339731a1f378563caff25610c2aea4", "oa_license": "CCBY", "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ajpst.20190501.13.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "108a56667a4badee356ba2bd86a9362aa27b1bc2", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Chemistry" ] }
254003697
pes2o/s2orc
v3-fos-license
Climate change adaptation innovation in the water sector in Africa: Dataset

The dataset covers the determinants of adaptation innovation in the water sector in Africa over the period 1990-2016. The data is collected from secondary sources, namely the World Bank and Organization for Economic Co-operation and Development (OECD) databases and the University of Notre Dame's Global Adaptation Initiative. The data is focused on susceptibility to water stress caused by climate change and the public response in the form of technology development. The analysis performed on the data focused on the degree to which exposure to the risk of water insecurity is a motivating factor in the public response. In the analysis, an econometric model was specified for a relationship between a measure of water stress induced by climate change and adaptation innovation, along with a series of socio-economic and socio-political indicators as controls. Sustainable development practitioners and environmental and social scientists with research and teaching interests in Africa will find the dataset very useful. Sustainable development practitioners can use the data to chart simple trends and for other summative purposes. The data can also be used to make regional or geopolitical comparisons on the same subject as our analysis. Furthermore, with similar technology innovation data on other sectors exposed to climate change risks, comparisons of public responses can be undertaken to understand the relative effectiveness of climate change adaptation responses. Crucially, the simple format of the data makes it a very convenient teaching tool in a statistics or econometrics class.

Specifications table
Subject: Economic Development and Growth
Specific subject area: Sustainable development, technology innovation for climate change adaptation
Type of data: Table
How the data were acquired: Existing databases were used to collect data.
The choice of a specific database was determined by (1) its reliability and (2) whether it contained reliable data on the desired indicator for our analysis.
Data format: Raw
Description of data collection: Secondary data was collected from three main sources: the World Bank, OECD, and Notre Dame's Global Adaptation Initiative (ND-GAIN) databases. The World Bank was the source for the following variables: openness to trade (trade as % of gross domestic product), time required to register property, per capita gross domestic product, employers (total), and gross enrolment ratio. ND-GAIN was the source of data for the vulnerability index, and adaptation technology data was extracted from the OECD database. The variables on which data was collected were dictated by the empirical model, and the three sources for data on these variables were selected on the basis of data reliability and consistency. Data on the variables of interest was downloaded from these databases first in Excel format. The data was then inspected for compatibility, uploaded to the Stata software and saved as a Stata file for use in the statistical analysis of the paper [1]. The data is provided in this article in both the Excel and Stata formats [2]. The structure of the data and how it is arranged is the same in both formats. There are ten variables in total in the dataset, seven of which are used directly in the analyses of the paper after pooling the data across the remaining three variables (country, country identification and year). The years for which data is collected range from 1990 to 2016.

Value of the Data
• Development practitioners can use the data to identify simple trends in water security risks in Africa over time.
• Researchers can use the data to make regional or geopolitical comparisons on the same subject as our analysis. Public policy can benefit from potential insights on differences that may emerge from such comparisons.
• Researchers may derive useful insights on the relative effectiveness of climate change adaptation responses by using the data to make comparisons with other sectors exposed to climate change risks, at different levels: local, regional or global.
• The simple format of the dataset makes it a convenient, useful resource for instructors on the subjects of statistics and econometrics.

Data Description Raw data is provided in both Stata and Excel files [2] in the form of a table. It contains data on the following variables: year, adaptation technologies, openness to trade (trade as a percentage of gross domestic product), time required to register property (calendar days), gross domestic product per capita, employers (total), gross enrolment ratio and the water stress index. The data on all the above variables is pooled for the following years: 1990, 2000, 2005, and 2010 to 2016. Water-related patent data is used as a proxy for adaptation innovation specifically tied to climate change adaptation and is collected from the database of the Organization for Economic Co-operation and Development (OECD). For exposure to water insecurity tied to climate change, data on the water security score of the Notre Dame Global Adaptation Initiative (ND-GAIN) index of vulnerability to climate change is collected. The water score index accounts for indicators such as projected change of annual runoff, projected change of annual groundwater recharge, fresh water withdrawal rate, water dependency ratio, dam capacity, and access to reliable drinking water.
ND-GAIN scales these indicators by using the "proximity-to-reference" method, which measures vulnerability by the distance to an ideal status. Each of these indicators is scaled to a score that lies between 0 and 1, with values closer to zero implying less vulnerability and values closer to 1 implying high vulnerability. This scaling allows for comparisons across countries. The score for each indicator is computed from the ratio of the difference between the raw value of the indicator and its reference point to the difference between a baseline maximum and a baseline minimum. ND-GAIN uses a baseline minimum of 54.99% for access to reliable drinking water and zero for the other five indicators. The baseline maximums are: 1 for projected change of annual runoff, 1 for projected change of annual groundwater recharge, 100 for fresh water withdrawal rate, 73.32 for water dependency ratio, 4932 for dam capacity, and 100 for access to reliable drinking water. The reference points are: 100% for projected change of annual runoff, 100% for projected change of annual groundwater recharge, 0% for fresh water withdrawal rate, 0% for water dependency ratio, 4932 m³ per capita for dam capacity, and 100% for access to reliable drinking water. The water score index is then obtained by taking the arithmetic mean of the scores of these six constituent indicators. This arithmetic mean for the water sector is the measure of vulnerability to water stress used in the research article. ND-GAIN's choice of reference points and baseline minimum and maximum is guided by the literature.
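A minimal sketch of the proximity-to-reference scaling just described is given below; this is an illustrative reading of the verbal description above, not ND-GAIN's reference implementation, and the raw value used is hypothetical:

```python
def proximity_to_reference(raw, reference, base_min, base_max):
    """Scale a raw indicator to [0, 1]; values near 0 mean less vulnerable.
    Illustrative only -- ND-GAIN's exact sign conventions vary by indicator."""
    score = abs(raw - reference) / (base_max - base_min)
    return min(max(score, 0.0), 1.0)  # clip to the unit interval

# Example: fresh water withdrawal rate (reference 0%, baselines 0 and 100)
print(proximity_to_reference(raw=37.5, reference=0.0, base_min=0.0, base_max=100.0))
# -> 0.375

# Water score = arithmetic mean of the six indicator scores (hypothetical values)
scores = [0.375, 0.42, 0.31, 0.55, 0.60, 0.28]
water_score = sum(scores) / len(scores)
```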
The data for the rest of the variables is collected from the World Bank database. This includes gross domestic product (GDP) per capita, used as a control for one country's size relative to others in our sample. Time required to register property records the number of calendar days needed for businesses to secure rights to property and serves as a proxy for institutional effectiveness in our analysis. Openness to trade is the sum of exports and imports of goods and services measured as a share of gross domestic product. Total employers is used as a proxy variable for research and development activity. The education variable is represented by the gross enrolment ratio and measures total enrolment in primary education, expressed as a percentage of the population of official primary education age. This variable is used as a proxy for the knowledge base of a country, which forms the absorptive capacity necessary for technology transfer and diffusion.

Experimental Design, Materials and Methods The goal in the accompanying research article [1] was to examine the response of the technology sector to the increased exposure to water insecurity induced by climate change in Africa. To do this, the research article draws from the empirical literature on the drivers of innovation in climate change policies and the literature on water innovation as well as resources management. Data for African countries is pooled for years between 1990 and 2016. An econometric model is then specified that estimates the relationship between climate-induced vulnerability in the water sector and water-related adaptation innovations, with controls such as a country's size, technology transfer environment, institutional and regulatory quality, knowledge base, and research and development activity. In the econometric model, patent counts constitute the dependent variable in the analysis. Since the dependent variable is a count, we use an appropriate count (probability) model: the negative binomial model, which is a less restrictive form of the Poisson model. Both models have built-in estimation functions in the Stata software, which is used to estimate the parameters of the models and compare results.

Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
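For readers who wish to reproduce the estimation step described in the methods, the following sketch shows one way to fit the negative binomial and Poisson count models; it is written in Python with statsmodels rather than Stata, and the file and column names are assumed placeholders, not the dataset's actual labels:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical frame mirroring the dataset's layout (column names assumed)
df = pd.read_excel("adaptation_water_africa.xlsx")
y = df["adaptation_technologies"]                  # count of water-related patents
X = sm.add_constant(df[["water_stress_index", "gdp_per_capita",
                        "trade_openness", "register_property_days",
                        "employers_total", "gross_enrolment_ratio"]])

# Negative binomial regression (a less restrictive alternative to Poisson)
nb = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(nb.summary())
print("AIC: NB", nb.aic, "vs Poisson", poisson.aic)  # compare the two fits
```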
2022-11-27T16:27:57.301Z
2022-11-01T00:00:00.000
{ "year": 2022, "sha1": "a4e8ea8fb3e1eae60274fdf42c6274a51020633f", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.dib.2022.108782", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3d158f1ae2230adc3863336d70ea715e043c94a5", "s2fieldsofstudy": [ "Environmental Science", "Economics" ], "extfieldsofstudy": [ "Medicine" ] }
97012146
pes2o/s2orc
v3-fos-license
NMR study of pyrochlore lattice antiferromagnet, melanothallite Cu2OCl2

The melanothallite Cu2OCl2 is a new example of a pyrochlore-like antiferromagnet composed of 3d transition metal electrons. We performed Cu- and Cl-NMR experiments on powder samples of Cu2OCl2 below the transition temperature TN = 70 K and observed six resonant peaks of Cu nuclei, composed of three symmetric peaks corresponding to 63Cu and three corresponding to 65Cu. The Cu nuclei feel strong hyperfine fields due to the ordered magnetic moments and the electric field gradients. We determined the spin structure by analyzing the Cu-NMR spectra: the melanothallite has an all-in-all-out spin structure. The spin-lattice relaxation rates T1−1 of Cu- and Cl-NMR in the ordered phase are proportional to the temperature; this suggests that, although long-range ordering occurs at rather high temperature, the large spin fluctuations caused by the geometrical frustration still remain.

Introduction The pyrochlore lattice is composed of corner-sharing tetrahedra. The antiferromagnet on the pyrochlore lattice with nearest-neighbour Heisenberg interactions has a macroscopic number of degenerate ground states and no long-range magnetic ordering even at zero temperature. This magnetic tetrahedron system has attracted much interest because of its three-dimensional geometrical frustration. A well-known example is the A2B2O7 system, in which A is a rare-earth element and B is a 3d-transition metal element. In these compounds 4f electrons dominate the magnetic properties, and their main interaction is the long-range dipolar interaction. In contrast, the interaction in the 3d-electron system is the short-range exchange interaction. Thus, 3d-electron antiferromagnets on the pyrochlore lattice are considered to be simpler than 4f-electron pyrochlore antiferromagnets. Recently, clinoatacamite Cu2(OH)3Cl with s = 1/2, atacamite Ni2(OH)3Cl with s = 1, Cu3O4 with s = 1/2, and FeF3 with s = 5/2 have been reported to be pyrochlore-like antiferromagnets with 3d-electron systems [1-3]. These compounds were reported to show interesting behaviours, such as successive transitions, exotic critical behaviour, large spin fluctuations in the spin-freezing phase, and novel spin structures in the ordered phase. These behaviours are considered to be due to additional perturbations, such as further-neighbour interactions, anisotropies, or lattice distortions [1-5]. The melanothallite is a new example of a pyrochlore-like antiferromagnet with a 3d-electron system.

Figure 1. Crystal structure of melanothallite. The blue, red, and green balls represent copper, oxygen, and chlorine, respectively. The thick blue bonds between Cu2+ represent the exchange interaction J, and the thin ones represent the exchange interaction J′. The tetrahedra composed of Cu are shown in yellow. The figure was prepared using the VESTA program [6].

The crystal structure of melanothallite is orthorhombic, as shown in figure 1, and its space group is Fddd. The lattice constants are a = 9.542 Å, b = 9.712 Å, and c = 7.396 Å. All Cu2+, O2− and Cl− ions belong crystallographically to single equivalent sites. The magnetic ion Cu2+ has a spin of 1/2. The oxygens are located at the centres of the tetrahedra composed of Cu2+. These tetrahedra form the pyrochlore lattice through the corner-sharing network of Cu2+.
On the other hand, the crystal structure can also be considered to be composed of vertically cross-stacked one-dimensional chains of Cu2+, shown in figure 1 by the thick blue bonds. The super-exchange paths between Cu moments consisting of Cu−O−Cu and Cu−Cl−Cu are represented by the thick blue lines, while the thin blue lines represent the super-exchange Cu−O−Cu path. Controversy has existed on whether this compound should magnetically be considered a three-dimensional pyrochlore lattice or a one-dimensional chain. The temperature dependence of the susceptibility shows a broad peak at about 140 K and an antiferromagnetic ordering at TN = 70 K [7]. The broad maximum cannot be explained by a one-dimensional antiferromagnetic model with only intrachain interactions J. If a modified one-dimensional model with intrachain interactions J and interchain interactions J′, as shown in figure 1, is assumed, the fitting can be improved. However, the obtained J and J′ are 113 K and 108 K, respectively, and thus they are nearly the same [7]. Therefore the one-dimensional model is not appropriate, and this suggests that melanothallite, Cu2OCl2, is a slightly distorted pyrochlore lattice antiferromagnet. The specific heat measurement also indicates that long-range magnetic ordering occurs below TN = 70 K [7]. At temperatures below TN, clear muon spin precession signals were observed in the µSR measurements in the long-range ordered phase [8]. However, the time spectra of the µSR measurements show exponential damping even below TN, which suggests that spin fluctuations still exist below TN because of the geometrical frustration [8]. The spin structure and the origin of the large spin fluctuations in the ordered phase have not been determined.

Figure 2. Cu-NMR spectrum at T = 2.9 K, below TN, with no external magnetic field. The blue and green solid arrows indicate the 63Cu and 65Cu peaks, respectively.

Therefore we have investigated the spin structure and spin fluctuations of the new 3d-electron pyrochlore antiferromagnet Cu2OCl2 microscopically by nuclear magnetic resonance (NMR) measurements.

Experiment We have performed Cu- and Cl-NMR experiments on polycrystalline samples of Cu2OCl2. The sample was synthesized by a conventional solid-state reaction of high-purity CuCl2 and CuO powders. The stoichiometric powder mixtures were pressed into pellets and sintered at 623 K for 18 h in CO2 flow. The spin-echo spectra of 63Cu (I = 3/2), 65Cu (I = 3/2) and 35Cl (I = 3/2) were obtained, and the spin-lattice relaxation rates 63T1−1 of 63Cu and 35T1−1 of 35Cl were measured using a coherent phase spectrometer. The spectra of the Cu nuclei were obtained by a frequency sweep without an external magnetic field. In the case of Cl-NMR, the spectra were obtained by a field sweep at an operating frequency of 24.08 MHz. No magnetic field was applied for the measurements of the relaxation rates of the Cu nuclei.

Experimental Results and Discussions The Cu spin-echo spectrum at T = 2.9 K, below TN and without external magnetic field, is shown in figure 2. Six sharp resonant peaks were clearly observed: three equally spaced peaks corresponding to the 63Cu nucleus and three equally spaced peaks corresponding to the 65Cu nucleus. The three symmetric peaks for each Cu nucleus are due to the internal magnetic field and the electric field gradient (EFG). These resonant peaks can be analyzed on the basis of the Zeeman and electric quadrupole interactions.
The Hamiltonian of the Cu nucleus is expressed as

H = −γħ H_local I_z′ + (hνQ/6)[3Iz² − I(I + 1) + (η/2)(I+² + I−²)],

where Iz, I+ and I− are the nuclear spin operators; Q is the nuclear quadrupole moment; γ is the gyromagnetic ratio; and H_local is the magnetic local field at the Cu nucleus position. The second-order derivatives of the electric potential, Vαα = ∂²V/∂α² (α = x, y, z), denote the components of the EFG tensor in the principal coordinate system with |Vzz| ≥ |Vyy| ≥ |Vxx|, and η = (Vxx − Vyy)/Vzz is the asymmetry parameter. The principal axis z of the EFG is generally not along the direction z′ of the local magnetic field. The nuclear quadrupole resonance (NQR) frequency νQ = eQVzz/4I(2I − 1)h is a measure of the size of the EFG, and its value is determined experimentally by measuring half the distance between the two satellite peaks. The values of νQ obtained from the spectra of 63Cu and 65Cu in figure 2 are presented in table 1, as are the centre-peak frequencies fc obtained from the spectra in figure 2. The NQR frequency νQ is a quarter of fc, as seen in table 1. Thus the electric quadrupole interaction is too large to be treated as a perturbation of the Zeeman interaction. In this case, the shift of the spectra becomes asymmetric, depending on the angle between the principal axis z of the EFG and the principal axis z′ of the local magnetic field. The obtained Cu-NMR spectra are symmetric, which implies that the axes z and z′ are parallel. The local magnetic fields H_local are obtained from the centre-peak frequencies fc by H_local = fc/γ and are listed in table 1. For convenience, the modified EFG is defined as νQ/Q = eVzz/4I(2I − 1)h; these values are also listed in table 1. The values of νQ/Q = eVzz/4I(2I − 1)h and H_local depend only on the environment around the Cu nucleus and not on the kind of isotope. The modified EFG νQ/Q and the local field H_local at all Cu nucleus positions are found to be identical, as seen from table 1. In addition, since the obtained spectra are very sharp, H_local, νQ/Q, and the angle between the principal axes z and z′ show no distribution. We find that the EFG, the magnetic moment of the Cu spin, and the angle between the principal axis z of the EFG and the principal axis z′ of the local magnetic field are uniform. This suggests that the magnetic and crystallographic unit cells are the same. We calculated the EFG at the Cu site by assuming point charges on O2− and Cl−. The direction of Vzz at the Cu site is found to be almost along the direction from Cu to the O located at the centre of the tetrahedron. The spin structures in which the spins point toward the centre of the tetrahedron are the so-called 2-in-2-out, 3-in-1-out, and all-in-all-out spin structures. Since the nearest-neighbour interaction in Cu2OCl2 is determined to be antiferromagnetic by the susceptibility measurement, the all-in-all-out spin structure is energetically stable. Therefore, the magnetic structure of Cu2OCl2 is of the all-in-all-out type, as shown in figure 3. The ground state of the Heisenberg pyrochlore antiferromagnet with only nearest-neighbour interaction J has a macroscopic number of degenerate states even at zero temperature and no magnetic long-range ordering. Thus, the observed spin ordering in Cu2OCl2 should be caused by additional perturbative effects.
Further-neighbour interactions, distorted nearest-neighbour interactions, and anisotropy are considered to be the possible additional perturbative effects in Cu2OCl2. First, we consider further-neighbour interactions and distorted nearest-neighbour interactions. The case with additional second-neighbour interaction J2 and third-neighbour interaction J3 was studied by Monte Carlo simulations [4]. The simulations show that the q = 0 states are stable. The q = 0 states contain the all-in-all-out spin structure and collinear spin arrangements; the collinear spin arrangement is thermally selected due to the entropy effect. In Cu2OCl2, the pyrochlore lattice is slightly distorted, with |J| = 113 K > |J′| = 108 K. In this case, the collinear ordered spin arrangement is also energetically favoured. Thus, models with |J| > |J′| and further interactions cannot explain the observed spin structure of Cu2OCl2 in the ordered phase. Next we consider the effect of anisotropy. Because the pyrochlore lattice has no inversion centre between the lattice points, it is possible that Dzyaloshinsky-Moriya interactions (DMI) exist in Cu2OCl2; it is therefore reasonable to discuss the DMI. The DMI are written as

H_DM = Σ_⟨i,j⟩ D_ij · (S_i × S_j),

where D_ij is the DMI vector. The ground state of the Heisenberg pyrochlore antiferromagnet with DMI has been studied by Monte Carlo simulations [5]. Reference [5] considers two types of DMI configurations: direct and indirect DMI. In the pyrochlore lattice, the D vector lies in the plane perpendicular to the bond at the middle point of the edge and is parallel to the opposite edge; the difference between direct and indirect DMI is the sign of D. The direct and indirect DMI lead to the all-in-all-out structure and a collinear ordered spin structure, respectively. Thus the DMI resolve the degeneracy between the all-in-all-out state and the collinear ordered states. Since Cu2OCl2 is not an ideal pyrochlore structure and the DMI configuration depends on the symmetry of the crystallographic structure, we cannot simply apply this DMI configuration to Cu2OCl2. It is, however, thought that DMI can exist in Cu2OCl2, because Cu2OCl2 has no inversion centre between the Cu2+ ions and because the perpendicular bisector plane at the middle point of the J bond is a mirror plane. It is likely that the DMI on the J bond is similar to the direct DMI configuration. Thus, we conclude that the DMI play an important role in the magnetic ordering and structure of Cu2OCl2. The rates 63T1−1 of 63Cu were measured at the centre peak, which corresponds to the −1/2 ↔ +1/2 transition. The temperature dependences of 63T1−1 and of 35T1−1 of 35Cl are shown in figure 4. Divergent behaviour of the temperature dependence of 35T1−1 is observed near TN, as shown in figure 4, implying that critical slowing down occurs around TN. We found that both rates in the ordered phase are proportional to the temperature, T1−1 ∝ T. When spin-wave excitations dominate in a long-range ordered spin system, T1−1 exhibits an approximate power-law dependence on temperature, T^α with α > 2. The results for T1−1 may therefore suggest that, although the long-range ordering occurs at a rather high temperature, large spin fluctuations caused by the geometrical frustration still remain. These results are consistent with the µSR results. In summary, we performed an NMR study of the 3d-electron pyrochlore antiferromagnet Cu2OCl2.
Based on the analysis of the Cu-NMR spectra, Cu2OCl2 shows long-range ordering below T_N. The spin structure of Cu2OCl2 in the ordered phase is of the all-in-all-out type, and the DMI is a plausible cause of this spin ordering. The temperature dependence of T1⁻¹ in the ordered phase is proportional to T, suggesting that large spin fluctuations caused by the geometrical frustration remain.
Investigating the Knowledge Management Culture Knowledge Management (KM) efforts aim at leveraging an organization into a knowledge organization, thereby presenting knowledge employees with a very powerful tool: organized valuable knowledge accessible when and where needed in flexible, technologically-enhanced modes. The attainment of this aim, i.e., the transformation into a knowledge organization, depends on a number of critical success factors. One of these critical factors is the promotion of a knowledge-friendly organizational culture. This paper investigates the elements which synthesize a knowledge-friendly and simultaneously KM-enabling culture. Special interest is placed on how such a culture is shaped in an educational setup. Introduction Knowledge Management (KM) involves collecting valuable knowledge and then storing, categorizing and organizing this knowledge with the aim of making it promptly available to those people and systems that need it. Such valuable knowledge is primarily the possession of experienced employees but may also be found in systems, databases, file cabinets, and other available sources. In order that organizations may transform into knowledge organizations, it is necessary to put together coordinated efforts directed towards a number of business areas. These efforts involve the alignment of the organization's structure, the system processes, and the availability of technology and skills with the organization's specific goal to become a knowledge organization and its broader executive goals and direction. Critical Success Factors for KM Established within the KM frameworks of implementation are a number of factors which require direct attention and are considered critical for the success of the KM initiative. The same factors, if not addressed properly and adequately, may turn from being enablers into being barriers to enjoying the benefits of KM [1], [2]. KM enablers concern the organizational structure; strategy and leadership; technological infrastructure; culture; organizational processes; and measurement. If all of these factors are appropriately attended to and are additionally checked to be in alignment with the goals and direction of the organization, then it is very likely that KM success will follow. More analytically, key success factors for a KM implementation, a.k.a. KM enablers or KM ingredients, have been discussed in the literature by several researchers such as [3], [4], (Ernst & Young KM International Survey, 1996), [5], [6], [7], [8] among others. Hence, the long list of KM success factors which may be compiled would possibly include the following: employee training; employee involvement in KM activities; teamwork; employee motivation; employee empowerment (skills development); top management leadership and commitment in knowledge management; effective use of information and communication technologies; performance measurement to include both soft and hard measures; a knowledge-friendly organizational culture; a KM-friendly national culture (preferable); benchmarking; appropriate knowledge structure(s) (such as communities of practice); the resolution of organizational constraints; the integration and balancing of leadership, organization, learning and technology in an enterprise-wide setting; streamlined organizational structures and processes; infrastructure support as a composite of some of the above factors; a reward/recognition scheme for knowledge sharing.
Several factors relate directly to the individual employee while the majority address issues at the organization level (see Figure 1). The KM Culture Among the critical success factors for KM at the organizational level is the existence of a knowledge-friendly and KM-enabling culture within the organization. A KM-friendly national culture would also be preferable. The present project work focuses on culture and investigates the characteristics of what constitutes a KM culture. An organization's culture is created from the fundamental assumptions and beliefs that are shared by an organization's members. It is found to operate unconsciously and it defines the organization's view of itself and its environment [9]. It involves the values, principles, unwritten rules, norms, and procedures used within the organization. A KM-enabling culture is defined as a trusting knowledge culture that is directed towards rewarding innovation, learning, experimentation, scrutiny and reflection [10]. Trust among the organization's members is a must for sharing knowledge [4]. Within an organization, culture impacts and is impacted by infrastructure and strategy, as well as the organization's mission, vision, objectives, and goals. The national culture also affects the values and practices of every organization which is attempting a KM implementation [4]. Anthropologists agree on some basic characteristics of culture. These are: (a) culture is learned (it involves learned behaviour); (b) culture is shared between the members of a group but is not necessarily homogeneous; (c) culture is based on symbols, the most important of which is language; (d) culture is integrated, as it constitutes a composite of all of its parts; (e) culture is dynamic, in that it interacts with other cultures and changes. Through culture, people create expectations of behaviours. These expectations can result in constructive interactions which promote knowledge sharing, but also in non-constructive interactions and thereby actions that hinder knowledge exchange. Constructive and defensive cultures in relation to individual and organizational outcomes that promote KM success have been studied by Balthazard and Cooke [11]. Constructive norms were found to be positively associated with both individual outcomes such as role clarity, communication quality, organizational fit, creativity, and job satisfaction, as well as organizational outcomes such as quality of products and services, quality of customer service, organizational adaptability, limited turnover, and quality of the workplace. All of these outcomes, individual and organizational, are considered to promote KM success. On the other hand, defensive cultures, both passive and aggressive, are negatively related to the above individual and organizational outcomes that may bring KM success [11]. There is also support for the view that within an organization there may be a variety of cultures. This view may explain why some organizational units exhibit behaviours that run counter to the organization's expressed values or mission [11]. Is a competitive or a supportive culture better for the creation of a knowledge-sharing atmosphere? Additional research supports that a competitive culture leads to individuals keeping their knowledge to themselves, whereas a supportive culture may demote their self-interest and make them feel even morally obligated to share [12], [13], [14], [15] cited in [16].
The absence of some of the above cultural attributes or the existence of one or more of the following cultural barriers may jeopardize the persistence of a KM culture. Cultural barriers may arise from unclear priorities; distrust of data use [7]; lack of rewards/recognition for knowledge sharing; organizational inefficiencies (wrong person at the wrong position); knowledge sharing not being a part of daily work; privileged positions used for personal benefit; lack of participation; lack of trust; lack of training; and unwillingness to share knowledge. The role of management in creating a KM-enabling culture must also be emphasized. Managers and leaders should actively encourage the creation and use of knowledge. Additionally, management should encourage the organization's workforce to build a positive orientation to knowledge, which means that they become intellectually curious, are willing and feel free to explore, and are willing to share without feeling that sharing knowledge will result in them losing power or will cost them their jobs. Alongside the creation of a KM-enabling culture, knowledge sharing should be further encouraged by establishing a value system characterized by non-linear, dynamic and interdependent relationships. A team spirit and benefits derived by other users may also motivate users towards knowledge sharing. It is not uncommon for people to share knowledge for altruistic pro-social reasons [15], as for example in the case of Wikipedia. Motivating employees towards knowledge sharing and learning is also not uncommon. For example, Ernst and Young [17] and Price Waterhouse [18] devised reward mechanisms for knowledge sharing activities whereby such activities are tied to the employee's performance evaluation. It is also argued that only by developing the necessary organizational culture can an organization gradually change the pattern of interaction between people, technologies, and techniques, because the core competencies of an organization are entrenched deep in organizational practice [19]. Finally, in the case that the culture needs to change to form what has been described as a KM-enabling culture, the ability or inability to change it, which may itself be affected by a number of conditions, is crucial. In such cases, the organization should not neglect the resistance from cultural inertia and the difficulty that this causes in transferring the knowledge needed to effectively implement better business practices [6]. KM and the Academia The characteristics of a KM-enabling culture and all of the other related issues as discussed above are in general valid for all types of organizations. At the same time, some additional issues may be raised in relation to developing a knowledge-sharing culture in an educational organization. Focused research carried out in the area of education reveals some of these concerns, which are presented below. Would you feel secure sharing your knowledge with a colleague? Some would say "Not sure". In fact, it may be understandable to feel insecure in sharing knowledge at the workplace, as knowledge is regarded as a valuable resource. Actually, it is not uncommon that individuals may not share information with their departmental peers, supervisors, or other colleagues based on the belief that this provides them with an inherent advantage in bargaining and negotiation [2].
To what extent is this premise applicable to educators, who by definition must practice knowledge sharing? Although knowledge sharing is the essence of education, how ready are educators to share their knowledge among themselves? Academics are judged upon their teaching duties, their research output, and their broader contribution to the university community and society. Knowledge sharing is at the heart of all three tasks, although knowledge may be directed towards a different recipient each time. A study conducted by Cheng and his collaborators (Cheng, et al., 2006) to examine knowledge sharing behaviours among academics in a knowledge-based institution, namely a university, focused on the factors which may affect academics' willingness to share knowledge. Organizational, individual, and technology factors were examined and the overall findings revealed that incentive systems and personal expectations are the two key factors in urging academics to engage in a knowledge sharing activity. "Forced" participation, which was attempted, did not work as expected and appeared to be an ineffective policy in cultivating a sharing behaviour among academics. Instead, academics responded to a performance-based incentive system, and the general conclusion was that it is important to provide the "right" incentive system and understand individuals' expectations towards knowledge sharing in order to facilitate knowledge-sharing behaviour [20]. In a different study, Alotaibi and co-researchers [21] aimed to investigate the factors that affect academics' behaviour towards knowledge sharing using Web technology. The following groups of factors were identified as most important for shaping the knowledge sharing behaviour of staff: motivation factors; IT acceptance; and organizational culture. The lack of time and the high level of effort required for knowledge sharing activities support the view that motivation is the biggest issue. IT acceptance is reached as an outcome of the individual's evaluation of the usefulness and the ease of use of the particular technology. Finally, organizational culture entails trust between employees, time availability, leadership directives and practices, and the necessary IT support. Intellectual property is another issue which, though not explicitly addressed in the literature as either a knowledge-sharing barrier or an issue requiring regulation, is interesting to consider. In fact, the Higher Education Funding Council for England (HEFCE) commissioned a study in 2010 on intellectual property [22]. The study showed that 19% of the academics in the top six high-research HEIs felt that intellectual property and other issues relating to the terms of knowledge-exchange interactions with external organizations could act as a barrier to their knowledge exchange interactions. In particular, these concerns were primarily raised by academics in the science, technology, engineering and mathematics disciplines. In conclusion, in order to overcome this natural tendency by individuals to protect knowledge and not share it, people must be convinced, rewarded or recognized properly [8]. More importantly, special attention should be given to the sharing of incomplete, inaccurate or ambiguous information because of competing interests. The motivation of employees and others, e.g.
other organizations, customers, suppliers, etc., to share accurate and timely information is closely paired with the mutual existence of trust, which in turn depends on the prevailing sharing culture [2]. The same pertains to the sharing of knowledge. Research Methodology The present study was conducted as part of a broader consultation project aiming at the implementation of knowledge management in a privately owned European institution of tertiary education. Focusing on the aspects of the investigation relating to the KM culture within the Higher Education Institution (HEI), the authors are herewith aiming at extracting and further investigating those attributes which would synthesize a KM-enabling culture in a HEI. The study collected qualitative data via the utilization of focus groups and in-depth interviews. Two focus groups were formed. The first one comprised faculty members from different schools, at different ranks, and with varied experience at the current institution. The second comprised members of the staff from different functional units and with varied service time at the institution. Individual in-depth interviews were also used to collect the opinions, views, and experiences of top executives of the institution in relation to current KM activities and future plans. All collected data were transcribed, compiled and analyzed using the Miles and Huberman General Analytical Technique [23]. Characteristics of the KM Culture Although the conducted study was not dedicated to examining the HEI's culture but was more broadly investigating the current practices and future plans of the HEI in relation to KM practices, it allowed the authors to initially and broadly shape the institution's culture in the context of a KM-enabling environment. A set of attributes relating to a KM culture in the HEI were extracted. These are presented herewith along with selected related contributions made by the faculty, staff, and administrators of the HEI who participated in the study. A KM-enabling culture is therefore characterized by (see Figure 2):
- Team spirit: organizational members should exhibit a team spirit; cultivate an environment which will promote professional and social interactions between its members; cultivate a shared sense of direction and excitement.
- Role clarity: prevent and resolve possible conflicts resulting from confusion in regards to responsibilities and jurisdiction.
- Trust: involves trust in the knowledge received being the best in terms of currency, accuracy and completeness; also involves trust that knowledge sharing will be done in appropriate, ethical ways; expects practicing KM with transparency in collecting best practices, reflecting on practices and sharing experiences; requires the careful handling of copyrights and of sensitive and proprietary knowledge.
Participants' Comments With their participation in the undertaken study, faculty, staff and administrators of the HEI articulated their appreciation of the need for KM in saying that:
- "There is a need to provide in a systematic way all this wealth of experience / knowledge / expertise so that someone will be able to use it if they take over a position in our units… We want to establish a system for the transfer of knowledge."
- "Knowledge sharing is part of the nature of academia and a university environment."
- "If you do not want to learn you will fail. When the organization learns it does not mean that everybody learns. Not only learning about what you are doing; it is also learning new things."
As for the concerns raised by the organization's members regarding the realization of a KM-enabling culture, these are better expressed through their comments, some of which follow:
- "If you provide them the means and the time people are willing to learn."
- "Social interactions are very important for all organizations. It is not just the dissemination of knowledge that should interest us, but the key is how people interact and collaborate to share the knowledge, along to the existence of a positive environment."
- "People can be trained, if there is a willingness, how to speak to each other. A culture can be cultivated."
- "Why should people take an initiative if their efforts are not rewarded? Quite often it is just a question of being recognized and appreciated."
- "It requires an individual and an organization value system to learn from past mistakes in order to go forward."
- "Should work on the emotional level on keeping people happy."
- "Maybe one of our weaknesses is sometimes a competition that may exist between departments. This may be caused by the size of the organization or the unclear delegation of duties, overlapping of responsibilities, stress caused by increased work load…"
- "Sometimes there is confusion in regards to responsibilities and jurisdiction of departments or individuals by the management or colleagues or students."
- "A re-engineering of positions with clear job descriptions may be necessary."
- "Communication between relating departments may not be developed to the necessary degree."
- "There is good communication between relating departments."
- "Knowledge sharing is not a problem of individuals it is rather a bad characteristic of our culture. In other cultures things are different. Students sometimes are looking to receive inspiration from their lecturers. Our discussions revolve around our everyday tasks; they lack spirituality."
In some cases participants were not in full agreement but rather expressed contradicting viewpoints, such as:
- "No problem with motivation and trust."
- "In general there is motivation… Every time we approach people with information there is response, there is readiness, …"
- "Has to do with the motivation of the person. Generally speaking our society is not characterized by a strong work ethic. Most people do not want to work."
- "We need somebody to motivate the people and cultivate the culture."
Other Issues Necessary for KM Success Additionally, employees' views on related aspects of the organization's functions revealed a number of issues which are associated with the success of a KM implementation within a HEI. These include: Issues relating to organizational structure:
- Maintenance of an organizational structure which will promote knowledge sharing;
- Resolution of any conflicts, such as conflicting goals and responsibilities between the organization's departments, which may sometimes influence people's behaviours in relation to knowledge sharing.
Issues relating to networking and communication:
- The creation of the necessary networks for knowledge transfer and sharing;
- Networking abilities through established avenues of communication with colleagues, experts and other benefactors, such as students, and others;
- Dissemination of knowledge to those who need it in a variety of ways in order to ensure easier and enhanced access (supporting the KM function of delivering the right knowledge to the right people at the right time);
- The promotion of internal cooperation among organization members;
- The promotion of external cooperation with industry consortia and other institutes.
Issues relating to technology and related skills:
- Optimal use of available ICT to connect with others;
- Organizational investments in new ICT to enhance collaboration, communication, sharing, etc.;
- Ways of dealing with the fear of technology and the expected resistance to change.
Issues relating to organizational processes:
- Updated knowledge of different areas of expertise and interests;
- A clear allocation of responsibilities for KM functions to individuals and offices;
- Carefully designed KM activities which must follow the natural workflow of organizational processes and must be embedded in organizational activities so as to require minimum additional effort.
Issues relating to management involvement:
- A management team actively and openly supporting KM;
- Conceptualization and formalization of KM activities by means of adopting a clear KM strategy.
Issues relating to ongoing KM activities:
- Constant identification of knowledge gaps in the organization and filling them by recruiting new organizational members and/or providing such knowledge to the organization's members along with the means necessary to attain it;
- Acknowledgement and follow-up of the evolution of the organization by designing new KM activities and re-designing/re-engineering the existing KM activities as deemed necessary;
- The regular measurement of KM practices and the close following of any progress made.
Conclusions and Plans for Future Research A KM-enabling culture is overall a trusting, supportive, non-individualistic culture which promotes sharing for the common goal of organizational prosperity. Initial and ongoing efforts will be required at all levels of the organization in order to create and maintain such a culture. Bottom-up initiatives taken at the employee level should be welcomed and may cause change in the management of the organization. At the same time, top-down initiatives involving new directions and changes introduced by the management to shape new behaviours and actions among the organization's employees are desirable. The realization and maintenance of such a culture should be seen as a challenging task. Finally, the attainment of a KM-enabling culture will most definitely be rewarding for the organization, which will subsequently have the ability to employ knowledge management and expect to enjoy its benefits. The present study was qualitative and involved the employee force, both staff and faculty, of a European HEI. A part of the study aimed at understanding a KM-enabling culture and the aspects of the overall organizational culture which relate to knowledge sharing. This led to shaping a KM-enabling culture within a HEI and identifying its main characteristics.
Alongside the examination of the knowledge-sharing culture, and by considering the feedback received from the organizational workforce, it was possible to identify a number of related issues and concerns which are also necessary for the implementation of KM within an organization. It is in our future plans to investigate the KM-enabling culture further by approaching more HEIs in order to reach more conclusive results which may be generalizable to the higher education sector.
A case of obstructive jaundice caused by tuberculous lymphadenitis: A literature review Obstructive jaundice caused by tuberculous lymphadenitis is a rare manifestation of tuberculosis (TB), with 15 cases having been reported in Korea. We experienced a case of obstructive jaundice caused by pericholedochal tuberculous lymphadenitis in a 30-year-old man. The patient's initial serum total bilirubin level was 21.1 mg/dL. Abdominal computed tomography revealed narrowing of the bile duct by a conglomerated soft-tissue mass involving the main portal vein. Abrupt obstruction of the common bile duct was observed on cholangiography. Pathologic analysis of an ultrasonography-guided biopsy sample revealed chronic granulomatous inflammation, and an endoscopic examination revealed esophageal varices and active duodenal ulceration, the pathology of which was chronic noncaseating granulomatous inflammation. Hepaticojejunostomy was performed and pathologic analysis of the conglomerated soft-tissue mass revealed chronic granulomatous inflammation with caseation of the lymph nodes. Tuberculous lymphadenitis should be considered in patients presenting with obstructive jaundice in an endemic area. INTRODUCTION Tuberculosis (TB) is an infectious disease that is prevalent worldwide, but obstructive jaundice secondary to abdominal TB remains rare. 1 Patients with bile duct involvement of TB causing obstructive jaundice have protracted symptoms such as malaise, jaundice, and weight loss, which are indistinguishable from those of cholangiocarcinoma. 2 Obstructive jaundice can be caused by tuberculous enlargement of the head of the pancreas, tuberculous lymphadenitis, tuberculous stricture of the biliary tree, or a tuberculous mass of the retroperitoneum. 1 Fifteen cases of pericholedochal tuberculous lymphadenitis have been reported in Korea. [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17] There were two cases of pericholedochal tuberculous lymphadenitis with duodenal TB 10,14 and two cases of pericholedochal tuberculous lymphadenitis with portal hypertension. 11,13 This is the first case report of pericholedochal tuberculous lymphadenitis with portal hypertension concomitant with duodenal TB in Korea. Here we report a case of obstructive jaundice with portal hypertension caused by pericholedochal tuberculous lymphadenitis with duodenal TB, in addition to a review of tuberculous lymphadenitis in Korea. CASE A 30-year-old man was admitted to our hospital with jaundice. One year earlier, he had been diagnosed with pulmonary TB, for which he had completed a six-month regimen of anti-TB medication (isoniazid, rifampicin, ethambutol and pyrazinamide for 2 months, then continuing with isoniazid, rifampicin and ethambutol for the remaining 4 months) at a local clinic. After treatment, he had no other problems until the development of jaundice. Abdominal ultrasonography performed at a local clinic showed bile duct dilatation. The patient was referred to our hospital for evaluation of the biliary obstruction. On physical examination, the patient's sclera was icteric. There was no hepatomegaly, splenomegaly or ascites. Hemoglobin was 10.7 g/dL, platelets were 227,000/μL, and white blood cell count was 9300/μL. The serum total bilirubin was 21.1 mg/dL and direct bilirubin was 12.4 mg/dL. AST was 160 IU/L and ALT was 147 IU/L. Serum BUN, creatinine, amylase and lipase levels were within normal range. Viral marker assays were negative for hepatitis B surface antigen, IgM anti-hepatitis A and anti-hepatitis C virus.
Dynamic computed tomography (CT) showed both intrahepatic and extrahepatic bile duct dilation with abrupt narrowing of the proximal common bile duct (CBD). The proximal CBD was encased by a soft tissue mass (Fig. 1A, 1B). This lesion spread from the hepatic hilum to the hepatoduodenal ligament and pancreatic head. Central calcification was observed in the lesion and the main portal vein was encased by the soft tissue mass. The patient's chest X-ray showed patchy and fibrotic opacities in both upper lungs with volume decrease. High-resolution chest CT showed multiple nodules with calcification and fibrotic bands in both upper lobes and the superior segment of both lower lobes, considered to represent stable TB (Fig. 1C). The cholangiogram from percutaneous transhepatic biliary drainage (PTBD) showed abrupt proximal CBD obstruction with dilated intrahepatic ducts (Fig. 1D). The guide wire and catheter could not be passed through the narrowed segment. Upper gastrointestinal endoscopy revealed grade 1 esophageal varices (Fig. 2A) and active duodenal ulceration was noted at the bulb (Fig. 2B). Pathologic examination of the duodenal ulceration showed chronic non-caseating granulomatous inflammation (Fig. 2C, 2D). TB polymerase chain reaction (TB-PCR) and acid-fast bacillus (AFB) staining were all negative. Percutaneous ultrasonography-guided biopsy of the soft tissue mass was performed and pathologic examination showed chronic granulomatous inflammation with fibrosis (Fig. 3A). TB-PCR was negative, and AFB and periodic acid Schiff stains did not demonstrate acid-fast bacilli or fungal organisms. Bacteria and Mycobacterium tuberculosis (M. tuberculosis) were not identified in blood or in bile fluid from PTBD. The soft tissue mass was considered to be conglomerated lymph nodes or a true mass lesion. Drug sensitivity testing for M. tuberculosis could not be performed because the organism did not grow in culture. An exploratory laparotomy was performed to relieve the biliary obstruction and to exclude malignancy. Several conglomerated lymph nodes encasing the CBD and portal vein were observed during surgery. The pancreas and liver appeared grossly normal and the gallbladder was not distended. Examination of frozen sections of the conglomerated lymph nodes showed chronic ill-defined granulomatous inflammation and fibrosis. Cholecystectomy and a Roux-en-Y bypass hepaticojejunostomy were performed. Final pathologic examination showed chronic granulomatous inflammation of the lymph nodes with caseation (Fig. 3B, 3C). AFB staining did not identify acid-fast bacilli in the gallbladder, bile duct or lymph nodes. TB-PCR showed a positive band in the lymph nodes. After the operation, the total bilirubin level decreased to 3.2 mg/dL. The patient was treated with anti-TB medication and the bilirubin level decreased to normal six weeks after surgery. DISCUSSION TB of the biliary system is rare and difficult to diagnose. 2 Obstructive jaundice caused by tuberculous lymphadenitis is most often attributed to mechanical obstruction of the biliary tract by lymph nodes or mass lesions. 1 Patients with tuberculous lymphadenitis usually present with obstructive jaundice, which may be confused with hepatobiliary malignancies. 2 The annual incidence of hepatobiliary TB is reported as 1.05% of all TB infections. 18 Hepatobiliary TB is caused by two mechanisms. 19 The first mechanism is the direct spread of caseous materials from the portal tracts into the bile duct and the second is secondary inflammation related to tuberculous periportal adenitis. 19
Table 1 lists the cases of pericholedochal tuberculous lymphadenitis reported in Korea, including this case. [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17] Pericholedochal tuberculous lymphadenitis in Korea showed an 81.3% male preponderance. The initial total bilirubin level ranged from 0.5 to 21.1 mg/dL with a mean of 5.9 mg/dL. Including the present case, 11 cases were treated by surgery with anti-TB medication and five cases were treated by anti-TB medication alone or anti-TB medication with endoscopic nasobiliary drainage or prednisolone. 9,12,14,15,17 In Korea, pericholedochal tuberculous lymphadenitis has been associated with intestinal TB (31.3%), pulmonary TB (25%), mediastinal tuberculous lymphadenitis (6.3%), cervical tuberculous lymphadenitis (6.3%) and tuberculous meningitis (6.3%) (Table 1). Anti-TB medication without surgical intervention is desirable, but there are two emerging problems. 2 First, multi-drug resistant strains of M. tuberculosis are becoming increasingly prevalent. Second, the bile duct can be severely damaged by repeated inflammatory reactions and may thus be irreversibly scarred. 20 In this case, the conglomerated lymph nodes encased the main portal vein, resulting in portal hypertension and thus causing the esophageal varices. In Korea, there were two previous reports of portal hypertension associated with portal vein obstruction by pericholedochal tuberculous lymphadenitis. 11,13 Including the present case, all three cases of pericholedochal tuberculous lymphadenitis with portal hypertension were treated by surgical intervention. In this case, surgery was performed to relieve the tight obstruction of the CBD and to exclude malignancy. Considering the worldwide prevalence of TB, tuberculous lymphadenitis is likely to be encountered. Pericholedochal tuberculous lymphadenitis needs to be considered in patients presenting with biliary obstruction, particularly in patients with risk factors such as a history of TB in an endemic area. Table 1 footnotes: * Intestinal tuberculosis (31.3%), pulmonary tuberculosis (25%), mediastinal tuberculous lymphadenitis (6.3%), cervical tuberculous lymphadenitis (6.3%) and tuberculous meningitis (6.3%). † Post-operation total bilirubin. ‡ Splenectomy due to splenomegaly caused by portal hypertension. § Operation due to paradoxical reaction to anti-tuberculous medication.
Improving the Quality of Positive Datasets for the Establishment of Machine Learning Models for pre-microRNA Detection MicroRNAs (miRNAs) are involved in the post-transcriptional regulation of protein abundance and thus have a great impact on the resulting phenotype. It is, therefore, no wonder that they have been implicated in many diseases ranging from virus infections to cancer. This impact on the phenotype leads to a great interest in establishing the miRNAs of an organism. Experimental methods are complicated, which led to the development of computational methods for pre-miRNA detection. Such methods generally employ machine learning to establish models for the discrimination between miRNAs and other sequences. Positive training data for model establishment, for the most part, stems from miRBase, the miRNA registry. The quality of the entries in miRBase has been questioned, though. This unknown quality led to the development of filtering strategies in attempts to produce high-quality positive datasets, which can lead to a scarcity of positive data. To analyze the quality of filtered data we developed a machine learning model and found that it is well able to establish data quality based on intrinsic measures. Additionally, we analyzed which features describing pre-miRNAs could discriminate between low- and high-quality data. Both models are applicable to data from miRBase and can be used for establishing high-quality positive data. This will facilitate the development of better miRNA detection tools, which will make the prediction of miRNAs in disease states more accurate. Finally, we applied both models to all miRBase data and provide the list of high-quality hairpins. Introduction Disease phenotypes largely depend on the expression of genes and on their translation into proteins. MicroRNAs (miRNAs) are short endogenous RNA sequences which are involved in the post-transcriptional modulation of protein abundance [1]. Thereby they have been implicated in many diseases ranging from virus-based ones to cancer [2]. Many miRNAs have been established experimentally since their first detection [3]. Such miRNAs are stored in databases like miRTarBase [4] and miRBase [5]. Experimental detection of miRNAs is convoluted [6] and establishing an effect on the protein level makes the process even more complicated. Therefore, computational methods which detect miRNAs directly from genomic or transcriptomic sequences have been widely applied [7]. Most of the methods for pre-miRNA detection are based on machine learning [7] and thereby need suitable examples for training an effective model. It is known that true negative data is not available [8] and that the confidence in machine learning models based on two-class classification, therefore, is limited [7], [8]. On the other hand, the quality of positive data, which usually stems from miRBase, has also been questioned [9], [10]. For example, Bartel and colleagues rejected one third of all mammalian miRNAs in miRBase and suggested 20 % new ones [9]. Wang and Liu developed a computational pipeline to filter miRBase entries based on RNA-seq data [10]. They reported a number of inconsistencies with respect to the 3′ and 5′ ends of the mature miRNA and the occurrence of miRNA* in Drosophila melanogaster (61 % accurate, 9.5 % miRNA*, 25 % 3′ variants, and 4.5 % 5′ variants) and Caenorhabditis elegans (86.2 % accurate, 4.8 % miRNA*, 7.8 % 3′ variants, and 1.2 % 5′ variants). Chen and colleagues proposed to use structure and expression to scrutinize miRBase entries.
With respect to structure, they analyzed the location of the mature miRNAs within their pre-miRNAs. Overall, they rejected large percentages of the plant miRNAs in miRBase [11]. Tarver et al. [12] found, using strict criteria (based on Okamura et al. [13], Axtell et al. [13], Kozomara and Griffiths-Jones [14], and Tsutsumi et al. [15]), that none of the protist miRNAs similar to plant miRNAs were acceptable under their constraints. Donoghue and colleagues also investigated plant miRNAs [16]. They applied modified criteria by Ambros et al. [17] for the evaluation of about 7000 miRBase entries and found 30 % to be questionable. Peterson and colleagues reported that only 30 % of human entries in miRBase are well supported by strict criteria [18]. They also point out that the aim of miRBase is not to scrutinize miRNAs, but to register them, which led them to create MirGeneDB, which houses filtered (robust) entries from miRBase. Jones-Rhodes cautions that many entries in miRBase could be siRNAs instead of miRNAs [19], which helps explain why they appear in miRBase since their function is similar. These studies and our previous work [20] used a number of criteria to decide whether a miRBase entry is robust. The complementarity between the two mature sequences (animals: first 16 of 22 nucleotides; plants: ≤ 4 mismatches) is often used as a criterion, but the number of required matches varies. Evidence of expression for both mature sequences is generally required, with a lower expected abundance of the miRNA*. The reads that are mappable to the pre-miRNA should further show low heterogeneity and display precise alignment on the 5′ side (precise cleavage). On the 3′ side, some studies require a 2-nucleotide overhang between the two mature miRNAs. These rules already entail that both mature miRNAs are within a pre-miRNA and located on the stem. A few studies have additionally required the miRNAs not to match other non-coding RNAs and/or not to have multiple matches throughout the genome. The latter two criteria are questionable, since a miRNA may exist in multiple copies in a genome [21] and because miRNAs can come from any transcription unit [22]. Here we analyzed data from miRBase and MirGeneDB [18] and established how they can be scrutinized to achieve a high-confidence filtered positive dataset. To this end, we created a machine learning model using 1000-fold Monte Carlo cross validation with human data from miRBase as positive examples and pseudo hairpins as negative examples. We then applied the model to analyze all pre-miRNAs from MirGeneDB and miRBase. The interesting feature of our model is that with increasing quality of the data the positive prediction rate increases. Therefore, the model appears to be independent of possible false-positive data used in its establishment. We assessed different features to filter positive data from miRBase and used our model to assess how well the data was filtered. This leads to a list of features which are useful to separate the wheat from the chaff. Additionally, the trained model can be used directly to remove examples that are unlikely to be miRNAs from the positive data, given a threshold (we successfully used the lower quartile from the MirGeneDB distribution as a threshold). Using either method of filtering positive data will lead to more accurate pre-miRNA detection models. Finally, we provide the list of filtered pre-miRNAs to avoid the need to recalculate the data.
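To make the structural criteria listed above concrete, the duplex-complementarity rule can be sketched in a few lines of Python. This is illustrative only, not any of the cited pipelines; the exact match requirement varies between studies, and the example sequences are hypothetical.

```python
# Minimal sketch of a duplex-complementarity filter for a miRNA/miRNA* pair.
# Not the implementation of any cited study; example sequences are hypothetical.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def duplex_matches(mature: str, star: str) -> int:
    """Count Watson-Crick/wobble pairs, aligning the mature strand 5'->3'
    against the star strand 3'->5' (antiparallel duplex, overhangs ignored)."""
    return sum((m, s) in PAIRS for m, s in zip(mature, reversed(star)))

def passes_animal_rule(mature: str, star: str) -> bool:
    # One common animal criterion: at least 16 matches within the first 22 nt.
    return duplex_matches(mature[:22], star) >= 16

mature = "UGAGGUAGUAGGUUGUAUAGUU"  # hypothetical 22-mer
star   = "AACUAUACAAUCUACUACCUCA"  # hypothetical star strand
print(passes_animal_rule(mature, star))
```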
Datasets All 28,645 hairpins listed in miRBase release 21 were used for calculating the features needed for performing predictions using izMiR (http://www.nature.com/protocolexchange/protocols/4919). Except for atr-MIR8591 (http://www.mirbase.org/cgi-bin/mirna_entry.pl?acc=MI0027479), which could not be analyzed using our system, all other hairpins were processed. Regarding atr-MIR8591, it needs to be mentioned that this particular entry in miRBase has a hairpin length of 2354 nt, thereby being the entry with the largest number of nucleotides (average for miRBase: 83.59 nt). This is by no means a typical miRNA and, therefore, we do not believe that our approach is at fault. Since this is an extreme example (the only one of almost 30,000), and considering that the other two large hairpins with more than 1000 nucleotides (sly-MIR9475 (1451 nt) and atr-MIR8598 (1411 nt); 27 hairpins > 500 nt in miRBase) were analyzed with no problems, atr-MIR8591 can be safely ignored in our opinion. More information about the features and how to calculate them is available on our web site: http://jlab.iyte.edu.tr/software/mirna. The same procedure was applied to all 1434 hairpins from the four species available in MirGeneDB v1.1 (http://mirgenedb.org). The pseudo [23] dataset (8492 entries) was used to simulate negative data, although there is no quality guarantee for such data [8]. Pre-miRNA Detection The miRBase and MirGeneDB datasets with calculated features were further processed using izMiR, which was developed using the data analytics platform KNIME [24]. Our platform izMiR provides several models, and for this study we chose Average DT (the average of the decision tree prediction scores of an ensemble of 13 individual models), which was successful for most scenarios (http://www.nature.com/protocolexchange/protocols/4919). Most notably, the accuracy of the Average DT model, which was trained using human data from miRBase as positive and pseudo as negative data, mostly depends on the quality of the test data [8]. Therefore, it can be used to analyze different filtering strategies. Quality Assessment of pre-miRNAs In order to identify high-confidence pre-miRNAs, we employed a number of strategies based only on data available in miRBase and features that can be directly derived from that information.
1. miRBase entries were divided into two groups: one with RPM (reads per million) values less than or equal to 100 and the other with RPM values greater than 100 (see Section 3.1).
2. Simple k-means clustering (k = 3, WEKA 3.7 in KNIME) was used to create a model based on the human dataset in miRBase using about 900 features, which was then applied to cluster all miRBase entries. k was set to 3, since we suspected that there should be at least three groups in miRBase with respect to quality: the first group should represent true miRNAs with strong experimental support, the second group likely consists of entries that might be true miRNAs but have some questionable properties, and the last group will be entries that have a very small chance of being real miRNAs.
3. Simple k-means clustering (k = 3, WEKA 3.7 in KNIME) was used to create a model based on the MirGeneDB dataset, and the obtained model was applied to cluster all entries in MirGeneDB.
4. Identical hairpin sequences between miRBase and MirGeneDB were extracted, and these miRBase hairpins were compared with the rest of its entries. This essentially applies the same strategy as MirGeneDB [18].
5. A species-specific comparison was performed using mouse data in miRBase, analyzing high-confidence mmu entries versus the remaining mmu hairpins.
7. The performance of miRBase and MirGeneDB entries was analyzed in a species-specific manner.
8. The similarity between miRBase and MirGeneDB hairpin sequences was investigated using the normalized Levenshtein distance (normalized to the length of the longer sequence).
Filtering miRBase The Average DT izMiR model was employed to analyze all miRBase entries using a model-score threshold of 0.862 (the lower quartile of the MirGeneDB score distribution, Figure 1). Hairpins with a score above the threshold were accepted as confident pre-miRNAs. A machine learning model was then established with high-confidence hairpins from miRBase as positive data and low-confidence ones as negative data, based on selected structural and thermodynamic features (Table 1). The model was applied to all miRBase entries and all hairpins passing the prediction threshold (0.5) were accepted as high-confidence pre-miRNAs. A list with all entries from miRBase and the ratings of the two models was created and is available as Supplementary Table 1. Individual Analyses in Respect to miRBase and MirGeneDB Although there are many alternative databases claiming to provide better and higher-quality miRNA data, like MirGeneDB [8] and miRTarBase [5], most such repositories suffer from the limited number of organisms included in their datasets and an overall smaller number of pre-miRNAs. Therefore, miRBase remains the standard source for positive data, since it offers miRNA information for 223 species and contains almost 30,000 pre-miRNAs. However, for machine learning it is essential to have high-confidence positive data. Considering these issues, it is essential to scrutinize the data obtained from miRBase to arrive at a high-quality positive dataset. It is furthermore convenient to use an intrinsic parameter to save computational effort, such as aligning large amounts of reads to pre-miRNAs. For example, RPM (reads per million) is a value provided for some of the entries in miRBase. Applying this simple RPM measure, separating the datasets into lower (≤ 100) and higher (> 100) RPM support, leads to different model prediction score distributions (Figure 2). Especially the lower whisker and the lower quartile are affected by filtering at 100 RPM, also leading to a lower interquartile range, with 0.17 for low and 0.08 for high RPM support (Figure 2). For all other measures of the distribution, the one with higher RPM support has higher values. A highly similar distribution to the one with lower RPM support was observed when using all entries instead of the random sample (not shown). MirGeneDB was created to provide high-confidence hairpins filtered from miRBase. Cluster analysis is a popular approach to group datasets based on the similarity of their elements [25]. Here we performed k-means clustering (k = 3) for all miRBase entries and identified the MirGeneDB entries in the clusters as well. Applying clustering led to three clusters with clearly different quality measures. This approach thereby allowed the enrichment of confident positive data in one of the clusters (Figure 3). The average model prediction score for miRBase entries in cluster 2 is 0.957 and is thereby 0.198 and 0.135 points larger than for clusters 1 and 0, respectively.
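The clustering step just described can be sketched with scikit-learn; the original analysis used WEKA within KNIME, so this mirrors the workflow in spirit only, and the input file and column names are hypothetical placeholders.

```python
# Minimal sketch of the clustering analysis: k-means (k=3) on pre-miRNA
# feature vectors, then the model-score distribution per cluster.
# File and column names are hypothetical; the original used WEKA/KNIME.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("mirbase_features.csv")  # hypothetical feature table
feature_cols = [c for c in df.columns if c not in ("hairpin_id", "model_score")]

X = StandardScaler().fit_transform(df[feature_cols])
df["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Which cluster is enriched in high-scoring (likely true) pre-miRNAs?
print(df.groupby("cluster")["model_score"].describe())
```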
The distribution of model scores (Figure 4) shows very high scores in a narrow interquartile range (0.01) for cluster 2 (median: 0.989), followed by cluster 0 with a similar maximum but a much larger interquartile range (0.23) and a lower median (0.915) (Figure 4). As the scores indicate, the median of the prediction values for cluster 2 is higher than the upper quartiles of clusters 1 and 0. Performing a similar clustering analysis for MirGeneDB entries also leads to clusters with different model score distributions. Cluster 0 has a larger interquartile range (0.218) compared to clusters 1 and 2 (0.080 and 0.075, respectively). The model score distributions are very similar for clusters 1 and 2, while cluster 0 shows lower values for all measures. Collective Analyses Involving miRBase and MirGeneDB MirGeneDB only contains a fraction of the entries in miRBase (1434; 5 %), but they were extracted with the intent of providing a high-confidence positive dataset [18]. We performed string matching between the MirGeneDB entries and the miRBase entries and selected all matching ones, in order to also account for duplicates in species not represented in MirGeneDB. The model score distribution of the matches was then compared to the remainder of miRBase (Figure 5). Overall, 259 miRBase hairpins (234 unique sequences) have sequences identical to 278 MirGeneDB hairpins (312 exact matches in total). The quality of these 259 hairpins was compared to the remaining 28,385 hairpins listed in miRBase (Figure 5). The model score distribution is higher for the matching sequences when compared to the remainder of miRBase (Figure 5). Matches between MirGeneDB and miRBase have a lower upper quartile (0.988 vs. 0.990) and a lower median (0.977 vs. 0.979). However, they have a much higher lower whisker, a higher lower quartile, and most notably a narrower interquartile range (0.081 vs. 0.157). Following frequent reports of low-quality data in miRBase, the platform reacted and now provides a high-confidence miRNA dataset in its latest release [26]. Similarly to the above, we analyzed the high-confidence mouse dataset (miRBase HC) and compared it to the remaining low-confidence mouse data in miRBase (miRBase LC) and to the MirGeneDB entries specific to mouse (Figure 6). The high-confidence miRBase mouse dataset has an upper quartile similar to the MirGeneDB mouse dataset (0.991 vs. 0.986), but a higher median (0.988 vs. 0.935), a higher lower quartile (0.914 vs. 0.839), and a smaller interquartile range (0.077 vs. 0.147). The unfiltered mouse data in miRBase has much lower values for all measures of the distribution. The miRBase data filtered with our model, using a threshold of 0.862 corresponding to the lower quartile of the MirGeneDB model scores (Figure 1), naturally has a distribution with a minimum of 0.862. All other distribution measures are also better than for miRBase HC. MicroRNAs have been described for many species and miRBase hosts data for more than 200 of them. We applied the izMiR Average DT model to all pre-miRNAs in miRBase and recorded the model score distribution on a per-species basis, for all data in miRBase and for data filtered by RPM. In Figure 7 we report the median score for these two cases and the unfiltered variant for species which have suitable RPM support for their hairpins. While many species have very high median model scores for unfiltered data, the ones which have low medians are improved after filtering. For some species with high medians before filtering, the median is further improved after filtering.
Conversely, the data which is filtered out leads to lower median model scores. This is reversed for csi (Citrus sinensis). Figure 7 is sorted by the median score after filtering, but no natural ordering of species follows from that. For example, aly is a plant species (Arabidopsis lyrata) and hsa stands for Homo sapiens, and both are almost adjacent to each other with very similar median model scores (0.985 and 0.980, respectively). MirGeneDB has far fewer species recorded in its database, and we performed the same analysis as above. Model score distributions are quite similar for most species in MirGeneDB (Figure 1). For reference, the model score distribution of the same species in miRBase is also provided in Figure 1. The number of hairpins supporting the distributions from miRBase is always higher compared to MirGeneDB, which generally contains about one third of the entries in miRBase. For the individual species and the overall data, the miRBase distribution has lower values for the lower whisker and lower quartile as well as a larger interquartile range, except for human (hsa) and Danio rerio (dre), where the model score distributions are very similar between MirGeneDB and miRBase. For example, the dre interquartile range is 0.095 and 0.079 for MirGeneDB and miRBase, respectively. The largest interquartile range was found for chicken (gga) for miRBase data (0.384), while the interquartile range for MirGeneDB was 0.119. In order to extract high-confidence entries from miRBase, all entries resembling MirGeneDB entries with a distance score of less than 0.2 (normalized Levenshtein distance) were selected. The model score distribution shows that high-confidence entries were extracted (Figure 8). Hundreds of features have been proposed to describe a pre-miRNA [27]. In an attempt to employ these features to identify high-confidence miRNA entries in miRBase, putatively high-confidence entries in miRBase were selected as positive data (2139) and possibly low-quality ones (26,505) were selected as negative data (see Figure 8). Information gain was calculated to assign an importance to the features describing a pre-miRNA with respect to differentiating between high and low confidence (Table 1). The features with higher information gain are better able to separate between positive and negative data and hence between high-confidence and low-confidence entries in miRBase. Among the features that are able to separate between high- and low-confidence entries in miRBase are sequence-based ones (e.g.: %G++#%U/hpl). Other features have a structural component, like mwmF/hpl, or a thermodynamic one, such as Tm/hpl. Model Prediction In order to determine whether the pre-miRNA detection model employed in this study can be used for assigning confidence to miRBase hairpins, different thresholds were applied to analyze miRBase data (not shown). A suitable threshold is provided by the lower quartile of the MirGeneDB score distribution (Figure 1; 0.862). Applying the model with that threshold to all miRBase data leads to the overall rejection of 8400 hairpins (∼28 %). Conversely, 43 (0.5 %) hairpins from the pseudo dataset pass the threshold. Feature Model In order to identify high-confidence entries in miRBase, we used all non-sequence-based features from Table 1 (12 features, in bold). With these features we established a machine learning model using the high-quality sequences as positive data and the low-quality ones as negative data (see Figure 8).
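The feature-ranking step described above can be sketched as follows, using scikit-learn's mutual information estimator as a stand-in for information gain (the two are closely related but not identical); the input file and column names are hypothetical placeholders.

```python
# Minimal sketch of ranking pre-miRNA features by how well they separate
# putatively high-confidence from low-confidence hairpins. mutual_info_classif
# stands in for information gain; file/column names are hypothetical.
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

df = pd.read_csv("mirbase_features_labeled.csv")  # hypothetical labeled table
y = df["high_confidence"]                         # 1 = putatively high confidence
X = df.drop(columns=["hairpin_id", "high_confidence"])

scores = mutual_info_classif(X, y, random_state=0)
ranking = pd.Series(scores, index=X.columns).sort_values(ascending=False)
print(ranking.head(12))  # top discriminating features, analogous to Table 1
```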
For establishing the model we used a randomly sampled 70-30 training/testing scheme with 1000-fold MCCV and equal amounts of positive (0.91 threshold; 1601 hairpins; Figure 8) and negative data (below 0.91; 804 hairpins; Figure 8). Applying the model with the default threshold (0.5) to all miRBase data leads to the overall rejection of 20,586 hairpins (72 %). Discussion Many microRNAs have been detected and many more are expected to be found [28]. However, finding miRNAs even using NGS data is complicated and most current miRNAs have no evidence on the protein level. Additionally, it is futile to aim to determine all miRNA-mRNA interactions experimentally. Therefore, computational models are necessary, and these models depend on training data [8]. While negative training data is of unknown quality, positive training data should be of high confidence. Unfortunately, much of the data in miRBase, the de facto source for all positive data used in machine learning to determine miRNAs, is riddled with false-positive entries (perhaps related sequences like siRNAs, snoRNAs, etc.). Therefore, we analyzed the data in miRBase and investigated different filtering strategies to distil a high-confidence dataset. By performing filtering based on reads per million (Filtering Strategy 1), an increase in the prediction performance is possible (Figure 2). This is further confirmed through analysis of the median model score on a per-species basis in miRBase (Figure 7). However, it also becomes clear that sufficient evidence on the transcript level is only available for few species in miRBase and that RPM abundance, while effective, cannot completely differentiate between high-confidence and low-confidence samples (Figure 7). This observation is in line with other studies which have used the location of the mature sequence, additional read alignment and other parameters to further investigate hairpin confidence. It is our aim to establish confidence in miRBase entries without the use of additional transcriptomic data or reliance on different levels of the miRNA genesis pathway, like the location of the mature miRNA. Hundreds of features have been proposed for pre-miRNA detection and it is likely that some of those features, or a combination of them, are able to discriminate between high- and low-confidence hairpins in miRBase. A first attempt was clustering based on the feature vectors of all pre-miRNAs in miRBase. Three clusters were generated using k-means clustering and it was possible to enrich a cluster in confident miRBase entries (Figure 3). In the future, this could be improved iteratively to arrive at datasets of different quality. Cluster 1 has the lowest distribution and is therefore likely enriched with pre-miRNAs from miRBase that could be false positives (Figure 4). The same analysis was done for MirGeneDB, where cluster 0 likely contains non-miRNAs and clusters 1 and 2 probably are enriched in true miRNAs (Figure 9). Our previous analysis of mouse data from MirGeneDB confirms that there are still non-miRNAs in the MirGeneDB dataset [8]. The low-quality data from miRBase has a very unfavourable distribution of model scores when compared to the other datasets (Figure 6). Conversely, and expanding on our previous results, here we show that mouse data from miRBase can be filtered effectively while still retaining more hairpins than other approaches (Figure 6).
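For concreteness, the Monte Carlo cross-validation scheme used for the feature model (repeated random 70-30 splits, described above) can be sketched as follows. The authors' models were built in KNIME, so this is illustrative only; the load_features function is a hypothetical placeholder for loading a balanced positive/negative feature matrix.

```python
# Minimal sketch of Monte Carlo cross-validation (MCCV): repeated random
# 70-30 train/test splits with a decision tree. Illustrative only; the
# original models were built in KNIME. load_features() is hypothetical.
import numpy as np
from sklearn.model_selection import ShuffleSplit
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_features()  # hypothetical: balanced feature matrix and labels

accs = []
splitter = ShuffleSplit(n_splits=1000, test_size=0.3, random_state=0)
for train_idx, test_idx in splitter.split(X):
    clf = DecisionTreeClassifier().fit(X[train_idx], y[train_idx])
    accs.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"MCCV accuracy: {np.mean(accs):.3f} +/- {np.std(accs):.3f}")
```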
Discussion
Many microRNAs have been detected and many more are expected to be found [28]. However, finding miRNAs, even using NGS data, is complicated, and most current miRNAs have no evidence on the protein level. Additionally, it is futile to aim to determine all miRNA-mRNA interactions experimentally. Therefore, computational models are necessary, and these models depend on training data [8]. While negative training data is of unknown quality, positive training data should be of high confidence. Unfortunately, much of the data in miRBase, the de facto source of positive data used in machine learning for miRNA detection, is riddled with false-positive entries (perhaps related sequences like siRNAs, snoRNAs, etc.). Therefore, we analyzed the data in miRBase and investigated different filtering strategies to distil a high-confidence dataset. By performing filtering based on reads per million (Filtering Strategy 1), an increase in prediction performance is possible (Figure 2). This is further confirmed through analysis of the median model score on a per-species basis in miRBase (Figure 7). However, it also becomes clear that sufficient evidence on the transcript level is only available for few species in miRBase and that RPM abundance, while effective, cannot completely differentiate between high-confidence and low-confidence samples (Figure 7). This observation is in line with other studies, which have used the location of the mature sequence, additional read alignment, and other parameters to further investigate hairpin confidence. It is our aim to establish confidence in miRBase entries without the use of additional transcriptomic data or reliance on different levels of the miRNA genesis pathway, such as the location of the mature miRNA. Hundreds of features have been proposed for pre-miRNA detection, and it is likely that some of those features, or a combination of them, are able to discriminate between high- and low-confidence hairpins in miRBase. A first attempt was clustering based on the feature vectors of all pre-miRNAs in miRBase. Three clusters were generated using k-means clustering, and it was possible to enrich a cluster in confident miRBase entries (Figure 3). In the future, this could be improved iteratively to arrive at different quality datasets. Cluster 1 has the lowest distribution and is therefore likely enriched with pre-miRNAs from miRBase that could be false positives (Figure 4). The same analysis was done for MirGeneDB, where cluster 0 likely contains non-miRNAs and clusters 1 and 2 are probably enriched in true miRNAs (Figure 9). Our previous analysis of mouse data from MirGeneDB confirms that there are still non-miRNAs in the MirGeneDB dataset [8]. The low-quality data from miRBase has a very unfavourable distribution of model scores when compared to the other datasets (Figure 6). Conversely, and expanding on our previous results, here we show that mouse data from miRBase can be filtered effectively while still retaining more hairpins than other approaches (Figure 6). While there are relatively few entries in MirGeneDB, they are of high confidence, as can be seen from Figure 5, where the score distribution for the MirGeneDB subset and the remainder of miRBase was analyzed. The model score distribution is better for the subset than for the remainder of miRBase (Figure 5). This means that MirGeneDB succeeded in extracting high-confidence miRNAs from miRBase. However, the distribution for miRBase is also quite good, which means that a large portion of the entries in miRBase are also of high confidence. Additionally, while having a similar number of hairpins, the model score distribution is better for the miRBase high-quality dataset than for the MirGeneDB dataset (Figure 6). Since there may be species-specific characteristics of miRNAs, it may be beneficial to use positive data from the organism of interest for machine learning. Therefore, we applied our izMiR model to all species in miRBase which have at least one hairpin with more than 0 and less than 100 read support and at least one hairpin with more than 100 read support. Only 35 (∼16 %) species fulfilled these criteria (Figure 7). Of these species, most have high-quality data, while the ones with lower-quality data were pinpointed by a lower model score median for hairpins with less than 100 read support; conversely, the subset with more than 100 read support generally showed increased model score medians (Figure 7). This supports our previous work, where we showed that the izMiR model (trained on human) is applicable to all species and speculated that the decrease in positive prediction rate for some species was due to false-positive examples (Saçar Demirci et al. [29] [accepted for publication]; http://www.nature.com/protocolexchange/protocols/4919). Since MirGeneDB entries were of high confidence and since the izMiR model is widely applicable, we used the izMiR model with the lower quartile of the MirGeneDB entries as a threshold to analyze all entries in miRBase. Twenty-eight percent of the entries in miRBase were rejected in that manner, which is in line with previous reports of 30 % of entries in miRBase being questionable (Taylor et al. [30], Chiang et al. [9]). Since the izMiR model was established using miRBase data, we wondered whether a different approach would lead to similar results. Therefore, we used high-confidence miRBase entries as positive data and low-confidence ones as negative data, established a machine learning model, and extracted the features that separate the datasets. Application of the model to all miRBase data led to the rejection of about 70 % of entries in miRBase. While this is similar to what Peterson and colleagues found for human (Fromm et al. [18]), it seems very restrictive, and others have not found such a large percentage of questionable hairpins [11], [12], [13], [14], [15], [16], [17]. Both models we created were applied to all data in miRBase, and all entries were scored and rated, providing a comprehensive positive dataset. Out of the 28,644 entries in miRBase, 72 % pass the Average DT, 28 % the feature model, and 24 % both models (Supplementary Table 1). We suggest using the Average DT decision as a filter mechanism, but if more stringency is needed, the miRBase entries passing both models could be useful.
Conclusion
Computational detection of pre-miRNAs directly from the genome and in RNA-seq data is important, since experimental methods are convoluted. This is usually achieved by machine learning, which depends on training data. Unfortunately, true negative data is unavailable. Therefore, analysis of the positive data is needed to increase the overall confidence in established machine learning models. Here we analyzed miRBase and MirGeneDB data and found that miRBase contains about 28 % low-confidence entries, while MirGeneDB also seems to contain a number of questionable entries (Figure 9). The Average DT model of our izMiR platform allows the successful filtering of miRBase entries while retaining more entries for mouse than MirGeneDB or the high-confidence data provided by miRBase. We applied our model, and an alternative one we established in this study, to all entries in miRBase and distilled a high-confidence dataset in this manner. For all entries we indicate the decision of Average DT and of our feature model, which can furthermore be combined into an ensemble decision for highest confidence. This high-confidence dataset will enable the establishment of more successful machine learning models and increase the confidence in findings in the area of hairpin detection, which is also important for the analysis of dysregulation in diseases like cancer.
The effects of grounding (earthing) on inflammation, the immune response, wound healing, and prevention and treatment of chronic inflammatory and autoimmune diseases

Multi-disciplinary research has revealed that electrically conductive contact of the human body with the surface of the Earth (grounding or earthing) produces intriguing effects on physiology and health. Such effects relate to inflammation, immune responses, wound healing, and prevention and treatment of chronic inflammatory and autoimmune diseases. The purpose of this report is two-fold: to 1) inform researchers about what appears to be a new perspective to the study of inflammation, and 2) alert researchers that the length of time and degree (resistance to ground) of grounding of experimental animals is an important but usually overlooked factor that can influence outcomes of studies of inflammation, wound healing, and tumorigenesis. Specifically, grounding an organism produces measurable differences in the concentrations of white blood cells, cytokines, and other molecules involved in the inflammatory response. We present several hypotheses to explain observed effects, based on current research results and our understanding of the electronic aspects of cell and tissue physiology, cell biology, biophysics, and biochemistry. An experimental injury to muscles, known as delayed onset muscle soreness, has been used to monitor the immune response under grounded versus ungrounded conditions. Grounding reduces pain and alters the numbers of circulating neutrophils and lymphocytes, and also affects various circulating chemical factors related to inflammation.

Introduction
Grounding or earthing refers to direct skin contact with the surface of the Earth, such as with bare feet or hands, or with various grounding systems. Subjective reports that walking barefoot on the Earth enhances health and provides feelings of well-being can be found in the literature and practices of diverse cultures from around the world. 1 For a variety of reasons, many individuals are reluctant to walk outside barefoot, unless they are on holiday at the beach. Experience and measurements show that sustained contact with the Earth yields sustained benefits. Various grounding systems are available that enable frequent contact with the Earth, such as while sleeping, sitting at a computer, or walking outdoors. These are simple conductive systems in the form of sheets, mats, wrist or ankle bands, adhesive patches that can be used inside the home or office, and footwear. These applications are connected to the Earth via a cord inserted into a grounded wall outlet or attached to a ground rod placed in the soil outside below a window. For the footwear applications, a conductive plug is positioned in the sole of the shoe.

Figure caption (wound-healing case): (B) Taken after 1 week of grounding or earthing treatments, shows a marked level of healing and improvement in circulation, as indicated by the skin color. (C) Taken after 2 weeks of earthing treatment, shows the wound healed over and the skin color looking dramatically healthier. Treatment consisted of a daily 30-minute grounding session with an electrode patch while the patient was seated comfortably. The cause of the wound adjacent to the left ankle was a poorly fitted boot. A few hours after wearing the boot, a blister formed, and then developed into a resistant open wound. The patient had undergone various treatments at a specialized wound center with no improvement. Vascular imaging of her lower extremities revealed poor circulation. When first seen, she had a mild limp and was in pain.
After an initial 30 minutes of exposure to grounding, the patient reported a noticeable decrease in pain. After 1 week of daily grounding, she said her pain level was about 80% less. At that time, she showed no evidence of a limp. At the end of 2 weeks, she said she was completely pain-free.

The study by Ghaly and Teplitz 5 involved 12 subjects who were in pain and had problems sleeping. They slept grounded for 8 weeks using the system shown in Figure 4. During this period, their diurnal cortisol profiles normalized, and most of the subjects reported that their sleep improved and their pain and stress levels declined. The results of the experiment led to these conclusions: 1) grounding the body during sleep yields quantifiable changes in diurnal or circadian cortisol secretion levels that, in turn, 2) produce changes in sleep, pain, and stress (anxiety, depression, and irritability), as measured by subjective reporting. The cortisol effects described by Ghaly and Teplitz 5 are particularly significant in the light of recent research showing that prolonged chronic stress results in glucocorticoid receptor resistance. 6 Such resistance results in failure to downregulate inflammatory responses, which can thereby increase the risks of a variety of chronic diseases. This effect complements the findings described in the "Effects on pain and the immune response" section.

Effects on pain and the immune response
A pilot study on the effects of grounding on pain and the immune response to injury employed delayed-onset muscle soreness (DOMS). 7 DOMS is the muscular pain and stiffness that takes place hours to days after strenuous and unfamiliar exercise. DOMS is widely used as a research model by exercise and sports physiologists. The soreness of DOMS is caused by temporary muscle damage produced by eccentric exercise. The phase of contraction that occurs when a muscle shortens, as in lifting a dumbbell, is referred to as concentric, whereas the phase of contraction as a muscle lengthens, as in lowering a dumbbell, is referred to as eccentric. Eight healthy subjects performed an unfamiliar, eccentric exercise that led to pain in their gastrocnemius muscles. This was done by having them perform two sets of 20 toe raises with a barbell on their shoulders and the balls of their feet on a 2-inch × 4-inch wooden board. 7 All subjects ate standardized meals at the same time of day, and adhered to the same sleep cycle for 3 days. At 5:40 pm on each day, four of the subjects had conductive grounding patches adhered to their gastrocnemius muscles and the bottoms of their feet. They rested and slept on grounding systems such as that shown in Figure 4. They remained on the grounded sheets except for visits to the bathroom and meals. As controls, four subjects followed the same protocol except that their patches and sheets were not grounded. The following measurements were taken before the exercise and 1, 2, and 3 days thereafter: pain levels, magnetic resonance imaging, spectroscopy, cortisol in serum and saliva, blood and enzyme chemistry, and blood cell counts. 7 Pain was monitored with two techniques. The subjective method involved morning and afternoon use of a Visual Analog Scale. In the afternoon, a blood pressure cuff was positioned on the right gastrocnemius and inflated to the point of acute discomfort. The pain was documented in terms of the highest pressures that could be tolerated.
The grounded subjects experienced less pain, as revealed by their Visual Analog Scale scores (Figure 5) and by their ability to tolerate a higher pressure from the blood pressure cuff (Figure 6). 7 The DOMS grounding study report 7 contains a summary of the literature on the changes in blood chemistry and content of formed elements (erythrocytes, leukocytes, and platelets) expected after an injury. The immune system detects pathogens and tissue damage and responds by initiating the inflammation cascade, sending neutrophils and lymphocytes into the region. [8][9][10][11][12] As expected, the white cell counts increased in the ungrounded or control subjects. White cell counts in the grounded subjects steadily decreased following the injury (Figure 7). 7 Previous research has shown increases in neutrophils following injury. [13][14][15][16] This happened in both grounded and ungrounded subjects (Figure 8), although neutrophil counts were always lower in the grounded subjects. 7 As the number of neutrophils increases, lymphocytes are expected to decrease. [17][18][19] In the DOMS study, the lymphocyte count in the grounded subjects was always below that of the ungrounded subjects (Figure 9). 7 Normally, neutrophils rapidly invade an injured region 8,[20][21][22] in order to break down damaged cells and send signals through the cytokine network to regulate the repair process. Neutrophils' production of reactive oxygen species (ROS) and reactive nitrogen species (RNS) is termed the "oxidative burst". 21 While ROS clear pathogens and cellular debris so that the tissue can regenerate, ROS can also damage healthy cells adjacent to the repair field, causing so-called collateral damage. The fact that the grounded subjects had fewer circulating neutrophils and lymphocytes could indicate that the original damage resolved more quickly, collateral damage was reduced, and the recovery process accelerated. This would explain the reduction in pain.

Figure 3 Reduction in inflammation with grounding or earthing documented with medical infrared imaging. Notes: Thermal imaging cameras record tiny changes in skin temperature to create a color-coded map of hot areas indicative of inflammation. Panel A shows reduction in inflammation from sleeping grounded. Medical infrared imaging shows warm and painful areas (arrows in upper part of panel A). Sleeping grounded for 4 nights resolved the pain, and the hot areas cooled. Note the significant reduction in inflammation and a return toward normal thermal symmetry. Panel B shows infrared images of a 33-year-old woman who had a gymnastics injury at age 15. The patient had a long history of chronic right knee pain, swelling, and instability, and was unable to stand for long periods. Simple actions, such as driving, increased the symptoms. She had to sleep with a pillow between her knees to decrease the pain. On-and-off medical treatment and physical therapy over the years provided minimal relief. She presented on November 17, 2004 with considerable right medial knee tenderness and a mild limp. Top images in Panel B were taken in walking position to show the inside of both knees. The arrow points to the exact location of the patient's pain and shows significant inflammation. Lower images in Panel B were taken 30 minutes after being grounded with an electrode patch. The patient reported a mild reduction in pain. Note the significant reduction of inflammation in the knee area. After 6 days of grounding, she reported a 50% reduction in pain and said that she could now stand for longer periods without pain, and no longer needed to sleep with a pillow between her legs.
After 4 weeks of treatment, she felt good enough to play soccer, and for the first time in 15 years felt no instability and little pain. By 12 weeks, she said her pain had diminished by nearly 90% and she had no swelling. For the first time in many years, she was able to waterski. The patient contacted the office after 6 months of treatment to report that she had finished a half-marathon, something she never dreamt she would ever be able to do prior to treatment.

Figure 4 Grounded sleep system. Notes: The grounded sleep system consists of a cotton sheet with conductive carbon or silver threads woven into it. The threads connect to a wire that leads out the bedroom window or through the wall to a metal rod inserted into the Earth near a healthy plant. Alternatively, it can be connected to the ground terminal of an electrical outlet. Sleeping on this system connects the body to the Earth. A frequent report from people using this system is that sleeping grounded improves the quality of sleep and reduces aches and pains from a variety of causes.

Our working hypothesis features this scenario: mobile electrons from the Earth enter the body and act as natural antioxidants; 3 they are semi-conducted through the connective tissue matrix, including through the inflammatory barricade if one is present; 23 they neutralize ROS and other oxidants in the repair field; and they protect healthy tissue from damage. The fact that there are fewer circulating neutrophils and lymphocytes in the grounded subjects may be advantageous because of the harmful role these cells are thought to play in prolonging inflammation. 24 We also raise the possibility that the inflammatory barricade is actually formed in ungrounded subjects by collateral damage to healthy tissue, as was suggested by Selye in the first and subsequent editions of his book The Stress of Life (Figure 10). 25 While there may be other explanations, we suggest that rapid resolution of inflammation takes place because the Earth's surface is an abundant source of excited and mobile electrons, as described in our other work. 1 We further propose that skin contact with the surface of the Earth allows Earth's electrons to spread over the skin surface and into the body. One route to the body interior could be via acupuncture points and meridians. The meridians are known to be low-resistance pathways for the flow of electrical currents. [26][27][28] Another pathway is via the mucous membranes of the respiratory and digestive tracts, which are continuous with the skin surface. Sokal and Sokal 29 found that when the body is grounded, electrical potentials measured on the body, on the mucosal membrane of the tongue, and in the venous blood rapidly drop to approximately −200 mV. When the body is disconnected from the Earth, the potential is quickly restored. These effects reveal changes in the internal electrical environment within the body. 29 Selye 30 studied the histology of the wall of the inflammatory pouch or barricade (Figure 10). It is composed of fibrin and connective tissue. Our hypothesis is that electrons can be semi-conducted across the barrier, and can then neutralize reactive oxygen species (free radicals). 30 A semiconducting collagen pathway or corridor may explain how electrons from the Earth quickly attenuate chronic inflammation not resolved by dietary antioxidants or by standard medical care, including physical therapy (Figure 3). The barricade probably restricts diffusion of circulating antioxidants into the repair field.
Taken together, these observations indicate that grounding or earthing the human body significantly alters the inflammatory response to an injury.

Anatomical and biophysical aspects
The concept that the inflammatory barricade forms from collateral damage to healthy tissue surrounding an injury site is supported by Selye's classic studies published along with his description of the granuloma or Selye pouch (Figure 10). 25,30 Moreover, research in cell biology and biophysics reveals that the human body is equipped with a system-wide collagenous, liquid-crystalline semiconductor network known as the living matrix, 31 or in other terms, a ground regulation system 32,33 or tissue tensegrity matrix system (Figure 11). 34 This body-wide network can deliver mobile electrons to any part of the body and thereby routinely protect all cells, tissues, and organs from oxidative stress, including in the event of injury. 23,31 The living matrix includes the extracellular and connective tissue matrices as well as the cytoskeletons of all cells. 31 Integrins at cell surfaces are thought to allow for semi-conduction of electrons to the cell interior, and links across the nuclear envelope enable the nuclear matrix and genetic material to be part of the circuitry. 23 Our hypothesis is that this body-wide electronic circuit represents a primary antioxidant defense system. This hypothesis is the central point of this report. The extracellular part of the matrix system is composed mainly of collagen and ground substances (Figures 11 and 12). The cytoskeleton is composed of microtubules, microfilaments, and other fibrous proteins. The nuclear matrix contains another protein fabric composed of histones and related materials. It is not widely appreciated that collagen and other structural proteins are semiconductors. This concept was introduced by Albert Szent-Györgyi in the Korányi memorial lecture in Budapest, Hungary in 1941. His talk was published in both Science (Towards a New Biochemistry?) 35 and Nature (The Study of Energy Levels in Biochemistry). 36 The idea that proteins might be semiconductors was immediately and firmly rejected by biochemists. Many modern scientists continue to reject semi-conduction in proteins, because living systems have only trace amounts of silicon, germanium, and gallium compounds, which are the most widely used materials in electronic semiconductor devices. However, there are many ways of making organic semiconductors without using metals. One of the sources of confusion was the widely held belief that water was a mere filler material. We now know that water plays crucial roles in enzymatic activities and semi-conduction. Hydrated proteins actually are semiconductors, and have become important components in the global microelectronics industry. Organic microcircuits are preferred for some applications, because they can be made very small, self-assemble, are robust, and have low energy consumption. 37,38 One of the leaders in the field of molecular electronics, NS Hush, has recognized Albert Szent-Györgyi and
Robert S Mulliken for providing two concepts fundamental to the industrial applications: theories of biological semiconduction and molecular orbital theory, respectively. 39 In recent studies, given awards by the Materials Research Society in both Europe and the USA, scientists from Israel made flexible biodegradable semiconductor systems using proteins from human blood, milk, and mucus. 40 Silicon, the most widely used semiconducting material, is expensive in the pure form needed for semiconductors, and is inflexible and environmentally problematic. Organic semiconductors are predicted to lead to a new range of flexible and biodegradable computer screens, cell phones, tablets, biosensors, and microprocessor chips. We have come a long way since the early days when semi-conduction in proteins was so thoroughly rejected. 41,42,43 Ground substance polyelectrolyte molecules associated with the collagenous connective tissue matrix are charge reservoirs (Figure 12). The matrix is therefore a vast whole-body redox system. The glycosaminoglycans have a high density of negative charges due to the sulfate and carboxylate groups on the uronic acid residues. The matrix is therefore a body-wide system capable of absorbing and donating electrons wherever they are needed to support immune functioning. 44 The interiors of cells, including the nuclear matrix and DNA, are all parts of this biophysical electrical storage and delivery system.

Figure 12 caption: The ground substance is a highly charged polyelectrolyte gel, a vast reservoir of electrons. Note the collagen fibril embedded in ground substance units known as matrisomes (a term coined by Heine). 33 Detail of a matrisome to the right (b) reveals vast stores of electrons. Electrons from the ground substance can migrate through the collagen network to any point in the body. We suggest that they can maintain an antioxidant microenvironment around an injury repair field, slowing or preventing reactive oxygen species delivered by the oxidative burst from causing collateral damage to healthy tissue, and preventing or reducing the formation of the so-called "inflammatory barricade".

The time-course of the effects of grounding on injury repair can be estimated in various ways. First, we know from medical infrared imaging that inflammation begins to subside within 30 minutes of connecting with the Earth via a conductive patch placed on the skin. 2,3 Secondly, metabolic activity increases during this same period. Specifically, there is an increase in oxygen consumption, pulse rate, and respiratory rate, and a decrease in blood oxygenation, during 40 minutes of grounding. 45 We suspect that the "filling" of the charge reservoirs is a gradual process, possibly because of the enormous number of charged residues on the polyelectrolytes, and because they are located throughout the body. When the charge reservoirs are saturated, the body is in a state we refer to as "inflammatory preparedness". This means that the ground substance, which pervades every part of the body, is ready to quickly deliver antioxidant electrons to any site of injury via the semiconducting collagenous matrix (see Figure 16B). These considerations also imply anti-aging effects of earthing or grounding, since the dominant theory of aging emphasizes cumulative damage caused by ROS produced during normal metabolism or produced in response to pollutants, poisons, or injury. 46 We hypothesize an anti-aging effect of grounding that is based on a living matrix reaching every part of the body and capable of delivering antioxidant electrons to sites where tissue integrity might be compromised by reactive oxidants from any source. 47,48 Molecules generated during the immune response were also followed in the DOMS study. 7

Figure 13 caption: Comparisons of bilirubin levels, pretest versus post-test, for each group.
Parameters that differed consistently by 10% or more between grounded and ungrounded subjects, normalized to baseline, included creatine kinase, phosphocreatine/inorganic phosphate ratios, bilirubin, phosphorylcholine, and glycerolphosphorylcholine. Bilirubin is a natural antioxidant that helps control ROS. [49][50][51][52][53] While bilirubin levels decreased in both grounded and ungrounded groups, the margin between the subjects was large (Figure 13). The inflammatory markers changed at the same time that the pain indicators were changing. This was revealed both by the visual analog pain scale and by the pressure measurements on the right gastrocnemius (Figures 5 and 6). The authors of the DOMS study suggested that bilirubin may have been used as a source of electrons in the ungrounded subjects. 7 It is possible that the smaller decline in circulating bilirubin in the grounded subjects was due to the availability in the repair field of free electrons from the Earth. Other markers encourage the hypothesis that the grounded subjects more efficiently resolved tissue damage: the pain measures, inorganic phosphate-phosphocreatine ratios (Pi/PCr), and creatine kinase (CK). Muscle damage has been widely correlated with CK. [54][55][56] As Figure 14 shows, CK values in the ungrounded subjects were consistently above those in the grounded subjects. 7 Differences between the Pi/PCr of the two groups were monitored by magnetic resonance spectroscopy. These ratios are indicative of metabolic rate and cellular damage. [57][58][59][60] Inorganic phosphate levels are indicative of hydrolysis of PCr and adenosine triphosphate. The ungrounded subjects had higher levels of Pi, while the grounded subjects showed higher levels of PCr. These findings indicate that mitochondria in the grounded subjects were not producing as much metabolic energy, probably because there was less demand due to more rapid achievement of homeostasis. The differences between the groups are shown in Figure 15. The pilot study 7 on the effects of earthing in speeding recovery from the pain of DOMS provides a good basis for a larger study. The concepts presented here are summarized in Figure 16 as a comparison between "Mr Shoes" (an ungrounded individual) and "Mr Barefoot" (a grounded individual).

Discussion
Voluminous current research correlates inflammation with a wide range of chronic diseases. A search for "inflammation" in the National Library of Medicine database (PubMed) reveals over 400,000 studies, with more than 34,000 published in 2013 alone. The most common cause of death and disability in the United States is chronic disease. Seventy-five percent of the nation's health care spending, which surpassed US$2.3 trillion in 2008, is for treating chronic disease. Heart disease, cancer, stroke, chronic obstructive pulmonary disease, osteoporosis, and diabetes are the most common and costly chronic diseases. 61 Others include asthma, Alzheimer's disease, bowel disorders, cirrhosis of the liver, cystic fibrosis, multiple sclerosis, arthritis, lupus, meningitis, and psoriasis. Ten percent of all health care dollars are spent treating diabetes. Osteoporosis affects about 28 million aging Americans. 61,62 However, there are few theories on the mechanisms connecting chronic inflammation with chronic disease. The research on grounding or earthing summarized here provides a logical and testable theory based on a variety of evidence.
The textbook description of the immune response describes how large or small injuries cause neutrophils and other white blood cells to deliver highly reactive oxygen and nitrogen species (ROS and RNS) to break down pathogens and damaged cells and tissues. Classical textbook descriptions also refer to an "inflammatory barricade" that isolates injured tissues to hinder the movement of pathogens and debris from the damaged region into adjacent, healthy tissues. Selye described how the debris coagulates to form the inflammatory barricade (Figure 10). This barrier also hinders the movement of antioxidants and regenerative cells into the blocked-off area. Repair can be incomplete, and this incomplete repair can set up a vicious inflammatory cycle that can persist for a long period of time, leading to so-called silent or smoldering inflammation that in turn, over time, can promote the development of chronic disease. Remarkable as it may seem, our findings suggest that this classical picture of the inflammatory barricade may be a consequence of a lack of grounding, and of a resultant "electron deficiency". Wounds heal very differently when the body is grounded (Figures 1 and 2). Healing is much faster, and the cardinal signs of inflammation are reduced or eliminated. The profiles of various inflammatory markers over time are very different in grounded individuals. Those who research inflammation and wound healing need to be aware of the ways grounding can alter the time-course of inflammatory responses. They also need to be aware that the experimental animals they use for their studies may have very different immune systems and responses, depending on whether they were reared in grounded or ungrounded cages. It is standard research practice for investigators to carefully describe their methods and the strain of the animals they use so that others can repeat the studies if they wish. An assumption is that all Wistar rats, for example, will be genetically and physiologically similar. However, a comparison of neoplasms in Sprague-Dawley rats (originally outbred from the Wistar rat) from different sources revealed highly significant differences in the incidences of endocrine and mammary tumors. The frequency of adrenal medulla tumors also varied in rats from the same suppliers raised in different laboratories. The authors "stressed the need for extreme caution in evaluation of carcinogenicity studies conducted at different laboratories and/or on rats from different sources." 63 From our perspective, these variations are not at all surprising. Animals will differ widely in the degree to which their charge reservoirs are saturated with electrons. Are their cages made of metal, and if they are, is that metal grounded? How close are their cages to wires or conduits carrying 60/50 Hz electricity? From our research, those factors will have measurable impacts on immune responses. In fact, they represent a "hidden variable" that could have affected the outcomes of countless studies, and also could affect the ability of other investigators to reproduce a particular study. Dominant lifestyle factors such as insulating footwear, high-rise buildings, and elevated beds separate most humans from direct skin connection with the Earth's surface. An earth connection was an everyday reality in past cultures that used animal skins for footwear and to sleep on.
We suggest that the process of killing pathogens and clearing debris from injury sites with ROS and RNS evolved to take advantage of the body's constant access to the virtually limitless source of mobile electrons the Earth provides when we are in contact with it. Antioxidants are electron donors, and the best electron donor, we strongly believe, is right under our feet: the surface of the Earth, with its virtually unlimited storehouse of accessible electrons. Electrons from the Earth may in fact be the best antioxidants, with zero negative secondary effects, because our body evolved to use them over eons of physical contact with the ground. Our immune systems work beautifully as long as electrons are available to balance the ROS and RNS used when dealing with infection and tissue injury. Our modern lifestyle has taken the body and the immune system by surprise by suddenly depriving them of their primordial electron source. This planetary separation began accelerating in the early 1950s with the advent of shoes made with insulating soles instead of the traditional leather. Lifestyle challenges to our immune systems proceeded faster than evolution could accommodate. The disconnection from the Earth may be an important, insidious, and overlooked contribution to physiological dysfunction and to the alarming global rise in non-communicable, inflammatory-related chronic diseases. A lack of electrons can also de-saturate the electron transport chains in mitochondria, leading to chronic fatigue and slowing the cellular migrations and other essential activities of the cells of the immune system. 64 At this point, even a minor injury can lead to a long-term health issue. When mobile electrons are not available, the inflammatory process takes an abnormal course. Areas that are electron deficient are vulnerable to further injury: they become positively charged and will have difficulty warding off infections. The result is an immune system constantly activated and eventually exhausted. Cells of the immune system may fail to distinguish between the body's diverse chemical structures (called "self") and the molecules of parasites, bacteria, fungi, and cancer cells (called "non-self"). This loss of immunologic memory can lead to attacks by some immune cells on the body's own tissues and organs. An example is the destruction of the insulin-producing beta cells of the islets of Langerhans in the diabetic patient. Another example is the immune system attacking cartilage in joints, producing rheumatoid arthritis. Lupus erythematosus is an extreme example of an autoimmune condition caused by the body's immune system attacking host tissues and organs. Lupus can affect many different body systems, including skin, kidneys, blood cells, joints, heart, and lungs. With time, the immune system becomes weaker and the individual more vulnerable to inflammation or infections that may not heal, as often seen with the wounds of diabetic patients. Specifically, which part or parts of the body a weakened immune system will attack first depends on many factors, such as genetics, habits (sleep, food, drinks, exercise, etc), and toxins in the body and in the environment. 65,66 A repeated observation is that grounding, or earthing, reduces the pain in patients with lupus and other autoimmune disorders. 1
Conclusion
Accumulating experiences and research on earthing, or grounding, point to the emergence of a simple, natural, and accessible health strategy against chronic inflammation, warranting the serious attention of clinicians and researchers. The living matrix (or ground regulation or tissue tensegrity-matrix system), the very fabric of the body, appears to serve as one of our primary antioxidant defense systems. As this report explains, it is a system requiring occasional recharging by conductive contact with the Earth's surface, the "battery" for all planetary life, to be optimally effective.
Mapping 18F-FDG Kinetics Together with Patient-Specific Bootstrap Assessment of Uncertainties: An Illustration with Data from a PET/CT Scanner with a Long Axial Field of View

High-resolution dynamic whole-body PET scanning enhances the ability to map metabolic characteristics of tissue, particularly in the context of cancer. The current focus has been on dynamic PET studies with 18F-FDG using the well-established Huang-Sokoloff 2-compartment (2C) modeling framework (1)(2)(3). Although 2C modeling has had widespread application in PET imaging, far beyond the brain setting in which it was developed, the biochemical understanding of the transporters involved in the metabolism of 18F-FDG and their distribution across normal and cancerous tissues has evolved in the years since the Huang-Sokoloff construct was proposed (4)(5)(6)(7). The temporal and spatial resolutions of emerging scanners have transformed the ability to objectively assess the accuracy of the 2C framework to represent 18F-FDG time-course data across the diverse tissues encountered in the human body. In this context, the assessment of 18F-FDG kinetics based on more flexible nonparametric analysis approaches (8,9) may be necessary. The most recent implementation of the nonparametric voxel-level analysis scheme (9) is particularly efficient, largely because of an extensive reliance on quadratic programming techniques, and its nonparametric aspect provides an ability to apply an image-domain bootstrapping process for evaluation of statistical uncertainties in derived kinetic maps and associated biomarkers (10,11). Uncertainties in diagnostic information recovered from PET scans could augment decision-making for individual patients that is based on complex nonlinear radiomic metrics derived from a kinetic map.

The volume of data produced by a dynamic 18F-FDG PET study on a state-of-the-art scanner with a long axial field of view (FOV) is a practical computational challenge for voxel-level analysis of kinetics. The bootstrap uncertainty assessment requires that comprehensive voxel-level analyses be applied to multiple simulated datasets, each created to match the full character and extent of the original data. This significantly adds to the computational challenge involved. The work here uses a series of dynamic 18F-FDG data acquired on a long-axial-FOV scanner (2) to investigate the approach. Apart from the demonstration of the practical feasibility of kinetic mapping with uncertainty evaluation, the analysis allows regional comparisons between nonparametric and 2C modeling results in terms of both derived kinetics and accuracy of data representation.

MATERIALS AND METHODS
An extended materials and methods description is provided in the supplemental materials (supplemental materials are available at http://jnm.snmjournals.org) (12).
Patient Scans and Volumes of Interest (VOIs)
The data considered arise from a set of 24 patients with different types of cancer who participated in an institutionally approved 18F-FDG PET/CT study at Bern University Hospital (KEK 2019-02193). Details of the study were reported previously (2). In summary, PET scanning was conducted on a Biograph Vision Quadra scanner (Siemens) with a 106-cm axial FOV and a nominal in-plane resolution of 3.3 mm in full width at half maximum (13). Data were acquired in list mode starting 15 s before an intravenous bolus injection of 18F-FDG (with activity of ∼3 MBq/kg of patient weight) into the left or right arm, followed by flushing with 50 mL of saline solution. The plasma glucose level was measured for each patient. Emission data were acquired for 65 min and binned into 62 contiguous time frames with durations of 2 × 10 s, 30 × 2 s, 4 × 10 s, 8 × 30 s, 4 × 60 s, 5 × 120 s, and 9 × 300 s. Images were reconstructed with a voxel size of 1.65 × 1.65 × 1.65 mm³. Low-dose CT scans (voltage, 120 kV; tube current, 25 mA; CARE Dose4D and CARE kV [Siemens]) were acquired as part of the examinations. The CT images were reconstructed with a voxel size of 1.52 × 1.52 × 1.65 mm³. Automated segmentation algorithms based on CT and PET were used to define VOIs corresponding to several tissue structures, including gray and white matter in the brain, liver, lungs, kidneys, spleen, and bones (2). A further set of 49 VOIs corresponding to tumor tissue was identified by an experienced nuclear medicine physician. Finally, a VOI placed in the descending aorta was used to define the whole-blood arterial input function (AIF) used for kinetic analyses (2). Further scanning and study protocol details are available in the supplemental materials.

Parametric Imaging Techniques
Tissue Residue and Kinetic Parameters. When the Meier-Zierler (14) formalism is followed, the analysis assumes that the PET-measured time course for a tissue region is represented as a convolution between the local AIF, C_p, and the regional tissue residue function. Kinetic parameters are defined in terms of this residue (Fig. 1). Large-vessel vascular blood and distribution volumes (V_b and V_d, respectively) are evaluated as areas under the tissue residue. The apparent rate of retention or flux (K_i) of the tracer, measurable by PET over the scan duration, is the height of the residue at the end of the acquisition period. Also, the mean transit time of the tracer in the tissue and the extraction fraction are defined as ratios of amplitude and integral measurements. A variety of approaches might be used to approximate the residue; a nonparametric method is used here. Patlak analysis uses a constant residue (15). Compartmental model forms, for example, the 1-compartment Kety-Schmidt (16) model for water and the 2C Huang-Sokoloff (17) model for 18F-FDG in the brain, represent residues by positive linear combinations of exponentials. In the 6-parameter 2C model, there is an additive adjustment for an arterial signal. By adding a sharp residue element to the 2-exponential form, a Meier-Zierler residue is also available for this model. This allows residue-defined metabolic parameters for the extended compartmental model to be evaluated via the decomposition shown in Figure 1 (18). The supplemental materials provide a review of how Meier-Zierler residue parameters link with rate constants in the 2C model.

Nonparametric Residue Mapping (NPRM) of Kinetics. NPRM approximates the voxel-level residue by a positive linear sum of basis elements that have been selected by a cross-validation-guided analysis of a comprehensive collection of time courses produced by segmentation of all the available data in the study (10,18). Individual basis elements are of the form

m_k(t) = ∫ C_p(t − D_k − s) R_k(s) ds.

Here, R_k is the basis element residue and D_k is its associated delay factor. Note that cross-validation is used to select the number of basis elements (K). Given the basis set, PET-measured voxel-level time-course data over the available set of J time frames, {z(t_j), j = 1, 2, ..., J}, are expressed as Eq. 1:

z(t_j) = a_1 m_1(t_j − d) + a_2 m_2(t_j − d) + ⋯ + a_K m_K(t_j − d) + ε(t_j).    (Eq. 1)
Here, d and (a_1, a_2, ..., a_K) are the unknown voxel-level delay and basis-amplitude parameters, respectively, and ε(t) represents (random) model error. A weighted least-squares criterion, with weights proportional to the product of the frame duration and the decay-correction factor used to convert raw counts to decay-corrected tracer activity, is used for optimization of the unknown parameters. For any delay, the optimal set of a coefficients is found by quadratic programming. A crude grid search is used to optimize the delay (10).

Bootstrap Assessment of Uncertainty. Model residuals across N voxels and J time frames, {z_i(t_j) − ẑ_i(t_j), i = 1, ..., N; j = 1, 2, ..., J}, are used to construct an image-domain data generation process (DGP) for bootstrapping. The DGP generates data according to

z*(t_j) = ẑ(t_j) + e*(t_j),    (Eq. 2)

where ẑ(t_j) = â_1 m_1(t_j − d̂) + ⋯ + â_K m_K(t_j − d̂) and the simulated error process, e*, mimics the stochastic character of the analysis residuals. Analysis of bootstrapped datasets arising from the DGP leads to a set of bootstrapped kinetic parameter values at each voxel. The SD of these values estimates the voxel-level SE of the kinetic parameter. Similarly, the SEs for more complex quantities, such as the maximum-intensity projection (MIP) for a kinetic map, are created as the SD of the bootstrapped MIPs of the kinetic parameter (Fig. 2). Numeric studies (10,11) have shown that image-domain DGP bootstrapping matches the accuracy of the much more computationally intensive list-mode bootstrapping approach of Haynor and Woods (19). The number of bootstrap simulations impacts the accuracy of the SEs it produces (20); this is discussed in the supplemental materials.

Statistical Analysis
NPRM kinetic analysis with 25 bootstrapped simulations is evaluated for each of the studies in the series. Results are examined in 4 separate ways. Technical details with formulas are in the supplemental materials.

Representation of VOI Time-Course Data. Mean VOI time-course data are compared with the corresponding mean VOIs of the fitted voxel-level time courses, ẑ(t_j), in Equation 2. Mean VOI time-course data are also analyzed using the nonparametric model and the Huang-Sokoloff 2C model, including a fractional V_b and a delay of the AIF.

DGP Model. The simulated errors in the DGP take the form

e*_i(t_j) = ŝ_e ĉ_i ŵ_j ε*_i(t_j),    (Eq. 3)

where the random errors, ε*_i(t), are in units of SD and ŝ_e is an overall scale of the model error. In Equation 3, the factors ĉ_i and ŵ_j are scale-free quantities representing the relative uncertainty across voxels (i) and time frames (j). As the PET-measured activity scales with dose, the DGP error scale (ŝ_e) should also scale with dose; this is examined graphically. The overall axial pattern of variation is described by the scale factor ĉ_i. In a uniform cylindric phantom, this has a familiar U-shaped pattern related to scanner sensitivity (10). With a patient in the scanner, the distribution of activity and attenuation is far from uniform. Physiologic patient motions, such as breathing, may also impact axial variation. Skewness is a key feature of iteratively reconstructed PET data. A histogram of scaled residuals shows how the DGP captures this aspect. After adjustment for spatial scale factors, the 3-dimensional power spectrum of the normalized residual process provides insight into the effective resolution of the scanning. Coordinatewise autocorrelation functions associated with the spectrum give insight into the actual resolution of the scanner. Again, physiologic movements may well lead to the actual resolution deviating from what might be predicted on the basis of static phantom measurements.
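The following is a schematic sketch (our simplification, not the authors' implementation) of the voxel-level NPRM fit and bootstrap: weighted nonnegative least squares stands in for the quadratic program, a crude grid search handles the delay, and the structured error model of Equation 3 is replaced by naive residual resampling. The basis functions are assumed to be supplied as vectorized callables that vanish for negative times.

```python
# Schematic sketch (simplified assumptions, not the authors' code) of the
# voxel-level NPRM fit and an image-domain bootstrap of a kinetic summary.
import numpy as np
from scipy.optimize import nnls

def fit_voxel(z, t, basis_fns, delays, w):
    """z: measured TAC; basis_fns: callables m_k(t), zero for t < 0;
    delays: candidate delay grid; w: frame weights (duration x decay factor)."""
    sw = np.sqrt(w)
    best = None
    for d in delays:                                   # crude grid search on delay
        M = np.column_stack([m(t - d) for m in basis_fns])
        a, rnorm = nnls(M * sw[:, None], z * sw)       # weighted NNLS (a QP)
        if best is None or rnorm < best[0]:
            best = (rnorm, d, a, M @ a)
    return best                                        # (wrss, delay, amplitudes, fit)

def bootstrap_se(z, t, basis_fns, delays, w, kin_fn, n_boot=25, seed=0):
    """SE of a summary kin_fn(amplitudes, delay) via a naive image-domain DGP
    (iid residual resampling; the paper's DGP also rescales residuals by
    voxel- and frame-specific factors as in Eq. 3)."""
    rng = np.random.default_rng(seed)
    _, d, a, zhat = fit_voxel(z, t, basis_fns, delays, w)
    resid = z - zhat
    vals = []
    for _ in range(n_boot):
        zstar = zhat + rng.choice(resid, size=len(z), replace=True)
        _, db, ab, _ = fit_voxel(zstar, t, basis_fns, delays, w)
        vals.append(kin_fn(ab, db))
    return float(np.std(vals, ddof=1))
```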
SEs of VOI Kinetics. In theory, uncertainty in parameters recovered by kinetic model fitting should be proportional to the scale of the residual model error, but it may also be a function of the relevant sensitivity matrix for the model. We examine the relation between the bootstrap assessment of mean VOI kinetic SEs and suitable explanatory factors, including the weighted-residual-sums-of-squares fit of the VOI and the mean VOI kinetic values. For each kinetic parameter, linear regression analysis on a logarithmic SE scale is applied. Adjustments of this regression analysis based on the VOI type and the kinetics are explored. Regression predictions of SEs are graphically compared with the true (bootstrap-measured) values. Correlation values are also summarized.

RESULTS
Illustration
Sample kinetic MIP maps with associated SEs obtained using the NPRM technique and bootstrapping are shown in Figure 2. A video of all coronal MIP maps is provided as Supplemental Video 1. Note that the dataset is the same as that used in a previous report (2). The results are of high quality and are well aligned with the vascular and metabolic 18F-FDG patterns expected for key organ structures such as the brain, liver, kidneys, spleen, etc. (2). The uncertainties of V_b, V_d, distribution flow (K_d), and K_i are generally higher for regions with larger magnitudes of the kinetic variable. This is perhaps related to the fact that these parameters, which are linear functions of the fitted voxel-level residue, ultimately scale with the magnitude of the time-course data. Mean transit time and extraction fraction deviate somewhat from this pattern. This is likely related to the fact that both the mean transit time and the extraction fraction are defined in terms of ratios of the V_d, K_d, and K_i variables and, as a result, do not necessarily scale with the scale of the voxel time course. The large blood vessels are seen to impact the structure of the MIP uncertainty for several parameters. The algorithms developed allow kinetic mapping, including the bootstrapping process, to be achieved in a timely fashion. On a single 3.2-GHz processor, the compute time for the NPRM kinetic analysis, including the definition of the DGP, is 140 min; each bootstrap replicate took 80 min.

Statistical Analysis
Representation of VOI Time-Course Data. The full time course as well as the time course over the first minute of data acquisition are shown in Figure 3. Average VOI time-course data are fit directly using the nonparametric and 2C models; averages of voxel-level fits are also provided. This gives a reference to the results reported previously (2). Although the 2C fitting of some VOIs is reasonable, for example, gray and white matter, there are clearly some VOIs where 2C modeling is substantially inferior (e.g., kidney, liver, bone, and bladder). The data fit achieved by the VOI averaging of the voxel-level nonparametric fit is quite good overall, and especially over the first 1 min of acquisition. However, it is important to appreciate that almost half of the total number of frames occur in the first 80 s. For this example, over the first minute, differences between the VOI average of the voxelwise 2C fits and the fit of the 2C model to the mean of the VOI time-course data are quite pronounced. In contrast, differences between the corresponding nonparametric fits are much smaller.
Quantitative summaries of the nonparametric fitting of VOI time-course data, and comparisons with direct analysis of the mean VOI time-course data using nonparametric and 2C analysis, are presented in Table 1. Although values of the weighted-residual-sums-of-squares fit for VOIs are similar whether based on the VOI average of voxel-level nonparametric fits or on direct fitting of the VOI time-course data, there is a marked increase in the weighted residual sums of squares when the VOI time course is approximated using the best-fitting 2C model. VOI time-course fitting by the nonparametric model is consistently improved by averaging voxel-level nonparametric fits; the percent improvement is a modest 50%. VOI time-course fitting by the 2C model is substantially worse than the nonparametric fitting. The mean percent improvement here is almost 390%.

VOI Kinetics. VOI kinetics are reported in Table 2. Statistically significant deviations between the kinetics recovered by alternative methods are largely linked to early time-course parameters (Fig. 1), particularly V_b. Deviations between voxel-averaged parameters and values recovered from nonparametric and 2C analysis of the VOI time course are much smaller for nonparametric analysis than for 2C analysis. However, it is noteworthy that, for most VOIs, K_i is quite similar in magnitude across all 3 analyses. This might be because the flux is a late-time-course parameter (Fig. 1), and the alternative methods fit the late time course quite similarly (Fig. 3).

DGP Model. Figure 4 and Supplemental Figure 2 show the expected linear relation between the scale of the DGP and the study dose; the linear correlation of 0.68 is highly significant. The axially averaged spatial scale of the DGP increases toward the top and bottom of the patient in the FOV. As expected, the increased scale is not just a function of the nominal sensitivity but is clearly impacted by patient-specific factors, including the varying uptake, attenuation, and perhaps any impacts of small patient movements. The skewed nature of the random fluctuations in the DGP model, which vary on the basis of the data coefficient of variation, is fully consistent with patterns for iteratively reconstructed PET data (10,21). The full width at half maximum of the autocorrelation functions in each direction is on the order of 2-3 mm. The coordinatewise autocorrelation functions show greater spatial persistence in the x (perpendicular to the scanning bed) and z (axial) directions (Supplemental Fig. 3). This could align with involuntary patient movements during scanning.

SEs of VOI Kinetics. SEs of VOI kinetics (voxel-level nonparametric) are well approximated using a log-linear model that accounts for the VOI type, the VOI mean kinetics, and the residual weighted root mean square error of the voxel-level nonparametric fit of the VOI time course (Fig. 5; Supplemental Fig. 4). The overall correlation between the bootstrap-measured SEs and the SE values predicted by log-linear modeling is 0.96, and correlations are also quite high for individual kinetic parameters.
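As a minimal illustration of this log-linear approximation (the column names and the choice of ordinary least squares via statsmodels are our assumptions, not the study's code):

```python
# Minimal sketch of the log-linear SE approximation: regress log(SE) on the
# log mean VOI kinetic value, the log residual fit error, and the VOI type.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_se_model(df: pd.DataFrame):
    """df columns (assumed): se (bootstrap SE), kin (mean VOI kinetic value),
    wrmse (weighted residual RMSE of the NPRM fit), voi_type (categorical)."""
    model = smf.ols("np.log(se) ~ np.log(kin) + np.log(wrmse) + C(voi_type)",
                    data=df).fit()
    pred = np.exp(model.fittedvalues)            # back-transform to the SE scale
    corr = float(np.corrcoef(df["se"], pred)[0, 1])
    return model, corr
```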
DISCUSSION
This work demonstrates the practicality of using image-domain bootstrapping for the construction of patient-specific uncertainty assessments of kinetic variables for voxel, VOI, and more complex derived quantities, such as MIPs, from a whole-body dynamic 18F-FDG PET study. This development creates an opportunity to incorporate uncertainty about a PET-derived kinetic biomarker that might be used to guide a clinical decision for a patient. This could be particularly helpful in cases where the biomarker value is close to a boundary between alternative treatment options. Bootstrap reliability depends both on the number of bootstrap simulations (N_B) used and on the accuracy of the representation of the data used in the DGP (20). Computational resources dictate the choice of N_B. The results here are based on just an N_B of 25, but for the data in Figure 2, a 4-fold increase in N_B leads to little qualitative change in the derived voxel-level SEs (Supplemental Fig. 5). Table 1 clearly demonstrates the benefit of using a nonparametric methodology in the DGP. Relative to the well-established 2C 18F-FDG model, substantial and highly significant improvements in data representation are achieved using the nonparametric approach. These benefits are mostly associated with the ability of the nonparametric technique to capture the highly resolved early time-course pattern of data from the current generation of PET scanners. The generally more modest deviations between nonparametric and 2C fits beyond the early time period, say after 1 min, suggest that the deficiencies in the 2C model may primarily relate to the lack of sophistication in the representation of the vascular components of blood-tissue exchange (22). The high temporal resolution of the scans here, as well as the use of a bolus injection, contributes to the ability to scrutinize the 2C model in ways that have likely not been possible in the past. The VOIs here are large and heterogeneous, far from the assumption of homogeneous well-mixed compartments that underlies the 2C model. However, it is notable that our previous work (23) reported significant discrepancies between 2C and nonparametric representations of dynamic 18F-FDG brain data in healthy subjects using much smaller and highly homogeneous VOIs. For gray and white matter, the discrepancies primarily impact the accuracy of the initial phase of the 18F-FDG tissue residue (V_b especially) but have much less impact on several other variables, including flux and V_d. However, statistically significant differences between voxel nonparametric and VOI 2C parameters do not imply that parameters are unrelated. For example, Figure 6 and Supplemental Figure 6 show pairwise plots and summary correlations for the 18F-FDG metabolic rate (MR), the flux scaled by the plasma glucose as in Equation 4. The strong linear dependence in Figure 6 emphasizes the importance of differentiating statistical and practical significance. Calculated K_i based on nonparametric or 2C analysis would likely yield similarly effective diagnostic values. Indeed, it is well appreciated that even simpler assessments of 18F-FDG flux by Patlak analysis and SUV are also highly effective.
The nonparametric technique here uses a linear basis, but the structure and number of elements involved are adapted to the full 4-dimensional dynamic data and guided by cross-validation to prevent overfitting (10). The accuracy and stability of a kinetic mapping procedure are best evaluated numerically, as was reported previously (24): studies based on a 2-min constant-infusion injection of 18F-FDG and a temporal sampling protocol in which the shortest time frames were 20 s in duration provided mean-square-error performance characteristics of NPRM and 2C kinetic mapping of 18F-FDG PET data as a function of the study dose and as a function of whether the underlying ground truth is governed by a compartmental model or not. In that study, the accuracy of the flux was largely unaffected by whether a 2C or an NPRM mapping technique was used. Across other kinetic variables, when the ground truth is noncompartmental, the NPRM approach is much better. Remarkably, when the ground truth is a 2C model, the NPRM continues to outperform the 2C approach, especially for variables such as V_b and V_d. Further study of the mean-square-error performance would clearly be useful, particularly in settings where the ground truth, study protocol, and scanning methods are similar to those encountered with the current generation of whole-body 18F-FDG PET studies.

VOI values of 3 variables, the FDG metabolic rate (MR_FDG), the distribution volume (DV), and the vascular blood flow (BF), are compared with literature reports. Each variable is directly obtained by simple scaling of our summary kinetic values K_i, V_d, and V_b; here, m_glc is the plasma glucose concentration and t* is the value used to define the vascular component in the decomposition of the Meier-Zierler residue in Figure 1. In a cancer setting, 18F-FDG MR is by far the most clinically important of these variables. Note that there is interest in deriving potentially useful additional diagnostic information related to tissue vascularity from 18F-FDG (1,25,26); there is no intention of questioning PET 15O-H2O as the gold standard for V_b determination. Our V_b formula is an application of the central volume theorem (14) based on an assumed mean transit time in the vasculature of t*/2 (here, 7.5 s) for the collection of tracer atoms whose tissue transit time in the local voxel is less than 15 s. Table 3 compares the VOI averages of the 3 variables to those in the literature. For 18F-FDG MR and V_d, the values are seen to be in the range reported using 2C and Patlak analyses (27). V_b values are compared with those in reports based on PET 15O-H2O and dynamic susceptibility contrast MR techniques. The results for the NPRM approach are remarkably similar to those in the literature, particularly given that the study group here is older and unhealthy (28). Further examination of the V_b variable could be merited. The viability of conducting PET 15O-H2O on this scanner was previously demonstrated (29). Note that some of the deviation in Table 3 may be related to scaling differences between the use of whole-blood activity as an AIF (as in our analysis) and other analyses that used the arterial plasma activity time course as an AIF.
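The scaling relations referred to above did not survive in the text; assuming the standard definitions and the stated vascular mean transit time of t*/2, a plausible reconstruction is:

```latex
\mathrm{MR}_{\mathrm{FDG}} \;=\; m_{glc}\,K_i, \qquad
\mathrm{DV} \;=\; V_d, \qquad
\mathrm{BF} \;=\; \frac{V_b}{t^{*}/2}.
```

The BF expression is the central volume theorem (flow = vascular volume / mean transit time) with t*/2 = 7.5 s here; any lumped-constant factor in MR_FDG is omitted since the text does not mention one.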
Although our focus has been on parameters that have traditionally been used to quantify 18F-FDG PET dynamics, the nonparametric technique also provides a possibility to evaluate a summary of the arrival pattern of 18F-FDG at the voxel level. A sample amplitude-weighted average of voxel-level basis-element delay, as defined in Equation 1, is shown in Figure 7. There is early arrival of the signal to the lung and much more delayed arrival to the bladder and more peripheral regions (1). More detailed consideration of the 18F-FDG arrival pattern may be worthwhile.

CONCLUSION

NPRM kinetic analysis together with bootstrap assessment of uncertainty is practically feasible in the context of large-scale long-axial-FOV 18F-FDG PET data. This provides an ability to incorporate patient-specific uncertainty measures of kinetic biomarkers recovered from dynamic PET to support clinical decisions.

DISCLOSURE

This research is supported by Science Foundation Ireland grant PI-11/1027 and by the National Cancer Institute USA grant R33-CA225310. Kuangyu Shi and Axel Rominger are funded by Siemens Healthineers and Novartis. Hasan Sari is an employee of Siemens Healthineers. No other potential conflict of interest relevant to this article was reported.

KEY POINTS

QUESTION: Is it feasible to map kinetics together with uncertainty in long-axial-FOV dynamic 18F-FDG PET studies?

PERTINENT FINDINGS: NPRM analysis together with image-domain bootstrapping is a suitable methodology for mapping kinetics.

IMPLICATIONS FOR PATIENT CARE: The ability to derive uncertainties in complex kinetic biomarkers could enhance patient-specific decision-making for guiding treatment of cancer patients.

18F-FDG (with an activity of about 3 MBq/kg of patient weight) was injected into the left or right arm, followed by flushing with 50 mL of saline solution. The plasma glucose level was measured for each patient. Emission data were acquired for 65 min and binned into 62 contiguous time frames with durations of 2 × 10 s, 30 × 2 s, 4 × 10 s, 8 × 30 s, 4 × 60 s, 5 × 120 s, and 9 × 300 s. Images were reconstructed with a voxel size of 1.65 × 1.65 × 1.65 mm³. Low-dose CT scans (voltage, 120 kV; tube current, 25 mA; CARE Dose4D and CARE kV [Siemens]) were acquired as part of the examinations. The CT images were reconstructed with a voxel size of 1.52 × 1.52 × 1.65 mm³.

Time-Course Data. Mean VOI time-course data are compared with the corresponding mean VOIs of the fitted voxel-level time courses, ẑ(t_j), in Equation 2. Mean VOI time-course data are also analyzed using the nonparametric model and the Huang-Sokoloff 2C model, including a fractional V_b and a delay of the AIF.

FIGURE 1. Meier-Zierler tissue residue (R) with decomposition into vascular (R_b), in-distribution (R_d), and extracted (R_e) components. The decomposition was used to define the indicated metabolic parameters. MTT = mean transit time; Ext = extraction fraction.

FIGURE 2. MIP maps of NPRM kinetic parameters and associated SEs. SEs are based on the SD of MIP results for each of 25 bootstrap replications. The top row shows CT images for selected cross sections through the volume and PET MIP maps at the indicated times. K = 9 basis elements were determined for the data by the NPRM methodology. MTT = mean transit time; Ext = extraction fraction.

FIGURE 3. Results of alternative fitting of the VOI data used in Figure 2. Data are points, and line colors correspond to the methods used. The full time course is on the left; the first minute is on the right. GM = gray matter; NP = nonparametric; WM = white matter.
FIGURE 7. (A) Delay image corresponding to the coronal CT slice in Figure 2; values are the amplitude-weighted delay values {d_k, k = 1, 2, ..., K} in Equation 1. Data are centered so that the mean delay in the descending aorta is 0. (B) Box plots of the distribution of mapped delay values, in seconds, by VOI. GM = gray matter; WM = white matter.

TABLE 2. VOI Kinetics Recovered Using Different Methodologies (VOIs). Similar to what is reported in Table
Communication Networks of On-Farm Rubber in Riau Province, Indonesia

The rubber on-farm subsystem is an important part of rubber commodity development. Communication is an important part of the on-farm subsystem carried out by farmers, because the communication network that forms describes the farmers' communication pattern. This research aims to analyze the on-farm communication networks of rubber in Riau and was conducted in two districts with potential for rubber commodities in Riau Province, namely Kuantan Singingi Regency and Kampar Regency. To obtain the research data, respondents were determined by purposive and snowball sampling; the number of respondents in this research was 168 rubber farmers. The results showed that the on-farm communication network of rubber farmers in Kampar Regency followed a centralized pattern (interlocking personal network), indicating that certain individuals dominate the communication network. Meanwhile, the communication network of rubber farmers in Kuantan Singingi Regency followed a radial personal network, confirming that farmer information centers have begun to spread across several individuals. It is necessary to introduce institutions and communication technology, so that farmer information centers are not limited to certain individuals and farmers have a choice of information sources that can increase knowledge and help solve problems for rubber farmers in Riau Province.

Introduction

One of the plantation sectors developed in Indonesia is rubber plantations. Rubber is the second largest plantation commodity after palm oil and has a large share as a source of foreign exchange. Supportive natural conditions and the world's high demand for natural rubber make rubber a potential commodity to be developed in the context of agricultural development and the Indonesian economy. Rubber plantation is one of the strategic commodities developed in Riau Province. Data from the Indonesian Central Bureau of Statistics (CBS) in 2017 show that the area of rubber plantations in Indonesia is 3,639,129 hectares with a production of 3,157,808 tons, while the area of rubber plantations in Riau Province is 349,714 hectares with a production of 324,123 tons. These data show that Riau Province has great potential for the development of rubber plantations.

One of the obstacles in rubber development is the availability of information on the agribusiness subsystems. In the upstream subsystem, information on low seed quality is the main problem for plantations in the Sumatra corridor; this is indicated by the productive age of rubber plants, which does not reach 30 years. In the farming subsystem, the availability of information on rubber plant cultivation is likewise a constraint.

Internal Characteristics of Farmers

Nearly all of the respondent rubber farmers in Riau Province are in the productive age group, as shown in Table 1: 159 farmers were of productive age and 9 of unproductive age. This illustrates that rubber farmers can readily find and absorb the rubber farming information provided by communicators. The majority of respondents, 112 individuals (66.67%), had junior to senior high school education; 42 were in the low category (did not complete, or only completed, elementary school), and 14 (8.33%) were graduates with a Diploma or Bachelor degree.
This condition shows that the majority of respondent rubber farmers have an education level in the medium and low categories, owing to low awareness of the importance of education, which causes dependence for knowledge and information on highly educated farmers.

Based on the number of family dependents, the dominant group of respondents had 3-4 dependents (88 people, 52.38%), while 66 respondents (39.29%) had 1-2 dependents and 14 respondents (8.33%) had 5-6 dependents. The number of dependents of farmers is therefore medium (3-4 people); the more family dependents, the more the farmers spend.

The experience of the surveyed farmers in rubber farming shows that most of it is classified as long (≥ 15 years). Usually, farmers who have been farming for a long time have the experience to become a source of information for other farmers.

Land ownership was mostly in the medium category: 131 farmers (77.98%) owned 0.6-2 hectares of land. Meanwhile, 17 farmers (10.12%) were in the narrow category of 0.5 hectares, and 20 farmers (11.90%) were in the large category of more than 2 hectares. These results show that the land owned by rubber farmers is still relatively moderate, which does not allow them to do business more effectively.

The cosmopolitan level is the ability of a farmer to relate to a very wide environment. On the basis of Table 2, the cosmopolitanism of rubber farmers in Riau Province was in the medium category, with an average of 1.76. This shows that rubber farmers in Riau Province have been quite active in reading information about rubber plants through the mass media. Farmers have also been quite active in mingling with extension workers, whom they can contact at farmer group meetings. However, most farmers did not travel elsewhere to seek knowledge or training in rubber farming. Farmers with higher education, such as a Bachelor's degree, were usually active in reading information about rubber farming, socializing with extension workers, and traveling outside the region to find information.

External characteristics

The need for information sources is increasing for farmers in developing countries, as farmers have to make increasingly complex decisions [7]. Extension intensity is the number of meetings with extension officers attended by farmers. Based on Table 3, the extension intensity for farmers in Riau Province still exists, with a score of 2.19 in the medium category; extension workers rarely delivered extension in person. Extension officers conducted only remote evaluations of farmers and rarely went directly to the field. However, these extension activities still exist, especially when assistance is provided to farmer groups. In order to keep farmers up to date, actors in the communication network can be used as information channels and communication intermediaries [8].

Based on Table 4, the accuracy of the extension channels for rubber farmers in Riau Province was in the high category, with an average score of 2.36. Direct communication channels (lectures and discussions) were in the medium category with a score of 2.30, considered good enough by some farmers.
In terms of targeting accuracy, the extension channels were also in the high category, with a score of 2.43. Overall, the accuracy of the extension channels for rubber farmers in Riau Province is good, and farmers can absorb the rubber information provided by extension workers during extension activities.

From the table above, the amount of information available to rubber farmers in Riau Province on rubber farming is in the high category, with an average score of 2.44. The information obtained by farmers takes the form of materials related to cultivation, inputs, marketing, and supporting institutions (farmer groups and cooperatives). Information was obtained by rubber farmers from extension workers, farmer group leaders, fellow farmers, and the boss. Meanwhile, other sources of information include the Internet, farmers from other areas, newspapers, magazines, and others.

Communication Network Analysis

An analysis of the communication network was carried out on rubber farmers in Riau Province. Such an analysis can identify the communication structure that has formed, how many individuals can connect with other individuals, and the patterns of interaction that form between individuals in the system. Besides this, the role of individuals in the network can also be identified. Different understandings and interpretations by farmers of an advancement will differentiate the patterns of relations between farmers [9], and the benefit of the messages received will be determined by the form of the communication network [10]. The communication networks of rubber farmers in Riau Province were studied in two regencies, Kampar Regency and Kuantan Singingi Regency. Two villages were sampled in each regency: Tanjung Alai Village and Batu Bersurat Village in Kampar Regency, and Lubuk Terentang Village and Gunung Village in Kuantan Singingi Regency. Information dissemination in the network is divided into four groups: the upstream agribusiness subsystem, the on-farm agribusiness subsystem, the downstream agribusiness subsystem, and the supporting institution subsystem.

Upstream subsystem communication network. The communication network in the upstream subsystem of rubber farming forms around the production and distribution of the production facilities needed by farmers for rubber farming: seeds, fertilizers, pesticides, vinegar, latex stimulants, and agricultural equipment. The upstream subsystem communication network in the two villages shows the same pattern, namely a wheel structure (interlocking personal network), with wheel centers at nodes 6, 24, and 43 in Batu Bersurat Village and at nodes 56 and 57 in Tanjung Alai Village. The opinion leader in both villages was the farmer group leader. The individual who acted as a bridge between one group and other groups was the secretary of the farmer group, who was also the head of the Community Empowerment Institution in Batu Bersurat Village, together with the villages' extension workers. In addition, the individuals acting as stars, i.e., people with strong relationships to many others, were traders or shop owners.
The majority of farmers in the upstream subsystem of Tanjung Alai Village had a direct relationship with the upstream subsystem actors, in this case the traders responsible for the distribution of production facilities such as fertilizers, pesticides, and equipment used by rubber farmers.

The pattern of communication among rubber farmers in Kuantan Singingi Regency has started to spread in all directions (radial personal network). A farmer who is also a community leader in Lubuk Terentang Village and the head of the farmer group association in Gunung Village acted as opinion leaders. The individuals acting as bridges were rubber farmers, while those acting as stars were the extension workers. The integration of rubber farmers into the Kuantan Singingi rubber farmers' association opens opportunities for farmers to communicate with more sources and to provide information to each other, so that the communication network of rubber farmers in Kuantan Singingi Regency is more widespread and not focused on just one person.

Farming subsystem communication network. Farmers need information on the farming subsystem to obtain the maximum production and the quality standards the market wants. Information on the farming subsystem covers the management of inputs by rubber farmers, including land clearing, seedling care, spacing, planting, maintenance of yielding crops, tapping methods, use of fertilizers, and use of sorax (a latex stimulant).

Individual Level Communication Network Analysis

The personal closeness between a farmer and his group can form through the farmer's interaction and communication network [11]. The relationships in the communication network at the group level may differ from those at the individual level [18] and depend on the benefits of communication [10]. Traders and formal leaders in their groups were the individuals with high local centrality in this respect. Agricultural shop owners demonstrated high local centrality in Tanjung Alai Village, and the extension worker had the highest local centrality in Lubuk Terentang Village, whereas in Gunung Village the high centrality value belonged to the chairman of the Berkah Illahi Farmer Group.

The second indicator for measuring communication networks at the individual level is global centrality, which indicates the number of steps an individual must take to contact other individuals in the system. In other words, the smaller the global centrality, the greater the individual's ability to contact all members of the network; the higher the score for centrality, the stronger the communication that occurs [13]. In the upstream-subsystem communication network in Batu Bersurat Village, the maximum global centrality value was 109 and the minimum was 52. Traders had the lowest global centrality value and thus maximum access to all network members, in keeping with their local centrality and their role in the network as traders of production facilities. Meanwhile, in Tanjung Alai Village the lowest global centrality was held by agricultural shop owners; in Lubuk Terentang Village, by traders of production facilities; and in Gunung Village, by the traders of production facilities and the administrators of the Berkah Illahi group. Furthermore, the third indicator in measuring the communication network at the individual level is the level of betweenness; a minimal computational sketch of all three indicators is given below.
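To make the three indicators concrete, the sketch below computes them with the networkx library on a small invented sociogram (node names and edges are hypothetical, for illustration only; the actual networks come from the survey rosters). Degree centrality plays the role of local centrality; closeness centrality is the inverse-distance analogue of the step-counting global centrality used here, so higher closeness corresponds to smaller global centrality; betweenness centrality counts how often a node lies on the shortest paths between others.

```python
import networkx as nx

# Toy sociogram: T = trader, E = extension worker, F1..F5 = farmers.
# Edges are reported communication links (invented for illustration).
G = nx.Graph([("T", "F1"), ("T", "F2"), ("T", "F3"), ("T", "E"),
              ("E", "F4"), ("F4", "F5")])

local_cent = nx.degree_centrality(G)        # direct ties (local centrality)
closeness  = nx.closeness_centrality(G)     # high value = few steps to reach all
between    = nx.betweenness_centrality(G)   # how often a node bridges others

for node in G.nodes:
    print(f"{node}: degree={local_cent[node]:.2f}  "
          f"closeness={closeness[node]:.2f}  betweenness={between[node]:.2f}")
```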
The betweenness level in this research is defined as the frequency with which a node/individual lies on the short paths that connect other nodes. In other words, a high level of betweenness indicates a high level of dependence on that individual within the system. In the upstream communication network in Batu Bersurat Village, the maximum level of betweenness is 53,109 and the minimum is 0. The data show that the farmers' betweenness level was classified as low; this is linked to the farmers' activity in the upstream communication network being limited to dealing with the traders. The traders of production facilities had the highest betweenness level. Meanwhile, in Tanjung Alai Village, the highest betweenness level was held by agricultural shop owners; in Lubuk Terentang Village, by the extension worker whose task area covers Lubuk Terentang Village; and in Gunung Village, by the chairman of the Berkah Illahi group.

Farming subsystem. There are two types of roles for nodes: a core role occupying a central position, and a bridge role acting as a liaison between nodes [14]. In Batu Bersurat Village, high local centrality in the individual communication network was demonstrated by agricultural extension workers. In Tanjung Alai Village, an agricultural extension worker from the XIII Koto Kampar Agricultural Extension Center assigned to the village showed high local centrality in the farming subsystem. The high local centrality in Lubuk Terentang Village was shown by the extension worker assigned to that village, whereas in Gunung Village the high centrality value was shown by farmers considered successful in planting and caring for rubber plants, as well as by community leaders. The role of a person as a bridge for others in communication determines the importance of that person's role in the communication network [15].

The second indicator in measuring communication networks at the individual level is global centrality. In the farming-subsystem communication network of Batu Bersurat Village, the agricultural extension worker had the lowest global centrality value. Likewise, the lowest global centrality in Tanjung Alai Village belonged to the extension worker from the XIII Koto Kampar District Agricultural Extension Centre. The lowest global centrality in Lubuk Terentang Village was held by agricultural shop owners, whereas in Gunung Village it was held by the farmers considered community leaders.

In addition, the level of betweenness is an indicator in the measurement of communication networks at the individual level. In the farming communication networks of Batu Bersurat and Tanjung Alai Villages, the maximum betweenness was demonstrated by extension workers from the XIII Koto Kampar District Agricultural Extension Center. An extension worker from the Gunung Toar District Agricultural Extension Center likewise showed the highest betweenness in Lubuk Terentang Village and Gunung Village. This illustrates that, in farming, the role of extension workers is important for rubber farmers.

Conclusions and Recommendations

In Riau, two forms of communication networks were found: a centralized pattern in Kampar Regency, and a pattern that has begun to spread in all directions in Kuantan Singingi Regency.
The centralized communication network shows that certain actors in the communication network have become command points, and that the flow of information to farmers is limited. The spreading communication network illustrates that interaction takes place in all directions, and that farmers can communicate with several parties. A Rubber Farmers Association has been established in Kuantan Singingi Regency since 2017. Its members are rubber farmers belonging to farmer groups in Kuantan Singingi Regency. The association, formed through the Kuantan Singingi Regency Plantation Office, was created to help rubber farmers obtain the best and most uniform price. Farmers who were members of the association were also members of a WhatsApp group, which helps farmers communicate and obtain a great deal of information from the many parties in the group. Institutions and communication technologies should be introduced so that the information center is not focused on only certain people; farmers must have a choice of information sources to enhance knowledge and help solve problems.
When is the SARS-CoV-2 infection over and what is post-COVID?

We read with interest the article by Ahmed et al. about a meta-analysis of the neurological complications of COVID-19 patients [1]. Only studies in which neurological compromise developed after the PCR for SARS-CoV-2 became negative, or studies of patients who had recovered from COVID-19 and who developed neurological disease after full recovery, were included [1]. It was concluded that despite recovery from the acute infection, there may be persistent illness that requires extensive follow-up of COVID-19 patients, including those initially thought to be asymptomatic [1]. The study is appealing but raises concerns that need to be discussed.

A limitation of the study is that the inclusion criteria were based on a negative PCR for SARS-CoV-2 or the end of clinical manifestations of COVID-19 [1]. Material for PCR tests is usually taken from naso-pharyngeal swabs. However, a negative PCR test does not rule out that a patient is still infected with the virus. Although the respiratory tract is the main site of infection, there are a number of extra-pulmonary manifestations of SARS-CoV-2 infections, even at the onset of the infection [2]. In addition, recovery from lung disease does not mean that the infection is over. The immune response against the virus continues even after the lung symptoms have subsided. Since many manifestations of COVID-19 are due to the immune response against the virus, considering the end of pulmonary manifestations as the end of the disease is not justified. For example, the patient reported by Ishaq et al. and included in Table 1 developed opsoclonus myoclonus two days after recovery from the pulmonary infection [3]. Thus, opsoclonus is not really a post-infectious phenomenon but is pathophysiologically linked to the infection. The same applies to venous sinus thrombosis (VST), which occurs not only as a post-vaccination phenomenon but also during a SARS-CoV-2 infection [4]. VST can be due to immune thrombocytopenia (ITP), which can develop as early as the pulmonary phase; however, VST may not develop until after the patient has become PCR-negative.

According to Table 1, there are 12 patients in whom the latency period between the COVID-19 infection and the onset of neurological disease exceeded 30 days [1]. The longest latency among the 60 included patients was as much as 130 days [1]. Given these numbers, it is quite unlikely that there is a causal relation between the SARS-CoV-2 infection and the neurological compromise. These 12 patients in particular should be investigated for alternative causes of the neurological disease. Guillain-Barré syndrome (GBS), for example, usually develops within four weeks after the onset of the SARS-CoV-2 infection [5].

Another limitation of the study is that the spectrum of neurological disease after recovery from COVID-19 is broader than shown in the index study. It also includes conditions such as ventriculitis, hypophysitis, cerebellitis, brainstem encephalitis, cerebral vasculitis, and venous sinus thrombosis.

Overall, the interesting study has some limitations and inconsistencies that call its results and their interpretation into question. Addressing these issues would strengthen the conclusions and could improve the standing of the study. We disagree with the notion that a negative PCR test or the absence of pulmonary symptoms spells the end of COVID-19.
Funding sources

No funding was received. Ethics approval was in accordance with ethical guidelines. The study was approved by the institutional review board. Consent to participate was obtained from the patient. Consent for publication was obtained from the patient.

Availability of data

All data are available from the corresponding author.
Code availability

Not applicable.

Author contribution

JF: design, literature search, discussion, first draft, critical comments, final approval. DM: literature search, discussion, critical comments, final approval.

Declaration of competing interest

None.
A Time Dependent Multi-Determinant approach to nuclear dynamics

We study a multi-determinant approach to the time evolution of the nuclear wave functions (TDMD). We employ the Dirac variational principle, use as ansatz for the nuclear wave function a linear combination of Slater determinants, and derive the equations of motion. We demonstrate explicitly that the norm of the wave function and the energy are conserved during the time evolution. This approach is a direct generalization of the time-dependent Hartree-Fock method. We apply this approach to a case study of ${}^6Li$ using the N3LO interaction renormalized to 4 major harmonic oscillator shells. We solve the TDMD equations of motion using Krylov subspace methods of Lanczos type. We discuss as an application the isoscalar monopole strength function.

Introduction. The time-dependent Hartree-Fock method (TDHF) and its quasi-particle generalization, the time-dependent Hartree-Fock-Bogoliubov method (TDHFB), are central tools in studying nuclear dynamics (see for example ref. [1], and ref. [2] for a recent review and references therein). In these approaches the time dependence of the nuclear wave function is studied under the assumption that the nuclear wave function can be described by a single Slater determinant or by a quasi-particle determinant wave function. Usually nuclear excitations, for example giant resonances, are studied in the approximation of small-amplitude motion around the static solution (RPA or QRPA). In this case, the description of nuclear excitations reduces to the solution of a large eigenvalue problem. Despite the enormous matrix dimensions, the RPA or QRPA equations are solved nowadays using efficient Krylov projection techniques of Arnoldi type (see for example ref. [3] for recent applications). Recently, the time-dependent coupled-cluster method (refs. [4], [5]) has been revisited (ref. [6]) and applied to light nuclei (ref. [7]) using the N3LO interaction (ref. [8]) transformed by the similarity renormalization group.

In this work we discuss a Time-Dependent Multi-Determinant (TDMD) approach whereby the nuclear wave function is approximated by a linear combination of several Slater determinants. This approach is the time-dependent version of the Hybrid Multi-Determinant (HMD) approach (refs. [9]-[11]). Each Slater determinant is built from different single-particle wave functions of the most generic type. To the author's knowledge, this approach has never been considered in nuclear physics; in this sense, this is an exploratory study. Our starting point is the Dirac variational principle which, as is well known, leads to the time-dependent Schroedinger equation in the most general case, or to the TDHF equations (ref. [12]) if the nuclear wave function is approximated by a single Slater determinant. Using the Dirac variational principle, we derive the equations of motion and prove explicitly that the time evolution conserves the norm and the energy of the wave function. The equations of motion for the single-particle wave functions are of the type iL dψ/dt = R, where R is an energy gradient, ψ is the set of single-particle wave functions of all Slater determinants, and L is a matrix of large dimension related to the time derivative of the norm of the wave function (which will be discussed in detail below). The actual evaluation of the wave function as a function of time is performed using the Direct Lanczos method (DL) for the solution of a large linear system.
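A minimal matrix-free sketch of this kind of solver is given below, assuming only a `matvec` callable for the product with the (Hermitian) matrix; it is a generic direct-Lanczos/FOM-type projection in the spirit of ref. [13], not the authors' production code.

```python
import numpy as np

def krylov_hermitian_solve(matvec, b, m=100, tol=1e-12):
    """Solve A x = b for Hermitian A, given only the product v -> A v.

    Builds an orthonormal Krylov basis {b, Ab, A^2 b, ...} with modified
    Gram-Schmidt, projects A onto it (tridiagonal for Hermitian A, up to
    roundoff), and solves the small projected system (FOM/direct Lanczos).
    """
    V = [b / np.linalg.norm(b)]
    H = np.zeros((m, m), dtype=complex)
    k = m
    for j in range(m):
        w = matvec(V[j])
        for i in range(len(V)):                  # modified Gram-Schmidt
            H[i, j] = np.vdot(V[i], w)
            w = w - H[i, j] * V[i]
        h = np.linalg.norm(w)
        if j + 1 == m or h < tol:                # subspace exhausted or full
            k = j + 1
            break
        H[j + 1, j] = h
        V.append(w / h)
    e1 = np.zeros(k, dtype=complex)
    e1[0] = np.linalg.norm(b)
    y = np.linalg.solve(H[:k, :k], e1)           # small projected system
    return np.tensordot(y, np.array(V[:k]), axes=(0, 0))
```

At each time step one would pass the right-hand side R, recover dψ/dt, and then advance ψ with a standard integrator.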
The DL method belongs to the family of Krylov subspace methods for the solution of linear systems (an excellent review of these methods can be found, for example, in ref. [13]). These methods, for eigenvalue problems, include the familiar Lanczos method used in the shell-model approach to nuclear structure (refs. [14], [15]) and the Arnoldi method used in solving the RPA or QRPA eigenvalue problems (ref. [3]). The basic idea of these methods is the following: although we may not be able to store a matrix (e.g. the nuclear Hamiltonian matrix), we can easily evaluate the matrix-to-vector product. In our case, although L is not as large as the shell-model Hamiltonian matrix, it can hardly be stored except in simple cases. However, the matrix-to-vector product appearing in the equations of motion is trivial to evaluate, and the Lanczos method is the ideal one. As an exploratory study, we solve the equations of motion in the case of 6Li using the N3LO interaction renormalized to 4 major oscillator shells with the Lee-Suzuki method (refs. [16], [17]), in order to reduce the otherwise very large single-particle space. We use the time-dependent wave function obtained in this way to evaluate strength functions. Our ultimate goal is to extend ab-initio methods to time-dependent problems, such as the evaluation of strength functions, starting from a two-body nucleon-nucleon interaction.

The multi-configurational time-dependent Hartree-Fock (MCTDHF) method is a time-dependent version of the shell model written in the full Hilbert space. The MCTDHF method uses a time-dependent linear combination of all possible Slater determinants. The time-dependent coefficient of such a linear combination is a function of all possible many-body configurations and is obtained from the equations of motion. Since the ansatz for the many-body wave function is not unique, one restricts the freedom in the many-body wave function by imposing orthogonality among the single-particle wave functions; as shown in ref. [19], this amounts to a redefinition of the coefficient of the linear combination. The only difference between an exact treatment of the time evolution of the many-body wave function and the MCTDHF approach is that in the latter the single-particle basis is time dependent. In the MCTDHF approach, at a given value of time, all Slater determinants are built from the same time-dependent single-particle basis; that is, each of them is an n-particle-n-hole excitation of the lowest one. In our approach, instead, each Slater determinant is built from a different time-dependent single-particle basis. Moreover, we consider several, and not all possible, Slater determinants, and we do not have the freedom of imposing orthogonality between the single-particle wave functions belonging to different Slater determinants. Rather, we consider the most generic Slater determinants, in the same spirit as the HMD method. Our approach is not limited by the dimension of the Hilbert space. Each Slater determinant in our approach is equivalent to a rather large number of linear combinations of the Slater determinants of the MCTDHF approach. As a consequence, the equations of motion in the MCTDHF approach are different from the ones of the TDMD approach (cf. refs. [18]-[20] and section 2a of this work). The outline of this paper is as follows.
In section 2 we derive the equations of motion in the TDMD approach using the Dirac variational principle, prove that these equations of motion conserve the norm and the energy of the nuclear wave function, and discuss how to fix uniquely the solution of the equations of motion for the single-particle wave functions. We also briefly discuss the imaginary-time version of these equations of motion. At the end of section 2 we discuss the 'static' solutions of these equations and show that the time propagation of these solutions generates a time-dependent phase factor common to all Slater determinants (in some sense this is the generalization of the single-particle energies). In section 3 we discuss the numerical method, and in section 4 we discuss the application of our method to the nuclear strength function using the boost method in order to determine the excitation spectrum.

2a. Equations of motion and conservation laws. The Dirac time-dependent variational principle states that the time evolution of the nuclear wave function is obtained by varying the action

S = ∫_{t1}^{t2} dt <ψ(t)| i d/dt - H |ψ(t)>    (1)

with respect to |ψ> and <ψ| independently, under the constraint that the wave function is held fixed at the initial and final times t1 and t2. We take as ansatz

|ψ(t)> = Σ_{S=1}^{N_w} |U_S(t)>,    (2)

where |U_S> is a Slater determinant and N_w is their number. These Slater determinants for A particles are of the most generic type and are written as

|U_S> = b†_1(S) b†_2(S) ... b†_A(S) |0>,    (3)

b†_α(S) = Σ_{i=1}^{N_s} U_{iαS} a†_i,    (4)

A being the number of particles and S labeling the Slater determinant; in the above equations, a†_i is the creation operator in the single-particle (e.g. harmonic oscillator) state i, N_s is the number of single-particle states, and U is the single-particle wave function in the h.o. representation. Note that these single-particle wave functions are different for each Slater determinant labeled by the index S. In what follows we label particles with Greek letters and single-particle states with Latin letters.

As mentioned in the introduction, in the MCTDHF (cf. ref. [20]) each Slater determinant is written as a multi-particle multi-hole excitation built on the first one. Moreover, in the MCTDHF approach it is essential to multiply each Slater determinant by a time-dependent amplitude, and the time-dependent single-particle states can be taken orthogonal to each other. That is, in the MCTDHF approach, |ψ> = Σ_{[n1,n2,...]} A(n1, n2, ..., t) |n1, n2, ...; t>, with the sum extending over all allowed values of the occupation numbers of the time-dependent basis, i.e. n1, n2, ... = 0, 1. These considerations illustrate the basic difference between the TDMD approach proposed in this work and the MCTDHF approach.

We assume that each Slater determinant is a product of a neutron and a proton Slater determinant. The ansatz of eqs. (2)-(4) is the same as in the Hybrid Multi-Determinant (HMD) method (refs. [9]-[11]) used in variational calculations. Usually in the HMD method a projector onto good quantum numbers (angular momentum and parity) is applied to the wave function of eq. (2), in order to decrease the otherwise large number of Slater determinants needed to obtain accurate energies, for example for the yrast states. In this work we do not use projectors onto good quantum numbers; we do this in order to simplify the equations and the proofs of the conservation of the energy and of the norm. The Slater determinants are not orthogonal to each other and are 'deformed', that is, they do not have good quantum numbers.
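For concreteness, the overlap of two such nonorthogonal determinants reduces to a small determinant (the standard Löwdin result, quoted in the formalism below); a minimal sketch:

```python
import numpy as np

def slater_overlap(U_Sprime, U_S):
    """Overlap <U_S'|U_S> of two nonorthogonal Slater determinants.

    U_Sprime, U_S : (N_s x A) matrices of single-particle amplitudes in the
    oscillator basis, one column per generalized creation operator.  For
    determinant states of this form the overlap is the standard Loewdin
    result det(V U), with V the Hermitian conjugate of U_S'.
    """
    V = U_Sprime.conj().T                # (A x N_s)
    return np.linalg.det(V @ U_S)        # determinant of the (A x A) matrix

# two random 'deformed' determinants: N_s = 12 states, A = 4 particles
rng = np.random.default_rng(0)
U1 = rng.normal(size=(12, 4)) + 1j * rng.normal(size=(12, 4))
U2 = rng.normal(size=(12, 4)) + 1j * rng.normal(size=(12, 4))
print(slater_overlap(U1, U2))            # generally nonzero: not orthogonal
```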
At the initial time they could be the result of a partially converged variational calculation as given by the HMD method, or converged variational wave functions 'boosted' by some excitation operator (e.g. dipole, quadrupole, etc.). We do not have the freedom to impose orthogonality between the single-particle wave functions belonging to different Slater determinants, although we can impose orthogonality between the single-particle wave functions of the same Slater determinant.

Although the Dirac variational principle determines uniquely the time dependence of the Slater determinants, it does not uniquely fix the single-particle wave functions U_{iαS}. In order to see this, let us perform the transformation

b†_α(S) → b'†_α(S) = Σ_{β=1}^{A} g_{βα}(S) b†_β(S)    (5)

of the generalized creation operators defined in eqs. (3) and (4), for every S. In other words, we mix the particle labels in each Slater determinant, but we do not mix the particle labels of different Slater determinants. Each Slater determinant can be rewritten as

b'†_1(S) b'†_2(S) ... b'†_A(S) |0> = det(g(S)) |U_S>.    (6)

Therefore, provided det(g(S)) = 1, the same Slater determinant can be obtained using the new generalized creation operators

b'†_α(S) = Σ_i U'_{iαS} a†_i    (7)

with, in matrix notation,

U'(S) = U(S) g(S).    (8)

Hence, if the U's are a solution of the equations of motion (discussed below), the U' given by equation (8), with any g (provided det(g) = 1), will satisfy the same equations of motion. This kind of gauge invariance implies that the equations of motion, although they determine the time evolution of the set of Slater determinants, do not determine unambiguously the time evolution of the single-particle wave functions U(S). Since g is arbitrary (provided det(g) = 1), we have A² − 1 free parameters for each Slater determinant. In order to uniquely specify the solutions of the equations of motion, we select the matrix g so that the condition of eq. (9) holds for the A × A submatrix of U' for each Slater determinant; in eq. (9), U'_AA is the determinant of the A × A submatrix of U'. This point will be further discussed after the equations of motion have been derived. We assume that all Slater determinants have been recast so that the A × A submatrices of the single-particle wave functions are as in eq. (9), and in what follows we shall drop the prime. In this way we effectively decrease the number of unknowns.

We now proceed to determine the equations of motion of the single-particle wave functions U. In what follows, since we always have pairs of indices S and S', the Slater determinant |U_S> will carry the label S (even though sometimes it will be omitted) and the complex-conjugate quantities will carry the label S'. V_{S'} is the Hermitian conjugate of the matrix U_{S'}. We do this in order to use simple matrix notation, and to avoid confusion between U and U† for different S and S', since often we omit the labels S and S' in order to shorten the equations. The Dirac variational principle gives the two equations of motion, EOM1 and EOM2, obtained by varying the action with respect to the bra and ket amplitudes. In what follows, we quote the results for the overlaps and for the matrix elements of the Hamiltonian (cf. ref. [9]). The Hamiltonian is

H = (1/4) Σ_{ijkl} H_{ij,kl} a†_i a†_j a_l a_k,

where we recast the one-body term into the two-body interaction, as done in shell-model calculations, and the matrix elements of H are antisymmetrized. For any V and U (relative to the Slater determinants S' and S, respectively), let us define the matrices G, W, X, ρ and F of eq. (13). The matrix G has indices α, β = 1, 2, .., A; the matrix W has indices α, i; the matrix X has indices i, α; while ρ and F have indices i, j = 1, 2, .., N_s.
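The explicit expressions of eq. (13) are not reproduced above; the sketch below reconstructs the standard HMD choices (an assumption on our part), which do satisfy the properties quoted next, tr ρ = A and ρ² = ρ:

```python
import numpy as np

def transition_matrices(U_Sprime, U_S):
    """Reconstructed matrices for a pair of determinants (S', S).

    Assumed definitions (consistent with the index shapes stated in the text
    and with tr(rho) = A, rho^2 = rho): G = (V U)^-1, W = G V, X = U G,
    rho = U G V, F = 1 - rho, with V the Hermitian conjugate of U_S'.
    """
    V = U_Sprime.conj().T               # (A x N_s)
    G = np.linalg.inv(V @ U_S)          # (A x A)
    W = G @ V                           # (A x N_s), indices (alpha, i)
    X = U_S @ G                         # (N_s x A), indices (i, alpha)
    rho = U_S @ G @ V                   # (N_s x N_s) generalized density matrix
    F = np.eye(rho.shape[0]) - rho
    return G, W, X, rho, F

# numerical check of the quoted properties for random determinants
rng = np.random.default_rng(1)
U1 = rng.normal(size=(12, 4)) + 1j * rng.normal(size=(12, 4))
U2 = rng.normal(size=(12, 4)) + 1j * rng.normal(size=(12, 4))
G, W, X, rho, F = transition_matrices(U1, U2)
assert abs(np.trace(rho) - 4) < 1e-9 and np.allclose(rho @ rho, rho)
```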
The matrix ρ is the generalization of the density matrix in TDHF and satisfies the relations tr ρ = A and ρ² = ρ for any S' and S, as can easily be verified. We have then

<V_{S'}|H|U_S> = <V_{S'}|U_S> (1/2) tr(Γρ),

where the matrix Γ is given by

Γ_{ik} = Σ_{jl} H_{ij,kl} ρ_{lj}.

Let us note that the exchange term is the same as the direct one, since the matrix elements are antisymmetrized. The explicit form of EOM1 (eq. (19)) then follows, using the identity ∂ ln det(V U)/∂V_{αi} = X_{iα}, where E is the energy functional E = <ψ|H|ψ>. The equation of motion EOM2 (eq. (21)) can be obtained in the same way.

These equations need a few comments. First, if we recast the unknowns dU/dt (or dV/dt) into a single vector, then in a schematic matrix notation EOM1 and EOM2 take the forms i L^(1) dU/dt = R^(1) and −i L^(2) dV/dt = R^(2), and the dimension of the linear systems to be solved can be rather large. For example, in the case of 24Mg with 7 major shells (N_s = 168) and 10 Slater determinants, the matrix L is 20160 × 20160 (for both neutrons and protons); for a larger number of major shells or for heavier nuclei, the storage of this array in the computer memory can be a problem. Moreover, these matrices have some kind of separable structure: L^(1), for example, contains a separable term in the indices (iα)(µr) and another separable term in the (ir)(µα) indices. This implies that, although we may not be able to store the matrix L, we can very easily perform the matrix-to-vector product. We only need to store the matrices X, W, F and G which, in the case of 24Mg, have dimensions 168 × 12, 12 × 168, 168 × 168 and 12 × 12 for every S and S'; these are the same matrices used in the HMD variational calculations. In the past few decades, linear systems of this type, for which the matrix cannot be stored but the matrix-to-vector product can easily be performed, have received major attention in applied mathematics through the so-called Krylov subspace techniques. These techniques are precisely of the same kind one uses in standard shell-model calculations (ref. [13]); they will be briefly recalled in the next section, and a systematic treatment can be found in ref. [13] (note, however, that in ref. [13] the convention for the scalar product is <x|y> = Σ_i x_i y*_i).

Equations of motion EOM1 and EOM2 are equivalent, and the matrices L^(1) and L^(2) are Hermitian. One can show that the norm of the wave function is preserved during the time evolution, using the explicit form of the equations of motion. From eqs. (14) and (17) one obtains the time derivative of the norm, with the understanding that S' refers to V and S to U. From EOM1 (eq. (19)), multiplying by V_{αiS'} and summing over the indices, one obtains eq. (24); from EOM2 (eq. (21)), multiplying by U_{iαS} and summing over the indices, one obtains eq. (25). Subtracting eqs. (24) and (25), and using tr(ρ) = A for any S, S', the right-hand side of the resulting equation is 0 since F = 1 − ρ; hence the norm is constant (cf. eq. (23)).

Next we shall prove that the energy is constant during the time evolution: we need to prove that d<ψ|H|ψ>/dt = 0 (eq. (27)), since the norm of the wave function is a constant. Let us introduce the compact index a = (iαS). The Lagrangian associated with EOM1 can be rewritten in terms of the amplitudes U_a and V_b, and EOM1 then takes a compact form (eq. (32)) for all b = (βjS'). Similarly, the Lagrangian associated with EOM2 can be recast in a compact form, and EOM2 becomes eq. (34). Multiplying eq. (32) by V_b and summing over the indices, similarly multiplying eq. (34) by U_a and summing over the indices, and subtracting the two results, we obtain d<ψ|H|ψ>/dt = 0; the energy is therefore conserved during the time evolution.
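To illustrate the practical point made above about never assembling L, the toy sketch below applies an operator with a generic, invented separable structure to a vector using only its small factors; the paper's actual L^(1) has its own specific separable terms.

```python
import numpy as np

def separable_matvec(A, B, C, D, v):
    """Matrix-free product for an operator with separable structure.

    Toy form: L[(i,a),(j,b)] = A[i,j]*B[b,a] + C[i,b]*D[a,j] (illustrative
    only, not the paper's exact L^(1)).  The product L v never requires
    building the (Ns*A) x (Ns*A) matrix; only the small factors are stored.
    """
    t1 = np.einsum("ij,jb,ba->ia", A, v, B)   # first separable term
    t2 = np.einsum("ib,aj,jb->ia", C, D, v)   # second separable term
    return t1 + t2

# toy dimensions: Ns = 6 single-particle states, A = 3 particles
Ns, A_dim = 6, 3
rng = np.random.default_rng(2)
A_ = rng.normal(size=(Ns, Ns))
B_ = rng.normal(size=(A_dim, A_dim))
C_ = rng.normal(size=(Ns, A_dim))
D_ = rng.normal(size=(A_dim, Ns))
v = rng.normal(size=(Ns, A_dim))
w = separable_matvec(A_, B_, C_, D_, v)       # (Ns x A) result, L never formed
```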
Only in the case of a single Slater determinant they can be made orthogonal and orthogonality is preserved during the time evolution. All these considerations have been tested numerically. We did not find any need to enforce eq. (9) using Krylov subspace techniques. Actually all initial calculations have been performed without the gauge fixing condition of eq. (9). Note also that if we impose (for a given S) orthogonality between the single-particle wave functions we would have to introduce Lagrange multipliers, while the condition of eq.(9), simply reduces the number of unknowns in the linear system of eq. (19). 2b. Imaginary time equations of motion. Propagation in imaginary time can be used to determine the best approximation to the ground-state for a specified number of Slater determinants. As τ = it → ∞ we obtain the ground-state of the system. We solve the following imaginary time equations of motion where L and R are given in the previous subsection in eq. (21). We consider EOM2 since the basic matrices in eq.(13) can be taken from HMD computer programs, which have accurately been tested. We also solve the variational problem using the HMD method (which is a quasi-newtonian method). The technical details of the variational methods used in the HMD approach can be found in ref. [21]. The results from the HMD method can be used as initial start in eq. (36) and vice versa. We obtain the same energies from the two methods and this is a strong validation test of our computer programs. OnceV in eq. (36) has be found, we determine V using Runge-Kutta methods with a time interval sufficiently small so that the energy decreases as a function of the imaginary time. Typical values for the imaginary time interval are 10 −2 , 10 −3 MeV −1 . 2c. The static solutions. Let us suppose that we have found the ground state wave function for a selected number of Slater determinants, either by imaginary time propagation or with the variational HMD method, and let us call these single-particle wave functions V (S ′ ). As in the the TDHF approximation, we can propagate in real time these static single-particle wave functions and obtain the single-particle energies. However, in the case of several Slater determinants we cannot define the singleparticle energies since we do not have a self-consistent eigenvalue problem as in the the HF approximation. The question naturally arises whether one can define some type of single-particle energies from the the evolution of the static solutions V (S). In the case of several Slater determinants, since we do not impose orthogonality between single-particle wave functions, these can mix. Hence we seek solutions of the type with the A × A matrix f determined by the equations of motion EOM2. The matrices U, X, W, G, ρ, Γ and F for a pair of Slater determinants S, S ′ obey the relations, in a matrix notation, where We seek time-independent M i.e. f = exp(iMt). Since eq.(39) has to be valid at all times det(f (t, S ′ )) must be independent of S ′ , i.e. all Slater determinants must evolve with the same phase factor exp[itr(M)t]. As a consequence the Fourier decomposition of the wave function gives an energy E F T = tr(M). Generally, this spectral energy differs from the energy obtained from the variational calculation. However, the two energies must converge to the same value if we consider a sufficiently large number of Slater determinants so that the exact wave function is sufficiently well approximated. 
These considerations must be kept in mind when we extract energies using the spectral decomposition of the wave functions. In general, one can define the spectral density of a Hamiltonian H relative to a state |φ_0> as

ρ(E) = Σ_n |<n|φ_0>|² δ(E − E_n),    (41)

where the E_n are the energies of the eigenstates |n>. The spectral density can be obtained from the Fourier transform of the time correlation function <φ_0|φ(t)>, where |φ(t)> is obtained from the time evolution of the initial state |φ_0>, as

ρ(E) = (1/2π) ∫ dt e^{iEt} <φ_0|φ(t)>.    (42)

The number of Slater determinants has to be sufficiently large for this method to be reliable. Moreover, if the initial state is the static solution of the imaginary-time evolution, we would obtain only one pole, corresponding to E = tr(M), which is obviously wrong in the HF case. Hence eq. (42) gives reasonable estimates for the eigenvalues only if there is reasonable fragmentation of ρ(E) for a sufficiently large number of Slater determinants. Moreover, for this method to be reliable, one has to show that the spectrum obtained in this way is independent of the initial wave function |φ_0>. In this work we will not study the convergence properties of this method: we prefer to obtain the static energies using variational methods or by imaginary-time propagation, since these give upper bounds for the energy, while the energies obtained with eq. (42) are not upper bounds to the exact values.

As discussed in the next sections in the context of the boost method for strength functions, we need static solutions to a high degree of accuracy. The reason is related to the evaluation of expectation values: the expectation values of a one-body operator Q = Σ_{ij} q_{ij} a†_i a_j are given by sums over pairs of Slater determinants of overlap factors multiplied by traces of the form tr(q ρ(S', S)).

3. A brief description of the numerical method. We solve numerically EOM2 (eq. (21)), i.e. the linear system of eq. (22b), for the time derivative of V. As pointed out in the previous section, it is not advisable to store the matrix L_2; however, we can easily evaluate L_2 v, where v is any vector, and in fact any power of L_2 applied to v. The linear system of eq. (22b) can be solved by projecting it onto the subspace (known as the Krylov subspace) generated by the vectors v, L_2 v, L_2² v, ..., where v is an arbitrary trial solution of the linear system, followed by Gram-Schmidt orthonormalization. Since L_2 is Hermitian, its projection onto the Krylov subspace gives a tridiagonal matrix (just as in the shell-model method), and the linear system can then be solved efficiently. We have implemented the so-called direct Lanczos method, the full details of which (including the algorithm) can be found in ref. [13]; with this method the tridiagonal linear system is solved efficiently. In our computer program the iterations stop when the residual vector −i L_2 dV/dt − R_2 has a norm less than 10^-7 to 10^-11. The dimension of the Krylov subspace is smaller than the dimension of the linear system of eq. (22b) and, although it is advisable to implement eq. (9), we found no actual need for it.

In this work we considered 6Li with the interaction given by the N3LO nucleon-nucleon potential renormalized to 4 major harmonic oscillator shells using the Lee-Suzuki method. We considered ħω = 12 MeV, and we added to the Hamiltonian the center-of-mass Hamiltonian β(P²_cm/2mA + mAω²R²_cm/2 − 3ħω/2) with β = 1. The corresponding numerical results are shown in fig. 1.

The Gram-Schmidt orthonormalization can be performed in two ways. In the classical scheme, the scalar products <v_k|u> for k = 1, ..., n can be evaluated independently using different processors. This classical Gram-Schmidt method, however, is known to be numerically unstable for a large number of vectors. The instability can be cured by first orthogonalizing |u> to |v_1>, then orthogonalizing the result to |v_2>, and so on; a sketch contrasting the two orthogonalization orders is given below.
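Here is a side-by-side sketch of one orthogonalization step in the two schemes (generic numerical linear algebra, not the paper's code):

```python
import numpy as np

def classical_gs_step(vs, u):
    """Classical Gram-Schmidt: all projections use the original u, so the
    scalar products <v_k|u> can be computed in parallel, but the scheme is
    numerically unstable for many vectors (unless iterated 2-3 times)."""
    coeffs = [np.vdot(v, u) for v in vs]          # independent, parallelizable
    for c, v in zip(coeffs, vs):
        u = u - c * v
    return u / np.linalg.norm(u)

def modified_gs_step(vs, u):
    """Modified Gram-Schmidt: orthogonalize sequentially against v1, then v2,
    and so on; numerically stable but an inherently sequential chain."""
    for v in vs:
        u = u - np.vdot(v, u) * v                 # uses the partially reduced u
    return u / np.linalg.norm(u)
```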
This latter method is known as the modified Gram-Schmidt method, and it is numerically stable. In this case, however, we cannot evaluate the scalar products on different processors, since the calculation is a sequential chain. Alternatively, the instability of the classical Gram-Schmidt method can be bypassed by simply repeating the orthogonalization procedure two or three times. We have implemented both the iterated classical and the modified Gram-Schmidt re-orthogonalization in the direct Lanczos method.

Strength functions. We evaluate the strength function of a one-body operator $\hat Q$, $S(E^*) = \sum_n |\langle n|\hat Q|0\rangle|^2\,\delta(E^* - E^*_n)$, with $E^*_n$ the excitation energy of the n-th eigenstate, using the boost method as follows. First we determine the ground state of the system $|0\rangle$; then at time $t = 0^+$ we boost the system with the unitary operator $e^{i\eta\hat Q}$, where $\hat Q$ is a one-body operator. For sufficiently small values of the parameter η, only terms linear in η need be retained. We then evolve this wave function in time by solving the equations of motion EOM2 and evaluate $Q(t) = \langle\phi(t)|\hat Q|\phi(t)\rangle$. The strength function can then be obtained from the Fourier transform of Q(t) (see for example ref. [22]), for sufficiently large T such that $e^{-\Gamma T}$ is negligible, via the relation of eq. (46); a toy numerical illustration of this step is sketched below. Alternative methods for the determination of strength functions can be found in refs. [23] and [24]. In eqs. (45) and (46) the ground state is replaced by the static solution evaluated with high accuracy; only in this case can we safely guarantee that the response of eq. (46) is proportional to η.

The results are shown in figs. (2) and (3). With 3 Slater determinants we obtained the results of figs. (6) and (7): the main peak at $E \simeq 16.7\,$MeV shows considerable fragmentation around 15 MeV, while the secondary peak at 7.5 MeV is nearly unchanged; the peak around 27 MeV has nearly disappeared and has moved to lower excitation energies. Similar plots, using 5 Slater determinants, are shown in figs. (8) and (9). We also considered larger numbers of Slater determinants, $N_w = 15$ and $N_w = 25$, although for smaller values of T. In these latter cases some high-frequency oscillations still remain. The corresponding strength functions are shown in figs. (10) and (11); note that the structure of the strength function has changed considerably. To some extent, a simple remedy to the lack of the continuum is to increase the width Γ. In fig. 12 we compare the monopole strength functions for $N_w = 1, 25, 35$ evaluated with $\Gamma = 3\,$MeV. This comparison gives an idea, although with low energy resolution, of the degree of convergence as we increase the number of Slater determinants. Some discrepancy between $N_w = 25$ and $N_w = 35$ still remains, but the shapes are very similar. The TDHF result, instead, is different. A possible cause of the discrepancy between the TDHF strength and the ones for $N_w = 25$ and $N_w = 35$ is the angular-momentum content of the wave functions. In this work we did not project the wave functions onto good angular momentum. Since the Slater determinants break rotational symmetry, we do not expect the wave functions to have good angular momentum, especially for a small number of Slater determinants. We have checked the expectation values of $J^2$ for $N_w = 1, 25, 35$. The results are $\langle J^2\rangle_{N_w=1} = 6.94$, $\langle J^2\rangle_{N_w=25} = 4.54$ and $\langle J^2\rangle_{N_w=35} = 4.21$, instead of the exact value $\langle J^2\rangle = 2$; let us recall that the ground state of $^{6}$Li has J = 1, so that the exact value is $\langle J^2\rangle = J(J+1) = 2$. For some recent works that take into account the continuum in the TDHF and TDHFB approximations, see for example refs. [26], [27].
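The following toy sketch (our own construction, not the code of this work: the Hamiltonian, the operator, η, Γ and T are all arbitrary) makes eqs. (45)–(46) concrete. It evolves the boosted state exactly in the eigenbasis of a small Hermitian Hamiltonian, builds Q(t), and recovers a Lorentzian-smoothed strength function from the damped Fourier transform of Q(t), analogous to the role of the finite width Γ used for fig. 12.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 6
M = rng.normal(size=(dim, dim)); H = 0.5 * (M + M.T)   # toy Hermitian Hamiltonian
N = rng.normal(size=(dim, dim)); Q = 0.5 * (N + N.T)   # Hermitian "one-body" operator
E, V = np.linalg.eigh(H)
g = V[:, 0]                                            # ground state |0>

eta, Gamma, T, dt = 1e-4, 0.3, 200.0, 0.01
psi0 = g + 1j * eta * (Q @ g)                          # e^{i eta Q}|0>, linear in eta
psi0 /= np.linalg.norm(psi0)

# Exact evolution in the eigenbasis and the boost response Q(t).
ts = np.arange(0.0, T, dt)
Qe = V.T @ Q @ V
c0 = V.T @ psi0
Qt = np.empty(ts.size)
for k, t in enumerate(ts):
    c = c0 * np.exp(-1j * E * t)
    Qt[k] = (c.conj() @ Qe @ c).real
dQ = Qt - Qt[0]                 # linear response: ~ 2*eta*sum_n q_n^2 sin(w_n t)

# Damped Fourier transform -> Lorentzian-smoothed strength function.
Es = np.linspace(0.0, 6.0, 300)
S_boost = np.array([np.sum(np.sin(w * ts) * np.exp(-Gamma * ts) * dQ) * dt
                    for w in Es]) / (np.pi * eta)

# Reference: exact transition strengths smoothed with the same width Gamma
# (the small negative-frequency tail is kept for an honest comparison).
w_n = E[1:] - E[0]
q2 = (V[:, 1:].T @ Q @ g) ** 2
lor = lambda x: (Gamma / np.pi) / (x ** 2 + Gamma ** 2)
S_exact = np.array([np.sum(q2 * (lor(w - w_n) - lor(w + w_n))) for w in Es])
print(np.max(np.abs(S_boost - S_exact)))   # small: discretization error only
```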
Our main goal in this work has been to define the time-dependent method, to solve the equations of motion, and to verify our computer programs. More applications will be presented in future work.
2013-06-10T10:42:56.000Z
2012-08-01T00:00:00.000
{ "year": 2012, "sha1": "33bfc73dd6adb347726b5d066c62e76518afb66c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1208.0122", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "33bfc73dd6adb347726b5d066c62e76518afb66c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
5415936
pes2o/s2orc
v3-fos-license
Immune Checkpoint Inhibitors in Melanoma and HIV Infection

Introduction: Immunotherapy with immune checkpoint inhibitors increases the overall survival of patients with metastatic melanoma regardless of their oncogene-addicted mutations. However, no clinical trial data are available on effective therapies for the subgroups of melanoma patients who carry chronic infectious diseases such as HIV. Evidence suggests a key role of immune checkpoint molecules as a mechanism of immune escape, not only for melanoma but also for HIV, from the host immune response. Conclusion: In this article, firstly, we will describe the role of the immune checkpoint molecules in chronic HIV infection. Secondly, we will summarize the most relevant clinical evidence on the use of immune checkpoint inhibitors for the treatment of melanoma patients. Lastly, we will discuss the potential implications as well as the potential applications of immune checkpoint molecule-based immunotherapy in patients with melanoma and HIV infection.

BACKGROUND

Melanoma is an aggressive form of skin cancer characterized by poor prognosis and high mortality. In Europe, about 100,000 new cases of melanoma are diagnosed every year, and the incidence of melanoma is continuously increasing [1]. Patients with immunodeficiency, and especially those affected by chronic Human Immunodeficiency Virus (HIV) infection, are characterized by a higher risk of tumor development, including melanoma. The incidence of melanoma in HIV-infected patients is 2.6-fold higher than in non-HIV patients [2]. This increased incidence reflects both a decreased efficiency of the host immune response in eliminating potentially malignant cells and an improvement in the treatment of HIV patients owing to the development of new antiretroviral agents. The latter prolong the survival of infected patients [3], increasing the time spent in immunodeficiency and thereby the possibility of tumor development. Moreover, in HIV patients melanoma shows a more aggressive phenotype and poorer survival outcomes than in non-HIV patients [4]. The implementation of monoclonal antibodies (mAbs) inhibiting the interaction of immune-suppressive checkpoint molecules with their ligands has dramatically changed the clinical course of cancer patients, including those with melanoma. The administration of mAbs targeting immune checkpoint molecules such as Cytotoxic T Lymphocyte Antigen-4 (CTLA-4) and Programmed Death-1 (PD-1) significantly increases the overall survival (OS) of metastatic melanoma patients [5]. In addition, novel mAbs targeting different immune checkpoint molecules, such as Programmed Death Ligand-1 (PD-L1), T-cell immunoglobulin and mucin-domain containing-3 (TIM-3), Lymphocyte Activation Gene-3 (LAG-3) and T cell immunoreceptor with Ig and ITIM domains (TIGIT), are now being tested in promising clinical trials, alone or in combination with CTLA-4 or PD-1 inhibitors. Nevertheless, scant information is available about the efficacy and safety of these therapeutic strategies in HIV-infected melanoma patients. Indeed, HIV-infected melanoma patients are currently excluded from novel clinical trials because of their immunodeficient status, the potential drug interactions, and the possible effects of HIV infection on the safety and activity of the investigational agents. These findings, and the lack of curative therapy for HIV-infected melanoma patients with metastatic disease, emphasize the urgent need to define novel effective therapies for this subgroup of melanoma patients.
In vitro and in vivo evidence suggests a major role of immune checkpoint molecules in the pathogenesis and clinical progression of HIV infection. PD-1/PD-L1, CTLA-4, TIM-3, LAG-3 and TIGIT are expressed at higher levels on the lymphocytes of HIV-positive patients than on those of HIV-negative subjects [6-12]. However, the role of immune checkpoint molecules, as well as the potential application of immune checkpoint-targeting strategies in HIV disease, still needs to be better defined. In this article, firstly, we will describe the role of CTLA-4, PD-1, PD-L1, TIM-3, LAG-3 and TIGIT during HIV infection. Secondly, we will summarize the most relevant clinical evidence on the use of immune checkpoint blockade for the treatment of metastatic melanoma patients. Lastly, we will discuss the potential implications as well as the potential applications of immune checkpoint-based immunotherapy in patients with melanoma and HIV.

ROLE OF IMMUNE CHECKPOINT MOLECULES IN HIV INFECTION

Many in vitro and in vivo studies have been performed to define the interactions between HIV disease and immune checkpoint molecules. PD-1, PD-L1, CTLA-4, TIM-3, LAG-3 and TIGIT have been implicated in chronic viral persistence and are commonly used as markers of exhausted T cells during HIV infection (Fig. 1) [6-12]. In addition, T-cell exhaustion markers such as PD-1, TIM-3 and LAG-3, measured prior to antiretroviral therapy, are strong predictors of the time to viremia rebound [9]. CTLA-4 gene polymorphisms and their involvement in chronic viral infection were described for the first time in Hepatitis B Virus (HBV) infection [13]. In HIV-infected subjects, CTLA-4 expression is significantly higher on CD4+ T cells than on cells from normal donors. Furthermore, CTLA-4 levels are negatively correlated with both the CD4+ T cell number and the CD4/CD8 ratio, while they are positively correlated with HIV viral load and disease progression [14,15]. CTLA-4 is also expressed by HIV-specific CD4+ T cells, although its levels change with the timing of HIV infection [14-16]. Specifically, CTLA-4 upregulation on CD4+ T cells is followed by its downregulation during disease progression. CTLA-4 downregulation is mediated by the Negative Regulatory Factor (Nef), a protein involved in HIV survival and viral replication in T cells [16]. The PD-1/PD-L1 axis can also modulate the HIV-specific T cell response, although contrasting data have been reported in the literature about the correlation of PD-1 expression with the number of CD4+ T cells, HIV viral load and disease progression. PD-1 is overexpressed on both CD4+ and CD8+ T cells of HIV patients, whose CD4+ and CD8+ T cells express significantly higher levels of PD-1 than cells from normal donors [17,18]. In addition, PD-1 levels are negatively correlated with the CD4+ T cell number as well as with the CD4/CD8 ratio, while they are positively correlated with both HIV viral load and disease progression [17-20]. PD-1 levels on CD4+ T cells are also negatively associated with viral replication in vivo [21], although Chomont et al. reported that infected CD4+ T cells co-expressing PD-1 might represent a major reservoir of HIV [22]. Lastly, PD-L1 is significantly elevated on monocytes and B cells in the peripheral blood of HIV-infected individuals as compared to HIV-negative controls. Its expression negatively correlates with the number of CD4+ T cells, and its levels are associated with both viral load and disease progression [23].
Several mechanisms can regulate PD-1 and PD-L1 expression in T cells from HIV-infected patients. The common gamma-chain cytokines, including IL-2, IL-7, IL-15 and IL-21, upregulate both PD-1 and PD-L1 in vitro [24]. In addition, the accessory HIV protein Nef upregulates PD-1 through a p38 MAPK-dependent mechanism [25]. The immune checkpoints TIM-3, LAG-3 and TIGIT have also been investigated in the pathogenesis of HIV. TIM-3 expression on CD8+ T cells is increased in HIV patients as compared to uninfected subjects. Furthermore, TIM-3 upregulation positively correlates with HIV viral load and CD38 expression, while it is negatively associated with the CD4+ T cell number [26]. Co-expression of TIM-3 and PD-1 is associated with a more severe exhaustion of T cells during HIV infection in vitro [27]. The ligand of TIM-3, galectin-9, is rapidly released during acute HIV infection, and galectin-9/TIM-3 crosstalk contributes to persistent T cell dysfunction [28]. In contrast to these data, Hoffmann et al. showed that TIM-3 expression might be a protective biomarker in some infected subjects because of its association with delayed HIV disease progression [12]. LAG-3 expression on CD8+ T cells is associated with HIV plasma viral load, but not with the number of CD4+ T cells [12]. Upregulation of LAG-3 on both CD4+ and CD8+ T cells is correlated with HIV disease progression, and prolonged antiretroviral therapy can reduce its expression. In addition, overexpression or stimulation of LAG-3 on T cells leads to a reduction of T cell responses [29]. TIGIT is upregulated on CD8+ T cells during HIV infection, and the co-expression of PD-1 and TIGIT positively correlates with HIV disease progression [10]. Tauriainen et al. showed that increased TIGIT expression in vitro correlates with a decreased functional capacity of HIV-specific CD8+ T cells [30]. Lastly, LAG-3 and TIGIT, alone or in combination with PD-1, positively correlate with an increased number of CD4+ T cells harboring integrated HIV DNA [11]. Few experimental studies have tested the potential applicability of immune checkpoint inhibitors in HIV infection, and contrasting results have been reported in the literature. Blockade of both PD-1 and CTLA-4 in HIV-1-specific CD4+ and CD8+ T cells leads to a recovery of cell proliferation and cytokine production in vitro [19]. Furthermore, CTLA-4 blockade by an anti-CTLA-4 mAb increases CD4+ T cell proliferation and augments HIV-specific CD4+ T cell function in vitro [14,15]. In a simian immunodeficiency virus (SIV)-infected macaque model, administration of an anti-CTLA-4 mAb has been reported to decrease viral replication in infected subjects, but also to be associated with increased viral replication at mucosal sites and with no benefit in terms of plasma viral load and survival [31,32]. In the clinical setting, only a few case reports are available. Wightman et al. recently showed that treatment with an anti-CTLA-4 mAb in a metastatic melanoma patient could reactivate HIV from latency [33]. Sabbatino et al. reported a melanoma tumor response, associated with decreased viral replication and an increased number of CD4+ T cells, in a patient with both HIV infection and metastatic melanoma treated with antiretroviral therapy and an anti-CTLA-4 mAb [34]. Blockade of PD-1 has also been investigated in chronic viral infection.
In vitro blockade of PD-1 in patients affected by HBV leads to increased T cell survival as well as increased cytokine production, especially in patients with HIV co-infection [35]. Moreover, Trautmann et al. reported that PD-1 blockade enhances the capacity of HIV-specific CD8+ T cells to survive and proliferate, leading to an increased production of cytokines and cytotoxic molecules in response to cognate antigen in vitro [19]. Furthermore, in vitro stimulation of CD28 in combination with PD-1 blockade synergistically increases HIV-specific CD4+ T cell proliferation [8]. Lastly, in a SIV-infected macaque model, blockade of PD-1 by an anti-PD-1 mAb increases the number of virus-specific CD4+ T cells and memory B cells, as well as the levels of envelope-specific antibodies. These immunological effects were accompanied by an absence of side effects and a significant increase in the OS of the treated SIV-infected macaques [36]. Besides CTLA-4 and PD-1, blockade of other checkpoint molecules has also been tested in chronic viral infection. In vitro blockade of the TIM-3 signaling pathway enhances the cytotoxic capabilities of HIV-specific CD8+ T cells from chronic progressors, increasing their functions and their ability to suppress HIV infection of CD4+ T cells [37]. Furthermore, ex vivo blockade of LAG-3 significantly augments HIV-specific CD4+ and CD8+ T cell responses [29]. Lastly, in vivo combinatorial blockade of PD-L1 and TIGIT restores the viral-specific CD8+ T cell effector response [10].

IMMUNE CHECKPOINT INHIBITORS IN MELANOMA

The introduction of immune checkpoint inhibitors into the clinical setting has drastically changed the survival of metastatic melanoma patients. Several mAbs have been developed to inhibit the interaction of the immune regulatory checkpoint molecules CTLA-4 and PD-1 with their ligands, CD80 or CD86 and PD-L1 or PD-L2, respectively [38]. As a result, T cells can proliferate and elicit the host immune response against cancer cells. Ipilimumab (Yervoy, Bristol-Myers Squibb), a fully human immunoglobulin G1 (IgG1) targeting CTLA-4, was the first mAb to demonstrate a survival benefit in patients with metastatic melanoma. In a Phase III randomized clinical trial (MDX010-020), administration of ipilimumab in combination with the glycoprotein 100 (gp100) peptide increased OS as compared to gp100 vaccination alone (10.1 versus 6.4 months) [39]. In another Phase III randomized clinical trial (CA184-024), ipilimumab in combination with dacarbazine demonstrated a significantly longer OS than dacarbazine alone (11.2 versus 9.1 months) [40]. An updated analysis has confirmed the survival benefit of ipilimumab in metastatic melanoma patients, showing a 5-year survival rate of 18.2% for patients treated with ipilimumab plus dacarbazine as compared to 8.8% for patients treated with placebo plus dacarbazine [41]. Moreover, administration of ipilimumab at 10 mg/kg significantly increased the OS of melanoma patients as compared to the standard dose of 3 mg/kg (15.7 versus 11.5 months) [42]. However, the relevant results obtained with ipilimumab have been tempered by the publication of the clinical trial data for the anti-PD-1 mAbs nivolumab (Opdivo, Bristol-Myers Squibb) and pembrolizumab (Keytruda, Merck) in metastatic melanoma patients.
In a Phase III randomized clinical trial (CheckMate 066), administration of nivolumab, a fully human IgG4 anti-PD-1, improved the 1- and 2-year OS rates as compared to standard chemotherapy with dacarbazine in previously untreated patients with metastatic melanoma without BRAF mutation (73.0% versus 41.0% at 1 year, and 56.7% versus 26.7% at 2 years) [43,44]. In another Phase III randomized clinical trial (CheckMate 037), nivolumab demonstrated a higher overall response rate (ORR) than investigator's-choice chemotherapy in patients with metastatic melanoma who had experienced disease progression following anti-CTLA-4 or BRAF inhibitor treatment (31.7% vs 10.6%) [45]. In a Phase II randomized clinical trial (KEYNOTE-002), administration of pembrolizumab, a humanized IgG4 anti-PD-1, at two different doses was compared to investigator's-choice chemotherapy in metastatic melanoma patients who had experienced disease progression after treatment with ipilimumab and/or a BRAF inhibitor and/or a MEK inhibitor. The 6-month progression-free survival (PFS) rate was 34% and 38% for pembrolizumab at 2 and 10 mg/kg, respectively, while it was only 16% for the chemotherapy group [46]. Median OS was 13.4 and 14.7 months for 2 and 10 mg/kg of pembrolizumab, respectively, while it was 11.0 months for chemotherapy. Eighteen-month OS rates were 40%, 44% and 36%, and 24-month rates were 36%, 38% and 30% [47]. Lastly, in a Phase III randomized clinical trial (KEYNOTE-006), pembrolizumab, at two different schedules of treatment (10 mg/kg every two or three weeks), demonstrated an improvement in PFS (12-month PFS 39% and 38% versus 19%; 24-month PFS 31% and 28% versus 14%) and OS (1-year OS rate 74% and 68% versus 59%; 2-year OS rate 55% and 55% versus 43%) as compared to ipilimumab alone [48,49]. Immune checkpoint blockade using anti-PD-L1 mAbs is another promising approach for the treatment of melanoma patients with metastatic disease. BMS-936559 (Bristol-Myers Squibb), a fully human IgG4, was the first anti-PD-L1 mAb to show objective tumor responses in patients with solid tumors [50]. In addition, anti-PD-L1 mAbs such as atezolizumab (MPDL3280A, Roche Genentech), durvalumab (MEDI4736, AstraZeneca) and avelumab (MSB0010718C, EMD Serono/Merck KGaA/Pfizer) are currently being tested in clinical trials, and their results are expected soon. Moreover, several ongoing clinical trials are now testing blockade of different checkpoint molecules, such as LAG-3, TIM-3 or TIGIT, alone or in combination with anti-PD-1/PD-L1 drugs, in metastatic melanoma patients. Both anti-CTLA-4 and anti-PD-1/PD-L1 mAbs are revolutionizing the clinical approach to melanoma patients regardless of mutational status. However, as described above, the efficacy of this novel immunotherapeutic strategy is limited to at most about 40% of treated patients, and there is still a need to identify potential predictive biomarkers of treatment response. Several biomarkers are under investigation, including PD-L1, PD-L2, FAS, HLA class I and HLA class II antigen expression; expression of the immune checkpoints LAG-3, TIM-3, IDO, OX40, CD137 and CD40; tumor-infiltrating lymphocytes (TIL); CD4+, CD8+, granzyme B+, CD56+ and FOXP3+ cells; the secreted molecules IL-2, IFN-γ, IL-10, IL-4, CXCL9, CXCL10 and CCL5; cancer cell mutational load; antigenic peptide expression; gene expression; and TCR signaling analysis.
So far, none of these biomarkers, including PD-L1 expression, has been shown to play a major role as a predictive biomarker of response to immune checkpoint molecule-based immunotherapy. In a large meta-analysis summarizing the results of clinical trials utilizing anti-PD-1 mAbs in malignant diseases including melanoma, PD-L1 expression by tumor cells correlated with the ORR. However, clinical responses and increased OS were reported in both PD-L1-positive and PD-L1-negative tumors [51]. As a result, PD-L1 expression might be used to identify patients who benefit more from anti-PD-1-based immunotherapy, but it cannot be used to exclude patients from this type of treatment. The combination of different immune checkpoint inhibitors is currently being tested as an alternative strategy to increase the ORR and OS of treated melanoma patients. In a randomized Phase III clinical trial (CheckMate 067), administration of ipilimumab plus nivolumab was compared to either single agent alone (ipilimumab or nivolumab) in previously untreated melanoma patients. The median PFS was 11.5 months in the nivolumab-plus-ipilimumab group, as compared to 6.9 months for nivolumab alone and 2.9 months for ipilimumab alone [52]. The median OS had not been reached in the nivolumab-plus-ipilimumab group and was 37.6 months in the nivolumab group and 19.9 months in the ipilimumab group. The 3-year OS rate was higher in the nivolumab-plus-ipilimumab group than in the nivolumab and ipilimumab groups (58% versus 52% and 34%, respectively). However, a higher rate of grade 3-4 immune-related toxicities was reported in the combination group as compared to the single agents alone (59% in the nivolumab-plus-ipilimumab group, 21% in the nivolumab group and 28% in the ipilimumab group) [53]. Lastly, immune checkpoint inhibitors have been tested not only in the metastatic setting, but also as an adjuvant strategy following surgery in high-risk melanoma patients. In a Phase III randomized clinical trial, the anti-CTLA-4 mAb ipilimumab increased the 5-year rates of recurrence-free survival (40.8% versus 30.3%), OS (65.4% versus 54.4%) and distant metastasis-free survival (48.3% versus 38.9%) as compared to placebo in high-risk stage III melanoma patients [54]. However, in this case too, side effects were not negligible. Recently, another Phase III randomized clinical trial compared 1-year administration of nivolumab with ipilimumab in completely resected stage III-IV melanoma patients. Nivolumab significantly improved the 12-month rate of recurrence-free survival as compared to ipilimumab (70.5% versus 60.8%), with a reduced incidence of treatment-related grade 3-4 adverse events (14.4% versus 45.9%) [55].

CONCLUSION

The implementation of immune checkpoint-based immunotherapy is completely revolutionizing the clinical approach to cancer patients. In melanoma, a tumor that for many years showed a high death rate because of its resistance to standard therapy, administration of both anti-CTLA-4 and anti-PD-1 mAbs significantly increases the response rates, PFS and OS of treated patients [38-53]. Nevertheless, in cancer patients carrying a chronic viral infection such as HIV, the anti-tumor activity of these molecules has not been extensively evaluated, and clinical trials are still warranted. Some clinical cases have shown clinical and immunological responses to checkpoint inhibitors in melanoma patients with HIV, HBV or Hepatitis C Virus (HCV) infections [33, 34, 56-60].
Moreover, in a retrospective analysis of 44 patients affected by metastatic tumors (including 29 melanoma patients) and concurrent solid organ transplant or HIV, HBV or HCV infection, the administration of anti-PD-1/PD-L1 mAbs appeared to have clinical activity without adverse effects on viral control [61]. Recently, another retrospective study evaluated the efficacy of immune checkpoint blockade in metastatic melanoma patients with concomitant HIV infection, reporting similar results [62]. Globally, these data provide evidence of the efficacy of immune checkpoint inhibitors as a treatment for melanoma and HIV disease. As previously described, HIV infection plays a crucial role in determining the worse prognosis of HIV-infected cancer patients because of the induction of a chronic and progressive immunodeficient status [4]. The latter causes an inability to mount an effective host immune response, and the persistent and/or progressive expression of different immune checkpoint molecules (CTLA-4, PD-1/PD-L1, TIM-3, LAG-3, TIGIT) ultimately leads to an immune-exhausted phenotype [63-65]. Ideally, treatment of metastatic cancer in patients with HIV should not further compromise immune competence, interact adversely with antiretroviral agents, or increase the risk of tumor development. This hypothesis is supported by the results of two recently published Phase I/II clinical trials. In the first trial, patients with advanced hepatocellular carcinoma, including those affected by HCV and HBV infection, were treated with nivolumab; the results demonstrate that infected patients had outcomes similar to those of non-infected subjects in terms of tumor response and safety profile [66]. In the second trial, patients with squamous cell carcinoma of the anal canal were treated with nivolumab, and a sub-analysis confirmed the efficacy and safety of nivolumab in both HIV-negative and HIV-positive patients [67,68]. However, these preliminary data refer to small cohorts of cancer patients with chronic viral infections and have to be interpreted cautiously. Several Phase I/II clinical trials (NCT02408861, NCT03304093, NCT02595866) testing the administration of checkpoint inhibitors, alone or in combination, in patients with HIV and advanced solid tumors are currently ongoing. Their results will shed some light on the efficacy and safety of this therapeutic strategy in this subgroup of cancer patients, which has so far been excluded from clinical trials. In conclusion, there is an urgent need to design new clinical trials in order to determine the effectiveness of treatment with checkpoint molecule inhibitors for HIV-infected and/or HIV-related cancer patients.

AUTHORS' CONTRIBUTION

FS and AM conceived and designed the work. All authors contributed to writing the manuscript. All authors read and approved the final manuscript.

CONSENT FOR PUBLICATION

Not applicable.

CONFLICT OF INTEREST

PA has/had a consultant/advisory role for BMS, Roche-Genentech, MSD, Novartis, Ventana, Amgen, and Array. He has also received research grants from BMS, Roche-Genentech, Ventana, and Array. SP received research grants from Roche-Genentech and AstraZeneca.
2017-11-15T00:18:21.541Z
2017-11-14T00:00:00.000
{ "year": 2017, "sha1": "c7ea24e457af19f6e5aad7733627bc8b83fc9e1b", "oa_license": "CCBY", "oa_url": "https://openaidsjournal.com/VOLUME/11/PAGE/91/PDF/", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c7ea24e457af19f6e5aad7733627bc8b83fc9e1b", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
219045146
pes2o/s2orc
v3-fos-license
A Unified Approach to Scalable Spectral Sparsification of Directed Graphs

Recent spectral graph sparsification research allows constructing nearly-linear-sized subgraphs that can well preserve the spectral (structural) properties of the original graph, such as the first few eigenvalues and eigenvectors of the graph Laplacian, leading to the development of a variety of nearly-linear time numerical and graph algorithms. However, there is not a unified approach that allows for truly-scalable spectral sparsification of both directed and undirected graphs. In this work, we prove the existence of linear-sized spectral sparsifiers for general directed graphs and introduce a practically-efficient and unified spectral graph sparsification approach that allows sparsifying real-world, large-scale directed and undirected graphs with guaranteed preservation of the original graph spectra. By exploiting a highly-scalable (nearly-linear complexity) spectral matrix perturbation analysis framework for constructing nearly-linear-sized (directed) subgraphs, it enables us to well preserve the key eigenvalues and eigenvectors of the original (directed) graph Laplacians. The proposed method has been validated using various kinds of directed graphs obtained from public-domain sparse matrix collections, showing promising results for solving directed graph Laplacians, spectral embedding and partitioning of general directed graphs, and approximately computing (personalized) PageRank vectors.

INTRODUCTION

Many research problems for simplifying large graphs leveraging spectral graph theory have been extensively studied by mathematics and theoretical computer science (TCS) researchers in the past decade [1,7,8,16,19,24,27]. Recent spectral graph sparsification research allows constructing nearly-linear-sized subgraphs (i.e., subgraphs in which the number of edges is close to the number of nodes) that can well preserve the spectral (structural) properties of the original graph, such as the first few eigenvalues and eigenvectors of the graph Laplacian. The related results can potentially lead to the development of a variety of nearly-linear time numerical and graph algorithms for solving large sparse matrices and partial differential equations (PDEs), graph-based semi-supervised learning (SSL), computing the stationary distributions of Markov chains and personalized PageRank vectors, spectral graph partitioning and data clustering, max flow and multi-commodity flow of undirected graphs, nearly-linear time circuit simulation and verification algorithms, etc. [5,7,8,12,13,15,17,27,28,31,32]. However, there is not a unified approach that allows for truly-scalable spectral sparsification of both directed and undirected graphs.
For example, the state-of-the-art sampling-based methods for spectral sparsification are only applicable to undirected graphs [17,25,28], while the latest algorithmic breakthrough in spectral sparsification of directed graphs [7,8] can only handle strongly-connected directed graphs (directed graphs in which any node can be reached from any other node following the edge directions), which inevitably limits its applications to real-world graphs, since many directed graphs may not be strongly connected, such as the graphs used in chip design automation (e.g. timing analysis) tasks as well as the graphs used in machine learning and data mining tasks. Consequently, there is still a pressing need for the development of highly-robust (theoretically-rigorous) and truly-scalable (nearly-linear complexity) algorithms for reducing real-world, large-scale (undirected and directed) graphs while preserving the key graph spectral (structural) properties. This paper proves the existence of linear-sized spectral sparsifiers for general directed graphs, and introduces a practically-efficient and unified spectral sparsification approach that allows simplifying real-world, large-scale directed and undirected graphs with guaranteed preservation of the original graph spectra. More specifically, we exploit a highly-scalable (nearly-linear complexity) spectral matrix perturbation analysis framework for constructing ultra-sparse (directed) subgraphs that can well preserve the key eigenvalues and eigenvectors of the original graph Laplacians. Unlike the prior state-of-the-art methods that are only suitable for handling specific types of graphs (e.g. undirected or strongly-connected directed graphs [8,25]), the proposed approach is more general and thus will allow for truly-scalable spectral sparsification of a much wider range of real-world complex graphs that may involve billions of elements. The spectrally-sparsified directed graphs constructed by the proposed approach will potentially lead to the development of much faster numerical and graph-related algorithms. For example, spectrally-sparsified social (data) networks allow for more efficient modeling and analysis of large social (data) networks; spectrally-sparsified neural networks allow for more scalable model training and processing in emerging machine learning tasks; spectrally-sparsified web-graphs allow for much faster computation of personalized PageRank vectors; and spectrally-sparsified integrated circuit networks will lead to more efficient partitioning, modeling, simulation, optimization and verification of large chip designs. The rest of this paper is organized as follows. Section 2 provides a brief introduction to graph Laplacians and spectral sparsification of directed graphs. In Section 3, a scalable and unified spectral sparsification framework for general graphs is described in detail. Section 4 describes a practically-efficient spectral sparsification approach, while Section 5 introduces potential applications of the proposed graph sparsification framework. Section 6 demonstrates extensive experimental results for a variety of real-world, large-scale directed graphs, which is followed by the conclusion of this work in Section 7.
PRELIMINARIES

2.1 Laplacians for (un)directed graphs

Consider a directed graph G = (V, E_G, w_G), with V denoting the set of vertices, E_G the set of directed edges, and w_G the associated edge weights. In the following, we denote by D_G the diagonal matrix with D_G(i, i) equal to the (weighted) outdegree of node i, and by A_G the adjacency matrix of G; the directed Laplacian matrix L_G can then be constructed from D_G and A_G as in (2) [8]. Let n = |V| and m = |E_G|. Undirected graphs can be converted into equivalent directed graphs by replacing each undirected edge with two opposite directed edges, while for most directed graphs L_G is not a symmetric matrix. It can be shown that any directed (undirected) graph Laplacian constructed using (2) satisfies the following properties: 1) each column (and row) sum is equal to zero; 2) all off-diagonal elements are non-positive; 3) the Laplacian matrix is asymmetric (symmetric) and indefinite (positive semidefinite). A small numerical check of these definitions is sketched at the end of this section.

Spectral sparsification of undirected graphs

Graph sparsification aims to find a subgraph (sparsifier) S = (V, E_S, w_S) that has the same set of vertices as, but many fewer edges than, the original graph G. There are two types of sparsification methods: cut sparsification methods preserve the cuts of the original graph through random sampling of edges [2], whereas spectral sparsification methods preserve the graph's spectral (structural) properties, such as distances between vertices, effective resistances, cuts in the graph, and the stationary distributions of Markov chains [7,8,27]. Spectral graph sparsification is therefore a much stronger notion than cut sparsification. For undirected graphs, spectral sparsification aims to find an ultra-sparse subgraph proxy that is spectrally similar to the original one. G and S are said to be σ-spectrally similar if the condition $\frac{x^\top L_S x}{\sigma} \le x^\top L_G x \le \sigma\, x^\top L_S x$ holds for all real vectors x ∈ R^V, where L_G and L_S denote the symmetric diagonally dominant (SDD) Laplacian matrices of the graphs G and S, respectively. Defining the relative condition number κ(L_G, L_S) = λ_max/λ_min, where λ_max (λ_min) denotes the largest (smallest) eigenvalue of $L_S^+ L_G$ and $L_S^+$ denotes the Moore-Penrose pseudoinverse of L_S, it can further be shown that κ(L_G, L_S) ≤ σ², implying that a smaller relative condition number, or σ², corresponds to a higher (better) spectral similarity between the two graphs.

Spectral sparsification of directed graphs

Significant progress has been made in the spectral analysis of directed graphs in [6], which for the first time proved the Cheeger inequality for directed graphs and showed the connection between directed graph partitioning and the smallest (nontrivial) eigenvalue of the directed Laplacian. More specifically, the transition probability matrix and the stationary distributions of Markov chains were exploited for constructing undirected Laplacians for strongly-connected directed graphs. The latest algorithmic breakthrough in spectral sparsification for strongly-connected directed graphs builds on the results of [6]: it proposes to first convert strongly-connected graphs into Eulerian graphs via Eulerian scaling, and subsequently to sparsify the undirected graphs obtained via Laplacian symmetrization [6] by leveraging the existing spectral graph theory for undirected graphs [8].
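A small numerical sketch of these definitions (our own toy example in Python; since (2) is referenced but not reproduced above, the orientation of the adjacency transpose is an assumption chosen so that property 1) holds as stated): it builds a toy directed Laplacian, checks properties 1)–3), and computes the relative condition number κ(L_G, L_S) for an undirected toy pair.

```python
import numpy as np

# Toy directed graph on 4 nodes: A[i, j] = weight of the edge i -> j.
A = np.array([[0, 1, 2, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 3],
              [1, 0, 0, 0]], dtype=float)

D = np.diag(A.sum(axis=1))   # weighted out-degrees
L_G = D - A.T                # assumed orientation of (2): columns sum to zero

# Properties 1)-3): zero column sums, non-positive off-diagonals, asymmetry.
print(np.allclose(L_G.sum(axis=0), 0.0))
print(np.all((L_G - np.diag(np.diag(L_G))) <= 0.0))
print(not np.allclose(L_G, L_G.T))

# Undirected case: relative condition number kappa = lambda_max / lambda_min
# of pinv(L_S) @ L_G for a toy undirected graph and a subgraph of it.
Au = np.maximum(A, A.T)                          # undirected toy graph
Lu = np.diag(Au.sum(axis=1)) - Au
Au_S = Au.copy(); Au_S[0, 2] = Au_S[2, 0] = 0.0  # drop one undirected edge
Lu_S = np.diag(Au_S.sum(axis=1)) - Au_S
lam = np.linalg.eigvals(np.linalg.pinv(Lu_S) @ Lu).real
lam = lam[lam > 1e-9]
print(lam.max() / lam.min())                     # closer to 1 = more similar
```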
It has been shown that such an approach for directed graphs can potentially lead to the development of almost-linear-time algorithms for solving asymmetric linear systems, computing the stationary distribution of a Markov chain, computing expected commute times in a directed graph, etc. [8]. For directed graphs, the subgraph S can be considered spectrally similar to the original graph G if the condition number, i.e. the ratio between the largest and smallest singular values of $L_S^+ L_G$, is close to 1 [7,8]. Since the singular values of $L_S^+ L_G$ correspond to the square roots of the eigenvalues of $(L_S^+ L_G)^\top (L_S^+ L_G)$, spectral sparsification of directed graphs is equivalent to finding an ultra-sparse subgraph S such that the condition number of $(L_S^+ L_G)^\top (L_S^+ L_G)$ is small enough.

A UNIFIED SPARSIFICATION FRAMEWORK

3.1 Overview of our approach

We introduce a unified spectral graph sparsification framework that allows handling both directed and undirected graphs in nearly-linear time. The core idea of our approach is to leverage a novel spectrum-preserving Laplacian symmetrization procedure to convert directed graphs into undirected ones (as shown in Figure 1). Then, existing spectral sparsification methods for undirected graphs [1,12,13,19,27] can be exploited for directed graph spectral sparsification tasks. Our approach for symmetrizing directed graph Laplacians is motivated by the following fact: the eigenvalues of $(L_S^+ L_G)^\top (L_S^+ L_G)$ will always correspond to the eigenvalues of $(L_S L_S^\top)^+ (L_G L_G^\top)$ under the condition that L_G and L_S are diagonalizable. It can be shown that $L_G L_G^\top$ and $L_S L_S^\top$ can be considered as special graph Laplacian matrices corresponding to undirected graphs that may contain negative edge weights. Consequently, as long as a directed subgraph S can be found such that the undirected graphs corresponding to $L_S L_S^\top$ and $L_G L_G^\top$ are spectrally similar to each other, the subgraph S can be considered spectrally similar to the original directed graph G. Unlike the recent theoretical breakthrough in directed graph sparsification [7,8], our approach does not require the underlying directed graphs to be strongly connected, and thus can be applied to a much wider range of large-scale, real-world problems, such as the neural networks adopted in many machine learning and data mining applications [14,30], the directed graphs (e.g. timing graphs) used in various circuit analysis and optimization tasks [22], etc. In the following, assume that G = (V, E_G, w_G) is a weighted directed graph, whereas S = (V, E_S, w_S) is its initial spectral sparsifier (subgraph), such as a spanning subgraph. Define $L_{G_u} = L_G L_G^\top$ and $L_{S_u} = L_S L_S^\top$ to be the undirected graph Laplacians obtained through the proposed symmetrization procedure for G and S.

Spectrum-preserving symmetrization

Performing singular value decomposition (SVD) on L_G leads to $L_G = \sum_{i=1}^{n} \sigma_i\, \zeta_i\, \eta_i^\top$, where ζ_i and η_i are the left and right singular vectors of L_G, respectively (the pseudoinverse of L_G is accordingly $L_G^+ = \sum_{\sigma_i > 0} \sigma_i^{-1}\, \eta_i\, \zeta_i^\top$). It should be noted that ζ_i and η_i with i = 1, ..., n span the eigenspaces of $L_G L_G^\top$ and $L_G^\top L_G$, respectively. Since the eigenspace related to the outgoing edges of directed graphs needs to be preserved, we will focus only on the Laplacian symmetrization matrix $L_{G_u} = L_G L_G^\top$, which is also a symmetric positive semidefinite (SPS) matrix.

Theorem 3.1.
For any directed Laplacian L_G, the undirected graph Laplacian L_Gu obtained after symmetrization has the all-one vector in its null space and corresponds to an undirected graph that may include negative edge weights.

Proof. Each element (i, j) of L_Gu can be written out explicitly in terms of the entries of L_G, as in (3). Since every column of L_G sums to zero, we have $L_G^\top \mathbf{1} = 0$ and therefore $L_{G_u}\mathbf{1} = L_G L_G^\top \mathbf{1} = 0$, which indicates that the all-one vector lies in the null space of L_Gu. For directed graphs, it can be shown that if a node has more than one outgoing edge, then in the worst case the neighboring nodes pointed to by these outgoing edges will form a clique, possibly with negative edge weights, in the corresponding undirected graph after symmetrization. As an example, in Figure 2, when edge e2 is added to the initial graph G that includes the single edge e1, an extra edge (shown as a red dashed line) coupling with e1 is created in the resultant undirected graph G_u; similarly, when edge e3 is further added, two extra edges coupling with e1 and e2 are created in G_u. When the last edge e4 is added, a clique is formed. It can be shown that G_u contains negative edge weights under a simple condition on the edge weights; in some cases no clique appears even when all the outgoing edges of a node are added to the subgraph, because the weights of the corresponding edges in G_u cancel to zero.

Existence of linear-sized spectral sparsifiers

It has been shown that every undirected graph with positive edge weights has a Twice-Ramanujan spectral sparsifier with positive edge weights that spectrally approximates the original graph [1,19]. In this work, we extend this theory to deal with the undirected graphs obtained through the proposed Laplacian symmetrization procedure, which may introduce negative weights.

Theorem 3.2. For a given directed graph G, there exists a directed subgraph S whose undirected graph after symmetrization satisfies the (1 + ϵ)-approximation condition (6) for any x ∈ R^n.

We will need the following lemma (Lemma 3.3) to prove our theorem.

Lemma 3.3. Let ϵ > 0, and let u_1, u_2, ..., u_m denote a set of vectors in R^n admitting the identity decomposition $\sum_{i=1}^{m} u_i u_i^\top = I_{n\times n}$, where $I_{n\times n} \in R^{n\times n}$ denotes the identity matrix. Then there exists an $O(m/\epsilon^{O(1)})$-time algorithm [19] that can find non-negative coefficients $\{t_i\}_{i=1}^{m}$ with $|\{t_i \mid t_i > 0\}| = O(n/\epsilon^2)$ such that, for any x ∈ R^n, $(1-\epsilon)^2\, x^\top x \le \sum_i t_i\,(u_i^\top x)^2 \le (1+\epsilon)^2\, x^\top x$.

Proof. Any directed graph Laplacian can be written in terms of the edge-vertex incidence matrix $B_{m\times n}$, the injection matrix $C_{m\times n}$, and the diagonal matrix $W_{m\times m}$ with W(i, i) = w_i, as in (9). We show how to construct the vectors u_i for i = 1, ..., m in (8), which suffices for proving the existence of linear-sized spectral sparsifiers for directed graphs. Eq. (9) allows writing the undirected Laplacian after symmetrization in terms of an SPS matrix $W_o$; since $W_o$ is SPS, we can always construct a matrix U with u_i, i = 1, ..., m, as its column vectors. U contains all the information about the directed edges in G, and it can be shown that U satisfies an identity-decomposition relation of the form required by Lemma 3.3, where r is the rank of L_Gu. According to Lemma 3.3, we can therefore construct a diagonal matrix $T \in R^{m\times m}$ with t_i as its i-th diagonal element; there will be at most O(n/ϵ²) positive diagonal elements in T, which allows constructing the matrix corresponding to the directed subgraph S achieving the (1 + ϵ)-spectral approximation of G required by (6). It can be shown that each u_i with a nonzero coefficient t_i corresponds to the outgoing edges of the same node.
Consequently, for directed graphs with bounded degrees, there will be O(n/ϵ²) directed edges in total in the (1 + ϵ)-spectral sparsifier S. □

A PRACTICALLY-EFFICIENT FRAMEWORK

Although we have shown that every directed graph has a linear-sized spectral sparsifier, there is no practically-efficient algorithm for constructing such sparsifiers. In this work, we exploit recent spanning-tree-based spectral sparsification frameworks that have been developed for undirected graphs, and propose a practically-efficient algorithm for spectral sparsification of directed graphs.

Initial subgraph sparsifier

The idea of using subgraphs as preconditioners for more efficiently solving linear systems of equations was first introduced in [29], which showed that a maximum-spanning-tree (MST) subgraph can be leveraged as an mn-proxy of the original undirected graph. Recent nearly-linear time spectral sparsification algorithms for undirected graphs exploit similar ideas based on low-stretch spanning-tree subgraphs [11-13]. In this work, we exploit ideas closely related to spanning-tree-based subgraphs as well as to the Markov chains of random walks. The following procedure for constructing the initial subgraph sparsifiers of directed graphs has been developed (a compact illustration is sketched after this subsection): (1) Compute the transition matrix $P_{G_{sym}}$ of the symmetrized graph G_sym, where $L_{G_{sym}}$ and $D_{G_{sym}}$ are the Laplacian and diagonal matrices of G_sym, respectively; (2) Construct an undirected graph G′_sym with $P_{G_{sym}}$ as its adjacency matrix, and find an MST subgraph S_mst of G′_sym; (3) Construct a directed subgraph S_0 according to S_mst, and check every node in S_0: for each node that has at least one outgoing edge in G but none in S_0, include its outgoing edge with the largest weight in S_0; (4) Return the latest subgraph S_0 as the initial spectral sparsifier. Step (3) ensures that the graph Laplacians of the directed graphs G and S_0 share the same rank and nullity. As mentioned above, if a node has more than one outgoing edge, in the worst case the neighboring nodes pointed to by these outgoing edges will form a clique in the corresponding undirected graph after symmetrization. Consequently, when constructing the initial subgraphs from the original directed graphs, it is important to limit the number of outgoing edges per node, so that the resulting undirected graph after Laplacian symmetrization does not become too dense. To this end, emerging graph transformation techniques that allow splitting high-degree nodes into multiple low-degree ones can be exploited. For example, recent research shows that such split (e.g. uniform-degree tree) transformations can dramatically reduce graph irregularity while preserving critical graph connectivity, distances between node pairs, the minimal edge weight on a path, as well as outdegrees and indegrees when using push-based and pull-based vertex-centric programming [23]. In addition, consider the generalized eigenvalue problem $L_{G_u} v_i = \mu_i L_{S_u} v_i$ and the following first-order generalized eigenvalue perturbation problem (14), in which a small perturbation $\delta L_{S_u}$ of $L_{S_u}$ leads to the perturbed generalized eigenvalues and eigenvectors $\mu_i + \delta\mu_i$ and $v_i + \delta v_i$. The task of spectral sparsification of general (un)directed graphs can then be formulated as follows: recover as few extra edges as possible into the initial subgraph S such that the largest eigenvalues, and hence the condition number, of $L_{S_u}^+ L_{G_u}$ are dramatically reduced.
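A compact sketch of steps (1)–(4), using networkx; note this is an illustration under simplifying assumptions rather than the authors' implementation: G_sym is taken here as the undirected union of the directed edges, and the square-root degree normalization stands in for the transition-matrix construction of step (1), whose exact form is not reproduced above.

```python
import numpy as np
import networkx as nx

def initial_subgraph(G: nx.DiGraph) -> nx.DiGraph:
    """Steps (1)-(4) above, in simplified form."""
    # (1)-(2): symmetrize, normalize edge weights by node degrees (a stand-in
    # for the transition matrix P_Gsym), and take a maximum spanning tree.
    Gsym = nx.Graph()
    for u, v, d in G.edges(data=True):
        w = d.get("weight", 1.0) + Gsym.get_edge_data(u, v, {"weight": 0.0})["weight"]
        Gsym.add_edge(u, v, weight=w)
    deg = dict(Gsym.degree(weight="weight"))
    for u, v, d in Gsym.edges(data=True):
        d["weight"] /= np.sqrt(deg[u] * deg[v])
    S_mst = nx.maximum_spanning_tree(Gsym, weight="weight")

    # (3): keep the directed edges of G that lie on the tree ...
    S0 = nx.DiGraph()
    S0.add_nodes_from(G)
    for u, v in S_mst.edges():
        for a, b in ((u, v), (v, u)):
            if G.has_edge(a, b):
                S0.add_edge(a, b, **G[a][b])
    # ... and give every node with outgoing edges in G at least its heaviest
    # outgoing edge in S0, so that ranks and nullities match.
    for u in G:
        if G.out_degree(u) > 0 and S0.out_degree(u) == 0:
            v = max(G.successors(u), key=lambda x: G[u][x].get("weight", 1.0))
            S0.add_edge(u, v, **G[u][v])
    return S0  # (4): the initial spectral sparsifier

G = nx.gnp_random_graph(30, 0.2, directed=True, seed=3)
S0 = initial_subgraph(G)
print(G.number_of_edges(), S0.number_of_edges())  # S0 is much sparser
```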
Expanding (14) and keeping only the first-order terms leads to a linear relation for the eigenvalue perturbations. Since both $L_{G_u}$ and $L_{S_u}$ are SPS matrices, $L_{S_u}$-orthogonal generalized eigenvectors $v_i$, i = 1, ..., n, can be found. By expanding $\delta L_{S_u}$ to first order, the spectral perturbation due to each off-subgraph edge can be expressed explicitly, where $\delta L_S = w_{p,q}\, e_{p,q}\, e_p^\top$ for (p, q) ∈ E_G \ E_S, $e_p \in R^n$ denotes the vector whose p-th element is 1 and all other elements are 0, and $e_{p,q} = e_p - e_q$. The spectral sensitivity $\delta\mu_{p,q}$ of the off-subgraph edge (p, q) can then be computed by (18). Eq. (18) gives the sensitivity of the dominant generalized eigenvalue to the Laplacian perturbation caused by adding each extra off-subgraph edge to S, and can thus be leveraged to rank the spectral importance of each edge. As a result, spectral sparsification of general (un)directed graphs can be achieved by recovering into S only the top few off-subgraph edges with the largest spectral sensitivities. Since this framework is based on spectral matrix perturbation analysis, compared to existing spectral sparsification methods that are limited to specific types of graphs, such as undirected graphs or strongly-connected directed graphs [7,8], the proposed spectral graph sparsification framework is more universal and thus applicable to a much broader range of graph problems.

Approximate dominant eigenvectors

A generalized power iteration method is proposed to allow much faster computation of the dominant generalized eigenvectors for spectral sparsification of directed graphs. Starting from an initial random vector expressed as $h_0 = \sum_i \alpha_i v_i$, the dominant generalized eigenvector $v_1$ can be approximately computed by performing the t-step power iterations of (19). When the number of power iterations is small (e.g., t ≤ 3), $h_t$ will be a linear combination of the first few dominant generalized eigenvectors corresponding to the largest few eigenvalues. The spectral sensitivity of the off-subgraph edge (p, q) can then be approximately computed by (20), which allows us to well approximate the spectral sensitivity of (17) when ranking off-subgraph edges during spectral sparsification. The key to the fast computation of $h_t$ using generalized power iterations is to quickly solve the linear system of equations $L_{S_u} x = b$, which requires explicitly constructing $L_{S_u}$ rather than $L_{G_u}$. To this end, we leverage the latest Lean Algebraic Multigrid (LAMG) algorithm, which is capable of handling undirected graphs with negative edge weights as long as the Laplacian matrix is SPS. The LAMG algorithm also enjoys an empirical O(m) runtime complexity for solving large-scale graph Laplacian matrices [20].

Lean algebraic multigrid (LAMG)

The setup phase of LAMG contains two main steps. First, a nodal elimination procedure is performed to eliminate disconnected and low-degree nodes. Next, a node aggregation procedure is applied for aggregating strongly-connected nodes according to an affinity metric $c_{uv}$ for nodes u and v, where the test vectors entering $c_{uv}$ are computed by applying a few Gauss-Seidel (GS) relaxations, with K initial random vectors, to the linear system $L_{G_u} x = 0$. Let $\tilde{x}$ represent the approximation of the true solution x after applying several GS relaxations to $L_{G_u} x = 0$.
Due to the smoothing property of GS relaxation, the remaining error can be expressed as $e_s = x - \tilde{x}$, which contains only the smooth components of the initial error, while the highly oscillatory modes are effectively damped out [4]. It has been shown that the node affinity metric $c_{uv}$ can effectively reflect the distance or strength of connection between nodes in a graph: a larger $c_{uv}$ value indicates a stronger connection between nodes u and v [20]. Therefore, nodes u and v are considered strongly connected to each other if $x_u$ and $x_v$ are highly correlated for all the K test vectors, and should then be aggregated to form a coarse-level node. Once the multilevel hierarchical representations of the original graph (Laplacians) have been created according to the above scheme, algebraic multigrid (AMG) solvers can be built and subsequently leveraged to solve large Laplacian matrices efficiently.

Edge spectral similarities

The proposed spectral sparsification algorithm first sorts all off-subgraph edges according to their spectral sensitivities in descending order, $(p_1, q_1), (p_2, q_2), \ldots$, and then selects the top few off-subgraph edges to be recovered into the initial subgraph. To avoid recovering redundant edges into the subgraph, it is indispensable to check the edge spectral similarities: only edges that are not similar to each other will be added to the initial sparsifier. To this end, we exploit a spectral embedding $\psi_{p,q}$ of the off-subgraph edges based on the approximate dominant generalized eigenvectors $h_t$ computed by (19), where $(p, q_k)$ are the directed edges sharing the same head as (p, q) but different tails. The proposed scheme for checking the spectral similarity of two off-subgraph edges then includes the following steps: (1) Perform t-step power iterations with r = O(log n) initial random vectors $h_0^{(1)}, \ldots, h_0^{(r)}$; (2) For each edge (p, q), compute an r-dimensional spectral embedding vector $s_{p,q} \in R^r$ with $s_{p,q}(k) = \psi_{p,q}(h_t^{(k)})$; (3) Check the spectral similarity of two off-subgraph edges $(p_i, q_i)$ and $(p_j, q_j)$ by computing SpectralSim from their embedding vectors; if SpectralSim < ϵ for a given ϵ, edge $(p_i, q_i)$ is considered spectrally dissimilar from $(p_j, q_j)$.

Algorithm flow

Algorithm 1 shows the edge similarity checking for a list of off-subgraph edges:

Algorithm 1 Edge Similarity Checking
Input: E_list, L_G, L_S, d_out, ϵ, t;
2: Choose each edge (p, q) whose starting node has out-degree less than d_out into a new E_list;
3: Compute an r-dimensional edge similarity vector s_p,q ∈ R^r for every (p, q) ∈ E_list: s_p,q(k) = ψ_p,q(h_t^(k));
...
8: end if
9: end for
10: Return E_addlist;

Algorithm flow and complexity

The overall algorithm flow for directed graph spectral sparsification is described in Algorithm 2:

Algorithm 2 Directed Graph Spectral Sparsification
...
5: E_addlist = EdgeSimilaritiesChecking(E_list, L_G, L_S, d_out, ϵ);
6: Update S_new = S + E_addlist, and calculate the largest generalized eigenvector h_t,new and the largest generalized eigenvalue µ_max,new based on L_G and L_S_new;
7: if µ_max,new < µ_max then
...
8: end if
10: iter = iter + 1;
11: end while
12: Return graph S and L_S, µ_max;

APPLICATIONS OF DIRECTED GRAPH SPARSIFICATION

Spectral graph sparsification algorithms can potentially be applied to accelerate many graph and numerical algorithms [8]. In this work, we demonstrate the applications of the proposed sparsification algorithm to solving directed Laplacian problems (e.g., Lx = b) [8,9], computing personalized PageRank vectors [7], and spectral graph partitioning.

PageRank and personalized PageRank

The idea of PageRank is to provide a measure of the importance of each web page.
For example, the PageRank algorithm aims to find the most popular web pages, while the personalized PageRank algorithm aims to find the pages that a given user is most likely to visit. Mathematically, the PageRank vector p satisfies $p = A_G^\top D_G^{-1}\, p$; that is, p is the eigenvector of $A_G^\top D_G^{-1}$ corresponding to the eigenvalue 1. Meanwhile, p represents the stable distribution of random walks on the graph G. However, $D_G^{-1}$ cannot be defined if there exist nodes without outgoing edges; to deal with such a situation, a self-loop with a small edge weight can be added to each node. The stable distributions of (un)directed graphs may not be unique: for example, undirected graphs with multiple connected components, or directed graphs with nodes that have no outgoing edges, may have non-unique distributions. In addition, it may take a very long time for a random walk to converge to a stable distribution on a given (un)directed graph. To avoid such situations in PageRank, a jumping factor α, describing the probability α of jumping to the uniform distribution, can be added: $p = \alpha\,\frac{\mathbf{1}}{n} + (1-\alpha)\, A_G^\top D_G^{-1}\, p$, where α ∈ [0, 1] is a jumping constant. Applying a Taylor expansion, we obtain $p = \alpha \sum_i \left((1-\alpha)\, A_G^\top D_G^{-1}\right)^i \frac{\mathbf{1}}{n}$. By setting a proper value of α (e.g., α = 0.15), the factor $(1-\alpha)^i$ decreases quickly with increasing i. Instead of starting with the uniform vector $\frac{\mathbf{1}}{n}$, a nonuniform personalization vector pr can be applied: $p = \alpha\, pr + (1-\alpha)\, A_G^\top D_G^{-1}\, p$. In this work, we show that the PageRank vector obtained with the sparsified graph preserves the original PageRank information. After obtaining the PageRank vector computed using the sparsifier, a few GS relaxations are applied to further improve the solution quality.

Directed Laplacian solver

Consider the solution of the linear system of equations Lx = b in (28). Recent research has focused on solving this problem more efficiently when L is the Laplacian matrix of an undirected graph [15,18]. In this work, we mainly focus on solving the nonsymmetric Laplacian matrices that correspond to directed graphs.

Lemma 5.1. When solving (28), right preconditioning is applied, leading to the alternative linear system of equations $L_G L_G^\top\, y = b$ (29), where the vector b lies in the left singular-vector space. Once the solution of (29) is obtained, the solution of (28) is given by $x = L_G^\top y$.

It is obvious that solving the above equation is equivalent to solving the problem $L_G L_G^\top L_G^{+\top} x = b$. In addition, $L_{G_u}$ is the Laplacian matrix of an undirected graph that can be much denser than $L_G$. Therefore, we propose to solve the linear system $L_{S_u}\tilde{y} = b$ instead, to effectively approximate (29), since $S_u$ is sparser than $G_u$ and more efficient to solve in practice. We analyze the solution error based on the generalized eigenvalue problem of $L_{G_u}$ and $L_{S_u}$, with µ the diagonal matrix carrying the generalized eigenvalues $\mu_i \ge 1$ on its diagonal.
Directed Laplacian solver

Consider the solution of the following linear system of equations:

L_G x = b.  (28)

Recent research has focused on efficiently solving (28) when L is the Laplacian matrix of an undirected graph [15, 18]. In this work, we mainly focus on solving nonsymmetric Laplacian matrices that correspond to directed graphs.

Lemma 5.1. When solving (28), right preconditioning can be applied, leading to the following alternative linear system of equations:

L_G L_G^⊤ ỹ = b,  (29)

where vector b lies in the left singular vector space. Once the solution of (29) is obtained, the solution of (28) is given by x = L_G^⊤ ỹ.

Solving the above system is equivalent to solving L_G L_G^⊤ L_G^{+⊤} x = b. Moreover, L_Gu = L_G L_G^⊤ is the Laplacian matrix of an undirected graph that can be much denser than L_G. Therefore, we propose to solve the linear system L_Su ỹ = b instead, which effectively approximates (29), since G_Su is sparser than G_Gu and more efficient to solve in practice.

We analyze the solution error based on the generalized eigenvalue problem of L_Gu and L_Su,

L_Gu v_i = µ_i L_Su v_i,

where the generalized eigenvalues satisfy µ_i ≥ 1. Let y* denote the exact solution of L_Gu y* = b. Writing the approximate solution as ỹ = Σ_i a_i v_i, we have b = L_Su ỹ = Σ_i (a_i/µ_i) L_Gu v_i, so y* = Σ_i (a_i/µ_i) v_i, and the error can be expressed as

e = ỹ − y* = Σ_i a_i (1 − 1/µ_i) v_i.  (32)

Therefore, the error term (32) can generally be considered a combination of high-frequency errors (generalized eigenvectors with respect to high generalized eigenvalues) and low-frequency errors (generalized eigenvectors with respect to low generalized eigenvalues). After applying GS relaxations, the high-frequency error terms are efficiently removed (smoothed), while the low-frequency errors vanish as the generalized eigenvalues approach 1, since (1 − 1/µ_i) then approaches zero. As a result, the error can be effectively eliminated using the above solution smoothing procedure.

In summary, the proposed directed Laplacian solver requires the following steps: (a) extract a spectral sparsifier L_S of the given (un)directed graph Laplacian L_G, and compute an approximate solution by exploiting L_Su = L_S L_S^⊤ via solving ỹ = L_Su^+ b instead; (b) improve the approximate solution ỹ by removing the high-frequency errors via a few GS iterations [3]; (c) obtain the final solution as x = L_G^⊤ ỹ.
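A minimal MATLAB sketch of steps (a)–(c) follows; the backslash solve in step (a), the number of smoothing sweeps, and the small diagonal regularization delta (used only to sidestep the singularity of the symmetrized Laplacians in this illustration) are assumptions of the sketch rather than details fixed by the method.

```matlab
% Hedged sketch of the three-step directed Laplacian solver for LG * x = b.
% LG, LS : directed Laplacians of the graph G and its sparsifier S
% nsweep : number of Gauss-Seidel smoothing sweeps
% delta  : small diagonal shift regularizing the singular Laplacians
function x = directed_laplacian_solve(LG, LS, b, nsweep, delta)
  n = size(LG, 1);
  LGu = LG * LG' + delta * speye(n);   % symmetrization of the original graph
  LSu = LS * LS' + delta * speye(n);   % symmetrization of the sparsifier
  % (a) approximate solve on the sparser symmetrized sparsifier:
  y = LSu \ b;
  % (b) Gauss-Seidel smoothing on the original symmetrized system,
  %     y <- y + M \ (b - LGu*y) with M the lower-triangular part of LGu:
  M = tril(LGu);
  for k = 1:nsweep
    y = y + M \ (b - LGu * y);
  end
  % (c) map back to the solution of the directed system:
  x = LG' * y;
end
```

In practice the solve in step (a) would itself use a fast AMG-type solver, which is exactly where the reduced density of L_Su pays off.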
Directed graph partitioning

It has been shown that partitioning and clustering of directed graphs play very important roles in a variety of applications related to machine learning [21], data mining, and circuit synthesis and optimization [22]. However, the efficiency of existing methods for partitioning directed graphs strongly depends on the complexity of the underlying graphs [21]. In this work, we propose a spectral method for directed graph partitioning problems.

For an undirected graph, the eigenvectors corresponding to the first few smallest eigenvalues can be utilized for spectral partitioning [26]. For a directed graph G, on the other hand, the left singular vectors of the Laplacian L_G are required. The eigendecomposition of its symmetrization L_Gu can be written as

L_Gu v_i = µ_i v_i,

where 0 = µ_1 ≤ ... ≤ µ_k and v_1, ..., v_k, with k ≤ n, denote the Laplacian eigenvalues and eigenvectors, respectively. There may not be n eigenvalues when some nodes have no outgoing edges. In addition, the spectral properties of L_Gu are more complicated, since the eigenvalues often have multiplicity (either algebraic or geometric). For example, the eigenvalues of the symmetrization of the directed graph in Figure 4 exhibit a few multiplicities: µ_2 = µ_3, µ_4 = µ_5 = µ_6 = µ_7, and µ_9 = µ_10. Therefore, we propose to exploit the eigenvectors (left singular vectors of the directed Laplacian) corresponding to the first few distinct eigenvalues (singular values of the directed Laplacian) for directed graph partitioning. For example, the partitioning result for the directed graph in Figure 4 depends on the eigenvectors v_1, v_2, v_4, v_8, which correspond to the eigenvalues µ_1, µ_2, µ_4, µ_8. As shown in Figure 4, the spectral partitioning results can be quite different between a directed and an undirected graph with the same set of nodes and edges. In general, it is possible to first extract a spectrally-similar directed graph before any of the existing partitioning algorithms are applied. Since the proposed spectral sparsification algorithm can well preserve the structural (global) properties of the original graphs, the partitioning results obtained from the sparsified graphs will be very similar to the original ones.

EXPERIMENTAL RESULTS

The proposed algorithm for spectral sparsification of directed graphs has been implemented in MATLAB and C++. Extensive experiments have been conducted to evaluate the proposed method on various types of directed graphs obtained from public-domain data sets [10].

Figure 5 shows the spectral sensitivities of all the off-subgraph edges (e2 to e19) in both directed and undirected graphs, calculated using MATLAB's "eigs" function and the proposed method based on (20) with the LAMG solver, respectively. The subgraph edges of both the directed and undirected graphs are shown in red, and the spectral sensitivities of all the off-subgraph edges (e2 to e19, in blue) are plotted with respect to the dominant eigenvalues (µ_max or µ_1). We observe that the spectral sensitivities for the directed and undirected graphs are drastically different, because the spectral sensitivities of off-subgraph edges in the directed graph depend on the edge directions. It is also observed that the approximate spectral sensitivities calculated by the proposed t-step power iterations with the LAMG solver match the true solution very well for both directed and undirected graphs.

Table 1 shows more comprehensive results on directed graph spectral sparsification for a variety of real-world directed graphs using the proposed method, where |V_G| (|E_G|) denotes the number of nodes (edges) of the original directed graph G, and |E_S0| and |E_S| denote the numbers of edges in the initial subgraph S0 and the final spectral sparsifier S, respectively. Note that we directly apply MATLAB's "eigs" function when the graph is relatively small (|E_S0| < 1E4); otherwise, we switch to the LAMG solver for better efficiency when calculating the approximate generalized eigenvector h_t. We report the total runtime of the eigensolver using either the LAMG solver or the "eigs" function; µ_in/µ_max denotes the reduction rate of the largest generalized eigenvalue of L_Su^+ L_Gu. We also plot the detailed reduction rates of the largest generalized eigenvalue when adding different numbers of off-subgraph edges to the sparsifiers of the graphs "gre_115" and "pesa" in Figure 7, which shows that the largest generalized eigenvalue can be effectively reduced once sufficient off-subgraph edges are included in the sparsifier.

Figure 7: µ_max eigenvalue reduction rates for "gre_115" (left) and "pesa" (right).

Table 2 shows the results of the directed Laplacian solver on different directed graphs, reporting the relative errors between the exact solution and the solution calculated by the proposed solver with and without smoothing. The errors are dramatically reduced after smoothing, and the proposed solver approximates the true solution of L_G x = b very well.

Figure 6 shows the personalized PageRank results on two graphs ("gre_115" and "gre_185") and their sparsifiers; the personalized PageRank vectors computed on the sparsifiers match the original ones very well.

Figure 6: The correlation of the personalized PageRank between the original graph and its sparsifier for the 'gre_115.mtx' graph (left) and the 'gre_185.mtx' graph (right) w/o smoothing.

Finally, we show the spectral graph partitioning results on the original directed graph Laplacian G_u and its sparsifier S_u in Figure 8. Very similar partitioning results are obtained, indicating that the spectral properties are well preserved within the spectrally-sparsified directed graph.
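To make the partitioning procedure behind Figure 8 concrete, the following minimal MATLAB sketch embeds the nodes with the eigenvectors associated with the first few distinct eigenvalues of L_Gu = L_G L_G^⊤ and clusters the embedding with kmeans (from the Statistics and Machine Learning Toolbox); the duplicate-eigenvalue tolerance and the small diagonal shift are illustrative assumptions, not values prescribed by the method.

```matlab
% Hedged sketch of directed spectral partitioning via the symmetrized
% Laplacian LGu = LG * LG'.
% LG     : directed graph Laplacian
% k      : number of smallest eigenpairs to compute
% nparts : desired number of partitions
function labels = directed_spectral_partition(LG, k, nparts)
  n = size(LG, 1);
  LGu = LG * LG' + 1e-9 * speye(n);        % small shift for numerical stability
  [V, D] = eigs(LGu, k, 'smallestabs');    % k smallest eigenpairs
  [mu, idx] = sort(diag(D));
  V = V(:, idx);
  % Keep only the first eigenvector of each (nearly) repeated eigenvalue,
  % mirroring the use of distinct eigenvalues described above.
  sel = [true; diff(mu) > 1e-8 * max(abs(mu))];
  U = V(:, sel);
  labels = kmeans(U, nparts, 'Replicates', 10);
end
```

Running this sketch on L_G of the original graph and on L_S of its sparsifier should yield very similar labelings when the sparsifier preserves the relevant spectral structure.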
CONCLUSIONS

This paper proves the existence of linear-sized spectral sparsifiers for general directed graphs and proposes a practically-efficient and unified spectral graph sparsification framework. This novel spectral sparsification approach allows sparsifying real-world, large-scale directed and undirected graphs with guaranteed preservation of the original graphs' spectral properties. By exploiting a highly-scalable (nearly-linear complexity) spectral matrix perturbation analysis framework for constructing nearly-linear-sized (directed) subgraphs, the key eigenvalues and eigenvectors of the original (directed) graph Laplacians are well preserved. The proposed method has been validated on various kinds of directed graphs obtained from public-domain sparse matrix collections, showing promising spectral sparsification and partitioning results for general directed graphs.