Could Deficiencies in South African Data Be the Explanation for Its Early SARS-CoV-2 Peak?

The SARS-CoV-2 pandemic peaked very early in comparison to the thresholds predicted by an analysis of prior lockdown regimes. The most convenient explanation is that some external factor changed the value of the basic reproduction number, $r_0$, and there certainly are arguments for this. Other factors could, nonetheless, have played a role. This research attempts to reconcile the observed peak with the thresholds predicted by lockdown regimes similar to the one in force at the time. It contemplates the effect of two different hypothetical errors in the data: the first is that the true level of infection has been underestimated by a multiplicative factor, while the second is that of an imperceptible, pre-existing immune fraction of the population. While it is shown that it certainly is possible to manufacture the perception of an early peak as extreme as the one observed solely by way of these two phenomena, the required values are fairly high; the phenomena would not, by any measure, be insignificant. It also remains an inescapable fact that the early peak in infections coincided with a fairly profound change in $r_0$ in all the contemplated scenarios of data deficiency.

Introduction

Around the 18th of July 2020, the incidence of SARS-CoV-2 cases peaked, and the number of active infections followed suit around five days later. Either some unknown factor caused the basic reproduction number, $r_0$, to drop below unity around the 13th of July (the mean SARS-CoV-2 incubation period is 5.2 days [14]), or the threshold had been reached. All this transpired at an apparently very low level of total infection, a mere 0.7 % of the population; or, couching this in the more conventional terms of susceptibility, 99.3 %. Such a threshold would imply a basic reproduction number of less than 1.01. On the 13th of July, South Africa did revert to a lockdown regime similar to its previous level-4 lockdown ($r_0 = 1.69$), referred to in this work as "level 3.5". However, even though prohibition, curfews and a number of other measures were reinstated, the threshold predicted by an analysis of the level-4 lockdown regime suggests that the new regime, in itself, was not enough to cause the observed peak. A basic reproduction number of 1.01 is exceptionally marginal. How does one explain this conundrum?

It is already known that the perception of a peak at 99.3 % is based on infection data that are deficient by an order of magnitude, or even more. The head of the CDC, Robert Redfield, has said of asymptomatic or undiagnosed SARS-CoV-2 infections in the U.S.A. that antibody testing reveals "A good rough estimate now is 10 to 1" [5], and others in similar positions all over the world have expressed similar sentiments. Redfield's factor of eleven also needs to be revised upward if one considers that, although antibodies lend themselves favourably to the diagnosis of immunity, they are not the ultimate indicator: undetected, T-cell-mediated immunity can exist in the absence of a positive antibody test. In South Africa, epidemiologists have focussed on excess deaths and put forward a factor of 1.59 ([10], [4] and [3]).
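As a quick check of the 1.01 figure (our arithmetic, using only the standard SIR threshold condition, under which the peak occurs when $r_0 S(t) = 1$):

$$ r_0 = \frac{1}{S_{\text{peak}}} = \frac{1}{0.993} \approx 1.007 < 1.01 . $$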
There is not necessarily any conflict between this apparently low excess-death factor and Redfield's statement, if one considers the obvious bias in detection: someone sick enough to be close to death is more likely to seek out medical assistance and be diagnosed. It may also be worth keeping in mind that a massive 57 % of the inhabitants of the Mumbai slum areas of Chembur, Matunga and Dahisar tested positive for exposure to SARS-CoV-2 [1]. That the SARS-CoV-2 virus is often insidious, and infection data consequently incorrect by some factor, is therefore already a widely recognised phenomenon. This fact alone is nonetheless unable to reasonably explain the SARS-CoV-2 threshold observed in the South African data without contemplating improbably high, though not impossible, values. The question then arises as to whether the SARS-CoV-2 virus really is novel and the population really is naive, or whether some other pathogen, genetics, etc. has in fact imparted an undetectable immunity to a significant fraction of the population. Although antibodies lend themselves favourably to the diagnosis of immunity, they are not the ultimate indicator. As already stated, undetected, T-cell-mediated immunity can exist in the absence of a positive antibody test.

This research attempts to reconcile the observed peak in South African infections with the thresholds predicted by lockdown regimes similar to the one in force at the time. It contemplates the effect of two different hypothetical errors in the data: the first is that the true level of infection has been underestimated by a multiplicative factor, denoted $a$, while the second is that of an imperceptible, pre-existing immune fraction of the population, denoted $b$. Since the rules in place at the time of the peak were most similar to level 4, the values of $a$ and $b$ explored were mostly selected on the basis that they manufacture an erroneously detected 99.3 % threshold for the level-4 lockdown. Of course, it is possible that some other external factor caused the basic reproduction number to plummet around the 13th of July, and a quantification of lockdown regimes means nothing if the public at large were not compliant with the rules.

2 The Erroneous Perception Created by Inexplicable Immunity and Asymptomatic, or Undiagnosed, Infections

Suppose that $S(t)$, $I(t)$ and $R(t)$ represent the usual quantities in Kermack and McKendrick's SIR model [13], and suppose that a tilde is used to distinguish detected values of these quantities, which for the purposes of this exposition will be erroneous. If, for some presently inexplicable reason, there is an imperceptible, pre-existing immune fraction of the population, $b$, then $R(0) = \tilde{R}(0) + b$ for an epidemic that begins at some time $t = 0$. Suppose one were to further determine that both $\tilde{I}(t)$ and, therefore, the resistant fraction arising from the current epidemic, $\tilde{R}(t) - \tilde{R}(0)$, are in actual fact lower than the true values by some factor, $a$. That is,

$$ I(t) = a\,\tilde{I}(t), \qquad R(t) - R(0) = a\left[\tilde{R}(t) - \tilde{R}(0)\right]. $$

Substituting these quantities into $S(t) = 1 - I(t) - R(t)$, together with $\tilde{S}(t) = 1 - \tilde{I}(t) - \tilde{R}(t)$, yields

$$ S(t) = 1 - a\left[1 - \tilde{S}(t) - \tilde{R}(0)\right] - \tilde{R}(0) - b. \tag{1} $$

In the particular case of the South African SARS-CoV-2 peak, $\tilde{S}(t) = 0.993$ and $\tilde{R}(0) = 0$. Equation (1) therefore simplifies to

$$ S(t) = 1 - 0.007\,a - b. \tag{2} $$

The true value of the erroneously detected 99.3 % threshold therefore depends on the values of both $a$ and $b$, some examples of which are given in Table 1. Changing the subject of Equation (1),

$$ \tilde{S}(t) = 1 - \tilde{R}(0) - \frac{1 - S(t) - \tilde{R}(0) - b}{a}. \tag{3} $$

This is a formula for the perceived susceptible fraction that will be erroneously detected for given values of $a$ and $b$.
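The algebra above is compact, so a small numerical sketch may help. This is our illustration (the function and variable names are ours, not the paper's), implementing Equations (1)-(3) and reproducing the kind of entries the text quotes from Tables 1 and 3:

```python
def true_susceptible(S_det, a, b, R0_det=0.0):
    """Equation (1): true susceptible fraction S(t) implied by a detected
    value S~(t), an undercount factor a and a hidden immune fraction b."""
    return 1.0 - a * (1.0 - S_det - R0_det) - R0_det - b

def perceived_susceptible(S_true, a, b, R0_det=0.0):
    """Equation (3): the S~(t) that would be erroneously detected."""
    return 1.0 - R0_det - (1.0 - S_true - R0_det - b) / a

# The detected 99.3 % peak, reinterpreted for a few (a, b) pairs:
for a, b in [(1, 0.0), (10, 0.0), (60, 0.0), (10, 0.3)]:
    print(f"a={a:2d}, b={b:.1f}: true S = {true_susceptible(0.993, a, b):.3f}")

# The level-4 threshold of 1/1.69 (about 59 %), as it would appear with a = 10:
print(f"perceived: {perceived_susceptible(1 / 1.69, 10, 0.0):.3f}")  # ~0.959
```

The last line recovers the 95.9 % perceived threshold that the conclusions quote for $a = 10$, $b = 0$.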
The Data and Their Interpretation

Epidemiological data are usually presented in the format "number of current infections" and "total number of cases". The present case of the SARS-CoV-2 pandemic is no exception. The data for the level-5 lockdown, the level-4 lockdown, the early level-3 lockdown and Sweden were already interpreted in [12] and may be found in Table 2. The final level-3 and so-called level-3.5 values in Table 2 were determined as follows.

The Level-3 Lockdown

The level-3 lockdown commenced on the 1st of June and was still in force up until the 12th of July. Once again, a period of 15 days was allowed for the viral incubation period and the subsequent diagnosis of an infection. Once again, it was also assumed that the termination of the level-3 lockdown would not reflect in the data for at least 24 hours. Curves were accordingly fitted to the subset of data ([7], [6] and [2]) which commenced on the 16th of June and terminated on the 13th of July. The curves fitted to the data, using Gnuplot, are depicted in Figure 1. The values these formulae yield for the relevant dates are provided in Table 2.

The Level-3.5 Lockdown

The so-called level-3.5 lockdown commenced on the 13th of July and was in force up until the 17th of August. Once again, a period of 15 days was allowed for the viral incubation period and the subsequent diagnosis of an infection. Once again, it was also assumed that the termination of the level-3.5 lockdown would not reflect in the data for at least 24 hours. Curves were accordingly fitted to the subset of data ([7], [6] and [2]) which commenced on the 28th of July and terminated on the 18th of August. The curves fitted to the data, using Gnuplot, are depicted in Figure 2. The values these formulae yield for the relevant dates are provided in Table 2.

Converting the Conventional Epidemiological Format into S(t) and I(t)

The epidemiological data have been presented in their usual format, "number of current infections" and "total number of cases". If $N$ is the size of the population, "active infections" are just $\tilde{I}(t)N$ and "total infections" are just $[\tilde{I}(t) + \tilde{R}(t) - \tilde{R}(0)]N$. Realising this, the values of $S(t)$ and $I(t)$ needed for this work can be obtained from

$$ \tilde{I}(t) = \frac{\text{active infections}}{N}, \qquad \tilde{S}(t) = 1 - \frac{\text{total infections}}{N} - \tilde{R}(0). $$

In 2020, the size of the South African population was estimated to be 59 140 502 by [8].

Calculation of the Basic Reproduction Number, $r_0$

The basic reproduction number, $r_0$, was calculated according to the formula derived in [12] from Kermack and McKendrick's SIR equations [13]. That is,

$$ r_0 = \frac{\ln S(t_2) - \ln S(t_1)}{S(t_2) + I(t_2) - S(t_1) - I(t_1)}, \tag{4} $$

in which $S(t)$, $I(t)$ and $R(t)$ represent the usual quantities in Kermack and McKendrick's SIR model [13], evaluated at either end of an interval, $[t_1, t_2]$. Expedience was the motivation for using this slightly unorthodox method; moreover, not only have so-called individual-level models, such as those of [9] and [11], been discredited by [11] as a means for calculating $r_0$, they are also far more laborious than the simple method used in this work. Notice that this formula also appears to be fairly robust. Take any movement of $I(t)$ by an additive constant, up or down, for example. Such movement has no effect whatsoever on the calculation of $r_0$. This is important, as the infection data, more than any other, are often plagued by exactly this problem. In fact, the formula, Equation (4), is reasonably robust against any data error that does not affect the relative values, or slopes, of the functions concerned. This will be borne out when exploring the use of a multiplicative factor on the data.
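As an illustration (ours, with made-up state values rather than the paper's Table 2 inputs), Equation (4) and the $S_\infty$ recovery described in the next subsection can be sketched as follows; note how an additive shift of $I(t)$ cancels, exactly as claimed:

```python
import numpy as np
from scipy.optimize import brentq

def r0_estimate(S1, I1, S2, I2):
    """Equation (4): r0 from the SIR first integral
    I + S - (1/r0) ln S = const, evaluated at t1 and t2."""
    return (np.log(S2) - np.log(S1)) / (S2 + I2 - S1 - I1)

def s_infinity(r0, S1, I1):
    """Set I(t2) = 0 and S(t2) = S_inf in Equation (4), then solve
    ln(S_inf / S1) = r0 * (S_inf - S1 - I1) for S_inf."""
    f = lambda s: np.log(s / S1) - r0 * (s - S1 - I1)
    return brentq(f, 1e-9, 1.0 / r0)  # unique root below the 1/r0 threshold

# Illustrative values only:
S1, I1, S2, I2 = 0.999, 0.0005, 0.995, 0.0020
r0 = r0_estimate(S1, I1, S2, I2)
print(r0)                                         # ~1.6
print(r0_estimate(S1, I1 + 0.01, S2, I2 + 0.01))  # identical: the shift cancels
print(s_infinity(r0, S1, I1))                     # final susceptible fraction
```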
Calculation of the Threshold and $S_\infty$

It is instructive to know both the threshold and the point at which all infection would cease. Both are calculated from $r_0$, and the relevant theory is provided in [12]. If the susceptible fraction of the population is still above $1/r_0$ for a given regime, infections will continue to grow. Only at the threshold does the effective reproduction number drop to unity, after which infections decline. Once $r_0$ has been calculated, $S(t_2) = S_\infty$ can be recovered from Equation (4) by considering that $S_\infty$ is the point at which all infection ceases, i.e. by setting $I(t_2) = 0$ in Equation (4).

Results

Since the rules in place at the time of the peak were most similar to level 4, the values of $b$ and $a$ explored were mostly selected on the basis that they manufacture an erroneously detected 99.3 % threshold for level 4. The results, as well as the inputs from which they were obtained, are provided on pages 9 to 11.

Conclusions

The SARS-CoV-2 pandemic was very likely a lot closer to its threshold than the South African data suggested; however, some, possibly external, factor still changed the value of $r_0$. The author's opinion is therefore that the contemplated data deficiencies are unlikely to explain the early peak in the SARS-CoV-2 pandemic on their own. If deficient data did indeed play a role, then the more compelling of the two phenomena might be that the true level of infection has been underestimated by a multiplicative factor: it is already a documented phenomenon, and it is mathematically less disruptive. The existence of a significant, imperceptible immune fraction of the population quickly drives $r_0$ up, admittedly not necessarily to unprecedented levels; however, in so doing, it moves thresholds to even lower values. In contrast, there is very little difference between the perceived and true $r_0$'s should infections merely have been underestimated by a factor. It is also already a widely recognised phenomenon that the SARS-CoV-2 virus is often insidious and infection data are incorrect by a substantial factor, whereas a pre-existing, imperceptible immune fraction of the population remains nothing more than a hypothetical construct; a mere thought experiment for the present. The phenomenon of infections having been underestimated by a multiplicative factor is, on its own, unable to comprehensively explain the SARS-CoV-2 peak observed in the South African data without contemplating improbably high values. Yet those improbably high values ($a = 60$) might be possible in an area like Khayelitsha, considering that a massive 57 % of the inhabitants of Mumbai slum areas tested positive [1]. Revising country-wide infections upward by a single order of magnitude is probably not too far-fetched, and it creates a level-4 threshold of 59.0 % that would be erroneously detected as 95.9 %, a perceived value not too far from the actual 99.3 % peak observed (see Table 3). It goes a long way toward reconciling the observed with the predicted, and it remains an inescapable fact that, in July, $r_0$ changed fairly abruptly in all the contemplated scenarios of data deficiency. This is not something suggestive of a threshold. Around the 13th of July, it seems likely that something changed the value of $r_0$ significantly. Perhaps one criticism of the [12] analysis of lockdown regimes is that it denies the public at large their humanity: the quantification of lockdown regimes means nothing if the rules aren't complied with.
Perhaps, by the 13th of July, the population's asymptotic compliance had reached the necessary level? Perhaps, too, the intra-household infections that obviously took place early on in the lockdowns had also run their course? How much genetic drift took place in the preceding months is yet another unanswered question. Perhaps, when the prohibition was reimposed, the seemingly endless supply of liquor that had understandably leaked from the stricken hospitality industry had finally dried up? All these factors could have contributed to lowering the level-3.5 $r_0$ to below the level-4 value of 1.69, driving the erroneously detected 95.9 % (Table 3) closer to the 99.3 % peak observed. However, could all of this have driven the 1.69 down as far as 1.08, the $r_0$-value necessary for the 99.3 % perception? The fact remains that level 3.5's $r_0$ was very much lower than 1.08: it was 0.71 for $a = 10$ and $b = 0$ (Table 3)! This was no threshold.
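As a check of where the 1.08 comes from (our arithmetic, combining Equation (2) with the threshold condition): with $a = 10$ and $b = 0$, the perceived 99.3 % corresponds to a true susceptible fraction of $S = 1 - 0.007 \times 10 = 0.93$, so a genuine threshold there would require

$$ r_0 = \frac{1}{S} = \frac{1}{0.93} \approx 1.08 . $$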
OCT demonstrating neoatherosclerosis as part of the continuous process of coronary artery disease

Although the advent of drug-eluting stents has reduced the rates of target vessel revascularization, there are observations of ongoing stent failure occurring very late after stent implantation and presenting as very late restenosis or as very late stent thrombosis. The de novo development of atherosclerosis within the neointimal region, called neoatherosclerosis, has been identified as one of the pathomechanisms of these observed late stent failures. The mechanisms of neoatherosclerosis development and its association with stent failure are currently the subject of intensive research. Optical coherence tomography (OCT) is an invasive imaging modality that allows us to visualize the micromorphology of coronary arteries with near-histological resolution, thus providing detailed assessment of the morphological characteristics of the neointima after stent implantation, including neoatherosclerosis. Several OCT studies have tried to provide in vivo insights into the mechanisms of neoatherosclerosis development and its association with late stent failure. This review summarizes the current insights into neoatherosclerosis obtained with OCT and discusses the association of neoatherosclerosis with late stent failure.

Drug-eluting stents (DES) have reduced the rates of target vessel revascularization and comprise the mainstay of treatment in percutaneous coronary intervention. However, despite the excellent short- and mid-term results with the current DES generation, ongoing stent failure with both bare metal stents (BMS) and DES is a frequent finding very late after stent implantation [1][2][3]. Very late stent failure clinically manifests as very late restenosis or stent thrombosis. Although the pathogenesis of (very) late stent failure appears to be multifactorial [4,5], one of the major mechanisms that have been implicated is the de novo development of atherosclerosis within the neointimal region, called neoatherosclerosis [6,7]. Observations of neoatherosclerosis have been documented both ex vivo, in pathological studies, and in vivo, by intravascular imaging. Among the several imaging modalities that have been used to identify neoatherosclerosis [8][9][10][11], optical coherence tomography (OCT) can provide the most comprehensive assessment of the neointimal tissue. OCT allows for the visualization of the micromorphology of coronary arteries with near-histological resolution, differentiating between individual plaque components and providing important quantitative plaque information such as the thickness of the fibrous plaque [12,13]. Thus, by OCT it is possible to assess distinct morphological characteristics of in-stent neoatherosclerosis, such as macrophage infiltration, lipid accumulation, in-stent calcification, or neointimal rupture. Consequently, in vivo OCT studies have focused on studying in-stent neoatherosclerosis and its association with late stent failure.

[Fig. 1 caption: Characteristics of in-stent neoatherosclerosis on optical coherence tomography images. a In-stent necrotic core (yellow) within the neointima (green). b In-stent calcification (white) and macrophage infiltration (orange). c Neointimal rupture. d In-stent necrotic core (yellow) with thin overlying fibrous cap. Panels on the right are color-coded cartoons of the corresponding OCT images, explaining the composition of the neointimal tissue.]
This review summarizes the current insights into neoatherosclerosis obtained with OCT and discusses the implications of neoatherosclerosis for long-term outcome after stent implantation.

Definition and OCT imaging evidence of in-stent neoatherosclerosis

Neoatherosclerosis refers to an atherosclerotic change in neointimal tissue, first described in pathologic specimens of BMS, and more recently in pathologic specimens of DES as well [6,7]. Although observations of neoatherosclerosis had been sporadically documented from the early application of OCT [14,15], the tissue properties of the observed neointimal tissue patterns were unknown. Pathological studies were the first to provide more comprehensive insights regarding the histological composition of this entity. These studies defined neoatherosclerosis as the presence of clusters of lipid-laden foamy macrophages, with or without necrotic core formation and/or calcification, within the neointimal tissue of stented segments [6]. The presence of morphological components of native atherosclerosis within the neointimal tissue implies that these features can be identified by OCT, which has a high accuracy for detection of these characteristics in native atherosclerotic plaques. This hypothesis has been corroborated by a pathologic study of ex vivo stented coronary arteries, imaged by OCT, which showed that the OCT appearance of these characteristics is similar to that in native atherosclerosis [16]. Components of neoatherosclerosis that can be visualized by OCT include macrophage infiltration, necrotic core, in-stent calcifications, and neoatherosclerotic plaque rupture [17]. An in-stent necrotic core visualized by OCT is defined as the presence of signal-poor, highly attenuating regions with poorly delineated borders within the neointima, while in-stent calcifications are defined as well-delineated, signal-poor regions with sharp borders [16]. In accordance with native atherosclerosis, macrophages on OCT appear as a thin, signal-bright layer causing high attenuation, while neointimal plaque rupture appears as a discontinuation in the luminal surface with formation of a cavity burrowing into the neointima [16]. Representative OCT images of in-stent neoatherosclerosis are shown in Fig. 1.

At this point it is important to emphasize the difference between neoatherosclerosis and the previously reported heterogeneous and layered patterns of coverage [18]. These patterns are defined, respectively, as neointimal tissue with focally changing optical properties and varying backscattering patterns, and as neointimal tissue with concentric layers having different optical properties (an adluminal high-scattering layer and an abluminal low-scattering layer). Such patterns are often encountered in the context of in-stent restenosis, and although the pathological substrate is not well characterized, histological findings from atherectomy specimens, autopsy, and animal experiments include organized thrombus, fibrin, and myxomatous extracellular matrix with sporadic evidence of inflammation [16,[19][20][21]. The association of these patterns with neoatherosclerosis is as yet unknown; however, there is evidence that these patterns, like the homogeneous pattern, may subsequently be replaced by neoatherosclerotic tissue [22,23]. Representative OCT images of different neointimal coverage patterns are shown in Fig. 2. Although OCT can discern the distinct morphological characteristics of neoatherosclerosis, caution should be applied to image interpretation.
This is because macrophage infiltration can sometimes appear as in-stent necrotic core, and vice versa [24]. The distinction between a layered pattern of coverage and in-stent necrotic core can also be challenging at times, although the low attenuation observed in the case of the layered pattern can be used to discriminate between these two entities [16]. Finally, with OCT it is not possible to distinguish true adluminal atherosclerosis from the progression of an abluminal underlying necrotic core into the neointima. Nevertheless, the latter mechanism appears to be less prevalent [25], while it is unclear what the clinical significance of such a discrimination would be.

In vivo prevalence of neoatherosclerosis in BMS and DES with OCT

Several studies have tried to investigate the prevalence of neoatherosclerosis late after BMS or DES implantation using OCT (Table 1). The reported prevalence varies highly between the studies. As these studies differ significantly in stent type, in the interval between stent implantation and follow-up, and in the clinical presentation, this difference in prevalence appears to reflect differences in the composition of the studied populations. The first reports of in-stent neoatherosclerosis describe a prevalence of 67 % in patients with BMS implantation beyond 5 years [15]. Subsequent studies have shown a lower prevalence, ranging between 30 and 50 %, in long-term follow-up of asymptomatic patients with BMS [26][27][28]; however, this incidence could be as high as 100 % in BMS restenosis more than 10 years since implantation [29]. With regard to neoatherosclerosis within DES, a prevalence of more than 50 % has been reported at very late follow-up [27,28], which can be even higher in stents with failure [14]. These rates tend to be generally in accordance with rates reported by pathological studies, which show a prevalence of 13-65 % depending on stent type and stent age [25].

Stent type and age

Pathological studies were the first to indicate a potential association of stent type and stent age with the prevalence of neoatherosclerosis [6]. DES appear to have a higher prevalence of neoatherosclerosis compared with BMS. Moreover, irrespective of stent type, the interval from implantation appears to be strongly associated with neoatherosclerosis development, an interval which is much shorter than that required for the development of native atherosclerotic plaques [25]. Nakazawa et al. [6] first demonstrated an increased prevalence of neoatherosclerosis in autopsy specimens of first-generation DES compared with BMS despite a shorter interval from implantation. Importantly, in both groups, the prevalence of neoatherosclerosis increased with the interval since implantation [6].
In vivo OCT studies have been in line with pathological studies, showing an increased prevalence of neoatherosclerosis in DES at follow-up of up to 4 years, although at very long-term follow-up the prevalence is similarly high for both stent types [30]. Furthermore, serial imaging observations in first- and second-generation DES demonstrate that neointimas with a homogeneous or heterogeneous pattern of coverage might develop neoatherosclerosis over time, with the prevalence of neoatherosclerosis increasing from 15 % at 9 months to 28 % at 24 months since implantation [22]. This was further corroborated by another serial imaging study in first-generation DES, where the incidence of neoatherosclerosis increased from 3 % at mid-phase observations (3-12 months after stent implantation) to 23 % at late-phase observations [23]. Although DES in general appear to have a higher prevalence of neoatherosclerosis than BMS, it is not clear whether the prevalence of neoatherosclerosis differs between first- and second-generation DES. Second-generation DES have improved biocompatibility due to thinner struts and more biocompatible or bioabsorbable polymers, which translates to lower vascular toxicity and fewer hypersensitivity reactions [31,32]. However, a human autopsy study did not demonstrate any significant difference in the prevalence of neoatherosclerosis between a second-generation DES, the cobalt-chromium everolimus-eluting stent (CoCr-EES), and first-generation paclitaxel-eluting (PES) or sirolimus-eluting stents (SES): CoCr-EES = 29 %; SES = 35 %; PES = 19 % [32]. Conversely, an in vivo OCT study in 212 DES-treated patients with > 50 % stenosis (101 patients had a first-generation and 111 patients had a second-generation DES) demonstrated a significantly lower prevalence of neoatherosclerosis in the second-generation DES group (10.8 vs. 45.5 %, p < 0.001); however, this difference was not significant after adjusting for time since implantation [33]. Similar findings were observed with OCT in a comparison of the coronary arterial response in biodegradable polymer biolimus-eluting stents (BES) versus durable polymer SES and BMS at the 5-year follow-up. The number of cross sections with neoatherosclerosis within BES was similar to BMS (BES = 2.26 % vs. BMS = 2.23 %, p = 0.98) and tended to be lower than SES (BES = 2.26 % vs. SES = 9.90 %, p = 0.07) [34]. The reasons why the temporal course of atherosclerosis within stents is accelerated are not well understood; however, dysfunction of endothelial cells, which are barriers that prevent lipid infiltration and migration of inflammatory cells, has been considered to be the main mechanism. This mechanism can also explain the increased prevalence of neoatherosclerosis observed in DES, where local drug delivery inhibits neointimal hyperplasia, causing delayed coverage and dysfunction of endothelial cells [35].

Clinical factors

In addition to stent type and age, several patient characteristics have been associated with neoatherosclerosis. Two registries of late stent follow-up have both identified chronic kidney disease as a factor independently associated with neoatherosclerosis [28,33]. Other identified factors included low-density lipoprotein cholesterol > 70 mg/dl [33], current smoking, and lack of treatment with angiotensin-converting enzyme inhibitors or angiotensin II receptor blockers [28]. In another study, the presence of diabetes mellitus was associated with a higher incidence of neoatherosclerosis at late DES follow-up (18.3 vs. 5.5 %, p = 0.03) [36].

Plaque and stent characteristics

Plaque characteristics have also been implicated in the neoatherosclerotic process. Microvessels play a key role in advanced atherosclerosis, which is closely associated with plaque hemorrhage and plaque rupture [37], while previous investigations have demonstrated a higher incidence of microvessels in late in-stent restenotic tissue, suggesting that neovessels might be a trigger for in-stent neoatherosclerosis [38,39]. Moreover, a spatial correspondence of neoatherosclerosis and microvessels was identified at follow-up OCT examination of BMS and DES [40], supporting the hypothesis of microvessel involvement in neoatherosclerosis development. In the same study, neoatherosclerosis was more frequently identified in the proximal and distal stent sections and was also associated with the morphology of the adjacent vessel segment. Specifically, the presence of lipid plaque at the adjacent stent edges was associated with the presence of neoatherosclerosis at the stent edges, implying that progression of native disease might also contribute to neoatherosclerosis. Apart from plaque characteristics, stent characteristics such as strut thickness seem to play a role even among the same stent type. In a study assessing the prevalence of neoatherosclerosis in BMS at long-term follow-up (≥ 4 years after implantation), thick-strut stents (≥ 100 µm) had a higher prevalence of neoatherosclerosis compared with thin-strut stents (70 vs. 32 %, p = 0.02) [41].

Neoatherosclerosis and in-stent restenosis

Although early in-stent restenosis mainly results from aggressive neointimal proliferation [42], recent data also suggest that neoatherosclerosis may play an important pathophysiological role, especially in late restenosis. Pathological studies have demonstrated that neoatherosclerosis represents a common substrate in patients with late stent failure [43], while in vivo OCT studies have further elucidated the role of neoatherosclerosis in the development of late stent failure (Fig. 3). The presence of neoatherosclerosis has been associated with a higher degree of neointimal hyperplasia, independent of stent type and time since implantation [44]. Although during the early phases after DES implantation homogeneous or heterogeneous patterns of coverage are the predominant pathology in in-stent restenosis, at later follow-up intervals restenosis is often associated with the presence of neoatherosclerosis [38,45].
A study focusing on 50 patients with DES restenosis at a median follow-up of 32 months since implantation showed a prevalence of neoatherosclerosis of 52 % [9]. Also, in lesions with very late BMS restenosis beyond 10 years since implantation, neoatherosclerosis was detected in 100 % of the stents, with a high frequency of neointimal rupture and neointimal thrombi [29]. However, a comparison of restenotic lesions of DES and BMS showed that neoatherosclerosis occurs earlier in DES compared with BMS and develops more diffusely along stented vessels, with thinner cap and greater total lipid core [8]. Whether the contribution of neoatherosclerosis to restenosis of second-generation DES is similar to that of first-generation DES is not well established; however, observations of restenosis due to neoatherosclerosis have also been reported for second-generation DES [33].

Neoatherosclerosis and stent thrombosis

Neoatherosclerosis also appears to contribute to the development of (very) late stent thrombosis (Fig. 4). Initial pathological studies demonstrated a role for the underlying plaque in BMS thrombosis, while subsequent studies in DES incriminated an impaired healing response, with high rates of uncovered struts and vascular-toxicity-associated malapposition, as the main pathogenetic factor [46,47]. Nevertheless, more recent pathological and intracoronary imaging data support the hypothesis that neoatherosclerosis is an active causal mechanism in a high percentage of cases of late stent thrombosis [32,48,49]. Autopsy studies have shown that thrombosis might develop on the grounds of neointimal plaque rupture [32], while thrombus aspirate samples from patients with very late BMS thrombosis demonstrate the presence of atherosclerotic plaque components, such as foamy macrophages and cholesterol crystals, within the aspirates [48]. Simultaneously, a number of studies have examined the OCT findings in patients with late and very late stent thrombosis in an attempt to identify the pathomechanisms [50][51][52][53][54]. Despite minor discrepancies in the exact prevalence of neoatherosclerosis and the relative contribution of neointimal plaque rupture in each study, these studies collectively demonstrate a very high prevalence of plaque rupture as a pathomechanism for very late BMS thrombosis [55], and an almost equal contribution of neointimal plaque rupture and impaired healing in (very) late DES thrombosis [50,52,53].

[Fig. 4 caption: Very late thrombosis due to neoatherosclerosis. a Angiogram of a patient presenting with ST-elevation myocardial infarction 7 years after everolimus-eluting stent implantation in the right coronary artery, showing total occlusion. After thrombus aspiration an in-stent lesion is obvious (b), while optical coherence tomography shows neoatherosclerosis development (yellow) with neointimal rupture leading to cavity formation (Cav) and thrombus (red) (c-f). Homogeneous neointima is indicated in green. Panels on the bottom are color-coded cartoons of the corresponding OCT images, explaining the composition of the neointimal tissue.]

Compared with an impaired healing response, neointimal rupture is a mechanism occurring mainly at longer intervals since stent implantation [50,56], in neointimal regions with high necrotic core content and thin fibrous cap, similar to what is observed in native atherosclerosis, but also with a very high incidence of macrophage infiltration [49].
Evidence also suggests that the clinical presentation of late stent failure is affected not exclusively by the presence of neoatherosclerosis but also by its morphological characteristics. In patients with in-stent restenosis, a higher incidence of OCT-defined thin-cap fibroatheroma, intimal rupture, and thrombi was identified in patients presenting with unstable angina compared with patients with stable symptoms [9]. Our group investigated the differences in the prevalence of neoatherosclerosis with regard to the clinical presentation [49]. Both stent restenosis and stent thrombosis had a higher prevalence of neoatherosclerosis compared with the asymptomatic group (stent thrombosis: 67 % vs. restenosis: 85 % vs. asymptomatic: 32 %; p < 0.001). However, the prevalence of neointimal rupture was higher in the stent thrombosis group (stent thrombosis: 41 % vs. restenosis: 15 % vs. asymptomatic: 5 %; p < 0.001) (Fig. 5a). This finding underscores the contribution of neoatherosclerosis development to the pathogenesis of late stent failure, while it indicates that neointimal rupture is mainly associated with acute presentation, although it is not infrequently encountered in stable or asymptomatic in-stent lesions. Overall, intravascular imaging has changed the understanding of the pathogenesis of late restenosis and thrombosis by disclosing that neoatherosclerosis is a common pathomechanism in both entities (Fig. 5b). Stent restenosis had traditionally been considered a rather benign entity associated with stable symptoms, while stent thrombosis was seen as a life-threatening entity associated with acute presentation, and these two entities were considered to be associated with different pathogenetic substrates. However, the presence, to a different extent, of common imaging findings such as in-stent necrotic core, neointimal rupture, and thrombus in both entities implies that these two presentations can occur as different manifestations of the same unfavorable healing process.

Neoatherosclerosis and bioresorbable scaffolds

Bioresorbable vascular scaffolds (BVS) are a new technology for percutaneous revascularization; the scaffolds are resorbed over an interval of 2-4 years after implantation. Conceptually, after this interval, the permanent vessel caging with the associated endothelial dysfunction disappears, and this can lead to a reduction of stent-induced complications such as hypersensitivity reactions or neoatherosclerosis. First-in-man studies of BVS have demonstrated a favorable healing response at very long-term follow-up, with complete strut reabsorption, late luminal enlargement, and a potentially favorable plaque modification, while no cases of necrotic core accumulation of adluminal origin were observed [57]. As, after bioresorption, the area corresponding to struts and neointima is consolidated with the underlying plaque, it resembles a native atherosclerotic plaque that is now well separated from the lumen by a signal-rich layer, a process modulated by hemodynamic factors [58] (Fig. 6). Nevertheless, as these first-in-man studies have focused on simple lesions, data on more complex lesions, including thrombotic lesions, are scarce [59]. Studies on more complex populations are ongoing, without, however, any worrying signs regarding neoatherosclerosis development in human studies of BVS thus far. Moreover, studies reporting on (very) late BVS thrombosis have not shown any evidence of scaffold thrombosis due to neointimal plaque rupture [60].
Therefore, the further development and implementation in clinical practice of bioresorbable technologies might allow these complications to be overcome, although more long-term data are needed to confirm a potential benefit of this technology.

[Fig. 5 caption, partial: ...[49]. b Change in the understanding of stent failure by intravascular imaging insights. Although stent restenosis had traditionally been considered a benign entity associated with stable symptoms and stent thrombosis had been considered a life-threatening entity associated with acute presentation, intravascular imaging has disclosed the participation of neoatherosclerosis in both entities, with several imaging findings such as in-stent necrotic core, neointimal rupture, and thrombus being common between them.]

Newer OCT techniques for assessment of neoatherosclerosis

As previously mentioned, OCT is associated with some limitations in the assessment of neoatherosclerosis. Nevertheless, new developments in OCT could improve in-stent tissue characterization and provide an enhanced and more objective neointimal tissue characterization. One of these developments is quantitative attenuation imaging. Tissue properties such as attenuation have been shown in ex vivo studies to be associated with the presence of macrophages and/or necrotic core [61,62], while in vivo studies have demonstrated the potential of this technique for the detection of lipid plaques and fibroatheromas [63]. Therefore, this technique could be used for the assessment of neoatherosclerosis, since the optical properties of tissue components such as macrophages and necrotic core are similar between native atherosclerosis and neoatherosclerosis. OCT assessment of neoatherosclerosis can also be improved through software for automated fibrous cap thickness measurement, which can reduce human variability in the assessment of in-stent fibroatheromas [64]. Furthermore, the application of technologies such as polarization-sensitive OCT in humans can provide further structural information, such as the collagen and smooth muscle content of neointimal tissue [65], while high-speed OCT catheters will be able to acquire images from an entire coronary artery in less than 1 s [66].

Conclusion and perspectives

Recent evidence has highlighted the significance of in-stent neoatherosclerosis as a disease entity that comprises an important substrate for late adverse cardiac events after stent implantation and is one of the major pitfalls of the current generation of metallic stents. Pathological studies have demonstrated that infiltration of lipid-laden foamy macrophages, with or without necrotic core and/or calcification formation, constitutes the pathological substrate. A number of in vivo studies have used OCT (an imaging modality that can accurately identify morphological characteristics such as in-stent necrotic core, in-stent calcification, macrophage infiltration, and neointimal plaque rupture) to assess the in vivo characteristics and implications of this disease entity. These have demonstrated that neoatherosclerosis appears earlier and more frequently in drug-eluting stents compared with bare metal stents, is associated with several clinical and morphological factors, and increases in prevalence over time. Importantly, an association of neoatherosclerosis with late stent failure has been demonstrated, manifesting either as late restenosis or, in cases with neointimal plaque rupture, as late stent thrombosis.
Bioresorbable technologies hold promise for the reduction of this entity; however, their long-term healing response needs to be better documented. Further studies are needed to demonstrate the prognostic implications of neoatherosclerosis detection and whether the natural history of this disease entity can be modified. Meanwhile, new devices for percutaneous revascularization that can alleviate this entity need to be developed, in order to reduce the incidence of very late events after percutaneous coronary intervention.

[Fig. 6 caption, partial: ...implantation, struts are preserved and the neointimal area is clearly delineated between the stent and lumen contour even at long-term follow-up, with possible development of neoatherosclerosis within the neointima. Conversely, in long-term follow-up of bioresorbable scaffolds, neointimal boundaries are unclear after bioresorption (dotted line), and the intima resembles a native plaque, defined as neoplaque. The signal-rich layer is the layer that separates the underlying plaque components from the lumen. BVS bioresorbable vascular scaffold. (Adapted from Karanasos et al. [57])]
Fomitopsis betulina (formerly Piptoporus betulinus): the Iceman's polypore fungus with modern biotechnological potential

Higher Basidiomycota have been used in natural medicine throughout the world for centuries. One such fungus is Fomitopsis betulina (formerly Piptoporus betulinus), which causes brown rot of birch wood. Annual white to brownish fruiting bodies of the species can be found on trees in the northern hemisphere, but F. betulina can also be cultured as a mycelium and fruiting body. The fungus has a long tradition of application in folk medicine as an antimicrobial, anticancer, and anti-inflammatory agent. Probably owing to these curative properties, pieces of its fruiting body were carried by Ötzi the Iceman. Modern research confirms the health-promoting benefits of F. betulina. Pharmacological studies have provided evidence supporting the antibacterial, anti-parasitic, antiviral, anti-inflammatory, anticancer, neuroprotective, and immunomodulating activities of F. betulina preparations. Biologically active compounds such as triterpenoids have been isolated. The mushroom is also a reservoir of valuable enzymes and other substances, such as the cell-wall (1→3)-α-d-glucan, which can be used for the induction of microbial enzymes degrading cariogenic dental biofilm. In conclusion, F. betulina can be considered a promising source for the development of new products for healthcare and other biotechnological uses.

Introduction

In 1991, a mummified body was discovered in the Val Senales glacier in Italy. The man (named Ötzi the Iceman), who lived 5300 years ago, carried two fragments of a fruiting body of Fomitopsis betulina (formerly Piptoporus betulinus). Some scientists believe that Ötzi might have used the fungus for medical purposes (Capasso 1998) and, although the idea arouses some controversy (Pöder 2005), the long tradition of the use of F. betulina in folk medicine is a fact (Reshetnikov et al. 2001; Wasser 2010). Infusion from F. betulina fruiting bodies was popular, especially in Russia, the Baltic countries, Hungary and Romania, for its nutritional and calming properties. Fungal tea was used against various cancer types, as an immunoenhancing and anti-parasitic agent, and as a remedy for gastrointestinal disorders (Grienke et al. 2014; Lucas 1960; Peintner and Pöder 2000; Semerdžieva and Veselský 1986; Shamtsyan et al. 2004). Antiseptic and anti-bleeding dressings made from fresh F. betulina fruiting bodies were applied to wounds, and the powder obtained from dried ones was used as a painkiller (Grienke et al. 2014; Papp et al. 2015; Rutalek 2002). In the present paper, we present the current knowledge of the fungus F. betulina, including its lifestyle, chemical composition, and potential in biotechnology.
… thicker walls. No primordia or fruiting bodies of this species were found in vitro (Petre and Tanase 2013). Basidiospores are smooth, hyaline, thin-walled, and cylindrical (Han and Cui 2015; Han et al. 2016; Kim et al. 2005; Schwarze 1993). The birch polypore grows mainly as a saprophyte on dead trees and occasionally as a parasite of living trees. It occurs in northern temperate forests and parks in Europe, North America, and Asia. The host range of the fungus is restricted exclusively to birch species, e.g. Betula pendula Roth., B. pubescens Ehrh., B. papyrifera Marsh., and B. obscura Kotula (Schwarze 1993; Žižka et al. 2010).

Wood decay

Wood-rotting fungi are traditionally divided into white and brown rot species based on the structure and composition of the residual wood. Brown rot fungi extensively degrade the carbohydrate fraction of lignocellulose but, in contrast to white rot fungi, leave the lignin, although in a modified form. In these fungi, chemical depolymerization of cellulose, which precedes and supports its enzymatic degradation, is very important. They lack ligninolytic peroxidases and usually some other enzymes, such as the processive cellobiohydrolases used for degradation of crystalline cellulose, but contain H₂O₂-generating oxidases and Fe³⁺- and quinone-reducing enzymes used for non-enzymatic depolymerization of polysaccharides (Arantes and Goodell 2014; Baldrian and Valášková 2008; Hori et al. 2013). Modern phylogenetic evidence suggests, however, that there is no sharp distinction between the two groups of fungi (Hori et al. 2013; Riley et al. 2014). Fomitopsis betulina is one of the most common brown rot species, but its wood-decaying mechanism has been tested only fragmentarily (Meng et al. 2012) and is still poorly understood. Like other fungi of this type, it degrades wood to yield brown, cubical cracks easily broken down. Many factors, including microflora or compounds present in the wood, contribute to this complex process (Przybył and Żłobińska-Podejma 2000; Song et al. 2016; Zarzyński 2009). Shang et al. (2013) showed that wood samples decayed by F. betulina lost 57 % of dry weight (dw) and 74 % of holocellulose after 30 days, whereas the fungus growing on wheat straw caused a 65 % loss of dw within 98 days of culture (Valášková and Baldrian 2006a). A set of F. betulina enzymes involved in the degradation of lignocellulose was characterized in detail by Valášková and Baldrian (2006a, b). The fungus growing on straw produced enzymes with wide substrate specificities: (1→4)-β-endoglucanase, β-glucosidase, (1→4)-β-endoxylanase, (1→4)-β-endomannanase, (1→4)-β-xylosidase, and (1→4)-β-mannosidase. The activities of ligninolytic enzymes and of cellobiose dehydrogenase for the oxidoreductive cleavage of cellulose were not detected. Similar results were obtained in liquid cultures by Vĕtrovský et al. (2013).
When F. betulina grew in nature, β-glucosidase and β-mannosidase activity was associated with the fruiting bodies, while endopolysaccharidases were detected in the colonized wood (Valášková and Baldrian 2006a).

Cultivation

Carpophores of F. betulina from natural habitats, or mycelium and culture liquid from submerged cultures, were used as raw material to obtain extracts and bioactive substances with medicinal properties (Table 1) (Lomberh et al. 2002). Studies concerning the mycelium growth rate in the presence of various substances (metals, dyes) were conducted mainly on agar media or in liquid cultures (Baldrian and Gabriel 2002; Dresch et al. 2015; Hartikainen et al. 2016). The yield of F. betulina mycelium was established in liquid cultures with the addition of some agricultural wastes in the studies of Krupodorova and Barshteyn (2015). The enzymatic activity of F. betulina was studied under laboratory conditions on agar media (Krupodorova et al. 2014), in liquid cultures (Vĕtrovský et al. 2013), on wheat straw (Valášková and Baldrian 2006a, b), and on Betula sp. wood samples (Reh et al. 1986; Shang et al. 2013). There are limited data on small- or large-scale cultivation of this species in which carpophores could be obtained under controlled conditions. The first such report, referring to outdoor log cultivation of F. betulina on Betula davurica Pallas, originated from Korea (Ka et al. 2008). Logs with a diameter of 8-18 cm and a length of 107-135 cm were inoculated and then cultured in natural conditions. The yield obtained was in the range from 212 to 1298 g fresh weight (1-2 mushrooms per log). Development of fruiting bodies took an average of 18 months. The log yield ratio was estimated at 2.8-6.1 %. The only report on indoor production of F. betulina fruiting bodies was given by Pleszczyńska et al. (2016). In that study, four strains of F. betulina isolated from natural habitats were used. Their mycelia were inoculated into birch sawdust supplemented with organic additives. Mature fruiting bodies weighing from 50 to 120 g were obtained from only one strain, after 3-4 months of cultivation in artificial conditions (Fig. 1c). The biological efficiency ranged from 12 to 16 %. It was shown that extracts isolated from cultivated and naturally grown F. betulina fruiting bodies had comparable biological activity (Table 1). Although some authors considered young specimens of F. betulina edible (Wasson 1969), the value of the fungus lies not in its nutritional but in its therapeutic properties. An overview of the available literature concerning the medical potential of the birch polypore is presented in Table 1. Reflecting the folk uses of the birch polypore, most of the presented research was based on crude extracts, which often have greater bioactivity than isolated constituents at an equivalent dose. This phenomenon is explained mostly by synergistic interactions between the compounds present in the mixtures. Furthermore, extracts often contain substances that inhibit multi-drug resistance and therefore further increase the effectiveness of the active substances. Particularly noteworthy among the wide variety of biological activities of F. betulina extracts are the properties proved in in vivo studies, e.g.
the efficacy of water and ethanol extracts in treatment of the genital tract in dogs (Utzig and Samborski 1957; Wandokanty et al. 1954, 1955) or the protection of mice from lethal infection with the TBE virus by water, ethanol, and ether extracts (Kandefer-Szerszeń et al. 1981; Kandefer-Szerszeń and Kawecki 1974, 1979). The broad spectrum of antiviral and antimicrobial activity of F. betulina extracts, proved by a number of research teams in different models based on different techniques, deserves special attention as well (see references cited in Table 1). Recently, Stamets (2011, 2014) has invented formulations prepared from different medicinal mushrooms, including F. betulina, which are useful in preventing and treating viral and bacterial diseases, i.e. herpes, influenza, SARS, hepatitis, tuberculosis, and infections with E. coli and S. aureus. Some pure compounds corresponding to the bioactivity of the birch polypore have also been identified (Fig. 2). They belong to several chemical classes, but the greatest attention has been paid to small-molecular-weight secondary metabolites, especially triterpenoids. Kamo et al. (2003) isolated several triterpenoid carboxylic acids with a lanostane skeleton, e.g. polyporenic acids and their derivatives (Table 1). In in vivo tests, the substances suppressed TPA-induced mouse ear inflammation by up to 49-86 % at a dose of 0.4 µM/ear. Alresly et al. […] In their search for fungal antimicrobial substances, Schlegel et al. (2000) isolated another valuable compound, piptamine (N-benzyl-N-methylpentadecan-1-amine), from a submerged culture of F. betulina Lu 9-1. It showed activity against gram-positive bacteria (MIC, minimum inhibitory concentration, values in the range from 0.78 to 12.5 µg/ml) and yeasts, including Candida albicans (MIC 6.25 µg/ml). Polysaccharides from higher basidiomycota mushrooms have usually been considered the major contributors to their bioactivity. However, birch polypore polysaccharides have not yet been sufficiently explored, in terms of either structure or pharmacological activity. It is known that the Fomitopsis cell wall contains (1→3)-β-d-glucans in an amount of ca. 52 % dw (Jelsma and Kreger 1978; Grün 2003). They are built from β-d-glucopyranose units connected by (1→3)-linkages in the main chain, with (1→3)-β-d-linked side branches. However, there are no reports about their biological activities. Another polysaccharide isolated from the birch polypore is a water-insoluble, alkali-soluble (1→3)-α-d-glucan. Although α-glucans are believed to be biologically inactive, its carboxymethylated derivative showed moderate cytotoxic effects in vitro (Wiater et al. 2011).

Miscellaneous applications

With knowledge of the mechanisms of brown rot decay come possibilities for new applications of these fungi in biotechnology. The enzymatic and non-enzymatic apparatus for lignocellulose degradation can be used for bioprocessing of biomass towards fuels and chemicals (Arantes et al. 2012; Giles and Parrow 2011; Ray et al. 2010). Brown rot fungi, including F. betulina, were tested for bioleaching of heavy metals (Cu, Cr, and As) from wood preservatives owing to their accumulation of metal-complexing oxalic acid (Sierra Alvarez 2007). Production of biomass-degrading enzymes, for instance cellulases, hemicellulases, amylases, etc., was also studied (Krupodorova et al. 2014; Valášková and Baldrian 2006a, b).

Conclusions and outlook
The F. betulina fungus has been widely used and appreciated in folk medicine, and modern pharmacological studies have confirmed its potential, indicating significant antimicrobial, anticancer, anti-inflammatory, and neuroprotective activities. The demonstrated possibility of cultivating it under artificial conditions further promotes the applicability of the fungus. However, compared with other polypore fungi, research on F. betulina is less developed; for instance, little is known about its lifestyle, including its wood degradation strategy. Moreover, most of the bioactivity studies have been performed using crude extracts; hence, only a few of the effects have been associated with identified active substances, e.g. antibacterial activities with piptamine or polyporenic acids. With a few exceptions, we still do not know the mechanisms underlying the biological activities. Verification of the biological activities in in vivo and clinical studies is also required. Further research could contribute to better exploitation of the application potential of F. betulina.
Deterministic and cascadable conditional phase gate for photonic qubits

Previous analyses of conditional $\phi$-phase gates for photonic qubits that treat cross-phase modulation (XPM) in a causal, multimode, quantum field setting suggest that a large ($\sim\pi$ rad) nonlinear phase shift is always accompanied by fidelity-degrading noise [J. H. Shapiro, Phys. Rev. A 73, 062305 (2006); J. Gea-Banacloche, Phys. Rev. A 81, 043823 (2010)]. Using an atomic V-system to model an XPM medium, we present a conditional phase gate that, for sufficiently small nonzero $\phi$, has high fidelity. The gate is made cascadable by using a special measurement, principal mode projection, to exploit the quantum Zeno effect and preclude the accumulation of fidelity-degrading departures from the principal-mode Hilbert space when both control and target photons illuminate the gate.

I. INTRODUCTION

In optical quantum logic, qubit states are usually encoded using the presence or absence of a single photon in one of the many modes of the quantum electromagnetic field. We refer to this special information-carrying mode as the principal mode. Logic gates can be high-fidelity only if they map input principal modes to output principal modes. Gates can be cascaded successfully if the input and output principal modes are the same. In either the dual-rail or polarization architecture, high-fidelity, cascadable single-qubit gates can be readily implemented using linear optics (beam splitters and phase shifters). A significant challenge to implementing optical quantum information processing is the faithful realization of a deterministic and cascadable universal two-qubit photonic logic gate. Cross-phase modulation (XPM), a nonlinear process in which one electric field affects the refractive index seen by another, has often been proposed [1-4] as a nonlinear optical process that might be used to construct such a universal gate, the conditional $\pi$-phase gate. (Other, fundamentally different photonic two-qubit gates have been designed, e.g., [5,6], which involve only single-photon+atom interactions; such gates will not be discussed here.) While a single-mode analysis of XPM-based gates is encouraging, in recent years multimode efforts [7-9] that treat photons as excitations of a quantum field with continuously many degrees of freedom have been somewhat more foreboding. In [7,8] the problem was studied using a quantized version of the solution to the classical coupled-mode equations for XPM. It was shown that, within this model of quantum XPM, noise terms necessary to preserve commutation relations prevent the high-fidelity operation of a conditional $\phi_{nl}$-phase gate when $\phi_{nl} \sim \pi$. Similar fidelity-degrading noise arose in the work of Gea-Banacloche [9], whose treatment of XPM was based on a Hamiltonian describing an effective field-field interaction appropriate to a medium exhibiting electromagnetically induced transparency (EIT). In this analysis the difficulties were attributed, at least partially, to spontaneous emission. It was recently demonstrated [10] that these problems can be circumvented by encoding qubit states in resonant, temporally entangled (highly bunched) biphoton pairs and using an atomic ∨-system to realize a Kerr medium.
However, this approach is not scalable: while it may be a reasonable way to implement a conditional $\pi$-phase gate on exactly one pair of qubits, implementing this gate on any pair of $n$ qubits would require encoding $n$-qubit states in $n$-photon packets every pair of which is temporally entangled. The prospects for achieving a high-fidelity conditional $\phi_{nl}$-phase gate for photonic qubits with $\phi_{nl} \sim \pi$ thus seem rather dim. Still, semiclassical analyses have shown that several media, such as those supporting the EIT-based giant Kerr effect [11,12], possess $\chi^{(3)}$ nonlinearities whose real part (responsible for the XPM phase shift), though small, is large in comparison to the rate at which various fidelity-degrading absorption processes occur. With this in mind, we address in the present work the following question: can a high-fidelity conditional $\phi_{nl}$-phase gate be constructed for small $\phi_{nl}$, and could these gates be cascaded to yield a significant nonlinear phase shift with high fidelity? We show that, indeed, a conditional phase gate can be constructed with a small nonlinear phase shift such that the error probability (infidelity) $|\varepsilon|^2$ is even smaller, $|\varepsilon|^2 \ll \phi_{nl} \ll 1$. Cascading these gates, however, is nontrivial. The error, which results from a slight deformation of the principal modes, can be coherently amplified as the gate is cascaded, preventing the straightforward construction of a conditional $\pi$-phase gate. This difficulty can be avoided by performing a measurement after each primitive conditional $\phi_{nl}$-phase gate that projects onto the principal mode subspace, exploiting the quantum Zeno effect as an error-preventing mechanism [13-15]. For a particular choice of principal modes, we suggest one way that such a measurement could be realized. In deriving these results, we start from a Hamiltonian describing the interaction of two quantum optical fields with a three-level ∨-atom. While the nonlinearities present in a ∨-atom are not as strong as those in, for example, the giant Kerr effect [11], the ∨-system is simple enough that it yields readily to an analysis in terms of quantum fields. After solving for the evolution of our system in the one- and two-photon subspace, we investigate fidelity and cascadability.

II. THE FIELDS AND THEIR INTERACTION

In this section we describe our encoding of qubit states in one-dimensional quantum fields, then consider how these fields evolve when interacting with an optical cavity containing an atomic ∨-system. Following the approach used in [6,10,16], this is described by a Hamiltonian $H_{nl}$ for the fields+cavity+atom system. The Hamiltonian-based approach we use is essentially the basis for an alternative description in terms of the input-output formalism [17]. The atomic system mediates an XPM-like interaction that is a central component in the conditional phase gates discussed later. Determining the nonlinear phase shift and error induced by the atomic interaction will be of the utmost importance in evaluating these gates. To this end, one- and two-photon propagators for this system [10] are introduced.

A. Qubit Encoding

In our gate, qubit states are encoded using two quasimonochromatic, positive-frequency, photon-units optical fields $h_z(\tau)$ and $v_z(\tau)$ [18] (for convenience, $\tau \equiv ct$ is used to measure time). We take $+z$ as the propagation direction, and ignore the transverse character of these fields throughout.
The horizontally polarized field $h_z(\tau)$ and the vertically polarized field $v_z(\tau)$ are independent, and have the nontrivial commutator $[h_z(\tau), h^\dagger_{z'}(\tau)] = \delta(z - z')$, and similarly for $v$. Logical qubit states are encoded as excitations of two principal modes $h$ and $v$, defined by $h^\dagger = \int dz\, \psi(z)\, h^\dagger_z$ and $v^\dagger = \int dz\, \psi(z)\, v^\dagger_z$, where operators without explicit time dependence are in the Schrödinger picture. With the normalization $\int dz\, |\psi(z)|^2 = 1$, $h^\dagger$ and $v^\dagger$ are interpreted, respectively, as creating horizontally and vertically polarized photons with wavefunction $\psi(z)$. We refer to all modes orthogonal to $h$ and $v$ as auxiliary, or bath, modes, and assume that the auxiliary modes are initially unexcited. In this case, the correspondence between logical qubit states and field states reads as in Eq. (2), where $|{\rm vac}\rangle$ is the multimode vacuum. Equation (2) could describe either a dual-rail or polarization encoding, where fields not participating in our gate have been dropped for convenience.
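The displayed form of Eq. (2) was lost in extraction; a standard reconstruction, consistent with the definitions above and with the logical subspace $\{|{\rm vac}\rangle, |H\rangle, |V\rangle, |HV\rangle\}$ used in Sec. III, would read:

```latex
% Reconstruction of Eq. (2); the labels |H>, |V>, |HV> follow later usage.
\begin{align}
|00\rangle_L &= |\mathrm{vac}\rangle, &
|10\rangle_L &= h^\dagger|\mathrm{vac}\rangle \equiv |H\rangle, \nonumber\\
|01\rangle_L &= v^\dagger|\mathrm{vac}\rangle \equiv |V\rangle, &
|11\rangle_L &= h^\dagger v^\dagger|\mathrm{vac}\rangle \equiv |HV\rangle.
\end{align}
```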
B. Qubit Evolution and Interaction Hamiltonian

At the input to our gate, the fields are prepared in some superposition $|\psi_{\rm in}\rangle$ of the basis states in Eq. (2). This state is localized in a noninteracting input region (Fig. 1(a)). It then propagates in the $+z$ direction toward a region where both fields interact, evolving under a nonlinear total Hamiltonian $H_{nl}$ which couples these fields to a three-level atomic ∨-system. (Here "nonlinear" means that the total Hamiltonian $H_{nl}$ generates nonlinear Heisenberg equations of motion, a necessary condition for $H_{nl}$ to effect a two-qubit gate that does not factorize into a product of one-qubit gates.) A long time later, the atom has returned to its ground state and the photonic qubits are in a state $|\psi_1\rangle$ which is localized in a noninteracting output region. Working in an interaction picture with respect to the free-field Hamiltonian $H_{\rm field}$, the scattering matrix $S_{nl}$ connects the states $|\psi_{\rm in}\rangle$ and $|\psi_1\rangle$.

In interacting with the atom, a single horizontally polarized (vertically polarized) photon acquires a phase shift $\phi_H$ ($\phi_V$), and may undergo some amount of pulse deformation. When both a horizontal and a vertical photon are incident upon the atom at the same time, however, the presence of the horizontal photon frustrates the interaction of the vertical photon with the atom, and vice versa, i.e., the atom cannot absorb both photons simultaneously. As a result, the pair of photons picks up an extra phase shift $\phi_{nl}$. In this way, the ∨-system models a Kerr medium, and can be used to construct conditional phase gates. To describe this interaction, we use the same Hamiltonian $H_{nl}$ as in [10]: both fields $h_z$ and $v_z$ couple to a one-sided cavity containing an atomic ∨-system, whose level structure is shown in Fig. 1(b). For $z < 0$ these fields are interpreted as propagating toward the cavity, while for $z > 0$ they are interpreted as propagating away from the cavity (Fig. 1(a)). All cavity modes are ignored, except a horizontally polarized mode $a_H$ and a vertically polarized mode $a_V$, both of which are resonant with the atomic transitions at frequency $\Omega_1$. The total Hamiltonian is the sum of a noninteracting Hamiltonian $H_0$ and two interaction pieces.

FIG. 1. (a) The external fields $v_z(\tau)$ and $h_z(\tau)$ interact with an atom placed within a one-sided cavity at position $z = 0$. (b) Three-level atom used as an XPM medium. Vertical light ($V$) drives the $0 \leftrightarrow 1$ transition, while horizontal light ($H$) drives the $0 \leftrightarrow 2$ transition. In the lossy-cavity regime, $\gamma_{3D} \ll g \ll \kappa$, the cavity fields can be adiabatically eliminated, yielding an effective coupling directly between the external fields and the ∨-system.

In terms of $k$-space field operators $v_k \equiv \int dz\, v_z e^{-ikz}$ and $h_k \equiv \int dz\, h_z e^{-ikz}$, which annihilate photons with definite frequency, the noninteracting Hamiltonian $H_0$ comprises the free-field and bare atomic energies, wherein $\sigma_{mn} \equiv |m\rangle\langle n|$ and $\omega_k = ck$ [19]. Under $H_0$, the Heisenberg-picture field operators propagate towards $+\infty$, e.g., $h_z(\tau) = h_{z-\tau}(0)$. The interactions between the cavity, free field, and atom are taken within the rotating-wave approximation and, together with $H_0$, form the total Hamiltonian $H_{nl}$; we have taken $z = 0$ as the cavity's position. As in [10], we consider the lossy-cavity regime in which the cavity decay rates $\kappa$, cavity-atom couplings $g$, and rate $\gamma_{3D}$ of spontaneous emission into free space satisfy $\kappa \gg g \gg \gamma_{3D}$. In this regime cavity decay dominates spontaneous emission, and cavity operators can be adiabatically eliminated in favor of the external field [10,20]. The dynamics of the atom+external field system are then identical (up to an inconsequential phase shift resulting from reflection off the one-sided cavity's perfect mirror) to those generated by an effective Hamiltonian $H_{nl}$ in which the fields are directly coupled to the atom at effective strengths $\Gamma_H$ and $\Gamma_V$. In this paper, dynamics are derived exclusively from the effective Hamiltonian $H_{nl}$. To ensure that the gate treats both qubits symmetrically, we will later set $\Gamma_H = \Gamma_V$, but temporarily retain subscripts for pedagogical clarity.

C. Evolution of One- and Two-Photon States

To determine how the one- and two-photon states in Eq. (2) that encode the computational basis evolve under the scattering matrix $S_{nl}$, it suffices to know the one- and two-photon propagators (the vacuum state evolves trivially). Labeling states $|{\rm atom; field}\rangle$, these are defined as matrix elements of $S_{nl}$. The time-dependent propagators, matrix elements of $e^{-iH_{nl}\tau/\hbar c}$ instead of $S_{nl}$, are given in [10]. The single-photon propagator $G_H(x, y)$ gives the long-time, interaction-picture amplitude for a photon initially at position $y$ to propagate to position $x$. We will always assume that $y < 0$, so that every photon can interact with the atom, located at the origin. In this case $G_H$ is determined by the amplitude for the atom, excited by a horizontally polarized impulse at time zero, to still be excited a time $\tau$ later; here $\theta(\tau)$ is the Heaviside step function, equal to 1 for $\tau > 0$ and 0 for $\tau < 0$, and the Fourier-space propagator follows by transformation. Analogous results hold for $G_V(x, y)$. If the atomic system were linear, it could absorb multiple photons before emitting any. In this case, the two-photon propagator $G_{HV}(x_H, x_V, y_H, y_V)$ would just be a product of single-photon propagators. Instead, a second piece removes from this product exactly those terms that correspond to two absorptions before any emissions. This causes two-photon output states to be antibunched. The corresponding two-photon Fourier-space propagator follows by Fourier transforming all four arguments. The propagators $G_H$, $G_V$, and $G_{HV}$ enable the gate fidelity calculations reported in Sec. III C.
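The explicit propagator expressions were lost in extraction. As a numerical stand-in, the sketch below assumes the textbook Fourier-space form for a single photon scattering off a two-level transition of line-width $\Gamma$ (adiabatic elimination of a lossy one-sided cavity conventionally gives $\Gamma \sim g^2/\kappa$): a unit-modulus phase factor. This is an assumption consistent with, but not quoted from, the paper:

```python
import numpy as np

# Assumed unit-modulus k-space propagator vs. detuning (see caveat above)
def single_photon_propagator(delta, gamma_line):
    return (delta - 1j * gamma_line / 2) / (delta + 1j * gamma_line / 2)

delta = np.linspace(-10.0, 10.0, 2001)    # detuning k - Omega_1, units of Gamma
G = single_photon_propagator(delta, 1.0)

assert np.allclose(np.abs(G), 1.0)        # pure phase: photon number conserved
phi_linear = np.angle(G)                  # linear phase shift phi_H(k)
# phi_linear approaches +/- pi near resonance and 0 far from resonance,
# the dispersion-like behavior invoked in Sec. III.
```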
III. A PRIMITIVE CONDITIONAL PHASE GATE

In this section we describe a conditional phase gate based on the interaction $S_{nl}$ described above. We first discuss how the unnecessary and undesirable linear evolution can be removed. We then consider the fidelity of this primitive (non-cascaded) gate with an ideal conditional phase gate.

A. Removing Linear Evolution

In interacting with the atomic ∨-system, both the one- and two-photon states that encode the computational basis (Eq. (2)) evolve nontrivially. In this evolution, all kets are normalized, $\{\phi_H, \phi_V\}$ are the single-photon (linear) phase shifts, and the various $\varepsilon$-terms represent errors that occur because of photons evolving out of the principal modes. The linear phase shifts $\{\phi_H, \phi_V\}$ are not only irrelevant to the construction of conditional logic gates, but also come with some amount of fidelity-degrading evolution out of the principal mode subspace. In order to build high-fidelity gates, it would be useful to remove completely the linear evolution that causes these effects. Removing linear evolution is also theoretically appealing because it allows one to study the fundamental limitations of the ∨-system's capacity for quantum XPM. Formally, linear evolution is removed by evolving backward in time under a linearized Hamiltonian $H_l(\Omega_1)$ in which the atomic lowering operators $\sigma_{01}$ and $\sigma_{02}$ are replaced by independent harmonic-oscillator annihilation operators $b_V$ and $b_H$ (cf. Eq. (6)). This Hamiltonian, which we have explicitly parametrized by the cavity frequency $\Omega_1$ for later convenience, is linear in the sense that the equations of motion which it generates for the field operators $h_z(\tau)$ and $v_z(\tau)$ are linear differential equations. Application of the corresponding inverse scattering matrix $S_l^\dagger(\Omega_1) \equiv \lim_{\tau\to\infty} e^{-iH_{\rm field}\tau/\hbar c}\, e^{+iH_l(\Omega_1)\tau/\hbar c}$ then removes linear evolution from $S_{nl}$. This useful form of error correction can, in principle, be implemented using linear optics.

Figure 2(a) shows an optical circuit that removes linear evolution from input photons with center wavenumber $k_0$ by simulating time-reversed evolution under Eq. (14). First, the baseband modulation of the input photon pulses is inverted ($I$). The pulses then interact with empty one-sided cavities, and, finally, the baseband modulation is re-inverted. Real-space inversion of an optical pulse's baseband modulation corresponds to inversion about its center wavenumber $k_0$ in Fourier space. This transformation can be achieved using temporal imaging [21-24]. Temporal imaging is the longitudinal analog of traditional spatial imaging: in spatial imaging, a beam's transverse profile is manipulated using free-space diffraction and thin lenses; in temporal imaging, the longitudinal (temporal) profile is manipulated using dispersive delay lines and quadratic phase modulation. Figure 2(b) shows a temporal imaging system for baseband modulation inversion, while Fig. 2(c) shows its spatial analog. While this method has not, to our knowledge, been used to demonstrate pulse inversion with quantum light, we see no fundamental physical principle preventing its implementation. Because the scheme to implement $I$ shown in Fig. 2(b) involves only passive linear field transformations (dispersion and phase modulation), it behaves identically with respect to classical fields and few-photon pulses.
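A minimal numerical illustration of the inversion map $I$ just described; the Gaussian envelope, carrier, and grid are illustrative choices of ours, not the paper's parameters:

```python
import numpy as np

z = np.linspace(-50.0, 50.0, 4001)      # symmetric grid, so z[::-1] == -z
k0 = 2.0                                # illustrative carrier wavenumber
psi = np.exp(-((z - 5.0) ** 2) / 8.0)   # offset Gaussian baseband envelope
field_in = psi * np.exp(1j * k0 * z)

# I: demodulate to baseband, flip z -> -z, remodulate onto the same carrier
baseband = field_in * np.exp(-1j * k0 * z)
field_out = baseband[::-1] * np.exp(1j * k0 * z)

# The envelope is inverted, psi(z) -> psi(-z), while the carrier is untouched;
# in Fourier space this is reflection of the spectrum about k0.
assert np.allclose(np.abs(field_out), psi[::-1])
```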
After inverting the optical pulses, the fields in Fig. 2(a) evolve forward in time under the linearized Hamiltonian $H_l(\Omega_2)$ with cavity frequency $\Omega_2$. This corresponds to applying $S_l(\Omega_2)$ to the field operators. Because the equations of motion generated by $H_l(\Omega_2)$ are linear, the mapping of the field operators under $S_l(\Omega_2)$ is analogous to the mapping of single-photon packets under $S_{nl}$ (Eq. (10)); a similar relation holds for $v_k$. By picking the pulse center wavenumber $k_0$, atomic resonance $\Omega_1$, and cavity resonance $\Omega_2$ such that the photon-atom and photon-cavity detunings are equal and opposite, viz. $k_0 - \Omega_1 = -(k_0 - \Omega_2)$, the combined effect of pulse inversion, followed by evolution under $H_l(\Omega_2)$, followed by pulse inversion yields time-reversed evolution under $H_l(\Omega_1)$. In this way, the linear portion of $S_{nl}$ can be undone.

B. The Primitive Gate

The combined effect of nonlinear interaction with the ∨-system and removal of linear evolution is evolution under $S_l^\dagger S_{nl}$; in the resulting map, $|e\rangle$ is a two-photon state whose presence reflects errors intrinsic to the nonlinear evolution only. We refer to the transformation Eq. (18) as our primitive conditional $\phi_{nl}$-phase gate; this gate is primitive in the sense that it is not built by cascading smaller gates. It is convenient to describe the primitive gate as a transformation on the logical subspace $\{|{\rm vac}\rangle, |H\rangle, |V\rangle, |HV\rangle\}$ alone. For nonzero errors $\varepsilon$, the mapping Eq. (18) between input and output field states is not unitary when restricted to this subspace, because of pulse deformation and undesirable entanglement generated between continuous degrees of freedom (e.g., photon momentum). When restricted to the logical subspace, Eq. (18) corresponds to a trace-preserving quantum operation $\mathcal{E}_{\rm prim}$, in which $\rho$ is a two-qubit density matrix, $U_\phi$ is the ideal conditional $\phi$-phase gate, and the operation elements $\{E_1, E_2\}$ represent pure amplitude damping of the two-photon state $|HV\rangle$ out of the logical subspace. This operator-sum representation of the primitive gate is useful in determining its fidelity with an ideal conditional phase gate.

C. Fidelity of a Single Gate

The fidelity of two states is a measure of how close they are to one another, increasing from 0 (orthogonal states) to 1 (identical states). The fidelity of a pure state $|\psi\rangle$ with a mixed state $\rho$ may be defined as their overlap, $F(|\psi\rangle, \rho) = \langle\psi|\rho|\psi\rangle$. Gate fidelity extends this idea from states to logical operations on qubits. The (minimum) gate fidelity of a quantum operation $\mathcal{E}$ with a unitary gate $U$ that $\mathcal{E}$ approximates is the fidelity of $\mathcal{E}$'s output with the target output, minimized over pure-state inputs [25]. The infidelity $1 - F(\mathcal{E}, U)$ is the (maximum) probability that $\mathcal{E}$ fails to effect the desired transformation $U$. The fidelity of our gate $\mathcal{E}_{\rm prim}$ with the ideal conditional phase gate $U_{\phi_{nl}}$ is minimized by the state $|11\rangle_l = |HV\rangle$. We now consider the relationship between the fidelity $F(\mathcal{E}_{\rm prim}, U_{\phi_{nl}})$ and the nonlinear phase shift when the real-space principal mode wavefunction $\psi(z)$ in Eq. (23) is a rising exponential with center wavenumber $k_0$ and width $\gamma$. This particular principal mode wavefunction is chosen because, as demonstrated in the next section, it is possible to make a projective measurement that distinguishes excitations of this principal mode from all other modes. Additionally, we now specialize to the case in which $\Gamma_H = \Gamma_V \equiv \Gamma$, in order that the qubits are treated symmetrically.

Large Phase Shifts. If the fidelity $F(\mathcal{E}_{\rm prim}, U_{\phi_{nl}})$ and phase shift $\phi_{nl}$ could both be large simultaneously, the primitive gate would be an effective conditional phase gate. It is only when the atomic line-width $\Gamma$ is comparable in size to the pulse bandwidth $\gamma$ that a large nonlinear phase shift is possible. If $\gamma \gg \Gamma$, then the pulse is too broadband to interact significantly with the atom, while if $\gamma \ll \Gamma$, then one sees from Eq. (11) that the range $\Gamma^{-1}$ of the nonlinear piece of the two-photon propagator is negligible in comparison to the pulse length $\gamma^{-1}$.
Figure 3(a) shows the fidelity and nonlinear phase shift as functions of the detuning $\delta \equiv k_0 - \Omega_1$ in the particular case $\gamma = \Gamma$. The phase shift $\phi_{nl}(\delta)$ has the form of a dispersion curve, while the infidelity $1 - F$ mimics an absorption curve. The figure shows that while large nonlinear phase shifts are possible for nearly resonant pulses, the fidelity is unacceptably low in these cases, a conclusion similar to those drawn in [7,9]. A large contribution to this fidelity degradation is the entanglement generated between the position (or momentum) coordinates of the horizontally and vertically polarized photons. This entanglement reflects antibunching in the two-photon output wavefunction, and is characterized by a sub-unity purity $P \equiv {\rm tr}\,\rho_H^2$ of the horizontal photon's output density matrix $\rho_H \equiv {\rm tr}_V\, \mathcal{E}(|HV\rangle\langle HV|)$ (Fig. 3(b)).

Small Phase Shifts. While large phase shifts are accompanied by large errors, it is possible to achieve small phase shifts with a much smaller error: $|\varepsilon|^2 \ll \phi_{nl} \ll 1$. When the phase shift and error are small, it is convenient to write the two-photon amplitude in terms of a small complex parameter $\zeta$, so that to lowest order in $\zeta$ the nonlinear phase shift is $\phi_{nl} = {\rm Re}[\zeta]$ and the error probability $1 - F$ is $|\varepsilon|^2 = 2\,{\rm Im}[\zeta]$. Particularly simple expressions for the phase shift and error are obtained when the pulse bandwidth is much less than the atomic line-width, $\gamma \ll \Gamma$. Because the photon wavefunction $\psi(z)$ has length $\sim \gamma^{-1}$ and is normalized to unity, this can be considered a sort of weak-excitation regime. In this case, $\zeta$ is readily calculated from the Fourier-space propagators, Eq. (10) and Eq. (12). One finds that, to lowest nonvanishing order in $\gamma/\Gamma$, the dependences of $\phi_{nl}$ and $|\varepsilon|^2$ on the detuning are again those of dispersion and absorption curves. From Eq. (25) it is clear that when $\gamma \ll \Gamma \ll \delta$ the nonlinear phase shift, while very small, is large in comparison to the error probability: $|\varepsilon|^2 \ll \phi_{nl}$. Actually, the relation $|\varepsilon|^2 \ll \phi_{nl}$ can be achieved without requiring that $\gamma \ll \Gamma$: it is enough for the photons to be far-detuned. When $\Gamma, \gamma \ll \delta$, the same conclusion holds to lowest order in $\max[\gamma, \Gamma]/\delta$: the nonlinear phase shift, though small, is much larger than the infidelity $|\varepsilon|^2$. In this sense, our primitive conditional phase gate can be considered high-fidelity for small phase shifts.
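The dispersion/absorption structure just described is easy to check numerically. A sketch, assuming an illustrative single-pole form $\zeta(\delta) = A/(\delta - i\Gamma/2)$ (our assumption; the paper's far-detuned result scales as $\phi_{nl} \propto \delta^{-3}$, but the qualitative suppression of error relative to phase is the same):

```python
import numpy as np

A, Gamma = 1.0, 1.0
delta = np.logspace(0.5, 3.0, 200) * Gamma   # far-detuned scan, delta >> Gamma
zeta = A / (delta - 1j * Gamma / 2)          # assumed single-pole form

phi_nl = zeta.real            # dispersion-like: ~ A / delta
infidelity = 2 * zeta.imag    # absorption-like: ~ A * Gamma / delta**2

# |eps|^2 / phi_nl ~ Gamma / delta -> 0: the error falls off faster than
# the phase shift, so far-detuned operation is high-fidelity.
print((infidelity / phi_nl)[[0, -1]])
```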
IV. CASCADING SMALL PHASE SHIFTS

The error $|\varepsilon|^2$ in the primitive conditional phase gate discussed above is the probability that the gate causes the two-photon state $|HV\rangle$ to leak out of the principal mode subspace. Because this error probability can be made much smaller than the phase shift in the far-detuned regime, the possibility arises of cascading $N = \pi/\phi_{nl}$ primitive gates to produce a high-fidelity conditional $\pi$-phase gate. When the primitive gate $S_l^\dagger S_{nl}$ is cascaded $N$ times, two sorts of errors can occur. With each application, the probability of photons leaking out of the principal mode subspace increases; for small $|\varepsilon|^2$, these leakage errors grow as $N|\varepsilon|^2 = \pi|\varepsilon|^2/\phi_{nl} \ll 1$, and are not terribly problematic. However, amplitude that leaked from the principal mode subspace in earlier applications of $S_l^\dagger S_{nl}$ can return in later applications with corrupted phase; these coherent feedback errors can grow as $N^2|\varepsilon|^2$, which is not small. Alternatively, this difficulty can be seen by noting that the primitive gate cascaded $N$ times does not correspond to the quantum operation $\mathcal{E}_{\rm prim}$ cascaded $N$ times. This is because the state of the auxiliary modes changes with each application of $S_l^\dagger S_{nl}$.

A. A Cascadable Primitive Gate

We propose to eliminate coherent feedback errors by measuring the number of photons present in the auxiliary modes after each application of the primitive gate. For the sake of the following analysis, the result of this measurement need not be considered, only that with probability at least $1 - |\varepsilon|^2$ it projects the quantum state back onto the principal mode subspace. For this reason, we call this measurement process principal mode projection (PMP). Performing PMP after each application of the primitive gate is a sort of Zeno-effect error correction that prevents amplitude from leaking out of the principal mode subspace too quickly. Crucially, the measurement used to implement PMP must be done in such a way that it is insensitive to the number of photons in the principal modes. If the principal mode function $\psi(z)$ is chosen to be the one-sided exponential used above (Eq. (23)), then such a measurement can, in fact, be performed using empty optical cavities, the pulse inverter $I$ introduced in Sec. III A, and irises. The scheme, illustrated in Fig. 4, exploits the fact that (ignoring free-space evolution) a cavity with resonant wavenumber $k_0$ and decay rate $\gamma$ preferentially emits photons with mode function $e^{ik_0 z}\Psi(z)$ and preferentially absorbs from the inverted mode $e^{ik_0 z}\Psi(-z)$. This selectivity is used to load all principal-mode photons into optical cavities. Once this is done, iris 1 is closed, preventing non-principal-mode photons from entering the cavity, while iris 2 is opened, allowing the cavity photons to be emitted back into the principal modes. This setup could be modified to record the result of the PMP measurement, allowing for heralded operation and postselection. However, the point of the present analysis is to provide a design for a deterministic gate, and thus our process employs no post-selection. Figure 5 shows the entire process: interaction with the ∨-system ($S_{nl}$), followed by removal of linear evolution ($S_l^\dagger$), followed by PMP. (Note that the second and third pulse inverters cancel, and thus need not actually be implemented.) This gate, which we call the cascadable primitive gate, is most naturally represented by a non-trace-preserving quantum operation [25] whose trace, ${\rm tr}[\mathcal{E}_{\rm c\text{-}prim}(\rho)]$, is the probability of success, i.e., that the output state has been collapsed into the principal mode subspace.

B. Fidelity of the Cascaded Gate

Because of the PMP, the cascadable primitive gate can be cascaded $N = \pi/\phi_{nl}$ times to produce a high-fidelity conditional $\pi$-phase gate. Without any post-selection, the fidelity of this cascaded gate with the ideal conditional $\pi$-phase gate is the probability that PMP success occurs $N$ times, evaluated to lowest nonvanishing order in $\max[\gamma, \Gamma]/\delta$. In the far-detuned regime, $\gamma, \Gamma \ll \delta$, this fidelity can become quite large: cascading $\mathcal{E}_{\rm c\text{-}prim}$ can yield a high-fidelity conditional $\pi$-phase gate. Unfortunately, because the ∨-system's nonlinearity is so weak, an incredible number of cascades are required to produce a high-fidelity conditional $\pi$-phase gate. For fixed $N$, Eq. (28) can be rewritten, after optimizing the ratio $\gamma/\Gamma$, in terms of $N$ alone (Eq. (29)). To achieve a fidelity greater than 95%, over $10^6$ cascades are required. The origin of this unfortunate scaling is the weak cross-phase shift, $\phi_{nl} \propto \delta^{-3}$. If instead the phase shift and error were $\phi_{nl} \propto \delta^{-m}$ and $|\varepsilon|^2 \propto \delta^{-n}$, the fidelity of the cascaded gate would be $F \sim 1 - N^{1-n/m}$.
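The closing scaling claim follows from a two-line estimate; a sketch (our derivation, with $a$ and $b$ standing for the prefactors the text leaves unspecified):

```latex
% Sketch: a, b are the unspecified prefactors in phi_nl and |eps|^2.
\begin{align}
\phi_{nl} = a\,\delta^{-m} \;\Rightarrow\;
N = \frac{\pi}{\phi_{nl}} = \frac{\pi}{a}\,\delta^{m} \;\Rightarrow\;
\delta = \left(\frac{aN}{\pi}\right)^{1/m}, \nonumber\\
F = \left(1 - |\varepsilon|^2\right)^{N} \approx 1 - N|\varepsilon|^2
  = 1 - b\,N\,\delta^{-n}
  = 1 - b\left(\frac{\pi}{a}\right)^{n/m} N^{\,1-n/m}.
\end{align}
```

With the ∨-system's $m = 3$ and any $n$ not much larger than $m$, the exponent $1 - n/m$ is close to zero, which is why $N$ must be enormous before the infidelity budget $N|\varepsilon|^2$ becomes small.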
Our cascadable primitive gate $\mathcal{E}_{\rm c\text{-}prim}$ operates in the far-detuned regime and incorporates two error-correcting steps: the removal of linear evolution ($S_l^\dagger$) and the PMP. Principal mode projection is absolutely essential in making this gate cascadable. How important is removing the linear evolution? For the mode function used above, the linear errors $\{|\varepsilon_H|^2, |\varepsilon_V|^2\}$ must be removed. Because the Fourier-space mode function $\psi(k) = i\gamma^{1/2}(k - k_0 + i\gamma/2)^{-1}$ falls off only as $k^{-1}$, linear errors are of the same order of magnitude as the nonlinear phase shift. However, for more well-behaved Fourier-space mode functions (e.g., Gaussians $\psi(k) \sim \exp[-(k - k_0)^2/4\gamma^2]$ and even Lorentzians $\psi(k) \sim [(k - k_0)^2 + \gamma^2]^{-1}$), linear errors are of the same order as nonlinear errors. If PMPs could be constructed for these modes, the removal of linear evolution would not be essential.

V. CONCLUSIONS

Treating light as a multimode quantum field, we have described conditional phase gates in which photonic qubits interact with a three-level ∨-system. Although we have used the language of atomic and optical systems in our analysis, other implementations are possible. In the microwave, for example, the one-dimensional fields of transmission-line waveguides have been coupled to artificial atoms [26,27]. In the regime of large nonlinear phase shifts, our primitive (non-cascaded) gate has unacceptably low fidelity, as has been found for other gates relying on quantum cross-phase modulation [7-9]. We attribute much of this infidelity to undesirable entanglement generated by the local character of the nonlinear interaction between the horizontally and vertically polarized fields. In contrast, the primitive gate can produce a small nonlinear phase shift with very high fidelity ($1 - F \ll \phi_{nl}$) by operating in the far-detuned regime. However, one cannot straightforwardly cascade this high-fidelity, small conditional phase shift because of coherent feedback errors that grow as $N^2$. We have shown that it is, in principle, possible to overcome the cascadability problem by making a projective measurement of the bath modes' photon number after each small conditional phase gate. With high probability, this measurement projects the field state back onto the information-carrying principal modes. This step, principal mode projection, uses the quantum Zeno effect to prevent coherent feedback errors from occurring, making a cascadable primitive conditional phase gate. We suggest that principal mode projection could be a helpful subroutine in the future of photonic quantum information processing. While the ∨-system's weak cross-phase shift may make cascading our gate impractical (Eq. (29)), PMP together with stronger nonlinearities, e.g., the giant Kerr effect, could potentially realize a conditional $\pi$-phase gate whose fidelity scales more favorably with $N$. Other interesting future directions include the possibility of considering alternative PMP constructions and analyzing the usefulness of PMP in overcoming XPM noise in optical fiber [7].
A comparative study of adverse reactions of taxol and non-taxol based chemotherapy regimens in breast cancer

Breast cancer is the most common cancer in women worldwide, with nearly 1.7 million new cases diagnosed (second most common cancer overall). 1 With rising incidence and awareness, breast cancer is the commonest cancer in urban Indian females, and the second commonest in rural Indian women. 2 Both the morbidity and mortality are high in India, with an estimated 70,218 deaths. 1 Such a high mortality rate can be attributed to the lack of adequate knowledge about breast cancer at the grassroots level, leading to delays in approaching medical services, and to the lack of sufficient medical facilities at all levels. This study was concerned with the adverse reactions associated with neo-adjuvant and adjuvant chemotherapy given to breast cancer patients. The drugs available include paclitaxel, docetaxel, adriamycin, cyclophosphamide, and 5-fluorouracil, each having its own profile of adverse reactions. The use of combination chemotherapy has been associated with increased response rates as compared with a single agent. 3 The common drug combinations adopted in our government setup were paclitaxel, adriamycin, cyclophosphamide (PAC) and 5-fluorouracil, adriamycin, cyclophosphamide (FAC). A paucity of data exists regarding the adverse reactions of these drug regimens as adopted at Tirunelveli Medical College Hospital (TVMCH). This study was therefore undertaken to compare and analyse the adverse reactions of taxol and non-taxol based chemotherapy regimens that were commonly employed in the treatment of breast cancer patients attending TVMCH.

METHODS

The centre of this study was the Oncology Department, Tirunelveli Medical College Hospital. The design was a prospective observational study. The period of this study was from August 2014 to October 2014.

Inclusion criteria
- Female subjects aged above 18 years
- Histologically and cytologically confirmed invasive mammary carcinoma
- Patients with or without metastasis

Exclusion criteria
- Male patients
- Patients not willing to sign informed consent
- Concurrent radiotherapy or chemotherapy for any other tumour
- Any active infections, HIV, hepatitis B or C
- Pregnant and lactating women

All patients diagnosed with carcinoma breast during the study period of 60 days (August 2nd to October 2nd, 2014), satisfying the selection criteria and scheduled for or receiving either the PAC or FAC regimen, were enrolled in this study. The choice of the regimen for a patient was determined by:
- the total blood count of the patient at the time of chemotherapy
- the hemoglobin level of the patient
- the age of the patient
- the nature and progression of the disease

A complete physical examination, clinical assessment, and all baseline investigations were done before commencing the study, between two cycles of chemotherapy, and at study termination. The samples were sent to the central diagnostic laboratory, TVMCH. Mitigation of avoidable adverse reactions of these regimens was carried out using premedication drugs like dexamethasone, ondansetron, ranitidine and antihistamines, in varying doses for each patient depending upon the regimen adopted. Paclitaxel was given at a dose of 175-325 mg/m2, depending upon the patient, as an IV infusion along with normal saline at a rate of 15 drops per minute, lasting about 3 hours. 4 The premedications given were dexamethasone, ondansetron and ranitidine. 5-Fluorouracil was given at a dose of 500 mg/m2 as direct IV over a period of 10 minutes; the dose varied slightly with the patient. 4 The premedications given were dexamethasone, ondansetron and ranitidine. Cyclophosphamide was given at a dose of 400-800 mg direct IV over 10 minutes. Adriamycin was given at a dosage of 50-80 mg as slow IV with a simultaneous normal saline drip. 4
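The mg/m2 doses above are scaled by body surface area (BSA). A minimal sketch of that calculation; the paper does not state which BSA formula was used, so the Mosteller formula and the patient dimensions below are assumptions for illustration:

```python
import math

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    """Body surface area (m^2) by the Mosteller formula."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def dose_mg(dose_per_m2: float, height_cm: float, weight_kg: float) -> float:
    return dose_per_m2 * bsa_mosteller(height_cm, weight_kg)

# Paclitaxel at 175 mg/m2 for a hypothetical 160 cm, 58 kg patient
print(round(dose_mg(175.0, 160.0, 58.0)))   # -> 281 mg
```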
Adverse reactions were noted throughout the study and classified as mild, moderate and severe according to the WHO scale of adverse reactions. The adverse reactions were summarized using descriptive statistics.

RESULTS

Among 200 female patients admitted to the oncology ward during the study period of 60 days (August 2nd to October 2nd, 2014), 30 were diagnosed with breast cancer, of whom 16 (53%) were on the FAC regimen and 14 (47%) on the PAC regimen. Adverse reactions were reported in all patients. These were analysed using the chi-square test, and p values were calculated (Table 1). The most common adverse reactions reported with the PAC regimen were alopecia followed by neutropenia, and with the FAC regimen, alopecia followed by nausea and vomiting.

DISCUSSION

In this study, adverse reactions, both hematological and extra-hematological, were reported with both the FAC and PAC regimens, consistent with previous studies. The commonly reported adverse reactions in patients undergoing the PAC regimen were hematological. 5 The incidence of neutropenia in these patients was 64.28%, while that of anemia was 57%. Eosinophilia had an incidence of 42%, indicating allergic reactions despite corticosteroid administration in the PAC regimen. The prominent extra-hematological adverse reactions reported with the PAC regimen were alopecia (92.85%) and melanonychia (57%), which were not significantly different from the FAC regimen. The common adverse reactions associated with the FAC regimen were alopecia followed by nausea and vomiting. 6 Infusion-related adverse reactions like thrombophlebitis were more frequent with the PAC regimen (21.42%). The incidence of bone marrow depression, reflected in the form of neutropenia, anemia and thrombocytopenia, was comparatively higher with PAC-based treatment than with the FAC regimen, though the difference was not statistically significant. 7 About three patients on the PAC regimen (21.4%) required administration of filgrastim to increase their total count, whereas no such drugs were required for the patients on FAC therapy. About 75% of patients on the FAC regimen reported nausea and vomiting despite antiemetic use. Extra-hematological reactions like nausea and vomiting were more frequent with the FAC regimen, which favours PAC therapy in this respect, with a significant p value of 0.030287. Though adverse reactions were reported with both regimens, those associated with FAC were categorized as mild and those with PAC as mild to moderate according to the WHO scale of adverse reactions. No severe adverse reactions warranting discontinuation of treatment were reported. The limitation of this study was its small sample size, because of which a statistically significant difference between the overall adverse reaction profiles of the PAC and FAC regimens could not be demonstrated.

CONCLUSION

In conclusion, no severe adverse reactions were reported with either regimen. Though the difference was not statistically significant, the FAC regimen had a favourable adverse reaction profile when compared with the PAC regimen, independent of the potency and efficacy of these regimens in the treatment of patients with or without metastatic breast cancer.
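A sketch of the kind of 2x2 chi-square comparison reported above. The FAC nausea/vomiting count (12 of 16, i.e. 75%) follows from the text, but the PAC count is hypothetical because the paper does not report it, so the resulting p value is illustrative only, not a reproduction of the published 0.030287:

```python
from scipy.stats import chi2_contingency

#                with N/V, without N/V
table = [[12, 4],   # FAC, n = 16 (75%, from the text)
         [5, 9]]    # PAC, n = 14 (hypothetical split)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, p = {p:.4f}, dof = {dof}")
```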
Rapid detection of internalizing diagnosis in young children enabled by wearable sensors and machine learning

There is a critical need for fast, inexpensive, objective, and accurate screening tools for childhood psychopathology. Perhaps most compelling is the case of internalizing disorders, like anxiety and depression, where unobservable symptoms cause children to go unassessed, suffering in silence because they never exhibit the disruptive behaviors that would lead to a referral for diagnostic assessment. If left untreated, these disorders are associated with long-term negative outcomes including substance abuse and increased risk for suicide. This paper presents a new approach for identifying children with internalizing disorders using an instrumented 90-second mood induction task. Participant motion during the task is monitored using a commercially available wearable sensor. We show that machine learning can be used to differentiate children with an internalizing diagnosis from controls with 81% accuracy (67% sensitivity, 88% specificity). We provide a detailed description of the modeling methodology used to arrive at these results and further explore the predictive ability of each temporal phase of the mood induction task. Kinematical measures most discriminative of internalizing diagnosis are analyzed in detail, showing that affected children exhibit significantly more avoidance of ambiguous threat. Performance of the proposed approach is compared to clinical thresholds on parent-reported child symptoms, which differentiate children with an internalizing diagnosis from controls with slightly lower accuracy (.68-.75 vs. .81), slightly higher specificity (.88-1.00 vs. .88), and lower sensitivity (.00-.42 vs. .67) than the proposed, instrumented method. These results point toward the future use of this approach for screening children for internalizing disorders so that interventions can be deployed when they have the highest chance for long-term success.

Thanks to greater neuroplasticity, interventions can be very effective in this population if disorders are identified early in development [19]. However, the current healthcare referral process usually involves parents reporting problem behaviors to their pediatrician; if these are functionally impairing, the child is then referred to a child psychologist or psychiatrist for a diagnostic assessment. Children with internalizing disorders, where symptoms are inherently inward facing, are less likely than those with externalizing disorders to be identified by parents or teachers as needing professional assessment ([20]; for review see [21]), thus preventing or delaying their access to early intervention. Children under 6 have the highest rate of unmet needs [22]. For example, as few as 3% of 4-year-olds with a clinical diagnosis receive the necessary professional mental health intervention [23]. This points to the need for standardized screening tools for internalizing disorders in young children. Even if referred, current diagnostic assessments have been shown to capture only the most severely impaired preschoolers, but miss a large number of children who may go on to develop additional clinical impairments [24,25]. Providers try to improve these assessments by considering multi-informant reports from children, parents, and teachers, but these also have limitations.
For example, children under the age of eight are unreliable self-reporters [26-28], and parental reports of child problems are often inaccurate [29-31], as the unobservable symptoms characteristic of internalizing disorders (e.g., thoughts and emotions) are difficult to identify and thus go underreported [31]. Parents who have an internalizing disorder themselves are known to over-report unobservable symptoms [32], increasing the complexity of this problem. Thus, there is a clear and unmet need for objective markers of internalizing disorders that can be incorporated into new screening tools for all children.

Observational methods for assessing psychopathology 'press' for specific behaviors and affect [33] and have high research and clinical utility [34]. One example, known as a mood induction task, engages a child in a short laboratory-based activity meant to induce expected negative or positive emotions. To provide objective markers of psychopathology, researchers often utilize a behavioral coding technique on video recordings of the task, where at least two researchers watch the video recordings and assign scores based on child verbalizations or facial and body movements (e.g., see [35]). Behavioral coding has been shown to identify valid risk markers for childhood psychopathology using a variety of mood induction activities [36,37]. However, it has significant drawbacks that limit clinical utility, including the need for extensive training in a standardized coding manual and the hours required to watch and score video recordings of the task, while also consensus scoring a percentage of participants (often one out of five) to ensure reliability [38]. While it is clear that observational methods can provide objective markers of psychopathology, the complexity and resources required for behavioral coding prevent its use as a screening tool for childhood internalizing disorders.

New advances in wearable sensors present the opportunity to track child movement without the need for extensive training or time to watch and score task videos. Our previous work has described the use of a wearable inertial measurement unit (IMU), composed of a three-axis accelerometer and a three-axis angular rate gyroscope, for tracking child motion during a standardized fear induction task [39-42]. Kinematical measures extracted from these data were associated with other known measures of risk for internalizing disorders and confirmed expected temporal characteristics of the task in a small (N = 18) sample of children [40]. In a larger sample (N = 62), IMU data, but not behavioral codes, were associated with parent-reported child symptoms and exhibited statistically significant differences between children with and without an internalizing diagnosis [41]. This instrumented mood induction task provides an objective measure of child motion without the limitations of behavioral coding and, when taken with these preliminary results, has the potential to be used as a screening tool for childhood internalizing disorders. Advancing the use of this instrumented mood induction task as a tool for identifying children with an internalizing disorder requires establishing a model of the complex relationship between kinematical measures extracted from wearable sensor data and diagnosis.
A data-driven approach, like machine learning, is ideally suited for this task, and has been leveraged for this use in a variety of conditions including, for example, Multiple Sclerosis [43-45], Parkinson's Disease [46,47], and Atrial Fibrillation [48]. In this case, the wearable sensor time series captured from each child during the mood induction task provides a complete, albeit high-dimensional (i.e., an IMU sampling at 100 Hz for a 20-second task yields 12,000 data points across its six channels), picture of their motion. However, by computing a smaller number of features (e.g., mean, kurtosis) that each explain a different pattern inherent to the data, the dimensionality of the high-dimensional time series can be reduced. This process of defining the set of features that is able to capture important aspects of the raw data is known as feature engineering. Machine learning then consists of training a statistical model to recognize the relationship between these objective measures of a child's motion during the task and their diagnosis. This approach allows for the realization of much more complex relationships than would be possible from theory-based modeling alone. These efforts form one facet of the burgeoning field of digital medicine [49,50], but notably the use of these techniques for improving childhood mental health is just beginning, with efforts focusing primarily on improving access to care through mobile delivery methods [51]. Thus, the use of machine learning and wearable sensors for advancing the state of childhood mental health screening represents a novel contribution to the field of digital medicine.

To further investigate the potential of this instrumented mood induction task as a screening tool for childhood psychopathology, we explore the use of machine learning to develop statistical models for identifying children who have an internalizing disorder. Specifically, this paper builds upon two of our recent conference papers [39,42] by presenting a detailed description of the modeling methodology and performance, an analysis of the kinematical measures most discriminative of internalizing diagnostic status, and a comparison to the performance of models trained on parent-reported child symptoms.

Participants

Studies had approval from the University of Michigan Institutional Review Board (HUM00091788; HUM00033838). Participants included 63 children (57% female) and their primary caregivers (95.2% mothers). Participants were recruited from either an ongoing observational study (Bonding Between Mothers and Children, PI: Maria Muzik; n = 14) or from flyers posted in the community (n = 14) and psychiatry clinics (n = 35) to obtain a sample with a wide range of symptom presentations. Eligible participants were children between the ages of 3 and 8 who spoke fluent English and whose caregivers were 18 years and older. Exclusion criteria were a suspected or diagnosed developmental disorder (e.g., autism), having a serious medical condition, or taking medications that affect the central nervous system. The resulting sample of children was aged between 3 and 7 years (M = 5.25, SD = 1.10), was 65% White non-Latinx, and 82.5% lived in two-parent households. Twenty participants (32%) had an annual household income greater than $100,000. Multimodal assessments, including diagnostic interviews, were conducted for 62 of the children between August 2014 and August 2015.
Based on these multimodal assessments and consensus coding, 21 participants were identified as having an internalizing diagnosis (current (n = 17), past (n = 4)) according to the DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, 4th Edition). Diagnostic details are provided in Table 1.

Procedure

Child and caregiver were brought into the university-based laboratory and provided written consent to complete a battery of tasks. Caregivers completed self- and parent-report questionnaires and a diagnostic interview to assess for child psychiatric diagnoses while children underwent a series of behavioral tasks in an adjacent room. Behavioral tasks were designed to elicit fear responses and positive affect. Participants were compensated for their time. Herein, we consider a subset of data from the larger study by examining participant response to a single behavioral task designed to elicit fear (the 'Snake Task'), as well as the diagnostic interview and questionnaires used to assess internalizing symptoms and diagnoses. The Snake Task has been shown to induce anxiety and fear in young children [52,53]. This task is standardized, and all research assistants were trained to carry out the task according to protocol. The total task duration was approximately 90 seconds, and task behaviors were conceptually segmented into three temporal phases [54]:
1) Potential Threat: The child was led into a novel, dimly lit room, unsure of what was inside, while the administrator gave scripted statements to build anticipation such as "I have something in here to show you" and "Let's be quiet so it doesn't wake up". The administrator led the child slowly toward the back of the room, where a terrarium was covered with a blanket, gesturing for them to follow until they paused within 1 foot of the terrarium.
2) Startle: The child was startled by the administrator rapidly uncovering the terrarium and bringing the fake snake from inside to the child's eye level several inches from their face.
3) Response Modulation: The child was encouraged to touch the snake if they wanted, to ensure it was fake, and was reassured verbally (e.g., "It's just a silly toy snake") as needed, remaining with the snake until the administrator gestured for them to leave the room and end the task.
Following the task, children transitioned to free play with the task administrator to regulate and debrief about their experience.

Questionnaires. The Child Behavior Checklist (CBCL) is a parent-completed questionnaire designed to assess child problem behaviors [55]. The scale consists of 120 items related to behavior problems across multiple domains. Items are scored on a three-point scale ranging from "not true" to "often true" of the child. Responses result in global T scores for externalizing, internalizing, and total problems, as well as a number of empirically based syndrome scales and disorder-based scales. Only scales available in both versions (ages 1.5-5 and 6-18) were used in subsequent analyses. The CBCL has well-established validity and reliability (see [56]). Subject demographic information was collected using a questionnaire that included questions regarding child race, gender, and family income.

Clinical interview. Trained clinical psychology doctoral students, or postdoctoral fellows, conducted a single structured clinical interview with each child's caregiver.
The current study used a version of the Schedule for Affective Disorders and Schizophrenia for School-Age Children Present and Lifetime Version (K-SADS-PL) modified for use with preschool-aged children [57]. In this diagnostic interview, the clinician spent up to two hours with the caregiver assessing symptoms of past and current child psychiatric disorders. Interviewers received monthly (or more frequent) supervision by a licensed psychologist and psychiatrist, wherein all cases were reviewed by all clinicians and the supervisor. Final diagnoses were derived via clinical consensus using best-estimate procedures [58] to integrate a holistic picture based on child and parent report, family history, and other self-report symptom checklists. It is worth noting that team-based assessment and consensus diagnosis are most often conducted only in research contexts; the resulting diagnoses can be considered a true gold standard, but this practice is not representative of the diagnostic procedures used in the majority of clinical contexts.

Wearable sensor signal processing and feature extraction. During the behavioral battery, child motion was tracked using a belt-worn IMU (3-Space Sensor, YEI Technology, Portsmouth, OH, USA) secured around the waist at approximately the location of the body center of mass. Acceleration and angular velocity data were sampled by the device at approximately 300 Hz, down-sampled to 100 Hz, and low-pass filtered using a fourth-order Butterworth IIR filter with a cutoff frequency of 20 Hz in software prior to use. These data were fused to determine device orientation as a function of time using the complementary filtering approach described in [40,59]. Device orientation was used to resolve raw IMU measurements of acceleration and angular velocity in a world-fixed reference frame that has one axis directed vertically upwards. These data were further decomposed into vertical acceleration and angular velocity (av and ωv, respectively), and the vector magnitudes of horizontal acceleration and angular velocity (ah and ωh, respectively). Orientation estimates were also used to compute the tilt (α) and yaw (γ) angles of the participant as a function of time, yielding six time series (ah, av, ωh, ωv, α, γ) for further analysis. The interested reader may refer to [40] for a detailed description of this approach. Time series were segmented into the three conceptual phases [54]: Potential Threat (20 seconds, from 23 to 3 seconds prior to the moment of startle), Startle (6 seconds, from 3 seconds prior to 3 seconds post the moment of startle), and Response Modulation (20 seconds, from 3 seconds to 23 seconds post the moment of startle), and signal features were extracted from each. Signal features included the mean, root mean square (RMS), skew, kurtosis, range, maximum, minimum, standard deviation, peak-to-RMS amplitude, signal power within specific frequency bands (i.e., 0-0.5 Hz, 0.5-1.5 Hz, 1.5-5 Hz, 5-10 Hz, 10-15 Hz, 15-20 Hz, and all frequencies greater than 20 Hz), and the locations and heights of peaks in the power spectrum and autocorrelation of the signal. This yielded a total of 29 features from each of the six time series, or 174 total features, from each phase of the task. Signal processing and feature extraction were performed in MATLAB (Mathworks, Natick, MA, USA), using source code available from [60].
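A minimal sketch of the filtering and per-phase feature extraction just described (the study used MATLAB; this Python translation keeps the same operations). The band edges follow the text; the use of zero-phase filtfilt is an assumption, the helper names are ours, and the spectral/autocorrelation peak-location features are omitted for brevity:

```python
import numpy as np
from scipy.signal import butter, filtfilt, periodogram
from scipy.stats import kurtosis, skew

FS = 100.0  # Hz, sampling rate after down-sampling
BANDS = [(0.0, 0.5), (0.5, 1.5), (1.5, 5.0), (5.0, 10.0),
         (10.0, 15.0), (15.0, 20.0), (20.0, FS / 2)]

def preprocess(x):
    # 4th-order low-pass Butterworth, 20 Hz cutoff (zero-phase assumed)
    b, a = butter(4, 20.0 / (FS / 2), btype="low")
    return filtfilt(b, a, x)

def features(x):
    f, pxx = periodogram(x, fs=FS)
    band_power = [pxx[(f >= lo) & (f < hi)].sum() for lo, hi in BANDS]
    rms = np.sqrt(np.mean(x ** 2))
    return np.array([np.mean(x), rms, skew(x), kurtosis(x), np.ptp(x),
                     np.max(x), np.min(x), np.std(x),
                     np.max(np.abs(x)) / rms,   # peak-to-RMS amplitude
                     *band_power])

# e.g., 20 s of one channel (random numbers standing in for real IMU data)
phase_signal = np.random.randn(2000)
print(features(preprocess(phase_signal)).shape)   # (16,)
```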
Statistical models for identifying internalizing diagnosis. A supervised learning approach was used to create binary classification models that relate features from the IMU-derived signals to internalizing diagnosis derived from the K-SADS-PL and clinical consensus. Models were created using features from each temporal phase of the task. Performance of the classifiers was established using leave-one-subject-out (LOSO) cross validation. In this approach, features from 61 of the 62 subjects were partitioned into a training dataset and converted to z-scores prior to performing Davies-Bouldin Index [61] based feature selection to yield the 10 zero-mean, unit-variance features that best discriminate between diagnostic groups. These features were used to train a logistic regression for predicting internalizing diagnosis. The same 10 features were extracted, converted to z-scores based on parameters (e.g., mean, variance) from the training set, and used as input to the model for predicting the diagnosis of the one remaining test subject. This process was repeated until the diagnosis of each subject had been predicted. Logistic regression was chosen herein to protect against overfitting given the relatively small (N = 62) sample, and because it requires minimal computational overhead for prediction, enabling future deployment on resource-constrained devices. We also examined the utility of the CBCL as a screening tool for internalizing diagnosis (according to the K-SADS-PL with clinical consensus) in this sample using previously established clinical cutoffs (T score ≥ 70) for manualized use [55] and a more conservative cutoff (T score ≥ 55) suggested for improving screening efficiency [62].

Model performance was assessed in several ways. First, we examined classification performance by reporting accuracy, sensitivity, and specificity with a score threshold for the logistic regression of 0.5. These metrics were computed following standard definitions [63]. Next, receiver operating characteristic (ROC) curves, which plot the true positive rate against the false positive rate for varying thresholds on the scores, were constructed for each classifier. The area under the ROC curve (AUC) was used to comment on the general discriminative ability of the classifiers [63]. Finally, a permutation test was conducted to examine these results in the context of results obtained by chance from this dataset [64]. To complete this test, we first approximated the distribution of possible error rates (error rate = number of incorrect predictions / total number of predictions = 1 - classification accuracy) for each classifier as a beta distribution parameterized by the number of incorrect predictions and the total number of observations, as indicated in [64,65], and randomly sampled 100 possible error rates from this distribution. Next, we repeated the model training process outlined previously for 100 random permutations of the diagnostic labels, computing the classification error rate for each. Finally, a paired-sample Mann-Whitney U-test was used to identify the temporal phases that yield classification models with error rates significantly different from those expected by chance from this dataset. For models that reported significantly better error rates than those expected by chance, we further examined the features used as input to these models.
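A sketch of the LOSO pipeline just described, in Python with scikit-learn. The paper's Davies-Bouldin-Index feature selection is swapped here for a univariate ANOVA F-test stand-in (SelectKBest), since the original criterion is not part of standard libraries; everything else follows the description above:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.preprocessing import StandardScaler

def loso_scores(X, y, k=10):
    """X: (n_subjects, n_features) phase features; y: 0/1 diagnosis labels.
    Returns one held-out logistic-regression score per subject."""
    scores = np.zeros(len(y), dtype=float)
    for train, test in LeaveOneOut().split(X):
        scaler = StandardScaler().fit(X[train])   # z-score from training fold only
        Xtr, Xte = scaler.transform(X[train]), scaler.transform(X[test])
        sel = SelectKBest(f_classif, k=k).fit(Xtr, y[train])   # stand-in selection
        clf = LogisticRegression(max_iter=1000).fit(sel.transform(Xtr), y[train])
        scores[test] = clf.predict_proba(sel.transform(Xte))[:, 1]
    return scores

# labels = (scores >= 0.5); lowering the threshold to 0.375 trades
# specificity for sensitivity, as discussed in the Results.
```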
Results

Performance of the logistic regression models trained for detecting children with internalizing diagnoses based on wearable sensor data from each temporal phase of the snake task is reported in Table 2. Metrics include accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). The model developed from data sampled during the Potential Threat phase outperforms models from the Startle and Response Modulation phases (Accuracy: .81 vs. .58 and .52; Sensitivity: .67 vs. .33 and .29; Specificity: .88 vs. .71 and .63; AUC: .85 vs. .59 and .48). This performance difference is further revealed in the ROC curves of Fig 1, where models trained on data from the Potential Threat, Startle, and Response Modulation phases are shown in blue, red, and yellow, respectively. As indicated in the ROC curves, changing the score threshold for the logistic regression can alter its specific performance metrics. In this case, changing the threshold from .5 to .375 for the Potential Threat model maintains the overall accuracy (.81), decreases specificity (.88 to .81), and importantly increases sensitivity (.67 to .81). This could be important when considering use of this method for screening children for internalizing disorders.

Table 2. Accuracy, sensitivity, specificity and area under the receiver operating characteristic curve (AUC) for models trained using data from each temporal phase of the mood induction task (Potential Threat, Startle, Response Modulation). https://doi.org/10.1371/journal.pone.0210267.t002

Fig 1. Receiver operating characteristic (ROC) curves for models trained to detect children with internalizing diagnoses. Curves for logistic regressions trained on data from the potential threat, startle, and response modulation phases of the snake task are indicated in blue, red, and yellow, respectively. The model trained on data from the potential threat phase performs better than the other models. https://doi.org/10.1371/journal.pone.0210267.g001

Results of the permutation test used for determining how the error rate of the classifiers trained on data from each phase compares to error rates expected by random chance from the dataset are reported in the boxplot of Fig 2. Specifically, the distribution of error rates for the logistic regressions from each phase (teal) and the corresponding error rates achieved by chance (gray) are reported. Statistically significant differences between the median error rates achieved by the classifiers and by chance were observed in the Potential Threat (significantly lower error rate than expected by chance, p < .01) and Response Modulation (significantly higher error rate than expected by chance, p < .01) phases. Significant differences are indicated by asterisks in Fig 2. The results of Fig 2 indicate that the classifier developed from data sampled during the Potential Threat phase is the only model that provides a statistically significant improvement in classification error rate over random chance. To this end, we further examine the 10 features used as input to this model.

Fig 2. Boxplots of error rates for models trained to detect children with internalizing diagnoses compared to those due to chance for each temporal phase of the snake task. Error rates due to random chance determined via permutation test are shown in gray, while those from the actual data are in teal. Statistically significant differences are noted with an asterisk. The model trained on data from the potential threat phase is the only one to outperform random chance. https://doi.org/10.1371/journal.pone.0210267.g002
Across the 62 iterations of the leave-one-subject-out cross-validation, the following ten features were selected a minimum of 58 times: the location of the 6th peak in the power spectrum of a_v (a_v,6pl), the range of ω_v (ω_v,range), the mean of γ (γ_mean), the RMS of γ (γ_RMS), the range of γ (γ_range), the maximum of γ (γ_max), the height of the peak at zero lag in the autocorrelation of γ (γ_0ah), the heights of the 1st and 6th peaks in the power spectrum of γ (γ_1ph and γ_6ph, respectively), and the signal power between 0.5-1.5 Hz in γ (γ_pb2). Nine of the ten features in this list are directly related to the yaw angle (γ) of the subject (ω_v is essentially yaw angular velocity), and we therefore report the γ time series from representative subjects with (gray) and without (teal) an internalizing diagnosis in Fig 3A. There is a significant divergence in the yaw angles of the two subjects beginning roughly half-way through the phase: the subject with a diagnosis ends the phase facing in the opposite direction of where they started (γ ≈ 180°), in contrast to the subject without a diagnosis, who ends the phase facing in roughly the same direction as when they started (γ < 60°). The differences noted in these representative subjects are consistent across the sample, as evidenced by boxplots of the ten features used as input to the classifier presented in Fig 3B. Subjects with a diagnosis (gray) have higher values of ω_v,range, γ_mean, γ_RMS, γ_range, and γ_max, all of which confirm the divergence noted in Fig 3A is consistent across the sample. Performance of the logistic regression models identifying internalizing diagnosis using elevated T scores (T score ≥ 70 and T score ≥ 55) for the internalizing broadband scale and two DSM-oriented scales (anxiety problems, depressive problems) of the CBCL is reported in Table 3. Metrics include accuracy, sensitivity, specificity, and AUC.

Discussion

There is a significant need for a rapid and objective method for screening young children with internalizing disorders. We propose the use of data from a single wearable sensor during a 90-second fear induction task and machine learning to fulfill this need. Herein, we take an initial step toward this goal by training classifiers for detecting early indications of internalizing diagnoses using data sampled from each of three conceptually-segmented [54] temporal phases of a mood induction task, establishing their performance, and discussing the implications of these results. We further examine the specific features identified as being especially indicative of an internalizing diagnosis and discuss the behaviors described by these features in the context of internalizing disorders. The proposed approach is the first step toward creating an objective method for screening children for internalizing diagnoses rapidly and at low cost. It is important to place these results in the context of previous work and existing diagnostic techniques. For example, these results compare favorably to our previous work, where k-nearest neighbor (k = 3) and logistic regression models were able to achieve 75% and 80% accuracy, respectively. The logistic regression employed herein achieves 81% accuracy, and we further report sensitivity (.67), specificity (.88), and ROC curves, quantities especially important when considering development of a tool used to screen for psychopathology.
Of note, this logistic regression can be optimized for screening by adjusting the score threshold to yield an increase in sensitivity (.67 to .81) at the expense of a slight decrease in specificity (.88 to .81). Moreover, these results help to advance the use of wearable sensors and machine learning in childhood digital mental health, a burgeoning field that promises to improve access to, and speed of, mental healthcare. According to guides for classifying the accuracy of a diagnostic screening test, in which AUCs under .60 are considered a fail, .60-.70 poor, .70-.80 moderate, .80-.90 good, and .90-1 excellent [66], the model built on IMU-derived features from the Potential Threat phase (AUC = .85) is considered good. Investigation into specific temporal phases of the mood induction task demonstrated that it was the Potential Threat phase that differentiated between children with and without internalizing diagnoses, as evidenced by the results of Table 2 and Fig 1. Children with internalizing disorders tended to turn away (γ ≈ 180°) from the ambiguous threat only when they were physically closest, in the last 10 seconds of the phase (see Fig 3A). This may suggest that across depressive, anxiety and stress-related disorders, there is a shared anticipatory threat response which manifests physically in young children. Previous literature speculates that this type of response may be due to attention avoidance (attending away from the threat, as shown in children with trauma and PTSD [67]) or emotional dysregulation (from attending to the threat in previous moments) [68]. Interestingly, the acute threat response during the Startle phase did not demonstrate clear differences by diagnostic status (e.g., see results of Figs 2 and 3 and Table 2). This could be due to heterogeneity of startle response across internalizing disorders, as previous research finds adults with moderate-severity internalizing disorders (i.e., specific phobia) had heightened startles, healthy controls had low startle responses, and those with severe internalizing disorders (i.e., GAD, PTSD) had blunted startle responses [69]. A similar phenomenon may exist in our data; however, larger sample sizes are needed to better assess this possibility. Alternatively, the significantly heightened avoidance motion (i.e., γ) during Potential Threat without a significantly heightened motion during Startle could suggest a physiological manifestation of "It wasn't as bad as I thought it was going to be," a cognitive shift often seen in anxious children after exposure therapy [70]. Regardless of other phases, ambiguous threat avoidance during potential threat contexts appears to unify internalizing disorders and differentiate them from controls (e.g., see results of Table 2 and Fig 3B). We compare psychometric properties of the questionnaire-based parent-reported CBCL and IMU-derived feature models on child internalizing diagnosis as determined via K-SADS-PL with clinical consensus.

Table 3. Accuracy, sensitivity, specificity, and area under the ROC curve (AUC) for logistic regression models on parent-reported internalizing problems over the 6 months leading up to participation in the study as measured by the Child Behavior Checklist. Also included are two subscales of total internalizing problems (Anxiety Problems, Depressive Problems) oriented to DSM-IV criteria [56]. https://doi.org/10.1371/journal.pone.0210267.t003
CBCL-derived models for both cutoffs (55 and 70) exhibited slightly lower classification accuracy (.68-.75 vs. .81), slightly higher specificity (.88-1.00 vs. .88), lower sensitivity (.00-.42 vs. .67), and slightly lower AUCs (.75-.79 vs. .85) compared to IMU-derived models during the Potential Threat phase. CBCL subscale psychometrics in our study are similar to those from much larger studies (e.g., see [66,71]). Notably, the varied sensitivity of the CBCL observed in our study parallels these previous studies, with some samples exhibiting sensitivities as low as .00-.38 [72] and some as high as .44-.86 [73]. Overall, CBCL internalizing psychometrics across studies suggest room for improvement in internalizing screening efficiency, especially as it was consistently worse than externalizing screening efficiency [71]. The IMU-based results presented herein yield a minimum 60% improvement in sensitivity over that observed from the CBCL, suggesting that this supplemental objective data may be especially helpful for increasing sensitivity. Overall, this paper describes a methodology requiring very limited computational resources (e.g., compute 10 features from 20 seconds of wearable sensor data, use them as input to a logistic regression), which points toward future deployment of this technique for identifying young children with internalizing disorders using resource-constrained but ubiquitous devices like mobile phones. This new approach reduces the time required for diagnostic screening while also establishing high sensitivity-which can help to reduce barriers and better alert families to the need for child mental health services. While these results can likely be improved and extended, and should be replicated, this is an important first step in connecting often overlooked children [20,21] to the help they need to both mitigate their current distress and prevent subsequent comorbid emotional disorders and additional negative sequelae [12,14,74].

Limitations

This study is not without limitations. Future research should replicate and investigate our claims in a larger study with subjects at varying levels of risk for developing an internalizing disorder. Additionally, a larger sample size would allow examination of internalizing disorders without the presence of comorbid externalizing disorders, and also of specific internalizing disorders, to explore whether one disorder type yields different motions than another. Future work may also explore alternative or additional device locations and more complex non-linear models for improving classification performance.

Conclusion

The results presented herein demonstrate that, when paired with machine learning, 20 seconds of wearable sensor data extracted from a fear induction task can be used to identify young children with internalizing disorders with a high level of accuracy, sensitivity, and specificity. These results point toward the future use of this approach for screening children for internalizing disorders.
The use of recovery time in timetables: rail passengers' preferences and valuation relative to travel time and delays

Recovery time in the rail industry is the additional time that is included in train timetables over and above the minimum journey time necessary, often with the explicit aim of improving punctuality. Recovery time is widely used in railways in a number of countries, but prior to this study there had been no investigation of the rail users' point of view. Perceived recovery time, such as being held outside stations and prolonged stops at stations, might have some premium valuation due to the frustration caused. If perceived recovery time in train timetables does carry a premium, then the benefits of improved punctuality achieved by it will be reduced. This paper is the first to investigate passengers' views and preferences on the use of recovery time. We summarise the findings of a large study and provide estimates of passengers' valuations of recovery time, both relative to in-vehicle time and late time, that can be used for economic appraisal purposes. Overall, we find most passengers support the use of recovery time, but the context is important. Only 13% of users disapprove of its use as a tool to reduce lateness. The estimated premia vary by demand characteristics and are significant in some contexts, although on average are of a small magnitude. The applicability of the estimates is demonstrated through the appraisal of an actual scheme in the UK. We observe that the introduction of more recovery time, along with the subsequent improvement in reliability, can lead to significant reductions in generalised journey time, even when recovery time carries a valuation premium. We must, however, sound a note of caution: there were higher than expected proportions of non-traders in the survey, which may have affected the results; future studies into the topic should look to minimise the proportion of non-traders. This study provides valuable and necessary first steps in this challenging topic.

Context

What is termed recovery time in the rail industry is the extra time or 'buffer' time that is included in train timetables over and above the minimum journey time necessary, often with the explicit aim of improving punctuality. In addition to improved punctuality, it can manifest in a number of ways in cases where there is not much delay requiring the train to 'call on' the recovery time:

• Longer than necessary stops at intermediate stations;
• Slower than line-speed running between stations, which might involve trains being held prior to arriving in a station;
• Arriving earlier than scheduled.

By way of example, recovery time in Great Britain can be significant on some routes, extending journey times for longer distance travellers by around 10% as recovery time is incurred through the course of a journey, whilst the insertion of 5 to 10 min extra time on long distance trains prior to the final destination can lead to very large proportionate increases in travel time for those using the service for shorter journeys. 1 Nor is recovery time limited to longer distance operators; for example, in 2004 South West Trains, serving suburban as well as longer distance routes from London Waterloo, undertook a major re-cast of its timetables with a central feature being additional recovery time to achieve a more robust timetable delivery.
It is important to note that recovery times are not immediately apparent to passengers, who only see the public timetable that includes them, even when they are 'following' their train live on a travel app. Only the train planners have information on the amount of recovery time. Recovery time is not unique to railways and is also present in bus and airline schedules, whilst motorists, cyclists and those walking can allow extra time to ensure a punctual arrival. In Great Britain, it has been included in railway timetables to improve punctuality, particularly as in recent years train operators and the infrastructure manager are liable to pay passengers and other parties monies to compensate for lost revenue due to delays. Reliability is also the most important driver of passenger satisfaction with the railways (Passenger Focus 2012). On the other hand, there is a push for timetables to deliver quicker journey times, and inserting recovery time would mean that the timetables are not optimal when there are no delays. There is a need to understand what the balance should be between the efficiency a quicker timetable delivers and the reliability a slower timetable that includes recovery time delivers. This is the subject of this paper.

Aims and structure

The impetus to this research was that several train operators in Great Britain felt that they might have inserted too much recovery time into timetables and were concerned that perceived recovery time, such as being held outside stations and prolonged stops at stations, might have some premium valuation due to the frustration caused. After all, waiting time might be spent in conditions that are the same as (or approximate to) in-vehicle time, yet the convention in transport planning practice worldwide is to value wait time at twice in-vehicle time (OECD/ITF 2014). If perceived recovery time in train timetables does have a premium valuation, then the benefits of improved reliability achieved by it will be reduced. The objective of this research was to understand rail passengers' views on this matter and estimate their valuations of changes in recovery time relative to in-vehicle time and reliability in the form of late time. This paper summarises the findings of a large study and provides estimates of passengers' valuations of recovery time, both relative to in-vehicle time and late time, that can be used for economic appraisal purposes. The fresh empirical evidence reported here provides an important contribution given that, as far as we were aware, there had been no prior research into passengers' preferences on recovery time. Previous research concerning recovery time has largely focussed on the supply side (e.g. UIC 2000; Schittenhelm 2011). Given the complex issues involved and little prior research to guide us, the study was conducted in three phases: (1) an initial online survey of 1006 respondents, (2) two focus groups aimed at providing in-depth insights and informing the main survey, and (3) an on-train survey, providing information on the perceptions of and attitudes towards recovery time alongside the SP exercise used to examine preferences around recovery time. "Background" section provides additional background on recovery time and briefly reviews the available literature on it. "Initial insights on passengers' perspectives on recovery time from surveys" section presents the key initial insights into passengers' perspectives of recovery time from the surveys and focus groups conducted in the study.
The main quantitative phase of the study was based upon a Stated Preference (SP) valuation exercise as part of the final survey: "The stated preference experiment" section sets out how this was designed and describes the data. "Analysis of stated preference data" reports the findings of the analysis of the SP data, followed by an illustration of the use of values in an appraisal in "Illustrative use of values in appraisal" section. Concluding remarks are contained in "Conclusions".

Recovery time in railway timetables

The published train timetables, upon which travellers make their decisions and plan their journeys, have always contained recovery time additional to the minimum necessary journey time, and there is a strong recognition of the role of recovery time in regulating travel time reliability (Parbo et al. 2016). Recovery time has traditionally been inserted into timetables for a number of reasons:

• Variability in traction performance. From the earliest days, some locomotives (even within the same batch) were more powerful or performed better;
• Variability in driver and signaller performance;
• Avoidance of conflicts at key junctions and points of congestion on the rail network;
• Organising train pathing, by providing the right sequence of trains and even, on occasions, improving connections by holding back a stopping service;
• Supporting a regular interval or clock-faced timetable, with Switzerland being a prime example;
• Service regulation and regularisation, on the grounds it is better to manage delays and out-of-course running before reaching a pinch-point;
• Dealing with variability in the volume of rail passengers and station dwell time;
• Allowing resilience for engineering-based speed restrictions.

The over-arching advice on how to calculate recovery time is published by the International Union of Railways (UIC 2000). However, it is ultimately down to individual railway administrations and operators to calculate guidance on their own recovery margins (Palmqvist et al. 2017) and hence these differ between and within countries, with different route densities, rolling stock and route lengths all exerting an influence. We might expect the significance of recovery time to vary across countries if only because what is deemed to be acceptable lateness, termed a delay threshold, itself varies across countries (Li et al. 2010). For example, delay thresholds are set at 3 min for long distance trains in the Netherlands, 5 min in Denmark and Switzerland, through to 10 min in the UK and 15 min in Italy (Schittenhelm 2011). Typically, recovery time margins expressed as a percentage of nominal running time will vary between 3 and 7% in Europe and 6 to 8% in North America (Pachl 2002), but are driven primarily by operational rather than commercial considerations. Recovery time inserted for commercial reasons, which is the focus of this paper, adds to these non-trivial amounts of recovery time.

Regulatory and commercial influences

In recent years, there have been other incentives to improve reliability and the use of recovery time. In part, there has been increased recognition of the commercial implications of train reliability. But another raft of incentives, particularly in Great Britain, stems from the regulatory framework. The 'Citizen's Charter' was a British political initiative launched by the government in 1991 with the aim of improving public services in the UK and making them more responsive to users.
As far as the railways were concerned, it introduced the concept of passengers being compensated for unreliable services, and in the first instance was restricted to season ticket holders. The post-1996 privatisation form of this is the Passenger's Charter, which is a part of each train operating company's franchise commitment. This sets out the conditions under which train operators pay compensation to passengers in the event of late arrivals. For example, on long-distance routes, if there is a delay on one leg of a return journey, passengers can expect reimbursement of ¼ of the value of the return ticket if they are 30 min late, ½ the value if an hour late, or all of it if 2 h late. More recently, urban operators such as c2c have been offering those paying with Smart cards very minor refunds (automatically paid electronically) starting if trains are only 3 min late. Such compensation procedures inevitably incentivise recovery time. Parallel to this, the privatised railway industry in Britain separated operations and infrastructure, and hence a regulatory system was introduced, termed 'Schedule 8', to incentivise good performance. It specifies compensation rates to train operators for service disruptions caused by the infrastructure provider or by other train operators and sets out rewards to the infrastructure provider for better than target performance. Network Rail (NR), as the infrastructure provider in Britain, has incurred Schedule 8 payments to operators of £138 million in 2012/2013, £194 million in 2013/2014, £109 million in 2014/2015 and £106 million in 2015/2016, 2 denoting that this is not a trivial issue. Schedule 8 regulations do impact on the construction of timetables, even if in a subliminal fashion. Train operators may consider reducing recovery time, in order to trigger a larger number and greater amount of Schedule 8 payments, especially if they feel that passengers would not stop using the railway in large enough numbers in response to reduced reliability. In contrast, Network Rail might be incentivised to seek increased recovery time to reduce its exposure to compensation payments and indeed to receive bonus payments. However, because excessive recovery time in timetables also causes problems on a capacity-constrained railway (if trains arrive early, there may be nowhere to put them), a balance between these factors is found between NR and operators, even if only iteratively over a number of timetables. Nevertheless, variations in Schedule 8 payments have caused significant budgeting problems for train operators, for instance on the East Coast Main Line.

Previous research

As far as we are aware, there have been no previous studies of whether and to what extent rail travellers perceive and value the dwell time consequences of recovery time to be different to in-vehicle time. What we think is the closest proxy for this in the literature, because it also involves travelling at less than the 'normal' speed, is the congestion multiplier for car travel and the 'slowed-down' and 'dwell time' multiplier for bus travel. There is now a wealth of evidence regarding the congested travel time multiplier for motorists. Wardman and Ibáñez (2012) provide an extensive summary of international evidence which suggests that a central value is 1.5. A more recent review (Wardman et al. 2016), covering a very large amount of Europe-wide evidence, also returned a multiplier of around 1.5.
The only study of which we are aware that examined the valuation of slowed-down time and dwell time for bus is the third UK national value of time study (ARUP et al. 2015). It reports a multiplier of around 1.4 for slowed-down time for commuters and leisure travellers, with leisure travellers having a multiplier of 1.6 for dwell time at the bus stop. The purpose of recovery time is to improve travel time reliability, and valuations of the latter are a feature of the new evidence we here report. In contrast to values of rail recovery time, there is now an extensive literature covering valuations of travel time variability across all modes. The rail industry in Great Britain uniquely has its Passenger Demand Forecasting Handbook (PDFH), and this has recommended reliability values since its inception in 1986. The measure used is mean late time and currently recommended multipliers are in the range 2.3 to 3.9 depending upon flow type. Other significant reviews of rail reliability values are provided by Wardman and Batley (2014) in the UK context, Wardman et al. (2016) in a broader European context and OECD/ITF (2014) in an international context.

Initial insights on passengers' perspectives on recovery time from surveys

This section discusses the findings from a set of questions regarding passengers' awareness of and attitudes towards recovery time from the online and on-train surveys, as well as the focus groups where passengers provided further insights, particularly into the format of the SP exercise.

The online and on-train surveys

The online survey was undertaken in February 2012 and was completed by 1006 rail users from a national panel. The on-train survey was conducted in June 2012 on a mix of services. The large sample of 1013 passengers obtained was reduced slightly to 972 after removing those who did not provide all relevant details. Table 1 allows a comparison of the features of the achieved samples with the National Travel Survey (NTS), which provides a representative account of travel in Great Britain obtained from a random sample of households. We took the latter to cover the years 2010 to 2015 and those aged 16 and over making rail trips, which yields a sample of 8254 individuals. We report the proportion in each category along with its standard error. The NTS figures are weighted by the number of trips made by each respondent. The journey purpose splits for the online survey result from quotas specified for the recruitment, whilst the on-train surveys were conducted on a mix of long distance inter-city (East Coast Main Line and Cross Country), regional (TransPennine) and commuter (First Capital Connect and South West Trains) services as a practical means of surveying a range of routes and traveller characteristics given the resources available. Whilst there are some inevitable discrepancies between our samples and the NTS in terms of journey purpose, this is not a particular cause for concern since the descriptive statistics covering recovery time and the modelling of the SP data both stratify by journey purpose. Nonetheless, when we examine the gender, age, employment status and occupation characteristics of the on-train and online samples, we find them to be encouragingly similar to the distributions in the NTS; this is particularly the case for the On-Train survey.
Comparing the NTS and the On-Train survey, the youngest group (16-25) is slightly under-represented and those above 45 years old are slightly over-represented; however, the figures for other age groups, gender, employment status and occupation categories are not significantly different from the corresponding NTS values. Thus, we conclude that the On-Train sample, which will be the one used to derive valuation estimates, can be deemed acceptable on representativeness.

Awareness of the existence of recovery time

The online survey asked respondents if they had noticed a range of journey time 'irregularities' during their reported most recent journey. 15% stated that they had noticed trains travelling slower than normal between stations, 11% felt that their trains stopped for longer than necessary at intermediate stations and 23% thought their trains stopped or slowed down unexpectedly between stations. Out of all travellers that had experienced any of these circumstances, 74% stated that they were not told about the reasons causing them, which could be because they were planned. Most people are unlikely to be familiar with the term 'recovery time', although the concept is straightforward. For this reason, respondents were informed that "train operators sometimes include in their timetables additional time over the minimum required to get to the destination to allow for unforeseen delays" and were then asked whether they were aware of this. As is apparent in Table 2, only 25% were aware in the online survey, with minor variations among the different user groups. The proportion of aware passengers was somewhat larger (43%) in the on-train survey, being higher for commuters and lower for leisure travellers, and presumably this is because the on-train sample would contain more frequent rail travellers, who may have noticed different journey times by direction of travel. Moreover, the on-train surveys had a greater focus on routes where recovery time had been introduced. We would expect these proportions to grow over time both as recovery time becomes a more common feature of timetables and travellers become more aware of the practice. Moreover, the consequences of recovery time are additional waiting time at or between stations, and travellers will experience the annoyance of this even if they are not aware of the presence of recovery time in timetables. Finally, while a lower share of awareness could be attributed to lower frequency, the answers to subsequent questions made us believe that the online responses were somewhat less reliable than those from the on-train survey. This is not surprising given well-known limitations of online panels (see e.g. Significance et al. 2012). Thus, in what follows we focus on the figures from the on-train survey, since it is the one actually carrying the valuation experiment. 3

Awareness of the amount of recovery time used by train operators

Those who were aware of the concept of recovery time in timetables were asked "how much additional time do you think train operators allow for trains similar to the one you are currently on". Table 3 summarises the responses by purpose and distance band for the on-train survey. The category 'cannot say' in most cases contains the largest proportion of respondents, highlighting the difficulty in this task. Where respondents could provide an estimate, by far the largest proportions are for 2-5 min. The second most common category tends to be 6-10 min, especially for the longer journeys as might be expected.
Very few thought that recovery time was less than 2 min, and it is only amongst business and leisure travellers on longer distance journeys where there is a noticeable proportion stating the recovery time exceeds 10 min. Perceived recovery time unsurprisingly tends to be larger for longer distance journeys but with no strong variations by journey purpose. A follow-up question revealed that, of those aware of recovery time, 20% felt that it improves punctuality a lot, a further 50% stated that it improved punctuality a little, with 19% feeling it made no difference and the remainder unable to say.

Ideal recovery time

Obvious questions to ask of our samples are whether and to what extent recovery time is wanted. The on-train survey asked about the approval of recovery time in train timetables and about an ideal rail recovery time. Given contingency had been defined to respondents, 4 the question took the form "How much contingency would you ideally like to have on trains similar to the one you are currently on?". Regarding the 'approval' question, 56% approved, 12% disapproved, and the remainder had no preference. More importantly, respondents were asked to select the recovery time they would like the train operators to add. The responses are summarised in Table 4. The responses reveal a clear correlation between journey length and ideal recovery time, as might be expected. The largest category chose between 2 and 5 min, with two-thirds between 2 and 15 min. A non-negligible 16% could not say, perhaps suggesting that optimal levels are also dependent on other factors such as reliability levels. In summary, the surveys reveal that travellers' awareness of and preferences towards recovery time are mixed but that, despite the limited awareness, a majority approves of recovery time as a tool to improve reliability. There is broad support for modest amounts of recovery time, prior to testing how individuals would trade off recovery time with in-vehicle time and delays.

The focus groups

Two focus groups were undertaken, in Basingstoke and Birmingham, with the particular aim of testing the viability of an SP exercise and determining its most appropriate format, and guiding the development of the questionnaire. Whilst some of the focus group recruits were aware of the concept of recovery time, none were aware of the term 'recovery time'. After being informed of the concept, most thought that recovery time would improve punctuality, and perceived it in a positive way, although a small minority considered that train operators should not need recovery time in order to be punctual and considered it to be a negative feature. All but one of the 20 participants built in their own 'recovery' time for car journeys, but only a minority did so for train journeys, and then it tended to be less than for car journeys. The reasoning for building in more car recovery time was because it is their own responsibility to arrive at the destination on time, whereas for train journeys it was more the responsibility of the train operator, who could be blamed or provide compensation. Participants were more likely to build in significant recovery time into their rail journeys when travelling to airports or when on business trips. The rationale behind the SP exercise is set out below, but it involved specifying journeys over 5 typical days and for each day conveying different actual journey times, levels of punctuality and actual recovery time for the two options offered.
The focus groups presented SP 'mock-ups' and these established that respondents could relate to the presentation of different journeys across 5 days, which is critical given that reliability is an inherent feature of the exercise. This was reassuring given that this now seems to be the accepted means of presentation in reliability studies. Nonetheless, we here have the added dimension of recovery time. In one SP version, we provided the timetabled departure and arrival time, but participants felt this was an unnecessary level of detail and indeed the key information can be conveyed without this. A simple version was to present the scheduled time and the planned recovery time, and for each day the actual recovery time and the arrival time punctuality. Participants, though, preferred a version which additionally included the actual journey time, which then made clear the levels of punctuality and actual recovery time. This was preferred to an alternative which, instead of providing the actual journey time, specified the en-route delay time on each day. Participants generally felt that providing both the en-route delay time and the actual journey time alongside the punctuality and actual recovery time on each day provided too much information to assimilate. We therefore opted, as is apparent in Fig. 1 below, to provide the actual journey time on each day and hence the implied actual recovery time and the punctuality given the scheduled journey time and specified contingency.

Design of the stated preference experiments

In order to infer valuations of recovery time, meaningful trade-offs had to be offered to travellers. We could not envisage a context where we could simply offer trade-offs between recovery time and journey time in a realistic manner. Since the purpose of recovery time is to improve reliability, a measure of reliability also has to be included in the trade-off context. Minutes of late time were therefore included in the choice scenarios. This would then also allow the estimation of the value of late time, which conveniently can then be used along with the value of recovery time to appraise measures to improve reliability through recovery time. We did not include any monetary terms in the SP exercise since it is sufficient that recovery time is valued in equivalent units of in-vehicle time to enable the extension of the railway industry's Generalised Journey Time (GJT) term to include a weighting of journey time in line with any premium valuation of recovery time. Based on the findings of the focus groups, recovery time was referred to as "contingency time" in the SP experiment offered to respondents. An example of the SP experiment is provided in Fig. 1. It presented travellers with nine choices between two train service options characterised by different levels of scheduled and actual journey times, scheduled and actual contingency times and late times. Option A included contingency time whilst Option B had no contingency time, for reasons of simplicity and offering a clear-cut trade-off. The reliability element was presented in what is now a fairly conventional manner, first introduced by Senna (1994), of showing the journey times that might occur on five different days. The actual journey times in combination with the planned contingency time imply an actual amount of contingency time and an element of late arrival time. These implicit figures were presented in the SP exercise to make the choice task clearer for respondents.
For Option A, which contained the recovery time, the train journey times were chosen so that there were never any early arrivals. 5 [Footnote 5: This was to avoid adding a further dimension to an already challenging SP exercise given that valuing early station arrivals was not a requirement of the study. Where in reality recovery time manifests itself in early arrivals, it is usually at terminus stations, although on longer distance journeys, such as on cross-country routes, there can be dwell times at larger stations along the route. On average, around 20% of trains in Britain arrive earlier than the publicly-advertised times, but this is rarely by more than a minute or two since trains are generally held to their advertised departure times at all stations along the route.] The actual (unused) contingency time could be wait time between a pair of stations, as in the example of Fig. 1, or wait time at an intermediate station, and these were described in the introductory rubric and randomly allocated to respondents. An on-schedule journey would therefore imply an actual amount of contingency time equal to the scheduled amount. Option B, which has no planned contingency time, has a late time which is the actual minus scheduled journey time, and again no early arrivals were offered. 6 [Footnote 6: At the time of the survey, delay-repay schemes compensated travellers for late arrivals of 30 min or more and so could have had a confounding effect on the interpretation of the SP options. The 9 SP scenarios offered to each person, which contained 90 levels of delay time in total, were therefore restricted to just two (out of 90) instances of 30 min delay and none greater.]

The SP design needs to be based around the utility function that it is intended to estimate. This takes the form:

$U_j = \alpha\,IVT_j + \beta\,R_j + \gamma\,L_j$ (1)

where for alternative j, IVT is the 'standard' in-vehicle journey time ('standard' means it excludes recovery time and late time), R represents the recovery time experienced and L denotes the late time. These are averages given that the choice context here covers 5 'daily' scenarios. We adopted the 'boundary ray' approach to the design of the SP experiment (Fowkes 1991) on the grounds that this is both a feasible and attractive option when dealing with three attributes. This method aims to offer trade-offs across variables that are sensible in terms of the range of preferences that respondents might reasonably be expected to have. Setting α in Eq. 1 above to one, to operate in terms of the time multipliers we wish to estimate, the point of indifference, or boundary value, in the choice between options A and B is:

$IVT_A + \mu R_A + \rho L_A = IVT_B + \mu R_B + \rho L_B$ (2)

where μ is the time value of recovery time and ρ is the time value of late time. We can therefore plot the relationship of indifference (boundary ray) between the value of recovery time (μ) and the value of late time (ρ). This is:

$\mu = \frac{(IVT_B - IVT_A) + \rho\,(L_B - L_A)}{R_A - R_B}$ (3)

We specified option B to have no recovery time and hence to be the more unreliable option. The intercept is therefore $(IVT_B - IVT_A)/R_A$. A respondent's choice indicates which side of the boundary ray they are located on, and the design task is to select appropriate differences in time, late time and recovery time to offer a sensible range of choices. The slope must here be positive. The intercept can be either positive or negative depending upon the sign of the in-vehicle time difference $(IVT_B - IVT_A)$, which gives an element of flexibility in how the boundary rays cover the expected range of values. So how do we decide the differences in IVT? Given that option B has more delay time, which is relatively highly valued in terms of equivalent units of in-vehicle time, and given that we do not expect the premium attached to the greater recovery time in option A to be as large, we have made option B generally quicker so that sensible trade-offs are offered which yield useful information for modelling purposes.
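To make the boundary ray construction concrete, a minimal Python sketch follows. It assumes the utility form of Eq. (1) with α set to one and, as in our design, no recovery time in option B; the attribute levels in the example are illustrative rather than taken from the actual designs.

```python
# Minimal sketch of the boundary ray of Eq. (3) with R_B = 0: a respondent
# whose (rho, mu) pair lies below the ray prefers option A (with recovery time).
def boundary_ray(ivt_a, ivt_b, r_a, late_a, late_b):
    """Return (intercept, slope) of the ray mu = intercept + slope * rho."""
    intercept = (ivt_b - ivt_a) / r_a
    slope = (late_b - late_a) / r_a  # positive, since option B is less reliable
    return intercept, slope

# e.g. option A: 60 min scheduled with 10 min contingency and 2 min mean lateness;
#      option B: 55 min with no contingency and 8 min mean lateness
print(boundary_ray(60, 55, 10, 2, 8))  # -> (-0.5, 0.6)
```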
We started with a design that was orthogonal in differences, with three levels of difference for each variable, and changed the attributes within the bounds of what we felt reasonable to obtain a sensible set of boundary rays. In the process, we departed from orthogonality, but we ensured that the correlations between attribute differences were not large. An example of the boundary rays, relating to journeys of around an hour, is provided in Fig. 2. At the time of the study, we were aware of other advanced SP design methods-e.g. D-efficient designs-which, in principle, yield more precise coefficient estimates (Bliemer and Rose 2011). Notwithstanding this, we used the boundary ray approach outlined above because it uses priors relating to relative valuations in determining the trade-offs to offer, and in the absence of previous work in the area we felt we could better 'guesstimate' relative valuations than coefficients. We also conducted simulation tests on our designs using synthetic choice data prior to implementation, which indicated that they could accurately recover the relative values used in creating the test data. For future studies, however, a more efficient design could lead to the estimation of significant values with a lower sample size. Separate SP designs were developed based around the actual journey times on the train routes selected to be surveyed. Such customisation is important for the realism required to obtain reliable responses given that the survey took the form of self-completion 'pen and paper' questionnaires. In total, six SP exercises were designed, based upon reported journeys of around 30 min, 45 min, 1 h, 1½ h, 2 h 15 min and 3 h. So, for example, the 2 h 15 min design catered for the Leeds/Wakefield to Kings Cross journey on the East Coast Mainline. The scheduled recovery times were 5, 10 and 15 min, except for the 30-min design where they could be only 5 or 10 min, whereas 20 min was also permitted in the designs for journeys of 2 h 15 min and 3 h. While these levels may seem higher than what is typically observed in reality, and accordingly than people's perceptions (see Table 3), there are two reasons to justify this selection. First, one of the objectives of this study is to inform decisions on how much recovery time to build in and whether adding more is desirable; given that recovery time will be readily appreciated to be under the control of train operators, variations in it are credible provided that we limit it to what can be expected to be reasonable. Second, offering very small changes in time is typically avoided in the time valuation literature since respondents struggle to make meaningful choices (this is discussed in more detail below as it also applies to other attributes). Furthermore, one of the purposes of the focus groups was to explore recovery time levels of these magnitudes and they were deemed to be realistic. The levels of late time ranged from zero to 30 min, and also included 5, 10, 15 and 20 min. Where recovery time existed, late time took the level zero in four or five instances in all but one scenario, where three of the five late times were zero.
When there was no recovery time, the pattern was that two of the five late times were zero. The non-zero late times tend to be much larger than typical or average; as a consequence, the mean lateness in the SP exercises, of between zero and 9 min where there is recovery time and between 3 and 12 min where there is no recovery time, tends to exceed the mean lateness on the routes we were to survey, which ranged from 1.8 min on the shorter distance flows covered by South Western Trains through to 3.7 min on the longer distance journeys served by Cross Country. The reasoning behind the late time levels adopted is that restricting them to the few minutes of actual circumstances would run the risk that the variations would have been ignored. Indeed, offering somewhat larger amounts of late time than are routinely experienced is customary in SP studies of travel time variability, such as in the UK (ARUP et al. 2015) and Dutch (Significance et al. 2012) national value of time studies. 7 Moreover, the levels of late time (as well as those for recovery time) and the distributions used in the SP exercises had been explored in the focus groups and no adverse feedback was received, whilst we incorporated the safeguard of asking respondents how realistic they found the levels offered, whereupon we can test the impact of perceived unrealism on the estimated parameters.

SP data collection and summary statistics

The SP surveys were conducted on train, partly to ensure passengers were thinking of an actual journey when completing the SP questionnaires and partly because this is a very cost-effective means of achieving large samples. Table 1 indicated that the on-train sample corresponds closely with the NTS in terms of gender, age group, employment status and occupation. Whilst "The online and on-train surveys" section pointed out differences in the journey purpose distributions between the two samples, we account for this in our analysis of the SP data through journey purpose segmentations. There was a 66%:34% split between those choosing option A, which contained recovery time, and Option B, which contained none, reflecting an overall favourable view of the inclusion of recovery time in timetables. When asked about the realism of the SP exercise, 5.5% reported the journey times to be fairly unrealistic, with 1.5% regarding them to be very unrealistic. The corresponding figures for late time were 10.0% and 1.6%, and for wait time 19.0% and 3.4%. It is not surprising that wait time is regarded to be the most unrealistic since it is likely to be the variable that respondents are least familiar with. Nonetheless, the attribute levels were largely regarded to be realistic, and when perceived unrealism was allowed to impact on the relevant parameter estimates no significant effects were obtained. Respondents did not appear to have undue problems with the SP exercise, with around a third finding it to be very easy and only 3% reporting it to be very difficult.

Analysis of stated preference data

Often, SP studies resort directly to model estimation to analyse individuals' choices. Such an approach, while efficient, risks missing part of the story hidden in the dataset. We believe this is especially the case with this study, as it is the first analysis of recovery time from the passengers' perspective. Thus, we first report a preliminary analysis of the dataset (5.1), followed by a description of the choice modelling methodology (5.2) and the main results (5.3).
Non-trading behaviour

There was a high degree of non-trading in the sample, with 454 respondents (47%) always choosing the same option across all 9 choice tasks. This is made up of 37% who always chose option A, which includes the recovery time, and 10% who always chose the less reliable option B. However, while the term "non-trader" is widely used in the choice modelling literature, technically we cannot say whether these people did trade off between options or not. What we mean is that it is possible that these respondents did trade off, but concluded every time that option A or B was preferable for them based on their intrinsic valuation of the attributes involved. It is also known that non-trading might be related to inertia effects (Hess et al. 2010); in this respect, while neither of the options represents the exact current trip of passengers, it is possible that those who do (do not) currently perceive some recovery time are inclined to the option with (without) recovery time. At the other extreme, non-trading behaviour can also be linked to lack of engagement or experienced difficulty with the survey, leading to the same choice in every task (although the latter is unlikely considering respondents' feedback on the survey exercise). One way of finding out is to check whether non-trading behaviour is related to intrinsic attitudes and preferences. Looking at these respondents in more detail, which we do in Table 5, their choices are generally consistent with their personal view on the use of recovery time by train operating companies. Those who approve of recovery time are more likely to always choose A (45%) in comparison with those with different views (3%, 8% and 26% respectively), whilst those who disapprove are more likely to always choose B (37%) compared to those with other views (4%, 9% and 9% respectively). At the same time, more than 50% of respondents within each 'personal view' category chose A or B at least once. Overall, the information provided by 'non-traders' is justifiable on the grounds of being correlated with their personal views and we cannot discard it. Nevertheless, having half of the sample always choosing the same option can be a problem for the estimation of the model parameters, and this will be taken into account at the modelling stage.

Choice patterns by key characteristics

It is well known that the rail travel market is made up of a number of different segments with often distinctly different preferences. Travel demand studies generally distinguish between commuters, business travellers and leisure travellers and we do that here. Table 6 provides, separately by journey purpose, further insights into the pattern of SP responses. The statistics shown help to explain how preferences towards recovery time vary depending on journey purpose. A number of key messages emerge from Table 6, starting with some evidence of a positive preference towards recovery time, judging by the proportions of choice and personal views. In terms of choice, all three categories of journey purpose have a sizeable majority of respondents who prefer option A (with recovery time). This ranges from 60% for commute to 75% for leisure. In terms of personal views, over 80% of all respondents either approve of (57%) or are neutral (30%) about the use of recovery time on rail. Differences in responses between journey purposes also come through strongly in Table 6. Leisure travellers chose the option with recovery time 75% of the time and 60% of them approved of its use.
In contrast, among commuters "only" 60% chose option A and 53% approved of its use. In between these lie business users, with 63% choosing option A and 58% approving of recovery time. Such differences reflect the characteristics of these individual travel markets and are a positive validation of the survey results. For example, leisure trips are typically one-off trips and sensitive to punctuality (going to the theatre, meeting friends for dinner, catching a flight etc.). As such, any delay to a leisure journey will have a strong disutility for that passenger and it is logical that they will have a stronger preference for recovery time. Commuters travel much more frequently. Consequently, they are likely to experience unreliability on a regular basis and may either have contingency plans for such events or build in their own recovery time by catching an earlier train than they need to. For some commuters (note not the majority) there may be a preference for shorter journeys above more reliable journeys, e.g. saving 15 min 4 days of the week and risking being late 1 day is preferred to no travel savings and guaranteed punctuality. This preference may differ by the type of job (e.g. a nurse vs. an office worker), the flexibility of the work environment and the individual. Overall, however, the majority of commuters are still supportive of recovery time, albeit less so than leisure travellers. A business trip can share the features of commuting and leisure trips. In terms of frequency it is more likely to resemble a leisure trip, but it also shares with a commuting trip the consequences of delay, e.g. being late to a meeting, which may be perceived as more palatable than the delay consequences in a leisure trip, e.g. missing the start of a concert or a holiday trip. It is therefore not surprising that the level of approval of recovery time for business travellers lies in between those for leisure and commuters.

Valuation methodology

Choice models are used to analyse the data and infer valuations. First, we describe the base functional form of the Multinomial Logit (MNL) model used, which is followed by two sets of extensions that allow us to account for observed heterogeneity across passengers and the impact of time variability. All preferred models reported in the paper include observed heterogeneity.

Base model

Individuals are assumed to make a choice between the two options in order to maximise their utility. Each of the two travel options j has an associated utility U_j that is defined in terms of the attributes presented in the SP experiment. The utility function of the base MNL model is specified as follows:

$U_j = \beta_{ivt}\,E(IVT_j) + \beta_{late}\,E(LATE_j) + \beta_{rrt}\,E(RRT_j)$ (4)

Since the two options were defined for 5 typical trips over a week, the expected value of each attribute across the 5 trips is used. E(IVT_j) is the average standard In-Vehicle Time for alternative j, E(LATE_j) is the average late time for alternative j, and E(RRT_j) is the average residual recovery time for alternative j. Hence, the RRT is that part of the built-in recovery time that is left 'unused' where it was not replacing any late minutes. In other words, RRT is the additional cost (in time units) that passengers incur in return for reduced late time. The β_ivt, β_late and β_rrt are parameters to be estimated, each indicating the marginal utility of an additional minute for the three types of travel time minutes described above.
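To illustrate how a model of this kind can be estimated, a minimal Python sketch of the binary logit log-likelihood follows, written directly in terms of the multipliers (anticipating the valuation-space form discussed next). The arrays d_ivt, d_late and d_rrt (A-minus-B differences in the expected attribute levels) and chose_a are assumed to be available; names and starting values are illustrative, and this is a sketch rather than the estimation code used in the study.

```python
# Minimal sketch of the binary logit estimated directly in terms of the
# in-vehicle time coefficient and the VML / VMRRT multipliers.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, d_ivt, d_late, d_rrt, chose_a):
    """theta = (beta_ivt, vml, vmrrt); d_* hold E(.)_A - E(.)_B per choice task."""
    b_ivt, vml, vmrrt = theta
    d_util = b_ivt * (d_ivt + vml * d_late + vmrrt * d_rrt)  # U_A - U_B
    p_a = 1.0 / (1.0 + np.exp(-d_util))
    p = np.where(chose_a == 1, p_a, 1.0 - p_a)
    return -np.sum(np.log(np.clip(p, 1e-12, 1.0)))

# beta_ivt should come out negative (extra minutes are a disutility), so the
# multipliers vml and vmrrt are read directly in units of in-vehicle time.
result = minimize(neg_loglik, x0=np.array([-0.05, 2.0, 1.2]),
                  args=(d_ivt, d_late, d_rrt, chose_a), method="BFGS")
beta_ivt, vml, vmrrt = result.x
```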
Three key valuation measures can be inferred from the observed choices in this model: • The value of mean residual recovery time (VMRRT), giving the value of 1 min of mean residual recovery time in relation to 1 min of mean standard in-vehicle time: $VMRRT = \beta_{rrt}/\beta_{ivt}$ (5). In other words, the VMRRT is a multiplier on IVT that measures the premium, as perceived by travellers, of a minute of additional time inside the train due to the introduction of recovery time in the timetable. • The value of mean late time (VML), which is a conventional measure of the value of reliability, gives the value of a minute of mean late time in units of mean IVT: $VML = \beta_{late}/\beta_{ivt}$ (6). This is known in the railway literature as the late time- or lateness-multiplier (Wardman and Batley 2014). • The ratio VML/VMRRT gives the weight of 1 min of mean late time relative to 1 min of mean residual recovery time. This ratio can be interpreted as a type of late time multiplier. Because recovery time is aimed at reducing late time, this is the key measure that informs on how many minutes of mean residual recovery time people are willing to accept in order to reduce mean late time by 1 min. Due to our interest in valuation estimates, we transform the model into 'valuation space', analogous to what is known as Willingness-to-Pay space in the literature on valuation of travel time (Train and Weeks 2005). Given that the VML and VMRRT have a common denominator ($\beta_{ivt}$), the model can be translated into a valuation space where we readily obtain estimates of the VMRRT and VML in minutes of IVT: $$U_j = \beta_{ivt}\left[E(IVT_j) + VML \cdot E(LATE_j) + VMRRT \cdot E(RRT_j)\right] \quad (7)$$ This model is equivalent to that of Eq. (4), but it provides direct estimates of VMRRT and VML. The coefficient on IVT ($\beta_{ivt}$) becomes effectively a scale parameter in this model. This has the advantage of facilitating the modelling of preference heterogeneity, which can now be linked directly to the valuation estimates (VML and VMRRT). In other words, any preference heterogeneity in the relative valuation of RRT and IVT can be reflected directly through VMRRT instead of through two separate coefficients as in more traditional models (Train and Weeks 2005). Accounting for heterogeneity in travellers' preferences Valuations of recovery time (VMRRT) and late time (VML) are likely to be heterogeneous across respondents. An extensive search for a model specification that dealt with observed and unobserved heterogeneity was conducted, including many variations of MNL, Latent Class (LC), and Mixed Logit (MMNL) models (McFadden and Train 2000). Four different extensions of the MNL model, all of which account for observed heterogeneity, are reported in the "Results" section below. Before the selected models are presented, we briefly summarise the specification search and why the other options (namely LC and MMNL variations) were discarded. Passengers' preference heterogeneity can be accounted for in the model through observable variables, such as journey purpose and length, but also in the random error term structure, since part of the heterogeneity will not be observable. The Mixed Logit (MMNL) model is typically useful to estimate a distribution of values. We tried and discarded this option because we could not successfully estimate a MMNL model that resulted in a significant set of segmented values (e.g. by trip purpose).
We feel that this is partly because the values of recovery time and mean late time are multipliers of time, and we would not expect variations in multipliers to be as large as for the more typically estimated monetary values, making it more difficult to simultaneously pick up random and observed heterogeneity. This is in line with the literature, including the most recent studies that estimate late time multipliers (e.g. Hess et al. 2017). We also suspect that the high degree of non-trading in the data limited the possibilities of recovering distributions of values jointly with observed heterogeneity through a MMNL. The latent class (LC) model was technically a better option than the MMNL as a behavioural model, since instead of a distribution of values, the LC estimates a discrete set of different values for each of the latent classes of respondents, where both parameters and class allocation could be associated with individuals' characteristics. Nevertheless, the LC was also discarded for the same reason, namely the failure to provide a significant set of segmented values within a LC structure. Again, we believe this could be linked to the high degree of non-trading in the data, which limits the possibilities for unpacking all sources of heterogeneity. The estimated MNL models relate all observable heterogeneity to the valuation measures VMRRT and VML in such a way that sample average values can be derived for all segments of interest (e.g. commute, business and leisure). Additionally, it is possible to partially control for non-trading behaviour. The variables tested include awareness of recovery time, gender, age, journey duration, journey purpose, the need for interchanges and personal views on approval/disapproval of recovery time. Discrete and continuous multipliers were added to the VMRRT and VML parameters to discern different valuations for different segments. The use of multipliers, instead of additive interactions, simply facilitates direct calculation of valuation estimates for different segments. For discrete variables (e.g. purpose), multipliers on VMRRT and VML were estimated for all categories (e.g. commute and leisure) except for one, the base (e.g. business), for which its multiplier was set to 1. Using travel purpose multipliers on the VML as an example, we specified: $$VML_i = VML \cdot \theta_{VML\_com}^{\,\delta_{com,i}} \cdot \theta_{VML\_lei}^{\,\delta_{lei,i}} \quad (8)$$ where the $\delta_{x,i}$ are the dummy variables for the different categories x of the variable, and the $\theta_{v\_x}$ are the valuation multipliers for valuation measure v and category x. In this example, business is the base category (its multiplier is fixed at 1), and $\theta_{VML\_com}$ and $\theta_{VML\_lei}$ are the multipliers for categories commuting and leisure on the VML. The only continuous variable was journey duration and its multiplier was entered as an elasticity specification, commonly used in valuation studies (Mackie et al. 2003; ARUP et al. 2015). Taking VMRRT as an example: $$VMRRT_i = VMRRT \cdot \left(D_i/\bar{D}\right)^{\lambda_{D\_VMRRT}} \quad (9)$$ where $\lambda_{D\_VMRRT}$ is the journey duration elasticity of VMRRT to be directly estimated. The denominator $\bar{D}$ in the multiplier ensures that the readily estimated VMRRT relates to a train journey of $\bar{D}$ minutes. This $\bar{D}$ value was set to 45 min for commuters and to 90 min for leisure and business travellers. 8 Since separate elasticities were estimated by purpose, $\lambda_{D\_v\_x}$ would represent the duration elasticity on valuation v for purpose x. A sketch of this multiplier-and-elasticity specification is given below.
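The following sketch illustrates how the multiplicative segmentation of Eq. (8) and the duration-elasticity form of Eq. (9) operate in practice. All numerical values (base valuations, multipliers, the elasticity) are invented for the example and are not the paper's estimates.

```python
# Hypothetical base valuations, in minutes of IVT, for the base category.
VML_BASE = 5.0
VMRRT_BASE = 1.8

# Hypothetical purpose multipliers on VML (business is the base, fixed at 1).
THETA_VML = {"business": 1.0, "commute": 0.9, "leisure": 1.0}


def vml(purpose):
    # Eq. (8): the base VML scaled by the multiplier of whichever
    # purpose dummy is switched on for respondent i.
    return VML_BASE * THETA_VML[purpose]


def vmrrt(duration_min, purpose, lam_d=0.5):
    # Eq. (9): VMRRT_i = VMRRT * (D_i / D_bar)**lambda, where D_bar anchors
    # the base estimate to a 45 min commute or a 90 min business/leisure trip.
    d_bar = 45.0 if purpose == "commute" else 90.0
    return VMRRT_BASE * (duration_min / d_bar) ** lam_d


print(vml("commute"))           # segmented VML for commuters
print(vmrrt(30.0, "commute"))   # short commute: VMRRT below the base
print(vmrrt(90.0, "commute"))   # long commute: VMRRT above the base
```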
Last but not least, an elegant solution was adopted to deal with the high degree of non-trading in our sample. As we previously argued, discarding their preferences was not an option because non-trading behaviour was to some extent related to personal views on the use of recovery time (see Table 5). On the other hand, including multipliers for the non-traders was also not an option because their valuations would not be correctly identified. We needed a mechanism that somehow allowed us to partially control for their extreme preferences, but without fully removing them from the mean valuation estimates. Since the variable 'personal views on the use of recovery time' and non-trading behaviour were partially correlated, the solution adopted was to include multipliers on the valuation measures (e.g. VML) for the "approvers" and "disapprovers" categories (neutral views was set as the base category). These multipliers, modelled in the fashion of Eq. (8), capture any excess (above or below) on valuations that is strictly related to personal views. As we shall see, with this approach, valuation measures fall somewhat in-between those of the model without these additional multipliers and a model which excludes non-traders. Results for these two additional model variations are included (models 2 and 3) and discussed in the results section. Accounting for the variability of time So far, the models only use the expected values of the time distributions but disregard the variability of outcomes. However, the wider the spread of potential travel time outcomes, the higher the risk and uncertainty. A final extension of the base model (model 4) deals with the variability in time. In this extension, we move away from the assumption in previous models that each of the five outcomes (see Fig. 1) has an equal weight. This version of the model allows for different weights for the five travel time outcomes using a constant absolute risk aversion (CARA) specification (Liu and Polak 2007; Hess et al. 2017): $$U_j = \beta_{ivt} \sum_s \frac{1}{5} \cdot \frac{1 - e^{-\alpha V_{j,s}}}{\alpha} \quad (10)$$ where $$V_{j,s} = IVT_{j,s} + VML \cdot LATE_{j,s} + VMRRT \cdot RRT_{j,s} \quad (11)$$ and where s indicates the travel outcome (s = 1,…,5) and j indicates the travel alternative. The new parameter $\alpha$ picks up the degree of risk aversion/seeking behaviour: a positive (negative) $\alpha$ indicates a risk averse (seeking) attitude. Behaviour is risk neutral when $\alpha$ approaches zero, thus approximating the base model from Eq. (7). A second extension using a mean-standard deviation model was tested, adding a coefficient on the standard deviation of late time as an additional term. 9 However, the high correlation between mean lateness and the standard deviation (around 95%) did not allow us to separate the two impacts and the CARA approach was preferable. This was also the approach chosen to account for time variability in the latest UK national study valuing in-vehicle travel time and late time (Hess et al. 2017). Results The estimation results are presented in Table 7 below (t-ratio(0) is the standard t-ratio, while t-ratio(1) is used for VMRRT, VML and the other multipliers, to determine whether they are significantly different from 1). Following an extensive model specification search, four models are reported. A base model (model 1) is reported first. Model 1 accounts for part of the observed heterogeneity in the data but critically does nothing in relation to the presence of non-traders. The same model is also estimated including a treatment of the non-traders' choices (model 2) via the multipliers of approval views (θ_disapprove and θ_approve) and excluding the non-traders (model 3) for comparison. Model 4 builds upon model 2 to take account of travel time variability through a CARA specification, sketched below.
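A minimal sketch of the CARA weighting in Eqs. (10)-(11) follows. The five-outcome profiles and all parameter values are hypothetical and chosen only to show the mechanics, with the risk-neutral base model recovered as α approaches zero.

```python
import numpy as np


def cara_utility(ivt, late, rrt, vml, vmrrt, alpha, b_ivt):
    # Eq. (11): generalised time V_{j,s} for each of the five weekly outcomes.
    V = ivt + vml * late + vmrrt * rrt
    # Eq. (10): average of the CARA transform (1 - exp(-alpha*V))/alpha over
    # the five outcomes; as alpha -> 0 the transform tends to V itself,
    # recovering the risk-neutral valuation-space model of Eq. (7).
    u = V if abs(alpha) < 1e-12 else (1.0 - np.exp(-alpha * V)) / alpha
    return b_ivt * u.mean()


# Hypothetical five-outcome profile (minutes) for one alternative.
ivt = np.full(5, 45.0)
late = np.array([0.0, 0.0, 0.0, 5.0, 15.0])  # occasional delays
rrt = np.array([4.0, 4.0, 4.0, 2.0, 0.0])    # recovery time partly 'used up'

for alpha in (0.0, 0.02):
    print(alpha, round(cara_utility(ivt, late, rrt, 5.0, 1.5, alpha, -0.05), 3))
```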
As expected from our preliminary analysis of non-trading behaviour, the relatively poor explanatory power of model 1 is arguably related to the lack of any control for the non-traders' responses. Model 2 significantly outperforms model 1 thanks to the valuation multipliers on passengers' approval/disapproval of recovery time. An overview of the results of models 1, 2 and 3 shows that, as expected, model 2 (with controls) derives valuations that are in-between those from the model which excludes non-traders (model 3) and the model that includes them but does not control for them in any way (model 1). Finally, model 4 produces outcomes that are not too far from those of model 2, but it is a more flexible specification which allows for differential weights for each of the five travel time outcomes. The positive α_CARA estimate suggests the presence of risk averse behaviour in the sample. Moving forward, considering the statistical superiority of models 2 and 4 and the greater flexibility of model 4, we focus on the outcomes from model 4. The value of mean late time (VML) indicates that business travellers are willing to accept 5.80 min of IVT if this reduces mean late time by 1 min. For commuters and leisure travellers, the VML is equal to 5.19 and 5.78 respectively, but both the θ_VML_leisure and θ_VML_commute multipliers are not significantly different from 1 and thus the difference across purposes is not statistically significant. These estimates are higher than current recommendations of between 2.3 and 3.9 min (these vary by segment) in UK official guidelines for appraisal and forecasting, but not far from existing evidence in the literature (Wardman and Batley 2014). The recommended late time multiplier in the UK is higher than 3 in cases of rail links to airports or long distance journeys. Moreover, Abrantes and Wardman (2011), in a meta-analysis of valuation of time studies in the UK, found a mean multiplier of 6.35 across 15 observations. The value of mean residual recovery time (VMRRT) is estimated at 1.88 min of IVT for business travellers, 1.46 for commuters, and at 0.79 for leisure travellers; however, these differences across purposes are not significant. Taking the value of 1.88 estimated for business travellers, this indicates that 1 min of recovery time is perceived like 1.88 min of IVT. These estimates are significantly different from 1 only at the 90% level of confidence, and this relates only to the underlying base category: other multipliers (e.g. awareness or journey duration) modify these estimates for specific segments. All other multipliers are statistically significant and can be interpreted as follows. Awareness of recovery time (θ_aware) is associated with higher VMRRT, which means that people who are aware of recovery time associate it with a more negative utility than IVT. We find this credible, since aware passengers will have more experience of the frustration involved in prolonged station stops, stopping outside stations and slowed-down running elsewhere. Passengers who need to interchange perceive recovery time more positively relative to IVT (as indicated by θ_interchange), probably because they are more likely to benefit from it than other passengers.
On the other hand, the VMRRT varies across different journey durations for commuters but not for other purposes (thus, only the journey time elasticity λ_D_VMRRT_commute is included in the final models). The longer the journey, the less beneficial recovery time is for commuters; for shorter journeys, recovery time is perceived practically the same as IVT (i.e. VMRRT close to 1; see Table 8). This is a reasonable result, as long commutes are burdensome. People with long commutes will still want reliable journeys, but not so much at the expense of additional IVT (i.e. they may prefer solutions other than increased recovery time). Finally, it can also be observed that the VMRRT in model 4 (and in model 2) falls in-between the base VMRRT from the 'no-treatment' base model (model 1) and the 'excluding non-traders' model 3. When non-traders are excluded, VMRRT is roughly equal to 2.1 and significantly different from 1; when non-traders are included but not controlled for, VMRRT is equal to 1 (this was expected since most non-trading was in favour of the recovery time option). This confirms that our 'treatment' model provides somewhat of a compromise solution between the extreme options of excluding non-traders and not doing anything about them. Nonetheless, and more importantly, the overall issue with non-trading detected in this (first) study leads us to sound a note of caution when interpreting the results and provides a valuable lesson for further studies that examine recovery time preferences. The multipliers on personal views (preferred model) are highly significant and contribute to a large improvement in model fit for models 2 and 4. As expected, the approval (disapproval) multipliers are capturing the much higher (lower) VML of people who always chose the option with recovery time (without recovery time). Unfortunately it is not possible to know the extent to which these multipliers remove the impact of non-trading, but they prove very helpful in allowing us to understand its influence. Also, another advantage of their introduction in the model is revealed by looking at the effects of awareness and interchange across the first three models. There seems to be some confounding between personal views and whether people are aware or have to interchange, and only the inclusion of the personal views multipliers allows all these effects to be disentangled in the model, increasing the precision of the estimates on the awareness and interchange multipliers. Other multipliers on VMRRT and VML (e.g. distance, age or gender) were also tested but found to be not significant and were removed from the final specification. Table 8 summarises the main valuations by key segments, including the VML/VMRRT ratios, using estimates from model 4. For each travel purpose we estimate the VMRRT, VML and their ratio for the following four categories: unaware (base), aware, awareness weighted average and interchange. Three of these categories follow straightforwardly from the model estimates, whereas the 'awareness weighted average' one provides a weighted average of the values of aware and unaware passengers, using the weights from the sample (reported in Table 6). The discussion of these results focuses on the aware/unaware weighted average category as being an interesting representative case of the overall sample. Table 8 shows that leisure travellers have the lowest VMRRT, equal to 1, meaning that recovery time is deemed to be just like IVT and no extra penalty is associated with it.
Business travellers have a VMRRT of 2.3, while commuters' values vary between 1.2 for very short journeys and 2.9 for very long ones. These values are the first to be estimated on this aspect of rail travel and seem reasonable considering the different nature of each travel purpose and the insights from the wider study and focus groups (prior to the SP experiment). We now turn to the ratio VML/VMRRT, which for simplicity we will refer to as the Lateness-Recovery (LR) ratio. This metric is important because it can be interpreted as a pseudo-lateness multiplier that is more relevant for policies that add or reduce the amount of recovery time in a corridor. The LR ratio contrasts 1 min of late time with additional residual recovery time, instead of with additional IVT. Thus, it compares late time with residual recovery time, which is the relevant information for the appraisal of such policies. The estimated LR ratios are 2.5 for business travellers, 6 for leisure travellers, and between 1.8 and 4.5 for commuters depending on trip duration. These values indicate the minutes of mean residual recovery time that passengers are willing to accept in return for a reduction of 1 min of mean late time. The higher the ratio, the greater the case for recovery time, and vice versa. Our results confirm that recovery time can be especially beneficial for leisure travellers, given the sensitivity of leisure plans to potential delays. On the other hand, recovery time will be least beneficial for commuters travelling over 1 h, as they would not be in a position to accept any extra travel time and would probably make their own adjustments to account for potential delays anyway. These values can in principle be used for appraisal. But, as this is the first study into the issue, it goes without saying that supplementary evidence will be required, all the more so given the high presence of non-trading. Further research should be conducted to consolidate the novel first set of values estimated in this study and should focus on reducing the high non-trading behaviour with an improved stated preference design. The outcomes from this first study can inform a better selection of trade-offs in future experiment designs. One drawback of the results is the estimation of a VMRRT lower than 1. This is only the case for the values from interchangers (other isolated cases can be observed in Table 8 but these are not significantly different from 1). It may be argued that it does not make sense that 1 min of travel time (in the form of recovery time) can be perceived as less penalising than 1 min of IVT. In that case, any VMRRT for use in appraisal should never be below 1. However, it might also be argued that recovery time is associated with an additional benefit (increased reliability) which is not assumed for IVT. Comparison with the wider valuation literature Since the estimates of VMRRT are the first evidence of their kind, they do not have a direct comparator available in the literature. The closest comparator can be found in the latest value of time study in the UK (ARUP et al. 2015), which provided values of 'slowed down' time for bus users. The multipliers for 'slowed down' time were found to be between 1.4 and 1.6, not far from our average estimates for recovery time. When comparing this study with other studies of late time valuation in the railway literature (see Wardman and Batley 2014), our (high) estimates of late time multipliers are also not directly comparable as such, for one key reason.
In contrast with previous SP experiments, ours is unique in that responsibility for the delays explicitly rests, at least to an extent, with the rail operator. In other studies, the value of reliability was derived by asking people to pay higher fares or incur alternative longer journey times in return for reduced lateness. This might reveal a very different set of preferences, because delays are not the responsibility of rail users and they may be reluctant to pay to reduce them. In this study, however, the way to reduce delays is through the operator's decision to include recovery time. This of course means a longer journey for the passenger, but it is also a cost to the operator, who might expect a demand reduction as a consequence. In some way, recovery time can be seen as an act of responsibility on the part of the operator. This might be a reason why respondents have so often chosen rail trips with recovery time and have been so supportive about it during the survey, hence revealing atypically high values of late time relative to both in-vehicle time and residual recovery time. Illustrative use of values in appraisal In this section we discuss how the valuation estimates provided could be used for appraisal. First, as set out in the context, the motivation of this research was to analyse whether perceived recovery time carries a premium relative to standard in-vehicle time. If so, this premium should be applied to changes in recovery time and, if greater than 1, would reduce the reliability benefits of policies that increase recovery time. Perceived recovery time, also referred to as "residual recovery time" (RRT), has been approximated in the study as "waiting time on board between or outside stations". The results show that the RRT carries a premium in some circumstances. RRT is valued at between 1 and 3.5 times IVT, depending on journey purpose, distance, interchanges and awareness. However, it is not so clear that these premiums would translate into reduced reliability benefits in appraisal. The reason is that we have also found late time to carry a higher premium than the standard late time multipliers used in the UK (ATOC 2013), which are in a range of 2.3 to 3.9 (with the exception of trips to airports, where a multiplier of 6 is used). Altogether, we find that a minute of late time is valued at almost 3 times a minute of RRT for commuters and business travellers, and at 6 times for leisure travellers. This correspondence is not far from current practice, i.e. using existing late time multipliers on IVT without attaching a premium to perceived recovery time. Consequently, we would not recommend the use of the estimated premiums for perceived recovery time in isolation under the current appraisal standards (where they would be combined with separately estimated late time multipliers). The trade-off between late time and recovery time is as important as the trade-off between IVT and recovery time, and they should be considered simultaneously. More precisely, from our study we observe that premiums for perceived recovery time (VMRRT) higher than 1 are associated with higher late time multipliers (VML). The VML/VMRRT ratios estimated show the relative values of late time and recovery time in our study. In practice, thus, some adjustment is likely to be necessary to align the relative values of IVT, recovery time and late time.
Another implementation problem is that, in practice, it is not possible to separate "used RT" from the perceived "residual RT", as this would depend on the distribution of delays for each corridor. Therefore, a pragmatic solution is to assume that all RT will be perceived by passengers, which is a realistic assumption: a route with an average lateness of 2 min typically does not have every train delayed by 2 min; instead, some trains are delayed longer while others are on time, so any RT introduced is likely to be perceived. Under this assumption, we seek to apply a premium to any additional RT. Let us consider a policy that adds recovery time (ΔRT) and consequently changes (reduces) average minutes of late time (ΔAML). While ΔAML < 0 is a benefit to passengers, ΔRT > 0 is a cost. In the appraisal of such a policy, current practice uses the IVT valuation for the additional RT and late time multipliers for the savings in AML. Our study provides new valuation estimates that can enhance the appraisal. As discussed earlier, if the VMRRT are going to be used in conjunction with external late time multipliers, the valuation premium applied to RT should be adjusted based on the values of the Lateness-Recovery (LR) ratio (i.e. VML/VMRRT). To infer the premium or multiplier for RT (namely, RTM = recovery time multiplier), we can contrast the late time multiplier in use from the appraisal guidelines ($VML_{guidelines}$) with the recommended Lateness-Recovery ratio (LR) from this research. The RTM can be calculated as follows: $$RTM = \max\left(\frac{VML_{guidelines}}{LR},\; 1\right) \quad (12)$$ The RTM is set to be at least equal to 1. Values lower than 1 are difficult to justify, since they imply that RT is less penalising than standard IVT, which is unrealistic. This formula is provided because it is not possible to offer direct estimates that match the segmentation of late time multipliers used in UK appraisal guidelines (see PDFH; ATOC 2013) and because it has the advantage of being applicable to any other established late time multipliers (e.g. future updated guidelines). As an illustration, if we take an external VML from guidelines equal to x (e.g. 3), then the RTM will only be higher than 1 if, for a given segment, the LR (as reported in Table 8) is lower than x (e.g. 3). Evaluation of an actual timetable change We have obtained data for a scheme in Great Britain which introduced recovery time (RT) into railway timetables and subsequently identified the impact on reliability in terms of mean late time (AML). Analysis was carried out for a number of service groups (SGs) (train services sharing a common route section). SGs 2, 3 and 5 are suburban/local in nature, SG4 is outer-suburban (including some significant intermediate centres generating commuting and business traffic), whilst SG1 is a longer-distance route with more leisure traffic. To illustrate the different estimates across purposes, we calculate the changes in generalized journey times (GJTs) for each purpose separately. The applicable LR ratios (VML/VMRRT) for leisure and business travel are equal to 6 and 2.5 respectively, based on the estimates provided in Table 8 above, whereas for commute travel the LR ratios vary by journey length. 10 For simplicity, and to preserve the anonymity of the schemes analysed, we use the prevailing average late time multiplier ($VML_{guidelines}$) from PDFH (ATOC 2013) at the time of this research, equal to 3.
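Before applying Eq. (12) to the scheme data, the short sketch below shows the RTM calculation for the LR ratios reported in Table 8, using a guideline late time multiplier of 3. The code is illustrative only and the function name is ours.

```python
def recovery_time_multiplier(vml_guidelines, lr_ratio):
    # Eq. (12): RTM = max(VML_guidelines / LR, 1). The floor of 1 reflects
    # that recovery time should not be valued below ordinary in-vehicle time.
    return max(vml_guidelines / lr_ratio, 1.0)


# LR ratios (VML/VMRRT) from Table 8: 2.5 for business, 6 for leisure and
# 1.8-4.5 for commuters depending on trip duration.
for segment, lr in [("business", 2.5), ("leisure", 6.0),
                    ("commute (short)", 4.5), ("commute (long)", 1.8)]:
    print(segment, recovery_time_multiplier(3.0, lr))
```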
A late time multiplier of 3 is in line with some of the U.K. evidence reviewed by Wardman and Batley (2014) and is widely used internationally (OECD/International Transport Forum 2014). With this information, we can calculate the RTM (Eq. 12). For example, for business travel, the RT should be valued at 1.2 times the IVT (RTM = 3/2.5 = 1.2). With the RTM and VML, we can evaluate policies that increase RT in exchange for reductions in the Average Minutes of Lateness (AML), attaching a premium to recovery time where applicable. We obtain the difference in Generalized Journey Time (GJT) between the Base situation and the Do-Something (characterized by longer IVT, due to added RT, and lower AML) for the five service groups. This exercise is summarised in Table 9 below. We first observe that adding RT did result in significant reductions in AML in all five cases. Using VML = 3 and our estimated RTMs, Table 9 shows significant reductions in weighted GJT in all cases, albeit less so than with the current approach (which implicitly assumes RTM = 1, i.e. recovery time is valued just like in-vehicle time in all cases). As it stands, aligning our LR ratios with a guidelines late time multiplier of 3, the recovery time premium is greater than 1 for business travel and medium to long commutes. In those cases, the GJT reductions relevant for appraisal would differ. The impacts would be larger if higher late time multipliers were considered. To calculate the monetary benefits of these policies, the change in weighted GJT would need to be multiplied by the appropriate value of time. For all schemes, we can see how introducing RT was beneficial thanks to reductions in AML more than compensating for the addition of RT (even when premiums apply), especially for service groups with high levels of AML (e.g. service group 1); the calculation is sketched below.
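The following sketch illustrates the GJT comparison just described. The scheme figures (added RT and AML savings) are invented placeholders, not the actual numbers behind Table 9, which cannot be released.

```python
def delta_gjt(delta_rt, delta_aml, rtm, vml=3.0):
    # Change in weighted GJT (minutes of equivalent IVT): added recovery time
    # is weighted by the RTM, while the change in Average Minutes of Lateness
    # is weighted by the late time multiplier (VML).
    return rtm * delta_rt + vml * delta_aml


# Hypothetical scheme: 2 min of added RT buying a 1.5 min cut in AML.
d_rt, d_aml = 2.0, -1.5
print("current practice (RTM = 1):", delta_gjt(d_rt, d_aml, rtm=1.0))
print("with business RTM = 1.2:  ", delta_gjt(d_rt, d_aml, rtm=1.2))
# Both results are negative (a GJT reduction, i.e. a benefit), but the
# benefit is smaller once the recovery time premium is applied.
```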
Conclusions Recovery time is the additional travel time that is built into a train service timetable over and above the minimum journey time necessary, often with the aim of reducing the probability of being late. Recovery time is widely used in railways in a number of countries, but prior to this study there had been no investigation of the rail users' point of view. This paper summarises the findings of the first survey of rail users on the use of recovery time by train operators and their valuations of it. The paper also adds to the literature on late time valuations. The entire area of late arrivals and the means of reducing them is significant, since hundreds of millions of pounds are involved in financial compensation in the railway industry in Great Britain alone, and more generally there are appreciable impacts on the economic welfare of rail travellers. The survey included a Stated Preference (SP) experiment aimed at exploring the trade-offs between travel time, recovery time and late time. While transport planning worldwide uses different valuations for all components of generalised journey time, there was no evidence prior to this study to indicate whether and to what extent recovery time should carry a valuation premium relative to in-vehicle time. The results of this work have been included in the Passenger Demand Forecasting Handbook (ATOC 2013), which represents the official railway industry guidelines in the UK. The surveys reveal that most rail users are very supportive of the inclusion of recovery time in timetables. At the same time, a minority of users disapprove of the use of recovery time as a tool to reduce lateness. Leisure travellers, followed by business travellers, are the most supportive. This can be explained by the infrequent nature of their journeys, and the associated importance of being on time for special occasions. Commuters are slightly less supportive, presumably because they use the rail service often and, although they still care about being on time, adding extra time to every journey is a less appealing solution (this is especially the case for users with long commutes). These diverse preferences translated into a heterogeneous set of valuation estimates. Our results show that perceived recovery time carries a premium, but only in some circumstances. Relative to in-vehicle time, perceived recovery time is valued at between 1 and 3.5 times in-vehicle time, depending on journey purpose, distance, interchanges and awareness. However, if these results are to be applied for appraisal and forecasting purposes, the premium estimates should be evaluated relative to the perception of late time. This is crucial, as we have also found late time to carry a higher premium than the typical late time multipliers used in the UK, the context of the study. Our estimates show that 1 min of late time is valued at nearly 6 min of in-vehicle time (just above 5 for commuters), which is at the high end of the estimates found in the literature. Altogether, the recovery time multiplier is highly context-dependent and likely to be only slightly above 1 in many cases. Controlling for the variability of late time improves the model fit and suggests the presence of risk-averse behaviour, but has a small influence on the estimates. We provide an illustrative application of recovery time valuation and guidance for how to use these values in practice. The recommended appraisal application has been demonstrated using data from an actual scheme where recovery time was extended in the timetable of a series of rail services. In all cases, recovery time leads to a reduction in late time and an overall reduction in generalised journey time, leading to the conclusion that schemes of this nature can be beneficial even when valuation premiums beyond the value of in-vehicle time are attached to recovery time. Further research on the valuation of recovery time would be highly desirable to build upon the findings of this first study. The outcomes of the study can help to improve the design of future stated preference experiments in this area, in particular the selection of trade-offs. For instance, widening the range of boundary levels and/or masking the focus of the survey (to avoid strong views leading to strategic bias) might help to reduce the non-trading behaviour observed in this context; new debrief questions could also help to further unpack the reasons for non-trading if it occurs. Also, the evidence generated can now inform the "priors" of future studies, contributing to building more efficient SP designs. However, future SP experiments should also cover different forms of delay-repay scheme as an explicit variable, given their increasing role in the industry. Future studies that limit non-trading levels should also explore more advanced models to shed more light on the likely value distributions. Additionally, econometric analysis of ticket sales data, which is very common in the rail industry, could be used to verify the findings by analysing routes where there have been changes in recovery time.
2019-10-31T09:13:38.112Z
2019-10-09T00:00:00.000
{ "year": 2019, "sha1": "9c6a9267c0d109898cf0a7feb12b0a13b370f625", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11116-019-10057-z.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "ea605c422e2d73078bfb8d47598ce4fafabc9262", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
7312490
pes2o/s2orc
v3-fos-license
Clinical Analysis of Acinic Cell Carcinoma in Parotid Gland Objectives Acinic cell carcinoma (AciCC) is a rarely encountered malignancy of the parotid gland. Because AciCC is rare and was only recently recognized as a malignant entity, it has been difficult to study. We aimed to analyze the diagnostic and treatment experience with this malignancy in our hospital. Methods We retrospectively reviewed the medical records of 20 patients with AciCC of the parotid gland diagnosed from 1990 to 2009. The preoperative computed tomography scan, preoperative fine needle aspiration cytology (FNAC) and intraoperative frozen section results were compared with the final diagnosis. Survival and recurrence were analyzed in relation to cancer stage and treatment modality. Results There were 10 males and 10 females, with a mean age of 44.4 years (range, 8 to 77 years). The AJCC tumor stage distributions of the patients were 70%, 15%, and 15% for stages I, II, and IV, respectively. The sensitivity of FNAC and intraoperative frozen section was 26.7% and 50.0%, respectively. The 10-year survival rate was 90.9%, with a mean follow-up of 111 months (range, 17 to 251 months). The 10-year disease-free survival rate was 74.2% and the mean duration to recurrence from initial surgery was 92.3 months. Conclusion AciCC of the parotid gland is a rare malignancy that shows less aggressive behavior and a good prognosis. Intraoperative frozen section examination may be helpful in the diagnosis of AciCC of the parotid gland because of the low sensitivity of the preoperative computed tomography scan and FNAC. Surgery with adjuvant postoperative radiotherapy is satisfactory for disease control. INTRODUCTION Acinic cell carcinoma (AciCC) is a relatively uncommon malignancy accounting for 1 to 6% of all salivary gland tumors and 15% of all malignant tumors of the parotid gland (1,2). Interestingly, AciCC was originally described as an adenoma, but from the 1950s the term acinic cell adenocarcinoma was coined once its ability to metastasize and recur locally had been recognized (3,4). It is hard to diagnose AciCC of the parotid gland preoperatively. Computed tomography (CT) scans have demonstrated a low sensitivity for malignancy, and the yield of fine needle aspiration cytology (FNAC) has not been high enough to make a correct prediction of AciCC (5). The results of AciCC treatment are quite variable. Local recurrence has been reported in 8 to 56% of patients (5). Mortality from the disease is usually less than 10% at 5 years (6). Because AciCC is rare and was only recently recognized as a malignant entity, it has been difficult to study. In this study, we retrospectively reviewed 20 parotid AciCC patients undergoing surgery from 1990 to 2009. We aimed to analyze the diagnostic and treatment experience with this malignancy in our hospital. The medical records of patients diagnosed as parotid AciCC in the Department of Otorhinolaryngology at the Seoul National University Hospital (SNUH) from 1990 to 2009 were reviewed; the study was conducted with the approval of the institutional review board of SNUH. All patients received surgical intervention with a diagnosis of a tumor of the parotid gland, which was confirmed pathologically. The records were analyzed for age, sex, clinical features, duration of symptoms, facial palsy, FNAC, imaging study (CT scan), intraoperative frozen section examination, treatment modalities, histopathologic findings, survival, and recurrence. The staging system was based on the 7th American Joint Committee on Cancer staging criteria (7). Postoperative radiotherapy (RT) was performed on patients with an advanced tumor stage or when there was a lack of confidence in the safety of the surgical margins. Chemotherapy was performed on one patient with distant metastasis. SPSS ver. 18.0 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis. The overall cumulative survival rate and disease-free survival rate were calculated with the Kaplan-Meier method.
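As a minimal sketch of the Kaplan-Meier analysis described above, the code below fits a survival curve with the third-party lifelines package (an assumption for illustration; SPSS was used in the actual study). The follow-up durations and event indicators are hypothetical placeholders, not the study's patient records.

```python
# Requires the lifelines package (pip install lifelines).
from lifelines import KaplanMeierFitter

# Hypothetical follow-up times in months and event indicators
# (1 = death, 0 = censored at last follow-up); NOT the study data.
months = [17, 36, 60, 92, 111, 140, 180, 200, 230, 251]
died = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]

kmf = KaplanMeierFitter()
kmf.fit(durations=months, event_observed=died, label="overall survival")

# Cumulative survival probability at 120 months (10 years).
print(kmf.predict(120))
```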
Demographics and symptoms This study comprised 20 patients diagnosed as parotid AciCC. There were 10 males and 10 females, with a mean age of 44.4 years (range, 8 to 77 years). No significant gender predominance was found in this study. Mean follow-up duration was 111 months (range, 17 to 251 months). Tumors were mainly located in the superficial lobe (12 cases, 60%), while 8 (40%) were in the deep lobe. Among all patients, an infra-auricular mass was the most common initial presentation (95%). No patient complained of facial palsy preoperatively. Surgical treatment modality All patients underwent surgical intervention for the parotid gland tumor. Twelve patients underwent superficial parotidectomy and 8 had total parotidectomy. Facial nerve dissection and preservation were accomplished in 14 patients. In 6 patients, the facial nerve was sacrificed without reconstruction due to advanced stage or tumor involvement. One patient underwent sural nerve graft reconstruction after facial nerve resection at the same operation. One patient with level II lymph node involvement and lung metastasis underwent superficial parotidectomy, ipsilateral radical neck dissection and lung lobectomy (Table 2). Radiotherapy and chemotherapy After surgery for parotid AciCC, postoperative radiotherapy was given in 8 patients, and adjuvant concurrent chemoradiotherapy was used in 1 patient. Treatment outcome, recurrence and prognosis There were 5 cases of recurrence after surgery with or without adjuvant treatment. Three cases of distant metastasis occurred: lung metastasis developed in 2 patients and brain metastasis in 1. Two patients had local recurrence in the tumor bed. Overall stage was correlated with recurrence. However, there was no significant relationship with tumor size, tumor location, resection margin or lymphovascular invasion (Table 4). One patient, initially diagnosed with lung metastasis, died of brain metastasis after the operation and concurrent chemoradiotherapy. The 10-year cumulative survival rate was 90.9%, with a mean follow-up of 111 months (range, 17 to 251 months). The 10-year disease-free survival rate was 74.2% and the mean duration to recurrence from initial surgery was 92.3 months (Fig. 1). Demographics and symptoms AciCC is a rare malignancy. It was initially thought to be a benign disease entity, and Foote and Frazell (3) first designated it a carcinoma in the early 1950s, when its ability to metastasize and recur locally was first reported. Because of its rarity, the initial misunderstanding and the unknown character of AciCC, it has been difficult to study (8). Patients usually visit doctors with the symptom of an incidental infra-auricular mass. Facial palsy or pain is usually absent at presentation. In other studies, the tumor usually appeared as a painless mass in either the superficial or deep lobe of the parotid gland (9-13). In our study, most of the patients presented with an infra-auricular mass and no patient complained of facial palsy preoperatively. It was thought that the slow growth pattern of AciCC made the mass painless.
In the literature, the female-to-male ratio is about 1.5:1, and the tumor usually occurs in the fifth and sixth decades of life (14). Most studies have shown a female predominance. Exceptionally, Ellis and Corio (12) reported that 155 of 294 patients were male (53%). The mean age of 44.4 years at diagnosis in our study was younger than that found in past reports. Fine needle aspiration and CT findings Preoperative imaging studies are widely used for head and neck tumors. In our study, the CT scan demonstrated a low sensitivity for malignancy, as did FNAC. Malignancy was suspected preoperatively by CT scan in six cases. There was no unique characteristic of parotid gland AciCC on the CT scan. Fine needle aspiration cytology can help clinicians in the differential diagnosis of parotid malignancy from other benign tumors. As for AciCC of the parotid gland, however, the diagnostic ability of FNAC is not good. In another study, 12 patients had preoperative fine-needle aspiration, while only two cases (17%) were consistent with a correct cytologic diagnosis of AciCC (5). From our experience, the yield of FNAC (4 of 15 cases, 26.7%) was not high enough to make a correct diagnosis of AciCC preoperatively. An adjunctive tool for the diagnosis of parotid AciCC was therefore necessary, so we performed intraoperative frozen section biopsy and examination. In our results, the sensitivity of intraoperative frozen section alone was 50%, and when performed together with FNAC, the sensitivity was 69.2%. Surgical treatment modality The surgical treatment of choice is total or superficial parotidectomy with dissection of the facial nerve, according to mass location. Facial nerve preservation was attempted intraoperatively in all patients except those with macroscopic nerve involvement. Gomez et al. (15) suggested that, if the tumor was well encapsulated without extra-capsular spread, surgery alone was satisfactory for the control of AciCC. A positive surgical margin has been regarded as an important prognostic factor in AciCC (15). However, in our analysis, there was no recurrence in any of the cases with a positive resection margin. Pathologic staging AciCC is considered a low-grade salivary gland carcinoma with a low rate of recurrence and metastasis, and is generally thought to carry a good prognosis. It usually presents as a single or multilobular mass. Cystic degeneration or hemorrhage may be seen on histopathology. The neoplastic cells have a morphologic similarity to normal acinic cells. The pathological diagnosis of AciCC depends on identifying cells presenting serous acinar cell differentiation (Fig. 2). Immunohistochemical staining of AciCC is not specific and does not play an important role in diagnosis. Neural, vascular, or stromal invasion is rare and is regarded as a poor prognostic factor (15). Gomez et al. (15) reported that high-grade AciCC with the features of tumor necrosis and high mitotic figures was correlated with poor disease-free survival. However, information on nerve involvement, stromal invasion, high mitotic figures and tumor necrosis was not mentioned in our pathologic reports. The cervical lymph node metastatic rate in AciCC was reported to be 2.86% with N1 and 5.71% with N2b lesions, and the rate of occult regional metastasis was low (15). In our study, regional metastasis in level IIb was found in one patient (1 of 20 patients, 5%). To date, the role of neck dissection in AciCC remains controversial.
Therapeutic neck dissection is the accepted treatment for patients with clinically obvious cervical nodal involvement. However, elective neck dissection of the clinically negative neck is not well defined in the published work. The role of elective neck dissection was not evident in our study or in other studies (15). Some authors suggest routine elective neck dissection in all N0 parotid carcinomas regardless of histology (16). Stennart et al. (17) also recommended a similar treatment, because most salivary gland malignancies, including AciCC, are not radiosensitive. However, no survival gain has been shown in these two studies. In contrast, several other authors do not believe that routine neck dissection is warranted (6,12,18,19). In our study, we did not perform elective neck dissection on the clinically negative neck and there was no regional treatment failure. In one case of cervical lymph node involvement with lung metastasis, we performed ipsilateral radical neck dissection along with superficial parotidectomy and lung lobectomy for oncologic safety. Radiotherapy and chemotherapy There has been no prospective randomized study to evaluate the effectiveness of radiotherapy. Primary radiotherapy should be restricted to patients not suitable for surgery or refusing surgery, because AciCC has generally not been regarded as being radiosensitive. Some authors have found only a limited response (18,20). Others have claimed a response to postoperative radiotherapy (9,21). Spafford et al. (9) proposed a series of indications for postoperative adjunctive radiotherapy in AciCC: 1) recurrent tumor; 2) equivocal or positive margins, or evidence of tumor spillage; 3) tumor adjacent to the facial nerve; 4) deep-lobe involvement; 5) lymph node metastases; 6) extra-parotid extension; and 7) large tumors greater than 4 cm. Greig et al. (22) suggested that the indications for postoperative radiotherapy were the presence of high-grade histological features (evidence of lympho-vascular invasion and a high mitotic rate), close or positive margins at initial resection, and tumor recurrence. In our study, patients with an advanced stage or a positive resection margin were given adjuvant postoperative radiotherapy and had good tumor control. We recommend giving adjuvant radiation therapy in these high-risk patients. There are few studies on the effectiveness of chemotherapy in parotid AciCC. In our study, we adopted chemotherapy as adjuvant therapy in one patient with lung metastasis, but after several recurrences the patient died of brain metastasis. We could not evaluate the outcome of chemotherapy due to the limited cases in this study. Treatment outcome and prognosis Reported local recurrence rates have ranged from 0 to 44%. Reported cervical nodal metastases have ranged from 0% to 43%. Distant metastases have ranged from 0 to 13% (1, 9, 12, 18-20, 23). In our study, the local recurrence rate and distant metastasis rate were 10% and 15%. There was no regional recurrence after treatment. The cases of distant metastasis had more aggressive clinical features than the others. They were suspicious for high-grade AciCC, but we could not find pathologic reports on tumor necrosis and high mitotic figures. AciCC has a better prognosis relative to other malignancies, with 10- and 20-year survivals of 88% and 83%, respectively, in the literature (24). The prognosis of AciCC is variable across many series, ranging from 55 to 89% (15,20,21). Eneroth et al.
(21) reported that the cure rate decreased from 89% at 5 years to 56% at 20 years. Spiro et al. (20) reported the survival rate to be 76% at 5 years, 63% at 10 years, and 55% at 15 years. According to our data, the prognosis was good: the 10-year disease-free survival rate was 74.2% and the overall 10-year survival rate was 90.9%. AciCC has been noted to recur many years after initial diagnosis and surgery. Spiro (25) reported two cases recurring beyond 30 years of follow-up, a median time to recurrence of 3 years, and a median time to death from disease of 10 years. However, the large study of 294 patients by Ellis and Corio (12) showed that 82% of first recurrences and metastases occurred within 5 years of initial therapy. In our series, two patients recurred after 11 and 13 years, respectively. Other authors have maintained that a follow-up of at least 20 years is necessary to adequately determine final treatment outcomes (9,26). In conclusion, parotid AciCC is a rare malignancy that shows less aggressive behavior and a good prognosis. Intraoperative frozen section examination may be helpful in the diagnosis of parotid AciCC because of the low sensitivity of the preoperative CT scan and FNAC. Surgery with adjuvant postoperative radiotherapy is satisfactory for disease control.
2014-10-01T00:00:00.000Z
2011-12-01T00:00:00.000
{ "year": 2011, "sha1": "0b066165af66eec0919575de3da96273c7881d4c", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3342/ceo.2011.4.4.188", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0b066165af66eec0919575de3da96273c7881d4c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
10708954
pes2o/s2orc
v3-fos-license
Brain matters: from environmental ethics to environmental neuroethics The ways in which humans affect and are affected by their environments have been studied from many different perspectives over the past decades. However, it was not until the 1970s that the discussion of the ethical relationship between humankind and the environment formalized as an academic discipline with the emergence of environmental ethics. A few decades later, environmental health emerged as a discipline focused on the assessment and regulation of environmental factors that affect living beings. Our goal here is to begin a discussion specifically about the impact of modern environmental change on biomedical and social understandings of brain and mental health, and to align this with ethical considerations. We refer to this focus as Environmental Neuroethics, offer a case study to illustrate key themes and issues, and conclude by offering a five-tier framework as a starting point of analysis. Humans have altered their environments in pursuit of self-improvement and better opportunities since ancient times, but the scope and impact of these changes are unprecedented today [1]. Technological advancements have yielded positive economic growth, improved standards of living, and provided new ways of protecting human health. At the same time, technology has contributed to widespread negative changes in the environment that include global climate change, deforestation, suburban sprawl, ecosystem loss, and increased health risks from exposure to radiation, toxicants, and stress. While there are different views among scholars of environmental ethics about why humans should value the environment [2], a common position focuses on direct and potential consequences to human health and well-being [3]. Environmental health experts similarly focus on environmental changes in terms of their impact on human health. However, within approaches to environmental ethics and environmental health, less attention has been paid to the specific ethical, social and legal implications of these changes for brain and mental health. 1 Addressing these implications requires that we probe the intersection of diverse biological, social and cultural contexts of human well-being. Brain and mental health are determined by complex interactions between individual predispositions and behavior, social and economic processes, and the environment [4,5]. Classic examples pointing to an association between neurological function and environmental changes include neurological deficits from exposure to mercury [6] and lead [7-9], various forms of air [10-14] and water pollution [15], pesticides, and solvents [16-20]. Moreover, cross-cultural studies of indigenous worldviews on identity, concepts of the self, and wellness have highlighted the direct and intimate connections between individuals and their environments [21,22]. These studies remind us not only about the cross-cultural differences involved in experiencing brain health and the environment, but also about the different layers of vulnerability [23] brought forward by the impact of environmental change. Children [24], the elderly [25], workers who may be exposed occupationally to neurotoxicants [20] and people who live in the proximity of neurotoxicant sources [26] are more vulnerable than other sectors of the population.
These unequal levels of exposure, interacting with the brain's stage of development or decline, and the differential effects of environmental risks are at the core of the environmental justice movement and, in regard to brain and mental health outcomes, are a central concern of Environmental Neuroethics. Our goal here is to begin a discussion specifically about the impact of modern environmental change on biomedical and social understandings of brain and mental health, and to align this with ethical considerations. There are several reasons for thinking that this approach is timely. To start, brain and mental health disorders, many of which have important environmental factors, are leading contributors to disabilities and morbidity that produce critical public health, societal and economic impacts [27]. In addition, brain development, as well as its optimal function throughout the life of individuals, is particularly susceptible to the environment to which a person is exposed [24]. Considering the vulnerability of brains to environmental exposures that are not easy to identify or to eliminate [24], we can see why brain and mental health are matters of global concern and social justice, particularly as the health risks related to environmental exposures are often distributed unequally. Thus, it becomes crucial to mitigate the negative impacts of environmental change while ensuring fair distribution of the positive ones. This balance represents a key aspect of the Environmental Neuroethics approach we present here. Fracking as a case study Fuel sources with low greenhouse gas emissions are frequently advanced as an alternative to the rapid expansion in fossil fuel usage [28]. Technological advancements such as hydraulic fracturing (fracking) have now made extraction of these gas reserves profitable. The fracking process can impact the environment in various ways through the extraction and discharge of massive quantities of contaminated water, the injection of various chemicals into the ground, and the disruption of the landscape with high densities of roads and well-heads that encroach on human settlements and wild habitats [29]. As in other literature on environmental change, contamination of the air and water supplies in the vicinity of fracking operations [17,30] has been linked to health impacts that include asthma, respiratory complaints, gastro-intestinal effects and nosebleeds [31,32]. Such contamination has also been related to negative neurological effects. For example, McKenzie and colleagues [26] carried out a retrospective cohort study of 124,842 births between 1996 and 2009 in rural Colorado, examining the associations between maternal proximity to fracking sites and birth outcomes. They found that births to mothers residing close to or surrounded by wells (>125 wells/mile) were twice as likely to have a neural tube defect compared to those with no wells within a 10-mile radius (OR = 2.0; 95% CI: 1.0, 3.9, based on 59 cases).
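For readers unfamiliar with such measures, the sketch below shows how an odds ratio of this kind, and its Wald-type 95% confidence interval, can be computed from a 2×2 exposure-outcome table. The counts are hypothetical, chosen only to yield an OR near 2; they are not McKenzie and colleagues' data.

```python
import math

# Hypothetical 2x2 table: exposed = residing near many wells,
# outcome = neural tube defect. NOT the published counts.
a, b = 20, 10_000    # exposed: cases, non-cases
c, d = 39, 39_000    # unexposed: cases, non-cases

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```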
The searches were based on two primary key terms: (1) unconventional gas development, and (2) brain. Key UGD search terms were {unconventional natural gas (+/−) development}, {shale gas (+/−) development}, {fracking} and {hydraulic fracturing}; key brain search terms were {brain}, {neuro}, {neurological} and {mental}. We also used a range of secondary search terms to ensure that the searches identified studies relevant to culture, First Nations, health, ethics, and solastalgia. 2

Of the 106 articles identified, 83 originated from the peer-reviewed literature (reviews, N = 57; primary research, N = 26) and 23 from the gray literature, dating back to 2009 (Fig. 1). To provide context, we explored the origin of the cases in our sample by country of corresponding author and corresponding author disciplines. Most returns originated from the United States (USA) (N = 83). Twelve papers originated from Australia and six from Canada. One paper meeting our inclusion criteria originated from each of China, Germany, New Zealand, Switzerland and the United Kingdom. Based on the corresponding authors' affiliations, we found that the majority of corresponding authors held multiple disciplinary associations (N = 45). Twenty-two held affiliations in the health sciences (e.g., medicine), 21 in the social sciences (e.g., sociology, law), 11 were associated with environmental sciences, such as ecology or forestry, and seven held affiliations in disciplines represented on only a limited basis, such as engineering or regional planning.

To explore the texts in depth, we conducted a three-part content analysis [33,34] of the full set of cases. Each individual article was used as the unit of analysis. In the first phase of the analysis, we found that the dominant themes relate to public health (N = 31) and regulation and policy (N = 22). Five articles mention UGD and fracking broadly as a threat to Indigenous health. In a second phase, we focused on brain and mental health. Eight of the 106 papers contain detailed examination of the impact that UGD poses for brain and mental health, arguments for associations between UGD and brain and mental health, or both. The remaining papers explore the relationship between fracking chemicals and neurotoxicity only superficially and provide little if any mention of ethical implications. In the third phase, we focused specifically on content related to ethics. Two papers provide substantial ethical discussion. One paper argues that environmental damage caused by hydraulic fracturing poses "a new threat to human rights" [35]. The other, written by members of the present author group, makes a call to the Presidential Commission for the integration of ethical considerations and neuroscience into the study of environmental change [36]. Sixty-five papers mention safety and issues related to the duty not to inflict harm; 41 papers mention at least one other ethical concern, such as trust, vulnerability, justice, and disempowerment, but without any further elaboration on the matter. Overall, the findings reveal that while there is emphasis on health, there is limited ethical discussion of brain and mental health impacts.

Environmental Neuroethics in the wild

Environmental Neuroethics can provide a framework to investigate the ethical and social implications of environmental change on brain and mental health. Building on previous work [37], we propose a five-tier framework:
1. Brain science and the environment: Neuroscience discovery that is aligned with the measurement and evaluation of factors that affect the way individuals, communities and society adapt and cope with real or perceived environmental threats to well-being.
2. The relational self and the environment: The interface between the environment and brain and mental health, and the mechanisms by which exposures at key points in life may mediate different brain and mental effects; relationships among mental health stressors, susceptibility to mental health issues, and resilience within the context of changing environments.
3. Cross-cultural factors and the environment: Exploration of the role of culture in the relationship between environment and brain and mental health; interactions between Traditional Ecological Knowledge and neuroscience evidence; the impact of environmental change and varying effects on First Nations and settler communities given respective relationships between culture and the environment.
4. Social policy and the environment: Priorities and allocation of resources of local social organizations to deal with environmental impacts on brain and mental health.
5. Public discourse and the environment: The engagement of professional disciplines and communities in multidirectional communication and discourse about neurological, psychological, sociological and ethical dimensions of environmental change; facilitation of international, cross-disciplinary, transdisciplinary collaborations; creation of effective outreach programs that promote public understanding about the impact of environmental change on brain and mental health.

This framework can be extended more broadly to other environmental impacts such as the extraction of natural resources, air pollution, use of agricultural chemicals, water contamination, proximity to noxious facilities, mining waste and nuclear plants, ocean degradation, food contamination, and habitat destruction. Moreover, while the focus here has been on changes to the physical environment, Environmental Neuroethics is also concerned with other environments, such as digital and social environments, and how these impact neurological health. Notwithstanding the opportunity to expand ethical and social discussion around environmental change, priority setting and paths to action are not without challenges. Reliability and stability of evidence [38], knowledge of impacts [39], and appreciation of risk [40-42] are perceived and weighted differently by different stakeholders and are among the key obstacles.

Conclusions

The identified gaps in the ethical discussion related to environmental change and health, as well as the vulnerability of brains, suggest that it is time for an Environmental Neuroethics dedicated to addressing the interaction of biomedical and social understandings of anthropogenic environmental change. In moving forward, the resulting scholarship and guidance must be specific, solution-oriented, and proportionate to the benefits and risks in play.

1 We use the term mental health to include "wellbeing, everyday problems in living associated with bodily symptoms of stress and anxiety, mild depression, and seasonal fluctuations in mood and energy, as well as more severe psychiatric disorders, such as major depression, bipolar disorder, schizophrenia, and other psychotic disorders" [21, p. xiv].
Development of the International Spinal Cord Injury Basic Data Set for informal caregivers

Study design: Mixed-methods, including expert consensus for initial development and a multi-center repeated measures design for field testing.
Objectives: To develop an International Spinal Cord Injury Basic Data Set for caregivers of individuals with spinal cord injury/disorder (SCI/D) for use in research and clinical care settings.
Setting: International, multi-disciplinary working group with field testing in five North American pediatric rehabilitation hospitals.
Methods: The data set was developed iteratively through meetings and online surveys with a working group of experts in pediatric and adult SCI/D rehabilitation and caregivers of individuals with SCI/D. Initial reliability was examined through repeat administration of a beta form with a sample of caregivers recruited by convenience. The sample was characterized with descriptive statistics. Intra-rater reliability of variables was assessed using Intra-Class Correlations.
Results: The beta test form included 27 items, covering 3 domains: (1) demographic information for persons providing care; (2) caregiver's allocation of time and satisfaction; and (3) perceived burden of caregiving. Thirty-nine caregivers completed both administrations. Mean time for completion was 10 min. There was moderate to excellent reliability for the majority of variables, but results indicated necessary revisions to improve reliability and decrease respondent burden. The final version of the data form contains 7 items and is intended for self-administration among informal caregivers of individuals with SCI/D across the lifespan.
Conclusions: The International SCI Basic Data Set for Informal Caregivers can be used to standardize data collection and reporting about informal caregivers for individuals with SCI/D to advance our understanding of this population, and the data form has additional utility to screen for caregiver needs in clinical settings.

[Excerpt from the data form] The following questions are to be answered by the primary caregiver for the person with the SCI/D.
3. Please estimate the amount of time you spend in each of these domains in an average week. Then rate how satisfied you are with the amount of time spent in each. (Table columns: Domain; Amount of time (hours per week); satisfaction rating.)
4. On the scale below, "0" means that you feel that caring for or accompanying the person with the SCI/D at the moment is not straining at all; "100" means that you feel that caring for or accompanying the person with the SCI/D at the moment is much too straining. Please indicate with an "x" on the scale how burdensome you feel caring for or accompanying the person with the SCI/D is at the moment.

Conceptual Background

For the purpose of this data set, informal caregivers are defined as individuals who assist a person with a disability with activities in their home or community. This may include, but is not limited to, assistance for self-care (e.g., bathing, toileting, and dressing), home management, educational activities, play or leisure, and care coordination (e.g., finding healthcare providers, scheduling appointments, managing payment). Informal caregivers have no professional status as a caregiver; they are often family members or loved ones of people with an SCI/D and may be paid or unpaid (Smith, Boucher, & Miller, 2016).
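Returning to the reliability testing described in the abstract: repeat administration of the beta form yields, for each item, a pair of scores per caregiver, and intra-rater reliability can be summarized with an intra-class correlation. The sketch below implements one common variant, ICC(3,1) from a two-way ANOVA decomposition; the scores are hypothetical, and the specific ICC form used by the working group is an assumption.

```python
import numpy as np

def icc_3_1(ratings: np.ndarray) -> float:
    """ICC(3,1): two-way mixed effects, consistency, single measurement.
    ratings has shape (n_subjects, k_administrations)."""
    n, k = ratings.shape
    grand = ratings.mean()
    # Sums of squares from the two-way ANOVA decomposition.
    ss_rows = k * np.sum((ratings.mean(axis=1) - grand) ** 2)   # subjects
    ss_cols = n * np.sum((ratings.mean(axis=0) - grand) ** 2)   # administrations
    ss_total = np.sum((ratings - grand) ** 2)
    ms_rows = ss_rows / (n - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

# Hypothetical burden scores (0-100) for 5 caregivers at two administrations.
scores = np.array([[40, 45], [70, 68], [10, 15], [55, 50], [90, 88]])
print(f"ICC(3,1) = {icc_3_1(scores):.2f}")
```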
Responsibilities of caregiving can affect the physical and mental health of caregivers, and there are significant links between the health-related quality of life of caregivers and persons for whom they provide care (Kelly et al., 2012; Schulz & Beach, 1999; Vonneilich, Lüdecke, & Kofahl, 2016). However, there is a notable gap in data to understand and address the needs of caregivers, since information about them and their experiences is seldom captured in healthcare encounters, and data collection related to caregivers in research has been highly variable.

Domains of interest for this data set initiative include basic demographics of the caregiver, along with a subjective screen of strain related to caregiving. Several other domains may also be relevant to caregivers, including: a) intrinsic factors, such as race, ethnicity, culture, religion or spirituality, physical pain, depression or anxiety, knowledge of SCI/D, and problem-solving skills (Dreer, Elliott, Shewchuk, Berry, & Rivera, 2007; Kelly et al., 2012); and b) extrinsic factors, such as scope of caregiving responsibilities, household demographics, family functioning and support, features of physical environments, and coordination of healthcare (Baker, Barker, Sampson, & Martin, 2017; Gorzkowski, Kelly, Klaas, & Vogel, 2011). However, these domains are beyond the scope of the basic data set and should be considered for inclusion in an extended data set for caregivers, where they can be given more thorough attention.

Utilization

The data form may be completed by interview or self-report, and it is recommended for use with informal caregivers of individuals with SCI/D at points of follow-up care, once individuals with SCI/D are living in the community. The data set is not intended for use with professional caregivers, who are hired by the individual or family for the sole purpose of providing task-oriented assistance for an individual with SCI/D. It is suggested that caregivers completing the Basic Data Set for Informal Caregivers should also complete the Sociodemographic Basic Data Set and Quality of Life Basic Data Set (Charlifue et al., 2012) for themselves. These data sets were originally developed for persons with SCI, but they are considered appropriate for use among caregivers. Additionally, the Basic Data Set for Informal Caregivers should be linked to the Core Data Set (Biering-Sørensen et al., 2017) and Activity and Participation Basic Data Set (Post et al., 2016) for the person with the SCI/D whom the caregiver assists.

International Spinal Cord Injury Basic Data Set for Informal Caregivers

Providing care for a person with a spinal cord injury or disorder (SCI/D) can be a big responsibility. The purpose of this survey is to get basic information about caregivers of persons with SCI/D. There are no "right" answers. Some questions may be difficult to answer. Please do your best to respond with what you are currently doing and feeling. Please mark your responses to each question below.

[Coding notes] Although this data set is not intended for agency staff or professionals hired for the sole purpose of providing task-oriented care, option 08 will be used to indicate when such persons complete the form. Narrative responses associated with option 09 should be recorded in the data set.
VARIABLE NAME: What is your gender. Enter 99 if "I don't know" is selected by the person completing the form or if no response is given.
This scale of perceived strain is taken from the Self-Rated Burden Scale, which is validated for use with caregivers of persons with stroke and is recommended as a quick screen of caregivers at risk for poor health-related quality of life (van Exel et al., 2004).
VARIABLE NAME: Others Involved in [...]. Enter 999 only if no response is given.
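The coding conventions above rely on sentinel values (99 and 999) for unknown or missing responses. Before any summary statistics are run, those sentinels need to be converted to true missing values so they do not distort means or reliability estimates. A small pandas sketch, with hypothetical column names that are not part of the data set specification:

```python
import numpy as np
import pandas as pd

# Hypothetical raw entries from the data form (column names are made up).
raw = pd.DataFrame({
    "gender": [1, 2, 99, 1],           # 99  = "I don't know" / no response
    "others_involved": [2, 999, 0, 1]  # 999 = no response
})

# Map each variable's sentinel code to a true missing value.
sentinels = {"gender": 99, "others_involved": 999}
clean = raw.replace({col: {code: np.nan} for col, code in sentinels.items()})

print(clean)
print(clean.mean(numeric_only=True))  # sentinels no longer distort summaries
```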
The effect of LRRK2 loss-of-function variants in humans

Human genetic variants predicted to cause loss-of-function of protein-coding genes (pLoF variants) provide natural in vivo models of human gene inactivation and can be valuable indicators of gene function and the potential toxicity of therapeutic inhibitors targeting these genes1,2. Gain-of-kinase-function variants in LRRK2 are known to significantly increase the risk of Parkinson's disease3,4, suggesting that inhibition of LRRK2 kinase activity is a promising therapeutic strategy. While preclinical studies in model organisms have raised some on-target toxicity concerns5-8, the biological consequences of LRRK2 inhibition have not been well characterized in humans. Here, we systematically analyze pLoF variants in LRRK2 observed across 141,456 individuals sequenced in the Genome Aggregation Database (gnomAD)9, 49,960 exome-sequenced individuals from the UK Biobank and over 4 million participants in the 23andMe genotyped dataset. After stringent variant curation, we identify 1,455 individuals with high-confidence pLoF variants in LRRK2. Experimental validation of three variants, combined with previous work10, confirmed reduced protein levels in 82.5% of our cohort. We show that heterozygous pLoF variants in LRRK2 reduce LRRK2 protein levels but that these are not strongly associated with any specific phenotype or disease state. Our results demonstrate the value of large-scale genomic databases and phenotyping of human loss-of-function carriers for target validation in drug discovery.

Although the mechanism by which LRRK2 variants mediate their pathogenicity remains unclear, a common feature is augmentation of kinase activity associated with disease-relevant alterations in cell models 3,17,18. Discovery of Rab GTPases as LRRK2 substrates (ref. 19) highlighted the role of LRRK2 in regulation of the endolysosomal and vesicular trafficking pathways implicated in PD 19,20. LRRK2 kinase activity is also upregulated more generally in patients with PD (with and without LRRK2 variants) 21. LRRK2 has therefore become a prominent drug target, with multiple LRRK2 kinase inhibitors and suppressors 22 in development as disease-modifying treatments for PD 21,23,24. There are three LRRK2 therapeutics currently in early clinical testing from both Denali (small molecules DNL201, ClinicalTrials.gov Identifier: NCT03710707, and DNL151, ClinicalTrials.gov Identifier: NCT04056689) and Biogen (antisense oligonucleotide BIIB094, ClinicalTrials.gov Identifier: NCT03976349).

Despite these promising indications, there are concerns about the potential toxicity of LRRK2 inhibitors. These mainly arise from preclinical studies, where homozygous knockouts of LRRK2 in mice and high-dose toxicology studies of LRRK2 kinase inhibitors in rats and primates have shown abnormal phenotypes in the lung, kidney and liver 5-8. While model organisms are invaluable for understanding the function of LRRK2, they also have important limitations, as exemplified by inconsistencies in phenotypic consequences of reduced LRRK2 activity seen among yeast, fruit flies, worms, mice, rats and nonhuman primates 25. Complementary data from natural human knockouts are critical for understanding both gene function and the potential consequences of long-term reduction of LRRK2 in humans. Large-scale human genetics is an increasingly powerful source of data for the discovery and validation of therapeutic targets in humans 1.
pLoF variants, predicted to largely or entirely abolish the function of affected alleles, are a particularly informative class of genetic variation. Such variants are natural models for lifelong organism-wide inhibition of the target gene and can provide information about both the efficacy and safety of a candidate target 2,26-29. However, pLoF variants are rare in human populations 30 and are also enriched for both sequencing and annotation artefacts 31. Leveraging pLoF variants for target assessment therefore requires very large collections of genetically and phenotypically characterized individuals, combined with deep curation of the target gene and candidate variants 32. Although previous studies of pLoF variants in LRRK2 have found no association with risk of PD 10, no study has assessed their broader phenotypic consequences.

[Fig. 1 caption (fragments): b, The pLoF variants seen more than ten times appear in color with remaining variants in gray. LRRK2 pLoF variants are mostly individually extremely rare (less than 1 in 10,000 carrier frequency), with the exception of two nonsense variants almost exclusively restricted to the admixed AMR population (Cys1313Ter and Arg1725Ter) and two largely NFE-specific variants (Leu2063Ter and Arg772Ter). All variant protein descriptions are with respect to ENSP00000298910.7. c, Schematic of the LRRK2 gene with pLoF variants marked by position, with the height of the marker corresponding to allele count in gnomAD (gray bars) and UK Biobank (blue bars). The 51 exons are shown as rectangles colored by protein domain, with the remaining exons in gray. The three variants genotyped in the 23andMe cohort are annotated with their sample count in black text.]

We identified LRRK2 pLoF variants and assessed associated phenotypic changes in three large cohorts of genetically characterized individuals. First, we annotated LRRK2 pLoF variants in two large sequencing cohorts: the gnomAD v.2.1.1 dataset, which contains 125,748 exomes and 15,708 genomes from unrelated individuals 9, and 46,062 exome-sequenced unrelated European individuals from the UK Biobank 33. We identified 633 individuals in gnomAD and 258 individuals in the UK Biobank with 150 unique candidate LRRK2 loss-of-function (LoF) variants, a combined carrier frequency of 0.48%. All variants were observed only in the heterozygous state. Compared to the spectrum observed across all genes, LRRK2 is not significantly depleted for pLoF variants in gnomAD (LoF observed/expected upper bound fraction 9 = 0.64). We manually curated the 150 identified variants to remove those of low quality or with annotation errors suggesting that they are unlikely to cause true LoF (Fig. 1a and Supplementary Tables 1 and 2). We removed 16 variants identified as low confidence by the LoF transcript effect estimator (LOFTEE; 6 variants in 409 individuals) 9 or manually curated as low quality or unlikely to cause LoF (10 variants in 129 individuals). One additional individual was excluded from the UK Biobank cohort as they carried both a pLoF variant and the G2019S risk allele. Our final dataset comprised 255 gnomAD individuals and 97 UK Biobank individuals with 134 unique high-confidence pLoF variants (Fig. 1a) and an overall carrier frequency of 0.19%, less than half the frequency estimated from uncurated variants, reaffirming the importance of thorough curation of candidate LoF variants 32. A subset of 25 gnomAD samples with 19 unique LRRK2 pLoF variants with DNA available were all successfully validated by Sanger sequencing (Supplementary Table 3).
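As a quick arithmetic check on the carrier frequencies quoted above, the snippet below pools carrier counts over the two sequencing cohorts. It assumes each carrier is counted once and uses only the cohort sizes stated in the text.

```python
# Cohort sizes stated in the text.
n_gnomad = 141_456   # 125,748 exomes + 15,708 genomes
n_ukb = 46_062       # unrelated European exomes
denominator = n_gnomad + n_ukb

pre_curation = (633 + 258) / denominator   # 891 candidate carriers
post_curation = (255 + 97) / denominator   # 352 high-confidence carriers

print(f"pre-curation carrier frequency:  {pre_curation:.2%}")   # ~0.48%
print(f"post-curation carrier frequency: {post_curation:.2%}")  # ~0.19%
```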
Second, we examined LRRK2 pLoF variants in over 4 million consented and array-genotyped research participants from the personal genetics company 23andMe. Eight putative (LOFTEE high confidence) LRRK2 LoF variants were identified. After manual curation, all putative carriers of each variant were submitted for validation by Sanger sequencing and variants with <5 confirmed carriers were excluded. The resulting cohort comprised 749 individuals, each a Sanger-confirmed carrier for one of three pLoF variants (Fig. 1a and Supplementary Table 4). The high rate of Sanger confirmation for rs183902574 (>98%) allowed confident addition of 354 putative carriers of rs183902574, from expansion of the 23andMe dataset, without Sanger confirmation. Analyses with and without these genotyped-only carriers were not significantly different (Supplementary Table 5). Across the two most frequent pLoF alleles we observed an extremely small number (<5) of sequence-confirmed homozygotes; however, given the very small number of observations, we can make no robust inference, except that homozygous inactivation of LRRK2 seems compatible with life. For the remainder of this manuscript we focus on heterozygous pLoF carriers. The three combined datasets provide a total of 1,455 carrier individuals with 134 unique LRRK2 pLoF variants. These variants are found across all major continental populations ( Fig. 1b and Extended Data Fig. 1) and show neither any obvious clustering along the length of the LRRK2 protein, nor specific enrichment or depletion in any of the known annotated protein domains (chi squared P = 0.22; Fig. 1c), consistently with signatures of true LoF 32 . To confirm that LRRK2 pLoF variants result in reduced LRRK2 protein levels, we analyzed total protein lysates from cell lines with three unique pLoF variants. We obtained lymphoblastoid cell lines (LCLs) from two families with naturally occurring heterozygous LoF variants and a third variant was CRISPR/Cas9-engineered into embryonic stem cells (Extended Data Fig. 2), which were differentiated into cardiomyocytes. In all instances, LRRK2 protein levels were visibly reduced compared to noncarrier controls (Fig. 2). These results agree with a previous study which assessed three separate pLoF variants and found significantly reduced LRRK2 protein levels 10 . Together, these six functionally validated variants represent 82.5% of pLoF carriers in this study (1,201 of 1,455). Although heterozygous pLoF carriers have LRRK2 protein remaining, we believe that this state represents a plausible genetic model for therapeutic inhibition of LRRK2, as target engagement by pharmacological inhibitors is unlikely to be complete. We next sought to determine whether lifelong lowering of LRRK2 protein levels through LoF results in an apparent reduction in lifespan. We found no significant difference between the age distribution of LRRK2 pLoF variant carriers and noncarriers in either the gnomAD or 23andMe datasets (two-sided Kolmogorov-Smirnov P = 0.085 and 0.46 respectively; Fig. 3a), suggesting no major impact on longevity, though we note that this analysis is based on age at sample collection, which is not equivalent to longevity and at current sample sizes we are only powered to detect a strong effect (Supplementary Table 6). For a subset of studies within gnomAD, phenotype data are available from study or national biobank questionnaires or from linked electronic health records (Methods). 
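The age comparison above uses a two-sided Kolmogorov-Smirnov test between carrier and noncarrier age distributions. The sketch below shows the shape of such a test with scipy; the simulated ages are placeholders, not the cohort data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic ages at sample collection (placeholders, not cohort data).
ages_noncarriers = rng.normal(loc=52, scale=15, size=100_000).clip(18, 95)
ages_carriers = rng.normal(loc=52, scale=15, size=350).clip(18, 95)

# Two-sided two-sample KS test, as used for the Fig. 3a comparison.
stat, p = stats.ks_2samp(ages_carriers, ages_noncarriers)
print(f"KS statistic = {stat:.3f}, P = {p:.3f}")
```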
We manually reviewed these records for all 60 of the 255 gnomAD LRRK2 pLoF carriers with available data and recorded any phenotypes affecting the lung, liver, kidney, cardiovascular system, nervous system, immunity and cancer (Supplementary Table 7). We found no over-representation of any phenotype or phenotype category in LRRK2 pLoF carriers.

[Fig. 2 caption (fragments): Experiments were repeated ten times with similar results. b, Immunoblot of LRRK2, alpha-actinin (specific to muscle) and GAPDH on three control lines and one CRISPR/Cas9-engineered LRRK2 heterozygous line of cardiomyocytes differentiated from embryonic stem cells (ESCs) (Arg1693Ter-12-40714897-C-T). All variant protein descriptions are with respect to ENSP00000298910.7. Experiments were repeated five times with similar results.]

The 23andMe dataset includes self-reported data for thousands of phenotypes across a diverse range of categories. We performed a phenome-wide association study comparing LRRK2 pLoF carriers to noncarriers for 366 health-related traits and found no significant association between any individual phenotype and carrier status (Fig. 3b). In particular, we found no significant associations with any lung, liver or kidney phenotypes (Supplementary Tables 5 and 8). The UK Biobank resource includes measurements for 30 blood serum and four urine biomarkers. We found no difference in any of these biomarkers between pLoF carriers and noncarriers (Supplementary Table 9 and Supplementary Fig. 1). In particular, there was no difference between carriers and noncarriers for urine biomarkers transformed into clinical measures of kidney function (Fig. 4a and Methods) and no difference in six blood biomarkers commonly used to assess liver function (Fig. 4c). We also observed no difference in spirometry measurements of lung function (Fig. 4b). We grouped self-reported disease diagnoses in UK Biobank individuals into categories corresponding to the organ system and/or mechanism (Supplementary Table 10). We observed no enrichment for any of these phenotype groups in LRRK2 pLoF carriers when compared to noncarriers (Supplementary Table 11). We also mined ICD10 codes from hospital admissions and death records for any episodes relating to lung, liver and kidney phenotypes, removing any with a likely infectious or other external cause (Supplementary Table 12 and Methods) and identified six pLoF carriers with ICD10 codes relating to these organ systems (6.19%), compared to 4,536 noncarriers (9.87%; Supplementary Tables 13 and 14).

Our results indicate that approximately 1 in every 500 humans is heterozygous for a pLoF variant in LRRK2, resulting in a systemic lifelong decrease in LRRK2 protein levels, and that this partial inhibition has no discernible effect on survival or health at current sample sizes. These results suggest that partial reduction of LRRK2 protein in humans is unlikely to result in the severe phenotypes observed in knockout animals. This is consistent with initial phase 1 studies of therapeutic LRRK2 kinase inhibitors, which have shown promising safety results 24, but are not yet able to address long-term, on-target pharmacology-related safety profiles. The rarity of pLoF variants in LRRK2, combined with the relatively low prevalence of PD, prevents direct assessment of whether LRRK2 inhibition reduces the incidence of PD with current sample sizes (Supplementary Table 5). Future cohorts with many more sequenced and phenotyped individuals (probably millions of samples) will be required to answer this question.

[Fig. 3 caption (fragments): a, Note that this analysis is based on age at sample collection. b, Manhattan plot of phenome-wide association study results for carriers of three LRRK2 pLoF variants against noncarriers in the 23andMe cohort. Each point represents a distinct phenotype, with these grouped into related categories (delineated by alternating black and gray points). The dotted horizontal line represents a Bonferroni-corrected P value threshold for 366 tests. Logistic regression was used for binary phenotypes and linear regression for quantitative phenotypes controlling for age, sex, genotyping platform and the first ten genetic principal components. Full association statistics are listed in Supplementary Table 8.]
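A minimal sketch of the phenome-wide screen described above: each binary trait is regressed on carrier status with covariates via logistic regression and judged against a Bonferroni threshold of 0.05/366 ≈ 1.37 × 10⁻⁴. The data and trait names below are synthetic placeholders, and the real pipeline additionally adjusts for genotyping platform and ten genetic principal components.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 20_000
carrier = rng.binomial(1, 0.002, n)          # ~1 in 500 carriers
age = rng.normal(55, 12, n)
sex = rng.binomial(1, 0.5, n)

alpha = 0.05 / 366                            # Bonferroni threshold, ~1.37e-4
X = sm.add_constant(np.column_stack([carrier, age, sex]))

for name in ["phenotype_a", "phenotype_b"]:   # stand-ins for the 366 traits
    y = rng.binomial(1, 0.05, n)              # null phenotype, no true effect
    fit = sm.Logit(y, X).fit(disp=0)
    p = fit.pvalues[1]                        # carrier-status coefficient
    print(f"{name}: P = {p:.3g}, significant = {p < alpha}")
```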
As such, our study focuses entirely on whether partial genetic LRRK2 inactivation has broader phenotypic consequences that might correspond to adverse effects of chronic administration of LRRK2 inhibitors.

We acknowledge multiple limitations to this work. First, we relied on heterogeneous phenotype data mostly derived from self-reported questionnaires. Both 23andMe and gnomAD record only age at recruitment, which is an imperfect proxy for lifespan, and participants are relatively young compared to the typical age of onset for PD. In addition, at current sample sizes we are only powered to detect a strong effect on lifespan. Our ascertainment of LRRK2 pLoF variants in 23andMe was necessarily incomplete, due to the availability of targeted genotyping rather than sequencing data; this means that a subset of the 23andMe individuals treated as noncarriers could be carriers of LRRK2 pLoF variants not genotyped or imputed in this dataset. We have not directly assessed whether LRRK2 pLoF variants reduce kinase activity and instead take reduction in protein levels as a proxy. Previous studies have, however, shown that Rab10 phosphorylation is markedly reduced when LRRK2 levels are lowered by ~80% using siRNA 34,35. Additionally, lifelong LoF of LRRK2 may not be equivalent to therapeutic inactivation later in life if biological compensation occurs. Finally, the low frequency of naturally occurring LRRK2 pLoF variants results in a relatively small number of carriers that could be assessed for each biomarker and phenotype, meaning that we are not well powered to detect subtle or infrequent clinical phenotypes arising from LRRK2 haploinsufficiency. However, our study suggests that any clinical phenotype associated with partial reduction of LRRK2 is likely to be substantially more benign than early-onset PD.

This study provides an important proof of principle for the value of very large genetically and phenotypically characterized cohorts, combined with thorough variant curation, in exploring the safety profile of candidate drug targets. Over the coming years, the availability of complete exome or genome sequence data for hundreds of thousands of individuals who are deeply phenotyped and/or available for genotype-based recontact studies, combined with deep curation and experimental validation of candidate pLoF variants, will provide powerful resources for therapeutic target validation as well as broader studies of the biology of human genes.

Methods

gnomAD variant annotation and curation.
The gnomAD resource, including both sample and variant quality control (including sample ancestry assignment), is fully described in our companion paper 9 . Analysis was conducted using gnomAD v.2.1.1. Putative LoF variants were defined as stop-gained, frameshift or essential splice site (splice donor or splice acceptor) as annotated by the Ensembl Variant Effect Predictor 37 . Variants were included if they were annotated as LoF on any of the three high-confidence GENCODE annotated protein-coding transcripts that are expressed in the lung, liver or kidney. All variants also underwent transcript expression-aware annotation which evaluates cumulative expression status of transcripts harboring a variant in the Genotype Tissue Expression (GTEx) project dataset 38 . All high-confidence variants were found in exons with high evidence of expression across all relevant tissues in GTEx. In addition, all were high-confidence pLoF on the canonical transcript, which is the only transcript to include the kinase domain. Variants were filtered out if they were flagged as low confidence by LOFTEE 9 . For the remaining variants, manual curation was performed, including inspection of variant quality metrics, read distribution and the presence of nearby variants using the integrative genome viewer and splice-site prediction algorithms using Alamut. A single splice-site variant (12-40626187-T-C), found in 77 gnomAD carriers, was identified in an individual with RNA-seq data in the GTEx project. The RNA-seq reads were manually inspected to look for any effect on splicing. Assessing the read distribution of a linked heterozygous variant in this individual showed convincingly that the variant has no discernible effect on transcript splicing (Extended Data Fig. 3). All available tissues were assessed with reads from lung tissue shown in Extended Data Fig. 3. The variant was also identified in eight UK Biobank carriers and in 23andMe and was similarly excluded from these cohorts. This study complied with all relevant ethical regulations and was overseen by the Broad Institute's Office of Research Subject Protection and the Partners Human Research Committee. Informed consent was obtained from all participants. Sanger validation of gnomAD variant carriers. Sanger validation was performed on genomic DNA derived from peripheral blood under the following PCR conditions: 98 °C 2 min; 30 cycles 20 s 98 °C, 20 s 54 °C, 1 min 72 °C; 3 min 72 °C using Herculase II Fusion DNA polymerase (Agilent, 600679). PCR products (5 μl) were analyzed on a 2% agarose gel and the remaining product was purified with the Qiagen PCR Purification kit. Sequence analysis was performed with both PCR primers at Quintarabio. Details of variants and PCR primers used for each are listed in Supplementary Table 3. gnomAD phenotype curation and cohort descriptions. The below described studies with LRRK2 pLoF carriers had available phenotype data. For each study, all available records were manually reviewed to identify any reports of health problems, which were categorized into the following classes: lung, liver, kidney, cardiovascular, nervous system, immune and cancer. The genomic psychiatry cohort project. The genomic psychiatry cohort project is a longitudinal resource with the aim of making population-based data available through the National Institute of Mental Health. The repository contains whole-genome sequencing (WGS) data and detailed clinical and demographic data, particularly focused on schizophrenia and bipolar disorders. 
A large proportion of participants (88%) have consented for recontact 39 . The screening questionnaire consisted of 32 yes/no questions about mental health issues and 23 yes/no questions covering other medical problems including liver, digestive and cardiovascular problems. There were no specific questions relating to lung or kidney phenotypes, although participants were asked to answer yes/no to having any additional health problems. If a participant answered yes to this question, we marked the existence of lung or kidney disease as 'unknown' . One sample was excluded due to conflicting questionnaire answers. The age of the 25 LRRK2 carriers ranged from 19 to 67 years. Two carriers, aged 55 and 60 years, reported having had liver problems and four participants over 60 years reported no liver problems. The Pakistan risk of myocardial infarction study. The Pakistan risk of myocardial infarction study comprises 10,503 individuals characterized using a phenotype questionnaire with >350 items covering demographic and dietary characteristics and over 80 blood biomarker measurements 40 . The predominant focus of the questionnaire was cardiac function and phenotype. While the participants were specifically asked to report suffering from asthma, no other lung, liver, kidney, nervous system or immune phenotypes were directly assayed and so these were marked as 'unknown' for these individuals. The 12 LRRK2 LoF carriers in the study did not differ in terms of age, sex and myocardial infarction status when compared to the entire cohort. The Swedish schizophrenia and bipolar studies. Cases with schizophrenia or bipolar disorder were identified from Swedish national hospitalization registers 41,42 . Controls were selected at random from population registers. All individuals had whole-exome sequencing data 43 . All available ICD codes from inpatient hospitalizations and outpatient specialist treatment contacts were provided for each patient. The national FINRISK study. The FINRISK study has been carried out for 40 years since 1972 every 5 years using independent, random and representative population samples from different parts of Finland. For this work, we used sequencing and health register data from FINRISK surveys between 1992 and 2007 (ref. 44 ). Full health records including ICD10 codes were reviewed by study coordinators who provided us with yes/no answers for each of our phenotype classes. The BioMe biobank at the Charles Bronfman Institute for Personalized Medicine at Mount Sinai. The Mount Sinai BioMe Biobank, founded in September 2007, is an ongoing, broadly consented electronic health record (EHR)-linked bio and data repository that enrolls participants nonselectively from the Mount Sinai Medical Center patient population (New York City). BioMe participants represent broad racial, ethnic and socioeconomic diversity with a distinct and population-specific disease burden, characteristic of the communities served by Mount Sinai Hospital. Currently comprising over 47,000 participants, BioMe participants are of African (24%), Hispanic/Latino (35%), European (32% of whom 40% are Ashkenazi Jewish) and other/mixed ancestry. BioMe is linked to Mount Sinai's system-wide Epic EHR, which captures a full spectrum of biomedical phenotypes, including clinical outcomes, covariate and exposure data from past, present and future healthcare encounters. 
The median number of outpatient encounters is 21 per participant, reflecting predominant enrollment of participants with common chronic conditions from primary care facilities. Clinical phenotype data have been meticulously harmonized and validated. Genome-wide genotype data and whole-exome sequencing data are available for >30,000 participants. In addition, WGS data are available for >11,000 participants. The full EHRs of three BioMe LRRK2 pLoF carriers were reviewed by local clinicians and we were provided with detailed summaries. Estonian Biobank of the Estonian Genome Center, University of Tartu. The Estonian Biobank cohort is composed of volunteers from the general Estonian resident adult population 45 . The current number of participants of close to 165,000 (representing 15% of the Estonian adult population) makes it ideally suited to population-based studies. Participants were recruited throughout Estonia by medical personnel and participants receive a standardized health examination, donate blood and fill out a 16-module questionnaire on health-related topics such as lifestyle, diet and clinical diagnoses. A detailed phenotype summary from a health survey and linked data including ICD10 codes, clinical laboratory values and treatment and medication information is annually updated through linkage with national electronic health databases and registries. UK Biobank variant detection and curation. The 49,960 exome-sequenced individuals from the UK Biobank were restricted to a subset of 46,062 unrelated individuals of European ancestry. Relatedness was determined using KING kinship coefficient estimates from the genotype relatedness file with a cutoff of 0.0884 to include pairs of individuals with greater than third-degree relatedness. European ancestry was determined by projecting individuals onto the 1000 Genomes Project phase 3 (ref. 46 ) principal-component analysis (PCA) coordinate space, followed by Aberrant R package 47 clustering to retain only those individuals falling within the 1000 Genomes Project EUR PC1 and PC2 limits (λ = 4.5). We further removed individuals who self-reported as non-European ethnicity. We identified all individuals with putative LoF variants detected in the FE analysis pipeline, which used GATK 3.0 for variant calling and filtering 33 . We did not use the SPB pipeline calls due to advertised errors in the Regeneron Genetics Center pipeline at the time we were conducting these analyses. Variants were included if they were annotated as LoF on any transcript expressed in the lung, liver or kidney. As with the gnomAD analysis, variants were filtered out if they were flagged as low confidence by LOFTEE, before manual curation of the remaining variants. This curation included inspection of variant quality metrics, read distribution and the presence of nearby variants using integrative genome viewer and splice-site prediction algorithms using Alamut. In addition, 266 individuals in the full genotyped cohort of 488,288 samples who were carriers of the G2019S risk allele were identified. One individual who was a carrier for both a LRRK2 pLoF variant and G2019S was excluded from all analyses. Carriers of G2019S were not included in the 'noncarrier' cohort in any of the analyses. LRRK2 pLoF carriers, G2019S risk allele carriers and noncarriers are well matched for both sex (Extended Data Fig. 4) and age (Extended Data Fig. 5). UK Biobank phenotype analysis. Blood serum and urine biomarkers. 
The first recorded value of all fields relating to 'blood biochemistry' (field codes 30600-30890) and 'urine assays' (field codes 30510-30535) was extracted for all individuals. The distribution of values for all biomarkers was plotted (Supplementary Fig. 1) and a two-sided Wilcoxon test was used to test for a difference between LRRK2 pLoF carriers and noncarriers. These data were also extracted for G2019S risk allele carriers and these individuals were compared to both pLoF carriers and carriers of neither G2019S nor LRRK2 pLoF variants. There was no significant difference in any of the 34 biomarkers between pLoF and G2019S carriers after accounting for multiple testing (Supplementary Table 15). When comparing G2019S carriers to noncarriers we found significant associations with cystatin C and phosphate levels.

Clinical measures of kidney function. ACR was calculated by dividing the urine microalbumin value (field code 30500; mg l−1) by the urine creatinine value (field code 30510; μmol l−1) multiplied by a factor of 0.0001131222. Estimated glomerular filtration rate was calculated using the CKD Epidemiology Collaboration (CKD-EPI) creatinine equation 48. Normal range values for both ACR and estimated glomerular filtration rate were taken from the National Kidney Foundation website (https://www.kidney.org/kidneydisease/).

Spirometry measures of lung function. To assess lung function we used Global Lung Initiative 2012 reference equation z scores standardized for age, sex and height for FEV1, FVC and FEV1/FVC ratio measured using spirometry. These calculations are available in field codes 20256, 20257 and 20258 and were described previously 36.

Grouped phenotype analysis. The list of all codings within the field '20002 Non-cancer illness code, self-reported' was taken from the UK Biobank showcase (http://biobank.ctsu.ox.ac.uk/crystal/coding.cgi?id=6). All selectable codings were given a primary grouping pertaining to the main system relating to that disease. In rare instances where more than one grouping could be assigned, the second was included as a secondary grouping. Diseases with an autoimmune basis were given a secondary grouping to reflect a similar underlying mechanism. Due to the opposing effects of some respiratory diseases, where appropriate, phenotypes in this category were given a secondary grouping of airway, interstitial or pleural. Any codings reflecting symptoms, trauma/injury, benign cancer, mental health phenotypes or diseases arising as a result of infection were excluded. All phenotype codings and assigned groupings are listed in Supplementary Table 10. Any coding within the field '20001 Cancer code, self-reported' was assigned a grouping of 'cancer'. To test for an association between any phenotype group and LRRK2 pLoF carrier status, each individual was counted once as either having self-reported any of the phenotypes within a group or having reported none. A Fisher's exact test was used to test for an association.

Analysis of ICD10 codes. All ICD10 codes relating to diseases of the liver (K70-K77), diseases of the respiratory system not specific to the upper respiratory tract (J20-J22, J40-J47, J80-J99) or kidney diseases (N00-N29) were curated to exclude any with a primary infectious or external cause (Supplementary Table 12). Asthma was excluded from all analyses to avoid any issues caused by the deliberate ascertainment of the exome-sequenced portion of the cohort on the basis of asthma status.
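The grouped-phenotype comparison above reduces to a Fisher's exact test on a 2 × 2 table (carrier status by reported/not reported). A minimal sketch with scipy, reusing the lung/liver/kidney ICD10 counts reported earlier (6 of 97 carriers versus 4,536 of 45,965 noncarriers) purely as illustrative numbers:

```python
from scipy.stats import fisher_exact

# 2x2 table: rows = pLoF carriers / noncarriers,
# columns = phenotype reported / not reported.
table = [[6, 91],          # 6 of 97 carriers
         [4536, 41429]]    # 4,536 of 45,965 noncarriers

odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, P = {p:.3f}")
```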
For each individual, we extracted all ICD10 codes from the fields '41270 Diagnoses: ICD10' (recorded from episodes in hospital), '40001 Underlying (primary) cause of death: ICD10' and '40002 Contributory (secondary) causes of death: ICD10' . The number of carriers and noncarriers with any ICD10 code relating to lung (5 pLoF carriers; 2,378 noncarriers), liver (0 pLoF carriers; 652 noncarriers) or kidney disease (3 pLoF carriers; 2,272 noncarriers) were counted. For J43 (emphysema), J44 (other chronic obstructive pulmonary disease) and J47 (bronchiectasis), ICD10 codes were not counted if they were reported alongside exposure to or history of tobacco use (Z77.22, P96.81, Z87.891, Z57.31, F17 or Z72.0). 23andMe variant annotation and curation. 23andMe participants have been genotyped on a variety of platforms and imputed against a reference panel comprising 56.5 million variants from the 1000 Genomes Project phase 3 (ref. 46 ) and UK10K 49 . Putative LRRK2 LoF variants were defined as those classified as high confidence by LOFTEE. Variants were manually assessed for call rate, genotyping and imputation quality and manually curated to ensure they were expected to cause true LoF. For each of the two genotyped LRRK2 pLoF, we determined carrier status by manually inspecting and custom calling the probe intensity plots. For the imputed variants, carrier status was determined from the minimac-imputed dosage. As these calling methods might produce false positives, we confirmed the participants' genotypes through Sanger sequencing. Individuals with discordant genotypes were excluded. This resulted in a cohort of 749 individuals, each of whom is a Sanger sequence-confirmed carrier for one of three pLoF variants (Supplementary Table 4). During initial selection and sequencing, expansion of the database led to inclusion of a number of additional individuals genotyped for one of the pLoF variants, rs183902574. We performed custom calling on these individuals and found 354 deemed as high-confidence carriers (Supplementary Table 4). As these individuals were not Sanger sequenced, all subsequent analyses were performed both including and excluding these individuals. Participants provided informed consent and participated in the research online, under a protocol approved by the institutional review board, Ethical & Independent Review Services, an organization accredited by the Association for the Accreditation of Human Research Protection Programs. Testing the power to detect an age effect in 23andMe. As a positive control for age analysis, we tested the apolipoprotein E (APOE) Alzheimer's disease risk allele rs429358, which has a known effect on lifespan. This effect is highly significant in this dataset (P = 1.2 × 10 −211 ). Given that the carrier count for rs429358 is much higher than for LRRK2 pLoF, we assessed the power of the 23andMe dataset to detect an age effect associated with LRRK2 pLoF variants that is of the same effect size as the known effect of the APOE allele rs429358 by sampling carriers of this variant. We randomly selected N carriers of rs429358 from the 23andMe dataset, performed a Kolmogorov-Smirnov test on the age distribution of those carriers versus 4,000,000 noncarriers and considered the resulting P value. We repeated this process 100 times and then computed the proportion of these simulations with P < 0.05. This tells us our power to reject the null hypothesis that APOE does not have an effect on age at α = 0.05, if we had N carriers in the dataset. 
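The APOE-based power calculation just described can be sketched as a resampling loop: draw N carriers, compare their age distribution to noncarriers with a KS test, and record how often P < 0.05. Everything below is synthetic and only mirrors the procedure, not the 23andMe data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic stand-ins: a carrier pool whose age distribution is shifted
# slightly downward relative to noncarriers.
noncarrier_ages = rng.normal(52, 15, 100_000)
carrier_pool = rng.normal(50, 15, 50_000)

def power_at_n(n_carriers: int, n_sims: int = 100, alpha: float = 0.05) -> float:
    """Proportion of simulations in which the KS test rejects at alpha."""
    hits = 0
    for _ in range(n_sims):
        sample = rng.choice(carrier_pool, size=n_carriers, replace=False)
        _, p = stats.ks_2samp(sample, noncarrier_ages)
        hits += p < alpha
    return hits / n_sims

for n in (1_000, 5_000, 20_000):
    print(f"N = {n:>6}: power ~ {power_at_n(n):.2f}")
```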
We repeated this for different values of N between 1,000 and 20,000 (Supplementary Table 6). Association testing in the 23andMe dataset. Phenotype selection. The 23andMe dataset includes self-reported phenotype data for thousands of phenotypes across a diverse range of categories. These phenotypes have different sample sizes and prevalence, so the power to detect associations varies widely. We began with a curated set of 748 disease phenotypes. We then applied a liberal filter based on our power to detect an association with carrier status. More specifically, assuming a minor allele frequency of 2 × 10 −5 , we restricted to phenotypes where we had power 0.1 to detect an association effect with odds ratio (OR) > 1.3 (for binary traits) or β > 0.2 (for quantitative traits) at α = 0.0001 significance. This left us with 460 binary and 14 quantitative phenotypes. Association testing. For the subset of 366 health-related phenotypes (excluding any related to diet, drug use, lifestyle and personality), we first restricted testing to individuals for whom we had phenotypic data. We calculated pairwise identity by descent (IBD) over all individuals using a modified version of the IBD64 program and then iteratively removed individuals until we were left with a set of participants, no two of whom shared >700 cM in IBD. We then tested the association between phenotype and carrier status, controlling for age, sex, genotyping platform and the first ten genetic principle components. We used logistic regression for binary phenotypes and linear regression for quantitative phenotypes. To control for population structure we restricted our analyses to participants with >97% European ancestry, but the results did not qualitatively change when we dropped this restriction. We also tested associations using only individuals whose carrier status was confirmed by Sanger sequencing, but this also did not result in any meaningful difference. A Bonferroni-corrected P value threshold for 366 independent tests of 1.37 × 10 −4 was used to assess statistical significance. Power analysis. For each phenotype, we computed the theoretical OR we were powered to detect (given in Supplementary Tables 5 and 8) as follows: let m be the proportion of individuals used in the association study of that phenotype who are LRRK2 pLoF carriers and let n0 and n1 be the number of controls and cases, respectively. For each OR in the interval (1, 10) at steps of 0.02, we computed the power of the Cochran-Armitage trend test to detect an association between a variant with minor allele frequency m and OR at α = 0.05, with n0 controls and n1 cases 50 . We reported the smallest OR such that the power was ≥0.8. Analysis of LRRK2 protein levels. Cell culture. All cell lines tested negative for Mycoplasma contamination on a monthly basis with the MycoAlert Detection kit (Lonza, LT07-118) and MycoAlert Assay Control Set (Lonza, LT07-518). Cells were grown at 37 °C with 5% CO 2 . Human embryonic stem cell culture. All pluripotent stem cells were approved by Harvard ESCRO protocol E00052 and E00067. Human ESCs (hESCs) were obtained from WiCell Research Institute (WA01, H1) under a material transfer agreement. 
Cell lines were authenticated by visual inspection of cell morphology with bright-field microscopy, staining with anti-Oct4 antibody to determine maintenance of pluripotency (Santa Cruz, sc-5279, data not shown), sent to WiCell Research Institute after 6 months of passaging or after isogenic cell line generation for karyotyping and in some cases PCA of RNA-seq data to confirm clustering with other pluripotent stem cell lines. Pluripotent stem cells were plated onto hESC-qualified Matrigel (VWR, BD354277)-coated six-well plates, mTeSR1 medium was changed daily (StemCell Technologies, 85850) and cells were passaged every 5-7 d with 0.5 mM EDTA. Lymphoblastoid cell culture. LCLs were obtained from Coriell Biorepository (GM18500, GM18501, GM18502, HG01345, HG01346) and approved by the Broad Institute Office of Research Subject Protection protocol 3639. Cell lines were authenticated by visual inspection of cell morphology with bright-field microscopy Letters NAtURE MEDiciNE and in some cases PCA of RNA-seq data to confirm clustering with GTEx LCLs. LCL medium was changed every other day with RPMI 1640 medium (Life Technologies), 2 mM l-glutamine (Life Technologies) and 15% FBS (Sigma). Cardiomyocyte differentiation. Cardiomyocyte differentiation of the control and engineered H1 hESC lines was performed according to the protocol by Lian et al. 51 . Briefly, 500,000 cells were plated on hESC-qualified Matrigel (VWR, BD354277), grown in mTeSR1 medium for 4 d (StemCell Technologies, 85850) and switched to RPMI medium (Life Technologies) with B27 supplement (Life Technologies), switching to B27 with insulin at day 7 for the remainder of the protocol. On day 0 of differentiation, 12 µM CHIR99021 (Tocris) was applied for 24 h. At day 3, cultures were treated with 5 µM IWP2 (Tocris) for 24 h. Bright-field images and movies were acquired at day 17 and cells were collected for protein/RNA extraction at day 19. Isogenic cell line engineering. The following guide and homologous recombination (HR) template were delivered into single cell H1 hESCs via nucleofection (Lonza 4D-Nucleofector X unit) using the P3 Primary Cell kit (V4XP-3024), pulse code CA137 and pX459 (Addgene): AATAAGGCATTTCATATAGT and A CA GG CC-TG TG AT AG AG CT TCCCCATTGT G AG AACTCTGAAATTATCATCTGACTA TATGAAATGCCTTATTTTCCAATGGGATTTTGGTCAAGATTAA. Cells were allowed to recover from nucleofection in mTeSR supplemented with 10 µM Rock Inhibitor (Y-27632, Tocris) overnight. For the following 3 d the cells were treated with 0.25 µg ml −1 puromycin (VWR) in mTeSR. Cells were then cultured in mTeSR until colonies were ready to be split. Engineered cells were split into single cells and plated in Matrigel-coated 96-well plates at a density of 0.5 cells per well. Plates were screened for colonies 8-10 d after plating and grown until colonies were ready to be split. Colonies were then split with 0.5 mM EDTA into two identical 96-well plates, one for DNA extraction/PCR/sequencing and one for freezing cells. Once colonies were ready to be split, 96-well plates were frozen in mFreSR (Stem Cell Technologies) and stored at −80 °C until HR-positive wells were identified. HR-edited cells were then thawed and expanded for four generations, validated by Sanger sequencing, karyotyping and OCT4 staining before proceeding with cardiomyocyte differentiation. Off-target analysis of CRISPR/Cas9 engineering. To detect any potential off-target effects caused by CRISPR/Cas9 genome editing, WGS was conducted for both engineered and control cell lines. 
DNA extraction, quality control and 30× PCR-free WGS were performed by the Genomics Platform at the Broad Institute. An AllPrep DNA/RNA extraction kit was used, following its protocol. Alignment, marking of duplicates, recalibration of quality scores and variant calling were all performed using GATK best practices 52. We identified 157,230 variants in the engineered cell line that were not found in the control cell line as candidate variants. For the guide RNA (gRNA) used, we defined potential off-target regions as those with a <4-bp mismatch against the full 20-bp gRNA sequence (334 regions) and/or a <2-bp mismatch against the seed (PAM-proximal) 12 bp of the gRNA sequence (5,780 regions), each followed by the NGG PAM. We looked for any candidate variant that fell into the potential off-target regions, resulting in detection of only one variant (chr8-65084564-A-AT) that fell onto a region with one mismatch against the seed 12 bp of the gRNA sequence (chr8:65084560-65084575). No variants with a <4-bp mismatch against the full 20-bp gRNA sequence or a perfect match at the seed region were detected. Because a mismatch at the seed region decreases the likelihood of off-target variants, and because the single variant we detected is a known variant (rs1161563412) observed in the population without apparent phenotypic association, we concluded that no major off-target effect undermines the main findings of this work. All analyses for the detection of potential off-targets were conducted using pybedtools 53 and CRISPRdirect 54 software.

Reporting Summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability. The gnomAD 2.1.1 dataset is available for download at http://gnomad.broadinstitute.org, where we have developed a browser for the dataset and provide files with detailed frequency and annotation information for each variant. There are no restrictions on the aggregate data released. The UK Biobank resource was accessed under application number 42890.

[Extended Data Fig. 2 caption: Details of CRISPR/Cas9-engineered embryonic stem cells and cardiomyocyte differentiation. a, Sanger sequence of isogenic hESC engineered colony for heterozygous LRRK2 variant 10, clone 13 (GRCh37:12-40714897-C-T). The engineered cell line maintains b, a normal karyotype, c, normal colony morphology and expression of OCT4, and d, differentiates into the cardiomyocyte lineage. The bright field image of cardiomyocytes was captured at day 17 of the differentiation protocol. The cardiomyocyte differentiation was repeated 12 times and the staining for Oct4 was repeated on 30 independent colonies.]
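The off-target screen above hinges on counting mismatches between each candidate site and the 20-bp gRNA (and its PAM-proximal 12-bp seed). The sketch below shows that mismatch logic in isolation, using the guide sequence given in the Methods; the candidate site is made up, and the real analysis operated over genome coordinates with pybedtools and CRISPRdirect.

```python
GUIDE = "AATAAGGCATTTCATATAGT"   # 20-bp gRNA used for the LRRK2 edit
SEED = GUIDE[-12:]               # PAM-proximal 12 bp

def mismatches(a: str, b: str) -> int:
    """Hamming distance between equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def is_potential_off_target(site: str) -> bool:
    """Apply the paper's thresholds: <4 mismatches over the full guide,
    or <2 mismatches over the seed (the NGG PAM check is done upstream)."""
    return mismatches(site, GUIDE) < 4 or mismatches(site[-12:], SEED) < 2

# Hypothetical candidate site (not a real genomic sequence).
candidate = "AATAAGGCATTTCATATACT"
print(is_potential_off_target(candidate))
```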
means) or other basic estimates (e.g. regression coefficient) AND variation (e.g. standard deviation) or associated estimates of uncertainty (e.g. confidence intervals); and, for null hypothesis testing, the test statistic (e.g. F, t, r) with confidence intervals, effect sizes, degrees of freedom and P value noted. Software and code. Policy information about availability of computer code. Data collection: No software was used to collect the data used in this study. Data analysis: Custom code was used to analyse data and create the figures in this study. This code has been shared in a GitHub repository. The Genome Analysis Toolkit (GATK) was used to analyse sequencing data generated from CRISPR-edited cell lines, according to best-practice guidelines. Pybedtools and CRISPRdirect software were used to detect potential off-target edits. The Integrative Genomics Viewer (IGV) and Alamut software were used to curate variants. For manuscripts utilizing custom algorithms or software that are central to the research but not yet described in published literature, software must be made available to editors/reviewers. We strongly encourage code deposition in a community repository (e.g. GitHub). See the Nature Research guidelines for submitting code & software for further information. Data. Policy information about availability of data: All manuscripts must include a data availability statement. This statement should provide the following information, where applicable: accession codes, unique identifiers, or web links for publicly available datasets; a list of figures that have associated raw data; and a description of any restrictions on data availability. The data presented in this manuscript and the code used to make the figures are available at https://github.com/macarthur-lab/gnomad_lrrk2.
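To make the off-target screening logic in the Methods concrete, the sketch below re-implements it in simplified form: windows with fewer than four mismatches to the full 20-bp guide, or fewer than two mismatches to the PAM-proximal 12-bp seed, each followed by an NGG PAM, are treated as potential off-target sites, and candidate variant positions are then checked against those windows. This is an illustrative stand-in for the pybedtools/CRISPRdirect workflow actually used, not the authors' code; the linear scan, forward-strand-only search and the example inputs are simplifications.

```python
# Simplified sketch of the off-target screen described in the Methods (not the
# authors' pipeline). Forward strand only; example inputs are made up.

GUIDE = "AATAAGGCATTTCATATAGT"      # 20-bp protospacer from the Methods
SEED = GUIDE[-12:]                  # PAM-proximal 12-bp seed

def mismatches(a, b):
    """Positionwise mismatches between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def off_target_windows(reference):
    """Yield (start, end) of windows meeting either mismatch criterion; NGG PAM required."""
    k = len(GUIDE)
    for i in range(len(reference) - k - 2):
        site, pam = reference[i:i + k], reference[i + k:i + k + 3]
        if pam[1:] != "GG":
            continue
        if mismatches(site, GUIDE) < 4 or mismatches(site[-12:], SEED) < 2:
            yield i, i + k

def variants_in_windows(variant_positions, windows):
    """Candidate variants falling inside any putative off-target window."""
    return [p for p in variant_positions
            if any(start <= p < end for start, end in windows)]
```

In the actual analysis this intersection was performed genome-wide with interval operations (pybedtools) rather than a string scan, but the filtering criteria are the same.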
2020-05-28T09:11:48.017Z
2020-05-27T00:00:00.000
{ "year": 2020, "sha1": "004fa911f0cc27645b088973e5f663487f6910ef", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41591-020-0893-5.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b7bc24bbcb1312f7ebbcb88e03997f412d5af047", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
265534461
pes2o/s2orc
v3-fos-license
Mitochondrial-targeted antioxidant attenuates preeclampsia-like phenotypes induced by syncytiotrophoblast-specific Gαq signaling Syncytiotrophoblast stress is theorized to drive development of preeclampsia, but its molecular causes and consequences remain largely undefined. Multiple hormones implicated in preeclampsia signal via the Gαq cascade, leading to the hypothesis that excess Gαq signaling within the syncytiotrophoblast may contribute. First, we present data supporting increased Gαq signaling and antioxidant responses within villous and syncytiotrophoblast samples of human preeclamptic placenta. Second, Gαq was activated in mouse placenta using Cre-lox and DREADD methodologies. Syncytiotrophoblast-restricted Gαq activation caused hypertension, kidney damage, proteinuria, elevated circulating proinflammatory factors, decreased placental vascularization, diminished spiral artery diameter, and augmented responses to mitochondrial-derived superoxide. Administration of the mitochondrial-targeted antioxidant Mitoquinone attenuated maternal proteinuria, lowered circulating inflammatory and anti-angiogenic mediators, and maintained placental vascularization. These data demonstrate a causal relationship between syncytiotrophoblast stress and the development of preeclampsia and identify elevated Gαq signaling and mitochondrial reactive oxygen species as a cause of this stress. INTRODUCTION Hypertensive disorders of pregnancy, specifically preeclampsia, confer substantial maternal-fetal health risks during gestation and in the postpartum period (1)(2)(3)(4)(5).Preeclampsia is diagnosed based on the presence of new-onset hypertension and end organ dysfunction after 20 weeks of gestation (6).Current incidence rates are rising (7,8), yet detailed mechanistic insight and effective treatment options are limited.Despite these shortcomings, it is well-established that the placenta has a prominent role in its pathogenesis (9,10), and experts have theorized that generalized syncytiotrophoblast stress may be a final common factor in the development of this syndrome regardless of the initiating factor (11). As the forefront of the maternal-fetal interface, syncytiotrophoblast cells comprise the outer villous layer of the placenta and are essential for material exchange and hormone production (12).Throughout pregnancy, this layer is susceptible to damage (13) and can release stress-induced bioactive factors, extracellular vesicles, and cell-free DNA into the maternal circulation (14,15).Preeclampsia is accompanied by exacerbated oxidative, mitochondrial, and endoplasmic reticulum stress, ultimately contributing to enhanced propagation of these signals (16)(17)(18)(19).Although a wide array of correlative evidence recapitulates the involvement of syncytiotrophoblast stress in the clinical presentation of preeclampsia (16)(17)(18)(19), the causality of this relationship and the precise origins have yet to be established. 
Many animal studies use hypoxia-related insults such as placental ischemia (20)(21)(22) or anti-angiogenic factors (23,24) to model preeclampsia.However, there is increasing appreciation that hypoxia is just one of many molecular signatures of the disorder (25)(26)(27), and Gαq-coupled pressure-related hormones, particularly vasopressin, angiotensin II, and endothelin-1, are also independently implicated (28)(29)(30).Rodent and ex vivo human experiments reveal that augmented signaling via these circulating substances causes systemic hallmarks of preeclampsia, alters placental morphology, and impairs oxidative buffering (26,31,32).Further, placental expression of regulator of G protein signaling-2 (RGS2), a major terminator of Gαq activation (33,34), is decreased during preeclampsia (35).Reduced placental expression of Rgs2 in the fetoplacental unit of mice is sufficient to induce clinical features of preeclampsia and placental transcriptomic profiles reflective of mitochondrial dysfunction, an unfolded protein response, and oxidative stress (35).Excess production of mitochondrial-derived reactive oxygen species within trophoblast cell cultures disrupts mitochondrial dynamics, perturbs hormone production, and reduces trophoblast fusion, which is critical for syncytiotrophoblast formation (36). We hypothesized that elevated Gαq stimulation within the syncytiotrophoblast layer contributes to the pathogenesis of preeclampsia through disruption of normal redox balance and thus the induction of syncytiotrophoblast stress.Our studies provide evidence of increased Gαq signaling within syncytiotrophoblasts of human placenta during preeclampsia, the sufficiency of selective induction of Gαq signaling within syncytiotrophoblasts of mouse placenta to cause preeclampsia-like phenotypes, and a critical role for mitochondrial redox functions in this process.These studies provide unique critical mechanistic evidence supporting the "syncytiotrophoblast stress" theory of preeclampsia and provide insight toward targetable pathways and available pharmacological agents that may be used to slow progression of this gestational disorder. RESULTS Increased Gαq-related activity is observed in human placenta during preeclampsia Phospholipase C (PLC) is a critical enzyme mediating actions of the Gαq second messenger system (Fig. 1A) (37).Therefore, its activity was assessed as a metric of Gαq signaling.PLC activity was augmented in third-trimester villous placenta samples derived from human pregnancies complicated by preeclampsia compared to control pregnancies (P = 0.009; Fig. 1C). compared to term control (Table 1).Similarly, preterm preeclampsia was also marked by differential expression of canonical genes involved in the heterotrimeric Gαq protein signaling pathway and its downstream events (Fig. 1B), including those provoked by endothelin-1 (Fig. 1B and fig.S1), the angiotensin II type 1 receptor (fig.S1), and chemokine-chemokine receptor interactions (fig.S2).Further, as mitochondrial inefficiencies can be both a cause and consequence of oxidative stress (38), preterm preeclamptic placentas also exhibited gene expression patterns indicative of down-regulated mitochondrial respiratory chain complex I assembly, adenosine 5 0 -triphosphate (ATP) synthesis coupled electron transport, and generation of precursor metabolites and energy (fig.S3). 
Gαq activation throughout the fetoplacental unit of mice induces preeclampsia-like phenotypes To explore the consequences of excess Gαq signaling within the fetoplacental unit of mice upon maternal-fetal outcomes, we used a Cre-lox genetic strategy to cause expression of the Gαq-coupled hM3Dq Designer Receptor Exclusively Activated by Designer Drug (DREADD) in a tissue-specific manner. The hM3Dq DREADD enables the selective activation of the Gαq pathway in response to a synthetic exogenous ligand, clozapine N-oxide (CNO) (Fig. 1D) (39). To target hM3Dq expression only to the fetoplacental unit, dams harboring a Cre-activable hM3Dq construct were bred with sires expressing Cre via the ubiquitous Actb promoter (Fig. 1D). The hM3Dq construct is located within the Rosa26 locus and consists of a strong ubiquitous Cag promoter that drives green fluorescent protein (GFP) until Cre-mediated recombination. After recombination, GFP is lost, and hM3Dq fused with the red fluorescent mCherry reporter is expressed (Fig. 1E) (40). Fluorescent imaging was used for model verification, and as expected, maternal (i.e., Cre-deficient) cells express GFP whereas cells of fetal origin (i.e., placenta) express mCherry (Fig. 1E). Pregnant females carrying double-transgenic (hM3Dq + , Cre + ) fetoplacental units received daily injections of CNO (2 mg/kg per day, i.p.) or saline (2 ml/kg per day, i.p.) mid-gestation [gestational day (GD) 12.5 to 14.5] before maternal and fetoplacental assessments following the third injection at GD 14.5. This injection period coincides with the time of mature chorioallantoic placental formation (41) and developmentally aligns with the second trimester in humans (42). This low dose of CNO was chosen to avoid back-metabolism to the pharmacologically active molecule, clozapine (43), and had no discernible effects independent of the hM3Dq receptor in our explored endpoints (fig.S1 and table S1). Activation of hM3Dq by CNO induced multiple preeclampsia-like phenotypes including proteinuria (P = 0.026; Fig. 2A), increased maternal circulating soluble fms (Feline McDonough Sarcoma)-like tyrosine kinase 1 (sFLT1) (P = 0.032; Fig. 2B), and reduced placental vascular endothelial growth factor (VEGF) protein (P = 0.010; Fig. 2C). These data suggest an angiogenic imbalance, as sFLT1 is a truncated form of the VEGF receptor 1 that binds to pro-angiogenic VEGFA and placental growth factor (PlGF) to render them inactive (44,45). Histopathologic findings in the kidney were absent as assessed by light microscopy within hematoxylin and eosin-stained tissue (Fig. 2D). Total vascularization within each region of the mouse placenta was evaluated by immunostaining for platelet endothelial cell adhesion molecule (CD31). The area of CD31-positive placenta within the labyrinth, responsible for exchange, was severely attenuated (P = 0.0038; Fig. 2, E and F). In contrast, vascular patterning was retained within the decidua (P = 0.52) and junctional zone (P = 0.97), a major director of endocrine function (46). Placental histological lesions were indicative of maternal vascular dysfunction and inflammation, evidenced by the presence of decidual vascular congestion, nuclear debris, fibrin deposition, neutrophil infiltration, and necrosis in hematoxylin and eosin-stained tissue (Fig.
2G and fig.S4).Overall, this stimulus represented a massive insult to pregnancy, resulting in frequent spontaneous abortion (3 of 7 pregnancies) and nonviable fetuses (26 of 26).These data confirm that amplified Gαq signaling within the fetoplacental unit has a profound effect on multiple pregnancy outcomes; however, the severity of the phenotypes induced in this model and the lack of cellular specificity prompted additional studies of the prevalence and effects of Gαq activation in individual layers of placenta, focusing upon the syncytiotrophoblast layer. Elevated Gαq signaling and oxidative stress are present in human syncytiotrophoblasts during preeclampsia To evaluate the status of the Gαq cascade and oxidative stress specifically within the syncytiotrophoblast layer of human placenta, we next performed laser capture microdissection to collect syncytiotrophoblast-enriched cellular fractions (Fig. 3, A and B).The protein expression of PLCβ1/3, signature Gαq effectors (37), was elevated in preeclamptic syncytiotrophoblasts compared to controls (P = 0.035; Fig. 3C).Although PLC is an enzyme and thus more reliably assessed at an activity level, the dehydration procedures required for laser capture microdissection and the small quantity of collected tissue render these samples incompatible for this activity assay.However, syncytiotrophoblasts comprise a large percentage of the villous tree, and correlation analysis revealed a positive association between villous PLC activity and syncytiotrophoblast PLCβ protein content (Pearson R = 0.74, P = 0.014; Fig. 3D).During Gαq signal transduction, PLC cleaves phosphoinositide 4,5-bisphosphate to inositol 1,4,5-trisphosphate (IP 3 ) (47).IP 3 binding to its receptors within the endoplasmic reticulum leads to the liberation of calcium (47), which controls a broad array of cellular events [i.e., activation of kinases and transcription factors (48) and vesicular release (49)].IP 3 receptor (ITPR3) mRNA was also increased in preeclamptic syncytiotrophoblasts (P = 0.04; Fig. 3E). Superoxide dismutase-2 (SOD2) is a protective mitochondrial matrix enzyme that catalyzes the conversion of superoxide to hydrogen peroxide (50), and excess free radicals lead to the generation of malondialdehyde (MDA), a final product of lipid peroxidation (Fig. 3F) (51).SOD2 mRNA (P = 0.027) and protein (P = 0.047; Fig. 3G) as well as MDA protein (P = 0.03; Fig. 3H) were increased within preeclamptic syncytiotrophoblasts (Fig. 3F), and there appears to be a positive correlation between the protein expression of PLCβ and SOD2 within these cells (Pearson R = 0.70, P = 0.012; fig.S5).Collectively, these data support the concept that syncytiotrophoblast Gαq signaling facilitates an oxidant defense response within this placental layer and may be involved in the pathophysiology of preeclampsia. Syncytiotrophoblast-restricted Gαq stimulation in mice results in impaired placental development driven by mitochondrial-derived reactive oxygen species The syncytiotrophoblast-specific Gcm1-Cre driver (52) was used to drive expression of the hM3Dq DREADD only within this layer of mouse placenta (Fig. 
4, A and B).Homozygous hM3Dq dams were bred with hemizygous sires expressing Cre via the Gcm1 promoter to elicit syncytiotrophoblast II-restricted (52) expression of hM3Dq in ~50% of fetoplacental units.This breeding paradigm allowed paired comparisons between Cre + and Cre − placentas within a dam while also avoiding any unintended epigenetic effects of passing the imprinted Gcm1-Cre transgene through the maternal germ line.Thus, the direct effects of Gαq activation within the syncytiotrophoblast layer upon the structure and function of that placenta can be evaluated (i.e., CNO-treated Cre + placenta versus saline-treated).Simultaneously, endocrine effects evoked by circulating factors can be explored by examining phenotypes of Cre − placentas collected from CNO-treated dams that also carried Cre + placentas.As used in the fetoplacental model (Fig. 1), the DREADD ligand CNO (2 mg/kg per day, i.p.) or saline (2 ml/kg per day, i.p.) was injected daily during GD 12.5 to 14.5, before fetoplacental analyses at GD 14.5. Because of the design of the hM3Dq vector, the hM3Dq-mCherry fusion protein should be present in Gcm1-positive syncytiotrophoblast II cells and GFP in all other cells.Florescent imaging validated hM3Dq targeting to the proper placental region, indicated by mCherry expression only within a subset of labyrinth cells, along with broad GFP expression within the decidual and junctional zones (Fig. 4C).Similarly, in situ hybridization confirmed colocalization of Cre and Gcm1 transcripts within the labyrinth (Fig. 4D). To test our secondary hypothesis that the detrimental effects of exacerbated Gαq signaling depend on excess mitochondrial production of reactive oxygen species, the mitochondrial-targeted antioxidant Mitoquinone (MitoQ) was coadministered (5 mg/kg per day, i.p.) with CNO in a cohort of double-transgenic (hM3Dq + , Gcm1-Cre + ) pregnancies.Control experiments revealed that MitoQ had no observed effects within the context of normal pregnancy (figs.S6 and S7 and table S1), consistent with minimal negative effects of reactive oxygen species when present at controlled concentrations (53). Stimulation of Gαq by CNO within hM3Dq + , Gcm1-Cre + cells was supported by changes in the placental transcriptomic landscape indicative of up-regulated PLC-activating angiotensin-activated signaling, second messenger-and calcium-mediated signaling, and expression patterns reflective of neuroactive ligand interactions with receptors known to couple to Gαq (Table 1 and fig.S8).Structurally, this syncytiotrophoblast-specific model primarily exhibited placental vascular defects.Decidual spiral artery diameter was attenuated following syncytiotrophoblast-localized Gαq induction (CNO Cre + ) compared to saline (P = 0.038; Fig. 5A and fig.S7). Furthermore, the fraction of CD31-positive labyrinth, the region containing syncytiotrophoblasts (46), was reduced within CNO-injected double-transgenic placentas versus controls, including both paired Cre − placentas within the same dam (P = 0.0013) and placentas obtained from dams that received saline vehicle injections instead of CNO (P = 0.016; Fig. 5, B and C).In contrast, CD31-positive area within the decidua and junctional zone remained unaffected by Gαq activation.Decidual thickness was also decreased in paired CNO Cre + versus CNO Cre − placentas (P = 0.0024; Fig. 
5D).The ratio between fetal mass and placental mass (F/P) is often used as a proxy for placental efficiency, indicative of nutrient transfer per gram of placenta (54).Within CNO-injected doubletransgenic dams, there was a decline in placental mass (P = 0.0034) and fetal mass (P = 0.0032) as well as F/P ratio (P = 0.016) in Cre + fetoplacental units (Fig. 5E).Of the affected parameters, MitoQ treatment mitigated Gαq-related impairments in labyrinth vascularization (P = 0.038; Fig. 5, B and C) and spiral artery diameter (P = 0.0055; Fig. 5A and fig.S7), although larger morphological changes were not manifested after only 2 days of treatment (Fig. 5, D and E).Collectively, these data demonstrate that enhanced Gαq propagation within syncytiotrophoblasts causes maladaptive changes in vascular supply to the placenta, and this effect is dependent on oxidative stress. Hypoxia-inducible factor 1 (HIF1) is a canonical transcription factor that contributes to cellular responses to low oxygen tension (55), and its HIF1α subunit is activated (i.e., translocated into the nucleus) in the presence of hypoxia (56).Consistent with the induction of a hypoxia response, nucleus-localized HIF1α protein was increased within CNO-treated Cre + placentas (P = 0.044 versus paired Cre − , P = 0.032 versus saline; Fig. 5F).In addition, bulk RNA sequencing revealed an enrichment of genes related to hypoxic response pathways in CNO-treated Cre + placentas compared to saline controls (Table 1).At an individual gene level, transcripts for several previously identified HIF1α targets were elevated in CNO-treated Cre + placentas versus paired Cre − or saline samples (table S2).In contrast, the protein content of VEGFA, a well-recognized HIF1 target (57,58), and its homolog, PlGF (59), were unchanged in labyrinth-enriched placentas (fig.S9).Together, these results support the conclusion that induction of Gαq in the syncytiotrophoblast layer is adequate to initiate selected transcriptional signatures of a hypoxic response within the placenta; nonetheless, this relatively brief and cell-restricted stimulation is insufficient to cause robust changes in typical angiogenic proteins. Consistent with our human placental data (Fig. 3), activation of the Gαq pathway within the syncytiotrophoblast layer in mice elicited an oxidative defense response, demonstrated by elevated labyrinth SOD2 protein expression compared to saline controls (P = 0.0003; Fig. 5G).Despite this rise, catalase protein levels were not significantly altered (Fig. 5H), and the lipid peroxidation marker MDA was increased (P = 0.003 versus saline, P = 0.03 versus CNO Cre − ; Fig. 5I, indicating an inability to adequately buffer the reactive oxygen species present, particularly superoxide and hydrogen peroxide.Simultaneous administration of MitoQ prevented the rise in SOD2 (P = 0.0006) and MDA (P = 0.0535), suggesting that the increased free radical buffering capacity provided by this compound was protective against Gαq-perpetuated oxidative damage.Changes in mitochondrial morphology were also evaluated by electron microscopy (Fig. 
5J), which indicated the greatest degree of damage in CNO Cre + placentas compared to those that received saline or CNO + MitoQ.Within CNO Cre + samples, the placental mitochondria generally appeared smaller and more circular, with the exception of some that were large and swollen with dilated cisternae and amorphous precipitate.MitoQ did not completely abolish mitochondrial injury as they exhibited some occasional swelling.However, several mitochondria within this cohort resembled those of the healthy saline group, and there was also qualitatively more total volume of mitochondria. Many of these structural and molecular findings are corroborated by RNA sequencing and align with those prevalent in human preeclampsia (Table 1).Within the up-regulated and differentially expressed gene set, there was an enrichment in superoxide metabolic process, positive regulation of reactive oxygen species, smooth muscle contraction, regulations of cell migration and epithelium development, wound healing, negative regulation of angiogenesis, and other inflammatory signatures within CNO Cre + placentas relative to saline and CNO Cre − controls (Table 1 and figs and serine dehydratase-like protein, which are known to play a role in the biosynthesis of nicotinamide adenine dinucleotide, purines, thymidylate, pyruvate, and amino acids (figs.S16 and S17).This down-regulation may have ultimately contributed to impaired metabolism, nucleotide synthesis, nutrient transport, and overall placental function. Within CNO-treated Cre + placentas, coadministration of MitoQ led to transcriptomic profiles related to negative regulation of G protein-coupled receptor (GPCR) signaling and apoptotic processes, responses to reactive oxygen species, positive regulation of vasculature development, and epithelial cell differentiation (Table 2).Of the highlighted mediators, FoxG1, cyclin D, and TRAIL were decreased compared to CNO alone (fig.S18).Other altered controllers of differentiation, immunity, tissue remodeling, apoptosis, and lipid accumulation included interleukin-1, TGFβ, activator protein 1, c-Fos, matrix metalloproteinase 13, interferon regulatory factor 7, mitochondrial B cell lymphoma-extra large, organic cation transporter 1, Fas, and ATP binding cassette transporter G1 (figs.S19 to S21).MitoQ also enhanced transcription of several proteins necessary for protein processing in the endoplasmic reticulum (fig.S22) and decreased tumor suppressor-activated pathway 6 (Tsap6/Steap3) involved in exosome-mediated secretion (fig.S23), suggesting attenuated placental release of biological factors that may disrupt maternal physiology and fetal development. Collectively, these placental characterizations demonstrate that enhanced Gαq signaling within the syncytiotrophoblast layer alone facilitates morphological and molecular changes that parallel features of human preeclampsia, many of which are driven by mitochondrial-derived reactive oxygen species. Syncytiotrophoblast-specific Gαq activation elicits maternal cardiovascular phenotypes, and MitoQ ameliorates some of these effects Next, the consequences of activating Gαq signaling within the syncytiotrophoblast layer upon maternal cardiovascular phenotypes were examined.Hemodynamic parameters were recorded using radiotelemetric blood pressure transducers implanted before pregnancy (Fig. 6A and fig.S24).Average 24-hour mean arterial pressure was higher in the postinjection period within CNO-injected hM3Dq + pregnancies compared to saline (P = 0.036; Fig. 
6B).This rise was driven by both systolic and diastolic components, and average postinjection heart rate remained consistent among groups (Fig. 6B).Increases in pressure during the dark phase appeared to be associated with augmented ambulatory activity (Fig. 6A).However, elevations in mean arterial pressure persisted on GDs in which activity was equivalent or lower in CNO-injected dams, suggesting some degree of hypertension regardless of activity status. Proteinuria at GD 14.5 was exacerbated in pregnancies subjected to syncytiotrophoblast-specific Gαq induction (P = 0.0031), but this effect was mitigated by cotreatment with MitoQ (P = 0.0006; Fig. 6C).Electron microscopy revealed signs of mild tubular injury but an absence of classic endotheliosis in a representative kidney sample from the CNO-treated cohort (Fig. 6D).Observations included the presence of segmental congestion in peritubular capillaries, occasional endothelial swelling, segmental congestion in glomerular capillaries, and segmental foot process effacement. Administration of CNO did not increase maternal plasma sFLT1 (P = 0.18; Fig. 7C).Our interpretation was confounded, however, by differences in the number (i.e., biomass) of Cre + placentas within individual dams.Within CNO-injected double-transgenic dams, there was a positive correlation between plasma sFLT1 and the number of Cre + placentas (Pearson R = 0.76, P = 0.006; Fig. 7C).These findings support the ideas that sFLT1 is primarily placenta derived and that it is positively regulated by Gαq signaling in the syncytiotrophoblast.Last, coadministration of MitoQ suppressed circulating sFLT1 (P = 0.036; Fig. 7C), again indicating that syncytiotrophoblast Gαq-mediated induction of this well-recognized biomarker and mechanistic contributor of preeclampsia is dependent on oxidative stress. DISCUSSION The current study provides critical experimental evidence testing and supporting the concept that the accumulation of syncytiotrophoblast stress plays a causal role in the development of preeclampsia.Further, our findings build upon this theory to suggest that excess Gαq signaling perpetuates this stress and the resultant maternal-fetal aberrations, which are in part mediated by an imbalanced redox state (Fig. 8).This research provides etiological insight regarding a cause of syncytiotrophoblast dysfunction and adds to the growing body of literature supporting the therapeutic utility of targeting oxidative stress to slow the progression of preeclampsia.Data reported here provide evidence of enhanced Gαq-modulated second messenger propagation in human syncytiotrophoblasts during preeclampsia, illustrated by increased PLCβ protein.We suspect that this effect is a result of excess input from hormones (vasopressin, angiotensin II, and endothelin-1) or placental loss of RGS2, an endogenous buffer of Gαq signaling, as these mechanisms are already implicated in preeclampsia pathogenesis by our team and others (25,26,30,68).Although studies confirm the presence of RGS2 and various Gαq-coupled receptors in human syncytiotrophoblast cells (69-71), we did not attempt to empirically decipher the precise cause of elevated PLCβ in our human placental samples.Single-cell RNA sequencing by Zhang et al. 
(72) demonstrated an enrichment of genes to calcium-dependent protein binding in third trimester syncytiotrophoblasts from preeclamptic patients compared to healthy pregnancies.Our analysis of the available gene set (72) using Shiny GO Molecular Function terms also revealed up-regulated G protein α subunit binding, both of which allude to the potential involvement of the Gαq pathway (fig.S25).Because the multinucleated nature of syncytiotrophoblasts is not ideal for single-cell RNA sequencing applications (73) and this technique provides limited sequencing depth, we chose to complement these data by microdissecting human syncytiotrophoblasts for protein quantification via immunoassay.This approach limited the number of protein targets that could be measured from a single sample but allowed greater sensitivity and bolsters the transcriptomic findings by Zhang et al. (72).A body of literature links Gαq acting hormones and a lack of GPCR inhibitory proteins to placental oxidative stress and mitochondrial dysfunction in animal models (25,26,31,32).Mitochondria are a major producer of the oxygen free radical superoxide, whereby electrons escape the canonical transfer system within the electron transport chain and react with molecular oxygen (74).In excess, superoxide and its by-products promote detrimental oxidation reactions (75,76) that contribute to DNA mutations (77), impaired protein folding (78), altered ion transport (79), apoptosis (78,80), and necrosis (80), which can impair placental capacity (76).Thus, SOD2 defends against superoxide modulated impairments by converting superoxide to hydrogen peroxide within the mitochondrial matrix (50), but its catalytic abilities can be overwhelmed in states of high oxidative stress.Preeclamptic human syncytiotrophoblasts exhibit signs of oxidative damage and impaired respiratory capacity (16)(17)(18), and here we demonstrate elevated SOD2 and lipid peroxidation.Simultaneously, these observations suggest that the rise in SOD2 is a compensatory yet insufficient adaptation to overcome the antioxidant demands imposed by the preeclamptic milieu, but the relationship to Gαq signaling had not been established within the context human preeclampsia or trophoblast subtypes.Notably, our data begins to address this gap and demonstrates a significant positive association among PLCβ and SOD2 in human syncytiotrophoblasts. Although the precise mechanisms linking Gαq activation and mitochondrial-derived superoxide production have yet to be elucidated within this syndrome, it can be postulated that the connection between pathways involves the Gαq messenger, PKC, and its interactions with NADPH oxidase (NOX), a cellular enzyme responsible for the generation of superoxide (81).Several studies have identified PKC as a positive regulator of NOX whereby PKC exerts its effects by phosphorylating NOX subunits (82,83).Of the many NOX isoforms, NOX2 has been shown to promote reverse electron transfer in response to angiotensin II and therefore contributes to the formation of mitochondrial-derived superoxide (84).Studies suggest that MitoQ is a substrate for respiratory complex II and thereby eliminates the burden of backflow of electrons from complex II to I (85,86). 
Complementing our human discoveries regarding the potential relationship between enhanced Gαq signaling and oxidative stress within the syncytiotrophoblast layer during preeclampsia, we took advantage of transgenic mice to assess the direct effects of selective activation of the Gαq cascade only within this cell type.Previous publications by our lab implicate placental GPCR signaling to the development of preeclamptic phenotypes via elevated circulating vasopressin (26,30) or heterozygous loss of placental Rgs2 (35).These models are confounded by the actions of vasopressin and Rgs2 on many cells and receptor subtypes within and outside the placenta and do not specifically address the role of Gαq or the syncytiotrophoblast stress hypothesis.However, many of the morphological and molecular changes reported in those studies were similar to those observed here with the exception of placental hypoxia, which was not detected following vasopressin infusion or reduced placental Rgs2.We are left to hypothesize that gestational age, crosstalk among G protein pathways, the strength, timing, duration of G protein activation, and the varied actions of these stimulants upon distinct neighboring cell types may all contribute to the discordant induction of hypoxia within these investigations. The labyrinth layer of mouse placenta contains two distinct, but functionally similar, populations of syncytiotrophoblast, responsible for mediating the passage of substances between maternal blood sinusoids and fetal capillaries (73).The current manipulation was limited to syncytiotrophoblast II cells, located in closest proximity to the fetal vasculature.Thus, it is logical that there was diminished labyrinth CD31 staining, denoting a decrease in fetal capillarization, which may have partially driven the fetal growth restriction.With regard to other vascular effects, mouse decidual spiral arteries undergo structural remodeling from GD 7.5 to 10.5 to allow greater oxygen transfer (87), but their vascular reactivity to vasoactive agonists is retained (88).Therefore, the reduced spiral artery diameter following hM3Dq activation in syncytiotrophoblast II cells from GD 12.5 to 14.5 was likely a consequence of factors released from the syncytium into maternal circulation rather than incomplete remodeling.Overall, the histologic abnormalities present upon syncytiotrophoblast-localized Gαq stimulation suggest potential uteroplacental dysfunction, which contributes to hypoxia, inadequate nutrient delivery, and subsequent fetal growth restriction (89) and preeclampsia. 
It is profound that stimulation of a single second messenger cascade in one subset of syncytiotrophoblasts is sufficient to augment proteinuria and blood pressure in the mother.We speculate that these physiologic effects were a result of altered signaling, placental secretions, and trophoblast damage.While the maternal phenotypes are significant but not severe, it should be noted that only ~50% of placentas within a dam were affected, which lessens the "biomass" or "load" of syncytiotrophoblast stress signals.Acknowledging the lack of a large hypertensive effect, blood pressure was not evaluated in the double-transgenic dams receiving concurrent CNO and MitoQ.However, a previous study confirmed that MitoQ supplementation beginning at GD 13.5 diminished the rise in maternal systolic blood pressure in a modified reduced uterine perfusion pressure mouse model of preeclampsia (22).Future studies are warranted to determine whether more intense or continuous activation of hM3Dq, in the syncytiotrophoblast or other layers and during different gestational age ranges, would elicit a much greater blood pressure phenotype-and if this requires the induction of oxidative stress. The placenta is known to release bioactive factors and syncytiotrophoblast-derived extracellular vesicles into the maternal circulation (19), and syncytiotrophoblasts are a predominant storage site for sFLT1 (90).Therefore, the systemic maternal effects following syncytiotrophoblast-localized Gαq stimulation are presumably attributed to changes in circulating chemokines, cytokines, and sFLT1.There is a convergence of pathways that link Gαq signaling, oxidative stress, and inflammation that may explain the results of the current investigation.As mentioned, the Gαq effector, PKC, mediates phosphorylation events to activate NOX and increase intracellular superoxide (82)(83)(84).PKC and superoxide independently promote activation of nuclear factor κB (NF-κB) (91,92), a well-established transcription factor involved in the regulation of inflammatory responses (93).Preeclamptic placentas exhibit greater NF-κB staining, much of which is localized to syncytiotrophoblast cells (93).Many of the maternal plasma immune markers augmented in the current study are established or predicted NF-κB target genes, including IL12p40 (94), LIX (95), MMP9 (96), and IP10 (97).Recognizing increased placental PKC and reactive oxygen species are relevant in our experimental mouse model and have been previously linked to NF-κB mobilization (91)(92)(93), it is plausible that these circulating inflammatory mediators are placentally derived.Further, MitoQ can cross the rodent placenta (22,98) to locally scavenge free radicals.Therefore, the reductions in plasma IL12p40 and LIX within the CNO + MitoQ cohort suggest that these NF-κB-responsive elements may be perpetuated by oxidative stress. As alluded to, the primary limitations of this study relate to the severity of insult permitted within the syncytiotrophoblast-specific model as only a fraction of placentas within a dam expressed the hM3Dq receptor, and the Gcm1 Cre driver only targets one of the two syncytiotrophoblast layers.Unfortunately, the current availability (and lack thereof ) of cell-specific Cre driver strains does not allow manipulations of both syncytiotrophoblast I and II cells. 
Further, the short injection period was chosen to avoid DREADD desensitization but may have hindered the acceleration of placental stress.While we would have preferred a more thorough and broad characterization of human and mouse syncytiotrophoblasts, these cells form a large, continuous multinucleated surface layer that is difficult to directly isolate and profile using single-cell omics approaches.Human syncytiotrophoblasts can be differentiated from cytotrophoblasts in cell culture; however, this removes them from their natural environment and prevents integrated explorations, as preeclampsia is a systemic disorder. Emerging technologies using peptide-coated nanoparticles can be used to deliver cargo to the syncytiotrophoblast layer in human placental explants (99) and could be leveraged for site-specific targeting of drugs to dampen Gαq activation.MitoQ supplementation is a simpler option and provides protection in the present study and other animal models of preeclampsia (22,98,100), validating its efficacy in many diverse subtypes of the disorder.MitoQ is already available in a stable formulation for oral dosing and has been administered to humans with no severe adverse events (101), making it a reasonable candidate for translation to the clinic.Future work would require honing in on the population of patients most amenable to MitoQ treatment, the ideal time of administration, dosing, and monitoring of pregnancy outcomes. In summary, excess Gαq signaling in the syncytiotrophoblast layer is sufficient to cause pathological features of preeclampsia via a mechanism involving mitochondrial reactive oxygen species.Our findings support the wide array of correlative human data, suggesting that syncytiotrophoblast stress has a substantial role in the development of this serious disorder (19) and identifies Gαq signaling and reactive oxygen species as a targetable source of this stress.This additional mechanistic insight may be beneficial for the advancement of treatment options and health outcomes within affected women and their children as preeclampsia remains largely unsolved (102). Experimental design The objectives of this study were to determine the causality of syncytiotrophoblast stress in preeclampsia and to explore Gαq signaling as an etiologic factor.Our human experiments demonstrating enhanced Gαq-related activity in villous placenta and syncytiotrophoblast cells during preeclampsia led to our mouse breeding paradigms for fetoplacental (hM3Dq F/F dam x Actb-Cre +/+ sire) and syncytiotrophoblast-specific (hM3Dq F/F dam x Gcm1-Cre +/− sire) Gαq induction.Further, the association between Gαq-related and mitochondrial antioxidant proteins in human syncytiotrophoblasts led to the additional cohort of mice receiving MitoQ within the syncytiotrophoblast-specific model.The sample sizes for experiments using human tissue were determined by the availability of tissue.For murine experiments, sample sizes were calculated using effect sizes and variance from our previous studies (26,30,35) (α = 0.05 and β = 0.2).Injections were randomly assigned among each round of timed pregnancies.Molecular assays and histological assessments were performed in a blinded manner.n = 1 CNO-injected animal subject (hM3Dq F/F dam x Gcm1-Cre +/− sire) was discarded from analyses because the dam carried no Cre + fetoplacental units. 
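The Experimental design section states that murine sample sizes were calculated from effect sizes and variance observed in earlier studies at α = 0.05 and β = 0.2 (that is, 80% power). The snippet below is a minimal sketch of such a calculation for a two-group comparison; the effect-size inputs are placeholders, not values taken from the paper.

```python
# Hypothetical two-group sample-size calculation at alpha = 0.05, power = 0.8
# (beta = 0.2). The group difference and SD below are placeholders.
from statsmodels.stats.power import TTestIndPower

delta, sd = 2.0, 1.6            # assumed group difference and common SD (illustrative)
effect_size = delta / sd        # Cohen's d

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"~{n_per_group:.1f} animals per group")
```

In practice the computed n is rounded up to the next whole animal per group.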
Human samples Institutional Review Board approval and written informed consent were granted before the use of any human specimens.Consented plasma and placental samples and associated clinical data (tables S5 and S6) were procured from the Medical College of Wisconsin Maternal Research Placenta and Cord Blood Bank.Preeclampsia was diagnosed on the basis of the presence of hypertension ≥ 140 mmHg (systolic) or ≥ 90 mmHg (diastolic) on two occasions after 20 weeks of gestation and end organ damage.Preeclampsia with severe features indicates any of the following: blood pressure ≥ 160 mmHg (systolic) or ≥ 110 mmHg (diastolic), thrombocytopenia, impaired liver function, renal insufficiency, pulmonary edema, or neurological symptoms.Control samples were obtained from pregnancies that were not affected by chronic hypertension, gestational hypertension, preeclampsia, or gestational diabetes. Animal subjects Animal procedures were approved by the Medical College of Wisconsin Institutional Animal Care and Use Committee.Mice were originally obtained from Jax labs (Cag-FLEX-hM3Dq, 026943; Gcm1-Cre, 026943; Actb-Cre, 033984; C57BL/6J, 000664).All mice were housed in a temperature and humidity-controlled facility with ad libitum access to 2029× Teklad diet and water.Male and female mice were housed together for a single overnight mating to ensure accurate assessment of GD.The light cycle following copulation was deemed GD 0.5.Pregnant dams received intraperitoneal injections (CNO 2 mg/kg body weight; MitoQ 5 mg/kg body weight; saline 2 ml/kg body weight) within the first half of the light cycle once daily from GD 12.5 to 14.5.Dams were euthanized by CO 2 asphyxiation and subsequent decapitation a minimum of 1 hour following the injection on GD 14.5 for collection of maternal tissues, plasma, and fetoplacental units. PLC activity assay PLC activity was evaluated in villous placental tissue using a colorimetric assay (Abcam, ab2773343).Briefly, 0.1 ± 0.2 g (mean ± SEM) of sample was homogenized in 200 μl assay buffer using a pestle and centrifuged at 10,000g at 4°C for 20 min.Two microliters of the supernatant was added per reaction.Absorbance was measured at 405 nm every 41 s for 60 min.Activity was calculated on the basis of product generated within the linear portion of each reaction using a standard curve. Laser capture microdissection For downstream processing of proteins, frozen human placental specimens were sectioned at 8 μm on a Leica CM1950 Cryostat that underwent prior ultraviolet (UV) disinfection and cleaning with 100% ethanol.Samples were mounted on polyethylene naphthalate (PEN) membrane glass slides (Carl Zeiss, 15350731) pretreated with UV radiation and 70% ethanol and transiently stored on dry ice for subsequent preparation.Sections were then dipped in cold 70% ethanol for 30 s followed by staining with hematoxylin solution containing 1 μM phenylmethylsulfonyl fluoride (Thermo Fisher Scientific, 36978) and 50× Roche cOmplete protease inhibitor cocktail (Sigma-Aldrich, 11697498001) for 30 s. 
Hematoxylin was washed by immersion in distilled ultrapure water (Invitrogen, Thermo Fisher Scientific, 10977015) containing 1 μM phenylmethylsulfonyl fluoride and 50× Roche cOmplete protease inhibitor cocktail, twice.Slides were further dehydrated and cleared in 70% ethanol for 30 s, 100% ethanol for 1 min, xylene for 5 min, and fresh xylene for 5 more minutes before brief drying in a chemical hood.Syncytiotrophoblast capture occurred on a Zeiss Palm Microbeam Laser Microdissection system.Regions of interest were identified and outlined using a 40× objective on a motorized inverted microscope or subsequent UV laser cutting at the minimum power necessary to penetrate the tissue and catapulting onto an adhesive cap (Carl Zeiss, 10138374) for 60 min. Protein expression Ten microliters of radioimmunoprecipitation assay buffer containing 1 mM phenylmethylsulfonyl fluoride and 1× Roche cOmplete protease inhibitor cocktail was immediately pipetted onto the collected human syncytiotrophoblast cells and placed on ice for 30 min.Fractions were vortexed, centrifuged, and stored at −80°C.PLCβ1 and PLCβ3 are the most highly expressed isoforms within human syncytiotrophoblast (69,71) and therefore were probed using a recombinant antibody that detects both isoenzymes.Protein expression of PLCβ1/3 (Abcam, recombinant rabbit monoclonal antibody, ab184743, 1:100), SOD2 (Cell Signaling, rabbit monoclonal antibody, #13194, 1:200), and MDA (Abcam, rabbit polyclonal antibody, ab27642, 1:100) were assessed using Simple Western Jess automated capillary-based Western blot system.Protein Simple Chemiluminescence immunoassays were performed according to manufacturer's instructions using replex reagent for total protein detection.Target protein expression was quantified in Compass for SW (6.1.0)and normalized to 50,000 U of total protein or β-actin [ACTB (Abcam, Rabbit polyclonal antibody), ab8227, 1:100]. For downstream processing of mRNA, frozen human placental specimens were sectioned at 6 μm on a Leica CM1950 cryostat.All surfaces were precleaned with 100% ethanol, and removable cryostat parts were treated with ribonuclease (RNase) zap before sectioning.Samples were mounted on UV-exposed PEN membrane glass slides (Thermo Fisher Scientific, LCM0522) and stored at −80°C for future use.Preceding dissection, sections were dipped in chilled 95% ethanol for 30 s and stained with chilled cresyl violet solution (Abcam, ab246817) for 20 s, followed by immersion in 75% ethanol for 20 s, 95% ethanol for 30 s, 100% ethanol for 30 s three times, xylene for 30 s twice, and xylene for 5 min.Slides were air-dried in a chemical hood before collection using an Arcturus XT Laser Capture Microdissection System.Portions of the syncytiotrophoblast layer were identified and outlined using a 40× objective on a motorized inverted microscope.These areas were attached to an adhesive cap (Thermo Fisher Scientific, Applied Biosystems, LCM0211) with an infrared laser and separated from surrounding tissue via UV laser cutting for a maximum of 60 min. 
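Returning to the "PLC activity assay" described above (absorbance at 405 nm read every 41 s for 60 min, with activity derived from product generated within the linear portion of each reaction via a standard curve), the following is a minimal sketch of that calculation. The function name, the choice of linear window, the synthetic trace, and the assumption of a linear standard curve are illustrative, not taken from the kit protocol.

```python
# Minimal sketch of converting a kinetic A405 trace into PLC activity, as described
# in "PLC activity assay". The linear window and standard-curve form are assumptions.
import numpy as np

def plc_activity(times_min, a405, std_conc, std_a405, linear_window):
    """Product generated per minute within a user-chosen linear region of the trace."""
    lo, hi = linear_window
    slope_abs_per_min = np.polyfit(times_min[lo:hi], a405[lo:hi], 1)[0]  # dA405/min
    abs_per_unit = np.polyfit(std_conc, std_a405, 1)[0]   # standard-curve slope
    return slope_abs_per_min / abs_per_unit               # product units per minute

times = np.arange(0, 60, 41 / 60)                  # one reading every 41 s for 60 min
trace = 0.02 * times + 0.1                         # synthetic absorbance trace
rate = plc_activity(times, trace, std_conc=[0, 5, 10], std_a405=[0.0, 0.25, 0.5],
                    linear_window=(5, 40))
```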
Quantitative real-time polymerase chain reaction An RNeasy Micro Kit (Qiagen, 74004) was used for RNA extraction according to the recommended protocol with minor modifications.Adjustments included adding Recombinant RNase Inhibitor (Takara, 2313A) to the supplied lysis buffer solution rather than 2-mercaptoethanol.Microdissected cells were immediately lysed with 350 μl of lysis buffer containing 5 μl (200 U, 40 U/μl) of RNase inhibitor, centrifuged, and stored at −80°C.Upon rapid thawing in a water bath, 5 μl (200 U) of RNase inhibitor was added to samples again, and RNA was extracted with on-column deoxyribonuclease (DNase) digestion for 15 min.RNA was eluted in 12 μl of RNase-free water.Quantity and quality were assessed using fragment analysis and on a NanoDrop (Thermo Fisher Scientific, NanoDrop One).RNA was reverse-transcribed to cDNA using the SuperScript VILO cDNA Synthesis Kit (Thermo Fisher Scientific, Invitrogen, 111754) and amplified using TaqMan PreAmp Master Mix according to the manufacturer's protocol.Briefly, TaqMan gene expression assay for human GAPDH (Thermo Fisher Scientific, Hs02786624_g1, 4453320, FAM-MGB), SOD2 (Thermo Fisher Scientific, Hs00167309_m1, 4453320, FAM-MGB), and ITPR3 (Thermo Fisher Scientific, Hs01573539_m1, 4448892, FAM-MGB) were pooled into a reaction containing TaqMan PreAmp Master Mix and cDNA and then preamplified on a thermal cycler for 14 cycles.Quantitative real-time polymerase chain reaction (PCR) was performed on amplified product using TaqMan Gene Expression Assays using an Applied Biosystems Step One Plus Real-Time PCR instrument (Thermo Fisher Scientific).Gene expression was analyzed via the Livak method (103). Preparation of injected drugs CNO (Tocris, 4935) was dissolved in 0.9% saline at a concentration of 0.5 μg/μl on the first day of injections and stored at 4°C between days.MitoQ (Focus Biomolecules, 10-1363) was dissolved to stock solution (70 mg/ml) in dimethyl sulfoxide and stored at −20°C.Stock solution was diluted daily in 0.9% saline to 1 μg/μl. Placental histology All placentas were rinsed in phosphate-buffered saline (PBS), grossly sectioned in half, and drop fixed in 10% neutral buffered formalin.After 48 hours, placentas were transferred to 70% ethanol before routine paraffin embedding, sectioning at 4 μm, and hematoxylin and eosin staining by the Medical College of Wisconsin Children's Research Institute histology core.Stitched images were created at ×20 magnification on a Keyence BZ-X series microscope for counting of decidual spiral arteries.The luminal diameter of spiral arteries and layer thickness were examined in ImageJ.All spiral arteries were measured and averaged for each placenta.Each placental layer was measured in the thickest region perpendicular to the base of the placenta.Investigator was blinded to treatment group when performing histological measurements. 
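Gene expression in the qPCR experiments above was analyzed with the Livak (2^−ΔΔCt) method against the GAPDH endogenous control. A minimal sketch of that calculation is shown below; the Ct values are invented purely for illustration.

```python
# The Livak (2^-ddCt) calculation used for the qPCR data above, sketched with
# made-up Ct values. GAPDH is the endogenous control, per the Methods.
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression of a target gene vs a control sample (2^-ddCt)."""
    d_ct_sample = ct_target - ct_ref              # normalize to GAPDH in the sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize to GAPDH in the control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# e.g. SOD2 in a preeclamptic sample vs a control sample (illustrative numbers)
print(fold_change(ct_target=24.1, ct_ref=18.0, ct_target_ctrl=25.3, ct_ref_ctrl=18.2))
```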
CD31 immunohistochemistry CD31 detection occurred using a Cell Signaling reagent system.Unstained 4-μm paraffin-embedded placental sections were deparaffinized and rehydrated in a series of xylene, ethanol, and water washes before citrate treatment (Cell Signaling, 14746) for antigen unmasking.Samples were blocked with 3% hydrogen peroxide for 10 min and 5% normal goat serum (Cell Signaling, 5425) in tris-buffered saline with Tween 20 (Cell Signaling, 9997) for 1 hour.CD31 rabbit monoclonal antibody (Cell Signaling, 77699, 1:100) in antibody diluent (Cell Signaling, 8112) was applied and incubated overnight at 4°C.Horseradish peroxidase SignalStain Boost Detection Reagent for rabbit immunoglobulin G (Cell Signaling, 8114) and diaminobenzidine (DAB) substrate (Cell Signaling, catalog no.8059) were used for CD31 staining.Slides were counterstained in hematoxylin, dehydrated, and mounted for imaging on a Keyence BZ-X series microscope.Stitches (20×) of the entire placenta were analyzed via color deconvolution and area quantification in ImageJ. Murine placental protein expression Layer-enriched tissue dissection of murine placenta was performed as outlined by Qu et al. (104).Briefly, uterine muscle and chorionic plate were removed before separating the junctional zone from the underlying labyrinth, which is characterized by a dark red appearance.Techniques were previously validated using quantitative PCR for layer-specific markers (decidua: Lgals3, spongiotrophoblast: Tpbpa, syncytiotrophoblast II: Gcm1).Whole thickness placental or labyrinth dissections were rinsed in cold PBS and then transferred to a microcentrifuge tube on ice containing 200 μl of cell lysis buffer (Thermo Fisher Scientific, Invitrogen, FNN0011) prepared with 1× Roche cOmplete protease inhibitor and 1 mM phenylmethylsulfonyl fluoride.Samples were incubated on ice for 30 min, homogenized with a disposable pestle, and centrifuged at 10,000g for 10 min at 4°C.The supernatant was transferred to a new tube for protein quantification using the Pierce BCA Protein Assay Kit (Thermo Fisher Scientific, 23227).SOD2 (Cell Signaling, rabbit monoclonal antibody, 13194, 1:200), catalase (Cell Signaling, rabbit polyclonal antibody, 14097, 1:200), and MDA (Abcam, rabbit polyclonal antibody, ab27642, 1:100) protein levels were assessed via Protein Simple Chemiluminescence immunoassay on a Wes automated capillary-based Western blot system and normalized to ACTB (Abcam, rabbit polyclonal antibody, ab8227, 1:100).Expression was quantified in Compass for SW (6.1.0).VEGFA (Thermo Fisher Scientific, Invitrogen, EMVEGFACL) and PlGF (Thermo Fisher Scientific, Invitrogen, EMPGF) were measured per commercially available enzyme-linked immunosorbent assays (ELISAs). HIF1α protein localization Cytoplasmic and soluble nuclear protein fractions of whole-thickness mouse placenta were isolated using a Subcellular Protein Fractionation Kit for Tissues (Thermo Fisher Scientific, 87790) and quantified with the Pierce BCA Protein Assay Kit.HIF1α (Abcam, recombinant rabbit monoclonal antibody, ab179483, 1:50) was probed through Protein Simple Chemiluminescence immunoassay on a Wes instrument and normalized to ACTB (Abcam, Rabbit polyclonal antibody, ab8227, 1:100) for cytoplasmic fraction and HDAC1 (Novus Biologicals, rabbit polyclonal antibody, NB10056340SS, 1:200) for soluble nuclear fraction.Quantification occurred in Compass for SW (6.1.0). 
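The CD31 immunostaining above was quantified in ImageJ via color deconvolution and area quantification. As a rough analogue of that step, not the authors' workflow, the sketch below separates the DAB (CD31) signal with scikit-image's built-in H&E-DAB stain matrix and reports the DAB-positive area fraction; the file name, the Otsu thresholding choice, and the absence of a labyrinth-specific region of interest are all simplifications.

```python
# Analogous (not the authors') approach to the ImageJ colour-deconvolution step:
# unmix the DAB (CD31) channel and report the positive-area fraction.
import numpy as np
from skimage import io
from skimage.color import rgb2hed
from skimage.filters import threshold_otsu

def dab_positive_fraction(path):
    rgb = io.imread(path)                    # stitched brightfield image (RGB)
    hed = rgb2hed(rgb)                       # unmix Hematoxylin / Eosin / DAB
    dab = hed[..., 2]
    mask = dab > threshold_otsu(dab)         # threshold the DAB channel
    # In practice the mask would be restricted to a hand-drawn labyrinth ROI.
    return mask.mean()                       # fraction of pixels called CD31+

print(dab_positive_fraction("placenta_labyrinth_stitch.tif"))  # placeholder file name
```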
Urine protein Twenty-four-hour urine samples were collected from metabolic cages, as previously described (26,35). Urine was diluted 1:100 and protein content was determined using the Pierce BCA Protein Assay Kit. Concentration (milligrams per milliliter) was calculated in reference to the standard curve. Total daily protein excretion (milligrams per day) was established by multiplying concentration (milligrams per milliliter) by the volume of urine collected. Electron microscopy Peripheral regions of fresh kidney and flash-frozen placenta were minced into approximately 1-mm squares and transferred to glutaraldehyde for 48 hours at 4°C. Tissue was then washed in sodium cacodylate buffer three times for 15 min each and stored in sodium cacodylate buffer at 4°C. Further processing and imaging were conducted by the Medical College of Wisconsin Electron Microscopy Core using a JEOL 2100 transmission electron microscope equipped with an ultrahigh-resolution digital camera. Images were interpreted by a board-certified veterinary pathologist and a renal pathologist. RNAscope Chromogenic duplex RNAscope-based in situ hybridization was performed on paraffin-embedded 4-μm placental sections (Advanced Cell Diagnostics, 322430). Sections were baked, deparaffinized, and pretreated with hydrogen peroxide, target retrieval agent, and protease plus. Double Z oligo probes for the RNA targets were then hybridized (Gcm1, 429661; Cre, 312281) before storage in 5× SSC buffer overnight at room temperature. Probes were then amplified and detected before counterstaining, mounting, and evaluation on a Keyence BZ-X series microscope. Fluorescent imaging Frozen placenta was sectioned at 10 μm on a Leica CM1950 Cryostat and cover-slipped with ProLong Diamond Antifade Mountant with 4′,6-diamidino-2-phenylindole (Thermo Fisher Scientific, Invitrogen, P36966). Stitched images of fluorophores were captured at 10× on a Keyence BZ-X and a Nikon ECLIPSE 80i microscope. Insets were acquired at 100× on a Nikon A1 laser scanning confocal microscope. Profiling of maternal circulating factors Circulating maternal immune and cardiovascular markers were detected with Eve Technologies (Calgary) multiplex assays. Undiluted human and mouse plasma was sent for the Human Cytokine/Chemokine 71-Plex Discovery Assay (HD71), Human MMP and TIMP Discovery Assay (HMMP/TIMP-S,P), and Mouse Cardiovascular Disease Panel 1 7-Plex Discovery Assay Array (MDCVD1), and plasma was diluted 1:1 in PBS for the Mouse Cytokine/Chemokine 44-Plex Discovery Assay Array (MD44). Arrays consisted of fluorescent beads coupled to capture antibodies and biotinylated detection antibodies bound to a streptavidin-phycoerythrin conjugate. Analyte identification was enabled by a dual-laser flow cytometry bead analyzer. Raw data were log-transformed for plotting and statistical comparisons. sFLT1 within mouse plasma was measured via a commercially available ELISA (MyBioSource, MBS161443).
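The daily proteinuria measure described under "Urine protein" above is a simple product of concentration and 24-hour urine volume. The sketch below spells that arithmetic out; applying the 1:100 dilution factor at this step, and the example numbers, are assumptions for illustration.

```python
# Daily urinary protein as described above: concentration from the BCA standard
# curve times 24-h urine volume. Correcting the plate reading for the 1:100
# dilution is assumed here; the input values are illustrative.
def daily_protein_mg(diluted_conc_mg_per_ml, urine_volume_ml, dilution_factor=100):
    neat_conc = diluted_conc_mg_per_ml * dilution_factor   # undo the 1:100 dilution
    return neat_conc * urine_volume_ml                     # mg excreted per day

print(daily_protein_mg(diluted_conc_mg_per_ml=0.004, urine_volume_ml=1.8))  # ~0.72 mg/day
```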
Blood pressure by radiotelemetry Blood pressure and heart rate were recorded by radiotelemetry as described (26,35,105).Radiotelemetric probes (DSI TA11PA-C10) were implanted into the common carotid artery under ketamine/xylazine anesthesia followed by two doses of postprocedure meloxicam.Mice were allowed 1 week to recover after surgery and another week to acclimate to the telemetry suite.Male and female mice were then housed together for a single overnight mating and remated weekly until successful pregnancy was achieved.Pregnancy data were collected at 2000 Hz from GD 11.5 until euthanasia.Continuous recordings were taken 2 hours before injection and 5 hours postinjection.All other recordings were on a 10 min/hour schedule.Dams were initially euthanized at GD 17.5, but this was adjusted to GD 16.5 due to frequent early delivery. RNA sequencing Placental transcriptomes were examined by RNA sequencing at GD 14.5; normalized transcript count data have been submitted to the National Center for Biotechnology Information Gene Expression Omnibus (GSE221732) and raw sequence to the Short Read Archive.Frozen placenta was homogenized in 1 ml of TRIzol with 1.4-mm ceramic beads (PerkinElmer, SKU 19-627) on a Bead Mill Mini Homogenizer (Thermo Fisher Scientific, 15-340-164) before the addition of 200 μl of chloroform.Samples were then vortexed for 15 s, incubated at room temperature for 3 min, and centrifuged at 14,000g for 15 min at 4°C.Aqueous phase was extracted and combined with an equal volume of 70% ethanol, vortexed, and transferred to a spin column from a PureLink RNA mini kit (Thermo Fisher Scientific, Invitrogen, 12183018A).RNA was extracted in concordance with the provided protocol, including Pure-Link DNase treatment (Thermo Fisher Scientific, Invitrogen, 12185010).Total RNA was submitted to the Medical College of Wisconsin Mellowes Center for Genomic Sciences and Precision Medicine for bulk RNA sequencing.Briefly, RNA was quantified on a Qubit fluorometer and then evaluated for quality using RNA integrity number (RIN) (9.6 ± 0.1) and percentage of RNA fragments > 200 nucleotides (DV200) (90 ± 0.2) on a fragment analyzer.Libraries were generated with a Takara low input SMART-Seq Stranded preparation kit and underwent quality control through Kapa Quantification and a MiSeq 50 cycle run, and 100-bp paired-end sequencing was performed on a NovaSeq platform with a 200-cycle SP flow cell and a targeted sequence depth of 55 million reads per sample (table S7). Transcriptomic data analysis The quality of prealigned data was computed by FastQC.All samples had a median per base sequence quality score above 30, and the average quality per read was 31 (with a value of 27 equating to a 0.2% error rate).Reads were aligned to the Gencode vM23 (GRCm38.p6)mouse reference genome with Star and further processed in the MAPRSeq3 workflow to obtain read counts at the gene and exon levels.Aligned data were inspected via RSeQC modules to ensure appropriate quality metrics including coverage uniformity, inner distance between paired reads, and sequencing saturation (fig.S26).Differential gene expression was tested in EdgeR with a pairwise approach comparing: CNO Cre + versus saline Cre − , CNO Cre + versus CNO Cre − , and CNO + MitoQ Cre + versus CNO Cre + .Lists of up-regulated genes (P < 0.05, minimum log 2 fold change of 0.23) were input to Shiny GO (versions indicated in text) for enrichment analysis.Pathways with an enrichment false discovery rate of <0.15 were considered. 
Statistical analyses

Quantitative physiological and histological data are depicted as mean ± SEM unless otherwise indicated. For mouse fetoplacental assessments, each individual data point represents a separate pregnancy. Experimental results for continuous variables were tested for normality and equal variance. Parametric data were analyzed by t test (two-tailed) or analysis of variance (ANOVA) followed by Bonferroni correction for multiple comparisons, and nonparametric data by Mann-Whitney U test, using GraphPad Prism 9.5.0. P < 0.05 was used as the threshold for significance. Differences in the distribution of categorical variables between groups were assessed by χ2 test.

Supplementary Materials

This PDF file includes: Fig. S1 to S26 and Tables S1 to S7.
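The test-selection logic described above (normality check, then a parametric or nonparametric comparison) can be sketched as follows. The study used GraphPad Prism, so this SciPy version is only an illustrative analogue with invented data, and it omits the equal-variance check for brevity.

```python
# Illustrative sketch only: Shapiro-Wilk on each group, then a two-tailed
# t test if both pass, otherwise Mann-Whitney U. Data are made up.
from scipy import stats

def compare_two_groups(a, b, alpha: float = 0.05):
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

test, p = compare_two_groups([5.1, 5.4, 4.9, 5.3], [6.0, 6.2, 5.8, 6.4])
print(test, p)
```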
The Influence of Individual Characteristics, Organizational Factors and Job Satisfaction on Nurse Performance

Hospitals employ professional human resources in various service fields, one of which is nursing. This study aims to determine the effect of individual characteristics, organizational factors, and job satisfaction on the performance of nurses. The research is an analytical survey with a cross-sectional design. The sample consisted of 165 nurses recruited by accidental sampling. The study used questionnaires on organizational factors, job satisfaction, and performance, and was conducted from November to December 2019 at Bunda Thamrin General Hospital. Bivariate analysis used the chi-square (X2) test, and multivariate analysis used logistic regression. The results showed an effect (p < 0.05) of years of service (p = 0.02; RR = 0.30), supervision (p = 0.00; RR = 3.83), rewards (p = 0.00; RR = 5.20), and job satisfaction (p = 0.00; RR = 8.00) on nurse performance. Nurses who rated supervision as good had about four times the odds of good performance (OR = 3.76), and those with good job satisfaction about ten times (OR = 10.36). There was no effect (p > 0.05) of gender (p = 0.55; RR = 0.38), age (p = 0.29; RR = 0.52), education level (p = 0.32; RR = 1.72), career path (p = 0.08; RR = 0.41), or marital status (p = 0.74; RR = 0.73) on nurse performance. Job satisfaction and supervision are the variables with the greatest influence on nurse performance. It is therefore important for institutions to improve the quality of supervision (increase supervisory competence, set supervision materials and schedules) and to increase job satisfaction (considering motivational and hygiene factors), as well as to provide appropriate rewards and retain experienced nurses, so that nurse performance can improve.

Introduction

The hospital has professional human resources in various service fields, one of which is the nurse. Nurses are the health care professionals with the most frequent and longest interactions with patients, so the quality of nursing services shapes the image and quality of the hospital. Maimun's research at Bhayangkara Hospital Pekanbaru reported a low nurse performance score of 53.45 (Fajrillah & Nurfitriani, 2016). Hidayat's research at a Surabaya hospital showed low performance in 50% of nurses (Fajrillah & Nurfitriani, 2016). Both studies show that the performance of most nurses is still not optimal in providing nursing services to patients. Pinem's research in Medan City showed that the performance of 61.7% of nurses in the inpatient room at Mitra Sejati General Hospital Medan fell into the poor category (Manalu, 2019). The results of multiple linear regression showed that the organizational mission variable (p = 0.005) affected nurse performance. Performance is the measurable, real output of an employee's work, judged against the standards of the job within an organization. Nurse performance is influenced by several factors. Hutauruk's research found that motivation (p = 0.000), workload (p = 0.037), compensation (p = 0.042), career development (p = 0.002), and education (p = 0.000) had significant relationships with nurse performance (Hutauruk et al., 2017), whereas work climate (p = 0.059), work ability (p = 0.135), and years of service (p = 0.697) did not. Nurse performance is also influenced by organizational factors.
Research shows that the success or failure of an organization depends on the performance of its nurses, and vice versa (Hameed & Waheed, 2011). Another factor related to performance is job satisfaction. Talasaz's research reported a positive relationship between job satisfaction and performance in Iranian health centers (p = 0.00, P < 0.01) (Gannika & Buanasasi, 2019). High job satisfaction leads to a correspondingly higher level of performance. Observations of organizational factors, individual characteristics, job satisfaction, and nurse performance serve as parameters of success in delivering nursing care. Successful nursing care not only improves patient recovery but also the quality of service and the image of the hospital. Therefore, the research problem of this study was formulated as: how do individual characteristics, organizational factors, and job satisfaction influence the performance of nurses in the Inpatient Installation of Bunda Thamrin General Hospital?

Methods

This research used an analytical cross-sectional survey design. The sample consisted of 165 inpatient installation nurses at Bunda Thamrin General Hospital, recruited by accidental sampling; the sample size was determined using the Slovin formula. The study used questionnaire instruments: the organizational factors questionnaire was developed from Gibson's theoretical concept, the job satisfaction questionnaire from the two-factor theory, and the performance questionnaire from Gomes's concept of performance (Amin, 2004; Stimpfel et al., 2012). The study was conducted in November-December 2019 at Bunda Thamrin General Hospital. Bivariate analysis used the chi-square (X2) statistical test and multivariate analysis used logistic regression, with significance set at p < 0.05.

Results and Discussion

Individual Characteristics of Nurses

Table 3 shows that 153 nurses (92.70%) were satisfied with their work and 12 (7.30%) were not. Table 4 shows that the majority of respondents, 142 (86.10%), had good performance. From the multivariate logistic regression in Table 6: nurses with good job satisfaction were 10 times more likely to perform well than nurses with poor job satisfaction, and nurses who rated supervision as good were 4 times more likely to perform well than nurses who rated it as poor.
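To make the bivariate and multivariate steps concrete, the sketch below runs a chi-square test on a 2×2 supervision-versus-performance table and computes the crude odds ratio. This is a minimal Python illustration with invented counts, not the study's data or the software it used.

```python
# Toy illustration of the bivariate step: chi-square on a 2x2 table of
# supervision quality vs. nurse performance. Counts are hypothetical.
import numpy as np
from scipy import stats

#                 good perf   poor perf
table = np.array([[90,         10],     # supervision rated good
                  [52,         13]])    # supervision rated poor

chi2, p, dof, _ = stats.chi2_contingency(table)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"chi2={chi2:.2f}, p={p:.3f}, OR={odds_ratio:.2f}")
```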
Age

The results showed no effect of age on nurse performance (p = 0.29): in each age group, more than 80% of nurses performed well, so every age group has the same potential to produce good performance. This may be because the respondents were aged between 22 and 45 years, which theory classifies as early adulthood (21-35 years) and middle adulthood (35-45 years) (Pieter, 2017). This is the productive age range, in which individuals start their careers and seek career stability by showing their best performance, and in which the body still tolerates activity well enough to produce good performance without health problems. The fact that more than 80% of nurses at Bunda Thamrin General Hospital are in the productive age group and showed good performance may be the reason age did not affect performance in this study. According to theory, an individual starts a career at an average age of 25 years: at 25-30 years a person chooses a field of work, at 30-40 years the career consolidates, and beyond 40 years the career declines, followed by a decrease in performance (Karyo).

Sex

The bivariate analysis showed no effect of gender (p = 0.55) on performance. This may be because of the very dominant difference in proportion between female and male nurses: 95.80% of the nurses were female and only 4.20% were male. In addition, the statements in the performance questionnaire assess performance through the quantity of work, quality of work, knowledge of work, creativity, cooperation, responsibility, and initiative. According to Ilyas's theory, the effect of gender on performance concerns working hours: male nurses with a high family burden increase their weekly working hours compared with female nurses (Mandagi et al., 2015; Sakban et al., 2019). The predominance of female nurses is consistent with data from the Ministry of Health of the Republic of Indonesia showing that 71% of nurses are women. Male nurses are mostly placed in special rooms that demand extra physical effort, such as operating rooms and emergency installations; at Bunda Thamrin General Hospital, male nurses are mostly placed in special units such as the Emergency Room, Surgery Room, and Intensive Care Unit. These results accord with the research of Tsai et al. at a hospital in Taiwan, which found no difference between men and women in providing innovation in nursing care (Tsai, 2013). Female and male nurses have the same responsibility to provide quality nursing services, and there is no difference in treatment or workload between them (Kambuaya et al., 2016).

Educational Degree

The bivariate analysis showed no effect of education level on nurse performance (p = 0.32). Nurses with a Diploma III education and nurses with professional nursing education each showed good performance in more than 80% of cases, meaning that each level of education has the same potential to produce good performance. This may be because Bunda Thamrin General Hospital conducts routine coaching and training, both external and especially internal, aimed at improving nurses' knowledge and skills. In addition, the hospital has many standard operating procedures for each nursing care process and nursing action, which are socialized regularly and periodically, so that every nurse has the same knowledge of each process and action in accordance with the standards set by hospital management. Another possible reason is that the performance aspects studied here, in accordance with theory, relate to work quantity, work quality, knowledge of work, creativity, cooperation, responsibility, and initiative (Amin, 2004), and nurses at both education levels showed good performance on all seven aspects.
Years of Service

The bivariate analysis showed that nurses with more than 5 years of service performed better than nurses with 5 years of service or less; in other words, tenure affected nurse performance (p = 0.02). Tenure can affect performance because a nurse with a long working period in a hospital has had time to adapt to the hospital's regulations and nursing work methods. In addition, the experience of delivering nursing care and nursing actions, and the training obtained over the working period, give these nurses better skills than nurses with short tenures. The study found that Bunda Thamrin General Hospital has internal nursing regulations in the form of nursing policies, nursing service guidelines, standard operating procedures, and nursing care standards. All of these serve as references for nurses in carrying out nursing care tasks, nursing actions, and nursing administration. These internal regulations are consistently and periodically socialized to all nurses, and they are implemented in the nursing care medical record forms, guiding nurses in carrying out their duties. Knowledge of the regulations is also evaluated periodically at each structural and competency step of the nursing career ladder. Nurses with longer tenure have therefore received more instruction in the nursing regulations and more experience in applying them and in practicing nursing skills, which gives them better performance than nurses with shorter tenure. In general, the longer the tenure, the more experience the worker accumulates; conversely, shorter tenure means less experience. Work experience builds expertise and work skills, while limited experience results in lower levels of both. Work experience is a main asset, though not the only one, for engagement in a given field of work. These results accord with the view that tenure affects the capacity and performance level of nurses in inpatient rooms (Stimpfel et al., 2012). Fundamentally, the longer the nurse's tenure, the more proficient the nurse becomes, with better capacity and ability than nurses with fewer years of service. Expertise grows by doing the same tasks repeatedly, increasing skill in delivering nursing interventions. The experience gained over the working period also makes senior nurses more confident in treating patients, because they have been trained longer and are more agile.

Career Path

The bivariate analysis showed no relationship between career path and nurse performance (p = 0.08). At Bunda Thamrin General Hospital, every nurse has a structural career path and a competency career path. From the start of employment, all nurses follow a structural career path: from implementing nurse, to nurse in charge of the shift, to head nurse of the room, head nurse of the section, and head nurse of the nursing sub-field (head of nursing).
Advancement along the structural career path is influenced by years of service, performance indicators, and the competency level/functional career path. The career path examined in this study is the functional career path, i.e., the competency level of Clinical Nurses (PK). At the Inpatient Installation of Bunda Thamrin General Hospital, PK levels consist of Pre-PK, PK I, PK II, and PK III; no nurse holds the PK IV level. Passing the competency test depends on education level, years of service, certification, written and oral exam results, and the practical component of the competency test for the target PK level. Before the competency test, each nurse receives coaching and training on the nursing care materials and nursing actions that will form the nurse's clinical authority at the intended competency level. The process of determining nursing competence, called credentialing, is carried out by the Nursing Committee; after credentialing, each nurse holds clinical authority for their work. Because of this coaching and competency testing, nurses are required to work to established standards. This may explain why career path did not affect performance: at every level of the career path there are performance standards that must be achieved, measured by individual performance indicators.

Marital Status

The bivariate analysis showed no effect of marital status on nurse performance (p = 0.74); unmarried and married nurses had the same opportunity to show good performance. In the researchers' view, this may be because the performance aspects studied, in accordance with theory, relate to the quantity of work, quality of work, knowledge of work, creativity, cooperation, responsibility, and initiative (Amin, 2004), aspects that are not influenced by marital status. In theory, marital status affects performance by increasing responsibility, making a permanent job more valuable and important (Kumajas et al., 2014). This finding is in line with research showing that marital status was not related to nurse performance at Prof. Dr. V. L. Ratumbuysang Hospital, North Sulawesi (Gannika & Buanasasi, 2019). It contradicts research finding that marital status was related to nurse performance (p = 0.00) (Kumajas et al., 2014), and the research of Tsai et al., which found a relationship between marital status and nurses' ability to innovate in nursing care: married nurses were 15 times more likely to innovate in nursing care than unmarried nurses. A married person experiences less turnover and is more satisfied with the job than an unmarried person (Liou et al., 2013).

Supervision

The bivariate analysis showed an effect of supervision on nurse performance (p = 0.00). The multivariate analysis indicates that nurses who rated supervision as good were four times more likely to perform well than nurses who rated supervision as poor.
This finding is in line with research at Tugurejo Hospital Semarang, which found a relationship between supervision by the head of the room and the performance of nurses in the inpatient room (Pebriani, 2016). The head of the room, as supervisor, appreciates what the implementing nurse has achieved and provides solutions to problems (Maimun & Yelina, 2016). Qualitative research has found that clinical supervision by supervisors affects nurses' clinical ability to deliver nursing care (Bos et al., 2015). During supervision, discussions are held with experienced superiors to solve problems, increase knowledge, and develop nursing practice (Brunero & Stein-Parbury, 2008). At Bunda Thamrin General Hospital, supervision is carried out routinely every day by supervisors consisting of the head nurse, the head nurse of the installation, the duty manager, and the head of the nursing sub-sector. The hospital already has a supervision sheet listing the materials to be supervised, although its use has not been consistent. Every supervision activity includes a supervisory and coaching function for the supervised nurses, covering the implementation of nursing care, nursing actions, and nursing administration. Potential and actual problems found during supervision are resolved at the time and reported to the head of the nursing sub-sector. Nurses acting as supervisors face no special competence requirements for the supervisory role itself; however, they are managerial nurses, and the managerial career path carries its own requirements: a minimum competency level of PK II, 3-5 years of service, experience as head of a room, a minimum D3 education (for head nurses), and S1 professional nursing education for installation head nurses, duty managers, and heads of nursing sub-sectors. Managerial nurses acting as supervisors are therefore considered competent to carry out supervision. The competencies of supervisors should nonetheless be a concern for hospital nursing services: the better a supervisor's competence, the better the quality of supervision in improving performance. The demand for good performance is the reason supervision should be carried out by competent people. The competencies a supervisor must have are: 1) the ability to give clear directions and instructions; 2) the ability to give the advice and assistance staff need; 3) the ability to motivate staff to improve performance and morale, knowing when and how supervision should be carried out; 4) the ability to provide training and guidance for staff; and 5) the ability to assess staff objectively and correctly (Arwani, 2006).

Rewards

The bivariate analysis showed an effect of rewards on nurse performance (p = 0.00). The benefits received by nurses at the Inpatient Installation of Bunda Thamrin General Hospital consist of a monthly salary comprising basic salary, a food allowance, and other allowances. The basic salary accords with government regulations on the minimum wage.
In addition, the basic salary takes into account the nurse's education level, work experience, certification, and competency level. The salary differs between nurses, is confidential, and is received regularly and on time at the end of each month. Beyond the monthly salary, nurses receive annual bonuses/incentives, the amount of which reflects the achievement of performance indicators, position, and years of service. Nurses also receive a holiday allowance, equivalent to one month's basic salary, during the month of their religious celebration. Every nurse and employee also receives health benefits through BPJS Health membership and old-age benefits through BPJS Employment membership, with contributions paid by the company. In the employee satisfaction survey conducted by hospital management in 2019, satisfaction with the compensation variable was 49%, meaning that 49% of all human resources at Bunda Thamrin General Hospital were satisfied with the compensation provided by management. The effect of rewards on performance lies in the strength of the reward itself as a motivator for nurses at work. Rewards become stronger motivators as nurses develop themselves, and they can shape individual perceptions of the fulfillment of motivational needs. Compensation matters to nurses as employees because its amount reflects the value of their work to themselves, their families, and society. Nurses can perform their work when compensated both financially and non-financially; good compensation creates job satisfaction and can improve performance in delivering nursing care (Arumwanti, 2014).

Job Satisfaction

The bivariate analysis showed an effect of job satisfaction on nurse performance (p = 0.00); job satisfaction was the factor with the greatest influence on performance. The multivariate analysis showed that nurses with good job satisfaction were 10 times more likely to perform well than nurses with poor job satisfaction (p = 0.00; OR = 10.36). This result is in line with research finding a relationship between job satisfaction and nurse performance at Soewandi Hospital (p = 0.00) (Fajrillah & Nurfitriani, 2016). Improved nurse performance arising from satisfaction at work in turn increases the satisfaction of patients as recipients of nursing care services. Job satisfaction arises to the degree that work outcomes match the individual's expectations: the more the expected results are received, the greater the satisfaction. When job satisfaction, a positive attitude of individuals toward their work, is created, good individual performance is achieved (Kousar et al., 2018).

Conclusion

Individual characteristics (years of service), supervision, rewards, and job satisfaction influence nurse performance. The variables with the greatest influence are job satisfaction and supervision.
Nurses with good job satisfaction are ten times more likely to perform well than nurses with poor job satisfaction, and nurses who rate supervision as good are four times more likely to perform well than nurses who consider supervision poor.
Bone Differentiation Ability of CD146-Positive Stem Cells from Human Exfoliated Deciduous Teeth

Regenerative therapy by mesenchymal stem cell (MSC) transplantation has received much attention. The cluster of differentiation (CD) 146 marker, a stem cell surface antigen, is crucial for angiogenic and osseous differentiation abilities. Bone regeneration is accelerated by the transplantation of CD146-positive deciduous dental pulp-derived mesenchymal stem cells contained in stem cells from human exfoliated deciduous teeth (SHED) into a living donor. However, the role of CD146 in SHED remains unclear. This study aimed to compare the effects of CD146 on cell proliferative and substrate metabolic abilities in a population of SHED. SHED was isolated from deciduous teeth, and flow cytometry was used to analyze the expression of MSC markers. Cell sorting was performed to recover the CD146-positive cell population (CD146+) and the CD146-negative cell population (CD146-). CD146+ SHED, unsorted SHED, and CD146-SHED were examined and compared among the three groups. To investigate the effect of CD146 on cell proliferation, proliferative ability was analyzed using BrdU and MTS assays. Bone differentiation ability was evaluated using alkaline phosphatase (ALP) staining after induction of bone differentiation, and the expressed ALP protein was quantified. We also performed Alizarin red staining and evaluated the calcified deposits. The gene expression of ALP, bone morphogenetic protein-2 (BMP-2), and osteocalcin (OCN) was analyzed using real-time polymerase chain reaction. There was no significant difference in cell proliferation among the three groups. The expression of ALP stain, Alizarin red stain, ALP, BMP-2, and OCN was highest in the CD146+ group. CD146+ SHED had higher osteogenic differentiation potential than SHED and CD146-SHED. The CD146+ population contained in SHED may be a valuable population of cells for bone regeneration therapy.

Introduction

Regenerative medicine is a medical technology that uses stem cells to regenerate tissues that have become dysfunctional; it was developed as a new therapeutic technology to replace organ and bone transplantation [1-4]. Mesenchymal stem cells (MSCs) were first identified as colony-forming cells with the ability to differentiate into osteoblasts, adipocytes, and chondrocytes within bone marrow organs [5]. MSCs are present in skeletal muscle, adipose tissue, placenta, dental pulp, and periodontal ligament, and play roles in preparing for and maintaining homeostasis during the restoration of compromised tissues [6,7]. In addition, since MSCs can be collected from tissues and grown under standardized culture conditions, they are used as transplanted cell preparations for autoimplantation in the medical field; cell preparations for skin and cartilage regeneration have already been marketed [8,9]. Notably, techniques for the isolation and culture of MSCs from oral tissues have also been established.
Figure 1. (a) Among the heterogeneous population of SHED isolated from the deciduous pulp, 70.9 ± 4.3% of cells expressed CD146; MSCs were positive for CD90, CD73, and CD105 but negative for CD14, CD19, CD34, and CD45. (b) Comparison of population doubling time (PDT): the log phase was assessed from the cell growth curve from day 2 to day 6 of incubation, and the PDT was calculated for this period; PDT did not differ significantly among the three groups (n = 5, Kruskal-Wallis test, N.S.). (c,d) Comparison of cell proliferation: (c) two hours after BrdU treatment, DNA synthesis in SHED was slightly higher than in CD146+ SHED and CD146-SHED, without significant differences among the three groups; (d) twenty-four hours after BrdU treatment, the results were similar (n = 5, Kruskal-Wallis test, N.S.).

Population Doubling Time

The PDT was …, 46.88 h, and 47.15 h in SHED, CD146-SHED, and CD146+ SHED, respectively; however, there were no significant differences among the three groups (Figure 1b).

BrdU Proliferation Assay

Although the proliferative capacity of SHED was slightly higher than that of CD146+ SHED and CD146-SHED at 2 h and 24 h after BrdU administration, there were no significant differences among the three groups (Figure 1c,d).

2.3. Osteogenic Differentiation-Related Gene Expression Analyses in SHED, CD146+ SHED, and CD146-SHED

Before the induction of osteogenic differentiation, the gene expression of ALP, BMP-2, and OCN did not differ significantly among SHED, CD146+ SHED, and CD146-SHED (Figure 2a). However, CD146+ SHED had significantly higher gene expression of ALP, BMP-2, and OCN on days 21 and 28 after osteodifferentiation induction than SHED and CD146-SHED (Figure 2b,c). In addition, SHED showed significantly higher gene expression of ALP, BMP-2, and OCN than CD146-SHED (Figure 2b,c).

Comparative Analysis of Calcium Deposition among SHED, CD146+ SHED, and CD146-SHED

Alizarin red staining was dense in CD146+ SHED, SHED, and CD146-SHED on day 28 after the induction of osteogenic differentiation (Figure 3c), with staining intensity decreasing in the order CD146+ SHED, SHED, CD146-SHED. In addition, the absorbance assay showed that CD146+ SHED had significantly higher levels than SHED and CD146-SHED, indicating increased calcified deposits in CD146+ SHED (Figure 3d).

Discussion

MSCs are isolated from various tissues, such as bone marrow, adipose tissue, umbilical cord, and dental pulp, and implanted in defective tissues to promote tissue restoration and regeneration. Since deciduous teeth are shed spontaneously as the permanent teeth replace them, harvesting SHED is less invasive than harvesting BMSCs, adipose-derived MSCs, or umbilical cord-derived MSCs. Since SHED and BMSCs have a similar bone regeneration capacity, the present study investigated SHED and BMSCs as sources of cells for transplantation into bone defects.
In tissues with abundant blood vessels, such as the pulp, there is a microenvironment around the blood vessels called the MSC niche [35,36], comprising MSCs, hematopoietic stem cells, mesenchymal progenitors, fibroblasts, and pericytes (vascular pericytes) [37]. CD146 is expressed in MSCs and in the plasma membrane of vascular endothelial cells and pericytes, and is associated with cell-to-cell and cell-to-extracellular-matrix adhesion [38,39]. The surface antigen CD146 expressed on MSCs is a receptor for growth factors such as Netrin-1, Wnt-1, and vascular endothelial growth factor (VEGF)-C [40], and it reportedly acts as a coreceptor for VEGF receptor-2 (VEGFR2) and platelet-derived growth factor receptor beta, thereby contributing to angiogenesis and vascular maintenance [41,42]. Since MSCs and pericytes are both present in MSC niches and express some of the same surface antigens, including CD146, pericytes are considered the origin of MSCs, and CD146+ MSCs may be very close to pericytes [43-45]. In the present study, surface-antigen analyses of SHED-like cells revealed that more than 99.5% of the heterogeneous cells expressed CD105, CD73, and CD90, whereas less than 3% expressed CD14, CD19, CD34, and CD45. SHED isolated from the pulp of deciduous teeth thus met the requirements for defining MSCs established by the International Society for Cellular Therapy [46]: CD105, CD73, and CD90 are expressed in these cells under standard culture conditions, whereas CD45, CD34, CD14, CD11b, CD79a, CD19, and HLA-DR are not. Although CD105, CD73, and CD90 are MSC-positive markers, they are also expressed in hematopoietic stem cells and blood cells [47], whereas the MSC-negative markers CD14, CD19, CD34, and CD45 are specific markers for hematopoietic stem cells and hemocytes [47]. In the present study, cells isolated from the deciduous pulp were positive for CD105, CD73, and CD90 and negative for CD14, CD19, CD34, and CD45, and were therefore very likely to be SHED. Differentiation of SHED isolated by a similar procedure into osteoblasts, adipocytes, and chondrocytes in vitro was confirmed by Miura et al. [13] and Nakajima et al. [23]. In this study, flow cytometric analysis showed that CD146 was expressed in approximately 70.9% of the heterogeneous SHED population, compared with 48.39-66.3% reported previously [48,49]. In this study, adequate numbers of CD146+ SHED were isolated from SHED by cell sorting, which might contribute to the slightly higher figure than previously reported (70.9% vs. 48.39-66.3%). These findings demonstrate the potential of CD146+ SHED as a valuable source of cells for future clinical applications. The properties of SHED, CD146+ SHED, and CD146-SHED were investigated in cellular studies. Based on the BrdU cell proliferation assay and PDT analysis, there were no significant differences in proliferative capacity among the three groups, although SHED had a slightly higher proliferative capacity. CD146+ MSCs exhibited significantly higher cell proliferative capacity than CD146-MSCs in human endometrium-derived MSCs and periodontal ligament-derived MSCs [50,51]; however, other work has found higher proliferative potential in CD146-MSCs [42]. Paduano et al. reported that the relative proliferative capacity of CD146- and CD146+ MSCs varied between studies [52].
In the present study, there were no significant differences in proliferative ability among CD146-SHED, CD146+ SHED, and SHED. Factors that may account for the differences between reports include the type of MSC, the performance of the cell-conditioning or sorting devices used, and the effect of the procedure itself. In this study, cells were passaged after cell sorting, cultured to confluence, and seeded in 96-well and 12-well plates for cell growth testing. However, CD146+ SHED and CD146-SHED grew more slowly immediately after cell sorting than unsorted SHED. The longer handling times after releasing cells from the dish, and the several centrifugations during the cell sorting procedure, might have had minor adverse effects on CD146+ SHED and CD146-SHED. Further investigations under different conditions, such as a reduced number of passages after cell sorting, are warranted. Comparative analysis of bone differentiation potential showed that CD146+ SHED had a higher bone differentiation potential than SHED and CD146-SHED, based on ALP staining, quantitative ALP analysis, and Alizarin red staining of SHED cultured in an osteogenic differentiation-inducing medium. During osteogenic differentiation induction, progenitor cells differentiate into osteoblasts and express ALP from the early to mid stage, followed by expression of bone-specific OCN and, ultimately, production of hydroxyapatite and collagen I with the crystallographic architecture present in the bone matrix in vivo [53-55]. MSC-differentiated osteoblasts are known to secrete VEGF [56]. As an autocrine factor, VEGF secreted by osteoblasts differentiated from SHED may have promoted the osteogenesis of SHED. VEGFR2 is expressed in CD146-SHED [57] and promotes the expression of bone morphogenetic protein-2 (BMP-2) and Runx2, and CD146 acts as a coreceptor for VEGFR2 in CD146+ SHED. In addition, DPSC bone differentiation is promoted by VEGF in vitro [58]. Moreover, genetic analysis on days 21 and 28 after the initiation of osteogenic differentiation showed that CD146+ SHED had significantly higher gene expression of ALP, OCN, and BMP-2 than SHED and CD146-SHED, indicating a higher osteogenic differentiation potential. However, to confirm the relationship between VEGF and the osteogenic differentiation potential of CD146+ SHED, further examination is needed of the signaling pathway; of the expression of genes such as VEGF, VEGFR2, Runx2, and Osterix in osteodifferentiated SHED, CD146+ SHED, and CD146-SHED; and of osteogenic differentiation potential under treatment with VEGF and anti-VEGF neutralizing antibodies. It has previously been reported that hMSC-CD146(+) cells exhibit greater chemotactic attraction in a transwell migration assay and, when injected intravenously into immune-deficient mice following closed femoral fracture, show wider tissue distribution and significantly increased migration ability as demonstrated by bioluminescence imaging [30]. Therefore, CD146 defines a subpopulation of hMSCs capable of bone formation and in vivo trans-endothelial migration, and thus represents a population of hMSCs suitable for use in clinical protocols of bone tissue regeneration [30]. Moreover, newly formed bone matrix with embedded osteocytes of donor origin was reportedly observed upon transplantation of CD146(+) human umbilical cord perivascular cell-Gelfoam-alginate 3D complexes into severe combined immunodeficiency (SCID) mice [32].
In addition, high expression of CD146 in MSCs from bone marrow reportedly correlates with robust osteogenic differentiation potential [33]. This study suggests that CD146+ SHED has superior bone regeneration potential compared with SHED and CD146-SHED in vitro. Based on the present findings and previous findings on CD146+ BMSCs [30,32,33,39,41,42,52,59], CD146+ SHED may have the following properties: they are very close in character to pericytes, and, because CD146 acts as a coreceptor of VEGFR2, the binding of VEGF to VEGFR2 further enhances the pathway promoting the expression of bFGF, BMP-2, Runx2, and Osterix. These factors may have promoted angiogenesis and bone regeneration. A previous study of BMSCs using single-cell sorting examined osteo-, adipo-, and chondroid differentiation in clonal cultures of as many as 100 cells, and only 50% of these cells differentiated into all three lineages. Furthermore, 80% of the BMSCs that differentiated into the three lineages expressed CD146, while 40% of the cells that differentiated into only one or two lineages expressed CD146 [30]. Therefore, even within an isolated CD146-positive population there may be heterogeneous cells with different differentiation potentials [30]. Thus, the CD146+ SHED and CD146-SHED populations isolated from heterogeneous SHED populations in this study may also differ in function and nature, and may themselves be heterogeneous. At present, it is difficult to investigate the functional heterogeneity of MSC populations in depth, and much remains to be elucidated [60]. However, single-cell sorting and clonal culture should also be performed for SHED, CD146+ SHED, and CD146-SHED, and the extent of osteodifferentiation and trilineage differentiation in the CD146+ SHED and CD146-SHED populations used in this study should be examined in detail in future work.

SHED Isolation and Culture

Pulp tissues were collected from deciduous teeth extracted from five healthy patients (average age 9 years 8 months ± 2 years 4.8 months) who provided informed consent at the Department of Orthodontics, Hiroshima University Hospital. SHED were isolated and cultured as previously described [21-23,34], with reference to the methods of Miura et al. [13] and Gronthos et al. [14]. The following text elaborates on the isolation of SHED from the pulp. A mixture of α-MEM (Sigma-Aldrich, St. Louis, MO, USA), 4 mg/mL collagenase (Thermo Fisher Scientific, Waltham, MA, USA), and 3 mg/mL dispase (Godo Shusei, Tokyo) was prepared. The pulp tissue was immersed in this solution and minced with a scalpel. The dental pulp tissue slices were transferred to a 10-mL tube and incubated at 37 °C under 5% CO2 for 20 min with shaking. Cell aggregates were removed using a 70-µm cell strainer (CORNING, Corning, NY, USA), and the filtered solution was diluted with α-MEM and centrifuged at 1500 rpm for 5 min. The supernatant was aspirated, and the cells were suspended in α-MEM containing 20% fetal bovine serum (FBS) (Daiichi Kagaku, Tokyo), 0.24 µL/mL kanamycin (Meiji Seika Pharma Co., Ltd., Tokyo), 0.5 µL/mL penicillin (Meiji Seika Pharma), and 1 µL/mL amphotericin (MP Biomedicals).
The cells were then seeded in 35-mm cell culture petri dishes (CORNING) and cultivated at 37 °C under 5% CO2; when confluent, they were detached from the dish using PBS containing 0.25% trypsin (Nacalai Tesque, Kyoto) and 1 mM EDTA (Wako Pure Chemical Industries, Osaka) and passaged. After the first passage (P1), the cells were cultured in α-MEM containing 10% FBS (Daiichi Kagaku, Tokyo) and the abovementioned antibiotics at 37 °C under 5% CO2. This study was conducted in accordance with the Regulations for Epidemiological Studies of Hiroshima University Hospital (approval no. E-20-2). Cells were independently isolated from the deciduous teeth obtained from the five patients and cultured separately.

Fluorescence-Activated Cell Sorting

MSCs were isolated from the pulp of deciduous teeth, and surface-antigen analysis was performed to confirm the presence of CD146. SHED collected from each of the five patients were cultured and passaged to P3. Flow cytometry was then performed using one of the 10-cm dishes in which each donor's cells were cultured (five in total), and the targeted surface antigens were analyzed: CD146, the MSC-positive markers (CD73, CD90, and CD105), and the MSC-negative markers (CD14, CD19, CD34, and CD45), as defined by the International Society for Cellular Therapy. The cultured cells were detached using phosphate-buffered saline (PBS) containing 0.25% trypsin and 1 mM ethylenediaminetetraacetic acid. The cell suspension was centrifuged at 1800 rpm for 5 min. After aspirating the supernatant, the cells were washed with PBS containing 2% fetal bovine serum (FBS). Two PBS (2% FBS) solutions containing 1 × 10⁶ cells were prepared for each antibody detection. One of the prepared solutions was supplemented with 30 µL of PE mouse anti-human CD146, PE mouse anti-human CD90, FITC mouse anti-human CD105, PE-Cy™7 mouse anti-human CD14, APC-H7 mouse anti-human CD19 (BD Pharmingen, San Jose, CA, USA), Brilliant Violet™ 421 mouse anti-human CD73, FITC mouse anti-human CD34, and APC-H7 mouse anti-human CD45 (Becton Dickinson, San Jose, CA, USA). In addition, 5 µL of the corresponding kappa isotype control was added to the other prepared solution and incubated at 4 °C, protected from light, for 20 min. Thereafter, the mixture was washed twice with PBS containing 2% FBS, and 3 µL of 7-aminoactinomycin D (7-AAD; BD Pharmingen) was added. FLOWJO software (Tomy Digital Biology Co., Tokyo, Japan) was used to analyze the surface antigens. Based on the surface-antigen analysis, the cells were sorted into CD146-positive (CD146+ SHED) and CD146-negative (CD146-SHED) populations using a FACS Aria II Cell Sorter (BD Biosciences, San Jose, CA, USA). The separated cells were cultured in α-minimum essential medium (α-MEM) at 37 °C in 5% CO2. The remaining cells, not used for flow cytometry, served in subsequent experiments as the unsorted SHED group. After flow cytometric analysis of these surface antigens, the cells were sorted only on CD146, into CD146-positive and CD146-negative SHED; surface antigens other than CD146 were analyzed but not sorted. In subsequent experiments, three groups were used: CD146-positive and CD146-negative SHED isolated by cell sorting, and unsorted SHED without flow cytometry. The resulting SHED cells were cultured separately.
Thereafter, we examined the proliferative and osteogenic differentiation activities of each of the five individual SHED samples. The following variables were examined to compare the cellular proliferative abilities of SHED, CD146+ SHED, and CD146-SHED.

Population Doubling Time

SHED isolated from deciduous pulp, and CD146+ SHED and CD146-SHED isolated by cell sorting, were cultured in the corresponding media, and passage-4 cells were seeded in 24-well plates (CORNING Inc., Corning, NY, USA; 1.0 × 10⁴ cells/well) and cultured in 5% CO2 at 37 °C. Dead cells were stained with 0.4% trypan blue (MP Biomedicals, Santa Ana, CA, USA), and the number of live cells was counted daily from day 1 to day 10 of culture using a hemocytometer. A cell growth curve was then generated, and the logarithmic growth phase was defined as days 2-6. Population doubling time (PDT) was calculated using the following equation [23]:

PDT = (t − t0) × log 2 / (log N − log N0),

where N and N0 indicate the numbers of cells counted at times t and t0, respectively.

Bromodeoxyuridine Cell Proliferation Assay

Cell growth ELISA and Cell Proliferation ELISA BrdU (bromodeoxyuridine) kits (Roche Diagnostics, Basel, Switzerland) were used. SHED, CD146+ SHED, and CD146-SHED were seeded in 96-well plates (CORNING; 3 × 10³ cells/well) and cultured at 37 °C in 5% CO2. After 48 h of growth, the cells were incubated with BrdU for 24 h at 37 °C in 5% CO2. The absorbance was measured at a wavelength of 375 nm using a microplate reader (Multiskan™ FC; Thermo Fisher Scientific, Waltham, MA, USA).

Quantitative Real-Time Polymerase Chain Reaction Analysis

SHED, CD146+ SHED, and CD146-SHED were cultured at 37 °C under 5% CO2. After the cells reached 80% confluence, differentiation was induced with the osteodifferentiation induction medium described above. The cells were harvested before induction and at 21 and 28 days after induction. The mRNA expression levels of alkaline phosphatase (ALP), osteocalcin (OCN), and bone morphogenetic protein-2 (BMP-2) were determined using quantitative real-time polymerase chain reaction (RT-PCR) analysis with QuantiTect SYBR Green PCR master mix (Qiagen, Valencia, CA, USA) on a LightCycler® 480 II instrument (Roche Diagnostics). Total RNA was extracted from cells using an RNeasy Mini kit (Qiagen) and quantified using a NanoDrop One/Onec spectrophotometer (Thermo Fisher Scientific, Inc., Waltham, MA, USA). RNA purity was also assessed on this instrument from the OD260/OD280 ratio; only samples with an A260/A280 ratio of 1.5-2.0 were used for further analysis. Subsequently, 1 µg of purified total RNA was reverse-transcribed to cDNA using a ReverTra Ace first-strand cDNA synthesis kit (Toyobo, Osaka, Japan). RT-PCR was performed using Thunderbird SYBR qPCR mix (Toyobo) with specific primer sets (Table 1).

ALP Staining and Determination of ALP Activity

The cells were fixed on days 3, 7, 14, 21, and 28 after incubation with osteogenic differentiation medium, and ALP staining was performed as follows. After fixation with 4% paraformaldehyde in PBS (Fujifilm Wako Pure Chemical, Osaka, Japan) for 10 min, the cells were incubated with PBS containing 0.05% Tween-20 (Roche Diagnostics) as washing buffer. After removing the washing buffer, the cells were incubated with ALP staining solution (Fujifilm Wako Pure Chemical Industries, Ltd.) at room temperature under light-protected conditions for 10 min.
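As a concrete check of the PDT formula given in the "Population Doubling Time" subsection above, a minimal Python implementation follows; the cell counts in the example are hypothetical, not the study's data.

```python
# PDT in hours: (t - t0) * ln 2 / ln(N / N0), evaluated over the log phase.
import math

def population_doubling_time(n0: float, n: float, t0: float, t: float) -> float:
    return (t - t0) * math.log(2) / math.log(n / n0)

# e.g. cells grow from 2.0e4 at 48 h to 8.0e4 at 144 h of culture
print(population_doubling_time(2.0e4, 8.0e4, t0=48, t=144))  # 48.0 h
```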
For quantitative testing of ALP, the cells were harvested on days 3, 7, 14, 21, and 28 after incubation with osteogenic differentiation induction medium, using the pNPP Phosphatase Assay Kit (AnaSpec, Fremont, CA, USA). The harvested cells were homogenized using a Sonic Vibra Cell (Sonics & Materials, Newtown, CT, USA), and the supernatant was collected and used as the sample. The sample was mixed with 50 µL of the pNPP substrate solution in a 96-well plate (CORNING), and the absorbance was determined at a wavelength of 405 nm using a Multiskan™ FC microplate reader (Thermo Fisher Scientific).

Calcium Deposition Analyses (Alizarin Red Staining)

After the cells were cultured with osteogenic differentiation-inducing medium for 28 days, they were washed with PBS and 10 mM Tris-HCl (pH 7.5) with 0.9% NaCl. The cells were fixed with 4% paraformaldehyde and stained with 1% Alizarin Red S (Kishida Chemical, Osaka, Japan). The stained preparations were photographed and observed using a BZ-X810 microscope (Keyence, Osaka, Japan). For quantification, the cells were incubated after staining with a mixed solution of 10% acetic acid and 20% methanol at room temperature for 15 min to elute the dye. The eluate was added to a 96-well plate (CORNING), and the absorbance was determined at a wavelength of 405 nm using a Multiskan™ FC microplate reader (Thermo Fisher Scientific).

Statistical Analysis

All data are presented as mean ± standard deviation. The Kruskal-Wallis test, a nonparametric test, was performed to analyze significant differences between groups using BellCurve® for Excel (SSRI, Tokyo, Japan). p < 0.05 and p < 0.01 were considered statistically significant.

Conclusions

In conclusion, the results indicate that CD146+ SHED has superior bone regeneration ability compared with SHED and CD146-SHED. CD146 affects the bone regeneration ability of SHED, and CD146+ SHED might be helpful for bone regeneration treatment. Further studies are needed to elucidate the detailed mechanism of bone regeneration in CD146+ SHED for the clinical application of SHED.
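For the three-group comparison described in the Statistical Analysis subsection, a minimal SciPy sketch is shown below; the study used BellCurve for Excel, so this is only an illustrative analogue, and the absorbance values are invented, not the study's data.

```python
# Kruskal-Wallis comparison of a readout (e.g. eluted-dye absorbance)
# across the three groups; all values below are hypothetical.
from scipy import stats

shed      = [0.42, 0.45, 0.40, 0.44, 0.43]
cd146_pos = [0.61, 0.66, 0.58, 0.63, 0.65]
cd146_neg = [0.31, 0.29, 0.33, 0.30, 0.28]

h, p = stats.kruskal(shed, cd146_pos, cd146_neg)
print(f"H={h:.2f}, p={p:.4f}, significant={p < 0.05}")
```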
Working posture analysis of construction workers using ergonomics

Ergonomics integrates knowledge from the human sciences to match jobs, systems, products, and environments to the physical and mental abilities and limitations of people. Most injuries, stresses, and strains in the construction industry occur due to over-exertion and repetitive work actions. Postural analysis can be a powerful technique for assessing work activities, and applying ergonomic principles reduces the fatigue the human body experiences in various tasks. Ergonomics deals with the study of internal and external stresses acting on the human body. The aim of this study is to assess the level of ergonomics in various tasks in the construction industry, to gauge the level of musculoskeletal disorders in workers, and to suggest corrective measures for every task with a high risk factor. The study is conducted using the Rapid Entire Body Assessment (REBA) tool of the Ergofellow software to assess the posture of workers in various construction tasks. Fatigue analysis is done by this method, which provides a scoring system for muscle activity caused by static, dynamic, rapidly changing, or unstable postures.

INTRODUCTION

The construction industry is one of the most booming industries in the world, and workers are an integral part of it. Construction workers perceive most clearly the lack of job safety. A number of studies have shown that construction is among the most hazardous workplace industries, with high rates of fatalities, injuries, and health problems. The high levels of ill health, accidents, and injuries among construction workers can be explained by the wide variety of risk factors on the job, which can be assessed by applying ergonomics in the construction industry. Ergonomics is the science of fitting the job to the worker; it deals with the study of internal and external stresses acting on the human body, and the use of ergonomic principles reduces the fatigue experienced by the human body in various tasks.

SCOPE OF THE STUDY

• The study is limited to five construction tasks with high potential for development of musculoskeletal disorders (MSD).
• Construction workers with a minimum of 5 years of experience are analysed.
• Workers from commercial and residential multi-storey building sites are considered.
• Continuous monitoring of workers is beyond the scope of this study.

LITERATURE REVIEW

From the literature review it is noted that Rapid Upper Limb Assessment (RULA) and Rapid Entire Body Assessment (REBA) are two of the important postural analysis tools [Ansari N A and Dr. Sheikh M J (2014)], [Bhandare A, Bahirat P, Nagarkar V, and Bewoor A (2013)]. RULA provides a quick assessment of the postures of the neck, trunk, and upper limbs along with muscle function and the external loads experienced by the body [McAtamney L and Corlett E N (1993)]. REBA is an analytical tool for entire-body assessment [Hignet S and McAtamney L (2000)]. With these methods, the fatigue involved in particular operations can be quantified, and changes in the work method can accordingly be suggested for system improvement. Other tools for postural analysis include the OWAS, RII, and PATH methods [Chiasson M, Imbeau D, Aubry K and Delisle A (2012)], [Kong Y, Lee S, Lee K and Kim D (2017)], [Kulkarni V S and Devalkar R V (2017)], [Kulkarni V S and Devalkar R V (2018)].
From studies it was observed that analyses done by RULA and REBA are more consistent and correlated. Most studies on postural analysis have been conducted in the farming, production, and manufacturing industries; the postural analyses conducted on construction sites covered only a limited set of activities, such as excavation, plastering, and brickwork. The literature survey also indicated that mobile applications can be an effective and economical tool for efficient and accurate capture of postures [Szucs K A and Brown E D (2018)]. A mobile application called 'APECS' is therefore adopted for posture capture and validation. Since the postural analysis of a large dataset of workers is a tedious task, the REBA method from the ergonomic software 'Ergofellow' is adopted, because REBA provides more consistent and correlated results than other methods. From previous studies and literature reviews, the construction activities with MSD potential selected for this study are reinforcement work, plastering, shuttering, concrete block work, and manual material handling.

DATA COLLECTION

Data for the analysis comprise the results of questionnaires and checklists along with a set of photos or video sequences, collected using a portable camera or a video camera. This method of data collection is preferred because it does not interfere with the work flow at the site. Since images from multiple frames and angles are captured, it provides better accuracy, and it also acts as a record for future reference.

Posture Capture and Validation Using APECS

APECS stands for AI Posture Evaluation and Correction System. It is a posture analyzer for accurate evaluation of spine alignment and body symmetry, developed as educational software by the company New Body Technology. The version used for this study is 2.2.7, for Android operating systems. It uses precise photogrammetric algorithms to make accurate posture assessments. A sample image of a concrete block worker captured using APECS is shown in Fig. 1; body lines and their respective angles can be read directly from the image. Additional data, such as any twisting or bending of the neck and trunk and the support conditions, are also entered. Load factors are likewise entered from the given options; the load is divided into three categories, less than 5 kg, 5 to 10 kg, and greater than 10 kg, with an additional provision for shock or rapid build-up of force. The next step is to specify the body angles of the upper arm, lower arm, and wrist, as shown in Fig. 3; additional provisions for each body part are also available. The coupling factors are filled in next; these measure and score the hand hold and grip of the tools used for the work, since poor and unacceptable coupling conditions increase the strain on the wrist and lower limbs. The last option to be filled is activity, i.e., whether body parts are held static for longer than a minute, whether there are repeated actions, and whether there are large-range changes in posture. Once all the body angles and field conditions are specified, the result is obtained from the software, as shown in Fig. 4: the software produces the REBA score and shows the risk level. For this particular example of a shuttering worker, the score obtained is 7, which indicates medium risk.
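The scoring bands used throughout this study (1 negligible, 2-3 low, 4-7 medium, 8-10 high, 11 and above very high) can be expressed as a simple lookup. The sketch below is an illustrative Python rendering of that mapping, not part of the Ergofellow software itself.

```python
# Map a REBA score to the risk band used in this study.
def reba_risk_level(score: int) -> str:
    if score <= 1:
        return "negligible"
    if score <= 3:
        return "low"
    if score <= 7:
        return "medium"
    if score <= 10:
        return "high"
    return "very high"

print(reba_risk_level(7))   # medium (the shuttering example above)
print(reba_risk_level(11))  # very high
```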
Therefore, further investigation or a change in posture may be needed.

RESULTS AND DISCUSSION Among the reinforcement workers, the strained areas observed were the trunk, shoulders and neck. A total of 41 reinforcement workers were analysed, out of which 26 were at medium risk, i.e., 63.41%, with a score of 4 to 7. Also, 13 workers were at high risk, i.e., 31.71%, with a score of 8 to 10, and 2 were at low risk, i.e., 4.88%, with a score of 2 or 3. Workers at negligible risk and very high risk were nil.

Construction workers performing plastering works in both standing and seated positions were observed. From the 38 workers analysed, it was inferred that the major areas of strain were the trunk, shoulders and upper limbs. For those workers in a seated position, the knee was also strained. Out of the 38 plastering workers analysed, 24 were at medium risk, i.e., 63.15%, with a score of 4 to 7. Also, 13 workers were at high risk, i.e., 34.21%, with a score of 8 to 10, and 1 was at very high risk, i.e., 2.63%, with a score of 11. Workers at negligible risk and low risk were nil.

Among the shuttering workers observed, the main problems were their awkward posture, unstable base and lack of stable support mechanisms. Due to this, the major body parts strained were the trunk, neck, upper limbs and knee. A total of 25 shuttering workers were observed, out of which 14 were at medium risk, i.e., 56%, with a score of 4 to 7. Also, 8 workers were at high risk, i.e., 32%, with a score of 8 to 10, two were at very high risk, i.e., 8%, with a score of 11, and 1 was at low risk, i.e., 4%, with a score of 3. Workers at negligible risk were nil.

Among the concrete block workers, the main problems observed were their frequent bending to reach materials, unstable base and work not performed at eye level. Due to this, the major body parts strained were the trunk, neck, upper limbs and shoulder. A total of 25 concrete block workers were observed, out of which 11 were at medium risk, i.e., 44%, with a score of 4 to 7. Also, 12 workers were at high risk, i.e., 48%, with a score of 8 to 10. One worker was at very high risk, i.e., 4%, with a score of 11, and one was at low risk, i.e., 4%, with a score of 3. Workers at negligible risk were nil.

Among the manual material handling workers observed, the main problems were their incorrect hold of materials and poor lifting mechanisms. Due to this, the major body parts strained were the trunk, shoulders, upper limbs and knee. The weight of the construction material was also a contributing factor. It was inferred that, out of the 25 manual material handling workers analysed, 8 were at medium risk, i.e., 32%, with a score of 4 to 7. Also, 16 workers were at high risk, i.e., 64%, with a score of 8 to 10, and one worker was at low risk, i.e., 4%, with a score of 3. Workers at negligible risk and very high risk were nil.

The percentage results for the construction workers performing each task are compiled in tabular form below:

Task (workers analysed)          Low     Medium   High     Very high
Reinforcement work (41)          4.88%   63.41%   31.71%   -
Plastering (38)                  -       63.15%   34.21%   2.63%
Shuttering (25)                  4%      56%      32%      8%
Concrete block work (25)         4%      44%      48%      4%
Manual material handling (25)    4%      32%      64%      -

The REBA assessment identified that the shoulders, knees, legs and back were at a high risk of developing MSDs due to the abducted lower-body postures, the repeated actions and the flexion and extension of the upper limbs.
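Since the percentages above follow directly from the raw counts, they can be recomputed mechanically. The short Python sketch below (counts transcribed from the results above) reproduces both the per-task and the overall distributions.

```python
# Risk-level counts per task, transcribed from the results above:
# (low, medium, high, very high); negligible-risk counts were nil.
counts = {
    "Reinforcement":            (2, 26, 13, 0),
    "Plastering":               (0, 24, 13, 1),
    "Shuttering":               (1, 14,  8, 2),
    "Concrete block work":      (1, 11, 12, 1),
    "Manual material handling": (1,  8, 16, 0),
}

total = sum(sum(c) for c in counts.values())          # 154 workers in all
levels = ("low", "medium", "high", "very high")

for task, c in counts.items():
    n = sum(c)
    pcts = ", ".join(f"{lvl}: {100*x/n:.2f}%" for lvl, x in zip(levels, c))
    print(f"{task} ({n} workers): {pcts}")

# Overall distribution across all 154 workers:
for i, lvl in enumerate(levels):
    share = 100 * sum(c[i] for c in counts.values()) / total
    print(f"{lvl}: {share:.3f}%")   # medium ~53.896%, high ~40.26%, ...
```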
It was also observed that, along with incorrect and awkward working postures, certain factors like poor coupling, an unstable base, static postures and repetitive actions also contributed to these high risk levels. Of a total of 154 workers analysed, most were under medium to high risk. From the results of the REBA assessment, it was observed that the majority of the workers, i.e., 53.896% of the total, were under medium risk, 40.25% were under high risk, 2.597% were under very high risk and 3.24% were under low risk. The outcomes indicate that there was a high risk of developing MSDs. The major reasons for this large percentage of high and medium risks can be concluded to be awkward postures, along with their duration, and other external factors like poor coupling, excessive load and repeated actions under poor supervision. The process needs to be investigated further, and changes must be implemented to protect the workers.

SUGGESTIONS AND RECOMMENDATIONS The majority of the reinforcement workers were at medium risk and a considerably large percentage were at high risk. This can be minimized to a great extent by avoiding static bending postures. Appropriate support mechanisms should be provided to the workers, which will reduce the stress in their legs and knees. The safety officer/engineer should ensure the correct working posture of the workers. A large percentage of plastering workers were at medium risk due to a stretched neck, trunk and upper limbs. The use of stable elevated platforms for overhead plastering can reduce the strain in the neck, shoulders and trunk. For plastering at lower levels, appropriate seating should be adopted to avoid strain on the knee and lower limbs. Proper equipment, such as guniting tools, should be used to aid plastering over a larger area. The majority of the shuttering workers were at medium to high risk, the main causes of which were a bent trunk and insufficient support. In order to reduce this strain, work should be done at appropriate heights. Also, the use of ladders in shuttering activities helps the worker to work at eye level. A stable working platform and appropriate support mechanisms are the basic changes to be adopted to reduce unnecessary effort by the worker. In the case of the concrete block workers, the majority were at high risk due to frequent bending and stretching to reach materials and tools. An immediate change in posture is recommended for such workers. An elevated storage platform should be provided to keep the concrete blocks, mortar and other equipment. This keeps all the equipment within immediate reach and avoids unnecessary movements. A high percentage of manual material handling workers were at high risk due to improper lifting techniques. The workers should be given proper training regarding the method of handling construction materials. They should adopt postures that distribute the weight of the construction material equally across the body. They should also avoid lifting or carrying materials using one side of the body, as this will strain that particular side and will also affect the trunk of the worker. The weight of the construction material was a contributing factor to the risk; the use of equipment like trolleys can minimise these efforts.
FUTURE SCOPE This study involves posture analysis of construction workers using the REBA method. In future work, other ergonomic tools and more sophisticated computer-based applications may be adopted. The sample size can be widened to improve the accuracy of the results. For this study, only five particular tasks were considered; this can be improved by incorporating all the construction tasks on a site. Consideration of external factors, like the previous medical history of the workers, their physical features, etc., may also be adopted.
The Kohn-Sham system in one-matrix functional theory

A system of electrons in a local or nonlocal external potential can be studied with 1-matrix functional theory (1MFT), which is similar to density functional theory (DFT) but takes the one-particle reduced density matrix (1-matrix) instead of the density as its basic variable. Within 1MFT, Gilbert derived [PRB 12, 2111 (1975)] effective single-particle equations analogous to the Kohn-Sham (KS) equations in DFT. The self-consistent solution of these 1MFT-KS equations reproduces not only the density of the original electron system but also its 1-matrix. While in DFT it is usually possible to reproduce the density using KS orbitals with integer (0 or 1) occupancy, in 1MFT reproducing the 1-matrix requires in general fractional occupancies. The variational principle implies that the KS eigenvalues of all fractionally occupied orbitals must collapse at self-consistency to a single level, equal to the chemical potential. We show that as a consequence of the degeneracy the iteration of the KS equations is intrinsically divergent. Fortunately, the level shifting method, commonly introduced in Hartree-Fock calculations, is always able to force convergence. We introduce an alternative derivation of the 1MFT-KS equations that allows control of the eigenvalue collapse by constraining the occupancies. As an explicit example, we apply the 1MFT-KS scheme to calculate the ground state 1-matrix of an exactly solvable two-site Hubbard model.

INTRODUCTION

Density functional theory (DFT) benefits from operating with the electron density, which as a function of just three coordinates is much easier to work with than the full many-body wavefunction. According to the Hohenberg-Kohn (HK) theorem [1], the density of an electron system in a local external potential v(r) may be found by minimizing a universal energy functional E_v[n], whose basic variable is the density. Remarkably, the density uniquely determines the ground state wavefunction (if it is nondegenerate), i.e., there can be only one ground state wavefunction yielding a given density, no matter what v(r) is. However, if the external potential is nonlocal, then the density alone is generally not sufficient to uniquely determine the ground state (see Appendix A for a simple example). Gilbert [2] extended the HK theorem to systems with a nonlocal and spin dependent external potential v(x, x′), where x = (r, σ). It was proved that i) the ground state wavefunction is uniquely determined by the ground state 1-matrix (one-particle reduced density matrix) and ii) there is a universal energy functional E_v[γ] of the 1-matrix, which attains its minimum at the ground state 1-matrix. The 1-matrix is defined as

γ(x, x′) = N ∫ dx_2 … dx_N ρ(x, x_2, …, x_N; x′, x_2, …, x_N),   (1)

where ∫ dx = Σ_σ ∫ d³r and ρ̂ = Σ_i w_i |Ψ_i⟩⟨Ψ_i| is the full N-electron density matrix with ensemble weights w_i such that Σ_i w_i = 1. An external potential may be nonlocal with respect to the space coordinates and/or the spin coordinates. For example, pseudopotentials are nonlocal in space, and the Zeeman coupling −(ℏ|e|/2mc) B · σ, where σ is the vector of Pauli matrices, is nonlocal in spin space. The coupling of the electron motion to an external vector potential, (|e|/2mc)(p · A + A · p), may also be treated as a nonlocal potential because p is a differential operator.
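As a concrete illustration of (1), the following Python sketch contracts a two-particle wavefunction to its 1-matrix and reads off the occupation numbers. The setup is hypothetical: a pure two-electron state on a four-point discrete basis, with spin suppressed for brevity.

```python
import numpy as np

# Pure two-particle state psi[x1, x2] on a discrete 4-point "space",
# antisymmetrized and normalized (spin suppressed for brevity).
rng = np.random.default_rng(0)
psi = rng.normal(size=(4, 4))
psi = psi - psi.T                      # antisymmetrize
psi /= np.linalg.norm(psi)             # sum |psi|^2 = 1

# Eq. (1) for N = 2: gamma(x, x') = 2 * sum_x2 psi(x, x2) psi*(x', x2)
gamma = 2 * psi @ psi.conj().T

occ = np.linalg.eigvalsh(gamma)
print(np.trace(gamma))   # = N = 2
print(occ)               # occupation numbers, each in [0, 1], pairwise equal
```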
It is rather intuitive that for such external potentials, which couple to the system in more complex ways than the local potential v(r), it is necessary, in order to permit statements analogous to the HK theorem, to refine the basic variable accordingly. Hence spin-DFT [3,4], whose basic variables are the density and the magnetization density, applies to systems with Zeeman coupling. Current-DFT [5,6], whose basic variables are the density and the paramagnetic current density, has the scope to treat systems in which the current is coupled to an external magnetic field. Generally, if one considers an external potential that is nonlocal in space and spin, the necessary basic variable is the 1-matrix [2], which contains all of the single-particle information of the system, including the density, magnetization density and paramagnetic current density. The DFT-type approach that takes the 1-matrix as basic variable will be referred to here as 1-matrix functional theory (1MFT). As in DFT, an exact and explicit energy functional is generally unknown. An important difference between 1MFT and DFT is that the kinetic energy is a simple linear functional of the 1-matrix, while it is not a known functional of the density. Thus, in 1MFT the only part of the energy not known explicitly is the electron-electron interaction energy W[γ]. Several approximate 1-matrix energy functionals have been proposed and tested recently (see Refs. 7 and 8 and references therein). Notably, the so-called BBCn approximations [7], which are modifications of the Buijse-Baerends functional [9], have given fairly accurate results for the potential energy curves of diatomic molecules [7] and for the momentum distribution and correlation energy of the homogeneous electron gas [8]. In Ref. 8, a density dependent fitting parameter was introduced into the BBC1 functional such that the resulting functional yields the correct correlation energy of the homogeneous electron gas at all values of the density. There is also the prospect of using 1MFT to obtain accurate estimates for the band gaps of non-highly correlated insulators [10]. Many of the approximate functionals that have been proposed are similar to an early approximation by Müller [11]. Actual calculations in 1MFT are more difficult than in DFT. The energy functional E_v[γ] must be minimized in a space of higher dimension because the 1-matrix is a more complex quantity than the density. In the calculations cited above, the energy has been minimized directly by standard methods, e.g., the conjugate gradient method. In DFT the energy is generally not minimized by such direct methods. Instead, the Kohn-Sham (KS) scheme [12] provides an efficient way to find the ground state density. In this scheme, one introduces an auxiliary system of N noninteracting electrons, called the KS system, which experiences an effective local potential v_s(r). This effective potential is a functional of the density such that the self-consistent ground state of the KS system reproduces the ground state density of the interacting system. It is interesting to ask whether there is also a KS scheme in 1MFT. The question may be stated as follows: does there exist a 1-matrix dependent effective potential v_s(x, x′) such that, at self-consistency, a system of noninteracting electrons experiencing this potential reproduces the exact ground state 1-matrix of the interacting system?
Although Gilbert derived such an effective potential [2], the implications were thought to be "paradoxical" because the KS system was found to have a high (probably infinite) degree of degeneracy. Evidently, the KS eigenvalues in 1MFT do not have the meaning of approximate single-particle energy levels, in contrast to DFT and other self-consistent-field theories, where the eigenvalues may often be interpreted as the negatives of ionization energies, owing to Koopmans' theorem. The status of the KS scheme in 1MFT appears to have remained unresolved [13,14], and recently it has been argued that the KS scheme does not exist in 1MFT [8,10,15,16]. Gilbert derived the KS equations from the stationary principle for the energy. The KS potential was found to be

v_s(x, x′) = v(x, x′) + δW[γ]/δγ(x′, x).   (2)

In this article, we propose an alternative derivation of the KS equations, which, in our view, gives insight into the nature of the "paradoxical" degeneracy of the KS system. One-matrix energy functionals are often expressed in terms of the so-called natural orbitals and occupation numbers [17]. This makes them similar to "orbital dependent" functionals in DFT. The natural orbitals are the eigenfunctions of the 1-matrix, and the occupation numbers are the corresponding eigenvalues [17]. These quantities play a central role in 1MFT. Recently, it was shown that when a given energy functional is expressed in terms of the natural orbitals and occupation numbers, the KS potential can be found by using a chain rule to evaluate the functional derivative in Eq. 2 [18]. Although the concept of the KS system can indeed be extended to 1MFT, it has in this setting some very unusual properties. In particular, the KS orbitals must be fractionally occupied, for otherwise the KS system could not reproduce the 1-matrix of the interacting system, which always has noninteger eigenvalues (occupation numbers). This is different from the situation in DFT, where it is usually possible to reproduce the density using only integer (0 or 1) occupation numbers, or in any case, only a finite number of fractionally occupied states. Due to the necessity of fractional occupation numbers, the 1MFT-KS system cannot be described by a single Slater determinant. However, we find that it can be described by an ensemble of Slater determinants, i.e., a mixed state. In order that the variational principle is not violated, all the states that comprise the ensemble must be degenerate. This implies that the eigenvalues of all fractionally occupied orbitals collapse to a single level, equal to the chemical potential. The degeneracy has important consequences for the solution of the KS equations by iteration. We prove that the iteration of the KS equations is intrinsically divergent because the KS system has a divergent response function χ_s = δγ/δv_s at the ground state. Fortunately, convergence can always be obtained with the level shifting method [19]. To illustrate explicitly the unique properties of the 1MFT-KS system, we apply it to a simple Hubbard model with two sites. The model describes approximately systems which have two localized orbitals with a strong on-site interaction, e.g., the hydrogen molecule at large internuclear separation [20]. The Schrödinger equation for this model is exactly solvable, and we find that the KS equations in 1MFT and in DFT can be derived analytically. It is interesting to compare 1MFT and DFT in this context.
We demonstrate that divergent behavior will appear also in DFT when the operator 1 − χ_s χ⁻¹, where χ and χ_s are the density response functions of the interacting and KS systems, respectively, has any eigenvalue with modulus greater than 1. In this expression the null space of χ is assumed to be excluded. This article is organized as follows. In Sec. II, we derive the KS equations in 1MFT and discuss how to solve them self-consistently by iteration. In Sec. III, we compare three approaches to ground state quantum mechanics, namely direct solution of the Schrödinger equation, 1MFT and DFT, by using them to solve the two-site Hubbard model.

II. KOHN-SHAM SYSTEM IN 1MFT

It is not obvious that a KS-type scheme exists in 1MFT, for the following reason. Recall that in DFT the KS system consists of N noninteracting particles and reproduces the density of the interacting system. The density of the KS system, if it is nondegenerate, is the sum of the contributions of the N lowest energy occupied orbitals,

n(r) = Σ_{i=1..N} |φ_i(r)|².   (3)

On the other hand, in 1MFT the KS system should reproduce the 1-matrix of the interacting system. The eigenfunctions of the 1-matrix are the so-called natural orbitals, and the eigenvalues are the corresponding occupation numbers [17]. Occupying the N lowest energy orbitals in analogy to (3), one obtains

γ(x, x′) = Σ_{i=1..N} φ_i(x) φ_i*(x′).   (4)

Such an expression, in which the orbitals have only integer (0 or 1) occupation, cannot reproduce the 1-matrix of an interacting system because the orbitals of an interacting system generally have fractional occupation (see the discussion in the following section). The difference between the 1-matrix in (4) and the 1-matrix of an interacting system is clearly demonstrated by the so-called idempotency property. The 1-matrix in (4) is idempotent, i.e., ∫dx″ γ(x, x″) γ(x″, x′) = γ(x, x′), while the 1-matrix of an interacting system is never idempotent. However, if the KS system is degenerate and its ground state is an ensemble state, the 1-matrix becomes

γ(x, x′) = Σ_i f_i φ_i(x) φ_i*(x′),   (5)

with fractional occupation numbers f_i. The N-particle ground state density matrix of the KS system is ρ̂_s = Σ_j w_j |Φ_j⟩⟨Φ_j|, where the Φ_j are Slater determinants, each formed from N degenerate KS orbitals. The occupation numbers f_i are related to the ensemble weights w_j by

f_i = Σ_j w_j Θ_ji,   (6)

where Θ_ji equals 1 if φ_i is one of the orbitals in the determinant Φ_j and 0 otherwise [21].

A. Derivation of the 1MFT Kohn-Sham equations

In this section, we discuss Gilbert's derivation [2] of the KS equations in 1MFT and propose an alternative derivation. We begin by reviewing the definition of the universal 1-matrix energy functional E_v[γ]. One-matrix functional theory describes the ground state of a system of N electrons with the Hamiltonian Ĥ = Σ_{i=1..N} (t̂_i + v̂_i) + Ŵ, where t̂ = −∇²_r/2 is the kinetic energy operator, v̂ is the local or nonlocal external potential operator, and Ŵ = Σ_{i<j} |r_i − r_j|⁻¹ is the electron-electron interaction (in atomic units ℏ = m = e = 1). The ground state 1-matrix and ground state energy can be found by minimizing the functional

E_v[γ] = ∫dx dx′ [t(x, x′) + v(x, x′)] γ(x′, x) + W[γ],   (7)

where

W[γ] = ⟨Ψ[γ]|Ŵ|Ψ[γ]⟩.   (8)

By extending the HK theorem, Gilbert proved [2] that a nondegenerate ground state wavefunction, Ψ_0, is uniquely determined by the ground state 1-matrix, i.e., Ψ_0 is a functional of γ. For this reason the interaction energy, as defined in (8), is a functional of γ. It is apparent that (8) is defined only for 1-matrices that come from some ground state, i.e., v-representable (VR) 1-matrices. The definition can be extended to the larger class of ensemble N-representable (ENR) 1-matrices, those obtainable from some N-particle ensemble via (1), by

W[γ] = min_{ρ̂→γ} Tr(Ŵρ̂),   (9)

where the interaction energy Tr(Ŵρ̂) is minimized in the space of N-particle density matrices ρ̂ that yield γ via (1).
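The idempotency distinction drawn above, and the construction (5)-(6), are easy to check numerically. A minimal sketch (the orbitals and occupation numbers are arbitrary illustrative choices; the Hermiticity, trace, and eigenvalue-range checks anticipate the ENR conditions discussed below):

```python
import numpy as np

def is_enr(gamma, N, tol=1e-12):
    """Hermitian, trace N, and all eigenvalues in [0, 1] (cf. ENR conditions)."""
    herm = np.allclose(gamma, gamma.conj().T, atol=tol)
    occ = np.linalg.eigvalsh(gamma)
    return herm and abs(np.trace(gamma).real - N) < 1e-9 \
        and occ.min() > -tol and occ.max() < 1 + tol

def is_idempotent(gamma, tol=1e-9):
    return np.allclose(gamma @ gamma, gamma, atol=tol)

# Arbitrary orthonormal orbitals (columns of Q):
Q = np.linalg.qr(np.random.default_rng(1).normal(size=(4, 4)))[0]

# Integer occupations (single Slater determinant, N = 2): idempotent.
g_det = Q[:, :2] @ Q[:, :2].conj().T
# Fractional occupations summing to N = 2 (ensemble, cf. Eq. 5): not idempotent.
f = np.array([0.9, 0.7, 0.3, 0.1])
g_ens = (Q * f) @ Q.conj().T

print(is_enr(g_det, 2), is_idempotent(g_det))   # True True
print(is_enr(g_ens, 2), is_idempotent(g_ens))   # True False
```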
The definition (9) is a natural extension to the ENR space because, when it is adopted, (7) may be expressed as

E_v[γ] = min_{ρ̂→γ} Tr[(T̂ + V̂ + Ŵ)ρ̂].   (10)

This is a variational functional which attains its minimum at the ground state 1-matrix, as seen from

E_v[γ] = min_{ρ̂→γ} Tr(Ĥρ̂) ≥ min_{ρ̂} Tr(Ĥρ̂) = E_0,   (11)

where E_0 is the ground state energy. The extension to the ENR domain is significant, especially for applications of the variational principle, because the conditions a 1-matrix must satisfy to be ENR are known and simple to impose on a trial 1-matrix, while the conditions for v-representability are unknown in general. The necessary and sufficient conditions [23] for a 1-matrix γ to be ENR are i) γ must be Hermitian, ii) ∫dx γ(x, x) = N, and iii) all eigenvalues of γ (occupation numbers) must lie in the interval [0, 1]. The third condition is a consequence of the Pauli exclusion principle. The 1MFT-KS equations were derived [2] from the stationary conditions for the energy with respect to arbitrary independent variations of the natural orbitals φ_i and angle variables θ_i chosen to parametrize the occupation numbers according to f_i = cos²θ_i (0 ≤ θ_i ≤ π/2). For the purpose of describing a variation in the ENR space, this set of variables, namely {δφ_i, δφ_i*, δθ_i}, is redundant. An arbitrary set of such variations may or may not correspond to an ENR variation, and when it does, the variations will not be linearly independent. This causes no difficulty, of course, because it is always possible to formulate stationary conditions in a space whose dimension is higher than necessary, provided the appropriate constraints are enforced with Lagrange multipliers. Accordingly, the Lagrange multiplier terms Σ_ij λ_ij (⟨φ_i|φ_j⟩ − δ_ij), which maintain the orthogonality of the orbitals, and the term µ(Σ_i f_i − N), which maintains the total particle number, were introduced. The KS equations were found to be

∫dx′ [t(x, x′) + v_s(x, x′)] φ_i(x′) = ε_i φ_i(x),   (12)

where the kernel of the effective potential is v_s(x, x′) = v(x, x′) + δW/δγ(x′, x), if the functional derivative exists. The stationary conditions imply that all fractionally occupied KS orbitals have the same eigenvalue ε_i = µ. Gilbert described this result as "paradoxical" because in interacting systems essentially all orbitals are fractionally occupied. The above stationary conditions assume E_v to be stationary with respect to variations in the ENR space (except variations of occupation numbers equal to exactly 0 or 1 in the ground state, which are excluded by the parametrization). However, it is not known, in general, whether E_v is stationary in the ENR space; the minimum property (11) ensures only that it is variational. Recall that the ENR space consists of all γ that can be constructed from an ensemble, and the energy of an ensemble is not stationary with respect to variations of the many-body density matrix ρ̂. Therefore, the stationary conditions applied in the ENR space may be too strong. On the other hand, the quantum mechanical variational principle guarantees that E_v is stationary with respect to variations in the VR space [34], but it is not known how to determine whether a given γ is VR. Hence, it is not known how to constrain the variations of γ to the VR space. Nevertheless, it may be that in some systems the entire neighborhood of γ_gs in the ENR space is also VR. In such cases, E_v is stationary in the ENR space, and the stationary conditions applied in Ref. 2 are satisfied at the ground state. We find it helpful to construct an alternative derivation of the KS equations.
Consider the energy functional

G_v[γ] = E_v[γ] − Σ_j ε_j (f_j − q_j),   (13)

where ε_j = ε_j({q_k}) are Lagrange multipliers that constrain the occupation numbers f_j of the natural orbitals to chosen values q_j, which satisfy 0 ≤ q_j ≤ 1 and Σ_j q_j = N. These Lagrange multipliers allow us to investigate the degeneracy of the KS eigenvalues, which leads to the "paradox" described by Gilbert. We have omitted the Lagrange multipliers λ_ij used in Gilbert's derivation. They are not necessary in our derivation because we formulate the stationary conditions with respect to variations of the 1-matrix instead of the orbitals. We adopt the definition (9), whose domain is the ENR space. A variation δγ will be said to be admissible if γ_gs + δγ is ENR. For convenience, we assume that the static response function χ = δγ/δv of the interacting system under consideration has no null vectors apart from the null vectors associated with a) a constant shift of the potential (which is a null vector also in DFT) and b) integer occupied orbitals, i.e., orbitals with occupation numbers exactly 0 or 1. If χ has additional null vectors, the following derivation must be modified; the necessary modifications are discussed below. Granting the above assumption, G_v is guaranteed to be stationary, and the KS equations can be derived from the stationary condition δG_v = 0 with respect to an arbitrary admissible variation of γ. The first variation of G_v is

δG_v = Tr[(t̂ + v̂) δγ] + δW − Σ_j ε_j δf_j
     = Σ_ij [⟨φ_i|(t̂ + v̂ + ŵ)|φ_j⟩ − ε_j δ_ij] δγ_ji,   (14)

where the variation of the 1-matrix is expressed as δγ_ij = ⟨φ_i|δγ|φ_j⟩ in the basis of the ground state natural orbitals, and the relation δW = Tr(ŵ δγ) defines a single-particle operator ŵ. In (14), we have also introduced the definition of the Hermitian operator

ĥ = t̂ + v̂ + ŵ,   (15)

which will be seen to be the KS Hamiltonian. If the last line of (14) is to be zero for an arbitrary Hermitian matrix δγ, then we must have h_ij − ε_j δ_ij = 0 for all i and j. ENR condition (i) has been maintained explicitly by requiring the variation to be Hermitian. ENR conditions (ii) and (iii) do not impose any constraint on the space of admissible variations, as they are maintained by the Lagrange multipliers ε_i. The matrix elements h_ij are functionals of the 1-matrix, and the 1-matrix that satisfies the stationary conditions h_ij − ε_j δ_ij = 0 can be found by solving self-consistently the single-particle equations

ĥ φ_i = ε_i φ_i,   (16)

together with (5). These are the KS equations in 1MFT. If they are solved self-consistently with the occupation numbers fixed to the values q_i, they give the orbitals which minimize E_v subject to the constraints f_i = q_i. The KS potential is v̂_s = v̂ + ŵ. The term ŵ is the effective contribution of the electron-electron interaction to the KS potential. In coordinate space, its kernel is

w(x, x′) = δW/δγ(x′, x),   (17)

which recovers Gilbert's result. The kernel of the KS Hamiltonian may be written in the familiar form

h(x, x′) = t(x, x′) + v(x, x′) + v_H(x) δ(x − x′) + v_xc(x, x′),   (18)

where v(x, x′) is the external potential and w(x, x′) has been divided into the Hartree potential v_H(x) and the exchange-correlation potential v_xc(x, x′). In 1MFT, the exchange-correlation potential is nonlocal. The 1MFT-KS scheme optimizes the orbitals for a chosen set of occupation numbers, but it does not itself provide a rule for choosing the occupation numbers. On this point, it is different from the DFT-KS scheme, where the occupation numbers are usually uniquely determined by the aufbau principle (T = 0 Fermi statistics) [35]. In 1MFT, the KS equations have a self-consistent solution for any chosen set of occupation numbers {q_i} that satisfy 0 ≤ q_i ≤ 1 and Σ_i q_i = N.
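The scheme just described, diagonalize ĥ[γ] and rebuild γ with the occupation numbers held fixed at {q_i}, has a simple generic skeleton. The sketch below is schematic only: build_h is a toy γ-dependent Hamiltonian invented purely for illustration, not the functional of any real system.

```python
import numpy as np

def scf_1mft(build_h, q, gamma0, tol=1e-10, max_iter=500):
    """Fixed-occupation 1MFT-KS loop: diagonalize h[gamma], then rebuild
    gamma from the eigenvectors with the *fixed* occupations q (Sec. II A)."""
    gamma = gamma0
    for _ in range(max_iter):
        h = build_h(gamma)
        _, phi = np.linalg.eigh(h)             # KS orbitals (columns)
        new = (phi * q) @ phi.conj().T         # gamma = sum_i q_i |phi_i><phi_i|
        if np.linalg.norm(new - gamma) < tol:
            return new
        gamma = new
    raise RuntimeError("did not converge (cf. Sec. II B)")

# Toy h[gamma] (an assumption for illustration only): a fixed h0 plus a
# weak gamma-dependent mean-field-like term.
h0 = np.diag([0.0, 0.5, 1.0])
build_h = lambda g: h0 + 0.1 * g
q = np.array([1.0, 0.7, 0.3])                  # fixed fractional occupations, N = 2
gamma = scf_1mft(build_h, q, np.diag(q))
print(np.round(np.linalg.eigvalsh(gamma), 6))  # occupations remain {0.3, 0.7, 1.0}
```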
The Lagrange multipliers ε_i, which are seen to be the KS eigenvalues, adopt values such that the minimum of G_v occurs for a 1-matrix γ_min whose occupation numbers are precisely the set {q_i}. Therefore, the unconstrained minimum of G_v coincides with the minimum of E_v subject to the constraints f_i = q_i. To find the ground state occupation numbers, {q_i^gs}, one can search for the minimum of the function G_v({q_i}) = G_v[γ_min({q_i})]; each evaluation of this function can be performed by the KS scheme. What can be said about the KS eigenvalues ε_j? As the minimum of G_v is a stationary point, we have

ε_j({q_i}) = ∂E_v/∂f_j |_{γ=γ_min}   (19)

for all j. E_v is a variational functional in the ENR space. It attains its minimum at the ground state 1-matrix γ_gs, where ∂E_v/∂f_j must vanish for all fractionally occupied (0 < f_j < 1) orbitals, for otherwise the energy could be lowered. When q_i = q_i^gs for all i, γ_min = γ_gs, and (19) implies ε_j({q_i^gs}) = 0 for all fractionally occupied orbitals. Thus, we find that the KS eigenvalues of all orbitals that are fractionally occupied in the ground state must collapse to a single level when the chosen set of occupation numbers approach their ground state values, i.e., as q_i → q_i^gs. In Gilbert's derivation of the 1MFT-KS equations, all fractionally occupied KS orbitals were found to have the eigenvalue ε_i = µ, where µ is the chemical potential. As we have not introduced the chemical potential in our derivation (we consider a system with a fixed number of electrons), the eigenvalues collapse to 0 instead of µ. The above arguments do not apply to orbitals with occupation numbers exactly 0 or 1 because these values lie on the boundary of the allowed interval [0, 1] specified by ENR condition (iii). All that can be concluded from the fact that E_v has a minimum in the ENR space is ε_j ≥ 0 for orbitals with f_j = 0 and ε_j ≤ 0 for orbitals with f_j = 1. States with occupation numbers exactly 0 or 1 have been called "pinned states" [8,10]. Instances of such states in real systems have been reported [24], though their occurrence is generally considered to be exceptional [8,10]. Due to the collapse of the eigenvalues at the ground state, the KS Hamiltonian becomes the null operator,

ĥ[γ_gs] = 0̂,   (20)

in the subspace of fractionally occupied orbitals. This is of course analogous to the familiar condition df/dx = 0 for the extremum of a function f(x). Gilbert described a similar result (with µÎ replacing 0̂) as "paradoxical" [2], a statement that has been repeated [13,14]. The problem with (20) is that while we expect the KS Hamiltonian to define the natural orbitals, any state is an eigenstate of the null operator. However, the KS Hamiltonian is a functional of the 1-matrix, and when the occupation numbers are perturbed from their ground state values, the degeneracy is lifted and the KS Hamiltonian does define unique orbitals. In the KS scheme outlined above, this corresponds to the optimization of the orbitals with occupation numbers fixed to values q_i perturbed from the ground state values. In the limit that the occupation numbers approach their ground state values, the optimal orbitals approach the ground state natural orbitals. The degenerate eigenvalues generally split linearly with respect to perturbations away from the ground state. In particular,

ε_i = ⟨φ_i|δĥ|φ_i⟩ = −⟨φ_i|(χ⁻¹δγ)|φ_i⟩ + O(δγ²),   (21)

where δγ = γ_min({q_i}) − γ_gs. Here, χ is the static response function defined as

χ(x x′, y y′) = δγ(x, x′)/δv(y, y′).   (22)

The relation δĥ/δγ = −χ⁻¹ used in (21) is derived in the following section. If χ has a null space, its inverse is defined only on a restricted space.
For example, (21) does not apply to pinned states, as there is a null vector associated with each pinned state (see below). Our derivation of the 1MFT-KS equations in fact assumes that the static response function χ of the interacting system has no null vectors except for those associated with pinned states and a constant shift of the potential. We now show that this guarantees that G_v is stationary. If the interacting system has any other null vectors, we can no longer be certain that G_v is stationary, and the derivation should be modified as described below. We have remarked already (see Ref. 34) that G_v is stationary in the VR space, i.e., it satisfies the stationary condition δG_v = 0 with respect to an arbitrary variation of the 1-matrix in the VR space. However, our derivation of the KS equations requires G_v to be stationary in the ENR space. As the VR space is a subspace of the ENR space, this is a stronger condition. The assumption that χ has no null vectors (apart from those associated with pinned states) is equivalent to assuming that any ENR variation (apart from variations of the pinned occupation numbers) is also a VR variation. For if χ has no null vectors, then it is invertible and any ENR variation δγ can be induced by the perturbation δv = χ⁻¹δγ [36]. Hence, with the above assumption, G_v is guaranteed to be stationary with respect to any ENR variation. The above arguments do not apply to variations of pinned occupation numbers because there are null vectors associated with such variations; nevertheless, G_v is stationary with respect to such variations, as this is maintained by the Lagrange multipliers ε_i. It is now clear how to modify the derivation of the KS equations when χ has additional null vectors. By introducing new Lagrange multipliers, the additional null vectors can be treated in analogy with the pinned states. For example, suppose χ has one additional null vector û = Σ_ij u_ij |φ_i⟩⟨φ_j|, where û is Hermitian and Tr(ûû) = 1. The new energy functional

G_v[γ] = E_v[γ] − Σ_j ε_j (f_j − q_j) − κ(γ_u − p_u)   (23)

will be stationary with respect to an arbitrary variation in the ENR space. Here, the Lagrange multiplier κ enforces the constraint γ_u = p_u, where γ_u = Tr(γû) is the component of γ corresponding to û. The stationary condition δG_v = 0 leads to the set of equations h_ij − ε_j δ_ij − κu_ij = 0 in the basis of natural orbitals. The 1-matrix that satisfies these equations can be found by solving self-consistently the eigenvalue equation of ĥ − κû in the basis in which û is diagonal, where S is the unitary matrix that diagonalizes the matrix u in the basis of natural orbitals, i.e., SuS† is diagonal. The energy of the self-consistent solution defines a function G_v({q_i}, p_u), whose minimum with respect to {q_i} and p_u is the ground state energy. It may not be known in advance whether the response function of a given interacting system will have null vectors. Therefore, it is helpful to understand how null vectors occur. Null vectors of χ are connected with the so-called nonuniqueness problem [3,25,26] in various extensions of DFT. A system with the ground state Ψ_0 is said to have a nonuniqueness problem if there is more than one external potential for which Ψ_0 is the ground state. The Schrödinger equation defines a unique map from the external potential to the ground state wavefunction (if it is nondegenerate), but when there is more than one external potential yielding the same ground state wavefunction, the map cannot be inverted.
In 1MFT, the generality of the external potential (nonlocal in space and spin coordinates) allows greater scope for nonuniqueness than in the other extensions of DFT. Of course, every degree of nonuniqueness is a null vector of χ, because if δv does not change the ground state wavefunction, it does not change the 1-matrix either, and hence it is a null vector. In fact, every null vector of χ is caused by nonuniqueness; the existence of a null vector δv that induced a nonzero δΨ_0 would contradict the one-to-one relationship γ ↔ Ψ_0 proved by the extension of the HK theorem to 1MFT [2]. It was mentioned above that there are null vectors associated with the pinned states. Suppose φ_k is a natural orbital with occupation number f_k = 0 in the ground state. The perturbation of the external potential δv = λ|φ_k⟩⟨φ_k| does not change the ground state if the system has an energy gap between the ground state and the excited states and λ is small enough, because δV̂ Ψ_0 = 0, where δV̂ = ∫dx dx′ ψ̂†(x) δv(x, x′) ψ̂(x′) and ψ̂ and ψ̂† are field operators. If f_k = 1, then δV̂ Ψ_0 = λΨ_0 and the ground state is again unchanged by the perturbation. The "vector" |φ_k⟩⟨φ_k| is therefore a null vector of χ if φ_k is a pinned state. Another type of nonuniqueness, which has been called systematic nonuniqueness [26], is related to constants of the motion. Suppose Â = ∫dx dx′ ψ̂†(x) a(x, x′) ψ̂(x′) is a constant of the motion. The ground state, if it is nondegenerate, is an eigenstate of Â, as constants of the motion commute with the Hamiltonian. If the system has an energy gap between the ground state and the first excited state, then a perturbation δV̂ = λÂ will not change the ground state wavefunction if λ is small enough. Thus, â is a null vector of χ. As in DFT, an exact and explicit expression for the universal energy functional in 1MFT is unknown in general. In actual calculations it is usually necessary to use approximate functionals. Many of the approximate energy functionals that have been introduced are expressed in terms of the natural orbitals and occupation numbers. Such functionals are valid 1-matrix energy functionals, but as the dependence on the 1-matrix is implicit rather than explicit, they have been called "implicit" functionals. Recently, the KS equations were derived for this case [18]. It was found that the contribution to the KS potential from electron-electron interactions can be evaluated by applying the following chain rule to (17):

w(x, x′) = Σ_i ∫dy [ (δW/δφ_i(y)) (δφ_i(y)/δγ(x′, x)) + (δW/δφ_i*(y)) (δφ_i*(y)/δγ(x′, x)) ] + Σ_i (∂W/∂f_i) (δf_i/δγ(x′, x)).

B. Iteration of the KS equations

In this section we show that the "straightforward" procedure for iterating the KS equations (5) and (16) is intrinsically divergent. The KS equations are nonlinear because the KS Hamiltonian itself depends on the 1-matrix. In favorable cases such nonlinear equations can be solved by iteration. Given a good initial guess for the 1-matrix, iteration may lead to the self-consistent solution corresponding to the ground state. In order to iterate (16), one needs an algorithm to define the 1-matrix of iteration step n + 1 from the 1-matrix of step n, i.e., one needs to "close" the KS equations. In the previous section, we saw that in the 1MFT-KS scheme the occupation numbers f_i are held fixed during the optimization of the natural orbitals.
The following is a "straightforward" algorithm that optimizes the natural orbitals: i) the KS Hamiltonian for step n + 1 is defined by

ĥ⁽ⁿ⁺¹⁾ = ĥ[γ⁽ⁿ⁾],   (24)

where ĥ is given by (15) and γ⁽ⁿ⁾ is the 1-matrix of iteration step n; ii) the eigenstates of ĥ⁽ⁿ⁺¹⁾ are taken as the natural orbitals of step n + 1; iii) the 1-matrix of step n + 1 is constructed from the natural orbitals of step n + 1 by the expression

γ⁽ⁿ⁺¹⁾(x, x′) = Σ_i f_i φ_i⁽ⁿ⁺¹⁾(x) φ_i⁽ⁿ⁺¹⁾*(x′).   (25)

Let u_i be the eigenstates of the KS Hamiltonian ĥ⁽ⁿ⁺¹⁾. In operation (ii), the natural orbitals φ_i⁽ⁿ⁺¹⁾ are selected from among the u_i such that γ⁽ⁿ⁺¹⁾ is as close as possible to γ⁽ⁿ⁾, i.e., Tr((γ⁽ⁿ⁺¹⁾ − γ⁽ⁿ⁾)²) is the minimum possible. If this procedure converges to the stationary 1-matrix γ_min giving the lowest energy possible for the fixed set of occupation numbers, then it defines the function G_v({f_i}), introduced in the previous section, for which the ground state energy is the absolute minimum. Unfortunately, for any set of occupation numbers {f_i} sufficiently close to the ground state occupation numbers, this procedure does not converge to γ_min. In other words, the "straightforward" algorithm defines an iteration map for which the ground state γ_gs is an unstable fixed point. This is a consequence of the degeneracy of the KS spectrum at the ground state. The divergence of the iteration map is revealed by a linear analysis of the fixed point. Suppose the occupation numbers are fixed to values perturbed from their ground state values by δf_i. Let us consider an iteration step n and ask whether the next iteration takes us closer to the stationary point γ_min that gives the minimum energy for the fixed occupation numbers. The linearization of the iteration map at the stationary point gives

δγ⁽ⁿ⁺¹⁾ = χ_s δv_s⁽ⁿ⁺¹⁾ = χ_s (δĥ/δγ) δγ⁽ⁿ⁾ = −χ_s χ⁻¹ δγ⁽ⁿ⁾,   (26)

where v̂_s = v̂ + v̂_H + v̂_xc is the KS potential. The response function χ was defined in (22). The KS response function is

χ_s(x x′, y y′) = δγ(x, x′)/δv_s(y, y′) = Σ_{i≠j} [(f_i − f_j)/(ε_i − ε_j)] φ_i(x) φ_j*(x′) φ_j(y) φ_i*(y′).   (27)

In the last line of (26), we have used

δĥ/δγ = −χ⁻¹,   (28)

which can be established by the following arguments. First consider

δĥ⁽ⁿ⁺¹⁾ = (δĥ/δγ) δγ⁽ⁿ⁾,   (29)

which follows from (24). The KS Hamiltonian is an implicit functional of the 1-matrix, and (29), with the derivative evaluated at the ground state 1-matrix, is valid to O(max(|δf_i|)). Thus, to establish (28) we need to show δĥ/δγ = −χ⁻¹ at the ground state 1-matrix γ_gs. The KS Hamiltonian ĥ is associated with the original many-body Hamiltonian Ĥ, which has external potential v(x, x′). According to (20), ĥ[γ_gs] = 0̂ for every external potential, so that under a change δv of the external potential the ground state 1-matrix changes in such a way that 0 = δĥ = δv̂ + (δŵ/δγ)δγ. Finally, using δv = χ⁻¹δγ we obtain δĥ/δγ = −χ⁻¹, which verifies (28). Returning to the question of convergence, we see that (26) implies that the next iteration takes us farther from the stationary point γ_min. The reason is that det(χ_s χ⁻¹) > 1 if γ is sufficiently close to the ground state. According to (21), the KS response diverges as γ → γ_gs because ε_i − ε_j ∼ O(max(|δf_i|)). For a fixed set of occupation numbers sufficiently close to their ground state values, the moduli of all eigenvalues of the operator χ_s χ⁻¹ become greater than 1 (the null space of χ is assumed to be excluded). Therefore, a perturbation δγ from the ground state is amplified by iteration, and the ground state is an unstable fixed point of the iteration map. A fixed point is stable if and only if all eigenvalues of the linearized iteration map have modulus less than 1.
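The stability statement above is the standard spectral-radius criterion for fixed points, which is straightforward to test numerically for any linearized map. A minimal sketch (the matrices below are illustrative numbers only):

```python
import numpy as np

def fixed_point_stable(M: np.ndarray) -> bool:
    """A fixed point of an iteration x_{n+1} = F(x_n) with linearization
    dx_{n+1} = M dx_n is stable iff the spectral radius of M is < 1."""
    return np.abs(np.linalg.eigvals(M)).max() < 1.0

# A contracting map, and a map mimicking -chi_s chi^{-1} when chi_s has
# become large near the ground state:
print(fixed_point_stable(np.array([[0.4, 0.1], [0.0, 0.3]])))    # True
print(fixed_point_stable(np.array([[-3.0, 0.0], [0.0, -2.5]])))  # False
```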
C. Level shifting method

In the previous section, it was shown that the "straightforward" iteration of the KS equations is intrinsically divergent. To obtain a practical KS scheme, the iteration map must be modified. In this section, we consider the level shifting method [19] and, by linearizing the modified iteration map, we obtain a criterion for convergence. Intrinsic divergent behavior can be encountered also in the Hartree-Fock approximation, and various modifications of the iteration procedure have been introduced, for example, Hartree damping (also called configuration mixing) and level shifting. The level shifting method is particularly attractive in 1MFT because it can prevent the collapse of eigenvalues that is the origin of the divergent behavior. Indeed, it has been shown that the level shifting method is capable of giving a convergent KS scheme in 1MFT [18]. In the "straightforward" iteration procedure, the change of the orbitals, to first order, from iteration step n to iteration step n + 1 is

φ_i⁽ⁿ⁺¹⁾ = φ_i⁽ⁿ⁾ + Σ_{j≠i} φ_j⁽ⁿ⁾ ⟨φ_j⁽ⁿ⁾|ĥ⁽ⁿ⁺¹⁾ − ĥ⁽ⁿ⁾|φ_i⁽ⁿ⁾⟩ / (ε_i − ε_j),   (32)

where ĥ⁽ⁿ⁾ is the KS Hamiltonian for iteration step n and the orbitals and eigenvalues are those of iteration step n. In the level shifting method, the first order change in the orbitals given by (32) is altered by applying the shifts ε_i → ε_i + ζ_i to the eigenvalues in the denominator. To first order, this modification is equivalent to adding the term Δ̂ = Σ_i ζ_i |φ_i⁽ⁿ⁾⟩⟨φ_i⁽ⁿ⁾| to the KS Hamiltonian for step n + 1. Let ĥ_ζ = ĥ + Δ̂ define the level shifted Hamiltonian. Repeating the linear analysis of the previous section for the iteration map of this level shifted Hamiltonian, we find

δγ⁽ⁿ⁺¹⁾ = (−χ_s χ⁻¹ + Ω̂) δγ⁽ⁿ⁾,   (33)

where we have defined the operator Ω̂ with the kernel

Ω(x x′, y y′) = ∫dz dz′ χ_s(x x′, z z′) δΔ(z, z′)/δγ(y, y′).   (34)

From (33), we obtain a criterion for the convergence of the iteration map: all of the eigenvalues of the operator

−χ_s χ⁻¹ + Ω̂   (35)

must have modulus less than 1. The dependence on the level shift parameters ζ_i enters only through the shifted eigenvalues in the denominators of χ_s. The level shifting method is effective because it prevents the divergence of χ_s at the ground state, and there is a cancellation between the two terms in (35). Unfortunately, the convergence criterion depends on χ, which is unknown at the outset of a 1MFT calculation. In Sec. III, the level shifting method is applied in an explicit example and the above criterion is verified.

D. Properties of the KS system

The distinguishing feature of the KS system in 1MFT is the degeneracy of the eigenvalue spectrum. This has surprising consequences. It was shown in Sec. II A that the KS eigenvalue spectrum splits linearly as we move away from the ground state 1-matrix. Therefore, the total KS energy changes linearly with respect to the displacement, i.e., E_s[γ_gs + δγ] − E_s[γ_gs] = O(|δγ|) (see Fig. 4 in Sec. III B 2). This is surprising because such linear changes do not occur for the energy functional E_v (in the VR space). The immediate implication is that E_s[γ] is not stationary at the ground state. While this causes no difficulty in principle (we need only the functional E_v to be stationary), it is intimately connected with the divergence of the iteration map. Precisely at the ground state, E_s[γ_gs] = Σ_i′ ε_i, where the prime indicates that only the pinned states with f_i = 1 contribute to the sum. Away from the ground state the KS eigenvalue spectrum splits, and E_s[γ] is a multivalued functional due to the choice implied in occupying the new KS levels. This is the same choice encountered in the iteration of the KS equations (see Sec. II B), where the natural orbitals φ_i⁽ⁿ⁺¹⁾ are selected from among the eigenstates of the KS Hamiltonian. Near the self-consistent solution, there will be one such choice for which the resulting γ⁽ⁿ⁺¹⁾ is very close to γ⁽ⁿ⁾. It was shown in the preceding section that the static response function of the KS system diverges at the ground state.
Thus, even an infinitesimal perturbation δv_s may induce a finite change of γ. At the ground state, all of the natural orbitals, except those which have a degenerate occupation number, are uniquely defined. The natural orbitals which belong to a degenerate occupation number are only defined modulo a unitary rotation in the degenerate subspace. When a perturbation is introduced, the natural orbitals change discontinuously to the eigenstates of the perturbed KS Hamiltonian ĥ = δv̂_s. These eigenstates may be any functions in the degenerate Hilbert space because δv_s is arbitrary.

III. TWO-SITE HUBBARD MODEL

The 1MFT-KS system has some unusual features, such as the collapse of the KS eigenvalues at the ground state, so it is desirable to derive explicitly the KS equations for a simple model. The Hubbard model on two sites provides a convenient example because it is exactly solvable and especially easy to interpret. Also, analytic expressions for the 1-matrix energy functional and the KS Hamiltonian can be obtained. In the following sections, for the purpose of comparison, we find the ground state of the two-site Hubbard model by three methods: direct solution of the Schrödinger equation, 1MFT and DFT.

A. Direct solution

The Hamiltonian of the two-site Hubbard model is Ĥ = T̂ + Û + V̂ with

T̂ = −Σ_{ijσ} t_ij c†_iσ c_jσ,   Û = U Σ_i n̂_i↑ n̂_i↓,   V̂ = (V/2)(n̂_1 − n̂_2),   (36)

where t_12 = t_21 = t (and t_11 = t_22 = 0), c†_iσ and c_iσ are the creation and annihilation operators of an electron at site i with spin σ, and n̂_i = Σ_σ c†_iσ c_iσ. We consider only the sector of states with N = 2 and S_z = 0, i.e., a spin unpolarized system. In this sector, the eigenstates of T̂ + Û in the site basis are

Φ_0 = [x (c†_1↑c†_2↓ − c†_1↓c†_2↑)/√2 + y (c†_1↑c†_1↓ + c†_2↑c†_2↓)/√2] |0⟩,
Φ_1 = (c†_1↑c†_2↓ + c†_1↓c†_2↑)/√2 |0⟩,
Φ_2 = (c†_1↑c†_1↓ − c†_2↑c†_2↓)/√2 |0⟩,
Φ_3 = [y (c†_1↑c†_2↓ − c†_1↓c†_2↑)/√2 − x (c†_1↑c†_1↓ + c†_2↑c†_2↓)/√2] |0⟩,   (37)

where the variables x = cos(π/4 − α_0/2), y = sin(π/4 − α_0/2), and tan α_0 = U/4t with 0 ≤ α_0 ≤ π/2 have been introduced. The eigenvalues of T̂ + Û for the states Φ_i are

λ_0 = (U − B)/2,   λ_1 = 0,   λ_2 = U,   λ_3 = (U + B)/2,   (38)

where B = √(U² + T²) and T = 4t. Φ_0, Φ_2 and Φ_3 are singlet states (S = 0) and Φ_1 is a triplet state with S = 1 and S_z = 0. We will omit Φ_1 from consideration, as it is not coupled to the other states by the spin-independent external potential chosen in (36). The Hamiltonian may be written as Ĥ = λ_0 Î + B K̂, where, in the basis (Φ_0, Φ_2, Φ_3), K has the diagonal elements 0, (λ_2 − λ_0)/B and 1 and the only nonzero off-diagonal elements K_02 = yν and K_23 = −xν. We have defined the dimensionless variable ν = V/B. The secular equation is

det(K − κÎ) = 0,   (39)

a cubic equation in κ. The normalized eigenvectors of Ĥ, expressed in the basis (Φ_0, Φ_2, Φ_3), are proportional to

(yν/κ_i, 1, xν/(1 − κ_i)),   (40)

and have energy E_i = λ_0 + Bκ_i for i = 0, 2, 3, where κ_i is a root of the secular equation (39). The two dimensionless energy scales of the system are the interaction strength U/T and the bias V/T. The behavior of the system with respect to these energy scales is illustrated in Fig. 1. The quantity m = (n_1 − n_2)/2, where n_i is the average ground state occupancy of site i, is plotted with respect to the external potential V for various values of the interaction strength U. For the ground state, (40) yields m explicitly as a function of V; we refer to this relation, m = m(V), as (43). A weakly interacting system (e.g., the solid [blue] curve in Fig. 1) responds strongly to the external potential. In contrast, a strongly interacting system (e.g., the dashed [red] curve) responds weakly up to a threshold V/B ∼ 1 (for a strongly interacting system B ≈ U). This behavior has a simple interpretation: in order for the external bias to induce charge transfer, it must overcome the on-site Hubbard interaction. In the limit U → ∞, the curve develops step-like behavior near V/B ∼ ±1.
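The results of this subsection can be checked by brute force. The sketch below diagonalizes Ĥ in the N = 2, S_z = 0 sector using the site basis (|↑↓, 0⟩, |0, ↑↓⟩, c†_1↑c†_2↓|0⟩, c†_1↓c†_2↑|0⟩); the 4 × 4 matrix is a hand evaluation of (36) with the fermionic signs included, and the printed ground state energy at V = 0 should reproduce λ_0 = (U − B)/2.

```python
import numpy as np

def hubbard_dimer_gs(t, U, V):
    """Ground state of the two-site Hubbard model in the N=2, Sz=0 sector.
    Basis: |ud,0>, |0,ud>, c1u+ c2d+ |0>, c1d+ c2u+ |0>."""
    H = np.array([[U + V, 0.0,  -t,   t  ],
                  [0.0,  U - V, -t,   t  ],
                  [-t,   -t,    0.0,  0.0],
                  [ t,    t,    0.0,  0.0]])
    E, C = np.linalg.eigh(H)
    return E[0], C[:, 0]

def site_polarization(t, U, V):
    _, c = hubbard_dimer_gs(t, U, V)
    n1 = 2 * c[0]**2 + c[2]**2 + c[3]**2     # <n_1>
    n2 = 2 * c[1]**2 + c[2]**2 + c[3]**2     # <n_2>
    return 0.5 * (n1 - n2)                   # m

t = 1.0
for U in (0.0, 4.0, 16.0):
    B = np.hypot(U, 4 * t)
    E0, _ = hubbard_dimer_gs(t, U, 0.0)
    print(U, E0, (U - B) / 2)                # E0 matches lambda_0 = (U - B)/2
    print(U, site_polarization(t, U, 2.0))   # response weakens as U grows
```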
B. Solution by 1MFT

In the first part of this section, we derive the energy functional and the KS Hamiltonian. In the second, we demonstrate the divergence of the iteration of the KS equations. In the third, we use the level shifting method [19] to obtain a convergent KS scheme.

1. Energy functional and KS Hamiltonian

For lattice models such as the Hubbard model, the 1-matrix is defined as

γ(iσ, jσ′) = ⟨c†_jσ′ c_iσ⟩.   (44)

One may ask whether the HK theorem (or Gilbert's extension in 1MFT) applies when the density (or 1-matrix) is defined over a discrete set of points, i.e., when the continuous density function n(r) is replaced by the site occupation numbers n_i. This has been investigated [27,28], and it was found that the HK theorem remains valid. We consider here only spin unpolarized states (S_z = 0). Accordingly, we define the spatial 1-matrix

γ(i, j) = Σ_σ γ(iσ, jσ).   (45)

The 1-matrix may be expressed as, cf. (5),

γ(i, j) = Σ_α f_α φ_α(i) φ_α*(j),   (46)

where the φ_α are the spatial natural orbitals. As our system is spin unpolarized, the spin up and spin down spin-orbitals have the same spatial factors. Therefore, in (46) each spatial orbital φ_α may be occupied twice (once by a spin up electron and once by a spin down electron), i.e., 0 ≤ f_α ≤ 2. It is convenient to parametrize the natural orbitals as

φ_a = (cos(θ/2), sin(θ/2))ᵀ and φ_b = (sin(θ/2), −cos(θ/2))ᵀ.   (47)

In terms of this parametrization, the 1-matrix in the site basis is

γ = Î + A (sin θ σ_x + cos θ σ_z),   (48)

where the σ_i are the Pauli matrices and A = (f_a − f_b)/2 = cos α. For the two-site Hubbard model in the sector of singlet states with N = 2 and S_z = 0, (44) may be inverted to express Ψ_0 = Ψ_0[γ]. Explicitly, we find Ψ_0 = cos(α/2)Φ_aa − sin(α/2)Φ_bb, where Φ_ii is the Slater determinant composed of the natural spin orbitals φ_i↑ and φ_i↓ (i = a, b). The terms of the energy functional are then

T[γ] = −2tA sin θ,   V[γ] = V A cos θ,   (49)

and

U[γ] = U − (U/2)(1 + sin α) sin²θ.   (50)

The electron-electron interaction energy functional U[γ] agrees with the general exact result for 2-electron closed shell systems [29,30]. We may partition U[γ] into the Hartree energy and the exchange-correlation energy. In Sec. II A, the KS Hamiltonian was derived from the stationary principle for the energy. For the present model the KS Hamiltonian is a real 2 × 2 matrix. In the site basis, its elements are h(ij) = ⟨0|c_i ĥ c†_j|0⟩. This matrix may be expressed as h = h · σ with

h_x = −t − (U/4) [2(1 + sin α) sin θ cos²θ / cos α − cos α sin³θ / sin α],
h_y = 0,
h_z = V/2 + (U/4) sin²θ cos θ [cos α / sin α + 2(1 + sin α)/cos α].   (52)

In these expressions, the variable α represents the dependence on the occupation numbers through the definition α = cos⁻¹A = cos⁻¹((f_a − f_b)/2), α_0 = tan⁻¹(U/4t) is the ground state value of α when V = 0, and θ represents the dependence on the natural orbitals, cf. (47). Let us verify (20) for the uniform case V = 0, for which the ground state 1-matrix has θ = π/2 and α = α_0. At these values h_x = h_y = h_z = 0, which verifies the eigenvalue collapse in this case.

2. Iteration of the KS equations

We demonstrate here the iteration of the KS equations following the straightforward algorithm described in Sec. II B. During the optimization of the orbitals, the occupation numbers (i.e., α) are held fixed. Let us look more closely at each operation in the algorithm. In operation (i), the KS Hamiltonian for step n + 1 is found by evaluating (52) at the 1-matrix γ⁽ⁿ⁾, i.e., at θ = θ_n. In operation (ii), we find the eigenstates u_i of ĥ⁽ⁿ⁺¹⁾, which we parametrize in the form (47) with θ = θ_{n+1}. These eigenstates are taken as the natural orbitals φ_i⁽ⁿ⁺¹⁾ for step n + 1. This implies setting each of the φ_i⁽ⁿ⁺¹⁾ equal to one of the u_i. In the present case, the natural orbitals are chosen such that θ_{n+1} is as close as possible to θ_n. In operation (iii), γ⁽ⁿ⁺¹⁾ is constructed from the φ_i⁽ⁿ⁺¹⁾ by (46). We may now condense these three operations into a discrete iteration map on θ, i.e., a map θ_n → θ_{n+1}. It is defined by

tan θ_{n+1} = h_x(θ_n)/h_z(θ_n)   (53)

for 0 < θ_n < π and A > 0 (0 < α < π/2).
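Before iterating the map, the parametrization (47)-(48) on which it acts can be verified directly: the matrix Î + A(sin θ σ_x + cos θ σ_z) must have natural occupations 1 ± A, with the f_a = 1 + A eigenvector equal to φ_a of (47) up to a phase. A minimal sketch:

```python
import numpy as np

theta, A = 1.1, 0.8
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
gamma = np.eye(2) + A * (np.sin(theta) * sx + np.cos(theta) * sz)

f, phi = np.linalg.eigh(gamma)           # ascending: f_b = 1 - A, f_a = 1 + A
print(f)                                 # [0.2, 1.8]

phi_a = np.array([np.cos(theta / 2), np.sin(theta / 2)])
# eigh fixes phases arbitrarily, so compare up to an overall sign:
print(np.allclose(np.abs(phi[:, 1]), np.abs(phi_a)))   # True
print(np.trace(gamma))                   # = N = 2
```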
In explicit form, (53) depends on the interaction through A_gs, the ground state value of A. An example of the iteration map for t = 1, U = 1, V = 0, and A = A_gs − 0.02 is shown in Fig. 2. The map may be demonstrated graphically by alternately drawing vertical lines from the solid curve to the dashed curve and horizontal lines from the dashed curve to the solid curve. The dotted [blue] curve shows an example of the first two iterations beginning from an initial guess θ_0. The next iterates θ_1 and θ_2 move farther away from the ground state, and the map does not converge to the ground state fixed point θ = π/2. The iteration map is nonlinear and may exhibit quite complex behavior. The linearization of the map at a fixed point tells us whether the fixed point is stable or unstable. As an example, let us consider the uniform case V = 0, for which the ground state fixed point is θ = π/2. Linearization of (53) in terms of the variable m = (n_1 − n_2)/2 = A cos θ gives

m_{n+1} = ξ m_n,   (54)

where the multiplier ξ is an explicit function of t, U and A (55). Suppose the occupation numbers are close to their ground state values, i.e., A = A_gs + δA, where δA is a small displacement. The leading approximation for ξ gives

|ξ| ≈ C(t, U)/|δA|,   (56)

with C(t, U) > 0. For any nonzero values of t and U, there is a threshold d > 0 such that for |δA| < d, |ξ| > 1. Therefore, the ground state is an unstable fixed point. In Sec. II B, the divergence of the iteration map was connected to the divergence of the static KS response function. Let us verify (26) explicitly for the present case. As seen in (54), the linearized iteration map affects only the diagonal elements of the 1-matrix, i.e., the density, which is described by the variable m. Therefore, the relevant response functions are the density-density response of the KS system,

χ_s = δm_s/δV_s,   (57)

and the density-density response of the interacting system,

χ = δm/δV.   (58)

For the two-site Hubbard model, these response functions are just constants. The KS response has a functional dependence on the 1-matrix. It diverges as the ground state is approached, i.e., in the limit δA → 0. The linearized iteration map (26) is then simply multiplication by the constant −χ_s χ⁻¹, which agrees with the direct calculation (56). Of course, in actual calculations it is necessary to have a convergent iteration scheme. One possibility for obtaining convergence is the level shifting method [19], whose application in 1MFT was discussed in Sec. II C. In the level shifting method, one introduces artificial shifts of the KS eigenvalues in order to improve convergence. A shift of the KS eigenvalue ε_i by an amount ζ_i is equivalent to adding a term ζ_i |φ_i⟩⟨φ_i| to the KS Hamiltonian, where φ_i is the orbital with eigenvalue ε_i. The KS system for the two-site Hubbard model has two orbitals. As the divergence of the iteration map is due to the degeneracy of the KS spectrum at the ground state, it seems sensible to prevent degeneracy by introducing a separation 2ζ between the levels. Thus, we add the following term to the KS Hamiltonian at iteration step n:

Δ̂ = ζ (|φ_b⟩⟨φ_b| − |φ_a⟩⟨φ_a|),   (59)

where φ_a and φ_b are evaluated at θ = θ_n. An example of the effect of level shifting is shown in Fig. 3. Convergence is achieved when ζ exceeds a threshold, which may be calculated from the convergence criterion (35). The dashed [red] curve in Fig. 3 shows the iteration map with a level shift value greater than the threshold. For the two-site Hubbard model, the criterion for convergence can be visualized graphically as the condition that the magnitude of the slope of the level shifted curve be less than the slope of the solid [black] curve at the fixed point.
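The divergence of the straightforward map and its cure by level shifting can be reproduced numerically. The sketch below builds the KS field by numerically differentiating the energy functional (49)-(50) rather than from the closed form (52), iterates the orbital angle θ at fixed A, and then repeats the iteration with the level shift (59). The parameters (t = 1, U = 1, V = 0, A = A_gs − 0.02) follow the Fig. 2 example, while the trial values of ζ are our own choices.

```python
import numpy as np

t, U, V = 1.0, 1.0, 0.0
alpha0 = np.arctan(U / (4 * t))
A_gs = np.cos(alpha0)
A = A_gs - 0.02                   # occupation numbers held fixed (cf. Fig. 2)

def energy(gx, gz):
    # E[gamma] for gamma = I + gx*sigma_x + gz*sigma_z, from (49)-(50):
    # T = -2t*gx, external term V*gz, and (50) rewritten in terms of
    # gx, gz (A^2 = gx^2 + gz^2, sin^2(theta) = gx^2 / A^2).
    A2 = gx**2 + gz**2
    return -2*t*gx + V*gz + U - (U/2) * (1 + np.sqrt(1 - A2)) * gx**2 / A2

def ks_field(theta):
    # h = h_x sigma_x + h_z sigma_z with h_i = (1/2) dE/dgamma_i (numerical).
    gx, gz, d = A*np.sin(theta), A*np.cos(theta), 1e-7
    hx = (energy(gx + d, gz) - energy(gx - d, gz)) / (4*d)
    hz = (energy(gx, gz + d) - energy(gx, gz - d)) / (4*d)
    return hx, hz

def iterate(zeta, theta=np.pi/2 + 0.05, steps=200):
    for _ in range(steps):
        hx, hz = ks_field(theta)
        # level shift Delta = zeta*(|b><b| - |a><a|) at the current theta:
        hx -= zeta * np.sin(theta)
        hz -= zeta * np.cos(theta)
        th = np.arctan2(hx, hz)      # eigenvector angles are th and th + pi
        cands = [th % (2*np.pi), (th + np.pi) % (2*np.pi)]
        theta = min(cands,
                    key=lambda c: abs((c - theta + np.pi) % (2*np.pi) - np.pi))
    return theta

print("no shift:", iterate(0.0))     # does not settle at pi/2 (divergent)
for zeta in (0.1, 0.2, 0.5, 1.0):
    print(f"zeta={zeta}: theta -> {iterate(zeta):.6f}")
    # approaches the fixed point pi/2 once zeta exceeds a threshold
```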
At each iteration step, the KS system has an "instantaneous" energy E_s[γ] = Tr(ĥ[γ]γ), which has, of course, no physical meaning when the KS system is not self-consistent. The KS energy is shown in Fig. 4 as a function of the deviation δγ⃗ = (δγ_x, δγ_z) of the 1-matrix (48) from the ground state 1-matrix. It is immediately seen that the KS energy is not stationary at the ground state 1-matrix, which is a cusp point where the energy E_s changes linearly with respect to δγ. The KS energy is multivalued due to the choice implied in occupying the KS levels when the system is not self-consistent (see Sec. II D). The space curve in Fig. 4 shows the energy as a function of θ for fixed occupation numbers, i.e., for fixed A. The KS response is proportional to the inverse separation between the two branches of the space curve. The separation vanishes as the curve approaches the conic point, which is the origin of the divergent KS response.

C. Solution by DFT

The two-site Hubbard model with the local external potential chosen in (36) may be treated also with DFT. It is interesting to compare the DFT-KS scheme with the 1MFT-KS scheme, especially with regard to their convergence behavior. The variational energy functional and the KS Hamiltonian may be constructed explicitly. An interesting result of the investigation is that the straightforward iteration map is divergent when U > 1.307t (for V = 0). We derive a general condition for the convergence of the DFT-KS equations.

1. Energy functional

The HK energy functional for a lattice is

E[n, v] = F[n] + Σ_i v(i) n_i,   (61)

where v(i) is the external potential at site i and F[n] is a universal functional of the density (here, site occupancy) defined as

F[n] = ⟨Ψ_0[n]| T̂ + Û |Ψ_0[n]⟩,   (62)

where T̂ is the kinetic energy operator and Û is the electron-electron interaction. In the following treatment of the two-site Hubbard model, we depart from standard practice by enforcing the normalization condition n_1 + n_2 = N explicitly (i.e., through the parametrization), rather than with a Lagrange multiplier. Thus, we take as basic variable the single parameter m = (n_1 − n_2)/2 that uniquely specifies the density (site occupancy). Similarly, the external potential is specified by the single parameter V. The universal functional (62) is then just a function F(m), which may be constructed explicitly as follows: i) a map m → Ψ_0 is defined as the composition of the maps m → V and V → Ψ_0, and ii) the resulting function Ψ_0(m) is used to evaluate (62). An explicit expression for the map m → V can be found from the inverse of (43). The second map V → Ψ_0 was given in (40). The composition of these two maps gives the ground state as a function of m, i.e., Ψ_0(m), with which the universal functional (62) may be evaluated.

2. KS Hamiltonian

Following standard practice, the KS Hamiltonian takes over, unchanged, the kinetic energy operator from the many-body Hamiltonian. Thus, we consider the KS Hamiltonian ĥ = t̂ + v̂_s (63), where v_s(i) = ∂W[n]/∂n_i (64) and W[n] = E[n, v] − T_s[n] contains the Hartree and exchange-correlation energy as well as the external potential energy, and T_s[n] is the kinetic energy of the KS system. We do not separate these contributions explicitly. The KS potential is spin independent because the ground state density is spin unpolarized. Also, it is determined only to within an arbitrary additive constant, which we choose such that v_s(1) + v_s(2) = 0. In the site basis, the KS Hamiltonian is a 2 × 2 matrix, which may be expressed as ĥ = −t σ_x + (V_s/2) σ_z, where V_s = v_s(1) − v_s(2).
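Returning to the energy functional: the construction of F(m) by composing the maps m → V and V → Ψ_0 can be carried out numerically without solving the cubic. The sketch below inverts m(V) by bisection (reusing the 4 × 4 sector Hamiltonian from the earlier snippet) and evaluates F(m) = E_0(V(m)) − V(m)·m, cf. (61).

```python
import numpy as np

t, U = 1.0, 2.0

def gs(V):
    H = np.array([[U + V, 0.0, -t, t], [0.0, U - V, -t, t],
                  [-t, -t, 0.0, 0.0], [t, t, 0.0, 0.0]])
    E, C = np.linalg.eigh(H)
    return E[0], C[:, 0]

def m_of_V(V):
    _, c = gs(V)
    return c[0]**2 - c[1]**2          # (n1 - n2)/2; decreases with V

def V_of_m(m, lo=-50.0, hi=50.0):
    # invert the monotonic map V -> m by bisection
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if m_of_V(mid) > m else (lo, mid)
    return 0.5 * (lo + hi)

def F(m):
    # F(m) = <Psi0(m)| T + U |Psi0(m)> = E0(V(m)) - V(m)*m, cf. (61)
    V = V_of_m(m)
    E0, _ = gs(V)
    return E0 - V * m

for m in (-0.5, 0.0, 0.5):
    print(m, F(m))                    # F is symmetric: F(-m) = F(m)
```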
The kinetic energy of the KS system is evaluated as T_s = 2⟨φ_a| t̂ |φ_a⟩ (Eq. (65)), where φ_a is the lowest-energy eigenstate of (63) and is twice occupied (once by a spin-up electron and once by a spin-down electron). It is parametrized as in (47) with tan θ = −2t/V_s. The density of the KS system is given by (66), n_i = 2|φ_a(i)|², i.e., m_s = (n_1 − n_2)/2. Thus, from (65) and (66), the kinetic energy T_s is a known function of m_s. From (61), (64) and (65), the KS potential is given by Eq. (67), in which f = ∂F/∂m and V is the given external potential. Eq. (67) is simply the familiar expression v_s(r) = v(r) + v_H(r) + v_xc(r) with a different partitioning of the terms. It is seen that the terms f − ∂T_s/∂m together correspond to the Hartree and exchange-correlation potentials.

Iteration of the KS equations

Let us investigate the iteration of the KS equations in the present context. The conventional iteration map consists of the following steps: i) the KS potential for step n + 1 is determined from the density of step n using (67), i.e., V_s^(n+1) = V_s(m_s^(n)); ii) the eigenstates of ĥ^(n+1) are found; and iii) the density of step n + 1 is calculated with (66). Consider step (i) in more detail. The KS potential is obtained from (67), which leads to Eq. (69). The right hand side may be expressed differently by using the stationary conditions for the energy functional E[n, v] and the KS energy E_s = T_s + Σ_i v_s(i) n_i. The stationary condition ∂E/∂m = 0 applied to (61) gives f = −V(m), where V(m) is the external potential such that the interacting system has ground state density m. Similarly, the stationary condition applied to E_s gives ∂T_s/∂m = −V_s. Substituting these relations in (69) yields Eq. (70). At self-consistency the V_s terms cancel, and we obtain the expected result V = V(m_gs), where m_gs is the ground state density. For the present model, the ground state density could be found by solving V = V(m), as V(m) is known exactly from (43). However, in general the ground state must be found by iteration. Eq. (70) implies an iteration map for the density, i.e., a map m_s^(n) → m_s^(n+1). From (66) and the definition tan θ = −2t/V_s, we find the relationship (71) between m_s and V_s. The density may be iterated until self-consistency is reached. However, we encounter a technical difficulty for the present model. In order to express explicitly the term V(m_s^(n)) in (70), we must invert (43), which involves solving a cubic equation. As the solutions are rather unwieldy, we take here a different approach. We iterate instead the external potential V(m). It may seem odd to iterate the external potential, which is given in the statement of the problem. Nevertheless, the iteration map for V provides an "image" of the iteration map for m_s, by virtue of the HK theorem. Such an approach allows us to investigate certain features of the iteration map, in particular its convergence behavior. In order to express (70) as an iteration map for V, we need to express V_s as a function of V. In other words, we find the value of V_s such that the KS system has density m_s = m, where m is the density of the interacting system with V. The composition of (71) and (43) yields the desired function Ṽ_s(V), Eq. (72). Using (72) in (70), we obtain the iteration map (73) for the external potential, which is expressed in implicit form. Examples of the iteration map for a uniform system (V = 0) are shown in Figs. 5 and 6, where the left and right hand sides of (73) are plotted. Suppose an initial value V^(0) ≠ 0 is chosen. For a system with V = 0, the ground state has uniform density (m = 0), but the initial density m_s^(0) associated with V^(0) is not uniform.
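The three-step loop i)-iii) can be made concrete with a small sketch. Since the exact functional behind (67) is not reproduced here, the update V_s = V + U·m is used as a Hartree-like stand-in for the true KS potential; the parameter values and the mixing scheme are likewise assumptions chosen for illustration. The sketch shows the "charge oscillation" discussed next: plain iteration overshoots and oscillates, while a damped (linearly mixed) update converges.

```python
import numpy as np

t, U, V = 1.0, 4.0, 0.0  # strong interaction to provoke charge oscillation

def ks_density(Vs):
    """Steps (ii)-(iii): diagonalize the 2x2 KS Hamiltonian (site basis,
    diagonal +/- Vs/2, off-diagonal -t) and occupy the lowest orbital twice."""
    h = np.array([[ Vs / 2, -t],
                  [-t, -Vs / 2]])
    _, C = np.linalg.eigh(h)
    phi = C[:, 0]                       # lowest KS orbital
    n = 2 * phi ** 2                    # doubly occupied
    return (n[0] - n[1]) / 2            # m_s

def vs_of_m(m):
    """Step (i): KS potential update. V + U*m is a Hartree-like stand-in
    for Eq. (67); the true W-derived potential is not reproduced here."""
    return V + U * m

def scf(m0=0.3, mixing=1.0, n_iter=30, tol=1e-10):
    m = m0
    for it in range(n_iter):
        m_new = ks_density(vs_of_m(m))
        if abs(m_new - m) < tol:
            return m_new, it, True
        m = (1 - mixing) * m + mixing * m_new  # linear mixing (damping)
    return m, n_iter, False

for mixing in (1.0, 0.3):  # plain iteration vs. damped iteration
    m, it, ok = scf(mixing=mixing)
    print(f"mixing={mixing}: converged={ok} after {it} iterations, m={m:+.6f}")
```

Note that this mean-field stand-in has its own convergence threshold; the value U ≈ 1.307t quoted in the text belongs to the exact functional of the model, while the sketch only illustrates the mechanism.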
Upon iteration, we expect the KS system to relax to a uniform density, i.e., we expect the KS potential to be such as to push the system closer to uniform occupancy in the next iteration. In Figs. 5 and 6, the solid curves represent the left hand side of (73) and the dashed curves the right hand side. The iteration map may be demonstrated graphically by alternately drawing vertical lines from the solid curve to the dashed curve and horizontal lines from the dashed curve to the solid curve. The map displays "charge oscillation." The ground state is a stable fixed point if the magnitude of the slope of the dashed curve at the origin is less than the slope of the solid curve at the origin. For weakly interacting systems the iteration map is convergent, while for strongly interacting systems it is nonconvergent. The threshold for convergence is U ≈ 1.307t.

Linearization of the KS equations

The nature of the fixed point and the origin of divergent behavior are revealed by linearization of the iteration map. We linearize the map by expanding both sides of (70) with respect to δm_s = m_s − m_gs, where m_gs is the ground state density. We find the linearized map (74), δm_s^(n+1) = (1 − χ_s χ^(−1)) δm_s^(n), where χ_s and χ are the density-density response functions defined in (57) and (58). The threshold for convergent behavior is (75), |1 − χ_s χ^(−1)| ≤ 1, or, equivalently, χ_s χ^(−1) ≤ 2. Note the change from 1 for 1MFT to 2 for DFT on the right hand side, cf. (26). Consider the case V = 0, which has uniform density in the ground state (m_gs = 0). Using (57) and (58) in (75) gives the threshold condition cos(π/4 − α_0/2) = 4 sin³(π/4 − α_0/2). The leading behavior of the KS response is independent of U, while the response of the interacting system vanishes as U grows. For sufficiently large U, the threshold (75) is crossed and divergent behavior results. In DFT, as also in 1MFT, the source of divergent behavior is a KS response that is too large in relation to the exact response. In 1MFT the imbalance results from a divergent KS response, whereas in DFT the KS response generally remains finite but the response of the interacting system becomes too small as U increases. In standard DFT (with continuous n(r)), the analog of the linearized iteration map (74) may be written δn^(n+1)(r) ≈ ∫dr′ ∫dr″ χ_s(r, r′) [v_c(r′, r″) + f_xc(r′, r″)] δn^(n)(r″), where n^(n)(r) is the density of iteration step n, v_c is the kernel of the Coulomb interaction, and f_xc = δv_xc/δn is the exchange-correlation kernel. The necessary and sufficient condition for convergence of the KS equations is that all eigenvalues of the operator χ_s (v_c + f_xc), Eq. (80), have modulus less than 1.

IV. CONCLUSIONS

The status of the KS system in 1MFT has been uncertain. Although Gilbert derived effective single-particle equations from the stationary conditions for the energy functional, the degeneracy of essentially all of the resulting orbitals was thought to be paradoxical.2,13,14 We have presented an alternative derivation of the KS equations in which the degeneracy is lifted by constraining the occupation numbers. Such a KS scheme is well-behaved in the neighborhood of the ground state occupation numbers. Therefore, the correct natural orbitals are obtained in the limit that the ground state is approached. We have constructed explicitly the 1MFT-KS system for a simple two-site Hubbard model. While we find no paradoxical results, the KS system has many striking features, in particular the collapse of eigenvalues at the ground state.
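The eigenvalue condition under Eq. (80) is straightforward to check once the response function and kernels are discretized into matrices. The sketch below uses small made-up matrices (a negative-definite stand-in for χ_s, a model Coulomb kernel and a local xc kernel) purely to illustrate the test; none of them come from a real system.

```python
import numpy as np

def converges(chi_s, v_c, f_xc):
    """Check the condition under Eq. (80): every eigenvalue of
    chi_s (v_c + f_xc) must have modulus < 1 for the plain KS
    iteration to converge."""
    M = chi_s @ (v_c + f_xc)
    rho = np.max(np.abs(np.linalg.eigvals(M)))
    return rho, rho < 1.0

# Illustrative 3-point discretization; these matrices are made up
# for demonstration and do not come from a real system.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
chi_s = -(A @ A.T) * 0.05            # KS response: negative definite
v_c = np.array([[2.0, 1.0, 0.5],
                [1.0, 2.0, 1.0],
                [0.5, 1.0, 2.0]])     # model Coulomb kernel
f_xc = -0.3 * np.eye(3)              # model (local) xc kernel

rho, ok = converges(chi_s, v_c, f_xc)
print(f"spectral radius = {rho:.3f} -> {'convergent' if ok else 'divergent'}")
```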
Although the KS eigenvalues do not have a physical interpretation as in DFT, the orbitals, which are called natural orbitals, play an important role in the context of configuration interaction, i.e., the expansion of the full wavefunction as a sum of Slater determinants.17 This may be important in the search for approximate energy functionals. Beyond the question of the existence of the KS system in 1MFT, there is the issue of its practicality. The KS system has been extremely useful in DFT calculations. Due to the implicit 1-matrix dependence of the single-particle potential, the KS equations are nonlinear. Such equations are generally solved by iteration. As in DFT, there is a "straightforward" procedure for iteration. In contrast to DFT, the "straightforward" procedure is always divergent, in the sense that the ground state is an unstable fixed point. We have demonstrated the instability of the ground state by linearization of the iteration map. The source of the instability is the divergence of the KS static response function at the ground state, which, in turn, is due to the degeneracy of the KS spectrum. Degeneracy-driven instability is reminiscent of the Jahn-Teller effect, and the connection is strengthened if we regard the implicit 1-matrix dependence of the KS Hamiltonian as analogous to the parametric dependence of the Born-Oppenheimer Hamiltonian on nuclear coordinates. In both cases, the energy spectrum splits linearly with respect to displacement from the degeneracy point. Thus, the energy may always be lowered by displacement. For the 1MFT-KS system, this means that the KS energy tr(ĥγ) may always be lowered by displacement from the ground state, leading to an instability of the iteration procedure. However, this is a fictitious energy, and the HK energy functional E_v is, of course, always minimal at the ground state.
Disseminated Rhodococcus equi infection in HIV infection despite highly active antiretroviral therapy

Background

Rhodococcus equi (R. equi) is an acid-fast, Gram-positive coccobacillus, which is widespread in the soil and causes pulmonary and extrapulmonary infections in immunocompromised people. In the context of HIV infection, R. equi infection (rhodococcosis) is regarded as an opportunistic disease, and its outcome is influenced by highly active antiretroviral therapy (HAART).

Case presentation

We report two cases of HIV-related rhodococcosis that disseminated despite suppressive HAART and anti-rhodococcal treatment; in both cases there was no immunological recovery, with CD4+ cell counts below 200/μL. In the first case, pulmonary rhodococcosis presented 6 months after initiation of HAART, and was followed by an extracerebral intracranial and a cerebral rhodococcal abscess 1 and 8 months, respectively, after onset of pulmonary infection. The second case was characterized by a protracted course with spread of infection to various organs, including subcutaneous tissue, skin, colon and other intra-abdominal tissues, and the central nervous system; the spread started 4 years after clinical resolution of a first pulmonary manifestation and progressed over a period of 2 years.

Conclusions

Our report highlights the importance of an effective immune recovery, despite fully suppressive HAART, along with anti-rhodococcal therapy, in order to clear rhodococcal infection.

Background

Rhodococcus equi (R. equi) is an acid-fast, Gram-positive coccobacillus, which was first isolated from suppurative pulmonary lesions in foals [1]. The first human case of R. equi infection (rhodococcosis) was reported in 1967 in an immunocompromised patient with pneumonia [2], and its frequency has increased significantly during the last 20 years [3][4][5], especially in immunocompromised patients, such as transplant recipients and HIV-infected patients [6,7]. Rhodococcosis is a rare infection, the exact prevalence of which is not known. Until now, more than 200 cases have been reported worldwide [4,6]. In the majority of the cases, R. equi is acquired by inhalation of aerosols coming from the stool of infected foals. Cavitary pneumonia is the most frequent clinical manifestation [8], although spreading of the infection to other organs is common, particularly in immunocompromised subjects [9][10][11][12][13][14][15]. The diagnosis relies on radiological examinations [16], isolation of R. equi from blood, sputum and other body fluids [17], and histological examination of tissue samples, which may reveal typical necrotizing granulomatous lesions, also termed malakoplakia [18]. There is no standard treatment for rhodococcosis; it usually consists of a combination of at least two antibiotics to which the agent is susceptible. These include macrolides, rifampin, fluoroquinolones, aminoglycosides, glycopeptides and carbapenems, although newer drugs, such as tigecycline and linezolid, have also successfully been used [19][20][21]. The choice should be based on the results of the antibiogram, and drugs should be given intravenously for at least 2 weeks, followed by prolonged oral suppressive antibiotic treatment [4]. Surgical drainage of abscesses or cavitary lesions may also be required [9]. Despite treatment, the outcome of rhodococcosis is poor in immunocompromised patients, with the highest mortality (50-60%) in HIV infection.
The use of highly active antiretroviral therapy (HAART), however, has dramatically changed the prognosis in HIV-infected patients, with reported survival rates of virtually 100% [9]. Cellular immunity, in particular the Th1 response, appears indeed to play a prominent role in the containment of R. equi infection [22]. We here report two cases of R. equi pneumonia in HIV-infected patients that disseminated despite virologically suppressive HAART, without CD4+ cell count increases above 200/μL. These cases highlight the importance of an effective immune recovery induced by HAART, along with appropriate antibiotic therapy, in order to clear rhodococcal infection. In addition, they illustrate the wide spectrum of clinical manifestations caused by R. equi and the potential of non-conventional radiological approaches, such as nuclear techniques, in the diagnostic work-up and follow-up of R. equi lesions.

Case presentation

Case report 1

In April 2002, a 49-year-old HIV-infected woman was admitted to hospital for persistent fever above 38°C and cough (Table 1). She had started HAART with didanosine, lamivudine and indinavir in October 2001, when her CD4+ cell count was 118/μL, and 2 months later had developed brain and brain stem vasculitis-like, contrast-enhancing, white matter lesions, consistent with immune reconstitution central nervous system (CNS) manifestations, for which she did not receive any treatment. At the time of admission in April, the CD4+ cell count was 123/μL and HIV-RNA was undetectable. Chest X-ray showed a nodular opacity in the left upper lobe and R. equi was cultured from blood and sputum, leading to a diagnosis of R. equi pneumonia. Intravenous treatment with vancomycin, imipenem, ceftriaxone and ciprofloxacin was started, along with oral prednisone (75 mg qd for 8 days, then tapered over 20 days) for a concomitant, likely HIV-related, severe thrombocytopenia. In May 2002, the patient presented with generalized seizures. Brain magnetic resonance imaging (MRI) confirmed the presence of the known vasculitis-like lesions, still contrast enhancing, but also showed an extracranial abscess, which was surgically removed and from which R. equi was cultured. Intravenous antibiotic treatment was continued for a total of 8 weeks; then the patient received oral ciprofloxacin and clarithromycin for the following 15 weeks. In July 2002, the patient's respiratory symptoms resolved, the CD4+ cell count was 161/μL and HIV RNA 10,800 c/ml (the patient had discontinued therapy in May because of diarrhoea and resumed the same HAART in June); chest X-ray and CT scan showed complete regression of the lung nodules. Because of new onset of cognitive symptoms, and persistence of the contrast-enhancing vasculitis-like brain lesions, oral prednisone was started at 1 mg/kg and gradually tapered for a total of five months. In December 2002, about 20 weeks after interruption of anti-rhodococcal treatment, the patient developed anisocoria, right hemiparesis, hypoaesthesia, hypopallaesthesia, and dysmetria. At MRI, the known white matter lesions were still present but no longer contrast enhancing (Figure 1a); however, the exam showed two new contrast-enhancing nodular lesions, surrounded by oedema (Figure 1b, c). Cerebrospinal fluid (CSF) examination was unremarkable, no bacteria, mycobacteria or fungi were cultured, and cryptococcal antigen and Epstein-Barr virus DNA were undetectable.
Antitoxoplasmic treatment was given empirically for 3 weeks, without improvement of clinical conditions, and was followed by enlargement of the nodular lesions at MRI (Figure 1d-f). On suspicion of brain rhodococcosis, intravenous treatment with ciprofloxacin, vancomycin and ceftriaxone was given for 3 weeks, together with oral prednisone, followed by resolution of focal symptoms and improvement of MRI lesions. During the following years, severe neurological impairment with gait disturbance persisted; the patient developed progressive dementia and was admitted to a long-term medical facility. Follow-up brain MRIs documented the disappearance of the abscesses, persistence of the vasculitis-like lesions and development of cerebral atrophy. The patient changed HAART for toxicity many times, and her immunovirological situation remained stable. She did not develop rhodococcosis relapses or other opportunistic complications, but died in December 2010 of ischemic cardiac events.

Case report 2

A 45-year-old HIV-infected woman, with a history of breast carcinoma successfully treated with quadrantectomy and local radiotherapy, presented in May 2005 with low-grade fever (below 38°C), cough and dyspnea (Table 2). She had been off HAART since 2000; her CD4+ cell count was 133/μl and HIV RNA 40,700 c/ml. She was on cotrimoxazole as primary P. jiroveci pneumonia prophylaxis. Chest X-ray and CT scan showed a 5 cm pulmonary nodule in the upper right pulmonary lobe (Figure 2a, b). R. equi was isolated from expectorate and, based on the results of the antibiogram, the patient received 2 weeks of intravenous rifampicin, levofloxacin and azithromycin, followed by 8 weeks of oral levofloxacin and azithromycin. HAART with lamivudine, tenofovir and efavirenz was also started. Respiratory symptoms resolved with progressive reduction of the pulmonary lesions. In August, the CD4+ cell count was 60/μl and HIV RNA was undetectable (<50 c/ml). During the following years, the patient's adherence to HAART remained poor until she stopped treatment in 2008. Her CD4+ cell count always remained below 200/μl; nevertheless, she experienced neither respiratory diseases nor any other HIV-related manifestations. In the summer of 2009, 4 years after the R. equi pneumonia, the patient reported a weight loss of 10 kg in 3 months and low-grade fever, and noticed a subcutaneous, non-painful nodule in the right thigh. The CD4+ cell count was 90/μL and HIV RNA 78,650 c/mL.

Figure 1. Central nervous system (CNS) immune reconstitution white matter lesions and R. equi brain abscesses by magnetic resonance imaging (MRI) (case report 1). a-c. Brain lesions at the onset of focal neurological symptoms. a. Axial FLAIR brain sequence showing non-specific asymmetric bilateral hyperintensity in the subcortical region, an expression of the immune reconstitution inflammatory reaction. b. Axial FLAIR sequence showing an abscessual lesion, surrounded by oedema, in the right temporal region. c. Gadolinium (Gd)-T1 sequence showing nodular enhancement of the right temporal lesion. d-f. Evolution of brain lesions 1 month after onset of symptoms. d. Axial FLAIR sequence showing persistence of the non-specific white matter hyperintensity. e. Axial FLAIR sequence showing an increase in size and oedema of the right temporal lesion. f. Gd-T1 sequence showing the evolution of contrast enhancement, now presenting as ring enhancement, typical of an abscessual lesion.
An MRI of the thigh lesion was highly suggestive of a neoplastic lesion, with necrosis and inflammation (Figure 2c, d), which prompted a diagnostic work-up for tumor identification and staging. A CT scan showed multiple enlarged abdominal lymph nodes, and an 18-fluorodeoxyglucose (FDG) PET/CT scan showed increased metabolic activity of the thigh lesion, colon and spleen (Figure 2e). However, a needle biopsy of the thigh nodule demonstrated a non-necrotic granuloma, from which R. equi was cultured. A colonoscopy showed a polypoid, stenotic lesion, which histologically disclosed a picture of malakoplakia, with the presence of microorganisms with morphology and staining features consistent with R. equi. Based on the antibiogram, azithromycin, levofloxacin and rifampicin were started in September 2009. HAART with tenofovir, emtricitabine and atazanavir was introduced 1 month later, with rifampicin replaced by rifabutin, followed by a prompt virological response, but no significant CD4+ cell increase. During the following weeks, the patient's conditions continued to worsen, with further weight loss, fever persistently above 38°C, and onset of dyspnea, ascites and diarrhoea. An abdominal MRI showed a 4 cm wide lesion of the colon and a 3 cm wide lesion in the perisplenic peritoneum; enlarged abdominal, celiac, para-hepatic and para-aortic lymph nodes of 1-2 cm in diameter; and peritoneal and pleural effusion, while a chest high resolution CT scan showed multiple lung consolidations. In November, antibiotic treatment was switched to imipenem and amikacin, while HAART was modified with atazanavir replaced by darunavir/ritonavir, followed by significant clinical improvement and reduction of both the dimension and the metabolic activity of the thigh lesion and abdominal lymph nodes, as documented by conventional CT and 18-FDG PET/CT scans. After 3 weeks, treatment was replaced by oral levofloxacin and azithromycin. Remarkably, a new focus of increased gastric uptake was noted at 18-FDG PET/CT scan. In January 2010, the patient received extracorporeal lithotripsy for nephrolithiasis, possibly consequent to granulomatosis-associated hypercalciuria. At this time there was no longer evidence of the thigh lesion. In February 2010, after 10 weeks of anti-rhodococcal oral therapy, the patient self-suspended antibiotic and antiretroviral therapy because of severe gastralgia. An esophago-gastro-duodenoscopy (EGDS) showed esophageal candidiasis and the presence of a large stomach ulceration, which was histologically proven to be a diffuse large B-cell lymphoma. The CD4+ cell count was 62/μL and HIV-RNA undetectable (<50 c/mL). The patient started proton pump inhibitors and fluconazole. In March 2010, the patient was admitted to hospital because of abdominal pain and intestinal bleeding requiring blood transfusion. A CT scan showed a solid lesion in the right colon (Figure 2f), surrounded by multiple enlarged lymph nodes, with the radiologic appearance of a colon cancer; colonoscopy showed a large mass in the proximal ascending colon, which was biopsied. However, in the following days the patient developed peritonitis and caecal perforation that required emergency laparotomy and right hemicolectomy. Histological examination of a 7 cm wide sessile, centrally excavated heteroplasia in the ascending colon and of peritoneal lymph nodes showed malakoplakia of the intestine and of perivisceral lymph nodes [23]. A similar histological picture was disclosed by an echo-guided biopsy of a 2.5 cm wide lesion of the chest wall.
The patient also noticed the appearance of subcutaneous nodules on her face (Figure 2g) and reappearance of the thigh lesion, and R. equi was isolated from both the thigh lesion and blood samples. Based on the antibiogram, the patient was started again on imipenem, levofloxacin and vancomycin; intravenous azithromycin was added after 3 weeks. Vancomycin was stopped after 4 weeks for severe thrombocytopenia, imipenem was replaced by ertapenem after 7 weeks, and levofloxacin was suspended after 8 weeks. After a total of 14 weeks, intravenous treatment was substituted with oral azithromycin and rifabutin. In June 2010, the patient's conditions were improved, with disappearance of the skin lesions, volume reduction of the thigh lesion and lymph nodes at CT scans, and no abnormalities in lungs and brain. Multiple biopsies at a control EGDS did not confirm the presence of gastric lymphoma. Despite virological suppression, there was no immunological improvement (CD4+ cell count 74/μL). In August 2010, the patient was admitted to hospital for severe headache; CT and MRI disclosed multiple contrast-enhancing brain lesions, associated with cerebral oedema (Figure 2h, i). Intravenous imipenem and amikacin were started together with mannitol and dexamethasone, followed by clinical improvement. Systemic spread of rhodococcosis was excluded by a total body CT scan. After 21 days, treatment was changed to meropenem and azithromycin and, after 10 weeks, to ertapenem alone. MRI follow-up showed progressive reduction of the cerebral lesions, despite the occurrence of seizures in December 2010, for which antiepileptic therapy was started.

Discussion and conclusions

Highly active antiretroviral therapy has impressively reduced the mortality and morbidity of opportunistic diseases [24]. Nevertheless, these remain significant in patients with low CD4+ cell counts and during the first months of therapy. This may either result from insufficient immunological recovery, or from the inflammatory reaction to the opportunistic infection — the so-called immune reconstitution inflammatory syndrome (IRIS) [25,26]. Indeed, neither of the patients described here was able to increase the CD4+ cell count during the first months of HAART, despite complete virological suppression, and this was associated with the multi-organ spread of rhodococcosis. Lack of immunological recovery, indeed, is associated with an enhanced risk of disease progression and death, even in virologically suppressed patients [27]. Conversely, in the first patient, the immunological recovery after the CNS relapse was accompanied by a long disease-free period. The integrity of cellular immunity appears essential for clearing R. equi infection [22]. During infection, R. equi survives inside macrophages by inhibiting phagosome-lysosome fusion and thereby its own degradation [28]. In vitro production of IFN-γ and TNF-α in response to stimulation with R. equi is significantly impaired in AIDS patients compared to healthy subjects [29], confirming the importance of an adequate cellular, and especially Th1, response for clearing the infection. It is possible that the persistence of R. equi in macrophages also impairs the function of these cells, thus contributing to maintaining the immunodeficiency.
Indeed, lack of CD4+ T cell rise despite full virological suppression has also been observed in other HIV-associated opportunistic infections, such as visceral leishmaniasis [30], also characterized by the persistence of the parasites in macrophages [31]. In our first case, rhodococcal pneumonia occurred 6 months after the introduction of HAART, right after a manifestation of CNS-IRIS, suggesting an immune reconstitution-associated event. Indeed, this case met the definition criteria for "unmasking" IRIS, i.e., unmasking or paradoxical deterioration of an opportunistic disease after introduction of an effective antiretroviral therapy [26,32]. A case of paradoxical worsening of R. equi pneumonia following HAART initiation was also recently described (i.e., paradoxical IRIS) [33]. These observations suggest that R. equi might be added to the list of pathogens associated with IRIS. As for other opportunistic infections, the first weeks after introduction of HAART are critical in this regard, and patients should be strictly monitored in order to recognize IRIS events early. However, experience is still too limited to determine the best timing of HAART introduction in the context of rhodococcosis, or the use of anti-inflammatory drugs if IRIS develops. One important clinical issue in our second case was the difficult diagnostic approach, which required invasive techniques, such as needle biopsy and endoscopic biopsy, because of the difficult interpretation of the radiological findings. In this setting, the potential of "non-conventional" techniques in the diagnostic work-up was remarkable, with 18-FDG PET/CT proving useful to uncover silent localizations of the disease, such as the colonic and peritoneal lesions. This technique is a sensitive tool not only in neoplastic diseases, but also in tuberculosis and invasive aspergillosis [34][35][36]. It was also reported to be useful to disclose a rhodococcal localization of the tongue [37], but it is not an established means for diagnosis or monitoring of rhodococcosis. As in tuberculosis, high 18-FDG uptake in rhodococcal lesions is likely related to high levels of cell proliferation, possibly in relation to the inflammatory process. In our second patient, it was challenging to recognize R. equi as the cause of lesions, such as those of the thigh, colon and lung, whose radiological appearance resembled that of neoplastic lesions, in a patient with a previous history of neoplasia and histological evidence of gastric lymphoma. While it is difficult to dissect the effect of individual anti-rhodococcal agents in disseminated infection, it appeared that a regimen containing carbapenems was more effective in our second case than a combination of rifampicin, levofloxacin and azithromycin, which had previously been effective in curing the pulmonary infection. On the other hand, a combination of imipenem and aminoglycosides, given for 3 weeks and followed by antibiotic maintenance therapy, although partially effective, did not prevent subsequent progression of the extrapulmonary infection. Similarly, the subsequent 14-week course with carbapenems, followed by oral antibiotic maintenance, did not prevent the later occurrence of CNS lesions. In this regard, the dissemination of the infection to the CNS in both of our cases is noteworthy: it occurred in the first case months after apparent recovery from R. equi pneumonia and, in the second case, despite long-term induction and ongoing maintenance therapy.
CNS involvement could have been due to spread of infection to the CNS at the time of extra-cerebral disease, kept under control by systemic treatment, and reactivated once antimicrobial treatment was interrupted, as in the first case, or switched to an oral maintenance regimen, as in the second case. The cases presented here show that rhodococcosis may disseminate to a number of tissues in HAART-treated, immunologically failing patients, and indicate that long-term intravenous treatment might be required to avoid relapses at distant sites, such as the CNS, at least until a sufficiently high CD4+ cell count (e.g., more than 200/μL) is achieved. Achieving complete immune reconstitution by means of HAART remains the most important weapon against rhodococcosis, in addition to combination antimicrobial treatment.

Consent

Written informed consent was obtained from the patients for publication of this study. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
Exploring Patients' Perceptions About Chronic Kidney Disease and Their Treatment: A Qualitative Study

Background

Unhelpful illness perceptions can be changed by means of interventions and can lead to improved outcomes. However, little is known about illness perceptions in patients with chronic kidney disease (CKD) prior to kidney failure, and no tools exist in nephrology care to identify and support patients with unhelpful illness perceptions. Therefore, this study aims to: (1) identify meaningful and modifiable illness perceptions in patients with CKD prior to kidney failure; and (2) explore needs and requirements for identifying and supporting patients with unhelpful illness perceptions in nephrology care from patients' and healthcare professionals' perspectives.

Methods

Individual semi-structured interviews were conducted with purposive heterogeneous samples of Dutch patients with CKD (n = 17) and professionals (n = 10). Transcripts were analysed using a hybrid inductive and deductive approach: identified themes from the thematic analysis were subsequently organized according to Common-Sense Model of Self-Regulation principles.

Results

Illness perceptions considered most meaningful are related to the seriousness (illness identity, consequences, emotional response and illness concern) and manageability (illness coherence, personal control and treatment control) of CKD. Over time, patients developed more unhelpful seriousness-related illness perceptions and more helpful manageability-related illness perceptions, caused by: CKD diagnosis, disease progression, healthcare support and approaching kidney replacement therapy. Implementing tools to identify and discuss patients' illness perceptions was considered important, after which support for patients with unhelpful illness perceptions should be offered. Special attention should be paid towards structurally embedding psychosocial educational support for patients and caregivers to deal with CKD-related symptoms, consequences, emotions and concerns about the future.

Conclusions

Several meaningful and modifiable illness perceptions do not change for the better by means of nephrology care. This underlines the need to identify and openly discuss illness perceptions and to support patients with unhelpful illness perceptions. Future studies should investigate whether implementing illness perception-based tools will indeed improve outcomes in CKD.

Supplementary Information

The online version contains supplementary material available at 10.1007/s12529-023-10178-x.
Introduction

With the ongoing shift towards person-centred healthcare, increased attention is paid towards the perceptions that patients hold [1]. A growing body of literature suggests that especially patients' illness perceptions are key to understanding why many patients with chronic conditions (e.g. diabetes mellitus and cardiovascular disease) have poor health outcomes that cannot be explained by the clinical severity of the condition alone [2][3][4]. Illness perceptions are part of Leventhal's Common-Sense Model (CSM) of Self-Regulation: this model proposes that when people are faced with a health threat (e.g. diagnosis or symptoms), this evokes cognitive and emotional perceptions, and these perceptions help people to make sense of the situation they are confronted with (i.e. how serious and controllable is the disease). These illness perceptions also affect how patients respond to and cope with the disease (e.g. treatment adherence [e.g. adopting a healthy lifestyle or taking medication as prescribed], seeking support, etc.) and subsequently contribute to health outcomes [2, 3, 5, 6].

A large body of literature has shown that illness perceptions of patients with kidney failure are associated with various outcomes, including depression, health-related quality of life (HRQOL) and mortality [e.g. 7, 8]. Until now, few studies have focused on earlier stages of chronic kidney disease (CKD), but the longitudinal studies that have been conducted suggest that strong negative illness perceptions are common and a marker for poor outcomes [9][10][11]. For example, an accelerated disease progression (i.e. a faster kidney function decline and an earlier start of dialysis) and increased odds for an unfavourable HRQOL-trajectory were detected in patients who attributed more symptoms to their CKD; believed to a lesser extent that they fully understand their CKD and can personally control their CKD; and believed to a higher extent that their CKD has negative consequences upon their lives, has an unpredictable cyclical nature and causes emotional distress [9, 10].

Furthermore, studies have shown that unhelpful illness perceptions can be changed by means of psychoeducational support strategies and can lead to improved coping behaviours and health outcomes [12][13][14]. Hence, identifying unhelpful illness perceptions may create unique opportunities to improve patient-reported and clinical outcomes in patients prior to kidney failure. Currently, no tools exist in routine nephrology care to identify and support patients with unhelpful illness perceptions. To build adequate, timely and personalized assessment and support tools, in-depth knowledge is needed about (the story behind) illness perceptions (e.g. which illness perceptions underlie patients' personal experiences and ability to cope with CKD, and how these illness perceptions evolve over time) and about stakeholders' needs regarding illness perception-based tools. Therefore, this study aims to identify, from the patients' and healthcare professionals' perspectives: (1) meaningful and modifiable illness perceptions; and (2) needs and requirements regarding identifying and supporting patients with unhelpful illness perceptions prior to kidney failure. By not only including patients' perspectives but also those of professionals, insight into the topics is enriched by their years of experience caring for patients with CKD, which will facilitate the development of tools that fit routine nephrology care.
Design and Participants

Individual face-to-face semi-structured interviews were conducted between January and October 2019 in the Netherlands. Purposive sampling ensured a heterogeneous patient-sample representing the CKD population (e.g. regarding gender, age, educational level and comorbidities) and a heterogeneous professional-sample representing diverse occupations (e.g. nephrologist, nurse practitioner, social worker and dietician). Eligibility criteria for patients were: ≥ 18 years old, sufficient understanding of the Dutch language, estimated glomerular filtration rate of ≤ 30 ml/min/1.73m2 (CKD stage 4-5: the stages in which the kidneys are severely damaged and getting close to failure) and currently not receiving dialysis treatment. Professionals were eligible when involved in the care for patients with CKD. Professionals were recruited from Leiden University Medical Centre (LUMC) via the research team, and patients via their LUMC care team and via the National and Regional Hollands-Midden Kidney Patients Associations (Nierpatiënten Vereniging Nederland and Diavaria). Recommended guidelines and checklists (e.g. COnsolidated criteria for REporting Qualitative research [COREQ]) were used to conduct and report this study [15, 16].

Study Procedure

An interview-protocol was developed based on the literature [e.g. 2, 3, 5, 6], to maintain consistency in the interviews' format. All study documents were discussed with stakeholders (patients, professionals and representatives of Kidney Patients Associations), 5 pilot-interviews were held with members of Kidney Patients Associations, and documents were adapted based on feedback and pilot-experiences. Patient-interviews consisted of: 1) a 'think-aloud' assignment, in which patients spoke aloud while filling out two commonly-used, validated illness perception questionnaires (i.e. the Brief and Revised Illness Perception Questionnaires [17][18][19][20]), hereby gaining insight into patients' personal thoughts about their illness underlying a question, reasons for specific answers, and which questions did (not) correspond with their experiences; and 2) a semi-structured interview focusing on stakeholders' perspectives regarding: illness perceptions underlying patients' experiences, outcomes and coping abilities; most meaningful and modifiable illness perceptions; and needs and requirements regarding identifying and supporting patients with unhelpful illness perceptions. Professional-interviews consisted of the semi-structured interview, including a discussion about the existing B-IPQ/IPQ-R questionnaires [19, 20]. The interviewer followed the topic-list's structure (Supplementary File 1), but deviated from it when appropriate. All questions were open-ended and responses were further explored using additional questions and probes. Interviews were conducted by one investigator (an experienced interviewer and psychologist, trained to conduct interviews as part of this qualitative research) and recorded digitally (audio recordings using a professional Olympus voice recorder), and field notes on dynamics and nonverbal communication were taken. Participants completed a brief questionnaire to collect participant characteristics.

Analysis

Interviews were transcribed verbatim and reviewed line-by-line by three investigators. Transcripts were not returned to participants for feedback; however, to ensure that participants' responses were correctly understood (e.g.
also when it comes to emotions and non-verbal communication), a verbal summary of the topics and main issues discussed was given at the end of each interview, and participants were invited to respond to this summary and indicate whether it was correct (and, if not, provide corrections). A hybrid inductive and deductive approach was used for analysis: (1) inductive phase: transcripts were analysed using thematic analysis (i.e. a descriptive analytical method to identify, organize and provide insight into patterns in qualitative data [i.e. themes] using constant comparison, grouping and hierarchically organizing themes) [21, 22]; and (2) deductive phase: identified themes were then organized according to CSM-principles [5, 6]. Initial coding and analysis was done by one investigator in close collaboration with a second investigator (two experienced qualitative researchers, knowledgeable in the field of healthcare and psychosocial aspects of chronic [kidney] disease). To judge consistency of interpretation, both investigators coded two transcripts of patients and professionals, and codes were compared and discussed. During the analysis process, interpretations were iteratively reviewed and critically discussed until consensus was reached. Data saturation was continuously evaluated, until agreement was reached that no new information was obtained and no new themes emerged [23]. To ensure triangulation [16], a multidisciplinary research team was composed to conduct this study, consisting of diverse perspectives and experiences (psychology and medical students, psychologists, patient-representatives, epidemiologists, nephrologists; 4 females and 5 males), with expertise in qualitative research, nephrology care and psychosocial aspects of CKD. Transcriptions were coded using ATLAS.ti v8 (GmbH). An audit trail was kept, and all files were saved on a secured server. Finally, illustrative quotes were selected and translated from Dutch to English using back translation.

Results

Twenty-seven interviews were conducted (17 with patients and 10 with healthcare professionals) with a mean duration of 71 ± 20 min. Interviews predominantly took place in LUMC meeting rooms, and four interviews at patients' homes in the regions Hollands-Midden and Friesland. Patients' partners were present during two interviews. Table 1 shows all participant characteristics. As shown in Table 2, mean IPQ-R/B-IPQ scores often lay around the scales' midpoint. Exceptions include: patients believed to a high extent that their CKD is chronic in nature, that they understand their CKD and that their treatment can effectively control their CKD. Below, the main findings are shown within the context of the CSM-principles, structured following the study aims and presented with illustrative quotations.

Meaningful Illness Perceptions in Patients with CKD Prior to Kidney Failure

Figure 1A shows a visual representation of all results related to the identification of meaningful illness perceptions.
Illness Perceptions Underlying Coping, Outcomes and Experiences

Patients and professionals believed that all illness perceptions underlie patients' experiences and coping abilities. The illness perceptions considered most important were either related to the seriousness of CKD (beliefs about the condition's symptoms and its impact: illness identity, consequences, emotional response and illness concern) or the manageability of CKD (beliefs about controllability and patients' understanding of the condition: personal control, treatment control and illness coherence).

Notes to Table 2: a. Like other studies assessing illness perceptions amongst patients with CKD, the illness perception 'cause' was not included due to the heterogeneous causes of CKD. For all illness perception scores (IPQ-R and B-IPQ), a higher score means a stronger illness perception. b. The following IPQ-R domains were assessed using 38 questions on a 5-point Likert scale: timeline acute/chronic, cyclical timeline, consequences, personal control, treatment control, illness coherence, and emotional response. The domain 'illness identity' addressed different physical symptoms attributed to CKD and was measured using 14 items in a yes or no format. Domain scores were created following the official IPQ-R instructions [19]. c. The B-IPQ assessed the same illness perceptions as the IPQ-R, with the exception that the illness perception 'concern' was measured instead of 'timeline cyclical'. All eight illness perceptions were measured by means of a single item on an 11-point scale.

Not experiencing symptoms greatly influences patients' experiences and coping behaviours (e.g. not adhering to treatment guidelines). Both patients and professionals shared that patients who do experience CKD-related and/or treatment-related symptoms also experience the many negative consequences of CKD upon their lives ('consequences'), strongly influencing patients' experiences, coping abilities and other illness perceptions. For example, fatigue can cause patients to experience limitations in their ability to participate in social activities ('consequences').

Figure 1. Visual representation of all results related to the identification of meaningful illness perceptions (A) and modifiable illness perceptions (B) in patients with CKD prior to the initiation of kidney replacement therapy (KRT). Panel A: visual representation of all results related to the identification of meaningful illness perceptions. Bold lines represent the illness perceptions identified by stakeholders as most important in relation to coping abilities, outcomes and personal experiences. Grey boxes represent the illness perceptions identified by stakeholders as most important and modifiable, which should be included in illness perception-based assessment and support tools. Panel B: visual representation of all results related to the identification of modifiable illness perceptions. Perceived causes for change include healthcare support, natural disease progression and two critical moments: 1. diagnosis of chronic kidney disease (CKD), and 2.
announcement of KRT-initiation in the near future. An 'increase' means that illness perceptions grew stronger over time, and a 'decrease' means that illness perceptions grew weaker over time, becoming more unhelpful in the case of seriousness-related illness perceptions and more helpful in the case of manageability-related illness perceptions.

Patients stated that they (and many other patients) want to contribute as much as possible to slowing down CKD progression: adopting a healthy lifestyle, monitoring disease progression, reading up on CKD pathophysiology and treatment developments, etc. Both patients and professionals believed that patients can contribute to temporarily stagnating disease progression (e.g. by adequate self-management). However, both acknowledged that treatment non-adherence is common and that kidney function can still deteriorate even when patients perfectly adhere to treatment guidelines, which caused patients to experience a lack of control:

You can drink a lot and you can limit salt and protein intake, but I think that's just a drop in the ocean. (Patient/woman/24y)

Given this difficulty in controlling actual disease progression, patients considered it crucial that professionals focused to a greater extent on improving patients' experiences:

You cannot really control disease improvements, except for the usual medical treatment. Improvement is mainly improvement in the experience. [….] you can do a lot about that. (Patient/male/85y)

Professionals considered 'treatment control' especially important for patients' experiences and coping abilities: patients need to trust their doctor, treatment and healthcare, and this could be achieved by having a good doctor-patient relationship and practicing evidence-based medicine, thereby ensuring optimal treatment and realistic expectations. Patients and professionals believed 'illness coherence' was important as well, especially for patients' coping abilities and to strengthen patients' control perceptions ('treatment and personal control'):

I think it's really connected: if you don't understand the disease, you can't control it. (Nurse practitioner)

Personal control provides an incentive to delve more deeply into your illness. And better insight means better behavior. (Social worker)

General Mindset About CKD and Attitude Towards Life

Patients and professionals felt that all illness perceptions are intertwined, together forming patients' general mindset about CKD ('how serious and manageable is this condition') that influences patients' experiences, coping and outcomes (e.g. disease progression and HRQOL). They stated that an overall positive perception of CKD and being hopeful and optimistic in life is most beneficial. For some patients, this meant they needed to learn how to mentally disconnect from CKD sometimes, while others shared they remain hopeful by focusing on the possibility of kidney transplantation. Professionals stated that an overall positive attitude was most often seen in older patients, with possible explanations being: (1) younger patients experience greater consequences because they still work, take care of their children and have a future ahead of them; and (2) older patients have more life experiences, which positively influences their coping abilities:

If you're 70 or 75, it's just as impactful and just as bad. But older people can put things into perspective and say: 'Yes it's terrible, but I sat in the waiting room next to someone aged 32'. (Nurse practitioner)

Modifiable Illness Perceptions in Patients with CKD Prior to Kidney Failure
Patients and professionals believed that all illness perceptions can evolve over time. Figure 1B shows a visual representation of all results related to the identification of modifiable illness perceptions. They identified two critical moments influencing trajectory-development, the first being 'receiving the diagnosis of CKD'. Professionals stated that patients are often in shock after receiving the diagnosis. Patients shared that, initially, they were very concerned ('illness concern') when receiving the news that they had (to live with) this chronic, progressive disease ('timeline acute/chronic'). The latter caused patients to experience various negative emotions ('emotional response'):

That changes over time, thankfully. I very consciously tell patients: 'It's a rollercoaster, the first few months'. (Nurse practitioner)

Over time and through the received healthcare, concerns and emotional responses diminished as patients learned more about the disease ('illness coherence') and how they themselves and the treatment can manage disease progression ('personal control' and 'treatment control'):

Now that I know that I can get a transplant, concerns are less. I have been very concerned. (Patient/male/51y)

As the disease progresses, patients experienced a steady increase in CKD-related symptoms ('illness identity' and 'timeline cyclical') and, consequently, an increase in 'illness concern', 'consequences' and 'emotional response'. Fatigue was perceived as the most impactful symptom:

The most prominent and limiting symptom is 'fatigue'. As a result, patients will notice limitations in their daily life or reject activities to be able to keep doing at least some things. (Nephrologist)

The second moment was the announcement that 'KRT needs to be initiated in the near future'. Professionals and patients shared that providing information and discussing KRT-options ('illness coherence' and 'treatment control') is experienced as confrontational and evokes negative emotions ('emotional response'). Professionals added that after dialysis, but also after kidney transplantation, patients still experience great distress ('emotional response') due to the high symptom- and treatment burden ('illness identity') and its negative impact ('consequences'):

The most crucial moment is starting dialysis. The impact that dialysis has on your life….. (Nephrologist)

Patients added that it also worked as an incentive to pursue a healthier lifestyle ('personal control'):

I already had that sense of personal control, but I only started acting on it when I received 'the call' and realized this was not going well…. (Patient/male/62y)

Finally, patients and professionals believed that support from the environment and from professionals can positively impact the illness perceptions' development (see 'Needs and requirements for supporting patients with unhelpful illness perceptions').
Needs for Identifying Illness Perceptions

Patients and professionals believed that patients would benefit from tools to identify illness perceptions and that these would facilitate a stronger focus on patients' perspectives in nephrology care. Most patients believed that all illness perceptions should be measured, because the importance of illness perceptions will vary from one person to the next:

I think they are all important. It differs per person what is important to that person. (Patient/woman/24y)

Others ranked the importance: 'timeline acute/chronic' was considered least important because patients know CKD is a chronic condition; and 'consequences', 'personal control', 'illness coherence', 'illness concern' and 'emotional response' were identified as essential due to their impact on experiences and coping (see also Fig. 1A). Patients and professionals shared that prioritizing was difficult due to the illness perceptions' interconnectedness; for example, understanding CKD ('illness coherence') will increase personal control and will reduce anxiety and concerns ('personal control', 'emotional response' and 'illness concern'):

Some people like it when they know everything. It gives them feelings of control. […] Ignorance causes unrest. (Nephrologist)

If people know more about their disease, they will know better how to deal with it. Consequently, their concerns will diminish. (Dietitian)

Requirements for Identifying Illness Perceptions

Patients and professionals believed that, in order to successfully develop and implement an assessment tool, several aspects should be taken into account. First, the tool should be brief: patients believed the IPQ-R is too long and that too much overlap exists between questions. Professionals added that we must avoid patients feeling overloaded. Second, questions should be clear in the CKD-context: patients stated that some IPQ-R/B-IPQ questions were unclear, for example, 'My treatment can control my illness' — what is 'my treatment' when you receive such complex multicomponent CKD-treatment? Third, it should use the simplest language possible to ensure usage by as many patients as possible. Patients believed most questionnaires (including the IPQ-R/B-IPQ) require relatively high levels of health literacy, language proficiency and reading abilities:

I am fairly good at the Dutch language. People who aren't will have problems with those [IPQ-R/B-IPQ]. (Patient/male/72y)

Fourth, opinions differed about how and where to complete the tool. Some patients and professionals stated that digital completion at home has most advantages (e.g. it offers flexibility), while others felt it would be completed more seriously on paper and in the hospital:

Putting a few crosses on questionnaires at home, everyone does this within 2 minutes while you are watching soccer or something else. In the hospital, you fill it in more seriously. (Patient/male/73y)

Fifth, most patients and professionals believed the tool should be completed in the absence of professionals, to prevent social desirability bias, while some patients applauded completion under supervision (e.g. it facilitates possibilities to ask questions). Sixth, all participants agreed that results need to be discussed with professionals and that follow-up is essential but challenging:

I think having such a questionnaire is fantastic, but follow-up has to be completely clear. (Nephrologist)

You shouldn't leave it at this questionnaire, it should be a reason to adequately refer someone. (Social worker)

Finally, professionals believed strenuous efforts should be made to reach the patients who would
benefit most, namely those with a negative attitude towards their illness, healthcare and life:

People who are doing a great job immediately grab questionnaires and start filling them in. […] but the group that matters most withdraws from healthcare and avoids it. (Nephrologist)

Needs for Supporting Patients with Unhelpful Illness Perceptions

Patients and professionals believed that support strategies for patients with unhelpful illness perceptions are needed in nephrology care. The illness perceptions identified as most important to address are: 'illness concern', 'emotional response', 'illness coherence', 'personal control' and 'consequences', as they are susceptible to change and most closely related to patients' experiences, outcomes and coping abilities (see also Fig. 1A and B). Patients considered addressing 'illness concern' and 'emotional response' particularly important: 'illness concern' by providing education and reassurance, and 'emotional response' by creating opportunities to share and receive advice on emotions and thoughts. They stated that social and professional support are crucial to achieve this, because they do not always want to burden their loved ones:

Sometimes you do not want to bother your family. Outsiders, yes, they listen to you. Sometimes I tell them more than I tell my own husband or children. (Patient/female/68y)

Patients and professionals regarded addressing 'illness coherence' as important, with the potential to also positively impact patients' acceptance and control-perceptions. A straightforward strategy is to provide information that fits the abilities and needs of individual patients:

You have to tell the message clearly: 'It's so and so, and I am always there for you'. (Nurse practitioner)

Finally, patients shared that they are in need of support in dealing with CKD's impact on their lives. Discussing (coping strategies to deal with) 'consequences' could greatly support them:

You cannot always change the consequences, but you can change the way you deal with them. It is important to talk about it. (Patient/male/51y)

Requirements for Supporting Patients with Unhelpful Illness Perceptions

Patients and professionals believed several aspects should be considered to ensure successful development and implementation of strategies. First, support should consist of various (already existing) modules. Ideally, there is one module for each illness perception; patients can then choose, and/or professionals can refer patients to, one or multiple modules focusing on, amongst others, increasing knowledge, coping skills and psychosocial support. Patients encouraged consultations with social workers and/or psychologists to receive adequate support in coping with (the consequences of) CKD, concerns and emotions:

We all have moments when you think 'It bothers me!'. Then it is really nice if someone helps and comforts you, and afterwards you feel 'It is all very annoying, but I can still do a lot of things!'
(Patient/male/64y) Professionals agreed that support should focus on coping strategies and emphasized that professionals should always respect and strengthen patients' autonomy: If you push for something patients don't want, they might do it for the doctor. That's undesirable. Leave the choice with patients and let them know they can always reconsider their decision. (Nephrologist) Second, opinions differed on whether support should be digital and/or physical, as preferences will most likely differ from one person to the next. Professionals suggested that physical meetings are best combined with existing hospital visits or otherwise organized outside the hospital. Third, opinions differed on whether strategies should comprise individual and/or group support. Patients believed that a disadvantage of group support is that not everyone gets the same amount of opportunities to share because some group members always dominate the groups. An advantage of peer support is that patients share experiences and give advice, and this was considered important because patients are more likely to take on advice from and ask questions to fellow patients: This so-called 'peer contact', I have noticed, it is often very useful, because people are more likely to take on advice from someone who has it [CKD] themselves. (Patient/male/64y) Professionals stated that patients are initially often hesitant to join peer groups but often find these sessions very useful: I don't think anyone actually thinks 'Oh nice, I'm going to be in such a group'. But I do think it is more useful than patients expect in advance. (Dietitian) Fourth, professionals stressed that strategies should not be standalone but should be structurally embedded into nephrology care. For patients, what's most important is that they know support is out there when they need it: People need to know it's available. Ultimately, it's up to them whether they want to use it. […] If you don't use it now, that's okay. Just know it's there, that you can find support and information there. (Nurse practitioner) Finally, involvement of and caring for patients' significant others were considered crucial because CKD has a great impact on their lives and wellbeing: Family members deal with all kinds of consequences from the disease. It is less customary that they receive support […] Caring for caregivers is something that could be improved. (Patient/male/72y)

Discussion

This study showed that patients with CKD and healthcare professionals considered illness perceptions related to the seriousness (illness identity, consequences, emotional response and illness concern) and manageability (illness coherence, personal control and treatment control) of CKD most meaningful. They also believed that patients developed more helpful manageability-related illness perceptions and more unhelpful seriousness-related illness perceptions over time. Implementing tools to identify and discuss patients' illness perceptions was considered important, after which support for patients with unhelpful illness perceptions should be offered.
First, the CSM of Self-Regulation appears to be a useful theoretical model to explore perceptions about CKD. We found a good fit between our results and CSM-principles, for example, patients and professionals strongly believed that illness perceptions are multidimensional, interrelated and underlie experiences and outcomes of patients with various chronic conditions [2-14]. However, contrary to the CSM, which makes no explicit assumption on the relative importance of individual illness perceptions in different contexts, our participants considered some illness perceptions more important than others in the CKD-context. For example, all illness perceptions were identified as important for patients' experiences, outcomes and coping abilities except for timeline perceptions: 'timeline cyclical' was almost non-existent during interviews and 'timeline acute/chronic' was identified as least important. These results are in line with research showing that some illness perceptions are more strongly associated with certain outcomes [7-11], for example, stronger negative perceptions of illness identity, emotional response, consequences and personal control are often associated with more distress and impaired HRQOL, while weaker perceptions of treatment control are often associated with increased mortality in CKD-populations. However, these studies also suggest that timeline perceptions play an important role in CKD outcomes (e.g. stronger negative cyclical timeline perceptions are associated with a faster kidney function decline and an earlier start of dialysis) [8-10]. A possible explanation is that our participants took it as a given that all patients believed CKD to be a chronic, progressive condition with steadily increasing symptoms over time. Such beliefs (e.g. strong chronicity and weak-to-moderate cyclical timeline perceptions) could be considered relatively accurate medical illness perceptions, and problems will most likely occur when patients hold inaccurate illness perceptions (e.g. believe CKD to be a temporary, non-progressive condition with highly unpredictable symptoms) [24,25]. Moreover, our participants also emphasized that dismissal of illness perceptions as 'irrelevant' is complex due to illness perceptions' interrelatedness, for example, timeline perceptions are correlated with emotional response, consequences, illness identity and concern [19,20].

Illness perceptions perceived as most meaningful are either related to the seriousness of CKD (i.e. symptoms and their impact: illness identity, consequences, emotional response and illness concern) or the manageability of CKD (i.e. what do you need in order to manage: illness coherence, personal control and treatment control) [3,5,6]. Interesting to note is that responses of patients and professionals overlapped to a great extent, illustrating that these professionals have adequate insight into patients' beliefs about CKD. The most striking difference is that patients considered seriousness and manageability illness perceptions most meaningful, while professionals predominantly emphasized the importance of manageability illness perceptions and in particular 'treatment control'. These findings underline the need for professionals to adopt a more holistic approach to nephrology care: focus on managing CKD disease progression and supporting patients in coping with (the impact of) CKD [1,26,27].
Our results suggest that illness perceptions evolve over time: seriousness-related illness perceptions grew stronger (i.e. patients increasingly attribute symptoms to their CKD and believe that CKD has negative consequences, and causes worries and emotional distress) and so did manageability-related illness perceptions (i.e. patients increasingly believe they understand their CKD and that CKD can be effectively controlled by their treatment and by themselves). In other words, patients developed more unhelpful seriousness-related illness perceptions and more helpful manageability-related illness perceptions. Perceived causes for changes in illness perceptions include natural CKD progression, healthcare support (e.g. education) and two critical moments: receiving the CKD diagnosis and the message that KRT needs to be initiated soon. These results align with literature showing that illness perceptions not only change as a result of interventions targeting these illness perceptions but also change naturally and according to clinical status, medical treatment, newly obtained knowledge and experiences [3-6, 12-14, 28, 29]. Interesting to point out is that some illness perceptions seem to change more often than others, for example, timeline perceptions changed at one moment (e.g. chronicity beliefs grew instantly strong after receiving the CKD diagnosis), while illness concern and emotional response changed at multiple timepoints (e.g. grew stronger after receiving the CKD diagnosis and after experiencing an increase in symptoms and consequences) and fluctuated over time (e.g. diminished after receiving healthcare support).

Clinical Implications and Illness Perception-Based Tools

Although longitudinal observational studies are needed to confirm illness perception-trajectories prior to kidney failure, our results suggest that several meaningful and modifiable illness perceptions do not change for the better by means of routine nephrology care. Consensus existed amongst our participants on the need for an assessment-tool to identify and openly discuss patients' beliefs about CKD and their treatment, and for additional strategies to support patients with unhelpful illness perceptions, preferably as early as possible in the course of CKD. All nine illness perceptions were considered important for inclusion in tools, as the importance differs from one person to the next. However, some illness perceptions were considered most essential, namely consequences, emotional response, personal control, illness concern and coherence, as these illness perceptions are modifiable and most closely related to patients' experiences, outcomes and coping abilities. It was believed that nephrology care could greatly benefit from increased support for seriousness-related illness perceptions (i.e. dealing with consequences, emotions and concerns about the future), hereby adding to literature highlighting the need for additional psychosocial strategies to support patients in dealing with a chronic (kidney) disease [1,4,26,27]. Moreover, nephrology care (e.g. education) already seems to positively influence manageability-related illness perceptions (i.e. control perceptions and illness coherence), but perhaps not in all patients yet (e.g. patients with limited health literacy skills), and these illness perceptions could potentially be strengthened to ensure beliefs are turned into actual behaviour (e.g. act on personal control beliefs by adopting a healthy lifestyle) [1,30,31].
Several illness perceptions were not selected, amongst others, because they are most likely already medically accurate illness perceptions that receive sufficient attention in routine nephrology care (i.e. treatment control and timeline beliefs). Illness identity was also not prioritized, which is striking because participants considered CKD-related symptoms essential to patients' experience (especially fatigue) - literature confirms the latter, illustrating the high symptom burden in different CKD-stages [25-27, 32, 33]. A possible explanation might be peoples' conviction that increased symptom burden is inextricably linked with the typical CKD-trajectory and that 'there is nothing you can do about it'. Indeed, symptoms will increase with declining kidney function [24], but literature also suggests that many potentially treatable CKD-related symptoms often remain undiscussed and un(der)treated [34,35]. Therefore, developing and implementing symptom-management strategies seems crucial - to reduce symptom burden and to positively impact illness perceptions (illness identity and interrelated perceptions such as consequences) and HRQOL in patients with CKD [27, 34-36]. Our study identified several requirements for illness perception-based tools. For assessment: the tool should be brief, include clear questions in the CKD-context, contain the simplest language, enable flexible completion (e.g. on paper and digitally), function as a conversation-starter to openly discuss patients' illness perceptions and be accompanied by action-plans which include support from professionals - requirements corresponding with known considerations when using patient-reported outcome measures (PROMs) in clinical practice [33,37,38]. For support: in line with and building on literature, a multicomponent psychosocial educational program is needed that addresses all modifiable and meaningful illness perceptions, comprises physical and digital components (N.B. interviews were held pre-COVID-19), combines individual sessions with group sessions, includes consultations with a social worker and psychologist, and incorporates peer-to-peer support (e.g. mentoring via phone and online communities) and support for caretakers [4, 12-14, 31]. Another essential requirement was that this program is no standalone program but is fully embedded in nephrology care: patients need reassurance that support is out there when they need it, and literature also suggests that factors such as timing, accessibility and readiness to engage should be taken into account when providing support [39,40]. Furthermore, future studies are needed to investigate whether implementing such illness perception-based tools will indeed improve outcomes in patients with CKD (e.g. prior to kidney failure) and other chronic conditions.
Finally, our results suggest that intertwining illness perceptions form patients' general mindsets about CKD and that a beneficial positive mindset is most often seen in older patients. These findings correspond with literature indicating that interrelated illness perceptions form patients' illness schema [5,6], that these so-called illness perception profiles reflect more stable dispositions towards a condition that greatly contribute towards health outcomes [41], and that positive psychological functioning (including optimism) plays an important role in adaptation to and outcomes of patients with chronic conditions [42]. The role of age seems more complex: generally, average optimism levels indeed grow with age, but there are also indications that optimism peaks around a person's 50s/60s, after which it begins to decline [43]. Until now, little is known about illness perception profiles, optimism and age-related differences in patients with CKD, and hence, additional research is warranted. Important to mention is that patients consider it challenging to stay optimistic when so much uncertainty exists about their future. The prospect of transplantation helped some patients to keep their hopes up, but not all patients are eligible for transplantation, and the symptom- and treatment burden is still high after transplantation [38]. Therefore, psychosocial strategies to support patients in coping with uncertainties about the future are needed, and so are studies aimed at developing and implementing prognostic models for a broad range of patient-relevant outcomes [1,44,45].

Strengths and Limitations

This study's most important strength is that it provides unique in-depth insight into illness perceptions of patients with CKD and into how these illness perceptions evolve prior to kidney failure. By identifying meaningful and modifiable illness perceptions and by identifying needs and requirements, timely and personalized theory-based tools can be developed and implemented to identify and discuss illness perceptions, and to support patients with unhelpful illness perceptions in routine nephrology care. A study limitation is researcher reflexivity [16]: even though multiple investigators with different backgrounds have interpreted results, we cannot rule out the possibility that (theoretical) preconceptions have coloured results. For example, using a more socially oriented theory instead of the somewhat more cognitive and individual-focused CSM of self-regulation in the deductive phase of the analysis could have resulted in increased insight into the role of social factors such as caretakers and significant others. Furthermore, although purposive sampling ensured a heterogeneous patient-sample representing the Dutch CKD population, transferability of our results could be improved by including more low-educated patients, more (inter)national centres, patients with earlier stages of CKD and patients with CKD since birth or childhood. Additionally, although speculative, it is possible that participating patients may have a more positive attitude towards CKD, healthcare and life; inclusion of patients with a more negative attitude may have provided valuable information about how to reach patients that withdraw from healthcare and that would benefit most from additional support.
Conclusion

Several meaningful and modifiable illness perceptions do not change for the better by means of routine nephrology care. This underlines the need to identify and openly discuss illness perceptions, and to provide support for patients with unhelpful illness perceptions about CKD. Special attention should be paid towards strengthening psychosocial support for patients and caregivers to deal with the many CKD-related symptoms, negative consequences and emotions, and concerns about the future.

Fig. 1 Visual representation of all results related to the identification of meaningful illness perceptions (A) and modifiable illness perceptions (B) in patients with CKD prior to the initiation of kidney replacement therapy (KRT)

Table 2 Illness perception scores of patients with CKD (n = 17). A higher score indicates that patients believe to a greater extent that…

Seriousness-Related Illness Perceptions: Symptoms & Impact of CKD

Patients and professionals considered 'illness identity' to be crucial. They explained that some patients do not experience symptoms of CKD and, hence, do not consider themselves a patient - they only see their kidneys as 'diseased organs' and experience CKD through consultations: I only experience my illness through test results on the computer screen. Besides that, I have no idea that my kidneys are failing. (Patient/male/78 years [y])

Mindset about CKD & attitude towards life

How you deal with it [CKD], has mainly to do with your mindset. With a little bit more positivity […] you see that the glass is half full. If not, you should take a smaller glass. (Patient/male/51y) It depends on your attitude towards life. I truly believe that, if you have a positive attitude towards life, you get much further than you would expect based on physical condition. (Nephrologist)
2023-05-26T06:17:51.694Z
2023-05-24T00:00:00.000
{ "year": 2023, "sha1": "798fd8a13d64420c458dd4a4c571fa1207893537", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12529-023-10178-x.pdf", "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "9c215ffa97f21bbd5e8fb965d43939775bf185f8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233396394
pes2o/s2orc
v3-fos-license
Petiole-Lamina Transition Zone: A Functionally Crucial but Often Overlooked Leaf Trait Although both the petiole and lamina of foliage leaves have been thoroughly studied, the transition zone between them has often been overlooked. We aimed to identify objectively measurable morphological and anatomical criteria for a generally valid definition of the petiole–lamina transition zone by comparing foliage leaves with various body plans (monocotyledons vs. dicotyledons) and spatial arrangements of petiole and lamina (two-dimensional vs. three-dimensional configurations). Cross-sectional geometry and tissue arrangement of petioles and transition zones were investigated via serial thin-sections and µCT. The changes in the cross-sectional geometries from the petiole to the transition zone and the course of the vascular bundles in the transition zone apparently depend on the spatial arrangement, while the arrangement of the vascular bundles in the petioles depends on the body plan. We found an exponential acropetal increase in the cross-sectional area and axial and polar second moments of area to be the defining characteristic of all transition zones studied, regardless of body plan or spatial arrangement. In conclusion, a variety of terms is used in the literature for describing the region between petiole and lamina. We prefer the term “petiole–lamina transition zone” to underline its three-dimensional nature and the integration of multiple gradients of geometry, shape, and size. Damage-Resistant Transition Leaves, which are the main site of photosynthesis, can differ considerably in their geometry, shape, size, and venation. In most leaves, the planar leaf blade (=lamina), which is often connected to the stem by a rod-shaped leaf stalk (=petiole), captures the light needed to produce energy-rich organic molecules. On this basis, the loss of leaf blades, e.g., as caused by drought stress, frost, diseases, or mechanical damage, poses an existential threat to the plant. Thus, one can assume that a high selective pressure exists on the development of a damage-resistant transition between the lamina and petiole during evolution. A Bunch of Terms A literature search shows that the transition between petiole and lamina does not have a common name, which is possibly one of the reasons that it is not mentioned in modern textbooks. Table 1 shows a compilation of selected publications that have addressed this topic from various scientific perspectives. Some of the terms used express the particular aspect of a specific spatial arrangement: "point" is to be understood as one-dimensional, "area" as two-dimensional, and "zone" as three-dimensional. Other terms emphasize other aspects of this region, as compiled in Table 1. Probably, the chosen terms are related to various scientific perspectives, such as systematics, morphology, anatomy, biomechanics, and biomimetics and to whether the whole leaf or selected aspects such as the venation have been investigated. Sewell [1] described the connection of the petiole and lamina from an ontogenetic viewpoint, in particular from observations after germination of various Salvia species. Niinemets and Fleck [6] pursued a biomechanical point of view and considered both the petiole and midrib of the lamina as individual beam-like structures that are connected to each other and that mechanically influence one another. Poulsen and Nordal [7] approached the topic from a systematic and morphometric perspective.
They classified various Chlorophytum species according to their petiole and lamina characteristics, thus explicitly distinguishing between stalk and blade. Jones and Kang [9] investigated the ontogenetic/developmental pattern formation of leaf veins anatomically including the transition from the petiole to lamina. In the framework of a biomimetic project, Langer et al. [12] investigated the petiole-lamina transition zone of foliage leaves as models for technical solutions between rod-shaped and planar elements in architecture. They have shown that the cross-sectional shapes of the peltate leaves of Caladium bicolor gradually change from an almost circular petiole to a triangular transition zone and, finally, to a three-lobed star-shaped planar lamina. Sacher et al. [13] examined the transition zones of the peltate leaves of Colocasia fallax and Tropaeolum majus, showing how the transition between lamina and petiole handles mechanical stress and the important role of the vascular bundles for load dissipation. Materials Systems with Multiple Gradients In general, plants can be regarded as fiber-reinforced materials systems, such that stiff and rigid tissues such as vascular bundles and fibers are embedded in a more flexible matrix of parenchyma [19,20]. By analogy to functionally graded materials in technology, plants maintain resistance to damage by spatial gradients rather than by abrupt changes. Spatial gradients can effectively enhance the functionality and biomechanical performance of biological materials systems by alleviating stress concentrations, allowing the formation of new functions and enabling adaptations to partially conflicting requirements [21,22]. An impressive example of such a compromise is found in foliage leaves, which show an interplay of flexural rigidity in order to bear the weight of the leaf and torsional flexibility (i.e., low torsional rigidity) in order to enable streamlining upon wind loads. Flexural rigidity (EI) is composed of the bending elastic modulus (E) and the axial second moment of area (I), whereas torsional rigidity (GJ) is composed of the torsional modulus (G) and the polar second moment of area (J). Vogel [23] has characterized this trade-off by defining the dimensionless twist-to-bend ratio (EI/GJ). The integration of multiple gradients can take place on various hierarchical levels in the "bottom-up" manner from molecules to cells and tissues and further to plant organs and entire plants [22]. According to Liu et al. [21], the gradients in biological materials systems are fundamentally associated with changes in (i) structural characteristics including the arrangement, distribution, dimensions, and orientations of building units, (ii) chemical compositions between similar components, and (iii) gradient interfaces between dissimilar components. Various gradients are present in foliage leaves. A gradient in arrangement is reflected by the architecture in the zone between the petiole and lamina. According to their spatial arrangement, we distinguish between the 3D-configuration (angle approx. 90°), as in peltate leaves, and the 2D-configuration (angle approx. 0°), as in leaves with a petiole connected to the basal lamina margin. In the context of the flexural and torsional loading of the petioles, the gradient in dimension is the continuous increase or decrease of the cross-sectional area (A), axial second moment of area (I) and polar second moment of area (J).
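To make these quantities concrete, the following minimal Python sketch computes A, I, and J for an idealized, solid circular petiole cross-section and the resulting twist-to-bend ratio; the radius and the elastic and shear moduli used here are purely illustrative assumptions, not values measured in this study.

```python
import math

def circle_section_properties(radius_m):
    """Area and axial/polar second moments of area of a solid circular cross-section."""
    A = math.pi * radius_m ** 2        # cross-sectional area [m^2]
    I = math.pi * radius_m ** 4 / 4.0  # axial second moment of area [m^4]
    J = math.pi * radius_m ** 4 / 2.0  # polar second moment of area [m^4]
    return A, I, J

def twist_to_bend_ratio(E, G, I, J):
    """Dimensionless twist-to-bend ratio EI/GJ as defined by Vogel."""
    return (E * I) / (G * J)

# Illustrative values only: 2 mm petiole radius, E = 500 MPa, G = 50 MPa (assumed).
A, I, J = circle_section_properties(0.002)
print(f"A = {A:.3e} m^2, I = {I:.3e} m^4, J = {J:.3e} m^4, I/J = {I / J:.2f}")
print(f"EI/GJ = {twist_to_bend_ratio(500e6, 50e6, I, J):.1f}")
```

For the non-circular U- and V-profiles discussed below, closed-form expressions are not available, and A, I, and J are instead obtained numerically from cross-sectional images (cf. the "Slice Geometry" analysis described in the Methods).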
The ratio of I/J provides information about the influence of the different cross-sectional geometries and their shapes to the twist-to-bend ratio [23][24][25]. Generally, the I/J ratio is 0.25 for elliptical cross-sections if the minor axis is half the major axis (minor axis aligned in the vertical direction), 0.2-0.5 for U-profiled cross-sections, 0.5 for circular cross-sections, 0.81 for squared cross-sections, 0.83 for isosceles triangles, and 1.25 for elliptical cross-sections if the major axis is twice the minor axis (major axis aligned in the vertical direction). Consequently, high values of EI/GJ can usually be found if the bending elastic modulus (E) is much higher than the shear modulus (G) [26]. The gradient of distribution is mirrored by the increasing or decreasing density of vascular bundles and fibers in the periphery or the center of the cross-section, respectively. The distribution of strengthening tissues has a pronounced influence on the axial and polar second moments of area of these tissues. In principle, the cross-sectional distribution of the vascular bundles is fixed by the body plan of plants, which is a set of morphological features common to many members of a phylum (also termed "bauplan" [27]). In general, monocotyledons have an atactostele and dicotyledons have an eustele. Deviations from the general body plan are possible with respect to the specific functions of the leaves, such as water and food storage, attachment, or defense. The gradient in orientation is based on anisotropic structural units with properties that depend strongly on orientation [21,22]. A gradual reorientation of the fibers and vascular bundles in the transition between petiole and lamina depends on the body plan and, thus, on parallel, reticulate or radiate venation and always contributes markedly to the damage-resilient connection of both leaf parts. In addition, this gradual reorientation is also dependent on the spatial configuration of the petiole and lamina, as it differs between the 2D-and 3D-configurations of leaves. For plants, the phenolic polymer lignin plays a crucial role for spatial gradients attributable to the variations in the chemical composition at the interface between dissimilar components, a point at which contact failure commonly occurs [28]. In most cases, strengthening tissues are directly surrounded by a lignin gradient in the cell walls of the surrounding parenchymatous cells [29,30]. This lignification gradient prevents abrupt changes at the interface between non-lignified parenchyma with an elastic modulus of 5 to 100 MPa and lignified sclerenchyma fibers with an elastic modulus of 24 to 45 GPa [31]. Motivation Although both the petiole and lamina of foliage leaves have been well studied in terms of their functional morphology and biomechanics, a detailed analysis of the transition zone between them has barely been addressed. The scientific question of this study has been to identify objectively measurable morphological and anatomical criteria for a generally valid definition and characterization of the petiole-lamina transition zone of foliage leaves by means of a comparative study of four model plants with various body plans (mono-and dicotyledons) and leaf architectures (peltate leaves and leaves with a marginal petiole position). 
Four main aspects for distinguishing between the petiole, transition zone and lamina have been assessed: (i) the changes of cross-sectional geometry, (ii) the determination of shape (e.g., aspect ratio, tapering mode, ratio of axial and polar second moment of area), (iii) the quantification of size (e.g., linear or exponential increase of cross-sectional area), and (iv) the three-dimensional arrangement and course of the vascular bundles (e.g., number and contribution to cross-sectional area of vascular bundles). Results The four species studied show similarities and dissimilarities with respect to their general body plan as monocotyledons and dicotyledons and their spatial arrangement of the petiole and lamina. According to Niklas [32], geometry, shape, and size are not synonymous. Size, for example, is a substantial variable that can be expressed in units of a physical quantity, whereas shape is a natural variable that has a magnitude but lacks a unit. In the present study, we distinguish among various cross-sectional geometries (e.g., circle, ellipse, or triangle) and describe "shape" by calculating the aspect ratio AR and the tapering mode α (Table 2). Moreover, we measure "size" in terms of the cross-sectional area A, the axial second moment of area I, and the polar second moment of area J. In the following, we present these quantitative results in Table 3 and some of these results exemplarily for one sample of each species studied in Figure 1. Table 2. A list of all analyzed variables and a brief description of each. In addition, we display in Figure 2 the µCT results of the three-dimensional arrangement of the vascular bundles in the petiole-lamina transition zone of all four investigated species. Videos of the µCT scans can be found in the Supplementary Materials Videos S1-S4. General Description of the Leaf Hosta x tardiana 'El Niño' Piet Warmerdam (patent PP14632) (hereafter H. tardiana) belongs to the monocotyledonous family Asparagaceae. This cultivar preferentially grows in shaded habitats on slightly moist soil, for example in humid forests or on shaded, damp steep slopes. Their leaves are not sessile and thus untypically have a leaf sheath that is functionally a petiole. The petiole is marginally attached to the lamina and therefore is categorized as having a 2D-configuration. The elliptical lamina has an attenuated lamina base and a parallel venation typical of monocotyledons (Figure 3a). Geometry, Shape and Size The petiole has a U-profile in transverse section, with a prominent groove on the adaxial side (Figure 3b). The petiole tapers hyperbolically. The change of the U-profile of the petiole to a V-profile indicates the beginning of the transition zone. The I/J ratio of the transition zone is significantly higher than that of the petiole (W = 120, p = 0.023) (Table 3). The linear (to a good approximation) increase of A, I, and J in the apical part of the petiole changes into an exponential increase in the petiole-lamina transition zone (Figure 1a, Table 3). This increase is attributable to the thin lamina that emerges on the mediolateral tips of the V-profile of the sections (Figure 3c). The lamina portion grows markedly in size, whereas the U-profile of the midrib remains essentially unchanged.
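The change from a linear to an exponential increase of A, I, and J, which the study uses to delimit the transition zone, can be illustrated with a short sketch. The axial positions and area values below are synthetic and purely illustrative; they are not data from Table 3.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic cross-sectional areas (mm^2) sampled along the leaf axis (mm); made-up values.
z = np.arange(0, 10, 1.0)
A_petiole = 12.0 + 0.05 * z              # nearly constant, approximately linear course
A_transition = 12.0 * np.exp(0.35 * z)   # exponential acropetal increase

def linear(z, a, m):
    return a * z + m                     # linear model: slope a, intercept m

def exponential(z, n, b):
    return n * np.exp(b * z)             # exponential model: initial value n, growth constant b

for label, A in [("petiole", A_petiole), ("transition zone", A_transition)]:
    (a, m), _ = curve_fit(linear, z, A)
    (n, b), _ = curve_fit(exponential, z, A, p0=(A[0], 0.1))
    rss_lin = float(np.sum((A - linear(z, a, m)) ** 2))
    rss_exp = float(np.sum((A - exponential(z, n, b)) ** 2))
    better = "exponential" if rss_exp < rss_lin else "linear"
    print(f"{label}: slope a = {a:.3f}, growth constant b = {b:.3f}, better fit: {better}")
```

The same comparison applies to I and J, since all three size variables show the linear-to-exponential change at the petiole-lamina transition zone.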
Vascular Tissue The arrangement of the individual vascular bundles shows a V-profile in the petiole ( Figure 3b, the bundles appear in dark violet) and a U-profile in the transition zone ( Figure 3c, the bundles appear in orange). In the transition zone, the distance between the bundles increases acropetally (Figure 2a,e,i). This applies in particular to the mediolateral bundles, which enter into the lamina. This acropetal increase in the inter-bundle distance is a consequence of the apically increasing curvature of the mediolateral bundles (Figure 2a). In contrast, the bundles of the basal transition zone run straight and in parallel. Additionally, the µCT data clearly show crosslinks between the longitudinal vascular bundles, which are only partially visible in the thin-sections. Figure 1c shows the results of a detailed analysis in 100 µm steps of a single sample. The number of vascular bundles increases from 18 to 19 in the apical region of the petiole and from 19 to 23 in the transition zone. In parallel, the area fraction of the vascular tissue decreases with increasing number of vascular bundles in the transition zone. General Description of the Leaf The species Caladium bicolor Vent. (hereafter C. bicolor) belongs to the monocotyledonous family Araceae and possesses peltate leaves having a 3D-configuration. This species grows mainly in moist habitats for example along rivers or in swampy areas. The leaves have a heart-shaped lamina with the petiole being slightly offset adaxially. They exhibit a rotate venation, with the veins radiating from the entry point of the transition zone into the lamina (Figure 4a). Geometry, Shape and Size The petiole of C. bicolor is a circular truncated cone that tapers nearly linearly ( Table 3). The geometrical change from the circular petiole (Figure 4b) to the triangular transition zone (Figure 4c) is accompanied by a change from a linear increase of A, I, and J in the apical part of the petiole to an exponential increase in the petiole-lamina transition zone (Figure 1b, Table 3). The exponential increase is visible in the increasing size of the triangular transverse section, especially in the abaxial direction, because of lobe formation at the flank of the triangular shape. Due to the 3D nature of the lamina, a hole occurs in the middle of the thin-section in the apical transition zone (Figure 4c). The I/J ratio of the transition zone is not significantly higher than that of the petiole (W = 110, p = 0.085) ( Table 3). Vascular Tissue The vascular tissue is scattered and embedded in the non-lignified parenchyma of the petiole (Figure 4b, the vascular tissue appears in dark violet) and the transition zone ( Figure 4c, the vascular tissue appears in orange). Individual vascular bundles run in parallel in the basal transition zone; several vascular bundles align in the middle of the transition zone and merge to three vascular strands in the apical transition zone, from where they enter the lamina (Figure 2b,j). In addition, individual bundles are distributed in the lamina in a net-like manner and thus simultaneously interconnect the vascular strands. This net-like vascular structure, in the apical transition zone, emerges from the innermost individual bundles coming from the basal transition zone (Figure 2f). The large vascular strand on the abaxial side in the apical transition zone consists of individual bundles from the outer abaxial side and the outer mediolateral sides of the basal transition zone (Figure 2b). 
Furthermore, in the apical transition zone, smaller abaxial clusters split off from the large strands. Moreover, the two large adaxial strands consist of outer and inner adaxial bundles originating from the basal transition zone. All these individual vascular strands enter the lamina with a specific curvature. This curvature is higher for the adaxial strands than for the abaxial ones, since the basal transition zone is slightly inclined abaxially. Figure 1d shows the results of a detailed analysis in 100 µm steps of a single sample. The number of vascular bundles decreases from 75 to 68 in the apical petiole and increases from 69 to 125 in the transition zone. In parallel, the area fraction of the vascular tissue decreases with the increasing number of vascular bundles in the transition zone. General Description of the Leaf The species Hemigraphis alternata (Burm.f.) T.Anderson (hereafter H. alternata) belongs to the dicotyledonous family Acanthaceae and its leaves have a 2D-configuration (Figure 5a). This species grows preferably in partial shaded to open habitats on moist soil. The leaves possess an ovate lamina with an oblique base and a pinnate venation. The cross-sectional shape of the petiole is preserved in the form of the midrib, whereas the lamina portions increase over the course of the transition zone. The leaves exhibit a reticulate venation. Geometry, Shape and Size The petiole is elliptic in transverse section with a groove on the adaxial side that is especially pronounced in the basal part ( Figure 5b) and tapers hyperbolically ( Table 3). The petiole has an epidermis with a thick underlying hypodermis, consisting of three to four layers of small thick-walled collenchyma cells. The beginning of the petiole-lamina transition zone is characterized by the appearance of the lamina on one of the mediolateral sides of the petiole (Figure 5c). Typically for an oblique base, the lamina appears further apically on the other side. The transition from the petiole into the lamina is reflected in the change from a slightly linear decrease to an exponential increase of A, I, and J (Figure 2e, Table 3). The I/J ratio of the transition zone is not significantly smaller than that of the petiole (W = 59, p = 0.448) ( Table 3). Vascular Tissue The vascular bundles of H. alternata form a U-profiled strand in the center of the cross-section of the petiole (Figure 5b, the bundles appear in dark violet) and the transition zone ( Figure 5c, the bundles appear in orange). In the transition zone, the area of the central vascular bundle decreases from basal to apical (Figure 2g,k). The thin-sections (Figure 5c), and the µCT data (Figure 2c) reveal that individual vascular bundles depart from this central vascular strand outwards in mediolateral direction to supply the lamina. Within the lamina, leaf ribs emerge around these individual bundles in the form of lamina thickenings. However, we have not seen any crosslinks between the bundles and/or the U-profiled central strand (Figure 2c,g,k). Figure 1e shows the results of a detailed analysis in 100 µm steps of a single sample. The number of vascular bundles increases from 26 to 40 in the apical petiole and from 42 to 49 in the transition zone. In parallel, the area fraction of the vascular tissue decreases with increasing number of vascular bundles in the transition zone. General Description of the Leaf The dicotyledonous species Pilea peperomioides Diels (hereafter P. 
peperomioides) belongs to the Urticaceae family and possesses orbicular and peltate leaves, categorized as having a 3D-configuration. This species grows in shady and moist habitats on rocks, for example in forests or on ledges of cliffs. The petiole is slightly offset adaxially. The leaves have a rotate venation, with the veins radiating from the entry point of the transition zone into the lamina (Figure 6a). Geometry, Shape and Size The petiole of P. peperomioides is a circular truncated cone that tapers hyperbolically (Table 3). The geometrical change from a circular (Figure 6b) to an elliptical cross-section indicates the beginning of the petiole-lamina transition zone (Figure 6c). This change from the petiole into the lamina is also reflected in the change from a linear increase to an exponential increase of A, I, and J (Figure 1f). The exponential increase of the cross-sectional area can also be seen in the transverse thin-sections, particularly in the lobes forming in the middle transition zone (Figure 6c). Because of the 3D nature of the lamina, a hole occurs in the middle of the thin-section in the apical transition zone (Figure 6c). The I/J ratio of the transition zone is not significantly smaller than that of the petiole (W = 48, p = 0.190) (Table 3). Vascular Tissue In the petiole, the vascular bundles are present as six strands in the center and, in the acropetal direction, converge closer together and partly merge with each other (Figure 6b, the bundles appear in dark violet). At the entry to the basal transition zone, the thin-sections (Figure 6c, the bundles appear in orange) and the µCT data (Figure 2l) show that the vascular bundles form a single U-profiled strand that is thicker on the abaxial side than on the adaxial side and that splits acropetally into eight separate vascular strands that radiate in all directions into the lamina (Figure 2d,h). Since the petiole is slightly inclined to the abaxial side, the abaxial strands enter the lamina with less curvature than the strands on the adaxial side (Figure 2d). No crosslinks between the individual strands could be found. Figure 1h shows the results of a detailed analysis in 100 µm steps of a single sample. The number of vascular bundles increases from 5 to 10 in the apical petiole and decreases from 10 to 8 in the transition zone as some bundles merge together. In parallel, the area fraction of the vascular tissue decreases in the transition zone. Discussion The object of this study has been to find measurable morphological and anatomical criteria in order to objectively distinguish the transition zone from the petiole and lamina and to characterize this important but often overlooked part of foliage leaves. We have carried out a comparative study to determine key criteria, with respect to geometry, shape, and size, that are generally independent of the respective body plans.
Therefore, the study includes the body plans of monocotyledons and dicotyledons and the various spatial arrangements of petiole and lamina in terms of peltate leaves (3D-configuration) or leaves for which the petiole is marginally attached to the lamina (2D-configuration). Since we have never observed any tearing of the lamina from the petiole within the transition zone, this zone seems to be quite damage-resistant. In this context, Liu et al. [21] point out that damage formation can be prevented by superimposing several local properties on each other, resulting in a structure with various gradients (arrangement, distribution, dimension, orientation, interface, composition). We therefore examined the transition zones in our comparative study with regard to some of these gradients. Gradual Change of Geometry All examined foliage leaves show a gradual change in cross-sectional geometry. The leaves with a 2D-configuration mostly retain the geometry of their petiole, with the latter continuously merging into the lamina as a midrib. Apically, the cross-sections are more and more characterized by the mediolaterally attached parts of the lamina. In contrast, in the leaves with a 3D-configuration, the geometry of the petiole is not preserved. The geometry changes from circular to triangular or elliptical and, finally, lobules of the lamina can be seen that emerge almost uniformly in all directions. Since the petiole and lamina meet at an angle of about 90 • , the tissues of the petiole-lamina transition zone must also show a corresponding curvature. In summary, the change of geometries from the petiole to the transition zone and midrib are more dependent on the spatial arrangement as a 2D-or 3D-configuration than on the body plans of the monocotyledons or dicotyledons. Gradual Change of Shape For all species analyzed in the present study, with the exception of H. tardiana, the ratio of axial and polar second moments of area (I/J) does not differ significantly on comparing the petiole and transition zone. This indicates that I/J is preserved in terms of a gradient of dimension. Both investigated species with a 2D-configuration possess an adaxial groove, whereas the species with a 3D-configuration do not. H. alternata has a tiny groove at the base of the petiole, whereas H. tardiana keeps a U-profile over the entire petiole including the transition zone. An individual deep groove can convert a circular profile with an I/J = 0.50 [33] into a U-profile. In the case of H. tardiana, the median I/J = 0.33 of the U-profiled petioles is significantly smaller than the median I/J = 0.45 of the V-profiled transition zones. In leaves with a 2D-configuration, the highest bending moments occur at the base of the petioles. An increase in flexural rigidity combined with adequate torsional rigidity in this area can be considered advantageous. This is because the U-profiled petiole is "fixed" in the most favorable position in terms of bending, i.e., with the opening of the U against the acting bending force. Wolff-Vorbeck et al. [34] have shown that, with an increasing size of the groove, the twist-to bend ratio increases because of an increase of the flexural rigidity and a simultaneous decrease of the torsional rigidity. Higher torsional flexibility enables lamina attached to their basal margin to reorient, streamline and thus reduce the wind-induced drag [23,26]. Similar to the U-profile of the petioles of H. 
tardiana, the monocotyledonous banana (Musa textilis) has U-profiled petioles, which have a 45 to 100 times higher flexural rigidity than torsional rigidity [24]. Regardless of the particular shape of the petiole, all petioles show a taper in the apical direction, i.e., a gradual change of dimension in terms of the decrease of cross-sectional area. In summary, irrespective of whether the plant is monocotyledonous or dicotyledonous, the presence of a groove or U-profile on the adaxial side is mainly found in petioles with a 2D-configuration. This shape is advantageous for withstanding high bending forces and for reducing wind-induced drag by (torsional) reorientation. No dependency has been found between the tapering modes and the body plans of the monocotyledons or dicotyledons or the spatial arrangement as 2D- or 3D-configurations. Gradual Change of Size The gradient of dimensions can be measured by the change of the cross-sectional area A and the variables derived from it, namely the axial second moment of area I and the polar second moment of area J, from the petiolar base to its top. The variables A, I, and J of all samples show a linear fit in the apical petiole and an exponential increase in the petiole-lamina transition zone. However, differences are apparent in the slope a and the exponential growth constant b of these variables. With the exception of C. bicolor, the linear fit of the cross-sectional area reveals (very) small positive or negative values of the slope a_A_petiole, indicating that almost no increase or decrease occurs. However, the different tapering modes α_petiole with various outer shapes (envelopes) influence the linear fit, such as a convex shape (0 < α < 1; second order paraboloid of revolution), a concave shape (α > 1; hyperboloid of revolution), a straight shape with taper (α ≈ 1; circular cone), or a straight shape without taper (α = 0; circular cylinder). The dicotyledonous species studied have higher exponential growth constants b for the variables A, I, and J than the monocotyledonous species, indicating that their body plan allows them to increase the lamina region more rapidly in the transition zone. This can be explained by a more robust body plan, perhaps based on their internal structuring in terms of the arrangement and merging of vascular bundles, a major strengthening tissue of foliage leaves (see µCT data, Figure 2). The assumption that strengthening tissues have a strong influence on the growth constant b is supported by the results in H. alternata. This species has the highest b-values and is also the only species with a mechanically important hypodermal collenchyma layer. In summary, all petioles show an approximately linear fit for the variables A, I, and J, but without considerable increase or decrease, with the exception of C. bicolor, which shows a slight increase. In contrast, all petiole-lamina transition zones are characterized by an exponential increase of the variables A, I, and J. The change from a linear to an exponential fit is a characteristic that applies to all samples regardless of the body plan or the spatial arrangements of petiole and lamina. Gradual Change of Tissue Arrangement Dependent on the atactostele of the monocotyledons, the eustele of the dicotyledons and the spatial arrangement of the petiole and lamina with a 2D- or 3D-configuration, similarities and dissimilarities of the arrangement and orientation of the vascular tissues are likely, especially in the transition zone. The petiole of H.
tardiana shows a parallel venation similar to the lamina, whereas C. bicolor reveals a typical atactostele. The venation of H. alternata und P. peperomioides is typically eustele. In the 2D transition zones, the main part of the vascular bundles runs straight and parallel from the apical petiole into the midrib, with only a few bundles splitting off and entering the basal lamina portions. In the 3D transition zones, the vascular bundles split at the beginning of the transition zone and run in all directions into the lamina. However, individual bundles of the studied monocotyledons are separate, whereas the vascular bundles of the investigated dicotyledons form merged strands. A vascular strand arrangement in the 3D transition zone, comparable with that of the dicotyledonous P. peperomioides, has previously been described for the peltate leaves of the dicotyledonous Malva neglecta [2], Kingdonia uniflora [3], and Tropaeolum majus [13]. Another aspect common to all petiole-lamina transition zones is the increase in the number of vascular bundles in the transition zone compared with the apical petiole. The bundles split in order to supply the increasing lamina regions [35]. In all four species studied, this increase is accompanied by a decrease in the area fraction of the vascular tissue AF v . This can be explained on the basis that the total area of vascular tissue remains almost constant across the transition zones, whereas the total cross-sectional area of the foliage leaf increases, leading to a relative decrease in the area fraction of the vascular tissue. In addition, this transition from thicker parallel bundles to a multitude of thin vascular bundles can also be seen as a gradient of dimensions that allows for load sharing and, thus, a more resilient connection between the petiole and lamina [6,21,26]. In summary, the course of the vascular bundles in the petiole depends on the body plan of the plant as a monocotyledon or dicotyledon. In contrast, the course of the vascular bundles in the transition zone is more dependent on the spatial arrangement than on the body plan. In the 2D transition zones, the vascular bundles run straight and parallel from the apical petiole into the midrib. In contrast, all vascular bundles in the 3D transition zones initially run in the middle of the cross-section and, from there, some groups of vascular bundles curve towards the lamina. Plant Material Plants of H. tardiana, C. bicolor, H. alternata and P. peperomioides were cultivated in the greenhouse of the Botanic Garden of the University of Freiburg, Germany. The selection criteria for the four species were: (1) two monocotyledonous species and two dicotyledonous species with (2) stalked leaves exhibiting 2D-and 3D-configuration, (3) herbaceous and (4) perennial plants that (5) are easy to cultivate to provide sufficient experimental material. The restriction to herbaceous plants resulted from the limited variety of stalked monocotyledonous species and was intended to allow for better comparability. Leaves of all four plant species were cut at the base of the petiole with a scalpel. One leaf per species was photographed with a Lumix DMC-FZ1000 camera (Panasonic, Kadoma Osaka, Japan; Figures 3a, 4a, 5a and 6a). Geometry, Size and Shape The diameters of 25 petioles per species were recorded every 1 cm for H. tardiana and P. peperomioides, every 3 cm for C. bicolor, and every 0.5 cm for H. alternata, because of the different lengths of the petioles of the individual species. 
At each point, the diameter was measured in the lateral direction d_l and in the adaxial-abaxial direction d_a with a caliper (accuracy ± 0.05 mm) with reference to the lamina. The aspect ratio AR_petiole of each petiole sample was calculated as the ratio of these two diameters: AR_petiole = d_l/d_a (Equation (1)). In addition, the tapering mode α_petiole was calculated for the 25 petioles per species by using the method described by Caliaro et al. [36]. The tapering mode describes whether the shape of the petiole resembles more a circular cylinder (α = 0), a second order paraboloid of revolution (α = 0.5), a circular cone (α = 1), or a hyperboloid of revolution (α = 1.5). Each image of a transverse thin-section (see Section 4.4) was rotated such that the abaxial side of the section was oriented downwards. The serial transverse sections were the basis for the classification into geometrical categories. Furthermore, in these images, the entire cross-sectional area was masked with the "Threshold" function of Fiji [37] (ImageJ Version 1.52p). The masked areas A together with the axial second moments of area I and the polar second moments of area J were calculated using the function "Slice Geometry" of the BoneJ plugin [38] for Fiji. The data for A, I, and J were plotted with Microsoft Excel 2016 (Microsoft, Redmond, WA, USA) and the linear and exponential regressions were performed using the built-in functions. The linear slopes a and exponential growth constants b were determined via the regression equations for the linear (Equation (2): y = a·x + m) and the exponential (Equation (3): y = n·e^(b·x)) regressions, with m being the intercept and n the initial value. An additional detailed analysis in 100 µm steps was carried out with a single sample of each species. The vascular tissue of each transverse section was masked using GIMP (Version 2.10.6). The number of vascular bundles and the area of the vascular tissue were calculated via the "Analyze Particles" function of Fiji. The percentage area fraction of the vascular tissue AF_v was calculated as AF_v = (A_v/A_t) × 100% (Equation (4)), where A_v is the area of the vascular tissue and A_t the total cross-sectional area. µCT Scanning One petiole-lamina transition zone sample (see Section 4.4) of each of the four species was critical point dried in acetone (CPD 030, BAL-TEC AG, Balzers, Liechtenstein) and scanned using a high-resolution µCT (Skyscan 1272, Bruker, Kontich, Belgium). The 360° scans were performed with rotation steps of 0.6°, an image resolution of 2452 × 1640, a source voltage of 50 kV without filter, a source current of 200 µA, frame averaging over 3 frames and a random movement correction of 10. Because of the differing sample dimensions, various scan resolutions were used: 6 µm for H. tardiana, 3.7 µm for C. bicolor, 5.5 µm for H. alternata, and 5 µm for P. peperomioides. The data were reconstructed in NRecon (Version 1.6.10.1, Bruker, Kontich, Belgium) with a beam hardening correction of 25%, a ring artefact correction of 5 and smoothing set to 4 for H. tardiana and H. alternata and to 3 for C. bicolor and P. peperomioides by using a Gaussian smoothing kernel of 2. The vascular tissues of each sample were segmented and visualized using Avizo 2019.2 (Thermo Fisher Scientific, Waltham, MA, USA). The videos can be found in the Supplementary Materials Videos S1-S4. Sample Preparation for Histological Studies Before thin-sectioning, the petiole, petiole-lamina transition zone, and lamina of the leaves were identified for the collection of the respective samples.
Samples of the transition zone consisted of the apical 5 mm of the petiole together with a square of 1 cm side length from the lamina. The remaining petiole length was divided into three equal parts that corresponded to the basal, middle, and apical section of the petiole. Prior to thin-sectioning on a rotatory cryotome (MEV, SLEE medical, Mainz, Germany), the samples were frozen on a metal sample holder by using a specific freezing solution (Tissue-Tek O.C.T. Compound, Sakura Finetek Japan Co., Tokyo, Japan). Transverse thin-sections with a thickness of 100 µm were prepared from the basal, middle, and apical part of all petiole samples. Serial transverse thin-sections of the transition zone with a thickness of 200 µm were prepared for five samples of each species. For better anatomical resolution of the transition zone, a sixth sample of each species was serially thin-sectioned at a thickness of 100 µm. Sections of the petiole samples were stained with 0.05% w/v toluidine blue O [39] for 8 min and afterwards rinsed in distilled water for 8 min. Toluidine blue O (TBO) stains lignified tissue blue to dark violet, while non-lignified tissue is stained red-purple. The 100 µm serial thin-sections of the transition zone were stained with 0.1% w/v aqueous acridine orange for 8 min and rinsed in distilled water for 8 min. Acridine orange (ACO) stains lignified tissue bright yellow/orange, whereas non-lignified tissue is stained dark brown/red. The 200 µm thin-sections of the transition zones remained unstained. Images of all sections were recorded via a microscope (BX61, Olympus, Tokyo, Japan) equipped with a USB camera (DP71, Olympus, Tokyo, Japan). Statistics Raw data are provided in the Supplementary Materials Table S1. The software GNU R 3.6.2 was used for statistical analyses [40]. Parametric data are represented by mean values ± one standard deviation, whereas non-parametric data are shown as median values with respective interquartile ranges (IQR). To determine significant differences of the measured variables between the species and between the petiole and transition zone of each species, Kruskal-Wallis tests were performed together with Wilcoxon signed-rank post hoc tests (with p-value adjustments according to Holm [41]) for paired data and with Mann-Whitney-U post hoc tests (with p-value adjustments according to Holm [41]) for unpaired data, after testing for normal distribution (Shapiro-Wilk test) and for homoscedasticity of the variances (Levene test). For all statistical tests, we employed an alpha level of 5%. Conclusions Although many detailed studies on the morphology, anatomy, and biomechanics of both the petiole and lamina of foliage leaves have been published, the transition zone between them has mostly been ignored. Those scientists who have studied the petiolelamina transition zone have assigned a variety of terms to this area, depending on their scientific discipline and the underlying scientific question. The transition zone is interesting because, by superimposing various gradients, it creates a damage-resistant transition between the petiole and the lamina, which often differs considerably in geometry, shape, and size. The objective of this study has been to analyze similarities and dissimilarities in the geometry, shape, and size of the petiole and the petiole-lamina transition zone in order to objectively distinguish and characterize the transition zone with at least one key criterion. 
Conclusions

Although many detailed studies on the morphology, anatomy, and biomechanics of both the petiole and lamina of foliage leaves have been published, the transition zone between them has mostly been ignored. Those scientists who have studied the petiole-lamina transition zone have assigned a variety of terms to this area, depending on their scientific discipline and the underlying scientific question. The transition zone is interesting because, by superimposing various gradients, it creates a damage-resistant transition between the petiole and the lamina, which often differ considerably in geometry, shape, and size. The objective of this study has been to analyze similarities and dissimilarities in the geometry, shape, and size of the petiole and the petiole-lamina transition zone in order to objectively distinguish and characterize the transition zone with at least one key criterion. Our comparative study of four species has included the body plans of monocotyledons and dicotyledons and the various spatial arrangements of petiole and lamina, in terms of peltate leaves (3D-configuration) or leaves whose petiole is marginally attached to the lamina (2D-configuration). Some characteristics are dominated by the body plan (the course of the vascular tissues in the petiole) and others by the spatial arrangement (the change of geometries and the course of the vascular bundles in the transition zone). However, all the examined samples demonstrate that the investigated transition zones are defined by one key criterion and thereby differ from the petiole and lamina, namely the exponential increase of the cross-sectional area and of the axial and polar second moments of area. In conclusion, a variety of terms are used in the literature to describe and characterize the area between petiole and lamina. Nevertheless, to emphasize the 3D nature and the integration of multiple gradients of geometry, shape, and size, we prefer the term "petiole-lamina transition zone".
The Effects of Social Determinants and Resilience on the Mental Health of Chilean Adolescents

The aim of this research was to evaluate the effects of social determinants (i.e., gender, educational vulnerability, and socioeconomic status) and resilience on the mental health of Chilean adolescents in pre-, during-, and post-COVID-19 pandemic contexts. The study included a group of 684 students, ranging in age from 12 to 18 years, who were attending educational institutions in the city of Arica. The Child and Adolescent Assessment System (SENA) was used to measure mental health problems, the Brief Resilience Scale for Children and Youth (CYRM-12) was used to measure resilience, and the Vulnerability Index of Educational Institutions was used to measure educational vulnerability. The results suggest increases in depressive, anxious, and social anxiety symptomatologies over time (wave by year: 2018, 2021, and 2022). In addition, multiple linear regression models showed predictive effects of the COVID-19 pandemic, gender, vulnerability index, socioeconomic status, and resilient behaviors on mental health problems. The worsening of mental health indicators over time requires the greater coordination and integration of mental health experts in the most vulnerable educational centers.

Introduction

In recent years, the mental health of millions of people has systematically worsened [1]. In 2019, 970 million people reported living with a mental disorder, with anxiety disorders and depression being the most prevalent [2]. In 2020, the number of people living with a mental disorder increased significantly due to the COVID-19 pandemic [1]. Although effective prevention and treatment options exist, access to effective care is limited for most people with mental disorders [3]. The social context in which human beings develop plays a fundamental role in well-being and in the care of people's mental health [4]. The frequency of occurrence and severity of mental disorders are strongly correlated with individuals' socio-economic circumstances, including factors such as poverty, income inequality, or involuntary migration [5,6]. The social determinants of mental health that stand out the most in the literature are demographic, which include gender, age, ethnicity, and life expectancy [7][8][9][10][11]; economic, where financial and employment status, housing, and income inequality are considered [12][13][14][15]; an individual's area of residence, which is related to variables such as safety, the availability of services, and places of recreation [16][17][18][19]; environmental events, such as natural disasters and armed conflicts, as well as hazards to the ecosystem due to climate change [20][21][22]; and the sociocultural domain, which encompasses education.

Despite the existence of some studies contrasting the effects of the social determinants of adolescent mental health in the Chilean context, the research is still insufficient and/or scarce. Moreover, no study has jointly contrasted the effects of the COVID-19 pandemic, resilience, and social determinants on the mental health of young people. Hence, the objective of this research was to assess the impact of social determinants and resilience on the mental well-being of Chilean adolescents within the context of the COVID-19 pandemic, before, during, and after its occurrence.

Participants

A repeated cross-sectional design [56] was used with three samples of high school students belonging to urban educational establishments in 2018, 2021, and 2022.
Data collection was performed via a non-probability convenience sampling strategy [57]. The 2018 sample was composed of 249 students, of which 52.2% (n = 130) attended educational establishments with low vulnerability indexes and 47.8% (n = 119) attended educational establishments with high vulnerability indexes. The ages of the students ranged from 12 to 18 years, with a mean age of 14.42 (SD = 1.83); 50.2% (n = 125) reported being female, 90% (n = 224) reported being Chilean, 70.3% (n = 175) reported belonging to a religion, and 67.1% (n = 167) reported no ethnicity (the term no ethnicity refers to the absence of a specific ethnic or racial identity. This option is for individuals who do not identify with any particular ethnic group or prefer not to disclose their ethnicity). The 2021 sample was composed of 206 students, with 47.6% (n = 98) attending low-vulnerability-index educational institutions and 52.4% (n = 108) attending high-vulnerability-index educational institutions. The ages of the students ranged from 12 to 18 years, with a mean age of 14.23 (SD = 1.59); 55.3% (n = 114) reported being female, 87.4% (n = 180) reported being Chilean, 55.3% (n = 114) reported belonging to a religion, and 58.7% (n = 121) reported having no ethnicity. The 2022 sample was composed of 229 students in which 40.2% (n = 94) attend low-vulnerability-index educational institutions, and 59.8% (n = 140) attend highvulnerability-index educational institutions. The ages of the students ranged from 12 to 18 years, with a mean age of 14.48 (SD = 1.55), 56.3% (n = 129) reported being female, 88.0% (n = 206) reported being Chilean, 56.5% (n = 130) reported belonging to a religion, and 57.1% (n = 129) reported having no ethnicity. The sociodemographic details are presented in Table 1. Instruments Ad-hoc sociodemographic questionnaire: This questionnaire collected information on the students' sex, age, socioeconomic status, nationality, religiosity, and ethnicity. The year of data collection was used as a proxy for the effect of the COVID-19 pandemic. Vulnerability index of educational institutions: This index was calculated by estimating the weighted percentage of the risk needs of students attending different educational institutions; some of the factors encompassed in this category are limited maternal education, unaddressed medical necessities, existential needs, insufficient weight for age, and other related aspects. According to the statistics presented in the Annual Municipal Development Plan of Arica [58], a classification of "high" vulnerability and "low" vulnerability was made. According to this report, between 2014 and 2017, public schools exhibited an average vulnerability index of 86%, surpassing the 77% vulnerability level observed in the commune [59]. Conversely, during the same timeframe, subsidized and private schools had an average vulnerability index of 74%, which remained lower than the percentage observed at the communal level [59]. The cut-off point was the average vulnerability percentage of the commune. The most elevated score in the low vulnerability category was 73, while the lowest score in the high vulnerability category was 78. Child and adolescent assessment system (Sistema de Evaluación de Niños y Adolescentes, SENA): This instrument measures several emotional and behavioral problems [60]. Self-report versions were used for high school students 12 to 18 years old. 
The high school version was composed of 188 items; however, for this study, only the items pertaining to the internalized problem dimensions (e.g., depression, anxiety, and social anxiety, 32 items in total) were used. The items were on a five-point Likert behavioral/attitudinal statement scale (from 1 = "never or almost never" to 5 = "always or almost always"). Higher scores suggest the presence of a higher level of maladaptation and difficulties in adapting to the context. The version used in this study reported evidence of reliability and validity based on the internal structure of the test (Sánchez-Sánchez et al., 2016). The SENA scales have presented Cronbach's alpha coefficients greater than 0.75 [61]. Brief Resilience Scale for Children and Youth (CYRM-12) [62]: This scale measures the degree of resilience in the face of adversity based on the interaction between individual, relational, community, and cultural factors (e.g., "I try to finish what I start" and "My family will be there for me in difficult times"). Response options respond to behavioral statements on a 5-point Likert scale (from 1 = "never" to 5 = "very much"). Higher scores suggest higher levels of resilience. Llistosella et al. [63] translated and validated the 32-item version of the Child and Youth Resilience Measure (CYRM) into Spanish and reported evidence of validity of their scores, reporting a satisfactory internal structure of the test and satisfactory reliability coefficients (α > 0.80). Procedures The study, which is part of a larger project from the Educational Justice Center, was approved by the Ethics Committee of the Universidad de Tarapacá (26-2017) on 20 September 2017. The researchers reached out and extended voluntary invitations to principals and counselors from educational institutions in Arica to partake in this study. Out of the 42 schools invited, 29 principals agreed to participate in the data collection for the years 2018, 2021, and 2022, resulting in a refusal rate of 31%. Parental consent was sought after explaining the purpose and scope of the study. The students then signed a form agreeing to participate. The questionnaires were administered in a pencil and paper format by at least 2 trained assistants. The application procedure took place within each class for every course, in which a minimum of two trained interviewers, along with the respective teachers of the courses, responded to the questionnaire. The duration was approximately 45 min, and data collection sessions were conducted from March to December of 2018, 2021, and 2022. Data Analysis Initially, to characterize the sample in each year, the proportion of the students' sociodemographic variables was obtained and the central tendency (i.e., mean), dispersion (i.e., standard deviation, minimum, and maximum), and shape (i.e., skewness and kurtosis) of the continuous variables were obtained. The Shapiro-Wilk test was also used to assess univariate normality. Afterward, a statistical technique called an analysis of variance (ANOVA) was utilized to assess and compare the variations in the average scores of the depression, anxiety, and social anxiety scales across different years of application (serving as a proxy to gauge the impact of the COVID-19 pandemic). 
Multiple comparisons were conducted with a Games-Howell correction because the data did not show homoscedasticity (depression: Levene's test, F = 4.64, p = 0.010; anxiety: Levene's test, F = 5.24, p = 0.006; and social anxiety: Levene's test, F = 9.52, p < 0.001), and the partial eta squared (ηp²) was used as an estimate of effect size. Parametric analyses were used because ANOVAs are sufficiently robust to data that are neither homoscedastic nor normally distributed (see Table 2) [64]. Finally, to evaluate the potential predictive capacities of social determinants and resilience regarding students' mental health indexes, multiple linear regression analyses were performed. One model was estimated for depression, another for anxiety, and a final model for social anxiety. Gender, the year of application (i.e., used as a proxy to measure the effect of the COVID-19 pandemic), vulnerability index, socioeconomic status (SES), nationality, ethnicity, being Aymara (the Aymara are an indigenous ethnic group with a strong presence in the Andean region that includes the territories of Chile, Peru, and Bolivia), religion, and resilience were included as predictor variables. The categorical predictor variables were transformed into dummy variables. The standardized β coefficients represented changes in the standard deviations of the criterion variables. Predictor variables with higher standardized β coefficients suggest a greater relative effect on student mental health indexes. All assumptions were met. The presence of multicollinearity among predictor variables was ruled out via the variance inflation factor (VIF), for which the values were less than 5 for all variables. The residuals were independent of each other (depression: Durbin-Watson statistic, DW = 2.02, p = 0.880; anxiety: DW = 2.03, p = 0.782; social anxiety: DW = 2.03, p = 0.722). The presence of homoscedasticity was verified by examining a scatter plot of the predictors and standardized residuals. The normality of the residuals for each dependent variable was assessed via the examination of a histogram and a Q-Q plot of the standardized residuals. The statistical hypothesis testing for the data analyses was conducted at a significance level of 5%. The statistical analyses were carried out using RStudio and Jamovi version 2.3.2 software.
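To make the regression setup above concrete, here is a minimal, hedged Python sketch; the study itself was analyzed in RStudio and Jamovi, so this is only an illustrative analogue. It fits an OLS model with dummy-coded categorical predictors on synthetic data and runs the VIF and Durbin-Watson diagnostics mentioned above; every variable name and value is invented. Standardized β coefficients could be obtained by z-scoring the variables before fitting.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "year": rng.choice(["2018", "2021", "2022"], n),   # proxy for pandemic phase
    "gender": rng.choice(["female", "male"], n),
    "ses": rng.choice(["low", "medium", "high"], n),
    "resilience": rng.normal(0, 1, n),
})
# Synthetic outcome loosely mimicking the direction of the reported effects
df["depression"] = (0.6 * (df["year"] == "2022") + 0.25 * (df["year"] == "2021")
                    - 0.4 * (df["gender"] == "male") - 0.7 * df["resilience"]
                    + rng.normal(0, 1, n))

# OLS with dummy-coded categorical predictors (references: 2018, female, high SES)
model = smf.ols("depression ~ C(year) + C(gender) + C(ses) + resilience", data=df).fit()
print(model.summary())

# Multicollinearity check: variance inflation factor (VIF) < 5 for every predictor
X = pd.get_dummies(df[["year", "gender", "ses"]], drop_first=True).astype(float)
X["resilience"] = df["resilience"]
X["const"] = 1.0
vifs = {col: variance_inflation_factor(X.values, i) for i, col in enumerate(X.columns[:-1])}
print("VIF:", {k: round(v, 2) for k, v in vifs.items()})

# Independence of the residuals: Durbin-Watson statistic close to 2
print("Durbin-Watson:", round(durbin_watson(model.resid), 2))
```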
Descriptive Analysis

Symmetry and kurtosis were outside the acceptable ranges to be considered normally distributed in the 2018 and 2021 collections, with the exception of 2022. In addition, the Shapiro-Wilk test showed that none of the mental health variables had a normal distribution in the 2018, 2021, and 2022 collections. Details of the descriptive analyses of the study variables are presented in Table 2. The ANOVA showed statistically significant differences between the different years of application and the mental health indexes. Regarding the dependent variable of depression, the main effect of application year was statistically significant (F (2, 443) = 19.24, p < 0.001). Post-hoc test results showed a significant increase in depression scores between the application years.

Multiple Linear Regression Models on Mental Health Indexes

The results of the multiple linear regression analyses showed that for the depression criterion variable, the model was statistically significant (F (11) = 23.8, p < 0.001) and able to explain 27.3% of the variance in the students' depression scores. There is a significant and negative relationship between depression and resilience scores when all other variables remain constant (i.e., their value is zero). Thus, with an increase of one standard deviation in resilience, the depression score decreases on average by 0.706. Depression scores increased on average by 0.247 and 0.592 when the application years were 2021 and 2022, compared to 2018, when all other variables remained constant. Depression scores decreased on average by 0.424 when students were male compared to female, when all other variables remained constant. Depression scores increased on average by 0.204 in students belonging to families with medium SESs compared to a low SES when all other variables were held constant. The details of the multiple linear regression are presented in Table 3. For the anxiety criterion variable, the multiple linear regression model was statistically significant (F (11) = 15.54, p < 0.001) and able to explain 19.3% of the variance in the students' anxiety scores. There is a significant and negative relationship between anxiety and resilience scores when all other variables are held constant. Thus, with an increase of one standard deviation in resilience, the anxiety score decreases on average by 0.473. Anxiety scores increased on average by 0.385 and 0.780 when the application years were 2021 and 2022, compared to 2018, with all other variables held constant. Anxiety scores decreased on average by 0.548 when the students were male compared to female, with all other variables remaining constant. Anxiety scores decreased on average by 0.183 in students attending educational establishments with a low vulnerability index compared to a high vulnerability index when all other variables remained constant. The details of the multiple linear regression are presented in Table 3. For the social anxiety criterion variable, the multiple linear regression model was statistically significant (F (11) = 12.76, p < 0.001) and able to explain 16.2% of the variance in the students' social anxiety scores. There is a significant and negative relationship between social anxiety and resilience scores when all other variables are held constant. Thus, with an increase of one standard deviation in resilience, the social anxiety score decreases on average by 0.386. Social anxiety scores increased on average by 0.302 and 0.644 when the application years were 2021 and 2022, compared to 2018, with all other variables held constant. Social anxiety scores decreased on average by 0.534 when students were male compared to female, with all other variables remaining constant. Social anxiety scores increased on average by 0.181 in students belonging to families with medium SESs compared to a low SES when all other variables were held constant. The details of the multiple linear regression are presented in Table 3.

Discussion

The purpose of the present study was to evaluate the effects of social determinants and resilience on the mental health of Chilean adolescents in pre-, during-, and post-COVID-19 pandemic contexts. This study showed that students attending schools with high and low vulnerability indexes experienced significantly worsened mental health over the years, from 2018 (i.e., pre-pandemic) to 2021 (i.e., during the pandemic) and 2022 (i.e., post-pandemic). This finding is consistent with the ample evidence of the negative effects of the COVID-19 pandemic on adolescent mental health [65][66][67][68][69][70][71][72][73].
Therefore, it is necessary to design interventions that provide tools to facilitate the return to the classroom, especially in context of vulnerability. This study also showed increases in depressive, anxious, and social anxiety symptomatologies when the students were female, their family had a medium SES (i.e., only in depression and social anxiety), and they belonged to educational establishments with high vulnerability indexes (i.e., only in anxiety), while the resilient response of the students was able to reduce mental health problems. These findings are also consistent with those found in the literature [29][30][31][32]37,38], with the exception of the socioeconomic status factor, which may seem contradictory. One might expect worse mental health in families with low socioeconomic levels since it is usually assumed that families with low economic resources possess greater vulnerability [74]. However, it is important to note that in this study, the variable SES was constructed in terms of the schools attended by the students (i.e., public, subsidized, and private schools). Therefore, there is a possibility that the participation of families with low and medium SESs between public and subsidized schools is not very clear. It is possible that the majority of families with low SESs are participating in subsidized schools and vice versa. In this sense, the vulnerability index could complement this contradictory finding. Similar to how numerous public schools were categorized as having a low vulnerability index, certain subsidized schools were identified as having a high vulnerability index. As a result, these outcomes partially align with previous findings documented in the existing literature [12][13][14][15][23][24][25]43,[53][54][55]. Some factors that may support these results probably respond to the difficult access that students belonging to families from low and middle socioeconomic strata have to treatment, the negative effects of the COVID-19 pandemic, and the perceptions of uncertainty, loneliness, and/or of not possessing the necessary tools to perform adequately in the neoliberal system that prevails in Chile [43,53,75,76]. The existence of disparities in anxiety levels, according to the vulnerability index in school communities, could indicate the importance of enriching standardized and generalist interventions. This would imply adapting these interventions to the specific context of vulnerability faced by adolescents. This study has some limitations. First, a non-probabilistic sampling strategy was used, which limits the generalizability of the results to other sociocultural contexts. Secondly, only students from one region of northern Chile participated; therefore, future research should contrast these results with students from other regions of the country. Third, although a repeated cross-sectional design was used which allows for the establishment of pseudo-longitudinal explanations [77], it is not possible to establish changes over time in the students (i.e., trajectories); therefore, future studies should use longitudinal designs that allow the findings of this study to be contrasted with those in a sample of adolescents measured at different times. Finally, this study did not consider other sources of information that contribute to a better understanding of adolescent mental health, such as information reports from parents, guardians, and/or teachers. 
Despite the limitations, the findings of this study provide relevant information about the resilient responses of high school students in Chile attending low-and high-vulnerability educational establishments pre-, during, and post-COVID-19 pandemic and the pandemic's effects on their mental health. The results suggest the design of interventions with new perspectives based on the students' context of vulnerability. It also provides background information for future research to include variables not considered in this study and to test more complex explanatory models. Likewise, the findings of this study can be used as inputs to propose new orientations in the design of public policies that produce structural changes in the reduction of the mental health needs of Chilean high school students. Finally, it is suggested to strengthen the coordination between mental health experts and the most vulnerable educational centers, implement programs to promote resilience, address socioeconomic and educational inequalities, promote a comprehensive health approach in the educational curriculum, and establish psychological and emotional support programs within schools. These suggestions aim to address social determinants and promote mental health in the educational context to improve student well-being. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data presented in this study are available upon request from the corresponding author.
UFOs shot down: An Extraterrestrial Charade? The last chance before the nuclear Apocalypse

In February 2023, the airspace of two countries in North America and of China was the scene of a series of UFO sightings, resulting in the proven shooting down of one of these objects by the United States Air Force. Reportedly, one of the objects was identified as a balloon of Chinese origin, while the others remain unknown. With respect to this, the Latin American Academy of Scientific Ufology (LAASU) organized an international meeting to discuss this subject on March 18, 2023, and five professionals from different academic fields were invited to give their points of view. With over 40 years dedicated to Unidentified Flying Object (UFO) research, Dr. Acosta Navarro, LAASU's Director, supported the hypothesis that an extraterrestrial manifestation would be the best explanation for the UFO phenomena observed and shot down, which could represent a charade or puzzle to be interpreted. The development of this article within LAASU was prioritized because of the transcendence and contemporaneity of the phenomenon and was named "Operation Ray of Zeus". Ten points of argumentation are presented to support this hypothesis. We propose that the downed UFOs could be a consequence of direct or indirect actions by Extraterrestrial Intelligences, which can be interpreted as a charade intended to raise awareness and reflection about the current critical moment of risk of self-destruction, highlighting our technological limitations and showing our vulnerability.

Introduction

In February 2023, the airspace of two countries in North America and of China was the scene of a series of Unidentified Flying Object (UFO) or Unidentified Anomalous Phenomena (UAP) sightings, resulting in the proven shooting down of one of these objects by the United States Air Force. Reportedly, one of the objects was identified as a balloon of Chinese origin, while the other five remain unknown. [1] The event has rattled global geopolitics. The United States claims that the balloon shot down by an F-22 fighter jet on February 4, 2023, the first in the series, was a Chinese spy balloon (ChBa-down). China acknowledges the origin but denies that it was a spy balloon, claiming it was just a civilian research balloon. [2] With respect to this, the Latin American Academy of Scientific Ufology (LAASU), a non-profit, non-governmental organization founded in Sao Paulo (Brazil) and dedicated to the study and research of UFOs, organized an international meeting to discuss this subject on March 18, 2023. Five professionals from different academic fields were invited to give their points of view (Figure 1). Several hypotheses were addressed during the meeting. The Brazilian journalist Agnes Franco, MSc, inspired by the recalibration of the radars, offered in her lecture hypotheses more related to human geopolitics. [3] She highlighted that images from the fighter-jet cameras have yet to be released, which raises suspicions, since it is well known that the fighters carry cameras; for credibility, one would expect the country to prove through images what it claims to have found. In his presentation, the Brazilian lawyer Flori Tasca, PhD, stated that he believed the objects to be of human origin. One of his comments was that the moment the U.S.
identifies the first balloon, it ceases to be UAP or UFO because it has already been identified as a terrestrial artifact.He cites Toni Inajar's article [4] on identifying a UFO, for instance, impossible angles for human objects, emission of strong lights, sudden change in altitude, etc. Tasca argued that none of the patterns presented in that article have been observed in any of the objects cited in this article.In agreement with Agnes Franco, he concludes that these objects apparently have more to do with geopolitics than exopolitics. Italian journalist Candida Mammoliti criticizes how the security forces of countries deal with the UFO phenomenon and claims that according to the Canadian government's research it would not represent a threat.[5] Despite this, the approaches of many states continue to be warlike and violent.She also cites the La Joya (Peru) incident, when over a thousand people and a pilot identified a cream-colored, dome-shaped, 600-meter half-circular object, and there was a military assault that fired 64 shells.[6] The object could have absorbed some; others fell to earth.Therefore, from this incident, one can conclude that it is complex, if not impossible, to bring down such objects.She argues that these are not geopolitical phenomena since they have been sighted in several countries, in different political contexts, and with other descriptions.According to Candida, like others ufologists, if such objects, sighted for millennia, were dangerous, this would already be clear. The Brazilian medical doctor Daniel C. Gamini advocates that the UFO phenomenon should be "... studied, in a methodical, logical and standardized manner, with transparency and cooperation among the various international entities...".He argues that millions of dollars are spent on military matters which could be allocated to scientific research.Gamini recalls that between the 1950s and 1970s, academia did not ignore ufology as it does today and cites several researchers from different fields who have researched the subject, citing an article whose main argument is that the movement of UFOs is nothing like human technologies, even reaching the speed of Mach 60. [7] The doctor cites another medical article cross-referencing over 30,000 witnesses to the phenomenon and their mental health, finding that most enjoyed full mental health.[8] Gamini also cites several statements by U.S. authorities claiming the possibility of intelligent life visiting Earth, such as Senator Marco Rubio, John Brennan, and John Ratcliffe. John Budrys, a senior American UFO researcher, despite not being a panelist, had a significant point during the comments when he argued in favor of logic, calling for detailed observation on how the advance of human technology and the development of UFO in governance can be a trap if researches do not have its feet grounded.UFO researchers need to be scientific and conservative, and not be tempted into speculations if they are to maintain credibility.There is plenty of factual data clearly supporting the reality of UFOs, without tainting the subject with unsupported ideas.Of course, armed with facts, researchers need to try to conceive of what larger picture and meanings are indicated.However, any pure speculations need to be clearly presented as such.He applauded Dr. 
Navarro's now-published careful studies as solid attempts to introduce scientific rigor to the inherently difficult evaluation of extraterrestrial contacts with human beings. Budrys currently works to help experiencers of extraterrestrial contact as a member of the Experiencer Support Team of the "Organization for Paranormal Understanding and Support" (OPUS-EST). [9] In addition, Budrys has a website to provide this same service to alleged experiencers of extraterrestrial contact. [10] Finally, the last lecturer at the meeting was the Peruvian/Brazilian scientist Dr. Julio Acosta-Navarro, PhD (Figure 2). The arguments he presented form the main argument of this paper. His claims will be presented in the following chapters, with some observations added later as the topic matures. Dr. Acosta Navarro, LAASU's Director, maintains that the hypothesis of extraterrestrial manifestation would be the best explanation for the UFO phenomena observed and shot down in February of this year. The facts would be a consequence of direct or indirect action with global reach by Extraterrestrial Intelligences, which could represent a charade or puzzle to be interpreted, meant to raise awareness and provoke reflection about the critical moment that humanity is currently going through.

Methodology

Dr. Acosta-Navarro has been Director/Founder of the Latin American Academy of Scientific Ufology (LAASU) since its creation in 2011. He is also a medical doctor and university professor at the School of Medicine of Sao Paulo University, holding a PhD in Medical Sciences and a second PhD in Social Sciences, both from Sao Paulo University. Additionally, he has more than 44 years of experience in the ufology field (Figure 3). Dr. Acosta Navarro gave his first impressions of the downed-UFO phenomenon in an interview on the Epimetheus Project with the Internet channel "Universo Paralelo OVNI" on February 21, 2023, the same year in which the Epimetheus Project research and results were published in a scientific journal. [11] That research, on case studies of alleged Close Encounters of the Fifth Kind (CE5thK), demonstrated that some of these 'contactees' could be considered highly likely to be authentic, and that in these contacts, information in several fields of knowledge, with revolutionary potential for humanity, was passed on by Extraterrestrial Intelligences. [12] Days after his research was published, Dr. Acosta Navarro reinforced his views on the phenomena observed in February in the interview mentioned above. The development of this article within LAASU was prioritized because of the transcendence and contemporaneity of the phenomenon and was named "Operation Ray of Zeus". Information from official entities, the mass media, LAASU documents, published scientific papers, and virtual networks was used. All sources were duly referenced at the point of citation.

Results and discussion

The main hypothesis of this paper is the possibility that the objects shot down last February have extraterrestrial origins, even though official statements move in the opposite direction. The following are ten points of argumentation to support this hypothesis.
Argument 1: Operation "False Flag" or other official geopolitical argumentation does not explain the phenomena The main official explanation of the recent UFO overflights and shootdowns is that there is no convincing and detailed explanation of their nature.However, some scholars suggest it could be a possible 'false flag operation' or a consequence of geopolitical actions involving mainly the United States and China. An act can be understood as a 'false flag operation' when the victim of the act -or allies -is the generator, even if against himself, giving clues or directly accusing enemies of being the authors.The term can refer to initiatives by individuals, non-governmental organizations, or countries.[13] According to David Silbev, a historian at Cornell University, a 'false flag operation' was the trigger for the beginning of World War II, when the Gestapo attacked its transmission tower, Gleiwitz, pretending to be a Polish offensive.In the staging, several Germans lost their lives, such as a farmer who was shot dead as if he was the saboteur, and concentration camp prisoners killed in the guise of German guards as if they were victims of the alleged terrorist.[14] Another war start marked by a 'false flag operation' was the one that occurred in Vietnam in the 1960s.According to Lieutenant Commander Pat Paterson of the U.S. Navy, after 40 years of speculation about the attack on the destroyer USS Madoxx in the Gulf of Tonkin, thanks to audiotapes, declassified documents, and uncovered facts, it is possible today to say that what led the U.S. Congress to vote in favor of the U.S. plunging into the Vietnam war was no more than a fabricated scene.[15] Professor Scott Radnitz also reminds us of the episode involving the conflict between Russia and Chechnya, when, after a series of bombings, several people were arrested carrying explosives as if they were Chechen militants, which was later proved to be Russian security agents, i.e., one more staging of severe conflicts.[14] In any case, a false flag operation in the episode that is the subject of this article would have little logic in light of the geopolitical context of war instability, especially by the subsequent position of the U.S. government.No retaliation due to UFOs associated with the Chinese government was observed, from the American government which hurts the historical narrative of false flags.At the same time, such an operation would not be necessary to trigger a possible U.S. attack on China since the geopolitical situation between these powers is in crisis due to the critical tension in the South China Sea and the conflict over Taiwan since before the February events. As journalist Liu Zhen points out, a group of think tanks in Beijing pointed out in March that U.S. military activities have practically doubled, including those involving aircraft carriers and training with allies.In the South China Sea, for example, in 2021, there were 12 exercises at sea.This year, as of this report's publication, there were 22 bomber outings and 11 nuclear submarine passages, according to U.S. 
reports.The Chinese believe that the target of the exercises is China.Scholars warn of the risk of an imminent conflict.[16] In addition, the war for dollar hegemony has recently helped to escalate this situation.There is no need to create false pretexts to start reprisals because the situation is already critical enough.Furthermore, it is a fact that there is a dispute over global economic hegemony and that China and Russia are opposed to this hegemony, strongly led by the United States, and that dependence on the dollar is being questioned.[17] On February 12, Melissa Dalton, Assistant Secretary of the U.S. Department of Defense, said that due to the Chinese balloon (ChBa-down), greater attention was being paid to radar and that this would explain, "in part," the increase in detected objects.[18] However, this official explanation leaves many questions unanswered.On the same day, China announced that it was about to shoot down a UFO in Shandong (3UFOdown), a province near Rizhao in the Yellow Sea, even alerting fishermen to watch out for their safety.[19] In order to more clearly understand the chronological sequence of the events we presented Table 1 with the description of the six objects detected and/or shot down in February 2023, that are commented on this paper. Argument 2: Reality of the UFO phenomenon, which has a physical expression, allows it to be considered a possible cause when not fully explained by a natural or artificial human cause. The UFO phenomena in its modern phase (considering the report of pilot Kenneth Arnold on June 24, 1947, Washington State) was widely known for its strong casuistry that was covered up by governments and neglected by the academic community until recently.The evidence accumulated in the last decades about UFOs, obviously discarding the false ones, has demonstrated their reality, and even recently, different personalities and governmental and academic entities have publicly declared it.Except for some published works of scientists such as James Mac Donald, Allen Hynek [20], Jaques Vallee [21] and Dr. Acosta-Navarro [22,23], we can consider that only in December 2017, ufology gained consensual recognition with the publication by the "New York Times" of encounters of American jets with UFOs and a subsequent recognition by the American government of the authenticity of the evidence.In that time, the "New York Times" published three videos of UFOs.The first of the videos, known as the "GIMBAL footage" shows a 2004 encounter near San Diego between a Navy fighter jet and an UAP.[24] Dr. Acosta-Navarro launched a "LAASU Manifest" [25] at an international event on October 26, 2019, in São Paulo, putting seven affirmative points on the state of the art of UFO knowledge, within which some would be confirmed with later events (Figure 4).The first point affirmed the physical nature of UFOs, and the sixth point the suspicion that some governments were aware of their reality.Textually it was put that: "The phenomena or manifestations of Extraterrestrial Intelligences are real and can have a physical and material expression allowing these to be investigated under the scientific method in addition to having a psychic expression."These have mystified scientists and the military, according to senior administration officials briefed on the findings of the highly anticipated government report.The report determined that the vast majority of more than 144 incidents over the past two decades did not originate from any American military or other advanced U.S. 
government technology, the officials said.[26] The report concedes that much about the observed phenomena remains difficult to explain, including their acceleration, as well as ability to change direction and submerge.Some of the incidents were recorded on video, including one taken by a plane's camera in early 2015, that shows an object zooming over the ocean waves, which caused the pilots to question what they were watching. In the American congressional hearing in 2022, Pentagon officials testified, and admitted the authenticity of encounters of UFOs with jets.An unclassified version of a report investigating several cases was released again admitting that the vast majority of the cases did not originate from American military or other advanced US government technology.[27] In addition to the 144 UAP reports cited above, recently unclassified information was released from the American government showing there have been 247 new reports and another 119 that were either since discovered or reported after the preliminary assessment's time period.This totals 510 UAP reports as of 30 August 2022.UAP events continue to occur in restricted or sensitive airspace, highlighting possible concerns for safety of flight or adversary collection activity. [28] Here we can acknowledge the efforts of journalist Leslie Kean that was probably the first major writer to forcefully cite the same flight safety concerns in her book several years ago.She has been a major figure in several efforts to get officials to look seriously into UFOs.[29] On Wednesday, July 31, 2023, members of an independent NASA panel studying UFOs, or what the U.S. government now terms UAP for "unidentified anomalous phenomena," said in their first public meeting that scant high-quality data and a lingering stigma pose the greatest barriers to unraveling such mysteries.The 16-member body, formed last year among leading experts from scientific fields ranging from physics to astrobiology, held a four-hour session streamed live on a NASA webcast to deliberate their preliminary findings ahead of issuing a report expected later this summer.The panel's chairman, astrophysicist David Spergel, said his team's role was "not to resolve the nature of these events but rather to give NASA a roadmap" to guide future analysis.The greatest challenge panel members cited, however, was a dearth of scientifically reliable methods for documenting UFOs, typically sightings of what appear as objects moving in ways that defy the bounds of known technologies and laws of nature.[30] Despite the scientists concluding that there is ""absolutely no convincing evidence" of extraterrestrial activity in any sightings to date, this doesn't mean the panel has ruled out aliens, military adversaries or any other explanation -just that of the 800 or so sightings of strange flying objects or other phenomena that defy easy explanation, the data are simply not sufficient to draw any definitive conclusions.[31] Dr. Acosta Navarro believes that the objects recently shot down in February of 2023 could be low-tech, unmanned alien probes, ships, decoys, or toys.These were unlike the incidents of UFO sightings with carrier jets where other patterns of speed and flight capabilities have been observed, and not similar to any known human craft. Argument 3: Contradictions of US government officials trying to rule out extraterrestrial origin without being able to explain the nature of UFOs. 
On Monday 13 of February, White House press secretary Karine Jean-Pierre said at Monday's daily press briefing: "No sign of alien activity, the White House says."There is no -again, no -indication of aliens or extraterrestrial activity with these recent takedowns" [32] From this official statement, we can consider two key points.First, it states that the objects are not extraterrestrial spaceships, but without explaining what they are.It is not logical, rational, or scientific.On the other hand, assuming that the US government is accurately stating the objects are not extraterrestrial, it can then be assumed that the US government is aware of what extraterrestrial ships are like to be able to claim that these are not. From this statement, one could deduce that the US government is aware of the reality of the presence of extraterrestrial intelligence on Earth. These statements also contrast with others.For instance, that from Pentagon Press Secretary Brig.Gen. Pat Ryder from Friday 10 February, talking about that day's shootdown: ""We have no further details about the object at this time, including any description of its capabilities, purpose, or origin." The most impactful statement, made one day before Karine's statement, was from General Glen Van Herck.When a reporter asked him on Sunday 12 in February 2023, if the U.S. military has ruled out potential actions by extraterrestrials, he did not dismiss the idea.""I haven't ruled out anything" he said."At this point, we continue to assess every threat or potential threats unknown that approaches North America with an attempt to identify it.It could be a gaseous type of balloon inside a structure or it could be some type of a propulsion system.But clearly, they're -they're able to stay aloft."[32] Another fact to comment on is the radar detection of another UFO in Montana (UFOradar), which was to be the fourth in the sequence and which again led to the order for US jets to intersect.However, when they arrived at the point of detection, they found nothing and concluded that a radar anomaly had caused it. The North American Aerospace Defense Command, a joint effort to monitor the skies by the U.S. and Canada, issued a statement late Saturday 11 February saying it had detected a "radar anomaly" over Montana, but nothing unusual was found when fighter jets were sent to the area.North American Aerospace Defense Command (NORAD) issued a temporary flight restriction in conjunction with the FAA on Saturday in central Montana near the town of Havre, "to ensure the safety of air traffic in the area," according to the agencies.[33] The senator Matt Rosendale questioned the claim that it was a "radar anomaly" on his Twitter [34]: I am in constant communication with NORCOM and they have just advised me that they have confidence there IS an object and it WAS NOT an anomaly.I am waiting now to receive visual confirmation.Our nation's security is my priority. This point makes us speculate whether the UFO was, in fact, real and that it perhaps fled or withdrew from the range of the American jets.We may wonder if there was, in fact, other undisclosed information. 
Another point of contradiction is the information that these UFOs produced magnetic interference in the war jets.F-22 pilots from US fighter aircraft who saw and shot down an object threatening airspace over Alaska on Friday 10 in February 2023 (1UFOdown), said it interfered with their sensors and had no propulsion system.Prior to shooting down the object, Kirby told reporters, the pilots of the F-22 jets that took it down circled it and determined it was unmanned -and lacked the ability to maneuver midair and change its speed like previous balloons have been seen doing.[35] In addition, CNN reported an anonymous source with knowledge of the directions said the pilots shared conflicting observations about the object, including whether it had interfered with their systems, and said that they could not explain how it stayed in the air.[36] Another official reported that in the UFO shoot-down episode at Lake Huron, the F-16 fighter jet did not shoot down the UFO initially with the first missile but only with the second.How could a fighter jet miss with a more modern AIM-9x Sidewinder missile?What would be the explanation? These contradictions could indicate that the US government may be withholding information?.Or it simply reflects the different points of interpretation of officials not having a consensus on the nature of the phenomenon.We can also consider that there could be is another possibility for this secrecy not involving extraterrestrial action.If these were all Chinese balloons sent into our airspace to test our reactions and capabilities, they certainly learned a lot.First, that we are very careful, or at least Biden is very careful, not to risk any harm to Americans on the ground.Second that the Montana radar system is not very accurate at picking up objects.Third, that Sidewinder missiles are not 100% accurate, despite their high cost.The Chinese probably learned other valuable things by simply floating a couple balloons over the US and carefully noting our reactions.The embarrassment over these last two failures may be why the military does not want to publicly speak further on these balloons.However, this rationale does not explain all the arguments presented here. Argument 4: Incomplete information suggesting withholding communication Several pieces of information given by US government officials are incomplete.Despite the jets being near these UFOs, there are no photographs, video footage, or captured objects.As General Ryder reported about the first UFO of Alaska (1UFOdown). The object was about the size of a small car, the general said, and did not resemble in any way the Chinese surveillance balloon shot down off the coast of South Carolina earlier this week."We have no further details about the object at this time, including any description of its capabilities, purpose or origin," he said.Two F-22s flying out of Joint Base Elmendorf in Alaska, took down the object.The one missile shot was an AIM-9X Sidewinder."We have HC-130, HH-60 and CH-47 aircraft participating in that recovery," the press secretary said.[37] The second UFO (2UFOdown) in Canada was cylindrical, smaller than the Chinese balloon, but without further details.In fact, this second UFO was very different from the detailed information given of the original Chinese balloon shot down (ChBa-down) on February 4, 2023 that is not being considered a UFO. 
In the case of the third UFO shootdown at Lake Huron in North America (4UFOdown), it is described as having no identifiable cargo or intelligent control, suggesting the jets were very close.Again, no further details about the objects were given.The object, which was flying at about 20,000 feet, had an octagonal structure with strings hanging off it but had no discernible payload, U.S. officials said.[32] If the pilots saw the UFOs up close, they could give more information.Why was it not done?The seemingly incomplete information could suggest that the United States is covering up information critical to understanding the nature and origin of the phenomena observed. Argument 5: Rescue suspended and paradoxical because they were shot down with chances to recover and did not recover. One of the strongest points questioning and weakening the official version is that the rescue was suspended early and still shows no trace of any downed UFO.According to US officials, the operation was terminated on February 17, 2023, due to weather and access difficulties to the sites.[38] If all UFOs were shot down with the same protocols, with jets and missiles, with the planning of the location to be shot down to facilitate rescue without harm to the population, why was only the Chinese one filmed, shot down, and rescued and shown to the public?[39] Why didn't they continue the search for the others?Did they find the remains and not publicize them?Did they not take them down?These are legitimate questions, given the lack of logic in the facts. Recently in June 2023, United States Air Force (USAF) officer and former intelligence official David Grusch publicly claimed that unnamed officials told him that the U.S. federal government maintains a highly secretive UFO recovery program and is in possession of "non-human" spacecraft and "dead pilots."In 2022, Grusch filed a whistleblower complaint with the U.S. Office of the Intelligence Community Inspector General (ICIG) to support his plan to share classified information with the U.S. Senate Select Committee on Intelligence.[40] We can hypothesize that the United States government may have recovered physical evidence demonstrating extraterrestrial origin leading to an attitude of concealment of this information.It would not be a novelty given the history of this country in covering up the UFO phenomenon in past decades.[29] 3.6.Argument 6: Global UFO wave history is associated with dramatic moments in humanity. Several scholars have suggested the presence and influence of extraterrestrial intelligences in the development of society since the origin of the human species.[41,42] The LAASU 2019 Manifesto clarifies that: "...there is evidence of the presence of these Extraterrestrial Intelligences in humanity's past, and they may have influenced our origins, history, religion, cultures, and scientific and technological development" [25].In the opinion of Dr. 
Acosta-Navarro, there has been a subtle presence of UFOs in dramatic moments of humanity since antiquity, as well as great wars in the past, after the development of atomic energy, and at the beginning of the space age.[43] Subsequently, at the "Forum on the War in Ukraine and the Risk of Nuclear Hecatomb", organized by LAASU in March 2022 (Figure 5), another Manifest was published on the war in Ukraine supporting several points regarding the manifest presence of Extraterrestrial Intelligences in situations of armed conflict.[44] Some of the nine affirmative points: "Disclose that the UFO casuistry suggests that Extraterrestrial Intelligences, which visit the Earth, are of different origins and levels of technological/scientific advancement and have strong concern regarding a nuclear conflict.Concern that is perceived, after the mastery of atomic energy by humanity, with the intensification of contacts. Based on information received in Close Encounters of the Fifth Kind from authentic "contactees", which suggest the presence of Extraterrestrial Intelligences in the contemporary of society, consider the strong possibility of their indirect influence on the outcome of the Ukraine/Russia conflict. Although the serious UFO casuistry suggests that Extraterrestrial Intelligences have sufficient technological capacity to block or neutralize our nuclear weapons, argue that the hypothesis presented here is that Extraterrestrial Intelligences will not directly intervene in the final fate of an eventual nuclear war." Before the events of February, the global risk situation had reached its most dangerous levels with the course of the Russian invasion of Ukraine and the resulting confrontation between Russia and North Atlantic Treaty Organization (NATO).It had reached its most dangerous levels with Vladimir Putin saying he felt entitled to use nuclear weapons against Ukraine.It is the closest we have come to a nuclear war since 1962, during the tension between Cuba and the United States.Add to this narrative a surprisingly unprepared Russian military facing Ukrainian defenders funded by the West.[45] Days before and immediately after the downed UFO events, the crisis remained.At the UN General Assembly, António Gutierres stated that a global nuclear conflict is even greater than in the Cold War period: "In fact, the Doomsday Clock is a global alarm clock.We need to wake up and get to work".[46] Figure 5 Poster announcing the international virtual meeting "Fórum: Guerra na Ucrânia e o risco de Apocalipse nuclear: Extraterrestres onde estão?"March, 2022, São Paulo, Brazil. 
Relations between Russia and the United States, long strained, have worsened even further since Russia's invasion of Ukraine last year.In February, Moscow pulled out of the New Strategic Arms Reduction Treaty (START) nuclear arms reduction treaty with Washington.Both the United States and Russia -by far the world's largest nuclear powers -have said that a nuclear war can never be won and must never be fought, but the conflict in Ukraine has raised fears of a direct confrontation with the West.[47] Argument 7: Worldwide recognition of the importance of the UFO phenomenon and possible contact with Extraterrestrial Intelligences by the United Nations The United Nations (UN) had previously addressed the subject of the UFO phenomenon and possible contact with extraterrestrial intelligence.In 1978, academics and UFO researchers Jaques Vallee and Allen Hynek suggested a department dedicated to researching these phenomena to the UN.Despite the favorable GA/426 decision, nothing has been done.[48] During the Cold War, US President Ronald Reagan, calling for a better understanding of the risk of nuclear war, would utter almost prophetic sentences at the UN General Assembly: "Can we and all nations not live in peace?In our obsession with antagonisms of the moment, we often forget how much unites all the members of humanity.Perhaps we need some outside, universal threat to make us recognize this common bond.I occasionally think how quickly our differences worldwide would vanish if we were facing an alien threat from outside this world.And yet, I ask you, is not an alien force already among us?What could be more alien to the universal aspirations of our peoples than war and the threat of war?..." [49] More recently, the subject of the possibility of extraterrestrial contact was considered at the UN during a press conference, when Mazlan Othman, Director of the UN Office for Outer Space Affairs, stated that the United Nations would have a mechanism to deal with any matters affecting humanity, including extraterrestrial life.However, she also stated that she did not know the protocols or what would be done in the face of something of this magnitude.[50] Dr. Acosta-Navarro, in the LAASU 2019 Manifest, already cited in this article, also states the need for the UN to reset and reassess the importance of the UFO phenomenon: "Local, national and international authorities, such as the United Nations, the global scientific community and the media, must take immediate action in recognizing this reality and its significance to society, although we know that some governments are very likely to have a deep understanding of the reality of the issue". Argument 8: The opinion of 'contactees' and ufologists days after incidents Since George Adamsky, the 1950s 'contactee' considered highly likely to be authentic in the Final Contact Project's assessment, several other authentic 'contactees' have reported that various Extraterrestrial Intelligences they have made contact with claim to be concerned about the danger of self-destruction resulting from a nuclear war. 
In the Final Contact Project, a milestone study in the field, we hypothesized that authentic CE5thK (Close Encounters of the Fifth Kind) could be happening. We designed and applied an original approach, with qualitative and quantitative variables constituting a rule of 12 criteria, to classify 72 cases of alleged CE5thK as high probability, low probability, or inconclusive with respect to authenticity. A high probability was taken as authentic. We found 25 cases with a high probability of being an authentic phenomenon. Thus, we found evidence to support that advanced contact from ETI with human beings is probably happening on Earth at present. To our knowledge, this was the first time such findings were reported. The results ran counter to mainstream science and astronomy, challenging the paradigm of the impossibility that humankind has been contacted by ETI. [51] Additionally, in these authentic CE5thK cases, there was important and far-reaching information with amazing potential for our society in several fields of knowledge. Information received by authentic 'contactees' was related to the origin and motivations of these ETI, and even specific technical information.

In addition, and more recently, the Epimetheus Project would confirm these results and expand the information in various fields of human knowledge. A total of 40% of authentic 'contactees' report that Extraterrestrial Intelligences from different origins have concerns related to the risk of self-destruction of humanity by the misuse of atomic energy. [12] Uri Geller, another 'contactee' with a high probability of being authentic, stressed the importance of, and the extreme caution with which, the US government should treat the phenomenon. "There are reports of UFOs that the Pentagon released not long ago, footage filmed from the cockpit of US fighter jets," Geller said. "These UFOs are flying at such speed that there's no technology available to us that could analyze this phenomenon. The Pentagon doesn't know what they are." "But we, those who believe in extraterrestrial life, believe that UFOs are visiting our planet for a reason. There's a plan here." [52] Sixto Paz, famous 'contactee' and ufologist, [53] considered authentic in the Final Contact Project, stated, as his own interpretation and analysis rather than as information passed on to him, that it was unlikely that extraterrestrial craft could be shot down, because of their advanced technology. However, Sixto accepted that several aspects are interesting, such as the reaction of the US government. He agreed that the interpretation of the phenomenon as a whole leaves much to think about. [54]

3.9. Argument 9: Events following the February incidents reinforcing the hypothesis of extraterrestrial influence

Days after the February events, a paper by the scientists Loeb and Kirkpatrick supported the hypothesis that an extraterrestrial spacecraft is in the solar system, seeding probes during its passage by Earth. The hypothesis was raised from the orbital parameters of Oumuamua and other objects like IM2. According to the scientists, the probes would not have been picked up by telescopes due to low starlight reflection. In addition, the researchers hypothesize that such objects can be fueled by sunlight or water. [55] Unexplained UFO sightings continued in the United States after the incident and may be part of the same UFO wave. On February 14, from north of Philadelphia, a sighting of four objects was reported, accompanied by an aircraft that resembled a military helicopter. [56] On February 28, news reports from Russia reported the
air closure of St. Petersburg after UFOs were detected, with jets sent to intercept them. [57] If there were suspicions of a cover-up, March 28 reinforced this hypothesis, since the Pentagon declared that the recovery of the downed UFOs would be classified information. [58] On the other hand, the world geopolitical situation, with the risk of a nuclear hecatomb and self-destruction, remains due to the war in Ukraine, with Russia withdrawing from the anti-nuclear-missile treaty with the United States. [59] Days before the closing of this article, more news confirmed the gloomy world geopolitical outlook. Recently, on May 19, at the G-7 meeting, after US President Biden announced new military aid of 375 million dollars to Ukraine, Lula criticized the US positioning on the war between Russia and Ukraine: "Biden does not talk about peace". [60] What will the news be tomorrow? How much longer will we have as humanity? Here, we can cite Ronald Reagan's almost prophetic phrases at the UN to illustrate this point. The phenomena that occurred represented something unknown, a common problem for the world, which tends to homogenize or unify us in the questioning or understanding of the phenomenon.

The four UFOs shot down could be low-tech extraterrestrial machines, described in some cases as flying at 32 to 64 km/h and posing no danger, [61] unlike the incidents with aircraft-carrier jets, which involved flight capabilities that, as far as is known, no terrestrial craft could have.

Figure 6 Image representing the central hypotheses of this study

On the other hand, although in all cases the US government triggered the typical protocol of identification, pursuit, and shooting down, the UFOs were not described as identical, being described as cylindrical, octagonal, small as a car, and, in the case of the Montana UFO detected on radar but not found, ghost-like. Another point to be considered is that the phenomenon could lead to different perceptions among the pilots of the war jets. CNN reported that an anonymous source with knowledge of the events said the pilots shared conflicting observations about the object, including whether it had interfered with their systems, and said that they could not explain how it stayed in the air. [62] UFOs of geometric, round, cylindrical, octagonal, and perhaps "hide-and-seek" ghost shapes were observed throughout the period. Could all this represent a child's game for humankind: a form of communication or a charade? Are there any warnings to be interpreted? Can we imagine that Extraterrestrial Intelligences may be treating us as we treat our children with toys, such as the geometric "shape-sorting" game, to improve our intellectual capacity and understanding? (Figure 6). The sequence of facts witnessed worldwide via the mainstream media and networks on February 10-12, 2023, gave a clear picture of the faces of the highest government officials of the most economically, politically, and militarily powerful countries expressing surprise, confusion, and inability to understand or explain what was happening at those times. Is it possible that we are living in the times of Noah again?: "Every living thing on the face of the earth was wiped out; people and animals and the creatures that move along the ground and the birds were wiped from the earth. Only Noah was left, and those with him in the ark."
[63] Classically, there are two main explanatory hypotheses for extraterrestrial contact phenomena, the "Zoo hypothesis" and the "Interdimensional hypothesis". The "Zoo hypothesis" [64] speculates on the assumed behavior and existence of technologically advanced extraterrestrial life and the possible reasons they refrain from contacting Earth. It is one of many theoretical explanations for the Fermi paradox. The hypothesis states that alien life intentionally avoids communication with Earth to allow for natural evolution and sociocultural development and to avoid interplanetary contamination, similar to people observing animals at a zoo. The hypothesis seeks to explain the apparent absence of extraterrestrial life despite its generally accepted plausibility and hence the reasonable expectation of its existence. [65] A variant on the zoo hypothesis suggested by the scientist John Ball is the "laboratory" hypothesis, where humanity is being subjected to experiments, with Earth being a giant laboratory. We could speculate that, if the "Zoo Hypothesis" were correct, the ETI could be testing our reactions and behaviors in the face of the phenomenon.

Another theory is the "Interdimensional Hypothesis" (IDH) proposed by several researchers, such as John Keel [66] and Jacques Vallée. [67] It states that UFOs and related events involve visitations from other "realities" or "dimensions" that coexist separately alongside our own. It is not necessarily an alternative to the extraterrestrial hypothesis, since the two hypotheses are not mutually exclusive; both could be true simultaneously. The IDH also holds that UFOs are a modern manifestation of a phenomenon that has occurred throughout recorded human history, which in prior ages was ascribed to mythological or supernatural creatures. This hypothesis has the advantage of its ability to explain the apparent ability of UFOs to appear and disappear from sight and radar; this is explained as the UFO entering and leaving our dimension ("materializing" and "dematerializing"). [68]

A few years ago, Dr. Acosta-Navarro presented a third hypothesis, the "Homeric Stage Hypothesis", [69] where the relationship of Extraterrestrial Intelligences with human beings, even in modern times, would hold characteristics similar to those described by Homer in his classical "Iliad" and "Odyssey". They would act as gods or semi-gods with many behavioral features of human nature and a strong presence in society. Could what happened in February have been a fast, complex, and total action from the extraterrestrial gods sending a message for humankind to decrypt? Was it a real "Operation Ray of Zeus"? (Figure 7).

Moreover, all these hypotheses are not necessarily mutually exclusive; they could be complementary. Dr. Acosta-Navarro had already stated, in the last affirmative point of the 2019 Manifest, the complex nature of the phenomenon of extraterrestrial contact. Just over three years later, this statement would seem even more accurate and, in a way, could be considered prophetic. 7. The intrinsic nature of these Extraterrestrial Intelligences, as well as the reasons for their presence and interaction with humans, represents a complexity beyond the limit of our understanding and rationality that constitutes the greatest challenge for the next generations.
In the case that the hypothesis proposed here is correct, we should hope that the extraterrestrial intelligences do not feel attacked by the shooting down of unmanned objects or, worse, perceive a hostile and bellicose attitude, even while it is officially recognized that they did not represent a direct threat to the population. If we sent exploratory craft to another planet and they were attacked or shot down, that would have a meaning for us and would prompt a response. We should also consider the apparent ability to neutralize nuclear weapons arsenals reported in various situations, one of the most commented on of which happened at Malmstrom Air Force Base, in Montana, in February 1967. [43]

Conclusion

This study supports the hypothesis that the downed UFOs of February 2023 were a consequence of direct or indirect action, with a global reach, of Extraterrestrial Intelligences, which can be interpreted as a charade or puzzle intended to raise awareness and reflection about the critical moment of risk of self-destruction, highlighting our technological limitations and showing our vulnerability. It remains to be seen whether we will have time and be able to change fate before an imminent nuclear apocalypse.

Figure 3 Last face-to-face meeting of the team of members and collaborators of the Latin American Academy of Scientific Ufology (LAASU), a non-governmental, non-profit organization, São Paulo, Brazil, June 2023.

Figure 4 Document "Manifest LAASU 2019", launched at the international meeting "Encontro Internacional de Exopolitica: o Fenomeno OVNI rumo as Nações Unidas" ("International Exopolitics Meeting: the UFO Phenomenon heading to the United Nations") and registered at a notary in São Paulo, Brazil, October 2019. After that, in June 2021, the Pentagon admitted the authenticity of these encounters and released an unclassified version of a report investigating 144 cases by the US Defense Department's Advanced Aerospace Threat Identification Program (AATIP), an agency created in 2007 but only publicly known in 2017. American intelligence officials still cannot explain the unusual movements of the aerial phenomena witnessed by Navy pilots in recent years. These have mystified scientists and the military, according to senior administration officials briefed on the findings of the highly anticipated government report. The report determined that the vast majority of the more than 144 incidents over the past two decades did not originate from any American military or other advanced U.S. government technology, the officials said. [26] The report concedes that much about the observed phenomena remains difficult to explain, including their acceleration, as well as their ability to change direction and submerge. Some of the incidents were recorded on video, including one taken by a plane's camera in early 2015 that shows an object zooming over the ocean waves, which caused the pilots to question what they were watching.

3.10. Argument 10: Global meaning of the charade or puzzle, whose message tends to homogenize or unify us in questioning or understanding the phenomenon.

Figure 7 Dr. Acosta-Navarro and his wife Dr. Silvia Prado visiting the archeological remains of the Oracle of Delphi (Greece), August 2022.

Table 1 Description of the six objects detected and/or shot down in February 2023 in the airspace of the USA, Canada, and China. Final location based on official US government and media information
2023-09-27T15:19:42.784Z
2023-09-30T00:00:00.000
{ "year": 2023, "sha1": "eac05a4cbde90ffd53e3733e34b9e4e915a4f6f8", "oa_license": "CCBYNCSA", "oa_url": "https://wjarr.com/sites/default/files/WJARR-2023-1895.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "52b7fe41a297ddd2ebb928985011e6f50108526b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
55782847
pes2o/s2orc
v3-fos-license
Quantitation of ERK1/2 inhibitor cellular target occupancies with a reversible slow off-rate probe† †Electronic supplementary information (ESI) available. See DOI: 10.1039/c8sc02754d

Target engagement is a key concept in drug discovery, and its direct measurement can provide a quantitative understanding of drug efficacy and/or toxicity.

Quantitation of ERK1/2 Inhibitor Cellular Target Occupancies with a Reversible Slow Off-Rate Probe. Honorine Lebraud 1*, Olga Surova 1,2

Supplementary Figure 1. LC-MS profile of the IEDDA reaction between SCH-TCO and biotin-PEG4-tetrazine. SCH-TCO (10 mM in MeOH, black) and biotin-PEG4-tetrazine (10 mM in MeOH, pink) were mixed in a 1:1 ratio for 5 min at r.t. The LC-MS profile of the crude mixture is shown in blue (a mixture of diastereoisomers is obtained).

Supplementary Figure 2. LC-MS profile of the IEDDA reaction between TCO-GDC-2 and biotin-PEG4-tetrazine. TCO-GDC-2 (10 mM in MeOH, black) and biotin-PEG4-tetrazine (10 mM in MeOH, pink) were mixed in a 1:1 ratio for 5 min at r.t. The LC-MS profile of the crude mixture is shown in blue (a mixture of diastereoisomers is obtained).

Supplementary Figure 5. Target occupancy using SCH-TCO. HCT116 (B) and A375 (C) lysates were incubated with the indicated inhibitor for 1 h at 4 °C, followed by incubation with SCH-TCO (1 µM) for 30 min at 4 °C. The levels of ERK1/2 pulled down during the protein enrichment experiments were quantified by densitometry from the immunoblottings (A) and plotted (B and C).

Supplementary Figure 6. Target occupancy of LY3214996 and GDC-0994 using TCO-GDC-2. HCT116 lysates were incubated with the indicated inhibitor for 1 h at 4 °C, followed by incubation with TCO-GDC-2 (1 µM) for 30 min at 4 °C. The levels of ERK1/2 pulled down during the protein enrichment experiments were quantified by densitometry from the immunoblottings.

Supplementary Figure 7. Ability of SCH-TCO to pull down ERK1/2 after in-cell treatment. A. HCT116 cells were incubated with SCH-TCO (1 h). Following cell lysis and coupling with tetrazine beads (30 min at 4 °C), proteins bound to SCH-TCO were pulled down and separated on 4-12% NuPAGE gels. Immunoblots for ERK1/2 are shown. Nonspecific binding of proteins to the beads (background noise) is shown as 'cell lysates through filter' in the figure. B. HCT116 cells were incubated with SCH-TCO for 15, 30, 45 or 60 min. Following cell lysis and coupling with tetrazine beads (30 min at 4 °C), proteins bound to SCH-TCO were pulled down and separated on 4-12% NuPAGE gels. Immunoblots for ERK1/2 are shown.

Supplementary Figure 8. Effect of incubation time on target engagement. HCT116 cells were treated with GDC-0994 for 1 h or 1 h 30 min, followed by SCH-TCO (1 µM, 15 min). After cell lysis and pull down, the levels of ERK1/2 were quantified by densitometry from the immunoblotting.

Supplementary Figure 9. Effect of cell lysis on ERK1/2 pull down using SCH-TCO in cells. When lysates are treated with the untagged inhibitor followed by SCH-TCO, the probe binds to the remaining free ERK1/2 (normal conditions). The same phenomenon happens after in-cell treatment. However, in this case, cell lysis must occur after the drug/probe treatments. It is postulated that cell lysis perturbs the binding of the inhibitor to the protein, resulting in more SCH-TCO-protein interactions. The apparent target engagement of the inhibitor is therefore lower than in reality.

Supplementary Figure 10. Potential issue during ERK1/2 pull down with TCO-GDC-2. With a covalent probe, the probe is tightly linked to the protein, preventing dissociation of the whole complex during pull down (normal conditions). TCO-GDC-2 being a non-covalent probe, it is likely that the TCO-GDC-2/protein/bead complex dissociates during pull down. Less protein is pulled down, resulting in higher apparent engagement.
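The densitometry read-outs described above lend themselves to a simple occupancy calculation. As a minimal illustrative sketch (not the authors' actual analysis pipeline; the band intensities, the normalization to a DMSO control, and the Hill fit are assumptions for illustration), competition pull-down data can be converted to percent occupancy and fitted as follows:

import numpy as np
from scipy.optimize import curve_fit

def occupancy(band_inhibitor, band_dmso):
    # The probe only captures ERK1/2 not already occupied by the inhibitor,
    # so occupancy = 1 - (pulled-down signal with inhibitor / signal with DMSO).
    return 100.0 * (1.0 - band_inhibitor / band_dmso)

def hill(conc, ic50, slope):
    # Simple Hill model for occupancy versus inhibitor concentration.
    return 100.0 / (1.0 + (ic50 / conc) ** slope)

# Hypothetical band intensities (arbitrary densitometry units).
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])      # inhibitor, uM
bands = np.array([95.0, 80.0, 55.0, 30.0, 12.0, 5.0])  # ERK1/2 pulled down
occ = occupancy(bands, 100.0)                           # DMSO control = 100
(ic50, slope), _ = curve_fit(hill, conc, occ, p0=[0.1, 1.0])
print(f"apparent occupancy IC50 = {ic50:.3f} uM, Hill slope = {slope:.2f}")

As Supplementary Figures 9 and 10 caution, the apparent occupancy obtained this way can be biased downward (cell-lysis perturbation of inhibitor binding) or upward (dissociation of a non-covalent probe/protein/bead complex during pull down).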
Supplementary Figure 11. CETSA melting curves. CETSA melting curves for ERK1 and ERK2 in HCT116 lysates (A) and cells (C) after treatment with either DMSO or the selected ERK1/2 inhibitors (10 µM, 15 min). Immunoblots showing the levels of ERK1 and ERK2 remaining soluble in HCT116 lysates (B) and cells (D) after heating in the presence of DMSO or ERK1/2 inhibitors.

Synthesis of TCO-tagged probes derived from SCH772984

The TCO-SCH probe was synthesised via a convergent synthetic route, as outlined in Supplementary Scheme 1. Pyrimidine 1 and boronic ester 2 were reacted under Suzuki conditions, followed by deprotection of the piperazine moiety using acid and amide formation with chloroacetyl chloride, which led to the chloro derivative 4. The Boc-protected proline 5 was converted in a one-pot procedure into the methyl ester proline hydrochloride salt 6, which was reacted with the chloro intermediate 4. The product of the nucleophilic substitution was subsequently hydrolysed under basic conditions to give acid 7 as a lithium salt. The nitro derivative 9 was synthesised from the nitro indazole 8, which was first protected with a trityl group and then reacted under Suzuki conditions with the appropriate boronic ester. Reduction of the nitro group under catalytic hydrogenation led to aniline 10, which was reacted with acid 7 under amide coupling conditions. Trityl deprotection using acid, basic hydrolysis of the ester group and a final amide coupling to insert the TCO tag afforded the TCO-SCH probe. Selected reagents and conditions from the scheme: (m) 10% Pd/C, H2, MeOH, 92%; (n) TCO-NHS ester, DIPEA, DMF, r.t., 1 h, 37%.

Synthesis of TCO-tagged probes derived from GDC-0994

Both TCO-tagged probes were synthesised from a common intermediate, the aminopyrimidine 27, obtained from a convergent route as shown in Supplementary Scheme 3. Suzuki coupling between 4-bromopyrimidine 17 and boronic acid 18, followed by hydration of the pyridine ring through displacement of the fluoro group by water in an acidic environment, led to the hydropyridine 19. The diol 22 was obtained from aldehyde 20, which underwent a Wittig reaction followed by a Sharpless asymmetric dihydroxylation using AD-mix β. Protection of the primary alcohol with a silyl group and mesylation of the secondary alcohol led to intermediate 24, which, after nucleophilic substitution by hydropyridine 19, gave pyrimidine 25. After oxidation of the methyl sulfide to the methyl sulfone with m-CPBA and nucleophilic displacement using ammonium hydroxide, the aminopyrimidine 27 was obtained. Buchwald coupling between aminopyrimidine 27 and bromopyridine 28, followed by TBDMS deprotection using acid, basic hydrolysis of the ester group and finally amide coupling with TCO-amine, led to the TCO-tagged probe TCO-GDC-1.

General Notes for Syntheses

Anhydrous solvents were purchased either from VWR or SeccoSolv and were stored under nitrogen. Other solvents were purchased from Fisher Chemicals. Commercially available reagents were used as received. TCO-amine and TCO-NHS ester were purchased from Jena Bioscience. Petrol refers to the fraction with a boiling range between 40 and 60 °C. All reactions were followed by TLC analysis (pre-coated TLC sheets ALUGRAM® SIL G/UV254, Macherey-Nagel) or LC-MS (liquid chromatography mass spectrometry) on an Agilent 1200 HPLC and 6140 MS using a YMC-Triart C18 column (50 x 2.0 mm, 1.9 µm).
¹H NMR spectra were recorded on a Bruker 400 UltraShield™ spectrometer. Chemical shifts are reported in parts per million (δ), referenced to the appropriate deuterated solvent employed and relative to TMS. Multiplicities are indicated by s (singlet), br s (broad singlet), d (doublet), t (triplet), q (quadruplet), m (multiplet). 'Flash' column chromatography was performed on pre-packed silica cartridges (Biotage SNAP cartridges, KP-Sil) on a Biotage Isolera Four. All reactions were carried out under nitrogen. The purity of the final probes was determined by LC-MS and ¹H NMR and was always >95%.

Propionyl chloride (2.0 mL, 23.3 mmol) was added dropwise to an ice-cold solution of (R)-1-N-Boc-beta-proline 5 (1.0 g, 4.7 mmol) in dry MeOH (18 mL). The reaction was allowed to warm up to room temperature and was stirred for 18 hours. The solvent was removed in vacuo to give methyl (3R)-pyrrolidine-3-carboxylate hydrochloride 6 (0.8 g, 4.6 mmol, 99%) as a pale orange oil.

Methyl (3R)-pyrrolidine-3-carboxylate hydrochloride 6 (0.42 g, 2.53 mmol) was added, followed by potassium iodide (0.56 g, 3.38 mmol). The reaction was stirred at room temperature for 2 hours. EtOAc (10 mL) and water (10 mL) were added, and the organic phase was extracted with EtOAc (2 x 10 mL) and then dried over MgSO4. The solvent was removed in vacuo, and the crude product was purified by flash column chromatography with 1:9 MeOH:
2018-12-10T14:51:21.593Z
2018-09-17T00:00:00.000
{ "year": 2018, "sha1": "12251407344120c3431d98f176d5a6ac75f72e31", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2018/sc/c8sc02754d", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3599d93f817b821652b25d76e19f5905f09cfea2", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
270663832
pes2o/s2orc
v3-fos-license
Future Green Energy: A Global Analysis

The main problem confronting the world is human-caused climate change, which is intrinsically linked to the need for energy both now and in the future. Renewable (green) energy has been proposed as a future solution, and many renewable energy technologies have been developed for different purposes. However, progress toward net zero carbon emissions by 2050 and the role of renewable energy in 2050 are not well known. This paper reviews different renewable energy technologies developed by different researchers and their potential and challenges to date, and it derives lessons for world and especially African policymakers. According to recent research results, the mean global capabilities for solar, wind, biogas, geothermal, hydrogen, and ocean power are 325 W, 900 W, 300 W, 434 W, 150 W, and 2.75 MWh, respectively, and their capacities for generating electricity are 1.5 kWh, 1182.5 kWh, 1.7 kWh, 1.5 kWh, 1.55 kWh, and 3.6 MWh, respectively. Securing global energy leads to strong hope for meeting the Sustainable Development Goals (SDGs), such as those for hunger, health, education, and gender equality.

Introduction

The world is experiencing an energy crisis of major depth and complexity, with shortages highlighting the importance of energy [1]. However, this crisis also presents an opportunity to undertake a step change and decarbonize the energy system, making it more resilient, secure, and affordable for all. In Europe, industry and transportation are the two sectors that emit the most CO2 [2]. However, even within these hard-to-abate sectors, technological solutions are available, and the main obstacle to transitioning to renewable energy is the cost. The manufacturing of batteries, solar panels, electric cars, nuclear plants, energy efficiency solutions, heat pumps, and hydrogen energy systems is flourishing around the world [3]. The United States has introduced a major green energy package as part of its Inflation Reduction Act, and Europe is forming a similar response. These countries aim to prime their economies by improving the energy balance through increasing the supply of green energy, which is the only form of energy that becomes cheaper with more users, thereby stealing a march on the rest of the world [4]. Elsewhere, USD 3.4 trillion of investment is needed to fund the infrastructure gap in Africa alone. It is extremely important to ensure that renewable energy projects are funded in an inclusive way (i.e., without excluding any technology or fuel that is important for the future). It would be quite difficult for one player in the industry to decarbonize a complete value chain alone. A platform created by the World Energy Fund (WEF) seeks to unite a group of industry players who collectively decarbonize the value chain and thus spread the risk. However, energy is not only about supply [5]. There is much discussion about fossil fuels versus renewable energy but less about working on the demand side (i.e., lowering demand so that less energy needs to be supplied). In reality, the technology, knowledge, and capital to increase the supply of renewable energy and decrease energy demand already exist. In the latter case, electricity meters with built-in two-way communication between the customer's location and the energy provider (smart meters) can help lower energy prices, improve energy efficiency, and provide near-real-time data on energy consumption, which benefits both the energy supplier and the
consumer. However, there are also drawbacks to this technology. First, because smart meters rely on wireless connectivity, there is a risk of hacking and other security risks regarding the data transmitted between the utility and the smart meter [6], raising privacy and data protection concerns. Second, smart meters need a constant internet connection, with interruptions in the connection having a major effect on their use. Third, utility costs may rise, as smart meters make it possible to bill clients according to peak and off-peak usage, because they can detect energy usage more precisely than standard meters. Some consumers may see a considerable increase in their energy costs as a result, and more households may find it difficult to make ends meet. High initial costs, incompatibility with current electrical systems, malfunctions due to electromagnetic interference, increased electronic waste and environmental impact, inefficiency in unfavorable network conditions, and the inability to reduce energy bills alone are other drawbacks of smart energy meters [7].

From the above, it can be inferred that the utilization of renewable energy resources around the globe varies by country and is limited, but there is insufficient information on the actual situation. The objective of this study was thus to provide an overview of the missions and objectives for renewable energy by 2050 and, primarily, to prompt global (especially African) policymakers to consider the future of renewable energy. The novelty of this study lies in describing renewable energy locations at a global level and making recommendations for the effective exploitation of these in the future. Achieving the SDGs requires collaboration and partnerships at all levels, including governments, businesses, civil society, and individuals. The goals are interconnected, and progress in one area can contribute to progress in others.

The remainder of this paper is structured as follows. A literature review is presented in Section 2, the methods used in this analysis are summarized in Section 3, and the findings are presented and discussed in Section 4. Some conclusions are drawn in Section 5, and some recommendations are made in Section 6.

Literature Review

More than 50% of the world's population lives in urban areas, and that figure could rise to nearly 70% by 2050. Big cities have major requirements for water, food, and energy [8]. This heavy demand on resources poses daunting challenges to researchers in a world grappling with climate change. Future cities and towns will need large amounts of energy [9], and revolutionizing the complex energy supply system is one of the greatest challenges in the global transition to renewable (green) energy. For most people, it is also probably the most tangible challenge [10].

It is important for those responsible for policies in the world's cities to take a leading role, since it is possible for cities to change [11]. There is a general consensus that we urgently need to move to an economy that is simultaneously renewable, circular, and nature-positive, as there is little time left to save the planet [12]. To assess how a sustainable energy supply can work in practice, we reviewed relevant cases from around the world.
In the United States, the city of Lancaster in California is aiming to become the first carbon-neutral city in the country. Lancaster has around 175,000 residents, and in 2009, officials started a journey to "go green", fundamentally transforming the city's economy and infrastructure [13,14]. A technological overhaul was undertaken and, more importantly, there was a shift in mentality whereby the purpose of the city authority became to assist people in going green rather than to delay them. For example, it used to take a minimum of six months for a household to obtain a permit to put solar panels on their roof [15]. Lancaster's mayor, Rex Parris, began by having photovoltaic panels installed on all municipal buildings, with the electricity generated used for public lighting, which saved the city a great amount of money. This money was put toward installing even more photovoltaic panels on the roofs of private residences, while this system became mandatory for new buildings. Bit by bit, Lancaster created an alternative energy network. Excess electricity started being used to generate hydrogen to fuel public transportation, and thanks to the sunny weather in California, green energy and hydrogen production continued to expand. The low-cost electricity and cheap hydrogen attracted new, large companies, solidifying Lancaster's reputation as a green boomtown. When Lancaster began the process of transforming its energy system in 2009, the unemployment rate was at 17%, but by 2023, it had dropped to around 6%. Lancaster is now a self-sufficient green energy powerhouse and highly profitable, and both the city of Lancaster and Mayor Parris have received many awards for their achievements.

In the German state of Bavaria, Wunsiedel, a rural region where the forest industry is key, took a different approach to going green. When Marco Krasser took the helm of the regional energy supplier, he shifted Wunsiedel to a circular system that effectively linked its regionally strong timber industry with the local energy system. The aim was to reuse as much energy as possible multiple times, with excess energy (e.g., in the form of wood waste or waste heat from machines) being harnessed rather than lost. Surplus energy generated from solar and wind power is used to press forestry waste into wood pellets, which are burned to generate heat or power a turbine for electricity in a cascade system comprising solar and wind energy, battery storage, and combined heat and power. Through this system, the construction industry is linked to the timber industry, and the timber industry is linked to agriculture and forestry. This creates local circular energy economies that can be scaled up to all levels, which in turn satisfy energy demand in the form of electricity and heat, as well as electricity for mobility [16].
Wunsiedel in Bavaria and Lancaster in California have both tapped into locally available resources as best they can, and both have created infrastructures in which green energy is used as efficiently as possible in an ongoing cycle. Of course, such systems should ideally be integrated into construction projects from the very start. In Copenhagen, a newly built district called Nordhavn served as the testing ground for the "Energy Lab" project, a living laboratory for research into innovative and more efficient energy cycles. The buildings there are well insulated and retain heat, saving money and reducing demand for energy, which is especially important at peak hours in the early morning [17]. Additionally, commercial businesses in the neighborhood use compressors powered by electricity for cooling down goods. By using slightly more electricity when there is a surplus from wind turbines or photovoltaic cells, these businesses can optimize the operation of the compressors and produce extra energy which can be used to heat nearby buildings. Thus, the waste heat system is actually a smart component in a coupled energy system, which is an ingenious cycle. The energy put into the system is used several times, and the whole neighborhood benefits. The goal of a modern circular economy is to save energy and increase efficiency. The cycles described are optimized to make energy competitive in price while serving as an extension of, or even an alternative to, large, centralized grids [18].

Norway and its capital of Oslo are among the pioneers in the green energy transition. Oslo is aiming to reduce CO2 emissions to zero by 2030, and Mayor Marianne Borgen helped to draft and pass a series of concrete measures to achieve this. Oslo is now considered the world capital of e-mobility [19] and has made significant headway in making its construction sector carbon-neutral through advances in heating and building materials [20]. To achieve the city's ambitious zero emissions goal, residents and businesses are both playing an active part.

In 2019, an office building called the "powerhouse", commissioned by Sonja Horn's company, was inaugurated in the Norwegian city of Trondheim. The roof is covered with 3000 square meters of solar panels, which are angled optimally to capture the sun's rays in northern Europe [21]. Through these, the powerhouse produces an annual average of 500,000 kWh of electricity, which is more than double the amount it consumes [22]. The surplus electricity is sent to a local micro-grid to supply neighboring buildings, electric buses, and cars. This pioneering project is the first of its kind, and the powerhouse is an attractive place for young people to sit and work.
When a new building is planned in Oslo, the focus is on three main aspects. The first of these is to use fewer resources and materials (i.e., the reuse of materials and recycled materials must be considered before sourcing new materials) [23]. Many construction sites in Oslo are now zero-emission sites because the technology is in place, with the city challenging establishments and the industry and thus leading the way [24]. Like Oslo, the rest of Norway is aiming to be carbon-neutral by 2030. The country has a large oil and gas sector but also a wealth of hydropower [25]. Norwegian Minister Espen Barth Eide is confident that the necessary transition to a carbon-neutral economy will bring opportunities rather than risks for domestic industries. Already, the service industry that developed over 50 years of the petroleum sector is quite eager to enter this new area. Since it can run oil or gas platforms in the North Sea in 10 m high waves and extreme conditions, it can create, for example, floating wind power, and since it was good at building fossil-fuel ships with advanced technology, it can be good at building hydrogen- or ammonia-driven ships with advanced technology [26]. This circular energy economy relies as much on technological innovation from major industries as on a stable grid that can provide constant and reliable green power [27]. In northern Europe, this can best be achieved with wind power from offshore parks and with hydrogen power. If the countries bordering the North Sea can help balance one another's demand for green power, then this could become a model worldwide. The longest of the underwater links to date was constructed in 2021 to connect Norway with England's east coast [28]. At some hydroelectric power stations in Norway, water drops hundreds of meters to propel turbines that generate gigawatts of electricity [29]. Hydropower generated at the Kvilldal power station is converted for onward transmission to Blyth in England, where gigawatts of electricity are generated from offshore wind and where a converter station that physically converts direct current to alternating current (or vice versa) is being installed [30]. This converter has interconnectors which allow it to take in green energy from lakes in Norway and offshore wind, enabling a transition to green energy not just in the UK but also in neighboring countries such as Norway, France, and Denmark. Britain has become a leader in Europe in developing offshore wind power in the North Sea and is now an exporter of green power via a super-fast green highway for the transfer of energy to connected countries, enabling security of supply. Blyth was once a prosperous mining town, but it suffered a sharp economic blow from the decline in coal mining [31]. Port of Blyth manager Martin Lawlor hopes that the power link will help return the town to its former glory. The port of Blyth is already a major offshore energy hub for the UK, and this is helping attract further investment, as companies want to be part of the cluster and feed off some of the specialty hydraulics and electrics, vessel operators, and cable factories, which will help to drive further investment all around the estuary.
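The converter stations and long submarine links described above exist because of simple transmission physics (a standard textbook relation, not a figure from this review). For a fixed transmitted power P over a cable of resistance R, the resistive loss at transmission voltage V is

P_loss = I^2 R = (P/V)^2 R,

so doubling the voltage cuts resistive losses by a factor of four. In addition, a direct-current link avoids the capacitive charging currents that limit long alternating-current submarine cables, which is why high-voltage direct current (HVDC), with converter stations at each end, is the technology of choice for links of several hundred kilometers such as the Norway-England connection.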
The world's largest network for reliable generation of energy has been under construction in the North Sea since 2020. For a new energy economy to succeed, it is crucial to build large green power grids that are stable. By becoming partners in a new North Sea grid through direct coast-to-coast lines, the bordering countries are inching closer to the goal of attaining energy security [32]. The North Sea grid will deploy the latest technology to move generated energy back and forth on demand. Large industrial centers will be built at the hubs, including a planned "energy island" about 80 km off the coast of Jutland, as part of an inner network on the high seas that can be expanded over time. This energy island, the first in a series, will power different countries around the North Sea at the same time [33]. According to the latest estimates, it will cost more than EUR 30 billion but should one day provide electricity for up to one million households. This will require large substations where alternating current can be converted to direct current and back again, which is vital for transmitting electricity over long distances. The energy island system started as a piece of technology to help integrate large amounts of bulk power and transmit it over long distances with much better efficiency thanks to much lower losses. The more systems it integrates, the more complex the entire energy system becomes. To integrate the next 20-30% of electric vehicles into the electricity system, it will be necessary to integrate up to 60 GW of offshore wind, which is roughly equivalent to the capacity of 40 nuclear power plants, and thus planning and investment must begin now in order to deploy the grid technology on time. Construction of a new power cable on the east coast of Britain was recently completed. It connects the grids of Britain and Denmark and will supply them with electricity from offshore wind parks in both countries. The new interconnector between the two countries is called the Viking Link and is 765 km long, making it the longest underwater power cable in the world. To meet the ever-growing need for energy in the future, large storage facilities will be required in addition to transmission infrastructure. Hydrogen has immense potential as a storage medium for green electricity [34].
At a Siemens Energy site in Berlin, Germany, a simulator shows the total demand for energy in a complex industrial society. If hydrogen is to become the new optimal energy carrier, then the technology to produce hydrogen holds great strategic significance, although the industrial infrastructure is only being built now. One way to create hydrogen is via electrolysis, a process which uses an electric current to split water into hydrogen and oxygen. The electricity used to carry out this process must come from renewable sources so that the production is sustainable. The advantage of electrolysis is that it can be integrated into existing economic cycles relatively easily. In the future, an industrial site could use electrolysis to secure its electricity supply with hydrogen storage. Hydrogen is available in virtually unlimited quantities and may become the key to future supplies [35]. There are three levers for the overall energy transition: energy efficiency, by reducing energy consumption and finding ways to recycle the energy that is produced; electrification where possible, because this is going to be the cheapest path to decarbonization; and hydrogen and green molecules where electrification is not enough, in order to capture and store energy and reuse it elsewhere or in other processes.

Hydrogen can be further refined with CO2 into new fuels to replace fossil fuels in heavy industry. The hope is that hydrogen can be the basis for a whole range of fuels in the future. Hydrogen can be used as is, but much will be transformed into so-called e-fuels together with captured carbon to replace many of the fuels that we know today. In many poor parts of the world, wood and fossil fuel products such as propane are used for cooking and heating. Converting hydrogen and CO2 into a clean fuel would be a sustainable alternative, and some projects are in the experimental stage but will hopefully become building blocks in an ever-expanding circular energy economy [36].

At sites around the world, researchers are working on green technologies. Singapore in particular is considered a laboratory for the future. The main problem for the new energy economy is that batteries are the most important storage medium but are made from costly materials which are becoming increasingly scarce as global demand grows. Research is needed on how to recycle lithium-ion batteries and other e-waste so that they can be reintegrated into the production cycle. There has to be a mindset shift toward a circular economy to prevent shortages of essential resources. Universities such as Nanyang Technological, Berkeley, and Stanford are among the world's most well-regarded research hubs and are focusing on developing technology that could be rapidly deployed in future industries [37].

There are many synergies between materials research and research on a circular economy of materials, but there is still some silo thinking. In Copenhagen, many practical experiments relevant to both areas are being logged in a database to highlight the most promising results.
It often takes two decades for basic research to reach industrial maturity, but for the climate crisis, time is of the essence, and solutions need to be deployed faster. This is an incredibly complex challenge, and applied research will need to adapt. The main challenge, and the potential solution, both lie in inventing new materials for the green transition. This involves rethinking materials discovery, system development, and the process itself, as well as integrating all parts of the discovery, production, and end-use cycle. This is especially critical because the next innovations are already on the horizon as researchers work in the field of solar energy conversion, turning sunlight into electricity and heat. A relatively new branch of research is working to imitate nature's most fundamental energy-harvesting process, photosynthesis. Nature has the capacity for something almost miraculous in the leaf of every plant: harvesting CO2 from the atmosphere together with water in the presence of sunlight and transforming those chemical reactants into complex sugars and starches that sustain life. These complex sugars and starches are essentially fuels. Researchers at Pasadena, Ilmenau, and the Fraunhofer Institute have drawn inspiration from nature to envision a process called artificial photosynthesis, which uses engineered materials to perform the same kind of reduction-oxidation reactions that enable the formation of fuels directly from sunlight. However, instead of sunlight shining on a leaf, as occurs in nature, researchers use structures made of intricately manufactured semiconductors (artificial leaves) in which solar energy can turn water into hydrogen and oxygen. The efficiency of artificial photosynthesis is currently 19.3%. If this process could be scaled up for industrial use, then hydrogen would become cheaper than any other fuel. Semiconductors, which are small and inconspicuous, are of critical importance for artificial photosynthesis and are the basis of all advanced technologies. They can be made from many different types of materials. Of course, not every single component of new energy systems is ready for action, and rolling out innovations to communities and industries will be a key step. Many scientific breakthroughs and technological innovations have not yet been widely implemented for public use. Researchers have made tremendous strides in recent years, and technology has come a long way, but successfully transforming the energy supply to make it sustainable hinges on the ability to scale these solutions and integrate them into large sectors of society before it is too late.
Working Principle of a Smart Meter

On the demand side, the newest innovation in energy management is the smart meter. It can be used to cut energy waste, boost productivity, and lower costs by monitoring how much energy is consumed in homes and workplaces in real time (Figure 1). Since smart meters make it simple and quick for users to monitor their energy consumption, they are seen as an innovative approach to energy management. Smart meters function by communicating with the energy source in a house or place of business. The meter measures the energy used and sends the data to the energy supplier so that it can bill appropriately for the quantity of energy consumed. Additionally, the meter records data on energy use over time, enabling users to keep an eye on their consumption and, if necessary, take action to cut back. Water, natural gas, and electricity usage may all be measured with smart meters. They are typically linked to a wireless network, enabling real-time data collection and transmission. Customers can more readily identify where energy is being consumed and take action to reduce it by using the meters to monitor energy usage in every section of a home or business. Smart meters offer their users several advantages [29]. They make it possible for consumers to monitor their energy consumption in real time and identify areas where they can cut costs by using less energy. Energy providers can also create comprehensive profiles of their customers' energy usage with smart meters, which helps them better understand demand and deliver more specialized services. The widespread adoption of smart meters in many nations is propelling advances in energy systems with greater efficiency. They are making it possible for energy suppliers to better serve their clients and for consumers to have more control over, and knowledge of, how much energy they use. A minimal sketch of the interval-based billing such meters enable is given below.
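As a concrete illustration of peak/off-peak billing from smart-meter interval readings (the tariff names and rates below are hypothetical, not taken from any utility):

from datetime import datetime, time

# Hypothetical two-rate tariff (USD per kWh); real tariffs vary by utility.
PEAK_RATE, OFFPEAK_RATE = 0.30, 0.12
PEAK_START, PEAK_END = time(7, 0), time(23, 0)

def bill(readings):
    # readings: list of (timestamp, kWh) interval measurements.
    # The rate applied depends on the time of day the interval starts.
    total = 0.0
    for ts, kwh in readings:
        rate = PEAK_RATE if PEAK_START <= ts.time() < PEAK_END else OFFPEAK_RATE
        total += kwh * rate
    return total

# Example: one off-peak and one peak half-hourly reading.
readings = [(datetime(2024, 1, 1, 2, 0), 0.8),    # night, off-peak
            (datetime(2024, 1, 1, 18, 0), 1.5)]   # evening, peak
print(f"total: USD {bill(readings):.2f}")          # 0.8*0.12 + 1.5*0.30 = 0.55

The same interval data is what allows a consumer to see which hours drive their bill and, as noted earlier, why bills can rise for households whose consumption is concentrated in peak hours.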
Methods

A series of procedures (review of available techniques, data analysis, identification of relevant questions, and further analysis) were applied to the secondary data used in this paper (Figure 2). A literature review was used to gather general data. The review results were codified and consolidated using grounded theory. The data were then analyzed by concept group (tidal, hydrogen, or solar energy), taxonomy, and notes and used in a questionnaire-based survey on database security that yielded quantifiable and qualitative information. The survey responses were analyzed by statistical methods, followed by grouping or reducing the statements and producing a list of statements to validate and a question bank. Finally, research gaps were identified (output).

Using various search terms and keywords, relevant articles for the literature review were found in SCOPUS, Dimensions, PubMed, Web of Science (WOS), Crossref, and Google Scholar. After comparing the hits, duplicate papers were removed (Figure 3). A slicer filtering technique was used in Microsoft Excel to process the remaining documents. We started by downloading many articles and saving them as CSV files. Then, we arranged the various pieces according to the year of publication. We also considered journal titles. The information was arranged into cell arrays according to a "published since 2000" criterion. Older publications were excluded. A small sketch of this screening step is given below.
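As an illustration of the screening just described, the deduplication and year filter could be reproduced with a few lines of pandas (a hypothetical sketch; the column names are assumptions, and the authors used Excel slicers rather than code):

import pandas as pd

# Hypothetical export of search hits; columns are assumed for illustration.
hits = pd.read_csv("search_hits.csv")  # columns: title, journal, year, doi

screened = (
    hits.drop_duplicates(subset="doi")     # remove duplicate papers
        .query("year >= 2000")             # "published since 2000" criterion
        .sort_values(["year", "journal"])  # arrange by year, then journal title
)
print(f"{len(hits) - len(screened)} records removed, {len(screened)} kept")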
Tidal Energy

Tidal energy generation is summarized in Figure 4. A tide refers to the rise and fall of the ocean's surface caused by the centrifugal force created by the Earth's rotation and by gravitational attraction, mainly from the moon, whose tide-raising effect is about twice that of the sun. Ocean tides on Earth, the source of tidal energy, are caused by periodic changes in this gravitational pull. Owing to the ocean's high inertia, a bulge in the water level is produced, momentarily raising the sea level. Similar to wind turbines, which use wind energy to power turbines, tidal stream generators employ the kinetic energy of moving water to power turbines. To allay worries about their effects on the surrounding environment, certain tidal generators can be completely submerged or integrated into existing bridges. Tidal barrages exploit the potential energy in the difference in height (or hydraulic head) between high and low tides and use it to create electricity in carefully positioned specialized dams. The transient increase in tidal power is directed into a sizable basin behind the dam, which contains a significant quantity of potential energy when the sea level rises and the tide starts to come in.

An emerging and intriguing technique called dynamic tidal power (DTP) seeks to exploit the interaction between kinetic and potential energy in tidal flows. When tidal phase discrepancies are introduced across a dam, shallow coastal seas with strong oscillating tidal currents parallel to the coast, such as those off the UK, China, and Korea, experience a notable water level discrepancy. Circular retaining walls with integral turbines can be built to harness the potential energy of tides in a novel approach (Figure 5). The artificially produced reservoirs created by these walls bear some resemblance to tidal barrages, with the exception that the former lack an innate ecosystem. Lagoons can also be doubled (or tripled) in format, either with no pumping at all or with pumping that flattens the output of electricity. Figure 6 shows the global tidal energy potential, where the USA has the greatest potential (57 GW), followed by Canada (29 GW), Russia (16 GW), Argentina (6 GW), and Europe (around 2 GW). However, the tidal energy potentials for some coasts of west Africa, China, and Australia are not known [38][39][40].
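The barrage principle above admits a simple order-of-magnitude estimate (a standard textbook relation, not a figure from this review). A basin of surface area A filled over a tidal range h stores potential energy

E = (1/2) ρ g A h^2,

where ρ is the density of seawater (about 1025 kg/m^3) and g is the gravitational acceleration (9.81 m/s^2). Because the yield grows with the square of the tidal range, sites with large ranges dominate the potentials shown in Figure 6. For example, a 10 km^2 basin with a 5 m range stores roughly (1/2)(1025)(9.81)(10^7)(25) ≈ 1.3 x 10^12 J, or about 350 MWh, per tidal cycle.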
Table 1 shows the capability for energy generation per unit mass of different types of renewable energy sources. The mean power capacities of solar, wind, biogas, geothermal, hydrogen, and ocean energy are 325 W, 900 W, 300 W, 434 W, 150 W, and 2.75 MWh, respectively, while their capabilities for generating electricity are 1.5 kWh, 1182.5 kWh, 1.7 kWh, 1.5 kWh, 1.55 kWh, and 3.6 MWh, respectively. In terms of ranking, ocean (offshore) energy is the renewable energy capable of generating by far the most electricity, followed by wind and geothermal energy. Solar and biogas energy generate equivalent amounts of electricity.
Hydrogen Energy

Table 2 shows the different technologies used for the generation of hydrogen energy. Alkaline electrolysis, PEM electrolysis, anion exchange membrane, steam reformation, partial oxidation, and biomass gasification are available in commercial form, with efficiencies of 50-78%, 50-83%, 57-59%, 70-85%, 60-75%, and 35-50%, respectively. Seawater electrolysis is available at the R&D level with an efficiency of 72%, while autothermal reformation and pyrolysis will become available in the near term, with efficiencies of 60-75% and 35-50%, respectively. Solid oxide electrolysis cells will become available in the medium term, with an efficiency of 89%. Photolysis, dark fermentation, photo fermentation, and microbial electrolysis cells are likely to emerge in the long term, with efficiencies of 10-11%, 60-80%, 10%, and 78%, respectively.

Table 3 shows the capital costs of hydrogen and battery storage. The battery charging cost is currently USD 196/kW, the discharging cost is USD 60/kW, and the storage cost is USD 218/kW. The underground hydrogen charging, discharging, and storage costs are USD 942/kW, USD 574/kW, and USD 0.08/kW, respectively, while the aboveground hydrogen charging, discharging, and storage costs are USD 942/kW, USD 574/kW, and USD 0.08/kW, respectively. Green hydrogen has the potential to be extremely important in the transition to a more sustainable and clean energy future when costs decrease, technology advances, and supportive policies are implemented. A possible method for creating hydrogen from renewable energy sources is shown in Figure 7. Biogas, an environmentally benign renewable energy source, is created upon the decomposition of organic matter such as food scraps and animal manure. Organic materials or agricultural waste can be gasified in a regulated setting to extract a mixture of hydrogen and methane (biogas) that can be used to power vehicles, heat homes, and produce electricity. Electrolysis is a process that turns water into hydrogen and oxygen using energy generated by renewable sources, such as solar technology (photovoltaics (PVs) or concentrated solar power (CSP)), wind, or hydropower. The hydrogen produced can be stored or used for a variety of applications, including industrial processes and fuel cells. Green hydrogen is a clean energy source that requires cooperation from businesses, governments, communities, and academic organizations. Its generation presents an opportunity to boost sustainable growth, diversify energy sources, and lower CO2 emissions (Table 4). Sub-Saharan Africa, the Middle East and North Africa, North America, Oceania (Australia), South America, the rest of Asia, Northeast Asia, Europe, and Southeast Asia have estimated energy capacities of around 2715, 2023, 1314, 1272, 1114, 684, 212, 88, and 68 exajoules (EJs), contributing 28.6, 21.3, 13.3, 13.4, 11.7, 7.2, 2.23, 0.92, and 0.67% of the total, respectively.
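As a back-of-the-envelope complement to the electrolysis route described above, the sketch below estimates how much hydrogen a given electricity input yields at a given electrolyser efficiency. It is a minimal Python illustration, not part of the review: the 39.4 kWh/kg figure is the approximate higher heating value of hydrogen, efficiency definitions vary between HHV and LHV bases, and the example inputs are hypothetical.

```python
# Estimate hydrogen output from an electrolyser, defining efficiency on the
# higher-heating-value (HHV) basis: useful chemical energy out / electricity in.

H2_HHV_KWH_PER_KG = 39.4  # approximate higher heating value of hydrogen

def hydrogen_yield_kg(electricity_kwh: float, efficiency: float) -> float:
    """Mass of hydrogen (kg) produced from a given electricity input."""
    if not 0 < efficiency <= 1:
        raise ValueError("efficiency must be in (0, 1]")
    return electricity_kwh * efficiency / H2_HHV_KWH_PER_KG

if __name__ == "__main__":
    # Hypothetical example: 1 MWh of wind electricity into a 70%-efficient
    # alkaline electrolyser (mid-range of the 50-78% band quoted above).
    print(f"{hydrogen_yield_kg(1000.0, 0.70):.1f} kg H2 per MWh")
```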
Solar Energy

Table 5 shows the global solar energy potential by continent without an EROI threshold. Africa has the highest amount of solar energy, with a total of 444 J/year (40% of the global total), PVs of 444 J/year (37% of the total), and CSP of 112 J/year (38% of the total). Asia has the second-highest potential, with a total of 315 J/year (29% of the total), PVs of 361 J/year (30% of the total), and CSP of 72 J/year (25% of the total). Oceania is third, with a total of 125 J/year (11% of the total), PVs of 129 J/year (11% of the total), and CSP of 55 J/year (19% of the total), and South America is fourth, with a total of 114 J/year (10% of the total), PVs of 120 J/year (10% of the total), and CSP of 23 J/year (8% of the total). North America and Europe have the lowest solar energy potentials (Table 5).

Table 5. Global solar potential (total, photovoltaics (PVs), and concentrated solar power (CSP)) split by continent without an EROI threshold. All values in J/year [58,59].

When setting the EROI value at ≥9 and splitting the global potential energy per year by continent (Table 6), Africa again is in first place, with a total (PV only) of 124 J/year (67% of the global total), followed by Asia (35 J/year, 19% of the total), South America (14 J/year, 8% of the total), and Oceania (11 J/year, 6% of the total). North America and Europe made no contribution.
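EROI (energy return on investment) is the ratio of the energy a plant delivers over its lifetime to the energy invested in building and operating it; the ≥9 threshold above simply filters out resources whose ratio falls below that value. The following is a minimal Python sketch of that screening step; the per-continent input numbers are invented for the example and are not figures from the cited studies.

```python
# Energy return on investment (EROI) = lifetime energy delivered / energy invested.
# Resources below a chosen threshold (here >= 9) are excluded from the potential.

def eroi(lifetime_output_ej: float, invested_energy_ej: float) -> float:
    return lifetime_output_ej / invested_energy_ej

# Hypothetical PV figures per continent: (lifetime output, energy invested), in EJ.
candidates = {
    "Africa":        (1240.0, 100.0),
    "Asia":          (350.0, 35.0),
    "South America": (140.0, 15.0),
    "North America": (90.0, 15.0),
}

THRESHOLD = 9.0
viable = {name: vals for name, vals in candidates.items() if eroi(*vals) >= THRESHOLD}
print(sorted(viable))  # continents passing the EROI >= 9 screen
```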
Conclusions

This paper assessed renewable energy potential from a global perspective, based on a review of the literature. By 2050, the global population is projected to reach 9 billion, and the demand for energy will have increased accordingly. However, the goal for 2050 is to achieve net zero carbon emissions, and this can only be achieved if all countries contribute as much as possible. To this end, developed countries are focusing on green energy policies, whereas underdeveloped countries, particularly in Africa, are currently not involved in such efforts. Future energy sources, such as offshore energy, solar energy, and wind and geothermal power, are hot research areas at present.

Offshore energy can be obtained in three main forms: ocean thermal energy, wave energy, and tidal energy. The USA has the highest tidal energy potential (57 GW), followed by Canada (29 GW) and Russia (16 GW). Europe has the lowest known tidal energy potential (2 GW). The mean power capacities of solar, wind, biogas, geothermal, hydrogen, and ocean energy are 325 W, 900 W, 300 W, 434 W, 150 W, and 2.75 MWh, and their capabilities for generating electricity are 1.5 kWh, 1182.5 kWh, 1.7 kWh, 1.5 kWh, 1.55 kWh, and 3.6 MWh, respectively. Thus, the ocean (offshore) is the renewable source that can produce by far the most electricity.

The various technologies for producing hydrogen energy include steam reformation, partial oxidation, alkaline electrolysis, PEM electrolysis, anion exchange membrane electrolysis, plasma electrolysis, and biomass gasification, with efficiencies ranging from about 35% to 85%. All of these processes are available in commercial form. Autothermal reformation and pyrolysis will become available in the near future (efficiencies of 60-75% and 35-50%, respectively), while seawater electrolysis is only available at the R&D level with 72% efficiency.

The global solar energy potential split by continent without an EROI threshold is highest for Africa (40% of the global total), followed by Asia (29%), Oceania (11%), and then South America (10%).

Thus, the availability of renewable energy differs between continents, as does its utilization. Real-time monitoring of energy consumption is now possible by using intelligent electrical gadgets and sensors in smart energy meters. In order to adopt more effective energy-saving techniques, it is important to reveal consumption patterns through real-time monitoring of energy usage. Smart energy meters enable electrical utility companies to remotely gather meter readings without having to visit a customer's location, saving money, time, and effort. They provide accurate real-time data on energy consumption, and time-of-use pricing is applied, which improves billing accuracy. They also enable real-time electrical supply system monitoring and can identify service problems more quickly, giving a more dependable electrical supply. Customers can use information on their patterns of energy consumption to make decisions that maximize energy efficiency and reduce energy costs. This lowers energy use, which in turn reduces greenhouse gas emissions and the carbon footprint. Integrating renewable energy sources such as solar, wind, and geothermal power with the conventional grid is made easier through the use of smart energy meters, reducing the need for energy generation using conventional techniques. However, smart energy meters have some drawbacks, including poorer customer adoption, greater installation costs, and cybersecurity vulnerabilities.
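To make the billing point concrete, the sketch below computes a bill from smart-meter interval readings under a simple time-of-use tariff. It is a minimal Python illustration; the tariff bands and prices are invented for the example and do not come from the review.

```python
# Toy time-of-use billing from smart-meter interval readings.
# Each reading is (hour_of_day, kWh consumed in that hour).

PEAK_HOURS = range(17, 21)              # 17:00-20:59, hypothetical peak window
PEAK_PRICE, OFFPEAK_PRICE = 0.30, 0.12  # USD per kWh, invented figures

def bill(readings: list[tuple[int, float]]) -> float:
    total = 0.0
    for hour, kwh in readings:
        price = PEAK_PRICE if hour in PEAK_HOURS else OFFPEAK_PRICE
        total += kwh * price
    return round(total, 2)

if __name__ == "__main__":
    # One hypothetical day: 1.2 kWh per peak hour, 0.4 kWh otherwise.
    day = [(h, 1.2 if h in PEAK_HOURS else 0.4) for h in range(24)]
    print(f"Daily cost: USD {bill(day)}")
```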
Recommendations

In efforts to achieve net zero carbon emissions by 2050, we make the following recommendations, especially for African leaders, since that continent has not yet set any green energy targets:

➢ Securing the energy sector has been pivotal to achieving the SDGs. Therefore, all nations should pay attention to energy security first.
➢ Resources are limited, and thus global attention must focus on renewable energy. Tidal (offshore) energy is the most suitable renewable energy source.
➢ Africa has many renewable energy sources, but tidal energy is available only off the west coast of Africa. ECOWAS countries must collaborate on the utilization of the available tidal energy for the prosperity of all.
➢ African countries should focus on renewable energy sources such as hydrogen production and wind, geothermal, biogas, and solar power. To contribute to net zero carbon emissions and overcome poverty, all leaders of African countries must turn to renewable energy and take action immediately.
➢ Achieving net zero carbon emissions by 2050 and green energy generation will be difficult for individual African countries, and thus we recommend that groups of countries unite within regions to invest in renewable energy.
➢ Developed countries cannot achieve net zero carbon emissions by 2050 on their own. This must be a common goal for all countries worldwide.
➢ The East Africa region has a massive amount of resources, and continued growth in hydro, coal, oil, gas, bioenergy, and solar PV power is predicted up until 2040. However, the region faces challenges in providing affordable and reliable electricity for its population. For instance, high electricity prices in Kenya limit household use and discourage energy-intensive industries, while in Ethiopia and other countries, almost 85% of the population does not have access to electricity. Leaders in East African countries should concentrate on renewable energy to improve the prosperity of the region.
➢ According to the Ministry of Water, Irrigation, and Energy (MoWIE), Ethiopia had a generation capacity of 2310 MW in 2015, and there were plans to expand the nation's power output capacity to 15,000 MW by 2020. However, the population is increasing, as is energy demand. To fulfill the energy demands of the growing population, the Ethiopian government should focus on hydrogen production from water and on biogas, wind, solar, and geothermal sources.
➢ Geothermal energy (heat) from the inner core of the Earth is one of the most sustainable forms of energy. The technology works by pushing hot water from reservoirs in volcanoes and geysers toward the surface, where it turns into steam due to the reduced pressure. Active volcanoes are proving important in the global race to transition to renewable energy, with regions containing these natural wonders working to harness their heat. Erta Ale, a continuously active basaltic shield volcano in the Afar Region of northeastern Ethiopia, which is part of the wider Afar Triangle, has not been used for energy generation in the country to date, but it could solve domestic energy problems and reduce the carbon footprint for the country and the continent.

Figure 2. Mind map showing components included in the analysis in this study.
Figure 7. A possible route for generating hydrogen using renewable energy.
Table 2. Comparison of available technologies for hydrogen generation.
2024-06-22T15:17:23.741Z
2024-06-20T00:00:00.000
{ "year": 2024, "sha1": "4f7b09da1f4909fed11f1eb9ba792fe1ee8b8218", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/17/12/3039/pdf?version=1718939558", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "f9dcc43b19e870be496a8d0146cb02c9ecf940c5", "s2fieldsofstudy": [ "Environmental Science", "Economics" ], "extfieldsofstudy": [] }
246828397
pes2o/s2orc
v3-fos-license
Development of a multi-component risk assessment process for face to face consultations in an outpatient setting

Keywords: COVID-19; Risk assessment; Face-to-face

Purpose: The aim of developing a standardised risk assessment process was to aid physiotherapists in balancing clinical need and risk mitigation when planning care, and to use that information to facilitate informed shared decisions with people using the service when considering virtual or face to face consultations.

Methods: A multi-component risk assessment process was developed using the Chartered Society of Physiotherapy's 7 factors to consider when planning face to face consultations and NHS England guidance as the core foundation. Specific components included COVID-19 screening, clinical vulnerability, age, gender, ethnicity, smoking and alcohol intake, clinical need, red flags, consideration of virtual consultation, consideration of patient carers and vulnerable household members, risk mitigation, shared decision making and informed consent. The document was shared with a multi-professional team and members of the equality, diversity and inclusion team to ensure the process was inclusive and sensitive to our diverse staffing group.

Results: The risk assessment framework was formally ratified by the Trust clinical governance group, adopted as part of the combined therapy services electronic note system, and became a component of the electronic notes audit to ensure appropriate use. The process was also used by multi-profession services outside of therapies.

Conclusion(s): The impact of the COVID-19 pandemic has meant services have evolved new ways of working to ensure the safe delivery of quality patient care, with shared decision making integral to the process. The development of the standardised risk assessment form supported clinicians and patients in making fully informed shared decisions about their care plan.

Impact: The impact of this project was that patients who had a clinical need and/or preference for face to face treatment were able to make informed decisions about their care, balancing clinical need and risk in a transparent and inclusive way.

Funding acknowledgements: Not funded.
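The components listed above lend themselves to a simple structured record in an electronic notes system. The sketch below is a hypothetical Python illustration of how such a multi-component assessment might be captured for audit; the field names and the decision helper are assumptions for illustration, not the Trust's actual template or decision rule.

```python
# Hypothetical structured record for a face-to-face vs virtual consultation
# risk assessment, mirroring the components described in the abstract.
from dataclasses import dataclass, field

@dataclass
class RiskAssessment:
    covid_screening_negative: bool
    clinically_vulnerable: bool
    household_member_vulnerable: bool
    red_flags_present: bool
    virtual_consultation_suitable: bool
    mitigations: list[str] = field(default_factory=list)  # e.g. PPE, spaced appointments
    shared_decision: str = ""            # outcome agreed with the patient
    informed_consent_recorded: bool = False

    def face_to_face_indicated(self) -> bool:
        """Flag when a face-to-face appointment is worth discussing: virtual care
        cannot meet the clinical need or red flags require in-person review,
        provided COVID-19 screening is negative."""
        clinical_need = self.red_flags_present or not self.virtual_consultation_suitable
        return clinical_need and self.covid_screening_negative
```

In practice such a record would sit alongside free-text clinical reasoning; the boolean helper only flags cases for discussion and does not replace the shared decision with the patient.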
Education England (HEE) roadmap for FCP Accreditation of Advanced Practice

Conclusion(s): Through cross-system partnership and collaboration it has been possible to implement a new workforce model across an Integrated Care System with a population of 2.6m. Pump-priming funding to pay for Level 7 modules at a local HEI was instrumental in establishing this programme; however, sustainable funding is now available through the apprenticeship scheme to ensure continued workforce development. A single model for implementation across the different places in West Yorkshire and Harrogate would have been unlikely to succeed, as the models of MSK service provision varied significantly between places; however, a set of shared principles for integration, competencies and minimum standards supports parity in service quality and standards across the ICS.

Impact: The successful approach of whole-system collaboration for rapid (15 month) development and implementation will be considered for other first contact roles in physiotherapy and other professions, particularly in respiratory care and frailty.

Purpose: Distal radius fractures are the most commonly reduced fracture in the emergency department (ED) (Malik, Appelboam & Taylor 2020). They account for nearly one sixth of all fractures presenting to the ED (Goldie, 2002). The quality of any reduction can influence definitive management, and any reduction should aim for as close to anatomical position as possible (Lichtman et al., 2010). There is no literature on physiotherapists performing such a task. The aim was to show that advanced practice physiotherapists, with appropriate training, can safely and effectively perform the reduction of displaced distal radius fractures.

Methods: A case series review of 10 reductions. Review of pre- and post-reduction x-rays was performed by two different orthopedic surgeons and an emergency department
2022-02-16T14:10:37.583Z
2022-02-01T00:00:00.000
{ "year": 2022, "sha1": "34da37c47bd2132a49ef422bda2b439435a5e04f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "34da37c47bd2132a49ef422bda2b439435a5e04f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119746502
pes2o/s2orc
v3-fos-license
Surgery obstructions and Heegaard Floer homology Using Taubes' periodic ends theorem, Auckly gave examples of toroidal and hyperbolic irreducible integer homology spheres which are not surgery on a knot in the three-sphere. We give an obstruction to a homology sphere being surgery on a knot coming from Heegaard Floer homology. This is used to construct infinitely many small Seifert fibered examples. Main results. Theorem 1.1. For p an even integer at least 8, let Y p denote the Seifert fibered integer homology sphere Σ(p, 2p − 1, 2p + 1). The manifolds Y p satisfy: (i) Y p is not surgery on a knot in S 3 , (ii) π 1 (Y p ) is a weight one group, (iii) Y p is surgery on a two-component link in S 3 , (iv) no Y p is smoothly rationally homology cobordant to Auckly's example nor to each other (regardless of orientation). Theorem 1.1 is essentially proved in two steps. The first step consists of finding an obstruction in Heegaard Floer homology to a homology sphere being surgery on a knot. The second step consists of an analysis (but not complete computation) of the Heegaard Floer homology of the manifolds Y p . Before stating these results, we recall from [OS04,OS03a] that for a homology sphere, its Heegaard Floer homology, HF + (Y ), is a Z-graded F[U ]-module, where F = Z/2 and U lowers degree by 2. Further, HF + (Y ) admits a non-canonical decomposition graded such that deg(1) = d(Y ) and HF red (Y ) is a finite sum of cyclic modules. The (even) integer d(Y ), called the d-invariant or correction term, is in fact an invariant of smooth rational homology cobordism. The main obstruction we will present for a homology sphere being surgery on a knot is the following. Theorem 1.2. Let Y be an oriented integer homology sphere such that Y = S 3 1/n (K), for some integer n and some knot K ⊂ S 3 . If d(Y ) ≤ −8, then U · HF red 0 (Y ) = 0. Remark 1.4. It is known that d(S 3 ) = 0. Since Auckly's surgery obstruction required the manifold to be homology cobordant to S 3 , any manifold one could obstruct from being surgery by Theorem 1.2 could not be used for Auckly's argument and vice versa. Remark 1.5. It is straightforward to generalize Theorem 1.2 to obtain further restrictions of this form on the Heegaard Floer homology of manifolds with highly negative correction terms obtained by surgery on a knot in S 3 . Using such a variant, one can also show that the toroidal Seifert fibered homology sphere Σ(2, 5, 19, 21) is not obtained by surgery on a knot. In light of Theorem 1.2, we are interested in analyzing both the d-invariants of Y p and the U -action on HF red (Y p ). Proof of Theorem 1.1. (i): Notice that the property of a manifold being surgery on a knot in S 3 is independent of orientation. Therefore, we work with Y p oriented as in Theorem 1.6. It is clear that for p ≥ 8, Theorems 1.2 and 1.6 now show that Y p is not surgery on a knot in S 3 . (iii): Since Y p is a Seifert fibered space with 3 singular fibers, the result follows from [HW13,Proposition 8.2]. Alternatively, one can directly verify that Y p is in fact obtained by surgery on the (2, 2p) torus link with surgery coefficients −(p + 1) and −(p − 1) (see Figure 1). (ii): This part of the proof was shown to us by Cameron Gordon. We will show more generally that the Brieskorn sphere Σ(p, q, r) has weight one fundamental group. Suppose that Z = Σ(p, q, r) has normalized Seifert invariants e 0 , p ′ p , q ′ q , r ′ r (see, for instance, [Sav02]). 
Then, we have π 1 (Z) = x, y, z, h | h is central, x p = h p ′ , y q = h q ′ , z r = h r ′ , xyzh e 0 = 1 . We claim that π 1 (Z) is normally generated by h e 0 xy. We will show π 1 (Z)/ h e 0 xy is trivial. In this quotient, z = 1, so we have Therefore, we can rewrite this as In particular, π 1 (Z)/ h e 0 xy is abelian. However, since Σ(p, q, r) is an integer homology sphere, π 1 (Z) is a perfect group, and thus so is π 1 (Z)/ h e 0 xy . Therefore, π 1 (Z)/ h e 0 xy is a perfect abelian group, and thus trivial. This completes the proof. We are also able to say something for arbitrary homology spheres. Recall that any reducible homology sphere is not surgery on a knot in S 3 . The argument of Gordon and Luecke which is used to prove this result uses that the ambient manifold is S 3 . For any homology sphere Y , we are able to construct infinitely many reducible manifolds which cannot be surgery on a knot in Y . Theorem 1.7. Let Y be an integer homology sphere and let # k Σ(2, 3, 5) denote the connected sum of k Poincaré homology spheres with the same orientation. For k ≫ 0, the manifold # k Σ(2, 3, 5) is not surgery on a knot in Y , regardless of orientation on Y . Remark 1.8. The reducibility of # k Σ(2, 3, 5) is not important for Theorem 1.7. What is necessary is a family of integer homology spheres with unbounded d-invariants which are L-spaces (i.e., HF red = 0). The only known irreducible homology sphere L-spaces are S 3 and the Poincaré homology sphere. Organization: Theorem 1.2 is proved in Section 2 by utilizing the mapping cone formula for rational surgeries given in [OS11]. In Section 3, we study the plumbing diagrams of the manifolds Y p and prove Theorem 1.6(i) using the algorithm of Ozsváth-Szabó [OS03b]. In Section 4, we review the algorithm given in [Ném05,CK12] to compute the Heegaard Floer homology of Seifert homology spheres. In Section 5, we analyze HF + (Y p ) and prove Theorem 1.6(ii). Finally, in Section 6, we prove Theorem 1.7. Acknowledgments We would like to thank Matt Hedden for pointing out that a surgery obstruction could come from comparing the reduced Floer homology with the correction terms. We would also like to thank Cameron Gordon for supplying the proof of Theorem 1.1(ii). Mapping cones The goal of this section is to prove Theorem 1.2. Let Y be an integer homology sphere with d(Y ) ≤ −8. Recall that we would like to see that if Y = S 3 1/n (K), then U · HF red 0 (Y ) = 0. We first restrict the possible values of n. Lemma 2.1. If Y is an integer homology sphere such that d(Y ) < 0, then Y is not 1/n-surgery on a knot for any n < 0. For the rest of this section, we only consider the case of 1/n-surgery on a knot K for n > 0. The main tool is the rational surgery formula of Ozsváth-Szabó [OS11]. We refer the reader to [NW10] for a concise summary. We very briefly recall the main ingredients for notation without much explanation. As usual, let . For each s ∈ Z, Ozsváth and Szabó associate to K a relatively-graded F[U ]-module A s , which is isomorphic to the Heegaard Floer homology of a large positive surgery on K in a certain Spin c structure. Further, associated to each s, there are two graded, module maps v s , h s : A s → T + , which represent maps coming from certain Spin c cobordisms. Each A s admits a splitting When it will not cause confusion, for n ≥ 0, we may write U −n to mean the corresponding element of T + ⊂ A s . 
Although A s is not a module over F[U, U −1 ], we will further abuse notation and for an element a ∈ T + ⊂ A s , we write U −k a to mean the unique element in T + ⊂ A s such that U k · U −k a = a. For each s, we have that v s | T + (x) = U Vs x for some non-negative integer V s . Similarly, for some non-negative integer H s . Note that each of these maps is surjective. We will need the following important properties of these integers (see [Ras03,Section 7 From this information, we can compute the Heegaard Floer homology of S 3 p/q (K) for any rational p/q ∈ Q. We will restrict our attention to the case of S 3 1/n (K), for n > 0. For each s, consider n copies of A s , denoted A s,1 , . . . , A s,n . Further, for each s ∈ Z and 1 ≤ i ≤ n, define B s,i = T + . For an element x in A s,i or B s,i , we may write this element as (x, s, i) to keep better track of the indexing. We will also write k (mod n) to refer to the specific representative between 1 and n. Define the map Φ 1/n : We define an absolute grading on the mapping cone of Φ 1/n (where the A s,i and B s,i are given trivial differential) by requiring that the element 1 ∈ B 0,1 has grading −1 and that Φ 1/n lowers grading by 1. We remark that the indexing we are using is expressed differently than in [OS11]. Remark 2.3. Theorem 2.2 is not quite stated as in [OS11]. Their theorem instead establishes an isomorphism between Heegaard Floer homology and the cone of a chain map whose induced map on homology is Φ 1/n . In general, for a nullhomologous knot in an arbitrary three-manifold, one cannot compute Heegaard Floer homology of surgeries by looking at the cone of the induced map on homology. However, for knots in S 3 (or any L-space), one may compute the homology of the cone of Φ 1/n to obtain the desired result. With this, we are nearly ready to give the proof of Theorem 1.2. First, we make an observation about HF red . Recall that HF red (Y ) is defined to be HF + (Y )/Im(U N ) for N ≫ 0. Note that if a ∈ HF + k−2 (Y ) is of the form U b for some b ∈ HF + k (Y ) and a is not in Im(U N ) for N ≫ 0, then U · HF red k (Y ) = 0. From this and Theorem 2.2, it is straightforward to see that the following proposition implies Theorem 1.2. Proposition 2.4. Let K ⊂ S 3 and let Y = S 3 1/n (K) for some positive integer n. If d(Y ) ≤ −8, then there exist cycles x and y in the cone of Φ 1/n such that (i) x, y are non-zero in homology, (ii) y = U x, (iii) gr(x) = 0, (iv) for N ≫ 0, the element y is not homologous to U N z for any cycle z. We first consider the case when n = 1. In this case, we remove the index i used in the A s,i and B s,i . Let x = U 1−V 2 in A 2 and let y = U x. Note that x and y are both non-zero in A 2 since V 2 ≥ 2. We have that v 2 (x) = 0, since v 2 restricted to T + ⊂ A 2 is multiplication by U V 2 . We also have h 2 (x) = 0, since h 2 restricted to T + ⊂ A 2 is multiplication by U H 2 and H 2 = V 2 + 2 by (2.3). Hence, x is a cycle in X. Since Φ 1/n is an F[U ]-module map, y = U x must be a cycle in X as well. We now show that x and y satisfy the conditions of the proposition. (i) Since the image of the differential on X is contained in B and x is a non-trivial element in A 2 , the cycle x is non-zero in the homology of X. Similarly, y is non-zero in the homology of X. (ii) By the definition of y, we have that y = U x. (iii) Let z s denote the lowest graded non-zero element of T + ⊂ A s . Note that v s (U −Vs z s ) = h s−1 (U −H s−1 z s−1 ) and this image is the lowest graded non-zero element in B s . 
We claim that gr(z 0 ) = −2V 0 = d. This follows since v 0 (U −V 0 z 0 ) is the lowest graded non-zero element in B 0 , v 0 lowers grading by one, and the grading of the lowest graded non-zero element in B 0 is −1. We have that gr(z s ) = gr(z s−1 ) , and v s and h s−1 both lower grading by one. Then where the penultimate equality follows from (2.3) and (2.4). Since z 2 = U V 2 −1 x and U lowers grading by two, it follows that gr(x) = 0, as desired. (iv) We would like to show that for large N , y is not homologous to U N w for any cycle w in In particular, U −N y is not in the kernel of the differential on X. Note that Moreover, if we choose N greater than max s=1,2 min{n | U n · A red s = 0} , then we claim any other element of X whose boundary could cancel with U 2−N ∈ B 2 has projection onto A 1 given by U 2−H 1 −N z 1 ∈ A 1 . Indeed, by our choice of N , there are no other non-zero elements of A 1 or A 2 in this grading; further, for an element not contained in A 1 or A 2 , its boundary cannot be contained in B 2 . Observe that such an N exists since A red s is finite-dimensional as an F-vector space. Therefore, if a cycle in X has projection onto A 2 given by U −N y, then it has projection onto A 1 given by U 2−H 1 −N z 1 . Now, suppose that w is a cycle in X such that U N w is homologous to y. Then w has projection onto A 2 given by U −N y. Thus, the projection of w onto A 1 must be U 2−H 1 −N z 1 . Observe that 2 − H 1 < 0, since H 1 = V 1 + 1 and V 1 ≥ 3 by (2.2), (2.3), and (2.4). Thus, we have that U N · U 2−H 1 −N z 1 = 0. This implies that U N w has non-trivial projection to A 1 . Since the image of the differential on X is contained in B and y ∈ A 2 , the cycle y cannot be homologous to an element with non-trivial projection to A 1 . Hence y is not homologous to U N w. This completes the proof of the proposition when n = 1. The proof when n > 1 is similar. Let x = U 1−V 1 ∈ T + ⊂ A 1,2 and let y = U x ∈ A 1,2 . As above, it is straightforward to show that x and y are both non-zero in the homology of X. Thus, (i) and (ii) hold. We proceed to show that x and y satisfy (iii) and (iv); the arguments are similar to the n = 1 case above. In particular, U −N y is not in the kernel of the differential on X. Moreover, if we choose N greater than then any other element whose boundary could cancel with U 2−N ∈ B 1,2 has projection onto A 1,1 given by Suppose that w is a cycle in X such that U N w is homologous to y. Then the projection of w onto A 1,1 must be U 2−H 1 −N z 1,1 . As discussed above, H 1 = V 1 + 1 and V 1 ≥ 3, and so 2 − H 1 < 0. Thus, U N · U 2−H 1 −N z 1,1 = 0 and therefore, U N w has non-trivial projection onto A 1,1 . In particular, y is not homologous to U N w, since the image of the differential on X is contained in B. This completes the proof of the proposition. Plumbings Recall that Y p = Σ(p, 2p − 1, 2p + 1), where we have oriented Y p such that it bounds a positivedefinite plumbing. In this section we determine explicitly the negative-definite plumbing whose boundary is −Y p . We will use this plumbing to compute the correction term of Y p and hence prove Theorem 1.6 (ii). Proof. Since Y p bounds a positive-definite plumbing, clearly −Y p bounds a negative definite plumbing. Furthermore, since Y p has three singular fibers, this plumbing graph will have three arms. We follow the recipe given in [Sav02, Example 1.11] to find this plumbing. We look for the unique integers e 0 , p ′ , q ′ , r ′ solving with p > p ′ ≥ 1, 2p − 1 > q ′ ≥ 1, and 2p + 1 > r ′ ≥ 1. 
It can be checked that the solution is given by e 0 = −2, p ′ = 1, q ′ = 2p − 2, and r ′ = 2p. The number e 0 is the weight of the central vertex. Given integers m > n > 1, there exists a unique sequence of integers a 1 , . . . , a k with a i > 1 for all i = 1, . . . , k, satisfying m/n = a 1 − 1/(a 2 − 1/(· · · − 1/a k )). This is called the continued fraction expansion of m/n, and we denote it by m/n = [a 1 : a 2 : · · · : a k ]. We now look at the continued fraction expansions of p/p ′ , q/q ′ , and r/r ′ , which determine the negative weights of the vertices on each branch. We have p/p ′ = p/1 = [p], q/q ′ = (2p − 1)/(2p − 2) = [2 : 2 : · · · : 2] with 2p − 2 entries, and r/r ′ = (2p + 1)/(2p) = [2 : 2 : · · · : 2] with 2p entries. Hence the result follows.

We restate Theorem 1.6(ii) as the following proposition.

Proposition 3.2. For p even, we have that d(Y p ) = −p.

Proof. Let X p denote the four-manifold given in Figure 2. By Proposition 3.1 we know ∂X p = −Y p . A result of Ozsváth and Szabó [OS03b, Corollary 1.5] says that the correction term of −Y p can be computed using the intersection form on H 2 (X p , Z) as follows. Let Char(X p ) denote the set of all characteristic cohomology classes, i.e., those classes K with ⟨K, v⟩ ≡ v · v (mod 2) for every vertex v of the plumbing graph. Next we note that the number of vertices in the plumbing graph is 4p. Then the correction term of −Y p at its unique Spin c structure is given by d(−Y p ) = max over K ∈ Char(X p ) of (K 2 + 4p)/4. When p is even, X p has even intersection form and thus K = 0 is a characteristic cohomology class. Clearly K = 0 maximizes the above expression since the intersection form is negative definite. Hence d(−Y p ) = p and d(Y p ) = −p.

Graded roots

The purpose of the present section and the next one is to prove the following result which finishes the proof of Theorem 1.6 when combined with Proposition 3.2.

Proposition 4.1. For every even integer p ≥ 4, we have that U · HF red 0 (Y p ) = 0.

The proof uses the techniques of graded roots which were introduced by Némethi [Ném05] and extensively studied in [CK12, KL13]. In this section we motivate and explain our strategy to prove Proposition 4.1 and give the necessary background. The proof will be given in the next section.

Background. Definition 4.2 (Némethi, [Ném05, Section 3.2]). A graded root is a pair (Γ, χ), where Γ is an infinite tree, and χ is an integer-valued function defined on the vertex set of Γ satisfying the following properties. Up to an overall degree shift, every graded root can be described by a finite sequence as follows. Let ∆ : {0, . . . , N } → Z be a given finite sequence of integers. Let τ ∆ : {0, . . . , N + 1} → Z be the unique solution of τ ∆ (0) = 0 and τ ∆ (n + 1) − τ ∆ (n) = ∆(n). For each n, let R n denote the graded tree consisting of a single infinite stem with one vertex in each grading k ≥ τ ∆ (n), consecutive gradings being joined by an edge. We identify, for each n ∈ {0, . . . , N + 1}, all common vertices and edges in R n and R n+1 to get an infinite tree Γ ∆ . To each vertex v of Γ ∆ , we can assign a grading χ ∆ (v) which is the unique integer corresponding to v in any R n to which v belongs. Clearly many different sequences can give the same graded root. For example, the elements n ∈ {0, . . . , N } where ∆(n) = 0 do not affect the resulting graded root. Associated to a graded root (Γ, χ) is a graded F[U ]-module H(Γ); we omit the grading function from the notation. As an F-vector space, H(Γ) is generated by the vertices of Γ. Further, the grading of a vertex v is given by 2χ(v). Finally, U · v is defined to be the sum of vertices w which are connected to v by an edge and satisfy χ(w) = χ(v) − 1.

4.2. Strategy of the proof. To a large family of plumbed manifolds, Némethi associates a graded root whose corresponding module is isomorphic to Heegaard Floer homology up to a grading shift [Ném05]. In [CK12], Némethi's method is simplified for Seifert homology spheres.
Before describing this method in Section 4.3, we begin with an example to illustrate the process. This will also enable us to explain the strategy for the proof of Proposition 4.1. For simplicity, we will construct the graded root for Y 3 = Σ(3, 5, 7) and consequently compute its Heegaard Floer homology. While Y 3 does not have p even, this computation will still lend insight into the family of computations we are interested in. We consider the number {0, 4, 13, 15, 19, 21, 30, 34}. We write ∆ Y 3 as an ordered set, recording in sequence, the value of ∆ Y 3 on each element of X Y 3 : We then combine the consecutive positive values and the consecutive negative values to write a new sequence which produces the same graded root: We indicate the graded root Γ Y 3 in Figure 3. We can read off the Heegaard Floer homology of Y 3 up to a degree shift from its graded root. As relatively-graded modules, we have that HF (4) (6) Figure 3. The graded root associated to Y 3 = Σ(3, 5, 7). The grading of each vertex v is written in the form χ(v), (absolute grading). (2) (4) Let us observe why U · HF red 0 (Y p ) = 0 when p = 3, 4, 5 using these graded roots. From the description of the U -action on the homology of a graded root Γ, we see that the dimension of ker(U ) n is the number of branches ending at degree n whereas dim H n (Γ) is the number of vertices in degree n. From the pictures of the graded roots of Γ Yp we clearly see that there is exactly one degree 0 vertex which is not the end of a branch; this vertex is in the image of U N for N ≫ 0. Since HF red (Y p ) is the cokernel of U N for N ≫ 0, we have U · HF red 0 (Y p ) = 0 for p = 3, 4, 5. In order to prove Proposition 4.1 in general, we need to see a pattern in the graded roots of Y p . Repeating the graded root computation for a few more values reveals that the bottom of the graded root of Y p shows one of the patterns indicated in Figure 5, depending on the parity of p. We call these "sub-graded roots" creatures and denote them by Γ Cp . Proposition 4.1 reduces to showing that the bottom of each graded root is the creature Γ Cp . In order to formalize and prove this we are going to need abstract delta sequences which were introduced in [KL13]. Figure 5. Creatures Γ Cp as sub-graded roots of the graded roots associated to Y p . Gradings are χ values. Delta sequences. Recall from [KL13] that an abstract delta sequence is a pair (X, ∆), where X is a well-ordered finite set, and ∆ : X → Z − {0} which is positive at the minimal element of X. These objects codify graded roots via the method described in Section 4.1. We review the description of the abstract delta sequence (X Y , ∆ Y ) which is associated to an arbitrary Brieskorn sphere Y = Σ(p, q, r). Let N Y = pqr − pq − pr − qr, and let S Y denote the intersection of the the interval [0, N Y ] with the semigroup generated by pq, pr, and qr. Define the set The significance of this abstract delta sequence is the following. 4.4. Refining and merging delta sequences. One can define operations on abstract delta sequences which do not change the corresponding graded root. Two such operations are refinement and merging which we now define. Let (X, ∆) be a given abstract delta sequence. Suppose there exists a positive integer t ≥ 2 and an element z of X such that |∆(z)| ≥ t. Pick integers n 1 , . . . , n t , all of which have the same sign as ∆(z), and satisfy n 1 + · · · + n t = ∆(z). From this we construct a new delta sequence. Remove z from X and put t consecutive elements z 1 , . . . 
, z t in its place to get a new ordered set X ′ . Define ∆ ′ : X ′ → Z, such that ∆ ′ (x) = ∆(x), for all x ∈ X \ {z}, and ∆ ′ (z i ) = n i , for all i = 1, . . . , t. We say that the delta sequence (X ′ , ∆ ′ ) is a refinement of (X, ∆) at z, and conversely (X, ∆) is the merge of (X ′ , ∆ ′ ) at z 1 , . . . , z t . Definition 4.4. An abstract delta sequence is said to be reduced if it does not admit any merging (hence there are no consecutive positive or negative values of ∆). An abstract delta sequence is called expanded if it does not admit any refinement (hence every value of delta is ±1). Clearly every delta sequence admits unique reduced and expanded forms. Note that the abstract delta sequences of Brieskorn spheres are in expanded form. 4.5. Successors and predecessors. Let (X, ∆) be an abstract delta sequence. Denote by S (respectively Q) the set of all elements in X where ∆ is positive (respectively negative). For x ∈ X, we define its positive successor suc + (x) = min{x ′ ∈ S | x < x ′ } (respectively negative successor suc − (x) = min{x ′ ∈ Q | x < x ′ }). In other words, suc ± (x) is the first positive/negative element of the delta sequence after x. Should suc ± (x) not exist, we treat it as an auxiliary element which is larger than any element in X. Note that (X, ∆) is in reduced form if and only if for all x ∈ S, x < suc − (x) ≤ suc + (x) and for all x ∈ Q, x < suc + (x) ≤ suc − (x). We have the analogous notions, pre ± (x), which are the predecessors. is the maximal set of all consecutive elements of X which are contained in S (respectively Q) which contains x (respectively y). We describe an explicit model for the reduced form of (X, ∆), denoted (X,∆) such thatX ⊂ X. This is done as follows. DefineS = {π + (x) | x ∈ S} (i.e., the largest endpoints of each maximal interval of elements with positive values) andQ = {η − (y) | y ∈ S}. We then merge each When discussing the reduced form of (X, ∆), we will always assume we are working with this explicit model for the reduced form of (X, ∆). There is also an obvious quotient from X toX given by x → π + (x) for x ∈ S and y → η − (y) for y ∈ Q. We will sometimes not distinguish between an element of X and its representative inX. The reason for this is that if x < y or y < x in X, for x ∈ S and y ∈ Q, then the same inequality holds for their images inX. Let Y be a Brieskorn sphere. We would like to study the reduced form (X Y ,∆ Y ) of the delta sequence (X Y , ∆ Y ) according to the above model. First we point out that N Y − x ∈X Y whenever x ∈X Y . Moreover, it follows from the construction that∆ Y (N Y − x) = −∆ Y (x) for x ∈X Y . Finally we have the following. Proof. Let Y = Σ(p, q, r) be a Brieskorn sphere. Let x 0 denote the minimum of {pq, pr, qr}. Throughout this proof, all successors and predecessors are taken with respect to X Y and notX Y . Note that suc + (0) = x 0 . Let y denote the maximal element of S(pq, pr, qr) less than N Y . Then suc − (0) = N Y −y. Since x 0 is the smallest generator of S(pq, pr, qr), gaps between any two elements of this semigroup are less than or equal to x 0 . Since N Y ∈ S(pq, pr, qr), we have N Y − y < x 0 by definition of y. Hence, we have 0 < suc − (0) < suc + (0) which implies π + (0) = 0. Therefore 0 ∈X Y . Consequently, 4.6. Tau functions and sinking delta sequences. 
Given an abstract delta sequence (X, ∆), one defines the well-ordered set X + := X ∪ {z + } where z + > z for all z ∈ X, and a function τ ∆ : X + → Z, as in Section 4.1, with the following formula τ ∆ (z) = w∈X w<z ∆(w), for all z ∈ X + . We call τ ∆ the tau function of the delta sequence (X, ∆). An important part of the study of abstract delta sequences is to detect where their tau functions attain their absolute minimum. Below we define a class of delta sequences whose tau functions have easily detectable minimum. Proposition 4.7. The tau function of a sinking delta sequence attains its absolute minimum at the last element and nowhere else. Proof. Follows from the definition. We will also need certain dimension formulas for H(Γ ∆ ), regardless of whether (X, ∆) is sinking. We find it convenient to work in the reduced form. Let (X, ∆) be a given abstract delta sequence and let (X,∆) be its reduced form. Letτ :X + → Z be the tau function of the reduced sequence. For any z ∈X + other than the minimal element, let pre(z) denote the immediate predecessor of z inX + (i.e., pre − (z) if z ∈S and pre + (z) if z ∈Q). Denote by z min the minimal element ofX + . Proof. As we noted before, passing to the reduced form does not change the graded root. The proof follows from the construction of the graded root Γ ∆ and the description of the U -action on H(Γ ∆ ). First of all dim(H 0 (Γ ∆ )) is the total number of vertices with degree 0. Now recall the construction of the graded root from the tau function given in Section 4.1. Since∆ is reduced, each timẽ τ (pre(z)) > 0 andτ (z) ≤ 0 happens, we create a new vertex of degree 0. Sinceτ (z min ) = 0, we have one more vertex of degree 0. By the description of this U -action, dim(ker U ) 0 equals the number of valency one vertices on Γ ∆ which have degree 0. Each timeτ (pre(z)) > 0 andτ (z) = 0, happens we create such a vertex. We need to add one to include the vertex corresponding to z min . 4.7. Symmetric delta sequences. We see an obvious symmetry in Figures 3 and 4. In fact this symmetry more generally holds for the graded roots of Seifert homology spheres. The purpose of the next two definitions is to characterize those delta sequences whose graded roots are symmetric. To simplify the description, we shall use the notation f = k 1 , k 2 , . . . , k n to denote the function f : X → Z, whose value is k 1 at the minimal element of X, then k 2 at the successor of the minimal element, et cetera. Definition 4.9. Given an abstract delta sequence ∆ = k 1 , . . . , k n , we define the following (i) negation, −∆ = −k 1 , . . . , −k n , (ii) reverse, ∆ = k n , . . . , k 1 . Note that neither the negation nor the reverse of an abstract delta sequence need be an abstract delta sequence. If ∆ 1 = k 1 , . . . , k n and ∆ 2 = ℓ 1 , . . . , ℓ m are abstract delta sequences we define their join by ∆ 1 * ∆ 2 = k 1 , . . . , k n , ℓ 1 , . . . , ℓ m . Further, observe that by the symmetry of ∆ Y , if the maximal element of Semigroups and creatures Having given the necessary background, we are now ready to prove Proposition 4.1. First we formally define the creatures given in Figure 5, by indicating their delta sequences. Then we will observe that Proposition 4.1 holds for the creature graded roots, which will be denoted Γ Cp ; namely, we will show U · H red 0 (Γ Cp ) = 0. Finally we shall prove a technical decomposition theorem which essentially reduces the proof of Proposition 4.1 for Y p to checking that it holds for the creatures. 
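As a computational aside (not part of the argument), the tau function of Section 4.6 is easy to experiment with directly. The Python sketch below computes the running sums of a finite delta sequence and tests the property singled out in Proposition 4.7, namely that the absolute minimum occurs only at the last element; it tests that conclusion rather than restating the definition of a sinking sequence, and the example sequence is invented.

```python
# Tau function of an abstract delta sequence: tau(z) = sum of Delta over all
# earlier elements.  Proposition 4.7 says that for a sinking sequence this
# running sum attains its absolute minimum only at the very last element.

def tau(delta: list[int]) -> list[int]:
    """Return the running sums [tau(z_0), ..., tau(z_N), tau(z_+)]."""
    out = [0]
    for d in delta:
        out.append(out[-1] + d)
    return out

def min_only_at_end(delta: list[int]) -> bool:
    """Check the conclusion of Proposition 4.7 for a given finite sequence."""
    t = tau(delta)
    return t.index(min(t)) == len(t) - 1

if __name__ == "__main__":
    # A made-up reduced sequence, alternating signs as in the examples of Section 4.2.
    example = [1, -1, 2, -3, 1, -2]
    print(tau(example))            # running partial sums
    print(min_only_at_end(example))
```

The same running sums, applied to the delta sequences of the Brieskorn spheres described earlier, are what underlie the graded roots pictured in Figures 3-5.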
Throughout this section, let p be an even integer with p ≥ 4. We will often write p = 2ξ + 2. Definition 5.1. For every p = 2ξ + 2 with ξ ≥ 1, the creature Γ Cp is the graded root defined by the symmetrization of the abstract delta sequence Let p be given, and consider the creature graded root Γ Cp and its homology H(Γ Cp ), which is an F[U ]-module supported in even degrees (see Section 4.1 for the construction of H(Γ)). Let pre(z) denote the immediate predecessor of z, as in Proposition 4.8. From this we observe that there is exactly one element that belongs to the set {z | τ (pre(z)) > 0 and τ (z) ≤ 0}, but does not belong to the set {z | τ (pre(z)) > 0 and τ (z) = 0}. We underlined the value of the tau function at this element in (5.1) above. Hence by Proposition 4.8, the proof follows. Let Y p = Σ(p, 2p − 1, 2p + 1). Let ∆ Yp denote the corresponding abstract delta sequence as described in Section 4.3, and let∆ Yp denote its reduced form. The proof of Proposition 4.1 will follow from the following technical statement about∆ Yp . Lemma 5.3. For every even integer p ≥ 4, we have the following decompositioñ where ∆ Cp is the creature sequence given in Definition 5.1, and ∆ Zp is a sinking delta sequence. Let us first see why the above lemma implies Proposition 4.1, and thus completes the proof of Theorem 1.6. Proof of Proposition 4.1. Consider the graded root Γ Yp whose grading is shifted so that it agrees with the absolute grading of HF + (Y p ) (see Theorem 4.3). The decomposition in Lemma 5.3 implies that the creature graded root Γ Cp embeds into Γ Yp as a subgraph. Moreover, Proposition 3.2 implies that this embedding is in fact degree preserving. Since ∆ Zp is sinking, by Proposition 4.7 and (5.1), the minimum value of τ ∆ Zp is 0, which is the initial value of τ ∆ Cp , and is uniquely attained at the maximal element of ∆ Zp . Thus, we see that H ≤0 (Γ Yp ) = H ≤0 (Γ Cp ) as graded F[U ]-modules. By Theorem 4.3 and Proposition 5.2, we see dim HF + 0 (Y p ) = dim(ker U ) 0 + 1. This implies the desired result. Here we deliberately break the lines, so the pattern of the elements is visible. Proof. First, it is clear that (p − 1)r + ∈ S(r − , r + ), and therefore is the maximal element. Next, note that if a + b = k and 1 ≤ a ≤ k and 1 ≤ b ≤ k, then since r − < r + , Therefore, to establish the order as given in the statement of the lemma, we just need to show that as long as k ≤ (p − 1), we have kr + < (k + 1)r − . This inequality is easily checked using the definition r ± = p(2p ± 1). (a − 1)r − + (b − 1)r + + w, ar − + br + is consecutive in N. This will be used frequently throughout the proof. Before proceeding, we point out that if x ∈ S Yp can be written as x = ar − +br + +cw for some nonnegative integers a, b, and c, then this decomposition is unique by the Chinese remainder theorem. Suppose now that a is a non-negative integer and b is a positive integer such that a + b ≤ p − 1. Fact 5.5, combined with this observation about the unique representability of elements in S Yp , implies Combining Lemma 5.4 with (5.2) now gives a complete description of S(r − , r + , w)∩[0, (p−1)r + ]. Namely for each element x = ar − + br + of the pyramid given in Lemma 5.4, we have min{a, b} more consecutive elements preceding x. In particular, there are no elements of S(r − , r + , w) between (k − 1)r + and kr − for k ≤ p − 1. 
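Because the argument tracks the elements of S(r − , r + , w) below N Yp quite explicitly, it can be reassuring to enumerate them directly for small even p. The following is a small Python sketch, not part of the paper, that lists these semigroup elements; it can be used to spot-check the ordering and pyramid structure of Lemma 5.4 numerically.

```python
# Enumerate the semigroup elements of S(r-, r+, w) lying in [0, N] for
# Y_p = Sigma(p, 2p-1, 2p+1), with r_-= p(2p-1), r_+ = p(2p+1), w = (2p-1)(2p+1)
# and N = 4p^3 - 8p^2 - p + 1, as in the text.

def semigroup_elements(p: int) -> list[int]:
    r_minus, r_plus = p * (2 * p - 1), p * (2 * p + 1)
    w = (2 * p - 1) * (2 * p + 1)
    n_y = 4 * p**3 - 8 * p**2 - p + 1
    elems = set()
    for a in range(n_y // r_minus + 1):
        for b in range((n_y - a * r_minus) // r_plus + 1):
            rest = n_y - a * r_minus - b * r_plus
            for c in range(rest // w + 1):
                elems.add(a * r_minus + b * r_plus + c * w)
    return sorted(elems)

if __name__ == "__main__":
    S = semigroup_elements(4)   # smallest even case treated by Proposition 4.1
    print(len(S), S[:10])
```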
We also point out two inequalities which will be used throughout the proof of Lemma 5.3: The validity of these two inequalities can be checked directly from the definitions r ± = p(2p ± 1) and N Yp = 4p 3 − 8p 2 − p + 1. 5.1. The proof of Lemma 5.3. In order to study ∆ Yp we will find it more convenient to work with its reduced form. We shall make use of the explicit model of the reduced form given in Section 4.5. Hence, we will heavily rely on the notation introduced there. Recall that in order to determine the reduced form, we must compute π ± (x) for x ∈ S Yp . Proof. First, let One can check that x ′ ≤ (p − 1)r + . Hence we have by Lemma 5.4 and Fact 5.5 that x < x ′ and that the elements of S Yp strictly between x and x ′ are of the form x ′ − i, for 1 ≤ i ≤ min{a − 1, b + 1} and they are consecutive in X Yp . Thus, Since the elements between suc + (x) and x ′ are consecutive in X Yp (and S Yp ), if y ∈ Q Yp satisfies y < x ′ , it must satisfy y < suc + (x). However, it is straightforward to verify that (5.3) and (5.4) imply This completes the first part of the lemma. First, we would like to determine π − (x) in the generic case. By Fact 5.5, {x − min{a, b}, . . . , x} is a consecutive subset of X Yp which is contained in S Yp . Let By (5.2) and Lemma 5.4, we have that pre + (x − min{a, b}) = x ′′ . Again, Lemma 5.4, (5.3), and (5.4) imply Similar to above, when a < p − 1, we have that Thus, if x is neither (p − 2)r + nor (p − 1)r − , the second claim follows, since π + (x) = x. In order to deal with the exceptional cases, we will prove that there is no element of Q Yp between (p−2)r + and (p−1)r − . Since the above arguments show that there exists an element of Q Yp between pre + ((p − 2)r + ) and (p − 2)r + , and an element of Q Yp between (p − 1)r − and suc + ((p − 1)r − ), this will establish that [π − (x), π + (x)] ∩ S Yp = {(p − 2)r + , (p − 1)r − } for x = (p − 2)r + or (p − 1)r − . Here, we are using our description of S Yp to deduce that there are no elements of S Yp between (p − 2)r + and (p − 1)r − . We remark that more generally, if x = ar − + br + and x ≥ (p − 1)r + , we are still able to deduce that {x − min{a, b}, . . . , x} ⊂ [π − (x), π + (x)] ∩ S Yp . Finally, recall that for x ∈ X Yp , we may also write x for the induced element inX Yp . In order to prove Lemma 5.3 we are interested in finding a decomposition∆ Yp = (∆ Zp * ∆ Cp ) Sym , such that ∆ Zp is sinking and ∆ Cp is the creature sequence from Definition 5.1. Recall that we write p = 2ξ + 2 for some positive integer ξ. Define It is clear that ∆ Zp is an abstract delta sequence, since∆ Yp is. Lemma 5.8. The abstract delta sequence ∆ Zp is sinking. Lemma 5.9. As abstract delta sequences, ∆ Wp ∼ = ∆ Cp , where ∆ Cp is the abstract delta sequence from Definition 5.1, and ∆ Wp is defined as in (5.10). Having collected all the necessary ingredients, the proof of Lemma 5.3 now follows from (5.8), Lemma 5.8, and Lemma 5.9. As discussed, this was the remaining piece needed to prove Proposition 4.1, and consequently Theorem 1.6. Knot surgeries in other manifolds In the proof of Theorem 1.2, the only thing special about S 3 is that it is an integer homology sphere L-space (i.e., HF red = 0) and that d(S 3 ) = 0. The following theorem is a slight generalization. Theorem 6.1. Let Y and Y ′ be oriented integer homology spheres. Suppose that HF red (Y ) = 0 and d(Y ′ ) ≤ d(Y ) − 8. 
Then, if Y ′ is obtained by surgery on a knot in Y , then there is a non-trivial element of HF red (Y ′ ) in degree d(Y ) which is not in the kernel of U . Proof. The proof is the same as Theorem 1.2, where one only difference is that one has to incorporate the d-invariant of Y into some of the statements. The main observation is that for n > 0, we have the more general formula d(Y 1/n (K)) = d(Y ) − 2V 0 , where V 0 is defined analogously for Y and K as for a knot in S 3 . This follows by repeating the arguments in [NW10, Proposition 1.6] for a knot in an integer homology sphere L-space. Proof of Theorem 1.7. Orient Σ(2, 3, 5) such that it is the boundary of a negative-definite plumbing. In this case, d(Σ(2, 3, 5)) = 2. Let Z be an integer homology sphere. We will show that for k ≫ 0, the manifold # k Σ(2, 3, 5) is not surgery on a knot in Z, regardless of orientation of Z. We conclude by pointing out that Theorem 6.1 can also be extended to statements about p/qsurgery where |p| ≥ 2. One can then apply the same arguments as in Theorems 1.2 and 1.7 to show that if Z has cyclic first homology, # k Σ(2, 3, 5) is not surgery on a knot in Z for k large. Finally, the analogous statement when Z has non-cyclic homology is trivial. Thus, in conclusion, for any three-manifold Z, there exist infinitely many integer homology spheres which are not surgery on a knot in Z.
2014-08-07T08:02:17.000Z
2014-08-07T00:00:00.000
{ "year": 2014, "sha1": "b63d4b3bbeba0e27473f76e2315ef5802d6139a4", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1408.1508", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b63d4b3bbeba0e27473f76e2315ef5802d6139a4", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
13130561
pes2o/s2orc
v3-fos-license
Determinants of pain and functioning in knee osteoarthritis: a one-year prospective study Objective: To identify predictors of pain and disability in knee osteoarthritis. Design: A one-year prospective analysis of determinants of pain and functioning in knee osteoarthritis. Study setting: Primary care providers in a medium-sized city. Patients: A total of 111 patients aged from 35 to 75 with clinical symptoms and radiographic grading (Kellgren-Lawrence 2–4) of knee osteoarthritis who participated in a randomized controlled trial. Main measures: The outcome measures were self-reported pain and function, which were recorded at 0, 3 and 12 months. Disease-specific pain and functioning were assessed using the pain and function subscales of the Western Ontario and McMaster Universities (WOMAC) Osteoarthritis Index. Generic physical and mental functioning were assessed using the RAND-36 subscales for function, and physical and mental component summary scores. Possible baseline predictors for these outcomes were 1) demographic, socioeconomic and disease-related variables, and 2) psychological measures of resources, distress, fear of movement and catastrophizing. Results: Multivariate linear mixed model analyses revealed that normal mood at baseline measured with the Beck Anxiety Inventory predicted significantly better results in all measures of pain (WOMAC P=0.02) and function (WOMAC P=0.002, RAND-36 P=0.002) during the one-year follow-up. Psychological resource factors (pain self-efficacy P=0.012, satisfaction with life P=0.002) predicted better function (RAND-36). Pain catastrophizing predicted higher WOMAC pain levels (P=0.013), whereas fear of movement (kinesiophobia) predicted poorer functioning (WOMAC P=0.046, RAND-36 P=0.024). Conclusions: Multiple psychological factors in people with knee osteoarthritis pain are associated with the development of disability and longer term worse pain. Introduction There is an emerging consensus that the degree of knee pain and disability symptoms among osteoarthritis patients appears to rest upon a complex interaction of factors, including structural damage, peripheral and central pain processing mechanisms, obesity, culture, and demographic as well as psychosocial factors. 1,2 For instance, the European Project on Osteoarthritis concluded that advanced age, female gender, lower educational attainment and a higher body mass index were independently associated with disability. 3 With respect to structural damage, it has been shown that pain does not always accompany radiological findings of knee osteoarthritis. 4 Furthermore, the radiographic severity of knee osteoarthritis has been reported to have a weak or no association with disability in these patients. 5 Increasing evidence has suggested the importance of psychological (affective, cognitive, behavioural) variables in explaining and predicting osteoarthritis pain and disability. 6,7 According to a population-based survey of individuals living in 17 countries, depression and anxiety disorders occurred significantly more often among those with self-reported arthritis. 8 In a study by Smith and Zautra 9 among women with osteoarthritis, measures of anxiety and depression emerged as independent and significant predictors of current and next week pain, with anxiety having almost twice the effect of depression. Over the past 15 years, pain-related cognitions, such as pain catastrophizing and self-efficacy, have become a major interest in psychosocial pain research. 
Pain catastrophizing refers to the tendency to ruminate about pain and magnify it. Somers et al. 10 reported in a cross-sectional setting that pain catastrophizing explained a significant proportion of the variance in measures of pain, psychological disability, physical disability and gait velocity in overweight and obese patients with knee osteoarthritis. Fear of movement or kinesiophobia is another variable used to describe negatively charged emotions towards pain and function. Heuts et al. 11 concluded that pain-related fear was significantly associated with functional limitations among osteoarthritis patients. Self-efficacy, on the other hand, represents a more positive aspect of adjusting to pain. Self-efficacy is a concept used to describe the strength of one's beliefs in one's ability to complete tasks and reach goals. According to a systematic review by Benyon et al., 6 there is strong evidence that self-efficacy predicts disability but not pain among osteoarthritis patients. Several psychological variables have been studied in relation to pain and function among patients with chronic musculoskeletal diseases. However, the number of studies investigating the predictive role of psychological factors in knee osteoarthritis is somewhat scarcer. In this analysis, we assessed whether disease-specific, demographic and psychological factors at baseline predict self-reported pain and function during a one-year follow-up of a randomized controlled trial among patients with knee osteoarthritis. 12 Patients and methods The study participants were 111 patients with radiologically (Kellgren-Lawrence 2-4) 13 diagnosed knee osteoarthritis and associated pain symptoms. They participated in a randomized controlled trial with a group-based cognitive-behavioural intervention to treat pain, and were followed up for one year. 14 The outcome measures were recorded at 0-, 3-, and 12-month follow-up points using postal questionnaires. The questionnaires included questions about knee pain and physical function, demographic, socioeconomic and disease-related variables and psychological variables. Questionnaires for knee pain and physical function The outcome measures in this analysis were selfreported pain and functioning (physical and mental). The following measures were used: Disease-specific pain and physical functioning were measured with the pain and function subscales (0−100 mm) of the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) using the Finnish validated version. 15 The self-reported generic assessments of physical and mental functioning were assessed with the Finnish validated SF-36-item Health Survey RAND-36 16 subscales for function and physical and mental component summary scores. Both orthogonal and oblique assessments 17 for the summary scores were calculated and used in parallel. We used average and SD values of different RAND-36 subscales in the Finnish population when calculating the component summary scores. 16 Possible baseline predictors Possible baseline predictors for the outcomes were divided into two groups: 1) Demographic, socioeconomic and disease-related variables and 2) psychological measures of resources and coping, fear of movement and catastrophizing and distress. Baseline predictors were transformed to dichotomous variables before the analysis, except for age, which was maintained continuous. 
In the transformation to dichotomous variables, we used cut-off values based on classification systems (body mass index, Kellgren-Lawrence scale) or the median of the observations (duration of the knee symptoms). In the case of exercise frequency, the cut-off value (⩾2 times a week vs. ⩽1 time a week) was chosen with respect to the recommendation of the Physical Activity Guidelines for Americans 19 of strength training at least twice a week. The patients were asked to report how often they exercised with the following response alternatives: daily, 4-6 times a week, 2-3 times a week, once a week, 2-3 times a month, or a couple of times a year or less. For the number of comorbidities, the cut-off (0−2 vs. ⩾3) was chosen on the basis of reasonable group sizes and clinical relevance. 2) Psychological variables. Psychological variables were assessed with questionnaires focusing on psychological resources (life satisfaction, sense of coherence, pain self-efficacy), fear and catastrophizing (kinesiophobia and pain catastrophizing) and mood (depressive symptoms, symptoms of anxiety). In the transformation to dichotomous variables, we used clinical cut-offs defined for each questionnaire when available (Life Satisfaction scale, Tampa Scale of Kinesiophobia, Beck Depression Inventory, Beck Anxiety Inventory). Where a cut-off had not been defined, for clinically meaningful comparisons we used data-driven tertile grouping (Sense of Coherence, Pain Self-Efficacy Questionnaire, Pain Catastrophizing Scale). Depressive symptoms were assessed using the Finnish version of the 21-item Beck Depression Inventory, which has been found valid and reliable. 27 The cut-off point for depression was set at 9/10 (normal mood, scores 0-9 vs. elevated depressive symptoms, scores 10 or more) according to the original formulation by Beck and Beamesderfer. 28 The Beck Anxiety Inventory 29 was used to evaluate the severity of symptoms of anxiety (normal mood, scores 0-7 vs. mild anxiety or more, scores 8-63). 29 Although the Finnish version of the Beck Anxiety Inventory has previously been used in some studies, exact data on its validity and reliability are scarce. Statistical analysis All statistical analyses were performed using SPSS (version 22.0, SPSS, Chicago, IL, USA). Demographic characteristics and baseline data were summarized with descriptive statistics. The number of study patients in this analysis was based on the power calculations for the original randomized controlled trial, 14 where 54 patients per group (two groups) were needed in the comparison of the mean WOMAC pain scores between the groups. The associations of possible explanatory variables with the outcome variables were assessed with a multivariate linear mixed model, in which the correlation structure of the data due to the multiple measurements (0, 3 and 12 months) could be taken into account. The mixed model has the advantage of using all available data in the analysis, irrespective of whether some data points are missing for a given participant. Separate models were estimated for each outcome. It has been recommended that covariates should be chosen based on their substantive basis and not on a test of differences. 30 Thus, age, gender, educational level, the number of comorbidities, the body mass index, work status, marital status and disease severity were included as covariates based on their associations with the study outcomes in prior research.
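The dichotomization rules above map directly onto a small preprocessing step. The sketch below illustrates them in Python/pandas under the stated cut-offs; the DataFrame and column names (bdi, bai, pcs, exercise_per_week, comorbidities) are hypothetical and not taken from the original study data.

import pandas as pd

def dichotomize_baseline(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the cut-offs described in the Methods to hypothetical columns."""
    out = df.copy()
    # Mood questionnaires: clinical cut-offs (BDI 0-9 vs. >=10; BAI 0-7 vs. >=8).
    out["bdi_elevated"] = (df["bdi"] >= 10).astype(int)
    out["bai_elevated"] = (df["bai"] >= 8).astype(int)
    # Exercise frequency: >=2 times a week vs. <=1 time a week.
    out["exercise_2plus"] = (df["exercise_per_week"] >= 2).astype(int)
    # Number of comorbidities: 0-2 vs. >=3.
    out["comorbid_3plus"] = (df["comorbidities"] >= 3).astype(int)
    # No published cut-off: data-driven tertiles (e.g., Pain Catastrophizing Scale).
    out["pcs_tertile"] = pd.qcut(df["pcs"], q=3, labels=["low", "mid", "high"])
    return out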
3,7,31 The covariates were dichotomised (Table 1) before the analysis, except for age, which was maintained continuous (per 10 years). Finally, a model for demographic, socioeconomic and disease-related variables was fitted in the form: Outcome 0;3;12 = sex + age + education + comorbidities + body mass index + work status + marital status + radiological grade + duration of knee pain + time+ randomization + time x randomization. In the same way, a second model was formulated in which life satisfaction, sense of coherence, pain self-efficacy, kinesiophobia, catastrophizing, depressive and anxiety symptoms were included as covariates based on their associations with the study outcomes in prior research. 6,7 Again, the covariates were dichotomised (Table 2) before the analysis. Thus, the model for psychological measures was fitted in the form: Outcome 0;3;12 = Life Satisfaction + Sense of Coherence + Pain Self-Efficacy Questionnaire + Tampa Scale of Kinesiophobia + Pain Catastrophizing Scale + Beck Depression Inventory + Beck Anxiety Inventory+ time+ randomization + time x randomization. The time-by-treatment interaction in both models addresses the question of whether the groups differed in the change between the measurement points. A non-significant time-by-treatment interaction suggests that the changes over the followup period cannot be distinguished from sampling error. Since the time-by-treatment interaction was non-significant in all outcomes, we decided to remove the term from both of the models. As group randomization did not show any significance as a covariate in either of the models, one can conclude that the intervention of the original randomized controlled trial did not have any effect on the outcome variables. Thus, the term could also have been removed from the mixed model analysis. However, we decided to keep it for reasons of clarity. Results The baseline characteristics of the study patients are presented in Table 3. The associations of baseline variables (predictors) with the outcome variables have been described in Tables 1 and 2. Multivariate linear mixed model analyses revealed that normal mood at baseline measured with the Beck Anxiety Inventory predicted significantly better results in all of the outcome measures during the one-year follow-up. Strong pain selfefficacy and satisfaction with life predicted significantly better scores in RAND-36 function, mental and physical component summaries. High scores in the Pain Catastrophizing Scale predicted significantly higher WOMAC pain levels. Low kinesiophobia scores, on the other hand, predicted significantly lower impairment in WOMAC function Discussion The current analysis revealed the significance of anxiety symptoms as predictors of knee osteoarthritis pain and function: Mixed model analysis showed that normal mood in the Beck Anxiety Inventory at baseline predicted better results in all outcomes of pain and function during the one-year follow-up. Moreover, the predictive role of baseline psychological resource factors for measures of function was highlighted in the follow-up of these patients. Additionally, negatively charged emotions and expectations towards pain were found to be important predictive factors in knee osteoarthritis symptoms. The role of anxiety symptoms in predicting knee osteoarthritis pain and functional impairment has been well established. 
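The analyses were run in SPSS, but the repeated-measures mixed model specified above can be sketched in Python with statsmodels for readers who want to reproduce its structure. The synthetic data, the reduced covariate set and the random-intercept-per-patient covariance structure below are illustrative assumptions; the excerpt does not state which covariance structure was configured in SPSS.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients, visits = 60, [0, 3, 12]          # synthetic stand-in for the trial data
df_long = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), len(visits)),
    "month": np.tile(visits, n_patients),
    "randomization": np.repeat(rng.integers(0, 2, n_patients), len(visits)),
    "bai_elevated": np.repeat(rng.integers(0, 2, n_patients), len(visits)),
    "age_per10": np.repeat(rng.normal(6.0, 1.0, n_patients), len(visits)),
})
df_long["womac_pain"] = (
    40 + 8 * df_long["bai_elevated"] - 0.5 * df_long["month"]
    + rng.normal(0, 10, len(df_long))
)

# Outcome measured at 0, 3 and 12 months; a random intercept per patient is one
# simple way to model the within-patient correlation across repeated measurements.
model = smf.mixedlm(
    "womac_pain ~ bai_elevated + age_per10 + C(month) + randomization",
    data=df_long, groups=df_long["patient_id"],
)
print(model.fit().summary())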
8,9 Generally, anxiety disorders among primary care patients with chronic pain have been found common, and the number of disorders adversely associated with impairment in health related quality of life and RAND-36 mental component summary scores. 32 Among other affective variables, depressive symptoms have been demonstrated to have an association with knee pain and activity limitations. 8,9 However, in the current analysis, depressive symptoms did not have any predictive value for self-reported pain or function. One reason for this may be the low baseline levels of depressive symptoms among the study patients, with only 19 reporting at least mild depression. This, in turn, may result from the recruitment process of the original randomized controlled trial: The candidates had to take the initiative to participate in the study. In addition, severe psychiatric conditions were criteria for exclusion. The importance of psychological resource factors was emphasized in relation to measures of function of knee osteoarthritis patients. Pain selfefficacy and satisfaction with life both predicted better generic measures of function (RAND-36 function, mental and physical component summaries) in the follow-up. According to previous research findings, strong pain self-efficacy appears to enhance and maintain the long-term effects of rehabilitation, 33 while weak pain self-efficacy has, in contrast, been found predictive of long-term disability and depression. 34 Satisfaction with life, on the other hand, has been found to be a powerful predictor of various health risks and health-related adversities among persons with musculoskeletal disorders, such as the length of sick-leave 35 and poorer postoperative recovery. 36 Negatively charged expectations toward pain and function, that is, kinesiophobia and catastrophizing, were also important predictors of knee osteoarthritis symptoms. A low tendency for pain catastrophizing predicted less pain (WOMAC), while low scores in the Tampa Scale of Kinesiophobia predicted better generic (RAND-36 function, mental and physical summaries) and disease-specific (WOMAC) function. Findings from previous research have also supported the importance of pain catastrophizing in predicting pain and explaining disability and psychological distress in knee osteoarthritis patients. 10,37 Moreover, kinesiophobia has been reported to influence function in osteoarthritis patients. 38 Among the factors associated with a healthy lifestyle, we found that a lower body mass index predicted a better physical component summary score (RAND-36). Earlier findings by Edwards et al. 3 demonstrated an association between a higher body mass index and lower objectively measured physical performance. Furthermore, the benefits of exercise training in reducing pain and improving function have been well established in knee osteoarthritis patients. 39 In the current study, those exercising more frequently had significantly better RAND-36 function scores. In our analysis, the number of comorbidities was found to predict both pain (WOMAC) and generic function (RAND-36 physical component summary). Earlier studies 31 have reported similar findings. Among the disease-related variables, the radiographic severity of knee osteoarthritis had predictive value for generic measures of function (RAND-36 function, mental and physical summaries). However, findings from previous studies have been somewhat contradictory on this matter. 
5 Additionally, baseline values for WOMAC pain and function and RAND-36 physical component summaries were significantly better than follow-up average values, a phenomenon demonstrated in several previous studies among osteoarthritis patients. 40 The strengths of the present study include the repeated examination of a number of pain, function and psychological outcomes and the use of X-ray grading (at least Kellgren-Lawrence grade 2) 13 to confirm the diagnosis of knee osteoarthritis. Furthermore, the study sample can be considered representative of ordinary community-dwelling knee osteoarthritis patients, as most of the participants (77%, n = 86) were enrolled in the study as a result of a previous referral to a knee X-ray by their general practitioners. On the other hand, a central limitation of this analysis could be the fact that the study patients were too narrowly selected due to the inclusion criteria of the original randomized controlled trial. Firstly, the patients had to have quite a high WOMAC pain subscale level (VAS ⩾40/100 mm) to be included. Almost half of the study candidates (47%, n = 209) had to be excluded because their WOMAC pain level was too low. 14 Secondly, the recruitment process may have resulted in the selection of patients who were more active and better off in some aspects of psychological well-being than the average knee osteoarthritis patient. The current analysis added to the limited number of prospective studies concerning the impact of psychological factors on pain and function in knee osteoarthritis patients. To our knowledge, the results provided some new information on the predictive role of pain catastrophizing, kinesiophobia and life satisfaction in self-reported pain and function in this particular patient group. Moreover, the finding that the radiographic severity of knee osteoarthritis had predictive value for generic measures of function was interesting. In general, the results call for the routine assessment of multiple psychological factors in knee osteoarthritis to identify those patients or sub-groups of patients who need additional behavioural and psychological attention. 10,41 Not taking these factors into consideration will probably contribute to prolonged disability and further pain. Clinical messages: • Among knee osteoarthritis patients, the absence of anxiety symptoms at baseline was a strong predictor of milder pain and better function during the follow-up. • Life satisfaction and pain self-efficacy predicted better function among knee osteoarthritis patients. • A low level of pain catastrophizing and kinesiophobia predicted milder symptoms of knee osteoarthritis.
2018-04-03T03:42:07.390Z
2013-03-01T00:00:00.000
{ "year": 2016, "sha1": "04951d16656a894eca0d292da2532143b3264e35", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0269215515619660", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "04951d16656a894eca0d292da2532143b3264e35", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
267755896
pes2o/s2orc
v3-fos-license
Experiences with waterjet hydrosurgery system in wound debridement Background Recently, a new device, the Versajet™, involving "Hydrosurgery Technology" which combines lavage and sharp debridement instrumentation has been described for soft tissue debridement. Methods The Versajet™ Hydrosurgery System utilizes a reusable power console with foot pedal activation, disposable handpiece and tubing assembly in conjunction with sterile saline and standard waste receptacle. The purpose of this paper is to report our experiences with this instrument in debridement of a variety of wounds prior to final reconstructive surgery. Technical details and pitfalls are discussed to facilitate clinical use. Results Efficient, safe and fast debridement was achieved in all patients using the hydrosurgery system. The actual time the hydrosurgery system was used for debridement averaged as 15.5 minutes. In ten patients, an adequately debrided wound bed was achieved with a single operative procedure, in four patients; two stages were required prior to reconstructive surgery. In one patient with recurrent sacral-iscial pressure sore, two debridements were carried out followed by long term vacuum assisted closure. The postoperative course was uneventful in all patients, but in three with a minor breakdown of the skin graft, which eventually healed with no surgical intervention. Conclusion As a result of our clinical experience, the Versajet™ enables surgeon to precisely target damaged and necrotic tissue and spare viable tissue. This modality may be a useful alternative tool for soft tissue debridement in certain cases. However, further studies are required to investigate its cost-effectiveness in wound management. Background Surgical debridement of necrotic tissue is an essential part of wound care prior to any reconstructive options. Sharp techniques utilized for this purpose have been the mainstay and are commonly used in combination with pulsed lavage and/or irrigation. Recently, a new device, the Versajet™ Hydrosurgery system, has been described for debridement. The Versajet™ hydrosurgery system utilizes high fluid technology for wound debridement. To date, few papers evaluating its role have been published in the literature [1][2][3][4]. The purpose of this paper is to report our experiences with this instrument in debridement of a variety of wounds prior to final reconstructive surgery. Technical details and pitfalls are discussed to facilitate clinical use. Hydrosurgery system The Versajet™ hydrosurgery system was provided by Smith & Nephew, Inc. Colorado. The US Food and Drug Administration (FDA) approved this new surgical instrument that utilizes a high powered parallel waterjet for wound debridement. The system consists of a reusable power console with foot pedal activation, disposable hand piece (15°/14 mm, 45°/14 mm, 45°/8 mm) and tubing assembly in conjunction with sterile saline and standard waste receptacle for maximized effectiveness (Fig. 1, 2, 3, 4). The hydrosurgery system projects a high-velocity waterjet across the operating window into an evacuation collector thereby creating a localized vacuum. The suction permits the surgeon to hold and cut targeted tissue while aspirating debris from the site. 
The cutting and aspiration effects can be controlled by adjusting console power settings (10: highest, 1: lowest), handpiece orientation, and handpiece pressure The hydrosurgery system was used for debridement of fifteen wounds in fifteen patients: 2 venous ulcers (distal leg extending to the foot dorsum, and the middle third of the leg), 1 sacral-iscial pressure sore, 1 burn wound to hand, 1 traumatic scalp wound, 2 traumatic thigh and/or popliteal fossa wounds, 2 traumatic leg wounds, 1 thigh wound following necrotizing fasciitis (Shooter's abscess), 1 leg wound following necrotizing fasciitis (Diabetic), 1 traumatic elbow wound, 1 traumatic foot wound, 1 traumatic arm wound, and 1 above knee amputation stump wound (Table 1). For all cases, a standard handpiece with a 45 degree angled tip and a 14 mm working window was employed. Four cases that underwent debridement with Close-up view of the handpiece Figure 3 Close-up view of the handpiece. The Versajet™ hydrosurgery console Figure 1 The Versajet™ hydrosurgery console. The system utilizes a reusable power console with foot pedal activation, disposable handpiece and tubing assembly in conjunction with sterile saline. the hydrosurgery system are presented in the following section. Patient 1 This was a 44-year-old male patient who presented to the Plastic Surgery outpatient clinic with a chronic non-healing wound on the left leg (10 × 15 cm) due to venous stasis. The wound had been present for three months and exhibited positive methicillin resistant staphylococcus aureus (MRSA) cultures on two occasions. The MRSA was treated with parenteral as well as oral antibiotics. Past medical history was significant for moderate rheumatoid arthritis for which the patient was taking 5 mg prednisone daily. The patient's wound was initially treated using wet to dry dressings for five days. Next, a single debridement with the Versajet™ hydrosurgery system (Power setting: 5) was performed followed by immediate split-thickness skin grafting. Early postoperative course was uneventful with full graft take (Fig. 5, 6, 7). Patient 2 This was a 42-year-old female patient who had a third degree burn injury to the left hand following a motor cycle accident. The injury affected the volar and proximal aspect of index, middle, ringer, little finger and distal palmar regions. An initial escharectomy was followed by two debridements using the hydrosurgery system (Power setting: 2-3). Subsequently, a thick partial thickness skin grafting was performed with full take (Fig. 8, 9, 10). Physical therapy was started within ten days after skin grafting with no disability or contracture formation at 3 month-follow-up. Patient 4 This was a 50-year-old female patient with a traumatic left elbow wound due to a motor vehicle accident. The soft tissue defect was 5 × 7 cm in size with exposed olecranon. Because of her additional abdominal injury, a wound vacuum assisted closure (Vacuum-assisted closure (V.A.C), KCI, San Antonio, TX) was placed. Within three days after The receptacle in which the debri and aspiration fluid are col-lected Figure 4 The receptacle in which the debri and aspiration fluid are collected. VAC treatment, debridement with the hydrosurgery system (Power setting: 5) was performed followed by immediate reconstruction with lateral arm island flap (Fig. 11, 12, 13). Physical therapy was initiated at one week after the surgery. The patient regained elbow functions in 2 months. 
Patient 5 This was a 30-year-old female patient with a traumatic degloving injury (15 × 25 cm) involving distal thigh and popliteal fossa extending to the lateral aspect of the knee on the right side. The mechanism of injury was a motor vehicle collision. The Versajet™ hydrosurgery system was used twice for wound debridement. The wound edges had 6-7 cm of dissection medially, laterally, and superiorly. The Versajet™ was also used for debridement under the dissected skin edges (Power setting: 6). The underneath of the avulsion skin flaps were tacked down to the wound bed to obliterate the dead space. The distal thigh wound was skin grafted, and the popliteal wound and lateral aspect of the knee were reconstructed with medial gastrocnemius muscle flap covered with split-thickness skin graft (Fig. 14, 15, 16). Minor wounds healed with no complications at three weeks. Results The Versajet™ hydrosurgery system was used for soft tissue debridement in fifteen patients with a variety of wounds. Debridement of the burn wound with the Versajet™ hydro-surgery system The wound after debridement with the Versajet™ hydrosur-gery system Figure 6 The wound after debridement with the Versajet™ hydrosurgery system. Demographics, wound characteristics, and type of the reconstructive surgery are displayed in Table 1. The patients' age ranged from 18 to 55 years, with an average of 39.1 years. All patients had debridements performed under general anesthesia in an operating room. In ten patients, an adequately debrided wound bed was achieved with a single operative procedure. In four patients, two steps were required before the final reconstructive surgery. In one patient with long-standing recurrent sacral-iscial pressure and poor general status, after dry eschar was removed, two debridements were carried out followed by long term V.A.C. application. No postoperative infection was noted in any of the patients. In three patients, minor breakdown of the skin graft was observed; however, they healed with no surgical intervention. The estimated average time per hydrosurgical procedures was 15.5 minutes. For the cases presented, a standard handpiece with a 45 degree angled tip and a 14 mm working window was employed. The lowest power setting used for debridement was 2-3 (hand burn case: patient 2), and the highest power setting was 8-9 for debridement of a deep pressure sore (patient 6). No safety issues such as sharp injuries or splash contamination were reported during the use of the Versajet™. Discussion Few articles have been published about the hydrosurgery system. The high-powered waterjet is a unique device compared to the pulse irrigator, which is a low-energy waterjet. The high-powered parallel waterjet device has the ability to focus a high-powered stream of water into a high-energy cutting implement. Water dissection works by the Venturi effect. A jet of saline, propelled by a power Debridement of the elbow wound with hydrosurgery system Figure 12 Debridement of the elbow wound with hydrosurgery system. One week after thick split-thickness skin grafting with full take Figure 10 One week after thick split-thickness skin grafting with full take. The healthy wound bed after the Versajet™ was used Figure 9 The healthy wound bed after the Versajet™ was used. Traumatic right elbow wound with exposed olecranon Figure 11 Traumatic right elbow wound with exposed olecranon. console, travels across the operating window of a handheld piece and then into a suction collector. 
This system of pressurized saline functions like a knife. The saline beam is aimed parallel to the wound so that the cutting mechanism is a highly controlled form of tangential excision [4]. We experienced many advantages utilizing the hydrosurgery system in fifteen patients. This single device technique combines lavage and sharp debridement instrumentation with single-handed operation due to holding and treating with one device. The device provided the control to hold targeted tissue during irrigation and excision, and importantly, the handpiece provided the ability to perform simultaneous debridement as well as removal of debris by aspiration. This helped keep the operative field cleaner and drier compared to conventional lavage techniques. The highly selective form of tangential excision enables surgeon to precisely target damaged and necrotic tissue and spare viable tissue. Furthermore, the hydrosurgery system offers multiple power settings for controlled excision around delicate tissues. The Versajet™ hydrosurgery system enables the surgeon to accurately control the cutting, debriding and aspiration effects by adjusting the console power settings from 1 to 10 as well as by angulating the hand piece. Increasing the power setting decreases the duration of debridement, Reconstruction of the popliteal defect and lateral aspect of the knee with medial gastrocnemius muscle flap Figure 15 Reconstruction of the popliteal defect and lateral aspect of the knee with medial gastrocnemius muscle flap. Immediate reconstruction with lateral arm island flap follow-ing debridement Figure 13 Immediate reconstruction with lateral arm island flap following debridement. Debridement of the underneath of the avulsion skin flaps with the Versajet™ handpiece, in addition to wound bed preparation Figure 14 Debridement of the underneath of the avulsion skin flaps with the Versajet™ handpiece, in addition to wound bed preparation. whereas decreasing the power setting increases the duration of debridement. Alternating pressure of the handpiece can further modulate its use and effect on the wound surface. It would be safer if one starts with a lower setting and makes appropriate adjustments based on the individual wound being excised. Preservation of nerves, vessels, and tendons is of utmost importance when performing debridement in the hand (as in patient 2), and surgical precision and control of excisional depth are particularly important. The power setting of 2 or 3 allowed an efficient and safe debridement in this case. Sharp techniques do not work well in small areas and in areas with a three-dimensional structure and the Versajet™ demonstrated particular advantage in the debridement of superficial to mid-thickness burns in areas like the face, hand, and foot which can be difficult to reach and contour with conventional techniques [2,3]. Another significant advantage of this device demonstrated was its ability to efficiently debride irregular and complex contour wounds such as deep pressure sores, traumatic wounds which can often be difficult to effect with cold knife (Patient 6, Fig. 17, 18). However, because in its current form, the device does not effectively debride desiccated eschar in pressure sores covered with dry eschar, the preferred approach is to sharply remove the eschar and then use the Versajet™ to debride the underlying necrotic tissue. The system enables rapid and controlled debridement, likely resulting in shorter procedure times. 
In our experience it reduced total operating room time for debridement with the estimated time of debridement on average approximately 1.5 times higher if Versajet™ debridement was not used. Although the scope of this paper is not to study cost-effectiveness of the system, but to describe our experiences with the Versajet™, there is some evidence indicating significant cost savings and reduction of the number of required debridement procedures with hydrosurgery system compared to conventional techniques. A complex contour pressure sore involving sacral, gluteal and ischial regions in a 48-year-old paraplegic patient Figure 17 A complex contour pressure sore involving sacral, gluteal and ischial regions in a 48-year-old paraplegic patient. Reconstruction was completed with split-thickness skin graft-ing Figure 16 Reconstruction was completed with split-thickness skin grafting. Despite the high cost of the disposable handpiece (355 $), it has been indicated that the hydrosurgical approach shortens hospitalization and healing time, thus allowing a total savings in wound management [1,4]. However, none of these studies 1,4 have included a well-designed control group to demonstrate conclusively that the instrument is cost effective. Further studies in this field that include a true control group in their study design are required. Nevertheless, based on our experience, we believe that the hydrosurgery system worked more efficiently in the presented case series for wound preparation and debridement with a satisfactory outcome compared to conventional sharp techniques utilized in similar control wounds. We have been using this hydrosurgery system for wound debridement for approximately one year and based on my sixteen-year experience in debridement of a variety of similar wounds with conventional sharp techniques and subsequent reconstruction, a comparison between conventional sharp debridement and hydrosurgery system is provided in the following paragraphs, in addition to pertinent comparison data from few available published reports. Chronic venous ulcers and diabetic wounds of the lower extremity were debrided more efficiently with the hydro-surgery system compared to the sharp knife. Versajet™ gradually removed the necrotic and/or fibrinous tissue and enabled us more precisely to preserve the viable tissue for a possible skin grafting. In such cases, the outcome was significantly better in Versajet™ group. Deep pressure sores, traumatic complex deep perineal wounds, traumatic deep extremity wounds with irregular surfaces were often hard to debride with sharp techniques, and we found the handpiece of the Versajet™ very efficient for debridement of the deep necrotic tissues. Similarly, in traumatic degloving type injuries where avulsion skin flaps developed, it was often very difficult to debride necrotic tissues under those flaps using conventional technique. By means of the handpiece, Versajet™ provided faster and more effective debridement that also allowed the flaps to easily attach to the recipient wound bed. Furthermore, sharp knife sometimes caused unexpectedly deep debridements jeopardizing the viability of the avulsed flaps. A clinical comparison revealed a better postoperative outcome in cases where hydrosurgery system was used. In our experience, in deep second degree as well as third degree burn injuries involving large flat surfaces, tangential excision using sharp knife has always been a superior option. 
In addition, using the Versajet™ in such cases increased both the time required for debridement and the cost. However, in small, three-dimensional areas requiring debridement, the Versajet™ proved to be a better alternative. There was no difference between the Versajet™ and conventional techniques in terms of operating time required for debridement, number of debridements and outcome when treating superficial second-degree burn wounds and superficial traumatic or postsurgical wounds, regardless of wound size. However, the Versajet™ proved to be costly when used in such cases. The Versajet™ was not an option for removing eschar. Furthermore, the Versajet™ was not effectively applicable in wounds where the bulk and amount of necrotic tissue load was very high. In such cases, conventional sharp techniques provided faster and more efficient debridement and a better outcome. Nevertheless, one can remove the bulky necrotic tissue with sharp techniques and use the Versajet™ for precise debridement in such cases; however, the cost of treatment would then be an issue. Use of the Versajet™ seemed to decrease the number of debridements required in preparation of the wounds in our cases. This was evidenced by the decreased postoperative wound infection and better reconstructive outcome with a higher skin graft take rate. Figure 18 shows use of the Versajet™ for debridement of a wound with irregular surfaces before placement of a wound V.A.C. In decubitus ulcers, venous ulcers and chronic ulceration from traumatic or post-surgical wounds, Granick et al. [4] demonstrated a statistically significant reduction in the number of debridements required to adequately prepare the wound bed for closure in the Versajet™ group compared with conventional techniques. This seemed to be associated with more efficient debridement leading to a decreased bacterial count. Mosti et al. [1] compared the Versajet™ and traditional moist dressing in leg ulcers and found that the bacterial burden decreased from 10⁶ to 10³ in approximately 42% of the patients in the Versajet™ group. However, a randomized controlled study seems necessary to demonstrate the effects of the Versajet™ system on quantitative bacterial load as compared with conventional techniques. Conclusion Efficient debridement of traumatic wounds, pressure sores, burn wounds, and chronic non-healing wounds due to diabetes mellitus, venous insufficiency or peripheral vascular disease is an essential and crucial step in wound management. In this article, the use of a hydrosurgery system that utilizes high fluid technology is presented for debridement of various wounds in fifteen patients. Technical details and pitfalls are provided to facilitate clinical application of the technique. Inability to remove hard eschar and to debride bone are two known drawbacks of the hydrosurgery system. Even though the hydrosurgery system cannot replace sharp techniques for desiccated eschar removal and other techniques for bone debridement, it can be an efficient alternative for soft tissue debridement in certain cases. We believe that this new tool will soon find greater applicability in our practice. However, its cost-effectiveness needs to be studied in detail with well-controlled studies.
2017-06-25T19:45:30.057Z
2007-05-02T00:00:00.000
{ "year": 2007, "sha1": "6eb693479e8b1c33dd83a9fa85308a2339fd10d2", "oa_license": "CCBY", "oa_url": "https://wjes.biomedcentral.com/counter/pdf/10.1186/1749-7922-2-10", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fdfac05cccca8e84d059970087c5e3a8492edf7c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
46722346
pes2o/s2orc
v3-fos-license
Complex Structure of Triangular Graphene: Electronic, Magnetic and Electromechanical Properties We have investigated electronic and magnetic properties of graphene nanodisks (nanosize triangular graphene) as well as electromechanical properties of graphene nanojunctions. Nanodisks are nanomagnets made of graphene, which are robust against perturbation such as impurities and lattice defects, where the ferromagnetic order is assured by Lieb's theorem. We can generate a spin current by spin filter, and manipulate it by a spin valve, a spin switch and other spintronic devices made of graphene nanodisks. We have analyzed nanodisk arrays, which have multi-degenerate perfect flat bands and are ferromagnet. By connecting two triangular graphene corners, we propose a nanomechanical switch and a rotator, which can detect a tiny angle rotation by measuring currents between the two corners. By making use of the strain induced Peierls transition of zigzag nanoribbons, we also propose a nanomechanical stretch sensor, in which the conductance can be switch off by a nanometer scale stretching. I. INTRODUCTION Graphene, which is a one-layer thick honeycomb structure of carbon, is an amazing material 1 . Electrons exhibit high mobility and travel micron distances without scattering at room temperature. It is a very thin and strong material, showing very high thermal conductivity. Graphene is now a main topic of nanoscience. Much attention has been focused on graphene nanoribbons, which is a one-dimensional ribbon-like derivatives of graphene. They have various band structure depending on the edge and width. In particular, zigzag gaphene nanoribbons show edge ferromagnetism due to almost flat low-energy band at the Fermi level. There are a profusion of papers on them, among which we cite some of early works [2][3][4] . Another basic element of graphene derivatives is a graphene nanodisk 5 . It is a nanometer-scale disk-like material which has a closed edge. It may be considered as a giant molecule made of aromatic compound. It is possible to manufacture them by etching a graphene sheet by Ni nanoparticles 6 . Among them, trigonal zigzag nanodisks have a novel electric property that there exist half-filled zero-energy states in the non-interacting regime, as was revealed first by the tight-binding model 5 and then by first-principle calculations [7][8][9] . Various remarkable properties of nanodisks have been investigated extensively in a series of works 5,[10][11][12] . Nanodisk is also referred to as nanoisland 7 , nanoflake 9,13,14 , nanofragment 15 or graphene quantum dot 16,17 . Nanoribbons and nanodisks correspond to quantum wires and quantum dots, respectively. They are candidates of future carbon-based nanoelectronics and spintronics alternative to silicon devices. A nanoribbon-nanodisk complex can in principle be fabricated, embodying various functions, only by etching a graphene sheet. Furthermore, graphene is common material and ecological. In this paper, exploring electronic, magnetic and electromechanical properties of trigonal zigzag graphene nanodisks [see Fig.1], we propose some application of nanodisk-nanoribbon complex to nanoelectronics, spintronics and electromechanics devices The nanodisk size is defined by N = Nben − 1 with Nben the number of benzenes on one side of the trigon. Here, N = 5. The A (B) sites on the lattice are indicated by red dots (while circles). The electron density is found to be localized along the edges. (b) The nanodisklead system. 
A nanodisk is connected to the right and left leads by tunneling coupling. It may act as a spin filter. II. ENERGY SPECTRUM We calculate the energy spectrum of the nanodisk based on the nearest-neighbor tight-binding model, which has proved to describe accurately the electronic structure of graphene, carbon nanotubes, graphene nanoribbons and other sp 2 carbon materials. The Hamiltonian is defined by where ε i is the site energy, t ij is the transfer energy, and c † i is the creation operator of the π electron at the site i. The summation is taken over all nearest neighboring sites i, j . Owing to their homogeneous geometrical configuration, we may take constant values for these energies, ε i = ε F and t ij = t ≈ 2.70eV. There exists one electron per one carbon, and the band-filling factor is 1/2. Then, the diagonal term yields just a constant, ε F N C , and can be neglected in the Hamiltonian, where N C is the number of carbon atoms. We define the size N of a nanodisk by N = N ben −1, where N ben is the number of benzenes on one side of the trigon as in Fig.1(a). It can be shown 5 that the determinant associated with the Hamiltonian (1) has such a factor as implying N -fold degeneracy of the zero-energy states. The gap energy is as large as a few eV for nanodisks with small N , where it is a good approximation to investigate the electronelectron interaction physics only in the zero-energy sector, by projecting the system to the subspace made of those zeroenergy states. As we shall see in Section V, the approximation remains to be good even for those with large N . III. TRIGONAL SYMMETRY The symmetry group of a trigonal nanodisk is C 3v , which is generated by the 2π/3 rotation c 3 and the mirror reflection σ v . It has the representation {A 1 , A 2 , E}. The A 1 representation is invariant under the rotation c 3 and the mirror reflection σ v . The A 2 representation is invariant under c 3 and antisymmetric under σ v . The E representation acquires ±2π/3 phase shift under the 2π/3 rotation. The A 1 and A 2 are 1-dimensional representations (singlets) and the E is a 2-dimensitional representation (doublet). These properties are summarized in the following character table: The zero-energy sector consists of N orthonormal states. We are able to index them by the wave number k along the edge. It is a continuous parameter for an infinitely long graphene edge. According to the tight-binding-model result, the flat band emerges for − π ≤ ak < −2π/3 and 2π/3 < ak ≤ π. (4) We focus on the wave function ψ (x, y) at one of the A sites on an edge, and investigate the phase shift when we step over to the neighboring site [see Fig.1(a)]. There are N links along one edge of the size-N nanodisk, for which we obtain the phase shift N ak, where a is the spacing between the neighboring A sites and k is the wave number along the edge. On the other hand, the phase shift is π at the corner. The total phase shift is 3N ak + 3π, when we encircle the nanodisk once. By requiring the single-valueness of the wave function, it is found to be quantized as where it follows that 0 ≤ n ≤ (N − 1)/2 from the allowed region (4) of the wave number. We may group the states according to the trigonal symmetry (3). With respect to the rotation there are three elements c 0 3 , c 3 , c 2 3 , which correspond to 1, e 2πi/3 , e 4πi/3 . Accordingly, the phase shift of one edge is 0, 2π/3, 4π/3. The state is FIG. 2. Probability density flow for the zero-energy states in the nanodisk with N = 7. 
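A minimal numerical check of the sublattice-counting argument behind this N-fold zero-energy degeneracy may be useful here: because the lattice is bipartite, any Hamiltonian with hopping only between A and B sites has at least |N_A − N_B| zero modes, and for the size-N trigonal zigzag nanodisk that imbalance equals N. The sketch below uses a random bipartite hopping matrix rather than the actual nanodisk geometry, and the chosen site counts are illustrative only.

import numpy as np

rng = np.random.default_rng(1)

def bipartite_zero_modes(n_a: int, n_b: int) -> int:
    """Count zero eigenvalues of a bipartite (A/B sublattice) hopping Hamiltonian.

    H = [[0, T], [T^dagger, 0]] has at least |n_a - n_b| zero modes, because the
    off-diagonal block T (shape n_a x n_b) has rank at most min(n_a, n_b).
    """
    t = rng.normal(size=(n_a, n_b))               # generic A<->B hoppings
    h = np.zeros((n_a + n_b, n_a + n_b))
    h[:n_a, n_a:] = t
    h[n_a:, :n_a] = t.T
    eig = np.linalg.eigvalsh(h)
    return int(np.sum(np.abs(eig) < 1e-10))

# Sublattice imbalance n_a - n_b = 5, playing the role of the nanodisk size N.
print(bipartite_zero_modes(n_a=12, n_b=7))        # -> 5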
The representation of the trigonal symmetry group C3v is indicated in the parenthesis. A vortex appears at the center of mass for the state belongs to the E (doublet) representation. The winding number at the center is 2 in the state |k + n . grouped according to the representation of the trigonal symmetry group C 3v as follows, (6) The zero-energy state is indexed by the quantized wave number as |k α n with (6). To see the meaning of the wave number k α n more in detail 12 , we have calculated the probability density flow, for states |k α n , which we show for the case of N = 7 in Fig.2. We observe clearly a texture of vortices. These vortices manifest themselves as magnetic vortices perpendicular to the nanodisk plane when the electromagnetic fields are coupled. The total winding number N vortex is calculated by with m = 0, 1, 2, · · · , ⌊(N − 1)/2⌋ in the size-N nanodisk, where ⌊a⌋ denotes the maximum integer equal to or smaller than a. We find N vortex = 3n for k 0 n , N vortex = 3n + 1 for k − n+1 and N vortex = 3n + 2 for k + n . The wave functions are classified in terms of modulo of the total winding number: The wave function belongs to the E-representation and has chiral edge mode for N vortex ≡ 1, 2 (mod 3), and belongs to the Arepresentation and has non-chiral edge mode for N vortex ≡ 0 (mod 3). The winding number of the vortex at the center of the nanodisk is 0, 1, 2 in the state |k 0 n , |k − n , |k + n , respectively. IV. QUASIFERROMAGNET The total spin of the ground state is determined by Lieb's theorem. The total spin is given by the difference of the A site and the B site, The susceptibility χ in unit of Sg. The size is N = 1, 2, 2 2 , · · · 2 10 . The horizontal axis stands for the temperature T in unit of JN/kB. The arrow represents the phase transition point Tc in the limit N → ∞. Hence we expect a nanodisk to act as a ferromagnet. The ferromagnetic ground state is robust against perturbations such as randomness and lattice defects since it is assured by Lieb's theorem. This feature brings out a remarkable contrast between nanodisks and nanoribbons. The ferromagnetic order is fragile due to the lack of Lieb's theorem in the case of nanoribbons. We investigate ferromagnetic properties by introducing Coulomb interactions into the zero-energy sector 5 . We have calculated specific heats and susceptibilites at temperature T in Fig.3. There appear singularities in thermodynamical quantities as N → ∞, which represent a phase transition at T c between the ferromagnet and paramagnet states, where T c = JN/2k B . For finite N , there are steep changes around T c , though they are not singularities. It is not a phase transition. However, it would be reasonable to call it a quasiphase transition between the quasiferromagnet and paramagnet states. Such a quasi-phase transition is manifest even in finite systems with N = 100 ∼ 1000. The specific heat takes nonzero-value for T > T c , as shown in Fig.3(a), which is zero in the limit N → ∞. The result indicates the existence of some correlations in the paramagnet state. On the other hand, the susceptibility χ always shows the Curie-Weiss low χ ∝ 1/T near T = 0, and exhibits also a behavior showing a quasi-phase transition at T = T c , as shown in Fig.3(b). In the finite system, the expectation value of S z,tot is always zero because there is no spontaneous symmetry breakdown in the finite system, and the behavior is that of paramagnet. V. 
MAGNETISM OF LARGE NANODISKS For large N nanodisks, the band gap decreases inversely proportional to the size. One may wonder if our analysis based only on the zero-energy sector is valid. Indeed, the size of experimentally available nanodisks is as large as N = 100 ∼ 1000. We wish to argue that our analysis based on the zeroenergy sector is essentially correct, even if the size N of the nanodisk is large and the band gap becomes very narrow. Near the Fermi energy, the density of states (DOS) D (ε) consists of that of the bulk graphene and an additional peak at the zero-energy states due to the edge states for N ≫ 1, as illustrated in Fig.4. Hence, together with spin degrees of freedom, it behaves as with a certain constant factor c. The linear term is due to the bulk states, and the Dirac delta function term is due to the edge states. The important point is that the edge-state peak is clearly distinguished from the DOS due to the bulk part. It is enough to take into account only the zero-energy sector to analyze physics near the Fermi energy, since the contribution from the edge states is dominant. We calculate the magnetization of a nanodisk when its size is large. We start with the Hubbard Hamiltonian, Let n ↑ , n ↓ be the average numbers of the up and down spins. The magnetization is given by m = n ↑ − n ↓ . It is determined self-consistently by the relation in terms of the Fermi distribution function and Substituting the formula (9) into the Storner equation (11), we obtain with the dilogarithm function Li 2 (x). It is difficult to solve this equation for m self-consistently at general temperature T . We examine two limits, T → 0 and T → ∞. For the zero temperature (T → 0) we obtain the magnetization as As a result, when we apply a spin-unpolarized current to the nanodisk, the outgoing current is spin polarized to the direction of the nanodisk spin. Consequently, this system acts as a spin filter. total magnetization. Hence the magnetization is m = N , and the ground state is fully poralized whenever U = 0. Ferromagnetism occurs irrespective of the strength of the Coulomb interaction. The magnetization is propotional not to N C but N . In this sence the ground state of nanodisk is not bulk ferromagnet but surface ferromagnet, which is consistent with the previous result. We next investigate the high temperatuer limit (T → ∞). Using the Taylor expansion of the dilogarithm function, The leading term is the second term, and hence the main contribution comes from the bulk. The solution is only m = 0 for which ∆ = 0. There is no magnetization at high temperature. A comment is in order. We have assumed that the magnetization axis is fixed and only longitudinal fluctuations of the magnetic moments take place. In general, spin-wave-like fluctuations are dominant at the edges of graphene 18 , because they are gapless Goldstone modes. On the contrary, there exist no gapless Goldstone modes in graphene nanodisks, because the edge is finite and closed. Furthermore, when the length of the edge is very small, spin-wave-like fluctuations have a large gap. Hence, our approximation is valid for nanodisks. VI. APPLICATION FOR SPINTRONIC DEVICES The nanodisk-spin system is a quasiferromagnet, which is an interpolating system between a single spin and a ferromagnet. It is easy to control a single spin by a tiny current but it does not hold the spin direction for a long time. On the other hand, a ferromagnet is very stable, but it is hard to control the spin direction by a tiny current. A nanodisk FIG. 6. 
The spin valve is made of two nanodisks with the same size, which are connected to leads. Applying an external magnetic field, we control the spin direction of the first nanodisk to be |θ and that of the second nanodisk to be |↑ . The incoming current is unpolarized, but the outgoing current is polarized and its magnitude can be controlled continuously. This acts as a spin valve. quasiferromagnet has an intermediate nature: It can be controlled by a relatively tiny current and yet holds the spin direction for quite a long time 5 . Taking advantage of these properties we have already proposed elsewhere 11 some applications of graphene nanodisk-lead systems to spintronic devices. They are spin filter, spin memory, spin amplifier, spin valve, spin-field-effect transistor, spin diode and spin switch, among which here we make a consice review of spin filter, spin valve and spin switch. We newly propose nanodisk arrays and nanomechanical switch. Spin filter: We consider a lead-nanodisk-lead system [see Fig.1(b)], where an electron makes a tunnelling from the left lead to the nanodisk and then to the right lead. This system is a reminiscence of a metal-ferromagnet-metal junction [see Fig.5]. If electrons in the lead has the same spin direction as the nanodisk spin, they can pass through the nanodisk freely. However, those with the opposite direction feel a large Coulomb barrier and are blocked (Pauli blockade [ Fig.5(c)]) 11 . As a result, when we apply a spin-unpolarized current to the nanodisk, the outgoing current is spin polarized to the direction of the nanodisk spin. Consequently, this system acts as a spin filter. Spin valve: A nanodisk can be used as a spin valve, inducing the giant magnetoresistance effect. We set up a system composed of two nanodisks sequentially connected with leads [see Fig.6]. We apply external magnetic field, and control the spin direction of the first nanodisk to be |θ = cos θ 2 |↑ + sin θ 2 |↓ , and that of the second nanodisk to be |0 = |↑ . We inject an unpolarized-spin current to the first nanodisk. The spin of the lead between the two nanodisks is polarized into the direction of |θ . Subsequently the current is filtered to the up-spin one by the second nanodisk. The outgoing current from the second nanodisk is I out ↑ = I cos θ 2 . We can control the magnitude of the up-polarized current from 0 to I by rotating the external magnetic field. The system act as a spin valve. Spin switch: We consider a chain of nanodisks and leads connected sequentially [see Fig.7]. Without external magnetic field, nanodisk spins are oriented randomly due to thermal fluctuations, and a current cannot go through the chain. However, when and only when a uniform magnetic field is applied FIG. 7. A chain of nonodisks and leads acts as a spin switch. Without external magnetic field, nanodisk spins are oriented randomly due to thermal fluctuations, and a current cannot go through the chain. However, as soon as a uniform magnetic field is applied to all nanodisks, the direction of all nanodisk spins become identical and a current can go through. to all nanodisks, the direction of all nanodisk spins become identical and a current can go through. Thus the system acts as a spin switch, showing a giant magnetoresistance effect. The advantage of this system is that a detailed control of magnetic field is not necessary in each nanodisk. Nanodisk arrays: We investigate nanodisk arrays, which are materials where nanodisks are connected in one-or twodimentions. 
These structures have already been manufactured by etching a graphene sheet by Ni nanoparticles 6 . We show an example of a trigonal-zigzag-nanodisk array sharing one zigzag edge as in Fig.8(a). We show the corresponding band structure in Fig.8(b). It is intriguing that there are N -fold degenerate perfect flat bands in the nanodisk with size N . This fact is confirmed by Leib's theorem. Each nanodisk has spin N/2 and makes ferromagnetic coupling between two nanodisks. In the same way we can make two-dimensional nanodisk arrays. It is to be emphasized that they show ferromagnetism, and not quasiferromagnetism, though they are made of nonmagnetic materials. The perfect flat band will be robust even when electron interactions are introduced. Nanomechanical switch: We construct a nanomechanical switch contacting two graphene trigonal corners [see Fig.9(a)]. We assume the angle between two corners is θ. This angle is tuned by an external mechanical force. The carbonskeleton structure is made of σ-bonds, and is very rigid except for this rotational degree of freedom. The conductance is determined by the overlap integral of π-electrons between two corners, which is given by When the two planes are parallel (θ = 0), the overlap takes the maximum value and π-electrons can go through the contact. This is the on state. When the two planes are orthogonal (θ = π/2), the overlap takes the minimum value and πelectrons can not go through the contact. This is the off state. The angle is changed nanomechanically. The system acts as a nanomechanical switch. It could detect the angle very sensitively and be useful for detect nanomechanical oscillations. By connecting two nanomechanical junctions, we can construct a nanomechanical rotator [ Fig.9(b)], where two corners are suspended mechanically while the central rhombus rotates rather freely. This structure may be useful to detect molecular dynamics. When molecules contact the rotator, they are detected by rotating it and changing the resistance between the two corners. Peierls instability: We connect two triangular corners with a zigzag nanoribbon [see Fig.10(a)]. This structure has already been manufactured experimentally 6 . Polyacetylene has the Peierls instability: Carbons with conjugate bonds are spontaneusly deformed into alternating single and double bonds. We expect the Peierls instability to occur also in graphene nanoribbons [ Fig.10(b)], because grapahene nanoribbon is a natural extension of polyacetylene 3 . (We denote the width of a nanoribbon by W with W = 0 for polyacetylene and W = 1 for polyacene.) However, as we now show, the Peierls instability will not occur in nanoribbons with the width W > 2. On the contrary, it is possible to induce the Peierls transition manually by streaching a nanoribbon. Mak- ing an advantage of this property, we propose a nanomechanical switch sensor. The model Hamiltonian is the Su-Schrieffer-Heeger-like model 19 , In this model, when the Peierls instability occurs, the transfer integral takes two values corresponding to the single and double bonds, and otherwise it takes one value corresponding to the conjugate bond. We are able to calculate the gap analytically in the case of polyacene (W = 1), where the band structure around k = π is given by The band gap is determined by substituting k x = π as Fig.10(c) by taking the values t 1 = t − δt 1 = 0.94t for single bonds and t 2 = t + δt 2 = 1.08t for double bonds. We have carried out a numerical estimation of the band sturacture for W ≥ 2. 
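The dispersion of the dimerized W = 0 chain (the polyacetylene limit of the Su-Schrieffer-Heeger-like model above) can be written in closed form, E±(k) = ±|t₁ + t₂e^{ik}|, so its gap 2|t₁ − t₂| is easy to check numerically. The sketch below covers only this W = 0 limit, evaluated with the alternating transfer integrals quoted above (t₁ = 0.94t, t₂ = 1.08t, t ≈ 2.70 eV); it is not the W ≥ 2 nanoribbon calculation reported next, whose gap is far smaller.

import numpy as np

t = 2.70                       # eV, transfer integral quoted in the text
t1, t2 = 0.94 * t, 1.08 * t    # alternating single/double-bond values from the text

def ssh_bands(k: np.ndarray) -> np.ndarray:
    """Dispersion of the dimerized (W = 0) chain: E(k) = +/- |t1 + t2 exp(ik)|."""
    return np.abs(t1 + t2 * np.exp(1j * k))

k = np.linspace(-np.pi, np.pi, 2001)
e = ssh_bands(k)
gap = 2.0 * e.min()            # the gap opens at k = pi and equals 2|t1 - t2|
print(f"gap = {gap:.3f} eV")   # ~0.76 eV in this W = 0 limit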
A tiny gap ∆ opens at k = π. The logarithmic plot of the band gap ∆ as a function of the width W is shown in Fig.10(d). The gap decreases exponentially as a function of the width, ∝ 10^(−1.7W). The gap for the case W = 1 (polyacene) is 49 meV, and that for the case W = 2 is 0.8 meV. We next compare the energy gain from this gap with the energy cost from the elastic energy of a lattice deformation. The ground-state energy difference between the distorted and undistorted structures is very tiny in polyacene 20 . On the other hand, the elastic energy cost is proportional to the width W, and hence the elastic energy cost becomes larger than the gap energy gain for wider nanoribbons. The Peierls instability will not occur spontaneously in nanoribbons with W > 2. We have also estimated how the band gap ∆ depends on the external force. The transfer integrals change proportionally to the external force. For simplicity we have set t_1 = (1 − 0.06λ)t and t_2 = (1 + 0.08λ)t. It takes about 10 GPa to produce a deformation of 0.01t for graphene, while it takes about 1 GPa to produce a deformation of 0.01t for narrow graphene nanoribbons 21 . We show the logarithmic plot of the λ dependence of the band gaps in Fig. 10(e). We propose an application of the above system. By stretching a nanoribbon along the ribbon direction, horizontal bonds are stretched and vertical bonds are shrunk. The resultant structure resembles the deformed structure induced by the Peierls transition. We may call it a strain-induced Peierls transition. When we stretch this structure, the band gap opens and the conductance at zero energy becomes zero by the strain-induced Peierls transition. On the other hand, without the external mechanical force, nanoribbons with W > 2 are gapless and the system is conductive. We can switch the conductance from on to off by stretching the system. The system acts as a nanomechanical switch sensor detecting nanoscale displacement. VII. CONCLUSIONS A nanodisk can be used as a spin filter just as in a metal-ferromagnet-metal junction. A novel feature is that the direction of the spin can be controlled by an external field or a spin current. We have newly proposed nanodisk arrays and a nanomechanical switch. These nanodisk-nanoribbon complex structures will open a new field of nanoelectronics, spintronics and nanoelectromechanics purely based on graphene.
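As a quick numerical illustration of the two relations quoted above, the Python sketch below evaluates the spin-valve output current I_out↑ = I cos(θ/2) as the field angle is rotated, and the empirical band-gap law ∆ ∝ 10^(−1.7W), anchored (our choice) to the quoted polyacene value ∆(W = 1) = 49 meV. The function names and the anchoring are ours, not the authors'; the current expression is reproduced verbatim from the text (a projection-probability argument would suggest cos²(θ/2), so the exponent may have been lost in extraction), and the anchored extrapolation gives roughly 1 meV at W = 2 rather than the quoted 0.8 meV, as expected for an approximate power-law fit.

```python
import math

def spin_valve_current(i_in, theta):
    """Up-polarized current leaving the second nanodisk, as printed in the text:
    I_out(up) = I * cos(theta/2), theta being the angle between the two disk spins."""
    return i_in * math.cos(theta / 2.0)

def peierls_gap_mev(width, gap_w1_mev=49.0, slope=1.7):
    """Empirical gap scaling Delta ~ 10**(-1.7*W), anchored (our assumption) to the
    quoted polyacene value Delta(W=1) = 49 meV."""
    return gap_w1_mev * 10.0 ** (-slope * (width - 1))

if __name__ == "__main__":
    for theta_deg in (0, 45, 90, 180):
        theta = math.radians(theta_deg)
        print(f"theta = {theta_deg:3d} deg -> I_out/I = {spin_valve_current(1.0, theta):.3f}")
    for w in (1, 2, 3):
        print(f"W = {w}: estimated gap ~ {peierls_gap_mev(w):.2f} meV")
```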
2011-01-19T04:45:33.000Z
2011-01-19T00:00:00.000
{ "year": 2011, "sha1": "4baedc60822338628a55872ec9d46947a8b72b0e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1101.3612", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "4baedc60822338628a55872ec9d46947a8b72b0e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics", "Medicine" ] }
214618468
pes2o/s2orc
v3-fos-license
Total Knee Arthroplasty with a Ti6Al4V/PEEK Prosthesis on an Osteoarthritis Rat Model: Behavioral and Neurophysiological Analysis Arthroplasty is a surgical procedure to restore the function of the joint of patient suffering from knee osteoarthritis. However, postoperative functional deficits are reported even after a rehabilitation program. In order to determine the origin of functional deficits of patient suffering from knee osteoarthritis and total knee arthroplasty, we developed a rodent model including a chemically-induced-osteoarthritis and designed a knee prosthesis (Ti6Al4V/PEEK) biomechanically and anatomically adapted to rat knee joint. Dynamic Weight-Bearing, gait kinematics, H-reflex from vastus medialis muscle and activities from metabosensitive III and IV afferent fibers in femoral nerve were assessed at 1 and 3 months post-surgery. Results indicate that knee osteoarthritis altered considerably the responses of afferent fibers to their known activators (i.e., lactic acid and potassium chloride) and consequently their ability to modulate the spinal sensorimotor loop, although, paradoxically, motor deficits seemed relatively light. On the contrary, results indicate that, after the total knee arthroplasty, the afferent responses and the sensorimotor function were slightly altered but that motor deficits were more severe. We conclude that neural changes attested by the recovery of the metabosensitive afferent activity and the sensorimotor loop were induced when a total knee replacement was performed and that these changes may disrupt or delay the locomotor recovery. The main goals of a TKA are to restore knee function and alleviate pain by correcting knee joint deformity. This process has steadily increased over the past decades and is projected to grow drastically in the coming years in line with prevalence of obesity, non adapted physical activities and osteoarthritis 7,8 . Establishment of a knee prosthesis is a major procedure because it results of an inevitable joint capsule opening, menisci removal and cruciate ligaments disruption causing important neuromuscular disturbances 9 . Postoperative functional deficits are mainly characterized by proprioception and kinesthesia impairment, strength loss, voluntary activation deficit and atrophy of both quadriceps and hamstring muscles [9][10][11][12] . To reduce these functional deficits, physical therapies, mainly based on neuromuscular electrical stimulation (NMES) and voluntary physical activities, are commonly recommended. Unfortunately, despite these strategies, patient recovery remains often partial as demonstrated by several case reports [11][12][13][14] . For example, it was concluded that even 6 to 13 years post-surgery and despite physical rehabilitation, quadriceps muscle weakness persisted in people with a TKA compared with healthy people 15 . Currently, no universal rehabilitation protocol for patients who have undergone a TKA exists 14 . Moreover, the contralateral limb can also exhibit functional deficits even in the absence of joint disease, because of the functional limitation imposed by the operated knee 10 . Indeed, several studies demonstrated that quadriceps/hamstring muscles weakness and associated co-activation occurred likewise bilaterally resulting in knee stiffness 16,17 . Thus, ability to use appropriate motor strategies decreases and related gait functionality are altered because to a switch to slower and safer patterns 9,[18][19][20] . 
Over time, this phenomenon could lead to contralateral knee degradation which cannot sustain higher repeated weight bearing strain, and could finally result in a second TKA process of the contralateral healthy knee. This adaptive strategy is presumed to increase joint stability during prolonged activity 17 and reflects the role of sensorimotor loop in overcoming task constraints despite heavy neuromuscular deficits. However, during the first year after surgery, Swinkels et al., reported a 24.2% of postoperative fall rate for patients who have had a TKA and a 45.8% fall rate for patients identified as fallers prior to surgery showing that nervous compensation strategies are not sufficient to ensure the safety of operated patients 21 . The underlying mechanisms of neuromuscular changes in patients undergoing a TKA are not well understood 9,22 . However, in OA and TKA context, joint and muscular lesions induce afferent impairment leading to central nervous changes. Indeed, sensorimotor loop and central motor drive are modulated by chemical and mechanical changes in skin, joints and muscles [23][24][25] . More precisely, motor control is partially regulated by both groups I/II (mechanosensitive) and groups III/IV (mechano-and metabosensitive) joint and muscle afferents. As explained above, in context of OA and/or TKA, joint, cutaneous, and mainly muscle mechanosensitive (proprioceptive) information from group I and II could be altered. However, afferent fibers from groups III and IV could play a leading role because they are activated by different chemical (mainly potassium, lactic acid, inflammatory mediators) and mechanical stimuli (intramuscular pressure) 26 and because their activation leads to modulation of reflexes mediated by group I and II afferent fibers, α-motoneuron activity and descending motor drive 24,25 . Thus, any change in III and IV afferent activity may delay the recovery because of an inappropriate neural strategy of recovery. Furthermore, as suggested in the literature, it seems relevant to explore the neuromuscular adaptations from patient suffering from knee osteoarthritis and TKA in order to more efficiently target the origin of functional deficits and improve long-term outcomes 9,22 . In order to determine the origin of the functional deficits observed after knee osteoarthritis and TKA, we developed a rodent model including a chemically-induced-OA and a knee prosthesis biomechanically and anatomically adapted to rat knee joint. To the best of our knowledge, only two studies have already attempted to create and to implant a knee prosthesis on rat 27,28 . However, both of these models were not really suitable as their shape did not respect knee joint surface, leading to joint instability and bone fractures. In this study, the first challenge was, after modeling the knee joint, to design a total knee prosthesis complying normal anatomical and biomechanical features of the rat knee joint. Then, the prosthesis was implanted after a chemically-induced-OA and Dynamic Weight-Bearing, gait kinematic, H-reflex from vastus medialis muscle and activities from III and IV afferent fiber in femoral nerve were assessed at 1 and 3 months later. The study was completed by a histological analysis of the joint. Materials and Methods Animals and experimental design. Forty-eight adult male Sprague Dawley rats (12 weeks old), weighing 400 g (Centre d'Elevage Roger JANVIER ® , Le Genest Saint Isle, France), were housed in plastic cages at 22 °C with a 12 h light/dark cycle. 
Food (Safe ® , Augy, France) and water were available ad libitum. An acclimation period of one week was allowed before the initiation of the experiment. All animals were weighed before each experimental step. Rats were randomly assigned to four experimental groups: 1) Control group (n = 12) which received no treatment; 2) Sham group (n = 12) in which TKA surgery approach was made without injuring knee joint capsule; 3) MIA group (n = 12) which received knee intra-joint injection of mono-iodoacetate leading to an OA and 4) TKA group (n = 12) undergoing a TKA three weeks after a mono-iodoacetate injection. This last group remains consistent with clinical reasons (i.e., OA) leading to TKA prescription. Each of these four groups was divided into two subcategories according to the recovery time before the electrophysiological analysis (i.e. one month -M1 or three months -M3). In order to design the total knee prosthesis, three additional rats were sacrificed prior to the study. Their left tibial and femoral bones were collected and modeled. ethical approval. Anesthesia and surgical procedures were performed according to the French law on animal care guidelines. The Animal Care Committees of Aix-Marseille Université (AMU) and Centre National de la Recherche Scientifique (CNRS) approved our protocols. Individuals conducting the research were listed in the authorized personnel section of the animal research protocol or added to a previously approved protocol (License A 13 01306). Furthermore, experiments were performed following the recommendations provided in the Guide for Care and Use of Laboratory Animals (U.S. Department of Health and Human Services, National Institutes of Health) and in accordance with the European Community's council directive of 24 November 1986 (86/609/ EEC), the ARRIVE Guidelines and the U.K Animal (Scientific Procedure) Act,. 1986. All these guidelines were carefully followed. No clinical sign of pain or unpleasant sensations (i.e. screech, prostration, hyperactivity, anorexia) or paw-eating behavior was observed throughout the study. prosthesis design. In order to create a prosthesis that is biomechanically and anatomically adapted to the rat knee joint, the same methods employed in human clinic were used. Briefly, collected tibial and femoral bones were immersed during 4 hours in boiling water with antibacterial soap in order to completely separate bones from surrounding tissue. Bones were digitalized using an optical measuring system (3D Scanner ATOS 3, GOM ® , Braunschweig, Germany) and the data were treated by an inverse engineering methodology of a computer-assisted design system (CATIA V5, Dassault System ® , Velizy, France). Thus, three-dimensional (3D) numerical model of rat knee joint was obtained. From this model, a computer-assisted design system allowed to design the geometrically-adapted tibial and femoral components of the future prosthesis (Fig. 1). Femoral and tibial components were respectively machined from a titanium alloy (Ti6Al4V) and a polymer material (polyether-ether-ketone; PEEK) by a 5-axes micro-milling machine (US 20, DMG-Mori ® , Leonberg, Germany). The programming process for machining each piece was performed by an ISO standard program generated by the CATIA V5 system. osteoarthritis induction. Monoiodoacetate (Sigma Aldrich ® , St. Louis, MO, USA) injection is recognized as an effective technique allowing histologic and morphologic changes in joint cartilage, closely similar to those observed for patients with OA [29][30][31][32] . 
By promoting glyceraldehyde-3-phosphate dehydrogenase activity inhibition of chondrocytes, mono-iodoacetate induces glycolysis disruption resulting finally in progressive chondrocytes cell death 32,33 . Thus, an injection of 1 mg of mono-iodoacetate diluted in 50 μl of saline is sufficient to induce maximal functional deficits by the 14 days post-injection 30 . In order to induce knee osteoarthritis, rats from MIA and TKA groups were slightly anesthetized by a transcutaneous intramuscular injection of a mixture of 0.65 ml of ketamine (Ketamine 1000, Virbac ® , Carros, France) and 0.25 ml of chlorpromazine (Largactil ® , 0.1 ml per 100 g, Sanofi Aventis Laboratory ® , Paris, France). Then, a single intra-joint injection of mono-iodoacetate (1 mg diluted in 50 μl of saline) was performed in the left knee joint with a microsyringe (Ito Corporation Exmire ® , Shizuoka, Japan). total knee arthroplasty surgery. Rats were anesthetized by an intraperitoneal injection of a mixture of medetomidine hydrochloride (0.5 mg.kg −1 , Medetor ® , Virbac ® ) and ketamine (75 mg.kg −1 ). The left hindpaw was shaved, and, according to the human parapatellar approach, a 1.5-cm skin incision was made along the medial border of the patellar tendon. The latter was laterally reclined and maintained in this position with retractors. In order to improve joint exposure, knee was positioned in flexion allowing thereby cruciate ligaments section and meniscectomy. Then, a portion of joint surface of the femoral condyles and tibial tray, with volume and shape equivalent to femoral and tibial component of the prosthesis, was removed using a micro milling/grinder machine (Dremel ® 300 series multitool, Robet Bosch ® SAS, Saint-Ouen, France). Suitable femoral and tibial guides were specially designed and used to ensure proper placement of tibial and femoral components. These guides were also used as drill barrels to introduce tibial and femoral prosthesis stems into bones to optimize implant-to-bone adhesion. As in orthopedic surgery, femoral and tibial components were fixed with bone acrylic www.nature.com/scientificreports www.nature.com/scientificreports/ cement (CEMFIX 1, Teknimen ® , L'Union, France) spread on the inner surface on each component. The fixation was performed by manual pressure exerted for several seconds until cement dries to ensure good attachment. Following this drying delay, knee was positioned in extension to check the well alignment between femoral and tibial prosthesis. Then, patellar tendon was replaced above the knee, sutured to muscular edge and skin was subsequently sutured (Vicryl ® 3-0, Ethicon ® , Johnson & Johnson ® Medical SAS, Issy-les-Moulineaux, France). Finally, a 0.2 ml injection of local anesthetic (Lidocaine T7394c, Sigma-Aldrich ® ) was performed subcutaneously around the implantation site to minimize post-operative pain. Behavioral tests. Dynamic weight-bearing. Weight distribution is considered as a relevant index of pain and is based on the assumption that nociceptive inputs can alter paw weight distribution 30,34,35 . Dynamic weight-bearing (DWB) device (Bioseb ® Development, Vitrolles, France) consists of a biometric floor instrumented cage (Captor surface: 10.89 mm 2 , Captor threshold: 0.1 g; Matrix Sensor 5250 type:/10, Tekscan ® Inc., Boston, MA, USA) allowing discrimination and measurement of the pressure exerted by each rat paw during short periods of free moving 34 . However, the paw weight-bearing can only be analyzed in static position periods during trials. 
Thereby, weight distribution functional deficits associated with trauma of central and/or peripheral nervous system were objectively identified [34][35][36] . In the present study, each rat was placed on DWB device and pressures (g/unit area) of each paw were recorded during two trials of 5 min each. The first step of data processing was to assign each pressure zone to the corresponding rat paw by means of a synchronized video-recording and scaled map of the stimulated captors. Only stable paw pressures of at least 0.2 s were retained. Then, home-made MatLab ® functions allowed us to: i) normalize the time periods spent on 2 and 4 paws by the total time spent in static position for each recording, ii) normalize pressures in rearing condition (pressures exerted only on the two hindpaws) to the total weight of the animal and iii) normalize pressures in standard condition (pressures exerted by the four paws) to the total weight of the animal. Gait kinematics. Two weeks before data acquisition, animals were trained within an enclosed plexigas ® walkway (L150 x W9 x H40 cm 3 ) which was illuminated by two spotlights. A camera (iPhone 5, Apple ® Cupertino, CA, USA) was positioned perpendicularly to the walkway allowing the recording of rat locomotion. Prior to data acquisition, the left hindpaw of each rat was shaved and four anatomical landmarks were drawn with an indelible marker on the greater trochanter, knee lateral condyle, ankle medial malleolus and the 5 th metatarsal bone, thus identifying the hip-knee, knee-ankle and ankle-paw bone segments. Rats were successively placed on the right side of the walkway such that their left side was exposed to the camera lens and a sound stimulus was used to initiate their locomotion. Each rat accomplished five trials which were validated only if they performed at least four consecutive gait cycles at a steady pace in the field of view of the camera. Then, video analysis software (Kinovea ® , Free software, Association Kinovea, Le Taillan Médoc, France) was used to extract the 2D coordinates of the four joint markers. The angular evolution of knee and ankle during gait cycles was then reconstructed. Based on 5 th metatarsal height, stance and swing phases of each gait cycle were identified. In order to generate average group profiles, both phases of all trials were normalized with respect to time. According to previous study, the transition from stance into swing phase was estimated to occur around 60% of gait cycle 37-39 . electrophysiological recordings. Rats were anesthetized by an intraperitoneal injection of urethane (0.12 g/ml; 1 ml/100 g; Sigma-Aldrich ® ). The inner part of the healthy hindpaw was incised and femoral artery was isolated from surrounding tissues. In order to allow injections chemical known to activate III and IV afferent fibers, a catheter was inserted into the proximal portion of the artery toward the abdominal fork. The operated/ treated hindpaw was also incised in order to isolate femoral nerve. Rat was then placed in a Faraday cage, to avoid signal noise, and femoral nerve was placed on stimulation electrodes (MLA 1204 needle electrodes, 29 gauge, 2 mm pin; ADInstruments Ltd., Paris, France) and covered with paraffin oil to prevent drying. To collect electromyographic activity (EMG), two electrodes were inserted into left vastus medialis muscle. A supplemental reference electrode was positioned on neutral tissue. 
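For readers who want to reproduce the dynamic weight-bearing post-processing described above, the following Python sketch illustrates the three normalization steps attributed to the home-made MatLab functions (time spent on two versus four paws normalized to total static time, and paw pressures normalized to body weight). The data layout, variable names and handling of the 0.2 s stability criterion are our assumptions; this is an illustrative equivalent, not the authors' code.

```python
import numpy as np

def normalize_dwb(static_episodes, body_weight_g):
    """static_episodes: list of stable posture episodes (each lasting >= 0.2 s), e.g.
    {"duration_s": 1.3, "n_paws": 4, "pressures_g": {"LF": 80, "RF": 85, "LH": 120, "RH": 115}}.
    Returns time fractions on 2 vs 4 paws and weight-normalized mean paw pressures."""
    total_static = sum(e["duration_s"] for e in static_episodes)
    time_2p = sum(e["duration_s"] for e in static_episodes if e["n_paws"] == 2) / total_static
    time_4p = sum(e["duration_s"] for e in static_episodes if e["n_paws"] == 4) / total_static

    def mean_normalized(n_paws):
        episodes = [e for e in static_episodes if e["n_paws"] == n_paws]
        if not episodes:
            return {}
        paws = episodes[0]["pressures_g"].keys()
        return {p: np.mean([e["pressures_g"][p] for e in episodes]) / body_weight_g
                for p in paws}

    return {
        "fraction_time_2_paws": time_2p,
        "fraction_time_4_paws": time_4p,
        "rearing_pressures_fraction_bw": mean_normalized(2),   # hindpaws only
        "standard_pressures_fraction_bw": mean_normalized(4),  # all four paws
    }
```

Returning to the electrophysiological set-up: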
Using a differential amplifier, the nerve signal was amplified (2 kHz) and band-passed filtered (100 Hz to 3 kHz). H-reflex. In order to evaluate the maximal amplitude of the H-reflex, stimulation intensity was progressively incremented (by 0.01 mA) from motor threshold to obtain maximal amplitude of the H-reflex (H max ). Since H max was obtained, electrodes position remains unchanged. In order to control H max stability, a series of twenty stimulations was then performed at a frequency of 0.1 Hz. After 10 minutes of rest, a new series of twenty stimulations was performed during which a mixture of 0.5 ml of potassium chloride (KCl, 10 mmol/l) and 0.5 ml of lactic acid (AL, 25 mmol/l) was injected through the catheter at the 6 th stimulation. This protocol allowed verifying the effect of III and IV muscle afferent activation on H-reflex response. Indeed, it was previously demonstrated that specific activation of these afferent groups induced a H-reflex inhibition 40 . Thus, fourteen post-injection reflexes were averaged and compared to the six pre-injection reference reflexes. Then, stimulation intensity was increased in order to attain the maximal amplitude of the M-wave (M max ). The same injection protocol was performed to verify M max stability and the absence of M max response when III and IV muscle afferents were activated. So, to overcome the impact of changes in muscular conduction properties, results were expressed as a ratio of H max / M max . Amplitude variation occurring after chemical injection was only due to spinal and supra-spinal regulatory mechanisms. III and IV afferent fibers. Following H-reflex recording, femoral nerve was cut in its proximal part to only record the afferent activity. The distal part of the femoral nerve was placed on bipolar reception electrode and immersed in paraffin oil. Electrical signal, recorded in absence of any movement, was amplified (10-100 K) with (2020) 10:5277 | https://doi.org/10.1038/s41598-020-62146-0 www.nature.com/scientificreports www.nature.com/scientificreports/ a differential amplifier (P2MP ® SARL, Marseille, France), filtered (pass-band filter: 30 Hz to 10 kHz) and discriminated in order to exclude aberrant signals (P2MP ® SARL). Discharge rate was calculated with a software (Biopac MP150 ® and AcqKnowledge ® software, Biopac System, California, USA) and the variation of afferent discharge frequency during data acquisition was calculated off-line using an in-house MatLab ® program. First of all, basal discharge of afferent fibers (without chemical injection) was recorded during 230 seconds. This baseline was considered acceptable if discharge variation did not exceed 3%. Thus, discharge variations recorded following chemical injections were considered to an increase in III and IV afferent activity and not to variations due to environmental conditions. Afferent activity was recorded during 60 s after an injection of a mixture of KCl/AL (0.5 ml) with a concentration of 5/15 mmol/l or 10/25 mmol/l. These concentrations are close to those naturally released by rat hindlimb muscles during physical activity. To avoid chemical accumulation and to give chemicals time to be degraded, a delay of 10 min was left between the two injections 41 . Post-injection afferent discharge was compared to pre-injection activity and expressed as a percentage of baseline discharge frequency (%). Histological analysis. 
At the end of the electrophysiological recordings, animal were killed with an overdose of anesthetic (Pentobarbital sodium, 180 mg/kg, i.p., Exagon ® , Axience, Pantin, France) and left knee joint of Control and MIA groups were removed from rat hindlimb. Femoral and tibial bones were sectioned at 0.5 cm on both side of the knee joint. Samples were completely cleaned of all surrounding soft tissues and fixed in 4% formaldehyde (Merk Millipore ® , Fontenay sous Bois, France) in 0.01 mol/l phosphate buffer saline (Sigma-Aldrich ® ) at pH 7.4 for 10 days. Samples were rinsed in PBS (pH 7.4) and decalcified in 12% of ethylenediaminetetraacetic acid (EDTA, Sigma-Aldrich ® ) during 2 weeks. Radiographies were regularly performed to control decalcification. When decalcification was achieved, samples were washed in deionized water and dehydrated in alcohol baths of increasing concentration (60°, 80°, 95°, 100°). Samples were left immersed 2 days in each bath at room temperature and finally cleaned with tissue clearing agent (Histo-Clear-II ® , National diagnostics ® , Atlanta, USA). Knee joints were infiltrated with immersion in three successive baths of liquid paraffin during 4 hours and embedded in solid paraffin (polyisobutylene mixture, Paraplast plus ® , Sigma-Aldrich ® ). Blocks of paraffin containing rat knee joints were cut in the sagittal plane using a microtome (Leica ® RM 2265, Wetzlar, Germany). Sections (thickness: 8 μm) were stained with toluidine blue (pH 5), dehydrated and mounted with rapid mounting medium for microscopy (Entellan ® , Merk Millipore ® ). Toluidine blue was used to highlight metachromatic properties of the cartilage. This cationic metachromatic blue dye binds to the cartilage proteoglycans leading to a coloring of the matrix in purple-red 42,43 . Histological sections were analyzed with an optical microscope (Olympus BX40, Olympus France SAS, Rungis, France) photographed using a CDD camera (Olympus DP21, Olympus France SAS) and evaluated using Mankin score for osteoarthritis. This score makes it possible to classify sections into no (scores: 1 to 5), moderate (scores: 6 to 10) and severe (scores: 11 to 14) osteoarthritis according to surface structure, chondrocytes aspect and coloration features 44 . Statistical analysis. All analyses were performed using statistical software (SigmaStat, Systat Software, Inc., San Jose, USA). Population normality was verified using the K² D'agostino-Pearson test. The results were processed through an analysis of variance (ANOVA). Results from DWB were analyzed by a three-way ANOVA (group effect, paw effect and time effect) while kinematics results were analyzed by a two-way ANOVA (group effect and time effect). Concerning kinematics analysis, ANOVA was carried out for each of the 30-time periods of walking. Finally, regarding electrophysiological data, a two-way ANOVA was carried out (group effect and time effect for the H-reflex; group effect and dose effect for afferent fiber activity). Data were expressed as mean ± SD. Results were considered significant if the p-value fell below 0.05. Scientific RepoRtS | (2020) 10:5277 | https://doi.org/10.1038/s41598-020-62146-0 www.nature.com/scientificreports www.nature.com/scientificreports/ Gait kinematic. Evolution of ankle and knee angles during normalized gait cycles is illustrated in Fig. 4. At M1, no difference was observed between groups in ankle angles during the stance and the swing phase. 
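Before continuing with the gait results, the statistical design and the histological scoring summarized in the Methods can be made concrete. The sketch below shows one way to run the three-way ANOVA used for the weight-bearing data (group x paw x time) and to band Mankin scores as described above, using statsmodels on a hypothetical long-format table; the column names are ours and the OLS-based ANOVA call is only an illustrative stand-in for the authors' SigmaStat workflow.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def dwb_three_way_anova(df: pd.DataFrame) -> pd.DataFrame:
    """Three-way ANOVA (group x paw x time) on weight-normalized pressures.
    df is expected in long format with columns: group, paw, time, pressure."""
    model = ols("pressure ~ C(group) * C(paw) * C(time)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)

def mankin_band(score: int) -> str:
    """Banding of the Mankin osteoarthritis score described in the Methods."""
    if 1 <= score <= 5:
        return "no osteoarthritis"
    if 6 <= score <= 10:
        return "moderate osteoarthritis"
    if 11 <= score <= 14:
        return "severe osteoarthritis"
    raise ValueError("Mankin score expected between 1 and 14")
```

Returning to the ankle-angle comparison: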
However, at M3, ankle angles of the MIA group were significantly higher at the beginning of stance phase compared to the SHAM (p < 0.05) and TKA (p < 0.01) groups and at the end of the swing phase compared to all the other groups (p < 0.001 for all). In addition, at the end of the swing phase, the MIA group showed significant (p < 0.05) higher ankle angles at three months compared to those measured at one month. Finally, at the beginning of the stance phase, ankle angles of TKA group were significantly higher (p < 0.05) at M1 than at M3. Concerning knee angles, at M1, values of the Control group in the first half of the stance phase were significantly higher (p < 0.001) than values of the SHAM and MIA groups. Values of the Control group was also significantly higher than values of the SHAM (p < 0.001) and MIA (p < 0.01) groups at the end of the swing phase. In addition, TKA knee angles were significantly lower (p < 0.001) than all the other groups during the first 66% of stance phase and during the last half of swing phase. At M3, TKA knee angles were also lower than those of Control (p < 0.001), SHAM (p < 0.05) and MIA (p < 0.001) groups during the first 66% of stance phase. Furthermore, knee angles of the MIA group www.nature.com/scientificreports www.nature.com/scientificreports/ were significantly higher (p < 0.001) to those of Control group at the beginning of the second half of swing phase and higher for the entire second half part than those of SHAM (p < 0.05) and TKA (p < 0.001) groups. At the end of swing phase, knee angle values of the Control group were significantly higher than those of SHAM (p < 0.01) and TKA (p < 0.001) groups. In addition, knee angles of TKA group were significantly higher (p < 0.001) at M3 compared to M1 during the first 66% of stance phase and during the last half of swing phase. Finally, MIA group showed also higher knee angles during the first third of stance phase (p < 0.05) and the last half of swing phase (p < 0.01) at M3 than at M1. However, H max /M max ratio was significantly lower (p < 0.01) in the MIA group compared to the TKA group. Finally, no difference was observed in each group between values calculated at M1 and M3. After chemical injection, at M1, no difference was observed between Control (−13.26 ± 4.3%) and SHAM (−12.17 ± 1.75%) groups. However, TKA group showed a significant (p < 0.001) higher H max /M max variation (−31.62 ± 1.8%) than Control, SHAM and MIA (−0.4 ± 3.3%) groups. In addition, H max /M max variation in the MIA group was also significantly lower (p < 0.001) than other groups. At M3, no difference was observed between Control (−17.30 ± 7.1%), SHAM (−11.96 ± 1.60%) and TKA (−12.50 ± 2.4%) groups. However, H max/ M max variation was significantly (p < 0.001) lower for MIA (−1.0 ± 1.4%) group compared to all other groups. Finally, except for TKA group that showed a significant (p < 0.001) lower H max /M max value at M3 than at M1, no difference was observed between values calculated at M1 and M3 for the others groups. III and IV afferent fibers. Data of afferent discharge variation following chemical injections are presented in Fig. 6. At M1, all group responded to chemical injections. However, no difference was observed between all groups following KCl/AL 5-15 mM injection (Control: 6.94 ± 1.16%; SHAM: 5.31 ± 1.55%; MIA: 4.61 ± 1.22%; TKA: 5.30 ± 0.56%). 
After KCl/AL 10-25 mM injection, only Control (9.66 ± 1.13%) and TKA (8.23 ± 1.39%) www.nature.com/scientificreports www.nature.com/scientificreports/ groups showed higher (p < 0.01) afferent discharge variation compared to 5-15 mM injection. Furthermore, following 10-25 mM injection, Control afferent discharge variation was higher (p < 0.05) than variation for SHAM (7.01 ± 1.51%) and MIA (5.89 ± 1.77%) groups. At M3, all group responded to chemical injections. However, afferent discharge variation was not different between groups following KCl/AL 5-15 mM injections (Control: 7.63 ± 1.46%; SHAM: 6.06 ± 1.17%; MIA: 4.97 ± 2.8%; TKA: 5.04 ± 0.42%). After KCl/AL 10-25 mM injection, afferent discharge variation was significantly higher (p < 0.05) for Control (11.08 ± 1.89%), SHAM (8.28 ± 1.18%) and TKA (6.50 ± 1.34%) groups compared to 5-15 mM injection. However, no difference was observed concerning discharge variation in the MIA group between 10-25 mM (3.58 ± 1.30%) and 5-15 mM injections. Afferent discharge variation of MIA group following 10-25 mM injection was significantly lower than discharge of the other groups (Control and SHAM: p < 0.001; TKA: p < 0.01). In addition, Control group showed higher (p < 0.001) afferent discharge variation than TKA group. Finally, afferent discharge variation of MIA group was lower (p < 0.05) at M3 than at M1. Histological analysis. Representative pictures of knee sections of Control and MIA groups at M1 and M3 are shown in Fig. 7. In the Control group, knee joints presented a thick layer of hyaline cartilage which demonstrated high content in proteoglycans and glycosaminoglycans. A high density of chondrocytes was also observed. Significant differences between the SHAM and MIA groups are indicated by #. Significant differences between the MIA and TKA groups are indicated by Δ. At M3, significant differences between MIA and TKA group are indicated by λ. (B) H max /M max ratio variation to injection of a mixture of KCl and LA (10-25-mM). At M1, significant differences with Control group are indicated by θ. Significant differences between the SHAM and TKA groups are indicated by δ. Significant differences between the MIA and TKA groups are indicated by ψ. Significant differences between the MIA and SHAM groups are indicated by Ω. At M3, significant differences with Control group are indicated by+. Significant differences between the MIA and TKA groups are indicated by χ. Significant differences between the MIA and SHAM groups are indicated by γ. Finally, in the same group, significant difference between M1 and M3 are indicated by π. (2 symbols: p < 0.01 and 3 symbols: p < 0.001). One month following intra-joint MIA injection, hyaline cartilage of MIA group showed important purple coloration deficits which highlighted a major decrease of proteoglycans and glycosaminoglycans, particularly at the center part of the joint where biomechanical constraints are important. Density of chondrocyte was poor and cells were disorganized. In addition, deep cracks were observed at joint surfaces and cartilaginous remnants were present in the intra-joint space. From these observations a Mankin score of 11 was assigned to the MIA group at M1. Three months following intra-joint MIA injection, there was poor hyaline cartilage staining which extended over all joint surfaces. Calcified cartilage did not seem to be damaged and still had a healthy appearance. Nevertheless, joint surfaces presented many deep cracks and a very low cell density. 
Some chondrocytes had a condensed appearance which seemed to reflect an apoptotic process. From these observations a Mankin score of 12 was assigned to the MIA group at M3. Figure 6. Afferent responses after injections of a mixture of KCl and LA. Discharge variation was expressed as a percentage of the baseline discharge. In each group, significant differences between the two mixtures (5-15 mM and 10-25 mM) are indicated by *. For a given dose, significant differences between group are indicated by + . Finally, in each group, significant differences between M1 and M3 for a given dose are indicated by #. (1 symbol: p < 0.05; 2 symbols: p < 0.01 and 3 symbols: p < 0.001). Discussion Nowadays, total knee arthroplasty (TKA) procedure is steadily increasing in most industrialized countries. This dramatic trend expected to continue in the coming years due to population aging, increasing physical activity and obesity progression 8,45 . Generally, resorting to total knee prosthesis establishment occurs for people suffering from severe knee osteoarthritis (OA), with severe pain and/or significant loss of autonomy due to the resulting motor limitations. TKA involves a heavy surgery engendering major motor deficits, mainly characterized by muscular atrophy of stabilizing knee muscles (quadriceps and hamstring), strength loss, voluntary activation deficits, and proprioceptive/kinesthetic deficits 12,16,46,47 . Paradoxically, even though motor deficits are necessarily due to motor control adjustment processes, very few studies have investigated these processes following a TKA altering a large number of sensory receptors and consequently disturbing the functioning of the sensorimotor loop 48 . Thus, any change in the sensorimotor loop may be at the origin of motor deficits and their persistence in time. In the present study, for the first time, we demonstrate that knee OA altered considerably the responses of III and IV afferent fibers to their known stimuli 49 and consequently their ability to modulate the sensorimotor loop, although, paradoxically, motor deficits seemed relatively light suggesting compensatory mechanisms. On the contrary, we demonstrate that TKA slightly altered the afferent responses and the sensorimotor loop but that motor deficits were more severe suggesting a central adaptive process. High magnification (C-D) reveals a linear organization of chondrocytes (encircled in red) which are dense and homogeneous throughout the cartilage and with a healthy appearance. At M1 (E-H), in the MIA group, the thickness of the cartilage is reduced as the number of proteoglycans in the hyaline cartilage of both femur and tibia (red rectangle). Calcified cartilage shows a reduced thickness, a low density of chondrocytes and cluster with residues of cells (*). Joint surfaces are cracked (dotted arrow) and cartilaginous remnants still containing a few cells are observed in the intra-joint space (*). At M3, proteoglycans are very rare from the whole joint surfaces. Joint surfaces present many deep cracks and a very marked hypocellularity with cells having a very condensed appearance (arrows). (2020) 10:5277 | https://doi.org/10.1038/s41598-020-62146-0 www.nature.com/scientificreports www.nature.com/scientificreports/ In accordance with several previous studies 50-53 , we observed that MIA injection induced severe cartilage damage demonstrating that this method was effective to induce severe OA. Thus, as in human clinical cases, our rat model is suitable for total knee arthroplasty. 
In our study, we noted, as for patients with OA, that joint cartilage impairment did not necessarily lead to significant functional limitations on the short term 54,55 . Moreover, it was reported that degenerative OA processes are not linear 56 . Indeed, the phases of stabilization and aggravation that follow one another could potentially explain the largest deficits observed in the first months (M1) compared to those in the third month (M3) even if at M3 the cartilage was more degraded. Moreover, habituation to pain could explain the weight bearing distribution on two and four paws in MIA group. Regarding gait kinematics, although knee angular variations in the MIA group were close to those of Control group, ankle angle values suggested that rats with OA adopt a strategy in which the ankle participated differently to locomotion in order to compensate knee joint deficits. This strategy had already been observed in people with knee OA 2,3 . Indeed, the literature reports that people with knee arthritis readjust their motor control to reduce knee contribution and increase that of ankle 57 . Although this study assessed knee OA effects on a period of three months, degenerative features of OA suggests that the relatively "good" functionality observed in MIA groups could be temporary. Indeed, like for patients with OA, we can expect that, with time, the progressive degradation of joint cartilage and pain will significantly alter knee functionality. The choice to implant our prosthesis model is then quite justified. Weight distribution (DWB) indicated changes from M1 to M3 in the mean pressures of the forepaws in the TKA group. Indeed, at M1, the pressures exerted by the forepaws in the TKA group were significantly higher than those of the other groups indicating a posterior-anterior compensation in which the forepaws were more solicited to lighten the weight in the posterior part of the body. At M3, a rebalancing was observed with an increase in the mean pressures exerted by the operated leg that may indicate a recovery. However, this recovery was incomplete because mean pressures on the operated side remained lower than those of the Control and SHAM groups and because an imbalance persists between the two anterior legs. Gait kinematics revealed that animals in which a total knee prosthesis was implanted were able to walk but were slightly leaning on the operated limb. Furthermore, results suggested that these implanted animals presented some recoveries characterized by a reduction of functional deficits with time, i.e., the deficits were lower at M3 compared to those at M1. These results were particularly encouraging because they concretely validated our prosthesis design anatomically and biomechanically adapted to rat knee. More precisely, at M1, knee angles of TKA group were significantly lower than those of the three other groups at the beginning of stance phase (at the beginning of the forward weight transfer) and at the end of swing phase (when hindpaw extends to prepare the ground contact). These results highlight a strong alteration of knee extension capacity for implanted animals. This is in accordance with several human clinical studies demonstrating major weakness of quadriceps, which is the main knee extensor muscle group 12,16,46,47,58,59 . At M1, knee angles of TKA group were still different compared to Control, SHAM and MIA groups. 
However, there was a marked improvement in gait kinematics, both at the beginning of the stance phase and at the end of swing phase compared to knee angles at M1. Unlike the MIA group, there did not seem to be any ankle compensation strategy that could be explained by the congruence of tibial and femoral parts of the prosthesis ensuring a low level of joint constraint and allowing to implanted animals to adopt the same walking strategy that Control group. Gait of TKA group remained deficient but recovered tending towards normality and not towards a phenomenon of compensation as described previously for the animals with knee OA. In the MIA group, joint surface incongruence involved a change in motor strategy with changes of ankle angles during locomotion. In the MIA group, the basal value of the H max /M max ratio and changes of this ratio following activation of the III and IV afferent fibers were very low at M1 and M3. On the contrary, H max /M max ratio was decreased in all other groups confirming previously results indicating that specific activation of III and IV by KCl and LA was followed by a decrease in H max amplitude 40 . Our results may suggest that knee OA disturbed the afferent fibers to modulate the motor order. This hypothesis was reinforced by the fact that, following KCl/LA 10-25 mM injection, increase in discharge frequency of III and IV afferent fibers was lower in MIA group compared to the other groups. Our results were consistent with previous electrophysiological studies demonstrating a direct relationship between OA severity and afferents sensitivity 60,61 . Moreover, many studies have suggested that joint damage can lead to arthrogenic muscle inhibition 62-65 that is a reduction in spinal reflexes excitability, including H-reflex by sensory information originating from damaged joint areas 62,64 . Thus, we can't not excluded that knee joint damages could inhibit motor units excitability of quadriceps. Moreover, inflammatory reaction associated with joint surfaces degradation is likely to contribute to the severity of nervous deficits. Indeed, cartilage damage promotes the release of inflammatory mediators such as cytokines including interleukins (IL-1β and IL-6) and tumor necrosis factors (TNF-α), and degradation enzymes such as metalloproteinases (MMPs) 3,66,67 . Some of these substances are known to stimulate III and IV afferent fibers 24,68,69 . Thus, a down-regulation or an over-activation of III and IV afferent fibers could explain the lack of response following high dose of KCl/LA injection and then a decreased sensory signal to different integrative centers. Finally, our results indicate that knee OA (MIA group) did not greatly affect motor activity suggesting the existence of compensatory mechanisms related to nervous system adaptation in response to changes in activities from III and IV afferent fibers. In the TKA group, the basal value of the H max /M max ratio was not different from Control group either at M1 or at M3. However, at M1, the H max /M max ratio variation following III and IV afferent fibers activation was greater than those other groups that could be explained by changes induced by arthroplasty surgery. Indeed, cruciate ligaments section, menisci removal and joint surfaces shortening may suppress inflammatory components, partially alter joint afferents and then arthrogenic muscle inhibition 63,64,70 . 
At M3, the H max /M max ratio variation following III and IV afferent fibers activation seemed to normalize; i.e., the variation was similar to that of the Control and SHAM groups. These results could be explained by an adaptive strategy to the remaining information from joint afferents. As a methodological limitation, it could be argued that our results are difficult to transpose to the human model. However, our model of chemically induced osteoarthritis was demonstrated to be closer to the clinical situation 39 . Another limitation lies in the complexity of the human knee prosthesis, since this prosthesis is composed of a femoral part and a tibial part with two components. For the latter, a metallic part is directly inserted into the tibial plateau and an upper part is made of ultra-high molecular weight polyethylene (UHMWPE). Consequently, this piece lowers friction and therefore decreases the delamination risk between the two metallic parts of the prosthesis. However, to simplify the problem in our model, we chose to work on a prosthesis made of only two parts. The tibial part was made of PEEK, which is known to provide good osseointegration without cytotoxicity and a good friction coefficient with the metallic part. The femoral part was classically made of Ti6Al4V. In conclusion, the aim of the present study was to determine the origin of functional deficits after knee OA and TKA. Our results indicated that after 3 months, knee OA did not induce significant motor functional limitations. However, the damaged joint surfaces were associated with an alteration of the III and IV afferent responses and with changes in the H max /M max ratio following their stimulation. Furthermore, TKA was associated with greater motor limitations which were not associated with altered responses of III and IV afferents. Finally, paradoxically, the adjustment of the central drive by III and IV afferent fibers seemed possible again when the knee prosthesis was implanted, which could suggest recovery. We conclude that neural changes observed when a total knee replacement was performed may disrupt or delay the locomotor recovery. Because human clinical studies seem to report that functional recovery is not complete following TKA, it could be interesting to explore, in future studies, the effect of improving the placement of the prosthesis and the associated surgical procedures on sensorimotor loop changes. Finally, we have therefore initiated an innovative "research and development" platform aimed at optimizing the clinical management of patients undergoing TKA thanks to research carried out on an animal model: the rat. The innovation of a prosthesis adapted to this model allows us to explore in a more invasive way the consequences of arthroplasty on neuromuscular adaptation. It will therefore be possible to work upstream to optimize the functional recovery of patients but also to find a consensus as to the best management strategy to adopt. Furthermore, our prosthesis model could allow in vivo testing of the effectiveness of innovative materials that do not yet have an orthopedic application. Data availability All data analyzed during this study are included in this publication. The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
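As a companion to the electrophysiological protocol described in the Methods (twenty H-reflex stimulations with the KCl/lactic-acid mixture injected at the 6th, and afferent discharge expressed relative to a 230 s baseline), the sketch below shows how the reported quantities could be computed. The array layouts, function names and the simple percent-change definitions are our assumptions, intended only to make the definitions explicit rather than to reproduce the authors' in-house MatLab routines.

```python
import numpy as np

def h_reflex_modulation(h_amplitudes, m_max, n_pre=6):
    """Hmax/Mmax ratio before vs after injection for one series of 20 stimulations.
    h_amplitudes: the 20 H-wave amplitudes; the mixture is injected at stimulus n_pre."""
    h = np.asarray(h_amplitudes, dtype=float)
    ratio_pre = h[:n_pre].mean() / m_max      # 6 pre-injection reference reflexes
    ratio_post = h[n_pre:].mean() / m_max     # 14 post-injection reflexes
    variation_pct = 100.0 * (ratio_post - ratio_pre) / ratio_pre
    return ratio_pre, ratio_post, variation_pct

def afferent_discharge_variation(baseline_hz, post_injection_hz, max_baseline_drift_pct=3.0):
    """Discharge change after KCl/lactic-acid injection, as a percentage of baseline.
    The baseline is accepted only if its drift stays within the stated 3 % tolerance."""
    base = np.asarray(baseline_hz, dtype=float)
    drift = 100.0 * (base.max() - base.min()) / base.mean()
    if drift > max_baseline_drift_pct:
        raise ValueError("unstable baseline: drift exceeds 3 %")
    return 100.0 * (np.mean(post_injection_hz) - base.mean()) / base.mean()
```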
2020-03-24T14:44:12.460Z
2020-03-24T00:00:00.000
{ "year": 2020, "sha1": "1bc12468e1add7f133e7111149765d995bb0def5", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-62146-0.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "dccd4590559f71db5a5bb5b5bf843abcbff4c4ff", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
214786492
pes2o/s2orc
v3-fos-license
The electrocardiogram in the diagnosis and management of patients with dilated cardiomyopathy The term dilated cardiomyopathy (DCM) defines a heterogeneous group of cardiac disorders, which are characterized by left ventricular or biventricular dilatation and systolic dysfunction in the absence of abnormal loading conditions or coronary artery disease sufficient to cause global systolic impairment. In approximately one third of cases, DCM is familial with a genetic pathogenesis and various patterns of inheritance. Although the electrocardiogram (ECG) has been considered traditionally non‐specific in DCM, the recently acquired knowledge of the genotype–phenotype correlations provides novel opportunities to identify patterns and abnormalities that may point toward specific DCM subtypes. A learned ECG interpretation in combination with an appropriate use of other ECG‐based techniques including ambulatory ECG monitoring, exercise tolerance test and imaging modalities, such as echocardiography and cardiovascular magnetic resonance, may allow the early identification of specific genetic or acquired forms of DCM. Furthermore, ECG abnormalities may reflect the severity of the disease and provide a useful tool in risk stratification and management. In the present review, we discuss the current role of the ECG in the diagnosis and management of DCM. We describe various clinical settings where the appropriate use and interpretation of the ECG can provide invaluable clues, contributing to the important role of this basic tool as cardiovascular medicine evolves. Introduction Dilated cardiomyopathy (DCM) is currently defined by the presence of left ventricular (LV) or biventricular dilatation and systolic dysfunction in the absence of abnormal loading conditions or coronary artery disease sufficient to cause global systolic impairment. 1 Dilated cardiomyopathy is an umbrella definition that encompasses multiple disorders where myocardial abnormality is not related to coronary or valvular or congenital heart disease and cannot be explained by abnormal haemodynamic conditions. 2 Besides being very generic, the term DCM may be downright inaccurate, as chamber dilatation is often absent: indeed, the term 'dilated' has recently been questioned. This nosographic complexity builds on a significant genetic heterogeneity with mutations found to be linked to the disease in at least 50 different individual genes and a polymorphic clinical presentation with arrhythmias and heart failure being the most common manifestation. 1 In this review, we will discuss the several reasons why the ECG is still a paramount piece of the puzzle in diagnosis, risk stratification and management of DCM. Following an overview of abnormalities involving each segment of the ECG, individual diseases characterized by specific patterns are discussed in detail. Methods The authors approached the topic formulating the research question: what is the role of ECG in diagnosis and management of DCM? Therefore a systematic search through the web-based engine PubMed was conducted in order to identify all studies meeting the eligibility criteria. The most relevant studies answering the main research question were selected. Finally, results were presented systematically taking into account the complexity of the disease and the various aetiologic backgrounds. 
Systematic approach in ECG interpretation Electrocardiographic abnormalities characterize the majority of patients with DCM, with abnormal ECG features reported in more than 80% of the cases ( Table 1). [3][4][5][6][7][8][9][10] Despite the traditional opinion that ECG abnormalities in DCM are non-specific, in contrast with other cardiomyopathies, such as hypertrophic cardiomyopathy (HCM) or arrhythmogenic right ventricular cardiomyopathy (ARVC), recent advances in the understanding of genotype-phenotype correlations provide the opportunity to recognize specific ECG patterns that are typical of certain genetic or acquired forms of DCM. 5 As the ECG is rarely normal in DCM, ECG abnormalities should trigger the initiation of a diagnostic work-up. However, when interpreting the ECG of patients with cardiomyopathies, the approach should be 'cardiomyopathy-oriented', i.e. abandoning classical concepts derived from the world of ischaemic and hypertensive heart disease, and focusing on specific 'red flags' ( Table 2) 1,[11][12][13][14][15][16][17][18][19] that should be carefully integrated in the broader clinical and familial context. P wave Atria are often dilated in DCM, reflecting raised filling pressures and/or associated valvular abnormalities. This may be reflected on the ECG with P-wave changes suggestive of left and/or atrial enlargement. 20 While isolated right atrial enlargement is uncommon, 3,5,6 left atrial enlargement is seen in a variable proportion of patients and is often considered a marker of long-standing disease. 3,[5][6][7] Atrial fibrillation (AF) is a common pathway for all forms of DCM following progression to heart failure. Early onset of AF in young individuals, however, may suggest specific DCM aetiologies, mainly of genetic origin 12 ( Table 2). PR interval First-degree and/or advanced atrio-ventricular (AV) blocks can be found in patients with DCM 1 ; conduction abnormalities, especially in young patients, suggest a specific genetic background often associated with neuromuscular diseases, laminopathy, or ion channel disorder. Conduction abnormalities are also relatively common in acquired conditions such as cardiac sarcoidosis and Chagas disease. 21 . QRS complex Loss of vital myocardium and diffuse LV fibrosis may both lead to reduced QRS amplitude, especially in the precordial leads. 3,5,10 Low QRS voltages may also reflect fat infiltration, such as in arrhythmogenic cardiomyopathy due to desmosomal gene mutations, with involvement of the left as well as the right ventricle. 21,22 When LV hypertrophy (LVH) voltage criteria (either Sokolow-Lyon or Cornell) are met in patients with DCM, a hypertensive aetiology should be excluded. [3][4][5] Left bundle branch block (LBBB) is found in roughly one third of patients with DCM, 23 sometimes preceding the structural phenotype, and carries an adverse prognostic value; LBBB can be the result of a discrete lesion within the his bundle, as suggested by Narula 24 explaining why pacing at the distal His bundle can improve electrical end echocardiographic desynchronization in patients with LBBB. 25 Many patients with a diagnosis of LBBB have a combination of LVH and left anterior fascicular block rather than true LBBB. 26 True LBBB should be diagnosed if QRS duration is ≥140 ms (130 ms in women), there is a QS or rS pattern in V1-V2 and mid-QRS notching or slurring in ≥2 of leads V1, V2, V5, V6, I, aVL. 
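The 'true LBBB' criteria just listed lend themselves to a simple rule-based check. The sketch below encodes one reading of them as stated in the text (QRS ≥140 ms in men or ≥130 ms in women, a QS or rS pattern in V1-V2, and mid-QRS notching or slurring in at least two of V1, V2, V5, V6, I, aVL); the input structure is hypothetical and the function is meant only to make the criteria explicit, not to replace expert ECG reading.

```python
def is_true_lbbb(qrs_ms, sex, v1_v2_morphology, notched_or_slurred_leads):
    """Rule-of-thumb check for 'true' LBBB as described in the text.

    qrs_ms: QRS duration in milliseconds
    sex: "M" or "F"
    v1_v2_morphology: e.g. {"V1": "QS", "V2": "rS"}
    notched_or_slurred_leads: leads showing mid-QRS notching or slurring
    """
    duration_ok = qrs_ms >= (130 if sex.upper() == "F" else 140)
    v1_v2_ok = all(v1_v2_morphology.get(lead) in ("QS", "rS") for lead in ("V1", "V2"))
    eligible = {"V1", "V2", "V5", "V6", "I", "aVL"}
    notching_ok = len(set(notched_or_slurred_leads) & eligible) >= 2
    return duration_ok and v1_v2_ok and notching_ok

# Example: a man with QRS 150 ms, QS in V1, rS in V2, notching in V5 and V6 -> True
print(is_true_lbbb(150, "M", {"V1": "QS", "V2": "rS"}, {"V5", "V6"}))
```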
After cardiac resynchronization therapy (CRT), this morphology is associated with a better echocardiographic response and survival than other intraventricular delays. 27 Right bundle branch block (RBBB) is generally uncommon in patients with DCM (2-6%) 3,5,6 but it is frequently found in patients with neuromuscular disease due to pathogenic variants in the dystrophin gene. 28 Q waves The lack of consensus regarding the definition of a pathological Q wave, with multiple proposed diagnostic criteria, constitutes a source of confusion. Q-wave duration ≥40 ms or an absolute depth of >3 mm are considered pathological criteria by some, whereas others recommend an amplitude ≥25% of the ensuing R wave. 20 Q waves may be observed in DCM in the absence of ischaemic heart disease and are more common in the anterior and lateral leads. 5,6 As discussed below, cardiac involvement in muscular dystrophies is often characterized by posterior or inferior Q waves, which reflect transmural myocardial fibrosis. 29 ST segment/T-wave abnormalities Repolarization abnormalities are common in DCM, and generally reflect LV impairment. T-wave inversion (TWI), especially in the lateral leads, is a recognized feature of certain genetic forms (for example filamin C or desmosomal disease). 13 In contrast to HCM, where striking repolarization abnormalities (such as deep TWI especially in the lateral leads) are common, TWI in DCM is less deep and not associated with voltage criteria for LVH. 14,30 QT interval The QTc interval is generally normal in DCM. A short QT interval has been associated with primary carnitine deficiency which may cause DCM. 31 QT variability at prolonged monitoring has been shown to be of potential use in sudden cardiac death (SCD) risk stratification in patients with DCM. 32 Ventricular premature beats Ventricular premature beats (VPBs) may be found in up to 40% of patients with DCM. 33 Frequent VPBs may promote LV systolic dysfunction and in some cases it may be challenging to establish whether VPBs are the main driving force of LV systolic dysfunction (tachycardiomyopathy), an arrhythmic manifestation of an underlying cardiomyopathy, or an innocent by-stander. There is no consensus regarding the burden of VPBs considered to be sufficient to cause LV systolic dysfunction; however, a high burden has been variably defined as ranging from >10 000 to 25 000 VPBs/day and as >10% to 24% of total heart-beats during the 24 h. 19 The type and not only the burden of VPBs is relevant in the differential diagnosis between forms of DCM. Moreover, certain VPBs morphologies are commonly identified in athletes and are considered benign. These include infundibular and fascicular morphologies. On the contrary, other morphologies of VPBs such as LBBB/intermediate or superior axis or RBBB/intermediate or superior axis and wide QRS may be a sign of underlying myocardial disease. 34 An 'arrhythmogenic' subset of genetic DCM, epitomized by lamin A/C and desmosomal forms of DCM, is characterized by complex and polymorphic ventricular arrhythmias (including frequent VPBs) early in the course of the disease, heralding increased risk of SCD. 12 The presence of VPBs and/or non-sustained ventricular tachycardia (NSVT) does not generally dictate the choice of protecting a patient with DCM phenotype with an implantable cardioverter-defibrillator (ICD) in primary prevention. 
The type, and not only the burden, of VPBs is relevant in the differential diagnosis between forms of DCM. Moreover, certain VPB morphologies are commonly identified in athletes and are considered benign; these include infundibular and fascicular morphologies. In contrast, other VPB morphologies, such as LBBB with intermediate or superior axis, or RBBB with intermediate or superior axis and wide QRS, may be a sign of underlying myocardial disease. 34 An 'arrhythmogenic' subset of genetic DCM, epitomized by lamin A/C and desmosomal forms of DCM, is characterized by complex and polymorphic ventricular arrhythmias (including frequent VPBs) early in the course of the disease, heralding increased risk of SCD. 12 The presence of VPBs and/or non-sustained ventricular tachycardia (NSVT) does not generally dictate the choice of protecting a patient with a DCM phenotype with an implantable cardioverter-defibrillator (ICD) in primary prevention. However, the presence of frequent arrhythmias, especially if associated with pathogenic variants in desmosomal genes and/or myocardial fibrosis at CMR, may suggest a high risk of SCD, and therefore an ICD should be considered in this setting. The development of ventricular arrhythmias at the exercise tolerance test, including an increase in VPBs or the development of NSVT during exercise, may be a sign of an arrhythmogenic phenotype with underlying desmosomal pathogenic variants. 1

Supraventricular arrhythmias

The identification of AF through ambulatory monitoring is an important aspect of the management of DCM and may dictate important choices such as the commencement of anticoagulation therapy for stroke prevention. 1 The detection of paroxysmal supraventricular arrhythmias in young patients with DCM should prompt investigation for familial LMNA cardiomyopathy. 12 In summary, from a practical standpoint, when approaching a patient with unexplained LV dilatation and/or systolic dysfunction, a systematic analysis of the ECG from the beginning of the P wave to the end of the T wave may provide invaluable clues that may point toward the diagnosis of specific subtypes, with implications for management and prognosis.

The ECG in specific genetic forms of dilated cardiomyopathy

Some ECG features are clues to specific genetic DCM subtypes, as certain disease-causing genes are associated with characteristic ECG abnormalities 35 (Table 2 and Figure 1) that may have diagnostic as well as prognostic value for patients and their relatives. For example, pathogenic variants in certain genes (lamin A/C, filamin C, desmosomal genes and phospholamban) may express an arrhythmogenic phenotype, which should lead to early decisions on ICD implantation in primary prevention. In the following lines, we describe ECG patterns in some of the most common DCM-associated genotypes.

Titin

Truncating variants in the gene for the sarcomeric protein titin (TTN) have been identified as the most common genetic cause of DCM, found in 10% to 20% of cases. 36 A typical ECG pattern of patients harbouring pathogenic variants in TTN has not been described. A recent study 37 on one of the largest cohorts to date showed that patients with TTN truncating variants had a higher prevalence of AF and ventricular arrhythmias than DCM patients with other aetiologies, while the prevalence of LBBB and conduction abnormalities was lower.

Lamin A/C

Variants of the lamin A/C gene (LMNA) are found in up to 8% of DCM cases. 38 Early conduction disease, manifesting as sinus bradycardia, sinus node arrest, AV block (first- or second-degree AV block, later progressing to complete heart block) or LBBB, is relatively common in this form of DCM. 1 Such findings often precede the development of an overt dilated phenotype (Figure 1A). These patients exhibit a high prevalence of supraventricular arrhythmias, in particular AF (present in almost half of the patients at their first presentation), but also atrial flutter and atrial tachycardia. 39 Frequent VPBs and episodes of NSVT may also be found on the ECG or at ambulatory monitoring. Risk of SCD and progression to refractory heart failure is common in this genetic subset. Similar to LMNA variants, variants in the emerin gene (EMD or STA), responsible for Emery-Dreifuss muscular dystrophy, frequently lead to conduction disturbances 40 and supraventricular arrhythmias. 41
Filamin C

Filamin C is an actin-cross-linking protein that contributes to anchoring cellular membrane proteins to the cytoskeleton. Filamin C gene (FLNC) variants account for about 4% of DCM cases. 13,16,38 Repolarization abnormalities, especially TWI in the precordial or inferolateral leads, are a common finding in patients with FLNC variants. 14

Dystrophin

The dystrophin gene (DMD) is located on the short arm of the X chromosome and shows an X-linked pattern of inheritance. Cardiac involvement is present in approximately 90% of cases of Duchenne's muscular dystrophy and 70% of Becker's muscular dystrophy. 1 The ECG in DCM due to DMD pathogenic variants classically mimics a posterior, inferior and/or lateral myocardial infarction, with abnormal Q waves in leads I, aVL and V6 or in leads II, III and aVF, associated with high-voltage R waves in leads V1 and V2, which is due to the progressive accumulation of a transmural scar in the posterolateral region of the left ventricle 42 (Figure 1C). A short PR interval and sinus tachycardia are also frequent, along with right-axis deviation and RBBB. 43

Desmin

Desmin is a cytoskeletal protein which forms muscle-specific intermediate filaments. Pathogenic variants in the gene encoding desmin (DES) cause a wide spectrum of phenotypes of different cardiomyopathies, skeletal myopathies, and mixed skeletal and cardiac myopathies. Desmin variants account for 1-2% of all cases of DCM. 1 ECG abnormalities are common in this setting (up to 60% of cases). 44 Conduction abnormalities are frequently observed (AV block and RBBB), followed by supraventricular and ventricular arrhythmias. 44

SCN5A

Pathogenic variants in the cardiac sodium channel gene (SCN5A) are associated with a wide spectrum of phenotypical expressions, including isolated conduction defects, ventricular arrhythmias with frequent VPBs, NSVT and familial, early-onset AF, which may be associated with LV systolic dysfunction 11,45 (Figure 1D).

Desmosomal genes

Although desmosomal variants have been historically associated with ARVC, recent studies have shown that ARVC is often characterized by LV involvement, with forms that are phenotypically very similar to DCM. 13,21 A recent study reported pathogenic variants in desmosomal genes in 3.5% of patients with DCM. 13 The ECG in patients harbouring variants in desmosomal genes may be characterized by low voltages in both limb and precordial leads, delayed ventricular depolarization, and repolarization abnormalities including TWI that may extend to the lateral leads (V5-V6) 46 (Figure 2). Low voltages are due to the typical subepicardial distribution of fibrofatty replacement within the left ventricle, often circumferential, preventing electrical transmission from the inner layers. Patients with DCM harbouring desmosomal gene variants often develop ventricular arrhythmias and are at risk of SCD. 13

RNA-binding motif 20

RNA-binding motif 20 (RBM20) is an RNA-binding protein, expressed highly in both atria and ventricles, involved in the alternative splicing process. DCM due to RBM20 variants is frequently associated with early onset, severe heart failure, and arrhythmic potential. 1 Although RBM20 mouse models showed a prolonged PR and heart rate-corrected QT interval, these features are not exhibited in humans, and a typical ECG pattern has not been described. A genetic diagnosis is important, as a distinct propensity to sustained ventricular arrhythmias has been observed in patients harbouring RBM20 variants. 47
Phospholamban

The PLN gene encodes phospholamban, a protein responsible for inhibition of sarco-/endoplasmic reticulum Ca2+-ATPase (SERCA) function. Variants in the PLN gene result in increased SERCA inhibition with defective calcium reuptake, with a consequent reduction in contractility and DCM phenotypic expression. In some DCM cohorts, especially in The Netherlands and in Germany (due to founder mutations), the prevalence of PLN variants is high. Typical ECG features are low QRS complex potentials and decreased R-wave amplitude, mainly in the anterolateral precordial leads. 15 In summary, the genetic background of DCM may be complex, and the ECG may suggest specific genetic abnormalities. ECG 'red flags' may indeed point toward particularly aggressive genetic forms that would require specific management, such as ICD implantation in primary prevention at an early stage of the disease.

The ECG in other cardiomyopathies with dilated hypokinetic phenotype

Left ventricular or biventricular dilatation and systolic dysfunction is a common final result of various disease processes. Cardiomyopathies characterized by myocardial infiltration or by LVH at an initial stage may progress toward LV dilatation and dysfunction. Typically, cardiac amyloidosis (especially the AL form) may be characterized by a significant reduction of QRS voltages, a feature that may also be shared by HCM in the so-called 'burn-out' phase. While this ECG sign reflects an infiltrative myocardial process in cardiac amyloidosis, it underlies a high myocardial fibrosis burden in HCM. 30

Figure 2 Electrocardiogram of a 32-year-old woman exhibiting a dilated cardiomyopathy phenotype, found to have a desmoplakin pathogenic variant. Note the low voltages on the limb and precordial leads. Cardiovascular magnetic resonance (late gadolinium enhancement sequences) shows subepicardial late gadolinium enhancement, which is more evident at the level of the right side of the interventricular septum and the basal lateral wall.

The ECG in non-genetic forms of dilated cardiomyopathy

A number of chemical compounds can induce DCM, the most common of which are chemotherapeutic agents, cocaine and alcohol. Inflammation and auto-immune responses can result in particularly malignant forms of DCM. 23 Some ECG features are typical of specific acquired forms of DCM (Table 3).

Inflammatory cardiomyopathies

Acute myocarditis may result in chronic inflammation and evolution to DCM in a variable proportion of patients. 48 The spectrum of ECG features in DCM resulting from previous myocarditis is wide and often non-specific. However, especially in the acute inflammatory phase, low QRS voltages (reflecting myocardial oedema or fibrosis), conduction abnormalities (especially in myocarditis due to Lyme disease), lateral TWI, increased QRS duration and frequent VPBs or NSVT might be present 28 (Figure 3A). A specific form of inflammatory cardiomyopathy is Chagas disease, caused by the parasite Trypanosoma cruzi and common in South America and Central America. Chagas disease is characterized by conduction system abnormalities, most commonly RBBB with left anterior fascicular block, and AV block. 49 A clinically manifest cardiac involvement with LV or biventricular systolic dysfunction, with or without chamber enlargement, occurs in approximately 5% of patients affected by systemic sarcoidosis, but silent disease is far more common according to autopsy studies. 50
The most common ECG manifestations are abnormalities of the conduction system, such as AV blocks, bundle branch blocks and fascicular blocks 51,52 (Figure 3B). Unexplained advanced AV block in young individuals, especially if associated with LV systolic dysfunction, should raise the suspicion of cardiac sarcoidosis. 53 Frequent VPBs and non-sustained or sustained ventricular tachycardia are also common, and might be the first presentation of the disease. 54

Tachycardia-induced cardiomyopathy

Tachycardia-induced cardiomyopathy is defined as the reversible impairment of ventricular function, with or without chamber dilatation, induced by persistent arrhythmia. 23 Both atrial and ventricular arrhythmias may cause, or at least promote, LV or biventricular systolic dysfunction. The exclusion of underlying structural heart disease may be challenging, as current imaging techniques (for example, CMR) cannot easily identify diffuse fibrosis, which may be a substrate for arrhythmias. In this setting, the ECG (and prolonged ECG monitoring) may provide useful insights into the diagnosis (Figure 3C). For example, the demonstration of a high VPB burden or of atrial arrhythmias, especially if poorly controlled in terms of heart rate, may suggest tachycardia-induced cardiomyopathy, where a timely diagnosis is important given the potential for recovery with appropriate treatment. 19

Dilated cardiomyopathy caused by drugs and toxins

A series of drugs and toxins can cause DCM. Anthracyclines and several other agents used for oncologic treatment may be toxic for the heart, resulting in a clinical picture that is generally characterized by LV systolic dysfunction, with or without LV dilatation, often without a specific ECG pattern. A prolonged QTc interval and decreased QRS voltages have been shown to correlate with LV systolic dysfunction in this setting 55 (Figure 3D). The burden of arrhythmia in patients with anthracycline-related cardiomyopathy is not different from that in patients with other forms of DCM. 56 The relationship between alcohol intake and heart failure is influenced by various genetic and environmental factors. The diagnosis of alcohol-induced cardiomyopathy is generally based on a history of heavy alcohol intake (>80-100 g/day for >10 years) in combination with otherwise unexplained cardiomyopathy. 57 Non-specific abnormalities, such as complete or incomplete LBBB, AV conduction disturbances, and alterations in the ST segment, can be found, comparable to those of idiopathic DCM. 58 Cocaine and methamphetamines are sympathomimetic drugs that induce heightened inotropic and chronotropic effects. The effects on the heart are multiple, including coronary vasospasm, atherosclerosis and LV systolic dysfunction. Although ECG ischaemic changes are often present in the acute setting, specific ECG patterns are often absent in the chronic phase.

Differential diagnosis with cardiac adaptation to exercise

Long-term athletic training is associated with a series of alterations in cardiac structure, function and electrical activity, and chamber dilatation is commonly observed, especially in endurance athletes. 59 Significant LV dilatation in athletes may pose a challenge in the differential diagnosis with DCM. Recent international recommendations for ECG interpretation in athletes underscore which abnormalities should be considered reflective of physiological adaptation to exercise and which instead should be regarded as highly suggestive of pathology. 20
While isolated voltage criteria for LVH or left-axis deviation are highly suggestive of a normal process, low voltages, LBBB, repolarization abnormalities and pathological Q waves are more likely an expression of DCM. Although sinus bradycardia and first-degree AV block are normal findings in athletes, extreme bradycardia (<30 bpm) and advanced AV block suggest a pathological process and should be further investigated. In summary, unexplained dilatation and/or systolic dysfunction is the description of a phenotype, not a diagnosis. Although DCM may be genetic/familial in up to 25% of cases, 1 secondary causes should be excluded. In this context, a correct ECG interpretation, in conjunction with a detailed personal and family history, may provide useful clues for the final diagnosis. A correct identification of a possible secondary process underlying LV systolic dysfunction is relevant as, in some cases, LV systolic dysfunction may be reversible after specific treatment.

Standard ECG for risk stratification

Dilated cardiomyopathy is a dynamic condition, and ECG abnormalities reflect the natural history of the disease. The presence of LBBB at baseline has been reported as an independent predictor of worse outcomes (all-cause mortality and SCD) in patients with severely impaired systolic function. 60 Moreover, a significant proportion of patients develop new-onset LBBB during follow-up, which likewise is a strong and independent predictor of all-cause mortality. 61 Selection of patients who may potentially benefit from CRT is based on the severity of their LV systolic dysfunction, on their symptoms [as assessed by New York Heart Association (NYHA) class] and, most importantly, on ECG criteria indicative of ventricular dyssynchrony. 2,62 In general terms, current European and American guidelines recommend CRT in symptomatic heart failure patients (NYHA class ≥II) with an ejection fraction of ≤35%, 63 LBBB, and a QRS duration of ≥150 ms. 64,65 Although patients with LBBB are more likely to respond to CRT than patients with RBBB or non-specific interventricular conduction delay, 66 both sets of guidelines also state that CRT should be considered in non-LBBB patients with QRS ≥150 ms. American guidelines widen their indication for CRT to include patients with ischaemic cardiomyopathy and NYHA class I symptoms with an ejection fraction of <30% and LBBB with a QRS duration of ≥150 ms. 65

Figure 3D Non-specific electrocardiographic abnormalities, such as first-degree atrio-ventricular block and prolonged QRS duration, in a patient affected by chemotherapy-induced dilated cardiomyopathy. Notably, a long QTc can be seen, which is associated with a worse long-term prognosis.
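The guideline thresholds summarized above form a small decision rule, sketched below for illustration. The thresholds (NYHA class ≥II, EF ≤35%, LBBB, QRS ≥150 ms) come from the text; the simplification into a three-way answer is an assumption, and this is not a clinical decision tool.

```python
# Hedged sketch of the quoted CRT selection thresholds; it ignores
# many real-world qualifiers (aetiology, AF, pacing indication, etc.).

def crt_recommendation(nyha_class, ef_percent, lbbb, qrs_ms):
    if ef_percent <= 35 and nyha_class >= 2 and qrs_ms >= 150:
        # LBBB is the strongest indication; non-LBBB with a wide QRS
        # "should be considered" per the text.
        return 'recommended' if lbbb else 'consider'
    return 'not indicated'

print(crt_recommendation(nyha_class=2, ef_percent=30,
                         lbbb=True, qrs_ms=160))   # recommended
print(crt_recommendation(nyha_class=2, ef_percent=30,
                         lbbb=False, qrs_ms=160))  # consider
```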
A wider QRS has also emerged as a predictor of ventricular arrhythmias 63,67,68 and, more interestingly, has been independently associated with early (<6 months) arrhythmic events. 69 The impact of new-onset arrhythmias is also relevant for DCM prognosis. Atrial fibrillation (both at baseline and developing during follow-up) has been associated with poor long-term survival and the need for heart transplantation in patients with DCM. 9 Fragmentation of the QRS complex (i.e. the presence of notching of the QRS or an additional R wave) carries a higher risk of major ventricular arrhythmias and major adverse cardiac events. 68,[70][71][72][73] More recently, low QRS amplitude and anterolateral TWI emerged as independent predictors of major ventricular arrhythmias and SCD in patients with DCM. 5,28 Merlo et al. 5 recently showed that specific ECG features, including TWI in the anterolateral leads, LVH (according to Sokolow-Lyon criteria) and higher heart rates, are predictors of heart transplantation and death. Although many ECG features may underlie a propensity for potentially fatal cardiac arrhythmias, none of these are included in the recommendations for ICD implantation in primary prevention, which are still based on LV ejection fraction and NYHA class only. 64

Clinical implications

A learned ECG interpretation is extremely valuable when approaching patients with unexplained LV systolic dysfunction and/or LV dilatation.

Gaps in evidence and future suggestions

Most studies on the ECG in DCM are single-centre retrospective studies. Despite the rapid increase in the understanding of genotype-phenotype correlations in DCM and the role of the ECG in this context, knowledge of the modifications of the ECG during the natural history of the disease is still limited. As the definition of DCM is based mainly on structural features such as LV dilatation and systolic dysfunction, clinical research has focused mainly on imaging tools such as echocardiography and, more recently, CMR. Future studies should include ECG data systematically in order to identify possible additional roles of the ECG in the diagnosis and risk stratification of the disease.

Conclusions

The ECG is a cornerstone in the diagnosis and management of cardiomyopathies. While the ECG was traditionally considered non-specific in DCM, a deeper understanding of genotype-phenotype correlations and of the complex aetiological background underlying a DCM phenotype increasingly reveals ECG patterns and 'red flags' that provide the opportunity to identify or suspect specific genetic and acquired forms early. A 'cardiomyopathy-oriented' ECG interpretation in the setting of LV or biventricular systolic dysfunction may suggest clinical scenarios requiring a specific approach in terms of clinical management. For example, the finding of just mild LV systolic dysfunction at echocardiography in a young patient may wrongly reassure the clinician. In fact, if this feature is combined with certain ECG 'red flags', such as AV block and/or AF, a laminopathy should be excluded; if instead low voltages and TWI in the lateral leads are found, a desmosomal disease should be suspected, with major implications not only for the patient but also for first-degree family members who may potentially be at risk. A learned ECG interpretation, combined with a wise use of the most advanced imaging techniques and genetic testing, is extremely useful in the approach to these patients, whose clinical management is often complex despite continuous developments in cardiovascular medicine.
Experiments on high energy reactions in the diffractive regime at LHC

In a few years, the LHC will push the energy frontier in accelerator physics significantly further, with the primary goal of obtaining a better insight into the fundamental constituents of matter and their interactions, e.g. the understanding of the origin of electroweak symmetry breaking. Due to its capability of colliding various beam species, it will also offer unique possibilities for further studies of the strong interaction, in as yet uncovered kinematical regions. In order to best exploit the machine capabilities, an extension of the coverage of the approved detectors in the forward region (small scattering angles wrt the beam) is highly desirable. This contribution discusses the physics motivation, the baseline machine and experiment layout, as well as possibilities for extensions of the detector coverage.

Introduction

The LHC is primarily designed to be a discovery machine, providing proton-proton collisions at the highest center-of-mass energy with a very large luminosity. It will, however, also be able to provide nucleus-nucleus and proton-nucleus collisions, again at the highest center-of-mass energies, offering a unique facility for the study of the strong interaction.

Amongst the primary goals for pp collisions is the understanding of the origin of electroweak symmetry breaking, which could manifest in the observation of one (or more) Higgs boson(s). The LHC will also vastly extend the potential for discovery of new physics beyond the Standard Model, extending the mass scale up to several TeV for direct observations. In addition, the experiments are designed with the goal of performing precision measurements within the Standard Model (and as well of new processes, if found). The LHC will offer unique possibilities to study strong interaction properties at the (future) energy frontier in a variety of processes and thus further probe Quantum Chromo-Dynamics (QCD) as the fundamental theory of the strong interaction. These data are of importance (especially for pp collisions) to properly understand background processes for searches and precision measurements. Although a truly full acceptance detector is presently out of reach (as proposed in [1]), possibilities exist for a sizeable increase in physics coverage by adding additional components to one (or more) of the approved detectors at LHC. These additional components could provide a much better coverage of the forward region (the region of small scattering angles wrt the beam direction).

This contribution discusses the physics motivation for such extensions, followed by a brief description of the LHC machine and of the running scenarios presently foreseen. Next, an overview of the five approved LHC experiments and a summary of their baseline coverage is given. Finally, possibilities for extending the coverage in the forward region and related instrumentation aspects are discussed.

Forward physics
As shown in Fig. 1, pp collisions show the highest multiplicities in the central region (|η| < 5). However, the largest energies are found in the forward region (corresponding to very small scattering angles wrt the beam direction; e.g. |η| > 5 implies θ < 10 mrad). In order to observe particles in this region, detection has to occur at large distances from the interaction point. As will be shown, it is this region that is of most interest and at the same time the most challenging for experimental instrumentation. It is worth noting that a center-of-mass energy of √s = 14 TeV for pp collisions corresponds to an incident proton energy of about 10^17 eV in the laboratory frame.

The list of physics processes which will benefit significantly from an enlarged acceptance at small angles wrt the beam includes:

• total cross-section and elastic scattering,
• soft and hard diffractive scattering,
• properties of rapidity gaps,
• exclusive central production,
• event structure (energy flow, multiplicities, leading particle spectra, . . .),
• low-x phenomena,
• photon-nucleus interactions

and many more. In the following, more details on the physics motivation and requirements for observables and measurements will be given for selected examples.

A characteristic feature of diffraction is the occurrence of so-called large rapidity gaps, which are regions in phase space without particle production (indicated by the '+' sign in the notation of such processes, e.g. pp → p + X). Except for double diffractive dissociation, diffractive events also contain one (or two) leading protons (protons which have lost only a small fraction of their momentum).

The momentum loss ξ of a proton in single diffraction (pp → p + X) is related to the mass M_X of the dissociative system X by the relation

ξ = M_X^2 / s, (2.1)

where s denotes the square of the center-of-mass energy. The distance in pseudo-rapidity ∆η between the leading proton and the closest particle belonging to the system X is given by

∆η ≈ − log ξ. (2.2)

In non-diffractive processes, the occurrence of large rapidity gaps is exponentially suppressed with increasing values of ∆η, due to the colour flow in the interaction. Most of the cross-section for diffraction is given by soft processes, where no hard scattering occurs. A precise understanding of the properties of soft diffractive events, and of the fractional contribution of the various diffractive event classes to the total cross-section, is important for a precise measurement of the total cross-section itself. This knowledge is also of relevance for an improved understanding of cosmic ray physics, as discussed further below.

The description of soft diffractive processes relies mostly on phenomenological models (e.g. Regge theory), whereas the occurrence of a hard process in a diffractive event (e.g. the production of a high p_T jet) allows one to investigate the partonic structure of these processes, and thus possibly obtain a description from first principles (from the Lagrangian of QCD). Extensive data are available from electron-proton scattering at HERA [2]. The comparison of these data to hard diffractive events, as measured in pp̄ interactions at the Tevatron [3], has led to interesting observations. A breakdown of factorisation is found, which might be related to the survival probability of rapidity gaps.
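To make the quoted numbers concrete, here is a short Python sketch of the relations above: the standard definition of pseudo-rapidity, η = −ln tan(θ/2) (assumed here, since the text only quotes the resulting values), Eq. (2.1) as reconstructed above, and the gap width of Eq. (2.2).

```python
# Numerical check of the forward-physics kinematics quoted in the text.
import math

SQRT_S = 14_000.0  # GeV, LHC pp design center-of-mass energy

def eta_from_theta(theta_rad):
    """Pseudo-rapidity from the polar scattering angle."""
    return -math.log(math.tan(theta_rad / 2))

def xi_from_mass(m_x):
    """Eq. (2.1): proton momentum loss for a diffractive system M_X."""
    return m_x**2 / SQRT_S**2

print(round(eta_from_theta(10e-3), 2))  # ~5.3: theta < 10 mrad <=> |eta| > 5

xi = xi_from_mass(140.0)                # 140 GeV diffractive system
print(f"xi = {xi:.1e}, gap ~ {-math.log(xi):.1f} units")  # xi = 1e-4
```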
As shown in Fig. 3, a hard process in single diffractive pp scattering can be visualized as the interaction of two partons, one of which belongs to a proton, while the other forms part of a colourless entity coupling to the other, quasi-elastically scattered proton. This entity is called the Pomeron, in reference to the Pomeranchuk trajectory describing the properties of hadron-hadron interactions at high energy within Regge theory. Within the parton model and QCD, the Pomeron could be modelled as a two-gluon system in a colour singlet configuration. The LHC, with its increased center-of-mass energy and its high luminosity, should provide a wealth of data to obtain better insight into this class of processes within the strong interaction.

Exclusive production to search for new physics

Central diffractive processes, as shown in Fig. 4, lead to the production of a central system (e.g. containing two jets as shown) and two leading protons, each of them separated by a gap in rapidity from the central system. Due to its large center-of-mass energy, the LHC can be thought of as providing Pomeron-Pomeron collisions with a broad reach in the Pomeron-Pomeron center-of-mass energy, extending up to O(TeV). If the momentum loss ξ_1,2 of both protons is measured precisely, then the mass M_X of the central system can be determined (via the so-called missing mass method [4]) as

M_X = √(ξ_1 ξ_2 s).

At LHC, as an example, a central system with a mass of 140 GeV requires ξ_1 · ξ_2 = 10^−4. This can be realized by having both protons lose 1% of their momentum (symmetric configuration) or, e.g., by having one proton with a momentum loss of ξ_1 = 2 · 10^−3 and the other one having ξ_2 = 0.05. For the symmetric case, the central system is produced at rest in the laboratory frame; otherwise, it is boosted along the beam direction.

The process of central diffraction could provide a complementary way to search for and determine the properties of a light Higgs boson. As discussed in detail in [5,6], the exclusive production of a Higgs boson would provide a very clean signature and would allow the use of the decay mode H → bb̄ (which has the largest branching ratio for masses around 120 GeV). Due to selection rules, it is expected that the signal-to-background ratio will be very favourable. However, the expected cross-section is not too large, about 3 fb, although the uncertainties on this value are not negligible. The detection and precise measurement of both protons could allow for a mass resolution of O(1 GeV). It is also possible that other new particles might be detected and measured in exclusive production, e.g. pairs of super-symmetric particles.

Exclusive production can occur not only via strong interaction processes, but also in two-photon processes. As discussed in [7], this process could also be used for exclusive production of the Higgs boson, as well as of W+W− pairs or tt̄ pairs.

Low x physics

The study of parton dynamics at small values of x might reveal additional insight into the dynamics of the strong interaction and its description by QCD. This can possibly be done by measurements of the proton structure function (or the various parton densities) or by studying specific final states, such as forward jet production.
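The missing-mass relation reconstructed above can be verified directly against the 140 GeV example quoted in the text; a minimal sketch:

```python
# Check of M_X = sqrt(xi1 * xi2 * s) for the examples given in the text.
import math

SQRT_S = 14_000.0  # GeV

def missing_mass(xi1, xi2):
    return math.sqrt(xi1 * xi2) * SQRT_S

print(missing_mass(0.01, 0.01))   # symmetric case: 140.0 GeV
print(missing_mass(2e-3, 0.05))   # asymmetric case: also ~140 GeV
```

Both configurations give ξ_1 · ξ_2 = 10^−4 and hence the same central mass, which is why the acceptance in ξ of the leading-proton detectors (discussed later) directly sets the accessible mass range.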
Fig. 5 indicates the reach of existing data from fixed target experiments and of the HERA experiments in the plane of Bjorken-x and squared momentum transfer Q² (of the hard scattering). The baseline reach of LHC extends, for a given value of x, to much larger values of Q², thus ensuring the validity of the perturbative approach. On the other hand, for fixed values of Q², smaller values of x can be reached than e.g. at HERA. As discussed later, the acceptance of e.g. ATLAS and CMS for high p_T objects is restricted to the region |η| < 2.5 (for leptons, photons and b-tagged jets) and up to |η| < 5 (for jets). As can be seen from the diagram, in order to reach values of x ≈ 10^−6 or smaller, one would have to measure in the region 5 < |η| < 8 (for values of Q² > 10 GeV²). It is thus clear that the smallest values of x will only be accessible if coverage at large values of the pseudo-rapidity |η| is available.

For physics at small x values, it is also important to point out the studies which can be done in pA and AA collisions, which can help to better understand the structure and the dynamics of nuclear matter. Due to the large number of partons in nuclei at small x values, saturation and shadowing effects might be easier to observe in these collisions.

Event structure and cosmic ray physics

In Fig. 6, the flux of cosmic rays is shown, as determined from extended air showers created in the earth's atmosphere. The observed spectrum extends in a power-law form over many orders of magnitude, more than 10 in energy and more than 30 in flux, without showing any clear structures. Of very special interest [8] are events seen at the upper end of the spectrum, with energies of more than 10^19 eV. The LHC will probe the energy region of about 10^17 eV in pp collisions and about 10^18 eV in PbPb collisions, extending the reach by up to three orders of magnitude beyond that of the Tevatron.

It is important to note that the available statistics at LHC will be enormous in comparison to the observed rate of cosmic rays in this energy regime. For the region of the so-called ankle (about 10^18 eV), only one cosmic ray event is expected per km² and per year, whereas at LHC a rate of 1 Hz of accepted events will provide a sample of 10^7 events per year.

The interpretation of the extended air showers observed on the earth's surface aims at a precise determination of the energy and of the species of the incident particle initiating the shower. This unfolding from the observed particles and their properties needs, however, precise models of the hadronic interaction, which in turn rely on extrapolation from existing accelerator measurements. It is expected that the uncertainties will be reduced once the range of extrapolation becomes smaller.

As an example, Fig. 7 shows the fractional energy x_lab = E/E_lab of leading hadrons produced in pp collisions at a proton energy E_lab = 10^17 eV. Clearly visible are the differences in the predictions of the four models shown. The existing data do not constrain the hadronic interaction models enough, and further measurements at higher energy are most welcome. Some of the most important measurements to be performed at the LHC include the total pp cross-section, the fraction of diffractive dissociation in the total cross-section, the energy flow, and multiplicities as well as momentum spectra of leading particles.
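The quoted low-x reach follows from the standard leading-order estimate x ≈ (Q/√s) e^(−y) for a hard process at scale Q produced at rapidity y; this formula is an assumption here (the text only quotes the resulting numbers), but it reproduces them:

```python
# Hedged sketch: leading-order estimate of the smallest Bjorken-x
# probed at scale Q^2 and rapidity y at the LHC.
import math

def x_reached(q2_gev2, y, sqrt_s=14_000.0):
    return math.sqrt(q2_gev2) / sqrt_s * math.exp(-y)

for y in (0, 5, 8):
    print(y, f"{x_reached(10.0, y):.1e}")
# y=5 gives x ~ 1.5e-6 and y=8 gives x ~ 7.6e-8 at Q^2 = 10 GeV^2,
# i.e. reaching x ~ 1e-6 or below indeed requires coverage at |y| > 5.
```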
Of special importance for these measurements is the forward region, as the behaviour of the inelastic interactions and the spectrum of leading particles in this region determines the energy transport through the atmosphere and thus strongly influences the air shower development. It is important to point out that present models indicate that measurements of only the central region (for properties such as energy flow and multiplicities of inelastic events) are not sufficient, as the models do not predict a consistent behaviour between changes in the central region and the forward region. For a more detailed discussion of relevant measurements and their importance, see [9,10].

LHC machine and running scenarios

The LHC will be installed in the former LEP tunnel, which is located up to 100 m below the surface and has a circumference of about 27 km. It will consist of two rings, where the beams can be brought into collision at four interaction points. In order to reach a center-of-mass energy of √s = 14 TeV for pp, more than 1200 super-conducting dipole magnets with a nominal field strength of 8.3 T are needed to bend the protons. The design luminosity will be L = 10^34 cm^−2 s^−1, to be reached by filling the machine with 2835 bunches, each containing about 10^11 protons. The separation between two bunch crossings will be 25 ns.

It is feasible to run the machine at lower values of √s, down to about 2 TeV, thus giving the possibility of obtaining overlap with proton-antiproton collisions at the Tevatron. Furthermore, the LHC is designed to provide nucleus-nucleus collisions. In the case of PbPb collisions, a center-of-mass energy of 1148 TeV can be reached at a luminosity of L = 10^27 cm^−2 s^−1. Collisions of lighter ions are possible as well, e.g. of Sn, Kr, Ar and O. In addition, the LHC can be operated in pA mode, colliding protons on nuclei. In this case, the center-of-mass system of the collision will not be at rest in the laboratory frame, but shifted by up to one unit in rapidity. Luminosities foreseen for pA collisions should range from L = 7.4 · 10^29 cm^−2 s^−1 for pPb up to L = 1.0 · 10^31 cm^−2 s^−1 for pO.

Most of the time, the LHC is expected to be operated in pp mode. Approximately one month per year should be devoted to studies of nucleus-nucleus and proton-nucleus collisions. Furthermore, shorter dedicated runs with special conditions should take place, e.g. for TOTEM to perform a precise measurement of the total cross-section.

LHC experiments

Two big underground caverns (at interaction points 1 and 5) have been excavated for the two general purpose experiments, ATLAS and CMS, which are optimized for high p_T physics. At interaction point 2, the ALICE experiment will be situated, dedicated to the study of heavy-ion collisions. Point 8 will be taken by LHCb, aiming at the study of b-hadron physics. Furthermore, the TOTEM experiment (to be installed in point 5) will measure the total cross-section in pp collisions.

For most experiments, the design phase has been finished and the mass production of their components (especially in the case of ATLAS and CMS) is well under way, sometimes even close to completion. In the following, a brief overview of the main features of each experiment is given.

ALICE

The ALICE [11] detector, as shown in Fig. 9, will re-use the magnet of the L3 experiment.
The central element of ALICE will be a huge time projection chamber (TPC), allowing precise tracking in the high multiplicity environment of central heavy-ion collisions. Its coverage will be |η| < 1. Inside the magnet, further components are foreseen for photon detection, for electron-positron pair detection and for multiplicity measurements (the latter using Si detectors covering the region −5.4 < η < 3), as well as for particle identification (via time-of-flight and transition radiation). Outside of the magnet, a dedicated muon spectrometer (2.4 < η < 4) with a separate dipole magnet is situated on one side of the experiment. ALICE will also have detectors in the machine tunnel: a zero degree calorimeter to measure e.g. the centrality of the heavy-ion collision.

ATLAS

ATLAS [12] is a general purpose experiment, shown in Fig. 10, optimized for high p_T physics. Surrounding the interaction point, several tracking detectors will measure charged particles and reconstruct (primary and secondary) vertices. Closest to the beam, three layers of Si pixel sensors will be placed, followed by four layers of Si strip detectors. Further out, there will be a straw tube tracker (TRT), which can detect transition radiation to identify electrons. All these components are situated inside a solenoid magnet with a field of 2 T. The tracking detectors (Inner Detector) cover the region up to |η| < 2.5 and are surrounded by calorimetry extending up to |η| < 4.9. In the barrel region, a fine grained liquid argon (LAr) accordion calorimeter is foreseen as the electromagnetic part, followed by a tile scintillator calorimeter as the hadronic compartment. In the endcap and forward region, LAr technology is used again. Outside of the calorimeters, an open air-core toroid magnet system is situated, interleaved with muon detectors, to provide detection of muons and a stand-alone measurement of their momentum in the region |η| < 2.7.

The overall size of ATLAS is about 40 m × 22 m and its weight will be about 7000 t. More details on the expected performance of ATLAS can be found in Ref. [13].

CMS

CMS [14] is the other general purpose detector and is shown in Fig. 11. Like ATLAS, it has been optimized for the detection of high p_T leptons, photons and jets (with and without b-tagging) and the measurement of missing transverse energy. The tracking is based on an all-silicon system, where the interaction point is surrounded by layers of pixel detectors. The remainder of the tracking volume is made of layers of Si strip detectors. The tracking coverage extends up to |η| < 2.5. Surrounding the tracker, a PbWO4 crystal electromagnetic calorimeter is situated, which is followed by a scintillator sandwich hadronic calorimeter. All of these components are located inside a large solenoid magnet providing a field of 4 T. The calorimetric coverage is extended up to |η| = 5 by a forward calorimeter instrumented with quartz fibers. The return yoke is instrumented for muon detection, covering the region |η| < 2.5. CMS will have a size of 22 m × 15 m and a weight of about 13000 t.

LHCb

The LHCb [15] layout (as shown in Fig. 12) resembles a forward spectrometer, although LHCb will take data from colliding proton bunches.
The interaction point will be surrounded by a precise vertex detector, followed by a tracking system including a dipole magnet. LHCb will have various possibilities for particle identification, including two ring-imaging Cherenkov (RICH) detectors, electromagnetic and hadronic calorimetry, and a muon system. The acceptance region extends over 1.9 < η < 4.9.

TOTEM

The primary goal of TOTEM is to measure the total cross-section via the luminosity-independent method, which requires the simultaneous determination of elastic scattering (at small angles) and of the rate of inelastic interactions. TOTEM will thus have two types of detectors (as shown in Figs. 13 and 14): firstly, detectors to measure charged particles from inelastic events in the region 3 < |η| < 7, and secondly, detectors to measure leading protons (e.g. from elastic scattering) at distances of 100−200 m from the interaction point in the machine tunnel (using so-called Roman Pots). TOTEM will be installed at interaction point 5; the inelastic detectors will be located inside the CMS experiment.

Baseline coverage for forward physics

The baseline design of the experiments, as described above, will allow (although not always in the same experiment) the measurement of the production of identified particles in the region −2.5 < η < 4.9, where ATLAS and CMS should be able to reach p_T values of O(1 GeV). Furthermore, the charged multiplicity will be measured by ALICE in the region −5.4 < η < 3 and by TOTEM in the region 3 < |η| < 7. The energy flow will be covered by ATLAS and CMS for |η| < 5. ALICE will be able to detect leading neutrons, and TOTEM will measure (at least during dedicated runs) leading protons as well.

Possibilities for extensions of coverage

Although two types of possible extensions of the detector coverage can be distinguished, i.e. the measurement of leading particles and the detection and measurement of particles produced at small angles in inelastic interactions, several aspects of the instrumentation will be similar, as in both cases the detection has to be done close to the beam (pipe).

The measurement of leading particles has to occur at large distances from the interaction point, as these particles are either scattered at very small angles or have lost only a small fraction of their momentum, and thus leave the beam envelope only far away from their production point. An increase of the acceptance for particles from inelastic events has to happen mostly within the experimental caverns (before the first magnetic elements of the accelerator) and thus needs to be done very close to the beam, in order to have access to small scattering angles.

Extensions within the experimental cavern

As discussed above, the motivations for extending the coverage inside the experimental cavern, for the detection of particles from inelastic interactions, include the measurements of:

• energy flow,
• charged particle multiplicity,
• jet production,
• electron and photon production,
• tagging of rapidity gaps and
• muon multiplicity in η regions.
In addition to the measurement of the charged multiplicity (where the combination of all experiments should cover up to |η| < 7), an extended coverage for the measurement of the energy flow (up to |η| < 8) could be achieved by installing additional calorimeters inside the experimental cavern. This would have to be done close to the beam pipe, at a distance of about 18 m from the interaction point (or possibly by instrumenting the TAS absorber). A detailed proposal for a very forward calorimeter (surrounding the beam pipe) has been worked out in the context of CASTOR [17], which has been designed primarily to search for centauros and strange objects in heavy-ion collisions at LHC. The availability of both a calorimeter and tracking detectors in front of it would allow for limited particle identification capabilities, such as measurements of electrons and possibly also photons. Instrumentation inside the experimental caverns close to the beam pipe has to take into account the high radiation environment, the limited access possibilities and the need to provide services (e.g. power and signal cables, cooling circuits) to the components. All of this has to respect the already mostly finalized design of the approved experiments.

Extensions within the machine tunnel

The measurement of elastic scattering down to very small values of the momentum transfer −t (which is necessary for a precise determination of the total cross-section, as discussed in [16]) requires a special optics set-up of the machine, where the beams are no longer strongly focused at the interaction point (to obtain the highest luminosity). The layout of interaction regions 1 and 5 allows for instrumentation to be installed at distances of about 150 m and 210 m from the interaction point (see Fig. 14). For this special optics set-up, elastic scattering should be measurable down to values of at least −t ≈ 10^−2 GeV².

Acceptance for leading protons

Protons which have lost a small fraction of their momentum are subject to a stronger bending force in the magnetic elements of the machine and thus deviate from the nominal orbit. At large distances from the interaction point, they leave the beam envelope and can thus be detected in sensors operating close to the circulating beam. The displacement x at a given location s along the machine ring depends on the value ξ of the momentum loss and the size of the dispersion DX, given by the machine optics, according to

x = DX(s) · ξ. (5.1)

In Fig. 15, the evolution of the dispersion (and of the beam size) in the horizontal plane is shown for the nominal LHC optics set-up (strong focusing at the interaction point for high luminosity, β* = 0.5 m). In order to detect protons with small momentum losses, locations very far away from the interaction point are required. A proton with a momentum loss of 0.5% will be displaced by about 5 mm only at a distance of about 400 m from the interaction point. Here the beam has a size of about σ_x = 0.3 mm in the horizontal plane, and thus an approach to a distance of 10 times the beam size would allow one to observe such a proton. In addition, the instrumentation in this region would give coverage for diffractively scattered protons with a momentum loss of about 2% or more.
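Eq. (5.1) and the numbers quoted above can be combined into a small sketch. The dispersion value used below is inferred from the quoted example (5 mm displacement at ξ = 0.5% implies DX ≈ 1 m at that location) and is therefore an assumption, not a value from the source.

```python
# Hedged sketch of the leading-proton displacement, Eq. (5.1),
# and of the simple detectability condition used in the text.

def displacement_mm(dispersion_m, xi):
    """x = DX(s) * xi, with DX in metres and the result in mm."""
    return dispersion_m * 1000 * xi

def detectable(x_mm, sigma_x_mm=0.3, n_sigma=10, dead_edge_mm=0.1):
    """Observable if the proton clears the 10-sigma beam envelope
    plus the assumed inactive detector edge."""
    return x_mm > n_sigma * sigma_x_mm + dead_edge_mm

x = displacement_mm(1.0, 0.005)  # DX ~ 1 m inferred from the example
print(x, detectable(x))          # 5.0 mm, True: clears 3.1 mm threshold
```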
A detailed study of the acceptance as a function of the momentum loss ξ of leading protons for two locations (at 215 m and 425 m from the interaction point) is shown in Fig. 16. Here an approach to the center of the circulating beam of 10 times the size of the beam has been assumed, including an additional distance of 0.1 mm for inactive areas in the detector. The location presently available for instrumentation at 215 m allows for values ξ > 0.03 only (at 50% acceptance), corresponding to a lower limit of about 420 GeV in the mass M_X for exclusive central production.

In order to reach smaller values of the momentum loss (and thus of the mass of the centrally produced system), one would have to go to distances of about 425 m, where, however, presently no warm space for instrumentation is foreseen. At this location, the lower limit in ξ is about 2 · 10^−3, corresponding to M_X > 28 GeV. The upper limit in momentum loss is, for a given location, determined by the apertures of the beam pipe and the machine elements between the interaction point and the location of the Roman Pot.

Leading neutrons

The detection of leading neutrons can be performed in a so-called 'zero degree' calorimeter, which would be installed after the beams are separated (to match the two-beam-pipe structure in the arcs of the LHC). ALICE foresees having such a device, as mentioned above. For ATLAS and CMS, studies are ongoing which investigate the possibility of instrumenting an absorber (TAN) at a distance of about 140 m from the interaction point.

Instrumentation aspects

Leading protons have been and are usually measured using silicon or scintillating fibre detectors, located in a movable casing (called a Roman Pot), which also provides the separation from the beam vacuum. Fig. 17 shows a schematic drawing of a Roman Pot station, where the beam can be approached from two sides using detectors situated in movable pots. After stable beam conditions are reached, the pots are moved as close as possible towards the circulating beam to provide the best acceptance for small angle elastic scattering as well as for small momentum loss protons. As the available space for additional instrumentation will often be very limited (e.g. inside the experimental caverns), a new detector concept has been developed, the microstation [18]. Its conceptual design is shown in Fig. 18. The basic idea is to perform the measurement inside the beam pipe, to obtain the closest possible approach of the sensor to the circulating beam. The design aims at a lightweight and very compact component (integrated with the beam pipe). It has to respect several requirements from the machine point of view, such as compatibility with the machine vacuum and no significant additional impedance to be introduced by the components. The sensor planes will be very precisely movable in a reproducible way, implemented using inchworm motors built out of ceramic elements. This movement also has to be extremely reliable, as the microstation might be deployed in regions where access is difficult. The sensor is foreseen to be silicon based. Depending on the location and the type of measurement, it could be either of Si strip type or of Si pixel type, the latter in the case of larger particle densities (i.e. for measurements of inelastic event properties).
For these sensors, special emphasis has to be given to the minimisation of inactive areas close to the mechanical edge of the sensor, which would increase the effective distance to the beam center from which onwards measurements are possible. A fully functional prototype for the validation of the above requirements is presently under construction.

Finally, an important issue which should not be forgotten is the online selection of events with leading protons. For ATLAS and CMS, the trigger system has to reduce the bunch crossing frequency of 40 MHz (or the interaction rate of about 1 GHz at design luminosity) to a rate of O(100 Hz) to mass storage. The first stage of the selection will be a hardware-based trigger, which has to operate within a maximum latency of 2.5 µs (ATLAS) resp. 3 µs (CMS). Events which are not accepted by this first level trigger are lost. Calculating the time-of-flight for protons from the interaction point to the detector position, and the signal propagation time back to the electronics cavern of the experiment (where the final trigger decision is made), shows that only leading proton detectors at distances of up to 200−230 m will be able to deliver an input signal to the first level trigger in time. For locations at larger distances, one would possibly have to make a selection based on topological criteria of centrally produced high p_T objects and then include the information from leading proton detectors at the higher level trigger stages, where the latency is much less of a constraint.
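This latency argument can be checked with a back-of-the-envelope sketch. The assumed signal propagation speed in cable (about 2/3 of the speed of light) is an illustration, not a value from the source.

```python
# Hedged estimate: proton time of flight to the Roman Pot plus signal
# propagation back must fit inside the ~2.5-3 us first-level latency.
C = 299_792_458.0  # m/s

def round_trip_us(distance_m, cable_fraction_of_c=0.67):
    tof = distance_m / C                        # proton is ultrarelativistic
    back = distance_m / (cable_fraction_of_c * C)
    return (tof + back) * 1e6

for d in (220, 425):
    print(d, f"{round_trip_us(d):.2f} us")
# ~220 m gives ~1.8 us, leaving some time for the trigger decision;
# 425 m gives ~3.5 us, already beyond the 2.5 us ATLAS budget.
```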
Conclusions

In a few years, the LHC will be at the energy frontier of accelerator particle physics and will offer unique opportunities for studies of the strong interaction in as yet uncovered kinematical regions. In order to maximally exploit the physics potential, extensions of the approved detectors in the forward region (to detect and measure particles scattered at small angles wrt the beam direction) are highly desirable. Such extensions can be classified either as components dedicated to the measurement of leading particles (which then have to be installed at large distances of O(100 m) from the interaction point) or as detectors for the identification and measurement of particles from inelastic events, produced at small scattering angles (to be installed inside the experimental caverns). These extensions would provide a significant increase in the physics potential of the LHC and its experiments. They are, however, extremely challenging to develop, as many constraints have to be respected (e.g. compatibility with the operation of the machine and the experiments, restrictions on the available space). Several experiments are presently investigating the technical feasibility of such extensions, where a coherent effort between the machine and the experiments is also needed. The experimental groups are open to suggestions and ideas, as well as to contributions from interested groups not yet participating in the LHC.

Figure 4: Exclusive central production of two jets.
Figure 5: Kinematic reach in the (x, Q²) plane, indicating the LHC coverage for various acceptances in rapidity y.
Figure 7: Predictions of various models for properties of pp interactions at energies of E = 10^17 eV, showing the distribution of the fractional energy x_lab for the leading hadron.
Figure 8: Layout of the LHC, showing the underground caverns of the four interaction regions.
Figure 13: The TOTEM layout of the two telescopes for the measurement of particles from inelastic interactions within the CMS detector.
Figure 14: The TOTEM layout for the leading proton measurements, showing four possible locations (RP1-RP4) for Roman Pots.
Figure 15: Dispersion DX and SIGX = 10 · σ_x (where σ_x is the beam size) in the horizontal plane for the high luminosity machine optics set-up.
Figure 16: Acceptance for leading protons in the high luminosity machine optics set-up.
Figure 17: Sketch of a Roman Pot station.
Figure 18: Conceptual design of a microstation.
Digital Humanities' Shakespeare Problem

Digital humanities has a Shakespeare problem; or, to frame it more broadly, a canon problem. This essay begins by demonstrating why we need to consider Shakespeare's position in the digital landscape, recognizing that Shakespeare's prominence in digital sources stems from his cultural prominence. I describe the Shakespeare/not Shakespeare divide in digital humanities projects and then turn to digital editions to demonstrate how Shakespeare's texts are treated differently from those of his contemporaries, and often isolated by virtue of being placed alone on their pedestal. In the final section, I explore the implications of Shakespeare's popularity for digital humanities projects, some of which exist solely because of Shakespeare's status. Shakespeare's centrality to the canon of digital humanities reflects his reputation in wider spheres such as education and the arts. No digital project will offer a complete, unmediated view of the past, or, indeed, the present. Ultimately, each project implies an argument about the status of Shakespeare, and we, as Shakespeareans, early modernists, digital humanists, humanists, and scholars, must determine what arguments we find persuasive and what arguments we want to make with the new projects we design and implement.

Introduction

Digital humanities has a Shakespeare problem; or, to frame it more broadly, a canon problem. Too many digital projects and sites focus on Shakespeare alone. Some sites highlight Shakespeare to the exclusion of other writers; other projects set their bounds at Shakespeare and "not Shakespeare". Digital humanities' Shakespeare problem both stems from and reifies Shakespeare's centrality to the canon of English literature. While this problem is, indeed, a digital humanities problem, it is also a problem in the arts and humanities more generally. Shakespeare is one of the few writers regularly featured in single-author undergraduate courses (alongside, perhaps, Chaucer, Milton, and Austen, albeit to a lesser extent). Shakespeare's works are so often produced on the twenty-first-century stage that American Theatre excludes Shakespeare from its annual list of top-produced American plays in order to "make more room on our list for everyone and everything else" (Tran 2018). Digital humanities has often been heralded as the solution to the canonicity problem, but that is a great burden that it cannot bear alone.
This essay begins by demonstrating why we need to consider Shakespeare's position in the digital landscape, recognizing that Shakespeare's prominence in digital sources stems from his cultural prominence.I describe the Shakespeare/not Shakespeare divide in digital humanities projects and then turn to digital editions to demonstrate how Shakespeare's texts are treated differently from his contemporaries-and often isolated by virtue of being placed alone on their pedestal.In the final section, I explore the implications of Shakespeare's popularity to digital humanities projects, some of which exist solely because of Shakespeare's status.Shakespeare's centrality to the canon of digital humanities reflects his reputation in wider spheres such as education and the arts.No digital project will offer a complete, unmediated view of the past, or, indeed, the present.Ultimately, each digital humanities project presents an argument about the status of Shakespeare, and we-as Shakespeareans, early modernists, digital humanists, humanists, and scholars-must determine what arguments we find persuasive and what arguments we want to make with the new projects we design and implement. Although the definition of digital humanities (and perhaps even the definition of Shakespeare) is subject to disagreement, for this essay, I limit my scope to digital humanities resources for pedagogy and research.This excludes games such as Richard III Attacks! (P.2015), online performances such as Such Tweet Sorrow (Silbert 2010), and social media hashtags like #ShakespeareSunday.Cultural studies often informs New Media Shakespeare scholarship to show Shakespeare's continued prominence online (see O'Neill 2015 for an overview): consider recent issues of Shakespeare Quarterly (Rowe 2010) and Borrowers and Lenders (Calbi and O'Neill 2016) on this topic.Stephen O'Neill (2018), drawing on Douglas Lanier's notion of "Shakespearean rhizomatics" (Lanier 2014), equates "Our contemporary Shakespeares" to "digital Shakespeares", describing both as "fully rhizomatic in their extraordinary and seemingly endless flow of relations."Christy Desmet suggests that we need to encounter all digital Shakespeares (both digital humanities and new media) through the lens of Ian Bogost's "alien phenomenology" (Bogost 2012), considering "material objects and networks as models for posthuman relations" (Desmet 2017, p. 5).Although Digital Humanities and New Media are often paired, for the purpose of this essay it is useful to differentiate the two: new media endeavors that participate in or create digital culture versus digital humanities projects that announce themselves as contributing to our general and scholarly knowledge.This article focuses on digital humanities projects for two reasons: first, as one way of limiting the scope of the "seemingly endless flow of relations" in Digital Shakespeares, and second, because the majority of digital humanities projects exist primarily to educate rather than to entertain. 
Digital humanities projects provide the resources we use to study and teach the early modern period: digital editions, bibliographies, digitizations, catalogs, and more.Often, digital humanities projects are expanded from earlier print resources: consider, for instance, the online English Short Title Catalogue (British Library 2006) and its print antecedents, the short-title catalogs by Pollard and Redgrave (1926) and Donald Goddard Wing (1945).Nondigital scholarly resources frequently skew towards Shakespeare; even the library catalogs we use to access archival resources are not neutral and emphasize Shakespeare above his contemporaries (Estill 2019a).Many digital humanities resources replicate this Shakespeare-centric focus, and, as such, misrepresent the materials they provide or offer a skewed perspective on early modern literature, theatre, and culture.Biased sources can only lead to biased scholarship; and while some professors will be able to see the biases of the sites they visit, many students will not.This is particularly problematic because, as Christie Carson and Peter Kirwan explain, "Students are some of the key 'users' of digital Shakespeare" (Carson and Kirwan 2014a, p. 244). It has been well-documented that major digital literary studies projects often focus on canonical authors.There is excellent work on the biases of digital humanities projects, particularly in relation to the status of women writers (see, for instance, Wernimont and Flanders 2010;Mandell 2015;Bergenmar and Leppänen 2017) and the canon of American literature (Earhart 2012;Price 2009), yet comparatively few scholars have critiqued how digital humanities overrepresents perhaps the most canonical figure in all of English literature: Shakespeare."Shakespeare and Digital Humanities" has been and continues to be a fruitful area of research, with special issues of Shakespeare (Galey and Siemens 2008), the Shakespearean International Yearbook (Hirsch and Craig 2014), RiDE: Research in Drama Education (Bell et al., forthcoming), and this issue of Humanities.The prevalence of digital humanities tools in Shakespeare teaching and research leads Carson and Kirwan to wonder, "are all Shakespeares digital now?" (Carson and Kirwan 2014a, p. 240).The questions less often asked are: when we focus on Shakespeare(s) in our digital projects, what is excluded by our Shakespeare-centrism?And how does that shape how we access and understand early modern drama?Digital Shakespeare studies often focuses on Shakespeare's place in the digital world, without questioning why he is given such primacy and the ramifications of his continued canonization. A decade ago, Matthew Steggle (2008) showcased how digital projects were "developing a canon" of early modern literature.Building on the "interrelated cycles" that Gary Taylor identified as supporting Shakespeare's centrality to textual studies, Brett Greatley-Hirsch describes "the long shadows cast by the cultural, scholarly, and economic investments in Shakespeare" (Hirsch 2011, p. 569), specifically as it pertains to digital editions of early modern plays.This essay furthers the work by Steggle, Greatley-Hirsch, and others by arguing that we must continually assess the landscape of digital projects available for teaching and researching the early modern period in order to understand and shape the future of the field. 
As the argument goes, traditional anthologies and resources are constricted by page counts and other limited resources, unlike digital projects, which can be democratizing due to their lack of-or, more realistically, different-limitations.In that vein, Neil Fraistat, Steven E. Jones, and Carl Stahmer (Fraistat et al. 1998, p. 2) suggest that "one of the strengths of Web publishing is that it facilitates-even favors-the production of editions of texts and resources of so-called non-canonical authors and works."Earhart (2015, esp. chp. 3), however, traces the familiar pattern of discovery then loss for noncanonical writers: their work is digitized, declared as recuperated, and then the site disappears.Another way digital humanities has been announced to recover noncanonical writers is by projects that digitize on a large scale.Julia Flanders (2009) explains: It is now easier, in some contexts, to digitize an entire library collection than to pick through and choose what should be included and what should not: in other words, storage is cheaper than decision-making.The result is that the rare, the lesser known, the overlooked, the neglected, and the downright excluded are now likely to make their way into digital library collections, even if only by accident.Indeed, it is the decision-making where Shakespeare too often gets pulled artificially to the fore: sometimes even in the foundational decisions about project scope.The next section of the essay explores how single authors are represented in small-scale digital resources versus large-scale digital resources, thinking about them in terms of labor, funding, and project scope. The Shakespeare/Not Shakespeare Divide in Digital Humanities Resources There is a lopsidedness to early modern online resources: some, such as the English Short Title Catalog (ESTC; British Library 2006) and the Database of Early English Playbooks (DEEP; Lesser and Farmer 2007) deliver breadth of coverage that is, due to their large scope, necessarily shallow; others, such as The Shakespeare Quartos Archive (Bodleian Library 2009) or MIT's Global Shakespeares (Donaldson 2009), provide deep coverage of a much narrower topic.Both approaches are needed to support different avenues of early modern scholarship, but, the latter, I contend, too often begins and ends with Shakespeare. 
The logistical reasons for these very different kinds of projects (broad coverage versus deep coverage) are readily apparent.The notion of "Shakespeare" offers a convenient scope and bounds for a given project.Many projects that include detailed metadata, extensive editorial annotation or encoding, expensive-to-create facsimiles, or streaming media center on the work of a single author.The Pulter Project (Knight and Wall 2018), for instance, is an example of a new project that focuses on a single author, and, indeed, a single manuscript, in order to offer a hypertext edition with multiple layers of editorial intervention, linked related texts, and comparative viewing options.The Digital Cavendish Project (Moore and Tootalian 2013) offers a range of ways to interact with Margaret Cavendish's life and texts: site visitors can explore Margaret Cavendish's social network, search the bibliography-in-progress of Cavendish scholarship, and make use of reference works such as a list of Cavendish's printers and booksellers and a spreadsheet locating all known copies of Cavendish's early publications.We can imagine extending these projects by adding another analysis section, another manuscript, or even another individual author.However, to extend these projects by any order of magnitude, by say, covering all seventeenth-century women writers or all previously unpublished manuscript poetry would be to undertake significant amounts of labor and would require both time and money. These single-author projects are the fruits of detailed scholarly attention: they are "boutique" digital projects.In their discussion of archival practices, Mark A. Greene and Dennis Meissner position "boutique digitization" at the far end of the continuum from "'Googlization' (ultra-mass digitization)" (Greene and Meissner 2005, p. 196).The former, boutique projects, require "extraordinary attention to the unique properties of each artifact" (Conway 2010, p. 76).While Greene, Meissner, and Conway focus on archival digitization projects, the continuum also applies to digital humanities projects, many of which include digitized elements alongside other interventions: transcriptions, editorial apparatus, bibliographic resources, and so forth.The Shakespeare Quartos Archive (Bodleian Library 2009) is an example of extraordinary attention to primary sources: the site's goal is to "reproduce at least one copy of every edition of William Shakespeare's plays printed in quarto before the theatres closed in 1642."Where possible, however, they include digitizations of as many copies of each Shakespeare quartos as possible.Their prototype offers thirty-two quartos of Hamlet (from Q1-Q5), carefully digitized and painstakingly encoded. 1 With their attention to primary sources, the Shakespeare Quartos Archive project argues that scholars must pay attention to copy-specific details.The Shakespeare Quartos Archive text encoding highlights different marginalia in each copy, the binding, and even the library ownership stamps. 
2 While the Shakespeare Quartos Archive can be used as an exemplar of a "boutique" project, it is not the labor of a single scholar. This project emerged from the collaboration of multiple major institutions, including, most notably, the Bodleian Library of the University of Oxford, the British Library, the University of Edinburgh Library, the Folger Shakespeare Library, the Huntington Library, and the National Library of Scotland. The project was made possible by major grant funding from the United States's National Endowment for the Humanities (NEH) and the United Kingdom's Joint Information Systems Committee (JISC).
The well-supported Shakespeare Quartos Archive raises another reason for author-centric approaches, namely, existing funding models. As Jamie "Skye" Bianco (2012) explains, "digital humanities is directly linked to the institutional funding that privileges canonical literary and historiographic objects and narratives" (see also Price 2009). In her review, Desmet unpacks the project's "rationale for a focus on Shakespeare's quartos" (Desmet 2014, p. 143): the rarity and fragility of the material objects; their locations in libraries around the world; and the lack of Shakespearean manuscript texts. This rationale, while a compelling argument for why we need to digitize and encode all early modern play quartos, hardly touches on why Shakespeare is the focus of the project. We lack authorial manuscripts of many plays by many playwrights. The Shakespearean focus of the Shakespeare Quartos Archive is taken for granted.
It is hard to imagine the Ford Quartos Archive receiving much enthusiasm from funders, despite the fact that John Ford's plays are still edited, anthologized, taught, and performed today. There are many ongoing editorial projects focused on individual early modern playwrights, such as Oxford University Press's The Complete Works of John Marston (Butler and Steggle, forthcoming); yet to imagine digitizing and encoding all known early printings of Marston's work for a Marston Quartos Archive seems far-fetched, and the notion of turning to an even less canonical playwright-say, the Glapthorne Quartos Archive-hardly bears thinking about. Shakespeare sells. Shakespeare's name is itself a valuable commodity (Hodgdon 1998; McLuskie and Rumbold 2014; Olive 2015). Digital project funders and creators recognize its value as much as academic publishers who push for Shakespeare's name in book titles.
1 Just as Digital Humanities has a Shakespeare problem, Shakespeare studies has a Hamlet problem, although the prominence of Hamlet in Shakespeare studies, both digital and otherwise, is a topic for another essay. For evidence of Hamlet's prominence, see Bernice W. Kliman et al.'s HamletWorks (Kliman et al. 2004) and Estill, Klyve, and Bridal (Estill et al. 2015).
2 The Shakespeare Quartos Archive uses the Text Encoding Initiative (TEI) for their XML (eXtensible Markup Language), which includes elements such as <handNote>, <stamp>, and <fw> (forme work, for running heads, as an example). For more on their detailed encoding, see Desmet 2014.
Martin Mueller pointed to tenure and promotion as part of the reason for the scholarly focus on major, canonical plays. He asked, "You can see why professional scholars stay away from minor plays, unless they explicitly deal with hot topics. A play may interest them, but how will an entry about it look on a c.v.?"
(Mueller 2014). While there is a wealth of valuable scholarship on minor plays, as Mueller points out, "the annual number of publications about Shakespeare dwarfs-by at least an order of magnitude-the number of publications about his contemporaries." Before scholars can achieve tenure and promotion, they must first land that tenure-track job, which, in many cases, means demonstrating that they can teach the single-author undergraduate Shakespeare course(s). Just as work on noncanonical playwrights can be met with institutional skepticism, digital humanities publication has tended to be undervalued by tenure and promotion committees, prompting scholarly bodies such as the Modern Language Association (MLA) to publish interventions like "Guidelines for Authors of Digital Resources" and "Guidelines for Evaluating Work in Digital Humanities and Digital Media" (MLA Committee on Information Technology 2012a, 2012b). For scholars creating digital projects, both funding and institutional structures of tenure and promotion can offer disincentives to go beyond Shakespeare.
Shakespeare is so privileged in early modern digital humanities projects that some projects market themselves as a corrective. Mueller's now-defunct Shakespeare His Contemporaries (Mueller 2016) described itself as "a project devoted to the collaborative curation of non-Shakespearean plays from Shakespeare's world."3 Despite offering a digital humanities project that recognizes and pushes back against Shakespeare's centrality to early modern drama studies, Shakespeare His Contemporaries's self-definition ("non-Shakespearean"), title (Shakespeare His Contemporaries), and scope ("Shakespeare's world") all gravitate around Shakespeare. This is hardly unique. Similarly, the "Beyond Shakespeare" project (a podcast and blog) has a Twitter bio announcing their interest in "anything but the Bard", just as their handle, @BeyondShakes, and the project title evoke his name (Crighton 2013).
Andy Kesson, Lucy Munro, and Callan Davies's "Before Shakespeare" reveals valuable insights about mid-sixteenth-century London theatres. In their article, "DH and Non-Shakespearean Theatre History", Davies and Kesson (forthcoming) explain how the digital components of their project are an integral part of their outreach mission: The digital presence of "Before Shakespeare" is centered around showcasing various media at once: archives, discussion, videos, images, performance, and song-from Soundcloud to YouTube-to increase the visibility of non-Shakespearean drama and diversify its availability and appeal beyond printed editions and text.
Despite their non-Shakespearean focus, or, indeed, perhaps because of it, their project title, URL (beforeshakespeare.com), "About" description, and Twitter account similarly centralize Shakespeare in the literary canon, even while resisting this positioning. The "About" page explains, "Before Shakespeare is also the first project to take seriously the mid-century beginnings of those playhouses, seeing them as mid-Tudor and early Elizabethan phenomena rather than becoming distracted by the second generation of people working in the playhouses, the most famous of whom is William Shakespeare himself" (Kesson et al.
2016, "About").Their Twitter avatar (@B4Shakes, as of January 2019) is a picture of Shakespeare himself, though with the word "before" covering his eyes and with his mouth silenced by a series of decorative fleurons.There hardly seems to be an elegant solution for digital projects designed to push attention away from Shakespeare.As the most recognizable literary figure from his day, it could be argued that a site designed to appeal to the general public would be remiss to avoid naming him: there is no need to turn him into he-who-must-not-be-named, giving the name of Shakespeare even more power.Furthermore, for a project aiming to reach "wider audiences within and beyond scholarship", name-dropping Shakespeare can be an effective way to attract people to their site and social media, which will then offer "a powerful advertisement for the force and fascination of currently 'non-canonical' plays" (Davies and Kesson, forthcoming). Despite the potential for democratization or canon expansion, digital projects too often reify canon, even when they attempt to subvert it.Emma Smith (2017) describes how this effect is not limited to the digital: drawing on examples from scholarship, culture, and online, she shows how "attempts to decentre Shakespeare are thus often self-defeating."She continues, "Do we privilege Shakespeare above other writers?Self-evidently and self-fulfillingly so."Smith contends that "Shakespeare studies have begun to reflect on the conditions and consequences of their own cultural supremacy"; this article contributes to these ongoing reflections.Although Smith acknowledges the "cultural, theatrical and educational disadvantages of Shakespeare-centrism," she concludes by positioning Shakespeare as "the apex predator in a cultural ecosystem where he has no rivals, only prey," suggesting our focus on Shakespeare is somehow required for the metaphoric ecosystems of culture and scholarship.Digital projects, however, have the potential to go beyond this status quo, by, for instance, positioning Shakespeare alongside his contemporaries or by highlighting the historical moments that led to Shakespeare's current position as cultural touchstone. Digital Editions and the Privileging of Shakespeare's Text When we turn to digital editions, those digital humanities stalwarts, we see the same "not Shakespeare" construction of projects as detailed above.For instance, Greatley-Hirsch's Digital Renaissance Editions was "inspired by the Internet Shakespeare Editions" (Greatley-Hirsch 2015, homepage).That is to say, an online edition of Shakespeare's works inspired a site whose aim is to offer "electronic scholarly editions of early English drama and texts of related interest, from late medieval moralities and Tudor interludes, occasional entertainments and civic pageants, academic and closet drama, and the plays of the commercial London theaters, through to the drama of the Civil War and Interregnum" for all authors, except Shakespeare (Greatley-Hirsch 2015, homepage).These sibling projects only reinforce the divide between Shakespeare and not-Shakespeare.Shakespeare's central position in the canon becomes exceptional: he no longer falls under the umbrella of "Renaissance" or "early English drama."By excluding Shakespeare, Digital Renaissance Editions follows the tradition of printed non-Shakespearean anthologies, such as Arthur F. Kinney's Renaissance Drama (1999) and David Bevington's English Renaissance Drama (2002). 
With digital editions, however, this Shakespeare-not Shakespeare gulf can be bridged, for instance, with a federated search interface.It would be wonderful to see, in the future, a new way to access Digital Renaissance Editions, the Internet Shakespeare Editions (Jenstad 2018), and the Queen's Men Editions (Ostovich 2006), all of which are built on the same platform, where users can easily compare content from across all three sites, perhaps searching for keywords across plays from all three.There is, of course, a value to maintaining each site separately: each project makes an argument about how we need to approach early modern drama.The Internet Shakespeare Editions includes much non-Shakespearean content, such as the full text and facsimiles of the play A Yorkshire Tragedy; however, the non-Shakespearean content is provided as context for our understanding of Shakespeare.A Yorkshire Tragedy is included in the Internet Shakespeare Editions because of its status as "almost Shakespeare": although now accepted as apocryphal, it was once attributed to Shakespeare and was published in the second imprint of the 1664 folio.Similarly, the Internet Shakespeare Editions includes an extract from Robert Greene's Selimus, because Jessica Slights deemed Greene's play a valuable intertext for her edition of Othello (Slights 2017).A Yorkshire Tragedy, Selimus, and other non-Shakespearean works on the site are categorized as "resources" (the last option from the top menu) whereas Shakespeare's plays and poems are the "texts" the Internet Shakespeare Editions foregrounds (the first option from the top menu).The Internet Shakespeare Editions guides users to approach all non-Shakespearean content through the lens of Shakespeare, first and foremost.The argument of Digital Renaissance Editions emerges to counter this overreliance on Shakespeare, yet ends up making Shakespeare conspicuous in his absence.As a digital edition based on the plays performed by a single playing company, the Queen's Men Editions argues for the value of performance and the importance of repertory-based studies not defined by authorship (Ostovich 2006, "The QME brand"). As Scott McMillin and Sally-Beth MacLean (McMillin and MacLean 1998), Lucy Munro (2009), and others have demonstrated, repertory studies is a valuable field that could be bolstered with even further digital editions organized by theatre company or playing space.At this point, the ISE, DRE, and QME offer three sites, three goals, and three uneven slices of early modern drama.While "maintaining the integrity of [the] sites" and "eliminat[ing] confusion" about their roles and boundaries (Ostovich 2006) is important, there is still a place for a federated search that would allow users to approach the content on all three sites at once.Although this imagined federated search would, at this moment, be far from a universal view of early modern English drama, it could offer a more comprehensive overview than each site currently provides as they stand alone, connected for users only with the occasional hyperlinks.Diane K. Jakacki's thoughtful description of the Internet Shakespeare Editions tagset, relation to its sibling sites, and the potential of linked open data insightfully considers the potential of "acts of editorial disruption" to "allow us to move forward toward infinity while maintaining editorial stability across digital projects" (Jakacki 2018, p. 
158). As digital editions of early modern drama "move forward toward infinity", we must assess if we want Shakespeare to be the default number one.
The Folger Shakespeare Library has also published digital projects defined by the presence or lack of Shakespeare: Folger Digital Texts (Mowat et al. 2012) and the Digital Anthology of Early Modern English Drama (Brown et al. 2016). However, unlike the Internet Shakespeare Editions and their sister sites, the Folger sites provide edited texts without critical introductions or notes. Folger Digital Texts offers editions of Shakespeare; the Digital Anthology includes editions and bibliographic information about, as their homepage announces, "other plays from Shakespeare's time" (emphasis in the original). The Digital Anthology Frequently Asked Questions page anticipates that users will want to know "Where is Shakespeare? And how does this relate to him?" Their response runs, in full:
William Shakespeare's plays are not part of EMED, for a simple reason: EMED was conceived as a way of showcasing all of the other playwrights writing in England's early modern era. By bringing together their plays, however, EMED recreates the theater world that made possible Shakespeare's career and influenced his work. Shakespeare knew many of the earlier plays as an actor or audience member. He also collaborated and competed with some of the playwrights. He directly influenced others. To read Shakespeare's works, we recommend another Folger resource: the Folger Digital Texts. Some of the plays in EMED have historically been attributed to Shakespeare, including The London Prodigal, Sir John Oldcastle, and The Yorkshire Tragedy. These are currently regarded as "Shakespeare Apocrypha" and are no longer attributed to Shakespeare. For an explanation of how The London Prodigal fits (or does not fit) into Shakespeare's corpus, see Peter Kirwan's article in Shakespeare Documented. (Hyperlinks removed from original.)
Even as they undertake important work on early modern drama beyond Shakespeare, the Digital Anthology repeatedly presents the non-Shakespearean plays at the center of their project as "other". They assert that their site is valuable because it adds to our knowledge of Shakespeare. Their anticipated users don't care about Sir John Suckling or even Christopher Marlowe. They highlight the value of their site's "almost Shakespeare" apocryphal content. The Digital Anthology links to two Folger projects focusing entirely on Shakespeare: the Folger Digital Texts and Shakespeare Documented, both examples of "deep" digital humanities projects with a focus on Shakespeare.
Even if we consider the Digital Anthology of Early Modern Drama and Folger Digital Texts as twinned projects, they are not identical, but fraternal twins. The interfaces for both sites are quite different; one of the most notable differences is that the Folger Digital Texts Shakespeare editions are presented in modern spelling, whereas the rest of the early modern drama corpus is not. This is because the Folger Digital Texts are based on the Folger's print series, edited by Paul Werstine and Barbara Mowat, which means they have a different level of editorial intervention. The Folger digital projects do not neatly fit into the "deep" and "broad" categories: rather, they exist to serve different audiences. A nonspecialist will have an easier time navigating Shakespeare's texts on Folger Digital Texts than the plays on the Digital Anthology of Early Modern English Drama. Conversely, the Digital Anthology appeals to scholars by offering extensive links to existing resources, such as DEEP and the ESTC, as well as the additional data about early performance and publication that offers easy comparison across the corpus. The artificial divide the Folger sites erect between Shakespeare and not Shakespeare, then, is only compounded when, for instance, a scholar searches for plays first performed in 1599 and receives a list of eleven plays, which, based on the Digital Anthology's scope, excludes Shakespeare's Julius Caesar, Henry V, and As You Like It. (A similar search in DEEP will include all results, but with multiple entries for each play that has more than one pre-1660 publication.) This is to say, the Folger's Digital Anthology of Early Modern English Drama is a digital project both with breadth (including bibliographic data about 403 plays) and depth (offering full texts of twenty-nine plays), yet it is the project's very exclusion of Shakespeare that warps the search results to offer an unrepresentative view of early modern drama and instead presents results with a Shakespeare-sized hole at their center. Indeed, the work of other writers is also omitted with the Shakespearean: for instance, Fletcher's work in Henry VIII is cut out from the corpus simply because it is a collaboration with Shakespeare.
Even in digital editions ostensibly focused on non-Shakespearean early modern drama, Shakespeare's shadow looms. The Queen's Men Editions currently provides performance editions of nine plays from the Queen's Men repertory-and four of these nine plays (Famous Victories of Henry V, King Leir, Troublesome Reign of King John, and True Tragedy of Richard III) have Shakespearean counterparts. The repertory of the Queen's Men Company did not comprise 44% of plays directly related to Shakespeare (McMillin and MacLean 1998, esp. appendix A); yet this digital project has begun by privileging those texts. The Folger's Digital Anthology of Early Modern English Drama similarly offers an edition of The True Chronicle of King Leir, as well as the apocrypha they highlight in their FAQ. Some of the same apocryphal plays (including The London Prodigal) appear in both the "resources" section of the Internet Shakespeare Editions and the Digital Anthology of Early Modern Drama. Just having proximity to Shakespeare means these works get more editorial attention than other plays.
Richard Brome Online (Cave 2010) remains remarkable in the history of open-access online editions of early modern drama.4Greatley-Hirsch notes, "Until the launch of Richard Brome Online in 2010, there were no electronic critical editions of non-Shakespearean Renaissance drama available" (Hirsch 2011, p. 574).Today, it still stands alone in the landscape of digital humanities projects as the only non-Shakespearean author-based online edition.(The Cambridge Edition of the Works of Ben Jonson Online (Butler 2014), which expands and supplements their printed play editions, is paywalled.)Richard Brome Online argues for the value of considering the works of a single playwright as an oeuvre-an approach often taken to Shakespeare.Like repertory-based editions, there is the idea that if we could expand this model to every author or every repertory, we would have a complete representation of the plays of the period.The realities of early modern collaborative playwriting and anonymous works, however, will complicate future author-based online editions, although author-based editions will certainly have their place in digital humanities projects; I, for one, look forward to Christopher Marlowe Online or John Webster Online. Let us take Webster's The Duchess of Malfi as an exemplar of the status of non-Shakespearean plays online.The Duchess of Malfi is not Shakespearean apocrypha, nor is it a source or adaptation of one of Shakespeare's plays.(Webster's play, however, was performed by the King's Men, Shakespeare's company.)Despite having only marginal Shakespearean ties, The Duchess of Malfi is of continued scholarly interest and has an ongoing performance history, including a 2018 Royal Shakespeare Company production directed by Maria Aberg (Aberg 2018).Although currently Shakespeare's plays are performed at much higher rates than those by his contemporaries, performance and scholarship about performance offers one opportunity to effectively decenter Shakespeare.Even though The Duchess of Malfi is a relatively popular early modern play, it does not currently appear in any of the digital editions discussed in this essay so far (The Internet Shakespeare Editions, Queen's Men Editions, Digital Renaissance Editions, Folger Digital Texts, Early Modern Anthology of Early Modern English Drama)-it does, however, appear in both printed anthologies mentioned (Kinney 1999;Bevington 2002).In future expansions, it could fall into the scope of Digital Renaissance Editions and the Digital Anthology.Yet today, in 2019, it can only be found freely available online in out-of-copyright editions (on HathiTrust (Furlough 2008), GoogleBooks (Google 2004), and the Internet Archive (Kahle 1996)), in its Early English Books Online-Text Creation Partnership (Early English Books Online-Text Creation Partnership EEBO-TCP) transcription and derivatives, and in a single digital edition.The archived version on Renascence Editions (Moncrief-Spittle 2001) offers a transcription of William Hazlitt's 1857 edition; the 1910 Harvard Classics edition, edited by Charles W. 
Eliot, is available on Bartleby.com:Great Books Online (1993), Project Gutenberg (Hart 1971), and the ebooks@Adelaide (Thomas 2015) sites-though not all of these sites are transparent about their sourcetexts.St John's College Digital Archive offers an unannotated, undated facsimile of a typewritten Duchess of Malfi text (King William Players 1947), with no clue as to its origins except that it is posted in the "Playbills and Programs" digital collection, many of which are "from productions by The King William Players, the St. John's student theater troupe". 5 The only online scholarly edition of The Duchess of Malfi less than a hundred years old is Larry Avis Brown's 2010 edition (last updated 2018), which includes glosses, commentary on each scene, and photos from a 1998 production at Lipscomb University in Nashville (Brown 2010).Brown's useful edition, however, exists separately from most of the sphere of early modern English drama online: it is a boutique project that stands alone, without links to and from many scholarly resources.Brown links to The Internet Shakespeare Editions, noting that his edition won their "swan" award in 2003, yet in the ISE rebuild, all mentions of Brown's site (still findable in their site search) now result in "Page Not Found" errors.The usefulness of Brown's Duchess of Malfi edition, then, is hampered by its lack of findability.I admit I only stumbled upon this edition because it is linked from the Wikipedia page for The Duchess of Malfi."Boutique" editions created by individual scholars, particularly when peer-reviewed, have the potential to democratize our access to early modern plays-but this access must include findability.As Jakacki notes, however, "the ambition of a network of linked sources has significant implications for the editorial processes of not one, but all of the resources involved" (Jakacki 2018, p. 165).Previously, Early Modern Literary Studies (Steggle 2004) and Renascence Editions (Bear 1994) made efforts to host boutique editions of early modern literature edited to varying degrees, however these attempts seem to have been largely abandoned. Shakespeare is separated from the other playwrights and poets of his day by our current scholarly digital editions.Greatley-Hirsch quantified the disproportionate number of digital editions of Shakespeare compared to his contemporaries (Hirsch 2011); this analysis suggests that the disparity extends beyond the amount of Shakespearean texts online to the very ways the texts are made accessible.As Katherine Rowe (2014) argues, scholars need to assess if digital Shakespeare texts are "good enough" for the purposes we wish to apply them, including digital analysis. 6Furthermore, I assert that we need to bring this awareness to our use of digital projects about early modern drama more generally: what questions do we bring to the projects?What are our goals as users? 
7 Proliferating Shakespeares Shakespeare's cultural prominence accounts for many of the factors discussed thus far: funders' pro-Shakespeare predilections, appeals to general audiences, and the "non-Shakespeare" project backlash.Shakespeare's preeminence itself also leads to the development and shape of digital humanities projects themselves.Peter Donaldson's Global Shakespeares highlights Shakespeare's cross-cultural appeal and offers site visitors evidence of how Shakespeare's plays are adapted and performed around the world.The nature of the Global Shakespeares site (and similar sites such as Shakespeare in Taiwan or Shakespeare in Spain) is only possible because Shakespeare is a global commodity. 8A Global Peeles site would have precious little content, because George Peele's works are not as frequently rewritten and staged.Global Shakespeares does not strive to be comprehensive: it is not a repository of full-length filmed productions, nor is it a record of all international Shakespeare production.Rather, it is a gathering of curated videos, taken from a wealth of global Shakespeare materials; it is the very wealth of materials that makes the project possible. Other examples abound of digital projects that exist precisely because of Shakespeare's cultural prominence.The four hundredth anniversary of Shakespeare's death in 2016 led to the reimagining or launch of multiple new digital projects, many of which are devoted to Shakespeare's legacy.Shakespeare & The Players (Rusche and Shaw 2016), for instance, is a collection of nearly 1000 postcards of Shakespearean performances from 1880-1914.The Victorian Illustrated Shakespeare Archive (Goodman 2016) offers a repository of illustrations of Shakespeare's works by four Victorian illustrators.Performance Shakespeare 2016 (Massai and Bennett 2016) captured a database of those productions that were performed in honor of the quadricentennial anniversary.Exploring Shakespeare's ongoing and changing cultural impact is an important part of Shakespeare studies, which naturally lends itself to the creation of resources that, in turn, highlight Shakespeare's prominence. It is not then surprising that Shakespeare is overrepresented in scholarship about the early modern period.Shakespeare's prominence in digital humanities now contributes to this cycle: scholars write about Shakespeare because they can research him in innovative ways (easily comparing, for instance, early printed texts on the Shakespeare Quartos Archive, or watching a production on the Global Shakespeares site); the interest in Shakespeare, in turn, generates more Shakespeare-centric sites, often specifically designed for teaching and research.The World Shakespeare Bibliography Online (Estill 2019b) serves as a record of this research and as another element of the self-reinforcing cycle of Shakespeare publication.The World Shakespeare Bibliography is a database of performances of and publications about Shakespeare, which ultimately shapes how and what we research. 
The boundaries of the World Shakespeare Bibliography (it includes only works that focus on Shakespeare) mean that scholars using the WSB will not be able to find related work about early modern literature or the professional Elizabethan stage more broadly, unless that scholarship includes a sustained focus on Shakespeare. Users of any other author-focused bibliography, such as the Marlowe Bibliography Online (McInnis and Allan 2019) or the Margaret Cavendish Bibliography Initiative (Siegfried 2019), will face similar limitations; however, the sheer scope of the World Shakespeare Bibliography (currently over 126,000 records) can lead scholars to forget about the world of scholarship beyond its scope, whereas the limits of smaller, boutique bibliographies are more readily apparent.
The bibliography with breadth to complement the World Shakespeare Bibliography's depth is the MLA International Bibliography (MLA International Bibliography 2018). The World Shakespeare Bibliography, of course, covers much material outside the scope of the MLAIB, such as professional productions, podcasts, digital projects, and reviews. The World Shakespeare Bibliography's depth of scope leads to multiple benefits, including having descriptive annotations and cross-referencing between items (for instance, a journal article about film adaptations of Hamlet would be cross-referenced to entries for each post-1960 film discussed, which in turn would have an annotation describing the cast as well as a list of reviews and other scholarly works that had discussed the film). Yet, even where their scopes are the same, the Shakespeare-centric focus of the World Shakespeare Bibliography means that there are items in the WSB that should appear in the MLAIB but simply aren't included. Books offer the most striking disparity: only 14% of the books published after 1960 annotated in the World Shakespeare Bibliography are indexed in the MLA International Bibliography.10 Despite being the MLA International Bibliography (emphasis added), it is too often the global, non-English contributions that are among the thousands of overlooked texts. As such, perhaps counterintuitively, it is the World Shakespeare Bibliography's specificity of focus that leads to its greater inclusivity of global materials.
The digital projects that reflect Shakespeare's cultural prominence, in turn, reinforce his position in our scholarship by opening new avenues for research, often focused entirely on Shakespeare and his legacy.Indeed, digital humanities' Shakespeare problem extends beyond the framing and focus of existing and in-progress digital projects (what we study) by affecting the kinds of research we can undertake (how we study).For instance, Shakespeare, and the consideration of what is Shakespearean or not, has been central to stylometry, an area of study that now uses primarily digital methodologies.Shakespeare has long been the testing ground and often bellwether for new approaches to both literary criticism and textual studies (Parvini 2012;Machan 2000); new digital humanities approaches are no exception, and often turn to Shakespeare as a first case study.The cycle that reinforces Shakespeare's centrality continues into the digital: online projects about Shakespeare beget new research questions that are, in turn, focused on Shakespeare.The boundaries of Shakespeare-centric projects affect the very questions we can bring to our research and teaching and the new questions we are conditioned to develop. Conclusions If we could imagine an early modern digital project with both depth and breadth that positions Shakespeare in his changing historical contexts, the rise of bardolatry over time would mean reflecting Shakespeare's rising cultural prominence over the past centuries.A synchronic project might choose to focus only on Shakespeare's lifetime or only on the heyday of Elizabethan and/or Jacobean professional theatre, yet such a digital project would not capture Shakespeare's legacy.Even if we could conceptualize (let alone realize) the most idealized, unbiased digital project, we would certainly not be able to navigate or query it without bringing in our conditioned, canonical biases. Too frequently, we protest that we are "Shakespeareans" or "early modernists" first and digital scholars second11 ; yet, in order to be effective scholars, we must train ourselves and future generations about digital research methods, including how to determine scope and functionality.To make the most of those valuable early modern digital projects we have, scholars must understand what questions these resources can effectively answer.As John Lavagnino (2014) has observed, today, all humanists undertake research with digital tools, whether they consider themselves digital humanists or not.In both building and using tools about the early modern period, we need to create and reference transparent and detailed project descriptions and guidelines.The future of early modern studies will be shaped by the digital tools that will change the way we research.The potential of linked open data or other digital advances, however, will not be realized if scholars do not critically analyze each digital project as we would a monograph, an edition, a performance, or a bibliography.This essay's critical engagement of digital projects both individually and in their online ecosystem demonstrates that digital humanities has a Shakespeare problem.As these projects evolve and depreciate and as new projects are built, we will have to continue our assessments.How we choose to respond to these early modern digital resources and how we design our future projects will, in turn, shape how we understand the literary canon. 
5 The St. John's College Catalogue for 1947-1948 reveals that the King William Players produced The Duchess of Malfi in their 1946-47 season (St. John's College in Annapolis 1948).
6 See also the discussion, cited by Rowe, on the open review for Andrew Murphy's "Shakespeare Goes Digital" (Murphy 2010) about how Shakespeareans use digital texts.
7 See The Shakespeare User: Critical and Creative Appropriations in a Networked Culture, edited by Valerie M. Fazel and Louise Geddes (Fazel and Geddes 2017), particularly the chapter by Eric Johnson (2017).
8 For a thoughtful discussion of the strengths and weaknesses of the Global Shakespeares project, as well as a consideration of the opportunities for and threats to the project, see Diana Henderson (2018). Henderson positions her in-depth analysis as "a case study that may assist others wrestling with the challenging, changing digital/Shakespeares studies landscape" (p. 70). For additional reflections on Global Shakespeares, including by its editors, see Henderson's citations.
9 For a history of the World Shakespeare Bibliography and its move online, see Estill (2014).
2019-03-27T09:52:35.920Z
2019-03-04T00:00:00.000
{ "year": 2019, "sha1": "4bb85e831d5a7c8c1739a73db2704fd959c12f4a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-0787/8/1/45/pdf?version=1553654034", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "4bb85e831d5a7c8c1739a73db2704fd959c12f4a", "s2fieldsofstudy": [ "Art", "Linguistics" ], "extfieldsofstudy": [ "Sociology" ] }
231650945
pes2o/s2orc
v3-fos-license
A Molecule of the Viridomycin Family Originating from a Streptomyces griseus-Related Strain Has the Ability to Solubilize Rock Phosphate and to Inhibit Microbial Growth Some soil-borne microorganisms are known to have the ability to solubilize insoluble rock phosphate and this process often involves the excretion of organic acids. In this issue, we describe the characterization of a novel solubilizing mechanism used by a Streptomyces strain related to Streptomyces griseus isolated from Moroccan phosphate mines. This process involves the excretion of a compound belonging to the viridomycin family that was shown to play a major role in the rock phosphate bio weathering process. We propose that the chelation of the positively charged counter ions of phosphate constitutive of rock phosphate by this molecule leads to the destabilization of the structure of rock phosphate. This would result in the solubilization of the negatively charged phosphates, making them available for plant nutrition. Furthermore, this compound was shown to inhibit growth of fungi and Gram positive bacteria, and this antibiotic activity might be due to its strong ability to chelate iron, a metallic ion indispensable for microbial growth. Considering its interesting properties, this metabolite or strains producing it could contribute to the development of sustainable agriculture acting as a novel type of slow release bio-phosphate fertilizer that has also the interesting ability to limit the growth of some common plant pathogens. Introduction Phosphorus (P) is often scarce in natural soils and its low availability limits plant growth and thus agricultural yields [1,2]. To solve this problem, agricultural soils are amended with soluble phosphate. However, the generation of soluble phosphate from natural rock phosphate (RP) is expensive and highly polluting and alternatives should be found. Natural ground RP was used in traditional agriculture with limited success because of its poor solubility. Interestingly several reports in the literature mentioned that some soil-borne microorganisms are able to solubilize mineral phosphates and are thus of great interest to enhance P availability for plant nutrition [2,3]. The introduction of such microorganisms in fields might constitute a way to reduce soluble phosphate fertilizer input [1,2,4]. The commercially available poorly soluble RP is mainly a calcium hydroxyapatite [Ca 10 (PO 4 ) 6 OH] but calcium could be substituted by other cations such as Fe 3+ , Al 3+ , Na + , and Mg 2+ ions [5]. Numerous reports in the literature mention that most Phosphate Solubilizing Microorganisms (PSM) dissolve insoluble mineral phosphates via secretion of organic acids [6]. In addition, some PSM can contribute to plant health via the production of bioactive substances, limiting the growth of some specific plants pathogens [7,8]. Their use could thus reduce the excessive input of chemical pesticides that are known to be toxic to human health [9]. Among these PSM, Actinobacteria [10][11][12][13][14] are of special interest. These filamentous and spore-forming bacteria contribute to soil fertility through their valuable ability to decompose soil organic matter. They also contribute to plant health and fitness thanks to their ability to produce molecules able to limit the growth of devastating bacterial or fungal phytopathogenic agents [15,16]. 
We previously isolated from a Moroccan phosphate mine and characterized a Streptomyces griseus-related strain able to efficiently solubilize RP [10] as well as to produce antifungal and anti-bacterial metabolites [11,12]. In this issue we demonstrated that the RP solubilization ability of this strain was due to excretion of a molecule of the viridomycin family. This molecule can allow the slow release of the phosphate of RP via its ability to chelate the positively charged counter ions of the phosphate constitutive of RP.
Detection and Purification of Bio-Active Fractions
The cell-free supernatant of 3 L cultures of the Streptomyces griseus-related strain grown as described in Materials and Methods was extracted with isoamyl alcohol. The resulting greenish organic phase was concentrated to dryness (1.65 g), dissolved in 2 mL of methanol as a crude extract, and analyzed by Thin Layer Chromatography (TLC). After migration and revelation, four active spots were detected by bioautography on the TLC plates. The four fractions, designated A, B, C, and D, showed Rf values of 0.9, 0.45, 0.37, and 0.3, respectively. The A and D spots showed both antibacterial and antifungal activities against the tested microorganisms, while B and C showed only antibacterial activity (see Materials and Methods; Figure 1). Subsequently, the crude extract (275 mg/mL) was deposited on preparative silica gel plates in order to collect the A and D fractions in sufficient quantity. We obtained 115 mg/mL of fraction A (41.8% of the crude extract) and 98.5 mg/mL of fraction D (35.8% of the crude extract) (Figure 1).
Determination of the Mineral Phosphate-Solubilizing Abilities of the Bio-Active Fractions
In order to determine the mineral phosphate-solubilizing abilities of the bio-active fractions, 500 µL of active fractions A and D at a final concentration of 98.5 mg/mL were checked for their ability to solubilize RP and TCP (tri-calcium phosphate, Ca3(PO4)2) as described in Materials and Methods (Figure 1). The results demonstrated that only fraction D (greenish color) showed an ability to solubilize RP as well as TCP. The concentrations of solubilized phosphate from RP and TCP were 63.7 µg/mL and 135.2 µg/mL, respectively.
HPLC Analysis of the Bio-Active Fraction D that Also Bears Mineral Phosphate-Solubilizing Ability
The HPLC chromatogram of the active greenish fraction (D) revealed several peaks. The retention times of the main peaks D1 and D2 were 15.62 and 17.99 min, respectively (Figure 2). These sub-fractions were checked again for their ability to solubilize mineral phosphate and for their antibiotic activity against Micrococcus luteus ATCC 381 (ML), Bacillus subtilis ATCC 9524 (BS), Pythium ultimum BCCM 16,164 (PU), and Mucor ramannianus NRRL 1829 (MR) using the paper filter disk technique. The purified D1 compound showed only antibacterial activity, whereas the purified D2 compound showed both the ability to solubilize the mineral phosphate of TCP and RP (Figure 2) and the ability to inhibit the growth of all tested bacterial and fungal strains (Table 1).
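As a quick cross-check of the fraction yields reported above (my own arithmetic; the source states only the final percentages), the quoted figures follow directly from the fraction concentrations relative to the 275 mg/mL crude extract:

\[
\frac{115\ \mathrm{mg/mL}}{275\ \mathrm{mg/mL}} \approx 0.418 \;(41.8\%,\ \text{fraction A}), \qquad
\frac{98.5\ \mathrm{mg/mL}}{275\ \mathrm{mg/mL}} \approx 0.358 \;(35.8\%,\ \text{fraction D}).
\]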
Larger volumes of the pure active D2 fraction led to a higher release of solubilized phosphate from TCP and RP. For instance, 100 µL and 500 µL of the D2 fraction resulted in concentrations of solubilized phosphate of 28.6 µg/mL and 45.2 µg/mL from RP and of 78.6 µg/mL and 110.7 µg/mL from TCP, respectively (Figure 3).
Figure 3. Phosphate released from TCP (4 mg/mL, white histograms) and RP (1 mg/mL, black histograms) incubated for five days at 30 °C on a rotary shaker (180 rpm) in the presence of 100 µL and 500 µL of purified compound D2 or an equivalent volume of sterile bi-distilled water (negative control). The amount of soluble phosphate released from RP and TCP in the negative control was subtracted from that present in the assays containing the purified compound D2. The final values are arithmetic means of three independent assays and error bars represent standard deviations from the mean values of the triplicates.
Structural Determination of the Bio-Active Compound of Fraction D2
After purification, the structure of the active D2 compound (30 mg) was elucidated by different spectroscopic methods. The HR-ESIMS and 13C NMR spectra revealed that the molecular formula of the D2 compound was C21H12O9N3Fe. The ESIMS spectrum, obtained in the full-scan mass range (m/z) 100-2000 and in the negative mode, showed two main peaks at m/z 506 (Calcd, 506.1875) and at m/z 1035, corresponding to (M-H)− and (2M + Na-2H)−, respectively. Table 2 summarizes the assignments of the protons and carbons of the NMR spectrum of the D2 compound.
Analysis of the 13C NMR and 1H NMR spectra revealed the presence of a ketone group, an aldehyde group, and three 'potential' aromatic protons on the same moiety. According to its atomic composition, the NMR data, and the green color of its chromophore moiety, we identified this compound as a 3-imino-4-oxo-cyclohexa-1,5-diene-carbaldehydo residue. This compound is likely to constitute a monomer of the D2 compound (Figure 4a). The δ values and coupling patterns of the four protons on the cyclohexane ring of the chromophore (Table 2) and the long-range couplings between the aldehyde proton and the ring carbons indicated that an aldehyde group was present at C-7 of the monomer (Figure 4a). The monomer should also have a ketone group at C-1 and an oxime group (-C=N-O) at C-2 to be consistent with the molecular formula of D2 (Figure 4a). The results obtained with mass spectrometry and various NMR measurements (COSY, HSQC, HMBC, NOESY, DOSY) revealed that the structure of the D2 compound was Tris(3-imino-4-oxo-cyclohexa-1,5-diene-carbaldehydo) iron (III), as shown in Figure 4b.
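A rough consistency check on the assignment (my own arithmetic, using standard average atomic weights; the formula and peak values are the source's): treating the complex as three singly deprotonated C7H4NO3 ligands bound to one Fe(III) centre reproduces the reported molecular formula, a neutral complex, and a molar mass in line with the main negative-mode peak:

\[
3 \times \mathrm{C_7H_4NO_3^{-}} + \mathrm{Fe^{3+}} \;\rightarrow\; \mathrm{C_{21}H_{12}N_3O_9Fe},
\qquad
M \approx 3 \times 150.1 + 55.85 \approx 506\ \mathrm{g\,mol^{-1}},
\]

which is of the same magnitude as the reported main peak at m/z 506 in negative mode.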
This compound belongs to the viridomycin family and is similar to viridomycins A, E, and F. The greenish color of this compound suggested that it may contain iron. The presence of iron in this viridomycin-like molecule was therefore tested and confirmed by atomic absorption spectrophotometry. The iron concentration was 0.26 mg of Fe3+ per mg of purified viridomycin, corresponding to approximately 2 moles of iron per mole of viridomycin. Discussion In this study we demonstrated that a Streptomyces griseus-related strain that possesses multiple plant growth-promoting activities [10][11][12] produces a molecule of the viridomycin family [17] able to solubilize RP [11,13]. This family of molecules, produced by some Actinobacteria, was discovered in 1964 in the form of greenish pigments [18] constituted by a mixture of several molecules showing subtle structural differences, and thus named viridomycins (A, B, C, D, E, and F) [19][20][21]. These molecules showed weak antibiotic activity that might be due to their ability to chelate iron, a metallic ion indispensable for microbial growth. Their production was shown to be triggered under conditions of phosphate limitation [18][19][20][21] but not under conditions of iron limitation [22]. This suggested that the first function of these molecules was to scavenge phosphate under conditions of phosphate limitation. However, our report is the first to clearly demonstrate the ability of a molecule belonging to the viridomycin family to solubilize insoluble mineral phosphates. The capture, by molecules of the viridomycin family, of the positively charged counter-ions of the P component of RP and TCP (including calcium, iron, etc.) is thought to destabilize the structure of RP or TCP, allowing the liberation and solubilization of the negatively charged phosphates.
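The figures just quoted involve two small unit conversions: the molar mass implied by the formula C21H12O9N3Fe, and the conversion of the measured iron mass fraction into a molar Fe-to-compound ratio. The short script below is only an illustrative back-of-the-envelope check of those conversions; the atomic masses are standard average values and nothing here is part of the original analysis.

```python
# Back-of-the-envelope check of the numbers reported for compound D2.
# Average atomic masses in g/mol (standard values); illustrative only.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999, "N": 14.007, "Fe": 55.845, "Na": 22.990}

def formula_mass(composition):
    """Average molar mass of a composition given as {element: count}."""
    return sum(ATOMIC_MASS[el] * n for el, n in composition.items())

# Reported formula of D2: C21 H12 O9 N3 Fe
M = formula_mass({"C": 21, "H": 12, "O": 9, "N": 3, "Fe": 1})
print(f"M(C21H12O9N3Fe)         ~ {M:.2f} g/mol")   # ~506.2, matching the reported calcd value 506.1875
print(f"(M - H)-                ~ {M - ATOMIC_MASS['H']:.2f}")
print(f"(2M + Na - 2H)-         ~ {2 * M + ATOMIC_MASS['Na'] - 2 * ATOMIC_MASS['H']:.2f}")  # compare with observed m/z 1035

# Iron content: 0.26 mg Fe3+ per mg of purified compound.
fe_mass_fraction = 0.26
mol_fe_per_mol_compound = fe_mass_fraction * M / ATOMIC_MASS["Fe"]
print(f"mol Fe per mol compound ~ {mol_fe_per_mol_compound:.1f}")   # ~2.4
```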
Some other microorganisms besides Actinobacteria [23][24][25], including Aspergillus niger [26], Enterobacter sp., Erwinia sp. [27], and Penicillium rugulosum [28], were also reported to solubilize insoluble RP by excreting chelating substances able to form stable complexes with phosphorus counter-ions such as Al, Fe, and Ca. These results were further supported by Narsian and Patel [29], who tested the effects of known chelators including EDTA, DTPA, NTA, aluminon, and oxine on RP solubilization. However, the structure of these natural chelating molecules has never been characterized [30,31]. Furthermore, these metal-chelating molecules/siderophores may act as antibiotics since they chelate metals, such as iron, that are indispensable for microbial growth [32,33]. We previously demonstrated that the introduction into greenhouse soil of the Streptomyces griseus-like strain producing metabolites of the viridomycin family had a positive effect on plant growth and fitness and was able to limit the detrimental effects of phytopathogenic fungi [12,13]. Considering its interesting properties, spreading this strain, or strains with similar properties, in greenhouse and/or open-air agricultural soils could contribute to the development of a more sustainable agriculture, since the molecules produced by these bacteria are able both to limit the growth of various plant pathogens and to make available for plant nutrition the phosphate present in the poorly soluble ground rock phosphate used as a cheap amendment in traditional agriculture. Production and Extraction of Active Compounds The culture mentioned above was centrifuged at 10,000× g for 10 min. The collected supernatant was filtered through a 0.45 µm-pore-size filter (Supor-450; Pall Corporation, Port Washington, NY, USA) to remove cell debris. The filtrate was extracted with isoamyl alcohol (3-methyl butan-1-ol, Sigma, St. Quentin Fallavier Cedex, France), 100:25 (v/v), acidified to pH 2 with 6 N HCl. The greenish organic phase was collected, concentrated to dryness (1.65 g) under vacuum on a Rotavapor (Laborata 4000, Heidolph) at 40 °C, and then dissolved in 2 mL of MeOH to obtain the crude extract called E (275 mg/mL). Detection and Purification of Active Fractions with Antimicrobial Activities In order to assess the presence of different bio-active metabolites in the crude extract by bioautography [34], volumes of 40 µL and 60 µL of the extract were spotted onto 20 × 20-cm silica gel plates (Merck Art.
5735, Kiessel gel 60F254) in order to assess its inhibitory activity against the Gram positive bacterium Bacillus subtilis ATCC 9524 and the fungus Pythium ultimum BCCM 16164, respectively. The TLC plates were developed using a mixture of ethyl acetate-methanol, 100: 15 (v/v) and were air-dried overnight at 37 • C to remove the solvents. After migration, the separated compounds were visualized at 254 nm under UV light (absorbance) and at 365 nm (fluorescence). The TLC plates were then placed in a plastic bioassay dish (23 × 23 × 2.2 cm 3 , Fisher Scientific Labosi) and overlaid with 150 mL of nutrient agar or PDA media (containing 7 g/L of agar), inoculated with B. subtilis (10 6 cfu/mL) or P. ultimum (100 cfu/mL), respectively. Once the agar solidified, the plate was incubated at 30 • C. Four spots with bio-activity were detected by bio autography on the TLC plates. The four fractions designated as A, B, C, and D showed Rf values of 0.9, 0.45, 0.37, and 0.3, respectively. After 24 h of incubation for P. ultimum and 48 h for B. subtilis, clear areas corresponding to zones of growth inhibition of the microorganisms were detected, indicating the location of antibiotic compounds on the TLC plates. A and D spots showed both antibacterial and antifungal activities against the tested micro-organisms, while B and C showed only antibacterial activity. Subsequently, the crude extract (E, 275 mg/mL) was deposited on preparative silica gel plates in order to collect the A and D fractions in sufficient quantity. We obtained 115 mg/mL of fraction A (41.8% of total E) and 98.5 mg/mL of fraction D (35.8% of total E). Antimicrobial Bioassay against Bacteria and Fungi The active fractions were collected and their antimicrobial activities were tested against two bacterial (Bacillus subtilis ATCC 9524 and Micrococcus luteus ATCC 381) and fungal (Pythium ultimum BCCM 16,164 and Mucor ramannianus NRRL 1829) species using the paper filter disk technique: To do so, 20 µL of the active purified fractions were deposited on sterile cellulose disks (5 mm diameter, Pasteur Institute) and allowed to dry overnight at 37 • C to evaporate the methanol solvent. Subsequently they were placed aseptically on the surface of plates of nutrient agar (Difco Maryland, USA) or of Sabouraud agar medium (Biorad, Schiltigheim, France) inoculated with bacteria at 10 6 cfu/mL and fungi at 100 cfu/mL, respectively. Sizes of the inhibition zones were determined after 24 h of incubation for Bacillus subtilis ATCC 9524 and Micrococcus luteus ATCC 381 and 48 h for Pythium ultimum BCCM 16,164 and Mucor ramannianus NRRL 1829. Controls consisted of sterile cellulose discs. Three replicates were performed for each fraction for each microorganism. Determination of Phosphate-Solubilizing Abilities In order to check the phosphate-solubilizing abilities of the active antimicrobial fractions, various volumes of the selected fractions or of sterile distilled water (negative control) were added to 1 mL of sterile bi-distilled water containing 1 g/L of RP or 4 g/L of TCP (Sigma Aldrich, St. Quentin Fallavier Cedex, France). The mixtures were incubated at 30 • C for five days on a rotary shaker (180 rpm). The values obtained with the negative controls were subtracted from the values obtained with the fractions. The amount of soluble phosphate released from RP and TCP was estimated by the Olsen and Sommers method [35]. The final values are the arithmetic mean of three independent assays (Figure 3). 
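The data reduction described above (subtracting the mean of the negative controls and averaging triplicates) can be made explicit with a small helper. The sketch below uses invented readings purely for illustration; it is not the study's data.

```python
from statistics import mean, stdev

def released_phosphate(assay_readings, control_readings):
    """Mean and SD of soluble phosphate (µg/mL) after subtracting the
    mean of the negative-control readings, as described above."""
    baseline = mean(control_readings)
    corrected = [x - baseline for x in assay_readings]
    return mean(corrected), stdev(corrected)

# Hypothetical triplicate readings (µg/mL), for illustration only.
rp_assay = [65.1, 62.8, 63.2]      # RP incubated with the active fraction
rp_control = [1.9, 2.1, 2.0]       # RP incubated with sterile bi-distilled water
m, s = released_phosphate(rp_assay, rp_control)
print(f"released P from RP: {m:.1f} +/- {s:.1f} µg/mL")
```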
The only fraction with both antibiotic and phosphate-solubilizing activities was fraction D that was selected for further investigations. Purification and Structure Elucidation of the Active Compound The active fraction D was purified by HPLC (Waters: controller 600, pump 600, dual λ absorption detector 2487, Linear Recorder); column C18 (250 × 7.8 mm 2 UP ODS); mobile phase: linear gradient of methanol-H 2 O from 0 to 100% for 52 min; flow rate: 1 mL/min, λ detection at 220 nm and at 420 nm. The column temperature was 30 • C. The injection volume was 200 µL. Under these conditions, the retention time was recorded at 17.6 min. This fraction was shown to contain two main peaks, D 1 and D 2 , but only D 2 showed both phosphate-solubilizing and anti-microbial activities. After purification of the active compound present in D 2 by HPLC, the latter was subjected to spectroscopic studies. 1 H NMR spectroscopy: Varian Unity 500 (500 MHz), Bruker AMX 500 (500 MHz), Varian Inova 500 (500.33 MHz). 13 C NMR spectroscopy: Varian Unity 500 (125.8 MHz), Varian Inova 500 (125.8 MHz). Chemical shifts were measured relative to tetramethylsilane as an internal standard. The homonuclear and heteronuclear 1D and 2D NMR spectra were recorded on a Varian Inova 500 instrument. Mass spectrometry was performed with an LCC ion-trap mass spectrometer (INSERM, Purpan, Toulouse). Samples were analyzed by electrospray ionization in both negative and positive ion mode, and the full-scan mass range (m/z) was 100-2000. Determination of the Iron Content of the Active Compound The presence of iron in the purified D 2 compound was verified as follows. Laboratory glassware needed was kept overnight in a 5% nitric acid solution before use and then rinsed with triple-distilled water and dried in an oven. Subsequently, 3 mg of the purified product was dissolved in 5 mL triple-distilled water acidified with 5% of nitric acid and heated for 30 min at 60 • C. The iron concentration was determined as described in Rauret et al. [36] and measured at 248.3 nm using a Unicam SP 969 atomic absorption spectrophotometer. A standard curve was established with a solution of Fe 3+ (FeCl 3 , Sigma) in the 0.06 to 5 mg/L range. Data Availability Statement: The data supporting the findings of this study are available from the corresponding author.
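The calibration step described for the iron determination (a standard curve of Fe3+ between 0.06 and 5 mg/L read at 248.3 nm) amounts to a linear fit that is then inverted for the sample. A minimal sketch follows; the absorbance values are invented for illustration only.

```python
import numpy as np

# Hypothetical calibration data: Fe3+ standards (mg/L) vs absorbance at 248.3 nm.
conc = np.array([0.06, 0.5, 1.0, 2.0, 5.0])
absorbance = np.array([0.004, 0.031, 0.060, 0.121, 0.298])

# Linear standard curve: A = slope * c + intercept
slope, intercept = np.polyfit(conc, absorbance, 1)

def fe_concentration(sample_absorbance):
    """Invert the standard curve to get the Fe3+ concentration in mg/L."""
    return (sample_absorbance - intercept) / slope

print(f"slope = {slope:.4f} L/mg, intercept = {intercept:.4f}")
print(f"sample at A = 0.155 -> {fe_concentration(0.155):.2f} mg/L Fe3+")
```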
2021-01-21T06:16:23.670Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "43968341c424e1b10cc0dc51f4366bb58918e2a8", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-6382/10/1/72/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b915733fb42642f75885a50e256df1c0cd98f1b4", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
21548744
pes2o/s2orc
v3-fos-license
Effect of lymphadenectomy extent on advanced gastric cancer located in the cardia and fundus AIM: To analyze the prognostic impact of lymphadenectomy extent in advanced gastric cancer located in the cardia and fundus. METHODS: Two hundred and thirty-six patients with advanced gastric cancer located in the cardia and fundus who underwent D2 curative resection were analyzed retrospectively. Relationships between the numbers of lymph nodes (LNs) dissected and survival was analyzed among different clinical stage subgroups. RESULTS: The 5-year overall survival rate of the entire cohort was 37.5%. Multivariate prognostic variables were total LNs dissected (P < 0.0001; or number of negative LNs examined, P < 0.0001), number of positive LNs (P < 0.0001), T category (P < 0.0001) and tumor size (P = 0.015). The greatest survival differences were observed at cutoff values of 20 LNs resected for stage Ⅱ (P = 0.0136), 25 for stage Ⅲ (P < 0.0001), 30 for stage Ⅳ (P = 0.0002), and 15 for all patients (P = 0.0024). Based on the statistically assumed linearity as best fit, linear regression showed a significant survival enhancement based on increasing negative LNs for patients of stages Ⅲ (P = 0.013) and Ⅳ (P = 0.035). CONCLUSION: To improve the long-term survival of patients with advanced gastric cancer located in the cardia and fundus, removing at least 20 LNs for stage Ⅱ , 25 LNs for stage Ⅲ , and 30 LNs for stage Ⅳ patients during D2 radical dissection is recommended. INTRODUCTION At present, surgery is the most effective treatment for gastric cancer. As the standard procedure for advanced gastric cancer, D2 radical resection has been widely accepted and practiced [1][2][3] . The final results of a randomized Dutch trial [4] showed that patients with N2 disease might benefit from a D2 dissection, which required removing all the lymph nodes (LNs) of Group 1 and Group 2. According to another randomized Italian trial [5] , a D2 lymphadenectomy was also advised. Lymph node metastasis is considered one of the most important prognostic factors [6,7] , and adequate lymphadenectomy is advocated for gastric cancer. However, the number of LNs that should be removed and examined when performing a D2 lymphadenectomy has not been determined for advanced gastric cancer located in the cardia and fundus [8] . Therefore, the aim of this retrospective study was to evaluate the relative contributions of both the number of total resected LNs and the number of negative LNs to the outcome of patients with advanced gastric cancer located in the cardia and fundus, and provide further evidence for rational lymphadenectomy. Materials Between January, 1996 and June, 2002, 236 patients diagnosed with primary gastric cancer located in the cardia and fundus was treated with curative resection at the Department of Oncology, Affiliated Union Hospital of Fujian Medical University, Fuzhou, China. The surgical procedure was defined as curative when no grossly visible tumor tissue (metastasis or LN involvement) remained after the resection and the resection margins were histologically normal. There were 197 male and 39 female patients whose ages ranged from 30 to 79 years (58.8 ± 9.8 years). All patients received a D2 or more extended dissection according to the Japanese Classification of Gastric Carcinoma (JCGC). 
Lymph nodes were meticulously dissected from the en bloc specimens, and the classification of the dissected LNs was determined by specialist surgeons who reviewed the excised specimens after surgery based on the Japanese Classification of Gastric Carcinoma [9]. The clinical and histopathologic data of each patient were collected and recorded in a specifically designed form. The tumors were histologically classified according to the WHO classification criteria and staged according to the 5th Edition of the TNM system [10], as listed in Table 1. The follow-up was carried out by trained investigators through mailings, telephone calls, visiting patients or recording the patients' consultations at the outpatient service. The survival time was the time from diagnosis until the last contact, the date of death, or the date that the survival information was collected. All surviving patients were followed for more than five years. The median follow-up for the entire cohort was 44 mo (range, 1-136 mo). The follow-up rate was 94.0%, with 222 cases involved. Methods Patients were stratified into five groups based on the number of total LNs removed, as follows: < 15, 15-19, 20-24, 25-29, and ≥ 30. Multivariate survival analysis The five-year overall survival rate of the entire cohort was 37.5%. The backwards elimination model yielded the following independent prognostic variables: total LN count (or number of negative LNs examined; P < 0.0001), number of positive LNs (P < 0.0001), T category (P < 0.0001) and tumor size (P = 0.015). The covariates gender (P = 0.052), age (P = 0.329), Borrmann type (P = 0.373), histological grade (P = 0.132), and type of gastrectomy (P = 0.093) all failed to retain significance levels in this model. The risk ratios and 95% confidence intervals are listed in Table 2. The number of negative LNs and the total number of LNs examined behaved interchangeably and maintained a similar significance level when substituted for each other. This suggests that the number of negative LNs can reflect the extent of lymphadenectomy, as can the total number of LNs. Impact of total LN counts by univariate survival analysis When five-year overall survival was compared by increasing total LN count, higher LN counts were generally associated with better survival (Figure 1A). Of the entire cohort, for the T4, N2 and N3 categories, the best survival outcomes were observed with total LN counts no less than 30. For the N1 category, the best survival outcomes were observed with total LN counts between 25 and 29, as shown in Table 3. When adjusted for the T category, the survival rate of the 25-29 LNs group was higher than that of the < 15 (χ2 = 13.41, P = 0.0002), 15-19 (χ2 = 7.06, P = 0.008) and 20-24 LNs groups (χ2 = 5.69, P = 0.017). The survival rate of the ≥ 30 LNs group was higher than that of the < 15 (χ2 = 7.03, P = 0.008) and the 15-19 LNs groups (χ2 = 4.91, P = 0.03). When adjusted for the N category, the survival rate of the 25-29 LNs group was higher than that of the 15-19 (χ2 = 9.67, P = 0.002) and 20-24 LNs groups (χ2 = 5.68, P = 0.02); the survival rate of the ≥ 30 LNs group was higher than that of the 15-19 (χ2 = 6.56, P = 0.01) and 20-24 LNs groups (χ2 = 4.56, P = 0.03). Cut point analysis of survival differences relating to total LNs dissected In an attempt to identify the optimal total LN count cutoff, survival comparisons were made for all stage groups at increasing total LN counts between 7 and 35.
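The cut-point search just described, comparing survival above versus below each candidate total-LN count, can be written compactly with standard survival-analysis tooling. The sketch below is a schematic reconstruction rather than the authors' code: it assumes a per-patient table with columns survival_months, death (event indicator), total_ln and stage (these names are assumptions), and scans candidate cutoffs with a log-rank test.

```python
import pandas as pd
from lifelines.statistics import logrank_test

def best_ln_cutoff(df, cutoffs=range(7, 36)):
    """Scan candidate total-LN cutoffs and return (cutoff, chi2, p) for the
    cutoff that maximizes the log-rank statistic between the two groups."""
    results = []
    for c in cutoffs:
        low, high = df[df.total_ln < c], df[df.total_ln >= c]
        if len(low) == 0 or len(high) == 0:
            continue  # skip cutoffs that leave one group empty
        r = logrank_test(low.survival_months, high.survival_months,
                         event_observed_A=low.death, event_observed_B=high.death)
        results.append((c, r.test_statistic, r.p_value))
    return max(results, key=lambda t: t[1])

# Hypothetical usage, assuming a per-patient table as described above:
# df = pd.read_csv("cohort.csv")
# print(best_ln_cutoff(df[df.stage == "III"]))
```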
The greatest discrepancies, as measured by the chisquare test, were stage dependent, and varied from the cutoff levels of 20 (stage Ⅱ), 25 (stage Ⅲ), and 30 (stage Ⅳ) to the cutoff level of 15 (entire cohort), as listed in Table 4. Impact of negative LN counts by univariate survival analysis The five-year survival, based on categories, showed considerable variations with increasing counts of negative LNs. An obvious trend toward better survival results for higher negative LN counts was observed (Table 5). Figure 1B shows the overall survival curve for all patients according to different negative LN strata. When adjusted for T category, the survival rate of the 20-29 LNs group was higher than that of the 10-19 LNs group (χ 2 = 10.51, P = 0.0012); when adjusted for N category, the survival rate of the 20-29 LNs group was higher than that of either the 0-9 (χ 2 = 14.99, P = 0.0001) or the 10-19 LNs group (χ 2 = 5.23, P = 0.02); the survival rate of the 10-19 LNs group was higher than that of the 0-9 LNs group (χ 2 = 19.05, P < 0.0001). Projected impact of negative LN counts on overall survival Based on the statistical linearity regression, the impact of negative LN counts on overall survival was analyzed. For the stage Ⅱ subgroup, the hypothetical baseline fiveyear survival (based on the y-intercept, i.e. no negative LN) was 39.6%. Similarly, for the stage Ⅲ subgroup, the baseline five-year survival (with an assumed zero negative LNs) was 15.4%. For the stage Ⅳ subgroup, the baseline five-year survival (also with an assumed zero negative LNs) was 2.5%. For all patients, the calculated five-year survival rate at baseline was 11.2%. For every ten extra LNs added to the negative LN count, the calculated overall survival increased by 5.77% (stage Ⅱ), 6.09% (stage Ⅲ), 7.65% (stage Ⅳ) or 6.24% (the entire cohort). In this setting, the regression showed a statistically significant survival improvement based on increasing negative LN number only for patients with stages Ⅲ (P = 0.013) and stages Ⅳ (P = 0.035). The results for stage Ⅱ had no statistical significance (P = 0.195). DISCUSSION Over the last few decades, an increase in the incidence of upper third gastric cancer has been reported by many investigators around the world [11][12][13] . It is generally accepted that the prognosis of patients with this type of carcinoma is worse than for tumors in other parts of the stomach [14,15] . The only potentially curative treatment for this disease is complete surgical resection, with an en bloc LN dissection. The D2 radical resection has been regarded as the standard surgical procedure for advanced gastric cancer [16] . Lymph node dissection during the D2 radical resection is not confined to the anatomical extent, but also to the number of LNs dissected. The current edition of the UICC staging manual recommends examining at least 15 LNs for adequate gastric cancer staging. However, is that adequate for advanced gastric cancer located in the cardia and fundus? It was reported that patients with advanced gastric cancer located in the cardia and fundus often showed a higher frequency of perigastric LNs and higher proportion of overall LN metastasis [17] . Koufuji et al [18] investigated 49 cases with an upper gastric cancer invading the esophagus who underwent surgical treatment. The incidences of LN metastasis were 0% for T1, 67% for T2, 81% for T3 and 80% for T4. 
Ichikura et al [19] retrospectively analyzed 65 cases with cardiac carcinoma and found that the incidences of LN metastasis were 68% for Siewert's type Ⅱ tumors, and 94% for Siewert's type Ⅲ tumors. In the present study, 82.2% of patients had LN metastasis. For these patients, it is better to determine the extent of lymphadenectomy according to the extent of LN metastases preoperatively or intraoperatively. However, this is difficult to achieve as there is lack of reliable measures for clinical diagnosis preoperatively and for comprehensive LN biopsy intraoperatively [20,21] . If the LNs were not completely removed, the probability of residual tumor cells would increase, leading to poor prognosis. Therefore, some investigators advocated removing adequate LNs intraoperatively to avoid this situation [22][23][24] . However, it is not certain how many LNs need to be dissected in a D2 lymphadenectomy. Liu et al [25] showed, in 147 patients with adenocarcinoma of the stomach who had undergone gastrectomy with curative intent, that for stage Ⅲ disease, removal of >15 LNs [26] . They suggested that patients with Siewert type Ⅱ and Ⅲ adenocarcinoma of the gastroesophageal junction (GEJ) should undergo adequate lymphadenectomy to permit examination of ≥ 15 LNs, allowing accurate identification of prognostic variables. Removal of ≥15 LNs is associated with more accurate survival estimates for patients with advanced disease. For adenocarcinoma of the GEJ, a minimum removal of 25 LNs was recommended by Gee et al [27] . Our present observations showed that better long-term survival outcomes were obtained with higher numbers of LNs resected during D2 lymphadenectomy. This is consistent with the research results of Schwarz et al [28] . We suggest that for adequate LN resection, including total LNs and the number of negative LNs, the removal of 20 LNs for stage Ⅱ, 25 LNs for stage Ⅲ, 30 LNs for stage Ⅳ and 15 LNs for the entire cohort be recommended during D2 radical dissection. Total LNs and number of negative LNs can reflect the extent of LN dissection and influence survival [29,30] . In the present study, total LN count or negative LN counts turned out to be independent protective factors, according to the symbols of regression coefficient. These two factors behaved interchangeably, and maintained a similar significance level when substituted for each other. Furthermore, an impact of total LNs or negative LNs on survival was observed. In the univariate survival analysis, higher total LN counts may translate into better outcomes. The effect was much more significant with T4 and N2, N3 disease. We thus postulate that even in locally advanced disease with adjacent organ invasion or advanced nodal involvement that is still resectable, adequate LN dissection influences survival. The contribution of negative LN counts to the prognosis of patients is partly due to LN micrometastases. In patients without lymph metastases identified by HE staining, about 20% had LN micrometastases [31] . del Casar et al [32] reviewed 144 patients with primary gastric adenocarcinoma who underwent surgery and found that lymphatic and/or blood vessel tumoral invasion (LBVI) was present in 46 patients (31.9%), which was significantly associated with a poorer overall patients' survival. Therefore, curability was reported as one of the most reliable predictors of long-term survival for LN-negative gastric carcinoma patients [33] . A similar study [34] examined the impact of the number of negative LNs on survival. 
The study was conducted in patients with stage Ⅲ colon cancer, and demonstrated that a higher number of negative nodes was associated with better survival. In our study, the impact of negative LN on survival in these stage patients also showed an obvious trend toward better survival for higher negative LN counts. For every ten extra LNs added to the negative LN count, the calculated overall survival increased by 6.09% for stage Ⅲ, and 7.65% for stage Ⅳ patients, based on the linear regression. Generally, better long term survival was observed with higher total LNs or negative LNs, showing the contribution of sufficient lymphadenectomy toward reducing residual tumor cells. For the curative-intent gastrectomy of locally advanced disease, retrieval and examination of adequate numbers of LNs is suggested for gastric cancer located in the cardia and fundus. Background The incidence rates of upper third gastric cancer, mostly located in the cardia and fundus, has increased in recent years. Few studies have investigated the relative contributions of both the number of total resected lymph nodes (LNs) and the number of negative LNs to the outcome of patients with advanced gastric cancer located in the cardia and fundus. Research frontiers Some researches have considered D2 radical dissection to be standard procedure for advanced gastric cancer located the cardia and fundus, which requires the removal of Group 1 and Group 2 LNs. The dissected LN count is found to have strong association with patient survival.
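The projected-impact figures discussed above (a baseline five-year survival at zero negative LNs and a fixed gain per ten additional negative LNs) correspond to the intercept and slope of an ordinary linear fit of five-year survival against negative-LN count. A minimal sketch of that computation is shown below; the numbers are invented for illustration and are not the study data.

```python
import numpy as np

# Hypothetical (negative-LN count, 5-year survival %) pairs for one stage stratum.
neg_ln = np.array([5, 10, 15, 20, 25, 30])
surv_5y = np.array([18.0, 22.5, 24.0, 28.5, 30.0, 33.5])

# Ordinary least-squares line: survival = slope * neg_ln + intercept
slope, intercept = np.polyfit(neg_ln, surv_5y, 1)
print(f"baseline 5-year survival (0 negative LNs): {intercept:.1f}%")
print(f"gain per 10 extra negative LNs:            {10 * slope:.2f}%")
```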
2018-04-03T04:14:46.136Z
2008-07-14T00:00:00.000
{ "year": 2008, "sha1": "9e52cfc149c1447ad42ee9f80ba90239b196d91f", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3748/wjg.14.4216", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "a12cc197d89a52faa12debff062ff23697c4a304", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
251402491
pes2o/s2orc
v3-fos-license
Holomorphic foliations of degree four on the complex projective space In this paper, we study holomorphic foliations of degree four on complex projective space $\mathbb{P}^n$, where $n\geq 3$, with a special focus on obtaining a structural theorem for these foliations. Furthermore, for a foliation $\mathcal{F}$ of degree $d\geq 4$ with a sufficiently high $k^{th}$-jet, we prove that either $\mathcal{F}$ is transversely affine outside a compact hypersurface, or $\mathcal{F}$ is transversely projective outside a compact hypersurface, or $\mathcal{F}$ is the pull-back of a foliation on $\mathcal{F}^2$ by a rational map. Introduction and Statement of the results The study of holomorphic foliations has attracted the attention of many mathematicians and has been gaining more prominence in recent decades. For example, it is well established that the theory of foliations plays in important role in the analysis of subvarieties within projective varieties. A compelling illustration of this fact is provided by Bogomolov's paper [1], which concerns the renowned Green-Griffiths-Lang conjecture. Techniques from Algebraic Geometry have proven exceptionally valuable when exploring singular holomorphic foliations. Notably, in his acclaimed Lecture Notes [12], J.-P. Jouanolou established that a generic polynomial vector field of degree greater than one on the complex projective plane does not possess any invariant algebraic curve. Recently, a focal point in the theory of foliations has been the classification problem, particularly for codimension one holomorphic foliations on P n . For instance, the following conjecture (cf. [6, page 2]) is attributed to different authors such as Marco Brunella, Alcides Lins Neto, Dominique Cerveau, and others: Conjecture 1. Any codimension one holomorphic foliation F on P n , with n ≥ 3, (*) either F is transversely projective outside a compact hypersurface; (**) or F is the pull-back of a holomorphic foliation G on P 2 by a rational map Φ : P n P 2 . The concepts used in the previous conjecture will be explained throughout this paper. We emphasize that every codimension one holomorphic foliation F on P n corresponds to a homogeneous 1-form ω on C n+1 of degree d + 1 that defines π * (F), whose singular set has codimension at least two. Here, π : C n+1 \ 0 → P n represents the natural projection, and π * (F) signifies the pull-back of F via π. By definition, the integer d is the degree of the foliation F. We remark that the integer d is precisely the number of tangencies of F with a generic line L. Consequently, we are equipped to explore the space of codimension one holomorphic foliations on P n with a specific degree. The Zariski closure of this set is identified as an algebraic set and naturally has irreducible components With a focus on explaining these irreducible components to a certain degree, significant progress has been made through several works. The initial case that can be considered involves the space of codimension one foliations on P n of degree zero. It has been established that this space possesses only one component, isomorphic to the Grassmannian of lines in P n . A proof of this assertion can be found in [10]. In P n with n ≥ 3, the space of holomorphic foliations of degree one contains two irreducible components, as established by Jouanolou in [12]. Later, Dominique Cerveau and Alcides Lins Neto resumed the study of irreducible components and presented a significant result that brought this subject back into the spotlight. 
In [5], they proved that in P n , with n ≥ 3, the space of codimension one foliations of degree two has six irreducible components. Furthermore, they explicitly illustrate the generic element of each one of these irreducible components. These components are called Linear pull-back foliations, Rational components, Logarithmic components, and an Exceptional component. The paper of Cerveau-Lins Neto was an invaluable contribution and served as motivation for numerous researchers to focus their studies on the classification of irreducible components of the space of holomorphic foliations on P n . Recently, in [8], Corrêa-Muniz proved that the space of foliations of degree two on P n and dimension k ≥ 2 also encompasses six irreducible components. Despite the progress made in these studies, solving the classification problem of irreducible components of the space of holomorphic foliations is far from simple. For example, even after a complete description of the irreducible components of the degree two foliations space on P n , with n ≥ 3, it has not yet been possible to explain all the irreducible components of the space of codimension one holomorphic foliations on P n , n ≥ 3, of degree d ≥ 3. Nevertheless, Cerveau and Lins Neto presented a structural theorem for degree three foliations in [6]. More precisely, they proved that if F is a holomorphic codimension one foliation of degree three on P n , with n ≥ 3, then • either F admits a rational first integral, • or F is transversely affine outside a compact hypersurface, • or F = Φ * (G), where Φ : P n P 2 is a rational map and G is a foliation on P 2 . Recently, in [15], F. Loray, J. V. Pereira and F. Touzet established a more refined structural theorem for codimension one foliations of degree three on P 3 . Building upon this, in [9], R. C. da Costa, R. Lizarbe and J. V. Pereira extended the result to P n , n ≥ 3. Furthermore, Da Costa, Lizarbe and Pereira [9, Theorem B] provide a complete list of the irreducible components of the space of foliations of degree three on P n , n ≥ 3, whose general elements do not admit a rational first integral. However, even in [9], a complete classification of the irreducible components of the space of foliations of degree three on P n , n ≥ 3 is not known. Nevertheless, they have established that the space of codimension one foliations of degree three on P n , n ≥ 3, have at least 24 distinct irreducible components. Inspired by the previous results, this paper is devoted to studying degree four foliations on P n , with n ≥ 3. Our main result is the following: Theorem A. Let F be a codimension one holomorphic foliation of degree four on P n , with n ≥ 3. Then, (i) either F admits a rational first integral; (ii) or F is transversely affine outside a compact hypersurface; (iii) or F is a pure transversely projective outside a compact hypersurface; (iv) or F = Φ * (G), where Φ : P n P 2 is a rational map and G is a holomorphic foliation on P 2 . (v) or there exists a birational map Ψ : P n−1 × P 1 P n such that the foliation Ψ * (F) is defined by a 1-form described as follows: where the 1-forms β j do not depend on t ∈ P 1 , for all 0 ≤ j ≤ 4. Note that in order to confirm Conjecture 1 for degree four codimension one foliations on P n , n ≥ 3, it is necessary to prove that item (v) is equivalent to some of the previous items. We hope to prove this fact in the near future. 
One of the difference between Theorem A and the structural theorem of degree three foliations given by Cerveau-Lins Neto [6] is that pure transversely projective foliations exist. That is, there are foliations of degree four with projective transverse structure that is not affine. See the example given in [7,Section 5.4]. Now, let us focus on foliations of degree d ≥ 4. Let F be a degree d foliation on P n . Then, F can be represented in an affine coordinate system C n ≃ E ⊂ P n by an integrable polynomial 1-form where the coefficients of the 1-forms ω j are polynomials homogeneous of degree j, 0 ≤ j ≤ d + 1, and i R (ω d+1 ) = 0. Given p ∈ E, let j k p (ω E ) be the k th -jet of ω E at p, and let Note that J (F, p) depends only on p and F, and not on E and ω E . Moreover, the singular set of F is given by It is well-known that Sing(F) is an algebraic set and always contains irreducible components of codimension two, (cf. [13]). Motivated by the family of foliations of [7,Section 5.4], where the first k th -jets (k < 3) of the 1-form defining this family are all zero, we propose the following structural theorem for foliations of degree d ≥ 4 in P n , with n ≥ 3. Theorem B. Let F be a codimension one holomorphic foliation of degree d ≥ 4 on P n , with n ≥ 3. Suppose that one of the two conditions is satisfied: Then, (i) either F admits a rational first integral; (ii) or F is transversely affine outside a compact hypersurface; (iii) or F is a pure transversely projective outside a compact hypersurface; (iv) or F = Φ * (G), where Φ : P n P 2 is a rational map and G is a holomorphic foliation on P 2 . To prove Theorems A and B, we will use the tools of [6] and techniques concerning foliations admitting a finite Godbillon-Vey sequence (cf. [2] and [7]). The paper is organized as follows: in Section 2, we introduce the concept of holomorphic foliations and present some important results about codimension one foliations on P n . We also define the notions of affine and projective transverse structures of a foliation. Moreover, we establish the notion of a Godbillon-Vey sequence and state crucial results of foliations that admit a Godbillon-Vey sequence with finite length. Section 3 is devoted to the proof of Theorem A, which will be broken down into several lemmas. Our proof is given step to step. Each step is according to length of a Godbillon-Vey sequence adapted to a degree four foliation. Finally, in section 4, we will prove Theorem B using some lemmas used in the proof of Theorem A. Background and knows results 2.1. Codimension one holomorphic foliations. Let M be a complex compact connected manifold of dimension n ≥ 2. A codimension one singular foliation F on M is given by a covering by open subsets {U i } i∈I on M and a collection of integrable holomorphic 1-forms ω i on U i satisfying: The condition (3) implies that the singular set of F defined by Sing(F) := ∪ i∈I Sing(ω i ) is a complex variety of M of codimension at least two. In the special case where M is a projective manifold and F a foliation as above, we can associate to F a meromorphic 1-form ω in the following way. We take a rational vector field Z on M , not tangent to F, that is h j = i Z |U j ω ≡ 0; the meromorphic 1-form ω defined on U j by ω |U j = ω j /h j is global and integrable. In this case, we will say that ω defines F. We specialize to the case M = P n , the n dimensional complex projective space. 
In that context, we can define F as follows: let π : C n+1 \ {0} → P n be the natural projection, and consider π * (F) the pull-back of F by π; with the previous notations, π * (F) is defined by the 1-form π * (ω j ) on π −1 (U j ). Recall that, for n ≥ 2, we have H 1 (C n+1 \{0}, O * ) = {1} (cf. [4]). Consequently, there exists a global holomorphic 1-form ω on C n+1 \{0} which defines π * (F) on C n+1 \{0}. By Hartog's prolongation theorem ω can be extended holomorphically at 0. By construction we have i R ω = 0, where R is the Euler (or radial) vector field: This fact and the integrability condition imply that ω is co-linear to an integrable homogeneous 1-form n j=0 A j (z)dz j , where the A j 's are homogeneous polynomials of degree d+ 1, gcd(A 0 , . . . , A n ) = 1. Therefore, we can state that every codimension one holomorphic foliation F on P n corresponds to a homogeneous integrable 1-form ω on C n+1 that defines π * (F) with cod(Sing(ω)) ≥ 2. By definition, the integer d is the degree of the foliation F. Observe that 1-form ω is well-defined up to multiplication by non-zero complex number. The following result will be crucial to demonstrate our theorems. Let F be a codimension one holomorphic foliation on a projective manifold M . Assume that ω is a meromorphic 1-form defining F. We say that F is transversely projective or F has a transverse projective structure (cf. [16]) when there are meromorphic 1-forms ω 0 = ω, ω 1 and ω 2 on M satisfying This means that, outside the polar and singular set of the 1-forms ω i , the foliation F is regular and transversely projective in the classical sense (see [11], or [7, Section 2.2]). When ω 2 = 0, we say F is pure transversely projective. If ω 2 = 0, i.e. dω 1 = 0, we say that F is transversely affine or F has a transverse affine structure. To end this subsection, let us give some examples of foliations on P n . Example 2.2 (Foliations with rational first integral). Let P, Q : C n+1 → C be a homogeneous polynomials such that deg(P ) = deg(Q) = k ≥ 1. Then ω = QdP − P dQ defines a codimension one foliation F on P n with a rational first integral P/Q. Example 2.3 (Foliations associated to closed meromorphic 1-forms). If ω is a closed meromorphic 1-form on P n , n ≥ 2, then it defines a codimension one holomorphic foliation on P n . According to [14, Proposition. 1.2.5], we have that ω has a decomposition where the λ i 's are complex numbers and the f i 's and h are rational functions. The leaves are (outside the singular set of the foliation) the connected components of the level sets of the multi-valued function i λ i log f i + h. 2.2. Godbillon-Vey sequences [11]. Let F be a codimension one holomorphic foliation on a projective manifold M . A Godbillon-Vey sequence for F (briefly G-V-S) is a sequence (ω 0 , ω 1 , . . . , ω k , . . .) of meromorphic 1-forms on M such that F is defined by ω 0 = 0, and the formal 1-form is integrable, that is, Ω ∧ dΩ = 0. In this case, the 1-form in (1) is meromorphic and can be extended meromorphically to M × P 1 . Since it is integrable, it defines a codimension one foliation H on M × P 1 such that H| M ×{0} = F. When a meromorphic vector field X exists on M which is transversal to F at a generic point, then a unique meromorphic 1-form ω defining F exists, satisfying i X (ω) = 1. According to [7, Section 2.1], we can define a Godbillon-Vey sequence for F by setting ω k := L k X (ω), where L k X (ω) denotes the k th Lie derivative along X of the 1-form ω. 
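Concretely, for a sequence (ω_0, ω_1, ω_2, ...) as above, the formal 1-form whose integrability is required is usually written as below, following the convention behind the Lie-derivative construction quoted from [7]; the relations displayed are the first ones obtained by expanding Ω ∧ dΩ = 0 in powers of z, and they are recalled here only as a reminder.

```latex
% Formal 1-form attached to a Godbillon--Vey sequence (usual convention)
\[
\Omega \;=\; dz \;+\; \sum_{k \ge 0} \frac{z^{k}}{k!}\,\omega_{k},
\qquad \Omega \wedge d\Omega \;=\; 0 .
\]
% Expanding in powers of z gives, in particular,
\[
d\omega_{0} = \omega_{0}\wedge\omega_{1},\qquad
d\omega_{1} = \omega_{0}\wedge\omega_{2},\qquad
d\omega_{2} = \omega_{0}\wedge\omega_{3} + \omega_{1}\wedge\omega_{2},\ \dots
\]
% For a triple (\omega_0,\omega_1,\omega_2) with all higher terms zero these reduce to
\[
d\omega_{0} = \omega_{0}\wedge\omega_{1},\qquad
d\omega_{1} = \omega_{0}\wedge\omega_{2},\qquad
d\omega_{2} = \omega_{1}\wedge\omega_{2},
\]
% i.e. the relations of a (meromorphic) projective transverse structure; when
% \omega_2 = 0 they give d\omega_1 = 0, which is the transversely affine case.
```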
When there exists N ∈ N such that ω N = 0 but ω k = 0 for all j > N then we say that F admits a finite G-V-S of length N . In general, the length is infinite. If F admits a G-V-S of length N ≤ 2 then F is transversely projective outside a compact hypersurface. When N = 1 then F has a transverse affine structure, see for instance [16] and [11]. For foliations that admit G-V-S of length ≥ 3, we have the following result: Theorem 2.4 (Cerveau -Lins-Neto -Loray -Pereira -Touzet [7]). Let F be a codimension one foliation on a complex manifold M that admits a G-V-S of length N ≥ 3. Then • either F is transversely affine; • or there is a compact Riemann surface S, 1-meromorphic forms α 0 , · · · , α N in S and a rational map φ : When M = P n , n ≥ 3, necessarily S = P 1 and the 1-form in 2 can be written as Remark 2.5. Let F and G be codimension one foliations on complex manifolds X and Y , respectively. Suppose that G admits a finite G-V-S of length N and Remark 2.6. Let F and G be codimension one foliations on complex manifolds X and Y , respectively. Suppose that F = Φ * (G), where Φ : X Y is a dominant meromorphic map. Then, G is transversely projective (resp. affine) if, and only if, so is F, (cf. [7, Theorem 2.21]). In the case Φ is a finite ramified covering, the above statement is equivalent to Theorem 1.6 (resp. Theorem 1.4) in [3]. Proof of Theorem A Let F be a codimension one holomorphic foliation of degree four in P n , n ≥ 3. In order to prove Theorem A, we consider two possibilities: In the first case, F admits a rational first integral by invoking Corollary 2.1. This consequently establishes the validity of assertion (i) within Theorem A. Therefore, we shall assume that there exists a point p ∈ P n such that J (F, p) ≥ 2. By employing affine coordinates (z 1 , . . . , z n ) ∈ C n ⊂ P n , where p = 0 ∈ C n , we can conveniently consider F| C n : ω = 0, where ω is a polynomial 1-form in C n expressed as follows: here, α j corresponds to homogeneous polynomial 1-forms of degree j, 2 ≤ j ≤ 5, and We shall express α j as: where P j−1i are homogeneous polynomials of degree j − 1. Note that F 6 ≡ 0 by (4). We proceed to examine the pull-back of ω through the process of blowing-up of P n at 0 ∈ C n ⊂ P n . Let σ :P n → P n denote the blow-up at 0 ∈ C n ⊂ P n , and letF represent the strict transform of F by σ. Our objective is to calculate σ * (ω) within the chart We have depends only on τ . Utilizing the condition F 6 (τ, 1) ≡ 0, we derive the 1-form η as follows: This 1-form serves to define the foliationF in the chart (τ, x). Given the aforementioned conditions, we are presented with the subsequent possibilities for F i : (4) Possibilities solved in an analogous way: We will investigate these potential scenarios by means of the following lemmas. Our exploration will revolve around ω and F, taking into account the provided conditions, unless specified otherwise. Then F is the pull-back by a linear map of a foliation on P n−1 . (3), it is evident that α 5 ≡ 0, otherwise F would have degree ≤ 3. 
On the other hand, the integrability of ω implies that Upon applying Cartan's magic formula, we get and applying Euler's formula to L R (ω) yields It follows from (8) that i R (dω) = 3α 2 + 4α 3 + 5α 4 + 6α 5 and from (7), we deduce Since the coefficients of α j are homogeneous polynomials of degree j, each of these exterior products generates a homogeneous 2-form polynomial of a degree different from each other, so none of them can be a combination of the others, we obtain Since α 5 ≡ 0 and α 2 ∧ α 5 = α 3 ∧ α 5 = α 4 ∧ α 5 = 0, we have that there exists meromorphic functions f j , j = 2, 3, 4, such that α j = f j α 5 . We assert that f j ≡ 0 for all j = 2, 3, 4. Indeed, suppose by contradiction that some f j ≡ 0. Then we have that the coefficients of (f 2 + f 3 + f 4 + 1)α 5 would be of a degree different than 5, an absurd. Hence, the statement is proven. Consequently, all f j = 0, and thus α j = 0, for all j = 2, 3, 4. In particular, we have ω = α 5 . Since α 5 is integrable, it defines a foliation of degree four, say F n−1 , on P n−1 . If we consider P n−1 as the set of lines through 0 ∈ C n ⊂ P n and Φ : C n \ {0} → P n−1 the natural projection then F = Φ * (F n−1 ). This finishes the proof of the lemma. Lemma 3.2 (Case 2). Suppose that F 4 ≡ F 5 ≡ 0, and F 3 ≡ 0. Then, either F is transversely affine, or F is the pull-back by a rational map of a foliation on P 2 . This implies thatη has a transversely affine structure, consequently establishing the same for F. Lemma 3.3 (Case 3). Suppose that F 4 ≡ F 3 ≡ 0, and F 5 ≡ 0. Then either F is transversely affine, or F is the pull-back by a rational map of a foliation on P 2 , or F is pure transversely projective. Proof. It follows from (37) that When we consider the birational map ψ(τ, z) = (τ, 1 z ) = (τ, x), we obtain 1) , and β 3 = − θ 2 F 5 (τ, 1) . It is important to observe that ζ admits a finite G-V-S of length ≥ 3 provide that β 3 ≡ 0. Consequently, F also holds this property. Therefore, in this case, we find that either F is transversely affine, or F is the pull-back by a rational map of a foliation on P 2 , in accordance with Theorem 2.4. On the contrary, let us assume that β 3 ≡ 0, implying that θ 2 ≡ 0. It follows from (14) that where β j = θ j+3 F 5 (τ, 1) for all 0 ≤ j ≤ 2. Since i ∂x (β j ) = 0, and L ∂x (β j ) = 0, for all 0 ≤ j ≤ 2, the integrability condition ofη implies Now, we will proceed to prove the existence of (η, Ω, ξ) that establishes a projective transverse structure forη, adhering to the condition By selecting Ω = β 1 + 2xβ 2 , we arrive at On the other hand, by computing dΩ, we obtain Now, we choose ξ = 2β 2 , which satisfies the following equality It is worth that Ω is closed if, and only if β 2 ≡ 0. This would lead to ξ ≡ 0, resulting in an affine transverse structure forη. On the contrary, when β 2 ≡ 0,η admits a pure transversely projective structure. SinceF is defined byη, we can deduce that either F is transversely affine, or admits a pure transversely projective structure. Lemma 3.4 (Case 4). Suppose that F 3 ≡ 0, and F 4 ≡ 0. Then, either F is transversely affine, or F is the pull-back by a rational map of a foliation on P 2 . We can assume that β 3 ≡ 0, because otherwise we would get a finite G-V-S of length ≥ 3, and in this case, by applying Theorem 2.4, either F is transversely affine, or it is a pull-back by a rational map of a foliation on P 2 . Consider the birational map ψ(τ, z) = τ, As in the previous subcase we can assume that β 3 ≡ 0 by Theorem 2.4. 
We will now work both cases at the same time as they both fell into the form (20). Remembering that in the initial blow-up we have π * (α 2 ) = x 2 [xθ 2 + F 3 (τ, 1)dx] = x 3 θ 2 . When α 2 ≡ 0, we can establish that β 0 ≡ 0. In this situation, η takes the form of η = xβ 1 + x 2 β 2 + dx, aligning with the conditions of Lemma 3.2, so let us assume α 2 ≡ 0. The integrability of ω = α 2 + α 3 + α 4 + α 5 gives us The coefficients of α j are homogeneous polynomials of degree j, then the coefficients of α 2 ∧ dα 2 are homogeneous polynomials of degree 3 and none of the other 2-forms in the other parcels have coefficients of degree 3, then α 2 ∧ dα 2 = 0. In particular, either cod(Sing(α 2 )) ≥ 2 and α 2 define a degree one foliation in P n−1 ; or α 2 = hα 1 , where α 1 defines a degree zero foliation in P n−1 (α 2 = hα 1 , in this case cod(Sing(α 2 )) < 2, and the foliation needs to be saturated and therefore what is left is a 1-form polynomial of degree 1. In both cases α 2 has an integrating factor, that is, there exists a function f such that d(f −1 α 2 ) = 0. In Subcase II, we have for this case. Now, let us consider the birational map Ifη is the one that appears in Subcase I, just use Φ 1 , if it isη in Subcase II, just use Φ 2 , so we will omit the indices i of Φ i and F i because the calculation we will perform using these transformations in one case is identical to the other. Ifη is like in (20), a straightforward calculation gives Φ * (η) = Fη, wherē η = dz +β 0 + zβ 1 + z 2β 2 withβ 0 = β 0 F ,β 1 = β 1 + dF F , andβ 2 = F β 2 . Let us consider the birational map φ(τ, w) = (τ, 1 w ) = (τ, z). Then φ * (η) = −w 2η , whereη = dw −β 2 − wβ 1 − w 2β 0 . Since i ∂w (β j ) = 0 and L ∂w (β j ) = 0, for all 0 ≤ j ≤ 2, the integrability ofη implies From (22) and (23), we get dβ 0 = 0, and by the first equation in (24) we obtainβ 0 ∧β 1 = 0. Denoting by M k the set of meromorphic functions in P k . It follows fromβ 0 ∧β 1 = 0 that there exists g ∈ M n−1 such thatβ 1 = gβ 0 . The second relation in (24) gives us Therefore, there exists h ∈ M n−1 such that The third relation in (24) implies We will designate G as the foliation generated byβ 0 in P n−1 . Notably,β 0 gives rise to a foliation in P n−1 , due to its definition within C n−1 , and additionally, it is closed. We have two possibilities: Possibility I. G has no non-constant first integral. We claim that ω has an integral factor. In fact, by (25) where c ∈ C, otherwise G would have non-constant first integral. We can then writê in particular, if we set P := w + g , then Thereforeη has an integral factor and thus ω as well. Then there exists h such that Consequently, F is transversely affine. Possibility II. G has non-constant first integral. We assert that F is the pull-back of a Riccati equation on P 1 × P 1 by a birational map. Indeed, by Stein's Factorization Theorem G has a meromorphic first integral, say f , with connected fibers: if φ ∈ M n−1 and dφ ∧ df = 0 then there exists ψ ∈ M 1 such that On the other hand, the relation (25) implies that there exists φ 2 ∈ M 1 such that and by Stein's Factorization Theorem replacing inη we havê Consider the rational map Φ 1 : P n−1 × P 1 P 1 × P 1 given by Note that θ is an integrable 1-form that defines a Riccati equation in P 1 × P 1 and furthermore, it has a transversely affine structure as we wanted, then F is transversely affine. Lemma 3.5 (Case 5). Suppose that F 5 ≡ 0, and F 3 ≡ 0 ≡ F 4 . 
Then, there exists a birational map Ψ : P n−1 × P 1 P n such that the foliation Ψ * (F) is defined by a 1-form described as follows: where the 1-forms β j do not depend on t ∈ P 1 , for all 0 ≤ j ≤ 4. Continuing, we consider the birational map ψ t : P n−1 × P 1 P n−1 × P 1 defined as The proof concludes by observing that the 1-forms β j are dependent solely on τ , for all 0 ≤ j ≤ 4, and taking Ψ : Lemma 3.6 (Cases 6 and 7). Suppose that F 4 ≡ 0, and F 3 ≡ 0, F 5 ≡ 0; or F 3 ≡ 0, F 4 ≡ 0, and F 5 ≡ 0. Then, the foliation F is either transversely affine or is the pull-back by a rational map of a foliation on P 2 , or there exists a birational map Ψ : P n−1 ×P 1 P n such that the foliation Ψ * (F) is defined by a 1-form described as follows: where the 1-forms β j do not depend on t ∈ P 1 , for all 0 ≤ j ≤ 4. Proof. As in Lemma 3.5. the map σ extends to a birational map ψ 1 : P n−1 × P 1 P n−1 × P 1 . Then, following from equation (37), the foliation ψ * (F) is defined by the 1-form: Since F 5 ≡ 0, we can divide the expression of η by F 5 (τ, 1) and obtain: where α j = θ j /F 5 (τ, 1), for all 2 ≤ j ≤ 5. Now, we factorize the polynomial Note that c 1 (τ ) and c 2 (τ ) are not identically zero, as assumed from the hypotheses F 3 ≡ 0, F 5 ≡ 0, and c 1 (τ ) · c 2 (τ ) = F 3 (τ,1) F 5 (τ,1) . First, we consider the birational map ψ z : P n−1 × P 1 P n−1 × P 1 defined by ψ z (τ, z) = τ, c 1 (τ )z with Now we will analyze the expression of η z in (32), observing that we have the following subcases: Subcase I. c 1 (τ ) = c 2 (τ ). In this situation, we have that Since c 2 (τ ) is not identically zero, we can divide η z by c 2 (τ ) and obtain a 1-formη that is equivalent to the 1-form derived in equation (9) of Lemma 3.2. Hence, we can conclude that the foliation F is either transversely affine or is the pull-back by a rational map of a foliation on P 2 . Subcase II. c 1 (τ ) = c 2 (τ ). In this subcase, the 1-form η z is equivalent to the 1-form derived in equation (29) of Lemma 3.5. Therefore, we can conclude that there exists a birational map Ψ : P n−1 × P 1 P n−1 × P 1 such that the foliation Ψ * (F) is defined by a 1-form described as follows: where the 1-forms β j do not depend on t ∈ P 1 , for all 0 ≤ j ≤ 4. We can condense the results obtained in the above lemmas in the following proposition: Proposition 3.7. In the above situation, we have the five possibilities: (i) either F is transversely affine outside a compact hypersurface; (ii) or F is pure transversely projective outside a compact hypersurface; (iii) or F is a pull-back by a rational map of a foliation on P 2 ; (iv) or F is a pull-back by a linear map π : P n → P n−1 of a foliation of degree four on P n−1 . (v) or there exists a birational map Ψ : P n−1 × P 1 P n such that the foliation Ψ * (F) is defined by a 1-form described as follows: where the 1-forms β j do not depend on t ∈ P 1 , for all 0 ≤ j ≤ 4. 3.1. End of the proof of Theorem A. We give the proof by induction on the dimension n ≥ 3. If n = 3, then Theorem A follows from Corollary 2.1 and Proposition 3.7. Let us assume that Theorem A is true for n − 1 ≥ 3 and prove that it holds for n. Let F be a codimension one foliation of degree four on P n , n ≥ 4. It follows from Corollary 2.1 and Proposition 3.7 that, either F satisfies one of the conclusions of Theorem A, or F is the pull-back by a linear map π : P n → P n−1 of a foliation F n−1 of degree four on P n−1 . 
In this last case, as Theorem A holds true for n − 1, it follows that one the five possibilities outlined below must also be true: (i) F n−1 has a rational first integral, say F : P n−1 P 1 . In this case, F • π is a rational first integral of F and we are done. (ii) F n−1 is transversely affine. In this case, F n−1 admits a G-V-S of length one. Hence, F also admits a G-V-S of length one by Remark 2.5. (iii) F n−1 is transversely projective. In this case, F n−1 admits a G-V-S of length two. Hence, F also admits a G-V-S of length two by Remark 2.5. (iv) F n−1 = Φ * (G), where G is a foliation on P 2 and Φ : P n−1 P 2 a rational map. In this case, we get F = (Φ • π) * (G) and we are done. (v) There exists a birational map Ψ 1 : P n−2 × P 1 P n−1 such that the foliation Ψ * 1 (F n−1 ) is defined by a 1-form described as follows: where the 1-forms β j do not depend on t ∈ P 1 , for all 0 ≤ j ≤ 4. Now consider any rational map φ : P n−1 × P 1 P n−2 × P 1 such that it fixes the variable at P 1 , and choose any birational map Ψ such that π • Ψ = Ψ 1 • φ. With this, we have that Using the fact that φ fixes the variable over P 1 , we deduce that Ψ * (F) is defined by a 1-form similar to item (v). This concludes the proof of Theorem A. Proof of Theorem B Let F be a codimension one holomorphic foliation of degree d ≥ 4 in P n , n ≥ 3. Suppose that one of the two conditions is satisfied: (1) for all p ∈ Sing(F), we have J (F, p) = 1; (2) there exists p ∈ Sing(F) such that J (F, p) ≥ d − 1. In the first case, F admits a rational first integral by invoking Corollary 2.1. This consequently establishes the validity of assertion (i) within Theorem B. Therefore, we shall assume that there exists a point p ∈ P n such that J (F, p) ≥ d − 1. By employing affine coordinates (z 1 , . . . , z n ) ∈ C n ⊂ P n , where p = 0 ∈ C n , we can conveniently consider F| C n : ω = 0, where ω is a polynomial 1-form in C n expressed as follows: here, α j corresponds to homogeneous polynomial 1-forms of degree j, d − 1 ≤ j ≤ d + 1, and Once again, we will express α j as: α j (z) := n i=1 P ji (z)dz i , with d − 1 ≤ j ≤ d + 1. Additionally, we introduce where P j−1i are homogeneous polynomials of degree j − 1. Note that F d+2 ≡ 0 by (35). We proceed to examine the pull-back of ω through the process of blowing-up of P n at 0 ∈ C n ⊂ P n . Let σ :P n → P n denote the blow-up at 0 ∈ C n ⊂ P n , and letF represent the strict transform of F by σ. Our objective is to calculate σ * (ω) within the chart (τ 1 , . . . , τ n−1 , x) = (τ, x) ∈ C n−1 × C → (xτ, x) = (z 1 , . . . , z n ) ∈ C n ⊂ P n . In the case (2), after dividing η by F d , we have where γ 1 = θ d−1 /F d (τ, 1), γ 2 = θ d /F d (τ, 1), and γ 3 = θ d+1 /F d (τ, 1) depends only on τ . The 1-form η 1 is equivalent to the 1-form from (9) of Lemma 3.2, Subcase II. Consequently, we can deduce that either F is transversely affine, or F is the pull-back by a rational map of a foliation on P 2 . In the case (3), we consider the birational map ψ z : P n−1 × P 1 P n−1 × P 1 defined as ψ z (τ, z) = τ, F d (τ,1)z 1−F d+1 (τ,1)z . A direct calculation yields ψ * z (η 1 ) = Once again, the 1-form η z is equivalent to the 1-form from (9) of Lemma 3.2, Subcase II. Thus, we can deduce that either F is transversely affine, or F is the pull-back by a rational map of a foliation on P 2 . This finishes the proof of Theorem B.
2022-08-09T01:16:30.644Z
2022-08-08T00:00:00.000
{ "year": 2022, "sha1": "4ff6be0debcad4aad452955ba3db79e0df70fad2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4ff6be0debcad4aad452955ba3db79e0df70fad2", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
16013251
pes2o/s2orc
v3-fos-license
Albinism in the domestic cat (Felis catus) is associated with a tyrosinase (TYR) mutation Summary Albino phenotypes are documented in a variety of species including the domestic cat. As albino phenotypes in other species are associated with tyrosinase (TYR) mutations, TYR was proposed as a candidate gene for albinism in cats. An Oriental and Colourpoint Shorthair cat pedigree segregating for albinism was analysed for association with TYR by linkage and sequence analyses. Microsatellite FCA931, which is closely linked to TYR and TYR sequence variants were tested for segregation with the albinism phenotype. Sequence analysis of genomic DNA from wild-type and albino cats identified a cytosine deletion in TYR at position 975 in exon 2, which causes a frame shift resulting in a premature stop codon nine residues downstream from the mutation. The deletion mutation in TYR and an allele of FCA931 segregated concordantly with the albino phenotype. Taken together, our results suggest that the TYR gene corresponds to the colour locus in cats and its alleles, from dominant to recessive, are as follows: C (full colour) > cb (burmese) ≥ cs (siamese) > c (albino). Albinism is a congenital disorder that is characterized by lack of pigment in hair, skin and eyes. Recently, the causative mutations for the siamese and burmese temperaturesensitive alleles have been identified in tyrosinase (TYR; Lyons et al. 2005). Complete albinism in cats is hypothesized to be caused by an additional allele at TYR, contributing to the allelic series at the colour (C) locus: C (full colour) > c b (burmese) P c s (siamese) > c (complete albino). To confirm that albinism is a TYR allele in cats, an analysis of an Oriental and Colourpoint Shorthair cat pedigree that segregates for albinism ( Fig. 1) was tested for linkage with FCA931, a marker 1.7 cM from TYR (Menotti-Raymond et al. 1999. In addition, sequence analyses of TYR were conducted to identify a causative mutation for feline albinism. DNA was isolated from buccal cells and blood samples from cats in the multi-generational pedigree ( Fig. 1) according to published procedures (Sambrook & Russell 2001;Oberbauer et al. 2003). Phenotypes were verified by visual inspection, breeder reports, segregation in families and photographs (Fig. 2). Relationship of the cats was verified by parentage testing with 19 microsatellites (data not shown). Pigmented cats that produced albino offspring were assumed to be obligate carriers of the complete albinism allele. Microsatellite FCA931, which is linked to TYR (Menotti-Raymond et al. 1999, was genotyped as previously described (Grahn et al. 2004), and the alleles were tested for concordant segregation with the colour phenotypes. Tyrosinase exons were sequenced as previously described (Lyons et al. 2005) from three albino cats and three obligatory carriers, as well as from three wild-type cats that were not associated with the albino pedigree. The TYR sequences of the albino cats and the wild-type cats were identical except for a cytosine deletion at position 975 in exon 2, which causes a frame shift and a premature stop codon in the protein translation nine codons downstream of the deletion (Fig. 3). The sequence of the deletion allele was submitted to GenBank (AY743347). Fifty of 67 cats in the pedigree ( Fig. 1) were then screened for the identified deletion mutation by direct sequencing. The genotypes for marker FCA931 and for the mutation are presented in Fig. 1. 
The cytosine deletion was homozygous in all 15 albino cats and heterozygous in all seven obligate carriers. Analysis of the pedigree suggested that albinism had an autosomal recessive mode of inheritance. One recombination event was detected between FCA931 and the colour phenotype. The founder cats were Colourpoint Shorthairs, so these cats have at least one siamese allele (c s ), which was confirmed by sequence and restriction fragment length polymorphism (RFLP) analyses (Lyons et al. 2005). In our previous study (Lyons et al. 2005), mutations in TYR associated with the temperature-sensitive phenotypes of siamese and burmese cats were identified. Robinson (1991) suggested that an allele in TYR causes albinism because other species have the same phenotype associated with TYR mutations. Our analysis of an extended pedigree supports that the albinism phenotype is allelic to full colour (C), burmese (c b ) and siamese (c s ), suggesting the allelic series C > c b ≥ c s > c based on mutations in TYR. However, sufficient breeding studies have not been performed to confirm the allelic series in cats, specifically, the interaction of full colour and burmese with the albino allele. The putative albino mutation identified in this study would produce a truncated protein because a stop codon occurs in exon 2, nine amino acids downstream of the deletion. The amino acids located near the cytosine deletion are conserved among dogs, humans, mice, cattle and rabbits (Fig. 3), further supporting that this mutation is a significant change in the protein. Expression studies and complementary DNA sequencing are needed to support this finding. Albino cats have been reported in the literature (Todd 1951; Turner et al. 1981) but have not been well characterized. Blue-eyed vs. pink-eyed albino cats have not been clearly distinguished in the published reports (Bamber & Herdman 1931; Todd 1951; Leventhal 1982; Leventhal et al. 1985). Thus, it is unclear whether there is more than one non-temperature-sensitive albinism allele in cats, as has been reported in mice (reviewed in Beermann et al. 2004), in humans (summarized at http://albinismdb.med.umn.edu/) and in cattle (Schmutz et al. 2004). The albino cats evaluated in this study have blue eyes. As with most blue-eyed cats, reduced pigment in the tapetum produces a 'reddish' (as opposed to a 'greenish') tapetal reflection or 'eye-shine'. The c allele has been reserved for red-eyed (complete) albinism, but the difference in the tapetal reflex suggests that the single report of a red-eyed albino cat may be in error. In conclusion, we propose that a cytosine deletion in TYR at position 975 in exon 2 is associated with albinism in cats. This mutation could be used in a DNA test to detect carriers and assist with breeding programmes. This finding also supports the use of the cat as a model for human TYR-associated albinisms.

Figure 3 Exon 2 nucleotide and protein sequence alignments of feline and human tyrosinase (TYR). The TYR nucleotide sequences for Felis catus (FCA) and Homo sapiens (HSA) were AY743347 and M27160, respectively. The amino acids (aa's) for each codon are listed below the nucleotide sequences. The albino mutation is a cytosine deletion at nucleotide 975 that causes a frameshift, leading to a stop (OCH) codon nine residues downstream of the deletion (GenBank accession no. AY743347). The portion of the cat albino allele that is altered relative to the wild-type sequence is presented in bold.
Amino acids that are conserved among dogs, humans, mice, and cattle are underlined. The rabbit has a single amino acid change in this region, replacing an alanine with a serine.
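The mechanism invoked above, a single-base deletion that shifts the reading frame until a premature stop codon is reached, can be illustrated with a short, self-contained sketch. The sequence, the deletion position, and the reduced codon table below are entirely hypothetical and are not taken from the AY743347 entry; they only demonstrate how a one-base deletion truncates the predicted protein.

```python
# Hypothetical toy example: a 1-bp deletion shifts the reading frame and
# introduces a premature stop codon. Neither the sequence nor the codon
# table below corresponds to the feline TYR gene (GenBank AY743347).

CODONS = {
    "ATG": "M", "GCC": "A", "CTA": "L", "GGC": "G",
    "AAA": "K", "TAA": "*", "TAG": "*", "TGA": "*",
}

def translate(seq: str) -> str:
    """Translate in frame from the first base, stopping at the first stop codon."""
    protein = []
    for i in range(0, len(seq) - 2, 3):
        aa = CODONS.get(seq[i:i + 3], "X")   # "X" = codon absent from the toy table
        protein.append(aa)
        if aa == "*":
            break
    return "".join(protein)

wild_type = "ATGGCCCTAGGCAAATAA"          # toy ORF: M A L G K *
mutant = wild_type[:6] + wild_type[7:]    # delete one base after the second codon

print("WT :", translate(wild_type))       # MALGK*
print("MUT:", translate(mutant))          # MA*  -- frameshift creates an early stop
```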
2017-10-24T07:56:19.877Z
2006-04-01T00:00:00.000
{ "year": 2006, "sha1": "1fbcec6f3520138395b6692f38892fbc55e5ea7d", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.1365-2052.2005.01409.x", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "d5c2bb59b3570e14eea64af26bd2638d1cfbaeb0", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
266887294
pes2o/s2orc
v3-fos-license
Investigation of Gallium Arsenide Deformation Anisotropy during Nanopolishing via Molecular Dynamics Simulation Crystal orientation significantly influences deformation during nanopolishing due to crystal anisotropy. In this work, molecular dynamics (MD) simulations were employed to examine the process of surface generation and subsurface damage. We conducted analyses of surface morphology, mechanical response, and amorphization in various crystal orientations to elucidate the impact of crystal orientation on deformation and amorphization severity. Additionally, we investigated the concentration of residual stress and temperature. This work unveils the underlying deformation mechanism and enhances our comprehension of the anisotropic deformation in gallium arsenide during the nanogrinding process. Introduction Gallium arsenide, as a III-V compound semiconductor, exhibits direct bandgap characteristics when compared to traditional elemental semiconductor materials such as silicon (Si).It finds extensive applications in the manufacturing of laser diodes [1] and offers reduced noise levels in high-frequency operating conditions compared to silicon devices [2].Furthermore, gallium arsenide material demonstrates high carrier mobility [3] and optical coupling effects [4], making it well-suited for next-generation communication and advanced optical device fabrication [5,6].However, during semiconductor processing, surface defects induced by fabrication processes have a significant impact on the electrical characteristics and service life of the final devices [7].Existing research has indicated that the crystal orientation of gallium arsenide surfaces significantly influences the performance of the final processed devices, and selecting different crystal orientations during processing can result in substantial enhancements of semiconductor components [8,9]. Due to the significant impact of surface defects generated during semiconductor device processing on the final quality, scholars have conducted extensive experimental research.These experiments primarily include indentation tests and scratch tests [10,11], which employ experimental methods to observe structural surface defects and subsequently investigate surface morphology and crystal structure damage.Gao et al. [12] utilized molecular dynamics to examine GaAs laser bar cleavage.Their study highlighted the influence of scratching depth on scratch quality and provided optimal parameters for GaAs cleavage.Li et al. [13] conducted Vickers indentations on a GaAs single crystal, yielding defects like dislocations, microtwins, stacking faults, and amorphization.Proposed amorphization mechanisms include high-pressure and shear deformation; high-pressure induced amorphization and shear deformation induced amorphization indicate the transformation from crystalline to amorphous structure.Li et al. [14] investigated cracks induced by 0.049 N load indentations in GaAs, observing shear-related crack initiation, dislocation generation, lattice distortion, and amorphous band formation.Annealing eliminated the amorphous band, revealing a crack propagation via decohesion.Huang et al. [15] studied monocrystalline GaAs deformation during nanoscratching, revealing atomic-scale lattice bending in semiconductor materials.They discussed the lattice bending mechanism and found the residual stress could be responsible for the local lattice bending.Parlinska et al. 
[16] explored GaAs nanoindentations and nanoscratches using different indenters.The Berkovich indentations caused convergent dislocations, twins, and slip bands, while the 60°wedge indentations led to divergent bands and median cracks.They discussed the mechanism of deformation of the crystals and found that the deformation was mainly concentrated at the front of the indenter.They similarly found by TEM experiments that the crystal deformation was mainly concentrated at the front of the indenter.Wasmer et al. [17] employed nanoindentation and scratching to study gallium arsenide.They discovered twinning during indentation and slip bands and perfect dislocations during scratching.This phenomenon was attributed to differing strain rates, higher in scratching, promoting a perfect α dislocation propagation, while slower indentation velocities enable twinning nucleation from surface inhomogeneities.Wasmer et al. [18] employed scratch tests on GaAs {001} crystals with loads (5-100 mN) and a Berkovich indenter.They unveiled the plastic deformation stages, including dislocation cloud formation, median cracks nucleation, surface radial cracks, plastic flow, lateral cracks, and chip formation.These events exhibited a power-law dependence.Elastic recovery was approximately 15%, explained by the rheological factor X. Gao et al. [19] utilized scratching and cleavage operations to enhance GaAs cleavage planes in high-power semiconductor laser cavities.Scratching with a lower load and higher speed reduced damage, while the scratch capability index (SCI) indicated the cleavage plane quality.This approach can advance semiconductor laser chip manufacturing.They also discussed the relationship between scratch quality and load and found that the load has a significant effect on scratch quality.Yu et al. [20] employed nanoscratch tests on GaAs {100} using an atomic force microscope (AFM) with a SiO 2 tip.Decreasing the sliding velocity increased the scratch depth.High-resolution transmission electron microscopy (HRTEM) found no lattice damage.The material removal was attributed to dynamic interfacial bond breakage.High-speed sliding resulted in a faster GaAs surface material removal, ideal for SiO 2 polishing without surface damage.Chen et al. [21] used molecular dynamics (MD) simulations to investigate single-crystal copper nanoscratching.They observed depthdependent subsurface changes, differing (100) and (111) plane behaviors, and identified stack faults.It was shown that the surface integrity was not only related to the scratch depth, but the surface grain orientation also had a non-negligible effect on the surface integrity.Fan et al. [22] employed oblique nanomachining to enhance GaAs machining quality.They observed an early dislocation avalanche and a favorable plasticity during cutting under certain tip conditions, particularly oblique cutting.Gao et al. [23] employed a novel method to study anisotropic stress in GaAs.They found a lower stress along {100} than {110} orientations.The (011) plane displayed potential as a preferential cleavage plane with improved quality.This research enhanced the understanding of cleavage mechanisms.The study discussed the stress field of the GaAs scribing process and showed that the maximum stress was concentrated at the tip of the indenter and appeared anisotropic in different directions.Wang et al. 
[24] employed AFM tip-based nanoscratching to create GaAs nanochannels, studying the material removal and subsurface damage.Depths below 11 nm favored cutting over plowing, inducing stacking faults, dislocations, nanocrystallization, and amorphization.Wu et al. [25] probed GaAs surface defects using a conductive atomic force microscope (C-AFM).Scratches showed a higher edge current, influenced by the load.Etching increased currents, with scratch-induced Schottky barrier height changes.Fang et al. [26] studied Si and GaAs nanomechanical properties via nanoindentation and nanoscratch.Results showed a decreased Young's modulus and hardness with a higher load, hold time, and cycles.GaAs exhibited a pop-in effect, and the wear behavior varied with the feed and load.The scratch technique used the material removal volume to evaluate hardness.The study found an effect of the applied load on the GaAs surface quality, which related to the surface hardness and Young's modulus. However, due to the high cost of experimental research and the stringent requirements for experimental environments, many scholars are gradually adopting a combination of MD simulation with experimental research.The MD simulation studies of materials are widely used to investigate the mechanical behavior and deformation mechanisms of materials at the nanoscale.It has been widely applied in the study of atomic-scale surface deformation and crystal structure and is suitable for the study of properties that are difficult to measure with many traditional experimental methods [27][28][29][30][31]. Li et al. [32] employed molecular dynamics simulations to investigate the influence of cracking on GaAs deformation in different crystal orientations during processing.Their findings revealed cracking-induced alterations in atomic-level deformation behavior, attributed to the tensile stress distribution and fracture surface variations.The anisotropy induced by the surface grain orientation, which has an important effect on the surface defects, can also be seen by MD simulation.Xu et al. [33] used molecular dynamics simulations to investigate GaAs crystal anisotropy during nanoscratching.They found significant anisotropic effects on the deformation, residual stress, and surface properties, offering new insights into the material behavior.The study also confirmed that the anisotropy of the surface grain direction had an important influence on the distribution of residual stresses.Yi et al. [34] utilized molecular dynamics to examine GaAs nanoscratching in chemical mechanical polishing.Phase transformation and amorphization were the dominant deformation mechanisms.Anisotropic effects were observed, with varied scratching resistance and friction coefficients among different GaAs crystal orientations, providing insights into the mechanical wear in GaAs polishing.The study also confirmed that the anisotropy of the surface grain direction had an important influence on the scratching forces.Chen et al. [35] employed molecular dynamics simulations to explore surface and subsurface deformations in gallium arsenide during nanocutting.Dislocations, phase transformations, and anisotropic effects were investigated, providing insights into performance-affecting factors in GaAs machining.Li et al. 
[36] used molecular dynamics simulations to explore plowing-induced deformation in GaAs.They observed crack initiation, propagation, and dislocation-dominated plasticity, providing atomic-level insights into a novel deformation pattern in GaAs during plowing.The MD simulations also found that the deformation and high stress areas were mainly distributed at the front end of the indenter, which was consistent with the experimentally generated phenomena.Fan et al. [37] simulated the AFM tip-based hot machining of GaAs at temperatures of 600 K, 900 K, and 1200 K, revealing reduced cutting forces, increased friction, enhanced material removal rate, and ductile response with dislocations, along with chip densification during hot cutting.Fan et al. [38] studied nanoscale friction using MD simulations and AFM nanoscratch experiments on gallium arsenide.They examined the scratch depth effects, revealing a size-dependent behavior.The study found correlations between MD simulations and AFM experiments, indicating a specific scratch energy insensitivity to the tool geometry and scratch speed.However, the pile-up and kinetic coefficient of friction were influenced by the tool's tip geometry.Fan et al. [39] investigated a diamond wear during AFM-based nanomachining of GaAs via MD simulations.They observed the diamond tip's elastic-plastic deformation and transformation from a cubic to graphite structure, identifying graphitization as the dominant wear mechanism, introducing a novel method for quantifying the graphitization conversion rate.Chen et al. [40] investigated gallium arsenide's crack formation during nanocutting.They found a transition from dislocation to phase transformation at higher cutting speeds, with more cracks at greater depths.Deformation shifted from ductile to ductile-brittle, with cracks at the amorphoussingle crystal boundaries.Tensile stress was concentrated at crack tips.Taper-cutting experiments revealed a 25 nm brittle-ductile transition depth, supported by transmission electron microscopy (TEM) showing microcracks and polycrystals in the subsurface, aligning with simulation findings.Li et al. [41] reviewed molecular dynamics simulations in tip-based nanomachining (TBN), covering material-specific models, TBN mechanisms, and future prospects, offering valuable insights for further research in this field.The study provided a systematic overview of the molecular dynamics study of TBN, showing that the molecular dynamics approach was applicable to the study of mechanical properties and surface defects.Fan et al. [42] used molecular dynamics to explain ductile plasticity in polycrystalline gallium arsenide during nanoscratching, emphasizing the dislocation nucleation at grain boundaries and its impact on material behavior.Rino et al. [43] studied structural phase transformation in crystalline gallium arsenide under a 22 GPa pressure, with a reverse transformation observed at 10 GPa, showing hysteresis.Molecular dynamics results matched experiments, estimating a 0.06 eV energy barrier.The simulation results showed that there was a clear relationship between the stresses and the changes in the crystal structure.Kodiyalam et al. [44] investigated pressure-induced structural transformation in gallium arsenide nanocrystals, with nucleation occurring at the surface, leading to inhomogeneous deformation and grain boundaries.It was also found that the region of high-pressure distribution had an important influence on the transformation of the crystal structure.Parasolov et al. 
[45] developed a nanoindentation model using molecular dynamics on GaAs.Above 100 K, nanoindentation led to increased point defects in GaAs atomic layers, attributed to thermal energy fluctuations and external stress.Gular et al. [46] conducted geometry optimization calculations on GaAs up to 25 GPa using a Stillinger-Weber potential.They determined a B3 to B1 phase transition at 17 GPa and evaluated various material properties, providing valuable insights for future GaAs pressure studies.The comprehensive analysis of MD simulations shows that anisotropy has an important effect on surface defects and crystal structure, and the MD simulation method is also applicable to the study of micromechanical properties in nanofabrication; the effect of crystal orientation will be further investigated in this study. Existing studies have shown that in the processing of GaAs crystalline materials, the selection of the appropriate crystal orientation has an important impact on the performance of the final processed workpiece [8,9].In this work, the machined surface of GaAs with different crystallographic orientations is modeled and the surface morphology and amorphous damage layer after nanopolishing are investigated; in addition, residual stresses as well as temperatures are analyzed in order to select a suitable crystallographic orientation for nanofabrication.This work utilizes MD simulations to investigate the processes of surface generation and subsurface damage.A nanoscale-polishing molecular dynamics model incorporating the microasperity structure of the actual processed surface is established.The surface topography, mechanical properties, and phase transition processes under {100}, {110}, and {111} crystal orientations are analyzed, validating the influence of anisotropy on the surface morphology and subsurface crystal phase transformation extent.Furthermore, by analyzing the differences in surface pile-up after nanoscale polishing for three crystal orientations, this work also examines the impact of the surface crystal orientation on the temperature distribution and residual stress distribution during the nanoscale polishing process, which may have practical implications for nanoscale polishing processes. Simulation Methods In comparison to the traditional nanoscale polishing model, the nanoscale polishing model employed in this study takes into account the microconvex structures present on the actual processed surface.The variables under investigation pertain to the crystallographic orientations of gallium arsenide (GaAs) surfaces during the nanoscale polishing process, specifically the {100}, {110}, and {111} crystallographic orientations.The nanoscale pol- ishing model for GaAs crystals, as illustrated in Figure 1, can be conceptually divided into two main components: the equivalent spherical representation of the diamond polishing tool and the GaAs surface with its microconvex structures. 
As depicted in Figure 1a, the equivalent diamond polishing particle had a diameter of 12 Å, consisting of 159,486 atoms, and possessed a lattice constant of 3.57 Å.The equivalent GaAs surface was composed of two parts: a substrate with dimensions of 300 Å × 220 Å × 50 Å and microconvex structures comprising one-quarter spheres at both ends and a central half-cylinder, all with a radius of 7 Å.The centers of the spherical structures at the two ends were located at (110 Å, 110 Å, 50 Å) and (190 Å, 110 Å, 50 Å), respectively.The position of the diamond particle was (−60 Å, 110 Å, 120 Å).The total number of gallium atoms was 104,963, and the total number of arsenic atoms was 103,420.The crystallographic structure of the GaAs crystal is depicted in Figure 1b, with a lattice constant of 5.654 Å. The equivalent model for the gallium arsenide (GaAs) surface was divided into three distinct layers, as shown in Figure 1a: the Newtonian atomic layer situated at the top, where atomic motion follows Newton's second law and is calculated using the velocity Verlet algorithm [47]; the isothermal atomic layer in the middle, which regulates temperature changes based on the Berendsen thermostat [48]; and the fixed atomic layer at the bottom, where atomic positions and velocities are constrained to prevent atoms from escaping the boundary.In the multilayer structure, the thickness of the Newtonian layer was 100 Å (70 Å for the radius of the microconvex body and 30 Å for the basal portion), the thickness of the thermostatic layer was 10 Å, and the temperature of the boundary layer was 10 Å.In addition to the potential energy parameters, to ensure convergence, the model set boundary conditions as well as energy minimization constraints so that the model was in a steady state before nanopolishing.To enhance computational efficiency in the simulation, this work employed periodic boundary conditions for the nanoscale polishing process.Specifically, periodic boundary conditions were applied in the y-direction to exploit the system's symmetric properties, while nonperiodic boundary conditions were imposed in the x-direction (processing direction) and the z-direction (normal to the surface) to ensure a realistic representation of the system.This study utilized the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [49] for molecular dynamics simulations and employed the open visualization tool (OVITO) [50] for the visualization and postprocessing of the simulation results.The detailed parameters of the model are presented in Table 1.The simulation workflow included the prepolishing energy minimization process using the conjugate gradient method [51].The model's relaxation process was conducted under the NPT ensemble with a relaxation time of 100 ps.During this process, the model's temperature gradually stabilized at room temperature (293 K) using the Nose-Hoover thermostat, and the potential energy converged to −5.30 × 10 −5 eV.The temperature and potential energy changes during the relaxation process are illustrated in Figure 2. 
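As a rough, self-contained illustration of how such a zincblende GaAs cell can be generated programmatically (for example, to prepare input for LAMMPS), the snippet below uses the ASE library with the lattice constant quoted above. It does not reproduce the actual model of this work (the microconvex surface, the layered thermostat/boundary regions, or the diamond tool), and the output file name is arbitrary.

```python
# Illustrative only: build a conventional zincblende GaAs cell (a = 5.654 Å)
# and export a small periodic block in LAMMPS data format using ASE.
# This is not the authors' actual polishing model.
from ase.build import bulk
from ase.io import write

gaas = bulk("GaAs", "zincblende", a=5.654, cubic=True)  # 8-atom conventional cell
block = gaas.repeat((10, 10, 5))                        # 4000-atom test block
write("gaas_block.data", block, format="lammps-data")
print(len(block), "atoms written")
```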
Following the relaxation of the model, the ensemble was switched to NVE, and the simulation of nanoscale polishing was performed.During the relaxation phase of the model, the temperature gradually stabilized at 293 K, and the total potential energy of the model gradually stabilized at −5.30 × 10 −5 eV.In this process, the polishing speed of diamond abrasive particles was set at 100 m/s in the (0,1,0) direction, with a polishing distance of 30 nm.Before the calculations for stresses, RDF, and temperature and after the nanopolishing simulation, the model was subjected to a relaxation process, which resulted in a more stable surface structure after processing.To observe the stable structure of the surface after the nanoscale polishing process, a second relaxation process was conducted for the model, also with a relaxation time of 100 ps.During the process of nanoscale machining, the selection of the interatomic potential energy is of paramount importance.In the case of polishing gallium arsenide (GaAs) workpieces, the interatomic potential energy functions in Ga-Ga, Ga-As, and As-As atoms are described by the Tersoff potential [52] and the parameters refers to [53].The expression of the Tersoff potential function is shown in Equation ( 1).For the interatomic potential energy function in carbon-carbon (C-C) atoms in diamond polishing particles, the Tersoff potential was employed.The interatomic potential energy functions between carbon (C) atoms in diamond polishing particles and gallium (Ga) or arsenic (As) atoms in GaAs workpieces are governed by the Ziegler-Biersack-Littmark universal screening function (ZBL) potential [54].The expression of the ZBL potential is presented in Equation ( 2), where the parameter inner is the distance where the switching function begins, and outer is the global cutoff for the ZBL interaction.The parameters inner of Ga-C and As-C are 31.0and 33.0, respectively.The parameters outer of Ga-C and As-C are 12.0. where V ij is the Tersoff potential energy, f R means the two-body term, f A means the three-body term, f C means the cutoff of the coefficient. where Z 1 , Z 1 are the number of protons in the nucleus, e is the electron charge, ε 0 is the permittivity of vacuum, and ϕ(r ij /a) is the universal screening function of ZBL potential.When evaluating surface residual stresses in polished gallium arsenide (GaAs) workpieces, the von Mises stress was calculated.It was determined based on the atomic stress tensor, taking into account the combined effects of six stress components, as expressed in Equation (3).When considering temperature variations during the nanoscale polishing process, the temperature change was represented using the average kinetic energy expression [48], as shown in Equation (4). where σ vm (i) denotes the von Mises stress, and σ(i) denotes an atomic stress tensor. where E k represents the average atomic kinetic energy, k denotes the Boltzmann constant which is 1.381 × 10 −23 J/K, and T denotes the temperature. 
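The two post-processing quantities defined in Equations (3) and (4) are straightforward to compute once per-atom stresses and kinetic energies are available. The sketch below assumes the six stress components have already been converted to GPa (per-atom stress output from MD codes is typically reported as stress times volume and needs conversion first) and uses the equipartition relation for the temperature; all example numbers are made up.

```python
# Minimal post-processing sketch; array contents are placeholders, and the
# per-atom stresses are assumed to be already expressed in GPa.
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def von_mises(sxx, syy, szz, sxy, syz, szx):
    """Von Mises equivalent stress from the six stress-tensor components (Eq. 3)."""
    return np.sqrt(0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2 + (szz - sxx) ** 2)
                   + 3.0 * (sxy ** 2 + syz ** 2 + szx ** 2))

def temperature(mean_kinetic_energy_J):
    """Group temperature from <E_k> = (3/2) k_B T (Eq. 4)."""
    return 2.0 * mean_kinetic_energy_J / (3.0 * K_B)

# three atoms with made-up stress components (GPa): xx, yy, zz, xy, yz, zx
s = np.array([[3.1, 0.4, 0.2, 0.5, 0.1, 0.0],
              [1.2, 0.9, 0.7, 0.2, 0.0, 0.1],
              [0.3, 0.2, 0.1, 0.0, 0.0, 0.0]])
print("per-atom von Mises stress (GPa):", von_mises(*s.T))
print("T for <E_k> = 6.07e-21 J:", temperature(6.07e-21), "K")  # ≈ 293 K
```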
Surface Quality After nanoscale polishing on the gallium arsenide (GaAs) surface, significant atomic displacements were observed primarily due to the intense interaction between diamond polishing particles and the rapidly moving surface.The atomic displacements of the three crystallographic orientations were color-coded based on the total atomic displacement and displacement along the z-axis, as depicted in Figure 3(a1-c1).The regions of maximum atomic displacement for different crystallographic orientations were concentrated at the ends of microprotuberances formed as chip piles, with varying specific distributions.The {100} crystallographic orientation exhibited a maximum displacement concentration at the top of the polished chip pile, while the {110}, and {111} orientations, as a result of their detachment from the surface after microprotuberance polishing, showed an atomic displacement accumulation along the x-axis.A profile analysis of atomic displacements within the chip along the y-axis is shown in Figure 3(a2-c2).Along the z-axis, atomic displacements within the residual pile-up on the surface after polishing gradually decreased for the {100}, {110}, and {111} orientations.Due to the anisotropy caused by the crystallographic orientations, atoms from the {110} and {111} orientations exhibited the highest atomic displacements as they detached from the surface following their interaction with microprotuberances and diamond polishing particles, resulting in relatively smaller atomic displacements within the remaining portion compared to the {100} orientation.After analyzing the atomic displacements following nanoscale polishing, postprocessing was carried out based on the z-coordinate positions of gallium arsenide polished surface atoms.As illustrated in Figure 4, observations from the z-direction top view and z-direction cross-sectional view revealed significant alterations in atomic distribution after nanoscale polishing.Notably, different crystallographic orientations of microprotrusions exhibited distinct impacts on the surface quality following polishing.Specifically, microprotrusions with a {100} crystallographic orientation did not disintegrate along with the diamond polishing particles during nanoscale polishing; instead, they accumulated on the preexisting surface, resulting in a maximum surface asperity height of 138 Å.In contrast, atoms from the {110} and {111} crystallographic orientations split into two parts after nanoscale polishing, with pile-up heights of 98 Å and 85 Å, respectively, exerting a lesser influence on the postpolishing surface quality. 
Mechanical Property In order to further investigate the influence of surface crystallographic anisotropy on mechanical properties, the normal force F z and tangential force F x during nanoscale polishing were separately calculated, as shown in Figure 5.The relationships between the normal force F z and the polishing distance, as well as the tangential force F x and the polishing distance, were analyzed.After the contact between diamond polishing particles and gallium arsenide microprotrusions, the contact force gradually increased with the moving distance, reaching a maximum value before gradually decreasing.Furthermore, it could be observed that F z was numerically greater than F x .For polishing distances less than 5 nm, there were no significant differences in F z and F x among the three crystallographic orientations.However, when the polishing distance was between 5 nm and 20 nm, significant differences in F z and F x appeared among the three crystallographic orientations, and after the diamond polishing particles left the microprotrusions, both F z and F x gradually decreased to their minimum values.After calculating the contact forces in three crystallographic orientations, the average contact forces within the nanoscale polishing range of 5-20 nm were determined, as illustrated in Figure 6.It is evident that the F z and F x components were highest for the {110} crystallographic orientation.Conversely, the F x component was at its minimum for the {100} orientation, indicating reduced interatomic interactions.This observation aligned with the earlier analysis of the surface atomic displacement.It is plausible that atomic displacements in the x-direction were limited due to the nanopolishing, resulting in a lesser interaction for {100}.In contrast, the {110} and {111} orientations exhibited larger F x values, implying stronger interatomic interactions, potentially leading to the detachment of some atoms as the microprotrusions decomposed due to the enhanced F x .In terms of F z , the {111} orientation experienced the lowest contact force, suggesting weaker interactions in the z-direction.This may be correlated with the earlier observation of the lowest atomic stacking height in the z-direction.A further investigation is required to understand its implications on the subsurface atomic structure. 
Amorphization Analysis After nanoscale polishing, the anisotropy not only influences the mechanical properties but also results in differences in the crystal structure of the subsurface after processing.By examining the crystal structures of subsurface atoms, as depicted in Figure 7, it could be observed that following nanoscale polishing, most of the remaining atoms on the processed surface underwent an amorphization process, with only the outermost atoms of the microconvex polished surface retaining their original structure, namely the cubic diamond structure.From the cross-sectional view on the y-z plane, it is evident that the thickness of the subsurface damaged layer (SDL) varied after nanoscale polishing, with the {110} crystal orientation reaching a maximum thickness of 3.16 nm, the {100} crystal orientation measuring 2.89 nm, and the {111} crystal orientation having a minimum thickness of 2.19 nm.Consistent with the previous analysis of mechanical properties, the thickness of the SDL exhibited a similar trend to the variation in F z values, potentially attributed to the varying degrees of z-directional interactions resulting from the crystal anisotropy.The thickness of the amorphous SDL layer exhibited anisotropy, indicating that the selection of different crystallographic orientations for processing had an influence on the crystal structure within the polished surface.In order to further investigate the subsurface damage process, an analysis was conducted using the radial distribution function (RDF), as shown in Figure 8.The RDF is a commonly used function for studying crystal structures, representing the distribution of atomic distances, with different peaks in the curve characterizing different crystal structures.As depicted in Figure 8a , RDF calculations were performed for the processed surface before, during, and after polishing.It can be observed that during the nanoscale polishing process, there was a decrease in the peak at 2.45 Å and an increase in the peak at 2.85 Å, indicating that some crystal structures with an atomic spacing of 2.45 Å were disrupted during the processing and transformed into structures with an atomic spacing of 2.85 Å, and this process was irreversible.For different crystal orientations, as shown in Figure 8b , it can be seen that the {110} orientation exhibited a higher peak at 2.85 Å compared to other orientations, indicating a lower proportion of amorphization atoms in the residual atoms after processing for the {110} orientation. 
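A radial distribution function of the kind analyzed here can be obtained from atomic coordinates with a few lines of post-processing. The sketch below assumes a fully periodic cubic box for simplicity (the actual model uses mixed boundary conditions), and the coordinates are random placeholders rather than simulation output.

```python
# Minimal g(r) sketch for a cubic periodic box; positions and box size are
# placeholders, not the GaAs cell used in this work.
import numpy as np

def rdf(positions, box, r_max, n_bins=200):
    """Pair-distance histogram normalized to an ideal-gas reference."""
    n = len(positions)
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)                 # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r[r < r_max], bins=edges)[0]
    rho = n / box ** 3                               # number density
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    g = 2.0 * counts / (n * rho * shell)             # each pair counted once
    return 0.5 * (edges[:-1] + edges[1:]), g

rng = np.random.default_rng(0)
r_mid, g_r = rdf(rng.uniform(0.0, 30.0, (500, 3)), box=30.0, r_max=10.0)
# peaks near 2.45 Å and 2.85 Å would correspond to the RDF features discussed
# above when this is applied to the actual GaAs trajectory
```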
Analysis of Temperature Distribution Figure 9(a1-c1) represents the temperature distribution on the surface during the nanoscale polishing process.It can be observed that atoms with higher total displacements correspond to regions with higher temperatures, consistent with the previous analysis of atomic displacements.This observation indicates a relationship between the temperature distribution and the total atomic displacement during nanoscale polishing.Figure 9(a2-c2) displays a side view of the temperature distribution along the y-axis.It is evident that the temperature of {111} crystal facet debris reached a maximum of 600 K, while the surface atom temperature reached a minimum of 210 K.In contrast, the {100} facet debris exhibited a minimum temperature of 460 K, with the surface temperature reaching a maximum of 310 K.The opposite trends in temperature variation between different crystal facets during nanoscale grinding may be attributed to the fact that debris atoms carry away more heat, resulting in lower residual atomic temperatures on the surface.In Figure 10, the average temperatures and errors in different regions after nanopolishing with different crystal orientations are presented, which are consistent with the trends in Figure 10.The average temperature of the {100} crystal orientation for the cutting chips was the lowest at 421.85 K, while the average temperature of surface atoms was the highest at 339.21 K. On the other hand, the {111} crystal orientation exhibited the highest average temperature for the cutting chips at 607.725 K, with the surface atoms having an average temperature of 284.99 K. From the statistical analysis of the average temperatures, it could be inferred that following nanopolishing, the cutting chip atoms in the {100} crystal orientation did not separate into two parts with the movement of the diamond polishing particles.This resulted in a higher temperature accumulation, indirectly providing a higher temperature environment for amorphization, possibly leading to a more pronounced degree of amorphization.In contrast, the {111} crystal orientation showed a lower average temperature for the remaining surface atoms after nanopolishing, which may have resulted in a lower degree of amorphization. 
Analysis of Residual Stress The degree of crystal amorphization is not only related to the temperature environment but also to the internal stress intensity within the region.In Figure 11, the visualization results of the von Mises stress distribution on the surface after nanoscale polishing are presented.It can be observed in Figure 11(a1-c1) that following nanoscale polishing, there existed a high-stress distribution region (in red) with values ranging from 3 to 4 GPa on the surface in contact with the polishing particles, as well as stress distribution regions of 1-2 GPa, corresponding to the previously mentioned amorphous surface atomic regions.In the y-directional cross-sectional view, it can be observed that the highest stress in the SDL layer on the surface after three crystallographic directions processing was approximately 3.9 GPa.The stress distribution in the {110} and {111} facets was generally similar to that on the surface, although specific numerical values require further calculation.As shown in Figure 11, for the calculation of residual stresses on the processed surface, Figure 12a illustrates the distribution of total residual stresses at a depth of 56 nm along the z-axis for different crystallographic orientations.It can be observed that the 110 orientation exhibited the highest stress, followed by the {111} orientation, while the {100} orientation showed the lowest residual stress.The residual stress distribution at various depths along the {100} orientation is depicted in Figure 12b, revealing a generally negative correlation between residual stress and depth.The minimum residual stress was observed at a depth of 54 nm, while stress levels increased closer to the surface, reaching higher values at a depth of 60 nm.Furthermore, an analysis of different stress components on the {100} orientation is presented in Figure 12c.It is evident that the shear stress component σ xx in the xx direction was significantly larger than in other directions.Additionally, a comparison of the xx direction stress component σ xx was made among different crystallographic orientations.Notably, the {111} orientation exhibited a smaller stress component value, whereas the {110} orientation displayed the largest residual stress component value.These findings may have implications for the stability of the crystal structure and surface quality following surface polishing. 
Conclusions In this work, MD simulations were employed to investigate the processes of surface generation and subsurface damage. A nanoscale polishing molecular dynamics model was established, taking into consideration the microconvex structures of the actual processed surface. The analysis encompassed surface morphology, mechanical properties, and amorphization processes under the {100}, {110}, and {111} crystal orientations, thus confirming the influence of anisotropy on surface morphology and subsurface crystalline amorphization extent. After analyzing the disparities in surface pile-up following nanoscale polishing in three crystal orientations, it was discerned that the {111} crystal orientation exhibited a lower residual atomic height and a lower normal contact force during the processing. Additionally, an investigation of subsurface crystalline amorphization revealed a thinner amorphous layer beneath the {111} crystal orientation. In the RDF analysis, it was observed that the proportion of atoms undergoing amorphization was slightly lower under the {110} crystal orientation compared to the other two orientations. Furthermore, this work examined the influence of the surface crystal orientation on the temperature distribution and residual stress distribution during the nanoscale polishing process. Regarding temperature, the {111} crystal orientation exhibited lower surface temperatures during the processing. In terms of stress, it was found that the tangential residual stress component, σ xx, was larger compared to the normal residual stress component, σ zz. Additionally, σ xx under the {111} crystal orientation was lower. Considering the comprehensive analysis of postpolishing surface morphology, contact forces, SDL thickness, temperature, and stress distribution, it can be concluded that the microconvex structures under the {111} crystal orientation have a lesser impact on surface quality and subsurface amorphization after polishing, which may hold significance for practical nanoscale polishing processes.

Figure 2. Temperature and potential energy of the relaxation process before nanopolishing.
Figure 3. The total atomic displacement of the surface following nanoscale polishing. In panels (a1-c1), the cumulative atomic displacement is depicted, while panels (a2-c2) specifically represent atomic displacement in the z-axis direction.
Figure 4. The surface quality (a,c,e) in the z-direction view, and the cross-sectional view (b,d,f) in the y-direction after nanoscale polishing.
Figure 5. The relationship between (a) the tangential force, F x, and (b) the normal force, F z, with respect to the polishing distance.
Figure 6. The mean contact force within a range of 5 nm to 20 nm in nanoscale polishing. (a) F x; (b) F z.
Figure 8. The RDF curve for the nanopolishing process. (a) Different stages of the machining process, (b) different crystal orientations.
Figure 10. The average temperature profiles subsequent to nanoscale polishing, with distinct delineations for (a) the chip-removal atomic domain and (b) the residual-surface atomic domain.
Figure 12. (a) The distribution of total stresses along different crystal orientations, (b) the relationship between residual stresses and depth distribution, (c) the stress components along different directions, and (d) the residual stress components along different crystal orientations in the processing direction, σ xx, after nanoscale polishing.
Table 1. The MD simulation parameters.
2024-01-10T16:09:26.313Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "3ea7520727287a0ec13c34d4802cd16b08c45973", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-666X/15/1/110/pdf?version=1704709409", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "efd484f698117f576bd31e8b12bda2d821b1c45f", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [] }
258591706
pes2o/s2orc
v3-fos-license
β-arrestin1 regulates astrocytic reactivity via Drp1-dependent mitochondrial fission: implications in postoperative delirium Postoperative delirium (POD) is a frequent and debilitating complication, especially amongst high risk procedures, such as orthopedic surgery. This kind of neurocognitive disorder negatively affects cognitive domains, such as memory, awareness, attention, and concentration after surgery; however, its pathophysiology remains unknown. Multiple lines of evidence supporting the occurrence of inflammatory events have come forward from studies in human patients’ brain and bio-fluids (CSF and serum), as well as in animal models for POD. β-arrestins are downstream molecules of guanine nucleotide-binding protein (G protein)-coupled receptors (GPCRs). As versatile proteins, they regulate numerous pathophysiological processes of inflammatory diseases by scaffolding with inflammation-linked partners. Here we report that β-arrestin1, one type of β-arrestins, decreases significantly in the reactive astrocytes of a mouse model for POD. Using β-arrestin1 knockout (KO) mice, we find aggravating effect of β-arrestin1 deficiency on the cognitive dysfunctions and inflammatory phenotype of astrocytes in POD model mice. We conduct the in vitro experiments to investigate the regulatory roles of β-arrestin1 and demonstrate that β-arrestin1 in astrocytes interacts with the dynamin-related protein 1 (Drp1) to regulate mitochondrial fusion/fission process. β-arrestin1 deletion cancels the combination of β-arrestin1 and cellular Drp1, thus promoting the translocation of Drp1 to mitochondrial membrane to provoke the mitochondrial fragments and the subsequent mitochondrial malfunctions. Using β-arrestin1-biased agonist, cognitive dysfunctions of POD mice and pathogenic activation of astrocytes in the POD-linked brain region are reduced. We, therefore, conclude that β-arrestin1 is a promising target for the understanding of POD pathology and development of POD therapeutics. Introduction Postoperative delirium (POD) is a neuropsychiatric complication occurring after high risk procedures with general anesthesia, such as orthopedic surgery [1]. It is mainly characterized as neurocognitive disorders such as memory loss, attention deficits, and concentration failure in vulnerable patients such as the elderly [2,3]. Despite its acute course, typically 1-3 days after surgery plus anesthesia, delirium has devastating consequences including increased postoperative mortality, decreased quality of life and long-term risk for Alzheimer's disease (AD) [2,4]. However, the etiology of POD remains to be determined, for the lack of animal models to study the neuropathogenesis and targeted interventions. Orthopedic surgery is performed routinely, especially in older adults, to repair common bone injuries. As many as 50% of patients suffer from delirium and cognitive dysfunctions after orthopedic surgery; therefore, researchers have developed a mouse model of tibial fracture to study the impacts of orthopedic surgery on the central nervous system (CNS) and investigate the pathogenesis of POD [5,6]. Studies in human patients' brain and biofluids (CSF and serum), as well as in animal models for POD support the occurrence of inflammatory events as a critical driver of POD [7][8][9]. In line with these concepts, our previous studies have proved that treatments targeting inflammatory responses in POD presents promising therapeutic potentials [10]. 
Microglia and astrocytes are key regulators of inflammatory responses in the CNS. Glial activation appears to be a double-edged sword in neural diseases and is traditionally categorized as neurotoxic (M1 microglia and A1 astrocytes) or neuroprotective (M2 microglia and A2 astrocytes) [11,12]. Meanwhile, the phenotypic distribution of glia may vary in different pathological conditions [11,12]. Previous studies have verified that immune episodes after surgery trigger the release of pro-inflammatory cytokines from microglia, including tumor necrosis factor-α (TNF-α) and interleukin-β (IL-β), which correlates to more severe forms of dementia and AD pathogenesis [13][14][15]. Surgery-induced activation of complement component 3/complement component 3 receptor (C3/ C3R) signaling pathway, which is later regarded as the most characteristic marker of neurotoxic astrocytes, is reported to worsen cognitive impairments in perioperative neurocognitive disorders [16]. These researches collectively infer the toxic/pathogenic glial activation in the progression of POD. Guanine nucleotide-binding protein (G protein)-coupled receptors (GPCRs) constitute the largest known super family for signal transduction and regulate fundamental physiological functions essential to life, such as metabolism, neurotransmission, immune responses, and homeostasis. For their multiple functions, they are also the most intensively studied drug targets [17]. Conformational changes of the receptor increase their affinity for the multifunctional GPCR regulatory or adaptor proteins known as the β-arrestins, including two super family members β-arrestin1 and β-arrestin2 [18]. The association between receptors and β-arrestins blocks subsequent G protein activation and has an important role in traditional GPCR desensitization. In addition to their canonical role in binding to phosphorylated GPCRs to evoke receptor desensitization and endocytosis [18], β-arrestins have much broader effects including cellular signaling transduction and epigenetic modifications by interacting with numerous non-receptor binding partners [19,20]. For example, β-arrestins-associated complexes combine with extracellular signal-regulated kinase (ERK) to mediate ERK signaling pathways, as are the Raf-1, MEK1, and ubiquitin dependent signaling [20][21][22]. Besides, β-arrestins can also act as a dynamic nuclear linker of several transcriptional factors and co-factors, such as β-catenin, NF-ĸB and the hypoxia-inducible factor 1α (HIF-1α), mediating the epigenetic regulation of genes associated with pathological progression [23]. Studies using β-arrestins knockout (KO) mice have shown that β-arrestins regulate numerous physiologic and pathophysiologic processes, highlighting their potential roles as therapeutic targets. For example, β-arrestin2 is found to regulate NLRP3 inflammasome complex formation by interacting with NLRP3 protein, thus involving in the pathogenesis of Parkinson's disease (PD) [24]. Furthermore, β-arrestin2-biased agonist prevents dopaminergic neuron degeneration [25]. Another study demonstrates that β-arrestin1 KO mice diminishes amyloid-β pathology by regulating γ-secretase complex assembly [26]. Although there are so many interesting studies emphasizing their important roles, whether G protein-coupled receptors and the signaling molecules may be promising therapeutic targets of POD are unknown. 
With this study, we showed that β-arrestin1 decreased in the tibial fracture mouse model for POD, accompanying the pro-inflammatory micro-environment in the hippocampus of POD model mice, while β-arrestin2 showed no change. This decreased expression of β-arrestin1 was mainly found in the reactive astrocytes. Using β-arrestin1 KO mice, we found that β-arrestin1 deficiency aggravated pathological phenotypes and behavioral abnormalities of POD mice. Further investigations in vitro demonstrated the β-arrestin1 interacted with dynamin-related protein 1 (Drp1) to inhibit excessive mitochondrial fragmentation, which was mediated by mitochondrial translocation of Drp1. Furthermore, activation of β-arrestin1 signals by β-arrestin1-biased agonist Carvedilol (Carv) protected POD mice from cognitive dysfunctions as well as neurotoxic astrocytic activation. Our study expands the pathological mechanism of postoperative delirium and provides a rationale for clinical medication. Materials and methods Animals C57BL/6J mice (3 months) were obtained from Medical Animal Experiment Center of Nanchang University. β-Arrestin1 knockout mice were obtained from Jackson's laboratory (stock number: 011131). Mice were bred and maintained in the Animal Resource Centre of Nanchang University. All animal experimental protocols were approved by the Institutional Animal Care and Use Committee of Nanchang University and were performed in accordance with the standards established by the guideline of Nanchang University. Surgical model Tibial fracture was performed as previously described [6]. Briefly, mice were randomly divided into groups needed and then anesthetized with 1.8% isoflurane and oxygen at 2 L/min in the small animal anesthesia machine (RWD Life Science). Muscles were disassociated following an incision on the left hindpaw. A 0.38-mm stainless steel pin was inserted into the tibia intramedullary canal, followed by osteotomy, and the incision was sutured. 2% lidocaine solution was applied locally before the incision, and 1% tetracaine hydrochloride mucilage was applied to the wound twice daily to treat the pain. For Carv administration, 3.2 mg/kg Carvedilol and vehicle control were delivered every day to mice in drinking water for 30 days [27]. And then, the mice were established the tibial fracture mouse model for POD, during which Carv was administered continuously until behavioral test. Behavioral tests Y-maze test Y-maze was performed referring to our previous study [10]. The Y-maze apparatus consisted of three arms separated by an angle of 120°. In the Y-maze tests, all mice were applied two trials separated by an interval of 1 h. In the first trial, each mouse was placed at the center of a symmetrical Y-maze and was allowed to explore freely through the maze, except for the novel arm, for 5 min. In the second trial, all experimental arms opened and mice were placed in the maze in the starting arm, with free access to all three arms. Bouts of novel arm entry and duration in the novel arm were recorded. The hindpaw of a mouse entering one arm was defined as one arm entry. The total time of the experiment was 5 min. Morris water maze Morris water maze test was conducted in a circular tank (1.1 m in diameter) containing opaque water (22 ± 1 °C) and a platform (10 cm in diameter) submerged 1.0 cm under the water. The water tank was dimly lit and surrounded by a white curtain. The maze was virtually divided into four quadrants, with one containing the platform (diameter 10 cm). 
Four prominent cues were placed outside the maze as spatial references. Mice were placed in the water facing the tank wall at different start positions across trials in a quasi-random fashion to prevent strategy learning. Mice were allowed to search for the platform for 1 min; a mouse that did not find the platform within 1 min was guided to it and required to stay there for 20 s. Each mouse underwent four trials (one from each quadrant) per day for five consecutive days. After each trial, the mouse was dried and returned to its cage until the next trial. All movements were recorded via a video tracking system, which calculated the distance moved and the time required to reach the platform (latency). The spatial probe trial was conducted 24 h after the last training session (on day 6): the platform was removed and mice were allowed to swim for 1 min. The latency to first reach the former platform location and the number of platform-area crossings were recorded and calculated.

Primary cell cultures and treatments
For primary astrocyte culture, brain tissues of WT and β-arrestin1−/− neonatal mice aged 1-3 d were stripped of meninges and blood vessels under a microscope. The tissues were digested with 0.25% trypsin (Gibco, #27250018) for 2 min, and digestion was terminated with Dulbecco's modified Eagle's medium (DMEM, Gibco, #12100-046) supplemented with 10% fetal bovine serum (FBS, Gibco, #10437028). The cell suspension was filtered through a 40-μm filter (BD Falcon, #352340) and centrifuged at 1000 g for 5 min. Cells were re-suspended in DMEM supplemented with 10% FBS and 1% penicillin/streptomycin (P/S, Gibco, #15640055) and plated in culture dishes (Corning, #430167). The culture medium was replaced with fresh medium after 24 h and then refreshed every 3 days. Once the cells reached 90% confluence on day 7-9, they were split into culture plates as needed.

Immunohistochemical (IHC) analysis
After post-fixation in 4% paraformaldehyde (PFA), brain tissues were dehydrated in 20% and then 30% sucrose dissolved in phosphate-buffered saline (PBS), each for 3 days, and then cut into 25-μm-thick slices. For immunohistochemical analysis, brain sections were incubated with 3% hydrogen peroxide to quench endogenous peroxidase activity before blocking with 5% BSA/PBST. After overnight incubation with primary antibody at 4 °C, HRP-labeled secondary antibody was applied at room temperature for 1 h, and the slices were rinsed with PBS three times. Finally, the slices were visualized with a diaminobenzidine (DAB, Boster, #AR1002) reaction for 5 min. Stereo Investigator software was used to image and count positive cells under the microscope (Olympus).

Enzyme-linked immunosorbent assay (ELISA)
Hippocampal tissues were collected from surgery-induced POD model mice after successful establishment of the mouse model. Brain tissue lysates were subjected to a protein quantification assay and homogenized to the same protein concentration. Brain IL-1β, TNF-α, IL-6, IL-4, IL-10, and BDNF were measured using mouse ELISA kits according to the manufacturer's instructions (ExCellBio).
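For readers reproducing the ELISA step above, sample concentrations are conventionally back-calculated from a four-parameter logistic (4PL) standard curve. The minimal Python sketch below illustrates this with made-up standard values; the kit's actual standards and analysis procedure are not specified in the text, so everything numeric here is an assumption.

# Hypothetical sketch: fitting a 4PL ELISA standard curve and back-calculating
# sample concentrations. Standard concentrations and optical densities are invented.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero concentration, d: response at saturation,
    # c: inflection point (EC50-like), b: slope factor.
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([7.8, 15.6, 31.25, 62.5, 125.0, 250.0, 500.0])  # pg/ml
std_od = np.array([0.08, 0.15, 0.27, 0.48, 0.83, 1.31, 1.85])

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 1.0, 100.0, 2.5], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    # Invert the 4PL: x = c * ((a - d) / (y - d) - 1) ** (1 / b)
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

sample_od = np.array([0.35, 0.95])     # hypothetical hippocampal lysate readings
print(od_to_conc(sample_od, *params))  # interpolated concentrations in pg/ml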
RNA isolation and quantitative real-time PCR (RT-PCR)
Total RNA was extracted from hippocampal tissues and primary astrocytes using TRIzol reagent (Invitrogen, #15596026) and reverse transcribed into cDNA using the HiScript III 1st Strand cDNA Synthesis Kit (Vazyme, #R312). Real-time PCR was performed on the ABI system in a 10-μl reaction containing cDNA, primers, and ChamQ Universal SYBR qPCR Master Mix (Vazyme, #Q711). GAPDH was used as the internal control gene (see the relative-quantification sketch after this methods passage). qPCR primers were designed using a primer design tool.

Detection of mitochondrial functions by fluorescent dyes

MitoSOX fluorescent dye
MitoSOX (Invitrogen, #M36008) is a fluorescent dye that is oxidized by mitochondrial superoxide in live cells, yielding red fluorescence; it is used for the detection of reactive oxygen species (ROS). After treatment, the culture medium of primary astrocytes was removed and the cells were stained with 5 μM MitoSOX at 37 °C in the dark for 10 min. The cells were rinsed twice and re-suspended in cold PBS containing 1% FBS for flow cytometric analysis (FACScan; Becton Dickinson).

JC-1 fluorescent dye
The JC-1 fluorescent probe (Invitrogen, #T3168) is a membrane-permeant dye used to determine the mitochondrial membrane potential. Upon membrane depolarization, JC-1 shifts from mitochondrial aggregates emitting strong red fluorescence (Ex = 585 nm, Em = 590 nm) to cytoplasmic monomers emitting green fluorescence (Ex = 514 nm, Em = 529 nm). After treatment, the culture medium of astrocytes was discarded, fresh JC-1 solution (final concentration 10 μg/ml) was added, and the cells were incubated at 37 °C for 30 min. After rinsing with PBS three times, the cells were digested and re-suspended in cold PBS containing 1% FBS for flow cytometric analysis (FACScan; Becton Dickinson).

Mitotracker Green
Mitotracker Green (Beyotime, #C1048) was used to stain mitochondria. After treatment, the astrocytic culture medium was discarded, fresh Mitotracker Green solution was added, and the cells were incubated at 37 °C for 40 min. After rinsing with PBS three times, the cells were imaged using a fluorescence microscope.

Determination of ATP content
The ATP content of primary astrocytes was detected using the enhanced ATP assay kit (Beyotime, #S0027) in accordance with the manufacturer's protocol. Cells were lysed in ATP lysis buffer and the lysates homogenized; ATP content was then determined using a luminometer.

Mitochondrial protein extraction
Mitochondria of primary astrocytes were extracted using the Mitochondria Isolation Kit for Cultured Cells (Abcam, #ab110170). Primary astrocytes were digested with trypsin and centrifuged to collect the pellets. After suspension in reagent A and incubation for 10 min, the cells were homogenized using a Dounce homogenizer. Homogenates were centrifuged at 1000 g for 10 min at 4 °C and the supernatant retained. This supernatant was centrifuged at 12,000 g for 15 min at 4 °C and the pellet retained; the pellet was re-suspended in reagent C and analyzed for the protein concentration of the mitochondrial fraction. The remaining supernatant was centrifuged at 12,000 g for 10 min at 4 °C, and the supernatant from this step was collected as the mitochondria-free cytoplasmic protein fraction.
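As flagged above, a common way to turn the GAPDH-normalized Ct values into relative expression is the 2^(-ΔΔCt) method. The following sketch is a generic illustration with invented Ct values and an arbitrary target gene, not the authors' exact pipeline.

# Generic 2^(-ddCt) sketch for GAPDH-normalized qPCR; all Ct values are invented.
import numpy as np

ct = {
    "control": {"C3": np.array([26.1, 26.4, 26.0]),
                "Gapdh": np.array([18.2, 18.3, 18.1])},
    "pod":     {"C3": np.array([23.9, 24.2, 24.0]),
                "Gapdh": np.array([18.1, 18.4, 18.2])},
}

def delta_ct(group):
    # Normalize the target to the internal control gene.
    return ct[group]["C3"] - ct[group]["Gapdh"]

ddct = delta_ct("pod") - delta_ct("control").mean()  # calibrate to control mean
fold_change = 2.0 ** (-ddct)                         # relative expression
print(fold_change.mean())                            # ~4-5x up-regulation here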
Oxygen consumption rate (OCR) analysis
Mitochondrial respiratory function of primary astrocytes was assessed by measuring the oxygen consumption rate (OCR) on a Seahorse XF96 analyzer. Primary astrocytes were plated on Seahorse XF96 plates and challenged with untreated microglia culture medium (MCM) or LPS-incubated MCM. Three cycles of baseline OCR measurement were followed by three cycles of sequential measurement after injection of oligomycin (ATP synthase inhibitor), carbonyl cyanide-4-(trifluoromethoxy)phenylhydrazone (FCCP, mitochondrial respiration uncoupler), and antimycin A (complex III inhibitor) in conjunction with rotenone (complex I inhibitor). OCR data were normalized to cell numbers (the arithmetic for the derived respiratory parameters is sketched after this passage).

Statistical analyses
All data are represented as mean ± SEM from at least three independent experiments. Statistical analyses were performed using GraphPad Prism 7.0. Student's unpaired two-tailed t-test, one-way ANOVA or two-way ANOVA was used according to the experimental design. Differences were considered significant at P < 0.05. The number of replicates and repeats of individual experiments and the statistical tests used are indicated in the legends.

Results

Orthopedic surgery induces glial activation and pro-inflammatory phenotypes in the hippocampus
Neuroinflammation is an intertwined consequence of glial activation, inflammatory cytokine release and the indirect effects of non-inflammatory events [9]. To dissect the interrelationships between multiple cellular functions and POD pathology, chiefly the effect of neuroinflammation on the disease trajectory of POD, we assessed the hippocampal micro-environment of mice after orthopedic surgery. The release of pro- and anti-inflammatory cytokines is the most prominent phenotype in the POD mouse model. As shown in Fig. 1A, orthopedic surgery induced significantly increased levels of pro-inflammatory cytokines, including IL-1β, TNF-α and IL-6, in the hippocampal homogenates, whereas no marked changes were observed in the anti-inflammatory cytokines IL-4 and IL-10 (Fig. 1B). Emerging findings suggest that neurotrophic factors may also shape neuroinflammation and are associated with postoperative delirium [28,29]; brain-derived neurotrophic factor (BDNF), one such neurotrophic factor, decreased markedly (Fig. 1B). We next compared the mRNA levels of these inflammatory and neurotrophic genes in the hippocampus of control and POD mice. Pro-inflammatory genes, including Il-1a, Il-1b, Il6 and C1q, were expressed at significantly higher levels than in the control group, while anti-inflammatory and neurotrophic genes showed decreasing trends or no significant change (Fig. 1C). These results imply that POD induces a pro-inflammatory micro-environment in the hippocampus. As microglia and astrocytes are the active players in brain inflammatory events and mediate the release of neurotrophic factors [30], we used immunohistochemistry to examine the activation states of microglia and astrocytes in hippocampal tissues. After surgery, activation of microglia and astrocytes, manifested as increased numbers and enlarged areas in the immunohistochemical staining of glial fibrillary acidic protein (GFAP) and ionized calcium-binding adapter molecule 1 (Iba-1), was observed in the dentate gyrus (DG) of mice (Fig. 1D-G). Immunoblotting of these specific markers consistently showed that surgery increased the expression of Iba-1 and GFAP (Fig. 1H, I).
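For the Seahorse protocol described in this section, the derived respiratory parameters reduce to simple arithmetic over the OCR segments: basal respiration is the baseline minus the non-mitochondrial rate (post rotenone/antimycin A), ATP-linked respiration is the baseline minus the post-oligomycin rate, and proton leak is the post-oligomycin rate minus the non-mitochondrial rate. The sketch below illustrates these standard definitions with invented, cell-number-normalized OCR values.

# Sketch of Mito Stress parameter derivation; OCR numbers (pmol O2/min) are invented.
import numpy as np

baseline = np.array([142.0, 139.5, 141.2])    # 3 cycles before any injection
post_oligo = np.array([61.3, 58.9, 60.1])     # after oligomycin
post_fccp = np.array([215.4, 210.8, 208.9])   # after FCCP
post_rot_aa = np.array([22.1, 21.5, 21.8])    # after rotenone + antimycin A

non_mito = post_rot_aa.mean()                       # non-mitochondrial respiration
basal = baseline.mean() - non_mito                  # basal respiration
atp_linked = baseline.mean() - post_oligo.mean()    # ATP-linked respiration
proton_leak = post_oligo.mean() - non_mito          # proton leak
maximal = post_fccp.mean() - non_mito               # maximal respiration

print(f"basal={basal:.1f}, ATP-linked={atp_linked:.1f}, "
      f"leak={proton_leak:.1f}, max={maximal:.1f}")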
Glial activation may be beneficial (promoting tissue repair by increasing anti-inflammatory cytokines and neurotrophic factors) or detrimental (exacerbating tissue damage by increasing pro-inflammatory genes and neurotoxic molecules) [31]. Previous studies have verified that pro-inflammatory mediators (IL-1, TNF-α, etc.) are sufficient to activate neurotoxic reactive astrocytes [32]. Here we demonstrate that astrocytic activation in the POD mouse model was skewed towards neurotoxic reactivity: neurotoxic astrocyte markers increased synchronously in the hippocampus of surgery-treated mice compared with control mice (Fig. 1J), while markers of protective astrocytes changed inconsistently, with some increased and some unchanged (Fig. 1K). Moreover, protein levels of several characteristic markers of neurotoxic astrocytes were significantly up-regulated in the POD group (Fig. 1L, M). Co-localization of C3, the most characteristic marker, with GFAP was significantly increased after orthopedic surgery (Fig. 1N, O). These results suggest that orthopedic surgery induces injurious inflammatory responses in the hippocampus.

β-arrestin1 is decreased in the reactive astrocytes of orthopedic surgery-treated mice
GPCRs are the most intensively studied drug targets because of their substantial involvement in human pathophysiology and their pharmacological tractability [17]. To analyse the association of GPCR family members with the inflammatory pathogenesis of POD, we performed a gene assay to screen the expression of GPCR family genes in the hippocampus of POD mice. Ccr10 and β-arrestin1 were significantly decreased in the POD mouse brain compared with control mice, with β-arrestin1 mRNA levels decreasing most markedly (Fig. 2A). Combining these RT-PCR results with a previous study implicating β-arrestins, the multifunctional proteins downstream of GPCRs, in the cognitive impairment and dementia of amyloid-β pathology [26], we investigated the role of β-arrestins in orthopedic surgery-induced postoperative delirium. We additionally compared the protein levels of β-arrestin1 and β-arrestin2 in hippocampal samples from POD and control mice. Consistent with the RT-PCR results, β-arrestin1 was significantly decreased, whereas β-arrestin2 showed no significant change in the hippocampal tissues of POD mice (Fig. 2B, C). Next, we investigated the cellular distribution of the decreased β-arrestin1 in the hippocampus by immunofluorescent co-labeling, focusing on the two main inflammatory cell types in the CNS: astrocytes and microglia. Compared with the control group, β-arrestin1 was significantly decreased in GFAP-positive astrocytes after surgery (Fig. 2D, F), whereas co-localization of β-arrestin1 with the microglial marker Iba-1 showed no change (Fig. 2E, G). Taken together, the orthopedic surgery-induced decrease of β-arrestin1 is found mainly in reactive astrocytes.

β-arrestin1 in astrocytes modulates cognitive impairments and astrocytic reactivity in the mouse model of POD
To determine whether β-arrestin1 plays a role in the pathogenesis of POD, we established the orthopedic surgery mouse model using β-arrestin1-knockout mice (Fig. 3A) and then measured pathological phenotypes, including behavioral deficits and brain inflammatory responses, following the schematic diagram in Fig. 3B.
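Before turning to the behavioral results, a note on quantification: the co-localization readouts used above (e.g., Fig. 2D-G) are often summarized as a pixel-wise Pearson correlation between two immunofluorescence channels within a cell mask. The sketch below is a generic illustration on synthetic images and is not the authors' exact Imaris-based workflow; channel names and the threshold are assumptions.

# Generic pixel-wise Pearson colocalization sketch for two channels
# (e.g., GFAP vs. beta-arrestin1); the image arrays here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
gfap = rng.random((512, 512))
barr1 = 0.6 * gfap + 0.4 * rng.random((512, 512))  # partially correlated channel

# Restrict to pixels above a simple intensity threshold in either channel,
# a crude stand-in for a cell mask.
mask = (gfap > 0.2) | (barr1 > 0.2)
x, y = gfap[mask], barr1[mask]

pearson_r = np.corrcoef(x, y)[0, 1]
print(f"Pearson colocalization coefficient: {pearson_r:.2f}")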
The Y-maze test and the Morris water maze test were used to assess cognitive function [33]. Surgery-treated mice showed significant decreases in the number of novel-arm entries and in the duration spent in the novel arm of the Y-maze (Fig. 3C-E), implying decreased learning and memory abilities compared with control mice, and β-arrestin1 deletion aggravated these impairments (Fig. 3C-E). In the Morris water maze, surgery-treated mice took longer to reach the hidden platform and showed fewer platform-area crossings during the hidden-platform acquisition trial than control mice (Fig. 3F-H). These results demonstrate that β-arrestin1 deletion significantly exacerbated the motor performance deficits and the spatial learning and memory impairments induced by surgery. As the decreased expression of β-arrestin1 was found in reactive astrocytes, we next assessed astroglial activation and astrogliosis in β-arrestin1-deficient POD mice. Compared with control mice, POD mice had numerous activated astrocytes in the DG region, manifested as proliferative morphology of GFAP-labeled astrocytes (Fig. 3I, J) and increased numbers of reactive astrocytes (Fig. 3I, K). Deletion of β-arrestin1 exacerbated astrocytic reactivity (Fig. 3I-K). We also measured the mRNA levels of representative neurotoxic astrocyte markers: as shown in Fig. 3L, β-arrestin1 deficiency had no discernible effect on these markers under basal conditions but amplified their increase in POD mice. At the protein level, β-arrestin1 deficiency likewise aggravated the increases in the representative neurotoxic astrocyte markers C3, Serping1 and Psmb8 in POD mice (Fig. 3M, N). These results imply that β-arrestin1 deletion exacerbates the cognitive impairments of POD mice and the neurotoxic astrogliosis in the hippocampus of this mouse model. Further, we specifically knocked down β-arrestin1 in astrocytes by AAV micro-injection into the hippocampus to investigate whether astrocytic β-arrestin1 is the major mediator of POD pathogenesis. The schematic diagram is shown in Fig. 3O, in which the orthopedic surgery mouse model was established after successful micro-injection of AAV into the hippocampus (Fig. 3P).

Fig. 2 (caption) β-arrestin1 is decreased in the reactive astrocytes of orthopedic surgery-treated mice. A mRNA levels of GPCR-related genes in the hippocampus were analyzed by RT-PCR. B Expression of β-arrestin1 and β-arrestin2 in the hippocampus of CON and POD mice. C Densitometric analysis of β-arrestin1 and β-arrestin2. D Representative immunofluorescent staining of GFAP (red) and β-arrestin1 (green) in hippocampal slices of CON and POD mice. E Representative immunofluorescent staining of Iba-1 (red) and β-arrestin1 (green) in hippocampal slices of CON and POD mice. F Relative co-localized signals of GFAP-positive and β-arrestin1-positive immunofluorescent particles between the CON and POD groups. G Relative co-localized signals of Iba-1-positive and β-arrestin1-positive immunofluorescent particles between the CON and POD groups. Data were analyzed by unpaired Student's t-test; n.s., no significance. * P < 0.05 and ** P < 0.01.
Astrocytic β-arrestin1 knockdown significantly increased astrogliosis relative to POD mice receiving NC AAV injection, manifested as markedly proliferative morphology of GFAP-labeled astrocytes and increased numbers of reactive astrocytes (Fig. 3Q, R). Similarly, astrocytic β-arrestin1 knockdown further decreased the number of novel-arm entries of POD mice in the Y-maze test and further lengthened the time taken to reach the hidden platform in the water maze test (Fig. 3S, U), but produced no significant change in residence time in the novel arm of the Y-maze or in the number of platform-area crossings in the water maze (Fig. 3T, V), implying that astrocytic β-arrestin1 knockdown partially aggravated the learning and cognitive deficits of POD mice. These in vivo experiments with stereotactic micro-injection confirm, in great measure, that astrocytic β-arrestin1 is the main contributor to POD pathogenesis.

Fig. 3 (caption, statistical notes) For C-M except R, data were analyzed by two-way ANOVA followed by Tukey's multiple comparisons test. * P < 0.05, ** P < 0.01 and *** P < 0.001 vs. the WT-CON group or NC AAV CON group. # P < 0.05, ## P < 0.01 and ### P < 0.001 vs. the WT-POD mice or NC AAV POD. For R, data were analyzed by unpaired Student's t-test. * P < 0.05 and *** P < 0.001 vs. the NC AAV POD group. n = 6 mice per group for immunofluorescent staining; n = 3 for western blotting; n = 7-10 mice for behavioral tests. Values are presented as means ± SEM.

β-arrestin1 deletion aggravates the neurotoxic reactivity of primary astrocytes
To dissect the mechanistic details of β-arrestin1-mediated astrocytic reactivity, we cultured primary astrocytes from WT and β-arrestin1-deficient mice and induced them toward the reactive neurotoxic phenotype. As neurotoxic astrocytes were originally described as being activated by lipopolysaccharide (LPS)-treated microglia releasing IL-1α, TNF, and C1q, we stimulated the primary astrocytes with LPS-incubated microglia culture medium (MCM) (Fig. 4A). We first analyzed C3, Serping1 and Psmb8 levels by immunoblotting (Fig. 4B, C), which showed that β-arrestin1 deletion significantly increased the protein levels of these representative markers (Fig. 4B, C). RT-PCR analysis of the markers in primary cultures implied that β-arrestin1-deficient astrocytes are more sensitive to LPS-MCM stimulation, showing higher mRNA levels of neurotoxic astrocyte markers than the WT LPS-MCM group (Fig. 4D). Consistently, C3-positive and Serping1-positive signals in GFAP-labeled astrocytes, assessed by immunofluorescence, increased visibly in β-arrestin1-deficient astrocytes under LPS-MCM stimulation (Fig. 4E-H). As previous studies have confirmed that fragmented and dysfunctional mitochondria are involved in glia-mediated inflammatory responses [34], we examined whether β-arrestin1 modulates astrocytic reactivity by manipulating mitochondrial function. Mitochondrial function was evaluated by measuring ROS production and the mitochondrial membrane potential: cells were incubated with the MitoSOX Red mitochondrial superoxide indicator or the JC-1 fluorescent dye and analyzed by flow cytometry. WT astrocytes treated with LPS-MCM showed higher red fluorescence than untreated astrocytes, implying increased ROS levels, and β-arrestin1 deletion enhanced the ROS fluorescent signals further (Fig. 4I, K).
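The flow cytometric ROS readouts above are typically summarized per group as a median fluorescence intensity (MFI) and expressed as fold change over the control. The sketch below illustrates this on simulated event-level data; group names and distributions are invented, not the study's raw cytometry files.

# Sketch of summarizing single-cell MitoSOX intensities as per-group MFI.
import numpy as np

rng = np.random.default_rng(1)
# Log-normal fluorescence is a common approximation for cytometry channels.
events = {
    "WT_CON-MCM": rng.lognormal(mean=3.0, sigma=0.4, size=10000),
    "WT_LPS-MCM": rng.lognormal(mean=3.6, sigma=0.4, size=10000),
    "KO_LPS-MCM": rng.lognormal(mean=4.0, sigma=0.4, size=10000),
}

mfi = {g: np.median(v) for g, v in events.items()}
for g, m in mfi.items():
    print(f"{g}: MFI={m:.1f}, fold vs WT_CON-MCM={m / mfi['WT_CON-MCM']:.2f}")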
The JC-1 assay showed that LPS-MCM induced a shift from red to green fluorescence, implying drastic disruption of the mitochondrial membrane potential, and β-arrestin1 deletion drove this fluorescence shift to a larger extent (Fig. 4J, L). Above all, functional mitochondria with a preserved inner membrane potential retain the ability to generate ATP [35]. We therefore assessed mitochondrial respiratory function in Seahorse experiments. We observed a slight, though not significant, decrease in the oxygen consumption rates attributable to basal respiration, ATP generation and proton leak in β-arrestin1−/− astrocytes (Fig. 4M, N). LPS-MCM suppressed oxygen consumption for basal respiration, ATP generation and proton leak, and these defects were further aggravated by β-arrestin1 deletion (Fig. 4M, N). Similarly, β-arrestin1 deletion under basal conditions had no significant effect on ATP content, but further deepened the decrease in ATP content caused by LPS-MCM (Fig. 4O). Together, these results show that β-arrestin1 knockout aggravates the neurotoxic phenotype of astrocytes and the accompanying mitochondrial dysfunction.

Fig. 4 (caption, statistical notes) * P < 0.05, ** P < 0.01 and *** P < 0.001 vs. the WT CON-MCM group. # P < 0.05, ## P < 0.01 and ### P < 0.001 vs. the WT LPS-MCM group. Values are presented as means ± SEM from at least three independent experiments.

β-arrestin1 co-localizes with Drp1 in primary astrocytes
Accumulating findings demonstrate that β-arrestins can serve as scaffold proteins and function as signal transducers by facilitating the interaction of signaling molecules [20][21][22]. We therefore considered the scaffolding functions of β-arrestin1 in the mitochondrial dysfunction of reactive astrocytes. We pulled down β-arrestin1 and identified β-arrestin1-binding proteins by label-free mass spectrometry, focusing on proteins related to mitochondrial function. Dynamin-related protein 1 (Drp1), which is highly associated with neurotoxic astrocytes [34], was among the mass spectrometry-identified proteins bound to β-arrestin1 (Fig. 5A). After co-transfection of the corresponding plasmids into HEK293T cells, we found that Drp1 interacted with β-arrestin1 and exhibited co-localization, as assessed with Imaris image software (Fig. 5B). Co-immunoprecipitation (CO-IP) assays consistently confirmed the interaction of Drp1 and β-arrestin1 in astrocytes (Fig. 5C, D). As the mitochondrial fission effector protein, Drp1 interacts with mitochondrial fission 1 (Fis1) to facilitate mitochondrial fission; this process is controlled by the anchorage of Drp1 to Fis1, its receptor in the mitochondrial membrane [36]. We therefore assessed the association of Drp1 and Fis1 in β-arrestin1−/− and WT reactive astrocytes by CO-IP analysis, which demonstrated that β-arrestin1 deficiency increased the interaction between Drp1 and Fis1 in primary astrocyte cultures (Fig. 5E, F). We then separated the mitochondrial and cytoplasmic fractions and measured Drp1 levels by immunoblotting. As shown in Fig. 5G, mitochondrial Drp1 was significantly up-regulated after LPS-MCM treatment, and β-arrestin1 deletion further increased mitochondrial Drp1 while cytoplasmic Drp1 decreased. We also observed the translocation of Drp1 to the mitochondria by immunofluorescent double-staining of TOMM20, the mitochondrial marker, and Drp1 (Fig. 5H, I) in the primary astrocytes, and
β-arrestin1 deficiency promoted the mitochondrial translocation of Drp1. Furthermore, Mitotracker Green, a mitochondrion-selective probe that passively diffuses across the plasma membrane and accumulates in active mitochondria, was used to label the mitochondria; the results in Fig. 5J showed that healthy mitochondria in the WT and β-arrestin1−/− control groups were circular in shape, whereas LPS-MCM treatment drove the mitochondria into dotted or fragmented patterns. Taken together, β-arrestin1 co-localizes with Drp1, and β-arrestin1 deletion drives the interaction of Drp1 with Fis1 to promote mitochondrial fission.

A β-arrestin1-biased ligand induces the interaction of β-arrestin1 and Drp1 to inhibit mitochondrial fission in vitro
As the preceding data showed regulatory effects of β-arrestin1 on mitochondrial fission through its interaction with the fission effector protein Drp1, we next asked whether activating β-arrestin1-biased signals would modulate Drp1-dependent mitochondrial function. Carvedilol (Carv) is one of the three β-adrenergic receptor (βAR) ligands approved for heart failure, and studies have documented that Carv acts through a β-arrestin1-biased mechanism to promote cardio-protective actions [37,38]. We pre-treated the primary astrocytes with Carv and then induced them to the reactive state (Fig. 6A). CO-IP analysis of the association of Drp1 and β-arrestin1 showed that Carv promoted the interaction of these two proteins (Fig. 6B, C). This increased interaction reversed the stimulus-induced translocation of Drp1 to mitochondria, shown as decreased co-localization of Drp1 with the mitochondrial marker TOMM20 in Carv-pretreated reactive astrocytes compared with non-pretreated reactive astrocytes (Fig. 6D, E). We then labeled astrocytic mitochondria with Mitotracker Green to observe their morphology. As shown in Fig. 6F, healthy mitochondria in the control group were circular in shape, whereas LPS-MCM treatment drove the mitochondria into dotted or fragmented patterns; Carv pretreatment protected the astrocytic mitochondria from excessive fragmentation. Taken together, these results demonstrate that Carv promotes the association of Drp1 and β-arrestin1 to inhibit Drp1-dependent mitochondrial dysfunction.

The β-arrestin1-biased ligand carvedilol reverses neurotoxic astrocytic reactivity
We next determined the effects of Carv on astrocytic reactivity. We assessed neurotoxic astrocyte markers in primary cultures by RT-PCR, and the results confirmed that Carv pretreatment reversed the increased expression of these reactive markers (Fig. 7A). Corroborating these results, immunoblotting revealed decreased levels of the representative markers in Carv-treated astrocytes with LPS-MCM incubation compared with untreated astrocytes with LPS-MCM incubation (Fig. 7B, C). We also examined C3 and Serping1 levels by immunofluorescence: C3-positive and Serping1-positive signals in astrocytes were significantly decreased by Carv pretreatment followed by LPS-MCM stimulation compared with LPS-MCM stimulation alone (Fig. 7D-G). In addition, flow cytometric analysis with the MitoSOX fluorescent dye (Fig. 7H, J) and the JC-1 assay system (Fig. 7I, K)
both showed that Carv pretreatment attenuated the increased ROS fluorescence intensity and the red-to-green fluorescence shift in reactive astrocytes, implying a protective effect against mitochondrial malfunction. We also measured the mitochondrial oxygen consumption rate in Seahorse experiments: Carv pretreatment had no effect on the oxygen consumption rates attributable to basal respiration, ATP generation and proton leak under control conditions, but rescued the defects in astrocytes incubated with LPS-MCM (Fig. 7L, M). The ATP assay likewise demonstrated that Carv protected the astrocytes from the LPS-MCM-induced ATP decrease (Fig. 7N). Collectively, these data suggest that Carv, the β-arrestin1-biased agonist, protects against the reactive signature and mitochondrial abnormalities.

Fig. 5 (caption, partial) Representative immunofluorescent staining of β-arrestin1 (green) and Drp1 (red) processed with Imaris Microscopy Image Analysis software. C Flag-tagged Drp1 construct and β-arrestin1 pcDNA3.1 construct were co-transfected into HEK293T cells; cell lysates were immunoprecipitated with anti-Drp1 antibody and the samples analyzed by immunoblotting. D Flag-tagged Drp1 construct and β-arrestin1 pcDNA3.1 construct were co-transfected into HEK293T cells; cell lysates were immunoprecipitated with anti-β-arrestin1 antibody and the samples analyzed by immunoblotting. E Immunoblotting analysis of Fis1 in lysates of primary astrocytes immunoprecipitated with Drp1 antibody. F Immunoblotting analysis of Drp1 in lysates of primary astrocytes immunoprecipitated with Fis1 antibody. G Protein levels of Drp1 in the cytoplasmic and mitochondrial fractions. H Immunofluorescent staining of TOMM20 (green) and Drp1 (red) in primary astrocytes. I Relative co-localized signals of TOMM20-positive and Drp1-positive immunofluorescent particles between groups. J Representative images of Mitotracker Green in primary astrocytes under the confocal microscope. Data were analyzed by two-way ANOVA followed by Tukey's multiple comparisons test. * P < 0.05 and *** P < 0.001 vs. the WT CON-MCM group.

Fig. 6 (caption, partial) C Immunoblotting analysis of β-arrestin1 in lysates of primary astrocytes immunoprecipitated with Drp1 antibody. D Immunofluorescent staining of TOMM20 (green) and Drp1 (red) in primary astrocytes. E Relative co-localized signals of TOMM20-positive and Drp1-positive immunofluorescent particles between groups. F Representative images of Mitotracker Green in primary astrocytes under the confocal microscope. Data were analyzed by one-way ANOVA followed by Dunnett's post-hoc test. * P < 0.05 vs. the CON group. # P < 0.05 vs. the LPS-MCM group. Values are presented as means ± SEM from three independent experiments.

The β-arrestin1-biased ligand carvedilol protects POD mice from pro-inflammatory phenotypes
After confirming the inhibitory effects of the β-arrestin1-biased ligand on astrocytic reactivity, we verified its role in the mouse model of POD. The Carv administration strategy is shown in Fig. 8A. Immunofluorescent staining of GFAP to assess astroglial activation and astrogliosis in the orthopedic surgery mouse model showed that Carv treatment reversed the proliferative morphology of GFAP-labeled astrocytes and reduced the numbers of activated astrocytes in the DG region compared with POD mice (Fig. 8B-D). By RT-PCR analysis of the hippocampus of the orthopedic surgery mouse model (Fig. 8E),
we found that Carv administration significantly counteracted the increased mRNA levels of markers of neurotoxic astrocytic reactivity in POD mice. Analysis of the characteristic markers at the protein level (Fig. 8F, G) revealed consistently reduced expression in Carv-treated POD mice compared with POD mice. Behavioral tests of cognitive function were also conducted after establishing the orthopedic surgery mouse model with Carv administration. In the Morris water maze, surgery-treated mice took longer to reach the hidden platform and showed fewer platform-area crossings during the hidden-platform acquisition trial than control mice (Fig. 8H-J), and Carv administration rescued the motor performance deficits and the spatial learning and memory impairments of the surgery-induced POD mice (Fig. 8H-J). In the Y-maze test, Carv abrogated the significant decreases in the number of novel-arm entries and the total retention time in the novel arm seen in surgery-treated mice (Fig. 8K-M), implying a protective role of Carv against the learning and memory disabilities of surgery-treated mice. Taken together, Carv administration protects POD mice from excessive neurotoxic astrocyte reactivity and cognitive deficits.

β-arrestin1 deletion negates the inhibitory effects of carvedilol on neurotoxic astrogliosis and POD progression
We also explored whether carvedilol's effects on astrocytic reactivity and the POD pathological process depend exclusively on β-arrestin1. In vitro, β-arrestin1−/− astrocytes were treated with carvedilol and then induced into neurotoxic astrocytes (Fig. 9A); β-arrestin1 deficiency abolished Carv's reversal of the LPS-MCM-induced increase in neurotoxic astrocyte markers (Fig. 9B). Regarding the mitochondrial malfunction of neurotoxic astrocytes, β-arrestin1 deletion also abolished the attenuating effect of Carv on LPS-MCM-induced excess ROS generation (Fig. 9C, D). These results demonstrate in vitro that carvedilol relies on β-arrestin1 to regulate astrocytic reactivity. Furthermore, β-arrestin1−/− mice were used to verify the indispensable role of β-arrestin1 in carvedilol's neuroprotective effects. Immunofluorescent staining of GFAP for astroglial activation and astrogliosis showed that Carv treatment reversed the proliferative morphology of GFAP-labeled astrocytes and reduced the numbers of activated astrocytes in WT mice, but not in β-arrestin1−/− mice (Fig. 9E-G). Water maze testing of cognitive function in POD mice showed that β-arrestin1 knockout completely negated the effects of Carv on their cognitive deficits. These results demonstrate the indispensable role of β-arrestin1 in carvedilol's therapeutic effect on POD progression. Altogether, Carv relies on β-arrestin1 to inhibit neurotoxic astrogliosis and POD progression.

Figure caption (statistical notes) Data were analyzed by one-way ANOVA followed by Dunnett's post-hoc test. * P < 0.05, ** P < 0.01 and *** P < 0.001 vs. the CON group. # P < 0.05, ## P < 0.01 and ### P < 0.001 vs. the POD group. n = 6 mice per group for immunofluorescent staining; n = 3 for western blotting; n = 10 mice for behavioral tests. Values are presented as means ± SEM.

Discussion
In the current study, we showed that orthopedic surgery-induced POD mice exhibit pro-inflammatory phenotypes as well as excessive neurotoxic astrocyte reactivity in the hippocampus.
A gene assay screening the expression of GPCR family genes showed that β-arrestin1 mRNA levels decreased most markedly in POD mice. We therefore investigated the role of β-arrestin1 in orthopedic surgery-induced postoperative delirium and found that β-arrestin1 deletion enhanced the cognitive dysfunction of POD mice and promoted a molecular signature resembling A1-like reactive astrocytes in the mouse hippocampus. Further in vitro experiments implied that β-arrestin1-deficient astrocytes are prone to excessive Drp1-dependent mitochondrial fragmentation and mitochondrial dysfunction. As β-arrestin1 is reported to be a scaffold protein that transduces intracellular signals by facilitating the interaction of signaling molecules, we examined this role and found that β-arrestin1 can interact with cytoplasmic Drp1 to inhibit its translocation to the mitochondrial membrane, a translocation that facilitates mitochondrial fragmentation. This process has been shown to be a mechanistic inducer of neurotoxic astrocytes. Further investigations demonstrated that pharmacological engagement of β-arrestin1-biased signaling inhibited the reactivity of neurotoxic astrocytes and halted the pathological progression of postoperative delirium (Fig. 10). We thus provide direct evidence that activating β-arrestin1-biased signaling reverses neurotoxic astrocytic reactivity and thereby ameliorates cognitive dysfunction in POD mice.

This study provides insight into the mechanisms by which β-arrestin-biased adrenergic receptor signaling regulates astrocytic inflammatory responses and mitochondrial function. Adrenergic systems are well-studied prototypes of GPCRs coupling to heterotrimeric G proteins; they respond to diffusible hormones and neurotransmitters, regulate both cognitive and immune function, and dysregulation of adrenergic tone may potentiate neuroinflammation in neurological diseases [39][40][41]. Our previous study and other work found that adrenergic receptors are implicated in the pathogenesis of perioperative neurocognitive disorder [10,41,42]. Most notably, we determined in the current study that, among the studied GPCR-related genes, β-arrestin1 shows the most significant expression change in the brain of mice with postoperative delirium. Although Ccr10 also decreased in the POD mouse hippocampus in our results, studies imply that Ccr10 is mainly localized in hippocampal principal cells and the apical dendrites of pyramidal neurons [43], suggesting potential involvement in neuronal functions rather than astrocytic reactivity. Genetic deletion of β-arrestin1 exacerbated the pathological phenotypes of POD mice. Biased ligands favoring either G protein-mediated (G protein-biased) or β-arrestin-mediated (β-arrestin-biased) signaling can selectively promote beneficial signaling or negate undesired actions of receptor activation [44]; they have therefore been investigated intensively in recent years. β-arrestins have evolved from terminators of G protein signaling into multifunctional adaptor proteins that form a central node in multiple G protein-independent signaling pathways, regulating numerous extracellular signals by communicating with molecules of critical signaling pathways [45]. For example, β-arrestin2 can act as a critical component of the multi-protein GPCR-eNOS signaling complex that promotes eNOS activation and exacerbates liver injury [46].
β-arrestin1 has been reported as a protective mediator in acute pancreatitis via regulation of NF-κB p65 phosphorylation [47]. In the current study, astrocytic β-arrestin1 interacted with Drp1 and thereby inhibited its translocation to the mitochondrial membrane, where it would otherwise facilitate mitochondrial fission. Under the pathological conditions of postoperative delirium, β-arrestin1 levels decreased, releasing more Drp1 to interact with the outer mitochondrial membrane protein Fis1 and promoting mitochondrial dysfunction. We therefore reasoned that pharmacological activation of β-arrestin1-biased signaling would rescue the excessive mitochondrial fission in the reactive astrocytes of the POD mouse hippocampus. Carvedilol is an anti-hypertensive drug reported to be cardio-protective via β-arrestin1-mediated β1-adrenoreceptor engagement without activating G proteins, providing an additional mechanism for its clinical efficacy [37]. It also improves age-related cognitive impairment and has entered phase IV clinical trials for AD [48]. Here, carvedilol induced β-arrestin1-biased signaling and promoted the association of β-arrestin1 and Drp1 to inhibit the mitochondrial dysfunction of neurotoxic astrocytes; this effect underlies its protection against the astrocytic inflammatory responses and cognitive impairments of POD mice.

Aseptic surgical trauma triggers acute inflammation by inducing inflammatory cytokines and damage-associated molecular patterns (DAMPs). This inflammatory milieu contributes to the recruitment of immune cells to the site of injury but also affects functions in other organs, including the brain [9]. Previous clinical investigations demonstrated early C3 activation in patients who suffered postoperative delirium or multiple traumas; C3, a central component of the complement system, exerts indispensable roles in the regulation of immune responses and inflammation [16,49]. In a murine model of orthopedic surgery, C3 was up-regulated in hippocampal astrocytes [16]. Beyond the simplistic view of astrocytes as supporting elements for neurons, they play essential roles including inflammatory responses and phagocytic activities [50]. Studies have characterized a newly identified astrocyte sub-population triggered by LPS-activated microglia, termed A1 astrocytes [32,34]. This type of reactive astrocyte, abundant in normal aging and various neurological diseases, has lost most normal astrocytic functions but gained new neurotoxic functions [51]. As C3, the characteristic marker of neurotoxic astrocytes [32], was altered in mice with postoperative delirium in our study, we inferred that postoperative delirium drives astrocytes toward a neurotoxic phenotype.

Fig. 10 (caption) Proposed working model of postoperative delirium triggered by astroglial β-arrestin1 deletion. Decreased β-arrestin1 in postoperative delirium induces astrocytic reactivity to facilitate pathological progression. β-arrestin1 deletion triggers the dissociation between β-arrestin1 and cytoplasmic Drp1, promoting mitochondrial translocation of Drp1 and causing excessive mitochondrial fragmentation and mitochondrial dysfunction. Activation of β-arrestin1-biased signaling by an agonist reverses neurotoxic astrocytic reactivity and the phenotypes of the postoperative delirium mouse model.
The excessive pro-inflammatory cytokines and the decreased anti-inflammatory cytokines and neurotrophic factors observed in the POD mouse brain in the current study consistently provide an optimal micro-environment for neurotoxic astrocytes to act [32]. Because multiple molecular and functional parameters, rather than a binary division of reactive astrocytes into A1 versus A2 based on the astrocytic transcriptome, are needed to define reactive astrocytes [52], we conducted multidimensional investigations, detecting specific markers and identifying dysfunctional characteristics, to reveal the pathological phenotypes of astrocytes during POD progression.

The dynamic properties of mitochondria include fusion, fission, transport and degradation, all of which are critical for optimal mitochondrial function [35]. The interplay of fusion and fission confers widespread benefits on mitochondria, including efficient transport, increased homogenization of the mitochondrial population, and efficient oxidative phosphorylation [35]. Mitochondrial fission, mediated by Drp1, must be strictly controlled, because excessive mitochondrial fragmentation is often involved in the pathogenesis of neurological diseases [36,53]. The release of dysfunctional mitochondria from neurotoxic astrocytes occurs in a Drp1-Fis1-specific manner, and suppression of this Drp1-Fis1-dependent process impedes neuronal degeneration [34]. We found in the current study that Drp1 interacts with β-arrestin1, which inhibits Drp1 translocation to the mitochondrial membrane and thereby prevents Drp1-Fis1-dependent mitochondrial fission. The POD-induced decrease of β-arrestin1 may release cytoplasmic Drp1 to execute mitochondrial fission, leading to mitochondrial dysfunction; conversely, activation of β-arrestin1 signaling facilitated the association between Drp1 and β-arrestin1, protecting against excessive mitochondrial fission. Although we observed the interaction of β-arrestin1 and Drp1, we deduce that this interaction mainly affects cellular function under pathological conditions, as no significant mitochondrial malfunction was observed under basal conditions upon β-arrestin1 deletion or supplementation with the β-arrestin1-biased agonist. We therefore emphasize the cardinal effect of this association on mitochondrial dynamics under neuroinflammatory stress, which may be concealed by compensatory mechanisms in the normal physiological state.

Despite the interesting findings presented here, the potential limitations of the present study are worth noting. First, we used healthy mice as controls in our in vivo studies instead of delirium-resistant surgery mice, for two reasons. For one, surgery with anesthesia did not uniformly induce behavioral deficits in the water maze and Y-maze tests, owing to symptomatic differences among mice; moreover, there are no precise, widely recognized behavioral scoring rules that define delirium and non-delirium mice in the orthopedic surgery-induced mouse model, as the clinical Delirium Screening Scale does for patients. It is therefore difficult to distinguish clearly between delirium-susceptible and delirium-resistant mice undergoing orthopedic surgery with general anesthesia.
More importantly, general anesthesia surgery can act as a sub-threshold stimulus in some individuals, inducing stable and reproducible pathological changes in the CNS, including neuroinflammation and hippocampal atrophy, without delirium behaviors; that is, the pathological process of POD may already be underway in mice that undergo orthopedic surgery without displaying delirium. Healthy mice were therefore used as the preferable controls for studying the molecular pathology of POD in our research. Secondly, we used LPS-MCM in our in vitro studies, which replicates the in vivo POD environment only to some extent: microglial activation by LPS mimics the pro-inflammatory micro-environment of POD driving neurotoxic astrogliosis, but it is not a POD-specific condition.

Conclusions
In summary, our findings demonstrate for the first time that β-arrestin1 is involved in the progression of postoperative delirium. The underlying mechanism is mediated by Drp1-dependent mitochondrial fission and mitochondrial dysfunction in reactive astrocytes. Activation of β-arrestin1-biased signaling provides novel insights into POD therapeutics. This study extends our understanding of the pathogenesis of postoperative delirium and may aid in the development of drugs for the treatment of POD.

Abbreviations (partial): Drp1, dynamin-related protein 1; ERK, extracellular signal-regulated kinase; TNF-α, tumor necrosis factor-α; GFAP, glial fibrillary acidic protein; GPCRs, G protein-coupled receptors.
Utilization of a Wheat660K SNP array-derived high-density genetic map for high-resolution mapping of a major QTL for kernel number

In crop plants, a high-density genetic linkage map is essential for both genetic and genomic research. The complexity and large size of the wheat genome have hampered the acquisition of a high-resolution genetic map. In this study, we report a high-density genetic map based on an individual mapping population, using the Affymetrix Wheat660K single-nucleotide polymorphism (SNP) array as a probe in hexaploid wheat. The resultant genetic map consisted of 119 566 loci spanning 4424.4 cM, 119 001 of which were SNP markers. This genetic map showed good collinearity with the 90K and 820K consensus genetic maps and was also in accordance with the recently released wheat whole-genome assembly. The high-density wheat genetic map will provide a major resource for future genetic and genomic research in wheat. Moreover, a comparative genomics analysis among gramineous plant genomes was conducted based on the high-density wheat genetic map, providing an overview of the structural relationships among these gramineous plant genomes. A major stable quantitative trait locus (QTL) for kernel number per spike was characterized, providing a solid foundation for future high-resolution mapping and map-based cloning of the targeted QTL.

Cavanagh et al. 12 released a hexaploid wheat consensus genetic map with 7504 SNP markers from the Wheat9K SNP array using a combination of seven mapping populations. Wang et al. 16 mapped 46977 SNPs from the Wheat90K array to the hexaploid wheat genetic map using a combination of eight mapping populations. Using both the Wheat9K and Wheat90K arrays, Maccaferri et al. 17 released a high-density tetraploid wheat consensus genetic map with 30144 markers (including 26626 SNPs and 791 SSRs) by integrating 13 data sets from independent biparental mapping populations. Recently, Winfield et al. 18 documented a hexaploid wheat consensus map with 56 505 SNP markers from the Wheat820K array, spanning 3739 cM, using three independent biparental populations. However, although a high-density hexaploid wheat genetic map (>100 000 markers) based on an individual biparental mapping population would be valuable for further genetic research, such as high-resolution mapping and map-based cloning of a targeted major quantitative trait locus (QTL), no such map has been released. A new Whole Genome Shotgun (WGS) assembly of the Chinese Spring (CS) reference wheat genome is now available (http://plants.ensembl.org/index.html; https://urgi.versailles.inra.fr/download/iwgsc/IWGSC-WGA_Sequences/). However, genetic and genomic studies in wheat continue to lag behind the research on other members of the grass family (Gramineae), such as rice and maize. The gradual enrichment of SNP markers and the sequences released for CS (https://urgi.versailles.inra.fr/download/iwgsc/IWGSC-WGA_Sequences/), Triticum urartu 2 and Aegilops tauschii 1 have facilitated comparative genomic analysis in wheat. Comparative genomic analysis with species whose genomes have been well characterized has been used as an effective method for the construction of high-resolution genetic linkage maps of target wheat genes and for the prediction of candidate genes in regions of interest.
For instance, the construction of high-density genetic maps has facilitated the mapping of the grain protein content gene Gpc-B1 19, the yellow rust resistance gene Yr36 20 and the powdery mildew resistance gene Pm41 21. Comparative genomics studies have also furthered the understanding of the basic processes of genome evolution, enabled the transfer of information from model species to related organisms, and facilitated the cross-referencing of various types of information, such as QTLs, mutants, and gene expression 22,23. These correlations and integrations take full advantage of the collective intellectual contributions of scientists across many disciplines 22. Wheat660K, the Affymetrix® Axiom® Wheat660K array, was designed by the Chinese Academy of Agricultural Sciences and synthesized by Affymetrix. This Wheat660K SNP array is genome-specific, high-density and highly efficient, with a wide range of potential applications (http://wheat.pw.usda.gov/ggpages/topics/Wheat660_SNP_array_developed_by_CAAS.pdf). However, the genetic positions of the Wheat660K SNPs have not yet been documented. In this work, for the first time, we report a high-density map for wheat constructed from this Wheat660K SNP array. Based on SNP flanking sequences, we assigned SNPs to the genome assembly of T. aestivum cv. Chinese Spring (CS) (https://urgi.versailles.inra.fr/download/iwgsc/IWGSC-WGA_Sequences/). We also compared our high-density genetic map with the Wheat90K and Wheat820K consensus genetic maps based on the common contigs assembled in the chromosome survey sequencing (CSS) project. Comparative genomic analyses based on the mapped SNP flanking sequences and the corresponding contig sequences were also performed against the genomes of Brachypodium distachyon, Oryza sativa, Zea mays, and Sorghum bicolor. Using this mapping resource along with the phenotypic data, we identified important QTLs for yield-related traits. A major stable QTL for kernel number was identified and then characterized in detail based on the high-density genetic map and comparative genomic analysis.

Results

SNP scores in the 265 accessions
An F8:9 recombinant inbred line (RIL) population comprising 188 lines derived from a cross between Kenong 9204 (KN9204) and Jing 411 (J411) (denoted KJ-RILs), 65 KN9204-derived advanced lines/authorized varieties, three parental lines of KN9204, three control varieties from the Winter Wheat Performance Trial of the Northern Huang-Huai Regional Nursery of China, and Chinese Spring (CS) were genotyped using the 630517 SNPs on the Wheat660K SNP array as probes. The sample call rates ranged from 18.6% to 100.0%, with an average of 98.9% across the 265 accessions (data not shown). The probe scores were classified into one of six categories according to the cluster patterns produced by the Affymetrix software (Table S1). Of the 339757 functional SNPs, 136973 (40.3%) were polymorphic between KN9204 and J411. Of these, 8407 had more than 10% missing data in the 188 KJ-RILs and were removed from the linkage analysis. Among the remaining 128566 SNPs, 90567 (70.4%) were transitions and 37999 (29.6%) were transversions. The 128566 functional SNPs and the 591 previously reported loci 3 were used for linkage analysis and map construction. The 129157 markers fell into 5175 bins, and to create a chromosome frame, only one marker was selected as the representative of each bin (a sketch of this binning step follows below).
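The binning step referenced above amounts to grouping markers whose genotype vectors across the RILs are identical. A minimal sketch with toy genotype strings (marker names and calls are invented; 'A' = KN9204 allele, 'B' = J411 allele) follows; in practice, missing calls complicate the grouping, which this sketch ignores.

# Sketch: collapse co-segregating markers into bins by identical genotype vectors.
from collections import defaultdict

genotypes = {
    "AX-0001": "AABBABAB",
    "AX-0002": "AABBABAB",   # identical to AX-0001 -> same bin
    "AX-0003": "ABBBABAA",
    "AX-0004": "AABBABAB",   # same bin again
    "AX-0005": "ABBBABAA",
}

bins = defaultdict(list)
for marker, calls in genotypes.items():
    bins[calls].append(marker)

for i, (calls, members) in enumerate(bins.items(), start=1):
    representative = members[0]   # the frame-map (bin) marker
    print(f"bin{i}: representative={representative}, redundant={members[1:]}")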
Overview of the high-density wheat genetic map
The 5175 bin markers were used for linkage analysis based on their scores in the 188 KJ-RILs. In total, 4959 bin markers were mapped onto the wheat genetic map. The co-segregating (redundant) markers were then added to the high-density genetic map based on their bin serial numbers and the genetic information of the corresponding bin markers. A high-density genetic map with 119566 loci spanning 4424.4 cM was constructed (Table S2). Of these loci, 119001 were SNP markers derived from the Wheat660K SNP array, and the remaining 565 markers were reported previously by Cui et al. 3. Of the 119001 SNPs, 83953 (70.5%) were transitions and 35048 (29.5%) were transversions (data not shown). Most markers mapped to the B (44.6%) and A genomes (43.7%), and only 11.7% of markers mapped to the D genome. In terms of map length, the A, B, and D genomes covered 36.4%, 27.7%, and 35.8% of the total, respectively. Chromosome sizes ranged from 84.4 cM (chromosome 1BL) to 289.1 cM (chromosome 5D), averaging 210.7 cM per chromosome. The number of markers per chromosome ranged from 78 (chromosome 1BL) to 13898 (chromosome 3B), with a mean of 5693.6 loci per chromosome. Due to the 1BL/1RS translocation of KN9204, the 1RS- or 1BS-specific markers not only showed co-segregation but also exhibited distorted segregation in the KJ-RILs, as shown by Cui et al. 3; these markers were excluded from the linkage analysis and map construction, which reduced the genetic map for this chromosome to 1BL only. In addition, no markers on chromosome 5BS were polymorphic between KN9204 and J411, so only the 5BL genetic map was released. Of the 119566 loci, 33598 (28.1%) were distributed in chromosomal regions near the centromeres. Marker density per unit of genetic distance peaked in the centromeric regions, possibly due to a combination of the low recombination rate in these regions and the even distribution of genes along the chromosomes (Fig. 1). The 4959 bin markers are shown in the genetic map (Fig. 2). The following mapping-bin sets were observed: approximately 3.7% and 3.2% of the markers were unique to genomes A and B, respectively, and approximately 9.6% of the markers for the D genome showed unique segregation patterns. Considering the unique markers (the 4959 bin markers), the highest marker saturation was found in genome A (39.3%), followed by genomes B (33.6%) and D (27.1%). The average distance between adjacent bin markers ranged from 0.6 cM for 6B to 1.5 cM for 4D, with an average of 0.9 cM per marker pair. Gaps greater than 20.0 cM but less than 30.0 cM were present on chromosomes 3A (24.9 cM), 3B (21.9 cM) and 4D (20.4 cM); gaps greater than 10 cM but less than 20.0 cM were present on chromosomes 1A (13.9 cM, 11.7 cM) and 2A (14.6 cM).

Comparative genomic analysis
Of the 119566 loci, 118998 (99.5%) were best hits to 57036 CSS contigs, with 2.1 polymorphic markers per contig. In total, 93.0% of contigs had coincident physical and genetic positions, 4.6% mapped to homoeologous chromosomes (such as 1A in physical position but 1B in the KJ-RIL genetic map), and 2.4% were disordered (Fig. 2; Figure S1; Table S3). Based on the SNP flanking sequences, we assigned 116 261 SNPs to the recently released wheat genome assembly (a sketch of the best-hit assignment follows below). The SNP order in the present genetic map was in good agreement with that in the wheat genome assembly, with the exception of chromosome 7DL, in which a segment inversion was identified (Fig. 3; Figure S2; Table S3).
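The best-hit assignments used throughout this analysis can be derived from tabular BLAST output by retaining, per query, the hit with the highest bit score. The sketch below embeds a few hypothetical -outfmt 6 records (query, contig and score values are invented) rather than the project's actual BLAST files.

# Sketch: keep the best hit (highest bit score) per SNP query from BLAST -outfmt 6.
import csv, io
from collections import defaultdict

# Columns: qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore
blast_tsv = io.StringIO(
    "AX-0001\tctg101\t99.0\t71\t1\t0\t1\t71\t200\t270\t1e-30\t131\n"
    "AX-0001\tctg245\t92.0\t71\t5\t0\t1\t71\t90\t160\t1e-20\t98\n"
    "AX-0002\tctg245\t100.0\t71\t0\t0\t1\t71\t10\t80\t1e-33\t140\n"
)

best_hit = {}  # query -> (subject, bitscore)
for row in csv.reader(blast_tsv, delimiter="\t"):
    query, subject, bitscore = row[0], row[1], float(row[11])
    if query not in best_hit or bitscore > best_hit[query][1]:
        best_hit[query] = (subject, bitscore)

snps_per_contig = defaultdict(int)
for subject, _ in best_hit.values():
    snps_per_contig[subject] += 1

print(best_hit)              # AX-0001 -> ctg101, AX-0002 -> ctg245
print(dict(snps_per_contig)) # polymorphic markers per contig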
An overview, from the wheat genetic map perspective, of the relationships between the five grass family genomes at the resolution of the genetic map (in centimorgans) is provided in Figures S3 and S4. In total, 113288 markers corresponded to the CDSs of Brachypodium (65357 markers) (Table S4; Figures S3a, S4a), rice (58825 markers) (Table S4; Figures S3b, S4b), maize (53745 markers) (Table S4; Figures S3c, S4c), and sorghum (59994 markers) (Table S4; Figures S3d, S4d). In general, chromosomes belonging to the same homoeologous groups showed correspondence with the same grass chromosomes, although some differences were observed. Large and, especially, small synteny blocks across wheat and grass family chromosomes were observed, indicating the complexity of the wheat genome among the grass family genomes. The structural relationships among the genomes indicate that, for some individual wheat chromosomes, there is a preponderance of corresponding grass genes from one or two particular linkage groups. For example, wheat chromosomes 2A/2B/2D corresponded to Bd5, wheat chromosomes 3A/3D corresponded to Bd2 and Bd3, wheat chromosomes 4A/4B/4D corresponded to Os3, and wheat chromosomes 7A/7B/7D corresponded to Sb10. However, for most synteny blocks, the chromosomes were more fragmented and scattered, with a high frequency of breakdown.

Fig. 2 (caption, partial) The names of the marker loci are listed to the right of the corresponding chromosomes. Loci in red were best hits to Chinese Spring (CS) contigs of the short arm of the corresponding chromosome; loci in green were best hits to CS contigs of the long arm; loci in black were unknown. Contigs from chromosome 3B were not separated into short/long-arm bins, as individual arm datasets were not generated for this chromosome in the Chromosome Survey Sequence (CSS) project.

To distinguish coding-region SNPs (cSNPs), perigenic SNPs (pSNPs), and intergenic SNPs (iSNPs) among the SNP markers on the KJ-RIL genetic map, a BLASTX search was performed against the CDSs of T. aestivum, using the SNP flanking sequences and the corresponding contig sequences as queries (the classification rule is sketched below). Using the SNP flanking sequences as queries, 8.9% (10104 SNPs) were best hits to T. aestivum CDSs and were considered cSNPs. Using the contig sequences to which the markers were best hits as queries, approximately 36.8% (41689 SNPs) were best hits to T. aestivum CDSs, indicating that 27.9% (36.8-8.9%) of the SNPs were pSNPs. The remaining 63.2% are likely iSNPs (Table S5).

QTLs for KNPS and the prediction of candidate genes
Using the mapping resource along with the phenotypic data, we identified important QTLs for yield-related traits. A major stable QTL for kernel number per spike (KNPS; qKnps-4A) was verified in 10 environments using MapQTL 6.0, IciMapping 4.1, and QTLNetwork 2.0 (Table 1; Figure S7). This region harbours 65 predicted genes in wheat (Figure S8), which might include the candidate gene for qKnps-4A.

Discussion
The high-density SNP map developed in the present study documents, for the first time, the genetic positions of 119 001 SNPs from the Wheat660K SNP array. Based on the SNP flanking sequences, we assigned 118 785 SNPs to 56 904 CSS-assembled contigs (Table S3). The physical positions of the corresponding CSS-assembled contigs could be used to validate genetic position (chromosome and chromosome-arm assignment). As shown in Fig. 2 and Table S3, the physical and genetic positions of these mapped markers were generally in agreement.
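The cSNP/pSNP/iSNP classification described earlier follows a simple decision rule, sketched below with hypothetical identifiers: a SNP whose flanking sequence hits a CDS is a cSNP; a SNP whose flanking sequence does not hit a CDS but whose host contig does is a pSNP; otherwise it is treated as an iSNP.

# Sketch of the cSNP/pSNP/iSNP decision rule, assuming two precomputed sets of
# BLASTX best hits (all identifiers below are invented).
flanking_hits_cds = {"AX-0001", "AX-0007"}           # flanking sequence -> CDS
contig_hits_cds = {"AX-0001", "AX-0007", "AX-0042"}  # host contig -> CDS

def classify(snp_id):
    if snp_id in flanking_hits_cds:
        return "cSNP"   # the SNP itself falls in a coding region
    if snp_id in contig_hits_cds:
        return "pSNP"   # contig is genic, but the SNP flanking sequence is not
    return "iSNP"       # neither hits a CDS: likely intergenic

for snp in ["AX-0001", "AX-0042", "AX-0099"]:
    print(snp, classify(snp))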
In previous studies, 7504 SNPs from the Wheat9K SNP array 12 , 46977 SNPs from the Wheat90K array 16 , and 56505 SNPs from the Wheat820K SNP array 18 were genetically mapped to the hexaploid wheat genome. These SNPs have also been physically assigned to the corresponding CSS-assembled contigs. Based on the common CSS contigs, we analysed the synteny of the mapped SNPs (Wheat660K SNP array vs. Wheat90K array and Wheat660K SNP array vs. Wheat820K SNP array) across different mapping populations (Figs 5 and 6). These common contigs were generally aligned with the chromosomes in a consistent order across different mapping populations, verifying the accuracy and credibility of our high-density genetic map. Regarding the total map length, the Wheat90K/820 SNP consensus integrative genetic map is 2.4/1.9 times longer (data not shown) than that of the present map. The increased genetic map length is proportionate to the increased mapping population size 16,18 . A relatively small mapping population size resulted in limited identification of recombination events and lower resolution of the genetic map, contributing to the short map length in this study 24,25 . In addition, comparative analysis among these three SNP maps revealed that genetic maps of chromosomes 3 A, 4B, 5D, 6D, and 7B derived from the wheat 820 K SNP array were inverted, with long arms at the top and short arms at the bottom (data not shown) 18 . Chromosome 4 A, derived from the Wheat820K SNP array, might be involved in chromosomal rearrangement compared with the genetic maps derived from the Wheat90K SNP array and our Wheat660K SNP array. The genetic information from the common contigs and their genetic collinearity analysis across different mapping populations will lay the foundation for obtaining a consensus integrative genetic linkage map. The SNP order in the present KJ-RIL genetic map was also in good agreement with that in the physical position, with the exception of chromosome 7DL, in which a segment inversion was identified ( Fig. 3; Figure S2; Table S3). A previous report showed that wheat chromosome 7 was likely to involve a complex interchange 26 . These findings prompted us to search for candidate genes of targeted major QTLs. In wheat, a majority of recombination events occurred on the most distal portions of the chromosomal arms, whereas the recombination events tend to be suppressed around the centromere 3, 27, 28 . These characteristics result in a low resolution of the genetic map in the centromeric region, which was evident in the small genetic distance in the KJ-RIL genetic map corresponding to a large physical region around the centromere compared with the most distal portions of the chromosomal arms ( Fig. 3; Figure S2; Table S3). These findings also indicate the difficulty of high-resolution mapping and map-based cloning of a QTL around the centromere because of the low coverage of genetic markers and the suppression of recombination events. The genetic and genomic research of wheat has lagged behind similar research regarding other important crops, such as rice and maize 29 . Conservation of gene identity and collinearity among gramineous plants will depend on the rates of genome/gene evolution and rearrangement in the investigated species 22 . There is a high level of genome-synteny among gramineous plants, especially wheat, Brachypodium and rice, with wheat being more closely related to Brachypodium than to rice 22,23,[30][31][32] . 
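The cross-map collinearity check described above can also be summarized numerically by rank-correlating the positions of contigs shared between two maps, chromosome by chromosome. This statistic is our own illustrative choice rather than the analysis performed in the study, and the file and column names are placeholders.

```python
# Sketch: rank-correlate positions of shared contigs between two genetic maps,
# chromosome by chromosome. Column names and file paths are illustrative only.
import pandas as pd
from scipy.stats import spearmanr

map_a = pd.read_csv("wheat660k_map.tsv", sep="\t")   # columns: contig, chromosome, position_cM
map_b = pd.read_csv("wheat90k_map.tsv", sep="\t")

shared = map_a.merge(map_b, on=["contig", "chromosome"], suffixes=("_660k", "_90k"))
for chrom, grp in shared.groupby("chromosome"):
    if len(grp) < 5:
        continue  # too few shared contigs for a meaningful correlation
    rho, _ = spearmanr(grp["position_cM_660k"], grp["position_cM_90k"])
    # |rho| near 1 indicates consistent marker order; a strongly negative rho would
    # flag an inverted chromosome of the kind reported for some 820K-derived maps.
    print(f"{chrom}: {len(grp)} shared contigs, Spearman rho = {rho:.2f}")
```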
Wheat improvement programs can benefit from the use of comparative genetics to transfer information about genes from model species to wheat, to help identify genes controlling traits of interest, and to assess within-species allelic diversity so that the best alleles can be identified and assembled in superior varieties. In this study, synteny analyses among common wheat, Brachypodium, rice, sorghum and maize genomes were performed based on the collinearity of the corresponding orthologous genes (best hits of CDSs) (Table S4; Figures S3 and S4). Features of the wheat-grass genome relationships revealed in this study included a high frequency of breakdown in microcollinearity throughout the genomes compared to the previous RFLP-based maps 22,31,[33][34][35][36] . Both large and especially small synteny blocks across wheat chromosomes and grass family chromosomes were observed in this study. These features might be attributed to the higher number of markers used in this study. More recently, Russo et al. 32 conducted collinearity analysis across durum wheat, Brachypodium, and rice based on a high-density durum wheat genetic map derived from the Wheat90K SNP assay. Both large and especially small synteny blocks across wheat chromosomes and grass family chromosomes were also observed in that study. Based on a high-density genetic map, we documented a wheat genome perspective of homologous sorghum and maize genome locations based on comparative sequence analysis. A wheat genome view of homologous gramineous plant genome locations based on comparative sequence analysis would considerably improve the predictability and efficiency of information transfer, and would be benefit evolutionary studies. Wheat yield is determined by three yield components: productive spikes per unit area, KNPS, and kernel weight, determine wheat yield. Among them, the KNPS value has steadily increased, indicating a substantial contribution of increased KNPS to increased wheat yield 37,38 . Over the past several decades, numerous QTLs (or genes) for wheat kernel number have been documented based on both linkage-mapping and association-mapping analyses 4,[38][39][40][41][42][43][44][45][46][47][48][49][50][51][52][53][54] . Some of these studies have also reported QTLs for KNPS on chromosome 4 A 4, 40, 44, 50, 51, 53, 54 . Based on genetic marker sequence flanking for these QTLs and the recently released WGS assembly, we compared physical positions with qKnps-4A to determine whether they were common interacting QTLs or regions across genetic backgrounds. The results are shown in Supplementary Table S6. qKnps-4A from our previous research shared a confidence interval (4 A:664209916-4 A:736771450) with qKnps-4A in this study (4 A:680398739-4 A:683638403). Interestingly, qKnps-4A has been detected in both the WY and WJ populations (two related RIL populations sharing one common parental line of Weimai 8) 4 . QKNS.caas-4AL detected by Gao et al. 54 , also shared a confidence interval (4 A:632 864 778-4 A:688 093 018) with qKnps-4A in our study. These coincidences confirmed the authenticity of qKnps-4A, which should be subjected to fine mapping and map-based cloning in the future. In fact, this work is being conducted based on the secondary mapping population of qKnps-4A's (data not shown). QTL mapping based on the primary mapping population could precisely characterize and locate genes underlying specific agronomic traits, which was also true for both major and moderate/minor QTLs 55 . 
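The comparison with previously reported 4A intervals above amounts to checking whether physical confidence intervals overlap. A small sketch using the coordinates quoted in the text (the interval list is reproduced only for illustration):

```python
# Sketch: test whether the qKnps-4A confidence interval overlaps previously
# reported chromosome 4A intervals, using the base-pair coordinates quoted in the text.
def overlaps(a, b):
    """True if closed intervals a = (start, end) and b = (start, end) overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

qknps_4a = (680_398_739, 683_638_403)          # this study
previous = {
    "qKnps-4A (earlier WY/WJ study)": (664_209_916, 736_771_450),
    "QKNS.caas-4AL (Gao et al.)":     (632_864_778, 688_093_018),
}

for name, interval in previous.items():
    print(f"{name}: overlap with qKnps-4A = {overlaps(qknps_4a, interval)}")
```

Both reported intervals contain the interval from this study, consistent with the coincidences noted above.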
Previous studies have confirmed that tagged or cloned genes were near their original QTL positions (logarithm of the odds (LOD) peak) [56][57][58][59][60][61][62][63][64][65][66][67] . In this study, qKnps-4A was repeatedly identified using MapQTL 6.0, IciMapping 4.1, and QTLNetwork 2.0, based on mathematical models of composite interval mapping (CIM), inclusive composite interval mapping (ICIM) and the mixed linear model approach (MLMA), respectively. In addition, the KNPS values of the 188 KJ-RILs and their parental lines were evaluated in 10 different environments. Based on the 10 environmental phenotypic values along with our high-density genetic map, we confirmed QTL peak position with the aforementioned QTL mapping software and found the Ax-109844107-Ax-110540586 overlapping confidence interval, which spanned 3. Table 7). These coincidences across different genetic backgrounds, multiple environments and diverse QTL models strongly supported the hypothesis that the genes underlying qKnps-4A are likely located within 4 A:680398739-4 A:683638403. One of the 65 predicated genes within this interval might be the candidate gene for qKnps-4A ( Figure S8). This information is very valuable for future high-resolution mapping and map-based cloning of qKnps-4A. Using Ax-110540586 as a probe, the 188 KJ-RILs were divided into two groups, one group with alleles from KN9204 and the other with alleles from J411, to perform mean comparisons regarding KNPS. The positive alleles from qKnps-4A's increased the KNPS value from 3.0 to 4.6, indicating a tremendous potential for their application in wheat molecular breeding programmes designed to increase yield ( Figure S9). To characterize the use of qKnps-4A's positive alleles in wheat breeding programmes, we dissected the genotypes of 65 KN9204-derived advanced lines/authorized varieties, three parental lines of KN9204, three control varieties from the Winter Wheat Performance Trial of the Northern Huang-Huai Regional Nursery of China, and CS near Ax-109844107-Ax-110540586 ( Figure S10). None of KN9204′s three parental lines carried favourable alleles for increasing KNPS, which accounted for the negative alleles of KN9204 at this QTL. Only 3 (4.6%) of the 65 authorized/ advanced lines derived from KN9204 lines carried favourable alleles for increasing KNPS. Interestingly, two advanced lines (KN1002-13-10 and O-97) carried heterozygous alleles, which were used to develop the secondary mapping population of Knps-4A via self-cross (data not shown). Of the three control varieties from the Winter Wheat Performance Trial of the Northern Huang-Huai Regional Nursery of China, S4185 carried negative alleles, decreasing KNPS; chromosomal regions of Knps-4A in LX99 and JM22 were recombinant regions with alleles that cannot be categorized as favourable or negative alleles that increase or decrease KNPS; and CS carried favourable alleles, increasing KNPS. These findings indicated that favourable qKnps-4A alleles have not been fully utilized in wheat breeding programmes in the Huang-Huai winter wheat region in China. Wheat breeders should strengthen the selection of qKnps-4A favourable alleles in molecular breeding programmes aimed at the development of high-yield varieties. In summary, this paper reports a high-density wheat genetic map based on an individual mapping population. In total, 119 001 SNP markers derived from the Wheat660K SNP array were mapped onto the KJ-RIL genetic map. 
We observed good collinearity of our high-density genetic map with the Wheat90K and Wheat820K consensus genetic maps, increasing the possibility of obtaining a consensus integrative higher-density wheat genetic map in the future. This high-density genetic map is also in good accordance with the recently released wheat genome assembly. Our high-density wheat genetic map provides a major resource for future wheat genetic and genomic research. Moreover, this paper provides an overview of the structural relationships between wheat and other gramineous plant genomes based on comparative genomics analysis. Finally, a major stable QTL for kernel number was thoroughly characterized based on this high-density genetic map and comparative genomics analysis.

Experimental procedures

Plant material. In this study, F8:9 KJ-RILs derived from a cross between KN9204 and J411 were used for map construction and QTL analysis. The original KJ-RIL population contained 427 RILs. In total, 188 randomly sampled lines from the 427 KJ-RILs were used for genetic linkage analysis. In addition, 65 KN9204-derived advanced lines/authorized varieties, three parental lines of KN9204, three control varieties from the Winter Wheat Performance Trial of the Northern Huang-Huai Regional Nursery of China, and CS (Table S7) were genotyped to trace the key chromosomal segment harbouring the major stable QTL for kernel number per spike (KNPS).

Phenotyping. KNPS values of the 188 KJ-RILs and their parental lines were evaluated in ten different environments (five trials that included both low- and high-nitrogen treatments). The nitrogen treatments, field arrangements and experimental designs of the ten environments were performed as described previously 3,68,69.

Genotyping. For all subjects, leaf tissues were sampled. Genomic DNA was extracted and hybridized on the Wheat660K SNP genotyping array by Compass Biotechnology Company (Beijing, China). The DNA samples were prepared, and the chip genotyping was performed on the Wheat660K SNP array according to the Affymetrix Axiom 2.0 Assay Manual Workflow protocol. DNA integrity was confirmed on agarose gels, and DNA quantity was measured spectrophotometrically. The Wheat660K chip contains 630517 markers (http://wheat.pw.usda.gov/GG2/index.shtml). Variant quality from the Wheat660K chip genotyping was initially assessed according to Affymetrix best practices. The 188 RILs and their parents were also assayed using the 'Wheat PstI (TaqI) 2.3D' DArT array (the medium-density array) (http://www.triticarte.com.au/). The PCR-based markers were genotyped as described in our previous study 3.

Genetic map construction. The 188 KJ-RILs and their parental lines were genotyped with the Wheat660K SNP array. SNPs were rejected if they showed a minor allele frequency below 0.3 or contained >10% missing data. Markers were binned based on their segregation patterns in the KJ-RIL population using the BIN function in IciMapping 4.1 (http://www.isbreeding.net/) according to Winfield et al. 18. Markers that shared their segregation pattern with at least one other marker were retained. One marker was chosen to represent each bin on the basis of the least amount of missing data or, when the percentage of missing data was equal, at random. Markers were tested for significant segregation distortion using a chi-square test. SNPs were sorted into groups using the MAP function in IciMapping 4.1, with the previously mapped 591 loci serving as anchored markers 3.
A logarithm of the odds (LOD) score of 3.5 and a recombination fraction of 0.3 were used to sort the SNPs with the Kosambi mapping function 70 . Groups were ordered with the Kosambi mapping function within the JoinMap v. 4.0, using a LOD score ≥3 after preliminary analysis of SNPs with LOD scores ranging from 2 to 10. The long and short arms of each chromosome were identified from the IWGSC wheat survey sequence (http://www.wheatgenome.org/), and groups were orientated to have the short arm above the long arm. MapChart 2.2 (http://www. biometris.nl/uk/Software/MapChart/) was used to draw the genetic map. replication of the 10 environments were assembled to perform combined QTL analysis across environments to identify QTLs with additive-by-environment (A by E) interaction effects. The overlapping confidence intervals detected with the abovementioned programs were used to predict the candidate genes based on the sequence information of the flanking markers and the IWGSC WGA v0.4 assembly of chromosome 4A (https://urgi.versailles.inra.fr/download/iwgsc/IWGSC-WGA_Sequences/). Comparative genomic analysis. The SNP flanking sequences mapped in the KJ-RIL map were kindly provided by Professor Jia JZ. We used the Basic Local Alignment Search Tool (BLAST) (ftp://ftp.ncbi.nlm.nih. gov/blast/executables/release/) to align the SNP probes to the IWGSC survey sequences (contigs). All IWGSC survey sequences were downloaded from http://www.wheatgenome.org/. In addition, contig sequences to which the SNPs were best hits were screened in a BLASTN search against the coding sequences (CDSs) of B. distachyon, rice (O. sativa L.), maize (Z. mays L.), and sorghum (S. vulgare L.). All CDSs were downloaded from http://plants. ensembl.org/index.html. An expectation value (E) of 1E −10 was used as the significance threshold. Synteny analyses with common wheat, Brachypodium, rice, maize and sorghum genomes were performed based on the SNP orders in the KJ genetic map and on the corresponding CDSs in the genome sequences of Brachypodium, rice, maize and sorghum genomes. Using the sequences of the markers (including SNPs, PCR-based markers and DArTs), we conducted comparative genomics analysis against the contigs assembled in the chromosome survey sequencing (CSS) project. All of the contig sequences were downloaded from https://wheat-urgi.versailles.inra.fr/Seq-Repository. More recently, the genome assembly of T. aestivum cv. Chinese Spring (CS) has been released (https://urgi.versailles.inra.fr/ download/iwgsc/IWGSC-WGA_Sequences/). Based on SNP flanking sequences, we assigned SNPs to this wheat genome assembly.
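Three of the map-construction steps described above lend themselves to a compact illustration: the marker filters, the chi-square test for segregation distortion, and the Kosambi function that converts a recombination fraction into map distance. The genotype coding and the 1:1 segregation expectation below are assumptions made for this sketch; they are not stated explicitly in the text.

```python
# Sketch: marker filtering, segregation-distortion test and Kosambi distance.
# Genotypes are assumed (for illustration) to be coded 'A' (KN9204 allele),
# 'B' (J411 allele) and '-' (missing) for the 188 RILs.
import math
from scipy.stats import chisquare

def passes_filters(calls, maf_min=0.3, max_missing=0.10):
    """Apply the minor-allele-frequency and missing-data filters described in the text."""
    n = len(calls)
    missing = calls.count("-")
    a, b = calls.count("A"), calls.count("B")
    if a + b == 0 or missing / n > max_missing:
        return False
    return min(a, b) / (a + b) >= maf_min

def segregation_distorted(calls, alpha=0.05):
    """Chi-square test against the 1:1 segregation expected in a RIL population (assumption)."""
    a, b = calls.count("A"), calls.count("B")
    _, p = chisquare([a, b], f_exp=[(a + b) / 2] * 2)
    return p < alpha

def kosambi_cM(r):
    """Kosambi mapping function: recombination fraction r -> map distance in cM."""
    return 25.0 * math.log((1 + 2 * r) / (1 - 2 * r))

calls = ["A"] * 90 + ["B"] * 80 + ["-"] * 18   # toy genotype vector for one marker
print(passes_filters(calls), segregation_distorted(calls), round(kosambi_cM(0.3), 1))
```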
Lattice $W$ algebras and quantum groups We represent Feigin's construction [22] of lattice W algebras and give some simple results: lattice Virasoro and $W_3$ algebras. For simplest case $g=sl(2)$ we introduce whole $U_q(sl(2))$ quantum group on this lattice. We find simplest two-dimensional module as well as exchange relations and define lattice Virasoro algebra as algebra of invariants of $U_q(sl(2))$. Another generalization is connected with lattice integrals of motion as the invariants of quantum affine group $U_q(\hat{n}_{+})$. We show that Volkov's scheme leads to the system of difference equations for the function from non-commutative variables. Introduction. In this talk I would like to give a brief introduction to the Feigin [22] construction of lattice W algebras and represent some simple results. More complete consideration can be find in the forthcoming work [36]. In 1985 Alexander Zamolodchikov [1] investigated the possibility of existence of new additional infinite symmetries in the context of two-dimensional Conformal Field Theory [4], or, equivalently, the existence of a primary field with conformal dimension (s, 0) or (0, s). (Hereafter we consider only holomorphic part.) By direct use of bootstrap principle he proved that there might exist primary field W 3 with conformal dimensions (3,0). Due to the equation it is the conserved current which generates additional infinite symmetry, while algebra (W 3 , T ) (where T is the stress-energy tensor), is a quadratic one. Namely, operator product expansion of two W 3 currents includes quadratic term on T . In the past few years considerable progress has been made in an understanding of the deep structures underlying these algebras (see for example refs. [2,[5][6][7][8][9][10][11][12][13][14][15]20,21]) as well as its classical limits [16][17][18][19]. It was shown in the works [2,6,11,12,14] that W algebras can be considered as the result of quantum Drinfel'd-Sokolov reduction and the fact that generators of W algebras commute with screenings operators can be taken as the definition of W algebras. Namely, such as screening operators constitute the nilpotent part of the quantum group [24], mathematically W algebra is the algebra of invariants of this group. Lukyanov and Fateev [5][6][7][8] found 1 that such invariants are given by quantum generalization of Miura transformation. We give the lattice version of this picture. We hope that lattice construction can clear up the intrinsic fiatures of W algebras. Our consideration is somewhat different from a number of another works appeared in connection with lattice current algebras [23][24][25][26][27][28][29][30][31]. Plan of this talk is following: 1. Feigin's construction of lattice W algebras. 2. Examples: Virasoro and W 3 on the lattice. 1.1 W algebra in continious theory. Consider bosonic representation of conformal field theory in terms of free scalar fields Let g be a simple Lie algebra with simple roots α i (i=1,...,r=rank(g)) and Cartan decomposition g = n − ⊕ h ⊕ n + . Following to Feigin and Frenkel [12] one can give the following definition: Definition.Vacuum representation of W algebra associated with simple Lie algebra g has a realization as the intersection of the kernels of screening operators We have by this means in the continious theory Lukyanov and Fateev found that bosonic realization of W is given by quantum Miura transformation which in the case of W sl(n) has the following form [5][6][7][8]: and h i = ω 1 − α 1 − . . . 
− α i−1 are fundamental weights of the n-dimensional vector representation of SL(n). As Bowknegt McCarthy and Pilch have shown, the operators S αi satisfy to the q-Serre relations and realize the representation of quantum group U q (n + ) [24]. Therefore, roughly speaking, W algebra is the algebra of invariants of U q (n + ). It will be our key principle in the lattice construction. Let us note here that finding of integrals of motion in the W-symmetric Conformal Field Theory perturbed by relevant operator [37] where α 0 is the affine root, reduce in the bosonized picture to the determination of Inv[U q (n + )]: in the point i. Denote by Λ the root lattice of g endowed by standard scalar product where (a ij ) is a Cartan matrix of g. Define the multigrading on x i by the map: Imitating the exchange relations for vertex operators determine now skew polynomial algebra with basic relations: Having in our mind to find the analogue of screening operators, put One can immediately prove the following lemma: Lemma. Operators S αi satisfy to the q-Serre relations: (ad These operators S αj constitute the U q (n + ) algebra, and formulas for comultiplication, antipod and counit are of the form: Due to the eqs. (1.27)-(1.2.9) and property q hi S αj = q aij S αj q hi operators h i and S αi constitute the borel part U q (b + ) of quantum universal enveloping algebra U q (g). Consider now the algebra of formal Loran's series In according to the general rule the adjoint action of quantum group U q (n + ) is determined by q-commutation with screening operator We can give the following definition of lattice W algebra: Definition. Generators of lattice W algebra associated with simple Lie algebra g constitute the functional basis of space Here we added new requrements: to have an scaling invariance x i → λx i and to satisfy the requirement of finiteness. Another essential argument is that we are looking for local expression for generators: namely, like local fields W, T lattice generators must to commute if they far enough from each other. So the problem is to find solution of the system of difference equations from infinite number non-commutative variables. It is significant that commutation relations (1.2.5) depend on the sign of the difference (i − j) only. We should try to find all solutions of the system: . Then we will obtain the whole set of generators by shift: etc. (1.2.14) Examples: Virasoro and W 3 on the lattice. In this section we turn our attention to the explicit construction of lattice algebras. Let us assume for simplicity that deformation parameter q is in generic position. In this case one can apply to the classical limit q → 1 and reduce rather complicated system (1.2.13) of the difference equations to the ordinary one. Screening operators in this limit turn to be differential operators of first order acting on the manifold with the coordinates x i . One can easily obtain nonstandard realization of universal enveloping algebras U (b + ) and solve the system of ordinary differential equations. Classical solution help us to "guess" Faddeev-Takhtajan-Volkov algebra. Virasoro algebra is connected with sl(2) algebra. In this case we have the following basic relations: and system of equations is: As we noted before, in fact, one need to solve the following equation: and the solution must have zero grading. We have two obvious solutions of this equation: and zero-grading invariant is All other basic generators of lattice Virasoro algebra are obtained by simple shift. 
This algebra was found from another point of vew by Volkov and its classical version was appeared in the work Tahtadjan and Faddeev [25]. At the classical level lattice Virasoro has the following Poisson brackets: and Poisson brackets of any S i are obtained by shift Faddeev and Takhtajan found this Poisson structure by studing of Volterra system: Let us consider folowing example of lattice algebra associated to Lie algebra sl (3). There are several ways to define the grading. We put regular coloring of the points, exactly as in the fig.2: and the commuattion relations: x n x 2n+k = qx 2n+k x n , k > 0 , (2.2.2) Applying to the classical limit [35] one can prove that invariants of U q (sl (3)) + has the etc. It is rather combersome matter to find an algebra of these invariants but in the classical limit 2.4) the calculations gives us the following result: (2.2.5) The remaining Poisson brackets are obtained through these Poisson brackets by shifts. Nevertheless this Poisson structure seems to be unwieldy there is exist some diagramm which show some kind of symmetry: where Γ is given by fig.(3-4). (May be it is possible to write down the Poisson brackets for W sl(n) without explicit knowledge of lattice bosonization?) Having such a hamiltonian structure one can define differential-difference chain of non- where Γ i denote that we should summarize all terms in the diagramm fig.4. For example we have: (2.2.8) Probably it would be interesting to investigate this non-linear chain which seems to be integrable. In this section we turn our attention to the question what is the role of second part of quantum group. To begin with let us consider some local lattice field F which has in the neighborhood of point 1 the form ( fig.5): The worth important observation is that screening operator S αi acts on local field F (1) 0 only by halh of infinite sum: 3) The expressions like U + F 1 may be considered as the lattice analogues of the fields dressed by screening operator ( fig.6-7): To investigate lattice representation of quantum group sl 2 if we should extend our space C[x i , x −1 i ] by non-local expressions which are given by the action of screening operators: To introduce action of U q (sl 2 ) − on this space, let us define a q-derivation of the "screening variable" U + : (3.5) One can immediately show that operators (D q , h, S) constitute twisted sl(2) q algebra: (3.6) After the change we will obtain an ordinary quantum group sl(2) q . In previous section we received the realization of quantum group U q (sl (2)) on the lattice. One can try to investigate the representations of quantum groups on this lattice. Simplest possibility of such a module is two dimensional module. Unfortunately, naive "highest weight" x − 1 2 isn't true becouse it creates redducible module. One can prove, however, that proper expression for "highest weight" in this case is given by the expression while second vector in this module is given by the formula: By shift x 1 , x 2 → x 3 , x 4 we find another module and R matrix in the representation 1 2 , 1 2 has the well-known form. In this picture one can find alternative expression for lattice Virasoro algebra. Indeed, the expressions like belong to the invariant space of U q (sl(2)) and therefore have to be invariants. Really, the condition D q (∆ 13 ) = 0 denotes that this expression is local, while invariance under the action of S and h is the definition of Virasoro algebra (sec.1.2). 
Even without knowledge of exchange algebra one can determine the invariants like where ( fig.8) It is interesting that 3-point invariant on the classical level coinsides with Tahtadjan-Faddeev Virasoro algebra: On the quantum level I have not proved that Σ can be expressed through σ and vice versa. But I think it is right. Let us note, that 3-dimensional module is generated by the S-operator action on x −1 . The exchange relation in this case are determined through the quantum R 1,1 matrix for Hence we can start from exchange algebra [10] without explicit knowledge of vectors in module and define generators of W algebra as an invariant of U q (g). The algebra of invariants gives us W lattice algebra. Lattice integrals of motion Let now turn our attention to the integrals of motion in perturbed conformal field theories with W symmetry [37]. The integrable perturbation of such a theories is given by the relevant field where α 0 is the additional (affine) root of the affinization of Lie algebra g. As Zamolodchikov proved [33], the determination of integrals of motion in perturbed conformal field theories is reduced to the finding of the kernel of operator in the vacuum representation of W algebra. Dealing with bozonizied picture we can rewrite this problem [23] to the finding of screening operators kernels intersection which constitute the nilpotent part of quantum affine group U q (ĝ). In according to the lattice ideology, we have to define additional screening operator through our non-commutative variables x i . Consider, for example,ŝl(n) case with Cartan matrix a ij . Define the multigrading in this case by the regular map ( fig.7): We have (n-1) ordinary screening operators Our idea is to construct affine generator as following sum: which has proper grading and corresponds in the continious limit to the operator V α0 . It is rather simple matter to prove the folowing lemma: Lemma. Operators S i , (i = 0, ..., n − 1) satisfy to the q-Serre relations for the quantum affine group U q (ŝl(n)). (ad Now operators S i gives us the representation of the nilpotent part of quantum affine Lie algebra U q (ŝl(n) + ). Therefore, mathematically, integrals of motion problem is defined by similar way: Definition. Integrals of motion on the lattice constitute the invariant space of nilpotent part of quantum affine Lie algebra: Such as we have infinitely many generators of quantum affine group U q (ŝl(n) + ) (i.e. the number of constraints is infinite) then there is no hope to find local invariants. But we can try to find the "local density" of "integrals". Namely, such functions, commutator of which with screening operator is given by "total derivation": (5.1.8) Volkov's scheme. In this section we will represent Volkov's method possesing to determine the invariant space of quantum affine group through the solution of some system of difference equation. For simplicity consider first example of integrals of motion in the conformal field theory perturbed by field Φ 13 . Such as such a field is represented by vertex operator then its resonable lattice analogue has the form x −1 . One can immeaditely prove that and these generators constitute U q (ŝl(2) + ) algebra. Hence, we have two screenings in this case: Let us consider two points x 1 and x 2 . 
The main idea is to add a "spectral parameter" β to the two-point screening operators and define some analogue of an "R" matrix: If we could solve these equations, then we would construct $R_{i,i+1}$ by a simple shift of variables, and the product (or, more explicitly, the logarithm of this product) gives us the generating function for integrals of motion. Let now $R_{1,2} = R_{1,2}(x_1 x_2^{-1}; \beta) = R_{1,2}(u; \beta)$. Then both equations (5.2.4) are reduced to the following linear difference equation: $(\beta u + 1)\,R_{1,2}(q^{-1}u; \beta) = (u + \beta)\,R_{1,2}(u; \beta)$ (5.2.6) which appeared in the work [27]. For q in generic position one of the solutions of this equation has the form: This expression is rather interesting: S. Kryukov noticed that it can be formally represented as the two-point correlation function of two q-deformed bosonic fields. Moreover, in the limit q → 1 it leads to expressions like dilogarithms.

Integrals of motion. Example: $\hat{sl}(3)$. Let us describe a direct generalization of this scheme for the case of the $\hat{sl}(3)$ algebra. The additional screening now has the form: It is easy to check by induction that: The corresponding system of difference equations now has the form: Assuming that $R = R(u_1, u_2)$, with $u_1 = x_1/x_3$, $u_2 = x_2/x_4$, $u_1 u_2 = q^{-1} u_2 u_1$.

5. Conclusion. There are many directions in this approach to be developed: 1. Consider a similar picture for the lattice W-algebras associated to other simple (affine) algebras. 2. Investigate similar models for the more complicated case when $q^p = 1$, $p \in \mathbb{Z}$. 3. Consider non-regular coloring of points. 4. Construct the realization of quantum affine Lie algebras and investigate their representations.
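The explicit solution of equation (5.2.6) referred to above did not survive text extraction; the expression below is our own formal check rather than a reproduction of the formula from [27]. For q in generic position, a telescoping product formally satisfies the difference equation (convergence requires a suitable normalization):

```latex
% Formal check (not the author's expression): a product solution of (5.2.6).
% Since R(q^{-1}u;\beta) = \prod_{k\ge 1} \frac{1+\beta q^{-k}u}{\beta+q^{-k}u}
%                        = \frac{\beta+u}{1+\beta u}\, R(u;\beta),
% the functional equation below follows by cancelling the k=0 factor.
\[
R(u;\beta) \;=\; \prod_{k=0}^{\infty} \frac{1+\beta q^{-k}u}{\beta+q^{-k}u},
\qquad
(\beta u + 1)\, R(q^{-1}u;\beta) \;=\; (u+\beta)\, R(u;\beta).
\]
```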
Automated time-lapse data segmentation reveals in vivo cell state dynamics Embryonic development proceeds as a series of orderly cell state transitions built upon noisy molecular processes. We defined gene expression and cell motion states using single-cell RNA sequencing data and in vivo time-lapse cell tracking data of the zebrafish tailbud. We performed a parallel identification of these states using dimensional reduction methods and a change point detection algorithm. Both types of cell states were quantitatively mapped onto embryos, and we used the cell motion states to study the dynamics of biological state transitions over time. The time average pattern of cell motion states is reproducible among embryos. However, individual embryos exhibit transient deviations from the time average forming left-right asymmetries in collective cell motion. Thus, the reproducible pattern of cell states and bilateral symmetry arise from temporal averaging. In addition, collective cell behavior can be a source of asymmetry rather than a buffer against noisy individual cell behavior. INTRODUCTION Aristotle first noted the astonishing reproducibility of embryogenesis in "Historia Animalium," where he observed, "Generation from the egg occurs in an identical manner in all birds." For example, in bilaterians, the left and right sides generally adopt an identical form. At the cellular level, development entails a reproducible series of cell state transitions representing changes in gene expression state, physical state, and cell fate. These processes can be noisy; for example, cell migration can be either ordered or disordered, and such disorder is part of normal orderly development (1). We now appreciate that gene networks control cell state transitions, but these networks are composed of stochastic molecular processes (2). Despite the remarkable progress in the field of developmental biology in recent decades, it has been difficult to study the reproducibility and robustness of cell state transitions in vivo because of the challenge of systematically defining cell states in space and time. Here, we analyze the spatiotemporal pattern of cell state transitions in the zebrafish tailbud. The vertebrate anterior-posterior body axis develops continuously from head to tail. Progenitors of the trunk mesoderm and spinal cord are largely specified during gastrulation, whereas the tailbud contains a bipotential neuromesodermal progenitor (NMP) cell population that contributes to the posterior body axis (3,4). In zebrafish, fate mapping experiments indicate that the presomitic mesoderm (PSM) formed during gastrulation gives rise to the first 10 to 13 somites (5-7). During early somitogenesis, the mesodermal cells continue to join the PSM via medial convergence, and because of their long transit time through the mesodermal progenitor zone (PZ) and PSM, NMP-derived PSM cells primarily contribute to the tail somites. Neighboring NMP cells disperse to contribute to multiple somites along the tail (8,9). Cells in the tailbud undergo a series of transitions in gene expression and migratory behavior during their differentiation (Fig. 1A, left) (9)(10)(11)(12). The dorsal-medial (DM) tailbud contains the sox2/brachyury expressing NMPs that contribute to both the spinal cord (Fig. 1A, yellow) and the PSM (3). In the zebrafish, cells in the DM migrate toward the posterior in a rapid orderly fashion (Fig. 1A, cyan) (8,9,13). 
At the tip of the tailbud, mesodermally fated DM cells down-regulate sox2, up-regulate mesodermal genes such as tbx16, and undergo an epithelial-mesenchymal transition (EMT) to migrate ventrally into the PZ (Fig. 1A, magenta). Cell movements in the PZ are more disorderly than the DM. Ultimately, cells leave the PZ, reduce their speed, and assimilate into the left and right PSM (Fig. 1A, green) (9,(14)(15)(16). Cells in the PSM down-regulate tbx16 and begin to express tbx6. Cell velocity in the anterior PSM declines further as the tissue decreases its volume and solidifies (9,(17)(18)(19). The transition from orderly to disorderly motion from the DM to PZ is necessary for proper body elongation (1,9,20). Excessively disordered motion in the DM [via experimental inhibition of bone morphogenetic protein (BMP) or fibroblast growth factor (FGF) signaling] impairs the flow of cells through the tailbud leading to a short body axis. Excessively ordered motion in the PZ (induced by moderate Wnt inhibition) produces prolonged anisotropic fluxes, unequal allotment of cells to the left or right PSM, and a bent body axis. Thus, understanding robustness and reproducibility of vertebrate body elongation requires understanding the nature of these tailbud cell state transitions. In this study, we first define the trajectory of cell states in the zebrafish tailbud during body elongation using dimensional reduction algorithms and then segment the trajectory using a change point detection algorithm. We define gene expression state using singlecell RNA sequencing (scRNA-seq). We validate this algorithmic definition of cell states by comparing wild type and embryos with reduced Wnt, FGF, and BMP signaling and verify quantitative differences in gene expression cell states by multicolor fluorescent in situ hybridization. Next, we identify cell motion states by analyzing cell tracking data using a similar approach. We find that the cell tracking data can be segmented into reproducible cell motion states. We then perform an analysis of the pattern of cell motion states, as these datasets allow direct quantification of cell state dynamics over time. The pattern of cell motion state transitions in a 2-to 3-hour time average are the same in each wild-type embryo, indicating the reproducibility of the cell state dynamics. In addition, the cell state pattern for a single time point is typically the same as the pattern of a 2-to 3-hour time average, revealing some dynamic stability of this pattern. However, individual embryos exhibit transient deviations from the average pattern of cell states. Analysis of these transient deviations reveals them to be due to irregular collective cell migration, which produces bilaterally asymmetric convergence to the midline in the PSM. Thus, both the reproducible pattern of cell motion states and bilateral symmetry arise from temporal averaging. More generally, these results indicate that collective cell behavior does not necessarily buffer noisy individual cell behavior and that collective cell behavior can be a source of inappropriate asymmetry. Prospectively, our approach using a dimensional reduction method and a change point detection algorithm may prove useful in the analysis of other complex time series datasets. Tailbuds were dissected and pooled; scRNA-seq profiles were generated, and a one-dimensional pseudotime was created and segmented into gene expression cell states. (B) UMAP projection of scRNA-seq data colored by cell type. Arrow marks the path of pseudotime in (C). 
(C) Expression of selected markers over pseudotime. Vertical lines are transition points between cell states as defined by a Bayesian algorithm that minimizes within state statistical error. Note that the segment colors along the pseudotime axis correspond to the colors along the developmental trajectory (arrow) in (B) but with the PZ and PSM being further subdivided into two similarly colored segments in (C). (D) UMAP projection of scRNA-seq data colored by experimental treatment. (E) Quantification of the differences in the proportion of cells that are in a given cell state in each replicate of each experimental condition. See also figs. S1 and S2. Gene expression states We performed scRNA-seq on dissected tails from 10 to 12 somite stage zebrafish embryos (Fig. 1A). We used wild-type embryos and embryos subject to treatments known to alter tailbud cell migration, specifically inhibition of FGF, BMP, or Wnt signaling (1,9,13,20). For each treatment, we prepared four biological replicates, each consisting of 10 to 12 tailbuds and resulting in 30,000 to 35,000 singlecell profiles. In a uniform manifold approximation and projection (UMAP) dimension reduction plot of wild type, the neuronal and paraxial mesoderm form one large cluster with more differentiated cells at each end and common progenitors (cyan) in the middle [ Fig. 1B (arrow) and fig. S1]. Wild-type and experimental samples consist of the same cell transcription profiles highlighting the robustness of gene expression states (Fig. 1D). This result is also consistent with previous scRNA-seq analysis of zebrafish embryos, indicating that perturbation of cell signaling does not create novel cell transcription profiles (21,22). To enable direct quantitative comparisons between experimental conditions, we pooled the data from all wild-type and experimental replicates and created one unified pseudotime to define a single standard for classifying cells. Specifically, the cells in the main cluster were aligned along a neuronal-mesodermal axis from sox3 expressing neuronal cells to mespaa expressing anterior PSM cells (Fig. 1B, arrow) (22). This approach avoids the requirement to define the NMP population a priori. Instead, NMPs will be located in the middle of the pseudotime sequence, and differentiation will proceed toward both ends, i.e., neuronal to the left and mesodermal to the right (Fig. 1C). Marker genes for neuronal and mesodermal development map with respect to pseudotime in the correct developmental sequence, indicating that the procedure was successful. To define gene expression states, we extracted the wild-type data and then used a change point detection algorithm to divide pseudotime into a series of distinct states (23). The change point algorithm identified five transition points ( fig. S2). These transition points (vertical lines in Fig. 1C) divide the pseudotime sequence into six states that generally agree with those predicted previously from marker gene expression (24). The states include a neural state, NMPs, and a succession of mesodermal states. The transition points were then mapped to the full pseudotime sequence, and we calculated the relative abundance of each state in wild-type, Wnt-inhibited embryos, FGF-inhibited embryos, and BMP-inhibited embryos (Fig. 1D). 
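The segmentation step described above can be illustrated with an off-the-shelf change point detector. The sketch below uses the ruptures package (least-squares cost, binary segmentation) as a stand-in for the Bayesian method of reference (23), so it is an analogue of the approach rather than the authors' implementation; the pseudotime-ordered expression matrix is a synthetic placeholder.

```python
# Sketch: detect transition points along a pseudotime-ordered signal.
# Uses ruptures (binary segmentation, least-squares cost) as an illustrative
# stand-in for the Bayesian change point method cited in the paper.
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(0)
# Placeholder for smoothed marker expression ordered by pseudotime:
# 600 cells x 3 markers, with two built-in state changes.
signal = np.concatenate([
    rng.normal([1.0, 0.2, 0.0], 0.3, size=(200, 3)),
    rng.normal([0.3, 1.0, 0.2], 0.3, size=(200, 3)),
    rng.normal([0.0, 0.2, 1.0], 0.3, size=(200, 3)),
])

algo = rpt.Binseg(model="l2").fit(signal)
breakpoints = algo.predict(n_bkps=2)   # indices ending each segment (last = len(signal))
print("Estimated transition points along pseudotime:", breakpoints[:-1])
```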
To determine whether this analysis of scRNA-seq data accurately quantifies changes in cell state, we mapped the transcriptional states back onto the embryo and measured their abundance using multicolor fluorescent in situ hybridization for marker genes for the first five states ( Fig. 2A). Sox2 single-positive cells localize in the neural tube (state 1). Sox2-and brachyury-positive NMPs (state 2) occupy the DM. Nascent mesodermal progenitors (state 3) expressing brachyury and tbx16 are located immediately ventral to the DM in the medial PZ. Mesodermal progenitors in the PZ (state 4) are tbx16 single-positive cells located in the ventral and lateral tailbud. The PSM (state 5) is anterior to the transition from tbx16 to tbx6 expression. To validate the scRNA-seq analysis, we chose to test the predictions of changes in the abundance of neuronal and PZ states. First, the scRNA-seq predicts that Wnt-inhibited embryos would have more neuronal cells (Fig. 1E). This is consistent with reports that elimination of Wnt signaling leads NMPs to exclusively adopt a neuronal fate (3). In our partial inhibition of Wnt signaling, 6 of 16 embryos have an abnormal cap of neuronal tissue covering the embryos' posterior, confirming the scRNA-seq results ( fig. S3). A second prediction of the scRNA-seq analysis is that the PZ is smaller in BMP-and Wnt-inhibited embryos but not in embryos subject to FGF inhibition. To test this prediction, we performed fluorescent in situ hybridization for a PZ marker, tbx16, and a PSM marker, tbx6 (Fig. 2B). In wild-type, BMP-inhibited, and FGF-inhibited embryos, the tbx16 and tbx6 signal was measured along the anterior-posterior axis of the embryo for both the left and right sides (Fig. 2C). The PZ/PSM transition was set to the value derived from the scRNA-seq analysis (20% of the maximum value of tbx6), and then, the PZ length was normalized to the total tail length (85% of maximum value of tbx6). Consistent with the scRNA-seq analysis, BMP-but not FGF-inhibited embryos exhibited a decrease in PZ length (Fig. 2D). Because of the bent body axis exhibited by most Wnt-inhibited embryos, the area of the PZ and PSM were quantified. As predicted, Wnt-inhibited embryos have a smaller PZ (Fig. 2E). Thus, this approach to analyzing scRNA-seq data accurately identifies cell states that can be quantitatively mapped back onto the embryo. Cell motion states We hypothesized that similar algorithms to those used to classify gene expression states could be applied to cell motion data to define the cell motion states (Fig. 3A). For this purpose, we used cell tracking data from confocal time-lapse imaging of cells in the DM through PSM collected over 1 to 3 hours in wild-type embryos, 7 to 10 somite embryos, and embryos subject to signaling perturbations (9,13,20). As with the gene expression analysis, a cell motion trajectory for each cell track was used to order the tracks in pseudotime, and the state transitions were defined using the change point detection algorithm and the cell motion statistics such as speed and displacement in a specific duration. The cell states were color coded and spatially mapped back onto the embryo using the original cell track position. Because this is a novel approach, we separately optimized pseudotime assembly and segmentation starting with the wild-type embryos. Unlike gene expression, cell motion cannot be practically characterized instantaneously, and the choice of track interval involves trade-offs. 
Longer tracks individually contain more information, but the embryo contains fewer of them, and they may average out within-track transitions in cell motion state. We chose to use a sliding window of eight time points (21 min) as an empirically derived base unit. For each track, we calculated a distance matrix in which each element is equal to the cell's displacement between the i-th and j-th time points. To form a linear sequence, we used a variational autoencoder (VAE), a neural network-based machine learning method, as a dimensional reduction tool. It embedded distance matrices from four wild-type embryos into a onedimensional latent space. To construct a pseudotime sequence for each embryo, cells were assigned the rank of their latent space coordinate (z). This procedure successfully reproduced the known developmental sequence ( fig. S4). It indicates that there is a continuity in cell migratory behavior in the tailbud, like that of gene To segment the pseudotime sequences, we sought parameters with a large variance over pseudotime. We found that either cell speed and track straightness ( fig. S5) or, more simply, track displacement (Fig. 3, B and C) gives the best results. The change point detection algorithm classifies the tracks into three cell motion states ( fig. S6). We mapped these states back onto the embryo by plotting cell position at the start of the track and taking either a single time point (Fig. 3D) or a time average created by randomly sampling tracks throughout the time lapse (Fig. 3E). The time averages are reproducible between embryos. Each embryo is segmented into a high-displacement state principally located in the DM, an intermediate displacement state mostly located in the PZ, and a low cell motion state mostly in the PSM (Fig. 3, D and E, and movie S1). These results are consistent with previous manual segmentations of the tailbud into DM, PZ, and PSM (9). We next analyzed the temporal dynamics of the cell motion states. We find that the relative abundance of PSM-type tracks increases over time, while the DM and PZ shrink (Fig. 3F). This is consistent with the gradual decrease in the size of the tailbud as progenitor cells (DM and PZ) differentiate into PSM faster than new progenitors are created (8,25). However, we observed notable fluctuations in cell state size in some embryos. For example, in wild-type embryo 4, patches of cells transition rapidly from PZ to PSM-and then to PSM to PZ-type movements ( Fig. 4A and movie S1). These fluctuations are asymmetric between the left and right PSM and appear to be tied to differences in collective cell motion as they convergence toward the midline (Fig. 4A). To quantify this effect, we created regions of interest (ROIs) for the left and right anterior PSM and measured the abundance of PZ-type tracks and track displacement along the medial-lateral axis (Fig. 4, B and C). We found large differences between experimental replicates. Two embryos (replicates 3 and 4) have antiphase oscillations in medial-lateral displacement between the left and right PSM, where one side efficiently converges toward the midline, while the other has less convergence or even a net lateral motion. These fluctuations roughly correlate with differences in the abundance of PZ-type tracks. Replicate 1 exhibits constant convergence in both the left and right PSM, although the right side is moving significantly faster (P < 0.001) and has more PZ-type tracks. 
In contrast, replicate 2 has no net movement toward the midline in either PSM and a constant, low abundance of PZ-type tracks. Given this variability, we generated three more wild-type replicates (numbers 5 to 7). Their cells were assigned pseudotime coordinates by mapping them to the nearest tracks in the preestablished pseudotime sequence. They recapitulate the original phenotypes. Thus, cell motion state abundance is variable over time within a single embryo even while the average pattern in each embryo is the same. In addition to the creation of individual pseudotime sequences for each embryo, we also performed a pooled analysis by mapping all cell tracks in wild-type replicates 1 to 4 to one pseudotime sequence and then segmenting that sequence. This approach is analogous to the unified pseudotime assembly used for the scRNA-seq data. Using this approach yields the same results as separate segmentations for three of four replicates ( fig. S7). The outlier is replicate 1 where almost the entire PSM is classified as PZ-type tracks. This indicates that the converging cells of the PSM in this replicate move similarly to the high-displacement PZ cells in the other embryos. However, within a single replicate, the PZ and PSM do have different migratory statistics ( fig. S7), indicating that the relative differences between cell motion states is reproducible from embryo to embryo, whereas the absolute magnitude of the differences may vary (9). Thus, individual versus pooled analyses provide different information, e.g., average cell state patterns, differences within a single embryo over time, and differences among embryos. The effect of signaling perturbations on cell motion state We next sought to determine whether signaling perturbations known to affect cell migratory behaviors would alter the cell state transitions. To this end, we took cell tracks from embryos subject to FGF, Wnt, or BMP signaling inhibition and assigned them pseudotime coordinates by mapping them to the most similar wild-type tracks. We then colored the cells by pseudotime and segmented them using the change point detection algorithm ( fig. S4 and Fig. 5). The pseudotime sequence largely agrees with the developmental sequence, although FGF signaling-and BMP signaling-inhibited embryos have a noisier pattern of cell motion states than wildtype embryos (Fig. 5, A and B). This result is consistent with the observation that BMP inhibition suppresses the average differences in speed and track straightness between the DM, PZ, and PSM regions (9,20). However, the use of additional factors in the VAE algorithm likely aids in recovery of developmental sequences. Wnt signaling inhibition, a perturbation that affects the coordination of cell movement more than its cell-autonomous characteristics (1,9), largely retains the wild-type cell motion state pattern (Fig. 5C). These results indicate that similarly to scRNA-seq, the embryos' cell motion statistics retain sufficient tissue identity to allow for data integration. Inhibition of BMP or FGF signaling decreases coordination of cell motion in the DM, while inhibition of Wnt signaling prolongs left-right asymmetries in cell flux in the PZ leading to a bent body axis (1,9). We hypothesized that these perturbations could have similar effects in the PSM. Therefore, we repeated the measurements for PZ-type track abundance and medial-lateral displacement in the PSM in these backgrounds (Fig. 6). 
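As a footnote to the track-windowing procedure described earlier in this section, the sketch below builds the pairwise-displacement matrix for each eight-time-point window of a track and reports the net displacement per window, the statistic used to segment the wild-type data. The VAE embedding is omitted, and the example track is synthetic.

```python
# Sketch: pairwise-displacement matrices over a sliding window of a cell track.
# The window length (8 time points) follows the text; the track itself is a placeholder.
import numpy as np

def displacement_matrix(window):
    """window: (T, 3) array of x, y, z positions -> (T, T) matrix of pairwise displacements."""
    diffs = window[:, None, :] - window[None, :, :]
    return np.linalg.norm(diffs, axis=-1)

def window_features(track, win=8):
    """Slide a win-point window along one track; report net displacement per window."""
    feats = []
    for start in range(len(track) - win + 1):
        d = displacement_matrix(track[start:start + win])
        feats.append(d[0, -1])   # displacement between first and last point of the window
    return np.array(feats)

rng = np.random.default_rng(1)
track = np.cumsum(rng.normal(0, 0.5, size=(40, 3)), axis=0)   # toy 3-D random-walk track
print(window_features(track).round(2))
```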
BMP-and FGFinhibited embryos have fluctuations in the abundance of PZ-type tracks and mean medial-lateral displacement similar to wild type, suggesting that this process is not regulated by these signals (Fig. 6, A and B). In the Wnt signaling inhibition case, three of the four embryos exhibited wild-type patterns of antiphase oscillations or sustained but low magnitude differences in convergence. They also generally had similar numbers of PZ-type tracks in the left and right PSM. The outlier, replicate 2, had major asymmetries where the left side was initially moving medially, while the right side moved in a posterior-lateral direction. Subsequently, the left PSM cells ceased motion and adopted almost exclusively PSMtype tracks, while the right PSM began moving medially. This was accompanied by complementary curvatures of the PSM-notochord interfaces. Thus, while embryos subject to moderate Wnt signaling inhibition generally have wild-type PSM behaviors, they can exhibit abnormalities that correlate with gross morphological defects. DISCUSSION The ability to perform automated classifications of cells into distinct states is a powerful tool both for organizing data and the discovery of underlying principles. Extensive effort has gone into developing techniques for identifying gene expression states from scRNA-seq data (26). Meanwhile, cell motion has traditionally been described more qualitatively. Our parallel analysis of gene expression and cell migration states using dimensional reduction algorithm followed by a change point detection algorithm demonstrates that these cell state transitions can be similarly systematically defined and mapped back onto the embryo. Gene expression and cell motion states not only share some similarities but also have many differences. The gene expression transitions are spatially segregated and reproducible. They can be mapped onto the embryo using in situ hybridization (Figs. 1 and 2). The PZ-PSM transition is bilaterally symmetric and reproducible across 10 to 12 somite wild-type embryos (Fig. 2). Signaling perturbations do not create new states but do change the abundance of wild-type states. This phenomenon has been observed in other developmental contexts (27). Overall, gene expression states are very robust. The cell motion states have a somewhat different spatial pattern than the gene expression states. The transitions between DM, PZ, and PSM are roughly in the same position as the gene expression transitions but not always exactly (Fig. 7A). This is likely due to time delays inherent in mRNA processing and translation, posttranscriptional gene regulation, and physical constraints on cell motion. The pattern of cell motion transitions is also spatially noisier than for gene expression states. Some of this difference is technical. Cell motion has far fewer parameters than gene expression. However, much of the noise is biological and due to changes in collective cell behavior, not just the behavior of single cells. While our data do indicate a slowing of cell motion during PSM solidification from the anterior to the posterior, counter examples of the anterior PSM cells moving with a relatively rapid and processive PZ-type motion while the posterior PSM is more static can be readily identified. Thus, PSM solidification does not occur in a smooth anterior to posterior progression. 
These fluctuations between PZ- and PSM-type motion occur rapidly and involve small patches of tissue, suggesting that the anterior PSM remains close to the fluid-to-solid jamming transition (19). Creation of a straight body axis requires equally sized left and right paraxial mesoderm. However, paradoxically, the two PSMs often have divergent behaviors in both cell speed and directionality. These sorts of asymmetries have also been observed in the nascent PSM during gastrulation, suggesting that this is a general feature of zebrafish convergent extension (28). These results conflict with models of body elongation (derived from chick) where coordinated convergence movements in the PSMs compress the notochord, forcing it to elongate into the tailbud, thereby displacing PZ cells (29). Furthermore, the notochord can elongate faster than the paraxial mesoderm, such as when loss of tbx16 prevents PZ cells from entering the PSM (30). In these mutants, the elongating notochord buckles. Thus, in zebrafish, notochord elongation does not appear to be driven by convergence of the PSM. One method for achieving robust development from noisy processes is temporal averaging. If a parameter is integrated over a time span significantly longer than the period of fluctuations, then individual replicates will converge toward the mean (Fig. 7B). For gene expression states, cells are able to maintain their state in the face of transcriptional bursting by using the longer stability of proteins (31). Two drivers of PSM elongation are the addition of cells to the PSM from the PZ and the convergence of PSM cells. In wild-type embryos, the PZ has left-right cell movement oscillations on the order of tens of minutes (1). Wnt inhibition causes permanent asymmetries for an entire 3-hour time lapse and a bent body axis, indicating that the reversals are necessary to evenly disperse mesodermal progenitors to the left and right sides. We find similar fluctuations of convergence patterns in the PSM with periods on the order of tens of minutes to up to an hour. These fluctuations are not simply noisy single-cell behavior but rather abrupt changes in the collective motion of 10 to 100 cells. Thus, while a straight spine is essential for the biomechanics of the adult, transient small imbalances are tolerable in the embryo. How cell dynamics are tuned and averaged to obtain robust left-right symmetry remains an open question.

Data and code availability

The scRNA-seq data have been archived at the National Center for Biotechnology Information Gene Expression Omnibus (NCBI GEO; accession no. GSE173894).

Zebrafish methods

Tüpfel long fin zebrafish were raised according to standard protocols, and experiments were approved by the Institutional Animal Care and Use Committee. Experiments were performed before sex determination in zebrafish (32). FGF, BMP, and Wnt signaling perturbations were performed using protocols previously developed to modulate cell migration (9, 20). Specifically, starting at the six-somite stage, embryos were incubated in 50 μM SU5402 or 40 μM DMH1 for 2 hours to inhibit FGF or BMP signaling, respectively. Wnt signaling was inhibited by injecting notum-1 mRNA at a concentration of 150 ng/ml into embryos at the single-cell stage and then incubating them until the 10-somite stage. This treatment yields a phenotypic spectrum, and embryos with nascent body elongation defects were chosen for further experiments (20).
Tailbud dissections and scRNA-seq

Embryos were incubated until the 10- to 12-somite stage and then dissected in ice-cold Hank's balanced salt solution (HBSS). The tail was collected by cutting immediately posterior to the last formed somite. Groups of tails consisting of 10 tails for wild-type, FGF inhibition, or BMP inhibition, or 12 tails for Wnt inhibition, were pooled together. Cells were dissociated by incubation in papain solution (20 U/ml; Worthington Biochemical) for 15 min at 29°C with gentle agitation. Halfway through the incubation, the solution was triturated 10 times with a P200 pipette. Cells were spun down at 300g for 5 min and then resuspended in 40 μl of cold HBSS. Cell concentration and viability were checked with a hemocytometer, and the volume of the solution was adjusted if required.

Construction of 10x Genomics single-cell 3′ RNA-seq libraries (version 3) and sequencing with an Illumina HiSeq4000

GEM generation and barcoding. Single-cell suspension in RT Master Mix was loaded on the Single Cell A Chip and partitioned with a pool of about 750,000 barcoded gel beads to form nanoliter-scale gel beads in emulsions (GEMs). Each gel bead has primers containing (i) an Illumina R1 sequence (read 1 sequencing primer), (ii) a 16-nt 10x barcode, (iii) a 10-nt unique molecular identifier (UMI), and (iv) a poly-dT primer sequence. Upon dissolution of the gel beads in a GEM, the primers are released and mixed with cell lysate and master mix. Incubation of the GEMs then produces barcoded, full-length cDNA from polyadenylated mRNA.

Post GEM-RT cleanup, cDNA amplification, and library construction. Silane magnetic beads were used to remove leftover biochemical reagents and primers from the post-GEM reaction mixture. Full-length, barcoded cDNA was then amplified by polymerase chain reaction (PCR) to generate sufficient mass for library construction. Enzymatic fragmentation and size selection were used to optimize the cDNA amplicon size before library construction. R1 was added to the molecules during GEM incubation. P5, P7, a sample index, and R2 (read 2 primer sequence) were added during library construction via end repair, A-tailing, adaptor ligation, and PCR. The final libraries contain the P5 and P7 primers used in Illumina bridge amplification.

Sequencing libraries. The single-cell 3′ protocol produces Illumina-ready sequencing libraries. A single-cell 3′ library comprises standard Illumina paired-end constructs, which begin and end with P5 and P7. The single-cell 3′ 16-base pair (bp) 10x barcode and 10-bp UMI are encoded in R1, while R2 is used to sequence the cDNA fragment. Sequencing a single-cell 3′ library produces a standard Illumina BCL data output folder. The BCL data include the paired-end R1 (containing the 16-bp 10x barcode and 10-bp UMI) and R2 and the sample index in the i7 index read.

Preprocessing of scRNA-seq data

We aligned the scRNA-seq data to GRCz11 and demultiplexed using Cell Ranger (10x Genomics). After the generation of expression matrices for each sample, we used Seurat v3 for preprocessing and clustering of the scRNA-seq data (27). First, we excluded cells with an atypical number of genes or exceeding a specified percentage of mitochondrial genes, based on visual inspection of the distributions of these statistics. After filtering, we conducted integration following Seurat's SCTransform integration (33). We applied principal components analysis (PCA) and embedded the 30-dimensional PCA coordinates into a two-dimensional UMAP.
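For orientation, the filtering, PCA, and UMAP steps above can be sketched in Python with scanpy; the paper's actual pipeline used Seurat v3 in R, so this analogue (with a hypothetical input path and illustrative QC cutoffs) only mirrors the workflow, it is not the published code.

import scanpy as sc

adata = sc.read_10x_mtx("cellranger_out/filtered_feature_bc_matrix")  # hypothetical path

# QC filtering: drop cells with outlier gene counts or high mitochondrial
# content, with cutoffs chosen by inspecting the distributions (values illustrative).
adata.var["mt"] = adata.var_names.str.startswith("mt-")
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], inplace=True)
adata = adata[(adata.obs.n_genes_by_counts > 200)
              & (adata.obs.n_genes_by_counts < 6000)
              & (adata.obs.pct_counts_mt < 10)].copy()

sc.pp.normalize_total(adata)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata)
sc.pp.scale(adata)

# 30 principal components embedded into a 2D UMAP, then graph-based clustering
# (scanpy's counterpart to Seurat's FindClusters at resolution 0.5).
sc.pp.pca(adata, n_comps=30)
sc.pp.neighbors(adata, use_rep="X_pca")
sc.tl.umap(adata)
sc.tl.leiden(adata, resolution=0.5)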
We clustered cells with the Seurat function "FindClusters" with a resolution parameter of 0.5.

Pseudotime estimation of scRNA-seq

To recover cell state dynamics encoded in the gene expression data, we ordered a subset of scRNA-seq cells belonging to the axis from neural tube to PSM so that the ordering recapitulates the developmental trajectory during body elongation. In particular, we embedded the z scores of the 30-dimensional PCA coordinates of cells belonging to specified clusters (Sox3+, Sox2+, DM, PZ, pPSM, and aPSM) into one-dimensional UMAP coordinates. Here, we expected that the most variable axis within gene expression space during this process would be the developmental trajectory. For UMAP embedding, we used the "umap-learn" package in Python and set "n_neighbors" to 400 and "min_dist" to 0.1.

Segmentation of scRNA-seq pseudotime

To dissect the dynamics along the progression of cell state transitions during zebrafish body elongation, we segmented the pseudotime trajectory of the scRNA-seq data into several segments within which each cell c has similar z scores of the 30-dimensional PCA coordinates $x_c$. We used a Bayesian change detection algorithm (23) to find the break points of segments $b_k$ ($k = 1, \ldots, K$), which minimize the total error from the mean profile of each segment, $\sum_k E_k$, where

$E_k = \sum_{c\,:\,b_{k-1} < \tau_c \le b_k} \lVert x_c - \bar{x}_k \rVert^2$

$\bar{x}_k$ is the mean profile of segment k, and $\tau_c$ is the discretized rank of the estimated pseudotime. We discretized the pseudotime rank into 30 bins for computational efficiency. We determined K as five for the scRNA-seq data using the elbow method (34), which chooses a saturation point along the group variation curve as a function of the number of clusters.

Estimation and segmentation of cell movement pseudotime

To recover cell state changes along the developmental axis for cell motion, we also ordered the cell track trajectories into a one-dimensional pseudotime. It was constructed using four wild-type embryos. For this purpose, we applied a VAE (35) to distance matrices of cell trajectories between eight subsequent time points. We defined the input for cell c at time point t as the distance matrix $D_{c,t} \in \mathbb{R}^{8 \times 8}$ between the cell's positions at time points $t, \ldots, t+7$. Here, we assume that these distance matrices are generated from a one-dimensional latent cell state $z_{c,t} \in \mathbb{R}$:

$P(z_{c,t}) = \mathcal{N}(z_{c,t} \mid 0, 1)$

$P(D_{c,t} \mid z_{c,t}) = \mathcal{N}[D_{c,t} \mid \mu_\theta(z_{c,t}), \sigma^2_\theta(z_{c,t})]$

where $\mu_\theta, \sigma^2_\theta : \mathbb{R} \to \mathbb{R}^{8 \times 8}$ are decoder neural networks with convolutional layers, the implementation details of which are shown in fig. S8. To optimize this probabilistic model efficiently and approximate the posterior distribution $P(z_{c,t} \mid D_{c,t})$, we defined the variational posterior distribution $q_\phi(z_{c,t} \mid D_{c,t}) = \mathcal{N}[z_{c,t} \mid \mu_\phi(D_{c,t}), \sigma^2_\phi(D_{c,t})]$, where $\mu_\phi, \sigma^2_\phi : \mathbb{R}^{8 \times 8} \to \mathbb{R}$ are encoder neural networks with convolutional layers, the implementation details of which are shown in fig. S8. Here, we derived the evidence lower bound

$\mathcal{L}(\theta, \phi) = \mathbb{E}_{q_\phi(z_{c,t} \mid D_{c,t})}[\log P_\theta(D_{c,t} \mid z_{c,t})] - D_{\mathrm{KL}}[q_\phi(z_{c,t} \mid D_{c,t}) \,\|\, P(z_{c,t})] \approx \log P_\theta(D_{c,t} \mid \tilde{z}_{\phi,c,t}) - D_{\mathrm{KL}}[q_\phi(z_{c,t} \mid D_{c,t}) \,\|\, P(z_{c,t})]$

where $\tilde{z}_{\phi,c,t}$ is drawn from $q_\phi(z_{c,t} \mid D_{c,t})$ using reparametrized sampling (35). We optimized the parameters of the encoder and decoder networks using Adam, as implemented in PyTorch. We adopted early stopping with patience = 30, as implemented in PyTorch Lightning. We assigned $z_{c,t}$ as the cell movement pseudotime of cell c at time point t.
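As a rough illustration, a minimal VAE of this form can be written in PyTorch as below. This is only a sketch of the model described above: it substitutes fully connected layers for the convolutional encoder/decoder of fig. S8, trains on random stand-in distance matrices, and its layer sizes and training settings are illustrative rather than the paper's.

import torch
import torch.nn as nn

class TrackVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU())
        self.mu = nn.Linear(32, 1)        # posterior mean mu_phi(D)
        self.logvar = nn.Linear(32, 1)    # posterior log-variance
        self.dec = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 64))

    def forward(self, d):                 # d: (batch, 8, 8) distance matrices
        h = self.enc(d)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparametrized sample
        return self.dec(z).view(-1, 8, 8), mu, logvar

def neg_elbo(recon, d, mu, logvar):
    rec = ((recon - d) ** 2).sum(dim=(1, 2)).mean()              # reconstruction error
    kl = (-0.5 * (1 + logvar - mu ** 2 - logvar.exp())).sum(dim=1).mean()  # KL to N(0,1)
    return rec + kl

model = TrackVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
d = torch.rand(256, 8, 8)                 # hypothetical batch of distance matrices
for _ in range(200):
    recon, mu, logvar = model(d)
    loss = neg_elbo(recon, d, mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()
pseudotime = model.mu(model.enc(d)).squeeze(1)  # posterior mean as pseudotime z

In the same spirit, tracks from additional embryos can be assigned pseudotime either by encoding their distance matrices with the trained encoder or, as in the paper, by taking the coordinate of the most similar track already in the sequence.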
We segmented the cell movement pseudotime using the same methodology as for the segmentation of the scRNA-seq pseudotime, except that the properties $x_{c,t}$ for minimizing within-group variation are the z scores of the track displacement or speed and track straightness for the eight time points. We specified the number of change points K using the elbow method (34), which chooses a saturation point along the group variation curve as a function of the number of clusters. For the three additional wild-type embryos and all embryos subject to signaling perturbations, we mapped their tracks into the pre-established pseudotime by assigning each track the pseudotime coordinate of the closest track in the pseudotime sequence. This approach produced consistent results and is more computationally efficient than continually reconstructing the pseudotime each time new data are analyzed.

PSM quantification

ROIs consisting of the anterior 125 μm of the left and right PSM were constructed. Tracks were assigned to the ROI if the cell was within the region at the start of the track. Mean displacement along the medial-lateral axis and the relative abundance of PZ-type tracks were calculated at each time point.

Multicolor fluorescent in situ hybridization

Probes for sox2, brachyury, tbx16, and tbx6 were purchased from Molecular Instruments. The hairpins and colors are listed in Table 1. Staining of 10- to 12-somite embryos was performed using their recommended protocol (36) with a few modifications. Specifically, batches of 15 embryos were stained simultaneously. The tbx6 probe was diluted 1:10 to avoid excessive bleed-through into the sox2 channel. DAPI (4′,6-diamidino-2-phenylindole) was added to the amplification mixture. After staining, embryos were taken through a series of 25%/50%/75% glycerol in phosphate-buffered saline. The posterior half of the embryo was isolated and mounted dorsal side up in 75% glycerol. Embryos were imaged with a Zeiss LSM 880 Airyscan confocal microscope using a 20× objective. Preprocessing of the microscopy images was done using ImageJ. The sox2 and tbx6 channels were subtracted from each other to eliminate bleed-through. Images were rotated to a consistent orientation, and a maximum intensity projection was created. Adaxial cells were identified in the DAPI channel and manually removed from the image. The midline separating the embryo into left and right halves was identified manually. Subsequent quantification was performed in MATLAB. The image was smoothed with a Gaussian filter, and the ROI was thresholded using Otsu's algorithm on both the tbx16 and tbx6 channels. For wild-type, BMP-inhibited, and FGF-inhibited embryos, average fluorescent intensity was measured along the x axis of the image and normalized to the maximum value. This was done separately for the left and right sides of the embryo. The PZ/PSM boundary was taken to be the first point with a value greater than 20% of the maximum tbx6 value. The anterior end of the PSM was defined as the last point greater than 85% of the maximum tbx6 value. The scaled PZ length was the PZ length divided by the distance from the end of the tail to the anterior boundary of the PSM. Wnt-inhibited embryos had some modifications to the quantification. In bent embryos, the boundary separating the left and right halves was taken to be a line through the midpoint of the tailbud brachyury signal to the notochord, then following the notochord toward the head. For wild-type and Wnt-inhibited embryos, the outer perimeter of the embryo was traced manually; the handling of this curve continues after the sketch below.
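The threshold-based boundary calls described above can be sketched in a few lines of Python; the intensity profile here is a hypothetical stand-in for a measured tbx6 profile, and the tail end is assumed to sit at index 0.

import numpy as np

x = np.linspace(0.0, 1.0, 500)
profile = np.exp(-((x - 0.55) / 0.2) ** 2)        # stand-in for the measured tbx6 profile
profile /= profile.max()                           # normalize to the maximum value

pz_psm = int(np.argmax(profile > 0.20))            # first point above 20% of max tbx6
anterior = int(np.where(profile > 0.85)[0].max())  # last point above 85% of max tbx6
scaled_pz = pz_psm / anterior                      # PZ length over tail-to-anterior-PSM distance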
The curve was smoothed with a Savitzky-Golay filter and defined as the embryo's axis. Pixels in the ROI were mapped to their closest points on the perimeter using the distance2curve function from J. D'Errico. The mean intensity along the axis was calculated using a sliding window. The same thresholds were used for the PZ and PSM as described previously. The boundaries for these regions were taken to be a perpendicular dropped from the axis at the cutoff point. The scaled PZ area was the PZ area divided by the area of the PZ plus PSM. Statistics were calculated using the Mann-Whitney U test.

Supplementary Materials

This PDF file includes: Figs. S1 to S8; Legend for movie S1. Other Supplementary Material for this manuscript includes the following: Movie S1.
Design Analysis and Experimental Research of Multi-Segment Linear Reflector

The parabolic reflector provides one of the common sound wave convergence methods in acoustic engineering. To reduce the difficulty of fabricating large-size reflectors and improve the convergence effect, this paper proposes a multi-segment linear reflector design method. Through the normal vibration radiation analysis of the flextensional transducer shell, the reflector interface reflection was optimized, the reflection effect was improved, and the reflector size was reduced. Transmission theory was used to analyze the impact of different material parameters and sound wave frequencies on the reflector transmission coefficient, and the gain effect of the multi-segment linear reflector was then tested. The results show that the reflector has a significant gain effect in the test frequency band of 20~100 Hz, with the gain at 70 Hz reaching a maximum of 11.2 dB.

Introduction

Using the curved baffle reflection and focusing principle, the reflector converges sound waves according to a certain geometric law, which offers one of the effective methods for concentrating sound wave energy and enlarging propagation distance in acoustic engineering. It is often used for focusing of shock waves or sound waves in water [1-3]. As early as 1947, a conical reflector was used to improve the directivity of the line source [4]; in 1968, a cylindrical parabolic reflector was used to focus the sound of a cylindrical transducer, and the transducer placed at different axial positions relative to the focal point produced different radiated sound fields [5]. Later, parabolic and ellipsoidal reflectors were developed. In recent years, Lei Kaizhuo, Ma et al. [6-8] studied elliptical reflectors, finding that machining and installation errors could displace the first focus of the reflector and cause deviations between the sound wave convergence point and the second focus, thereby degrading the focusing performance of the sound field. Based on ray theory, He Xudong et al. [9] analyzed and proposed calculation formulas for the ellipsoidal, parabolic and spherical reflectors. The analysis showed that the ellipsoid has the best convergence effect for a point sound source, the paraboloid is able to converge diffused sound, while the sphere exhibits diffraction when converging a point sound source and is thus inappropriate for use as a reflector. Moosad et al. [10] studied the use of a parabolic reflector to enhance the directivity of the flextensional transducer, using numerical simulation and experiments to study the effect of the specific placement position and orientation of the flextensional transducer on the directivity. In summary, the parabolic reflector can converge the diffused sound of the flextensional transducer, but processing errors of the reflector impair the convergence effect. Moreover, a large-size parabolic reflector is costly and difficult to manufacture. Therefore, this paper proposes a parabolic reflector design method based on multi-segment linear approximation, and uses transmission theory to analyze the impact of different material parameters and sound wave frequencies on the reflector's transmission coefficient. Guided by this analysis, a multi-segment linear reflector was produced and experiments were carried out to verify its convergence gain effect.
The principle of parabolic reflection

Commonly used in acoustic focusing design, parabolic reflectors are also suitable for the focusing of flextensional transducers. A parabolic reflector redirects the sound waves radiated by a pulsed sound source located at the focal point along the axis of the parabola to generate cylindrical waves and reduce spreading loss. The parabolic reflector of the flextensional transducer adopts the MajR scheme, as shown in Fig. 1. The parabolic equation can be expressed as

y² = 2px (1)

The distance from the shell center to the vertex of the parabola is set to x₀. The tangent of the parabola at a point (x₁, y₀) satisfies

y y₀ = p(x + x₁) (2)

so the tangent slope at that point is p/y₀. To simplify the analysis, it is assumed that the sound wave radiates from the shell center, propagates to the reflector, and is reflected straight forward. The reflection point then lies directly above the shell center (x₁ = x₀), and the slope of the tangent at this point on the parabola is 1, giving y₀ = p according to the above formula. Substituting y₀² = 2px₁ then yields p = 2x₀, so the parabolic equation at this time is

y² = 4x₀x (3)

For a flextensional transducer with an elliptical shell (major axis 824 mm, minor axis 243 mm), take the distance from the transducer center to the vertex of the parabola x₀ = 550 mm; the parabolic equation is then

y² = 2200x

The curve of the parabolic reflector obtained at this time is shown in Fig. 1. Since the elliptical shell vibrates in the outer normal direction and excites the sound wave, the sound wave propagates in the normal direction and is reflected when it encounters the parabola. The propagation direction after reflection is shown in the figure. Only the sound wave radiated at the shell center is reflected by the parabolic reflector and propagated along the x-axis direction, while the remaining reflected rays merely move closer to that propagation direction.

Multi-segment linear reflection curve design

In view of the above-mentioned problems in the reflection of the elliptical shell structure, we consider a multi-segment design of the reflection curve. First, determine multi-point parameters on the reflection curve, and then obtain the reflection section curve through a spline curve. In this method, the tangent slopes of multiple points are known, while the specific positions of the points are unknown, which makes curve fitting troublesome. We therefore consider the use of multiple straight lines in place of curves. The specific operations are:

(1) Starting from the center position (symmetrical on both sides) along the major axis of the elliptical shell, select a number of points at a certain distance interval in the negative direction of the x-axis, plot auxiliary lines based on these points (plus the center point) along the positive direction of the y-axis, and intersect them with the ellipse; take the elliptic normal at each intersection with the ellipse (the normal bisects the angle subtended at the point by the two focal points);

(2) Extend each normal to the parabola, and at the intersection with the parabola, draw a straight line parallel to the x-axis (that is, the direction after sound wave reflection); take the angular bisector between the elliptic normal and the reflection line. The line perpendicular to this bisector is a straight segment that reflects the sound waves parallel to the x-axis. Connect multiple such straight segments to obtain the reflector curve;

(3) Draw the multi-segment straight line starting from the segment closest to the origin.
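As a quick numerical check of this geometry in Python (using the dimensions above), note that the shell center at (x₀, 0) coincides with the focus of y² = 4x₀x, so any ray launched from that point reflects parallel to the x-axis; rays radiated along the normals of other shell points do not, which is what motivates the multi-segment construction. The specular reflection law r = d − 2(d·n)n is standard; everything else follows from the equations above.

import numpy as np

x0 = 550.0                       # shell center to vertex distance (mm)
p = 2 * x0                       # slope 1 at y = p gives y^2 = 2*p*x = 2200*x
focus = np.array([p / 2, 0.0])   # the focus coincides with the shell center

def hit_and_reflect(d):
    """Intersect a unit-direction ray from the focus with y^2 = 2*p*x and reflect it."""
    dx, dy = d
    # (t*dy)^2 = 2*p*(x0 + t*dx): quadratic in the ray parameter t; keep the positive root
    a, b, c = dy**2, -2 * p * dx, -2 * p * focus[0]
    t = (-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)
    y = t * dy
    n = np.array([-2 * p, 2 * y])            # gradient of F(x, y) = y^2 - 2*p*x
    n /= np.linalg.norm(n)
    return d - 2 * np.dot(d, n) * n          # specular reflection

for ang in np.deg2rad([45, 90, 135]):
    r = hit_and_reflect(np.array([np.cos(ang), np.sin(ang)]))
    print(ang, r)                             # each reflected ray is ~(1, 0): parallel to +x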
The key now is how to determine the boundaries of the multi-segment straight line. After analysis and comparison, we use the intersection of each straight segment with the angular bisector of the elliptic normal as the boundary between segments. To further reduce the reflector size, the starting points of the multiple straight lines can be changed to obtain multiple sets of reflector curves. In Fig. 3, the y-axis size (single side) of the minimum reflector can be reduced to about 1000 mm, nearly 50% smaller than the corresponding parabola. For the minimal reflector, the reflection over the entire curve can be obtained; the short arrows show the deviation of the reflection direction on each straight segment. It can be seen that the reflection direction on each segment basically follows the x direction.

Transmission analysis of the reflector

For a reflector layer of thickness D (density ρ₂, sound speed c₂) immersed in a medium of density ρ₁ and sound speed c₁, matching the pressure and normal velocity at the two interfaces gives the ratio of the transmitted sound pressure to the incident sound pressure, as well as the ratio of the transmitted wave sound intensity to the incident wave sound intensity. That is, the sound intensity transmission coefficient can be expressed as

t_I = 4 / [4cos²φ + (Z₂/Z₁ + Z₁/Z₂)² sin²φ]

In the formula, Z_i = ρ_i c_i / cosθ_i are the normal acoustic impedances, φ = k₂D cosθ₂ is the phase thickness of the layer, k₂ = 2πf/c₂, and the angles θ₁ and θ₂ are related by Snell's law. It can be seen from the formula that the lower the sound velocity and the higher the frequency in the reflector material, the smaller the required material thickness. Even choosing polyethylene, with its low sound speed, the thickness required to block transmission entirely is basically impossible to realize in actual operation. Therefore, the reflector necessarily has some transmission, which is calculated with the expression for the transmission coefficient. The reflector can be made of different materials, so the critical angle of total internal reflection differs; the incident angle θᵢ, the material density, sound velocity and thickness, and the sound wave frequency all affect the transmission coefficient. Figure 4 shows the change of the transmission coefficient with the incident angle under different material parameters and sound wave frequencies.

Fig. 4 Impact of different material parameters and sound wave frequency on the transmission coefficient: (a) material density; (b) material sound velocity; (c) material thickness; (d) sound wave frequency.

Based on the above analysis, it is concluded that the transmission coefficient t_I is inversely related to the reflector material thickness D, the frequency f, and the material density ρ₂, and is positively related to the incident angle θᵢ. That is, for a better reflector effect, a material with larger density ρ₂ and a structure with greater thickness D should be selected; comparing different thicknesses D and material densities ρ₂, the combination with the larger product of the two should be selected. Finally, in view of the actual processing and production conditions, nylon was selected for the production of the multi-segment linear reflector.

Experiment and results

In this paper, an elliptical cylindrical flextensional transducer was used, and the designed and manufactured multi-segment linear reflector was tested for radiated sound pressure. The sound pressure test proceeds as follows: the audio signal generator generates sinusoidal signals of different frequencies, which are amplified by the power amplifier and loaded on the input end of the transducer to drive the transducer and radiate sound waves externally. A microphone (B&K 4193) placed in front of the transducer detects the sound wave signal, and the signal is amplified, collected and stored by the level recorder or data acquisition device.
The microphone needs to be calibrated with a sound calibrator before use. Through testing, we obtained the sound pressure waveforms at 10 m in front of the transducer with and without the multi-segment linear reflector installed, and the variation of the sound pressure level with frequency was derived through further calculation, as shown in Figure 5. It can be seen from the figure that as the frequency increases, the sound pressure level basically maintains an increasing trend, and the installation of the reflector does not alter the overall variation trend with frequency. Without the reflector, the maximum sound pressure level is 100.2 dB (100 Hz). After installation of the reflector, the sound pressure level over the entire test frequency band of the flextensional transducer is greatly improved, with the maximum sound pressure level reaching 110.4 dB (100 Hz). Compared with the case without the reflector, the maximum gain occurs at 70 Hz, reaching 11.2 dB. This proves that the design method of the multi-segment linear reflector proposed in this paper is feasible, with a significant gain effect. To further study the gain effect of the multi-segment linear reflector, especially relative to a parabolic reflector, numerical simulation can be adopted, since the large size of the reflector used in this experiment would make a comparable parabolic reflector costly and difficult to produce.

Conclusion

This paper proposes a multi-segment linear reflector design method. Through the normal vibration radiation analysis of the flextensional transducer shell, the reflector interface reflection is optimized to improve the reflection effect and reduce the reflector size. Transmission theory is used to analyze the impact of different material parameters and sound wave frequencies on the transmission coefficient, so that reflector materials and parameters can be selected under its guidance. Finally, the gain effect of the multi-segment linear reflector was experimentally tested. The results show that the reflector has a significant gain effect in the frequency range of 20~100 Hz, with the maximum gain of 11.2 dB occurring at 70 Hz. The multi-segment linear reflector proposed herein is applicable to the production of large-size reflectors and can provide a technical reference for related needs.
Proteomic study revealed antipsychotics-induced nuclear protein regulations in B35 cells are similar to the regulations in C6 cells and rat cortex

Background: Based on accumulating evidence, the regulation of protein expression by antipsychotic drugs (APDs) might be closely related to the control of psychotic symptoms when these drugs are used to treat mental disorders. The low quantity of nuclear proteins in the cell hinders their detection because signals from rare proteins are masked in most proteomic detection systems.

Methods: Nuclear proteins fractionated from APD-treated B35 cells were labeled with iTRAQ and detected by LC/MS/MS to investigate APD-induced alterations in nuclear protein expression. Western blotting, immunofluorescent cell staining, and immunohistochemical staining were applied to validate the findings.

Results: According to our proteomic data, the expression of ADP/ATP translocase 2, heat shock cognate 71 kDa protein, histone H1.2, histone H3.3, histone H4, non-POU domain-containing octamer-binding protein, nucleolin, nucleophosmin, prelamin-A/C, plectin-1, vimentin, and 40S ribosomal protein S3a was regulated by APDs in B35 cells. According to the gene ontology analysis, all these proteins play important roles in biological processes or molecular functions in cells. Western blot results showing APD-induced alterations in nuclear protein expression in B35 cells were consistent with the LC/MS/MS results. Heat shock cognate 71 kDa protein and vimentin expression in C6 cells were not affected by the three APDs. As shown by immunofluorescent cell staining, all three APDs altered protein expression to similar extents. We also examined whether the expression of these proteins was affected by APDs in the prefrontal cortex of rats administered sub-chronic and chronic APD treatments by western blotting and immunohistochemical staining.

Conclusions: The findings of the proteomic analysis of APD-treated B35 cells were recapitulated in the APD-treated rat cortex. The expression of some proteins was altered by APDs in the rat prefrontal cortex in a time-dependent manner.

Electronic supplementary material: The online version of this article (10.1186/s40360-018-0199-0) contains supplementary material, which is available to authorized users.

Background

Antipsychotic drugs (APDs) have been proposed to regulate gene and protein expression in the brain [1-3]. Based on accumulating evidence, APDs also ease psychotic symptoms and induce side effects by regulating gene or protein expression levels [3]. Most psychotic disorders, including schizophrenia, are genetically complex diseases with unclear pathogenic mechanisms. In addition, the complexity of the actions of antipsychotic drugs on regulating psychotic symptoms and other drug effects, originating from the variety of binding profiles of each APD [4], has hampered our ability to clarify the molecular mechanisms underlying the actions of antipsychotic drugs. Recently, studies of DNA methylation, post-translational modifications (PTMs) of histone proteins and non-coding RNA-regulated protein translation have suggested that proteins in cell nuclei might play important roles in the epigenetic regulation of cell functions [5, 6]. In addition, protein functions related to the regulation of gene and protein expression in cell nuclei are also major targets for studying the mechanisms of action of APDs and the aetiology of mental disorders.
O'Brien and colleagues used 2-dimensional polyacrylamide gel electrophoresis (2D-PAGE) to discover the differential protein expression profiles between the risperidone-treated and control rat striatum and to reveal the possible causes of extrapyramidal symptoms (EPS) induced by antipsychotics [7]. Ji and colleagues also used 2D-PAGE to examine the effects of chlorpromazine, clozapine and quetiapine on mitochondria from the rat cerebral cortex and hippocampus and found that abnormal cerebral energy metabolism might be involved in both the curative effects and side effects of antipsychotics [8]. Kedracka-Krok and colleagues used 2-dimensional difference gel electrophoresis (2D-DIGE) to examine protein expression profiles in the cerebral cortex of rats treated with clozapine or risperidone [9]. Clozapine was shown to regulate proteins related to the cytoskeletal structure and calcium homeostasis. Liquid chromatography-tandem mass spectrometry (LC/MS/MS) has been widely applied in proteomic studies over the last two decades to identify possible biomarkers for studying various diseases [10-12]. LC/MS/MS separates and detects digested peptides to identify detected proteins utilizing peptide spectrum algorithms in a peptide identification database. It also performs relative quantification, with or without isotope labelling, to determine and compare the expression of proteins in different samples [10]. However, the complexity of the sample used for the analysis represents a critical issue in the ability to produce reliable detection results, even when a sensitive and accurate LC/MS/MS platform that has been improved by developing detection techniques is available. Another breakthrough is the use of stable isotope reagents, such as isobaric tags for relative and absolute quantitation (iTRAQ) reagents [13, 14], which were developed for labelling peptides to enable multiplex sample analysis during LC/MS/MS detection processes [15, 16]. These stable isotope reagents enable the identification and quantification of multiple samples in one detection run to reduce the possible bias between batches. The enrichment of particular types of proteins, such as glycoproteins (glycome) or phosphoproteins (phosphoproteome) [17, 18], is another way to discover specific low-abundance proteins in a crude protein mixture in studies of schizophrenia. The fractionation of proteins according to their molecular properties or their location in cells also reduces the sample complexity. In this study, nuclear proteins purified from APD-treated B35 cells were labelled with iTRAQ reagents and analysed by LC/MS/MS to discover the regulatory effects of APDs on protein expression in nuclei. Differentially expressed nuclear proteins detected by LC/MS/MS were then further validated in APD-treated B35 cells, C6 cells and the rat prefrontal cortex by western blotting, immunofluorescent staining and/or immunohistochemical staining.

Cell culture and APD treatments

Risperidone was obtained from Janssen-Pharmaceutica (Beerse, Belgium), and haloperidol and clozapine were obtained from Sigma-Aldrich (St. Louis, MO, USA). C6 and B35 cells were obtained from the Bioresource Collection and Research Center (BCRC) of the Food Industry Research and Development Institute (FRDI), Taiwan. C6 cells were cultured in DMEM high-glucose medium (Invitrogen Life Technologies, Carlsbad, CA, USA) supplemented with 2 mM L-glutamine, 10% horse serum (Invitrogen Life Technologies), and 2% fetal bovine serum (Invitrogen Life Technologies).
B35 cells were maintained in MEM medium (Invitrogen Life Technologies) supplemented with 10% fetal bovine serum (Invitrogen Life Technologies). One of the APDs (haloperidol, risperidone, or clozapine) was added to the medium once a day. The concentration of each APD used in this study was determined based on results from our previous publication. A 1× PBS solution was added to the culture medium to serve as the control supplement. The final concentration of each APD in the culture medium was 4 μg/ml (10 μM) for haloperidol, 4 μg/ml (8 μM) for risperidone, and 2.5 μg/ml (7.65 μM) for clozapine. C6 cells were subcultured when they exhibited a density of approximately 70% confluence to prevent abnormal production of the S100 protein. Cells were harvested when they had reached 60-70% confluence after treatment with APDs for 5 days.

Animals and drug treatment

All experimental procedures used to treat animals in this study were approved by the Institutional Animal Care and Use Committee (IACUC) of Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation (101-IACUC-028). Male Sprague-Dawley (SD) rats weighing 120-150 g were housed in a temperature- and humidity-controlled feeding room with a 12-h light/dark cycle and had free access to water and food for 5 days before the APD treatment. Two independent APD treatment batches were performed in this study. In both batches, rats (n = 3 per group) were intraperitoneally injected with haloperidol (1 mg/kg), risperidone (1 mg/kg), clozapine (20 mg/kg), or PBS once daily for 7 days for the sub-chronic APD treatment. For the chronic treatment, two independent APD treatment batches (n = 3 per group in each batch) of SD rats received haloperidol (1 mg/kg), risperidone (1 mg/kg), clozapine (20 mg/kg), or vehicle once daily for 28 days. SD rats were sacrificed by CO2 asphyxiation 24 h after the last injection. Dissected cerebral cortices were stored in liquid nitrogen or were soaked in 4% paraformaldehyde for 24 h for paraffin embedding.

Nuclear protein extraction

C6 or B35 cells treated with each APD were harvested for nuclear protein extraction using a Nuclear Extraction Kit (Chemicon International Inc., California, USA), according to the manufacturer's instructions. Briefly, cells were harvested and resuspended in 1× Cytoplasmic Lysis Buffer containing 0.5 mM DTT and a Protease Inhibitor Mix (GE Healthcare Bio-Science, Uppsala, Sweden). The mixture was then placed on ice for 15 min and centrifuged at 250×g for 5 min at 4°C. The cell pellet was resuspended in 1× Cytoplasmic Lysis Buffer and the cells were disrupted using a syringe with a 27-gauge needle, followed by centrifugation at 8000×g for 20 min at 4°C. The supernatant was collected as the cytoplasmic protein fraction. The nuclear pellet was washed and lysed with Nuclear Extraction Buffer at 4°C for 60 min, and the nuclear extract was then centrifuged at 16,000×g for 5 min at 4°C. The supernatant was transferred to a fresh tube and served as the nuclear protein fraction.

iTRAQ labelling and LC/MS/MS proteomic analysis

Fractionated nuclear proteins from APD-treated B35 cells were labelled with iTRAQ reagents (Applied Biosystems), according to the manufacturer's instructions. Briefly, 10 μg of nuclear proteins from B35 cells were reduced with a reducing reagent at 60°C for 1 h. Cysteine residues were then blocked with cysteine blocking reagent for 10 min at room temperature, followed by trypsin digestion at 37°C for 16 h.
iTRAQ 4-plex labelling reagents were then added to each peptide sample, and the mixtures were further incubated at room temperature for 60 min. The peptide samples were then combined and evaporated in a vacuum dryer at room temperature. The dried residue was dissolved in 50 μl of ddH2O. Five microliters of the iTRAQ-labelled residue were analysed using an HPLC system (Agilent) coupled with a 3200 Q TRAP LC/MS/MS System (MDS SCIEX, Applied Biosystems). The iTRAQ-labelled residue was separated on a reverse-phase C18 column at a flow rate of 10 μl/min. The eluent gradient consisted of buffer A (0.1% formic acid in H2O) and buffer B (0.1% formic acid in acetonitrile). Peptides were subjected to electrospray ionization followed by tandem mass spectrometry. The collected data were then analysed to identify and quantify the proteins using the ProteinPilot™ 3.0 (MDS SCIEX) software.

Western blot analysis

Nuclear protein extracts from both cell lines and the rat cerebral cortex were prepared as described above. Approximately 5-20 μg of nuclear proteins were separated on 12.5% sodium dodecyl sulphate polyacrylamide gels. The resolved proteins in the gel were then transferred to polyvinylidene difluoride membranes. After blocking, membranes were incubated with specific primary antibodies for 14 h at 4°C. After washes with 1× PBS, membranes were incubated with horseradish peroxidase-conjugated goat anti-mouse or anti-rabbit antibodies (Cat. # 401215 and # 401315, Calbiochem, Darmstadt, Germany) at room temperature for 1 h. Target protein bands were developed using the Amersham ECL kit (Amersham, Bucks, UK). We confirmed the absence of GAPDH in the nuclear fraction and the absence of lamin B in the cytosolic fraction to control the quality of the nuclear extraction processes. Beta-actin served as a quantification control.

Immunofluorescent staining of C6 and B35 cells

C6 or B35 cells were seeded onto coverslips in a 6-well plate at a density of 5 × 10³ cells per well and cultured in medium for 5 days with APDs added daily, as described in the drug treatment section above. After APD treatment, coverslips with C6 or B35 cells were transferred to a fresh 6-well plate and washed with 1× PBS. Cells were then fixed by incubating the coverslips in paraformaldehyde for 15 min. Coverslips were then washed with 1× PBS. Cells were further permeabilized with methanol and incubated at −20°C for 15 min. After blocking with BSA for 15 min at room temperature, coverslips were incubated with a specific antibody for 1 h at room temperature. Coverslips were washed three times with 0.1% PBST and then incubated with specific secondary antibodies for 1 h at room temperature. Antibodies were removed and the coverslips were washed with 0.1% PBST, followed by 4′,6-diamidino-2-phenylindole (DAPI) staining for 4 min. Coverslips were then washed with 1× PBS, mounted on slides with mounting gel, and sealed with nail polish. Quantification of the fluorescent intensity was performed using the image quantification software ImageJ 1.50i obtained from the NIH website (http://imagej.nih.gov/ij/). Areas of interest on the image were selected using the polygon selection method to quantify the immunofluorescent staining. The mean intensities of all areas were then calculated to represent the expression level of the detected protein.

Immunohistochemistry analysis of rat prefrontal cortex

Paraffin-embedded brain samples were sectioned at 5 μm and placed on silane-coated glass slides.
Slides containing sections were incubated at 70°C for 2 h and then immediately immersed in Trilogy solution (Cell Marque, BH, The Hague, NL) to remove the paraffin. Slides were pressure heated in Trilogy solution at 120°C for 15 min to remove traces of paraffin and to retrieve the antigens in the samples. After washing the slides twice with ddH2O, tissue sections were incubated with 3% H2O2 in PBS to eliminate endogenous peroxidase activity. Sections were then blocked with 2% BSA in PBS for 20 min and incubated with a specific antibody for 2 h at 37°C. Slides were washed twice with PBS and incubated with a specific secondary antibody for 30 min at room temperature. Slides were washed twice with PBS, followed by application of the substrate solution, 3,3′-diaminobenzidine tetrahydrochloride (DAB), to develop the result. The developing reaction was stopped by immersing the slides in ddH2O twice, followed by the addition of Mayer's haematoxylin for counterstaining (Thermo Fisher Scientific). Slides were then rinsed with ddH2O three times, cleared in a non-xylene solution, and mounted with coverslips and the slide mounting medium DPX Mountant (Sigma). To determine changes in protein expression in the immunohistochemistry analysis, two independent colleagues were invited to compare the changes in protein expression on images of the immunohistochemical staining.

Total RNA isolation and gene expression assay

Total RNA was extracted from sub-chronic APD-treated C6 or B35 cells using TRIZOL Total RNA Isolation Reagent (Invitrogen Life Technologies), according to the manufacturer's instructions. Briefly, cells were directly lysed and homogenized in 1 ml of TRIZOL denaturing reagent in a 10-cm cell culture dish and then transferred to 1.5-ml centrifuge tubes. Homogenates were incubated on ice for 5 min, vortexed vigorously, and centrifuged at 12,000×g for 10 min at 4°C. Supernatants were then transferred to fresh tubes. Two hundred microliters of chloroform were added to each ml of supernatant, and the samples were then vigorously mixed and incubated on ice for 5 min, followed by centrifugation at 12,000×g for 10 min at 4°C. Each aqueous phase was transferred to a fresh tube, and the chloroform extraction procedure was repeated. Each aqueous phase was collected and mixed with 0.7 volume of 2-propanol. The mixtures were incubated at −20°C for 1 h and centrifuged at 12,000×g for 30 min at 4°C. RNA pellets were washed twice with 75% ethanol, air dried, and dissolved in RNase-free water. Reverse transcription was performed using High Capacity cDNA Reverse Transcription Kits (Applied Biosystems), according to the manufacturer's instructions. Real-time quantitative polymerase chain reactions (RT-QPCR) were performed using an ABI 7900 HT Fast Real-Time PCR System in combination with continuous SYBR Green detection (Applied Biosystems) after the reverse transcription reaction. RT-QPCR was performed in a reaction volume of 20 μl containing 2 μl of diluted cDNAs, 10 μl of SYBR Green PCR Master Mix (Applied Biosystems), 2 μl each of sense and antisense primers (2 mM), and 4 μl of H2O. The primer sequences for each gene used in this study are listed in Additional file 1: Table S1. The general PCR conditions were: polymerase activation at 95°C for 10 min, followed by 50 cycles of denaturation at 95°C for 15 s, annealing at 60°C for 30 s, and extension at 72°C for 60 s. After amplification, a melting curve was acquired to determine the optimal PCR conditions.
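The relative standard-curve quantification described next can be illustrated with a short Python sketch; the Ct values and dilution series below are hypothetical stand-ins, not the paper's data.

import numpy as np

# Standard curve: Ct measured on a serial dilution of known relative quantity.
log_qty = np.log10([1, 0.1, 0.01, 0.001])
ct_std = np.array([18.1, 21.5, 24.9, 28.2])
slope, intercept = np.polyfit(ct_std, log_qty, 1)   # linear regression, Ct -> log10(quantity)

def rel_quantity(ct):
    return 10 ** (slope * ct + intercept)

# Normalize target gene expression to beta-actin in the same sample.
ct_target, ct_actin = 26.3, 19.8
normalized = rel_quantity(ct_target) / rel_quantity(ct_actin)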
The relative standard curve method was used to quantify mRNA expression. The relative gene expression level in each sample was calculated from the respective standard curve using linear regression analysis. The mRNA expression level of each gene was normalized to beta-actin expression. All experiments were performed in duplicate. Differences in the normalized mRNA expression levels of each examined gene between APD-treated and drug-naive C6 or B35 cells were assessed using one-way ANOVA followed by the Dunnett post hoc comparison test. One-way ANOVA and the Dunnett post hoc comparison test were performed with SPSS Statistics 17.0.

LC/MS/MS revealed the regulation of protein expression in the nuclei of APD-treated B35 cells

Nuclear proteins harvested from each of the APD-treated B35 and control B35 cells were labelled with iTRAQ reagents and analysed using LC/MS/MS. Three independent batches of nuclear proteins from APD-treated B35 cells were used as biological repeats. Twenty-five proteins were identified by LC/MS/MS, as shown in Table 1.

Validation of APD-induced regulation of protein expression in the nuclei of B35 and C6 cells

Our previous research indicated that APDs might induce differential regulation of protein expression in different cell types, such as B35 neuronal cells and C6 glial cells. We therefore further validated APD-induced regulation of protein expression in the nuclei of B35 and C6 cells. Five proteins were selected for validation using western blotting. Almost all selected targets showed changes in expression in the nuclei of APD-treated B35 cells consistent with those measured using LC/MS/MS (Fig. 1(a)-(j)). Only HIST1H4B expression in B35 nuclei was not affected by risperidone (Fig. 1(a) and (b)). In C6 nuclei, we found that HSPA8 (Fig. 1(c) and (d)) and VIM (Fig. 1(i) and (j)) expression were not affected by any of the three APDs. HIST1H4B (Fig. 1(a) and (b)) and NCL (Fig. 1(e) and (f)) expression in C6 nuclei were regulated by APDs, similar to APD-treated B35 cells. Haloperidol and risperidone induced NPM1 expression in C6 nuclei (Fig. 1(g) and (h)). Clozapine did not significantly affect NPM1 expression in C6 nuclei.

Western blots revealed differences in protein expression in the sub-chronic APD-treated rat cortex

We also examined HIST1H4B, HSPA8, NCL, NPM1, and VIM expression in APD-treated rat cortices. As shown in Fig. 1, HSPA8, NCL, NPM1, and VIM expression were upregulated in the rat cortex following treatment with any of the three APDs. HIST1H4B expression in the rat cortex was upregulated by haloperidol but reduced by clozapine (Fig. 1(a) and (b)). The sub-chronic risperidone treatment did not affect HIST1H4B expression in the rat cortex.

Immunofluorescent staining revealed APD-induced nuclear protein regulation in C6 and B35 cells

Immunofluorescent staining was also used to examine the regulation of protein expression by APD treatments in C6 and B35 cells. In both cell lines, HSPA8, NCL, NPM1, PLEC, and VIM expression were upregulated by each of the three APDs (Additional file 2: Figure S3, Additional file 3: Figure S4, Additional file 4: Figure S5, Additional file 5: Figure S6, Additional file 6: Figure S7, Additional file 7: Figure S8, Additional file 8: Figure S9, Additional file 9: Figure S10, Additional file 10: Figure S11, Additional file 11: Figure S12, Table 3).
In contrast to the increase in HIST1H4B expression induced by haloperidol and risperidone, clozapine reduced HIST1H4B expression in both C6 and B35 cells (Additional file 12: Figure S1 and Additional file 13: Figure S2). Moreover, HIST1H4B, NCL, and NPM1 were expressed in cell nuclei. HSPA8 was generally detected throughout cells. The expression of NCL and NPM1 was induced, with specific accumulation in cell nuclei. PLEC and VIM were mainly expressed in cell nuclei but were also detected in the cell cytoplasm. Furthermore, we also observed accumulation of PLEC and VIM staining on the nuclear membrane of APD-treated cells.

Regulation of protein expression in sub-chronic and chronic APD-treated rat cortices

We also compared the regulation of protein expression in sub-chronic and chronic APD-treated rat cortices using immunohistochemistry. As shown in Fig. 2, each of the three APDs induced HSPA8, NCL, NPM1, PLEC, and VIM expression in the cortex of sub-chronic APD-treated rats (Additional file 14: Table S2). Haloperidol and risperidone did not significantly change HIST1H4B expression in the rat cortex. Clozapine reduced HIST1H4B expression in the rat cortex. In the chronic APD-treated rat cortex, NCL, PLEC, and VIM expression were induced by each of the three APDs (Fig. 3, Additional file 14: Table S2). Haloperidol and risperidone did not affect HIST1H4B expression, and clozapine reduced HIST1H4B expression in the chronic APD-treated rat cortex. Moreover, none of the three APDs altered HSPA8 and NPM1 expression in the chronic APD-treated rat cortex.

Real-time quantitative PCR revealed alterations in gene expression in sub-chronic APD-treated C6 and B35 cells

Levels of the HIST1H4B, HSPA8, NCL, NPM1, PLEC and VIM transcripts were measured to investigate the effect of APDs on the expression of the genes encoding the proteins discovered in this study. The results are shown in Table 4, where "↑" indicates induction of expression and "↓" indicates reduction of expression compared with the control group.

Discussion

APDs induce nuclear protein regulation in B35 neuronal cells, C6 glial cells and also in the rat cortex. We identified only 25 proteins in this study, even though we used iTRAQ reagents coupled with LC/MS/MS to discover effects on proteins in the nuclei of APD-treated B35 cells. The number of proteins identified in this study is quite low compared to other LC/MS/MS-based proteomic analyses. In this study, we used a 3200 Q TRAP LC/MS/MS instrument as the detection system to identify iTRAQ-labelled peptides. The sensitivity of the equipment we used severely limited the detection of protein samples.

Fig. 2 Immunohistochemical staining of the rat prefrontal cortex following sub-chronic (1 week) treatment with APDs. Fig. 3 Immunohistochemical staining of the rat prefrontal cortex following chronic (4 weeks) treatment with APDs.

This limited sensitivity is the major reason why we detected a low number of proteins in this proteomic experiment. The use of nuclear proteins as a restricted sample source to reduce sample complexity is another reason why we identified few proteins in this study. Histones are found in eukaryotic cells and package DNA into nucleosomes. Histones also play important roles in DNA condensation and gene regulation. Histone modifications have been suggested to be related to the pathogenesis of schizophrenia. Some previous studies have reported increased histone deacetylase (HDAC1) expression in postmortem brain samples from patients with schizophrenia [19, 20].
Decreased expression of the reelin gene is associated with increased methylation of CpG islands in the reelin promoter [21] and reduced acetylation of histones [22] in NT2 cultures. HDAC inhibition has been proposed to improve neurodegeneration and cognitive function, and to induce neurogenesis in animals [23, 24]. HDAC2 overexpression has been shown to reduce the dendritic spine density, synapse number, and synaptic plasticity of neurons [25]. Some conflicting studies have shown that siRNA- and drug-induced HDAC1 inhibition impairs learning and increases cell death in rats. Moreover, increased HDAC1 expression protects rats from ischemic cell death and DNA damage [26]. In addition, as shown in the study by Akbarian and his colleagues, haloperidol and risperidone induce histone H3 phospho-acetylation, which is reversed by MK-801 [27]. A similar finding was also reported by Alessandra, who showed that haloperidol increased phosphorylation of histone H3 on serine 28 (H3S28) [28]. In the present study, haloperidol and risperidone increased HIST1H1C, H3F3B and HIST1H4B expression. Clozapine reduced H3F3B and HIST1H4B expression, but not HIST1H1C expression.

(Table 4 notes: Data are presented as the mean expression fold change ± SD. All gene expression experiments were performed in biological duplicates. SD = standard deviation; MD = mean difference; df = degrees of freedom; ANOVA-p = p-value from one-way ANOVA. a: significance value calculated from Dunnett's C-tests comparing each APD-treated group to the control group. b: p-value < 0.05. *The mean difference is significant at the 0.05 level in Dunnett's C-tests.)

Although the results in this study might simply suggest that APDs regulate histone expression to further modulate chromatin formation and gene regulation, histone modifications are themselves critical biological processes for mediating chromatin formation. Further examinations should be performed to clarify the relations between APD-induced regulation of histone expression and histone modifications. HSPA8 and NPM1 are chaperone proteins that were affected by each of the three APDs in both B35 and C6 cells. APD-induced expression of HSPA8 and NPM1 was also observed in the sub-chronic APD-treated rat cortex. According to a recent study, a polymorphism in the HSPA8 gene (rs1136141) in patients with first-episode psychosis served as a risk factor for psychosis development [29]. Proteomic and epigenetic studies of postmortem brains also revealed downregulation of HSPA8 in patients with schizophrenia [10, 30], as well as in patients with Alzheimer's disease (AD) [31]. Another study also revealed downregulation of HSPA8 and NPM1 within layer 2 of the insular cortex, which is closely related to auditory hallucinations and language, in patients with schizophrenia [32]. A microarray study revealed downregulation of NPM1 gene expression in the postmortem cortex of patients with schizophrenia [33]. Based on our data, APD-induced HSPA8 and NPM1 expression might be implicated in the possible therapeutic effects of these drugs in treating schizophrenia. As shown in a recent proteomic study, HSPA8 and NPM1 expression are reduced in oligodendrocytes following an acute clozapine treatment and subsequently restored by MK-801 [34]. In our previous research, acute (less than 3 days) APD-induced downregulation of protein expression might be oppositely regulated by a sub-chronic (more than 3 days) APD treatment [35].
This difference might be the reason for the inconsistent APD-induced alterations in the expression of the HSPA8 and NPM1 proteins between the two studies. Interestingly, none of the three APDs modulated HSPA8 and NPM1 expression in the chronic APD-treated rat cortex. A possible explanation is that normal expression levels of HSPA8 and NPM1 must be maintained for these proteins to function as important chaperones in living animals, counteracting chronic APD-induced changes in protein expression. We also propose that living animals may try to maintain the original HSPA8 and NPM1 chaperone functions through various regulatory processes in the brain that prevent changes in protein expression induced by environmental stress. NPM1 and NCL are multifunctional proteins in cells. NPM1 and NCL transit to the nucleoplasm in response to stress and participate in the repair of DNA damage [36]. NPM1 and NCL also regulate the expression of proteins related to the MAPK signalling pathway [37], which is abnormally expressed in the anterior cingulate cortex (ACC) and dorsolateral prefrontal cortex (DLPFC) of patients with schizophrenia [38, 39]. We postulate that increased expression of NPM1 and NCL might be implicated in the regulation of proteins related to the MAPK signalling pathway when patients with schizophrenia are treated with certain APDs. The VIM mRNA levels in the superficial, deep, and white matter layers of the anterior cingulate gyrus were not significantly changed in patients with schizophrenia in a previous study [40]. Moreover, VIM expression is not affected by haloperidol or clozapine [41] treatment in rats. In addition, the VIM expression level in postmortem brain samples did not differ between patients with schizophrenia and control groups. VIM interacts with actin filaments through its C-terminal residue and interacts with the actin cytoskeleton through PLEC [42, 43]. In several cell types, PLEC also plays an important role in linking actin microfilaments, microtubules and intermediate filaments [42] to maintain cell morphogenesis and cell plasticity. Schizophrenia has been proposed as a disease with abnormal neuronal plasticity and neuronal regeneration in the brain. APDs are thought to induce plasticity in neuronal cells. All three APDs used in this study increased PLEC and VIM expression in both B35 and C6 cells. We also observed increased expression of PLEC and VIM in the cortex of sub-chronic and chronic APD-treated rats. Several publications have reported a lack of change in VIM expression in patients with schizophrenia. Anchoring of PLEC and VIM on the nuclear membrane is also a critical step in the reorganization of actin microfilaments, microtubules and intermediate filaments. We also observed an accumulation of PLEC and VIM staining on the nuclear membrane of APD-treated B35 and C6 cells in this study. We propose that APD-induced modulation of cell plasticity is associated with increased expression of PLEC and VIM. Furthermore, the glutamate receptor subunit 3A (NR3A) has been suggested to interact with PLEC to function in intracellular processes, including trafficking and targeting of NR3A-containing receptors and PKC activation [44, 45]. Abnormal NR3A expression has also been observed in patients with various psychiatric disorders, including schizophrenia [46, 47]. Thus, APD-induced PLEC modulation may play important roles in the pathogenesis of schizophrenia by affecting NR3A expression and glutamate receptor function.
In this study, we also examined the regulation of Hist1h4b, Hspa8, Ncl, Npm1, Plec and Vim gene expression using real-time quantitative PCR. Compensatory regulation of Hist1h4b expression was observed in clozapine-treated C6 and B35 cells. We also revealed compensatory regulation of Hspa8, Npm1 and Plec expression in APD-treated C6 cells. Furthermore, compensatory regulation of Npm1 expression was also observed in B35 cells treated with clozapine. The compensatory regulation of these genes suggests that the expression of these proteins might not be regulated at the mRNA transcription level in APD-treated cells. Although some proteins whose expression was induced by APDs in this study showed similar trends in gene expression at the transcriptional level, additional examinations should be performed to further verify the relations between APD-induced alterations in protein expression and the regulation of gene expression. To clarify whether the alterations in protein expression found in this study are APD-specific, we examined NCL expression in C6 and B35 cells treated sub-chronically with all-trans retinoic acid (ATRA, 20 μM), H2O2 (40 μM), or caffeic acid (CA, 10 μM). As shown in Additional file 15: Figure S13(a), ATRA slightly induced NCL expression while H2O2 and CA dramatically induced NCL expression in C6 cells from two independent treatment batches. Moreover, ATRA reduced NCL expression while H2O2 and CA slightly induced NCL expression in B35 cells from two independent treatment batches. We also observed induction of VIM expression in ATRA-treated C6 cells (Additional file 15: Figure S13(b)), whereas H2O2 and CA did not significantly regulate VIM expression in C6 cells. ATRA, H2O2, and CA all induced VIM expression in B35 cells. These results suggest that the NCL expression changes induced by APDs in both cell lines are not APD-specific effects. Moreover, most of the proteins identified in this study are downstream proteins in different signaling pathways and are regulated by various proteins simultaneously in cells. We propose that the regulatory effects on these proteins might be similar between APDs but are not necessarily APD-specific. Based on our LC/MS/MS data, SLC25A5, HIST1H1C, H3F3B, NONO, LMNA and RPS3A expression were regulated in APD-treated B35 cells. HIST1H4B was chosen as a representative of histone protein expression in the follow-up validation experiments. The SLC25A5, NONO, LMNA and RPS3A proteins were not chosen for validation because little information is available about the relation between schizophrenia and these proteins. Although this study is the first to show that APDs (haloperidol, risperidone, and clozapine) regulate the expression of SLC25A5, NONO, LMNA and RPS3A in B35 cells, further studies should be performed to clarify the roles of these proteins in the pathogenesis of schizophrenia and in the mechanisms of action of these antipsychotic drugs. As shown by the immunofluorescent staining of B35 and C6 cells, HSPA8 and VIM were expressed in both the cell nucleus and cytoplasm. This distribution might be the reason for the inconsistent results obtained using immunofluorescent staining and western blots in APD-treated C6 cells.
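The real-time quantitative PCR comparisons above lend themselves to a short worked example. The sketch below applies the standard Livak 2^(−ΔΔCt) method to hypothetical Ct values; the reference gene, Ct numbers, and resulting fold changes are illustrative assumptions, and the paper does not state that this exact quantification scheme was used.

```python
# Hypothetical sketch of relative quantification by the Livak 2^(-ddCt) method.
# Ct values are invented for illustration; Actb serves as an assumed reference.
ct = {
    # gene:     (Ct in control cells, Ct in clozapine-treated cells)
    "Hist1h4b": (24.1, 23.2),
    "Npm1":     (21.5, 20.9),
    "Actb":     (17.0, 17.1),   # assumed housekeeping reference gene
}

ref_ctrl, ref_trt = ct["Actb"]
for gene, (ctrl, trt) in ct.items():
    if gene == "Actb":
        continue
    d_ct_ctrl = ctrl - ref_ctrl          # normalize to reference, control
    d_ct_trt  = trt - ref_trt            # normalize to reference, treated
    dd_ct     = d_ct_trt - d_ct_ctrl     # treated relative to control
    fold      = 2.0 ** (-dd_ct)          # expression fold change
    print(f"{gene}: fold change = {fold:.2f}")
```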
Functional elucidation of the non-coding RNAs of Kluyveromyces marxianus in the exponential growth phase

Non-coding RNAs (ncRNAs), which perform diverse regulatory roles, have been found in organisms from all superkingdoms of life. However, there have been few studies on the functions of ncRNAs, especially in non-model organisms such as Kluyveromyces marxianus, which is widely used in industrial biotechnology. In this study, we measured transcriptome changes at three time points during the exponential growth phase of K. marxianus by using strand-specific RNA-seq. We found that approximately 60 % of the transcriptome consists of ncRNAs transcribed from antisense and intergenic regions of the genome, at lower levels than mRNA. In the transcriptome, a substantial number of long antisense ncRNAs (lancRNAs) are differentially expressed and enriched in carbohydrate and energy metabolism pathways. Furthermore, this enrichment is evolutionarily conserved, at least in yeast. In particular, the mode of regulation of mRNA/lancRNA pairs is associated with mRNA transcription levels: the correlation between the pairs is positive at high mRNA transcriptional levels and negative at low levels. In addition, significant induction of the mRNA, and coverage of more than half of the mRNA sequence by a lancRNA, strengthen the positive correlation between mRNA/lancRNA pairs. Transcriptome sequencing of K. marxianus in the exponential growth phase thus reveals pervasive transcription of ncRNAs with evolutionarily conserved functions. Studies of the mode of regulation of mRNA/lancRNA pairs suggest that induction of lancRNA may be associated with switch-like behavior of mRNA/lancRNA pairs and efficient regulation of the carbohydrate and energy metabolism pathways in the exponential growth phase of K. marxianus, a species widely used in industrial applications.

Background

The haploid and thermotolerant Kluyveromyces marxianus is a non-conventional yeast species with several metabolic properties that are advantageous over Saccharomyces cerevisiae, such as the ability to ferment at high temperatures, the ability to grow on various hexose and pentose sugars, production of less ethanol in the presence of excess sugar, and weak glucose repression, which enables the fermentation of mixed sugars, such as hemicellulose hydrolysate and inulin, at higher temperatures [1]. These properties facilitate the development of efficient fermentation processes utilizing K. marxianus. As a result, this Generally Recognized As Safe (GRAS) species shows potential for use as a cell factory with a high capability for improving biomass yields in industrially relevant biotechnological applications. For example, K. marxianus has been utilized for the reduction of lactose content in food products as well as for the production of ethanol, various enzymes, heterologous proteins, aromatic compounds, and bioingredients, and for bioremediation [2]. In order to engineer this species to be more suitable for use in various applications, genetic resources, such as its genome and transcriptome, are required at the genomic scale. K. marxianus is a member of the Saccharomycetales; however, the genetics and metabolism of this yeast are considered quite different from those of S. cerevisiae from an evolutionary point of view [1]. For instance, the mode of regulation of genes in the glycolysis and tricarboxylic acid (TCA) cycle pathways differs between the two yeast species, although the pathways are largely conserved [3].
In particular, regulatory mechanisms in the metabolic networks governing carbon assimilation have not yet been explored. It has been established that transcriptional regulatory networks, comprising transcription factors and other auxiliary components, control metabolic flexibility and robustness in response to environmental conditions. Therefore, a full understanding of the cellular response to growth conditions, as well as of the roles of cognate regulators such as transcription factors, is necessary for the elucidation of changes in the transcript levels of metabolic genes due to the effects of growth conditions. Interestingly, genome-wide transcriptome analyses have demonstrated that the eukaryotic genome is pervasively transcribed [4]; e.g., more than 85 % of the genome of S. cerevisiae is transcribed [5]. This is due to a plethora of previously unannotated non-coding RNAs (ncRNAs), which are pervasively transcribed from intergenic regions and the antisense strands of annotated genes. As transcription requires a large amount of cellular energy, non-functional pervasive transcription may impose a metabolic and regulatory burden on cells. In accordance with this, diverse functions of ncRNAs, such as the modulation of gene expression related to metabolism and pathogenesis, have been revealed [4,6]. However, there have been few reports on the functional characterization of ncRNAs in non-model yeasts, despite accumulating evidence of the roles of regulatory ncRNAs in model organisms [6][7][8]. The evolutionary conservation of antisense RNA (asRNA) has been reported between S. cerevisiae and Saccharomyces paradoxus, which belong to the sensu stricto Saccharomycetales [9,10]. Additionally, the evolutionary conservation of long asRNAs between five sensu stricto Saccharomycetales members and Kluyveromyces lactis has been reported [11]. Thus, the evolutionary conservation of pervasive ncRNA transcription in budding yeasts suggests that these ncRNAs perform important functions in these organisms [9,10]. In order to examine the functions and extent of ncRNA transcription, we analyzed the transcriptomic changes at three time points during the exponential growth phase of K. marxianus by conducting strand-specific RNA-seq. The results indicated pervasive ncRNA transcription in the exponential growth phase. Additionally, we performed enrichment analysis of differentially expressed transcripts to demonstrate that long antisense ncRNAs (lancRNAs) show functional associations with carbohydrate and energy metabolism. The correlation between the transcription levels of mRNA and lancRNA pairs suggests potential mechanisms by which these RNAs perform their functions.

RNA-seq at the exponential growth phase

We are particularly interested in transcriptional regulation at the exponential growth phase, where most biomass production is accomplished. In order to measure dynamic transcriptomic changes during exponential growth in the non-model yeast K. marxianus and achieve a further understanding of the cellular response to exponential growth conditions, we sequenced total RNAs isolated at three time points corresponding to the early-exponential (EE), mid-exponential (ME), and late-exponential (LE) growth phases, with two biological replicates for each sample (Fig. 1a). We employed the dUTP method for RNA-seq [12] and obtained 23,309,796 mapped reads for EE, 28,076,013 mapped reads for ME, and 37,484,281 mapped reads for LE (Additional file 1: Table S1) [13].
These corresponded to 68.6 % of the genome being transcribed from one strand of DNA and 17.2 % being transcribed from both strands. Considering only the gene regions, 68.8 % of the sense strand and 30.5 % of the antisense strand were transcribed, and 24.8 % of the gene regions were transcribed from both strands. Most of the sequence reads mapped to the sense strand of protein-coding genes (~70 %), intergenic regions (~20 %), and the antisense strand of protein-coding genes (~10 %) (Fig. 1b). A low number of sequence reads (<1.2 %) mapped to rRNAs, indicating that rRNA depletion was carried out successfully. During cell growth, the fraction of reads mapped to the sense strand of protein-coding genes increased (68.2 % → 71.0 % → 73.4 %), while that mapped to the intergenic regions decreased (20.5 % → 17.7 % → 16.3 %). The fraction of reads mapped to the antisense strand of protein-coding genes was almost unchanged (9.5 % → 10.3 % → 9.6 %). This result suggested that not only mRNAs, but also a substantial amount of ncRNAs, changed to achieve rapid cell growth in the exponential phase. Hierarchical clustering of biological replicates showed that, overall, the experimental procedures were conducted reproducibly (Fig. 1c). In particular, the transcriptional landscape of the RNA-seq data indicated high strand-specificity and distinct transcriptional expression patterns during cell growth (Fig. 1d).

Pervasive ncRNA transcription across the genome

We annotated 4839 protein-coding genes from the K. marxianus genome using AUGUSTUS (Additional file 2: Table S2) [13,14]. Gene units were defined without taking exon-intron structures into account, as the proportion of introns in the K. marxianus genome is less than 5 % [15]. The cmsearch program in Infernal version 1.1 [16] yielded 273 RNA genes using Rfam data (Additional file 3: Table S3) [17]. Subsequently, we obtained transcription units across the genome by the transfrag method and subsequent post-processing (Fig. 2a). A transfrag is defined as a contiguous genomic region that is actively transcribed [18]. Briefly, we discarded transfrags with transcriptional levels lower than 1.34 (the 25th percentile) to reduce false positives in the detected transcripts. In addition, transfrags overlapping with either the forward strand or reverse strand of RNA genes were removed in order to focus on the ncRNAs associated with protein-coding genes. We then classified the transfrags into five RNA classes based on their length (short: length <200 nt; long: length ≥200 nt) [7], coding potential (non-coding: CPAT coding potential <0.364; coding: CPAT coding potential ≥0.364) (Additional file 4: Figure S1) [19], and location relative to gene annotation (sense, antisense, and intergenic) [7]: (1) mRNA (sense transfrag with coding potential), (2) long antisense ncRNA (lancRNA), (3) short antisense ncRNA (sancRNA), (4) long intergenic ncRNA (lincRNA), and (5) short intergenic ncRNA (sincRNA) (Table S4). These data demonstrate pervasive ncRNA transcription, with ncRNAs comprising 60 % of transfrags and ~30 % of mapped reads (Fig. 1b). Most ncRNAs had lower transcriptional levels than mRNAs, although both lancRNA and lincRNA had higher transcriptional levels than sancRNA and sincRNA (Fig. 2c) [9,10,20]. The average length of mRNA transfrags was ~802 bp, which is about half the length of protein-coding genes (average length = ~1545 bp) (Fig. 2d). Among the protein-coding genes with mRNA transfrags (78.3 %; 3788 genes), 66.7 % of genes had only one transfrag and 88.2 % of genes had no more than two transfrags.
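The classification rules above reduce to a small decision procedure. The sketch below restates them in Python; the thresholds (200 nt, CPAT 0.364) come from the text, while the exact handling of coding potential for antisense and intergenic transfrags is an assumption made for illustration, not the authors' script.

```python
# Sketch of the five-class transfrag classification described above.
# length_nt: transfrag length; cpat: CPAT coding potential;
# location: "sense", "antisense", or "intergenic" relative to gene annotation.
def classify_transfrag(length_nt: int, cpat: float, location: str) -> str:
    is_long = length_nt >= 200       # long vs. short cutoff from the text
    coding = cpat >= 0.364           # CPAT cutoff from the text
    if location == "sense" and coding:
        return "mRNA"
    if location == "antisense":
        return "lancRNA" if is_long else "sancRNA"
    if location == "intergenic":
        return "lincRNA" if is_long else "sincRNA"
    return "unclassified"            # e.g. sense transfrag lacking coding potential

assert classify_transfrag(850, 0.71, "sense") == "mRNA"
assert classify_transfrag(420, 0.10, "antisense") == "lancRNA"
assert classify_transfrag(150, 0.05, "intergenic") == "sincRNA"
```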
In addition, the number of genes covered by transfrags indicated that the majority of mRNA transfrags covered more than 90 % of the gene annotation (Fig. 2e). Although lancRNA and sancRNA transfrags covered far fewer genes than mRNA transfrags, a substantial proportion of the gene region was covered by lancRNAs: approximately 40 % of lancRNAs covered more than 50 % of the gene annotation, while only ~2.5 % of sancRNAs covered the same proportion. Taken together, no more than two transfrags were detected for most genes, and thus over-fragmentation of transcripts into multiple transfrags was negligible. In principle, the 5′- and 3′-end positions of each transfrag represent the transcription start site (TSS) and transcription termination site (TTS), respectively. These genomic features enabled us to determine whether the transfrags contained artifacts. In order to test this, we compared the mapped read enrichment for each RNA class of K. marxianus with those of S. cerevisiae sampled at the exponential phase [9].

Fig. 1 Genome-wide measurement of the transcriptome during the exponential growth phase. EE, ME, and LE indicate the early-, mid-, and late-exponential growth phases, respectively. a K. marxianus growth curve in YNB-u medium; RNA collection points (EE, ME, and LE) are indicated by arrows. b Fraction of mapped reads under the three experimental conditions by location relative to gene classes. c Heatmap of hierarchical clustering of the RNA-seq experiments (two biological replicates per condition), carried out with DESeq's variance-stabilizing transformation function. d RNA-seq profile of an example genomic region during cell growth; data for each condition were normalized to RPM (reads per million reads) to place the y-axes on the same scale.

The comparison showed that the mapped read enrichment of all RNA classes was highly similar to that of S. cerevisiae, suggesting that the transfrags are highly accurate and that ncRNAs are pervasively transcribed in K. marxianus (Additional file 6: Figure S2). Unexpectedly, lincRNAs and sincRNAs also showed considerable transcription on the opposite strand, albeit at much lower transcriptional levels than those of lancRNAs and sancRNAs. The proportion of the intergenic ncRNA region covered by antisense transcription was ~26.0 %. In accordance with this, several cases of antisense transcription of ncRNAs have been reported [21,22].

Regulatory roles of ncRNA

In order to investigate whether pervasive ncRNA transcription at the exponential growth phase plays a functional role, we focused on mRNA, lancRNA, and sancRNA, as the functions of genes within these RNA classes may be simply inferred from gene annotation [20]. Using DESeq for the EE-to-ME and EE-to-LE comparisons (p-value ≤ 0.05), we obtained 3572 differentially expressed transfrags, comprising 2449 sense, 615 antisense, and 508 intergenic transfrags (Additional file 7: Figure S3) [23]. From these transfrags, significantly enriched KEGG pathways were obtained separately for the sense and antisense strands (Fig. 3a) [10,24]. These results demonstrated enrichment of carbohydrate metabolism, including glycolysis, and amino acid biosynthesis pathways as well as respiration pathways. These pathways are important for the synthesis of fundamental cellular components and for energy production to fulfill the energy requirements of rapid growth during the exponential phase [25,26].
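As a minimal, hypothetical illustration of the three groupings used in this enrichment analysis (sense strand only, antisense strand only, or both strands differentially expressed), a gene-bucketing step could look like the sketch below; this is not the authors' pipeline, and the gene sets are invented.

```python
# Group genes by which strand carries a differentially expressed (DE) transfrag.
# de_sense / de_antisense: sets of gene IDs with a DE transfrag on that strand
# (e.g. DESeq p-value <= 0.05, as in the text).
def group_by_strand(de_sense: set, de_antisense: set) -> dict:
    return {
        "sense_only":     de_sense - de_antisense,
        "antisense_only": de_antisense - de_sense,
        "both_strands":   de_sense & de_antisense,
    }

# Invented example sets loosely echoing the genes discussed in the text.
groups = group_by_strand({"ACS", "ADH"}, {"PFK", "GAPDH", "ENO", "ACS", "ADH"})
for name, genes in groups.items():
    print(name, sorted(genes))
```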
However, we observed gradual inactivation of respiration-related pathways, such as the TCA cycle and oxidative phosphorylation, during cell growth. This gradual inactivation indicates that conditions had shifted from aerobic to anaerobic owing to a decrease in dissolved oxygen levels [27]. This pathway enrichment pattern at the exponential phase is consistent with the fact that K. marxianus, being a Crabtree-negative species, uses aerobic respiration to produce energy from carbon sources [25]. Interestingly, the enriched pathways could be categorized into three groups based on the differential expression of the sense strand, the antisense strand, or both strands. Each group showed distinct functional associations: genes with differential expression of only the sense strand were mostly associated with amino acid metabolism and energy production related to mitochondrial respiration, whereas those with differential expression of the antisense strand only, or of both strands, were associated mostly with carbohydrate metabolism. The asRNA-mediated regulation of carbohydrate metabolism was mostly carried out by lancRNAs (Fig. 3b). In particular, several differentially expressed lancRNAs were located in core carbohydrate metabolic genes, many of which are known to be important for the regulation of their constituent pathways (Fig. 3c). There were three genes in the glycolysis pathway, PFK, GAPDH, and ENO, for which antisense transcription was significantly induced. Antisense transcription of the PFK gene was significantly induced under the ME and LE conditions; however, transcription from the opposite sense strand was not induced under these conditions. Considering that the PFK gene encodes a rate-limiting enzyme of glycolysis, it is highly regulated at the transcriptional level by ncRNAs in yeast and human cancer cells [28,29]. We detected two GAPDH homologs that showed significant induction of antisense transcription under the ME and LE conditions; however, no transcriptional induction was observed on their sense strands. The GAPDH gene encodes a key glycolytic enzyme that functions as a metabolic switch to reroute carbohydrate flux to protect against oxidative stress [30]. The ENO gene encodes one of the most highly expressed glycolytic enzymes in many organisms [31], whose activity is known to be regulated through gene expression only to a small extent. We found that transcription from the sense strand of the ENO gene was slightly reduced, while that from the antisense strand was significantly induced. The antisense strands of two genes, IDH and MDH, which encode enzymes of the TCA cycle, were significantly induced. The IDH gene encodes a rate-limiting enzyme of the TCA cycle; its antisense transcription was significantly induced while its sense transcription was only slightly increased. MDH catalyzes the final step of the TCA cycle, the conversion of malate into oxaloacetate [31,32]. Antisense strands were significantly induced in two MDH homologs; however, antisense transcription was increased in one homolog and decreased in the other, suggesting that each homolog is under distinct antisense-mediated transcriptional regulation. We observed that three fermentation genes, PDC, ADH, and ACS, showed significant induction of their antisense strands. Induction of the fermentation genes is consistent with the inactivation of the TCA cycle and oxidative phosphorylation genes in Crabtree-negative species [27].
PDC encodes a key enzyme of alcoholic fermentation, which cleaves pyruvate into carbon dioxide and acetaldehyde, and is auto-regulated [33]. Antisense transcription of the PDC gene was significantly induced, and sense transcription increased concordantly with antisense transcription. The antisense strands of three ADH homologs, which are responsible for the conversion of alcohols into aldehydes in S. cerevisiae, were significantly induced; two of the sense strands were significantly induced and one was significantly repressed. The ACS enzyme is responsible for the transformation of acetate into acetyl-CoA. We detected differentially expressed transfrags from two ACS homologs: in one homolog, both strands were significantly induced, whereas in the other, only the sense strand was significantly induced. The RPE enzyme, a constituent of the pentose phosphate pathway, is responsible for the conversion of ribulose 5-phosphate into xylulose 5-phosphate. Both the sense and antisense strands of the RPE gene were differentially expressed. Among the genes described above, we selected three genes, ACS, ADH, and MDH, for investigation of their antisense transcription patterns. RNA-seq profiles of these genes demonstrated an increase in transcription from both the sense and antisense strands under the ME and LE conditions, indicating a concordant increase in the transcriptional levels of the mRNA/lancRNA pairs (Fig. 3d-f). Furthermore, the profiles showed obvious strand specificity, except in genes with a lancRNA. These observations demonstrate that the antisense strands are indeed transcribed and that their transcriptional levels increase simultaneously with those of the sense strands.

Mode of regulation of mRNA/lancRNA pairs

A given lancRNA can exert both positive and negative regulation of its cognate mRNA [34,35], and we observed both cases. In order to investigate the mode of regulation, we compared the transcriptional levels of mRNA/lancRNA pairs in which either one or both members of the pair were differentially expressed under the EE, ME, and LE conditions. LancRNAs regulate their targets by base-pairing [20], which suggests that the interaction between an mRNA and its lancRNA may be associated with the mode of regulation. Therefore, we hypothesized that the following two factors are associated with the mode of regulation: (1) the differential expression type of the mRNA/lancRNA pair, i.e., differentially expressed mRNA/differentially expressed lancRNA pairs, differentially expressed mRNA/non-differentially expressed lancRNA pairs, or non-differentially expressed mRNA/differentially expressed lancRNA pairs; and (2) the length fraction of the mRNA covered by the lancRNA.

Fig. 3 Enrichment of KEGG pathways. a Overlap of genes with mRNA, lancRNA, and sancRNA and their enriched KEGG pathways; blue lettering indicates amino acid metabolism pathways and red lettering indicates carbohydrate or energy metabolism pathways. b Heatmap of significantly enriched KEGG pathways of differentially expressed sense and/or antisense transfrags; amino acid metabolism pathways are indicated in blue lettering and carbohydrate or energy metabolism pathways in red. Three consecutive rectangles show the transcriptional levels under the EE, ME, and LE conditions for sense and antisense transcription, respectively. c Genes with differentially expressed lancRNAs in the core carbohydrate metabolic pathway. d RNA-seq profile near ACS.
e RNA-seq profile near ADH. f RNA-seq profile near MDH.

In order to test the former hypothesis, we compared the transcriptional levels of the sense and antisense strands for each differential expression type separately, and found that they showed distinct correlation patterns. Taking all the mRNA/lancRNA pairs into account, our results showed a weak positive correlation between lancRNA and mRNA, consistent with findings in S. cerevisiae (Fig. 4a). Differentially expressed mRNA/differentially expressed lancRNA pairs demonstrated a strong positive correlation, whereas differentially expressed mRNA/non-differentially expressed lancRNA pairs demonstrated a weak positive correlation (Fig. 4b, c). The strong positive correlation indicates that the transcriptional expression level of the mRNA is co-regulated with that of the lancRNA in each such pair.

Fig. 4 Correlation between the transcriptional levels of mRNA and lancRNA pairs. a All mRNA/lancRNA pairs in which either one or both members of the pair were differentially expressed during EE, ME, and LE. b Differentially expressed mRNA/differentially expressed lancRNA pairs. c Differentially expressed mRNA/non-differentially expressed lancRNA pairs. d Non-differentially expressed mRNA/differentially expressed lancRNA pairs. e Pairs with more than 50 % of the mRNA covered by a lancRNA. Red points represent genes with high transcription levels and blue points genes with low transcription levels. Black, red, and blue lines indicate least-squares fits of the mRNA/lancRNA pairs.

Interestingly, the non-differentially expressed mRNA/differentially expressed lancRNA pairs showed unexpected results (Fig. 4d). Although a weak positive correlation was observed when all such pairs were taken into account, they showed an obvious negative correlation at low mRNA transcription levels and a positive correlation at high mRNA transcription levels. This shows that a certain threshold of the transcriptional level of the cognate mRNA determines whether the lancRNA acts in an enhancing or a repressing mode. We conclude that this is the result of switch-like behavior of lancRNA, as negative regulation at low mRNA transcription levels can be interpreted as ensuring the "off" state of mRNA transcription, and vice versa [36]. Thus, our results suggest that the transcriptional mode of regulation by lancRNA is influenced by the differential expression type of the pair and by the mRNA transcriptional level. In order to test the latter hypothesis, we compared the transcriptional levels of mRNA/lancRNA pairs in which more than 50 % of the mRNA was covered by a lancRNA (Fig. 4e). The result showed a positive correlation stronger than that observed when the covering length fraction was not considered (Fig. 4a). Furthermore, genes for which more than 50 % of the mRNA was covered by a lancRNA exhibited more obvious enrichment of carbohydrate or energy metabolic pathways (Fig. 5a, b). These data suggest that the covering length fraction is also an important factor in determining the transcriptional mode of regulation by lancRNA.
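The switch-like pattern in Fig. 4d can be probed with a simple stratified correlation: compute the overall Pearson correlation first, then separately for pairs above and below a transcription-level threshold. The sketch below does this on synthetic data; the threshold, the synthetic coupling, and all numbers are illustrative assumptions, not values from the study.

```python
# Stratified mRNA/lancRNA correlation on synthetic log2 transcription levels.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Positive coupling for highly transcribed mRNAs, negative for lowly transcribed,
# mimicking the switch-like pattern described in the text.
mrna_hi = rng.uniform(6, 12, 60)
lanc_hi = 0.6 * mrna_hi + rng.normal(0, 1.0, 60)
mrna_lo = rng.uniform(0, 4, 60)
lanc_lo = 4.0 - 0.5 * mrna_lo + rng.normal(0, 1.0, 60)

mrna = np.concatenate([mrna_hi, mrna_lo])
lanc = np.concatenate([lanc_hi, lanc_lo])

threshold = 5.0   # assumed cutoff separating "high" from "low" mRNA levels
hi = mrna >= threshold

r_all, _ = pearsonr(mrna, lanc)
r_hi, _ = pearsonr(mrna[hi], lanc[hi])
r_lo, _ = pearsonr(mrna[~hi], lanc[~hi])
print(f"all pairs r = {r_all:+.2f}; high-mRNA r = {r_hi:+.2f}; low-mRNA r = {r_lo:+.2f}")
```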
Discussion

Pervasive ncRNA transcription, which has been demonstrated in the model organism S. cerevisiae, is evolutionarily conserved in the sensu stricto Saccharomycetales [9,10]. Consistent with this, our results indicate that pervasive ncRNA transcription, from antisense and intergenic regions, also occurs in K. marxianus (Fig. 2b). ncRNAs accounted for ~60 % of all identified transfrags. Additionally, 77.5 % of protein-coding transfrags were found to possess either long or short ncRNAs on the opposite strand, and a similar fraction of genes (73.4 %) had ncRNAs on the opposite strand when only expressed genes were considered. In S. cerevisiae, there are large numbers of unannotated cryptic unstable transcripts (CUTs) and Xrn1-sensitive unstable transcripts (XUTs), which are destabilized after synthesis [37,38]. Furthermore, CUTs are reported to be transcribed from both intergenic and antisense regions [37]. These data suggest that a large fraction of the ncRNAs in K. marxianus may correspond to CUTs and XUTs. Our results show that lancRNA-mediated regulation is enriched in carbohydrate metabolism pathways (Fig. 3a, b). Enrichment analysis of lancRNAs covering more than half of the protein-coding gene in S. cerevisiae revealed a similar enrichment pattern of carbohydrate and energy metabolic pathways (e.g., carbon metabolism, glycolysis/gluconeogenesis, and pyruvate metabolism at the mid-exponential phase) (Additional file 8: Figure S4) [10].

Fig. 5 Significantly enriched KEGG pathways of genes with lancRNAs. a Significantly enriched KEGG pathways of genes with any lancRNA. b Significantly enriched KEGG pathways of genes with a lancRNA covering more than half of the coding region. Red lettering indicates carbohydrate or energy metabolism pathways.

The evolutionary conservation of this pathway enrichment suggests that pervasive ncRNA transcription performs evolutionarily conserved functions in K. marxianus [9,10]. The importance of these pathways for rapid growth, via the synthesis of fundamental cellular components and energy production [25,26], suggests that lancRNAs may play a major role in rapid growth during the exponential phase through currently unknown mechanisms. Elucidation of the mode of regulation by lancRNA may provide insights into these unknown mechanisms. It is generally accepted that lancRNAs positively or negatively regulate their cognate mRNAs [34,35]. A recent report proposed that lancRNAs function as on/off switches, thereby increasing the variability of gene expression [36], and several cases of lancRNAs with on/off switch-like behavior have been reported [39][40][41][42]. Our results showed that mRNA/lancRNA pairs demonstrate an inverse mode of regulation according to the transcriptional level of the mRNA, especially in differentially expressed lancRNA/non-differentially expressed mRNA pairs (Fig. 4d), although the same trend was also observed in the other types of mRNA/lancRNA pairs (Fig. 4b, c). Therefore, the relationship between the transcriptional levels of mRNA and lancRNA involves switching the mRNA transcriptional level between on and off states. In other words, lancRNAs enhance the transcriptional level of their cognate mRNAs if the mRNA transcriptional level is higher than a certain threshold, but repress it if it is lower. Our results therefore not only support the view of lancRNAs functioning as on/off switches, but also suggest that this represents a widely used mode of regulation, particularly in carbohydrate and energy metabolism pathways. In addition, our results showed that a strong positive correlation exists between the transcriptional levels of mRNA/lancRNA pairs if both members are differentially expressed or if the length fraction of the mRNA covered by the lancRNA is more than 50 % (Fig. 4e). These data suggest that these two factors are important for the enhancement of mRNA transcriptional levels by a currently unknown mechanism. Consistent with this finding, a recent study showed that lancRNAs indeed play a role in enhancing mRNA transcriptional levels in several cases [43].
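The ">50 % of the mRNA covered by a lancRNA" criterion used above is, at bottom, a simple interval-overlap computation, sketched here with hypothetical coordinates:

```python
# Fraction of an mRNA transfrag covered by an antisense (lancRNA) transfrag,
# computed from genomic start/end coordinates on the same chromosome.
def coverage_fraction(mrna: tuple, lanc: tuple) -> float:
    m_start, m_end = mrna
    l_start, l_end = lanc
    overlap = max(0, min(m_end, l_end) - max(m_start, l_start))
    return overlap / (m_end - m_start)

# Hypothetical coordinates: the lancRNA covers 600 of the mRNA's 1000 bp.
frac = coverage_fraction((2000, 3000), (2400, 3200))
print(f"covered fraction = {frac:.2f}")   # 0.60 -> passes the >50 % filter
```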
Taken together, our results suggest that a single lancRNA may play either a switch-like role or an enhancing role depending on conditions such as the mRNA transcriptional level and the differential expression of the mRNA. However, the molecular mechanisms underlying this mode of regulation should be investigated in specific candidate genes, as our findings are based on observations of the mRNA/lancRNA population. Moreover, the evolutionarily conserved enrichment of lancRNA differential expression, together with several cases of lancRNAs functioning as switches regulating mRNAs in carbohydrate and energy metabolism pathways, suggests that the switch-like function of lancRNA may be prevalent across a wide range of species. Among the genes with differentially expressed lancRNAs, PFK encodes an enzyme that converts fructose 6-phosphate (F6P) into fructose-1,6-bisphosphate (FBP) via ATP hydrolysis. PFK is one of the primary targets of glycolytic flux regulation according to ATP demand, and this regulation is conserved from bacteria to humans [44]. Reporter metabolites, such as ATP for PFK, play an important role in monitoring the environment or nutrient status by modulating the transcriptional levels of associated genes [45][46][47]. Most genes with differentially expressed lancRNAs are associated with cofactors used as reporter metabolites: the ACS enzyme also uses ATP, whereas GAPDH, MDH, and PDC use NAD, and IDH uses NADP as a cofactor. Recent studies show that long ncRNAs promote transcriptional poising of the immediate-early response of inducible genes [48,49]. Therefore, the transcriptional status of these genes may serve as a good target for the regulation of glycolytic flux. Additionally, lancRNAs may enable a rapid and efficient post-transcriptional switch in response to environmental changes, in contrast to metabolic regulation or gene regulation alone [44,50]. Consistent with this, metabolic fluxes mediated by glycolytic enzymes are regulated at the post-transcriptional level [51,52].

Conclusion

In conclusion, K. marxianus transcribes ncRNAs pervasively during exponential growth. Among the ncRNA classes, lancRNAs are enriched in genes of the carbohydrate and energy metabolism pathways. Further analysis of the correlation between mRNA and lancRNA suggests that lancRNAs enable switch-like behavior of their cognate mRNAs via transcriptional induction. Thus, lancRNA-mediated regulation of mRNA represents a mechanism for the efficient regulation of carbohydrate and energy metabolism pathways.

Strains and culture conditions

K. marxianus var. marxianus ATCC 36907 (KM7) was obtained from the Korean Collection for Type Cultures (KCTC) and grown in YNB-u medium (0.67 % yeast nitrogen base without amino acids, an amino acid supplement lacking uracil, and 2 % (w/v) dextrose) at 30 °C in a shaking incubator [53]. Samples were taken at three time points corresponding to the EE (OD = ~3), ME (OD = ~7), and LE (OD = ~10) growth phases, with two biological replicates per sample.

RNA isolation

Cells were harvested by centrifugation and resuspended in 300 μl lysis buffer (20 mM Tris-HCl (pH 7.5), 140 mM NaCl, 5 mM MgCl2, and 1 % Triton-X). Next, the resuspended cells were lysed with 1 mL TRIzol (Invitrogen) and incubated for 5 min at room temperature. After centrifugation for 15 min at 3000 rpm, the supernatant was transferred to a new tube and mixed with 200 μl of chloroform for 2-3 min.
After another 15-min centrifugation step, the supernatant was mixed with three volumes of 100 % ethanol or an equal volume of isopropanol, 2 μl of glycogen, and 3 M sodium acetate. After centrifugation and resuspension, the RNA was washed with 70 % ethanol. After drying, the RNA was resuspended in DEPC-treated water. In order to confirm the quality of the extracted RNA, total RNA was visualized using agarose gel electrophoresis. The isolated RNA was incubated for 1 h at 37 °C with 4 U of rDNase I (Ambion) and 5 μl of 10× DNase I buffer (Ambion) to remove genomic DNA. The DNA-free RNA was purified by phenol-chloroform extraction and ethanol precipitation.

RNA-seq and data processing

Ribosomal RNA (rRNA) was removed using the Ribo-Zero Magnetic Gold Kit (Human/Mouse/Rat) (Epicentre) according to the manufacturer's instructions. Two hundred nanograms of mRNA was then fragmented using 10× Fragmentation buffer (Ambion). First-strand cDNA was synthesized using random primers (Invitrogen) and SuperScript III Reverse Transcriptase (200 U/μl, Invitrogen). Second-strand synthesis was carried out with Escherichia coli DNA polymerase (10 U/μl, Invitrogen), E. coli DNA ligase (10 U/μl, Invitrogen), and E. coli RNase H (2 U/μl, Invitrogen). The libraries for Illumina sequencing were constructed using the TruSeq™ DNA Sample Prep Kit (Illumina) according to the manufacturer's instructions. Briefly, the synthesized cDNA was end-repaired and the 3′ ends of the blunt fragments were adenylated for adapter ligation. The adenylated DNA fragments were ligated with Illumina adapters. A fraction of the adapter-ligated DNA between 180 and 380 bp was size-selected from a 2 % agarose gel after electrophoresis. The size-selected DNA was purified using the MinElute Gel Extraction Kit (Qiagen) according to the manufacturer's instructions and eluted in 1× TE buffer with low EDTA (10 mM Tris-HCl (pH 8.0), 0.1 mM EDTA) for the subsequent enzyme reaction. For degradation of the second strand, which contains dUTP instead of dTTP, 1 U of USER enzyme (NEB) was added to the purified DNA and incubated for 15 min at 37 °C. After a 5-min incubation at 95 °C for enzyme inactivation, the library was enriched by PCR. The amplification was monitored on a CFX96™ Real-Time PCR Detection System (Bio-Rad) and stopped at the beginning of the saturation point. The amplified library was purified using Agencourt AMPure XP beads and quantified using a Qubit 2.0 fluorometer (Invitrogen). Finally, the validated DNA was sequenced on a MiSeq (Illumina) with a MiSeq® V2 reagent kit of 50 cycles according to the manufacturer's manual. Sequenced reads were mapped to the genome sequence from NCBI (AKFM00000000.1) using CLC Genomics Workbench (masking mode = no masking, mismatch cost = 1, insertion cost = 3, deletion cost = 3, length fraction = 0.8, similarity fraction = 0.9, global alignment = yes, non-specific match handling = map randomly) after trimming of low-quality regions [13]. RNA-seq depth profiles were produced by an in-house script and visualized using SignalMap (NimbleGen).

Gene annotation

Owing to the lack of gene annotation for the K. marxianus genome, we predicted protein-coding genes using AUGUSTUS with default parameters, trained with the gene set of K. lactis [54]. For RNA gene annotation, we predicted genes using cmsearch from Infernal (1.1rc4) and Rfam (version 12.0) [16,17]. After discarding entries with p-values over 0.01, we manually removed apparently non-yeast entries, such as those of bacterial origin.
Gene names were obtained by BLASTP homology searches against a fungal proteome database.

Transfrag identification and functional analysis

In order to obtain transcription units, we identified transfrags by combining the RNA-seq results for all three conditions. We merged nearby transfrags to reduce over-fragmentation if the distance between them was shorter than 40 bp. Nearby transfrags at distances of 40 to 100 bp were merged if the p-value of a two-sided Wilcoxon rank test was less than 10^−20, under the assumption that they had statistically similar profiles. Finally, we discarded transfrags with transcriptional expression levels below the 25th-percentile transcriptional level from the DESeq analysis to obtain bona fide ncRNAs [23]. Predicted transfrags were classified as sense, antisense, or intergenic according to their location relative to the annotated genes. If a transfrag covered more than two gene annotations, it was divided into multiple transfrags corresponding to sense, antisense, and intergenic transfrags. This process was carried out by an in-house script and followed by manual inspection. CPAT was used to calculate the coding potential of transfrags and to discriminate ncRNAs from protein-coding transcripts [19]. The cutoff value for discrimination (0.364) was determined with the R script within the package, as described, using the RNA genes as reference non-coding genes [19]. Differentially expressed transfrags were detected using DESeq [23]. For the KEGG enrichment analysis, we linked KEGG pathway information by homology search against SwissProt, which has links to KEGG Orthology information, owing to the lack of KEGG pathway annotation for K. marxianus. A pathway with a p-value lower than 0.01, as determined by a two-tailed Fisher exact test, was considered enriched with statistical significance.
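The merging heuristic and the enrichment test described in these Methods can be sketched as follows. The rules and thresholds (40 bp, 40-100 bp with a two-sided Wilcoxon rank test at the stated 10^−20 cutoff, 25th-percentile filter, Fisher p < 0.01) are taken from the text; the coverage profiles and gene counts are invented, and scipy's rank-sum and Fisher routines stand in for the in-house script.

```python
# Sketch of the transfrag post-processing and pathway enrichment above.
import numpy as np
from scipy.stats import ranksums, fisher_exact

def should_merge(gap_bp: int, cov_a: np.ndarray, cov_b: np.ndarray) -> bool:
    if gap_bp < 40:                    # always merge very close transfrags
        return True
    if 40 <= gap_bp <= 100:            # merge per the Wilcoxon criterion
        _, p = ranksums(cov_a, cov_b)  # two-sided rank-sum test
        return p < 1e-20               # threshold as stated in the Methods
    return False

def filter_low_expression(levels: np.ndarray) -> np.ndarray:
    cutoff = np.percentile(levels, 25)  # discard transfrags below the 25th
    return levels[levels >= cutoff]     # percentile (1.34 in the text)

rng = np.random.default_rng(1)
cov_a = rng.poisson(20, 50).astype(float)   # invented per-base coverage
cov_b = rng.poisson(21, 50).astype(float)
print("merge 60-bp gap:", should_merge(60, cov_a, cov_b))

# KEGG pathway enrichment by two-tailed Fisher exact test (p < 0.01 = enriched).
# 2x2 table: [[DE in pathway, DE not in pathway],
#             [non-DE in pathway, non-DE not in pathway]]; counts are invented.
table = [[30, 570], [20, 3380]]
odds, p = fisher_exact(table, alternative="two-sided")
print(f"Fisher exact: odds ratio = {odds:.1f}, p = {p:.2e}")
```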
Investigation of electrical, corporeal, ocular, and aquaphobic properties of zirconia thin-films by varying substrate temperature for high voltage insulators

ABSTRACT The addition of ZrO2 nanoparticles to porcelain insulators and its influence on their electrical and physical characteristics are studied throughout a wide range of sintering temperatures. DC magnetron sputtering with a 99.99% pure zirconium target is utilized to create zirconium oxide thin films on glass and silicon substrates in a 12-inch-diameter chamber. The produced samples were sintered at different temperatures (room temperature (RT) to 400°C) for 1 h, with a gas pressure of 15 mTorr for all depositions. Scanning electron microscopy (SEM) was used to characterize the microstructures of a subset of the samples. In order to assess the physical and microphysical changes brought on by an increase in substrate temperature, the phase composition of the various nanocomposite samples was determined using X-ray diffraction (XRD). The breakdown strength, electrical resistivity, and dielectric constant were measured to assess the electrical attributes of the various samples. The obtained findings showed that the electrical and physical characteristics of the samples sintered at RT were the best. In addition, the nanocomposite porcelain sample sintered at room temperature has excellent insulating qualities, confirming the feasibility of electro-technical porcelain manufacture (water contact angle = 103.5°, dielectric constant = 24.4, refractive index = 2.15, and bandgap = 5.33 eV).

Introduction

Because of their superior electrical, mechanical, and thermal qualities, insulators are frequently employed in supplying power to utility systems. As a result of the severe environments in which these systems operate, they are affected by environmental factors like dust, mild rain, etc. [1][2][3]. Despite the development of more modern materials like plastics and composites [2,[4][5][6][7], porcelain insulators (PI) have been utilized in electrical applications for centuries. PI are among the most challenging ceramic materials and the subject of much research due to their potential use in cutting-edge engineering [7][8][9]. Porcelain insulators were among the ceramics most often utilized for low- and high-voltage insulation. Researchers have made significant progress in the last 20 years in enhancing the insulating qualities of porcelain materials [10][11][12][13][14]. Many studies have been conducted to boost the efficiency of conventional electrical porcelains [15][16][17] in order to meet these requirements and keep up with technical advancements in the porcelain insulator industry. Instead of simply being discarded, scrap porcelain may be put to good use in several different fields. The feasibility of using porcelain shards in cement, concrete, and permeable pavement aggregate has been the subject of several investigations [18,19]. Plastic shaping for insulators can be used to overcome mechanical resistance requirements and texture defects by incorporating crushed scrap of fired insulators into alumina PI as a relatively coarse raw material [20]. The use of porcelain scraps in the creation of insulators has been shown to increase the strength of the insulators by a factor of 10, according to a published study [21].
One cutting-edge option for improving the ultimate qualities of insulators is nanotechnology. While research into the use of nanomaterials in porcelain insulators is limited in contrast to that of other ceramics, the results that have been published so far are highly encouraging and intriguing [22][23][24][25][26]. The properties of samples sintered at 1250°C after uniaxial pressing were analyzed to determine the impact of nano-sized titanium dioxide (TiO2). Microhardness, compressive strength, bulk density, and porosity were used to characterize the material's physical and mechanical characteristics. Nanostructured porcelain compositions outperformed conventional siliceous porcelain by a factor of 65 in terms of mechanical strength. Incorporating nano-TiO2 into traditional siliceous porcelain had a beneficial influence on the microhardness of the material, as the nanostructured variant demonstrated a 15% improvement over plain porcelain [27]. The loss of dielectric strength in a porcelain insulator is most probably attributable to the occurrence of apertures; porcelain needs to be pure to maintain its insulating characteristics [28]. The features and relative amounts of the distinct phases in porcelain determine its dielectric properties [29][30][31]. Zirconia (ZrO2) nanoparticles offer excellent corrosion resistance, a high melting point, a low specific gravity, a desirable color, high strength, high transformation toughness, high thermal stability, and high resilience to chemicals and microbes [32,33]. Owing to its high dielectric constant and other desirable properties, zirconia is also a promising candidate for use as an insulator [34,35]. The zirconia crystals used to create nano-sized ZrO2 (zirconia) particles have excellent mechanical qualities, including high strength and flexibility, and a low thermal conductivity [36]. Zirconia's high oxygen ion conductivity and heat-insulating properties open up a wide range of possible applications [37,38]. The primary objective of this study was to devise a process for the synthesis of an affordable PI with improved physicochemical and electrical properties by employing DC sputtering at varying substrate temperatures. Section 2 presents the experimental setup. Section 3 demonstrates the feasibility analysis and results. Section 4 concludes the proposed approach with possible future scope.

Experimental setup

DC magnetron sputtering with a zirconium target 2 inches in diameter and 5 mm thick was used to deposit thin coatings on glass and silicon substrates in a chamber 12 inches in diameter. At first, the substrates were immersed in an ultrasonic bath for cleaning and then air-dried for 5 minutes. A turbo pump backed by a rotary pump evacuated the chamber, whose base pressure was maintained at 5 × 10⁻⁶ Torr (Figure 1). After that, the chamber was filled with ultra-pure (99.9%) oxygen and inert argon (Ar) gas. Both the oxygen and argon flows were maintained at 10 standard cubic centimeters per minute.
Mass flow controllers were used to regulate the gas mixture ratio, while capacitance manometers were used to monitor the pressure. For all depositions, the gas pressure was held constant at 15 mTorr. Since the sputtering current is highly sensitive to the sputtering gas pressure, the pressure was maintained at a consistent level for each deposition. Sputtering was performed for one and a half hours at varying substrate temperatures; samples were prepared at substrate temperatures of 100°C, 200°C, 300°C, and 400°C. The same 60 W of power was used, with a gap of 40 mm between the target and the substrate. Each deposition procedure used the same conditions (base pressure, power, sputtering pressure, and gas ratio) apart from the deposition temperature.

Results and discussion

Nanocrystalline thin films of zirconia were formed at various temperatures, and their XRD patterns are displayed in Figure 2. In all deposited samples, the appearance of a wide bump at 2θ = 31.48° indicates the existence of the (111) plane of the monoclinic ZrO2 structure, which is the most common ZrO2 phase (JCPDS reference code: 00-037-1484) [39,40]. Zirconia's (111) monoclinic phase is responsible for the main peak at roughly 2θ = 31.48°. The main peak, however, grows stronger as the substrate temperature is raised from ambient temperature to 400°C; elevating the deposition temperature therefore results in a layer with greater crystallinity. The standard Scherrer formula [41] was used to determine the typical crystallite size, t, of the samples. Crystallite size varies with temperature, as seen in Figure 3: as the deposition temperature rises, so does the size of the crystallites [42]. AFM was used to explore the surface morphology and roughness of the films. Micrographs of the zirconia films acquired using AFM with a scan area of 2 μm on a side are displayed in Figure 4, showing that the structure of the zirconia films is consistent. Figure 4(a-e) displays three-dimensional AFM pictures of the deposited films. The AFM pictures show that the surface of the growing grains is smoother at room temperature (as-deposited) (Figure 4a). Figure 4b shows how the film's morphology changes from a pyramidal type to a cluster type when the temperature is raised from 100°C to 400°C, as the grains merge to form larger grains. The as-deposited films have a root-mean-square roughness of 45.4523 nm. After heating the substrate to 100°C, this value rises to 69.179 nm, consistent with the XRD evidence of grain coalescence and the creation of zirconium oxide phases [42]. The AFM's supplementary software was used to assess the surface roughness, and the AFM analysis corroborated the XRD findings. Figure 6 shows a scanning electron microscopy picture of a thin coating of nanostructured zirconia at a substrate temperature of 300°C.
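The crystallite sizes quoted above follow from the Scherrer relation t = Kλ/(β cos θ). A minimal sketch of the calculation for the (111) monoclinic peak at 2θ = 31.48° is given below; the shape factor, the X-ray wavelength (Cu Kα is assumed, as the anode material is not stated), and the FWHM value are illustrative assumptions.

```python
# Scherrer crystallite-size estimate from an XRD peak.
import math

def scherrer_size(two_theta_deg: float, fwhm_deg: float,
                  wavelength_nm: float = 0.15406,  # assumed Cu K-alpha
                  k: float = 0.9) -> float:        # assumed shape factor
    """Crystallite size t = K * lambda / (beta * cos(theta)), beta in radians."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

# (111) monoclinic ZrO2 peak at 2-theta = 31.48 deg; an FWHM of 0.8 deg is assumed.
t = scherrer_size(31.48, 0.8)
print(f"crystallite size ~ {t:.1f} nm")
```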
The spectral changes in transmission of the zirconia films formed on the glass substrate at various deposition temperatures were examined across the wavelength range of 300-800 nm, and Figure 5 displays the results of those measurements. The films deposited at the varying temperatures had average transmittance values in the visible range of up to 90%, 91%, 92%, 93%, and 94%, respectively. The films had a high optical transmittance at wavelengths greater than 400 nm, which rose as the substrate temperature increased to 400°C. It is clear from the graph that as the deposition temperature increases, the transmission increases. As described above, the films become rougher as the deposition temperature increases. The density of defect centers dropped as the substrate temperature increased, increasing the amount of light that could pass through the film [43]; the kinetic energy of ad-atoms at the substrate also increases with temperature. When the substrate temperature was raised from RT to 400°C, the direct band gap of the films increased from 5.33 to 5.52 eV. The enhancement of the packing density and crystallinity of the film was the cause of the rise in the optical band gap with increasing substrate temperature [44]. Using Swanepoel's envelope approach, the films' refractive index (n) could be calculated from the optical transmittance interference data [43]. The refractive index calculated at a wavelength of 500 nm for the different deposition temperatures is shown in Table 1; as the deposition temperature increases, the refractive index increases. The films generally exhibited a rise in their refractive indices in response to an increase in photon energy. Consistent with the findings of previous studies [45,46], the refractive index of the films rose from 2.15 to 2.19 when the substrate temperature was increased from RT to 400°C. Because of the increasing temperature of the substrate, the capacity of residual gas molecules to disperse improves, making it more difficult for the substrate to absorb them [47]. However, due to the high temperature of the deposited zirconia particles, any change in the substrate's temperature has only a minor impact on the particles. As a result, the film becomes denser, leading to an increase in its refractive index [48]. The equation presented in [44,47,49] may be used to calculate the thickness of the film. The calculated thickness and the measured thickness (by SEM, shown in Figure 6) of the zirconia films are given in Table 1, and the two sets of findings are highly consistent with one another. Furthermore, Table 1 shows that the substrate temperature did not significantly affect the thickness. This indicates that the deposited amount depends on the number of sputtered atoms that eventually make it to the substrate; the other effects, including a drop in the local pressure in the sputtering plasma, a change in the sticking coefficients of zirconium or oxygen, and re-sputtering from the substrate, are negligible [50]. The relationship between the water contact angle and the deposition temperature is seen in Figure 7: the contact angle increases as the deposition temperature increases. As the temperature increases, the surface roughness increases and hence the hydrophobicity increases. Figure 8 illustrates how the resistivity and dielectric constant change as a function of the substrate temperature. The resistivity decreased as the temperature was increased [49,50]. The decrease in resistivity may be due to a higher packing density at higher temperature, while the decrease in the dielectric constant was due to the decrease in the thickness of the film.
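Swanepoel's envelope method, cited above for the refractive index, admits a compact implementation: from the transmittance maximum TM and minimum Tm of the interference fringes at a given wavelength, n = [N + (N² − s²)^1/2]^1/2 with N = 2s(TM − Tm)/(TM·Tm) + (s² + 1)/2, where s is the substrate refractive index. The sketch below assumes s ≈ 1.52 for glass, and the TM/Tm values are illustrative, not measured data.

```python
# Swanepoel envelope estimate of a film's refractive index from transmittance.
import math

def swanepoel_index(t_max: float, t_min: float, s: float = 1.52) -> float:
    """Film refractive index from the transmittance envelopes (Swanepoel, 1983)."""
    n_big = 2.0 * s * (t_max - t_min) / (t_max * t_min) + (s * s + 1.0) / 2.0
    return math.sqrt(n_big + math.sqrt(n_big * n_big - s * s))

# Illustrative envelope values at 500 nm for an as-deposited film.
n = swanepoel_index(t_max=0.92, t_min=0.70)
print(f"refractive index n ~ {n:.2f}")   # lands near the paper's 2.15-2.19 range
```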
Conclusions and scope

An aquaphobic zirconia coating of shallow thickness on a glass insulator (substrate) was successfully deposited. It was found that the aquaphobicity rises along with an increase in the substrate temperature, a correlation attributed to the increase in surface roughness. The highest water contact angle (107°) was found at 400°C. The transmittance also increased, but the resistivity and dielectric constant decreased, as the deposition temperature was raised. The XRD results reveal that crystallinity increases with increasing temperature. A higher temperature cannot, however, be considered the optimum temperature, as it leads to more leakage current, which is undesirable for insulators. Hence, the as-deposited films are considered the optimum films from the temperature point of view. At room temperature, the water contact angle, dielectric constant, refractive index, and bandgap were 103.5°, 24.4, 2.15 and 5.33, respectively.

Figure 2. Zirconia X-ray diffraction patterns at various temperatures.
Figure 3. Temperature effects on the crystallite size and surface roughness of zirconia films.
Figure 5. Transmission spectra of ZrO2 as a function of temperature.
Figure 6. Image captured by SEM of a nanostructured zirconia sheet at a substrate temperature of 400°C.
Figure 7. Changes in surface roughness and contact angle as a function of temperature.
Figure 8. Temperature-dependent shifts in resistivity as well as the dielectric constant.
Why epidural blood patch in postdural pain headache

Introduction

Spinal anesthesia has some potential advantages over general anesthesia, and it has been widely and successfully used for nearly 100 years, especially in surgeries of the lower abdomen, perineum and lower extremities (1). Some major advantages of regional anesthesia are the continuation of the patient's spontaneous respiration, preservation of oropharyngeal reflexes, postoperative analgesia and a shorter length of hospitalisation (2,3). With the new local anesthetic drugs and spinal needles introduced, complications are minimized and more professionals prefer this method (4). Spinal anesthesia is an alternative to general anesthesia in most cases. It can be used simultaneously with general anesthesia, or postoperatively for analgesia and acute and chronic pain treatment. Today, most C-section attempts are also carried out under epidural or spinal anesthesia (5). It has been shown that, with an appropriate approach, neuroaxial anesthesia methodologies are highly safe, but some complications can emerge during and after the procedure, including intravascular injection, nerve injury, hypotension, bradycardia and dural rupture (6,7). When a puncture is made in the dura or arachnoid, there is a risk of post-dural-puncture headache (PDPH). PDPH is the most common complication of regional block anesthesia; it is an important condition caused by the leakage of cerebrospinal fluid (CSF) from the hole opened by the needle during dural puncture and is related to CSF pressure (8). The International Headache Society defines PDPH as a bilateral headache developing within 7 days after lumbar puncture and disappearing within 14 days (9), while Vandam and Dripps (10) define it as pain which can affect both sides of the neck and shoulders, even though it is generally in the frontal and occipital regions.

Material and Methods

This retrospective study was approved by the ethical committee of Erzincan University, Faculty of Medicine. 16,003 patients, who were operated on in the Erzincan State Hospital and Erzincan Mengücek Gazi Education and Research Hospital between 2004 and 2011, were involved in this study. Patients complaining of PDPH had nausea and back/neck pain as well as headaches; these symptoms worsened, especially when the patients were standing. An epidural blood patch (EBP) was applied to 159 patients suffering from PDPH. Spinal anesthesia had been applied to 72 of these patients, while combined regional anesthesia had been applied to 87 of them. Patients were aged between 19 and 74. Patients were involved in the study after operations for inguinal hernia, C-section, anal region diseases, and urological and lower-extremity orthopaedic conditions. Patient demographic features are given in Table 1.
The epidural blood patch was applied to all patients in the operating room. After being taken to the operating room, the patients were monitored, venous access was obtained with an 18 G cannula in the antecubital fossa, and 500 ml of crystalloid was given. With the patient in a sitting position, the skin was sterilized with povidone iodine. After local anesthesia, the procedure was started at the L4-5 level with an 18 G Tuohy needle, using the loss-of-resistance method with normal saline (N/S). The subarachnoid space was entered 4.5 cm from the skin. When the syringe was removed from the epidural needle, CSF flow was seen and was confirmed as CSF by its flow rate and temperature; the epidural needle was then withdrawn until the CSF flow stopped, and the withdrawn position was confirmed as the epidural space with the N/S injection-aspiration method. The patient's antecubital region was cleaned sterilely with povidone iodine, 20 ml of autologous blood was drawn with a 20 ml IV syringe, and 15 ml of this blood was injected into the epidural space through the Tuohy needle. The success of EBP was classified as total relief (disappearance of all symptoms), partial relief (ability to perform daily activities) or failure (persistence of serious symptoms). Results 72 females and 67 males were included in the study. The surgeries and anesthesia types of the patients are given in Table 2. A 27 G Quincke spinal needle was used for all 87 patients who received combined epidural-spinal anesthesia, while for spinal anesthesia a 25 G Quincke needle was used in 30 patients, a 27 G in 25 patients, a 29 G in 10 patients and a 22 G in 7 patients. All patients were discharged 24 hours after the procedure. According to the patients' records, EBP was applied to 159 patients with VAS scores between 7 and 9; patients with VAS scores of 4-5 were followed conservatively (intravenous fluid replacement of 3000 ml, a paracetamol-caffeine combination, or analgesic treatment with NSAID drugs). In the operating room, 15 ml of autologous blood was given into the epidural region of the patients to whom EBP was applied. EBP was applied 2 days after the dural injury, and the symptoms of PDPH disappeared within minutes of the procedure. After the first application of EBP, 156 patients obtained relief. 3 patients had relief after the first application but their symptoms relapsed, and a second injection was performed 24 hours after the first one; following the second EBP, all complaints of these patients disappeared. No complications occurred, and patients were discharged 2-3 hours after EBP application. Discussion Even though spinal anesthesia is the most common regional anesthesia technique, many clinicians and physicians have had negative feelings toward it and have been averse to it because of the risks of infection, spinal neurotoxicity, post-spinal headache and life-threatening complications (11). PDPH, caused by leakage of CSF through the hole opened in the dura by the spinal needle and related to CSF pressure, has many negative consequences and is a serious condition. The most important factors in its emergence are considered to be needle type and thickness (12). The incidence of PDPH varies between 0% and 37% depending on needle type and size (13); as the diameter of the needle increases, the risk of headache also increases (14).
There are many studies showing that pencil-point and small-diameter needles can decrease the incidence of PDPH (15). Electron-microscopy studies of the dura have shown that pencil-point needles cause more local damage than sharp-edged needles: they produce an irregular tear followed by an inflammatory reaction and therefore less CSF leakage than the clean U-shaped cut made by sharp-edged needles (16). Westbrook et al. (17) showed that pencil-point needles cause less CSF loss, which is consistent with the needle design. Different studies report a PDPH incidence of 40% with a 22 gauge needle and 25% with a 25 gauge needle (18,19), while reported incidences for 29 gauge needles range from 2-12% down to 2% (20,21). Jeanjean et al. found a PDPH incidence of 0.08% with a 24 gauge needle (22), Despond et al. found 9.3% with a 27 gauge needle (23), and Frenkel et al. found 3.5% with a 25 gauge needle (24). In our study, PDPH was observed in 5.8% of our patients; the incidence was 5.8% with the 22 gauge Quincke needle, 10.1% with the 25 gauge, 5.1% with the 27 gauge and 1.9% with the 29 gauge Quincke needle. We believe that the success of the epidural patch is directly related to the proximity of the injected blood to the puncture site. Injection in the same interspace is advised, although it is not strictly necessary; if the same interspace cannot be used for any reason, the interspace below is advised, because the blood can diffuse within the epidural space. Thus, when there is more than one puncture, the lowest puncture level should be used for the injection (25). Hypotension and bradycardia are common side effects of spinal anesthesia, but they are easily managed by the anesthetist and do not burden or distress the patient for long. However, the severe headache that appears 12-24 hours postoperatively as a result of dural injury can seriously affect the patient's comfort as well as mobilization. The epidural blood patch is known in the literature as the gold standard for treating postdural headache. Because the epidural blood patch is invasive and causes a second dural injury, the method has not attracted much interest among anesthesiologists, and even when the headache persists, conservative treatment is usually continued until 48 hours postoperatively (26). In our experience, conservative treatment did not shorten the duration of the postdural headache, and this period was approximately 7 to 10 days. When applied by an experienced anesthesiologist, the epidural blood patch has a minimal complication rate and has been found to provide a radical improvement within a very short time in the treatment of postdural headache. Hospitalization might be necessary to monitor patients with headache related to cerebrospinal fluid leakage after a Cesarean section, which may cause early separation of mother and baby (27). EBP can be used to stop the cerebrospinal fluid leakage and preserve the mother-baby bond. Conclusion EBP is a treatment method with a low risk of complications, low cost and a high success rate in PDPH patients. It is advised especially after Cesarean section, to preserve mother-baby interaction. Conflict of Interest: The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Table 2: The surgeries of the post-dural puncture headache patients
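As an illustration of how the per-gauge incidence figures quoted above are obtained, a minimal Python sketch is shown below; the case counts in it are hypothetical placeholders, since the per-gauge denominators are not reported in the text.

    # How per-gauge PDPH incidence percentages are computed from raw counts.
    # The counts below are hypothetical placeholders for illustration only;
    # the paper does not list the denominator for each needle gauge.
    pdph_cases = {"22G": 4, "25G": 12, "27G": 9, "29G": 2}      # assumed counts
    patients   = {"22G": 70, "25G": 119, "27G": 176, "29G": 105}  # assumed counts

    for gauge in pdph_cases:
        incidence = 100.0 * pdph_cases[gauge] / patients[gauge]
        print(f"{gauge}: {incidence:.1f}% PDPH incidence")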
2019-03-13T13:32:14.852Z
2016-02-15T00:00:00.000
{ "year": 2016, "sha1": "ffeeb6ee5c78264dd40462ace5d878bf52efb0a8", "oa_license": "CCBY", "oa_url": "https://dergipark.org.tr/tr/download/article-file/183951", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "8415aae6a8efc174168d69409f7ffb0b07795a61", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
108618088
pes2o/s2orc
v3-fos-license
A Precision-Positioning Method for a High-Acceleration Low-Load Mechanism Based on Optimal Spatial and Temporal Distribution of Inertial Energy High-speed and precision positioning are fund a mental requirements for high-acceleration low-load mechanisms in integrated circuit (IC) packaging equipment. In this paper, we derive the transient nonlinear dynamicresponse equations of high-acceleration mechanisms, which reveal that stiff ness, frequency, damping, and driving frequency are the primary factors. Therefore, we propose a new structural optimization and velocity-planning method for the precision positioning of a high-acceleration mechanism based on optimal spatial and temporal distribution of inertial energy. For structural optimization, we first reviewed the commonly flexible multibody dynamic optimization using equivalent static loads method (ESLM), and then we selected the modifi ed ESLM for optimal spatial distribution of inertial energy; hence, not only the stiff ness but also the inertia and frequency of the real modal shapes are considered. For velocity planning, we developed a new velocity-planning method based on nonlinear dynamic-response optimization with varying motion conditions. Our method was verifi ed on a high-acceleration die bonder. The amplitude of residual vibration could be decreased by more than 20% via structural optimization and the positioning time could be reduced by more than 40% via asymmetric variable veloci ty planning. This method provides an effective theoretical support for the precision positioning of high-acceleration low-load mechanisms. Introduction With the rapid development of electronic manufacturing technology and the electronic market, the demand for high-acceleration and high-precision mechanisms are increasing. For example, some manipulators in packaging equipment run at the speed of 20 000-24 000 cycle times per hour, with peak acceleration of 12g-15g and a positioning precision of 2-5 µm. High-acceleration and short-cycling-time mechanisms are inevitably subjected to elastic deformation and vibrations caused by the inertial force. It is very diffi cult to achieve high positioning accuracy in a very short deceleration phase. Frequent but short acceleration-deceleration cycles could result in abrasion or even failure of the mechanisms [1]. Hence, it is necessary to fi nd a new approach to optimize this type of mechanism. When a mechanism moves at high speed, its components should be regarded as flexible bodies, so the whole mechanism becomes a fl exible multibody dynamic system, in which the rigid-body motion is coupled with the elastic deformation. Therefore, the resulting dynamic model is a set of highdimensional differential equations with time-varying coefficients and non-smooth nonlinear terms, which is difficult to model, analyze, and optimize [2]. For the last two decades, though tremendous progress has been achieved in the analysis of kinematics and the dynamics of flexible multibody dynamic systems [3], the optimization of fl exible multibody dynamic systems is not completely solved. The equivalent static loads method (ESLM) [4][5][6][7][8][9][10] proposed by Park et al. is the most effective method for the dynamic optimization of fl exible multibody dynamic systems. It has been implemented into the commercial software HyperWorks. The ESLM has been successfully used in the optimization of automotive collision dynamics, Boeing aircraft wing structure [9], and so forth. 
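As a rough consistency check of the operating figures quoted in the introduction, the back-of-envelope sketch below converts a throughput of about 22 000 cycles per hour into a cycle time and estimates the peak acceleration needed for a point-to-point move; the 30 mm stroke, the 30 ms move time and the bang-bang profile are illustrative assumptions, not values taken from the paper.

    # Back-of-envelope sketch of the operating numbers quoted above.
    # Stroke, move time and the bang-bang (triangular velocity) profile are
    # illustrative assumptions only.
    cycles_per_hour = 22000.0
    cycle_time = 3600.0 / cycles_per_hour          # ~0.16 s per cycle
    move_time = 0.030                              # assume a 30 ms point-to-point move
    stroke = 0.030                                 # m (assumed)

    # Bang-bang profile: half the stroke is covered in t/2, so a = 4 s / t^2.
    a_req = 4.0 * stroke / move_time ** 2
    print(f"cycle time {cycle_time*1e3:.0f} ms, move time {move_time*1e3:.0f} ms")
    print(f"required peak acceleration ~{a_req:.0f} m/s^2 (~{a_req/9.81:.1f} g)")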
The main idea of this method is to convert the nonlinear dynamic response, by discretizing the time variable, into response equations of a series of equivalent static loads [4][5][6][7][8][9][10]. As the current dynamic topology optimization methods only consider some natural frequencies, but the corresponding mode shapes may not refl ect the real deforma- Research Engineering Volume 1 · Issue 3 · September 2015 www.engineering.org.cn tion. Another important factor is the motion profi le. At present, velocity-planning methods mainly consider geometric smoothing while ignoring the infl uence of curve parameters on the dynamic response. The S curve has a smoother variation in acceleration compared to the trapezoidal profi le, so it can reduce the residual vibration to some extent [11,12]. Input shaping, such as a digital filter, not only causes time delay, but also is diffi cult to apply to the nonlinear dynamic system, where both stiffness and frequencies vary with position [13]. Therefore, a dynamic-response optimization for the velocity planning is necessary. Some scholars have an equivalent motion stage system as a single-degree-of-freedom system to obtain optimal parameters of the S-type motion curve to reduce residual vibration [14]. However, because of high-acceleration low-load mechanism is three dimensional and frequently starts and stops, its velocity planning is a much more complicated problem. In this paper, the nonlinear dynamic response of a highacceleration low-load mechanism is discussed. We derive the transient nonlinear dynamic-response equations of high acceleration mechanisms, which reveal that stiffness, frequencies, and damping (related to the layout of material, i.e., the spatial distribution of inertial energy), as well as the driving frequency (related to the motion profile, i.e., the temporal distribution of inertial energy), are the primary factors. Therefore, we propose a new structural optimization and velocity-planning method for the precision positioning of a high acceleration mechanism based on optimal spatial and temporal distribution of inertial energy. For structural optimization, the ESLM-based fl exible multibody dynamic optimization is reviewed and modified for high-acceleration low-load mechanisms by means of the Rayleigh-Ritz method, which has been done in our previous work [15,16]. For velocity planning, a new asymmetric velocity profi le is proposed based on nonlinear dynamic optimization with varying boundary conditions. Finally, a practical example of a high-speed die bonder is studied, which shows that the residual vibration can be reduced by more than 20% by structural optimization and the positioning time can be reduced by more than 40% by asymmetric variable velocity planning. Numerical tests show that the proposed method is effi cient for structural design and velocity planning for a high-acceleration low-load mechanism. Technical background For mechanisms operating at very high speed, the vibration of the structures must be considered. When the deformation is large, the absolute nodal coordinate formulation (ANCF) is more effi cient [16]. Within the ANCF for highly fl exible bodies, the absolute coordinates are represented by a vector y, characterizing the material points of the bodies by an appropriate shape function. The motion equation is where M(t ), C(t ) and K(t ) are mass, damping, and stiffness matrices at time t, respectively. The motion equation for fl exible bodies has the same format as that of structural vibration. 
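The motion equation referred to above (its symbols survive the extraction, but the equation itself does not) has the standard form M(t)y'' + C(t)y' + K(t)y = F(t). A minimal sketch of time-stepping such an equation for a toy two-degree-of-freedom system is given below; the matrices, the force pulse and the integration scheme are illustrative choices only, not the paper's model.

    # Time-stepping M y'' + C y' + K y = F(t) for a toy 2-DOF system with
    # constant matrices and a short force pulse. Semi-implicit Euler is used
    # purely for illustration.
    import numpy as np

    M = np.diag([0.3, 0.1])                       # kg (toy)
    K = np.array([[ 2.0e5, -1.0e5],
                  [-1.0e5,  1.0e5]])              # N/m (toy)
    C = 1e-4 * K                                  # light stiffness-proportional damping

    def F(t):                                     # inertial-type force pulse during the move
        return np.array([0.0, 30.0]) if t < 0.005 else np.zeros(2)

    dt, t_end = 1.0e-5, 0.03
    y, v = np.zeros(2), np.zeros(2)
    Minv = np.linalg.inv(M)
    peak = 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        a = Minv @ (F(t) - C @ v - K @ y)
        v += a * dt
        y += v * dt
        if t > 0.005:                             # residual vibration after the pulse ends
            peak = max(peak, abs(y[1]))
    print(f"residual vibration amplitude ~{peak*1e6:.1f} um")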
The displacement y consists of rigid motion y R and elastic modes y E . If we consider when the mechanism moves at a position, the mechanism can be regarded as a structure with kinetic degrees of freedom. Let u R and u E be the displacements of the rigid modes and elastic modes, respectively. Therefore, the total displacement of the fl exible multibody is where Φ R and Φ E are the matrices of the modal shapes of the rigid modes and elastic modes; and η R and η E are the corresponding coordinates. The modal shape of the whole system is the combination of Φ R and Φ E : For the rigid modes, there exists i Usually, if we transfer the high-speed motion profile to the frequency domain using a fast Fourier transformation (Figure 1), the input force can be regarded as a series of harmonic excitations: So the corresponding modal velocity should be E E E sin( Eq. (12) shows that the vibration response of the highspeed mechanism is affected by the stiffness, damping, and frequencies of elastic modes, and also by the excitation frequencies. The former is related to the structural design and the latter is related to the motion profi le, so the optimal spatial and temporal distribution of inertial energy would be an effi cient way to minimize the vibration at the end of the manipulator and the fi nal stage of the motion. However, as the vibration response is difficult to obtain due to the variation in stiffness, inertia, and motion conditions, numerical methods such as the nonlinear fi nite element method of fl exible multibody dynamics are needed to solve the dynamic-response analysis and optimization. Optimal structural design using ESLM The motion equation for an equivalent structural response under a dynamic load at position y R can be written as where vectors y R and y E represent the displacement of the rigid body and elastic deformation, respectively, and the damping effect is ignored. With the ESLM [10], we can derive the equivalent static loads using the fi nite element method. Rearranging Eq. (13) leads to or ( ) where Eq. (16) represents the equivalent static loads at time t [10]. For optimization, the number of the equivalent static loads ( Figure 2) can be treated as multiple loading conditions. Thus, the equivalent load set can regenerate dynamic properties such as time-dependent displacement or stress. In fact, the ESLM is an interface between nonlinear response analysis and linear static optimization (Figure 3), and analysis is performed in the analysis domain, where equivalent loads are calculated. Linear-response optimization is performed using the equivalent static loads in the design domain. The process proceeds in a cyclic manner. In high-acceleration low-load mechanisms, the principal loads are the inertial forces induced by accelerations. Hence, mechanical design should consider light-weight structures to minimize such loads. However, the linear static optimization cannot handle dynamic features such as inertial property and dynamic stiffness, even within the ESLM. Linear static optimization needs to be modified with only one iteration in linear static optimization, so that the change of material can be refl ected on the inertial forces. Moreover, we have to modify the sensitivity analysis of the ESLM to meet the requirements of a high-acceleration low-load mechanism. 
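Equations (13)-(16) are not legible in the extracted text; in the standard ESLM construction, the equivalent static load at a sampled time t_k is the load that reproduces the transient elastic displacement through the stiffness matrix, f_eq(t_k) = K y_E(t_k). A minimal sketch of that construction, with toy matrices and displacements, follows.

    # Equivalent-static-loads idea (ESLM), standard construction assumed since
    # Eqs. (13)-(16) were lost in extraction: each sampled time becomes one
    # static load case, f_eq(t_k) = K @ y_E(t_k). All values are toys.
    import numpy as np

    K = np.array([[ 2.0e6, -1.0e6],
                  [-1.0e6,  1.0e6]])          # stiffness matrix (toy)

    # Transient elastic displacements at a few sampled times [m], e.g. taken
    # from a flexible multibody simulation (toy values).
    y_E = {0.001: np.array([1.0e-6, 2.5e-6]),
           0.002: np.array([3.0e-6, 4.0e-6]),
           0.003: np.array([2.0e-6, 1.0e-6])}

    # Each sampled time becomes one static load case for the linear optimizer.
    load_cases = {t_k: K @ y for t_k, y in y_E.items()}
    for t_k, f_eq in load_cases.items():
        print(f"t = {t_k:.3f} s  equivalent static load = {f_eq} N")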
A new structural-design method based on optimal spatial distribution of inertial energy In the original ESLM, only the stiffness changes are considered when removing an element, so the element sensitivity is defined by the element strain energy [10,15]. However, the modifi cation of elements also changes the inertial force, resulting in a change to the strain energy [16]. Assuming that the ith element is removed from the reference structure at the jth position, the change in strain energy is where superscripts T, e, and S refer to transposition, element, and strain, respectively; Δ is the increment; and E is energy. In addition, in a high-acceleration low-load mechanism, the inertial force is the main load, so the inertial property should be considered. Like the strain energy, the inertial property can be measured by the kinetic energy. Accordingly, when an Research Engineering Volume 1 · Issue 3 · September 2015 www.engineering.org.cn element i is removed from the reference structure at position j, the change in kinematic energy is where the superscript K means kinematic; m i is the mass of the ith element; ω j is the rotational velocity at the jth position; and r i, j and y . i, j are the gyro radius and velocity of the center of the ith element in the jth position, respectively. For high-acceleration mechanisms, we need to maximize the stiffness while minimizing the inertia. As with the to Rayleigh-Ritz analysis, we can divide the strain energy by the kinematic energy to quantify the sensitivity of an element: Assuming that the number of discreet positions is m, and that ΔS max, j is the maximum sensitivity at the jth position, the comprehensive sensitivity is defi ned by Evolution structural optimization (ESO) is employed to perform the modification. If the ΔS i is less than a given threshold, the material property is set as a deleted material density, the elastic modulus of which is very low, only 1%-10% of that of the normal material. If we normalize the total E S by the scalar product of displacement and the total E K by the scalar product of velocity, we form the S and K , respectively. Then the ratio of S to K should reach a maximum regardless of the deformation and speed: y y y Ky y y y y y My y y The optimization problem of high-acceleration low-load mechanisms becomes where U is the residual vibration amplitude within a required stationary time and U* is the positioning precision requirement. We made a stand-alone program for sensitivity analysis and generated the command to modify the model. It can be integrated with the commercial software I-DEAS and AD-AMS to perform an optimization design for a high acceleration mechanism. Motion profi le based on temporal optimal distribution of inertial energy As high-acceleration low-load mechanisms start and stop frequently, the primary motion-control policy is open-loop control with a prescribed motion profi le, and the performance is mainly dependent on the parameter of the given motion Let the objective position s * be Q. Then the time segment for each motion is ( ) ( ) (1 ) curve. As we can see in Section 2, the motion profile can be transformed to a series of harmonic excitations in the frequency domain, which causes diffi culty in calculations of the nonlinear dynamic response. However, if we can parameterize the motion profi le as variable motion boundary conditions, we can achieve optimum parameters for the motion profi le using nonlinear dynamic-response optimization. 
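The structural-design subsection above ranks elements by the ratio of strain-energy change to kinetic-energy change and then soft-kills low-ranking elements ESO-style by dropping their modulus to 1%-10% of the normal value. A minimal sketch of that ranking and soft-kill step follows; the energies are random toy values, and the exact formula combining the positions (lost in extraction) is replaced here by an assumed per-position normalization followed by a maximum.

    # Modified-ESLM sensitivity ranking and ESO-style "soft kill" (sketch).
    # Energy increments are random toys, not the die-bonder model.
    import numpy as np

    rng = np.random.default_rng(0)
    n_elem, n_pos = 200, 5
    dE_strain  = rng.random((n_elem, n_pos))        # delta E^S per element, per position (toy)
    dE_kinetic = rng.random((n_elem, n_pos)) + 0.1  # delta E^K per element, per position (toy)

    ratio = dE_strain / dE_kinetic                  # strain-to-kinetic energy ratio per position
    ratio_norm = ratio / ratio.max(axis=0)          # normalize by the per-position maximum (assumed)
    sensitivity = ratio_norm.max(axis=1)            # combine positions (assumed form)

    E_full, soft_factor = 79.9e9, 0.01              # Pa; soft-killed elements keep 1% modulus
    threshold = np.quantile(sensitivity, 0.3)       # drop the lowest-ranked 30% (illustrative)
    modulus = np.where(sensitivity < threshold, soft_factor * E_full, E_full)
    print(f"soft-killed {int(np.sum(modulus < E_full))} of {n_elem} elements")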
In particular, when the machine moves at very high speed, such as during die bonding and wire bonding, the input signal consists of pulses or jumps with sudden changes. Since the S curve is widely used in industrial situations, we take the asymmetric S curve as an example. The parameters are four jerks for each section (Figure 4), namely G 1 -G 4 . 3 Using the variable motion boundary condition nonlinear dynamic-optimization method, the procedure is shown as follows: (1) Defi ne the design variables, G 1 , G 2 , G 3 , G 4 , and the position objective Q. Define the time variables T 1 , T 2 , T 3 , T 4 , and evaluate them using Eqs. (25)-(28), respectively. Defi ne the time interval T 12 = T 1 + T 2 , T 123 = T 12 + T 3 , and T 1234 = T 123 + T 4 . (2) Build the geometry of the mechanism, assign materials, build joints, and apply motion, using the following function: G 1 , G 2 , G 3 , and G 4 as design variables, defi ne the total positioning time as the objective function, and use the global optimization method to get the optimal result. By using the above variable motion boundary condition nonlinear dynamic-response optimization, the parameters likely to cause resonance will be excluded from the optimization of feasible solutions; thus, velocity planning is obtained in which the temporal distribution of inertial energy is optimal. Moreover, the presented velocity-planning meth- The material used is aluminum alloy 7075, with elastic modulus, mass density, and Poisson ratio of 79.9 GPa, 2700 kg . m -3 , and 0.35, respectively. The radius of gyration is 80 mm, and the required positioning accuracy is ± 0.5 µm (after vibration attenuation). The size of the base structure is 105 mm × 30 mm × 5 mm, while the total mass is 0.3 kg, and the inertia property is 0.97 kg . mm 2 . The loads are the inertial force and the unit force applied to the tip of the capillary. Optimal structural design We have compared the structural optimization with three different methods, namely the traditional structural optimization, the ESLM with only one iteration in each cycle, and the modified ESLM. The nonlinear dynamic simulations of the optimal structures from the above methods are performed under the same motion profile with different parameters, where the motion period of the bonder is 100 ms (normal acceleration), 10 ms (high acceleration), and 1 ms (very high acceleration). The maximum amplitude of the residual vibrations is listed in Table 1. For the sake of comparison, the optimal structure using traditional structural optimization is set as the reference, and the vibration amplitude of the other two are compared with it. The results show that when the die bonder moves with a normal acceleration, the ESLM is nearly the same as the structural optimization (with a reduction of only 1.66%), and the data of the modified ESLM also gives similar results (with a decrease of only 3.09%). When the bonder arm moves with high acceleration, the ESLM is more effi cient (dropping by 11.41%), and the modifi ed ESLM with the vibration amplitude is reduced by 22.66%. It can be seen that the influence of inertial force is significant. When the bonder moves with very high acceleration, both the vibration amplitudes are very large, whilst the modifi ed ESLM (21.11%) is still more effi cient than the ESLM (less than 0.002%). od can also be applied to any other parameterized motion curves, as well as to the parameter optimization of control systems. 
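Equations (25)-(28), which express the segment times T1-T4 in terms of the jerks G1-G4 and the target position Q, are not legible in the extracted text. The sketch below therefore takes the segment times as given and simply integrates the four-phase jerk sequence to obtain acceleration, velocity and position, which is the basic evaluation such a velocity-planning optimization repeats for each candidate parameter set; all numbers are illustrative.

    # Evaluate an asymmetric jerk-limited (S-curve) motion profile by direct
    # integration of a prescribed jerk sequence. Values are illustrative.
    import numpy as np

    G = [ 2.0e5, -2.0e5, -5.0e4,  5.0e4]   # jerks G1..G4 [m/s^3] (asymmetric)
    T = [2.5e-3, 2.5e-3, 5.0e-3, 5.0e-3]   # segment durations T1..T4 [s]
    dt = 1.0e-6

    jerk = np.concatenate([np.full(int(round(Ti / dt)), Gi) for Gi, Ti in zip(G, T)])
    acc = np.cumsum(jerk) * dt
    vel = np.cumsum(acc) * dt
    pos = np.cumsum(vel) * dt

    print(f"final acceleration {acc[-1]:.3e} m/s^2 (should be ~0)")
    print(f"final velocity     {vel[-1]:.3e} m/s   (should be ~0)")
    print(f"stroke             {pos[-1]*1e3:.3f} mm")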
Numerical examples Consider the structural optimization and velocity planning of a high-acceleration low-load die bonder for an integrated circuit (IC) packaging device, with a production rate of 36 000 dies per hour. The motion time for the bonder to move from the wafer to the lead frame ( Figure 5) is only 50 ms. The external load is the inertial force of the die, which is negligible when compared with that of the die bonder. Motion profi le planning Furthermore, the parameters of the motion profile have been optimized using the nonlinear response optimization (Figure 6), and after four iterations (Table 2), convergence was achieved. In order to show the effi ciency of the asymmetric S curve, the vibration response is compared with the symmetric S curve. If the let G 1 = G 2 = G 3 = G 4 = G in Eqs. (25)-(28), then the jerk of the symmetric S curve with an equivalent drive is the same, and both S curves being compared have the same driven time (Figure 7). We can see that the positioning time changes from 19.40 ms to 11.15 ms at positioning accuracy ± 0.5 µm, a reduction of 42.5%. Optimal structural design The presented method is based on the fi nite element model, with 13 146 elements (6046 elements in the design domain). The material used is aluminum, with a Young's modulus of 79 GPa, Poisson ratio of 0.33, and mass density of 2700 kg . m -3 . After four iterations, the optimal result was found by deleting 2000 elements (Figure 8(c)). The fi nal design is shown in Figure 8(d). The vibration amplitude dropped by 94.00%, and the power consumption was also reduced by 38.46% (Table 3). The final design was selected as the engineering design ( Figure 9). Motion profi le planning In order to show the effi ciency of the proposed motion profi le based on the temporal optimal distribution of inertial energy, the same procedure is applied to the parameter optimization of a proportion-integration-differentiation (PID) control system ( Figure 10). Engineering application In order to show its practical significance in engineering, the presented method is applied to the development of a die bonder. The bonder is a swing arm that is directly driven by a servo moto. The original design is shown in Figure 8(a). www.engineering.org.cn Volume 1 · Issue 3 · September 2015 Engineering The initial values of the three variables K p , K i , and K d are set as 1.00. The original positioning time is 6.3215 s. Using parameter optimization based on nonlinear dynamic simulation using fi nite element analysis, after four iterations (Table 4), the optimal results are found with K p , K i , and K d at the values of 694.20, 14.225, and 340.41, respectively. This procedure also shows the efficiency of the two presented methods. acceleration low-load mechanisms and reduces the residual vibration by 20%; while the nonlinear dynamic response for the asymmetric S curve is decreased by more than 40%. The presented method was also applied to a design of a die bonder and to an optimal design of the PID parameters of a control system; both procedures showed the effi ciency of the metnod. The method has also been tested on a variety of high-acceleration low-load mechanisms in IC packaging equipment design (such as die bonders, wire bonders, surface mount technology (SMT), and high-acceleration robotics, etc.), and a signifi cant achievement has been accomplished. Conclusions In this paper, the optimization problem of the nonlinear dynamic response of a high-acceleration low-load mechanism was discussed. 
Systematic methods were proposed to solve the problem through the optimal distribution of inertial energy in the space and time domains. A modified ESLM was selected for optimal structural design in order to improve the dynamic response of high-acceleration low-load mechanisms, and a new velocity profile based on nonlinear dynamic-response optimization was proposed in order to reduce positioning time. The numerical example showed that the modified ESLM is efficient for high-acceleration low-load mechanisms.
2019-04-12T13:55:14.542Z
2015-10-16T00:00:00.000
{ "year": 2015, "sha1": "afeafe0de82a00d4c1b21786f6f7f50cec193069", "oa_license": "CCBY", "oa_url": "https://doi.org/10.15302/j-eng-2015063", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "924339baeec74d40f5651a1f06081e1db7488b4e", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
119237103
pes2o/s2orc
v3-fos-license
Determination of the number of J/$\psi$ events with J/$\psi$ \rightarrow decays The number of $J/\psi$ events collected with the BESIII detector at the BEPCII from June 12 to July 28, 2009 is determined to be $(225.3\pm2.8)\times10^{6}$ using $J/\psi \rightarrow inclusive$ events, where the uncertainty is the systematic error and the statistical one is negligible. (BESIII Collaboration) The number of J/ψ events collected with the BESIII detector at the BEPCII from June 12 to July 28, 2009 is determined to be (225.3 ± 2.8) × 10 6 using J/ψ → inclusive events, where the uncertainty is the systematic error and the statistical one is negligible. I. INTRODUCTION To meet the challenge of precision measurements of τ −charm physics, a major upgrade on the Beijing Electron-Positron Collider (BEPC) and the Beijing Spec-trometer (BES) was completed in 2008 (now called BEPCII and BESIII). BEPCII is a double ring e + e − collider with a design peak luminosity of 10 33 cm −2 s −1 at √ s =3.773 GeV, which is 100 times that of its predecessor. The BESIII detector is a large solid-angle magnetic spectrometer that is described in detail in Ref. [1]. The major improvements in the BES detector are the huge superconducting solenoid magnet with a central field of 1 T, which offers a significant improvement in the momentum resolution of charged particles, and a cesium iodide (CsI) calorimeter for the energy measurement of electrons and photons, which provides more than a factor of 10 improvement in the precision of electromagnetic shower energy measurements. Since the discovery of the J/ψ in 1974, it has always been regarded as an ideal laboratory to study light hadron spectroscopy and to search for new types of hadrons (e.g. glueballs, hybrids and exotics). With 58 million J/ψ events collected with the BESII detector, many important results have been obtained, which underlined the importance of the study of J/ψ decays. Therefore, after a successful commissioning of the BE-SIII detector together with BEPCII, a large sample of J/ψ events was collected from June 12 to July 28, 2009, which allows the study of the properties and the decays of the J/ψ with unprecedented precision. The number of J/ψ events and its uncertainty are two key quantities in the precision measurements of J/ψ decays. At BESII, the number of J/ψ events was determined with J/ψ → 4-prong events, and its systematic uncertainty was 4.7% [2]. The excellent BESIII detector and its good performance allow the determination of the number of J/ψ events with higher precision. To reduce the systematic uncertainty from that in Ref. [2], a new method using J/ψ → inclusive events is introduced. The number of J/ψ events (N J/ψ ) is calculated with where N sel is the number of J/ψ → inclusive events selected from J/ψ data; N bg is the number of background events estimated from the continuum data taken at the center-of-mass energy of 3.08 GeV; ǫ trig is the trigger efficiency; ǫ ψ ′ data is the J/ψ → inclusive detection efficiency determined experimentally from ψ ′ data using ψ ′ → π + π − J/ψ events; f cor is a correction factor for ǫ ψ ′ data , obtained from Monte Carlo (MC) simulation which accounts for the difference between the J/ψ events produced at rest and those produced from ψ ′ → π + π − J/ψ. The correction factor in Eq. 
(1), which is approximately unity, is determined from where ǫ J/ψ mc is the detection efficiency of J/ψ → inclusive events determined from the J/ψ MC sample and ǫ ψ ′ mc is the efficiency determined from the ψ ′ → π + π − J/ψ (J/ψ → inclusive) MC sample. There are two major improvements over the method in Ref. [2]. One is the generalization of the J/ψ → 4-prong events to J/ψ → inclusive events, which allows the number of J/ψ events to be determined by requiring different numbers of charged tracks; the other is to use the MC samples of J/ψ → inclusive and ψ ′ → π + π − J/ψ (J/ψ → inclusive) events generated with the BesEvtGen generator [3] based on GEANT4 [4] to determine the correction factor, f cor . In this analysis, the events with more than one charged tracks are used to determine the number of J/ψ events. At present only about 50% of the J/ψ decays are observed and listed in the Particle Data Group tables (PDG) [5]. In the MC simulation package, the unknown J/ψ decays are roughly generated with the Lundcharm model. In the Lundcharm model, charmonium decay via gluons is described by the QCD partonic theory, and the partonic hadronization is handled by the LUND model. Extended C-and G-parity conservation are assumed and abnormal suppression effects of charmonium decay are included [6]. At the track level, candidate events are required to satisfy the following selection criteria: 1. Charged tracks are reconstructed using hits in the Main Drift Chamber (MDC) and are required to be in the polar angle range | cos θ| < 0.93, have momentum p < 2.0 GeV/c, and have the point of closest approach of the track to the beamline within 15 cm of the interaction point along the beam direction (V z ) and within 1 cm in the plane perpendicular to the beam (V r ). At the event level, at least two charged tracks are required, and the visible energy, E vis , must be greater than 1.0 GeV. Here E vis is defined as the sum of charged particle energies computed from the track momenta by assuming pion masses and the neutral shower energies deposited in the EMC. According to the distribution of visible energy shown in Fig. 1, this requirement removes two thirds of background events, estimated with the continuum data taken at the center-of-mass energy of 3.08 GeV, while it has little effect on the inclusive events. To remove background from Bhabhas and dimuons, events with only two charged tracks must have the momenta of both charged tracks less than 1.5 GeV/c. Fig. 2 displays the scatter plot of the momenta of two charged tracks, where the clear cluster with the momenta around 1.55 GeV/c corresponds to the contribution from leptonic pairs. Most of the leptonic pairs are removed by the above requirement as indicated by the solid lines in Fig. 2. From the deposited energy distribution of charged tracks in the EMC, shown in Fig. 3, a peak around 1.5 GeV is clearly observed, which corresponds to the contribution of Bhabha events. Therefore, to further remove Bhabha events, the deposited energy in the EMC of each charged track is required to be less than 1 GeV. After the momentum and energy selections there remain 174.28 ± 0.01 million events (N sel ) from the J/ψ data. The distributions of the track parameters for closest approach and track angle V r , V z , cos θ, the total energy deposited in the EMC (E emc ), and the charged multiplicity (N good ) after subtracting background events estimated with the continuum data taken at the center-of-mass energy of 3.08 GeV (see Section III for details) are shown in Figs. 
4 through 8, respectively. Also shown are the distributions from MC simulation, normalized to J/ψ data. The distributions of V z , V r , and cos θ of charged tracks, and the E emc distribution for MC simulation are in reasonable agreement with those from data. For the charged multiplicity distribution shown in Fig. 8, neither the MC simulation with the Lundcharm model nor the MC simulation without the Lundcharm model agree very well with the data. However the effect of this discrepancy between data and MC simulation on the correction factor is very small, as described in Section VII. III. BACKGROUND ANALYSIS Background events come mainly from Quantum Electro-Dynamics (QED) processes, beam-gas interactions, and cosmic rays. In this analysis, all of them are estimated with the number of events selected from the continuum data taken at the center-of-mass energy of 3.08 GeV, normalized to the J/ψ data after taking into account the energy-dependent cross section of the QED process: where N bg is the estimated number of background events in the selected J/ψ events; N 3.08 is the number of events selected from the continuum data; £ J/ψ and £ 3.08 are the integrated luminosities for J/ψ and continuum data, respectively; √ s J/ψ and √ s 3.08 are the center-of-mass energies for J/ψ data (3.097 GeV) and the continuum data (3.080 GeV). The integrated luminosities are determined using e + e − → γγ events with the following selection criteria: there are at least two neutral tracks with the deposited energy of the second most energetic shower larger than 1.2 GeV and less than 1.6 GeV; and | cos θ| < 0.8, where θ is the polar angle in the EMC. The number of signal events is determined by counting in the signal region |∆φ| < 2.5 • and the background estimated in the sideband region 2.5 < |∆φ| < 5 • , where ∆φ = |φ γ1 −φ γ2 |−180 • and φ is the angle of photon in x-y plane. Figs. 9 and 10 show the distribution of energy deposited in EMC and cos θ of photons. The integrated luminosities of J/ψ data and continuum data are determined to be 79631 ± 70 (stat.) nb −1 and 281 ± 4 (stat.) nb −1 , respectively. Here, the statistic error is 1.5%, and the systematic error can be cancelled according to Eqs. 3. With the same selection criteria for inclusive events from J/ψ data, 21266 ± 146 events are selected from the continuum data. Therefore the number of background events (N bg ) is estimated to be 5.96 ± 0.04 million using Eq. (3). The background ratio in the selected J/ψ → inclusive events is calculated to be 3.5% by comparing the number of background events to the number of inclusive events selected from J/ψ data. In the above calculation, the background events from cosmic rays and beam-gas interaction are normalized with the same procedure as QED events. In fact, the number of cosmic rays is proportional to the data taking time, whereas beam-gas events are related with the vacuum status and the beam current for taking data, in addition to the data taking time. In this analysis, the difference of the number of background events estimated with and without considering the energy dependence of the cross section for QED processes is taken into account in the overall systematic uncertainty of the number of J/ψ events (see Section VII for details). IV. DETERMINATION OF THE DETECTION EFFICIENCY AND CORRECTION FACTOR Usually the detection efficiency is determined using a MC simulation of J/ψ → inclusive, assuming that the detector response is well simulated. 
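Equation (3) is not legible in the extracted text; assuming the usual 1/s scaling of the QED cross section, the normalization reads N_bg = N_3.08 (L_J/psi / L_3.08) (s_3.08 / s_J/psi). The short sketch below recombines the inputs quoted in Section III and reproduces the quoted 5.96 million background events.

    # Numerical cross-check of the background normalization (Eq. (3) assumed
    # to carry the 1/s scaling of the QED cross section).
    n_3p08   = 21266          # events selected from the continuum data
    lum_jpsi = 79631.0        # nb^-1
    lum_3p08 = 281.0          # nb^-1
    e_jpsi, e_3p08 = 3.097, 3.080   # centre-of-mass energies in GeV

    n_bg = n_3p08 * (lum_jpsi / lum_3p08) * (e_3p08 / e_jpsi) ** 2
    print(f"N_bg = {n_bg/1e6:.2f} million")   # ~5.96 million, as quoted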
The efficiency is then the ratio between the number of events detected and the number of events generated. In this analysis to avoid the uncertainty caused by any discrepancy between MC simulation and data, the detection efficiency is determined experimentally using 106 million ψ ′ events taken with the BESIII detector. The experimental detection efficiency, ǫ ψ ′ data , is then the number of selected events divided by all J/ψ → inclusive events obtained from the cascade decays of ψ ′ → π + π − J/ψ. To select ψ ′ → π + π − J/ψ events, there must be at least two soft pions that are each reconstructed successfully in the MDC within the polar angle range | cos θ| < 0.93, have V r < 1 cm and |V z | < 15 cm, and have momentum less than 0.4 GeV/c. The π momentum distributions in Fig. 11 show that the MC simulation is in good agreement with data. There are no other requirements on the remaining charged and neutral tracks. The invariant masses recoiling against all possible π + π − pairs are calculated and shown in Fig. 12. A clear peak around 3.1 GeV/c 2 , corresponding to the decay of ψ ′ → π + π − J/ψ, J/ψ → inclusive, is observed over a large flat background. The number of J/ψ → inclusive events, N inc = (19526 ± 10) × 10 3 , is obtained by a fit to the π + π − recoil mass spectrum with a double-Gaussian plus a second order Chebychev background function. To determine the number of selected J/ψ → inclusive events, in addition to the above common selection criteria for the two soft charged pions, the remaining charged tracks and neutral tracks must satisfy the requirements for the J/ψ → inclusive events described in Section III. Fig. 13 shows the invariant mass recoiling against π + π − for the selected events, and the number of selected J/ψ → inclusive events, N sel inc , is determined to be (14432±9)×10 3 from a fit with a double-Gaussian plus a second order Chebychev background function. Finally the experimental detection efficiency of J/ψ → inclusive events, ǫ ψ ′ data , is determined to be (73.91 ± 0.02)%. Since the J/ψ decays in flight, a correction factor defined as in Eq. (2) is used to correct for the kinematical effect in order to determine the detection efficiency for direct e + e − → J/ψ → inclusive decays. With the same procedure, including the event selection criteria and the fit functions, the detection efficiency of ǫ ψ ′ mc = (75.87 ± 0.06)%, is obtained from a MC sample of 2 million of ψ ′ → π + π − J/ψ events. To determine ǫ J/ψ mc , a MC sample of 1 million events of J/ψ → inclusive was generated. With the same selection criteria for J/ψ → inclusive events as listed in Section II, 766893 ± 423 events are selected, and the corresponding detection efficiency is calculated to be (76.69 ± 0.04)%. The correction factor f cor for the detection efficiency, is then determined to be V. TRIGGER EFFICIENCY The trigger efficiency of the BESIII detector has been studied using different physics channels [7] and was found to be very close to 100%. Therefore, we do not repeat a similar study here, but assume a 100% trigger efficiency. VI. THE NUMBER OF J/ψ EVENTS The values of different parameters used in Eq. (1) are summarized in Table I, and the number of J/ψ events is then calculated to be (225.30 ± 0.02) × 10 6 . Here the statistical error is only from N sel , while the statistical fluctuation of N bg is taken int account as part of the systematic uncertainties (see subsection 7.4). The systematic errors from different sources will be discussed in the next section in detail. A. 
MC model uncertainty The efficiency correction factor (f cor ), which is used to correct the detection efficiency for the in-flight J/ψ decay from ψ ′ data, is a MC simulation dependent parameter. To check the MC model dependence of the correction factor, we also determine the correction factor with MC samples generated without the Lundcharm model. The difference of the correction factors obtained with and without the Lundcharm model, 0.49%, is taken as the systematic uncertainty from the MC model in the determination of the number of J/ψ events. B. Tracking efficiency According to tracking efficiency studies, the consistency of tracking efficiencies between MC simulation and data in J/ψ decays is 1% for each charged track, although it is a little larger at low momentum. In this analysis, the consistency of tracking efficiency between MC simulation and data in ψ ′ decays is assumed to be the same as that in J/ψ decays. Actually there may be a difference in the two data sets taken at different center-of-mass energies. To estimate the corresponding uncertainty, the tracking efficiency in the J/ψ MC sample was varied by −0.5% for the tracks with momentum greater than 350 MeV/c and −1.0% for the tracks with momentum less than 350 MeV/c. The change of the correction factor due to this variation leads to a change of 0.40% in the number of J/ψ events, which is taken as the systematic uncertainty due to the tracking efficiency. C. Fitting of J/ψ peak From the fit of the J/ψ peak we obtain the fitting errors 0.03% and 0.08% in the determination of ǫ ψ ′ data and ǫ ψ ′ mc , respectively, In addition, the uncertainties caused by changing the signal function, background shape, and the fitting range in the fit of the invariant mass spectra recoil π + π − are also taken into account. To estimate the uncertainty caused by a change of the signal function, we also fit the J/ψ peak with the J/ψ histogram shape, which is obtained from the recoil mass spectrum of π + π − in ψ ′ → π + π − J/ψ, J/ψ → µ + µ − . The change of the result is just 0.04%. The uncertainty by changing the background shape from a second order Chebychev function to a first order one is less than 0.16%. If the fitting range is changed from [3.07, 3.13] GeV/c 2 to [3.08, 3.12] GeV/c 2 , the change is 0.32%. The total systematic uncertainty from the fitting, 0.37%, is the sum of these errors in quadrature. D. Background uncertainty In the calculation of the number of J/ψ events, the background events from QED processes, cosmic rays and beam-gas events are estimated by normalizing the selected continuum events by the integrated luminosities according to Eq. (3). Therefore the statistical error of the number of events selected from the continuum data, 0.69% and the uncertainties due to the measurement of the integrated luminosities of the J/ψ data and continuum data, 1.5%, must be taken into account in the background uncertainty. As discussed in Section 3, normalizing cosmic rays and beam-gas events with the energy-dependent factor for QED processes is not correct. To account for this, the difference, 1.1%, between the determinations of the background normalized with and without the energydependent factor is taken as a background uncertainty. To estimate the background uncertainty from the beam-gas events, we select samples of beam-gas events in the J/ψ and continuum data. 
The candidate beamgas events must have one or two charged tracks with the points of closest approach satisfying |V z | > 5 cm and |V z | < 15 cm and the visible energy less than 0.5 GeV. 26844720 events are selected from the J/ψ data, corresponding to 93470 events expected in the continuum data by normalizing with the integrated luminosities. Compared with 96230 beam-gas events directly selected from the continuum data, the difference between them, 3%, is taken as a background uncertainty. By adding all the above effects in quadrature, the total background uncertainty is 3.6%. Since the background ratio in J/ψ → inclusive events is 3.5%, the systematic uncertainty in the number of J/ψ events is 0.13%. E. Dependence on charged multiplicity In order to reduce the number of beam-gas events in this analysis, the selected J/ψ → inclusive events are required to have at least two good charged tracks (N good ≥ 2). The uncertainty from this requirement is estimated by varying the charged multiplicity requirement from N good ≥ 2 to N good ≥ 3. For comparison, the values obtained for the two cases are listed in Table II. The change of the number of J/ψ events, 0.76%, is taken as the systematic uncertainty of the charged multiplicity requirement. F. Noise mixing Noise in the BESIII detector has been included in the realization of MC simulation by mixing in noise from events recorded using a random trigger for both J/ψ and ψ ′ data. To determine the systematic error associated E (GeV) with the noise realization in MC simulation, the ψ ′ MC sample is reconstructed with the higher noise from J/ψ data, and the change of the detection efficiency correction factor, 0.4%, is taken as a systematic uncertainty in the determination of the number of J/ψ events. In this analysis 106 million of ψ ′ events are used to determine the detection efficiency. However, the noise level was not entirely stable during the period of ψ ′ data taking. To check the effect of the changing noise level on the detection efficiency, the ψ ′ data and the MC sample are divided into three sub-samples, and the detection efficiency is determined for each of the three samples. The change of the detection efficiency and the correction factor lead to a change in the number of J/ψ events. The maximum change, 0.28%, is taken as the systematic uncertainty associated with the changing noise levels. The total systematic uncertainty from the noise mixing effect is estimated to be 0.49% by adding the individual error contributions in quadrature. G. Estimation of N J/ψ with the sideband ofVz The reliability of the determination of the number of J/ψ events obtained from the above method is checked by applying another method entailing two different procedures. One difference concerns the selection of inclusive events, which is essentially the same as in Section 2. except for the requirement on the track vertex position V z along the beam direction. Here we determine the average positionV z of the charged tracks. The signal region for inclusive events is defined by |V z | < 4 cm. This requirement is also applied in the determination of the detection efficiency and the correction factor. TheV z distribution is shown in Fig. 14. The second difference is in the background estimation. The numbers of background events from cosmic rays and beam-gas interactions are estimated from theV z sideband, defined by 6 < |V z | < 10 cm. The subtraction of the sideband events from the events in the signal region removes the cosmic ray and beam-gas events. 
The sideband subtraction does not account for the QED background events since the V z distribution is similar to that of inclusive events from J/ψ decays. However the continuum data allows us to estimate the contribution of QED processes in the inclusive events selected from J/ψ data. The same event selection is applied to the continuum data to select the QED events. After subtracting the cosmic rays and beam-gas events estimated with the same sideband method as for J/ψ data, the amount of background events from the QED processes in the selected inclusive events is estimated by normalizing according to the integrated luminosities of the continuum and J/ψ data according to Eq. (3). The same procedures have been used to determine the detection efficiency from ψ ′ data and the correction factor with MC samples. At last, the number of J/ψ events is determined to 224.9 million. The change in the number of J/ψ events with respect to the previous method discussed in chapter 6 is 0.20% and is taken as a systematic uncertainty. H. Selection efficiency uncertainty of two soft pions According to a MC study, the selection efficiency of soft pions, ǫ π + π − , recoiling against J/ψ in ψ ′ → π + π − J/ψ depends on the multiplicity of the J/ψ decay. To study its effect on the determination of the number of J/ψ events, ψ ′ → π + π − (π 0 π 0 )J/ψ, J/ψ → µ + µ − , 2(π + π − ) events are selected from data and inclusive MC samples, and then re-weighting factors are determined for J/ψ decaying into different multiplicities by comparing the corresponding selection efficiency of soft pions between data and MC. The difference between the results with and without re-weighting, 0.34%, is taken as the uncertainty due to the selection efficiency uncertainty of the soft pions in ψ ′ → π + π − J/ψ. The systematic uncertainties from different sources studied above are listed in Table III. The total systematic uncertainty, 1.24%, is the sum of them added in quadra- FIG. 11. The π momentum distributions from ψ ′ data (dots with error bars) and MC simulation of ψ ′ → π + π − J/ψ, J/ψ → µ + µ − (histogram). ture. VIII. SUMMARY Using J/ψ → inclusive events, the number of J/ψ events collected with the BESIII detector in 2009 is determined to be where the error is the systematic error and the statistical one is negligible. ACKNOWLEDGMENTS The BESIII collaboration thanks the staff of BEPCII and the computing center for their hard efforts. FIG. 12. The invariant mass recoiling against selected π + π − pairs for ψ ′ data. A clear peak corresponding to ψ ′ → π + π − J/ψ, J/ψ → inclusive is seen. The curves are the results of the fit described in the text. 13. The invariant mass recoiling against selected π + π − pairs for ψ ′ data. Here, in addition to selection criteria on the pion pairs, the remaining portion of the event must satisfy the selection criteria for J/ψ → inclusive events. The curves are the results of the fit described in text.
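As a numerical cross-check, the sketch below recombines the values quoted in Sections III-VII. The explicit forms of Eqs. (1) and (2) are not legible in the extracted text, so the natural reading N_J/psi = (N_sel - N_bg)/(eps_trig eps_data f_cor) with f_cor = eps_J/psi,mc / eps_psi',mc is assumed; it reproduces both the 225.3 million events and the quoted 1.24% (about 2.8 million) total systematic uncertainty.

    # Cross-check of the headline number and its systematic error using only
    # values quoted in the text; the forms of Eqs. (1)-(2) are assumed.
    import math

    n_sel, n_bg = 174.28e6, 5.96e6
    eps_data, eps_mc_jpsi, eps_mc_psip, eps_trig = 0.7391, 0.7669, 0.7587, 1.0

    f_cor = eps_mc_jpsi / eps_mc_psip
    n_jpsi = (n_sel - n_bg) / (eps_trig * eps_data * f_cor)
    print(f"f_cor = {f_cor:.4f},  N_Jpsi = {n_jpsi/1e6:.1f} million")   # ~225.3

    systematics = {   # relative uncertainties (%) from Section VII
        "MC model": 0.49, "tracking": 0.40, "J/psi peak fit": 0.37,
        "background": 0.13, "charged multiplicity": 0.76,
        "noise mixing": 0.49, "Vz-sideband method": 0.20, "soft pions": 0.34,
    }
    total = math.sqrt(sum(v * v for v in systematics.values()))
    print(f"total systematic: {total:.2f}%  ->  +/- {n_jpsi/1e6 * total/100:.1f} million")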
2012-07-12T07:53:50.000Z
2012-07-12T00:00:00.000
{ "year": 2012, "sha1": "c519bc66bc9a82b3de72395d8a27b9b177fc1294", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c519bc66bc9a82b3de72395d8a27b9b177fc1294", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
208453297
pes2o/s2orc
v3-fos-license
Rhodamin-B increases Hippocampus cell apoptosis in Rattus norvegicus-oxidative stress related to Parkinson, Alzheimer, cancer, hyperactive, anterograde amnesia diseases Rhodamine B is a textile dye compounds containing chlorine (Cl-), alkylating (CH3-CH3), Poly Aromatic Hydrocarbons (PAH) which activate the enzyme cytochrome P-450 as well as the structure of quinone which is very redox that leads to the formation of Reactive Oxygen Species (ROS). ROS increases induce apoptosis of the intrinsic pathway. The imbalance ratio between BAX and BCL-2 stimulates apoptosis in Hippocampus tissue. “The selected design was “the post test only control group” using twenty-eight Wistar female Rattus norvegicus mouse age of 10-12 weeks. There was a significant difference (p-value <0.05) of total BCL-2 expression between the control group to the treatment group. Correlation coefficient of 0.945 indicates that the level of the relationship/ correlation is very strong category. Increasing doses of Rhodamine B was given, accompanied by the decrease in the expression of BCL-2. Correlation coefficient of -0.731 indicates that the level of the relationship/ correlation belongs strong category. It is concluded that Rhodamin B has been verified as capable to increase the expression of BAX and to reduce the expression of BCL-2 in hippocampus tissue on Rattus norvegicus. Introduction Although the ban on the use of Rhodamine B dyes has been regulated in the Minister of Health Regulation No. 722/ Menkes/Per/VI/88 and Permenkes RI No. 239/Menkes/Per/V/85 but the wider community still uses Rhodamin B in food processing. 1,2 Rhodamin B is known to have toxic, carcinogenic and genotoxic effects. 3 The effect of Rhodamin B if it is consumed for a long time is that it will continuously cause irritation in the respiratory, eye disorders, bladder cancer, liver damage, heart, lymph, kidney, pancreas, central nervous system and brain damage. 4,5 Rhodamine B oral exposure in male and female rats for 30 days caused structural damage to liver histology, renal macroscopic changes, histopathology of the kidney proximal tubules of male mice, low radioactivity uptake in the brain, inhibited growth, diarrhea, death, lymphatic liver cancer, bladder dilation, liver poisoning, loss of body weight, body cell volume, total serum protein, discoloration, degradation of hair and skin become red and rough, abnormal behavioral changes (aggressive, cannibal), intrauterine death, impaired growth and internal abnormalities of the fetus, cell damage to liver and kidneys in pregnant rats Rattus norvegicus. [6][7][8][9][10][11][12][13] Rhodamin B has a quinon structure which is a very redox active molecule causing the formation of ROS (reactive oxygen species) which leads to oxidative stress and cell injury to target cells (CNS, hypothalamus, Adenohypophisis). Increased ROS in the blood induces the cell apoptosis phase. Induction of apoptosis in hippocampal tissue is involved in various diseases, especially Alzheimer's. 14,15 Hippocampus is a small organ located in the medial temporal lobe of the brain and is an important part of the limbic system which is the area that regulates emotions, memory, especially long-term memory and plays an important role in spatial navigation. Damage to the hippocampus can cause memory loss and difficulties in building new memories. In Alzheimer's disease, hippocampus is one of the first areas of the brain that is affected causing confusion and memory loss so that it is often seen in the early stages of this disease. 
15 In this study, researchers focused on the effects of Rhodamine B on the expression of BAX (Bcl-2 Antagonist X) and BCL-2 (B-cell lymphoma-2) in the hippocampal tissue in Rattus norvegicus. Materials and Methods Rattus norvegicus Wistar strain female mouse, healthy, aged 10-12 weeks found in the experimental animal raising unit (UPHP) with the consideration that mice are experimental mammals (laboratory animals) as a research model before being treated in humans. The experimental animals were all adapted first at room temperature of 22-25 °C for 11 days at the UB Pharmacology Laboratory, Faculty of Medicine. Before being given treatment, rats were synchronized with the estrous cycle with the whitten method for 5 days. Then, the rats were grouped into the control group (given standard feed ad libitum), group I After 36 days of exposure to Rhodamin B, all rats were anesthetized using inhaled chloroform, the mice were turned off and hippocampus tissue samples were taken by surgery on the brains of rats and put into 10% formalin solution and carried out the hippocampus tissue removal slice by observing the mouse brain anatomy for making slides. Analysis of BAX and BCL-2 expression in hippocampus tissue using immunohistochemical staining was conducted and observed with a microscope. Brown hippocampus tissue shows BAX and BCL-2 expression, while if it is purple, it shows no expression of BAX and BCL-2. The calculation of BAX and BCL-2 expressions based on weak color intensity (1), medium (2), strong (3), very strong (4) using the help of OliVIA software. Data results are then processed using SPSS for Windows software. This study has passed ethics at the Ethics Committee of the Medical Faculty of the University of Brawijaya. Results Based on the results of testing the assumptions of normality and assumption of homogeneity, the assumptions of homogeneity are not fulfilled. Furthermore, testing was conducted to determine the effect of nonparametric administration of Rhodamin B on the expression of BAX using the Kruskal-Wallis test (Tables 1-3). Discussion Based on the results of BAX total expression data analysis in Table 1 using the Kruskal-Wallis test, the p-value of 0.0000 was smaller than α=0.05 (p<0.05). Thus, from this test, it can be concluded that there was a significant effect of Rhodamine B in increasing BAX expression. In other words, there are significant differences in the expression of BAX due to the different doses of Rhodamin B. The control group had the lowest average BAX expression. Significantly increased expression of BAX was shown by Rhodamine B at all doses. This was indicated by the average value of ± SD group given Rhodamin B containing different letters from the control group. Meanwhile, the highest average expression of BAX was shown by the group of rats exposed to Rhodamin B at a dose of 18 mg/ 200 gBW, which was 18.71 but not significantly different from the administration of 9 mg/ 200 gBW. This shows that the highest increase in BAX expression is shown in doses of 9 mg and 18 mg. Based on the results of the analysis using ANOVA, it was obtained a p-value of 0,000, smaller than α=0.05 (p<0.05). So, from this test, it can be concluded that there is a significant effect of Rhodamin B on the decrease in BCL-2. In other words, there is a significant difference in BCL-2 due to different doses of Rhodamine B. 
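A minimal sketch of the Kruskal-Wallis comparison described above, using scipy in place of SPSS, is given below; the intensity scores are made-up illustrative data (graded 1-4, seven rats per group), not the study's measurements, and the lowest Rhodamine B dose is not legible in the extracted text.

    # Kruskal-Wallis comparison of expression-intensity scores across dose
    # groups (illustrative data only, not the study's measurements).
    from scipy import stats

    control   = [1, 1, 2, 1, 2, 1, 1]   # 7 rats per group (28 rats / 4 groups)
    dose_low  = [2, 2, 3, 2, 3, 2, 2]   # lowest dose not legible in the text
    dose_9mg  = [3, 3, 4, 3, 3, 4, 3]   # 9 mg / 200 g BW
    dose_18mg = [4, 3, 4, 4, 4, 3, 4]   # 18 mg / 200 g BW

    h, p = stats.kruskal(control, dose_low, dose_9mg, dose_18mg)
    print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")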
In the pairwise comparisons between treatment groups, a p-value of less than 0.05 was obtained in all cases except the comparison between the 9 mg/200 gBW and 18 mg/200 gBW doses. This shows that there is no difference in average BCL-2 between the groups of rats given Rhodamine B at 9 mg/200 gBW and at 18 mg/200 gBW; in other words, Rhodamine B at 9 mg/200 gBW and at 18 mg/200 gBW had the same effect on the decrease in BCL-2. Based on these results, the dose of Rhodamine B that reduced BCL-2 to its lowest point was 18 mg/200 gBW, although it was not significantly different from the 9 mg/200 gBW dose, and Rhodamine B at all dose levels significantly reduced BCL-2. Testing the correlation between Rhodamine B administration and BAX expression yielded a correlation coefficient of 0.945 with a p-value of 0.000. At the 5% error level (α = 0.05), the p-value obtained was less than 0.05 (p < 0.05), so there is a significant relationship between Rhodamine B administration and BAX expression. Testing the correlation between Rhodamine B administration and BCL-2 expression yielded a correlation coefficient of -0.731 with a p-value of 0.000. At the 5% error level (α = 0.05), the p-value obtained was less than 0.05 (p < 0.05), so there is a significant relationship between Rhodamine B administration and BCL-2 expression. Conclusions Rhodamine B increases the expression of BAX and reduces the expression of BCL-2 in hippocampus tissue of Rattus norvegicus.
2019-11-07T15:26:21.302Z
2019-10-30T00:00:00.000
{ "year": 2019, "sha1": "4f8725167b61b4de4458d8946b6287e4540c705e", "oa_license": "CCBYNC", "oa_url": "https://www.publichealthinafrica.org/index.php/jphia/article/download/1175/497", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b98056bd900f48abdb40e61fb0a5edf33ba6890d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry" ] }
119710998
pes2o/s2orc
v3-fos-license
How regular can maxitive measures be? We examine domain-valued maxitive measures defined on the Borel subsets of a topological space. Several characterizations of regularity of maxitive measures are proved, depending on the structure of the topological space. Since every regular maxitive measure is completely maxitive, this yields sufficient conditions for the existence of a cardinal density. We also show that every outer-continuous maxitive measure can be decomposed as the supremum of a regular maxitive measure and a maxitive measure that vanishes on compact subsets under appropriate conditions. INTRODUCTION Maxitive measures, also known as idempotent measures, are defined similarly to finitely additive measures with the supremum operation ⊕ in place of the addition +. In [33, Chapter I], we studied these measures and the related integration theory based on the Shilkret integral. We were especially interested in the idempotent analogue of the Radon-Nikodym theorem. In this process, we limited our considerations to maxitive measures taking values in the set of nonnegative real numbers. However, this may be quite restrictive for further applications. Let us have a look at classical analysis to understand why. In this framework, it is well known that the Radon-Nikodym theorem holds on certain classes of Banach spaces (e.g. reflexive spaces or separable dual spaces). To formulate such a theorem one needs to extend first the Lebesgue integral to measurable functions taking values in these spaces. This is what the Bochner integral does. More generally, a Banach space B has the Radon-Nikodym property if, for all measured spaces (Ω , A , µ) with finite measure µ and for all B-valued measures m on A , absolutely continuous with respect to µ and of bounded variation, there is a Bochner integrable map f : Ω → B such that m(A) = A f dµ, for all A ∈ A . This property has been at the core of a great amount of research and the source of many discoveries on the structure of Banach spaces. One hopes to obtain analogous results in the framework of idempotent analysis. Idempotent analysis is a well established theory dating back to Zimmermann [38] and popularized by Maslov [24]; the term was coined by Kolokoltsov and made its first appearance in the papers by Kolokoltsov and Maslov [18] and [19]. So one must have such a powerful tool as the Bochner integral available, that would integrate M-valued functions, for some "idempotent space" M. One could think of M e.g. as a complete module over the idempotent semifield R max + = (R + , max, ×), but the appropriate structure still needs to be clarified. Jonasson [15] on the one hand, Akian [2] on the other hand, both worked in this direction. However, Akian chose to integrate dioid-valued (rather than module-valued) functions, and Jonasson remained in the additive paradigm. In order to prepare these kinds of future applications -which are not directly in the scope of this paper-we study domain-valued maxitive measures after Akian. A domain is a partially ordered space with nice approximation properties. Well-known examples of domains are R + , R + , and [0, 1], which are commonly used as target sets for maxitive measures. Many attempts were made for replacing them by more general ordered structures (see Maslov [24], Greco [11], Liu and Zhang [23], de Cooman et al. [8], Kramosil [21]). 
Nevertheless, the importance of supposing these ordered structures continuous in the sense of domain theory for applications to idempotent analysis or fuzzy set theory has been identified lately. Pioneers were Akian [1,2] and Heckmann and Huth [12,13]. See Lawson [22] for a survey on the use of domain theory in idempotent mathematics. In the case of Banach spaces, it must also be remarked that the Radon-Nikodym property is deeply linked with the Krein-Milman property, which says that every bounded closed convex subset is the closed convex hull of its extreme points. It was proved that the latter property implies the Radon-Nikodym property (see e.g. Benyamini and Lindenstrauss [6, Theorem 5.13]), and the converse statement remains an open problem. Similar problems could be raised in the idempotent case. Another application we have in mind is the idempotent analogue of the Choquet integral representation theorem. In classical analysis, regular measures play a key role; in [33] we have seen that this is also the case in the idempotent framework. This explains why we deal here with regularity properties of maxitive measures, defined on the Borel σ-algebra B of some topological space. On a Hausdorff space, a maxitive measure ν is regular if it satisfies both following conditions for all B ∈ B: • inner-continuity: ν(B) = sup{ν(K) : K ∈ K , K ⊂ B}, • outer-continuity: ν(B) = inf{ν(G) : G ∈ G , G ⊃ B}, where sup A (resp. inf A) is the supremum (resp. infimum) of a set A, and K denotes the collection of compact subsets and G that of open subsets. We prove a series of conditions that guarantee inner- and/or outer-continuity of maxitive measures. This generalizes results due to Norberg [28], Murofushi and Sugeno [27], Vervaat [37], O'Brien and Watson [30], Akian [2], Puhalskii [34], Miranda et al. [25]. Regularity is an important feature of maxitive measures for a different reason: a regular maxitive measure ν admits a cardinal density in the sense that, for some map c, we have ν(B) = sup{c(x) : x ∈ B} for all Borel sets B. Numerous authors have been interested in conditions that imply the existence of such a density, hence we make the choice to revisit this problem as exhaustively as possible. For some of our proofs we follow the steps of Riečanová [35], who focused on the regularity of certain S-valued set functions, for some conditionally complete ordered semigroup S satisfying a series of conditions, including the separation of points by continuous functionals. We do not use directly her results, for our approach better suits the special case of domain-valued optimal measures. Indeed, a domain is not necessarily a semigroup, nor is it conditionally complete in general. As a last step, we prove a decomposition theorem for outer-continuous maxitive measures, that takes the following form: ν = ⌊ν⌋ ⊕ ⊥ν, where ⌊ν⌋ is a regular maxitive measure called the regular part of ν, and ⊥ν is a maxitive measure vanishing on compact subsets under appropriate conditions. This has the consequence that ν is regular if and only if ⊥ν is zero. The paper is organized as follows. In Section 2 we recall basic domain theoretical concepts. In Section 3 we recall the notion of L-valued maxitive measure, for some domain L. Then we specifically consider maxitive measures defined on the collection of Borel subsets of some topological space; we suppose that the space at stake is quasisober, a condition that generalizes the usual Hausdorff assumption.
We prove that regularity and tightness of maxitive measures are linked with different conditions such as existence of a cardinal density, complete maxitivity, smoothness with respect to compact saturated or closed subsets, inner-continuity. We focus on the case where the topological space is metrizable and the maxitive measure is optimal in Section 4. In Section 5 we prove the announced decomposition theorem. REMINDERS OF DOMAIN THEORY A nonempty subset F of a partially ordered set or poset (L, ) is filtered if, for all r, s ∈ F , one can find t ∈ F such that t r and t s. A filter of L is a filtered subset F such that F = {s ∈ L : ∃r ∈ F, r s}. We say that s ∈ L is way-above r ∈ L, written s ≫ r, if, for every filter F with an infimum F , r F implies s ∈ F . The way-above relation, useful for studying lattice-valued upper-semicontinuous functions (see Gerritse [9] and Jonasson [15]), is dual to the usual way-below relation, but is more appropriate in our context. Coherently, our notions of continuous posets and domains are dual to the traditional ones. We thus say that the poset L is continuous if ↑ ↑ r := {s ∈ L : s ≫ r} is a filter and r = ↑ ↑ r, for all r ∈ L. Also, L is filtered-complete if every filter has an infimum. A domain is then a filtered-complete continuous poset. In this paper, every domain considered will have a bottom element 0. A poset L has the interpolation property if, for all r, s ∈ L with s ≫ r, there exists some t ∈ L such that s ≫ t ≫ r. In continuous posets it is well known that the interpolation property holds, see e.g. [10,]. This is a crucial feature that is behind many important results of the theory. For more background on domain theory, see the monograph by Gierz et al. [10]. Remark 2.1. To show that an inequality r ′ r holds in a continuous poset L, it suffices to prove that, whenever s ≫ r ′ , we have s r. This argument will be used many times in this work. MAXITIVE MEASURES ON TOPOLOGICAL SPACES 3.1. Preliminaries on topological spaces. Let E be a topological space. We denote by G (resp. F ) the collection of open (resp. closed) subsets of E. The interior (resp. the closure) of a subset A of E is written A o (resp. A). The specialization order on E is the quasiorder defined on E by x y if x ∈ G implies y ∈ G, for all open subsets G. A subset C of E is irreducible if it is nonempty and, for all closed subsets F, F ′ of E, C ⊂ F ∪ F ′ implies C ⊂ F or C ⊂ F ′ . The closure of a singleton yields an irreducible closed set. We say that E is quasisober if every irreducible closed subset is the closure of a singleton. We denote by Q the collection of (not necessarily Hausdorff) compact saturated subsets of E. For instance, ↑ x ∈ Q, for all x ∈ E. We shall need the following theorem, which emphasizes the role of compact saturated subsets for non-Hausdorff spaces. Theorem 3.1 (Hofmann-Mislove). In a quasisober topological space, the collection Q of compact saturated subsets is closed under finite unions and filtered intersections. Moreover, if (Q j ) j∈J is a filtered family of elements of Q such that j∈J Q j ⊂ G for some open G, then Q j ⊂ G for some j ∈ J. The strong form of the Hofmann-Mislove theorem (see [14]) asserts an isomorphism between the family of compact saturated subsets of a quasisober space and the family of Scott-open filters on the lattice of open subsets of the space; Theorem 3.1 is then a simple corollary. Keimel and Paseka [17] provided another proof, and Kovár [20] extended the result to generalized topological spaces. 
See also Jung and Sünderhauf [16] for an application to proximity lattices, and Norberg and Vervaat [29] for an application, in a non-Hausdorff setting, to the theory of capacities which dates back to Choquet [7]. 3.2. The Borel σ-algebra. Let E be a topological space. The Borel σalgebra of E is the σ-algebra B generated by G and Q; its elements are called the Borel subsets of E. We also write K for the collection of compact Borel subsets of E. If E is T 1 (in particular if E is Hausdorff), then K = Q. In the case where E is T 0 , K contains all singletons {x}, for {x} is the intersection of the compact saturated subset ↑x with the closure x of {x}. In the general case (E not necessarily T 0 ), we let [x] denote the compact Borel subset ↑x ∩ x. This is the equivalence class of x with respect to the equivalence relation x ∼ y ⇔ x = y ⇔↑x =↑y. Notice that ↑[x] =↑x for all x. The quotient set E 0 = E/∼ equipped with the quotient topology is then a T 0 space, and the quotient map π 0 : x → [x] is continuous. Let Q ′ be a compact saturated subset of E 0 , and let us show that Q := This shows that Q is compact; the proof that Q is saturated is not difficult and left to the reader. The remaining assertions directly follow from the continuity of π 0 . Lemma 3.4. For all Borel subsets Proof. For the first assertion, let A be the collection of all B ∈ B such that π 0 (B) is a Borel subset of E 0 and π −1 0 (π 0 (B)) = B. It is easily seen that A is a σ-algebra. Moreover, the combination of Lemma 3.2 and Lemma 3.3 implies that A contains both G and Q. As a consequence, A = B. With the help of Lemma 3.2, the second assertion of the lemma can be proved similarly. In the following result, the concept of reflection refers to category theory. , where E 0 = E/∼ is the quotient set equipped with the quotient topology, which is a T 0 topology, and ∼ is the equivalence Proof. We have to show first that, for all T 0 spaces X and all continuous maps f : E → X, there exists a unique continuous map f 0 : E 0 → X such that the following diagram commutes: Then The uniqueness of f 0 then directly follows. To conclude the proof, firstly recall that, by Lemma 3. is both surjective and injective thanks to Lemma 3.4, and we easily deduce that it is an isomorphism of Borel σ-algebras. In this paper, the maxitive measures considered will only be defined on the Borel σ-algebra of the topological space E at stake. By the previous theorem, we thus may assume that E be T 0 without loss of generality. However, we believe it interesting, from a formal point of view, to explicitly work in a non-T 0 setting. So we make the choice to keep on with general (not necessarily T 0 ) topological spaces; for that reason the following result will be useful. Regular maxitive measures. Let E be a topological space with Borel σ-algebra B, and let L be a filtered-complete poset with a bottom element, that we denote by 0. An L-valued maxitive measure (resp. σ-maxitive measure, completely maxitive measure) on B is a map ν : B → L such that ν(∅) = 0 and, for every finite (resp. countable, arbitrary) family {B j } j∈J of elements of B such that j∈J B j ∈ B, the supremum of {ν(B j ) : j ∈ J} exists and satisfies ν( Note that this definition implies that the image ν(B) is a sup-subsemilattice of L containing 0 (even though L itself need not be a sup-semilattice), and the corestriction of ν to ν(B) is a sup-semilattice morphism. 
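The displayed formulas for the definition of maxitive measures just given, and for the regularity conditions stated in the next paragraph, appear to have been dropped during extraction. A hedged LaTeX reconstruction is given below, assuming the standard reading and using \(\bigoplus\) for suprema and \(\bigwedge\) for infima (the symbols themselves are assumed notation):

```latex
% Maxitivity (finite, countable, or arbitrary families, as appropriate):
\nu\Bigl(\bigcup_{j \in J} B_j\Bigr) \;=\; \bigoplus_{j \in J} \nu(B_j),
\qquad \{B_j\}_{j \in J} \subset \mathcal{B},\ \bigcup_{j \in J} B_j \in \mathcal{B}.

% Regularity (for all Borel subsets B):
\nu(B) \;=\; \bigoplus \{\nu(K) : K \in \mathcal{K},\ K \subset B\}   % inner-continuity
\qquad\text{and}\qquad
\nu(B) \;=\; \bigwedge \{\nu(G) : G \in \mathcal{G},\ G \supset B\}.  % outer-continuity
```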
An L-valued maxitive measure ν on B is regular if it satisfies both following relations for all B ∈ B: • inner-continuity: • outer-continuity: We shall also use weakened notions of inner-and outer-continuity for an L-valued maxitive measure ν on B: • weak inner-continuity: • weak outer-continuity: The following result ensures that the terminology we use is consistent. Proof. The easy proof is left to the reader. The notion of weak inner-continuity can be characterized as follows. Proof. First we suppose that ν is weakly inner-continuous. Let O be a family of open subsets of E, and let G = O. The identity we need to show will be satisfied if we prove that ν Conversely, suppose that Equation (1) holds for all families O of open subsets of E. To prove that ν is weakly inner-continuous, fix some G ∈ G , let u be an upper-bound of {ν + (K) : K ∈ K , K ⊂ G}, and let s ≫ u. Since L is continuous, we get u ν(G), and the result follows. The following lemma characterizes weak outer-continuity. Proof. We let c + : ). Let u ∈ L be an upper-bound of {c + (x) : x ∈ K} and let s ≫ u. Then, for each x ∈ K, s ≫ c + (x), so there is some open subset G x ∋ x such that s ν(G x ). Since K is compact and x∈K G x ⊃ K, we can extract a finite subcover and write k j=1 G x j ⊃ K. Thus, s ν + (K). Since L is continuous, this implies that u ν + (K), so that ν + (K) is the least upper-bound of {c + (x) : x ∈ K}. Now we prove the announced equivalence. First assume that ν([x]) = ν + ([x]) for all x ∈ E. Then, for every compact Borel subset K, It happens that we recover regularity if we combine weak inner-and weak outer-continuity. Proposition 3.11. Assume that L is a domain. Then every L-valued maxitive measure on B that is both weakly outer-continuous and weakly innercontinuous is regular and completely maxitive. Proof. Let ν be an L-valued weakly outer-continuous and weakly innercontinuous maxitive measure. Assume that, for some is a compact Borel subset, and K x ⊂ B by Lemma 3.6. So s ≫ ν(K x ) = ν + (K x ) since ν is weakly outercontinuous, hence there exists some G x ∋ x such that s ν(G x ). Since ν is weakly inner-continuous, we deduce s ν(G), where G = x∈B G x ⊃ B, so that s ν + (B), a contradiction. To prove that ν is completely maxitive, we let (B j ) j∈J be some family of Borel subsets such that B := j∈J B j ∈ B. We also take an upperbound u of {ν(B j ) : j ∈ J} and some s ≫ u. Since ν is outer-continuous there exists, for all j ∈ J, some G j ⊃ B j such that s ν(G j ). By Equation (1) in Lemma 3.9 we get s ν( j∈J G j ), so that s ν(B). Since L is continuous we obtain u ν(B). As a consequence, ν(B) is the least upper-bound of {ν(B j ) : j ∈ J}. This proves that ν is completely maxitive. Corollary 3.12. Assume that L is a domain. Then, on a second-countable topological space, every L-valued weakly outer-continuous σ-maxitive measure is regular. Proof. Let E be second-countable and ν be an L-valued weakly outercontinuous σ-maxitive measure on B. Since E is second-countable, there is some countable base U for the topology G . To prove that ν is regular, we want to use Proposition 3.11, thus we show that ν is weakly innercontinuous. So let O be a family of open subsets of E, and let G = O. . By Lemma 3.9, ν is weakly inner-continuous, and the proof is complete. An L-valued maxitive measure ν on B is called saturated if for all K ∈ K we have ν(K) = ν(↑ K). 
Inner-continuous maxitive measures and weakly outer-continuous maxitive measures are always saturated, while weak inner-continuity does not imply saturation in general. Note however that saturation is always satisfied if the space E is T 1 . Variants of Propositions 3.13 and 3.16 below were formulated and proved in [1] in the case where E is a Hausdorff topological space and L is a continuous lattice, see also [13,Proposition 13]. Another variant of the following result is [29, Proposition 2.2(a)], which treats the case of realvalued capacities on non-Hausdorff spaces. Proof. Let E be quasisober, let ν be an L-valued weakly outer-continuous maxitive measure on B, and let (Q j ) j∈J be a filtered family of compact saturated subsets of E. Recall that Q = j∈J Q j is compact saturated, since E is assumed quasisober. The set {ν(Q j ) : j ∈ J} admits ν(Q) as a lower-bound. Take another lower-bound ℓ, and let G ∈ G such that G ⊃ Q. By the Hofmann-Mislove theorem (Theorem 3.1), there is some j 0 ∈ J such that G ⊃ Q j 0 . Thus, ν(G) ν(Q j 0 ), so that ν(G) ℓ, for all G ⊃ Q. Since ν is weakly outer-continuous, we deduce that ν(Q) ℓ. We have shown that ν(Q) is the infimum of {ν(Q j ) : j ∈ J}. This proves that ν is Q-smooth. Now assume that E is locally compact quasisober, and let ν be an Lvalued Q-smooth saturated maxitive measure on B. If Q is a compact saturated subset, then by local compactness of E there exists a filtered family (Q j ) j∈J of compact saturated subsets with j∈J Q j = Q and Q ⊂ Q o j . Since ν is Q-smooth, this implies that i.e. ν(Q) = ν + (Q), for all Q ∈ Q. Let us show that ν and ν + coincide on K . If K ∈ K , then ν(K) = ν(↑ K) since ν is saturated. Also, because G ⊃↑ K if and only if G ⊃ K for all open subsets G, we have ν + (↑K) = ν + (K). So this gives ν(K) = ν(↑K) = ν + (↑K) = ν + (K), and we have shown that ν is weakly outer-continuous. Remark 3.14. The first part of Proposition 3.13 remains true for L-valued weakly outer-continuous monotone set functions. Tightness. Tightness of maxitive measures can be defined by analogy with tightness of additive measures, so we say that an The following lemma slightly extends [10, Theorem III-2.11], which states that every continuous sup-semilattice is join-continuous. Lemma 3.15. Assume that L is a domain. Let F be a filter of L and t ∈ L such that, for all f ∈ F , t ⊕ f exists. Then t ⊕ F exists and satisfies t ⊕ F = (t ⊕ F ). Proof. The subset t ⊕ F is filtered, hence has an infimum. Suppose that (t ⊕ F ) is not the least upper-bound of {t, F }. Then there exists some A maxitive measure is QF -smooth if it is Q-smooth and F -smooth. The second part of the following result was proved by Puhalskii [34, Theorem 1.7.8] in the case where L = R + . Proof. Let E be quasisober, let ν be an L-valued tight weakly outer-continuous maxitive measure on B, and let (F j ) j∈J be a filtered family of closed subsets of E. Fix some compact Borel subset K, and let F = j∈J F j . Then F j ∩ K and F ∩ K are compact, hence ↑ (F j ∩ K) and ↑(F ∩ K) are compact saturated. Let us show that The inclusion ⊃ is clear. For the reverse inclusion, let x ∈ E such that x / ∈↑(F ∩ K). Then there is some open subset G containing F ∩ K such that x / ∈ G. As a consequence, the compact subset K is included in the union of the directed family (G ∪ (E \ F j )) j∈J , so there exists some j 0 ∈ J such that K ⊂ G ∪ (E \ F j 0 ). This rewrites as F j 0 ∩ K ⊂ G, so that ↑(F j 0 ∩ K) ⊂ G. Hence, x / ∈↑(F j 0 ∩ K), and Equation (3) is proved. 
By Proposition 3.13, ν is Q-smooth, so j∈J ν(↑(F j ∩ K)) = ν(↑(F ∩ K)). For the converse statement, first assume that E is locally compact quasisober, and let ν be an L-valued QF -smooth saturated maxitive measure on B. Then ν is weakly outer-continuous by Proposition 3.13. Moreover, the collection {E \ K o : K ∈ K } has empty intersection since E is locally compact, is filtered, and is made of closed subsets. Since ν is F -smooth, this implies K∈K ν(E \ K o ) = 0. If t ≫ 0, this gives some K ∈ K with t ν(E \ K o ), so that t ν(E \ K). Since L is continuous, we conclude that ν is tight. Now if E is a completely metrizable space, the second part of the proof of Proposition 3.13 still applies to show that an L-valued F -smooth maxitive measure ν is weakly outer-continuous, for every compact subset K is the filtered intersection of some family (F j ) j of closed subsets with K ⊂ Proof. Let E be Polish, and let ν be an L-valued F -smooth σ-maxitive measure on B. Since E is separable metrizable, every open subset is Lindelöf, hence the restriction of ν to G satisfies Equation (1), i.e. ν is weakly inner-continuous. Now the result follows from Proposition 3.16. Proof. Let E be σ-compact and metrizable, and let ν be an L-valued Ksmooth σ-maxitive measure. Since E is σ-compact, there is a sequence (K n ) n of compact subsets such that E = n K n . Each of these K n is then a Polish space because E is metrizable. By Proposition 3.18, this implies that the restriction ν n of ν to the Borel σ-algebra of K n is (tight) regular, hence completely maxitive by Proposition 3.11. As a consequence, if B ∈ B, then ν(B) = n ν(B ∩K n ) = n ν n (B ∩K n ) = n x∈B∩Kn ν n ({x}) = n x∈B∩Kn ν({x}) = x∈B ν({x}), so ν is completely maxitive. Since complete maxitivity implies weak inner-continuity by Lemma 3.9, it suffices to prove that ν is weakly outer-continuous in order to conclude that ν is regular. By Lemma 3.10, we only need to show that ν({x}) = ν + ({x}) for all x. So let s ≫ ν({x}). Then G := {y ∈ E : s ≫ ν({y})} contains x. We prove that G is open, i.e. that F = E \ G is closed. Let (y n ) be a sequence in F with y n → y. If Q n is the topological closure of {y n ′ : n ′ n}, then Q n is compact, and n Q n = {y} since E is Hausdorff. Since ν is Q-smooth (i.e. K -smooth), this gives n ν(Q n ) = ν({y}). If y / ∈ F , then s ≫ n ν(Q n ), hence there is some n 0 such that s ≫ ν(Q n 0 ). Therefore, s ≫ ν({y n 0 }), i.e. y n 0 / ∈ F , a contradiction. Thus, y ∈ F . Since E is metrizable, it is first-countable, so this proves that F is closed. So G is open, contains x, and s ν(G) because ν is completely maxitive. We deduce that s ν + ({x}) and, with the continuity of L, that ν({x}) ν + ({x}). From Lemma 3.10 we conclude that ν is weakly outercontinuous, hence regular. for all B ∈ B. As a special case, consider e.g. a finite set E with the discrete topology. Then ν admits a cardinal density defined by c(x) = ν({x}), since B = x∈B {x}, where the union runs over a finite set. In the general case, this reasoning may fail, for we may have ν({x}) = 0 for all x ∈ E, even with a nonzero ν, but it is tempting to consider c + (x) := ν + ([x]) instead, where ν + is defined in Example 3.7. This idea, which appeared in [12,13] and [2], is effective and leads to Theorem 3.23. A map c : E → L is upper-semicontinuous (or usc for short) if, for all t ∈ L, the subset {t ≫ c} is open. We refer the reader to Penot and Théra [31], Beer [5], van Gool [36], Gerritse [9], Akian and Singer [3] Proof. 
Assume that ν is tight and outer-continuous, and let t ≫ 0. Since {ν(E \ K) : K ∈ K } is filtered with an infimum equal to 0, the interpolation property implies that there is some K ∈ K such that t ≫ ν(E \ K). Since ν is outer-continuous, we obtain t ≫ x / ∈K c + (x). This shows that {t ≫ c + } is a subset of K. Since c + is usc, {t ≫ c + } is also closed, hence compact. Conversely, assume that ν is weakly inner-continuous and that c + is upper compact. Let K t denote the compact closed subset {t ≫ c + }. Then Since ν is weakly inner-continuous and G t = E \ K t is open for all t ≫ 0, we have by Lemma 3.10 The following theorem summarizes many of the above results and highlights the relation between the existence of a density, regularity, and complete maxitivity. Part of it is due to [2,Proposition 3.15] and [32, Theorem 3.1]. See also Norberg [28], Vervaat [37]. We also refer the reader to (1) ν is regular, (2) ν has a usc cardinal density, (3) ν is outer-continuous and completely maxitive, (4) ν is weakly outer-continuous and weakly inner-continuous, (5) ν is weakly outer-continuous and σ-maxitive, (6) ν is weakly outer-continuous, (7) ν is Q-smooth and saturated, (8) ν is Q-smooth, weakly inner-continuous, and saturated, (9) ν is Q-smooth, σ-maxitive, and saturated. , hence ν has a cardinal density. The reverse assertion is straightforward. If E is second-countable, use Corollary 3.12. If E is locally compact, use Proposition 3.13. If E is σ-compact and metrizable, use Corollary 3.20. If E is locally compact Polish, use Proposition 3.13 and Corollary 3.20. Corollary 3.24. Assume that L is a domain and E is a quasisober space. If ν is a regular maxitive measure on E, then c + (x) = ν([x]) for all x ∈ E, and c + is the maximal (usc) cardinal density of ν. Theorem 3.25. Assume that L is a domain and E is a quasisober space. Let ν be an L-valued maxitive measure on B. Also, consider the following assertions: (1) ν is tight regular, (2) ν has an upper compact usc cardinal density, REGULARITY OF OPTIMAL MEASURES ON METRIZABLE SPACES Let E be a topological space with Borel σ-algebra B. An L-valued maxitive measure ν on B is continuous from above if ν(B) = n ν(B n ), for all B 1 ⊃ B 2 ⊃ . . . ∈ B such that B = n B n . An optimal measure is a continuous from above σ-maxitive measure. The following result generalizes the Murofushi-Sugeno-Agebko theorem (see [33,). Proof. Let ν be an L-valued continuous from above maxitive measure on B, and let us show that ν is σ-maxitive. So let B 1 , B 2 , . . . ∈ B and B = n B n , and let C n = B \ B ′ n with B ′ n = B 1 ∪ · · · ∪ B n . By continuity from above, we have n ν(C n ) = 0. Let u be an upper bound of {ν(B n ) : n 1}, and let s ≫ u. Then s ν(C n 0 ) for some n 0 , so Riečanová [35] studied the regularity of certain S-valued set functions, for some conditionally-complete ordered semigroup S satisfying a series of conditions, among which the separation of points by continuous functionals. In the following lines we closely follow her approach, although we do not use directly her results, for our approach better matches the special case of L-valued optimal measures. In particular, L is not assumed to be a semigroup, nor to be conditionally-complete. Unlike Riečanová, we do not examine the case of optimal measures defined on the collection of Baire (rather than Borel) subsets of a metrizable space, but we believe that this could be done with little additional effort. The following lemma is based on [10, Proposition IV-3.1]. 
It allows one to generalize most theorems that hold for [0, 1]-valued maxitive measures to domain-valued maxitive measures. Proof. Let E be a metrizable space and d be a metric generating the topology. Let ϕ : L → [0, 1] be a map preserving filtered infima and arbitrary existing suprema, and let ν ϕ be the map defined on B by ν ϕ (B) = ϕ(ν(B)). The properties of ϕ imply that ν ϕ is an optimal measure. Let A be the collection of all B ∈ B such that ν ϕ (G\F ) 1/2, for some open subset G and closed subset F such that G ⊃ B ⊃ F . Let us show first that A contains all open subsets, so let B be open. Let F n = {x ∈ E : d(x, E \ B) n −1 }. Then (F n ) n 1 is a nondecreasing family of closed subsets whose union is B. Since ν ϕ is an optimal measure, ν ϕ (B \ F n ) tends to 0 when n ↑ ∞. Thus, we can find some closed subset F ⊂ B with ν ϕ (B \ F ) 1/2, and this proves that B ∈ A . We now show that A is a σ-algebra. Clearly, ∅ ∈ A , and B ∈ A implies E \ B ∈ A . Let (B n ) n 1 be a family of elements of A . We prove that B = n B n ∈ A . For all n, there are some G n ⊃ B n ⊃ F n satisfying ν ϕ (G n \ F n ) 1/2. If G = n G n and F = n F n , then G ⊃ B ⊃ F and ν ϕ (G \ F ) 1/2. However, F is not closed in general. So let H n denote the closed subset n k=1 F k . As above, (H n ) n 1 is a nondecreasing family of closed subsets whose union is F , so we can find some closed subset H n 0 ⊂ F with ν ϕ (F \ H n 0 ) 1/2, hence ν ϕ (G \ H n 0 ) 1/2. Consequently, A coincides with the Borel σ-algebra B. Corollary 4.4. Assume that L is a domain. Then, on a separable metrizable space, every L-valued optimal measure is regular. Proof. Let E be a separable metrizable space, and let ν be an L-valued optimal measure on B. Then ν is outer-continuous by Proposition 4.3. As a separable metrizable space, E is second-countable, so ν is also innercontinuous by Corollary 3.12, hence regular. (1) if E is second-countable regular Hausdorff; (2) if E is σ-compact and metrizable; (3) if E is Polish (this results from the definition of a Polish space!). Part of the following result is included in Proposition 3.18. Proposition 4.7. Assume that L is a domain. Then, on a Polish space or on a σ-compact and metrizable space, every L-valued optimal measure is tight regular. The proof is inspired by that of [34,Theorem 1.7.8]. Proof. We only have to prove tightness. First assume that E is a Polish space and let ν be an L-valued optimal measure on B. Since E is separable, there is some sequence (x n ) dense in E. Let ǫ ≫ 0. Let F n,p = B 1,p ∪ . . . ∪ B n,p , where B n,p is the closed ball of radius 1/p and center x n . Then, for all p, E = n F n,p . Since ν is optimal, there is some n p such that ǫ ν(E \ F np,p ). Let K ǫ denote the subset p F np,p . For all α > 0, K ǫ can be covered by a finite number of balls of radius at most α, i.e. K ǫ is totally bounded. Since E is completely metrizable, K ǫ is compact. Moreover, ǫ ν(E \ K ǫ ), for all ǫ ≫ 0. Thus, ν is tight. For the case where E is σ-compact and metrizable, a similar proof can be given, for one can write E = n F n,p , with F n,p = F n,1 compact. DECOMPOSITION OF MAXITIVE MEASURES In [32], we developed part of the following material in a non-topological framework. Here E is again a quasisober topological space, and B denotes its collection of Borel subsets. A poset is a lattice if every nonempty finite subset has a supremum and an infimum. 
A lattice is distributive if finite infima distribute over finite suprema, and conditionally-complete if every nonempty subset bounded above has a supremum. According to an assumption made all along this paper (see Section 2), a continuous conditionallycomplete lattice, which is a special case of domain, will always have a bottom element 0. The following proposition confirms that the terminology is appropriate. Proposition 5.2. Assume that L is a continuous conditionally-complete lattice and E is a quasisober space. Let ν be an L-valued maxitive measure on B. Then the regular part of ν is a regular maxitive measure on B, with density c + : x → ν + ([x]). Moreover, ⌊⌊ν⌋⌋ = ⌊ν⌋. The following theorem states the existence of a singular part ⊥ν of a maxitive measure ν. Then (I t ) t∈L is a nondecreasing family of ideals of B, and distributivity of L implies that {t ∈ L : B ∈ I t } is a filter, for every B ∈ B. From [32, Proposition 2.3], we deduce that ⊥ν is a maxitive measure. Since B ∈ I t for t = ν + (B), we have ν + (B) ⊥ν(B), thus ν + ⌊ν⌋ ⊕ ⊥ν. For the reverse inequality, one may use the fact that continuity implies join-continuity (see Lemma 3.15). The fact that ⊥ν is the least maxitive measure satisfying Equation (4) is straightforward. CONCLUSION AND PERSPECTIVES It would be interesting to reformulate the results of this work in terms of Baire subsets rather than Borel subsets.
2013-01-20T21:30:07.000Z
2013-01-20T00:00:00.000
{ "year": 2013, "sha1": "13dd33802619b42986c54de48928cac072ab748e", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.topol.2013.01.007", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "13dd33802619b42986c54de48928cac072ab748e", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
233310338
pes2o/s2orc
v3-fos-license
Effects of Intermittent Fasting on Response to Tyrosine Kinase Inhibitors (TKIs) in Patients With Chronic Myeloid Leukemia: An Outcome of European LeukemiaNet Project The overall survival of patients with Chronic Myeloid Leukemia (CML) treated by using tyrosine kinase inhibitors (TKIs) is very close to that of the healthy population. However, little is known about the effect of specific measures such as intermittent fasting, especially during Ramadan period. A 3-year retrospective study was conducted to evaluate the effect of fasting on patients with CML receiving TKIs by evaluating certain clinical, hematological, and molecular parameters. A total of 49 patients were eligible, with a median age of 46 years (range: 22-86), of these 36 (73.5%) were males and 13 (26.5%) were females. Twenty-seven (55%) patients are Middle Eastern, while 16 (32.7%) from the Indian subcontinent, and 6 (12.3%) Africans. Imatinib was the most common TKI; used in 25 patients (51%). The mean White blood cells (WBCs), neutrophils, and BCR-ABL were found to be reduced after fasting compared to before and during with statistical difference. The use of TKIs while fasting did not result in significant changes in hematological nor BCR-ABL levels in our study. Patients who wish to practice intermittent fasting may be reassured in this regard, yet physicians can adopt the safe trial approach, where they allow the patients to fast, but with instructions such as when to break fasting. Introduction Chronic Myeloid Leukemia (CML) is a clonal bone marrow myeloid disorder that represents a prototype model for targeted therapy and cancer. Identification of a BCR-ABL1 gene fusion is diagnostic of CML and levels of BCR-ABL1 transcripts detectable following treatment with tyrosine kinase inhibitors (TKIs) can be measured using standardized protocols that allow patients to be managed consistent with expert guidelines. 1 Best practice recommendations from the National Comprehensive Cancer Network (NCCN) and the European LeukemiaNet (ELN) networks define optimal responses at key milestones in treatment allowing for early signs of poor adherence or resistance to treatment to be detected early and allow for effective clinical intervention. 2 Dynamic changes in cellular metabolism are known to accompany oncogene activation to support the growth of a rapidly proliferating clone. Cancer cells are believed to be vulnerable to nutrient deprivation and clinical trials investigating the effects of intermittent fasting are ongoing. 3 Restricting calorie intake could offer potential benefits for clinical use of TKIs, however, compliance remains a significant challenge. In Islam, Ramadan fasting represents one of the 5 pillars of creed and is considered as a mandatory religious duty so a robust social structure exists to support adherence to an intermittent fasting regimen. 4 However, Different types of intermittent fasting are recognized, including 5:2 diet; fasting for 2 days per week, Eat-Stop-Eat diet; a 24-hour fast, once or twice a week, Alternate-Day Fasting; fast every other day, The Warrior Diet; Fast during the day, eat a huge meal at night, Spontaneous Meal Skipping; Skip meals when convenient, and the 16/8 Method; fast for 16 hours per day. 5 And fasting during Ramadan displays some overlap with 16/8 method (fasting for 16 hours each day) as in both instances there are recurring periods of fasting and feeding. 
Intermittent fasting regimens offer the potential to influence metabolic regulation via effects on (a) circadian biology, (b) the gut microbiome, and (c) modifiable lifestyle behaviors. This type of nonpharmacological intervention is cost-effective and associated with a low risk of adverse events and multiple public health benefits. 5,6 No guidelines or standardized protocols exist that can help physicians advise CML patients who wish to practice intermittant fasting. Furthermore, researchers in Malaysia found that Muslim patients with CML disclosed that they had frequently skipped or changed doses of nilotinib to fulfill their religious obligations during Ramadan. 7 The main purpose of this study was to evaluate the effect of fasting on CML patients receiving TKIs by evaluating their clinical course, hematological parameters, and BCR-ABL1 levels. Materials and Methods A retrospective study was conducted by reviewing medical records for adult Muslim patients with CML in a tertiary cancer center in Qatar. Inclusion criteria include: Exclusion criteria include patients who did not fulfill the above criteria. 49 patients fulfilled the above-mentioned criteria, and therefore we evaluated: 1. Complete Blood Count before, during, and after fasting. 2. BCR-ABL1 transcript levels before, during, and after fasting using real-time quantitative Polymerase chain reaction (RT-qPCR) analysis. The reported results were based on triplicate measurements calculated as the ratio of total BCR-ABL1 transcripts to total ABL1 transcripts, reported as a percentage ratio on the International Scale (IS). 8 3. Any loss of hematological response or clinical evidence of disease progression (e.g., increase in spleen size during fasting). Statistical Analysis Anonymous data were collected and entered into a standard electronic database excel sheet designed in view of study design and objectives. Descriptive statistics were used to summarize demographic, clinical, hematological parameters, and other characteristics of the participants. The normally distributed data and results were reported with mean and standard deviation (SD); the remaining results were reported with median and interquartile range (IQR). Categorical data were summarized using frequencies and percentages. Preliminary analyses were conducted to examine the distribution of the data variables using the Kolmogorov-Smirnov test. We used the repeated-measures analysis of variance (ANOVA) to compare various hematological parameters measured before, during, and after intermittent fasting periods. And when the repeated-measures ANOVA was significant (P < 0.05), we performed post hoc tests with the Wilcoxon matched-pairs signed-rank test. One-way ANOVA/Kruskal Wallis test was applied to compare various hematological parameters measured among different TKIs, ethnicity and age groups. Associations between 2 or more qualitative variables were assessed using the Chi-square (w 2 ) test, Fisher Exact or Yates corrected Chi-square tests as appropriate. Quantitative data and outcome measures between the 2 and more than 2 independent groups were analyzed using unpaired 't' test (Mann Whitney U test for non-normal data). Pictorial presentations of the key results were made using appropriate statistical graphs. All P values presented were 2-tailed, and P values <0.05 was considered as statistically significant. All Statistical analyses were done using statistical packages SPSS 22.0 (SPSS Inc. 
Chicago, IL) and Epiinfo (Centers for Disease Control and Prevention, Atlanta, GA) software. Patients' Characteristics During the study period of 2016, 2017 and 2018, a total of 49 patients fulfilled the criteria of the study, with mean age 46.8 + 14.51 years (median 46 years; IQR 36 to 58 years), of these 36 (73.5%) were males and 13 (26.5%) were females, with a ratio of 2.8:1. In this cohort of patients studied, 27 (55.1%) patients are Middle Eastern, while 16 (32.7%) patients came from the Indian subcontinent and 6 (12.2%) patients were black Arab ethnicity. The main demographic, clinical and other characteristics of the study population are summarized in (Table 1). Multiple TKIs were used, imatinib was the most common TKI; used in 25 patients (51%), nilotinib in 15 patients (30.6%), dasatinib in 8 patients (16.3%) and ponatinib used only in 1 patient ( Figure 1A). GI prophylaxis was reported in <16% of the cases ( Figure 1B). Association Between Various Hematological Parameters With Gender, Age, and Ethnicity Unpaired t and Mann Whitney U statistical tests revealed that the mean (or median for skewed data distribution) values of most of the hematological parameters among males were observed to be significantly higher compared to females in each before, during and after fasting periods. Age was found to be negatively correlated (low to moderate correlation) with all hematological parameters and BCR-ABL1 levels in each period before, during and after fasting (Pearson correlation (r) ranges from À0.10 to À0.46; P ¼ 0.001 to 0.643). Most of the hematological parameters had lower values in a higher age group (age >60 years) compared to age groups (age 40 years and age >40 to 60 years) ( Figure 2, Box Plot). Hemoglobin (Hgb) levels at both before and after fasting were observed to be significantly higher in patients from the Middle Eastern and Indian subcontinent compared to patients from black Arab ethnicities (Kruskal Wallis test P < 0.05). Whereas, Eosinophils measured before fasting was found to be significantly lower in patients from Middle Eastern compared to the Indian subcontinent and black Arab ethnicities (Kruskal Wallis test P ¼ 0.009). Moreover, other hematological parameters measured did not differ significantly across different ethnicities (P > 0.05) (Figure 3, Box Plot). Relationship Between Various Hematological Parameters and TKI Across 3 Periods The mean and SD along with median and IQR for various CBC parameters studied across different types of TKIs and Figure 4, Box plot). TKI and GI prophylaxis and types of food. There was no statistically significant association observed between various TKIs types and food i.e. large and light meals (P > 0.05). Though statistically insignificant (P > 0.05), GI prophylaxis was noted to be more in imatinib (n ¼ 5, 22.7%) and nilotinib (n ¼ 3, 20%) compared to dasatinib (0%) group respectively. Furthermore, none of the patients reported any TKI related gastrointestinal toxicity during fasting. Discussion Different types of fasting have been reported to impact the antitumor effect of chemotherapy. Prolonged fasting (PF, water only for more than 2 days) or of fasting-mimicking diets (FMDs) was shown to enhance the activity of chemotherapy and radiotherapy in preclinical cancer models. 9-12 However, whether these results can be applied in cancer patients is still to be determined. 
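The before/during/after comparisons reported in the Results above were run in SPSS (repeated-measures ANOVA followed by Wilcoxon matched-pairs post hoc tests). A minimal sketch of the same workflow with hypothetical WBC values, using statsmodels' AnovaRM as a stand-in for the SPSS procedure:

```python
# Hypothetical per-patient WBC counts (x10^9/L) before, during, and after
# fasting; values are illustrative only, not the study's data.
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from scipy.stats import wilcoxon

wbc = pd.DataFrame({
    "patient": list(range(10)) * 3,
    "period":  ["before"] * 10 + ["during"] * 10 + ["after"] * 10,
    "wbc":     [7.1, 6.8, 8.0, 7.5, 6.9, 7.7, 8.2, 7.0, 6.5, 7.3,
                6.9, 6.7, 7.8, 7.2, 6.8, 7.5, 8.0, 6.9, 6.4, 7.1,
                6.5, 6.4, 7.4, 6.9, 6.6, 7.2, 7.6, 6.6, 6.2, 6.8],
})

# Repeated-measures ANOVA across the three fasting periods.
rm = AnovaRM(wbc, depvar="wbc", subject="patient", within=["period"]).fit()
print(rm)

# Post hoc pairwise Wilcoxon matched-pairs signed-rank tests.
wide = wbc.pivot(index="patient", columns="period", values="wbc")
for a, b in [("before", "during"), ("before", "after"), ("during", "after")]:
    stat, p = wilcoxon(wide[a], wide[b])
    print(f"{a} vs {b}: W = {stat:.1f}, p = {p:.4f}")
```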
Despite the importance and relevance of this topic and given the burden of cancer, very few scholars have investigated the impact of fasting on cancer patients, with most of these trials being qualitative. A small trial conducted in breast cancer patients found that women who fasted 24 hours before and after TAC (docetaxel, doxorubicin, and cyclophosphamide) chemotherapy cycles experienced less hematological toxicity than women assigned to a regular diet. 13 Moreover, in a small case series, patients who underwent chemotherapy with fasting reported a reduction in chemotherapy fatigue and weakness while fasting. 10 Dorff et al investigated 20 cancer patients who are receiving platinum-based chemotherapy regimens with 3 different fasting periods (24, 48 and 72 hours) in a dose-escalation and feasibility study. Fasting was found to be safe and feasible for cancer patients with some preliminary evidence of reduced DNA-damage evident in host leukocytes after chemotherapy exposure for subjects who fasted for 72 hours. These studies were limited by the small sample size. 14 Up to date, limited data is published exploring the effect of intermittent fasting on cancer patients. In King Fahad Medical City, a small trial was conducted to evaluate the safety and feasibility of combining chemotherapy and intermittent fasting during Ramadan. Eleven patients were followed, side effects and blood counts were compared with values measured in response to a similar dose of chemotherapy, given 2 weeks after the end of Ramadan. The authors concluded that combining fasting and chemotherapy during the month of Ramadan was well tolerated and safe and patients reported fewer side effects. 15 Several factors can make CML as one of the most challenging conditions to manage during intermittent fasting. First, TKIs that are used in CML have distinctive pharmacokinetic properties in which some of these agents might interact with food and ultimately affect drug absorption. For instant imatinib and bosutinib should be taken with food to reduce gastrointestinal irritation. In pharmacokinetic studies, imatinib administration with food resulted in an absolute bioavailability of 98%; achieves maximum plasma concentration approximately 2-4 hours in normal conditions; while ingestion a high-fat meal can prolong the uptake and therefore the time to reach a maximum plasma concentration (t max). 16 The administration of dasatinib and ponatinib is not affected by food, while nilotinib needs to be taken without food. 17,18 Furthermore, optimum administration of TKIs and dosing schedule during the holy month is not well defined until now, particularly in the case of nilotinib, patients are counseled not to eat 2 hours before and one hour after taking the medication. Especially in Ramadan, the twice-daily dosing of nilotinib in the absence of food can be challenging for CML patients. 7,[19][20][21] In addition, drug interactions can be problematic for many patients especially those who experience dyspeptic symptoms, which requires empirical therapy with proton pump inhibitors (PPIs). Drug compliance is also considered an essential factor. As noticed by some scholars, drug compliance during intermittent fasting tends to fall and patients are less likely to adhere to their treatment schedules. 
[20][21][22] A Malaysian study assessing tyrosine TKIs use in CML patients 7 reported that Muslim patients receiving TKI therapy frequently skipped or changed doses of nilotinib, which needed to be taken twice daily, in order to fulfill their religious obligations during Ramadan. The potential effect of this poor compliance on treatment success is concerning. In this study, patients also reported going against the advice of doctors when advised not to fast. [20][21][22] Our study is the first qualitative study that reports the effect of intermittent fasting on CML patients who are receiving TKIs. In our study, the use of TKIs during fasting did not result in significant changes in hematological indices nor in BCR-ABL levels. It has been reported in the literature that the effect of TKIs is enhanced by fasting due to the inhibition of the mitogen-activated protein kinase signaling pathway. 11 For instance, fasting was shown to potentiate the effect of sorafenib in decreasing hepatocellular carcinoma cell proliferation and glucose uptake, while normalizing the expression levels of genes commonly altered by sorafenib itself in hepatic cancer. 23 Our results confirm the feasibility of fasting accompanying TKIs as patients' responses were not affected during intermittent fasting, Moreover, there was no statistically significant association observed between various TKIs types and food i.e. large and light meals. Though, statistically insignificant, a few patients reported using PPIs during fasting, with most patients receiving imatinib. There is a distinct paucity of scientific and clinical data investigating the effect of fasting in hematological malignancies. 20,21,24 From the limited data available, Muslim patients with hematological malignancies are less likely to fast than those with solid tumors; nevertheless, the data also clearly demonstrates that a proportion of patients who are on active treatment will attempt to practice intermittent fast. The management of CML patients who are willing to fast is complex and should involve a multidisciplinary team, who should be aware of the importance of cultural and spiritual beliefs of Muslim patients and their abilities to fast. Limitations of the Study 1. This is a pilot phase, so our team is planning to do a multi-center study with an appropriate sample size. 2. Only one type of intermittent fasting studied (16/8), other types of intermittent fasting need to be explored, which is planned to be an extension phase of this study. 3. Assessment of side effects during fasting shall be considered to investigate the impact of fasting on treatment tolerance and quality of life. Conclusion TKIs use during intermittent fasting is poorly studied. This study demonstrates that the use of TKIs during fasting did not result in significant changes in hematological indices or BCR-ABL1 levels. Patients who wish to practice intermittent may be reassured in this regard. Yet, physicians can adopt the safe trial approach, where they allow the patients to fast, but with instructions regarding when and how to break fasting. Authors' Note All data generated or analyzed during this study are included in this published article. The article describes an original article. This project has been approved by the IRB in Hamad Medical Corporation (MRC-01-18-411) and has been performed in accordance with the ethical standards noted in the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. No consent was required to participate. 
TKI use during fasting is poorly studied. The findings of this study highlight that the use of TKIs during fasting did not result in significant changes in hematological indices or BCR-ABL levels. Therefore, patients with CML who wish to practice intermittent fasting should receive clear instructions regarding when and how to break the fast.
2021-04-21T06:16:50.999Z
2021-04-20T00:00:00.000
{ "year": 2021, "sha1": "f388c0cd80add729a8531d4d4112012a2bd6d0ab", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/10732748211009256", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d0d6b75dd39f67594fd43c98d48d43599f755dc5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233395087
pes2o/s2orc
v3-fos-license
An Open Source Low-Cost Device Coupled with an Adaptative Time-Lag Time-Series Linear Forecasting Modeling for Apple Trentino (Italy) Precision Irrigation Precision irrigation represents those strategies aiming to feed the plant needs following the soil’s spatial and temporal characteristics. Such a differential irrigation requires a different approach and equipment with regard to conventional irrigation to reduce the environmental impact and the resources use while maximizing the production and thus profitability. This study described the development of an open source soil moisture LoRa (long-range) device and analysis of the data collected and updated directly in the field (i.e., weather station and ground sensor). The work produced adaptive supervised predictive models to optimize the management of agricultural precision irrigation practices and for an effective calibration of other agronomic interventions. These approaches are defined as adaptive because they self-learn with the acquisition of new data, updating the on-the-go model over time. The location chosen for the experimental setup is a cultivated area in the municipality of Tenna (Trentino, Alto Adige region, Italy), and the experiment was conducted on two different apple varieties during summer 2019. The adaptative partial least squares time-lag time-series modeling, in operative field conditions, was a posteriori applied in the consortium for 78 days during the dry season, producing total savings of 255 mm of irrigated water and 44,000 kW of electricity, equal to 10.82%. Introduction Traditional irrigation consumes great amounts of water and electrical energy because it applies water uniformly over every part of the field without considering the variability of soil and crop different water needs. Therefore, following the soil structure and the vegetation, some parts of the field often result to be over-irrigated, while other parts are under-irrigated [1]. As opposed to uniform irrigation, precision irrigation involves the principle of variable rate applications based on plant needs and environmental conditions to ensure optimal production with minimal impact and resource use [2,3]. Generally, the development of modern society is associated with population growth and the consequent necessity to increase the production of world agricultural food. As reported by Alt et al. [4], this means that agricultural production processes should become more efficient, and there is an inevitability of digitalization of all agricultural systems. This is made possible using intelligent technologies (e.g., artificial intelligence, robotics, Internet of things (IoT), unmanned aerial vehicles, etc.), which could increase productivity, reduce production costs, and reduce labor requirements [5]. Generally, in smart irrigation practices, the application of sensors and networking units represents an efficient solution to handle the limitedness of essential resources such as water and consequently increase crop yields. The sensors suitable for this purpose are primarily meant to be operated remotely taking advantage of IoT ecosystems while taking precise measurements relevant for irrigation planning such as the amount of water, crop temperature, and humidity to build a robust supply chain ecosystem and make decisions [6]. To assess the impact of irrigation heterogeneity on crop yield and soil water management practices, some studies integrated forecasting modeling approaches to provide innovative frameworks and predict agronomic and economic impacts. 
An example is the work carried out by Bellvert et al. [7], who developed a model for scheduling irrigation by simulating the actual amount of water evapotranspired per vine, in order to assess the amount of water to be applied when different irrigation strategies are adopted. The study of Xu et al. [8] developed a decision model interfacing data on agricultural production, crop characteristics, and irrigation regularity, implemented online for remote web services. The water balance method, which balances the water inputs and outputs of the soil-plant system, is widely used to schedule irrigation in arboriculture [9]. The soil-plant water need is represented by crop evapotranspiration (ETc), which is obtained by multiplying the reference crop evapotranspiration (ETo) by the crop coefficient (Kc), which relates the water needs of the soil-plant system to climate conditions [10,11]. In light of the literature, the present study aims at the creation of highly accurate adaptive multivariate models and the development of a decision support system (DSS) based on a mobile app, an open source IoT LoRa weather station, and ground sensors automatically updating field measurements. Finally, the integration of the obtained adaptive supervised predictive models allowed optimal precision irrigation scheduling, therefore effectively calibrating other agronomic interventions. This approach is defined as "adaptive" because it self-learns with the acquisition of new data, updating the model on the go over time. Experimental Field and Setup The location chosen for the experimental setup is a cultivated area in the municipality of Tenna, Trentino, Alto Adige region, Italy. The area is about 50 hectares, located in a mountainous area at 569 m a.s.l., and mainly dedicated to the production of apples and berries. Tenna is characterized by a warm and temperate climate with high rainfall throughout the year, including the driest month. According to Köppen and Geiger [14], the climate is classified as oceanic temperate (Cfb); the average temperature is 10.4 °C and the average annual rainfall is 862 mm. The difference in precipitation between the driest and the wettest month is 56 mm. Average temperatures vary by 21.3 °C over the course of the year (Table 1). In recent decades, the area has been affected by climate change; Table 2 shows the average monthly temperatures of 2019, which highlight sharply increased values. The irrigation of the area is managed by a local consortium of small agricultural producers, the Consorzio di Miglioramento Fondiario di Tenna. The experimental setup consists of two experimental parcels in the Tenna area (Figure 1). The east parcel, with an area of 0.21 ha, is cultivated with Golden apples on M9 rootstock, planted in 2015, with a planting distance of 0.80 × 3.00 m. The west parcel, with an … Both parcels are managed through dripline irrigation systems. The water is collected from Caldonazzo lake, at an elevation difference of 187 m. The difference in altitude is overcome by means of an electric 75 kW pump with a flow of 26 L m−1.
At the top of the pumping system, a cistern of 200 m3 collects the water pumped from the lake; from there, the water drops by gravity to the parcels. Two other 9 kW pumps in parallel, with a flow of 18 L min−1, guarantee the water supply to the upper fields of the consortium. The conventional irrigation scheduling adopted for all of the consortium's parcels is 1 h a day with a water volume of 5.25 mm m−2. The west experimental parcel, managed through the conventional irrigation strategy, was used as the control for the irrigation model comparison. Both soil parcels are characterized by a sandy loam texture in the first 90 cm of depth. The organic matter (OM) content in T1 (east parcel) is considered moderately low at 30 cm and 60 cm and low at 90 cm, while in T2 (west parcel) the soil OM is moderately low at 30 cm and very low (not quantifiable) at 60 cm and 90 cm. Nevertheless, in T2, a high content of organic matter was detected in the topsoil. Both soils are considered noncalcareous. The available water capacity (AWC) tends to be low, consistent with the sandy classification of the two soils. This confirms the need for a specific differential irrigation strategy in terms of volumes and number of interventions. Due to the presence of skeleton (particles with a diameter >2 mm), especially in the T2 (west) soil, particular attention was paid to soil sampling. The soil is managed with inter-row grassing and chemical weeding on the row. The root depth of the apple trees, grafted on the weak rootstock M9, reaches a maximum of 0.40 m. A weather station and soil moisture sensors connected via the LoRaWAN protocol were installed in each experimental parcel. The installed weather stations (Table 3) are Davis Vantage Pro2 models, engineered to handle harsh environments and deliver data with scientific precision. They were used to collect the following data: air temperature and humidity, dew point, wind speed and direction, barometric pressure, and rainfall. The installed model is also capable of communicating data through a mobile Internet connection to the web database, and it is integrated with a LoRaWAN gateway to collect data from remote field sensors. In addition, extra sensors and accessories can be added to the Vantage Pro2 station, allowing the design of sophisticated environmental monitoring systems in an adapted configuration. The Open Source Soil Moisture LoRa Device The soil moisture LoRa device is based on open source technology (software and hardware). It is equipped with soil moisture sensors based on resistive technology, which exploit the relationship between the ohmic variation and the soil water tension, together with air quality and rainfall height sensors. The soil moisture model used is the Watermark Soil Moisture Sensor 200SS (The Irrometer Company Inc., Riverside, CA, USA), a well-established method of assessing soil moisture in crops with good value for money. The acquired information is transferred by radio using LoRaWAN (long-range wide-area network) technology. This transmission protocol was chosen because it guarantees greater autonomy of the device and a range of several kilometres. The whole system was optimized to reduce energy consumption, making it possible to power it through a LiPo battery recharged by a small solar panel (Figure 2).
The device was designed as a single electronic board (Figure 2), integrating different numbers and typologies of sensors. The device can integrate many types of sensors, for example those typical of a weather station (e.g., rain gauge and air quality), can monitor the status of the battery and solar panel, and has a dedicated connection for debugging. Finally, the communication system, currently based on LoRaWAN, can be replaced for other applications with different technologies such as Wi-Fi and XBee. The selected soil moisture sensors (Figure 2) consist of a pair of highly corrosion-resistant electrodes embedded in a granular matrix. A current is applied to the sensor to obtain a resistance value. The sensor meter correlates the resistance to centibars (kilopascals) of soil water tension. The sensor has also been designed as a permanent sensor, positioned in the matrix to be monitored. Finally, an important feature for the reliability of the reading is the gypsum installed internally, which provides a buffering effect against the salinity levels normally present in crops and irrigated agricultural landscapes. The devices installed for the experimental activities of this study are equipped with three moisture sensors (installed at 30, 60, and 90 cm depth) and a rain gauge. The data acquisition and historicization infrastructure is composed of the LoRaWAN gateway, for a direct connection of the sensors via LoRaWAN transmission, and an IoT remote service for the creation of the dashboard (Figure 3) and the historicization of data.
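The paper does not publish its resistance-to-tension conversion. As a hedged illustration only, the sketch below applies a widely cited calibration for Watermark 200SS sensors (Shock et al., 1998); the constants and the temperature-compensation form are assumptions taken from that external literature, not from this study.

```python
def watermark_kpa(resistance_kohm: float, soil_temp_c: float = 24.0) -> float:
    """Convert a Watermark 200SS resistance reading (kOhm) into soil water
    tension (kPa), using the Shock et al. (1998) calibration.
    Constants are assumed from that published calibration, not this paper;
    the form is valid roughly for readings between 1 and 30 kOhm."""
    r, t = resistance_kohm, soil_temp_c
    return (4.093 + 3.213 * r) / (1.0 - 0.009733 * r - 0.01205 * t)

if __name__ == "__main__":
    for r in (1.0, 5.0, 10.0, 20.0):
        print(f"{r:5.1f} kOhm -> {watermark_kpa(r):6.1f} kPa")
```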
The device transmits the packet containing the read values to the gateway with which it has established a LoRaWAN connection. The gateway then forwards the packet to the remote service, which historicizes and displays it according to the created mask. This service is accessible from any Internet access point. The approximate cost of the open source device with the configuration described above is around EUR 350.00, which can be considered low cost compared with a commercial device on the market. The developed open source device was operationally compared with the Sentek Drill & Drop soil moisture and temperature sensors installed in the Tenna parcels. These are capacitive frequency domain reflectometry (FDR) sensors with a probe length of 90 cm and a measurement step of 10 cm in depth. Data Acquisition and App The sensors in the experimental area operate through an IoT LoRaWAN local network and transmit data to a web-based software framework that manages the back end of the system, the cloud databases, and the interface with final users via a web application (Figure 4). The front-end software was developed as a mobile progressive web application (PWA) and was implemented using the Ionic and Angular frameworks (Figure 5). The app is available for Android and iOS and, alternatively, is accessible via a common browser. The app allows final users to access the data collected by the sensors and manage their cultivation and agricultural activities, and it allows admin users to monitor water and energy consumption. Moreover, the web application allows admin users to check the precision irrigation model outputs and evaluate irrigation decisions. The back-end web framework is a server-side REST API (Representational State Transfer Application Programming Interface), implemented in LoopBack. It provides a common interface for the front-end application to access the different data sources of the project: sensor data, weather station data, and weather forecast data. It also provides the interface to store and retrieve, in a MongoDB web database, user data such as parcel administrative and geographical data, field data, the agricultural activities log, etc.
The back-end web framework (Figure 6) also has a dedicated service to execute the predictive modeling for precision irrigation. The precision irrigation model's software implementation consists of MATLAB code runnable via the MATLAB Runtime Compiler. The model execution needs the field terrain data stored in the project web database and updated weather forecast data. A server-side Node.js application runs independently at a model-dependent scheduled time, from once a day to once every 6 h; it collects data from the weather service and the database and stores the new irrigation prediction results, while also preserving the previous predictions. The irrigation predictions are immediately and automatically available in the mobile web application and can be used to adapt the irrigation strategies. They can also form part of the data needed to pilot the automatic irrigation system.
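The authors implement this scheduled execution in Node.js against their own services, whose interfaces are not given here. The following Python sketch only mirrors the general pattern, with hypothetical endpoint URLs and a placeholder standing in for the compiled MATLAB model.

```python
import time
import requests  # generic HTTP client; any equivalent would do

FORECAST_URL = "https://api.example.com/forecast"   # hypothetical endpoint
DB_URL = "https://api.example.com/predictions"      # hypothetical endpoint

def run_model(forecast: dict, field_data: dict) -> float:
    """Placeholder for invoking the compiled MATLAB irrigation model."""
    return 0.0  # mm of irrigation water suggested

def scheduled_job(field_data: dict) -> None:
    forecast = requests.get(FORECAST_URL, timeout=30).json()
    prediction = run_model(forecast, field_data)
    # Store the new prediction; earlier predictions are kept, matching the
    # paper's note that previous results are preserved.
    requests.post(DB_URL, json={"ts": time.time(), "mm": prediction}, timeout=30)

if __name__ == "__main__":
    while True:
        scheduled_job({"parcel": "east"})
        time.sleep(6 * 3600)  # model-dependent cadence: daily to every 6 h
```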
In the mobile web application (Figure 7), admin users can also examine the historical prediction data of a specific model and manage the association of the model with compatible parcels of the consortium. Finally, a panel in the app dashboard quickly summarizes the total amount of water used since the beginning of the irrigation season when the predictive irrigation model is used instead of the classic daily constant rate. Predictive Modeling The database was constructed with data collected from the weather station and from the sensors (commercial ground sensors and the open source soil moisture LoRa device) distributed in the Tenna consortium from 19 June to 3 September 2019 (77 days). The commercial ground sensors measured soil temperature and moisture at depths of 20, 30, 40, 50, 60, 70, 80, and 90 cm (Figure 8). Meanwhile, the open source device measured soil temperature at 30 cm and soil moisture at 30, 60, and 90 cm. Additionally, the irrigation water input, estimated indirectly from the pump's consumption in kWh, was added to these data. The consortium's pump consumed 900 kWh, equivalent to 5.25 mm m−2. Table 4 shows the parameters related to the soil structure at different depths. The optimum moisture (%) for the crop (at the different depths) must lie between the values of the available water capacity (AWC) and those of the field capacity (FC). Above the FC values, the water is dispersed into the subsoil. The objective of the models is to suggest the correct water supply to keep the crop always in the optimum range without wasting water-soluble fertilizers and/or energy. In the first phase, the dynamics of the water in the soil at the different layers were studied. To observe the correlation between the input water (rain + irrigation) and the soil moisture at different depths (i.e., 30, 60, and 90 cm), a cross-correlation analysis (Davis, 1986) was carried out on the two columns of daily sampled temporal data. The correlation values at different time (day) lags were calculated, together with the p values indicating the significance of the correlation. The cross-correlation analysis was carried out with the software PAST (version 2.17) [15].
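The analysis itself was run in PAST; as a rough, self-contained illustration of the same computation (not the authors' code), lagged Pearson correlations with p values can be obtained as below, here on synthetic daily series.

```python
import numpy as np
from scipy.stats import pearsonr

def lagged_correlation(water_in, moisture, max_lag=5):
    """Correlate daily water input with soil moisture shifted `lag` days later,
    returning (lag, r, p) tuples, analogous to PAST's cross-correlation."""
    results = []
    for lag in range(max_lag + 1):
        y = moisture[lag:]
        x = water_in[: len(y)]
        r, p = pearsonr(x, y)
        results.append((lag, r, p))
    return results

rng = np.random.default_rng(0)
water = rng.gamma(2.0, 3.0, size=77)  # 77 daily inputs (rain + irrigation), synthetic
# Synthetic 30 cm moisture responding roughly one day after the water input:
moist30 = 0.6 * np.roll(water, 1) + rng.normal(0.0, 1.0, 77)
for lag, r, p in lagged_correlation(water, moist30):
    print(f"lag {lag}: r={r:+.3f}, p={p:.2e}")
```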
The precision irrigation model was based on the TimeLag/TimeSeries concept [16]. TimeLag represents the time elapsed between the water input and the soil moisture response: the soil moisture data were shifted i days back and analyzed with the aforementioned cross-correlation analysis. Furthermore, the possibility was considered that the event could be related to the variables of specific adjacent (n) days (TimeSeries). Consequently, the possibility of combining the TimeSeries variables was considered in order to account for variable weights different from the initial condition. The input block (X-block) in the training phase was constituted by the parameters (daily mean, minimum, and maximum) collected by the weather station (i.e., air humidity (%); air temperature (maximum, minimum, and mean; °C); wind direction; mean wind speed (km·h−1); rainfall height (H2O rain, mm)) and by the parameters measured by the ground sensor (i.e., daily mean soil moisture at the different depths of 30, 60, and 90 cm). The Y-block was constituted by the irrigation water input (mm). The partial least squares (PLS) procedure was elaborated using the PLS Toolbox in MATLAB V7.0 R14 (The MathWorks, Natick, MA, USA) and included the following steps: (i) the extraction of the dataset (X-block variables); (ii) the creation of a measured-values dataset to be used as the reference or response variable (Y-variable); (iii) the data fusion of the two datasets (Y- and X-block) into one analysis dataset (AD); (iv) the partitioning of the analysis dataset into the model set (MS = 80% of AD) and the external validation test set (TS = 20% of AD) by means of the sample set partitioning based on joint x-y distances (SPXY) algorithm, a partitioning method that considers the variability in both the x- and y-spaces; (v) the application of different preprocessing algorithms to the X- and Y-blocks (none, log 1/R, diff1, mean center, autoscale, median center, baseline); the matrices were ultimately preprocessed using the autoscale MATLAB algorithm; (vi) the application of the chemometric technique (modeling and testing); and (vii) the calculation of the efficiency parameters of prediction. Partial least squares includes internal cross-validation of the model set, and we introduced a further validation using the test set. The performance of the model was estimated by evaluating the coefficient of correlation (r) between observed and predicted values, the standard error of prediction (SEP), the root-mean-square error of calibration (RMSEC), and the bias, calculated as the average of the differences between predicted and measured values. The residual predictive deviation (RPD), defined as the ratio of the standard deviation of the laboratory-measured (reference) data to the RMSE, was used to verify the accuracy of the model. The model accuracy and precision were evaluated according to the highest r, minimum SEP, maximum RPD, and a bias value very close to zero [17]. After the training phase, the model that proved most efficient and robust was adopted (application phase) for the in-field application and inserted into the web framework for scheduled predictions. The model predicts the irrigation water needs for the same day as the interrogation (T0), for the day after the interrogation (T1), and for the second and third days after the interrogation (T2, T3). The prediction model replaced the soil moisture at the different depths (30, 60, 90 cm) with the field capacity at the same depths (Figure 8). The open source weather forecast was implemented at TL > 0 using the Open Weather Map service.
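The modeling was done with the PLS Toolbox in MATLAB, whose code is not reproduced here. The sketch below is a simplified scikit-learn analogue on synthetic data: it fits a PLS model with eight latent vectors (as reported for the final model) and computes r, RMSE, and RPD; the SPXY partitioning is replaced by a plain random 80/20 split for brevity.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 77
X = rng.normal(size=(n, 12))  # weather + lagged soil-moisture features (synthetic)
y = X[:, :3] @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.5, n)  # irrigation (mm)

# 80/20 split (the paper uses the SPXY algorithm; a random split is a simplification).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# PLSRegression autoscales X and y internally (scale=True), mirroring the
# paper's autoscale preprocessing; eight latent vectors as reported.
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)

y_hat = pls.predict(X_te).ravel()
rmse = float(np.sqrt(np.mean((y_te - y_hat) ** 2)))
r = float(np.corrcoef(y_te, y_hat)[0, 1])
rpd = float(np.std(y_te) / rmse)  # residual predictive deviation
print(f"r={r:.2f}  RMSEP={rmse:.2f}  RPD={rpd:.2f}")
```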
A scheme of the application phase at different timings is shown in Figure 8. Results and Discussion The results of the cross-correlation analysis for Tenna show a highly significant correlation between the input water (rain + irrigation) and the soil moisture at 30 cm at a time (day) lag of 1 (r = 0.55845; p = 1.31 × 10−7). Additionally, at 60 cm, a highly significant correlation was observed at a time (day) lag of 2 (r = 0.46996; p = 1.84 × 10−5). At 90 cm, no significant correlations were observed. The results show that soil moisture increases one day after the water input on the surface at 30 cm depth, and after two days at 60 cm. At 90 cm, stochastic influences may dominate. In Table 5, the cross-correlation and p values at the different soil depths (30, 60, and 90 cm) and time (day) lags from 0 to 5 are reported. In the next phase, to build the model and predict the irrigation water input, the soil moisture at 30 cm was used at time lag −1 (the day before), and the soil moisture at 60 and 90 cm was used at time lag −2 (two days before). The best-performing and most robust model turned out to be time series 2 (two days used), characterized by eight latent vectors and autoscale preprocessing. The performances of the model are listed in Table 6. The model showed good performance, having low errors (RMSEC, RMSECV, SEP) and high r values (0.86 in the model and 0.91 in the internal test). The model appeared only moderately robust (low RPD values) owing to the limited number of input records. Under operative field conditions, the consortium, in the dry season (from the end of April to the end of September), irrigates each parcel every day for 1 h (equivalent to 5.25 mm of H2O per square meter), rainy days excluded. Considering the 78 days of the dry season (from 19 June to 4 September 2019), the Consorzio di Miglioramento Fondiario di Tenna consumed a total of 286 mm of water, equivalent to 49,500 kWh. The model, applied a posteriori to the same period, reported a total saving of 255 mm, equivalent to 44,000 kWh, with the model requiring only 10.82% of the conventional input. Figure 9 shows the daily irrigation comparison between the PLS-model-predicted (proposed) and the real irrigation input water (A), and the water-saving comparison (B). It can be observed that the proposed model significantly reduces the irrigation water input.
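As a quick sanity check on the figures above (using only the totals reported in the text), the arithmetic below shows that 10.82% corresponds to the fraction of the conventional water volume that the model would actually have applied, i.e., a saving of roughly 89%.

```python
conventional_mm, saved_mm = 286.0, 255.0      # totals reported in the text
conventional_kwh, saved_kwh = 49_500.0, 44_000.0

used_mm = conventional_mm - saved_mm
print(f"water applied by model: {used_mm:.0f} mm "
      f"({100 * used_mm / conventional_mm:.2f}% of conventional)")  # ~10.84%
print(f"water saved:  {100 * saved_mm / conventional_mm:.2f}%")     # ~89.16%
print(f"energy saved: {100 * saved_kwh / conventional_kwh:.2f}%")   # ~88.89%
```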
During the experiment, soil moisture increased one day after the water input at 30 cm depth, and after two days at 60 cm; this may be attributed to the redistribution of water along the soil profile. At 90 cm, no significant effect was observed. Considering that the two varieties of apple trees are grafted on rootstock M9, which limits the root depth to 40 cm, the water supplies can be considered adequate to the needs of the soil-plant system. In other research studies as well, sensors situated at 30 cm depth responded rapidly to irrigation, with a clear amplitude response between the minimum pre-irrigation values and the maximum values at the end of irrigation [9]. In addition, the study by Paris et al. [18] reports a strong relationship between soil moisture profiles and the polydispersity index, indicating efficient irrigation scheduling when soil moisture is monitored at 30 and 40 cm of depth. The rapid response of the system to changes in soil water status avoided water stress during the more sensitive phenological phases of the apple trees. Irrigation strategies based on the controlled depletion of soil available water make it possible to obtain efficient results in terms of watering depth and the number of irrigations per season, reducing losses by deep percolation [12]. Finally, the forecasting model, applied over 78 days of the dry season, reported a total saving of 255 mm, equivalent to 44,000 kWh. The use of the water balance method allows a rapid response to input or output changes, adapting the amount of water to the soil-plant system's request. The soil water content sensors provided feedback with adequate precision [11,19]. Other authors compared the automated scheduling of two independent irrigation plots, differentiated by apple tree size (cv. Golden Reinders), with two control plots of the same type of plants, scheduled manually following a classical water balance. In the plot with automated irrigation scheduling, based on a sensor survey and cultivated with smaller plants, a 24% reduction in water consumption was observed [20]. To date, very few contributions document variable rate water management strategies in Italy. Among these, Ortuani et al. [21] experimented with variable rate (VR) drip irrigation in northern Italy, achieving a reduction in water input of 18% in comparison to the farmer's conventional irrigation system. This also turned out to be effective in terms of final product quality, producing more homogeneous grape maturation with the same yield. Although Domínguez-Niño et al. [20] developed an automated irrigation system to manage an apple orchard, their system does not include a remote mobile app to monitor the orchard's heterogeneity. Agricultural digitalization provides tools that support decision making, which is the core of a DSS. One of the contributions of the present work, along with the advanced modeling, is data collection and management. Indeed, the developed system collects the data and sends them to the cloud through an IoT structure for the remote management of the orchard, lowering energy consumption.
One of the main advantages of this smart system is that, in addition to real-time monitoring, the historical data are always updated and visible, making it possible to inform watering routines and modify watering schedules in order to improve efficiency. Conclusions The emerging autonomous irrigation systems, based on a variety of technologies, aim to streamline orchard management while reducing cost and resource use, with a positive impact on the environment. The use of IoT technology is becoming a popular choice since it enables farmers to make decisions based on ready-to-use information for pinpointed irrigation choices, helping them to optimize crop yield. Unfortunately, many of these systems function rigidly, integrate poorly across platforms, or have prohibitive costs. In this scenario, the present work combines several strengths: low-cost but reliable electronic parts (which are easily replaceable); a well-known, convenient, long-range, low-energy-consumption connection protocol (LoRaWAN); a flexible design with the possibility of integrating additional functions and data from additional sensors; a user-friendly dashboard; and innovative, highly performing self-adaptive algorithms. The results are encouraging in relation to the saving percentages found in the literature, as discussed above. Furthermore, the economic advantage of using this system opens up excellent opportunities for expanding the research field, making it generalizable to other contexts as well. Future studies, exploiting the system's capacity for integration, could point to the development, through the introduction of additional sensors, of precision fertigation strategies for improved apple orchard productivity.
2021-04-27T05:14:39.591Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "1bdcac05975af5649f02d3fa8933eb972ba65c69", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/21/8/2656/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1bdcac05975af5649f02d3fa8933eb972ba65c69", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
98566099
pes2o/s2orc
v3-fos-license
OPTIMIZATION OF THE PRECIPITATION OF CLAVULANIC ACID FROM FERMENTED BROTH USING T-OCTYLAMINE AS INTERMEDIATE This work describes the use of clavulanic acid (CA) precipitation as the final step in the process of purifying CA from fermentation broth, as an alternative to the methods traditionally employed. The purpose of this study was to use a stable intermediate (t-octylamine) in the conversion of CA to its salt form (potassium clavulanate), thereby enabling the resulting intermediate (the amine salt of clavulanic acid) to improve the purification process and maintain the stability of the resulting potassium clavulanate. To this end, response surface methodology was employed to optimize the precipitation step. For the first reaction, five temperatures (6.6 to 23.4 °C), concentrations of clavulanic acid in organic solvent (6.6 to 23.4 mg/mL) and t-octylamine inflow rates (0.33 to 1.17 drop/min) were selected based on a central composite rotatable design (CCRD). For the second reaction, five temperatures (11.6 to 28.4 °C), concentrations of clavulanic acid amine salt in organic solvent (8.2 to 41.8 mg/mL) and concentrations of potassium 2-ethylhexanoate (0.2 to 1.2 molar) were also selected using a CCRD. From these results, precipitation conditions were selected and applied to the purification of CA from the fermentation broth, obtaining a yield of 72.37%. INTRODUCTION Clavulanic acid (CA) is a potent inhibitor of certain β-lactamases, and its combined use with β-lactam antibiotics has enabled the cure of a variety of infections resistant to conventional therapies without requiring medications with strong side effects. Clavulanic acid is a highly unstable and hygroscopic oil whose use in drugs is only feasible in its salt form, specifically as potassium clavulanate, which is more stable. The purification of clavulanic acid (CA) from fermentation broth involves a series of steps, such as filtration and centrifugation to separate the cells, as well as extraction techniques that are sometimes followed by an adsorption step for further purification of the antibiotic (Mayer et al., 1996). Although the literature describes different ways of purifying clavulanic acid, its high instability in aqueous solutions always reduces its yield during the steps involved in the purification process (Bersanetti et al., 2005). Therefore, to obtain a higher final yield of clavulanic acid salt, several patents describe clavulanic acid purification processes that include chemical reactions such as esterification and precipitation. The patents filed by Kim et al. (1995), Cook and Wilkins (1995) and Cardoso (1998) describe CA precipitation with potassium (or sodium) 2-ethylhexanoate as one of the steps in the purification of CA from the fermentation medium. An extraction step is performed with organic solvent, and the concentrated organic phase is subjected to a precipitation reaction in which CA reacts with potassium (or sodium) 2-ethylhexanoate to form the CA salt (potassium or sodium clavulanate).
In a previous work, we studied the precipitation of clavulanic acid with potassium 2-ethylhexanoate and demonstrated that different combinations of CA and potassium 2-ethylhexanoate concentrations affect the yield, purification and stability of the resulting salt (Hirata et al., 2009). We also found that some of the conditions tested resulted in the formation of oil rather than CA salt, indicating that the direct precipitation of CA with potassium 2-ethylhexanoate only occurred satisfactorily within a limited range, making this technique difficult to employ on an industrial scale. The preliminary conversion of CA into esters or amines (called stable intermediates) for subsequent conversion into potassium or sodium clavulanate can be employed to reduce the instability of CA, thereby increasing the yield of clavulanate salt. However, some amines are unsuitable for the manufacture of clavulanate salt or for use as intermediates because the resulting amine salts are toxic or hygroscopic. Moreover, the dissolution of certain salts for subsequent reactions requires large amounts of solvents. Thus, in addition to their economic unfeasibility, they may contaminate industrial effluents and intoxicate workers (Kim et al., 1995). Therefore, it is extremely important to choose the correct amine to produce the stable intermediate. To increase the stability of CA during purification, and thus increase the range in which the precipitation reaction occurs successfully and with a high yield, CA was subjected to a preliminary reaction with t-octylamine to form the stable intermediate. This amine was selected because it does not form toxic salts and because it renders CA more stable during handling, thereby reducing its degradation (Yang et al., 1994). Considering that the performance of the precipitation reaction is directly influenced by the concentration of reagents, the temperature and the rate at which the reagents are added, a response surface model was developed to optimize the conditions employed to precipitate potassium clavulanate via a stable intermediate. The best values found in the factorial design were then applied to the purification of CA from the fermentation broth. Source of CA The experiments were carried out using two distinct sources of CA: potassium clavulanate from the pharmaceutical product Clavulin® produced by SmithKline Beecham, consisting of 625 mg of amoxicillin and 125 mg of potassium clavulanate; and CA produced by fermentation with Streptomyces clavuligerus ATCC 27064. The culture medium had the following composition (in g L−1 of distilled water): glycerol, 10.0; soybean flour, 20.0; soybean oil, 23.0; K2HPO4, 1.2; MnCl2·4H2O, 0.001; FeSO4·7H2O, 0.001; ZnSO4·7H2O, 0.001; pH 6.8 (Ortiz et al., 2007). The fermentation was carried out in a Bioflo III model bioreactor (New Brunswick Sci.), with 3.6 L of production medium, making up 4 L of fermentation broth. The cultivation was conducted batchwise at 28 °C, 800 rpm and 0.5 vvm, and the pH was automatically controlled at 6.8 ± 0.1 by adding 2 M HCl or 2 M NaOH solution. The dissolved oxygen concentration was monitored by a sterilized galvanic electrode (Mettler-Toledo InPro6000 Series) (Ortiz et al., 2007; Teodoro et al., 2010). Analytical Methods The concentration of CA in the fermentation broth was determined by high-performance liquid
chromatography (HPLC), as described by Foulstone and Reading (1982), via the imidazole reaction. The samples were analyzed using an HPLC system equipped with a photodiode array detector (Waters 996 PDA) and a 3.9 x 300 mm C18 μ-Bondapak analytical column. The HPLC device was operated at 28 °C and a flow rate of 2.5 mL/min, and standard solutions were prepared from the pharmaceutical product Clavulin®. The NMR data were recorded on a Bruker ARX-400 9.4 T spectrometer operating at 400.35 MHz for 1H and at 100.10 MHz for 13C. All the NMR data were obtained at 25 °C, using tetramethylsilane (TMS) as internal reference and deuterium oxide as solvent. The morphology of the crystals of potassium clavulanate was analyzed by scanning electron microscopy (SEM) (Philips XL30 FEG-SEM), an ISIS microanalysis system (Oxford Instruments) and BSE (backscattered electrons). The BSE system forms the image from differences in atomic weight, while the common system (SE, secondary electrons) shows the topographic image of the sample. The advantage of the BSE system is that it offers a better view of crystals containing atoms heavier than carbon, especially when these crystals are very small and fine, like those obtained for potassium clavulanate. The crystals with the larger atoms exhibited a shimmering white color, facilitating their identification in the sample. This method is suitable for potassium clavulanate due to the presence of a potassium atom in its molecule. Precipitation Procedure The precipitation of potassium clavulanate by passing through a stable intermediate involves two reactions. In the first precipitation reaction, CA reacts with t-octylamine to form the amine salt of CA (the stable intermediate). In the second reaction, this stable intermediate reacts with potassium 2-ethylhexanoate to precipitate potassium clavulanate. Figure 1 illustrates the precipitation step to potassium clavulanate through these two reactions. A separate experimental design was devised for each reaction in order to better evaluate the variables involved in each one. The experimental designs were devised with Clavulin®. The CA in the product Clavulin® was chosen for use in the factorial design assays to ensure the reproducibility of results, since it shows a standard behavior in terms of CA composition and amount. This behavior cannot always be ensured when working with different fermented broths. The conditions that yielded good values in the experiments with Clavulin® were then applied to the CA from the fermentation broth. For the first precipitation reaction, the concentrated solution of clavulanic acid in ethyl acetate was transferred to a jacketed glass reactor and kept at a specific temperature in a thermostatic bath under continuous stirring (at 250 rpm) by a propeller stirrer connected to a speed control. t-Octylamine was added dropwise and the solution was stirred for one hour. An induction time of 1.5 hours is defined in the Cook and Wilkins patent (1995); however, preliminary assays indicated that 1 hour was sufficient to obtain the highest yield of the amine salt of clavulanic acid (CA). The precipitated product was filtered and washed with acetone. The resulting crystals were vacuum-dried for 24 h at room temperature and then weighed. The yield of crystals (Y) was calculated by Equation (1): Y (%) = 100 × (m_exp / m_t) (1), where m_exp is the mass obtained experimentally and m_t is the theoretical mass for 100% conversion.
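As an illustration of Equation (1) only (not taken from the paper), the snippet below computes a theoretical mass for 100% conversion from approximate molar masses — clavulanic acid ≈ 199.16 g/mol and its t-octylamine salt ≈ 328.41 g/mol are standard values assumed from general chemistry references — and then the percentage yield.

```python
M_CA = 199.16          # g/mol, clavulanic acid (assumed standard value)
M_AMINE_SALT = 328.41  # g/mol, CA t-octylamine salt (199.16 + 129.25, assumed)

def percent_yield(m_exp_g: float, m_theoretical_g: float) -> float:
    """Equation (1): Y(%) = 100 * (m_exp / m_t)."""
    return 100.0 * m_exp_g / m_theoretical_g

# Example: starting from 1.00 g of CA, complete conversion to the amine salt
# would give m_t = 1.00 * (328.41 / 199.16) = 1.649 g of salt.
m_t = 1.00 * M_AMINE_SALT / M_CA
print(f"theoretical salt mass: {m_t:.3f} g")
print(f"yield if 1.50 g recovered: {percent_yield(1.50, m_t):.2f}%")
```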
In the second reaction, the CA amine salt was dissolved in 10 mL of isopropanol and 0.12 mL of water. The crystals of the CA amine salt must be dissolved in isopropanol to begin the second reaction. However, the salt crystals do not dissolve readily, thus requiring the addition of water. The volume of 10 mL was selected based on preliminary assays performed with different volumes to determine the most suitable volume for use in the jacketed reactor. The 0.12 mL of water used here corresponded to the minimum quantity that would allow the total dissolution, in isopropanol, of the highest concentration of the amine salt of CA defined by the CCRD. The amount of added water should be minimal so as not to interfere in the precipitation reaction for the formation of potassium clavulanate. This solution was transferred to a jacketed glass reactor similar to the one used for the first reaction (with controlled temperature and stirring speed). Potassium 2-ethylhexanoate was added dropwise at a constant rate of 0.75 drop/min. The rate at which the reagent is added is undoubtedly one of the factors that affect the formation of precipitates (Söhnel and Garside, 1992). The rate of 0.75 drop/min was chosen based on a preliminary study of the direct precipitation reaction of CA with 2-ethylhexanoate, for which a fractional factorial design was used in which the addition flow rates were variables to be studied. This preliminary study indicated that the best flow rate was 0.75 drop/min. In the factorial design used here, after the addition of the 2-ethylhexanoate salt was completed, the stock solution was left at the same temperature and under agitation for 45 minutes. This was the period corresponding to crystal growth, called the induction time, determined from preliminary assays. The potassium clavulanate precipitate was filtered and washed with isopropanol and acetone. The resulting crystals were vacuum-dried for 24 h at room temperature and then weighed. The crystal yield (Y) was also calculated by Equation (1). For the CA fermentation broth, the 3 kDa membrane permeate was extracted with ethyl acetate at pH 2.0 using sulfuric acid. The experimental procedure of the precipitation step was the same as that employed in the Clavulin® experiments. Experimental Design For the first reaction, five temperatures (6.6 to 23.4 °C), concentrations of clavulanic acid in organic solvent (6.6 to 23.4 mg/mL) and t-octylamine inflow rates (0.33 to 1.17 drop/min) were selected based on a central composite rotatable design (CCRD), as shown in Table 1 (Rodrigues and Iemma, 2005). The central point of the CCRD was performed in triplicate to estimate the error due to random experimental variability. Because of the high rate of CA degradation at temperatures above 35 °C (Bersanetti et al., 2005), temperatures were kept below 30 °C, thus limiting the temperature interval studied here. For the second reaction, five temperatures (11.6 to 28.4 °C), concentrations of clavulanic acid amine salt in organic solvent (8.2 to 41.8 mg/mL) and concentrations of potassium 2-ethylhexanoate (0.2 to 1.2 molar) were also selected using a CCRD, as shown in Table 2. The central point of the CCRD was performed in triplicate. The results were analyzed using the Statistica version 5.1 software package (StatSoft).
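For readers who want to reproduce the design matrix, a three-factor CCRD (eight factorial points at ±1, six axial points at ±1.68, and triplicate center points, giving the 17 runs implied by the text) can be generated and decoded into actual units as sketched below. The centers and per-unit steps are back-calculated from the ranges quoted above for the first reaction and are consistent with the levels cited later (e.g., level −1 = 10 mg/mL and level −1.68 = 6.6 mg/mL).

```python
import itertools
import numpy as np

def ccrd_3factors(alpha: float = 1.68, n_center: int = 3) -> np.ndarray:
    """Coded design matrix for a three-factor central composite rotatable
    design: 2^3 factorial points, 6 axial points at +/-alpha, center runs."""
    factorial = [list(p) for p in itertools.product([-1.0, 1.0], repeat=3)]
    axial = []
    for i in range(3):
        for s in (-alpha, alpha):
            pt = [0.0, 0.0, 0.0]
            pt[i] = s
            axial.append(pt)
    center = [[0.0, 0.0, 0.0] for _ in range(n_center)]
    return np.array(factorial + axial + center)

# Decode coded levels into actual units (first reaction): centers and steps
# back-calculated from 6.6-23.4 degC, 6.6-23.4 mg/mL, 0.33-1.17 drop/min.
centers = np.array([15.0, 15.0, 0.75])  # temperature, CA conc., inflow rate
steps = np.array([5.0, 5.0, 0.25])      # change in actual units per coded unit
design = ccrd_3factors()                # 8 + 6 + 3 = 17 runs
for coded, actual in zip(design, centers + design * steps):
    print(coded, "->", np.round(actual, 2))
```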
Statistical Analyses The results of the experiments were analyzed using Statistical Analysis System (SAS, 1996) software. A general second-order polynomial was used to correlate the yield of the precipitation reactions with the process variables. The experimental factors and levels are also shown in Tables 1 and 2. Regression Models for Responses Table 3 lists the values of the first experimental design used for the first reaction and the responses obtained for the yield of CA amine salt. Equation (2) expresses the model, in coded values, that represents the yield (Y) of clavulanic acid amine salt, built using only the statistically significant parameters (p < 0.1). The terms that were not statistically significant were incorporated into the lack-of-fit to calculate the R-squared value. The coefficient of determination of 0.95 was considered excellent for this type of process. Table 4 shows the ANOVA results for the CA amine salt yield, considering only the statistically significant terms. Based on the F-test, the model is predictive (p < 0.0001), with a calculated F value far higher than the tabulated F (2.51). Figure 2 shows that the experimental points were distributed around the diagonal, indicating the model's excellent performance. The variation of the relative deviations is small (less than 10%), indicating the good fit of the model to the experimental points. Therefore, the coded model expressed by Equation (2) was used to generate the response surfaces (Figure 3) for the yield of clavulanic acid amine salt. An analysis of the response surfaces and contour curves (Figure 3) indicated that the region of the highest yield of clavulanic acid salt is associated with the low CA concentrations studied here. The maximum values obtained for Y (%) were 94.49% and 93.57%, which corresponded to levels −1 and −1.68, i.e., concentrations of 10 and 6.6 mg/mL, respectively. Although the inflow rate of t-octylamine proved statistically significant, its effect on yield was almost negligible compared with the effect of the CA concentration. The t-octylamine inflow rate had a more marked effect when associated with temperature: in the lowest range of temperatures (6.6 to 10 °C), the highest CA salt yield was obtained with an inflow at coded levels between 1 and −1.68 (1.0 to 0.33 drop/min) (Figure 3(b)). At high temperatures, a higher t-octylamine inflow rate (0.75 to 1.17 drop/min) was found to produce better results. The factorial design applied here successfully optimized the first reaction, resulting in higher CA amine salt yields. Moreover, no oil was formed in any of the experiments, demonstrating that the passage through the stable intermediate (the amine salt) really increased the stability of the potassium clavulanate precipitation reaction in the range of conditions studied here.
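Since the coefficients of Equation (2) are not reproduced in this text, the sketch below illustrates only the generic procedure — fitting a full second-order polynomial in the coded variables by least squares — on a synthetic response; it is not the authors' model.

```python
import itertools
import numpy as np

# Rebuild the coded CCRD (8 factorial, 6 axial at +/-1.68, 3 center runs).
alpha = 1.68
pts = [list(p) for p in itertools.product([-1.0, 1.0], repeat=3)]
for i in range(3):
    for s in (-alpha, alpha):
        p = [0.0, 0.0, 0.0]
        p[i] = s
        pts.append(p)
pts += [[0.0, 0.0, 0.0]] * 3
X = np.array(pts)

def quadratic_design_matrix(X: np.ndarray) -> np.ndarray:
    """Columns: intercept, linear, quadratic, and two-way interaction terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2, x1*x2, x1*x3, x2*x3])

rng = np.random.default_rng(2)
# Synthetic yield: higher at low x2 (CA concentration), mimicking the trend above.
y = 90.0 - 4.0 * X[:, 1] - 2.0 * X[:, 1] ** 2 + rng.normal(0, 1, len(X))

coef, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
for name, c in zip(["b0", "x1", "x2", "x3", "x1^2", "x2^2", "x3^2",
                    "x1x2", "x1x3", "x2x3"], coef):
    print(f"{name:>5s}: {c:+.2f}")
```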
In all these experiments, nucleation only started after the addition of at least half the amount of amine required for a theoretical conversion of 100%. This may render the reaction stable, because it indicates that the initial supersaturation in this system was lower than in the reaction performed without passing through the stable intermediate, in which nucleation most often occurred in response to the addition of the first drop of potassium 2-ethylhexanoate (Hirata et al., 2009). Thus, it is believed that the introduction of t-octylamine as an intermediate widened the metastable zone of supersaturation in comparison to the direct reaction of CA with potassium 2-ethylhexanoate. This increase enabled the precipitation reaction to occur satisfactorily over a wider operating range, which can be considered an advantage in terms of industrial processing. Furthermore, the CA amine salt formed was highly stable and non-hygroscopic, showing no change in its crystals even when left at room temperature for more than a month without controlled air humidity. The photomicrographs in Figure 4 were obtained by SEM (scanning electron microscopy), using 800X magnification. In Figure 4, note that the crystals of CA amine salt obtained in experiment 11 of the factorial design (CCRD) are well formed, with no sign of agglomeration. Table 5 lists the coded values of the second experimental design and the actual values (in parentheses) used for the second reaction, as well as the responses obtained for potassium clavulanate. The regression coefficients of a second-order model using the coded variables were obtained from the results of the potassium clavulanate yield. The parameters with p < 0.1 were considered significant for the precipitation reaction under study. The statistically non-significant terms were incorporated into the lack-of-fit to calculate the R-squared value. The value of the F-test obtained for the regression (24.71) was highly significant (p < 0.00001). Based on the ANOVA (Table 6), a second-order model (Equation (3)) was obtained to describe the potassium clavulanate yield using the coded variables. The variation in relative deviations was lower than 5%, demonstrating that the differences between the experimental responses and those predicted by the model were minimal, indicating the model's excellent fit. An analysis of the response surfaces and contour curves generated by the model (Eq. (3)) indicated that high concentrations of CA amine salt increase the yield of potassium clavulanate (Figure 6(a)). The maximum potassium clavulanate yields were 94.72, 94.25 and 92.55%, which corresponded to levels 0, 1 and 1.68, i.e., concentrations of 25, 35 and 41.8 mg/mL, respectively. A higher yield is associated with a range of temperatures between 15 and 25 °C, i.e., about 20 °C. For a high concentration of CA amine salt, a higher concentration of potassium 2-ethylhexanoate is desirable. The maximum yield (94.72%) was obtained in experiment 14, in which the concentration of 2-ethylhexanoate was the highest (1.2 molar) (Figure 6(c)).
Temperatures below 15 °C combined with low concentrations of amine salt (less than 15 mg/mL) decreased the yield. An analysis of Figure 6(b) indicated that the yield of the precipitation reaction was favored in a temperature range of 15-25 °C, reaching 91.99 and 94.72% at the most extreme levels of 2-ethylhexanoate (−1.68 and 1.68 for runs 13 and 14, respectively), which represent the lowest and highest concentrations of potassium 2-ethylhexanoate (0.2 and 1.2 molar, respectively). From an analysis of the response surfaces, it was found that the best temperature for the precipitation reaction under study was about 20 °C. This temperature was maintained throughout the addition of potassium 2-ethylhexanoate (whose duration varied in each experiment owing to the variation of the initial concentration of CA amine salt) and also during the induction time of 45 minutes after completing the addition of potassium 2-ethylhexanoate. This information provided by the factorial design is very important because it means less energy spent (industrially) in maintaining a high final yield of potassium clavulanate. In their patent, Cook et al. (1984) stated that, for a similar reaction to obtain potassium clavulanate, after the addition of potassium 2-ethylhexanoate, the temperature of the stock solution was kept at 5 °C for 1.5 h, resulting in a yield of 91.72%. Our experiment 14, which was performed at 20 °C with an induction time of only 45 minutes and held at that temperature throughout the experiment, yielded 94.72% of potassium clavulanate, which was also the highest yield observed in the factorial design. All the potassium clavulanate precipitation reactions resulted in a slightly yellow solid crystalline salt. Figure 7 shows the SEM photomicrographs of the potassium clavulanate precipitate obtained in experiment 17 (central point). In this figure, note that the potassium clavulanate crystals obtained in experiment 17 are very small, thin and needle-shaped. Highly supersaturated solutions usually nucleate rapidly, producing numerous small crystals and resulting in the formation of needles or platelets. This was the case for the precipitation reactions in this study. The experimental design also allowed for the optimization of the second reaction, resulting in higher potassium clavulanate yields. As in the first reaction, no oil was formed in any of the experiments, demonstrating that this reaction is more stable than the direct precipitation of potassium clavulanate (Hirata et al., 2009). This stability can be attributed to the passage through the stable intermediate, the t-octylamine salt. Nucleation in the second reaction did not occur in response to the addition of the first few drops of potassium 2-ethylhexanoate, as was the case in the reaction without passing through the stable intermediate (Hirata et al., 2009), but only 10 to 30 minutes after this addition, depending on the initial concentration of CA amine salt and the concentration of potassium 2-ethylhexanoate. This indicates that the solution had a lower initial supersaturation than that studied by Hirata et al. (2009), which favored the formation of potassium clavulanate. It is known that the problems commonly found in precipitation reactions (agglomeration, formation of colloidal solutions and crystal incrustations) are due to the very high initial supersaturation of these solutions (Mullin, 1993).
Fermentation Broth After optimizing the first and second precipitation reactions of potassium clavulanate through the factorial designs, one experiment of each reaction was performed using the fermentation broth. The same conditions as those used in experiment 11 for the first reaction were chosen, because that experiment presented one of the best responses for CA amine salt yield (93.57%) and the lowest requirement of CA (6.6 mg/mL). Moreover, fermentation broths that produce a high concentration of CA are not easily obtained, and the literature shows that many studies have focused on increasing its production. Therefore, a precipitation reaction that can produce high yields with a low initial concentration of CA is very advantageous, justifying the choice of this experiment to reproduce the conditions used in the factorial design for the fermentation broth. The yield of CA amine salt obtained in this step was 80.21%. It is believed that other substances also present in the fermentation broth, such as pigments, may have interfered slightly with the precipitation reaction of the CA amine salt, but without precipitating along with it. The 1H signals that indicate the presence of the CA amine salt are described in Hirata et al. (2007) and were compared here with the signals observed in the 1H-NMR spectrum (Figure 8), indicating that practically no substance other than the amine salt of clavulanic acid was present. Therefore, this intermediate salt may be used for the preparation of highly pure, non-toxic clavulanic acid and its pharmaceutically acceptable salts. This demonstrates that the reaction of CA with t-octylamine is even more selective than the direct reaction of CA with potassium 2-ethylhexanoate (Hirata et al., 2009), promoting high purification of CA from the fermentation broth through the formation of the CA amine salt. The CA amine salt obtained in the first reaction from fermentation broth was used in the second reaction. The conditions used in the second reaction were chosen based on stability tests (not shown here), in which experiment 17 presented the best responses for the stability of the resulting crystals. The yield obtained in this step was 72.37%. Figure 9 shows the 1H-NMR spectrum of the precipitate (potassium clavulanate) formed in the second reaction from fermentation broth. An analysis of the 1H-NMR spectrum (Figure 9) confirmed the high purity of the precipitated potassium clavulanate. The signals that indicate the presence of potassium clavulanate are described in Hirata et al. (2009) and were compared here with the signals observed in the 1H-NMR spectrum (Figure 9). No signal of aminoketone (one of its degradation products) (Finn et al., 1984) was detected, thus demonstrating that this reaction offers the advantage of yielding potassium clavulanate with no subsequent degradation, unlike other CA purification processes in which solvent or water is evaporated, leading to marked degradation of CA and therefore low yields. The photomicrographs of potassium clavulanate crystals precipitated from the fermentation broth (second reaction) were obtained using the backscattered electron system with 1500X and 10000X magnifications (Figure 10). These photomicrographs reveal that the potassium clavulanate crystals precipitated from the CA present in the fermentation broth were in the form of very small thin needles, but did not agglomerate.
The crystals were very similar to those obtained in experiment 17 (Figure 7), which was performed with the Clavulin® sample, thus confirming that the precipitation reaction from fermentation broth (performed under the same conditions as those of experiment 17) was very satisfactory. This reaction also showed no formation of oil or colloids, indicating that it is suitable for application on an industrial scale. The purification process described in the literature and traditionally used for purifying clavulanic acid is performed in a series of steps that usually involve adsorption chromatography techniques. However, due to the instability of CA molecules, these techniques afford a very low CA recovery rate during purification (Butterworth, 1984; Mayer et al., 1996; Barboza et al., 2002). In the present work, the use of precipitation reactions in place of the aforementioned chromatographic techniques was therefore very advantageous, since it resulted in higher yields of high-purity potassium clavulanate. Moreover, precipitation reactions imply lower energy costs because, at the end of the precipitation reaction, the crystals can be separated from the stock solution by vacuum filtration. In contrast, chromatographic purification involves lyophilizing the potassium clavulanate to obtain the end product (the salt), thus implying higher operational costs. CONCLUSIONS The use of factorial design and response surfaces proved to be very advantageous, allowing the optimization of the reactions, which were then used to promote the purification of CA from the fermentation broth, producing a high yield (72.37%). The results confirmed that the indirect reaction, involving the passage through a stable intermediate (the t-octylamine salt), is very suitable for use as the final step in the purification of CA from fermentation broth. Its advantages over the other options described in the literature are that it promotes purification without causing the degradation of CA, increases the stability of the reaction without forming oils, incrustations or colloids, and allows for a broader operational range, thus favoring its use on an industrial scale. The choice of t-octylamine proved suitable for commercial use since the intermediate formed was neither toxic nor hygroscopic. Moreover, the marked selectivity of the amine was essential to promote high purification of the CA in the fermentation broth.
Figure 1: Precipitation of potassium clavulanate by passage through a stable intermediate.
Figure 4: Photomicrographs of clavulanic acid amine salt obtained in experiment 11 using the BSE system with 800X magnification.
Table 5: Matrix of the experimental design used to investigate the influence of temperature, concentration of CA amine salt and potassium 2-ethylhexanoate concentration on the yield of potassium clavulanate.
Figure 5 presents the experimental versus predicted values, showing that they are in good agreement. The variation in relative deviations was lower than 5%, demonstrating that the differences between the experimental responses and those predicted by the model were minimal, indicating the model's excellent fit. An analysis of the response surfaces and contour curves generated by the model (Eq. (3)) indicated that high concentrations of CA amine salt increase the yield of potassium clavulanate (Figure 6(a)). The maximum potassium clavulanate yields were 94.72, 94.25 and 92.55%, which corresponded to levels 0, 1 and 1.68, i.e., concentrations of 25, 35 and 41.8 mg/mL, respectively. A higher yield is associated with a range of temperatures between 15 and 25 °C, i.e., about 20 °C. For a high concentration of CA amine salt, a higher concentration of potassium 2-ethylhexanoate is desirable. The maximum yield (94.72%) was obtained in experiment 14, in which the concentration of 2-ethylhexanoate was the highest (1.2 molar) (Figure 6(c)). Temperatures below 15 °C combined with low concentrations of amine salt (less than 15 mg/mL) decreased the yield. An analysis of Figure 6(b) indicated that the yield of the precipitation reaction

Figure 8: 1H-NMR spectrum of clavulanic acid amine salt precipitated from the fermentation broth for the CA reaction with t-octylamine, using the same conditions as those of experiment 11 (first factorial design).

Figure 9: 1H-NMR spectrum of potassium clavulanate precipitated from the fermentation broth for the CA amine salt reaction with potassium 2-ethylhexanoate, using the same conditions as those of experiment 17 (second factorial design).

Figure 10: Photomicrographs of potassium clavulanate crystals precipitated from fermentation broth obtained with the BSE system: (a) 1500X magnification; and (b) 10000X magnification.

Table 3: Matrix of the experimental design used to investigate the influence of temperature, CA concentration and t-octylamine inflow rate on the yield of CA amine salt (%). Columns: run; coded and actual temperature (°C); coded and actual CA concentration (mg/mL); coded and actual t-octylamine inflow rate (drop/min); CA amine salt yield (%).
2019-01-03T02:24:39.033Z
2013-06-01T00:00:00.000
{ "year": 2013, "sha1": "b470e098933de88a97c8f52b4f7558b69a5bfbda", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/bjce/a/y94ZrhXmktH9dMrJfb7qqBb/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b470e098933de88a97c8f52b4f7558b69a5bfbda", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
9867139
pes2o/s2orc
v3-fos-license
Loss of heterozygosity on chromosomes 1 and 11 in carcinoma of the pancreas. Little is known of the molecular-genetic changes in carcinoma of the pancreas (CaP). In order to investigate the allele loss, or loss of heterozygosity (LOH), in CaP, we studied 13 patients with exocrine CaP and two with endocrine CaP using restriction fragment length polymorphism analysis. Twenty probes assigned to chromosomes 1, 5, 7, 9, 11, 12, 13, 14, 16, 17 and 18 were used. The frequency of LOH, or fractional allele loss (FAL), was found in two endocrine tumours to be 0.333 and 0.455 respectively; and FAL in 13 oxocrine tumours ranged from 0 to 0.25. Allele loss was shown in both exocrine and endocrine tumours by the probes Lambda MS1 at 1p33-35, and pMS51 at 11q13. Probes for other chromosomes have as yet shown no consistent LOH. In conclusion, the study showed LOH on chromosomes 1 and 11 in both exocrine and endocrine CaP. There are few reports about allele loss in CaP, in contrast to the comprehensive studies of other common malignancies, such as those in breast (Devillee et al., 1989), colorectum (Vogelstein et al., 1989), liver (Fujimori et al., 1991) and lung (Kok et al., 1987). Allele losses on chromosome 11 in both sporadic and familial pancreatic endocrine tumours, related to multiple endocrine neoplasia type 1 (MEN 1), have been reported (Bale et al., 1991;Teh et al., 1990). There have been preliminary reports of allele loss on 5q for exocrine CaP (Michelassi et al., 1989;Westbrook et al., 1990). It is of interest to know whether allele loss on chromosome 11 or other chromosomes also occurs in exocrine CaP, and whether there is any association between allele loss and clinical course in patients with CaP. Here we report a study of allele loss in CaP by screening with 20 RFLP markers, and the relationship between fractional allele loss and clinical parameters. Materials and methods Patients and biopsies Fifteen patients with carcinoma of the pancreas were studied, including two with endocrine CaP and 13 with exocrine CaP. Of the 13 with exocrine CaP 12 had tumours of the head of pancreas while the remaining one had a tumour of the ampulla of vater. All underwent resection of their tumours (either by partial or total pancreatectomy) except one patient with peritoneal secondaries that had palliative bypass (hepaticojejunostomy and gastrojejunostomy). Of the 13 patients with exocrine CaP, four had their tumours localised to the pancreas while the other nine had metastases in local lymph nodes or extension of their tumours in adjacent portal vein. Judged by the operating surgeons, seven patients had small tumours that were resected radically while the remaining six had large tumours or late diseases such that their surgical procedures should be considered palliative. All patients, if applicable, were followed-up for detection of post-operation recurrence. The data were available until 1 year after tumour resection. Surgical biopsies from the tumoral and non-tumoral pancreas tissues were snap frozen in liquid nitrogen at the time of operation. Lymphocytes from peripheral blood obtained pre-operatively were also used as a source of normal DNA. Tissue was stored at -70'C until DNA extraction. None of the patients received chemotherapy or radiotherapy prior to surgery and tumour samples were examined histologically to confirm the type of tumour present and the degree of differentiation of tumour cells. 
DNA extraction and analysis

DNA was prepared from blood and tissue samples by standard methods (Sambrook et al., 1989). Southern analyses were done as previously described (Ding et al., 1991). The 20 RFLP probes for chromosomes 1, 5, 7, 9, 11, 12, 13, 14, 16, 17 and 18 and the appropriate restriction enzymes are listed in Table I. If two alleles appeared as two separate bands in the resultant autoradiograph of the constitutional DNA, the patient was considered 'informative', or heterozygous, for the particular marker. Complete deletion or great loss of intensity of one band in the tumour DNA indicated an allele loss, or an LOH. The fractional allele loss (FAL) was defined in a tumour as the number of chromosomal arms on which allelic loss was observed divided by the number of chromosomal arms for which allelic markers were informative in the patient's normal cells (Vogelstein et al., 1989).

Results

The possible relationship between allele loss and some clinical parameters in exocrine CaP was analysed (Table III). Of seven small tumours (<3 cm), one had an allele loss, while out of six large tumours (>3 cm) five showed LOH (P<0.05). Allele loss was shown in four out of five tumours from patients with recurrence, while none of the four tumours from the patients without recurrence had allele loss (P<0.05). There was a trend that tumours with poorer differentiation or with metastasis had more allelic losses, but the differences were not statistically significant (Table III).

(Table III notes: size <3 cm = small, >3 cm = large; metastasis = regional lymph nodes or liver; N.S. = not significant; two of these patients died from operative complications and had very short follow-up.)

(Figure: Southern hybridisation autoradiographs with the probes Lambda MS1, MS32 (1q42-43) and pMS51; B = blood, T = tumour, N = non-tumour.)

Discussion

This study showed loss of heterozygosity on chromosomes 1p33-35 and 11q13 in both exocrine and endocrine carcinomas of the pancreas. Allele loss at 11q13 has been revealed in both sporadic and familial tumours arising in the endocrine pancreas (Bale et al., 1991; Teh et al., 1990). The informative patient with endocrine CaP in our study also had an allele loss in this region. Interestingly, there was LOH shown by the marker at this region in two of seven informative cases of exocrine CaP, which has not been reported before. Whether the change in this region is involved in the development of exocrine CaP needs further study.

There are relatively few cytogenetic studies on CaP, but one study of particular interest showed deletion on chromosome 1p12 in one tumour and a translocation involving that breakpoint in a second (Johansson et al., 1991). Allele loss at 1p33-35 was shown by the probe Lambda MS1 in this study in both exocrine (three out of 12 informative cases, Table II) and endocrine (2/2, Table II) CaP, which may indicate a possible tumour suppressor gene located there for both types of CaP; but as this region is frequently involved in advanced cancers of other types, its loss may be related to tumour progression (reviewed in Sager, 1989). More cases are needed to confirm the preliminary finding. It is of interest that allele loss also occurred on chromosome 1q in both endocrine cases, which may suggest that loss of genetic material in this region may be of importance for endocrine tumours.

Recently, loss or mutation of the p53 tumour suppressor gene at 17p13 has been seen at very high frequency in several common human malignancies (Stanbridge, 1990). A recent study in exocrine CaP also showed a high frequency of overexpression of mutant forms of p53 by immunohistochemistry and of point mutations of the p53 gene by direct sequencing of genomic DNA (Barton et al., 1991). Hence it was surprising to find that there was no allele loss shown by either probe (p144-D6 or pYNZ22) at 17p13 in either group of CaP in our study. This was in agreement with the finding of Westbrook et al. (1990), who did not find any LOH with pYNZ22 in seven informative pancreatic adenocarcinomas. It will be of interest to know if there is any overexpression of mutant p53 or point mutation of the p53 gene in our two groups of CaP.
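A compact way to see the fractional allele loss (FAL) bookkeeping defined in the Methods above is the small Python sketch below; the marker records are hypothetical illustrations, not study data, and the function is only a sketch of the definition, not the study's software.

```python
# Sketch of the FAL definition: FAL = (chromosomal arms with allele loss) /
# (chromosomal arms informative in the patient's normal DNA).
def fractional_allele_loss(markers):
    """markers: list of dicts with keys 'arm', 'informative' (bool), 'loss' (bool)."""
    informative_arms = {m["arm"] for m in markers if m["informative"]}
    lost_arms = {m["arm"] for m in markers if m["informative"] and m["loss"]}
    if not informative_arms:
        return float("nan")  # FAL is undefined if no marker was informative
    return len(lost_arms) / len(informative_arms)

# Hypothetical tumour informative on 1p, 1q, 11q and 17p, with loss on 1p and 11q
example = [
    {"arm": "1p",  "informative": True,  "loss": True},
    {"arm": "1q",  "informative": True,  "loss": False},
    {"arm": "11q", "informative": True,  "loss": True},
    {"arm": "17p", "informative": True,  "loss": False},
    {"arm": "5q",  "informative": False, "loss": False},  # homozygous, uninformative
]
print(round(fractional_allele_loss(example), 3))  # -> 0.5
```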
Frequent rearrangement or loss of the prototype tumour suppressor gene, retinoblastoma (RB), also occurs in some other types of tumours (Horowitz et al., 1990). No allele loss was shown by one of the cDNA probes from the RB gene in the two groups of CaP in this study.

Westbrook et al. (1990) reported allele loss in two out of seven informative exocrine CaP on chromosome 5 and suggested that the genetic changes associated with allele loss on that chromosome might be a common denominator in the development or progression of the gastrointestinal cancers, including those of the colorectum and pancreas. In our study, the one informative endocrine CaP showed allele loss at 5q21-22, but four probes on chromosome 5 did not reveal LOH in the exocrine CaP group.

Vogelstein et al. (1989) reported that, for colorectal carcinomas, patients with more LOH had a considerably worse prognosis than did the other patients. In this study we analysed the possible correlation between the frequency of loss of heterozygosity and some clinical parameters within the group of exocrine CaP (Table III). A significant correlation was found between the frequency of allele loss and the tumour size, and the presence or absence of recurrence. The other data in Table III also showed a trend toward more aggressive behaviour in tumours with LOH; however, it failed to reach statistical significance. A large study should be conducted in order to confirm the significance of these data.

In conclusion, the study showed LOH on chromosomes 1p33-35 and 11q13 in both exocrine and endocrine CaP. In the group of exocrine CaP, patients with larger tumours or recurrence may have more allelic losses in their tumours.
2014-10-01T00:00:00.000Z
1992-06-01T00:00:00.000
{ "year": 1992, "sha1": "182d2e62c8d3bf68a7c050ccdf7ba86ceffee275", "oa_license": null, "oa_url": "https://www.nature.com/articles/bjc1992173.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "182d2e62c8d3bf68a7c050ccdf7ba86ceffee275", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
253637041
pes2o/s2orc
v3-fos-license
Dissection of a major QTL qhir1 conferring maternal haploid induction ability in maize

Among the qhir11 and qhir12 sub-regions of the major QTL qhir1, only qhir11 has a significant effect on maternal haploid induction, segregation distortion and kernel abortion.

In vivo haploid induction in maize can be triggered at high frequencies by pollination with special genetic stocks called haploid inducers. Several genetic studies with segregating populations from non-inducer x inducer crosses identified a major QTL, qhir1, on chromosome 1.04 contributing to in vivo haploid induction. A recent genome-wide association study using 51 inducers and 1482 non-inducers also identified two sub-regions within the qhir1 QTL region, named qhir11 and qhir12; qhir12 was proposed to be mandatory for haploid induction because the haplotype of qhir11 was also present in some non-inducers and putative candidate genes coding for DNA and amino acid binding proteins were identified in the qhir12 region. To characterize the effects of each sub-region of qhir1 on haploid induction rate, F2 recombinants segregating for one of the sub-regions and fixed for the other were identified in a cross between CML269 (non-inducer) and the tropicalized haploid inducer TAIL8. To quantify the haploid induction effects of qhir11 and qhir12, selfed progenies of recombinants between these sub-regions were genotyped. F3 plants homozygous for qhir11 and/or qhir12 were identified and crossed to a liguleless tester to determine their haploid induction rates. The study revealed that only the qhir11 sub-region has a significant effect on haploid induction ability, besides causing significant segregation distortion and kernel abortion, traits that are strongly associated with maternal haploid induction. The results presented in this study can guide fine-mapping efforts of qhir1 and the efficient development of new inducers using marker-assisted selection.

Introduction

Large-scale production and utilization of doubled haploid (DH) lines has become common practice in maize breeding programs during the last decade owing to the associated acceleration and cost reduction in the development of inbred lines and deployment of hybrid varieties (Melchinger et al. 2013).
In vivo maternal haploid induction (HI) is the backbone of DH line production in maize (Prigge et al. 2012b), which involves pollination of desired populations with special genetic stocks called haploid inducers that induce relatively high frequencies of haploid seeds in the progeny (Coe 1959;Chaikam 2012;Prigge and Melchinger 2012). The phenomenon of in vivo maternal HI is unique to maize and has not been reported in other plant species so far (Hu et al. 2016), although its physiological and molecular bases are still elusive. Elimination of inducer chromosomes after fertilization (Zhang et al. 2008;Li et al. 2009;Xu et al. 2013a;Qiu et al. 2014) and single fertilization followed by parthenogenesis (Sarkar and Coe 1966;Bylich and Chalyk 1996;Barret et al. 2008;Swapna and Sarkar 2012) were proposed to be involved in the production of seeds with haploid embryos and normal triploid endosperms. To understand the genetic basis of HI, several studies have been conducted. HI was determined to be a quantitatively inherited trait, controlled by a small number of genes and improvable through selection (Lashermes and Beckert 1988). It was also suggested that additive and epistatic gene action affect the HI process (Prigge et al. 2011). In first QTL mapping studies on HI with segregating progeny of crosses between non-inducers and inducers, a major QTL on chromosome 1 was identified in bin 1.04 (Deimling et al. 1997; Barret et al. 2008). An extensive QTL mapping study with four bi-parental populations involving inducers CAUHOI and UH400 detected two major QTL, named qhir1 and qhir8, and several minor QTL (Prigge et al. 2012b). The major QTL qhir1 on chromosome 1.04 was the same as reported in the previous studies and explained 66% of the genotypic variance. Besides its effect on HI, qhir1 has also been associated with segregation distortion (SD) and has a strong selective disadvantage (Barret et al. 2008;Prigge et al. 2012b;Dong et al. 2013;Xu et al. 2013a). It was also noted that in vivo HI is associated with embryo and endosperm abortion (Prigge et al. 2012b;Xu et al. 2013a). Less pronounced than the effect of qhir1 was the effect of the second major QTL found by Prigge et al. (2012b), qhir8, which maps to chromosome 9 and explained only 20% of the genotypic variance. However, all these linkage mapping studies resulted in large support intervals for the detected QTL. To delineate the map position and to identify closely linked markers more useful for marker-assisted selection in development of new inducers, qhir1 was fine-mapped to a 243 kb region ) and qhir8 to a 789 kb region (Liu et al. 2015). Considering the confirmation of qhir1 in multiple studies, qhir1 may be considered mandatory for HI ability (Prigge et al. 2012b), while other loci like qhir8 may enhance the function of qhir1 to increase the HIR (Liu et al. 2015). Recently, the large qhir1 support interval described by Prigge et al. (2012b) was dissected by Hu et al. (2016) into two closely linked regions, named qhir11 and qhir12, using a novel type of genome wide association study (GWAS) to detect selective sweeps and address the problem of perfect confounding between population structure and trait expression, as in the case of inducers (cases) and non-inducer (controls). Sub-region qhir11 harbored the 243 kb interval fine-mapped by Dong et al. (2013) and had one major haplotype present in the majority of the inducers and one minor haplotype present only in two inducers studied. 
The latter occurred also in several non-inducers whose HIR was similar to spontaneous occurrence of haploids. Hence, the minor haplotype of qhir11 was deemed to be neither diagnostic for differentiating inducers and non-inducers nor effective for conditioning HI ability in maize. However, no conclusions were drawn about the major haplotype of qhir11 based on this study. By comparison, qhir12 had a single haplotype allele found in all the 53 inducers and absent in all 1482 non-inducers included in the study and was proposed to harbor three candidate genes related to putative functions involved in HI. To further determine the effects of the qhir12 and qhir11 haplotypes, the authors suggested testing the effect of these haplotypes on HI in near-isogenic lines or selfed progenies of recombinants that segregate at one locus while the other is fixed. The main objective of our study is to adopt this strategy and test the effects of qhir11 and qhir12 haplotypes on HIR using selfed progenies of recombinants in a huge F 2 population derived by crossing a non-inducer with a tropically adapted haploid inducer. In addition, we examined which of the specific sub-regions of qhir1 is specifically associated with segregation distortion and kernel abortion, traits associated with maternal haploid induction. Notation of the genotypes We denote henceforth the qhir11 and qhir12 sub-regions as A and B, respectively. We use the following notations for the various genotypes possible for each sub-region: AA = homozygous for the putative inducer allele at all markers assayed in the qhir11 sub-region; aa = homozygous for the putative non-inducer allele at all markers in the qhir11 sub-region; BB = homozygous for the putative inducer allele at all markers assayed in the qhir12 subregion; bb = homozygous for the putative non-inducer allele at all markers in the qhir12 sub-region; Aa = heterozygous at all markers assayed in the qhir11 sub-region; and Bb = heterozygous for all markers assayed in the qhir12 sub-region. Genetic material One tropically adapted inducer, TAIL8, and one tropically adapted non-inducer, CML269, were used as parents in this study. TAIL8, harboring the A and B alleles in homozygous state has a mean HIR of 9.9% (Chaikam et al. 2016). CML269 has no HI ability and harbors the a and b alleles in homozygous state. The non-inducer (CML269) x inducer (TAIL8) cross was made in the winter season of 2011 at CIMMYT's experimental station at Agua Fria, Mexico (20.26°N, 97.38°W) to generate the F 1 generation. From the F 1 , 100 seeds were planted and selfed to generate the F 2 generation in the summer season of 2011. A total of 7160 F 2 seeds of good quality were genotyped as described below. Recombinants between the qhir11 and qhir12 subregions identified on the basis of the marker assays were grouped into four F 2 genotype classes: AABb; aaBb, AaBB, Aabb, and used for further assays. From each of the four F 2 genotype classes of recombinants, 10 individual plants were randomly selected for selfing to obtain F 2:3 families segregating for the heterozygous sub-region. Only F 3 seeds homozygous for the segregating sub-region were planted in the field at Agua Fria in the winter season of 2016. Hybrid (PDH3 × PDH8), homozygous for liguleless gene lg2 (Prigge et al. 2012a;Chaikam et al. 2016;Melchinger et al. 2016), was used as a female tester to produce testcross seed for evaluating the HIR. The tester was stagger-planted four times at weekly intervals to synchronize flowering with the F 3 plants. 
Each F 3 plant that produced pollen was crossed on to 10-15 tester plants (based on pollen availability) and was also self-pollinated. Some F 3 plants were found to be haploids based on their weak plant stature, narrow and erect leaves and sterile tassels (Prigge et al. 2011;Chaikam et al. 2016) and were therefore not pollinated. Some plants could not be used for testcrossing because of severe virus infection. Testcross seed was bulked from all the tester plants pollinated by the same F 3 plant. A graphical representation of the scheme followed for developing the genetic material is shown in Fig. 1. Markers delineating the qhir11 and qhir12 sub-regions According to Hu et al. (2016), the physical boundaries for qhir11 are between SNPs PZE-101,081,177 (physical co-ordinate: 1: 68,134,633) and SYN25793 (physical co-ordinate 1: 68,670,617). For qhir12, the borders are between SYN4966 (physical co-ordinate 1: 71,795,509) and PZA00714.1 (physical co-ordinate 1: 75,768,235). All the physical co-ordinates of the SNPs assayed are with reference to B73 AGP V2 (http://ensembl.gramene.org/ Zea_mays). Sets of six markers covering the qhir11 subregion and eight markers covering the qhir12 sub-region were used to genotype each sub-region (Supplementary table 1). Based on the selected SNPs, the haplotypes of TAIL8 and CML269 at each sub-region were compared with the large set of non-inducers and inducers reported by Hu et al. (2016) and verified. All markers used in this study were genotyped using KASP assays (LGC Genomics, UK) developed from the Illumina MaizeSNP50 BeadChip (Ganal et al. 2011), except for one SNP developed from HapMap V.2 (Suppl. Table 1). Analysis of the F 2 population DNA was extracted from 7160 individual seeds of the F 2 population of cross TAIL8 × CML269 following standard procedures (Gao et al. 2008) and genotyped with the abovementioned SNPs. Among the polymorphic SNPs between the two parents available to CIMMYT for the qhir11 and qhir12 sub-regions, two SNPs (PZE0166290049 and PZE0166357949) were selected to represent qhir11 and two SNPs (SYN26730 and PZE101085336) to represent qhir12 in the genotyping of the F 2 seeds. Based on the results, 428 recombinant F 2 seeds in the four F 2 genotype classes described above were selected and planted in the field. Leaf DNA of these plants was extracted at the four-leaf stage following CIMMYT's laboratory protocols (CIMMYT 2001). Because the two SNPs of each sub-region in the seeds did not cover the respective physical interval entirely, we analyzed additionally four SNPs for qhir11 and 10 SNPs for qhir12, which were part of the SNPs on the MaizeSNP50 BeadChip polymorphic between the two parents. This assay was also used to ascertain the classification of the recombinant F 2 plants; plants showing any discrepancy were discarded. Moreover, some F 2 plants did not survive or failed to produce selfed seed. Thus, selfed ears were harvested from 21 AABb, 72 aaBb, 56 AaBB, and 44 Aabb genotypes in the F 2 generation, adding up to a total of 193 ears. Genotyping and phenotypic analysis of F 2:3 families from recombinants Ten ears were randomly selected from each of the four afore-mentioned F 2 genotype classes for raising F 2:3 families. DNA was extracted from ~45 individual seeds from each of the 40 F 2:3 families and genotyped with three SNP markers for both qhir11 and qhir12, covering the entire physical interval of the sub-regions as identified by Hu et al. (2016). 
From each family, only the seeds homozygous for the qhir11 and qhir12 sub-regions were selected as male parents for pollination of liguleless tester PDH3 × PDH8. Among the 756 F3 plants that were test-crossed, 83.7% resulted in more than 1000 seeds, 12% resulted between 500 and 999 seeds, and 4.2% resulted in less than 500 seeds. For each F 3 plant with more than 1000 testcross seeds, 1000 seeds were germinated in styrofoam trays in a shade house at the Agua Fria experimental station. Each tray accommodated 100 seeds. After 14 days of germination, each tray was evaluated for the number of germinated seedlings and the number of seedlings with and without ligule. For the F 3 plants with less than 1000 testcross seeds, all seeds were germinated. The HIR was calculated as HIR = N L /(N L +N NL ), where N L and N NL refer to the number of plants with and without ligule, respectively. Phenotyping of kernel abortion We refer here to endosperm abortion as kernel abortion, because most endosperm aborted seeds in our study lacked an embryo similar to the observation by Xu et al. (2013a). Selfed ears obtained from the F 3 plants in each of the four genotype classes were visually rated for a kernel abortion score (KAS) on a scale of 1-5, where 1 represents no aborted seed visible on the ear, and 5 represents complete abortion with no seed set on the ear. To measure the extent of kernel abortion quantitatively, the number of normal seeds and number of kernel aborted seeds were counted on each ear from the AAbb and aaBB genotype classes as suggested by Xu et al. (2013a). Kernel abortion rate (KAR) for each entry was calculated as KAR = N a /(N a + N n ), where N a refers to the number of aborted seeds and N n to the number of normal seeds. Statistical analyses The HIR for each F 3 genotype within each F 2 genotype class was calculated as the least-squares means in the following generalized linear model assuming a binomial distribution: where Y ijk is the ith observation in the jth genotype class for the kth F 2:3 family, µ is the general mean, g j is the effect of the jth genotype, f k is the effect of the kth family and e ijk the residual error. The model was fitted using the glm function in the R software package, version 3.3.0. Least-squares means and corresponding confidence intervals were calculated with the lsmeans package, version 2.23-5, and compact letters displays were produced with the multcompView package, version 0.1-7, at significance level α = 5%. We used an over-dispersion factor to account for variance in the data in excess of the binomial sampling variance that may result in an inflation of the standard errors. F 3 plants heterozygous for either of the sub-regions were not tested in this experiment. Significant differences in these tests determine whether the qhir11 or qhir12 sub-region alone is sufficient to exhibit HIR equivalent to qhir11 and qhir12 together. KAS for each F 3 genotype was calculated with the same generalized linear model as for HIR except that a Poisson distribution was assumed and KAS was used as response variable. KAR for the two F 3 genotype classes AAbb and aaBB was calculated with the same generalized linear model but without the family term because of confounding between family and genotype. Segregation distortion (SD) in the F 2 generation was investigated with a G-test for goodness-of-fit to the segregation ratios expected under Mendelian inheritance and applying a significance level α = 5%. 
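A minimal sketch of the binomial generalized linear model described above (genotype class and F2:3 family as categorical effects, haploid counts out of total testcross seedlings as the response) is given below. The study itself fitted the model in R with the glm function and extracted least-squares means with the lsmeans package; the Python version here is only a rough analogue, and the data frame, family labels and counts are hypothetical, not the study's data.

```python
# Hedged sketch: binomial GLM of haploid counts per F3 plant, with genotype
# class and family as categorical effects (hypothetical example data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "genotype": ["AAbb", "AAbb", "AABB", "AABB", "aaBB", "aaBB", "aabb", "aabb"],
    "family":   ["fam1", "fam2", "fam1", "fam2", "fam1", "fam2", "fam1", "fam2"],
    "haploids": [64,      60,     50,     52,     2,      1,      1,      1],
    "total":    [1000,    950,    1000,   980,    1000,   990,    1000,   970],
})

# Response for a binomial GLM: columns of (successes, failures)
endog = np.column_stack([df["haploids"], df["total"] - df["haploids"]])
exog = pd.get_dummies(df[["genotype", "family"]], drop_first=True, dtype=float)
exog = sm.add_constant(exog)

fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
df["fitted_HIR"] = fit.predict(exog)                 # fitted haploid induction rate
print(df.groupby("genotype")["fitted_HIR"].mean())   # crude per-genotype summary
```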
The G-test of goodness of fit to expected segregation ratios and the expected allele frequencies was carried out with the R software function GTest from the DescTools package, version 0.99.17. Gene annotations The gene annotations by the MAKER gene annotation pipeline (Cantarel et al. 2008) in the physical interval of qhir11 in the B73 genome sequence (V2) available in http://ensembl.gramene.org/Zea_mays was used to search for putative candidate genes in the studied interval. Recombination and segregation in the F 2 and F 3 generations A total of 475 recombinants in the F 2 generation falling into different genotype classes were identified between qhir11 and qhir12 based on the segregation analysis of 7154 F 2 seeds (Table 1). No recombination was observed between the two sub-regions in most of the F 2 seeds (93.4%), which had the same genotype as the F 1 cross or the parent lines. Single recombination events between qhir11 and qhir12 were observed in 6% of the F 2 seeds, and double recombination events between qhir11 and qhir12 were observed in 0.1% of F 2 seeds. In addition, 0.6% of F 2 seeds had recombination events which occurred within either of the sub-regions. Based on the recombination observed between the distal SNP of qhir11 and the proximal SNP of qhir12, the recombination rate between the qhir11 and qhir12 sub-regions was 3.1%. From the 428 single recombinant F 2 plants between qhir11 and qhir12, a total of 193 plants remained for further analyses, with the following numbers in the four F 2 genotype classes: 72 aaBb, 56 AaBB, 44 Aabb and 21 AABb. Among F 3 plants, highly significant (P < 0.001) segregation distortion against the homozygous inducer genotype was observed for the qhir11 sub-region ( Table 2). The segregation distortion observed for the qhir12 sub-region was also significant (P < 0.01) but against the non-inducer genotype. The same trends were observed for the allele frequencies at both sub-regions. Effects of the qhir11 and qhir12 sub-regions on haploid induction rate F 3 plants with genotype AAbb, derived from F 2 plants in genotype class Aabb, revealed on average a significantly (P < 0.01) higher HIR (6.45%) than aabb plants having a mean HIR = 0.12% (Table 3). Thus, the AA genotype showed a strong positive effect on HIR. In F 3 plants of F 2 genotype class AABb, HIR was significantly (P < 0.01) higher in AAbb plants (7.16%) than in AABB plants (5.92%). Thus, a relatively small negative effect on HIR was observed for the BB genotype in the presence of the AA genotype. This negative effect was not observed in the absence of the AA genotype, because in F 3 plants from F 2 genotype class aaBb, the mean HIR of the aaBB genotypes (0.12%) was not significantly different from the mean HIR of the aabb genotypes (0.09%). In F 3 plants of the F 2 genotype class AaBB, the HIR of AABB genotypes was also significantly (P < 0.01) higher than the HIR of aaBB genotypes. Regarding the HIR of all F 3 plants irrespective of their origin from the four F 2 genotype classes, the highest HIR (5.96%) was observed for genotype AAbb, followed by a significantly (P < 0.05) smaller value (HIR = 5.02%) for genotype AABB (Table 4). A large decrease in HIR was found in the aaBB genotype (HIR = 0.19%) and a further significant (P < 0.05) decrease in the aabb genotype (HIR = 0.12%). 
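The segregation-distortion check described above is a likelihood-ratio (G) goodness-of-fit test of observed genotype counts against the Mendelian expectation; the study ran it with the GTest function from the R DescTools package. A rough Python analogue, using scipy's power_divergence with the log-likelihood statistic and purely hypothetical counts, is:

```python
# Hedged sketch of a G-test of F3 genotype counts against a 1:2:1 expectation.
from scipy.stats import power_divergence

observed = [140, 410, 250]                           # e.g. AA : Aa : aa counts (hypothetical)
total = sum(observed)
expected = [total * p for p in (0.25, 0.5, 0.25)]    # Mendelian 1:2:1

g_stat, p_value = power_divergence(observed, expected, lambda_="log-likelihood")
print(f"G = {g_stat:.2f}, P = {p_value:.4f}")        # small P -> segregation distortion
```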
Thus, in the presence of AA at qhir11, BB had a reducing effect on HIR but in the presence of aa, it had an increasing effect on HIR, whereas no significant effect was observed in the analysis of means in F 3 genotypes derived from individual F 2 genotype classes. Effects of the qhir11 and qhir12 sub-regions on kernel abortion Most ears harvested from AAbb and AABB genotypic class F3 plants showed some level of kernel abortion while most ears of aaBB and aabb classes did not record any abortion (Suppl. Figure 1a and 1b). Regardless of the genotype at the other sub-region, F 3 plants of genotype AA had a significantly (P < 0.01) higher KAS than the aa genotype. Quantitative evaluation of kernel abortion in the F 3 generation showed that genotype AAbb had a six-fold higher KAR than the genotype aaBB (Table 4). Strategy for genetic delineation of qhir1 influencing haploid induction Genetic delineation of the qhir11 and qhir12 sub-regions required large population sizes in the F 2 generation considering that they are physically located very close to each other on chromosome 1.04 (Hu et al. 2016). Regarding the incomplete penetrance of qhir1 for HIR (Prigge et al. 2012b), the choice of the parents for this study was critical to guarantee sufficient variation in HIR of progenies recombinant for the qhir11 and qhir12 sub-regions. The non-inducer parent CML269 had shown highly significant difference in HIR values between progeny selected for qhir1 in combination with multiple haploid inducers (CIM-MYT, unpublished data). Therefore, CML269 was chosen as non-inducer parent to develop a large F 2 population with the selected tropicalized haploid inducer TAIL8. The 14 SNP markers selected for our analyses provided good coverage of the qhir1 region and were sufficient to delineate the sub-regions qhir11 and qhir12. The recombinants observed in the F 2 generation showed a genetic distance of 3.1 cM between them, which is consistent with the estimate for qhir11 and qhir12 reported by Hu et al. (2016). We did not study the effect of qhir11 and qhir12 in homozygous recombinants (AAbb or aaBB) of the F 2 generation because they were too few to make valid inferences. Given the huge efforts required in phenotyping for HIR, we had to restrict the number of individuals analyzed from each F 2 genotype class to 10 F 2 plants, resulting in 40 F 2:3 families which could be analyzed within and among the four genotype classes. Seed DNA was genotyped for each of these F 3 families to eliminate heterozygotes before planting and conducting testcrosses and selfings with the F 3 plants. A liguless tester was used in testcrosses for measuring HIR because this method was recommended for accurate measurement of HIR in comparison to other methods ) and has been reliably used for determining the HIR in previous studies (Prigge et al. 2012a;Melchinger et al. 2013;Chaikam et al. 2016). Staggered planting of the liguleless tester multiple times allowed achieving synchrony in flowering with the majority of the F 3 plants differing widely in anthesis date (data not shown). For the majority of F 3 plants (83.7%), we could evaluate HIR based on the recommended number of testcross seed (1000) and for only less than 1% of the F 3 plants we had to measure HIR with fewer than 200 testcross seeds, which was the lower limit suggested by Prigge et al. (2012a). 
Effects of qhir11 and qhir12 on maternal haploid induction rate The F 3 progenies, which were homozygous recombinants for the qhir11 and qhir12 sub-region, showed unambiguous differences in HIR (Table 3). HIR is known to be a trait with incomplete penetrance and hence, has a tendency to show highly variable expression in different genetic backgrounds (Prigge et al. 2012b). In the population studied here, there appeared to be no alleles masking the HIR trait, because HIR ranged from normal inducer levels to non-inducer levels. In contrast to the hypothesis put forward by Hu et al. (2016), the 535 kb segment of the qhir11 sub-region was in our study the only sub-region of qhir1 mandatory for HI ability. The inducer qhir11 allele (A) increased the HIR significantly in the presence of inducer (B) or non-inducer (b) alleles at the sub-region qhir12. The inducer qhir12 allele alone, in the absence of the inducer qhir11 allele did not cause a HIR higher than the spontaneous occurrence of haploids observed in normal noninducer maize lines (Chase 1969). Actually, qhir12 significantly decreased HIR in the presence of the inducer allele at qhir11 but significantly increased HIR in the presence of the non-inducer allele at qhir11. In both cases, the significant differences due to the qhir12 allele were not strong enough to change the overall expression of HI due to the qhir11 allele, but merely modified the HIR. A genome-wide study on 53 haploid inducers publicly available and 1,482 normal maize lines provided strong evidence that qhir11 and qhir12 were fixed in all the inducers and this was exclusively attributed to selection for HI (Hu et al. 2016). The qhir11 sub-region, also found significant in the study by Hu et al. (2016), revealed two haplotypes, where the minor haplotype was shared by two non-inducer lines, which did not have HI ability. Additionally, Hu et al. (2016) identified qhir12 as the most probable genomic segment carrying gene(s) responsible for HI, as this region had a single haplotype that was unchanged in all the inducers. In contrast, the results of our validation study clearly show that the major haplotype of qhir11 found by Hu et al. (2016) is mandatory for HI and that the presence or absence of inducer qhir12 did not affect the HIR significantly. Our study cannot make any inference on the effect of the minor haplotype of qhir11 that was present only among two publically available inducers analyzed. Also, our study cannot make any specific conclusion regarding the 243 kb finemapped genomic region for HIR ), as we have not studied this region in particular, but rather a larger genomic region harboring this fine-mapped region. Traits associated with maternal haploid induction Various authors suggested investigating segregation distortion as a means to further fine-map the qhir11 sub-region influencing maternal haploid induction in maize (Barret et al. 2008;Prigge et al. 2012b;Dong et al. 2013). Strong segregation distortion was reported against the haploid inducer allele in many genetic studies (Barret et al. 2008;Prigge et al. 2012b;Dong et al. 2013). Xu et al. (2013) studied segregation distortion in regard to HI and mapped a major QTL associated with segregation distortion, sed1, on chromosome 1, overlapping with the fine-mapped qhir1 QTL. It is not clear yet, if segregation distortion is due to the same gene causing HI, or if another gene reducing fitness is closely linked to the gene(s) in qhir1 causing HI. 
It is also not clear exactly what type of reduction in fitness is linked to HI. Barret et al. (2008) suggested impediments in male gametic transmission associated with HI, while Xu et al. (2013) proved that there is both gametic and zygotic selection responsible for segregation distortion associated with HI. Our study did not aim to distinguish whether segregation distortion was caused by the same gene responsible for HI, or by another tightly linked gene. However, we observed in this study that both HI ability and strong segregation distortion against the inducer qhir11 allele, both of which were not observed for qhir12. For qhir12, the observed segregation distortion was significantly smaller, and in the opposite direction, favoring the inducer allele, while a much smaller effect was found on the HIR. In addition to SD, high maternal HI also is strongly associated with the formation of defective kernels, including embryo and endosperm abortion ) and reduced seed set (Satarova and Cherchel 2010). Similar to its effects on SD, the qhir11 sub-region in our study strongly increased kernel abortion while qhir12 had negligible effect on this. It is possible that the same gene(s) conditioning the HIR or another tightly linked gene within the qhir11 region can condition kernel abortion. One hypothesis for this relationship is that one of the sperm cells from the inducer pollen could be defective while the other sperm cell is normal (Geiger 2009). When the defective sperm cell fertilizes the central cell, endosperm abortion can result, and when the defective sperm cell fertilizes the egg cell, a haploid embryo or aborted embryo can result. This hypothesis was supported by the occurrence of morphologically different sperm cells (Bylich and Chalyk 1996), aneuploid microsporocytes which may produce aneuploid sperm cells (Chalyk et al. 2003), and an increase in heterofertilization when haploid inducer pollen is used (Kraptchev et al. 2003;Rotarenco and Eder 2003). Another hypothesis involves epigenetic, dosage-dependent modification of the chromosomes exerted by the sed1 locus which overlaps with the qhir1 locus resulting in incomplete penetrance of the sed1/qhir1 locus . It was proposed that expression of the sed1 locus can differ between the pollen grains resulting in some pollen grains having strong epigenetic modification while others are less modified. A strong modification of the sperm cell chromosomes may lead to kernel abortion or haploid formation while less epigenetically modified pollen leads to normal kernel formation. Further studies are required to understand the exact mechanism(s) behind kernel abortion associated with HI, for which cloning the gene(s) underlying these loci could be critical. Putative candidate genes in the qhir11 physical interval The physical interval of qhir11 in the B73 genome sequence (V2) has 13 protein-coding genes annotated by the MAKER gene annotation pipeline (Cantarel et al. 2008) as available in http://ensembl.gramene.org/Zea_mays (Suppl. Table 2). Out of these genes, 11 are predicted to have protein domains with known functions. Among these, gene Zm00001d029411 is predicted to have a protein which falls into the CULLIN family of ubiquitin ligases. CULLIN-dependent ubiquitin ligases form a class of structurally related multi-subunit enzymes that control the rapid and selective degradation of important regulatory proteins involved in cell cycle progression and development (Thomann et al. 2005). 
In mice, knocking out a cullin-RING ubiquitin ligase leads to infertile male mice, due to fewer numbers of mature spermatozoa, most of which exhibit morphological defects, rendering them immotile and unable to fertilize eggs. In addition to the morphological abnormalities, chromosomal defects were also observed which may also contribute to infertility (Yin et al. 2011). The gene Zm00001d029411 in B73 had maximum similarity to AtCUL1 in Arabidopsis thaliana, based on a BLAST N alignment (E = 0.0012). CUL1 forms part of the SCF (SKP1-CUL1-F-box) complex in plants and animals, where SCF-dependant ubiquitylation plays a critical role in the control of the cell cycle (Thomann et al. 2005). Consistent with such a role, Arabidopsis cul1 lossof function mutants arrest early during embryogenesis at the zygote stage (Shen et al. 2002). Genetic analysis also indicated a reduction in transmission of the atcul1 mutation through both male and female gametes. Considering the specific roles the protein domain plays in cell cycle and gametophyte development and transmission, this gene could be an interesting putative candidate gene for HI ability. Several recent studies indicate that manipulation of Centromere Histone CENH3 could lead to in vivo haploid induction in Arabidopsis (Ravi and Chan 2010;Seymour et al. 2012;Ravi et al. 2014), and in maize- (Kelliher et al. 2016). However, native CENH3 may not have any role in in vivo HI using maternal haploid inducers in maize. CENH3 is localized on chromosome 6.06 (Prigge et al. 2012b) and no mapping study has so far detected a major QTL for HI in this region. Also, study by Kelliher et al. (2016) showed that altered CENH3 when introduced into maize showed a maximum of 3.6% HIR, which is significantly lower than the high HIR (~10% or more) obtained using the improved maternal haploid inducers (Röber et al. 2005;Prigge et al. 2012a;Chaikam et al. 2016). Our study also showed that none of the annotated genes at qhir11 are related to CENH3. Therefore, cloning of the gene(s) responsible for maternal haploid induction, underlying qhir11, may provide a better insight into the genetic mechanism underlying gynogenesis in maize. It also needs to be explored whether CENH3-mediated HI can be synergistic to the qhir1 mediated HI in maize. Conclusions In this study, the qhir1 region was genetically delineated, and the haploid induction ability of qhir11 and qhir12 subregions was dissected through analysis of recombinants from a large F2 population derived from a non-inducer x haploid inducer cross. The study clearly revealed that qhir11 is the only sub-region with a strong effect on HIR, whereas qhir12 had a negligible effect on HIR, in contrast to the hypothesis of Hu et al. (2016) based on a selective sweep based GWAS approach. Furthermore, our study proved that qhir11 is more strongly associated than qhir12 with segregation distortion and kernel abortion, two traits that are associated with maternal haploid induction. The results of this study give direction in further fine mapping and cloning of the gene/s underlying qhir1. The molecular markers delineating qhir11 can be used for more efficient development of new inducer lines adapted to diverse agro climatic zones using marker assisted selection. Author's note When this publication was in production, three articles (Kelliher et al. 2017;Gilles et al. 2017;Liu et al. 2017) were published about cloning the gene underlying qhir1 QTL that codes for a sperm specific phospholipase and triggers haploid induction. 
Author contribution statement AEM, SKN, VC and PMB designed the experiments. VC, ML and LLA coordinated the field trials and phenotyping. SKN and VC coordinated the sample collection, DNA extraction and genotyping. WM, SKN, VC and AEM analyzed the data. SKN, VC and WM wrote the manuscript. AEM and PMB edited the manuscript.
2022-11-19T15:25:09.428Z
2017-03-18T00:00:00.000
{ "year": 2017, "sha1": "f15dc326c476e1befd17f549d02b51609380cf0f", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00122-017-2873-9.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "f15dc326c476e1befd17f549d02b51609380cf0f", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
3954570
pes2o/s2orc
v3-fos-license
Prevalence and Prognostic Significance of Functional Mitral and Tricuspid Regurgitation Despite Preserved Left Ventricular Ejection Fraction in Atrial Fibrillation Patients

Background: We investigated the prevalence and prognostic significance of functional mitral regurgitation (MR) and tricuspid regurgitation (TR) in patients with atrial fibrillation (AF) and preserved left ventricular ejection fraction (LVEF).

Methods and Results: We retrospectively studied the cases of 11,021 consecutive patients who had undergone transthoracic echocardiography. AF appeared in 1,194 patients, and we selected 298 with AF and LVEF ≥50% but without other underlying heart diseases. A moderate or greater (significant) degree of functional MR and of TR was seen in 24 (8.1%) and in 44 (15%) patients, respectively (P=0.0045). In contrast, significant MR and TR were more frequently seen in patients with AF duration >10 years (28% vs. 25%, respectively). During the follow-up period of 24 ± 17 months, 35 patients (12%) met the composite endpoint defined as cardiac death, admission due to heart failure, or mitral and/or tricuspid valve surgery. On Cox proportional hazards analysis, both MR and TR grading predicted the endpoint, independently of other echocardiographic parameters. On Kaplan-Meier analysis, presence of both significant functional MR and TR was associated with poor prognosis, with an event-free rate of only 21% at the mean follow-up period of 24 months.

Conclusions: Significant functional MR and TR are seen in a substantial proportion of patients with longstanding AF, despite preserved LVEF. This MR/TR combination predicts poor outcome for AF patients, who may have to be treated more intensively.

Introduction

The incidence of atrial fibrillation (AF) increases with age, 1-3 and heart failure (HF) is the most important cause of mortality in elderly AF patients. 4-6 AF can develop after HF, but it can also precede HF with reduced left ventricular ejection fraction (LVEF) or HF with preserved LVEF. 7-9 Patients with AF and subsequent HF have a poor prognosis, but the determinants that generate HF in AF patients are still uncertain. Secondary (functional) mitral regurgitation (MR) and tricuspid regurgitation (TR) can occur as a result of atrial dilatation in AF patients, despite having preserved left ventricular (LV) systolic function. 10-16 These can be termed "atrial functional MR" and "atrial functional TR," respectively. Both atrial functional MR and TR resulting from atrial dilatation have recently received much attention, but little information is available regarding the disease characteristics of these valvular regurgitations and their impacts on HF. In addition, very few descriptions are available for the MR and TR secondary to AF in the current guidelines for valvular heart disease. 17,18 The aim of the present retrospective analysis was therefore to investigate the prevalence and the prognostic significance of the occurrence of functional MR and TR despite preserved LV systolic function in patients with AF.
Subjects and Data Collection

We retrospectively analyzed the echocardiography laboratory database and the medical records at Osaka City General Hospital, Osaka, Japan. We selected patients with AF and preserved LVEF (≥50%) who underwent transthoracic echocardiography (TTE) in the echocardiography laboratory over a period of 2 consecutive years (June 2012-May 2014). We excluded patients with acute decompensated HF (ADHF), a moderate or greater degree of organic valvular heart disease, apparent degenerative changes in the mitral or tricuspid valve complex, a history of coronary artery disease or regional LV wall motion abnormality suggesting myocardial ischemia or infarction, or a definite diagnosis of cardiomyopathy. Patients who had undergone cardiac device implantation or cardiac surgery, patients with AF spontaneously or therapeutically defibrillated within 1 week after TTE, and patients with <1 week of follow-up data were also excluded. In patients who underwent repeated TTE during the 2 years, the earliest TTE was considered the entry point into the study, and the repeated TTE were excluded. The clinical, electrocardiographic (ECG), and echocardiographic data at TTE were retrospectively collected. Clinical data included demographics, presenting symptoms, cardiovascular risk factors, and medical treatment at TTE. The clinical information parameters are listed in Table 1. The duration of AF was estimated if the ECG data or other specific information in the medical record were available. The echocardiographic data were collected from the TTE database.
The occurrence of cardiac events after TTE was also recorded, and the primary composite endpoint was defined as cardiac death (including sudden death of unknown cause), hospitalization due to worsening HF, or mitral valve and/or tricuspid valve repair. According to the consensus of the Department of Cardiology and Department of Cardiovascular Surgery, Osaka City General Hospital, we usually performed mitral valve repair and concomitant tricuspid valve repair in patients who had permanent AF that had persisted for >1 year, chronic moderate or severe MR with at least mild TR, chronic HF symptoms of at least New York Heart Association (NYHA) functional class II, and at least 1 prior admission for ADHF complicated by severe MR. 12 This study was approved by the institutional review board, with a waiver of individual consent.

Echocardiography

All TTE was performed by expert level 3 sonographers according to the American Society of Echocardiography definition, 19 and the results were interpreted by experienced attending doctors in the echocardiography laboratory. Any disagreements were resolved on consensus reading. For all measurements of systolic parameters in AF patients, 1 beat occurring after 2 serial beats with an average RR interval was carefully selected. 20 For all measurements of diastolic Doppler parameters in AF patients, 1 beat with an average Doppler-wave contour and an average velocity was visually, but carefully, selected. The LV end-diastolic dimension, LV end-systolic dimension, LVEF, LV mass, and left atrial (LA) dimension were measured according to the guidelines. 21 We indexed these parameters by dividing them by the body surface area. The severity of the MR was defined using a multiparametric approach, including an assessment of the color Doppler-derived jet area, the effective regurgitant orifice area using the proximal isovelocity surface area method, the MR volume and fraction using the Doppler-derived volumetric method, and the pulmonary vein flow velocity pattern. 22 TR grade was also defined using a multiparametric approach, including an assessment of the color Doppler-derived jet area, the continuous wave Doppler-derived jet density and contour, and the hepatic vein flow velocity pattern. 22 The methods for the assessment of these valvular regurgitations were selected per patient under careful consideration of the methodological advantages and limitations, according to the American Society of Echocardiography guidelines. 22 MR and TR were graded as none, mild, moderate, or severe. When a considerable discrepancy was evident in the grading between the multiple methods or between the different beats in AF patients, we used the following 3 levels of borderline grading: none-mild, mild-moderate, and moderate-severe. For the statistical analysis, the grading was scored as follows: none, 0; none-mild, 0.5; mild, 1; mild-moderate, 1.5; moderate, 2; moderate-severe, 2.5; and severe, 3. A ≥moderate degree of MR or TR (i.e., 2, 2.5, or 3) was defined as significant in the present study. Continuous wave Doppler was used to obtain the TR peak velocity (v, m/s) and the transtricuspid systolic pressure gradient (mmHg), which was calculated as 4×v². The right ventricular systolic pressure (RVSP) was then estimated as the sum of the estimated transtricuspid systolic pressure gradient and the right atrial (RA) pressure. The RA pressure was estimated as follows: an inferior vena cava (IVC) diameter ≤2.1 cm that collapsed by ≥50% with a sniff was considered to indicate a normal RA pressure of 3 mmHg, whereas an IVC diameter >2.1 cm that collapsed <50% with a sniff was considered to indicate a high RA pressure of 15 mmHg. In scenarios in which the IVC diameter and collapse did not fit this paradigm, an intermediate value of 8 mmHg was assigned. 23 The transmitral flow velocity curves in diastole and the mitral annular tissue Doppler imaging signals were obtained as previously described. 24 The peak velocity of early diastolic flow across the mitral valve (E) and the deceleration time were measured. The peak early diastolic tissue Doppler velocity of the medial mitral annulus (e') on apical 4-chamber view was measured, and the E/e' ratio was calculated.
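Two of the quantities defined above lend themselves to a short worked example: the ordinal score attached to each MR/TR grade and the RVSP estimate built from the transtricuspid gradient (4×v²) plus the IVC-derived RA pressure. The Python sketch below only restates those rules; it is an illustration, not the laboratory's software.

```python
# Hedged sketch of the grading score and RVSP estimation described above.
GRADE_SCORE = {
    "none": 0.0, "none-mild": 0.5, "mild": 1.0, "mild-moderate": 1.5,
    "moderate": 2.0, "moderate-severe": 2.5, "severe": 3.0,
}

def is_significant(grade: str) -> bool:
    """'Significant' regurgitation = moderate or greater (score >= 2)."""
    return GRADE_SCORE[grade] >= 2.0

def ra_pressure(ivc_diameter_cm: float, collapse_fraction: float) -> float:
    if ivc_diameter_cm <= 2.1 and collapse_fraction >= 0.5:
        return 3.0    # normal RA pressure
    if ivc_diameter_cm > 2.1 and collapse_fraction < 0.5:
        return 15.0   # high RA pressure
    return 8.0        # intermediate scenario

def rvsp(tr_peak_velocity_m_s: float, ivc_diameter_cm: float, collapse_fraction: float) -> float:
    gradient = 4.0 * tr_peak_velocity_m_s ** 2        # transtricuspid gradient, mmHg
    return gradient + ra_pressure(ivc_diameter_cm, collapse_fraction)

print(is_significant("moderate-severe"))   # True
print(rvsp(2.8, 1.8, 0.6))                 # 4*2.8^2 + 3 ~= 34.4 mmHg
```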
The RA pressure was estimated as follows: an inferior vena cava (IVC) diameter ≤2.1 cm that collapsed by ≥50% with a sniff was considered to indicate a normal RA pressure of 3 mmHg, whereas an IVC diameter >2.1 cm that collapsed <50% with a sniff was considered to indicate a high RA pressure of 15 mmHg. In scenarios in which the IVC diameter and collapse did not fit this paradigm, an intermediate value of 8 mmHg was assigned. 23 The transmitral flow velocity curves in diastole and the mitral annular tissue Doppler imaging signals were obtained as previously described. 24 The peak velocity of early diastolic flow across the mitral valve (E) and the deceleration time were measured. The peak early diastolic tissue Doppler velocity of the medial mitral annulus (e') on apical 4-chamber view was measured, and the E/e' ratio was calculated. Statistical Analysis The categorical variables are expressed as absolute values and percentages and were compared using chi-squared test, McNemar test, or Kruskal-Wallis test followed by posthoc pairwise test. The continuous variables are expressed as mean ± SD and were compared using 1-way analysis of variance followed by the post-hoc Tukey-Kramer test, or Kruskal-Wallis test followed by the post-hoc pairwise test. The significant predictors of the primary endpoint were identified on univariate Cox proportional hazards regression analysis. We then performed a multivariate Cox proportional hazards regression analysis based on stepwise selection with a model using significant clinical predictors and a model using significant echocardiographic predictors to determine the independent predictors of the primary endpoint. The Kaplan-Meier method was used to evaluate event-free survival. Statistical analysis was performed using MedCalc (version 15.8, MedCalc Software, Ostend, Belgium). P<0.05 was considered statistically significant. Results Subject selection is shown in Figure 1. Of a total of 11,021 Table 3 lists the results of the Cox proportional hazards regression analysis to identify the predictors of the primary endpoint. In the model using clinical data, the factors of age, NYHA functional class, history of prior admission due to HF, and absence of dyslipidemia could independently predict the endpoint. In the model using echocardiographic data, both the MR grade and the TR grade could predict the endpoint independently of each other and independently of other echocardiographic parameters. The hazard ratios per 1-grade increase in MR and TR were 4.0 (95% CI: 2.3-7.0) and 1.8 (95% CI: 1.1-2.9), respectively. On Kaplan-Meyer analysis, patients with significant functional MR had poorer prognosis than patients without significant functional MR: the event-free rate was 39% with the standard error of 11%, vs. 94% with the standard error of 2%, respectively, at mean follow-up of 24 months (log rank P<0.0001; Figure 4A). As well, the patients with significant functional TR had poorer prognosis than patients without significant functional TR: the event-free significant TR, respectively. In contrast, of the patients with AF duration >10 years, approximately one-fourth (28% and 25%) had significant MR and significant TR, respectively. During the follow-up period of 24±17 months (median, 32 months; range, 0.3-54 months), 35 patients (12%) met the primary endpoint, consisting of 5 cardiac deaths, 23 hospitalizations due to ADHF, and 7 mitral valve and tricuspid valve repairs. 
In contrast, only 7 patients (2.3%) had cerebral infarction, and only 4 (1.3%) had major hemorrhagic events, including 3 intracranial hemorrhages and 1 traumatic intramuscular hematoma at the shoulder. follow-up of 2 years. Annular dilatation in lone AF patients does not usually cause functional MR, but functional TR is relatively easily induced. 25, 26 In contrast, some other studies have shown that functional MR can occur in patients with AF and an enlarged LA, despite having preserved LV systolic function; this MR is known as atrial functional MR. 10-13,15,16 In our previous study using 3-D transesophageal echocardiography (TEE), AF patients with atrial functional MR had dilated LA, dilated mitral annulus, flattened anterior mitral leaflet along the mitral annular plane, and posterior mitral leaflet bending toward the LV cavity, traditionally referred as the "hamstringing" phenomenon of the posterior mitral leaflet. 13 Utsunomiya et al also used 3-D TEE and showed that AF patients with atrial functional TR had dilated RA and tricuspid annulus without valvular tenting. 14 The present study has shown that the prevalence of atrial functional MR and TR depends on AF duration, which may relate to the degree of LA and RA dilatation. The differences in the prevalence of atrial functional MR and TR between the previous studies and the present study might result from differences in AF duration between the various cohorts. We suppose that the prevalence of functional MR and TR is growing due to the increased number of senior patients with longstanding AF in today's aging population. 1-3 According to some previous studies, however, regurgitation-induced secondary ventricular dilatation or the lack of leaflet remodeling can be added to other known factors (i.e., AF duration, atrial dilatation, and annular dilatation) as possible causes of the occurrence or the worsening of atrial functional MR and TR. 13, 27 The exact determinants of atrial functional MR and TR remain uncertain, therefore further studies are needed to address this issue. AF can lead to HF with preserved LVEF, and their association brings a poor prognosis. 7 In fact, cardiac death is more frequent than stroke-related death in AF patients, and a substantial proportion of the cardiac death results from HF. 4-6 Consequently, interventions beyond antico-rate was 69% with the standard error of 8% vs. 93% with the standard error of 2%, respectively, at mean follow-up of 24 months (log rank P<0.0001; Figure 4B). We excluded mitral and tricuspid valve repairs from the endpoint to avoid possible intentional bias in surgical indication, but prognosis was still poorer for patients with significant functional MR than without significant functional MR: the event-free rate was 53% with the standard error of 13% vs. 95% with the standard error of 2%, respectively, at mean follow-up of 24 months (log rank P<0.0001; Figure 4C). As well, prognosis was poorer for the patients with significant functional TR than without significant functional TR, even after the exclusion of surgical valve repairs from the endpoint: the event-free rate was 74% with the standard error of 8% vs. 95% with the standard error of 2%, respectively, at mean follow-up of 24 months (log rank P<0.0001; Figure 4D). In addition, the patients with both significant functional MR and TR had the poorest prognosis, with an event-free rate of 21% and a standard error of 13% at mean follow-up of 24 months (Figure 5A). 
Even after we excluded the surgical repairs from the endpoint, the event-free rate at mean follow-up of 24 months was still low in patients with both significant MR and significant TR (27% with the standard error of 16%; Figure 5B). Discussion The present study has described the prevalence and prognostic significance of functional MR and TR in AF patients despite having preserved LVEF. Overall, (1) functional MR and TR were rarely seen in patients with AF duration ≤1 year; (2) functional MR and TR were seen in one-fourth of patients with longstanding AF >10 years; (3) both MR and TR were independent predictors of the primary endpoint defined as cardiac death, admission due to worsening HF, or surgical mitral and tricuspid repair; and (4) the concomitance of MR and TR carried the worst prognosis, with an event-free rate of only 21% at the mean Figure 5. Kaplan-Meier event-free rates of (A) cardiac death, hospitalization due to worsening heart failure, or mitral valve and/ or tricuspid valve repair or (B) cardiac death, hospitalization due to worsening heart failure (i.e., excluding valve repair) vs. the combinations of significant (i.e., moderate or greater) mitral and tricuspid regurgitations (MR and TR). Functional MR and TR in AF With Preserved EF termed "atrial functional MR and TR", is associated with poor prognosis in AF patients with preserved LVEF and may have to be treated more intensively. Disclosures The authors declare no conflicts of interest. agulation are needed to further reduce mortality in AF. The present study confirmed the high probability of future HF events in AF patients who had both significant functional MR and significant functional TR, thereby suggesting that more intensive therapy may be required for these regurgitations to prevent HF events in AF patients. Gertz et al showed that atrial functional MR improved if the sinus rhythm was restored by AF ablation. 11 The initial LA size, however, was not particularly large in their patients who had significant atrial functional MR: the LA dimension was 4.4±0.6 cm. Our previous study showed that surgical mitral annuloplasty with concomitant tricuspid annuloplasty may be an effective treatment strategy for reducing MR, HF symptoms, LA size, and HF admissions in patients with a significant degree of atrial functional MR with more dilated LA: the initial LA dimension was 5.2±0.9 cm. 12 Surgical intervention for atrial functional MR and TR could be the best option for HF issues in AF patients with dilated atriums with preserved LVEF. Study Limitations The present study has several limitations. It was a retrospective study based on the medical records of Osaka City General Hospital, and the echocardiography laboratory database. Consequently, MR and TR were graded using various qualification or quantification methods selected at the sonographer's discretion, therefore grading might not be consistent. In our opinion, however, this may be a strength of the present study, rather than a limitation, because these data represent real-world clinical data. In the clinical setting, discrepancy in the grading of MR and TR in AF patients can easily arise with method-by-method and beat-by-beat assessments. Accordingly, we usually use a multiparametric approach with multiple qualification and quantification methods for assessing MR and TR in AF patients, and we finally grade the regurgitations from a comprehensive standpoint under the careful consideration of the advantages and limitations of each method, as done in the present study. 
We avoided inaccuracy as much as possible by carefully selecting 1 beat occurring after 2 serial beats with average RR interval for the measurement of the systolic parameters. 20 In addition, we used 3 levels of borderline grades (none-mild, mild-moderate, and moderatesevere) when a considerable discrepancy existed in the grading between the multiple methods or between the different beats. A further challenge is obtaining an accurate evaluation of LV diastolic dysfunction as a possible cause of HF with preserved LVEF in AF patients. 28 We did not assess LV diastolic parameters from the calculation of the average at multiple beats; instead, we used the measurements at a single representative beat, although the single-beat measurement is not recommended in the assessment of diastolic parameters in AF patients. 29 Consequently, the present single-beat measurements might disturb an accurate evaluation of the relationship between diastolic parameters and HF events. Future prospective studies using the recommended methods for echocardiographic measurements are needed to address this issue. Conclusions Significant functional MR and TR were seen in a substantial proportion of patients with longstanding AF despite having preserved LVEF. The combination of these regurgitations,
Problem Formulation and Fairness Formulating data science problems is an uncertain and difficult process. It requires various forms of discretionary work to translate high-level objectives or strategic goals into tractable problems, necessitating, among other things, the identification of appropriate target variables and proxies. While these choices are rarely self-evident, normative assessments of data science projects often take them for granted, even though different translations can raise profoundly different ethical concerns. Whether we consider a data science project fair often has as much to do with the formulation of the problem as any property of the resulting model. Building on six months of ethnographic fieldwork with a corporate data science team---and channeling ideas from sociology and history of science, critical data studies, and early writing on knowledge discovery in databases---we describe the complex set of actors and activities involved in problem formulation. Our research demonstrates that the specification and operationalization of the problem are always negotiated and elastic, and rarely worked out with explicit normative considerations in mind. In so doing, we show that careful accounts of everyday data science work can help us better understand how and why data science problems are posed in certain ways---and why specific formulations prevail in practice, even in the face of what might seem like normatively preferable alternatives. We conclude by discussing the implications of our findings, arguing that effective normative interventions will require attending to the practical work of problem formulation. INTRODUCTION Undertaking a data science 1 project involves a series of difficult translations. As Provost and Fawcett point out, "[b]usiness problems rarely are classification problems, or regression problems or clustering problems" [38:293]. They must be made into questions that data science can answer. Practitioners are frequently charged with turning amorphous goals into well-specified problems-that is, problems faithful to the original business objectives, but also problems that can be addressed by predicting the value of a variable. Often, the outcome or quality that practitioners want to predict-the 'target variable'-is not one that has been well observed or measured in the past. In such cases, practitioners look to other variables that can act as suitable stand-ins-'proxies'. This process is challenging and far from linear. As Hand argues, "establishing the mapping from the client's domain to a statistical question is one of the most difficult parts of statistical analysis," [22:317] and data scientists frequently devise ways of providing answers to problems that differ from those that seemed to motivate the analysis. In most normative assessments of data science, this work of translation drops out entirely, treating the critical task as one of interrogating properties of the resulting model. However, ethical concerns can extend to the formulation of the problem that a model aims to address, not merely to whether the model exhibits discriminatory effects. To aid in hiring decisions, for example, machine learning needs to predict a specific outcome or quality of interest. One might want to use machine learning to find "good" employees to hire, but the meaning of "good" is not self-evident. Machine learning requires specific and explicit definitions, demanding that those definitions refer to something measurable. 
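To make the stakes of this choice concrete, consider a minimal sketch; all data, column names, and models here are hypothetical illustrations, not drawn from any system discussed in this paper. The same applicant features, paired with two different operationalizations of a "good" employee, define two different prediction problems and yield two different models:

```python
# Hypothetical sketch: the "same" hiring problem under two different
# target variables. Nothing here comes from a real employer's system.
import pandas as pd
from sklearn.linear_model import LogisticRegression

applicants = pd.DataFrame({
    "years_experience": [1, 3, 5, 2, 7, 4],
    "referral":         [0, 1, 0, 1, 1, 0],
    # Two candidate operationalizations of a "good" employee:
    "high_sales":       [0, 1, 1, 0, 1, 0],  # top-quartile sales figures
    "high_peer_rating": [1, 1, 0, 0, 1, 1],  # rated "personable" by peers
})
features = applicants[["years_experience", "referral"]]

# Swapping the label column is all it takes to pose a different problem;
# the two models can rank the same applicants very differently.
model_sales = LogisticRegression().fit(features, applicants["high_sales"])
model_rating = LogisticRegression().fit(features, applicants["high_peer_rating"])
```

Everything that follows in such a project - performance metrics, error rates, disparities - is relative to whichever label column was chosen.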
While an employer might want to find personable applicants to join its sales staff, such a quality can be difficult to specify or measure. What counts as personable? And how would employers measure it? Given the challenge of answering these questions, employers might favor a definition focused on sales figures, which they may find easier to monitor. In other words, they might define a "good" employee as the person with the highest predicted sales figures. In so doing, the problem of hiring is formulated as one of predicting applicants' sales figures, not simply identifying "good" employees. As Barocas and Selbst [3] demonstrate, choosing among competing target variables can affect whether a model used in hiring decisions ultimately exhibits a disparate impact. There are three reasons why this might happen. First, the target variable might be correlated with protected characteristics. In other words, an employer might focus on a quality that is distributed unevenly across the population. This alone would not constitute illegal discrimination, as the quality upon which the employer hinges its hiring decisions could be rational and defensible. But, the employer could just as well choose a target variable that is a purposeful proxy for race, gender, or other protected characteristics. This would amount to a form of disparate treatment, but one that might be difficult to establish if the decision rests on a seemingly neutral target variable. The employer could also choose a target variable that seems to serve its rational business interests but happens to generate an avoidable disparate impact-for instance, the employer could choose a different target variable that serves its business objective at least as well as the original choice while also reducing the disparate impact. Second, the chosen target variable might be measured less accurately for certain groups. For example, arrests are often used as a proxy for crime in applications of machine learning to policing and criminal justice, even though arrests are a racially biased representation of the true incidence of crime [28]. In treating arrests as a reliable proxy for crime, the model learns to replicate the biased labels in its predictions. This is a particularly pernicious problem because the labeled examples in the training data serve as ground truth for the model. Specifically, the model will learn to assign labels to cases similar to those that received the label in the training data, whether or not the labels in the training data are accurate. Worse, evaluations of the model will likely rely on test data that were labeled using the same process, resulting in misleading reports about the model's real-world performance: these metrics would reflect the model's ability to predict the label, not the true outcome. Indeed, when the training and test data have been mislabeled in the same way, there is simply no way to know when the model is making mistakes. Choosing a target variable is therefore often a choice between outcomes of interest that are labeled more or less accurately. When these outcomes are systematically mismeasured by race, gender, or some other protected characteristic, a model designed to predict them will invariably exhibit a discriminatory bias that does not show up in performance metrics. Finally, different target variables might be more difficult to predict than others depending on the available training data and features. 
If the ability to predict the target variable varies by population, then the model might subject certain groups to greater errors than others. Across all three cases, we find that whether a model ultimately violates a specific notion of fairness is often contingent on what the model is designed to predict, which suggests that we should be paying far greater attention to the choice of the target variable, both because it can be a source of unfairness and a mechanism to avoid unfairness. The non-obvious origins of obvious problems. This might not be surprising because some problem formulations may strike us as obviously unfair. Consider the case of 'financial-aid leveraging' in college admissions-the process by which universities calculate the best possible return for financial aid packages: the brightest students for the least amount of financial aid. To achieve this bargain, the university must predict how much each student is willing to pay to attend the university and how much of a discount would sway an applicant from competitors. In economic terms, 'financial-aid leveraging' calculates each applicant's responsiveness to price, which enables the university to make tailored offers that maximize the likely impact of financial aid on individual enrollment decisions. As Quirk [39] explains: "Take a $20,000 scholarship-the full tuition for a needy student at some schools. Break it into four scholarships each for wealthier students who would probably go elsewhere without the discount but will pay the outstanding tuition if they can be lured to your school. Over four years the school will reap an extra $240,000, which can be used to buy more rich students-or gifted students who will improve the school's profile and thus its desirability and revenue." Such strategies are in effect in schools throughout the United States, and the impact has been an increase in support for wealthier applicants at the expense of their equally qualified, but poorer peers [42,43]. One might, therefore, conclude, as Danielson [11:44] does, that "data mining technology increasingly structures recruiting to many U.S. colleges and universities," and that the technology poses a threat to such important values as equality and meritocracy. Alternatively, one could find, like Cook [9] in a similar thought experiment, that "[t]he results would have been different if the goal were to find the most diverse student population that achieved a certain graduation rate after five years. In this case, the process was flawed fundamentally and ethically from the beginning." For Cook, agency and ethics are front-loaded: a poorly formed question returns undesirable, if correct, answers. Data science might be the enabling device, but the ethical issue precedes the analysis and implementation. The objective was suspect from the start. For Danielson, however, certain ethics seem to flow from data mining itself. Data science is not merely the enabling device, but the impetus for posing certain questions. Its introduction affords new, and perhaps objectionable, ways of devising admissions strategies. Though they are quite different, these positions are not necessarily incompatible: data science might invite certain kinds of questions, and 'financial-aid leveraging' could be one such example. One might say that data science promotes the formulation of questions that would be better left unasked.
But, this is a strangely unhelpful synthesis: while according agency to the person or people who might formulate the problem, it simultaneously imparts overwhelming influence to the affordances of data science. The effort of getting the question to work as a data science problem drops out entirely, even though this process is where the question actually and ultimately takes shape. The issues of genuine concern-how universities arrive at a workable notion of student quality, how they decide on optimizing for competing variables (student quality, financial burden, diversity, etc.), how the results are put to work in one of many possible ways-are left largely out of view. The indeterminacy of the process, where many of the ethical issues are actually resolved, disappears. Problem formulation in practice. While a focus on the work of problem formulation in real-world applied settings has the potential to make visible the plethora of actors and activities involved in data science work, it has not been the focus of much empirical inquiry to date. We still know very little about the everyday practice of problem formulation. In this paper, we attempt to fill this gap. How and why are specific questions posed? What challenges arise and how are they resolved in everyday practice? How do actors' choices and decisions shape data science problem formulations? Answers to these questions, we argue, can help us to better understand data science as a practice, but also the origin of the qualities of a data science project that raise normative concerns. As researchers work to unpack the normative values at stake in the uses of data science, we offer an ethnographic account of a special financing project for auto lending to make visible the work of problem formulation in applied contexts. In so doing, we show how to trace the ethical implications of these systems back to the everyday challenges and routine negotiations of data science. In the following sections, we first situate the paper in a longer history attending to the practical dimensions of data science, specifically the task of problem formulation. We then describe our research site and methodology, before moving to the empirical case-study. We conclude by discussing the implications of our findings, positioning the practical work of problem formulation as an important site for normative investigation and intervention. BACKGROUND Our understanding of the role of problem formulation in data science work draws from a long line of research within the history and sociology of science that describes how scientific methods are not just tools for answering questions, but in fact influence the kind of questions we ask and the ways in which we define and measure phenomena [8,25,27,37]. Through different methods, scientists "mathematize" [29,30] the world in specific ways, producing representations that are both contingent (i.e., they change with a change in methods) and real (i.e., they provide actionable ways to analyze the world). Our practical understanding of a given phenomenon is contingent on the data we choose to represent and measure it with. The emerging field of critical data studies has brought similar insights to data science: data scientists do not just apply algorithms to data, but work with algorithms and data-iteratively and often painstakingly-aligning the two together in meaningful ways.
Data science work, Passi and Jackson argue [35:2439], "is not merely a collection of formal and mechanical rules, but a situated and discretionary process requiring data analysts to continuously straddle the competing demands of formal abstraction and empirical contingency." Algorithmic results embody specific forms of "data vision"-rule-based, as opposed to rule-bound, applications of algorithms, necessitating judgment-driven work "to apply and improvise around established methods and tools in the wake of empirical diversity" [35:2436]. Data science requires "thoughtful measurement, […] careful research design, […and] creative deployment of statistical techniques" [21:80] to identify units of measurement, clean and process data, construct working models, and interpret quantified results [4,5,19,32,34-36]. Subjective decision making is necessary throughout the process. Each of these practical choices can have profound ethical implications [10,13,24,31,40], of which data scientists are sometimes well aware. Their everyday work is shot through with "careful thinking and critical reflection" [1:23]. Neff et al. [33], through ethnographic work on academic data science research, show how data scientists often "acknowledge their interpretive contributions" and "use data to surface and negotiate social values." Data, the authors argue, are the starting, and not the end, points in data science. In academic and research settings-the contexts that inform most of our current understanding of data science-the work of data science comes across mainly as the work of data scientists. Data science projects in applied corporate settings, however, are inherently collaborative endeavors-a world as much of discretion, collaboration, and aspiration as of data, numbers, and models. In such projects, several actors work together to not only make sense of data and algorithmic results but also to negotiate and resolve practical problems. Passi and Jackson [36:18-19], through an ethnography of a corporate data science team, describe how specific issues with data, intuition, metrics, and models pose challenges for corporate data science work, and how organizational actors collaborate, through specific strategies, to manage these problems "in the service of imperfect but ultimately pragmatic and workable forms of analysis." As the authors conclude, "[p]roject managers, product designers, and business analysts are as much a part of applied real-world corporate data science as are data scientists" [36:16]. These strands of research call attention to the role of the work of problem formulation within data science. The relationship between formulated problems and the data we choose to address them is not a one-way street-data are not merely things to answer questions with. Instead, the very formulations of data-driven problems (i.e., the kind of questions we can and do ask) are determined by contingent aspects such as what data are available, what data we consider relevant to a phenomenon, and what method we choose to process them. Problem formulation is as much an outcome of our data and methods as of our goals and objectives. Indeed, defining the data science problem is not only about making the data science process fit specific and specifiable objectives but also making the objectives fit the data science process. Data miners have long grappled with the role of human judgment and discretion in their practice.
The field of Knowledge Discovery in Databases (KDD)-an important predecessor to what we now call data science-emerged to address how choices throughout the data mining process could be formalized in so-called process models. Knowledge Discovery in Databases The iterative process of applied data mining. While KDD is commonly associated with data mining and machine learning, the history of the field has less to do with innovations around these techniques than with the process that surrounds their use. Dissatisfied with a lack of applied research in artificial intelligence, scholars and practitioners founded the new sub-field to draw together experts in computer science, statistics, and data management who were interested and proficient in the practical applications of machine learning [see: 18]. The significance of this move owed to a shift in the field's professional focus, not to a change in the substance of its computational techniques. When KDD established itself as an independent field, 2 it also instituted a method for applying machine learning to real-world problems-the KDD process, consisting of a set of computational techniques and specific procedures through which questions are transformed into tractable data mining problems [15,16]. Although the terms KDD and data mining are now used interchangeably-if they are used at all-the original difference between the two is telling. While data mining referred exclusively to the application of machine learning algorithms, KDD referred to the overall process of reworking questions into data-driven problems, collecting and preparing relevant data, subjecting data to analysis, and interpreting and implementing results. The canon of KDD devoted extensive attention not only to the range of problems that lend themselves to machine learning but also to the multi-step process by which these problems can be made into practicable instances of machine learning. In their seminal paper, Fayyad, Piatetsky-Shapiro, and Smyth, for example, insist on the obvious applicability of data mining while paradoxically attempting to explain and advocate how to apply it in practice-that is, how to make it applicable [14]. KDD covered more than just a set of computational techniques; it amounted to a method for innovating and executing new applications. The focus on process led to the development of a series of process models-formal attempts to explicate how one progresses through a data mining project, breaking the process into discrete steps [26]. The Cross Industry Standard Process for Data Mining (CRISP-DM) [7], the most widely adopted model, seems to simultaneously describe and prescribe the relevant steps in a project's lifecycle. Such an approach grows directly out of the earliest KDD writing. Fayyad, Piatetsky-Shapiro, and Smyth [14] make a point of saying that "data mining is a legitimate activity as long as one understands how to do it," suggesting that there is a particular way to go about mining data to ensure appropriate results. Indeed, the main impetus for developing process models was fear of mistakes, missteps, and misapplications, rather than a simple desire to explicate what it is that data miners do. As Kurgan and Musilek [26] explain, the push "to formally structure [data mining] as a process results from an observation of problems associated with a blind application of [data mining] methods to input data."
Notably, CRISP-DM, like the earlier models that preceded it in the academic literature [14-16], emphasized the iterative nature of the process and the need to move back and forth between steps. The attention to feedback loops and the overall dynamism of the process were made especially evident in the widely reproduced visual rendering of the process that adopted a circular form to stress cyclicality [26] (see the sketch at the end of this section). Negotiated, not faithful, translations. Business understanding, the first step in the CRISP-DM model, is perhaps the most crucial in a data mining project because it involves the translation of an amorphous problem (a high-level objective or a business goal) into a question amenable to data mining. CRISP-DM describes this step as the process of "understanding the project objectives and requirements from a business perspective [and] then converting this knowledge into a data mining problem definition" [7:10]. This process of 'conversion,' however, is underspecified in the extreme. Translating complex objectives into a data mining problem is not self-evident: "a large portion of the application effort can go into properly formulating the problem (asking the right question) rather than into optimizing the algorithmic details of a particular data-mining method" [14:46]. Indeed, the open-endedness that characterizes such forms of translation work is often described as the 'art' of data mining [12]. Recourse to such terms reveals the degree to which the creativity of the translation process resists its own translation into still more specific parts and processes (i.e., it is artistic only insofar as it resists formalization). But it also highlights the importance of this initial task in determining the very possibility of mining data for some purpose. CRISP-DM and other practical guidance for data miners or data scientists [see: 6,26] tend to describe problem formulation mainly as part of a project's first phase-an initial occasion for frank conversations between the managers who set strategic business goals, the technologists that manage an organization's data, and the analysts that ultimately work on data. Predictably, those involved in data science work face the difficult challenge of faithful translation-finding the correct mapping between, say, corporate goals, organizational data, and computational problems. Practitioners themselves have long recognized that even when project members reach consensus in formulating the problem, it is a negotiated translation-contingent on the discretionary judgments of various actors and further impacted by the choice of methods, instruments, and data. These insights speak to the conditions that motivate data science projects in a way that escapes the kind of technological determinism or data imperative that pervades the current discourses-as if the kinds of questions that data science can answer are always already evident. Getting the automation of machine learning to return the desired results paradoxically involves an enormous amount of manual work and subjective judgment [3]. The work of problem formulation-of iteratively translating between strategic goals and tractable problems-is anything but self-evident, implicated with several practical and organizational aspects of data science work. As Hand points out, "[t]extbook descriptions of data mining tools […] and articles extolling the potential gains to be achieved by applying data mining techniques gloss over [these] difficulties" [23:8].
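For reference, a minimal sketch of the circular structure mentioned above. The phase names are CRISP-DM's own; representing the canonical feedback arrows as a simple mapping is our illustrative simplification, not part of the standard:

```python
# The six CRISP-DM phases, in their nominal order.
CRISP_DM_PHASES = [
    "business understanding",
    "data understanding",
    "data preparation",
    "modeling",
    "evaluation",
    "deployment",
]

# The feedback arrows emphasized in the canonical diagram: projects loop
# back rather than proceeding linearly, e.g., evaluation can send a
# project all the way back to business understanding (reformulation).
FEEDBACK = {
    "data understanding": "business understanding",
    "modeling": "data preparation",
    "evaluation": "business understanding",
}
```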
In the following two sections, we look at the work of data science that is traditionally glossed over. We first describe our research site and methods before moving on to the empirical case-study through which we show how the initial problem formulation comprises a series of elastic translations-a set of placeholder articulations that is susceptible to change as the project moves through its many iterations. RESEARCH SITE AND METHODS This paper builds on six months of ethnographic fieldwork with DataVector, 3 a multi-billion-dollar US-based e-commerce and new media organization. Established in the 1990s, DataVector owns several companies in domains such as health and automotive. Many of these are multi-million-dollar companies with several thousand clients each. DataVector has a core data science team based on the west coast of the United States that works with companies across different domains. There are multiple teams of data engineers, software developers, and business analysts, both at DataVector and its subsidiaries. One of us worked as a data scientist with the organization's core data science team between June and November 2017, serving as the lead scientist on two corporate data science projects (not reported in this paper) and participating in many others. During ethnographic research, the data science team had eight to eleven members (including one of the authors). The team is headed by Cliff-DataVector's Director of Data Science with 30+ years of industry experience in major technology firms. Cliff and the team report directly to Bill-DataVector's Chief Technology Officer with 20+ years of experience in the technology industry. During the six-month period, one of us conducted 50+ interviews with data scientists, product managers, business analysts, project managers, and company executives and produced 400+ pages of fieldwork notes and 100+ photographs. Interviews and fieldwork data were transcribed and coded according to the principles of grounded-theory analysis [20,41], inductively analyzing data through several rounds of qualitative analysis. In our analysis, we coded the data in two rounds, focusing on the identification of key categories, themes, and topics as well as the relation between them in the data. While we focus on a specific corporate data science project in this paper, we observed similar dynamics across several other projects. We chose this case because the work of problem formulation was particularly salient in this project. CASE-STUDY: SPECIAL FINANCING CarCorp, a DataVector subsidiary, collects special financing data: information on people who need car financing but have either low/bad credit scores (between 300-600) or limited credit histories. The company's clientele mainly consists of auto dealers who pay to receive this data (called lead data) that include information such as name, address, mortgage, and employment details (sometimes even the make of the desired automobile). The company collects lead data primarily online: people who need special financing submit their data so that interested dealers can contact them. People requiring special financing face several challenges ranging from the lack of knowledge about available credit sources to difficulties in negotiating interest rates. As liaisons between borrowers and lenders, companies such as CarCorp and its affiliates act as important, sometimes necessary, intermediaries for people requiring special financing. CarCorp serves several dealers across the country. 4
Few dealers collect their own lead data as the money, effort, and technical skills required to do so are enormous. This is a key reason why dealers pay companies such as CarCorp to buy lead data. CarCorp's technology development and project manager Brian wanted to leverage data science to "improve the quality" of leads. Improving lead quality, Brian argued, would ensure that existing dealers do not churn (i.e., they continue to give their business to CarCorp). Brian (project manager): "The main goal [is] to improve the quality of our leads for our customers. We want to give actionable leads. […] That is what helps us make money, makes customers continue to use our services" (Interview, November 1, 2017). Initial discussions between the business and data science teams revolved around two themes: (a) defining lead "quality" and (b) finding ways to measure it. Defining lead quality was not straightforward. There were "many stakeholders with different opinions about leads" (ibid.). Some described lead quality as a function of a lead's salary data, while some argued that a lead was good if the dealer had the lead's desired car in their inventory. Everyone on the business team, however, agreed on one thing-as CarCorp's business analyst Ron put it: a "good" lead provided business to the dealer. Ron (business analyst): "The business team has been talking about [lead quality] for a long time. […] We have narrowed down the lead quality problem to how likely is someone to purchase or to be able to finance a car when you send them to that dealer?" (Interview, November 8, 2017). Lead "quality" was equated with lead "financeability." It was, however, difficult to ascertain financeability. Different dealers had different special financing approval processes. A lead financeable for one dealer can be, for various reasons, unfinanceable for another. The goal thus was to determine dealer-specific financeability (i.e., predicting which dealer was most likely to finance a lead). The teams settled on the following definition of "quality": a good lead for a dealer was a lead financeable for that dealer. This, in turn, framed the problem as one of matching leads to dealers that were most likely to finance them. CarCorp had a large amount of historical lead data. In 2017 alone, the company had processed close to two million leads. CarCorp, however, had relatively little data on which leads had been approved for special financing (let alone data on why a lead was approved). The business team asked the data science team to contact data engineers to identify and assess the relevant data sources. The data science team, after investigating the data sources, however, declared that there wasn't enough data on dealer decisions-without adequate data, they argued, it was impossible to match leads with dealers. Few dealers in CarCorp's network shared their approval data with the company. The scarcity of this data stemmed from the challenge of creating business incentives for dealers to provide up-to-date feedback. The incentives for dealers to share information about their approval process with CarCorp were too attenuated. While the data science team instructed the business team to invest in the collection of up-to-date data on dealer decisions, further discussions ensued between the two teams to find alternate ways to predict dealer-specific financeability using the data that happened to be available already.
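Concretely, the matching formulation would have required something like the following sketch - entirely hypothetical, since CarCorp lacked precisely the per-dealer approval labels that it presumes: one financeability model per dealer, trained on that dealer's past decisions, with each lead routed to the dealer most likely to finance it.

```python
# Hypothetical sketch of the matching formulation. All names and data
# are illustrative; CarCorp's actual pipeline is not described here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Per-dealer historical data: lead features and approve/deny labels.
dealers = {
    "dealer_a": (rng.normal(size=(200, 4)), rng.integers(0, 2, 200)),
    "dealer_b": (rng.normal(size=(200, 4)), rng.integers(0, 2, 200)),
}
models = {
    name: RandomForestClassifier(random_state=0).fit(X, y)
    for name, (X, y) in dealers.items()
}

def best_dealer(lead_features):
    """Return the dealer most likely to finance this lead."""
    scores = {
        name: model.predict_proba(lead_features.reshape(1, -1))[0, 1]
        for name, model in models.items()
    }
    return max(scores, key=scores.get)

print(best_dealer(rng.normal(size=4)))
```

The sketch also makes the data dependency plain: without approval labels from each dealer, there is nothing for the per-dealer models to learn from.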
In debates over the utility of the available data, business analysts and data scientists, however, voiced several concerns ranging from inconsistency (e.g., discrepancies in data values) to unreliability (e.g., distrust of data sources). Business analyst Ron, for instance, was wary of the multiple ways in which the data was collected and generated. Only a few CarCorp affiliates augmented lead data with additional information such as credit scores. Dealers run background checks on leads (with their consent) as part of the financing procedure and in the process get a lead's exact credit score from credit bureaus such as Equifax. The Fair Credit Reporting Act (FCRA) prohibits CarCorp from getting a lead's exact credit score from credit bureaus without a lead's explicit consent. Leads have no reason to authorize CarCorp to retrieve their credit data because the company does not make lending decisions; it only collects information about a lead's interest in special financing. CarCorp had to rely on either leads' self-reported credit scores that were collected by a few affiliates or on credit scores provided as part of lead data bought from third-party lending agencies. 5 Business affiliates and third-party agencies provide credit scores in the form of an approximate range (e.g., 476-525). CarCorp had hoped that this data would help them to, for example, differentiate between a subset of leads that appeared identical but exhibited different financeability. It is not surprising that a lead with a higher credit score than another is more likely to secure special financing (even when the leads are otherwise identical). Credit score data is a significant factor in special financing approval processes. CarCorp had this data for ~10% of their leads (~100,000). While business analysts found it challenging to predict credit score ranges from the other features using traditional statistical analyses, adding credit score ranges as an additional feature did improve the company's ability to predict lead financeability. The business team wondered if it was possible to use data science to predict credit score ranges for the remaining 90% of the leads (i.e., perhaps machine learning could work where traditional analysis had failed). If successful, credit scores could help ascertain lead financeability-a financeable lead was a lead with a high credit score. The data science team's attempt to assess if they could predict credit scores, however, faced a practical challenge. As mentioned above, credit score data received from affiliates took the form of ranges (e.g., 476-525), not discrete numbers. Different affiliates marked ranges differently. For example, one affiliate may categorize ranges as 476-525, 526-575, etc., while another as 451-500, 501-550, etc. It was not possible to directly use the ranges as class labels for training as the ranges overlapped. The data science team first needed to reconcile different ranges. As data scientist Alex started working to make the ranges consistent, business analysts came up with a way to make this process easier. Pre-existing market analysis (and, to some extent, word-of-mouth business wisdom) indicated that having a credit score higher than 500 greatly increased a lead's likelihood of obtaining special financing approval. This piece of information had a significant impact on the project. With 500 as the crucial threshold, only two credit score ranges were now significant: below-500 and above-500.
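A minimal sketch of the relabeling this threshold enabled. The range strings follow the format described above; how ranges that straddle the threshold (or a score of exactly 500) were handled is our assumption, not something documented in the project:

```python
# Hypothetical sketch: overlapping, affiliate-specific credit score
# ranges no longer need to be reconciled, only bucketed as entirely
# below or entirely above the 500 threshold.
def to_binary_class(score_range: str) -> str:
    """Map a range like '476-525' to 'below-500', 'above-500', or
    'ambiguous' when the range straddles the threshold. The boundary
    handling here is an illustrative assumption."""
    low, high = (int(x) for x in score_range.split("-"))
    if high < 500:
        return "below-500"
    if low >= 500:
        return "above-500"
    return "ambiguous"  # e.g., 476-525: contains leads on both sides

for r in ["376-425", "401-450", "501-550", "476-525"]:
    print(r, "->", to_binary_class(r))
```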
Alex did not need to figure out ways to reconcile ranges such as 376-425 and 401-450 but could bundle them in the below-500 category. The above-500 credit score range could act as the measure of financeability-a financeable lead was a lead with a credit score above 500. The matching problem (which leads are likely to get financed by a dealer) was now a classification task (which leads have a credit score of over 500). Decreasing the number of classes to two helped attenuate the difficulty of reconciling different ranges but did not help to circumvent it. Alex (data scientist): "If the credit score is below 500, the dealer will simply kill the deal." Leads in the 476-525 range were an issue because this range contained not only below-500 leads, but also above-500 leads. Making mistakes close to the decision boundary is especially consequential for special financing, where you want to find people just above the threshold. Alex tried many ways to segregate the leads in this range, but the models, according to him, didn't work. Their accuracy, at best, was slightly better than a coin flip. Alex attributed the model's bad performance not only to the presence of leads in the 476-525 range, but also to the limited number of available features (i.e., the data did not present sufficient information for the model to meaningfully differentiate between leads). While the in-house lead dataset was a good starting point, the data scientists knew that accurately classifying leads would require not only creative ways to work with available data, but also a wide variety of data. They had already been scouting for external datasets to augment the in-house lead dataset. Director of data science Cliff had tasked data science project manager Marcus with the work of finding third-party datasets that could help with classification. Their approach was first to use freely available data to see "how far we get" before buying paid data from companies such as Experian. 6 Free, yet reliable, datasets, however, were hard to come by. Marcus found only a few datasets-Internal Revenue Service (IRS) zip-code-level data on features such as income range, tax returns, and house affordability (i.e., how much an owner might be able to pay for a property). Data scientist Alex tried each dataset but declared that none improved model performance. Members of the data science team wondered if it was worth investing in high-quality paid datasets. Alex: "I used all the data, but the model does not converge." Marcus: "What about Experian data? We can get it if you think it will help." Without access to the data, Alex argued, it was not possible to clearly know its usefulness. If the data was not going to be helpful, however, it made no sense to buy it-the decision needed to be made based on the available description on Experian's website. Even after analyzing the dataset's description, Alex was not convinced that the data would increase model performance, and the team eventually ended up not investing in it. Two months later, the project was halted in the absence of actionable progress. Different actors justified the project's seeming failure in different ways. Data scientist Alex felt that the data was the culprit. Business analyst Ron felt that perhaps the business team unreasonably expected "magic" from data science. For him, the culprit was the nature of the problem itself: Ron (business analyst): "[It is a] selection-bias problem. We are not dealing with a random sample of the population. […] These individuals [...]
why are they submitting a lead with us? Because it was not easy for them to get financed. By definition, our population is people with at least not great credit, and usually bad credit. Why do you have bad credit? There is like one reason why you have good credit. There are a thousand reasons why you have […] bad credit. [...] If you show me someone with good credit, I will show you they pay bills on time, have a steady income, etc. If you show me someone with bad credit, they can have a gambling problem, they can be divorced, they could […] prove income now, but maybe it has been unstable in the past, and we have no way of knowing that. There are literally thousands of reasons that aren't that capture-able" (Interview, November 8, 2017). Business analyst Ron described the failure not in terms of the initially articulated business goal but in terms of the project's current data science problem formulation-not the difficulty of defining the quality of a lead, but the challenge of classifying leads with scores in a specific part of the credit score spectrum. He believed it was possible to classify people with high/low credit scores on the full 300-850 credit score spectrum (e.g., differentiating between a person with a 750 score and a person with a 450 score). He argued, however, that CarCorp's focus on the special financing population meant that the goal was not to classify high/low scores on the full credit spectrum but to demarcate between different kinds of low scores on one side of the spectrum (roughly between 300 and 600). Note how Ron, a business analyst, describes the project's failure in relation to a specific definition of lead "quality"-it was difficult to know which leads were above or below the credit score threshold of 500. The project was halted when developing an accurate model based on this definition proved impossible. The business and data science teams could not figure out any other way to formulate the problem at this stage with the data they had. Through the above description, we see how the data science problem was formulated differently at different points in the project based on two different sets of target variables and their possible proxies. Proxy #1: Dealer Decisions. The business team initially described the project goal as the improvement of lead quality-a formulation of what the business team thought the dealers wanted. Note that this goal was in turn related to the broader objective of minimizing churn rate-a formulation of what CarCorp itself wanted. In this way, the problem specification was just as much about keeping clients as it was about satisfying their business objectives. These high-level goals impacted the actors' initial understanding of the project's goal-the quality of leads was seen in relation to dealers and CarCorp's own success. CarCorp decided that if dealers could finance a lead, it was a good lead. The fact that different dealers had different special financing approval processes further impacted the contingent relationship between quality, dealers, and financeability: if a lead was financeable by a specific dealer, it was a good lead for that dealer. The data science problem, therefore, became the task of matching leads with dealers that were likely to finance them. Proxy #2: Credit Score Ranges.
Data available to support the use of dealer decisions as a proxy, however, were limited. While business analysts did not fully understand how dealers made decisions, they acknowledged, based on market research, the import of credit scores in the special financing approval process-leads with scores higher than 500 were highly likely to get special financing. Credit scores thus became a proxy for a dealer's decision, which was itself a proxy for a lead's financeability, which was, by extension, a proxy for a lead's quality-indeed, a chain of proxies. The data science problem thus became the task of classifying leads into below- and above-500 classes. Problem formulation is a negotiated translation. At face value, the relationship between the project's high-level business goal (improving lead quality) and its two different problem formulations (the two sets of target variables and their proxies) may seem like a one-to-many relation-different translations of, in effect, the same goal. Such an understanding, however, fails to account not only for the amorphous nature of high-level goals (i.e., the difficulty of defining the quality of a lead), but also for the project's iterative and evolving nature (i.e., problem formulations are negotiated, dependent on, for instance, actors' choice of proxy). In our case, actors equated (in order): lead quality with financeability, financeability with dealer decisions, and dealer decisions with credit score ranges. Each of these maneuvers produced different formulations of the objective, in turn impacting actors' articulation and understanding of the project's high-level goal (as seen, for instance, in the way business analyst Ron ultimately accounts for the project's failure). This is not to argue that the high-level goal to improve lead quality, at some point, transformed into a completely different objective. Instead, it shows that the translation between high-level goals and tractable data science problems is not a given but a negotiated outcome-stable yet elastic. Throughout the project, the goal of improving lead quality remains recognizably similar but practically different, evident in the different descriptions, target variables, and proxies for lead quality. Each set of target variables and proxies represents a specific understanding of what a lead's quality is and what it means to improve on it. The quality of a lead is not a preexisting variable waiting to be measured, but an artifact of how our actors define and measure it. The values at stake in problem formulation. Scholars concerned with bias in computer systems have long stressed the need to consider the original objectives or goals that motivate a project, apart from any form of bias that may creep into the system during its development and implementation [17]. On this account, the apparent problem to which data science is a solution determines whether it happens to serve morally defensible ends. Information systems can be no less biased than the objectives they serve. These goals, however, rarely emerge ready-formed or precisely specified. Instead, navigating the vagaries of the data science process requires reconceiving the problem at hand and making it one that data and algorithms can help solve. In our empirical case, we do not observe a data science project working in the service of an established goal, about which there might be some normative debate.
Instead, we find that the normative implications of the project evolve alongside changes in the different problem formulations of lead quality. On the one hand, for the proxy of dealer-decisions, leads are categorized by their dealer-specific financeability-a lead is only sent to the dealer that is likely to finance them. In formulating the problem as a matching task, the company is essentially catering to dealer preferences. This approach will recommend leads to dealers that align with the preferences expressed in dealers' previous decisions. In this case, lead financeability operates on a spectrum. Financeability emerges as a more/less attribute: each lead is financeable, some more than others depending on the dealer. Effectively, each lead has at least a chance of being sent to a dealer (i.e., the dealer with the highest probability of financing a lead above some threshold). 7 On the other hand, for the credit-score proxy, leads are categorized into two classes based on their credit score ranges and only leads with scores greater than 500 are considered financeable. In formulating the problem as the task of classifying leads above or below a score, the company reifies credit scores as the sole marker for financeability. Even if dealers had in the past financed leads with credit scores less than 500, this approach only recommends leads with scores higher than 500, shaping dealers' future financing practices. In this approach, financeability operates as a binary variable: a lead is financeable only if its credit score is higher than 500. Consequently, leads in the below-500 category may never see the light of day, discounted entirely because the company believes that these leads are not suitable for dealers. Different principles; different normative concerns. Seen this way, the matching version of the problem formulation may appear normatively preferable to the classification version. But, is this always true? If we prioritize maximizing a person's lending opportunities, the matching formulation of the problem may seem better because it increases a lead's chances of securing special financing. If, however, we prioritize the goal of mitigating existing biases in lending practices (i.e., of alleviating existing dealer biases), the classification problem formulation may come across as the better alternative because it potentially encourages dealers to consider leads different from those they have financed in the past. Through the two scenarios, we see how proxies are not merely ways to equate goals with data but serve to frame the problem in subtly different ways-and raise different ethical concerns as a result. It is far from obvious which of the two concerns is more serious and thus which choice is normatively preferableshifting our normative lens alters our perception of fairness concerning the choice of target variables and proxies. In this paper, we have demonstrated how approaching the work of problem formulation as an important site for investigation enables us to have a much more careful discussion about our own normative commitments. This, in turn, provides insights into how we can ensure that projects align with those commitments. Always imperfect; always partial. Translating strategic goals into tractable problems is a labored and challenging process. Such translations do necessary violence to the world that they attempt to model, but also provide actionable and novel ways to address complex problems. 
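To make the contrast between the two problem formulations above concrete, the following is a minimal, purely illustrative sketch in Python; the feature matrix, dealer identifiers, labels, and thresholds are all hypothetical placeholders, not CarCorp's actual data or system. The first model scores a lead against the 500 cutoff and discards everything below it; the second scores the same lead separately for each dealer and routes it to the most promising one.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: lead features, above/below-500 labels, and per-dealer
# historical outcomes (1 = financed, 0 = not financed); none of this is real data.
X = rng.random((1000, 5))
above_500 = (rng.random(1000) > 0.5).astype(int)
dealers = ["dealer_a", "dealer_b", "dealer_c"]
financed_by = {d: (rng.random(1000) > 0.7).astype(int) for d in dealers}

# Formulation 1 (classification): is the lead above or below the 500 cutoff?
clf = LogisticRegression().fit(X, above_500)

def send_if_above_threshold(x):
    # Below-500 leads are never sent at all under this formulation.
    return clf.predict(x.reshape(1, -1))[0] == 1

# Formulation 2 (matching): which dealer is most likely to finance this lead?
dealer_models = {d: LogisticRegression().fit(X, financed_by[d]) for d in dealers}

def route_to_best_dealer(x, min_prob=0.2):
    # Every lead retains at least a chance of being sent to some dealer.
    probs = {d: m.predict_proba(x.reshape(1, -1))[0, 1] for d, m in dealer_models.items()}
    best = max(probs, key=probs.get)
    return best if probs[best] >= min_prob else None

The two functions operationalize the same high-level goal of "improving lead quality" in ways that treat financeability as a binary property in the first case and as a dealer-relative, graded property in the second.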
Our intention to make visible the elasticity and multiplicity of such translations was thus not to criticize actors' inability to find the perfectly faithful translation. Quite the opposite: we recognize that translations are always imperfect and partial, and wanted to instead shift the attention to the consequences of different translations and the everyday judgments that drive them. Our actors, however, did not explicitly debate the ethical implications of their own systems-neither in the way we, as researchers, have come to recognize normative issues, nor in the way we, as authors, have analyzed the implications of their problem formulations in this paper. Practical and organizational aspects such as business requirements, the choice of proxies, the nature of the algorithmic task, and the availability of data impact problem formulations in much more significant and actionable ways than, for instance, the practitioners' normative commitments and beliefs. Indeed, our analysis of the empirical case makes visible how aspects such as analytic uncertainty and financial cost impact problem formulations. For example, the high cost of datasets coupled with the challenge of assessing the data's efficacy without using it made it particularly challenging for actors to leverage additional sources of information. Yet, as we show in this paper, normative implications of data science systems do in fact find their roots in problem formulation work-the discretionary judgments and practical work involved in translations between high-level goals and tractable problems. Each translation galvanizes a different set of actors, aspirations, and practices, and, in doing so, creates opportunities and challenges for normative interventionupstream sites for downstream change. As Barocas et al. [2:6] argue: "A robust understanding of the ethical use of data-driven systems needs substantial focus on the possible threats to civil rights that may result from the formulation of the problem. Such threats are insidious, because problem formulation is iterative. Many decisions are made early and quickly, before there is any notion that the effort will lead to a successful system, and only rarely are prior problemformulation decisions revisited with a critical eye." If we wish to take seriously the work of unpacking the normative implications of data science systems and of intervening in their development to ensure greater fairness, we need to find ways to identify, address, and accommodate the iterative and less visible work of formulating data science problems-how and why problems are formulated in specific ways. CONCLUSION In this paper, we focused on the uncertain process by which certain questions come to be posed in real-world applied data science projects. We have shown that some of the most important normative implications of data science systems find their roots in the work of problem formulation. The attempt to make certain goals amenable to data science will always involve subtle transformations of those objectives along the way-transformations that may have profound consequences for the very conception of the problem to which data science has been brought to bear-and what consequently appear to be the most appropriate ways of handling those problems. Thus, the problems we solve with data science are never insulated from the larger process of getting data science to return actionable results. 
As we have shown, these ends are very much an artifact of a contingent process of arriving at a successful formulation of the problem, and they cannot be easily decoupled from the process at arriving at these ends. In linking the normative concerns that data science has provoked to more nuanced accounts of the on-the-ground process of undertaking a data science project, we have suggested new objects for investigation and intervention: which goals are posed and why; how goals are made into tractable questions and working problems; and, how and why certain problem formulations succeed.
2019-01-08T22:56:45.000Z
2019-01-08T00:00:00.000
{ "year": 2019, "sha1": "332487f33e3decaa1b613dfd4afdc68c92160de0", "oa_license": null, "oa_url": "https://dl.acm.org/doi/pdf/10.1145/3287560.3287567", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "332487f33e3decaa1b613dfd4afdc68c92160de0", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Sociology" ] }
221587172
pes2o/s2orc
v3-fos-license
Some Biochemical Perturbations May Modify the Understanding of Trypanotolerance in the West African Dwarf Sheep Infected With Trypanosoma brucei brucei and Trypanosoma congolense Trypanosomes are single-celled protozoa that cause severe diseases in both humans and livestock in sub-Saharan African countries. The disease in the West African Dwarf (WAD) sheep is often neglected due to the issue of trypanotolerance. The current study is aimed to evaluate some biochemical changes in this breed that may modify the understanding of trypanotolerance. Fifteen WAD sheep were assigned into 3 groups (A, B, and C). Baseline (day 0) values of the parameters assayed were obtained before groups A and B were infected with Trypanosoma brucei brucei and Trypanosoma congolense, respectively, by intraperitoneal inoculation with 106 trypanosomes per animal. Standard procedures using Quimica Clinica Applicada (Spain) and Randox (UK) test kits were used to evaluate serum levels of AST, ALT, ALP, total protein, albumin, total cholesterol, urea, and creatinine on days 0, 14, 28, 42, 56, and 70 post infection. The infections caused sustained pyrexia, hypoproteinaemia, hypocholesterolaemia, weight loss, hepatitis, and mortalities although parasitaemia was greatly controlled especially in the T congolense infected rams. The findings suggest that the WAD rams are not just passive reservoirs of trypanosomes for human and animal infections, but experience active host-parasite interactions with huge price for resilience, biochemically. Introduction Trypanosomes are unicellular parasitic protozoa belonging to the Trypanosoma Genus and Family Trypanosomatidae. 1 The parasites live and thrive in the blood and other body fluids of vertebrate hosts, and some of them cause disease known as trypanosomiasis, leading to significant morbidity and mortality in both man and animals with enormous economic losses. 2 Trypanosoma brucei rhodesiense and Trypanosoma brucei gambianse cause acute (haemolymphatic) and chronic (meningoencephalitic) forms of human African trypanosomiasis (HAT), respectively, whereas Trypanosoma brucei brucei, the third member of the Trypanosoma brucei group together with Trypanosoma congolense and Trypanosoma vivax are the causative agents of 'Nagana' or African animal trypanosomiasis (AAT) in a wide range of livestock. 3 The Djallonke or West African Dwarf (WAD) sheep, as they are often referred to, inhabit the area south of latitude 14° N including the coastal areas of west and central Africa. These include Nigeria, Dahomey, Ghana, Ivory Coast, Guinea, Senegal, Cameroon, Gabon, Congo, and Southern Mali. They are also found in Angola and Botswana. The WAD sheep are known for their adaptation to tropical hot and humid environment of West Africa, and they are also considered tolerant to trypanosomiasis. 4,5 The WAD sheep (Djallonke) and goats, as well as the Taurine cattle (Bos taurus or N'Dama), which entered Africa from the near east around 5000 BCE and 7000 BCE, respectively, developed innate tolerance to African animal trypanosomosis (AAT), probably as a result of natural selection pressures. [6][7][8] The innate ability of these livestock breeds to survive and remain productive under AAT challenge, with very low mortality and without the use of trypanocidal drugs, is referred to as trypanotolerance. It was estimated that approximately 32% of sheep and 47% of goats in West and Central Africa are trypanotolerant, and this is so because the WAD breeds are predominant in these regions. 
9 In N'Dama cattle specifically, trypanotolerance was shown as a better ability to control both parasitaemia and, more importantly, anaemia, and this permits the host to remain productive under disease challenge. 7 Trypanosomiasis is known to cause anaemia, hypoproteinemia, leukocytosis, immunosuppression, hypoglycemia, and changes in serum enzyme and cholesterol levels. 10 However, these changes have not been well evaluated in the WAD sheep and goats and N'Dama cattle breeds partly because of the presumptions associated with trypanotolerance. The present study evaluated some biochemical changes in WAD rams infected separately with T brucei brucei and T congolense to elucidate what trypanotolerance may preclude and widen its understanding or limit overemphasis as well as highlight targets for therapeutic intervention in the so-called trypanotolerant breeds. Clinical Pathology were procured from local breeders. They were allowed to acclimatize for 3 weeks, during which they were dewormed with albendazole at 10 mg/kg body weight (b.w.) per os and ivermectin at 1 mL/50 kg b.w. By buffy coat examination, those already infected with trypanosomes or any other haemoparasite were screened out and excluded from the study. The rams were thereafter assigned into 3 groups (A, B, & C) of 5 rams each. Group A rams were infected with T brucei brucei, group B rams were infected with T congolense and group C served as the uninfected control. The trypanosomes were sourced from the Nigerian Institute for Trypanosome Research (NITR, Nigeria). Infection of groups A and B was by intraperitoneal injection of 10 6 trypanosomes per head. The rams were placed on fresh forage and drinking water ad libitum. Baseline values of the mean body weight, rectal temperature, and serum biochemical assays were determined before infecting groups A and B. The infected animals and the control were monitored for 70 days. Ethical approval The study was approved by the Institutional Animal Care and Use Committee (IACUC) of the Faculty of Veterinary Medicine, University of Nigeria. Parasitaemia Blood samples of the infected groups were examined for parasitaemia daily until parasitaemia was established, then the level of parasitaemia were measured on days 7, 14, 21, 28, 35, 42, 56, and 70 post infection (PI) by buffy coat scoring method. 11 Biochemical analyses On days 0, 14, 28, 42, 56, and 70 PI, 3 mL of blood was collected from the external jugular vein of the rams into clean tubes and allowed to clot followed by centrifugation at 3000 rpm for 10 minutes to obtain clear serum samples for biochemical analyses. The biochemical parameters assayed in serum were aspartate aminotransferase (AST), alanine aminotransferase (ALT), alkaline phosphatase (ALP) activity, total protein, albumin, total cholesterol, urea, and creatinine. All the biochemical assays were done using Diatek Biochemistry Analyzer (Wuxi Hiwell Diatek Instruments Co Ltd, China). The test kits used for the assay of serum AST, ALT, ALP, total cholesterol, and urea were sourced from Quimica Clinica Applicada (QCA), Spain, while the ones used for the assay of serum levels of total protein, albumin, and creatinine were sourced from Randox Laboratories Ltd, UK. The serum AST and ALT activity were assayed based on the Reitman-Frankel colorimetric method, 12 while assay of the ALP activity was based on the phenolphthalein monophosphate method. 
[13][14][15] The serum total protein levels were determined by the direct biuret method, 16 while the serum albumin levels were determined based on the bromocresol green method. 17,18 The serum globulin level for each sample was calculated by subtracting the serum albumin level from total protein level. The serum total cholesterol was determined based on the enzymatic colorimetric method, 19 while the serum level urea was determined based on the Berthelot-Searcy method. 20 Histopathology Rams that died due to the infection were thoroughly necropsied and tissue samples were generously collected and fixed in 10% neutral buffered formalin. The tissues were routinely processed, sectioned at 5 µm thickness and stained with haematoxylin and eosin (H&E). However, focus was mainly on the liver tissue because it is the site for the synthesis of most the biochemical elements evaluated in the study. Statistics Data generated from the study were subjected to 1-way analysis of variance using SPSS version 21. The level of parasitaemia was analysed by Student t test. The variant means were separated post hoc using the least significant difference (LSD) method. The level of significance was accepted at probability P < .05. Physiological parameters and parasitaemia There was no significant (P > .05) changes in the mean body weights of the rams except on day 70 PI on which T brucei brucei infected rams had significantly (P < .05) lower mean body weight compared with the control (Figure 1). The rectal temperature of the infected groups were significantly (P < .05) higher than the control from day 14 to the end of the experiment (day 70), with T congolense producing relatively higher temperature elevations than T brucei brucei ( Figure 2). Parasitaemia was first detected on day 5 PI in both T brucei brucei and T congolense infected rams and by day 6 PI all the infected rams had become parasitaemic. Three peaks of parasitaemia were observed in T brucei brucei on days 14, 35, and 56 PI. T congolense peaked on day 14 PI after which no sharp fluctuations were observed. Also, T brucei brucei produced significantly (P < .05) higher levels of parasitaemia compared with T. congolense especially on the peak days ( Figure 3). Biochemical perturbations in T brucei brucei and T congolense infections in the rams There were significantly (P < .05) lower levels of AST activity in the infected groups on days 14, 28, 42, and 70 PI compared with the control (Figure 4). Serum activity of ALT was significantly (P < .05) higher in T brucei brucei infected rams than the control on day 42 PI while T congolense was significantly (P < .05) lower than the control on day 56 PI ( Figure 5). No significant (P > .05) variations were observed in the serum activity of ALP in both the infected rams and uninfected control ( Figure 6). Serum total protein level was significantly (P < .05) higher in T congolense infected rams than in T brucei brucei infected rams on day 42 PI and was also significantly (P < .05) higher than both T brucei brucei infected rams and the uninfected control on day 70 PI, whereas T brucei brucei infected rams had significantly (P < .05) lower total protein level than the uninfected control on day 70 PI (Figure 7). Serum albumin levels of T brucei brucei infected rams were significantly (P < .05) lower than the uninfected control on days 14, 28, 42, 56, and 70 PI, while that of T congolense was significantly (P < .05) lower than the uninfected control on days 14, 28, 56, and 70 PI (Figure 8). 
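As a schematic illustration of the analysis described under Statistics above (one-way analysis of variance followed by LSD-type pairwise comparisons at P < .05), a minimal sketch with hypothetical group values, not the study's actual SPSS workflow, could look as follows.

import numpy as np
from itertools import combinations
from scipy import stats

# Hypothetical serum values for the three groups at one sampling day.
groups = {
    "T_brucei":     np.array([61.2, 58.4, 63.1, 59.8, 60.5]),
    "T_congolense": np.array([72.5, 69.8, 74.1, 71.0, 70.3]),
    "control":      np.array([66.0, 64.2, 67.5, 65.1, 66.8]),
}

# One-way ANOVA across the three groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

# LSD-style post hoc: unadjusted pairwise t tests, run only if the ANOVA is significant.
if p_anova < 0.05:
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        t_stat, p_pair = stats.ttest_ind(a, b)
        flag = "significant" if p_pair < 0.05 else "ns"
        print(f"{name_a} vs {name_b}: P = {p_pair:.4f} ({flag})")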
Serum globulin level of T congolense infected rams was significantly (P < .05) higher than those of T brucei brucei infected rams as well as the uninfected control on days 42 and 70 PI (Figure 9). Serum total cholesterol level of T brucei brucei infected rams was significantly (P < .05) lower than the uninfected control on days 14, 28, 56, and 70 PI, while that of T congolense infected rams was significantly (P < .05) lower than the uninfected control on days 14, 56, and 70 PI (Figure 10). Serum urea level of T congolense infected rams was significantly (P < .05) higher than that of T brucei brucei infected rams on day 28 PI and significantly (P < .05) higher than the uninfected control on days 28 and 70 PI (Figure 11). There were no significant (P > .05) variations in the serum creatinine levels of the infected and uninfected rams except on day 28 PI, on which the serum creatinine level of T congolense infected rams was significantly higher than that of T brucei brucei infected rams (Figure 12). Histopathology Mortality occurred in the T congolense infected rams on days 14 and 66 PI and in T brucei brucei infected rams on days 14 and 50 PI. The liver tissue of T brucei brucei infected rams showed severe multifocal random hepatocellular necrosis in the form of uncapsulated microgranulomas with massive infiltration of mononuclear inflammatory cells (Figure 13). The liver of the T congolense infected rams also showed mild coagulative hepatocellular necrosis and perivascular inflammation in the portal area (Figure 14). There were other organs with significant lesions but they were not relevant to the focus of the research. Discussion From the results of our study, the body weight of the infected rams did not drop significantly, and this agrees with the reported observations in trypanotolerant breeds. [6][7][8] However, a significant drop in body weight was observed in the T brucei brucei infected rams on day 70 PI. The significant rise in rectal temperature in the infected animals is consistent with trypanosome infection, and T congolense appears to produce a higher level of pyrexia than T brucei brucei, probably due to its affinity for the microvasculature of the brain. 21 The prepatent period of 5 to 6 days for intraperitoneal inoculation of T brucei brucei and T congolense in our study compares well with experimental infections in mice, in which the mean prepatent period was 3.8 days for T brucei brucei and 6.5 days for T congolense. 22 Also, in our study, T brucei brucei produced a higher level (4.0 × 10⁷ trypanosomes/mL of blood) and more fluctuating peaks of parasitaemia than T congolense (1.3 × 10⁶ trypanosomes/mL of blood). The understanding of trypanotolerance includes the ability of the tolerant breeds to control parasitaemia. This is very evident in our study because much higher levels of parasitaemia, up to 1 × 10⁹ trypanosomes/mL of blood, are seen in rodent models and other very susceptible animals. 23 It appears that parasitaemia was better controlled in the T congolense infected WAD rams than in T brucei brucei infected ones. Variations in parasitaemia within the same species or breed of animals may depend on the strain and pathogenicity of the trypanosome isolate. Significant reductions were observed in the serum AST activity of the infected rams. Also, a significant reduction was observed in the serum ALT activity of T congolense infected rams on day 56 PI, but that of T brucei brucei was significantly higher than the control on day 42 PI.
Furthermore, no variations were seen in the serum ALP levels of the infected rams. These findings differ moderately from the serum enzyme activities reported in other susceptible animals. Increases in the serum activity of AST, ALT, and ALP have been reported in T brucei brucei infection, and were attributed to liver injury caused by the parasite. However, in T congolense infection, the serum enzyme activities remained unchanged. 10,24 Reductions in serum AST and ALT observed in our study may be attributed to inhibition of their cofactor (vitamin B6) following losses of total protein and reductions in hepatic synthesis due to hypoproteinemia. 25 Although the significant increase in ALT in T brucei brucei infected rams on day 42 PI may be due to liver damage induced by the tissue-invasive nature of the parasite, this did not affect serum AST, probably because AST is a less specific marker for liver damage and has a shorter half-life. 26 Total proteins and gamma globulins increase while serum albumin decreases in several trypanosome infections. 27 In the present study, serum total protein and globulin levels significantly increased only in the T congolense infected rams, but the serum total protein level of T brucei brucei infected rams was significantly lower than those of both T congolense infected rams and the uninfected control. A decrease in serum total protein in T brucei brucei infection has also been reported in boars. 28 However, the significant decrease in serum albumin level in the infected groups in our study agrees with the findings in other susceptible species. 27 The degree of hypoalbuminaemia was related to the level of parasitaemia and/or severity of the disease and occurrence of haemodilution. 29 Significant reductions in serum total cholesterol levels were observed in the infected rams. This has also been reported in susceptible goat breeds. 30 It has been reported that the bloodstream forms of trypanosomes are unable to synthesize cholesterol although they require it for growth and synthesis of their membranes, and this could contribute to the lowering of the serum levels of lipids and cholesterol in infected hosts. 31 Serum urea level was significantly higher in T congolense infected rams on days 28 and 70 PI. Also, the serum creatinine level of the infected groups did not vary significantly from the uninfected control. High urea levels were reported in cattle infected with T congolense; they occurred early, when parasitaemia was high, and returned to normal levels in chronic trypanosomiasis. 32 It appears that excessive protein catabolism caused by fever contributes to increased blood urea nitrogen. In the present study, both serum total protein and pyrexia were more pronounced in the T congolense infected rams. Moreover, trypanosome infection in the WAD sheep may not be associated with significant muscle or kidney damage that may increase serum creatinine level, as reported in the more susceptible species. 24 The histopathological changes in the liver in our study, especially in the T brucei brucei infected group, may be responsible for the significantly higher level of serum ALT activity observed in this group. The liver lesions may have been triggered by massive invasion of trypanosomes into the liver parenchyma with attendant inflammatory response and hepatocellular necrosis.
Conclusions The results of the present study revealed significant clinical and biochemical perturbations in the so-called trypanotolerant WAD rams ranging from pyrexia, hypoproteinaemia, and hypocholesterolaemia to weight loss, liver damage, and death. These findings suggest that the WAD rams are not passive reservoirs of trypanosomes for human and animal infections, 33,34 but also suffer significantly from the infection where the disease surveillance, control of the insect vector, as well as therapeutic interventions are neglected while trypanotolerance is overemphasized.
2020-08-20T10:05:46.392Z
2020-08-01T00:00:00.000
{ "year": 2020, "sha1": "50d54eedb305d23c1efeb5c9f479b474e3d9c341", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2632010X20938389", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "50c3b525c67d1051ab5ffba1bd9ad2392d8a3a54", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
252405075
pes2o/s2orc
v3-fos-license
A PDE‐regularized smoothing method for space–time data over manifolds with application to medical data Abstract We propose an innovative statistical‐numerical method to model spatio‐temporal data, observed over a generic two‐dimensional Riemannian manifold. The proposed approach consists of a regression model completed with a regularizing term based on the heat equation. The model is discretized through a finite element scheme set on the manifold, and solved by resorting to a fixed point‐based iterative algorithm. This choice leads to a procedure which is highly efficient when compared with a monolithic approach, and which allows us to deal with massive datasets. After a preliminary assessment on simulation study cases, we investigate the performance of the new estimation tool in practical contexts, by dealing with neuroimaging and hemodynamic data. The diversified demands characterizing such a large number of different application fields justify the strong interest in data analysis over two-dimensional Riemannian manifolds, both in the statistical and in the numerical literature. Nevertheless, the available methodologies are so far confined to special manifolds, such as spheres or sphere-like domains (see, e.g., [1][2][3][4][5][6][7] and the references therein), or to the spatial dimension only (see, e.g., [8][9][10][11][12][13][14][15]). The challenge tackled in this paper is consequently twofold, since it deals with space-time data over a general two-dimensional manifold. To this aim, we propose a computational procedure which belongs to, and further strongly advances, the class of Spatial Regression with PDE regularization methods reviewed in Reference [16]. In particular, we adopt an estimation functional which combines a least-squares data-fidelity criterion with a regularizing term based on the heat equation. The work is inspired by the regression model for spatial data over manifold domains considered in Reference [12], as well as by the spatio-temporal model for planar domains proposed in Reference [17]. In more detail, here we discretize the problem directly on the manifold instead of resorting to a conformal flattening of the domain as in Reference [12]. This allows us to avoid the approximation error characterizing the flattening step. Moreover, we use an iterative fixed point scheme to solve the discrete problem instead of the monolithic approach adopted in Reference [17]. Such a choice ensures high computational efficiency, and makes it possible to handle massive datasets, such as those characterizing the applied problems mentioned above. The paper is organized as follows: Section 2 provides some notation related to the differential operators on Riemannian manifolds and to the associated function spaces. Section 3 introduces the proposed PDE-regularized spatio-temporal smoothing method. Section 4 details the discretization used to solve the estimation problem, by distinguishing between the monolithic approach and the new fixed point-based algorithm. Section 5 shows the good performance of the new method through simulation study cases, whereas Section 6 focuses on two applications in Life Science, by considering neuroimaging data and the study of cerebral aneurysms. Finally, Section 7 outlines possible directions for future research. | PRELIMINARIES AND NOTATION We denote by M ⊂ ℝ³ the two-dimensional Riemannian manifold that constitutes the spatial domain of interest, and by [0, T] ⊂ ℝ the considered time window.
We associate with manifold M the Laplace-Beltrami operator, Δ_M, and the gradient, ∇_M, involved in the definition of the estimation problem and of the corresponding approximation, respectively. In particular, the Laplace-Beltrami operator generalizes the standard Laplacian to the case of a function defined over a manifold, by providing a simple measure of the local curvature of such a function. Moreover, the operator Δ_M is invariant with respect to Euclidean transformations (rotations, translations and reflections) of the spatial coordinates.
Figure 1. Neuroimaging signal on a cerebral cortex: triangular mesh modeling the cortical surface (left); fMRI signal associated with neuronal activity distributed over the cortex at a certain temporal instant (right).
Concerning the function spaces, we introduce the space C⁰(M) of the functions continuous on M (understood as defined on the closure of M when M is an open manifold), and the Sobolev space, H^k(M), of the functions u : M → ℝ which belong to L²(M) (i.e., which are square-integrable on M), together with their partial derivatives up to order k. 18 Note that L²(M) coincides with H⁰(M). Finally, as space-time function setting, we consider the space L²(0, T; H^k(M)) of the functions u defined over (0, T) and taking values in H^k(M), whose H^k(M)-norm is square-integrable over (0, T). 18 | REGRESSION ANALYSIS WITH PDE REGULARIZATION We consider n data locations, {p_i, i = 1, …, n}, on manifold M, and m temporal instants, t_1, t_2, …, t_m, in the time interval [0, T], with 0 = t_1 < t_2 < … < t_m = T. We denote by z_ij the value of a real-valued random variable of interest, when observed at the space-time location (p_i, t_j), for i = 1, …, n and j = 1, …, m. We assume that the random variable coincides with a noisy observation of a smooth function, f : M × [0, T] → ℝ, according to the model (1): z_ij = f(p_i, t_j) + ϵ_ij, where ϵ_ij are independent measurement errors characterized by a zero mean and a finite variance. Additionally, we assume that f is twice continuously differentiable in space and continuously differentiable in time. Our goal is to estimate the space-time field f in (1) in the presence of a priori knowledge on the phenomenon of interest. In particular, as in Reference [17], we assume that the problem under study can be described in terms of a time-dependent law, represented by a parabolic Partial Differential Equation (PDE). The problem-specific information may also include the Boundary Conditions (B.C.), when M is an open manifold, and the Initial Condition (I.C.), which model the behavior of the field f at the boundary, ∂Ω × {0, T}, of the space-time domain of interest. We propose to estimate f by minimizing the regularized sum of squared function errors (2), with ϕ : M × [0, T] → ℝ twice continuously differentiable in space and continuously differentiable in time, (p, t) the generic space-time coordinate varying in M × [0, T], and where λ is a positive smoothing parameter. The functional J_λ formalizes a trade-off between a data fitting and a model fidelity criterion. On the one hand, the sum of the squared function errors pushes the solution to the minimization problem, denoted by f̂, close to the observed data z_ij when evaluated at the space-time locations (p_i, t_j). On the other hand, the penalizing term controls the regularity, in space and time, of f̂. In particular, the employment of the Laplace-Beltrami operator ensures that the smoothness of f̂ does not depend on the orientation of the domain or of the coordinate system we adopt.
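In the notation above, and assuming the heat-equation misfit penalty typical of this family of methods (possibly completed by a known, problem-specific forcing term), such a functional can be sketched as

$$ J_\lambda(\phi) \;=\; \sum_{i=1}^{n} \sum_{j=1}^{m} \bigl( z_{ij} - \phi(p_i, t_j) \bigr)^2 \;+\; \lambda \int_0^T \!\! \int_M \Bigl( \tfrac{\partial \phi}{\partial t}(p, t) - \Delta_M \phi(p, t) \Bigr)^2 \, dM \, dt , $$

a schematic form rather than the exact display of (2).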
Finally, parameter λ tunes the trade-off between data fidelity and regularity, so that the higher the parameter λ, the more regular the estimate; vice versa, the lower the parameter λ, the closer the fit to the observed data. It can be checked that the estimation problem þ B:C: and I:C: for s ℕ þ , and where proper boundary and initial conditions have to be included according to the specific problem at hand. In this paper, we focus on the proposal of an efficient numerical approximation for the estimation problem (3). It turns out that the estimator b f minimizing the cost functional for any function q V 2 T M ð Þ. Equation (5) can be rewritten as a system of coupled parabolic problems, by introducing a suitable auxiliary function g defined on M. 12 Thus, we look for the pair b f , g (4), such that, We remark that the first problem in (6) coincides with a standard (forward) parabolic PDE, whereas the problem associated with g constitutes a backward parabolic PDE, since the time derivative and the diffusive term are characterized by an opposite sign. As a consequence, the initial condition b f p, 0 ð Þ¼ e f 0 is added to the first equation, while the ending condition g p, T ð Þ¼e g T completes the second PDE. Concerning the conditions to be assigned on ∂Ω, we will select the boundary data according to the test case at hand. In particular, the essential boundary conditions will be explicitly included in the definition of space V 1 T M ð Þ. Formulation (6) turns out to be instrumental in view of the discrete counterpart of problem (3). In particular, the numerical procedure proposed in Section 4.2 will be characterized by a considerable computational efficiency, thanks to the introduction of an ad-hoc iterative algorithm. This feature will allow us to handle massive datasets, typical of several applicative contexts. | DISCRETIZATION OF THE ESTIMATION PROBLEM This section represents the methodological core of the paper. We provide an improvement in terms of computational efficiency of the approach used in Reference [17] to tackle system (6) in the simplified case of data distributed over a planar domain according to specific sampling designs (e.g., pointwise spatial/interval temporal data, areal spatial/ pointwise temporal data, areal spatial/interval temporal data). The final goal is to finalize a handy and accurate procedure able to efficiently analyze considerable amount of space-time data, observed over general two-dimensional Riemannian manifold domains. In particular, to approximate the system of parabolic PDEs in (6), we have to define a discretization both in space and time. To discretize the space, we introduce a conformal triangulation, T h ¼ K f g, of the manifold M, h being the characteristic mesh size. To discretize the time dependence, we consider a partition, τ 1 = 0 < τ 2 < ÁÁÁ < τ M = T, of the time window (0, T] into (M À 1) subintervals, (τ kÀ1 , τ k ], of length Δt, with k = 2, …, M. For simplicity of exposition, we assume that the vertices of T h exactly coincide with the data locations p i , and that the times when data are collected identify the time partition, so that M ≡ m and τ j ≡ t j for j = 1, …, m. The reader interested to the more general case where mesh vertices do not necessarily coincide with the data locations is referred, e.g., to References [17,19]. Then, we define the finite element space, ð Þ denotes the space of the polynomials of degree r defined on K. 
Notice that the (essential) boundary conditions characterizing space T ℝ n of the evaluations of function v h at the n data locations, p 1 , …p n , and the matrix Ψ ¼ ψ T p 1 ð Þ, …, ψ T p n ð Þ ð Þ ℝ nÂN T of the evaluations of the basis functions at the same points, we can relate vectors v and v n via the equality v n = Ψv. In particular, for r = 1, matrix Ψ reduces to the identity matrix, I ℝ nÂn , and v ≡ v n . By extending the notation above, we denote by the vectors gathering the values taken at time t k by v h at the finite element nodes and at the data locations, respectively, In the next sections, we introduce two different approximations based on the above space-time discretization. The former has been recently proposed in the literature in the simpler case of space-time data observed over planar domains 17 and represents the reference context for the numerical assessment of this paper (see Section 4.1); the latter coincides with the new proposed approach which aims at being computationally highly more effective (see Section 4.2). | A monolithic approach We provide here the space-time discretization scheme proposed in Reference [17]. The authors employ finite elements of degree r to approximate the space, combined with the θ-method for the time discretization. This leads to discretize time derivatives through an incremental ratio, whereas the other time-dependent terms are replaced by a convex linear combination of their values at times t k and t k+1 . 20 In particular, in Reference [17], the authors resort to the backward Euler scheme (θ = 1), so that, for each k = 1, …, m À 1, the following system is solved for b f kþ1 h and g k h , both in V r h M ð Þ: Notice that, according to this space-time approximation, the test functions are only space-dependent, in contrast to formulation (6) (and to the discretization adopted in the next section). Following, 12,17 in order to provide the algebraic counterpart of system (8), we introduce the matrices of dimensionality N T the values taken by functions u h and w h at the mesh nodes. From now on, we take r = 1, so that N T ¼ n. Thus, the algebraic counterpart of the space-time discretization in (8) turns out to be where, in accordance with the notation in (7) System (9) is sparse since the Lagrangian basis B is locally supported. Nevertheless, the system is fully coupled due to the opposite time direction characterizing the equations for b f h and g h . Such a coupling leads to adopt a monolithic approach 17 when solving (9). This means to consider simultaneously all the spatial data locations and all the involved times, namely to solve a unique system with a dimensionality equal to 2mn (see Figure 2). This feature might represent an issue from a computational viewpoint, in particular when dealing with large datasets (i.e., for large values of m and n). As a consequence, complex geometries or long time-series are ruled out by the monolithic method, which, in such contexts, becomes very time-and memory-consuming. This is the case of the applications tackled in Section 6 which are out of reach for the monolithic approach when codes are run on a standard laptop* . All these considerations justify the proposal in the next section of a new procedure, which offers us an alternative to the monolithic approach. | A new fixed point-based algorithm The procedure here proposed aims at commuting the whole system (9) into smaller problems in order to make affordable the management of complex amounts of data. 
In particular, to tackle the coupling between the two equations in (9), we resort to a fixed point approach. 20 Additionally, we adopt a space-time discretization alternative to the one characterizing the monolithic approach. In particular, to be compliant with the weak formulation in (6), where the trial and the test functions depend both on the space and time, we employ space-time finite elements, continuous in space and time. 21,22 Thus, in the generic time interval, F I G U R E 2 Sketch of the dimensionality characterizing the algebraic system associated with the monolithic (left) and with the fixed point-based (right) algorithm (τ kÀ1 , τ k ], a fully discrete function, w h , can be expanded as P s j¼0 t j w h,j p ð Þ, that is, as a linear combination of functions, w h,j , belonging to the finite element space, V r h M ð Þ, with coefficients coinciding with suitable powers of the time independent variable, t. Throughout the paper, we make the choice r = 1, s = 0 in view of a fair comparison between the monolithic and the new approach. We replace the algebraic system (9) with the new one with k = 1, …, m À 1, where the same notations as in (9) To start the algorithm, we have to select the initial guess. In particular: 1. we compute the values b f k,0 n , for k = 2, …, m, by referring to the steady case (see Proposition 2 in Reference [12]), that is, by solving the (m À 1) problems 2. we compute the values g k,0 n , for k = m À 1, …, 1, by solving the (m À 1) problems ii. for k = 2, …, m À 1, iii. for k = m, n g m,j n ¼ 0: The decoupling effect introduced by the fixed point iterations allows us to carry out all the computations in i-iii simultaneously, in the spirit of a Jacobi solver. We highlight that, although the evident similarity between systems (9) and (13), with the new algorithm we are solving, at the same time, m systems of dimension 2n instead of a unique system of dimension 2mn as for the monolithic approach (see Figure 2). This difference in terms of dimensionality justifies the considerable computational gain characterizing the fixed point-based approach when compared with the monolithic formulation, as verified in Section 6 (see Figure 8). Finally, the fixed point algorithm is stopped by introducing a tolerance, TOL, on the relative variation of the cost functional J λ in (2), when evaluated on two consecutive approximations, and after setting a maximum number, NMax, of iterations. The two next sections are meant to numerically investigate the reliability and the efficiency of the fixed point-based algorithm, first when applied to simulation case studies and then by considering a real datasets. | SIMULATION STUDIES In this section, we assess the performances of the new algorithm introduced in Section 4.2 when applied to spatiotemporal data. We compare the proposed method with kriging, the most commonly used technique to analyze spatial and spatio-temporal data (see, e.g., Reference [23] and the references therein). Kriging does not work on generic manifolds. For this reason, to perform the comparison with the new fixed point-based procedure, we combine kriging with a conformal flattening map approach, as detailed below. Figure 3 shows the two test domains we considered for comparison purposes. The first domain is a benchmark geometry, employed, for instance, in Reference [12], here discretized by a mesh with 340 vertices. 
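The stopping rule adopted above for the fixed-point iterations (a tolerance TOL on the relative variation of the cost functional J_λ between two consecutive approximations, together with a cap of NMax iterations) can be sketched schematically as follows; the update and cost callables are hypothetical stand-ins for the decoupled solves and for the evaluation of J_λ, so this illustrates the stopping logic only, not the actual implementation.

def fixed_point_solve(update, cost, x0, tol=5e-4, n_max=30):
    """Generic fixed-point loop with a stopping rule on the relative
    variation of the cost functional between consecutive iterates.

    update : callable(x) -> next iterate (placeholder for one sweep of the
             decoupled, Jacobi-like solves of the m systems of size 2n)
    cost   : callable(x) -> value of the cost functional J_lambda (placeholder)
    x0     : initial guess (e.g., built from the steady problems)
    """
    x = x0
    j_old = cost(x0)
    n_iter = 0
    for n_iter in range(1, n_max + 1):
        x = update(x)
        j_new = cost(x)
        if abs(j_new - j_old) / max(abs(j_old), 1e-14) < tol:
            break
        j_old = j_new
    return x, n_iter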
The second domain coincides with the geometry of a vessel, obtained after simplifying the patient-specific morphology of an inner carotid artery affected by an aneurysm, shown in the left panel of Figure 9. 24 This geometry is of relevance for the investigation carried out in Section 6.1. The mesh in the right panel of Figure 3, which discretizes the vessel geometry, is characterized by 600 vertices. To generate data, over each manifold we consider 50 smooth functions defined by with p = [p [1] , p [2] , p [3] ] T , and where the coefficients a j , for j = 1, 2, 3, are randomly generated from independent normal distributions, with mean equal to zero and standard deviation equal to one. Then, these functions are evaluated at the mesh vertices (so that the data locations, p 1 , …, p n , coincide with mesh vertices), in correspondence with 31 equispaced times in the time window [0, 0.3]. The collected values are hence corrupted by an additive independent Gaussian noise, with mean equal to zero and variance equal to 0.5. The noise level ranges approximately from 0% to 60% of the true signal. The first column in Figures 4 and 5 shows the first smooth function generated according to (14) at different times, over Geometry 1 and 2, respectively. The second column in the same figures provides the corresponding sampled noisy data, at the same times. Now, starting from the noisy data, we resort to the fixed point-based algorithm proposed in Section 4.2 to estimate the 50 smooth functions generated over the two benchmark geometries. To this aim, for both the test domains and for each simulation repetition, we select the smoothing parameter λ in (2) via 5-fold cross validation, 25 while constraining the fixed point iterations with parameters TOL = 5eÀ04 and NMax = 30. The fixed point algorithm converges, on average, after 5 and 6 iterations for Geometry 1 and 2, respectively. The third columns in Figures 4 and 5 show the corresponding estimation, at the different times. The matching with the original data is very good, despite the noise characterizing the sampled data. For the sake of comparison, we compute now the estimates by kriging. (14) The bi-dimensional spatial domains kriging is able to handle are planar or spherical. This is not the case of Geometries 1 and 2. As a consequence, to implement kriging, we resort to a conformal flattening procedure according to what described in Reference [26]. In more detail, following Reference [12], we introduce a continuously differentiable map which changes the Riemannian manifold M & ℝ 3 into a planar domain Ω & ℝ 2 . As an example, Figure 6 shows the result of the conformal flattening when applied to Geometry 1. Note that kriging does not employ the flattened mesh. This simply provides the location of the data on the conformally flattened domain, the data being located at the vertices of the planar mesh. Kriging is thus implemented over the flattened Geometries 1 and 2, by using the R package gStat. 27 In particular, we consider a separable variogram, marginally exponential in space and Gaussian in time, whose parameters, for each simulation replicate, are estimated starting from the values of the empirical variogram, as it is a standard practice for kriging. Moreover, spatio-temporal kriging cannot handle too large datasets. 
This justifies the simplification we have applied to the original geometry of the patient-specific inner carotid artery (with an associated original mesh of 6017 vertices) to yield the mesh in Figure 3, right panel, consisting of 600 vertices only. The mesh simplification has been performed by exploiting the algorithm in Reference [10]. The fourth columns in Figures 4 and 5 provide the spatio-temporal kriging estimates at the considered times. A qualitative cross comparison between the third and the fourth columns in the two figures highlights the superior performance of the new algorithm proposed in Section 4.2. Indeed, for both the geometries, the fixed point-based algorithm succeeds in removing the artifacts introduced by the noise, while kriging turns out to be less effective in this respect. We enrich the comparative analysis between the fixed point-based procedure and kriging by including the monolithic method adopted in Reference [17] and summarized in Section 4.1. The monolithic approach yields estimates which, from a qualitative viewpoint, are fully comparable with the results provided by the fixed point-based algorithm. Nevertheless, a quantitative investigation highlights that the method proposed in this paper outperforms the monolithic formulation in terms of computational efficiency.
Figure 9. Haemodynamic case study: discretization of the inner carotid artery (left); observed wall shear-stress (middle) and corresponding estimate provided by the fixed point-based approach (right) at two different temporal instants (top and bottom) during the heart beat.
The quantitative analysis is carried out by computing, for each test domain and for each simulation repetition, the Mean Square Error (MSE) associated with the corresponding estimate, f̂, and the CPU time required by the computational procedure. Figure 7 collects the box plots for the MSE characterizing the three methods here compared, and for both the geometries. The performance of the fixed point-based algorithm and of the monolithic approach in terms of MSE is essentially the same. Instead, kriging exhibits a significantly higher MSE, with a large dispersion and several outliers associated with very high MSE values. As expected, the estimates yielded by the new and by the monolithic algorithms turn out to be more robust, as highlighted by the contained dispersion of the related MSEs. Figure 8 displays the box plots for the execution time, measured in seconds [s], demanded by the fixed point-based and by the monolithic methods, when run on the two test geometries. This check reveals the evident superiority of the new algorithm with respect to the monolithic approach in terms of numerical efficiency, with a reduction of the execution time, on average, of about five times for both the geometries. Kriging has not been included in the figure, due to the remarkably higher time characterizing such a method (around 4 min for both the geometries instead of a few seconds). | CASE STUDIES We here illustrate the effectiveness of the fixed point-based method through two applied case studies, after having verified the reliability and the computational efficiency of such an approach in the previous section. The first case study concerns the analysis of the shear-stress exerted by the blood-flow over the wall of an inner carotid artery (see Section 6.1). The second application comes from the neurosciences and deals with the study of the neuronal activity on the cerebral cortex (see Section 6.2).
Standard kriging cannot be used in these real-word applications (not even disregarding the complex geometry of the domains), due to the high dimensionality of the data. | Study of heamodynamic forces on the arterial walls As a first practical case study, we consider a medical disease whose incidence in the population is very high (around 10 cases per 100,000 people, with mortality or serious health conditions in 60% of cases 28 ). We are referring to the rupture of a cerebral aneurysm, namely, of a large bulge that may modify the standard shape of a vessel wall in the brain. These deformations are very common in the adult population. In the vast majority of cases, cerebral aneurysms are totally asymptomatic and innocuous. The rupture of an aneurysm is an infrequent event, but unfortunately characterized by a very high mortality. The origin of this pathology is still largely unknown. The study of the factors causing the development and the possible rupture of aneurysms has attracted lot of interest in the scientific and medical community (see, e.g., References [29,30]). It is believed that one of the main features influencing the aneurysm pathogenesis is the shear-stress exerted by the blood flow on the arterial wall. In particular, a strong variation of the shear-stress in space, and over the time of the heart-beat, is conjectured to be associated with the aneurysms formation, development and possible rupture; moreover, very low values of shear-stress are thought to be very dangerous (see Reference [30] and references therein). This haemodynamic stress is in turn dependent on the complex morphology of the vessel. For this reason, the study of the spatio-temporal behavior of the shear-stress in patient-specific geometries of arteries affected by cerebral aneurysms, is of great interest for advancing the knowledge on this pathology. Figure 9 shows the considered medical configuration. It coincides with a patient-specific inner carotid artery affected by a large aneurysm. In particular, the wall of the artery has been discretized by a triangular mesh consisting of 6017 vertices (see Figure 9, left). Actually, we are dealing with the same manifold as in Section 5 (Geometry 2). However, the computational efficiency of the fixed point-based algorithm allows us to involve here a finer discretization of such a geometry, with a consequent higher reliability of the associated analysis. Concerning the analyzed data, we refer to the AneuRisk project. 24 In particular, we consider the modulus of the wall shear-stress obtained from computational fluid dynamics simulations. 31,32 This quantity is available at the mesh vertices, at 100 temporal instants, that cover a full heart-beat. A first analysis of these data has been carried out in Reference [12], although restricted to a single timeinstant. Now, we exploit the fixed point-based approach to estimate the spatial wall shear-stress distribution over the inner carotid artery at two distinct times during the heart beat. For this purpose, we choose the smoothing parameter λ by 5-fold cross validation, while selecting values 1eÀ04 and 50 for parameters TOL and NMax, respectively. Six fixed point iterations are demanded, on average, to ensure the convergence at each time, leading to a total elapsed time equal to 16.28 s. Figure 9 compares the raw (middle panels) with the estimated (right panels) wall shear-stress. A visual inspection does not highlight differences between the observed and the smoothed data. 
This is due to the fact that these data, obtained by computational fluid dynamics simulations, are characterized by very low noise, that is, orders of magnitude lower than the data values. For this reason, the proposed algorithm, which correctly identifies the very high signal-to-noise ratio in the data, only filters out the high-frequency variation in the observations. Of course, a higher value of the smoothing parameter λ could be used to return a smoother estimate that highlights only the main patterns of the signal. The displayed temporal instants are characterized by a significant variation in the shear-stress distribution, in particular with low values of the shear stress within the aneurysmal sac. Independently of the selected time instants, it can be checked that in this location the wall shear-stress remains always very low and fluctuating, thus supporting the conjecture that low values of this stress play a major role in the aneurysmal pathogenesis. | Study of neuronal activity on the cerebral cortex The cerebral cortex is the outermost part of the brain, a thin layer of neural tissue where most of the neuronal activity takes place. From a geometric viewpoint, the cerebral cortex coincides with a highly tangled surface. It can be approximated by a triangular mesh which, unavoidably, turns out to be very complex, as shown in the left panel of Figure 1. On top of this two-dimensional manifold domain, a time-varying hemodynamic signal associated with the neuronal activity on the cerebral cortex can be observed. Figure 1 shows one temporal snapshot of such a hemodynamic signal, measured during a functional Magnetic Resonance Imaging (fMRI) scan. The propagation of this signal constitutes the object of our investigation. The data here analyzed come from the Human Connectome Project, a wide public database of resting-state and task-based fMRI scans, structural scans, and diffusion MRI scans from a large number of volunteers. 33 Currently, there is a strong effort in the scientific community to set up methods for the analysis of this kind of data (see, e.g., References [8,34,35]), with the common goal of advancing knowledge of cerebral functioning and diseases. Despite this considerable interest, most neuroimaging studies are still carried out either by disregarding the spatial dependence in the signal, or by employing basic methods which exploit the standard Euclidean distance. These simplified endeavors may lead to inaccurate estimates, for instance because functionally distinct areas that are far apart over the cortex appear close in three-dimensional space due to the presence of a sulcus. Indeed, it has been shown that including the highly complex brain anatomy in the data analysis is a necessary step to guarantee a reliable investigation. 14,36 The method adopted in this paper offers a spatio-temporal smoothing procedure able to correctly comply with the cerebral cortex morphology. To assess the fixed point-based algorithm, we start from the data associated with the triangular mesh in the left panel of Figure 1, consisting of 32,492 vertices. The data coincide with the fMRI signals induced over a patient-specific cerebral cortex by the neuronal activity, at 30 temporal instants. The left panel in Figure 10 shows a specific temporal snapshot of this signal.
Figure 10. Neuroimaging case study: observed signal at a fixed temporal instant (left) and corresponding estimate provided by the fixed point-based approach (right).
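As in the simulations of Section 5 and in the haemodynamic study above, the smoothing parameter λ is selected below by 5-fold cross-validation. A minimal, generic sketch of such a selection loop is the following; the fit_smoother and predict callables are hypothetical placeholders, not the actual finite element implementation.

import numpy as np

def select_lambda_by_cv(z, lambdas, fit_smoother, predict, n_folds=5, seed=0):
    """Pick the smoothing parameter minimizing K-fold cross-validated MSE.

    z            : 1D array of observations (one entry per space-time location)
    lambdas      : candidate smoothing parameters
    fit_smoother : callable(train_idx, lam) -> fitted model (placeholder)
    predict      : callable(model, test_idx) -> predictions at test_idx (placeholder)
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(z))
    folds = np.array_split(idx, n_folds)

    cv_error = []
    for lam in lambdas:
        fold_mse = []
        for k in range(n_folds):
            test_idx = folds[k]
            train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            model = fit_smoother(train_idx, lam)   # fit on the training folds only
            z_hat = predict(model, test_idx)       # evaluate on the held-out fold
            fold_mse.append(np.mean((z[test_idx] - z_hat) ** 2))
        cv_error.append(np.mean(fold_mse))
    return lambdas[int(np.argmin(cv_error))]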
Starting from these noisy data, we run the algorithm proposed in Section 4.2 to estimate the underlying smooth spatio-temporal signal on the cerebral cortex. To this end, we select the smoothing parameter in (2) by 5-fold cross validation and we set the two parameters NMax and TOL, characterizing the stopping check, to 50 and 1e-04, respectively. The fixed point algorithm converges, on average, within 15 iterations, while the whole estimation process takes 400.16 s. Figure 10 compares the raw data (left panel) with the smooth estimate provided by the fixed point algorithm (right panel), at the considered temporal instant. A visual comparison between the two panels highlights the accuracy of the estimate, which efficiently smooths the data, appropriately filtering out the noise without generating any artifacts. In particular, notice that the data values observed over nearby gyri are not artificially linked by the algorithm. Finally, we remark that higher values of the parameter λ could also be used in order to yield an estimate that only captures the macroscopic features of the original signal, thus returning the corresponding main pattern. | DISCUSSION AND POSSIBLE ENHANCEMENTS The proposed fixed point-based approach turns out to be an ideal tool to analyze large amounts of spatio-temporal data over general manifolds in ℝ³. The numerical assessment in Section 5 shows the superiority of the new method when compared both with kriging (combined with a conformal flattening of the domain to manage generic manifolds) and with the monolithic procedure proposed in Reference [17], here adapted to non-planar domains. In particular, the fixed point-based algorithm is considerably more reliable than kriging (Figures 4, 5, 7). On the other hand, when compared with the monolithic approach, the new method proves to be significantly more efficient in terms of computational time (Figure 8) without sacrificing estimation accuracy (Figure 7), and it allows us to handle data over general two-dimensional Riemannian manifolds. The gained efficiency makes it possible to estimate massive datasets, as corroborated by the applications analyzed in Section 6. The method introduced in this paper enables several extensions. Among the most interesting ones, we cite the inclusion of space-varying covariates in a semi-parametric setting, analogously to what is discussed in References [12,37] for the simplified case of spatial data only. In the haemodynamic framework, this feature would allow us to include in the estimation process the space-varying radius and curvature of the vessel, to study the role played by these geometrical features in cerebral aneurysm pathology. In the application to neuroimaging data, we would take into account the space-varying cortical thickness, which may have an effect on the hemodynamic signal considered here. Another interesting generalization concerns the adopted finite element discretization, which could be replaced by an isogeometric analysis, thus generalizing what was done in Reference [15] in a steady setting. ACKNOWLEDGMENTS We are grateful to Eardi Lila for helping us in the processing of the brain imaging data. We also thank Bree Ettinger, who provided us with the code implementing the conformal flattening used in Section 5 for comparison purposes with kriging. Finally, the second author acknowledges the research project GNCS-INdAM 2020 "Tecniche Numeriche Avanzate per Applicazioni Industriali."
Open Access Funding provided by Politecnico di Milano within the CRUI-CARE Agreement. DATA AVAILABILITY STATEMENT Research data are not shared.
Laser-Induced Breakdown Spectroscopy for Rapid Discrimination of Heavy-Metal-Contaminated Seafood Tegillarca granosa In this study, we attempted to distinguish Tegillarca granosa samples artificially contaminated by three kinds of toxic heavy metals, namely zinc (Zn), cadmium (Cd), and lead (Pb), using laser-induced breakdown spectroscopy (LIBS) technology and pattern recognition methods. The measured spectra were first processed by a wavelet transform algorithm (WTA), and the characteristic information generated was subsequently extracted by an information gain algorithm (IGA). As a result, the 30 variables obtained were used as input variables for three classifiers: partial least squares discriminant analysis (PLS-DA), support vector machine (SVM), and random forest (RF); among these, the RF model exhibited the best performance, with 93.3% discrimination accuracy. In addition, the extracted characteristic information was used to reconstruct the original spectra by inverse WTA, and the corresponding attribution of the reconstructed spectra was then discussed. This work indicates that healthy shellfish samples of Tegillarca granosa can be distinguished from toxic heavy-metal-contaminated ones by pattern recognition analysis combined with LIBS technology, which requires only minimal pretreatment. Introduction Due to the recent accelerated process of industrialization in developing countries, large quantities of toxic heavy metals are discharged into rivers, lakes, and seas. The toxic heavy metal pollution of aquatic products has become an increasingly serious issue. Toxic heavy metals not only disrupt the living conditions of aquatic animals in natural waters but also intoxicate or kill aquaculture fish, which is a threat to the fish farming industry. Furthermore, heavy metal ions accumulate in the human body through the consumption of polluted seafood [1,2]. Commonly, many enzymes in the human body are deactivated by heavy metals, leading to the risk of chronic poisoning. The present work therefore aims at identifying a suitable machine learning algorithm to distinguish polluted Tegillarca granosa from safe ones. In this study, several types of Tegillarca granosa, including uncontaminated control samples, samples contaminated by cadmium (Cd), zinc (Zn), or lead (Pb), and samples contaminated by a mixture of all three heavy metals, were evaluated with a combination of LIBS technology and discrimination analysis based on pattern recognition. The specific objectives were as follows: (1) to decompose the original LIBS spectra via the wavelet transform algorithm (WTA); (2) to extract the characteristic information from the high frequency coefficients of the wavelet transform domain using the information gain algorithm (IGA); (3) to identify Tegillarca granosa samples contaminated by a certain metal or by multiple heavy metals using the extracted characteristic information as the input of the discrimination models. After a sedimentation process of over 24 h, the seawater was filtered to remove sand and used for raising the Tegillarca granosa in tanks. The parameters of the seawater were a pH of 8.05 ± 0.10, a temperature of 22.4 ± 5.6 °C, a dissolved oxygen content of >6 mg/L, and a salinity level of 21%. Throughout the experiment, the seawater was exchanged every 24 h, after which the containers were refilled and dosed with the metal toxicant. A total of 150 Tegillarca granosa samples were randomly divided into five equal-sized groups, i.e., 30 in each group.
The Tegillarca granosa samples in Groups I, II, and III were exposed to water dosed with the heavy metal salts Pb(CH3COO)2·3H2O (1.833 mg/L), CdCl2 (1.634 mg/L), and ZnSO4·7H2O (4.424 mg/L), respectively. Group IV was exposed to a mixture of equal amounts of the above three chemicals. Group V (control) was raised in seawater without any spiked heavy metal ions. After 10 days, during which the heavy metals accumulated in the tissues, the samples were sacrificed and kept in a refrigerator at −4 °C for 30 min. The samples were then freeze-dried for subsequent spectral measurement. Spectral Collection The experimental set-up is shown in Figure 1. Each sample was measured five times at different spots on the translational stage. A pulsed Nd:YAG laser (Litron Nano SG 150-10, Litron Lasers, Warwickshire, England) with a wavelength of 1064 nm, a pulse duration of 6 ns, and an energy of 150 mJ was applied. A plano-convex lens (f = 100 mm) was used to focus the laser beam onto the sample surface at normal incidence. The spot diameter on the sample surface was set at 500 µm to improve the detection accuracy of contamination in certain local areas. A beam splitter was utilized to split off a small fraction of the energy (10%) for monitoring the pulse energy with an energy meter. The plasma emission was collected using an optical fiber system at a 45° angle to the incident laser beam and was then fed into a spectrometer (LTB Aryelle 150, Berlin, Germany) equipped with an optical chopper (with a time resolution of 0.1 µs). The spectral resolving power (λ/Δλ) was 6000. A charge coupled device (CCD) camera was used for the spectra acquisition, and the laser was synchronized with the spectrometer. The CCD gate width was 30 µs. The delay from the laser pulse generation to the start of spectral acquisition was set to 1 µs in all experiments. Wavelet Transform Algorithm The wavelet transform algorithm (WTA) is a time-frequency localization analysis method that exhibits excellent performance in the analysis and extraction of non-stationary signal characteristics [34]. In WTA, scaling and translating operations on the signal gradually achieve a multi-scale refinement, which eventually leads to appropriate time resolution for high frequency signals and good frequency resolution for low frequency signals. WTA can automatically adapt to the requirements of time-frequency signal analysis, so as to focus on any signal details of interest, and it provides an alternative to the Fourier transform. WTA has been successfully applied in many fields, including medical science and image processing [35,36]. The Wavelet Toolbox in Matlab 2012a was used in this work. Information Gain Algorithm In information theory, entropy denotes the average unpredictability of a random variable, which is considered to be equivalent to its information content. The information gain algorithm (IGA) can effectively select the important features based on entropy. The expected value of the information gain (IG) is the mutual information of the target variable and the independent variables. A reduction in the entropy of the target variable is achieved by learning the state of an independent variable. IGA treats each feature in isolation and estimates how important it is for the prediction of the correct class label. The entropy-based attribute selection and ranking method aims to minimize the entropy value of an attribute, thereby maximizing its IG.
The main advantage of this method is that it includes all attributes in the analysis [37,38]. Spectral Calibration and Analysis Methods Samples were randomly divided into two subsets (i.e., a calibration and a prediction subset) with a ratio of 4:1. In this way, 120 samples in the calibration subset were used to train the discrimination models, and the other 30 samples in the prediction subset were used to test the accuracy of the models. The target was encoded in binary vector format, such as 1000, 0100, 0010, and 0001, to represent the different contamination types of the samples for the discrimination analysis. Here, three extensively used classifiers, partial least squares discriminant analysis [39-41] (PLS-DA), random forest [42] (RF), and least squares support vector machine [43,44] (LS-SVM), were employed to calibrate the discrimination models.
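A possible sketch of this calibration/prediction protocol, written with scikit-learn, is given below. It is a schematic reconstruction and not the authors' code: the number of PLS components and the SVM hyperparameters are arbitrary placeholders, and an ordinary kernel SVC is used in place of LS-SVM, which is not available in scikit-learn.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def evaluate_classifiers(X, y, n_classes=5, seed=0):
    # 4:1 calibration/prediction split, stratified over the five groups
    X_cal, X_pre, y_cal, y_pre = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    results = {}

    # PLS-DA: PLS regression on one-hot encoded targets, class = argmax
    plsda = PLSRegression(n_components=10)          # component count is illustrative
    plsda.fit(X_cal, np.eye(n_classes)[y_cal])
    results["PLS-DA"] = accuracy_score(y_pre, plsda.predict(X_pre).argmax(axis=1))

    # Random forest
    rf = RandomForestClassifier(n_estimators=500, random_state=seed).fit(X_cal, y_cal)
    results["RF"] = accuracy_score(y_pre, rf.predict(X_pre))

    # Kernel SVM (stand-in for LS-SVM; hyperparameters are placeholders)
    svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_cal, y_cal)
    results["SVM"] = accuracy_score(y_pre, svm.predict(X_pre))
    return results
```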
Analysis of LIBS Spectra The representative LIBS spectra of Group IV samples in the wavelength region of 200-900 nm are shown in Figure 2. In general, the LIBS spectra of the Tegillarca granosa samples were quite complex, containing multiple peaks contributed by different elements, and the acquired spectra showed distinct elemental features. Although some characteristic peaks in the LIBS spectra of Tegillarca granosa are prominent, those of heavy metals such as Zn and Pb are strongly interfered with by noise. As a result, samples in the control group cannot be directly distinguished from those in the heavy-metal-contaminated groups. Therefore, machine learning algorithms were required to classify the corresponding group for each sample. Analysis of Discrimination Results Using Full Spectra The PLS-DA algorithm was employed to extract the information on the predictive variables hidden in the complex spectral information, for the purpose of compression and information extraction. A discrimination model was built with the full spectral information regarded as the input variables for the PLS-DA model. Unfortunately, this method was ineffective, providing less than 30% accuracy for the validation samples.
A few factors may have resulted in the poor discrimination performance: (1) the full spectra contain high noise or irrelevant information, while the characteristic information distinguishing the differently contaminated Tegillarca granosa is less evident; (2) serious mutual interference from the laser pulses and the heterogeneous matrix effect could have reduced the quality of the LIBS spectra; (3) PLS-DA, which is generally suited to strongly linear problems, is not the best candidate to predict the group of the samples given the strong nonlinearity between the LIBS spectra and the predictive variables [19]. To address the nonlinearity of the LIBS spectra, additional tools suited to nonlinear models, including RF and SVM, were adopted to discriminate the different types of Tegillarca granosa. However, the accuracy was as low as 40%, possibly because the useful characteristic information may have been submerged in the noise. From this study, we concluded that further data mining methods were indispensable to extract the characteristic bands from the original full spectral information. Analysis of Results Using Characteristic Spectra The LIBS spectra showed that the characteristic peaks of most elements were narrow and very similar to "high frequency" signals. Based on this observation, the discrete wavelet transform (DWT), an algorithm widely applied to the time-frequency analysis of signals, was adopted for further spectral analysis. This algorithm decomposes the original signals into "high" and "low frequency" parts, so that two different filters can be applied to access the different frequency components of the signals. The widely used Daubechies 4 (Db4) wavelet was adopted as the kernel function of the DWT to transform the original LIBS spectra. The high frequency components from the different decomposition layers were then extracted and taken as the input variables to develop PLS-DA, RF, and LS-SVM discrimination models. The classification results from the high frequency components of the different decomposition layers are shown in Figure 3a. Because a few metal elements have multiple excitation states, the relationship between the contamination category of the Tegillarca granosa samples and the spectral lines is nonlinear. The PLS-DA model applied to the different decomposition layers showed an accuracy similar to that of the model applied to the full spectra. Compared with PLS-DA, RF and LS-SVM performed slightly better because they can handle nonlinear models more effectively. The recognition results obtained by the RF and LS-SVM methods from the first six decomposition layers varied significantly, showing a serious fluctuation of the discrimination accuracy in the prediction dataset. This may be because more characteristic information (i.e., high frequency information) is extracted when the number of decomposition layers increases. The discrimination ratio reached its highest level at the third decomposition layer, and the corresponding wavelet coefficients are shown in Figure 4a. The number of variables in the high frequency part decreases as the decomposition layer number increases; higher layers therefore contain less extracted and less available information, leading to a decline in the discrimination ratio. As for the low frequency components, which were handled in the same way as the high frequency parts for each decomposition layer, Figure 3b shows that the PLS-DA model gave unsatisfactory discrimination performance.
The RF and LS-SVM models showed low frequency discrimination performance slightly inferior to that obtained with the high frequency components, and the best low frequency discrimination accuracy was still lower than 60%. Meanwhile, the discrimination performance varied largely across the different layers, as for the high frequency results.
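The decomposition and layer selection described above can be sketched, for instance, with the PyWavelets library (the authors used the MATLAB Wavelet Toolbox; the functions below are an illustrative reconstruction, and the choice of six decomposition levels is an assumption taken from the discussion of the first six layers).

```python
import numpy as np
import pywt  # PyWavelets

def detail_coefficients(spectrum, wavelet="db4", max_level=6):
    """Multi-level DWT of one LIBS spectrum; returns the detail ("high
    frequency") coefficients of every layer, ordered from layer 1 upward."""
    coeffs = pywt.wavedec(np.asarray(spectrum, dtype=float), wavelet, level=max_level)
    # coeffs = [cA_n, cD_n, ..., cD_1]; reverse and drop the approximation
    return coeffs[::-1][:-1]

def layer_features(spectra, layer=3, wavelet="db4", max_level=6):
    """Stack the detail coefficients of one chosen layer (the third layer
    gave the best discrimination in the text) for a set of spectra."""
    return np.vstack([detail_coefficients(s, wavelet, max_level)[layer - 1]
                      for s in spectra])
```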
The above analysis shows that an essential role in data compression and information extraction was played by DWT, and that DWT can improve the discrimination performance of RF and LS-SVM. However, DWT is merely a time-frequency analysis of the spectral signals. It does not extract characteristic information for the identification of the different types of Tegillarca granosa. Among the variables in the wavelet decomposition domain, there is irrelevant information that does not contribute to discrimination. Therefore, an approach to eliminate such irrelevant information should be included for better performance. In this study, IGA, a prevalent characteristic data mining method used in classification, was adopted by our group. Based on the previous results, the best discrimination performance was obtained from the data of the third decomposition layer; hence, the high frequency coefficients of the third layer in Figure 4a were further analyzed to extract the characteristic information by the IGA. As shown in Figure 4b, the extracted information was greatly narrowed in comparison with the high frequency information of the third layer, and the number of variables was sharply reduced to only 30. These 30 characteristic variables were set as the input variables of the PLS-DA, LS-SVM, and RF models. For the PLS-DA model, the discrimination result was slightly improved, but the discrimination ratio was still lower than 40%. Compared with the discrimination models based on the variables of the full spectra and on the high frequency coefficients of the third layer, the discrimination model built on the variables extracted with the IGA significantly improved the recognition ratios of the LS-SVM and RF models, to 86.7% and 93.3%, respectively. This improvement is primarily attributed to the IGA algorithm eliminating the information that is unrelated or useless for discrimination. With the combination of the DWT and IGA methods, the number of variables was reduced from 30,546 in the full spectrum to 30 effective variables, a cut rate of 99%, which also saved a considerable portion of computing power in the recognition process.
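A minimal sketch of an information-gain ranking of the wavelet-domain variables is shown below. The equal-width binning of the continuous coefficients and the number of bins are assumptions made for the example, not details taken from the paper.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a vector of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels, n_bins=10):
    """IG of one continuous feature after simple equal-width binning."""
    bins = np.digitize(feature, np.histogram_bin_edges(feature, bins=n_bins))
    h_y = entropy(labels)
    h_y_given_x = 0.0
    for b in np.unique(bins):
        mask = bins == b
        h_y_given_x += mask.mean() * entropy(labels[mask])
    return h_y - h_y_given_x

def top_k_by_information_gain(X, y, k=30):
    """Rank the wavelet-domain variables by IG and keep the k best (30 in the text)."""
    gains = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(gains)[::-1][:k], gains
```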
Analysis of Reconstructed Spectra Since the input variables of the above discrimination models were processed with the DWT algorithm, their coefficients lie in the wavelet domain rather than in the original spectra. Therefore, the characteristic variable information could not directly reflect the characteristic elemental information that differentiates the various kinds of Tegillarca granosa samples. To further analyze the related elemental information, the characteristic information was processed with the inverse DWT (IDWT) and reconstructed in the original spectral domain. Figure 5 displays the reconstructed spectra of the Group IV samples (the reconstructed coefficients are shown as absolute values). Comparisons between the original and the reconstructed spectra show that the latter contain fewer variables, mainly because the considerable noise variables and the uninformative variables were eliminated by WTA and IGA. In this regard, the corresponding attribution analysis of the spectral peaks for the major characteristic variables was conducted based on the reconstructed spectra. The results are shown in Table 1 (analysis of characteristic spectral band attributes), where the reconstructed spectra comprise the characteristic peaks of Mg, Ca, P, K, Pb, and Cd.
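Schematically, such a reconstruction can be obtained by zeroing all wavelet coefficients except the retained ones and applying the inverse transform, for example as in the sketch below (again with PyWavelets; `keep_idx` stands for the indices of the 30 selected variables within the third-layer detail coefficients, e.g., those returned by the hypothetical ranking function above).

```python
import numpy as np
import pywt

def reconstruct_from_selected(spectrum, keep_idx, layer=3, wavelet="db4", max_level=6):
    """Inverse DWT keeping only the selected detail coefficients of one layer;
    everything else is zeroed, so the result shows where on the wavelength
    axis the retained variables live."""
    coeffs = pywt.wavedec(np.asarray(spectrum, dtype=float), wavelet, level=max_level)
    pruned = [np.zeros_like(c) for c in coeffs]
    d = coeffs[-layer]                 # detail coefficients of the chosen layer
    mask = np.zeros_like(d)
    mask[keep_idx] = 1.0
    pruned[-layer] = d * mask
    rec = pywt.waverec(pruned, wavelet)
    return np.abs(rec[: len(spectrum)])   # absolute values, as in Figure 5
```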
In the spectra, the spectral lines of Mg and Ca located near 400 nm had higher absolute values and made a greater contribution to the discrimination model, while the other Mg and Ca bands contributed less. The characteristic spectral lines of the heavy metals Pb and Cd contributed even less to the model. Besides, no characteristic spectral lines of Zn were found in the reconstructed spectra, probably because the concentration of Zn, one of the essential elements in the body, varied only slightly in the heavy-metal-contaminated Tegillarca granosa samples. In addition, according to the attributes of the identified spectral lines, the poisoning of Tegillarca granosa by heavy metals is a very complicated process, often accompanied by changes in protein synthesis, lysosomal changes, and immune damage [45-47]. The normal food intake of Tegillarca granosa may be affected in heavy-metal-contaminated water, leading to a varied intake of essential trace elements, including Mg, Ca, and K. However, the characteristic information in the spectra collected under the different metal contamination conditions describes not only the elements that are concentrated within the body (e.g., Cd and Pb), but also other biologically imperative elements, including Mg, Ca, and K. Therefore, the essential trace elements Mg, Ca, and K within the body are also affected by the different heavy metals, and this provides characteristic information for rapidly identifying heavy-metal-contaminated Tegillarca granosa. Conclusions In summary, a rapid discrimination method for toxic heavy-metal-contaminated Tegillarca granosa was investigated in this study by combining LIBS and pattern recognition. LIBS technology was applied to acquire spectral signals, which could be analyzed to determine the contamination categories. A novel combination of the DWT and IGA methods was employed to extract the high frequency information and the characteristic information as the input of the discrimination analysis.
The performance in discriminating the healthy samples from the toxic samples with three different classifiers was discussed. The results show that the RF model achieves the highest discrimination ratio, 93.3%, among the different types of Tegillarca granosa. Our work suggests that LIBS technology combined with pattern recognition analysis can conveniently detect toxic heavy-metal-contaminated shellfish samples, which may enable promising eco-toxicological testing applications. Moreover, rapid in-field identification of healthy shellfish seafood can also be expected by integrating portable LIBS equipment with machine learning methods, for real-time monitoring and protection of the coastal marine environment.
NMR spectroscopic detection of chirality and enantiopurity in referenced systems without formation of diastereomers Enantiomeric excess of chiral compounds is a key parameter that determines their activity or therapeutic action. The current paradigm for rapid measurement of enantiomeric excess using NMR is based on the formation of diastereomeric complexes between the chiral analyte and a chiral resolving agent, leading to (at least) two species with no symmetry relationship. Here we report an effective method of enantiomeric excess determination using a symmetrical achiral molecule as the resolving agent, which is based on the complexation with analyte (in the fast exchange regime) without the formation of diastereomers. The use of N,N′-disubstituted oxoporphyrinogen as the resolving agent makes this novel method extremely versatile, and appropriate for various chiral analytes including carboxylic acids, esters, alcohols and protected amino acids using the same achiral molecule. The model of sensing mechanism exhibits a fundamental linear response between enantiomeric excess and the observed magnitude of induced chemical shift non-equivalence in the 1H NMR spectra. C hirality plays a key role in many biological events but is also important in the development of new pharmaceuticals 1 , for control of organic asymmetric reactions 2 , chiral catalysis 3,4 and in several aspects of supramolecular science [5][6][7][8][9][10] . Therefore, the detection of chirality, including analysis of chiral purity and interactions between chiral species, is an intensive area of research. Various procedures for determining analyte chirality have been proposed and those of note are based on optical (for example ultraviolet/visible spectrophotometry (uv/Vis) [11][12][13] , circular dichroism (CD) 14,15 , fluorescence [16][17][18][19] ) and electrochemical 20 methods involving a synthetic chiral host as a chirality detector. However, the use of an achiral host as a detector has only seldom been achieved, that is, through the formation of ternary diastereomeric complexes between chiral substrate molecules and achiral host 21 or by the observation of exciton-coupled CD effects in porphyrin tweezer molecules where helicity is induced by complexation with a chiral guest [22][23][24][25] . Over several decades NMR spectroscopic methods, which depend on a chiral auxiliary such as chiral derivatization agent [26][27][28] , chiral lanthanide shift reagents 26,29 and chiral solvating agents (CSA) [27][28][29][30] have been extensively studied. NMR discrimination methods that do not use an auxiliary have also been developed [31][32][33] although they are considered unusual as selfassociation between chiral analytes is required in the formation of diastereomeric dimers. Use of an achiral auxiliary in conjunction with 31 P NMR spectroscopy has also been reported 34 . However, in that case an achiral derivatization agent was used as a platform for covalent coupling of two chiral analyte molecules resulting in diastereomer formation and three well-separated NMR resonances could be observed. We have previously reported a method that facilitates the latter concept by using a solvating agent where a host-guest complex (formed via non-covalent interaction) between an achiral host and two chiral guests is formed 35,36 . In that case, formation of two rapidly exchanging diastereomers occurs and resulting NMR spectra show two distinct resonances whose separation reflects proportionally the enantiomeric excess of the chiral guest. 
Chiral liquid crystals [37][38][39] have been used as anisotropic solvents to differentiate enantiotopic directions of prochiral analyte molecules without diastereomer formation although in that case the analyte resonances are observed (however, if chiral analyte molecules are dissolved in chiral liquid crystals diastereomeric pairs are formed). Based on these examples it seems that NMR, as an intrinsically achiral method, lacks the capability to sense chirality or enantiomeric excess (e.e.) in the absence of a chiral auxiliary, or of self-association of chiral analytes, or without formation of at least 1:2 complexes of achiral agent:chiral analyte. In all cases, formation of diastereomeric species occurs in solution and utilization of NMR in conjunction with an achiral reagent in non-diastereomeric binding mode as a probe of enantiopurity remains challenging. Such processes where 'achiral' can sense 'chiral' might have connotations for enantioselective amplification, as well as for the origin of homochirality in living organisms 40 . In this work, we introduce a 1 H NMR versatile prochiral solvating agent (pro-CSA) for determination of chirality and e.e. which does not rely on formation of diastereomeric pairing between prochiral host and chiral analyte; rather, a pair of enantiomeric complexes are formed. Detection of information about a chirality/e.e. without employing formation of diastereomers has not to our knowledge been reported and is an unusual and intriguing aspect of our pro-CSA host. The pro-CSA host molecule contains appropriate prochiral CH reporter groups, which facilitate e.e. discrimination. The nature of chirality detection and e.e. sensing are rationalized in terms of 1:1 hostguest complexation between pro-CSA molecule, represented by the N,N 0 -disubstituted oxoporphyrinogen host molecule H, and various chiral guest molecules G Ã , including carboxylic acids and esters, amino acid esters, N-protected amino acids, and some terpenoids such as menthol and camphor. Experimental and theoretical methods (NMR, uv/Vis, CD, quantum mechanical (QM) calculations and classical molecular dynamics (MD) simulations) are used to elucidate the chirality and/or e.e. sensing properties of the system presented here. Results Symmetry and origin of chemical shift non-equivalence. The e.e. discrimination method presented here relies fundamentally on the concept of prochirality 41 , defined as 'the geometric property of an achiral object (or spatial arrangement of points or atoms) which is capable of becoming chiral in a single desymmetrization step' 42 . It is known that when a prochiral compound (for example, an achiral primary carboxylic acid Ph(CH 2 )CO 2 H) is placed in a chiral environment (for example, ( þ )-(1R,2R)-1,2diphenylethane-1,2-diamine) forming an acid-amine complex, the enantiotopic (prochiral) methylene hydrogens of the acid exhibit chemical shift non-equivalency and thus are rendered diastereotopic 43,44 . The pro-CSA, H, presented here contains enantiotopic CH reporter groups as shown in Fig. 1a. When an enantiopure chiral guest (Fig. 1b) is added to a solution of host molecule H symmetrical splitting (sometimes with an associated up-or downfield shift) of the resonance due to the reporter groups can be observed (Fig. 1c). H complexes with a chiral guest by hydrogen bonding through pyrrolic NH groups located at its centre resulting in downfield shift of the NH proton resonance (Fig. 1c). 
Taking into account the host's saddle shape and the relative rigidity of its macrocycle, the distance of the available CH reporter groups from the binding site has an important role. Each CH group exhibits different sensitivity (that is, magnitude of nonequivalency) to different chiral guests (Fig. 1c). As H has four such groups of practical use at different spatial positions (accessibility) and distances from the binding site there is a good chance that any chiral guest with affinity to bind to H will cause non-equivalency in selected or all of the CH reporter groups. At this point, the origin of chemical shift non-equivalency of CH reporter groups can be elucidated in terms of the symmetry of H and disruption thereof upon interaction with a chiral guest. The term 'prochirality' is frequently used to describe systems where replacement of a single ligand leads to a stereogenic centre 45 . The equivalent of this in non-covalently interacting systems would be the formation of diastereotopic nuclei from initially enantiotopic nuclei due to an external factor as illustrated above in the case of acid-amine complexes where the methylene carbon of Ph(CH 2 )CO 2 H can be considered a 'stereogenic centre'. However, the CH reporter groups of H do not posses the property that any of its carbon atoms could be converted to a 'stereogenic centre'. For simplicity let us consider the complexation of H with 1a where the CH reporter groups of interest are at the b positions of the N-alkylated pyrrole group (b-H; Fig. 2a,b). In the structure of H, two prochiral planes 44 are present (Fig. 2a). More specifically, two onefold improper rotations S 1 ( ¼ mirror planes s d ), which are identical with prochiral planes, can be identified. These mirror planes make group A and B of b-H interconvertible and hence enantiotopic (magnetically equivalent with isochronous NMR resonances) 44,46 . Note that H also possesses a proper symmetry axis C 2 , which might suggest that groups A and B are homotopic hence isochronous even in a chiral environment. However, groups A and B (as defined in Fig. 2a) cannot be interconverted using C 2 . Therefore, in general, b-H nuclei are required to be identified as groups A and B so that they are interconvertible by improper rotation S n (including plane s or centre of symmetry i) and not by operation of proper symmetry axes C n 44 . In that case, groups A and B are enantiotopic. The presence of a chiral guest leads to magnetic nonequivalence of group A and B as it affects each group of protons in a different manner as can be illustrated (from MD simulations) by the preference of certain orientations of guest occupation (that is, conformations) over others as shown in Fig. 2c,d. A chiral environment breaks mirror symmetry and the initially enantiotopic groups A and B become diastereotopic with distinguishable anisochronous NMR resonances (Fig. 2b). Note that group A and B are contained in the same molecule so that no actual diastereomers are present. This is also the reason why at any arbitrary e.e. the split peaks have the identical intensities. The second-order quartet (which exhibits roofing) appears as a consequence of the close structural connection between H A and H B protons, which have a vicinal scalar 3 J-coupling constant. From QM calculations it can be identified which group of protons will experience upfield or downfield shifts (see Supplementary Fig. S4). 
QM results for the H Á (R)-1a complex reveal that group A is shifted relatively upfield compared with group B and vice versa due to symmetry of the complex H Á (S)-1a. NMR spectral properties and binding model. Hydrogen bonding between H and chiral guest appears to be weak so that a relatively high concentration of chiral guest is required to obtain well-resolved spectral lines (Fig. 1c). Interestingly, if the concentration of chiral guest is kept constant and e.e. is varied then the magnitude of splitting Dd of spectral lines varies linearly with e.e. (Fig. 3a-d, for other guests see Supplementary Figs S5-S14). The linear dependence of Dd on e.e. is a fundamental property of pro-CSA molecules as will be shown later. When determining the true resonance frequency of the N-alkylated pyrrolic b-H, N-H pyrrolic b-H and hemiquinonoid ortho-H spin systems it must be taken into consideration that these spins are second-order strongly coupled systems, and the fitting procedure should be carried out with appropriate care (see 1 H NMR spectra fitting in Supplementary Methods). The characteristic roofed shape of the split N-alkylated pyrrolic b-H resonances and the cross-peaks between them in 1 H-1 H COSY spectrum ( Fig. 3e) clearly indicates that both peaks originate from the same host molecule. Similar cross-peaks were observed in N-H pyrrolic b-H and hemiquinonoid ortho-H in, for example, H Á (-)À4 complex (see 1 H NMR spectra fitting in Supplementary Methods). As e.e. tends towards 0% (that is, racemic mixture), a single set of sharp wellresolved resonances remain. At 0% e.e. these converge to a centred singlet for H Á 1a, H Á 2a and a doublet for H Á 4 (Fig 3a-c). The lack (at 0% e.e.) of two sets of peaks, corresponding to stable complexes of H Á (R)-enantiomer and H Á (S)-enantiomer, indicates fast exchange of chiral guests at the host's binding site. The above-mentioned features helped us to elucidate the origin of the linear relationship between magnitude of splitting Dd and e.e., as well as a model of the host-guest binding mechanism. The mechanism of binding should be considered as competitive involving binding of water molecules W and the chiral guest G Ã to the host molecule H, as titration of a chloroform solution of H with W yields a not insignificant response (and high binding constant) to traces of water (Fig. 4a, for 1 H NMR spectra see Supplementary Fig. S15) 47 . Moreover, water is ubiquitous and is difficult to remove from chloroform, which also explains why a relatively high concentration of G Ã is required to obtain wellresolved split NMR spectra. Some examples of titrations of H with 3 are shown in Fig. 4b-e (for other guests and other types of titrations see Supplementary Figs S16-S25). For the two types of titration shown it can be observed that the chemical shift of pyrrolic NH resonance (d NH ) increases monotonically while splitting of N-substituted pyrrolic b-H (Dd) does not necessarily increase monotonically (compare Fig. 4b,c). These observations suggest that the NH group reflects only the overall binding of guests (G Ã or W) while the pyrrolic b-H protons are sensitive not only to overall binding but also to the nature of chiral guest and its e.e. Therefore, binding isotherms are required to be constructed (and fitted) simultaneously for both d NH and Dd, as they each partially express the same characteristic (binding strength). 
Binding isotherms for δNH and Δδ can be constructed based on a 1:1 host:ligand competitive binding model (where the ligand can be the chiral guest G* or water W), as described by the equilibrium equations 1 (Fig. 5a). It is assumed that all species are in fast exchange (Fig. 5a) and that binding of the (R)- or (S)-enantiomer to the host is statistical due to the symmetry of H. Then δNH and Δδ can be expressed in the form of equations 2 and 3, respectively (for the full derivation see Methods), and the equilibrium binding constants and the related fitting parameters (Table 1) were calculated (see Supplementary Fig. S4). From the data contained in Table 1 it can be seen that all the chiral guests studied have a low equilibrium binding constant, so that direct confirmation of the 1:1 binding stoichiometry using Job's plot is not feasible (ref. 48). The form of equation 3 has some interesting consequences. It indicates a linear dependence on e.e., as well as on the population of the H·G* complex. The apparent universal linearity of the Δδ/e.e. plots indicates in turn that a calibration curve can always be constructed from a single measured point obtained from a sample of enantiopure analyte or a sample of independently measured e.e. (considering that the second point required for the construction of a calibration curve has zero chemical shift non-equivalence, Δδ = 0, for the racemate, where e.e. = 0). Also, the value of Δδ can be obtained by measurement only as a non-negative value; hence, only the absolute value of e.e. can be determined and the enantiomeric identity remains unknown (for example, a measurement gives e.e. = 80% but the excess enantiomeric form is not known). This fundamental indistinguishability originates from the symmetry between the enantiomeric complexes H·(R)-G* and H·(S)-G* and is present for all pro-CSA. It can be overcome only when a small amount of a known enantiomer of the guest is added: Δδ is either increased when the same enantiomer (as that in excess in solution) is added, or decreased when the other enantiomer is added (ref. 36) (an example can be seen in Fig. 4b,d compared with Fig. 4c,e). In order to investigate the transfer of chiral information from guest to host molecule, the uv/Vis and CD spectra were recorded for most of the H·G* pairs (Supplementary Figs S26-S32). However, the CD signal induced in the structure of H upon binding of G* is very weak (compared with other systems, for example, Mizuno et al., ref. 14) and can be detected only when large quantities of G* are added to the solution. This is due to the rigidity of the host's macrocyclic structure. Temperature effects on the efficiency of e.e. sensing were explored using variable temperature (VT) 1H NMR measurements (Supplementary Fig. S33). It can be seen that the chemical shift non-equivalency Δδ decreases with decreasing temperature. This behaviour is rather unusual, since the opposite (increasing chemical shift non-equivalency) would normally be expected with decreasing temperature. However, it is a consequence of the competitive binding model, where at lower temperature water binds to the host more strongly than the chiral guest, leading to a decrease of the overall Δδ value (which is reflected in equation 3 as a decrease of the population of the H·G* complex, p_HG*). The VT-1H NMR measurements also show that ambient temperature is optimal for e.e. analysis, as it gives Δδ close to the maximum values, which is practical from the point of view of applications. Discussion Conventional CSA and the pro-CSA sensing method presented here have different characteristics.
When a conventional diastereomer-forming reagent, for example a CSA, is used two separate resonances for each enantiomer appear although they cannot be unambiguously assigned to a particular enantiomer. Additional molecular modelling or chiroptical methods (for example, CD) are required to assign enantiomer identity. However, the value of e.e. can be obtained by integration of separate resonances without having an authentic sample of analyte in advance. The non-diastereomeric sensing mechanism of pro-CSA differs conceptually from but is complementary to the existing conventional methods. From the point of practicality and potential use, the pro-CSA H presented here offers some noteworthy properties. First, when e.e. of the same analyte is required to be determined repetitively (for example, as in chemicals or pharmaceuticals industries) a calibration curve using pure enantiomers can be constructed once, which facilitates subsequent rapid analyses of e.e. by using a simple protocol. In this way, particular host-guest pairs would possess characteristic values of Dd under the specified conditions. This method is then also appropriate for use in various enantioenrichment (Soai reaction 49 ) studies as a rapid probe of e.e. values. In fact, in any case where initial purification (by chiral high-performance liquid chromatography or crystallization) of a chiral analyte is required (perhaps for regulatory purposes) our method should be applicable. It is also possible to perform in situ NMR monitoring of variation of e.e. without using a chiral probe, as a chiral probe might bias the outcome of, for instance, enantiomer enrichment processes [50][51][52] . Additionally, when a particular analyte is examined using conventional CSA, there exist a set of CSA suitable for each type of analyte (for example, analyte-CSA: carboxylic acids-chiral amines, aminoalcoholscarboxylic acids). Pro-CSA H based on N,N 0 -disubstituted oxoporphyrinogen is very versatile in this respect as it provides reasonable chemical shift non-equivalence for a great variety of analytes of different functionality. Another significant feature is that chiral information is first translated from analyte to H and then read out from H (in contrast to conventional CSAs) and, as H operates by hydrogen bonding of the analyte, it can be used to analyse many differently functionalized compounds. As a result of this, H has four sets of CH reporter groups, which can be varied synthetically (for example, to C-F groups) so that chiral information can be retrieved from 19 F NMR spectra where other resonances due to analyte and solvent are absent. Finally, this system is also of note from the point-of-view that it does not rely on the formation of diastereomers. This is perhaps counterintuitive given the prevailing opinion that formation of diastereomers is required in the analysis of e.e. In summary, a symmetrical a-/prochiral macrocyclic N,N 0disubstituted oxoporphyrinogen host molecule can be utilized as a detector of enantiomeric purity for a wide variety of guest molecules. We have shown that no diastereomeric species are present in solution as complexes with each enantiomer have identical properties. The mechanism of chiral sensing can be rationalized in terms of the host's symmetry and its breakdown when individual host molecule interacts with chiral guest under a fast exchange regime. Chirality sensing (that is, identification of the major enantiomorph) can be achieved by addition of a portion of known enantiomer guest. 
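As a small illustration of the single-point calibration protocol mentioned above: because the splitting Δδ vanishes for the racemate and scales linearly with |e.e.|, one reference measurement on an enantiopure (or independently assayed) sample fixes the calibration line. The sketch below is schematic, and the numerical values used in the example are purely illustrative.

```python
def ee_from_splitting(delta_sample_hz, delta_enantiopure_hz):
    """Single-point calibration: |e.e.| in percent from the observed splitting.
    Only the magnitude of e.e. is obtained; the identity of the excess
    enantiomer must be established separately, e.g., by spiking with a
    known enantiomer and checking whether the splitting grows or shrinks."""
    if delta_enantiopure_hz <= 0:
        raise ValueError("reference splitting must be positive")
    return 100.0 * abs(delta_sample_hz) / delta_enantiopure_hz

# illustrative numbers: 3.1 Hz splitting for the sample, 3.9 Hz for the
# enantiopure reference measured under identical conditions
print(round(ee_from_splitting(3.1, 3.9), 1))  # about 79.5 % e.e.
```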
When a symmetrical molecule is adapted as an e.e. sensor it has certain intrinsic advantages such as identical binding constants for each enantiomer resulting in e.e. determination, which is not obscured by kinetic resolution. The results presented here indicate that other symmetrical molecules should also be available as detectors of enantiopurity using NMR spectroscopy. Such systems might also improve our understanding of important chirality principles such as majority rule, chirality transfer between molecules and asymmetric reactions. Methods Materials. Chloroform-d (Cambridge Isotope Ltd.) for 1 H NMR spectroscopy was mostly used as received. In some specific titrations residual water was removed using 4 Å molecular sieves (Wako Chem. Co.). Before use molecular sieves were activated at ) and potassium carbonate (10 equiv.) were stirred in refluxing acetone (100 ml) for 2 h then acetone was removed under reduced pressure. Crude product was passed through a silica gel column (eluent: dichloromethane, DCM) to yield the final product as a colourless liquid. D-valine methyl ester and L-valine methyl ester were synthesized from D-valine methyl ester hydrochloride and L-valine methyl ester hydrochloride (both TCI Co. Ltd.), respectively, through neutralization of the corresponding hydrochloride salts. The hydrochloride salt was dissolved in pure water and neutralized using an excess of potassium carbonate solution (pH ¼ c.a. 12) then the aqueous solution was extracted three times using DCM. DCM fractions were combined then dried over anhydrous sodium sulphate. Subsequent filtration and removal of DCM under reduced pressure yielded the final product as a white solid. General spectroscopic methods. 1 H NMR spectra were obtained at 25°C (unless stated otherwise) using an AL300BX spectrometer (JEOL) operating at 300.4 MHz. When variable temperature measurements (VT-1 H NMR) were performed the appropriate JEOL temperature control unit (temperature fluctuation ± 0.3°C) was used. Before the experiment, host H was dissolved in chloroform-d (usually ca. 0.6 mM) in an NMR tube and subsequently ligand (chiral guest or water) was added/injected directly into the NMR tube. Molar ratio of ligand and host was carefully determined by integration of the corresponding peaks in spectra. All spectra were referenced to tetramethylsilane. Numbers of scans typically used (for ca. 0.6 mM of host molecule H): 16-32 scans for samples up to 500 equiv. of guest G Ã ; 32-128 scans for samples containing from 500-1,000 equiv. of G Ã ; 128-512 scans for samples containing in excess of 1,000 equiv. G Ã . Electronic absorption spectra and CD spectra were measured using a J-820 CD spectrophotometer (JASCO) at room temperature (ca. 25°C). When titration experiments were performed a chloroform-d solution of host H (ca. 7 Â 10 À 5 M) was prepared and 0.17 ml of this solution was placed in a 1-mm pathlength quartz cell. In order to maintain host concentration constant chloroform-d stock solutions of host H with various chiral guests were prepared ([H] t ca. 7 Â 10 À 5 M, [G Ã ] t up to ca. 1.5 M, subscript 't' denotes total concentration) and subsequently titrated into the host solution by using a syringe (0-600 ml). Then uv/Vis and CD spectra were recorded simultaneously. Derivation of competitive binding model. From equilibrium equations 1, the equilibrium binding constants and mass balance equations can then be expressed as equations 4. Methods). 
Then the populations of all the host forms (one free and two bound) can be expressed in the form of equations 5. Because of fast exchange between the three states of the host, H·G*, H and H·W (Fig. 5a), the resulting chemical shift of the pyrrolic NH, δNH, takes the form of a population-averaged chemical shift of the respective states (Fig. 5b) and can be expressed as equation 2. Similarly, a Δδ binding isotherm (describing chemical shift non-equivalence) can be constructed under the following assumptions. For reasons of symmetry, we assume that the binding of the (R)- or (S)-enantiomers to the host is statistical. When a chiral guest G* binds to the host, the probability that the site is occupied by the (R)- or (S)-enantiomer can be determined as a function of e.e. by equations 6, which give pR(e.e.) and pS(e.e.), where [GR]t and [GS]t are the total concentrations of the (R)- and (S)-enantiomer, respectively. As mentioned above, the ¹H–¹H COSY spectrum of the pyrrolic β-H resonance indicates that both split peaks must belong to the same pyrrolic ring. Moreover, the mirror symmetry of the host's binding site and the intrinsic saddle shape of the host's macrocycle cause a preference for certain mutual host–guest conformations (see MD study, Fig. 2c,d) and therefore do not permit isotropic rotation of the guest at the binding site. Thus, when a chiral guest is present, an asymmetric shielding effect at (for example) the pyrrolic β-H positions occurs. In this situation, the chemical shift of the respective pyrrolic β-H is given by equations 7.
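Because the displayed equations referred to above (equations 1, 2, 4–6) are not reproduced in this extracted text, the following sketch records a standard form of a 1:1 competitive binding model consistent with the description given here; it is a reconstruction under those assumptions, not the authors' exact notation.

$$H + G^{*} \rightleftharpoons H\!\cdot\!G^{*},\quad K_{G^{*}} = \frac{[H\!\cdot\!G^{*}]}{[H][G^{*}]},\qquad H + W \rightleftharpoons H\!\cdot\!W,\quad K_{W} = \frac{[H\!\cdot\!W]}{[H][W]},$$

with the mass balance $[H]_{t}=[H]+[H\!\cdot\!G^{*}]+[H\!\cdot\!W]$. The populations $p_{H}=[H]/[H]_{t}$, $p_{HG^{*}}=[H\!\cdot\!G^{*}]/[H]_{t}$ and $p_{HW}=[H\!\cdot\!W]/[H]_{t}$ then give, under fast exchange, the population-averaged shift

$$\delta_{NH} = p_{H}\,\delta_{H} + p_{HG^{*}}\,\delta_{HG^{*}} + p_{HW}\,\delta_{HW},$$

and statistical binding of the enantiomers yields the occupation probabilities (with e.e. expressed as a fraction)

$$p_{R}(\mathrm{e.e.}) = \frac{[G_{R}]_{t}}{[G_{R}]_{t}+[G_{S}]_{t}} = \frac{1+\mathrm{e.e.}}{2},\qquad p_{S}(\mathrm{e.e.}) = \frac{1-\mathrm{e.e.}}{2}.$$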
2018-04-03T01:33:08.750Z
2013-07-17T00:00:00.000
{ "year": 2013, "sha1": "6f580e282faa350abc56f005ae6bd64ef4a98005", "oa_license": "CCBYNCND", "oa_url": "https://www.nature.com/articles/ncomms3188.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b2f990400cd45972ed626f8ab777df316b45ddb3", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
269780858
pes2o/s2orc
v3-fos-license
Challenges and Opportunities in Intellectual Property Rights (IPR) in the Age of Generative AI: Balancing Innovation and Protection : The advent of Generative Artificial Intelligence (AI) has ushered in a new era of innovation, fundamentally altering the landscape of Intellectual Property Rights (IPR). This research paper aims to explore the intricate balance between fostering AI - driven creativity and safeguarding individual intellectual contributions. Generative AI, with its capability to produce original content, ranging from literary works to scientific research, poses a significant challenge to traditional notions of IPR, which are predicated on human ingenuity and individual creativity. The paper delves into the current legal frameworks governing IPR and examines their adequacy in addressing the complexities introduced by AI - generated content. It highlights key instances where AI has independently created works that could potentially qualify for copyright, raising questions about authorship and originality in the digital age. Furthermore, the paper explores the ethical and economic implications of AI in the realm of IPR, considering both the potential for AI to democratize content creation and the risks of undermining human creativity. The research adopts a multidisciplinary approach, drawing insights from legal studies, technology, and ethics, to propose a revised model of IPR that accommodates the unique characteristics of AI while protecting the rights and incentives of human creators. Introduction In the rapidly evolving digital landscape, the advent of Generative Artificial Intelligence (AI) has marked a transformative era in the realm of creativity and innovation.This technological leap forward presents both unprecedented opportunities and significant challenges for Intellectual Property Rights (IPR).The intersection of AI with IPR raises critical questions about authorship, originality, and the very nature of creativity [4,37].As AI systems become increasingly capable of generating artistic works, literary compositions, and even scientific research, the traditional boundaries of IPR are being redefined [3,5]. The core of IPR has always been to protect and incentivize human creativity and innovation.However, the emergence of AI as a non -human creator challenges this paradigm [23].The legal frameworks that currently govern IPR were not designed to accommodate the creative outputs of AI, leading to a legal and ethical conundrum [25,27].This paper seeks to explore the complexities introduced by generative AI in the context of IPR.It delves into the legal ambiguities, ethical considerations, and the potential need for policy reform to balance the protection of human creators with the innovative capabilities of AI [11,34]. Moreover, the economic implications of AI in the domain of IPR cannot be overlooked.AI's ability to enhance creativity and generate novel content opens new avenues for market expansion and business models, yet it also poses risks of undermining the economic value of human -generated intellectual property [15,28].This paper aims to provide a comprehensive analysis of these challenges and opportunities, offering insights into how IPR can evolve in the age of generative AI to foster an environment where innovation and protection coexist harmoniously. 
Overview of Intellectual Property Rights (IPR) Intellectual Property Rights (IPR) are legal rights that provide creators protection for their inventions, literary and artistic works, symbols, names, and images used in commerce.These rights are crucial in fostering an environment where creativity and innovation can flourish.IPR is typically categorized into patents, copyrights, trademarks, and trade secrets, each serving a unique function in protecting different forms of intellectual creation [40]. Patents protect inventions, allowing inventors exclusive rights to their creations for a limited period, typically 20 years.This exclusivity incentivizes innovation by providing inventors the opportunity to monetize their inventions [1].Copyrights, on the other hand, protect original artistic and literary works, including books, music, and software.Copyright law grants authors exclusive rights to their works, thereby encouraging creative expression [7]. Trademarks protect symbols, names, and slogans used to identify and distinguish products or services in the market.This protection helps businesses build brand identity and consumer trust, which is essential in a competitive marketplace [12].Lastly, trade secrets encompass formulas, practices, processes, designs, instruments, or patterns used for business purposes.The law protects undisclosed trade secrets to maintain competitive advantages and stimulate business innovation [35]. The evolution of IPR has been influenced by the need to balance the rights of creators with the public interest.balance is intended to encourage the creation and dissemination of new works while ensuring that the public can benefit from these creations [2].However, the advent of digital technology and the internet has introduced new challenges in IPR management, including issues related to digital piracy and the reproduction of copyrighted material [24]. Therefore, Intellectual Property Rights play a pivotal role in the modern economy by protecting the rights of creators and innovators, thereby fostering a culture of creativity and progress.The ongoing evolution of IPR reflects the dynamic nature of innovation and the need to adapt to new technological realities. Emergence of Generative AI The emergence of Generative Artificial Intelligence (AI) marks a significant milestone in the field of technology. Generative AI refers to algorithms that can generate new content, from text to images, after learning from a vast dataset.This technology, powered by advancements in machine learning and neural networks, has revolutionized content creation [13].Notably, Generative Adversarial Networks (GANs) have been pivotal in this evolution, enabling the creation of highly realistic and diverse outputs [21].The capabilities of generative AI extend beyond mere replication, venturing into the realms of innovation and creativity, thus presenting both opportunities and challenges in various sectors, including art, literature, and research [11]. Challenges The integration of Generative AI in Intellectual Property Rights (IPR) presents challenges such as defining authorship for AI -generated content, addressing copyright infringement risks, and updating legal frameworks.These challenges necessitate a reevaluation of traditional IPR concepts to accommodate the unique nature of AI -driven creativity and innovation. 
Authorship and Originality "Authorship and Originality" in the context of Generative AI and Intellectual Property Rights (IPR) confronts the intricate challenge of attributing creation and originality in the era of AI.Traditionally, these concepts have been intrinsically linked to human intellect, forming the cornerstone of copyright law [5].However, the advent of AI, capable of generating art, literature, and music, has blurred the lines of authorship.The question arises: who is the true author of AI -generated content?Is it the AI algorithm, its developer, or the data source?This dilemma is pivotal as it influences the distribution of rights and rewards in creative domains [20]. The situation demands a reevaluation of existing IPR frameworks to accommodate the evolving landscape of creativity, where the distinction between human and machine -generated content is increasingly ambiguous [3,27]. Copyright Infringement Risks The rise of Generative AI in content creation has intensified the risks associated with copyright infringement, significantly impacting the landscape of Intellectual Property Rights (IPR).AI's ability to synthesize and reproduce content based on existing works poses a substantial challenge in distinguishing between original creation and unauthorized derivative works.This situation raises critical legal questions about the extent to which AI -generated content might inadvertently infringe upon existing copyrights, especially when such content closely resembles or is derived from copyrighted material [23]. Furthermore, the difficulty in tracing the origins of AIgenerated content complicates the enforcement of copyright laws.Traditional copyright infringement assessments, which rely on human intent and direct copying, are not readily applicable to AI, where the 'intent' is ambiguous and the 'copying' process is inherently complex and often opaque [6,34].This scenario necessitates a rethinking of copyright frameworks to effectively address the nuances of AI -driven content creation, ensuring that the rights of original creators are protected while also recognizing the innovative contributions of AI technologies [30,25]. AI and IPR Policy Gaps The integration of Artificial Intelligence (AI) into creative and innovative processes has exposed significant policy gaps in the realm of Intellectual Property Rights (IPR).One of the primary gaps is the inadequacy of current IPR laws to address the authorship and ownership of AI -generated works.Traditional IPR frameworks are built around human creators, leaving a legal vacuum when it comes to creations by non -human entities [3,5]. Another gap is the challenge in applying existing copyright norms to AI -generated content.The current copyright system is not equipped to handle cases where AI algorithms create works independently, raising questions about originality and derivation [6,34].This situation is further complicated by the difficulty in determining the liability for infringement when AI is involved, as traditional legal concepts of intent and knowledge are not easily applicable to machines [23]. Moreover, patent law faces challenges in addressing AI's role in the invention process.The question of whether AI can be considered an inventor, and if so, how the rights to such inventions should be allocated, remains unresolved [1,15]. 
These policy gaps necessitate a reevaluation and potential reform of IPR laws to effectively encompass the evolving landscape of AI -driven creativity and innovation, ensuring that the rights of human creators are protected while also fostering an environment conducive to technological advancement [27].frameworks, including copyright, patent, and trademark laws, struggle to address issues of authorship, originality, and liability in the context of AI [5,6].The need for legal reform is evident to bridge the gap between traditional IPR concepts and the evolving capabilities of AI [34]. 2.2.2 Limitations in Existing IPR Laws Existing Intellectual Property Rights (IPR) laws face limitations in addressing the complexities introduced by AI, particularly in defining authorship and ownership for AIgenerated creations.These laws, rooted in human -centric concepts of creativity and innovation, struggle to adapt to the autonomous nature of AI technologies [3].This gap highlights the need for legal evolution to encompass AI's role in creative processes [27,34]. Innovation and Creativity Enhancement In the dynamic landscape of generative AI, the interplay between intellectual property rights (IPR) and technological innovation presents both challenges and opportunities.This complexity is especially pronounced when considering the dual aspects of innovation and creativity enhancement, and the economic implications of AI. AI in Content Creation Generative AI has profoundly impacted content creation, offering tools that can generate text, images, and even music, revolutionizing how content is produced and conceived [36].This surge in AI -assisted creativity raises questions about the originality and ownership of AIgenerated content, challenging the traditional notions of intellectual property (IP). The primary challenge lies in defining the authorship of AIcreated content.For instance, should the IP rights of a piece of music created by AI belong to the programmer, the AI, or the user who provided the inputs?This dilemma has sparked debates in legal circles, with some arguing for a redefinition of authorship in the age of AI [9]. AI in Research and Development AI's role in research and development is another area of significant impact.AI algorithms can process vast datasets, identifying patterns and correlations that might elude human researchers, thus accelerating the pace of innovation [39].However, this also introduces challenges in IP rights, especially concerning inventions made with or by AI.For example, if an AI system autonomously designs a new chemical compound, who holds the patentthe AI, its developer, or the user? Market Expansion AI technology empowers businesses to explore new markets and address unmet customer demands by offering personalized products and services.By analyzing consumer data, AI identifies emerging trends and assists companies in creating targeted solutions, which contributes to economic growth and diversification [38]. Nevertheless, this expansion comes with challenges related to intellectual property rights (IPR).There's a considerable risk that AI might unintentionally replicate existing intellectual properties, a concern particularly acute in industries reliant on creative outputs.Balancing the respect for existing IPRs with the promotion of innovation presents a significant challenge. 
3.2.2 New Business Models AI is creating new business models, from subscriptionbased AI services to platforms offering AI -driven analytics.These models are reshaping industries, prioritizing efficiency, scalability, and customer centricity.Protecting the proprietary algorithms and data that power these models is crucial yet challenging due to the opaque nature of AI systems.This opaqueness can lead to inadvertent IPR violations or deliberate reverse engineering. Balancing Innovation and Protection Balancing innovation and protection in the context of Intellectual Property Rights (IPR) and Generative AI is a nuanced task.It involves ensuring that AI -driven creativity is fostered while safeguarding the rights of human creators.This balance is crucial for maintaining a healthy ecosystem of innovation.Current IPR laws need to evolve to address the unique challenges posed by AI, such as determining authorship and managing copyright in AI -generated works [5,6].Legal scholars advocate for a flexible IPR framework that recognizes both human and AI contributions, ensuring fair use and encouraging continued innovation [27,34]. The overarching challenge is balancing the encouragement of innovation with the protection of IP.One approach is updating the IPR framework to accommodate the unique aspects of AI -generated content and inventions.This might involve new categories of intellectual property or a redefinition of what constitutes an "inventor" [14]. International cooperation is also vital, as AI and its applications cross borders.Harmonizing laws across countries and creating international guidelines for AI and IPR is essential [20]. Revising IPR for the AI Era Revising Intellectual Property Rights (IPR) for the AI era is imperative to address the unique challenges posed by AI in creative domains.This revision involves redefining authorship, ownership, and infringement in the context of AI -generated works.Legal scholars suggest developing new frameworks or adapting existing ones to recognize AI's role in creativity and innovation [3,17].This includes considering AI as a tool or co -creator and determining the extent of rights and protections applicable to AI -generated content [21,27] 4.1.1 Proposals for Legal Reform Proposals for legal reform in the context of AI and Intellectual Property Rights (IPR) focus on adapting existing laws to the realities of AI -driven creativity.Legal experts suggest amendments to copyright and patent laws to accommodate AI's role in creation and invention.This includes recognizing AI as a tool or co -creator and clarifying the ownership of AI -generated works [5,8].Additionally, there's a call for establishing clear guidelines on liability and infringement in AI contexts [21,27].These reforms aim to protect human creators while fostering an environment conducive to AI -driven innovation. 4.1.2 Balancing Rights of AI and Human Creators Balancing the rights of AI and human creators in Intellectual Property Rights (IPR) is a complex endeavor requiring nuanced legal approaches.This balance involves recognizing the contributions of AI in creative processes while ensuring that human creators retain their fundamental rights and incentives.Legal scholars advocate for a dualsystem approach, where both AI -generated and humancreated works are acknowledged, each with tailored rights and protections [3,26].This approach aims to foster innovation and respect the unique contributions of AI, without undermining the value and rights of human creativity [21,29]. 
Ethical Considerations Ethical considerations in the realm of AI and Intellectual Property Rights (IPR) revolve around the responsible use and attribution of AI -generated content.This includes addressing concerns about transparency, accountability, and the potential for AI to replicate biases present in training data [4].Ethical frameworks are proposed to ensure AI's use aligns with societal values and respects the rights of human creators, while fostering innovation [32].These considerations are crucial for maintaining trust and integrity in AI advancements. Fair Use and AI Fair use in the context of AI involves adapting this legal doctrine to address the use of copyrighted material by AI for learning and content generation.This adaptation is challenging, as AI's use of data often exceeds traditional fair use boundaries [27].Legal scholars suggest redefining fair use criteria for AI, considering the transformative nature of AI -generated works and their impact on the original work's market [6].This is crucial for balancing copyright protection with innovation in AI technologies. Moral Rights and AI Moral rights in the context of AI and Intellectual Property Rights (IPR) involve the ethical and legal recognition of the interests of human creators in relation to AI -generated works.This concept, traditionally focused on the rights of human authors to protect their works against distortion or derogatory treatment, faces new challenges with AI.The question arises whether and how these rights apply when AI modifies or creates works based on human -authored content.Addressing moral rights in the AI era requires careful consideration of the creator's reputation and the integrity of the original work [6]. Case Studies and Examples In the realm of intellectual property rights (IPR), examining historical precedents and contemporary AI innovations offers valuable insights into the evolving challenges and opportunities in this field. Historical Precedents The Gutenberg Press, invented in 1440 by Johannes Gutenberg [17], serves as an early example of technology disrupting existing IPR norms.Before its invention, books were handwritten, making mass reproduction and distribution nearly impossible.The printing press democratized information but also necessitated the development of copyright laws to protect authors and publishers in the era of mass production. Another significant case is Sony Corp. v. Universal City Studios in 1984, also known as the "Betamax case." This U. S. Supreme Court decision highlighted the delicate balance between protecting copyright holders and the public's interest in new technology.The ruling in favor of Sony, allowing home videotaping of television programs for personal use, set a precedent for considering the implications of new technologies on IPR [18]. Contemporary AI Innovations In recent years, AI advancements have presented new challenges and opportunities for IPR.For instance, DeepMind'sAlphaGo, an AI that defeated the world champion in the game of Go in 2016, raised questions about the intellectual property of strategies developed by AI.Can these strategies be considered new intellectual creations, and if so, who owns them?Similarly, in 2018, Shutterstock used AI to combat copyright infringement.The company implemented AI algorithms to detect and prevent the unauthorized use of copyrighted images.This approach not only demonstrated AI's capability in enforcing IPR but also raised questions about AI's role in identifying and respecting copyright boundaries. 
The evolution of IPR in the face of technological advancements from the Gutenberg Press to modern AI innovations like AlphaGo and Shutterstock's AI demonstrates a constant need to adapt legal frameworks.As AI continues to evolve, it challenges traditional notions of authorship, creativity, and ownership, necessitating a reevaluation and potentially a redesign of IPR laws to keep pace with technological progress [16, 19]. Future Directions and Policy Recommendations The rapidly advancing field of generative AI presents unique challenges and opportunities for intellectual property rights (IPR), requiring proactive and strategic policy development.must anticipate future advancements and craft policies that not only protect intellectual property but also foster innovation and growth. Anticipating Technological Advancements The pace of AI development suggests that future AI systems will be more sophisticated, capable of creating increasingly complex and original works.As noted in [36], the evolution of AI could lead to systems that not only augment human creativity but potentially surpass it in certain domains.This raises critical questions about the nature of authorship and the definition of creativity in the context of IPR.Furthermore, as AI integrates more deeply into various industries, from pharmaceuticals to entertainment, the nature of inventions and creative works will change.A study by Roger [31] highlights that AI's capacity to analyze vast datasets could lead to breakthroughs in fields like drug discovery, necessitating a reexamination of patent laws and processes. Strategic Policy Development In response to these advancements, strategic policy development is crucial.Policymakers should consider a multi -faceted approach:  Updating Copyright and Patent Laws: Current IPR frameworks, primarily designed for human creators, must be adapted to address AI -generated works.This includes redefining authorship and ownership in the context of AI, as argued by Abbott [33]. Balancing Protection and Innovation: Policies should strike a balance between protecting creators and not stifling AI -driven innovation.The role of open -source AI models, as discussed in Kop's [22], is pivotal in promoting collaborative innovation while respecting IPR. International Cooperation: Given AI's global reach, international cooperation is vital for harmonizing IPR laws.The World Intellectual Property Organization (WIPO) advocates for such collaboration in their 2021 report on AI and intellectual property policy. Ethical Considerations: Policies must also address ethical considerations in AI, as highlighted by the [10].This includes transparency, accountability, and respecting human rights in AI development and deployment.The future of IPR in the age of generative AI requires a forward -thinking and nuanced approach.Policymakers must anticipate technological advancements and develop strategic, adaptable policies that safeguard intellectual property while promoting ethical and sustainable innovation.By doing so, the legal framework for IPR can evolve in tandem with the transformative potential of AI. 
Conclusion In conclusion, the intersection of Artificial Intelligence (AI) with Intellectual Property Rights (IPR) presents a complex and evolving landscape. As AI continues to advance, it challenges traditional notions of creativity, authorship, and ownership, necessitating a reevaluation and potential reform of existing IPR frameworks. The legal system must adapt to balance the rights and interests of human creators with the innovative capabilities of AI, ensuring fair use, fostering innovation, and maintaining ethical standards. This balance is crucial for encouraging continued technological advancement while protecting the fundamental rights of creators. The future of IPR in the age of AI will likely involve a dynamic and responsive legal system that can accommodate the unique characteristics of AI-generated content and address the ethical, legal, and economic implications of this new form of creativity. Embracing these challenges and opportunities will be key to ensuring a fair, innovative, and prosperous digital future. Current legal frameworks for Intellectual Property Rights (IPR) are primarily designed for human creators, leading to challenges in accommodating AI-generated works. Such revisions aim to balance the promotion of innovation with the protection of human creators' rights in the evolving digital landscape.
2024-05-16T15:14:26.843Z
2024-02-05T00:00:00.000
{ "year": 2024, "sha1": "82e769a4f76243c20e36c62696dfb19318439abf", "oa_license": null, "oa_url": "https://doi.org/10.21275/sr24209120048", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "1eb2b118be9de3661ba0a6ace6fc27f96988f6e3", "s2fieldsofstudy": [ "Computer Science", "Law" ], "extfieldsofstudy": [] }
109930617
pes2o/s2orc
v3-fos-license
Morphology and Thermal Properties of Poly ( 3-hydroxybutyrate-co3-hydroxyvalerate ) / Attapulgite Nanocomposites Poly(3-hydroxybutyrate-co-3-hydroxyvalerate) – PHBV is a biodegradable polyester which has been studied as an option for the production of disposable goods. Attapulgite is a fibrous clay mineral. The aim of this work was to produce and characterize renewable resource derived-nanocomposites based on PHBV and organophilic attapulgite (MAT). The nanocomposites were characterized by XRD, SEM and thermal analysis. It was observed reduction in degree of crystallinity, in melting and glass transition temperatures and in thermal stability of polymer due to the addition of clay to PHBV matrix. The best results were obtained for PHBV films containing 3 and 5% MAT. These films presented a slight increase in processing window and decrease in crystalline temperature and in degree of crystallinity as compared to pure PHBV. Introduction In the last decade, there has been increasing concern over the harmful effects of conventional synthetic plastic materials in the environment.This ecological awareness stimulated the development of new biodegradable materials, especially for single-use plastic items.Poly (3-hydroxybutyrate) (PHB) and its copolymer poly (3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV) have attracted much attention as biocompatible and biodegradable thermoplastics with potential application in agriculture, marine and medicine fields.They are natural polyesters produced by various microorganisms in the form of intracellular granules as reserve material of carbon and energy.Despite of presenting properties similar to isotactic polypropylene, there are some limitations that hamper the wider scale application of PHBV, such as: high crystallinity, narrow processing window and higher production cost as compared to traditional polymers 1 .Moreover, the inclusion of HV units in the PHB crystalline lattice reduced nucleation and linear growth rate.Thus, PHBV presents low crystallization rates as compared to the time scale of most industrial moulding and fabrication process 2 . PHB and PHBV nanocomposites filled with layered clay such as montmorillonite, sepiolita and hectorite have been studied [2][3][4][5][6][7] .It was observed that the incorporation of clay to the polymeric matrix improves mechanical behaviour, thermal properties, gas barrier properties and biodegradation rates of the material as compared to neat polymer.The main advantage of nanocomposites is that enhanced properties can be obtained with a small amount of filler due to the large contact area between polymer and clay. Attapulgite, also known as palygorskite, is a clay mineral characterized by a short fibrous morphology 8 .It has a threedimensional structure composed of channels of about 3.7 × 6.4 Å, which have exchangeable cations and water.] . Attapulgite has been also used as nanofiller to improve properties of several polymers, such as polyuretane 9 , polypropylene 10 and polyamide 6 11 . The aim of this work was to prepare and characterize PHBV/ organophilic attapulgite (MAT) nanocomposites.The effect of MAT on the morphology and thermal properties of PHBV was investigated by scanning electron microscopy (SEM), X-ray diffraction (XRD), differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). 
Preparation of nanocomposites Attapulgite was organically modified according to the following procedure.Attapulgite was gradually added to a hexadecyltrimethylammonium chloride solution at 80 °C and vigorously stirred for 3 hours.The treated inorganic material was plenty washed with distilled water.The filtrate was titrated with 0.1 N AgNO 3 until no further formation of AgCl precipitated to ensure the complete removal of chloride ions.The filter cake was then placed in a vacuum oven at 80 °C for 5 hours.The dried cake was ground with mortar and pestle and screened with a 325-mesh sieve to obtain the organophilic clay (MAT).nitrogen and then were coated with a thin layer of gold prior to SEM observation.Thermogravimetric analysis (TGA) was carried out using a SDT 2960 Simultaneous DSC-TGA instrument under nitrogen flow of 100 mL/min at a heating rate of 10 °C/min from room temperature to 400 °C.DSC experiments were performed using a Perkin Elmer Pyris 6 Differential Scanning Calorimeter.Nitrogen was used as the purge gas for the DSC cell.Samples of about 3 mg were weighed and sealed in the aluminum DSC pans and placed in the DSC cell.They were first heated from room temperature to 200 °C at a rate of 10 °C/min.The samples were kept at 200 °C for 2 minutes and subsequently cooled to -20 °C at 40 °C/min.Then, the samples were re-heated to 200 °C at a rate of 10 °C/min.The thermal parameters were obtained from the second heating scan.The degree of crystallinity of PHBV in PHBV/MAT films (Xc) was estimated according to the Equation 1: where ΔH f is the enthalpy of fusion of sample, W PHBV is the weight fraction of PHBV in the sample and ΔH f ref is the enthalpy of fusion of 100% crystallized PHBV, 146 J.g -1 [12] . The nanocomposites were prepared by solution intercalation method 6 .PHBV was heat-dissolved in chloroform for 2 hours at 65 °C in order to obtain a 7% w/v PHBV solution.1-5 wt.(%) of MAT was added to the polymer solution.The resulting mixture was vigorously stirred and then aged for 3 days with occasional shaking.After this period, the suspension was heated for 1 hour at 60 °C, spread on glass moulds and allowed to dry at room temperature until complete evaporation of the solvent.The nanocomposites were stored at room conditions for at least 20 days prior to characterization analysis.Preliminary tests indicated that within this period samples achieved plateau of crystallinity due to secondary crystallization. 
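As a worked illustration of Equation 1, the short Python snippet below evaluates the degree of crystallinity from a measured enthalpy of fusion; it assumes the usual form Xc = ΔHf/(wPHBV·ΔHf,ref) × 100%, with the 146 J g⁻¹ reference value quoted in the text. The sample inputs are hypothetical and only illustrate the calculation.

```python
DELTA_H_REF = 146.0  # J/g, enthalpy of fusion of 100% crystalline PHBV (ref. 12)

def degree_of_crystallinity(delta_h_f, w_phbv, delta_h_ref=DELTA_H_REF):
    """Equation 1: degree of crystallinity Xc in %.

    delta_h_f : measured enthalpy of fusion of the film, J/g
    w_phbv    : weight fraction of PHBV in the film (e.g. 0.95 for 5 wt% MAT)
    """
    return 100.0 * delta_h_f / (w_phbv * delta_h_ref)

# Hypothetical example: a film with 3 wt% MAT and a measured ΔHf of 80 J/g
print(f"Xc ≈ {degree_of_crystallinity(80.0, 0.97):.1f} %")
```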
Characterization X-ray diffraction analysis of attapulgite and nanocomposites was carried out with a Rigaku-Miniflex diffratometer, operated with CuKα wavelength of 15.42 nm.Data were acquired at ambient temperature in the angular region (2θ) of 3-40° (scanning speed = 2°/minute).The morphology of the fractured surface of PHBV/MAT films was evaluated using a JEOL JSM-6460LV SEM equipment with an acceleration voltage of 15 kV.The samples were fractured in liquid to the (020), ( 110), ( 101), ( 111), ( 121) and (002) reflections of the orthorhombic crystalline lattice, respectively.It can be observed that the peak positions remain practically unchanged in PHBV/ MAT diffractograms.This fact suggests that the PHBV crystalline lattice do not change appreciably in the presence of attapulgite.The characteristic peak of attapulgite which appears at 2θ = 8.42º can be observed in PHBV/MAT nanocomposites diffraction patterns.This peak is attributes to the primary diffraction of the (110) crystal face 2 .Different from layered clay such as montmorillonite, whose diffraction peak shift significantly when clay aggregated/exfoliated, the attapulgite characteristic peak remains unchanged in nanocomposites, since unit layers in an attapulgite single crystal cannot be further separated.Similar to others polymeric nanocomposites 11 , it can also be noted that the intensity of this characteristic peak increases in a function of the amount of MAT presents in sample. Figure 2b shows DSC curves of second heating scan for PHBV and its nanocomposites.Table 1 summarizes the thermal properties obtained from this scan.Owing to the melting point of attapulgite being much higher than 1500 °C, DSC curves showed only the properties of PHBV matrix.The glass transition temperature (Tg) of the nanocomposites with 1% MAT increased as compared to PHBV films.Probably, the mobility of PHBV chains was constrained due to clay-PHBV interaction.This filler-matrix interaction might be related to the homogenous distribution of filler.On the contrary, the incorporation of higher amount of MAT led to a decrease in Tg for PHBV/3%MAT and PHBV/5%MAT.In these cases, the constrained Results and Discussion Figure 1 shows SEM images of typical regions of fractured surface of pure PHBV (a) and PHBV/MAT nanocomposites (b-d) films.White dots with nanometric and submicrometric in size can be visualized on the fractured surface of films containing 1% MAT (Figure 1b).These features can be attributed to the cross section of attapulgite bundles, which might be orientated in a parallel direction to the external surface of nanocomposite.These features are homogenously distributed within polymer matrix.The image of the fractured surface of PHBV/5%MAT (Figure 1c) shows smaller white dots but in this case, they are arranged in micrometric agglomerates.These agglomerates are non-homogenously dispersed within PHBV matrix.Moreover, elongated features which are mostly arranged in large micrometric agglomerates can be also visualized.These features might be attributed to attapulgite fibrils which are perpendicularly oriented in relation to external surface of the film.In some regions, voids around agglomerates are observed, indicating poor filler-matrix interface.The PHBV/3%MAT fractured surface (Figure 1b) presents intermediate morphology, since several submicrometric white dots along with micrometric fibrils agglomerates can be visualized. 
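The interplanar spacings behind the diffraction angles quoted above (for instance, the attapulgite (110) peak at 2θ = 8.42°) follow from Bragg's law, d = λ/(2 sin θ) for first-order reflections. A brief illustrative sketch in Python, using the standard Cu Kα wavelength of 0.15418 nm:

```python
import math

CU_K_ALPHA_NM = 0.15418  # standard Cu Kα wavelength, nm

def d_spacing_nm(two_theta_deg, wavelength_nm=CU_K_ALPHA_NM):
    """Bragg's law for n = 1: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_nm / (2.0 * math.sin(theta))

# Attapulgite (110) reflection reported in the text
print(f"2θ = 8.42°  ->  d ≈ {d_spacing_nm(8.42):.2f} nm")   # about 1.05 nm
```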
X-ray diffraction patterns of PHBV films and PHBV/MAT nanocomposites are presented in Figure 2a.The diffraction profile of PHBV sample is very similar of that presented for PHB homopolymer.This similarity was expected due to the low 3HV content of PHBV employed in this work 12 .The profile exhibits well-defined peaks (2θ) at 13.6°, 17.1°, 21.7°, 22.7°, 25.6° and 30.7°, which correspond Table 2).In Figure 3a, the mass plots of samples are shown, while the corresponding derivative of the data (DTG curves) is presented in Figure 3b.The temperature of maximum rate of weight loss (Td), the onset temperature (T onset ) and the endset temperature (T endset ) of samples are listed in Table 2. The TGA curve of pure PHB exhibits a single degradation step with onset temperature at 272.7 °C.A single peak observed in DTG curve confirms this degradation profile and indicates that the maximum rate of mass loss was achieved at 288.5 °C.It is well known that thermal degradation of PHBV occurs almost exclusively by a nonradical random chain scission mechanism involving a sixmembered ring transition state 15 . The thermal degradation process of PHBV/MAT nanocomposites also occurs in only one weight loss step.Nevertheless, in the presence of clay the curves are shifted to lower temperatures indicating a decreasing in thermal stability of nanocomposites compared to pure PHBV.The onset temperature (T onset ) was more affected since it was observed a shift of about -20 °C in relation to neat PHBV.Probably the quaternary ammonium salts used as organophilic modifier enhanced the PHA degradation.It was proposed that the surfactant starts to decompose according to the Hofmann elimination reaction or a nucleophilic attack of the ammonium counter-ion on the ammonium.The decomposition products, amines or acidic protons, could then enhance the random chain scission reaction of PHBV 16 . Conclusion PHBV/attapulgite nanocomposites were successfully obtained by solution intercalation method.The use of organophilic attapulgite led to significant changes in PHBV properties.The presence of action of MAT was reduced and the agglomeration of filler might become a predominant factor for Tg decreasing. For all samples, the presence of attapulgite reduced the crystallization temperature (Tc), indicating an enhanced crystallization ability of PHBV.It can be inferred that the attapulgite acted as nucleation agent during the nonisothermal crystallization process.It was already observed that several nanofillers can act as nucleating agent in PHB or PHBV matrix, such as organically modified montimorillonite (Cloisite 15B) 5 , cellulose nanowhiskers 13 and lignin fine powder 14 .In these cases, the filler reduces the energy barrier for polymer crystallization and increases the nucleating density, originating smaller spherulites in higher number than those of neat PHB or PHBV. There was a decrease in the melting temperature (Tm) of the nanocomposites compared with pure PHBV, excepted for PHBV/1%MAT nanocomposites.The presence of clay seemed to induce crystals defects and/or reduce crystallite sizes, which led to lower melting temperature in relation to neat PHBV 2,6 .However, Pan et al. 
11 considered that the decrease in Tm of polyamide 6 nanocomposites containing 8% attapulgite as compared to pure polyamide 6 was resulted from the reduction of filler-matrix interaction.For films with 3 and 5% MAT, there was a formation of two distinct populations of crystals, as evidenced by the presence of two endothermic peaks in DSC curve.This observation suggests that the formation of crystals with larger imperfections and/or smaller crystals, which melted at lower temperatures.According to Wang et al. 4 , the lower temperature peak corresponded to the melting of the crystals formed at the crystallization temperature while the higher temperature was related to the crystals rearranged during heating in the calorimeter. The presence of attapulgite reduced the crystallinity degree (X c ) of PHBV (Table 1).However, the amount of MAT did not influenced significantly X c values for nanocomposites.In the case of PHBV/1%MAT, X c might be reduced due to the restriction of the mobility of PHBV chains.Concerning the presence of 3 and 5% MAT, although the PHBV chains gained mobility in these nanocomposites, as shown by Tg values, the packaging of PHBV chains in crystalline regions might be impaired by the presence of clay particles with different orientations and not well-dispersed in matrix. The influence of attapulgite on the thermal stability of the nanocomposites was investigated by TGA analysis (Figure 3 and 274.9 260.9 277.0 organophilic attapulgite induced PHBV crystallization at lower temperatures, probably acted as nucleating agent.This result may be important for attenuating the problems associated to secondary crystallization of PHBV.These films showed a reduction in degree of crystallinity and a decrease in melting and glass transition temperatures at concentrations of 3 and 5%MAT in relation to unfilled PHBV.However, the incorporation of MAT may lead to an improvement in PHBV processing, since the range of the processing temperature of the nanocomposites (T onset -T m ) increased slightly when compared to pure PHBV films.The attapulgite is presented as an environmentally friendly, naturally abundant and low cost material, which can be used to improve PHBV-based bioplastics properties and to reduce their cost. Figure 2 . Figure 2. a) XRD diffraction patterns and b) DSC thermograms of second heating scan for pure PHBV and PHBV/MAT nanocomposites. Table 1 . Thermal properties obtained from DSC heating curves of second heating scan for pure PHBV and its nanocomposites with 1, 3 and 5%MAT. Table 2 . TGA results for pure PHBV and PHBV/MAT nanocomposites films: onset temperature (T onset ) and endset temperature (T endset ) of the degradation process and temperature of maximum rate of weight loss (T d ).
2019-04-09T14:48:54.032Z
2011-08-05T00:00:00.000
{ "year": 2011, "sha1": "3323eb2092cfbe2bc13c13b97290d5f3dcad560f", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/mr/a/j9ThSgzxdCQsmF5tLZMTKyg/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3323eb2092cfbe2bc13c13b97290d5f3dcad560f", "s2fieldsofstudy": [ "Materials Science", "Environmental Science" ], "extfieldsofstudy": [ "Materials Science" ] }
258291833
pes2o/s2orc
v3-fos-license
Classification of solutions to Hardy-Sobolev Doubly Critical Systems This work deals with a family of Hardy-Sobolev doubly critical system defined in $\mathbb{R}^n$. More precisely, we provide a classification of the positive solutions, whose expressions comprise multiplies of solutions of the decoupled scalar equation. Our strategy is based on the symmetry of the solutions, deduced via a suitable version of the moving planes technique for cooperative singular systems, joint with the study of the asymptotic behavior by using the Moser's iteration scheme. Introduction We are concerned with the study of the doubly critical system: Along this article we address the analysis of the classification of positive solutions to (S * ). In the case γ = ν = 0, system (S * ) reduces to the classical Sobolev critical equation (1.1) It is well-known that the so-called Aubin-Talenti bubbles solve (1.1), where λ > 0 and it is centered at x 0 ∈ R n .Moreover, they realize the equality in the sharp Sobolev inequality in R n .In [29], the authors proved that any positive solution u to (1.1), such that u(x) = O (1/|x| m ) , at infinity for m > 0, is radially symmetric and decreasing about some point in R n .In the proof the authors used a refinement of the celebrated moving plane procedure developed by themselves in a previous paper.Later on, Caffarelli, Gidas and Spruck in [10] classified all the solutions to (1.1).Making use of the Kelvin transform, the authors showed that the moving plane procedure can start.Finally, they showed that all the solutions u ∈ H 1 loc (R n ) to (1.1) are given by the Aubin-Talenti bubbles (1.2), hence solutions are unique up to rescaling.Recently, these results were also generalized in the case of critical equations involving the p-Laplacian and the Finsler anisotropic operator, where the use of the Kelvin transform is not-allowed, see e.g.[12,15,37,40,48,49]. When γ = 0 and ν = 0, system (S * ) becomes the so-called Hardy-Sobolev doubly critical equation Terracini in [46], by means of variational arguments and the concentration compactness principle, showed the existence of solutions to (1.3), actually a more general one.Moreover, by using the Kelvin transformation and a fine use of the moving plane method, she proved that any solution to (1.3) is radially symmetric about the origin.Finally, thanks to a detailed ODE's analysis, she gave a complete classification of the solutions to (1.3), given by (1.4) , where (1.5) , and µ > 0 is a scaling factor.Obviously, U µ = V µ,0 if γ = 0. The case of p-Lapalce operator was firstly treated in [2], where the authors carried out a very fine ODE's analysis.The radial symmetry of the solutions there was an assumption.Later on, in [36] the authors showed that all the positive solutions to p-Laplace doubly critical equation are radial (and radially decreasing) about the origin.Once the radial symmetry of the solution is proved it is easy to derive the associated ordinary differential equation fulfilled by the solution u = u(r) and, hence, to apply the results in [2]. 
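For the reader's convenience (the display labelled (1.2) is not reproduced in this text), the classical Aubin–Talenti family solving $-\Delta u = u^{\frac{n+2}{n-2}}$ in $\mathbb{R}^n$ can be written, in the standard normalization,
$$\mathcal{U}_{\lambda,x_0}(x) \;=\; \bigl(n(n-2)\bigr)^{\frac{n-2}{4}}\left(\frac{\lambda}{\lambda^{2}+|x-x_{0}|^{2}}\right)^{\frac{n-2}{2}},\qquad \lambda>0,\ x_{0}\in\mathbb{R}^{n};$$
the normalization adopted in (1.2) may differ from this one by a multiplicative constant.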
¿From now on we focus our attention to the case of systems.Nonlinear Schrödinger problems, like the Gross-Pitaevskii type systems, have a strong connection with some physical phenomena.Such problem appears in the study of Hartree-Fock theory for double condensates, that is a binary mixture Bose-Einstein condensates in two different hyperfine states which overlap in space, see [24,27] for further details.That type of systems arises also in nonlinear optics.Actually, it allows one to study the propagation of pulses in birefringent optical fibers and the beam in Kerr-like photorefractive media, see [5,31] and references therein.In particular, solitary-wave solutions to the coupled Gross-Pitaevski equations satisfies the problem (1.6) where V is the system potential and 1 < p ≤ 2 * 2 .This problem is typically known as the Bose-Einstein condensate system.For the subcritical regime, we refer to [6,7,33,43,45] for some results concerning existence and multiplicity of solutions under different assumptions on V and ν. Concerning the critical case p = 2 * 2 , if V is non-zero constant, then system (1.6) admits only the trivial solution (0, 0).This result follows from a proper application of the Pohozaevtype identity.For V = 0, in [38] the authors showed the uniqueness of the ground states, under suitable assumptions on the parameters of the generalized system; whereas in the paper [16] the competitive setting is considered, i.e. ν < 0, deducing that the system admits infinitely many fully nontrivial solutions, which are not conformally equivalent. As a non-constant potential, from now on we shall consider the aforementioned Hardy-type one V = − γ |x| 2 .Under that choice, the problem (S * ) can be also seen as an extension of (1.6).The mathematical interest in such system lies in their double criticality, since both the exponent of the nonlinearities and the singularities share the same order of homogeneity as the Laplacian.Moreover, inverse square potentials arise in some physical prototypes, such as nonrelativistic quantum mechanics, molecular physics or combustion models. Doubly critical problems has attracted attention in recent years.In the pioneer work [3], a general Hardy-Sobolev type system is studied by means of variational techniques.In the cooperative regime ν > 0, the existence of ground and bound states are obtained depending on α, β, n and a potential function h arising in the coupling term.Recently, that kind of results were extended in [17] by using similar strategies.Such doubly critical system is widely analyzed in [14].Actually, the non-existence of ground states for the competitive case is indeed proved. The aim of this paper is to classify all the positive solutions to problem (S * ), via the study of symmetry and monotonicity properties in the same spirit of the aforementioned papers [10,46]. Hence, we state one of the main results of our paper: with U µ introduced in (1.4), µ 0 > 0 and c 1 , c 2 are any positive constants satisfying the system Remark 1.2.This result holds (and is new) also in the case γ = 0.Under that assumption the explicit solutions in (1.4) reduce to those of (1.2).See [13] for a result related to such a case. Concerning the uniqueness of synchronized solutions we refer [38] for more details. 
Remark 1.4.From a pure mathematical point of view, let us emphasize that, throughout the paper, a very deep and crucial issue is represented by those cases in which either α < 2, β < 2 or both.In such a case, in fact, the coupling term is non locally Lipschitz continuous. Note that this necessarily occurs if n > 4. In low dimension, however, all the situations are possible. A first important step in our strategy is the proof of the fact that the solutions to (S * ) are radially symmetric about the origin.Actually we shall provide a more general result and, in particular, holds without requiring global energy information. The proof of the radial symmetry of the solutions is based on a fine adaptation of the moving plane procedure.The technique goes back to the seminal works of Alexandrov and Serrin [4,42] and the well-known contributions by Berestycki-Nirenberg [8] and Gidas-Ni-Nirenberg [28].Originally, the technique was introduce to be performed in general domains providing partial monotonicity results near the boundary and symmetry for convex and symmetric domains.Regarding elliptic systems, the moving plane technique was adapted by Troy in [47], where the cooperative case is analyzed.The procedure was also applied for semilinear systems in the half-space in [20] and in the whole space by Busca and Sirakov in the work [9].We refer the reader to [18,19,21,22,25,26,34,39,44] for other interesting contribution about elliptic systems in bounded or unbounded domains. For the reader's convenience we describe the strategy of our proofs that turn out to be tricky somehow. 1.1.The symmetry result.The proof of Theorem 1.5, as recalled above, is based on the moving plane technique.Unfortunately the adaptation of the technique is not straightforward since we work in unbounded domains and the coupling term, in general, is not locally Lipschitz continuous.We overcame such a difficulty studying a suitable translated problem. To obtain symmetry in the x 1 -direction move the Hardy potential to Then we apply the Kelvin transformation.The translation argument allow us to deal with the presence of the Hardy potential.The problem is not invariant under this procedure but the equation that arises is not so bad and we succeed in the adaptation of the moving plane procedure. 1.2.The asymptotic analysis.Once we know that the solutions are radially symmetric, we use a suitable transformation for the right choice of τ > 0. The Moser iteration scheme, applyed to the transformed equation provides a first asymptotic information.Then the study of the associated ODE (together with the Kelvin transform) allows us to deduce the precise asymptotic information at zero and at infinity.where t := log r with r > 0 and δ = n − 2 2 .We prove that there exists C > 0 such that y u = Cy v where C is a zero of the function The number of roots of f is equivalent to the solutions of the system (1.8), then such quantity gives us the number of synchronized solutions.Although in this final issue we are reduced to deal with an ODE analysis, the proof is no longer standard.To the best of our knowledge, only the case of a single root of f has been treated in the literature.Precisely, the hardest part is the case when the function f (•) has more than one zero, a possible issue, see Remark 1.4.When the proportionality of components (y u , y v ) is guaranteed, one can conclude the classification result by direct computation. 2. 
Radial symmetry of the solutions, proof of Theorem 1.5 The aim of this section is to prove that any solution to (S * γ 1,2 ) is radially symmetric about the origin.We shall therefore consider positive continuous (far from the origin) solutions To prove Theorem 1.5, we need to fix some notations.For any real number λ we set which is the reflection through the hyperplane T λ := {x 1 = λ}.Moreover, given any function w, we will set namely the reflected function.A crucial ingredient in our proof is the use of a translation argument.We fix x 0 = (0, x ′ 0 ) ∈ R n \ {0} and we assume, by translation, that (u, v) solves This will allows us to take full advantage of the Kelvin transformation.In fact we set K : R n \ {0} −→ R n \ {0} defined by Such a transformation is a well-known tool and, given any (u, v) solution to (S * x 0 ), its Kelvin transform is defined as By direct computation it follows that (û, v) weakly satisfies Having in mind the last definition, we prove the following key lemma that will be used in the proof of the symmetry result. Proof.It is immediate to have that for every x ∈ R n .We observe that thanks to our assumptions, the term x, x 0 does not depend on x 1 .Hence, thanks to Schwarz inequality we deduce that for any (2.5) Analogously, we get We shall prove a symmetry result regarding any solution to ( Ŝ * x 0 ), that translates into a symmetry result for the original problem (S * γ 1,2 ).In particular, the couple (û, v) satisfies A crucial point in all the paper, as recalled in the introduction, is represented by the fact that the coupling term may be not Lipschitz continuous at zero.To face this difficulty, we will also use the following: Lemma 2.2.Let (û, v) be a solution to ( Ŝ * x 0 ).Then, there exists R > 0 and µ > 0 such that Proof.First of all, we note that (û, v) satisfies (2.6).Borrowing an idea contained in [23], we point out that, for every ε > 0, we can find a function We note that there exists R > 0 such that x 0 ∈ B R(0).Since û > 0 in B R(0) \ {0}, there exists µ > 0 such that û > µ > 0. In the same way v > µ > 0 on ∂B R(0).Hence, setting and by density arguments we can choose these as test functions in (2.6).For the reader convenience we make the computations just for the first equation of (2.6).Hence.we are able to deduce that Using the Young's inequality, we are able to get that (2.11) Finally, we have that (2.12) Passing to the limit for ε that goes to 0 we get the thesis for û.Arguing in a similar fashion, we obtain the same result for v. The translation argument that we introduced allow us to deduce the following: Proof.Since x 0 = (0, x ′ 0 ) ∈ R n is fixed, then there exists δ > 0 such that x 0 ∈ B δ (0).By our assumptions u, v ∈ C(R n \ {x 0 }), and hence we deduce that u, v ∈ C(B δ (0)).We fix R > 1 δ in such a way that by the definition of Kelvin transformation (2.2) we easily deduce (2.13). Now, we are ready to prove that any positive solution to ( Ŝ * We recall that we are working with the weak formulations (2.6) and (2.7).By Lemma 2.3 we deduce that |û(x)| ≤ C u |x| 2−n and |v(x)| ≤ C v |x| 2−n and for every x ∈ R n such that |x| ≥ R, where C u , C v and R are positive constants (depending on u and v).In particular, for every λ < − R < 0, we have In order to complete the proof of our result, we split the proof into three steps. 
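Several of the displayed definitions in the setup above (the reflection, the reflected function and the Kelvin transform) are not reproduced in this extracted text. The following is a sketch of the standard conventions for moving-plane arguments that the notation $T_\lambda$, $\Sigma_\lambda$, $R_\lambda$, $w_\lambda$, $\hat u$, $\hat v$ used below presumably refers to; it is stated as an assumption, not as the authors' exact display:
$$x_{\lambda} := R_{\lambda}(x) = (2\lambda - x_{1}, x_{2}, \dots, x_{n}),\qquad T_{\lambda} := \{x_{1}=\lambda\},\qquad \Sigma_{\lambda} := \{x\in\mathbb{R}^{n} : x_{1}<\lambda\},$$
$$w_{\lambda}(x) := w(x_{\lambda}),\qquad K(x) := \frac{x}{|x|^{2}},\qquad \hat u(x) := \frac{1}{|x|^{n-2}}\,u\!\left(\frac{x}{|x|^{2}}\right),\qquad \hat v(x) := \frac{1}{|x|^{n-2}}\,v\!\left(\frac{x}{|x|^{2}}\right).$$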
Step 1: There exists M > 0 large such that û ≤ ûλ and v ≤ vλ in Σ λ \ R λ ({0, x 0 }), for all λ < −M.We immediately see that ).We point out that, for every ε > 0, we can find a function Therefore, by a standard density argument, we can plug Φ and Ψ as test functions in (2.6) and (2.7) respectively, so that, subtracting we get and (2.17) Exploiting also Young's inequality, recalling that 0 ≤ ξ + λ ≤ û and 0 ≤ ζ + λ ≤ v, we get that (2.18) Similarly, we obtain where c(n) is a positive constant depending only on the dimension n and by the absolute continuity of the Lebesgue integral.Analogously, we deduce that Let us now estimate I 3 and E 3 .Here, we recall the monotonicity property of the function f x 0 stated in Lemma 2.1.Moreover, for λ < 0 sufficiently large we have that . Therefore where (2.23) By Hardy's and Young's inequality we also deduce that (2.24) Employing the argument to estimate I 3 , we get Since û(x), ûλ (x), v(x), vλ (x) > 0, by the convexity of t → t 2 * −1 , for t > 0, we obtain for every x ∈ Σ λ .Thus, Therefore, using Hölder's inequality with exponents 2 * 2 , n 2 , we have where in the last two steps we applied Sobolev and Young's inequalities respectively.Arguing in the same way, we deduce The evaluation of I 5 and E 5 is a delicate issue within this argument.Note that, in particular, we may have that either α < 2, β < 2 or both.In all this cases we have to face a nonlinear term which is not Lipschitz continuous at zero.We shall exploit the following Having in mind this argument, one obtains that where in the last inequality we used the cooperativity of our system and the fact that we are working in Σ λ ∩ supp[(u − u λ ) + ].Making use of Lemma 2.3 we deduce that (2.29) By Remark 2.4, we get 3. Then (2.29) rewrites as where the constant C has been relabelled and we have applied Young's inequality. Similarly, we can obtain an analogous estimate for E 5 , i.e. As we argued in (2.26) for I 4 in, we get that Analogously, we deduce that (2.33) Collecting all the previous estimates for I k , E k with k = 1, 2, 3, 4, 5, we deduce that where We obtain an analogous estimate for ζ + λ , i.e. where Summing both the contributions (2.34) and (2.35) we get . (2.36) Now, recalling (2.23) and using the absolute continuity of the Lebesgue integral, we fix M > 0 sufficiently large such that for each λ < −M.Finally, passing to the limit for R → +∞ and ε → 0 + , we obtain that (2.37) for each λ < −M.The last inequalities immediately imply the thesis of the first step. The case (ii) is not possible ,because, since û has a sign at x 0 , it holds that û < ûλ and v < vλ in Σλ \ Rλ({0, x 0 }).Now, we push further the hyperplane Tλ and consider the hyperplane Tλ +ε for some ε > 0. We claim that, for ε > 0 small, we have that where R > 0 is given by Lemma 2.3 and δ > 0 is small.For the reader convenience we set Arguing by contradiction, let us assume that (2.39) does not hold.Hence, if we define we deduce that ∃ P m ∈ G τm , with {τ m } m∈N (τ m → 0) such that û(P m ) > ûλ +τm (P m ) or v(P m ) > vλ +τm (P m ). Without loss of generality, we assume that û(P m ) > ûλ +τm (P m ).Up to subsequence, as m → +∞ we have that Finally we have that or P ∈ Tλ or P / ∈ Tλ. • If P ∈ Tλ, then we have ∂ û ∂x 1 ( P ) ≤ 0. 
But, by the Hopf boundary lemma, we know that ∂ û ∂x 1 ( P ) > 0 providing a contradiction.Note that here it is needed to run over the argument in [28] that works thanks to the cooperativity condition.Now, we know that (2.39) holds true and we argue as in Step 1 choosing as test functions respectively in (2.6) and in (2.7) so that, subtracting we get (2.40) (2.41) Now, we can define the following sets where, in the set A, we argue as in the case of bounded domains, see [8].Note that Lemma 2.2 ensures that we are working far from zero since û and v are bounded away from zero near the origin and near the translation point x 0 .This allows us to apply the weak comparison principle in small domains (see e.g.[22,23,41]), and deduce that (2.43) Finally, collecting (2.39), (2.42) and (2.43), we deduce supp[(û − ûλ +ε ) + ] = ∅, supp[(v − vλ +ε ) + ] = ∅ and hence û < ûλ +ε and v < vλ +ε in Σλ +ε , which is in contradiction with the definition of λ.So, λ must be 0. Step 3: Conclusion.The symmetry of the Kelvin transform (û, v) in the x 1 -direction follows now performing the moving plane method in the opposite direction.By the definition of (û, v) given in (2.2), the symmetry of û and v w.r.t. the hyperplane {x 1 = 0} implies the same symmetry of the solution (u, v).The symmetry of any solution of (S * γ 1,2 ) follows now recalling that the translation point x 0 is arbitrary.We can repeat the same argument with respect to any fixed direction ν ∈ R n .The last one implies that (u, v) must be radially symmetric about the origin. Asymptotic behavior of solutions The main contribution of the paper is to show that solutions are unique modulo rescaling, in the same spirit of Teraccini's paper, [46].In order to achieve our goal, we need to study the asymptotic behavior of the solutions to (S * ) that we recall for the reader convenience For any τ > 0, let us define In particular, we set which are solutions to the algebraic equation This parameters are well-known in the case of a single equation and appear describing the behavior of the solutions at zero and at infinity.An heuristic argument shows that the behavior at zero should be driven by the weight |x| −τ 1 , while |x| −τ 2 gives us the behavior at infinity.In any case, with this choice of τ , it is easy to check that (u τ , v τ ) weakly solves Such equation, in the weak formulation, is well defined in the weighted Sobolev space D 1,2 τ (R n ) for τ = τ 1 as above.This space has been already exploited in the literature and, after defining it in the classical way [30], we may look at it as the completion (clousure) of The weighted Sobolev inequality in this space is related to the Caffarelli-Kohn-Nireberg inequality [11]; we refer also to [32,3]. With such a transformation at hand, we shall exploit the Moser iteration scheme to prove the following: Proof.For any η ≥ 1 and T > 0 we define the following Lispchitz function Making easy computations one can check that for any w ∈ D 1,2 τ (R n ) it holds the following inequality in the weak distributional meaning Now we make all the computations on the first equation of system (W * ).We remark that, since Υ is a Lipschitz function, then Υ(u τ ), Υ(v τ ) ∈ D 1,2 τ (R n ).Hence, it holds the Sobolev inequality By (3.5), we deduce where in the last inequality we used the estimate tΥ ′ (t) ≤ ηΥ(t) for every t > 0. 
Integrating by parts in the left hand side of (3.7), and using (3.6) we deduce Arguing in the same way with the second equation of (W * ), we obtain (3.9) Our aim is to apply the Moser's iteration scheme (see [35]).In order to do this, we take into account that u τ , v τ ∈ D 1,2 τ (R n ).Now, let η := 2 * 2 and m u , m v ∈ R + to be chosen later.We claim to give an estimate of the right hand side of (3.8).We point out that it is easy to check that the function g(t) := Υ(t)t α−2 is non-decreasing for every t ≥ 0 and for every α > 1.Hence, taking into account this fact and using the definition of Υ in (3.4), we have Using Hölder's inequality with conjugate exponents Arguing in an analogous way to (3.10) for the second equation of (W * ), we are able to deduce . (3.12) Hence, thanks to (3.11), by (3.8) we obtain Arguing in a similar way with v τ , thanks to (3.12), by (3.9) we deduce Summing both (3.13) and (3.14), we deduce that Hence, from (3.15) we obtain where C depends on m u , m v , ν, n and C S .Moreover, we used that Υ(t) ≤ t η and that Hence, recalling that η = 2 * 2 and taking T → +∞, thanks also to Fatou's lemma one has Finally, we set (3.21) and we obtain the recursive inequality By induction we can easily deduce that log (3.22)where in the last inequality we used that the series +∞ j=2 log C j converges.Hence, thanks to the monotonicity of the integral and by (3.22), for every R > 0, we have (3.23)log Passing to the limit for k → +∞ and thus η k → +∞, we deduce that The thesis follows by the arbitrariness of R. In the previous section we showed that any solution (u Obviously also u τ , v τ are radial and they satisfy Thanks to to Proposition 3.1, we deduce that This is a first information regarding the behavior of the solutions at zero (the behavior that we get at infinity is not sharp).In any case we are in position to get a complete picture of the asymptotic behavior of the solutions by proving the following: ) be any solution to (S * ).Then, we have where u 0 , v 0 ∈ (0, +∞).Moreover, we have where u ∞ , v ∞ ∈ (0, +∞). Uniqueness and classification of solutions The aim of this section is to classify solutions to (S * ) up to the associated rescaling.Given (u, v) a solution to the problem (R * S ), consider y u (t) := r δ u(r) and y v (t) := r δ v(r), (4.1) where t := log r with r > 0 and δ = n − 2 2 .By direct computation, it is possible to deduce Next, we will prove that y u and y v reach a unique maximum point, which actually is the same for both components.To show this fact, we will use the asymptotic behavior of y u and y v .Moreover, there exist cu , Cu , c u , C u , cv , Cv , c v , C v , M positive constants such that c u e (δ−τ 1 )t u τ 1 (e t ) ≤ y u (t) ≤ C u e (δ−τ 1 )t u τ 1 (e t ) for t ≤ −M, (4.2) cu e (δ−τ 2 )t u τ 2 (e t ) ≤ y u (t) ≤ Cu e (δ−τ 2 )t u τ 2 (e t ) for t ≥ M, (4.3) c v e (δ−τ 1 )t v τ 1 (e t ) ≤ y v (t) ≤ C v e (δ−τ 1 )t v τ 1 (e t ) for t ≤ −M, (4.4) c u e (δ−τ 2 )t v τ 2 (e t ) ≤ y v (t) ≤ C v e (δ−τ 2 )t v τ 2 (e t ) for t ≥ M, (4.5) where τ 1 and τ 2 were introduced (3.2).Furthermore, we have that The proof follows by Proposition 3.1 and 3.2 . 1. 3 . The classification result.Once we know the exact behavior of the solutions, we exploit the standard change of variable y u (t) := r δ u(r) and y v (t) := r δ v(r), 2 . 
are able to apply the Moser iteration scheme in order to prove our result. For any k ≥ 1, let us define the sequence {η_k} by 2η_{k+1} + 2* − 2 = 2* η_k, and we set η_1 := 2*. The choice of η_k at each step is clear from the first term of the right-hand side of (3.10). Then, starting again from (3.10) and (3.12) and iterating, we deduce the claim.
Lemma 4.1. Let y_u, y_v be the functions defined in (4.1); they satisfy lim
Case 1: f(L) = 0. Without loss of generality we assume that (4.25) f(L) < 0. By (4. Thus f(y_u(t)/y_v(t)) has a sign at −∞. Hence, we may run over the arguments of Case 1, recovering (4.29). Therefore we have that y_u(t) = C y_v(t) for any t ∈ R, with C = L; or, if this is not the case, the argument can be repeated, proving the thesis. At this point, we are able to prove the classification result.
Proof of Theorem 1.1. Let (u, v) ∈ D^{1,2}(R^n) × D^{1,2}(R^n) be a solution to (S*). By Proposition 4.3, one gets y_u = C y_v.
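To illustrate the change of variables y(t) = r^{δ} u(r), t = log r, δ = (n−2)/2, used throughout this section, consider the scalar model equation with a Hardy potential and critical power. This is a sketch for orientation only (the system case, with its coupled power nonlinearities, is handled analogously but is not reproduced here). For a radial solution, writing u(r) = r^{−δ} y(log r), a direct computation gives

\[
-\Delta u - \frac{\gamma}{|x|^{2}}\,u = u^{2^{*}-1}
\quad\Longleftrightarrow\quad
y''(t) - \big(\delta^{2}-\gamma\big)\,y(t) + y(t)^{2^{*}-1} = 0 ,
\]

where the first-order term cancels exactly because δ = (n−2)/2, and the right-hand side scales consistently since δ(2^{*}−1) = δ + 2. The classification of positive, bounded solutions of the resulting autonomous ODE is what underlies the proportionality y_u = C y_v and the synchronized solutions discussed above.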
2023-04-24T01:15:16.297Z
2023-04-21T00:00:00.000
{ "year": 2023, "sha1": "27f307525d20a95c48b30caf1e53b62ba90db7f3", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.matpur.2024.06.010", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "27f307525d20a95c48b30caf1e53b62ba90db7f3", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
267142010
pes2o/s2orc
v3-fos-license
Emergence of highly resistant Candida auris in the United Arab Emirates: a retrospective analysis of evolving national trends
Introduction: The U.S. Centers for Disease Control and Prevention (CDC) lists Candida auris as one of five urgent threats, given its global emergence, multidrug resistance, high mortality, and persistent transmission in health care settings. As a new threat, the need for surveillance of C. auris is critical. This is particularly important for a cosmopolitan setting and global hub such as the United Arab Emirates (UAE), where the continued introduction and emergence of resistant variant strains is a major concern.
Methods: The United Arab Emirates has carried out 12 years of antimicrobial resistance (AMR) surveillance (2010-2021) across the country, spanning all seven Emirates. A retrospective analysis of C. auris emergence from 2018 to 2021 was undertaken, utilising the demographic and microbiological data collected via a unified WHONET platform for AMR surveillance.
Results: Nine hundred and eight non-duplicate C. auris isolates were reported from 2018 to 2021, showing an exponential upward trend in cases. Most isolates were from urine, blood, skin and soft tissue, and the respiratory tract. UAE nationals comprised 29% (n = 186 of 632) of all patients; the remainder were from 34 other nations. Almost all isolates were from inpatient settings (89.0%, n = 809). The cases show widespread distribution across all reporting sites in the country. C. auris resistance levels remained consistently high across all classes of antifungals used. C. auris in this population remains highly resistant to azoles (fluconazole, 72.6% in 2021) and amphotericin B. Echinocandin resistance has now emerged and is increasing annually. There was no statistically significant difference in mortality between Candida auris and Candida spp. (non-auris) patients (p = 0.8179); however, Candida auris patients had a higher intensive care unit (ICU) admission rate (p < 0.0001) and a longer hospital stay (p < 0.0001) than Candida spp. (non-auris) patients.
Conclusion: The increasing trend of C. auris detection and the associated multidrug-resistant phenotypes in the UAE are alarming. Continued C. auris circulation in hospitals requires enhanced infection control measures to prevent further dissemination.
Introduction
Invasive candidiasis, which encompasses Candida bloodstream infections and deep-seated candidiasis, is a significant cause of morbidity and mortality (1-6) and remains a significant healthcare-associated problem in several countries (7, 8). Within this grouping, the first known case of Candida auris was an ear infection in Japan in 2009 (9). C. auris has since become a major public health threat, owing to its propensity for horizontal transmission (10-13) and its continued nosocomial spread in long-term and acute care healthcare facilities (6, 11, 14).
C. auris has quickly developed into a global concern and cemented its place as a superbug within just a decade of its first isolation in 2009 (9). Since its emergence, it has been identified in hospitals across five continents, with incidence increasing particularly during the COVID-19 pandemic (4, 15, 16). The role played by the coronavirus disease (COVID-19) pandemic in this increase is difficult to ascertain: while restricted travel may have decreased the risk of importation of C. auris, difficult-to-control outbreaks of C. auris have continued to be reported in units caring for COVID-19 patients worldwide (17-20). C.
auris presents diagnostic challenges because of difficulty in identifying strains using common microbiological procedures and challenges in treatment given its resistance to multiple anti-fungal agents, including azoles, echinocandins, and polyenes, making it a critical antibiotic resistance threat (21,22). C. auris is now listed among five urgent threats defined in the U.S. Centers for Disease Prevention and Control's (CDC) 2019 Antibiotic Resistance Threats Report due to its global emergence, multidrug resistance, high mortality, and persistent transmissions in health care settings (9, 10, [23][24][25][26].A systematic review and metaanalysis that included cases between 2009 and 2019 from different countries reported an average crude mortality of 45% (95% CI: 39-51%) for C. auris bloodstream infections (21).However, mortality attributable to C. auris remains unclear.The vast majority of strains are fluconazole resistant, with variable proportions resistant to amphotericin B, echinocandins and flucytosine.Reports of antifungal susceptibility data from different geographic locations are varied and some C. auris strains exhibit elevated MICs for three major classes of antifungal drugs.The CDC has suggested tentative breakpoints, and these have been used in most studies, EUCAST and CLSI have yet to recommend clinical breakpoints or epidemiological cut-offs (27-29). An astonishing aspect in relation to the rapid emergence of C. auris is the simultaneous but independent appearance of genetically distinct clades on different continents (4,15).The whole-genome sequence (WGS) analysis of clinical isolates of C. auris collected from South Asia (India/Pakistan), South Africa and East Asia (Korea/Japan) has shown four highly clonal phylogenetic and geographically distinct clades that have emerged seemingly independent of one another, specifically, the South Asian clade (clade I), the East Asian clade (clade II), the South African clade (clade III), and the South American clade (clade IV) (4,15,30).In 2018, a fifth clade, which is exclusively found in Iran (Iranian clade), was identified (10, 24, 31). Antifungal resistance is widespread in C. auris in the South Asia clade I isolates.These isolates are resistant to fluconazole, variably resistant to amphotericin B, and also acquire resistance to echinocandins (32)(33)(34)(35).C. auris South America clade IV includes isolates with variable resistance to amphotericin B (36,37), while South Africa clade III isolates are frequently resistant to azoles antifungals (38).Multidrug resistant C. auris isolates to three major classes of antifungal agents have also emerged (10,39,40).This severely limits treatment options, making infection control and prevention in healthcare settings essential (5). The global number of C. auris cases has been rapidly increasing in the past few years particularly in blood cultures from patients with serious underlying medical conditions and in hospitalized patients with invasive medical devices, such as urinary tract catheters and parenteral nutrition, who have also received broad-spectrum antibiotics (1,3).Mortality in C. auris-associated infections has been reported from 33.3% to 100% worldwide (21), and more recent data has indicated a similar (high) mortality compared to other Candida bloodstream infections (41)(42)(43). Since the time of its first isolation in Japan, C. 
auris infections have been reported from several countries, including South Korea, Malaysia, Kenya, South Africa, India, Pakistan, Colombia, Venezuela, Panama, the United States, Canada, China, Russia, and Europe (21). Among the 17 countries listed under the MENA region, invasive C. auris infections have to date been reported only from Kuwait (44-46), Israel (3), Oman (47, 48), Saudi Arabia (49), the United Arab Emirates (50), Iran (51) and Qatar (52, 53). The real prevalence and epidemiology of C. auris remain unknown in this region.
United Arab Emirates
Currently, the country hosts a population of nearly 10 million people, of which about 1 million are Emirati citizens and the rest are expatriates.
Methods
The UAE has been carrying out a national AMR surveillance program over the past 12 years (2010-2021). A retrospective study of emerging C. auris was conducted from 2018 to 2021, using data from the UAE national AMR surveillance program. These data are gathered through a unified WHONET platform (https://whonet.org/). Data collected included demographic and microbiological parameters from all participating centers across the country. The participating sites were managed by trained personnel who gathered AMR surveillance data from routine patient care and submitted it to the national AMR surveillance program. Data were generated, collected, cleaned and analyzed through the national AMR surveillance program as described by Thomsen et al. (55).
Identification of Candida auris
C. auris identification was performed at the national AMR surveillance sites by medical professionals. C. auris isolates were identified and tested for antifungal susceptibility using mostly commercial, automated systems, including VITEK® (BioMérieux SA, Craponne, France), BD Phoenix™ (Becton Dickinson, New Jersey, United States), and MicroScan™ (Beckman Coulter, California, United States). A few laboratories used Sensititre YeastOne™ (Thermo Scientific, Massachusetts, United States) plates for susceptibility testing. Only one laboratory (out of 45) relied on a manual API® (Analytical Profile Index; BioMérieux SA, Craponne, France) system for identification, and only two laboratories conducted susceptibility testing by manual disc diffusion.
Antimicrobial resistance trends in Candida auris
These trends were assessed by analysis of routine national-level AMR surveillance data. The data, which cover a spectrum of AMR pathogens including C. auris, were obtained from a network of 317 participating sites (84 hospitals and 233 centers/clinics) and 45 diagnostic laboratories in the country. The participating centers include primary, secondary and tertiary care facilities as well as public and private entities. All data are routinely collected and analysed using a unified platform (WHONET), and training on data collection is provided to ensure quality assurance, standardization and accuracy. The fully anonymized data include demographic information (age, gender, nationality, hospital site/location, etc.) as well as clinical and microbiological data such as specimen source and antifungal susceptibility testing results. For the purpose of this analysis, we applied the CDC tentative breakpoints to determine the susceptibility of our isolates (29). Resistance MIC breakpoints were as follows: fluconazole ≥ 32 µg/mL; amphotericin B ≥ 2 µg/mL; caspofungin ≥ 2 µg/mL; anidulafungin and micafungin ≥ 4 µg/mL.
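The classification of isolates against the tentative breakpoints listed above can be sketched in a few lines of code. This is purely an illustrative helper, not part of the surveillance pipeline; the breakpoint values are taken directly from the preceding sentence and the MIC inputs shown are hypothetical.

# Illustrative classification of C. auris MICs against the CDC tentative
# resistance breakpoints quoted above (all values in µg/mL).
CDC_TENTATIVE_BREAKPOINTS = {
    "fluconazole": 32,
    "amphotericin B": 2,
    "caspofungin": 2,
    "anidulafungin": 4,
    "micafungin": 4,
}

def classify(mic_results):
    """Return 'R' (resistant) or 'NR' (not resistant) for each agent tested."""
    calls = {}
    for agent, mic in mic_results.items():
        breakpoint_value = CDC_TENTATIVE_BREAKPOINTS.get(agent)
        if breakpoint_value is None:
            calls[agent] = "no tentative breakpoint"
        else:
            calls[agent] = "R" if mic >= breakpoint_value else "NR"
    return calls

# Hypothetical isolate, for illustration only.
print(classify({"fluconazole": 64, "amphotericin B": 1, "micafungin": 0.25}))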
Data sources and statistical analysis
AMR data were extracted from the national AMR surveillance database. P values < 0.05 were considered statistically significant. We performed three types of analyses. In the first analysis, binary logistic regression was used to model the proportion of positive C. auris among all reported infections. Estimates from this analysis provide evidence regarding the annual increase in reported positive C. auris cases among all reported cases. In the second analysis, a binary logistic regression model was used to investigate the proportion of positive C. auris among reported Candida spp. cases only. Estimates from this model quantify the annual increase in reported positive C. auris cases among Candida spp. cases. One main limitation of these two analyses is the possibility that the trend in positive C. auris cases over time could be due to a potential increase in screening for C. auris over time. To adjust for this potential bias, the total number of tests performed to screen for C. auris should be used; unfortunately, these metrics are not available in the database. To investigate this possibility, we conducted a large simulation study in which different scenarios for the annual increase in the screening rate of C. auris are assumed (see Supplementary material for more details). For each hypothetical screening rate, a binary logistic regression model was fitted, and the significance and direction of the percentage change in C. auris were reported. For all three analyses, odds ratios and corresponding 95% confidence intervals were derived, providing an indication of the change over time in the incidence of positive C. auris cases (increase, decrease, or no change). A chi-square test was used to test the association between categorical variables, including mortality and ICU admission. The weighted log-rank test was used to assess differences in length of hospital stay. Binary logistic regression analyses and chi-square tests for the data presented in tables were performed using R software (R: The R Project for Statistical Computing); the chi-square test for the mortality rate was performed using Epi Info™ for Windows v7.2.4.0.
Overview of the UAE national AMR surveillance
The UAE national AMR surveillance was initiated in 2010 in the Abu Dhabi Emirate, where 6 hospitals and 16 centers/clinics adopted the WHONET software (https://www.whonet.org) for AMR surveillance. Additional sites were recruited over the years: from only 22 participating sites in 2010 (the first year of the study), all located in the Emirate of Abu Dhabi, the network grew to a total of 317 surveillance sites representing all seven Emirates in 2021, including 84 hospitals and 233 centres/clinics. Figure 1 shows the distribution of surveillance sites for the national AMR surveillance program from 2010 to 2021.
Data on nationality were available for 632 patients, of whom 29.4% (n = 186) were UAE nationals and the remainder (70.6%) comprised individuals from 34 other nationalities (Figure 3). The demographic distribution of the patients is heavily skewed towards inpatient settings (809/908, 89%), predominantly ICU patients (414/908, 45.6%). It also revealed a male preponderance, with the majority of patients in the adult age group (Table 1).
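The binary logistic regression trend analyses described in the statistical-analysis section above can be sketched as follows. This is an illustrative reconstruction only: the yearly counts are placeholders (the actual denominators sit in the surveillance database and in Tables 2 and 3), and the original analysis was carried out in R rather than Python.

# Sketch of "approach 1": model the yearly proportion of C. auris among all
# reported isolates and express the trend as an odds ratio per calendar year.
# All counts below are hypothetical placeholders.
import numpy as np
import statsmodels.api as sm

years = np.array([2018, 2019, 2020, 2021])
c_auris = np.array([5, 50, 200, 640])                   # hypothetical C. auris counts
total = np.array([100_000, 110_000, 120_000, 130_000])  # hypothetical total isolates

X = sm.add_constant(years - years.min())                # year as a linear term
fit = sm.GLM(np.column_stack([c_auris, total - c_auris]), X,
             family=sm.families.Binomial()).fit()

or_per_year = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"odds ratio per year: {or_per_year:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
# An odds ratio of about 2.6 per year corresponds to the reported 161.5% annual
# increase among all reported cases; "approach 2" repeats the fit with Candida
# spp. (non-auris) counts as the denominator.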
Admission to intensive care unit
A total of 19,353 patients were associated with Candida spp. (non-auris), of whom 3,905 (20.2%) were admitted to ICU, while a total of 835 patients were associated with Candida auris, of whom 414 (49.6%) were admitted to ICU. The difference in ICU admission rate is statistically significant (p < 0.0001).
Length of stay
We performed a length of stay (LOS) analysis and assessed the differences in duration of hospitalization using a weighted log-rank test. We included data from patients for whom the dates of admission and discharge were known. For patients associated with Candida spp. (non-auris) (n = 4,912) the median length of stay was 14.0 days, while for patients associated with C. auris (n = 140) the median length of stay was 33.5 days. The observed difference in length of hospitalization between patients associated with C. auris and non-C. auris spp. was statistically significant (chi-square 64.1, p < 0.0001). Based on a total of n = 908 patients during the observation period (2018-2021), a total of 17,706 excess days of hospitalization were observed, attributable to C. auris (approximately 19.5 excess days per patient × 908 patients). For the year 2021 only (n = 614 C. auris cases), a total of 11,973 excess hospitalization days (19.5 × 614) were observed, attributable to C. auris (see Supplementary Figure S1, Kaplan-Meier curves).
Mortality rate
Analysis was performed on the subset of patients for whom the health outcome was known. A total of 5,694 patients were associated with Candida spp. (non-auris), of whom 1,503 died (mortality rate: 26.4%). A total of 171 patients were associated with C. auris, of whom 47 (mortality rate: 27.5%) died. The difference in the proportion of deaths between C. auris patients and Candida spp. (non-auris) patients is not statistically significant (p = 0.818). The crude mortality rate for patients with C. auris isolates from blood cultures only was 22/61 (36.1%).
Trend analysis of Candida auris among all reported infections: approach 1
Table 2 shows the number of cases of C. auris and the total number of national AMR surveillance cases reported from 2018 up to 2021, along with the proportion of positive C. auris cases for each year. Figure 4 shows the trend over time from 2018 to 2021. The cases show widespread distribution across all reporting sites and Emirates (Figure 5).
Trend analysis of Candida auris: the simulation study: approach 3
One main limitation of the above two approaches to analysing the trend is the possibility that the trend in positive C. auris cases over time could be due to a potential increase in the screening of C. auris over time. To adjust for this potential bias, and because the total number of tests performed to screen for C. auris was not available, we conducted a large simulation study in which different scenarios for the yearly increase in the screening rate of C. auris were assumed. Figure 7 provides, for each hypothetical annual increase in the screening rate of C. auris, the proportion of results with a non-significant change, a significant increase, or a significant decrease in the incidence of C. auris over time.
FIGURE 7 Proportion of significant results according to the hypothetical annual increase rate in the screening of C. auris.
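A schematic of the screening-rate simulation described above might look as follows. This is purely illustrative: the actual design, number of replicates and assumed denominators are described in the study's Supplementary material, the counts used here are placeholders, and the original work was done in R.

# Schematic bias analysis: if screening volume had grown by a fixed annual
# factor, would the observed positive counts still imply a rising, flat, or
# falling underlying incidence? Placeholder counts only.
import numpy as np
import statsmodels.api as sm

years = np.arange(4)                        # 2018..2021
positives = np.array([5, 50, 200, 640])     # hypothetical positive C. auris counts
baseline_tests = 10_000                     # hypothetical screening volume in 2018

def trend_direction(annual_growth_pct):
    tests = baseline_tests * (1 + annual_growth_pct / 100) ** years
    X = sm.add_constant(years)
    fit = sm.GLM(np.column_stack([positives, tests - positives]), X,
                 family=sm.families.Binomial()).fit()
    slope, p = fit.params[1], fit.pvalues[1]
    if p >= 0.05:
        return "no significant change"
    return "significant increase" if slope > 0 else "significant decrease"

for growth in (0, 100, 176, 200, 225, 300):
    print(f"{growth:>3}% annual screening growth -> {trend_direction(growth)}")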
From the simulation study, one can see that the positive C. auris cases observed over the four years reflect a statistically significant increase in the incidence of C. auris over time provided the annual increase in screening for C. auris does not exceed 176% (blue curve). If the annual increase in screening for C. auris lies between 177% and 225%, the observed trend is not statistically significant (orange curve); if the annual screening growth was above 225%, the positive C. auris cases observed over the four years would reflect a statistically significant decrease in the incidence of C. auris over time (red curve).
Discussion
The growth in hospital sites reporting Candida auris, from only 2 centers in the first year to more than 34 sites towards the end of the study period, representing all seven Emirates, reflects considerable concern about C. auris. There is increased alertness across the country to the importance of antimicrobial resistance surveillance and mitigation.
The first cases of C. auris in the UAE were detected in 2018. Since then, we have seen an alarming increase in C. auris isolations, to n = 641 in 2021, especially in Abu Dhabi and Dubai. This increase is consistent with global reports of a rising C. auris burden (56, 57). The COVID-19 pandemic does not appear to have slowed the dissemination of C. auris and may even have exacerbated it (58, 59). Nearly 50% of the patients were in intensive care, and the length of stay for these patients was extended by 19.5 days compared with patients infected with other Candida spp. Crude mortality at 27.5% (blood culture isolates: 36.1%) was similar to that for other Candida spp. and lower than seen in other countries (45% for blood culture isolates) (21).
C. auris is usually resistant to fluconazole and often to other antifungal medications (azoles, polyenes, and echinocandins). Multidrug-resistant and even pandrug-resistant C. auris isolates have also been described, which leaves fewer and fewer treatment options (60-62). In this study, resistance rates of C. auris were high (fluconazole, 72.6% in 2021; amphotericin B, 84.6% in 2021), with the emergence of caspofungin and micafungin resistance in 2021, which is of great concern.
C. auris breakpoints are currently tentative. EUCAST will soon publish epidemiological cut-offs based on a global collection of isolates from which multiple epidemic or outbreak strains have been removed to minimise bias. Testing for fluconazole susceptibility shows very variable MICs, partly because of up-regulation of efflux pumps. These testing limitations may drive EUCAST to simply recommend that fluconazole is not used for C. auris infections, as they currently do for C. glabrata infections. There is general agreement that the tentative CLSI (and CDC) breakpoint for fluconazole is too high, and our finding that 27.4% of isolates fall below this breakpoint should therefore be interpreted with caution.
There are no official guidelines for the management of C. auris infection in terms of an optimal antifungal agent(s), dosing and duration, since CLSI/EUCAST breakpoints for this pathogen are yet to be defined (10, 27, 66). Echinocandins remain the first-line therapy for C. auris infection; however, as demonstrated by our data, resistance to all three main classes of antifungal agents remains a rising problem. Patients should be monitored closely to detect therapeutic failure and/or the development of resistance during therapy (66).
The increasing trend of C. auris detection is suggestive of continued
C. auris circulation, predominantly in hospitals; infection control measures are therefore critical to prevent continued dissemination. Such infection control measures could include better adherence to hand hygiene; appropriate use of transmission-based precautions according to setting; cleaning and disinfecting the patient care environment and reusable equipment with recommended products; communication about a patient's C. auris status when the patient is transferred; screening contacts of newly identified case patients to identify C. auris colonization; and laboratory surveillance of clinical specimens to detect additional cases (67). Newly described approaches include UV-C light inactivation of C. auris, re-formulation of chlorhexidine for superficial use, and silver nanoparticles (68-71).
In the MENA region, C. auris has been reported from only six countries. Since genomic studies are lacking in the UAE, it was not possible to ascertain the similarity of our isolates with C. auris clades from other geographic areas. Additional extensive research on C. auris in the UAE is needed to provide insight into its genetic epidemiology. Moreover, risk factors and routes of transmission need to be identified exhaustively to guide prevention measures and to control the spread of the pathogen.
Ajman and Umm Al Quwain first reported C. auris isolates in 2018. Emergence occurred in all other Emirates in 2019, and the organism spread rapidly. Abu Dhabi and Sharjah have almost doubled their cases annually. Dubai went from 4 identified cases in 2019 to 182 in 2021, representing a 4,450% increase in cases over two years. The results of the logistic regression show a significant increase over the years in the odds of reporting positive C. auris cases among all reported cases: more specifically, the odds of reporting a positive C. auris case increased by 161.5% (95% CI: 140.6-185.1%) each year from 2018 to 2021. Figure 6 shows the predicted versus the observed counts of positive C. auris cases derived from the fit of the binary logistic regression model. Table 3 shows the number of positive C. auris cases and the number of positive Candida spp. cases from 2018 up to 2021, along with the proportion of positive C. auris cases for each year. The logistic regression likewise shows a significant increase over the years in the odds of reporting a positive C. auris case among Candida spp. cases: more specifically, the odds of reporting a positive C. auris case increased by 46.2% (95% CI: 35.1-58.7%) each year from 2018 to 2021. MIC percentage R/I/S distributions were calculated for the collection of C. auris isolates based on the CDC tentative breakpoint recommendations and are presented in Table 4 and Figures 9A-E.
FIGURE 5 Candida auris isolate reporting trends over time by Emirate.
FIGURE 6 Predicted versus observed rate of positive C. auris among all infections.
TABLE 1 Demographic distribution of Candida auris cases and Candida spp. (non-auris) patients.
TABLE 2 Number of cases of C. auris and the total number of cases reported from 2018 up to 2021.
TABLE 3 Cases of C. auris amongst all Candida spp. cases.
In conclusion, the emergence of C. auris poses a global health threat, primarily to hospitalized and critically ill patients, and should be met with a call for urgent action given its resistance patterns to various classes of antifungals. Our analysis of the national C. auris AMR surveillance data provides insights into the evolving patterns of disease and antimicrobial resistance in the UAE. The findings highlight the need for a continued surveillance program, particularly genomic epidemiological surveillance, to guide continued AMR monitoring and the active intervention and control measures required to address the growing threat of antimicrobial resistance. Furthermore, continued C. auris circulation in hospitals requires enhanced infection control measures to prevent further dissemination.
2024-01-24T18:32:07.008Z
2024-01-12T00:00:00.000
{ "year": 2024, "sha1": "9bde1216b38f70d500104f0a300afe357bdbf9f6", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2023.1244358/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "45b01148233aebe3866939bdbb9cb8c6643ca5b1", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
271172555
pes2o/s2orc
v3-fos-license
Advancing Precise Syphilis Diagnosis: A Nontreponemal IgM Antibody-Based Model for Latent Syphilis Staging
Purpose: Accurate differentiation between the early and late latent syphilis stages is pivotal for patient management and treatment strategies. Nontreponemal IgM antibodies have shown potential for discriminating latent syphilis staging by differentiating syphilis activity. This study aimed to develop a predictive nomogram model for latent syphilis staging based on nontreponemal IgM antibodies.
Patients and Methods: We explored the correlation between nontreponemal IgM antibodies and latent syphilis staging and developed a nomogram model to predict latent syphilis staging based on 352 latent syphilis patients. Model performance was assessed using the AUC, calibration curve, Hosmer-Lemeshow χ² statistic, C-index, Brier score, decision curve analysis, and clinical impact curve. Additionally, an external validation set was used to further assess the model's stability.
Results: Nontreponemal IgM antibodies correlated with latent syphilis staging. The constructed model demonstrated a strong discriminative capability, with an AUC of 0.743. The calibration curve displayed a strong fit, with key statistics including a Hosmer-Lemeshow χ² of 2.440 (P = 0.486), a C-index of 0.743, and a Brier score of 0.054, all suggesting favorable model calibration. Decision curve analysis and the clinical impact curve highlighted the model's robust clinical applicability. The external validation set yielded an AUC of 0.776, a Hosmer-Lemeshow χ² of 2.440 (P = 0.486), a C-index of 0.767, and a Brier score of 0.054, further underscoring the reliability of the model.
Conclusion: The nontreponemal IgM antibody-based prediction model could equip clinicians with a valuable tool for the precise staging of latent syphilis and enhance clinical decision-making.
Introduction
Syphilis is a chronic systemic disease caused by Treponema pallidum, typified by periods of active clinical infection separated by intervals of latent infection.1,2 Latent syphilis is identified by seropositivity in the absence of clinical manifestations.3-5 In recent years, the latent syphilis epidemic has escalated substantially and swiftly, making latent syphilis a focal point in syphilis prevention and control.6,7 Accurate differentiation between early and late latent syphilis holds paramount importance, as this differentiation directly correlates with the infectivity status of the patients and underpins treatment and management.8,9 Early latent syphilis typically signifies an ongoing phase of syphilis activity, carrying the potential for secondary syphilis relapse and recurrent infectivity. In contrast, individuals in the late stage of latent syphilis are generally considered noninfectious.1,2,5,8 Hence, distinct treatment approaches are warranted for these two stages. Early latent syphilis demands the prompt application of efficacious antibiotic regimens, while treatment in the late stage aims at preventing complications in asymptomatic individuals.
1,9However, within practical clinical management, patients often struggle to pinpoint the timing of infection due to the absence of clinical symptoms, this predicament poses a formidable challenge in establishing the staging of latent syphilis.Current serologic assays encompass treponemal and nontreponemal tests, both crucial for syphilis diagnosis, however, evidence indicated that these tests remain insensitive in blood and lack the ability to differentiate between early and late stages of latent syphilis. 10An urgent imperative exists for the development of appropriate tools to enhance precise staging identification. Nontreponemal antibodies, which target the cardiolipin, are developed in response to T. pallidum infection. 11Nontreponemal antibodies are frequently linked to disease activity, with immunoglobulin(Ig) M antibodies being produced early in the infection and maintained by the constant stimulation of active T. pallidum. 12,13Our previous research determined that nontreponemal IgM antibodies can function as a serological marker of syphilis activity, 14 indicating their promising role as identifiers for the staging of latent syphilis.This study aimed to explore the correlation between nontreponemal IgM antibodies and the staging of latent syphilis, and develop a predictive nomogram model utilizing nontreponemal IgM antibodies and other clinical parameters for latent syphilis staging, aiming at equipping clinicians with an effective tool to choose appropriate treatment strategies and make informed management decisions. Study Cohort and Design From January 2020 to December 2020, 4384 patients underwent serological testing for syphilis including nontreponemal antibodies test (Toluidine red unheated serum test, TRUST) and treponemal antibodies test (Chemiluminescence immunoassay, CLIA) at Zhongshan Hospital, Xiamen University.According to CDC 13 and ECDC 15 guidelines, syphilis is diagnosed based on clinical symptoms and serological testing (including Nontreponemal tests and Treponemal tests).Specifically, patients exhibited syphilis clinical symptoms alongside positive results for both serological tests were diagnosed with syphilis.Patients with positive serological tests but without clinical evidence of syphilis were categorized as having latent syphilis, with those infected for less than one year classified as early latent syphilis, and those infected for more than a year referred to late latent syphilis.Finally, 447 patients were diagnosed with syphilis, of whom 360 patients (80.54%) were diagnosed with latent syphilis.The serum samples were collected from latent syphilis patients and stored at −80°C for nontreponemal IgM antibody testing. According to the criteria for developing a clinical prediction model, it is essential that the sample size guarantees a minimum of 10 events for each predictor variable. 16In our case, five variables, CLIA, TRUST, gender, age, and nontreponemal IgM antibody were used to construct the predictive model, requiring a minimum of 50 cases.Excluding 8 patients with HIV and/or autoimmune diseases or refused to participate, we included 352 patients to construct a clinical prediction model, which meets the necessary requirements.Furthermore, data from 60 patients diagnosed with latent syphilis at Zhongshan Hospital, Xiamen University, during January and February 2021, were employed for external validation. 
Ethics Statement
Ethics approval for this research was granted by the Zhongshan Hospital, Xiamen University Research Ethics Committee (No: xmzsyyky2023084).
Patient Consent Statement
The research complied with the Declaration of Helsinki and national legislation. Adult patients provided informed written consent to participate, and patients under 18 years of age provided informed written consent together with informed written consent from a parent or legal guardian.
Testing for Nontreponemal IgM Antibodies
The detection of nontreponemal IgM antibodies was accomplished through a two-step indirect immunoassay with a commercially available chemiluminescence test kit, as instructed by the manufacturer (Boson Biotech Co., Ltd., Xiamen, China), employing a BHP9504-02 chemiluminescence meter (Hamamatsu Photon Techniques Inc., Hamamatsu, Japan) to quantify the chemiluminescent reaction in relative light units. The ratio of the chemiluminescence signal to the cutoff value (S/CO) was used to indicate antibody levels, with S/CO ≥ 1.0 considered positive and S/CO < 1.0 considered negative.
Statistical Analyses
The statistical analyses were performed using R software (version 3.6.0) and GraphPad Prism version 9.5.0 (GraphPad Software, San Diego, CA, USA). Statistical data were reported as numbers and percentages. The Spearman correlation test was used to investigate the correlation between two variables. The R packages "rms", "pROC", "regplot", "boot" and "rmda" were utilized for generating, evaluating, and validating the predictive nomogram model. All tests were two-sided, with a P value < 0.05 deemed statistically significant.
Characteristics of the Latent Syphilis Patients
The cohort of 352 latent syphilis patients was approximately equally divided between men and women, with 47.44% (167/352) being male and 52.56% (185/352) being female. Within the latent syphilis population, a predominant concentration of patients fell within the 40-59 years age range, accounting for 40.90% (144/352) of the total. This was followed by those aged 19-39 years and those aged ≥ 60 years, each accounting for 28.98% (102/352). Notably, the ≤ 18 years age group represented only 1.14% (4/352) of the cohort. Among these patients, those in the early stage of latent syphilis comprised 61.36% (216/352) of the total, and those in the late stage accounted for 38.64% (136/352). Regarding nontreponemal IgM antibodies, 20.74% (73/352) of patients were positive and 79.26% (279/352) were negative.
Correlation Between Nontreponemal IgM Antibodies and Latent Syphilis Staging
The overwhelming majority of patients with positive IgM antibodies were identified as early latent syphilis cases (71/73, 97.26%), while a mere 2.74% (2/73) were categorized as late latent syphilis cases. Among the 279 patients with negative IgM antibodies, the distribution of early and late latent syphilis cases was approximately balanced (51.97% vs 48.03%) (Figure 1A). This emphasized a potential link between IgM and the staging of latent syphilis. The Spearman correlation findings were visually represented using an optimal-fitting curve along with 95% confidence intervals (Figure 1B).
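The association reported above can be re-checked directly from the counts given in the text: 71 early versus 2 late cases among IgM-positive patients, and 145 versus 134 among IgM-negative patients (consistent with the stated 51.97% vs 48.03% split of 279 and with the totals of 216 early and 136 late cases). The snippet below is an illustrative re-computation only; the study itself used R and GraphPad for these analyses.

# Illustrative re-check of the IgM-by-stage association from the reported counts.
import numpy as np
from scipy.stats import chi2_contingency, spearmanr

#                 early  late
table = np.array([[ 71,    2],    # nontreponemal IgM positive
                  [145,  134]])   # nontreponemal IgM negative

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, dof = {dof}, P = {p:.2e}")

# Spearman correlation on the patient-level binary codings implied by the table.
igm   = np.repeat([1, 1, 0, 0], table.ravel())   # 1 = IgM positive
early = np.repeat([1, 0, 1, 0], table.ravel())   # 1 = early latent syphilis
rho, p_rho = spearmanr(igm, early)
print(f"Spearman rho = {rho:.2f}, P = {p_rho:.2e}")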
Construction of the Latent Syphilis Staging Model
Utilizing CLIA, TRUST, gender, age, and nontreponemal IgM antibody, we constructed a predictive nomogram model to estimate the probability of patients being in the early latent syphilis stage. As depicted in Figure 2, each of the five variables is annotated with a scale along its line segment, representing its range of values, and the length of the line segment conveys its weight in the prediction of latent syphilis staging. The distribution of patient counts across continuous and categorical variables is depicted using density plots and box plots, respectively. In the nomogram model for latent syphilis staging, gender (P = 0.015), age (P = 0.006), and nontreponemal IgM antibody (P < 0.001) were identified as independent predictors, while CLIA (P = 0.866) and TRUST (P = 0.354) were not.
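A minimal sketch of the modelling step described above, fitting a five-predictor logistic regression for "early latent syphilis" and summarising discrimination by the AUC, is given below. It is purely illustrative: the data are synthetic placeholders, the variable encodings are assumptions, the original model was built with the R packages listed in the Methods, and the rendering of the nomogram itself is omitted.

# Illustrative five-predictor logistic model (1 = early latent, 0 = late latent).
# Synthetic placeholder data; the study used R ("rms", "pROC", "regplot", ...).
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 352
X = np.column_stack([
    rng.normal(10, 3, n),      # CLIA signal (placeholder units)
    rng.integers(0, 6, n),     # TRUST titer rank (placeholder coding)
    rng.integers(0, 2, n),     # gender (0/1)
    rng.normal(45, 15, n),     # age in years
    rng.integers(0, 2, n),     # nontreponemal IgM antibody (0 = negative, 1 = positive)
])
# Placeholder outcome loosely driven by IgM status and age, for demonstration only.
linpred = -0.5 + 2.5 * X[:, 4] - 0.02 * (X[:, 3] - 45)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
pred = fit.predict(sm.add_constant(X))
print(fit.summary2().tables[1][["Coef.", "P>|z|"]])
print("apparent AUC:", round(roc_auc_score(y, pred), 3))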
18Therefore, since patients with early latent syphilis are in the active syphilis infection stage, they should be better managed clinically to minimize the risk of syphilis transmission and to impede disease progression toward more severe manifestations.In accordance with the World Health Organization (WHO) guidelines for syphilis treatment, late latent syphilis necessitates prolonged antimicrobial therapy courses compared to early latent syphilis. 4,19For individuals with undetermined infection duration, treatment often adopts an approach consistent with late latent syphilis, 1,8 which might lead to overtreatment.Overtreatment may pose unnecessary treatment risks to certain populations, and may also have public health consequences, such as causing stock-outs of the drug in resource-limited or hyperendemic areas. 20Hence, there is an urgent demand for a precise tool to effectively determine the staging of latent syphilis. Our prior investigation identified that nontreponemal IgM antibodies could serve as a viable serologic indicator for active syphilis. 14Hence, we postulated that nontreponemal IgM antibodies might help in distinguishing the stages of latent syphilis by differentiating syphilis activity.In line with this hypothesis, we assessed nontreponemal IgM antibodies in a cohort of 352 latent syphilis patients.The outcomes demonstrated that a significant majority of individuals with positive nontreponemal IgM antibodies were classified as early latent syphilis cases, while only a minute fraction were late latent syphilis cases, implying that positive nontreponemal IgM results may correlate with an inclination towards the early stage of latent syphilis.The subsequent Spearman correlation analysis definitively elucidated the association between nontreponemal IgM antibodies and the staging of latent syphilis, indicating that nontreponemal IgM antibodies could serve as a crucial tool for constructing predictive models for latent syphilis staging. Our predictive nomogram model offers a practical and effective tool for clinicians to estimate the probability of patients being in the early latent syphilis stage.By incorporating CLIA, TRUST, gender, age, and nontreponemal IgM antibodies, the model demonstrated a strong discriminative capability, as evidenced by the AUC value of 0.743.The diagnosis of latent syphilis relies on serological testing, which unfortunately cannot differentiate between early and late latent stages.Our established model similarly revealed that gender, age, and nontreponemal IgM antibodies were independent predictive factors for latent syphilis staging, while CLIA and TRUST do not hold such distinction.The calibration curve analysis, the C-index score, and the Brier score further supported the reliability and reproducibility of our nomogram model by providing compelling evidence supporting our predictive model's commendable reproducibility and reliability.The clinical decision curve and the clinical impact curve accentuated the robust clinical applicability of our predictive model, signifying its substantial utility for clinical practice.The external validation set further emphasized the reliability of these results.These findings collectively reaffirm the favorable performance of our predictive nomogram model. The predictive model has gained widespread recognition as an essential component of contemporary medical decision-making. 
21The predictive model specifically developed for the staging of latent syphilis has the potential to serve as a valuable and practical tool for clinicians to empower clinical decision-making.By accurately assessing the stage of disease progression, clinicians can tailor treatment strategies effectively, preventing further advancement and enhancing treatment efficacy, thus improving patient quality of life.For patients with latent syphilis, particularly those unsure of their infection timing, we recommend clinicians conduct nontreponemal IgM antibodies testing and utilize the established predictive model.This approach ensures accurate determination of disease stage, enabling precise therapeutic interventions and optimizing healthcare resource allocation. Several limitations should be acknowledged in the present study.First, 360 patients with latent syphilis were identified during our study, but only 352 patients were included in the construction of the model.HIV can have an impact on the course of syphilis, but due to the small number of HIV-positive patients in the study, we were unable to investigate the differences caused by HIV in the latent syphilis group, so we excluded the HIV-positive patients for the stability of the data.Second, since nontreponemal antibodies target the cardiolipin antigen, certain underlying diseases such as autoimmune diseases may cause false-positive results in nontreponemal IgM tests, which could partially affect the results.Therefore, this study excluded patients with autoimmune diseases, which may potentially lead to biased outcomes.Third, the development and validation of the prediction model relied on single-center data, although the sample sizes for both the training and validation sets met the required standards for building the predictive model, they were relatively limited.To enhance its robustness, additional validation and optimization through a prospective study involving a larger, multicenter sample will be necessary in the future. Conclusion In Conclusion, our study indicated a close correlation between nontreponemal IgM antibodies and the staging of latent syphilis.The developed predictive nomogram model based on nontreponemal IgM antibody offered a valuable clinical tool for accurate latent syphilis staging identification and therefore aiding clinicians in making informed treatment decisions.However, further validation and prospective studies are warranted to validate the model's applicability across diverse patient populations and healthcare settings. Figure 1 Figure 1 Relationship between Nontreponemal IgM Antibodies and the Staging of Latent Syphilis Patients.(A) Distribution of clinical phases among patients with different nontreponemal IgM antibodies results.(B) Nontreponemal IgM antibodies correlate with the staging of latent syphilis patients. 
Figure 2 The Nomogram Model for Predicting Latent Syphilis Staging.
Figure 3 Receiver-Operating Characteristic Curves for Evaluating the Predictive Accuracy of the Latent Syphilis Staging Model. (A) Receiver-operating characteristic curve for the model in the training set. (B) Receiver-operating characteristic curve for the model in the validation set.
Figure 4 Calibration Curve for Validating the Latent Syphilis Staging Model. (A) Calibration curve for the model in the training set. (B) Calibration curve for the model in the validation set.
Figure 5 Decision Curve Analysis and Clinical Impact Curve for Evaluating the Clinical Utility of the Latent Syphilis Staging Model. (A) Decision curve analysis of the model in the training set. (B) Clinical impact curve of the model in the training set. (C) Decision curve analysis of the model in the validation set. (D) Clinical impact curve of the model in the validation set.
Model validation was measured by the calibration curve and the Brier score. The calibration curve, generated from 1000 bootstrapped resamples, was used to assess the model's ability to provide accurate probability predictions and was further assessed with the Hosmer-Lemeshow goodness-of-fit test. A Brier score closer to zero indicates better model calibration, reflecting a stronger alignment between predictions and observed outcomes. The C-index was employed to evaluate the predictive accuracy of the model, with values nearing 1 indicating higher predictive accuracy. The calibration curve of the constructed nomogram model is illustrated in Figure 4A. The close resemblance between the apparent curve and the bias-corrected curve showed a strong fit, emphasizing the commendable reproducibility and reliability of the predictive model. Additionally, the apparent and bias-corrected curves were relatively close to the ideal curve, suggesting the favorable predictive consistency of the nomogram model. The Hosmer-Lemeshow χ² statistic of the calibration curve was 2.440 (P = 0.486).
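The calibration metrics referred to in the paragraph above can be sketched as follows. This is illustrative only: the study computed them with the R packages listed in the Methods, the predicted probabilities below are random placeholders, and the decile table is a rough stand-in for the bootstrap calibration curve and Hosmer-Lemeshow test.

# Illustrative Brier score and decile-based calibration check (placeholder data).
import numpy as np

rng = np.random.default_rng(1)
p_hat = rng.uniform(0.05, 0.95, 352)      # predicted P(early latent syphilis)
y = rng.binomial(1, p_hat)                # placeholder observed outcomes

brier = np.mean((p_hat - y) ** 2)         # closer to zero = better calibration
print(f"Brier score: {brier:.3f}")

# Compare mean predicted vs observed risk within deciles of predicted risk.
order = np.argsort(p_hat)
for decile in np.array_split(order, 10):
    print(f"predicted {p_hat[decile].mean():.2f}   observed {y[decile].mean():.2f}")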
2024-07-15T15:47:22.845Z
2024-07-01T00:00:00.000
{ "year": 2024, "sha1": "86ac26cd1b545dfc5d7601c96a04f2e4e2d8f1ef", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "c4329945c7fb7f14ec77dd744a12880ac907410a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
211729386
pes2o/s2orc
v3-fos-license
Detection of Mycobacterium tuberculosis in Dog of Assam Natural infection by Mycobacterium tuberculosis is uncommonly isolated from cases of animal tuberculosis following close, prolonged contact with infectious humans (Michel et al., 2003). Companion animals living in contact with TB patients are at great risk of exposure to this pathogen, and post mortem surveys performed in European cities in the first half of the 20th century determined the prevalence of canine TB as varying between 0.1 and 6.7% (Snider, 1971). It has also been highlighted previously that diseased companion animals can be a potential source of infection to humans (Snider, 1971).
Introduction Tuberculosis is one of India's major public health problems. According to WHO estimates, India has the world's largest tuberculosis epidemic (WHO, 2006). In such environments, epidemiological investigations of both non-clinical M. tuberculosis infection and clinical TB disease have evidenced high levels of M. tuberculosis transmission between people, and it can be expected that companion animals living in such environments will be at particular risk of infection by this pathogen. However, TB is prevalent in resource-poor settings in which sophisticated veterinary services are generally unavailable and where cases of canine TB will remain largely undetected. In addition to the lack of clinical data about canine TB, a comprehensive understanding of this disease is further limited by the absence of practical immunological tests for the diagnosis of both clinical disease and non-clinical M. tuberculosis infection in this species. These tests typically rely on the detection of antigen-specific T lymphocyte-mediated responses as surrogate markers of infection by the causative organism (de la Rua-Domenech et al., 2006). This principle is employed in the in vivo tuberculin skin test (TST), which characterises the inflammatory response to mycobacterial purified protein derivative (PPD). Similarly, the more recently described interferon-gamma (IFN-γ) release assays (IGRA) quantify the in vitro release of IFN-γ by lymphocytes stimulated by M. tuberculosis-specific antigens. Currently, no standard protocols exist for the TST in canines, as it has long been believed that this test is unreliable in dogs and that the use of regular M. tuberculosis and M. bovis PPD as TST stimuli is uninformative (Bonovska et al., 2005). The present study was undertaken to investigate the occurrence of M. tuberculosis infection in dogs by TST, IFN-γ and PCR assay, and to determine the risk of transmission of M. tuberculosis from infectious human TB patients to contact dogs.
Materials and Methods Fifty-two dogs with symptoms of harsh, chronic non-productive coughing, fever and anorexia were brought to the Teaching Veterinary Clinical Complex, Guwahati, which serves as a referral centre for disease diagnosis and treatment.
Single TST Prior to the TST, dogs were sedated with thiopental sodium (25 mg/kg, i/v) (Thiosol 1 gm, Neon Laboratories Ltd., Mumbai, India). The TST was done by 3 intradermal injections of 0.1 ml of 2TU, 5TU and 10TU of tuberculin PPD (Arkray Healthcare Ltd., Gujrat, India) in the medial aspect of the thigh. Skin thicknesses were measured at the injection sites before the intradermal injection and after 72 hrs. If the skin thickness was more than 5 mm, the test was considered positive (OIE).
IFN-γ assay Blood samples were collected aseptically for the IFN-γ assay. It was performed according to the kit procedures (Life Technologies canine IFN-γ ELISA kit). Samples were read at a wavelength of 450 nm to calculate the optical density. A sample was considered positive when the difference between the mean optical density value of the sample and that of the negative control was equal to or higher than 0.100.
Radiographic studies Chest radiographs were taken of each animal in the dorso-ventral and left recumbent positions to detect opaque images in the lung lobes, miliary lesions in the lungs and heart, and enlargement of the liver, spleen, and hilar and mesenteric lymph nodes.
Gross necropsy Carcasses were inspected following the standard procedure for any grossly visible lesion suggestive of tuberculosis. Organs and tissue samples were collected from all the carcasses for further analysis. In this study, an animal was considered positive on necropsy if 1 or more lymph nodes or other tissues contained focal or multifocal abscesses or granulomas.
Mycobacterial culture and species identification Fresh samples were macerated, decontaminated using NALC and inoculated onto Lowenstein-Jensen (LJ) medium. Briefly, approximately 1 g of tissue exhibiting grossly visible lesions was sliced, homogenized and then subjected to decontamination. The supernatant was discarded and the pellet formed was re-suspended in 300 μl of phosphate buffered saline (140 mM NaCl, 26 mM KCl, 10.0 mM Na2HPO4 and 1.7 mM KH2PO4). The re-suspended pellets were then inoculated in duplicate onto LJ slants (one incorporating glycerol and the other pyruvate). LJ slants were incubated at 37°C and observed weekly for eight weeks. Using a sterile 0.1 μl plastic loop, the re-suspended pellets were spread and fixed at 80°C (for 10 min) onto a labelled slide. The slides were stained with the modified ZN stain. Biochemical analyses were performed for species identification of mycobacteria as per standard protocols, namely the nitrate reduction test (Kubica and Wayne, 1984), the pyrazinamidase test (Wayne, 1974) and the niacin detection test (Gadre et al., 1995). DNA was isolated from the bacterial culture and PCR was done targeting the hsp65 gene, amplifying a 441 bp product, as per De Los Monteros et al. (1998).
Results and Discussion In the current study, we assayed dogs with the TST and IFN-γ assay, and necropsy tissue samples with lesions suggestive of mycobacterial infection from post-mortem dogs were examined using ZN microscopy; the results were compared with those of culture, biochemical tests and PCR. A total of 52 suspected dogs were tested using the TST and IFN-γ assay, out of which 2 dogs were found positive by TST (Fig. 1) and 3 dogs were positive by IFN-γ. Although precise determination of the sensitivity and specificity of each of the PPDs employed is not possible, it would appear that the TST results were inconsistent with those of the IFN-γ assay.
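The two positivity rules just described reduce to simple threshold checks; the following is a minimal sketch (not from the paper) that encodes them, with hypothetical readings for illustration.

# Minimal sketch of the positivity rules described above: the TST cut-off on skin
# thickness and the IFN-gamma ELISA cut-off on optical density. Thresholds follow
# the text; the readings below are hypothetical examples, not study data.

TST_CUTOFF_MM = 5.0        # skin thickness > 5 mm read at 72 h is scored positive
ELISA_CUTOFF_OD = 0.100    # sample OD minus negative-control OD >= 0.100 is positive

def tst_positive(thickness_72h_mm: float) -> bool:
    """Score a tuberculin skin test reading."""
    return thickness_72h_mm > TST_CUTOFF_MM

def ifn_gamma_positive(sample_od: float, negative_control_od: float) -> bool:
    """Score an IFN-gamma ELISA well read at 450 nm."""
    return (sample_od - negative_control_od) >= ELISA_CUTOFF_OD

# Hypothetical readings for three dogs: (skin thickness mm, sample OD, control OD)
dogs = {"dog_1": (7.2, 0.41, 0.18), "dog_2": (3.1, 0.52, 0.20), "dog_3": (2.4, 0.25, 0.19)}
for name, (thickness, od, neg) in dogs.items():
    print(name, "TST:", tst_positive(thickness), "IFN-g:", ifn_gamma_positive(od, neg))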
The IFN-γ assay is advantageous over the TST because the IFN-γ assay has been designed to be highly specific through the use of well-defined antigens, and it allows for the inclusion of positive and negative controls. Together, these findings support those of other studies which have found the TST ineffective in dogs (Bonovska et al., 2005). Moreover, X-rays were carried out in 4 suspected animals to detect pleural and pericardial effusion, ascites and hepatomegaly, diffuse radio-opaque images in lung lobes, diffuse visible masses in abdominal organs, and hilar and mesenteric lymphadenopathy. However, only one dog showed a suspected lesion in the lung (Fig. 2). Post-mortem examination was also carried out in 3 suspected tuberculosis cases brought to the Teaching Veterinary Clinical Complex (TVCC), Guwahati, Assam. Of these, one dog showed liquefactive necrosis of the liver, and the tissue sample was processed for isolation of acid-fast (ZN-positive) tubercle bacilli. The isolate was recovered from the suspected lesion and confirmed as M. tuberculosis by biochemical tests, viz. niacin production, nitrate reduction and urease production, and by PCR assay (Fig. 3 & 4). These results are well supported by Parsons et al. (2008). Though transmission of tuberculosis between humans and dogs is not well established, pet owners, veterinarians, physicians and the public should be aware of the potential for transmission. Culture and molecular assays will be helpful in understanding the dynamics of tuberculosis between humans and dogs.
2019-09-15T03:13:01.497Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "8facadc2f6bf61e06ff22e2485ddf4a24f7b411c", "oa_license": null, "oa_url": "https://www.ijcmas.com/8-5-2019/Acheenta%20G.%20Barua,%20et%20al.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7e0ccbc3f38754401deddb662c9c532c918d5edb", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [] }
4707774
pes2o/s2orc
v3-fos-license
EM011 activates a survivin-dependent apoptotic program in human non-small cell lung cancer cells Background Lung cancer remains a leading cause of cancer death among both men and women in the United States. Treatment modalities available for this malignancy are inadequate and thus new drugs with improved pharmacological profiles and superior therapeutic indices are being continually explored. Noscapinoids constitute an emerging class of anticancer agents that bind tubulin but do not significantly alter the monomer/polymer ratio of tubulin. EM011, a rationally-designed member of this class of non-toxic agents, is more potent than the lead molecule, noscapine. Results Here we report that EM011 inhibited proliferation of a comprehensive panel of lung cancer cells with IC50's ranging from 4-50 μM. In A549 human non-small cell lung cancer cells, the antiproliferative activity was mediated through blockage of cell-cycle progression by induction of a transient but robust mitotic arrest accompanied by activation of the spindle assembly checkpoint. The mitotically-arrested A549 cells then override the activated mitotic checkpoint and aberrantly exit mitosis without cytokinesis resulting in pseudo G1-like multinucleated cells that either succumb directly to apoptosis or continue another round of the cell-cycle. The accumulated enormous DNA perhaps acts as genotoxic stress to trigger cell death. EM011-induced apoptotic cell death in A549 cells was associated with a decrease of the Bcl2/BAX ratio, activation of caspase-3 and cleavage of PARP. Furthermore, EM011 induced downregulation of survivin expression over time of treatment. Abrogation of survivin led to an increase of cell death whereas, overexpression caused decreased apoptosis. Conclusion These in vitro data suggest that EM011 mediates antiproliferative and proapoptotic activity in non-small cell A549 lung cancer cells by impeding cell-cycle progression and attenuating antiapoptotic signaling circuitries (viz. Bcl2, survivin). The study provides evidence for the potential usefulness of EM011 in chemotherapy of lung cancer. Background Lung cancer is a leading cause of death worldwide. Nonsmall cell lung cancer (NSCLC) accounts for ~80-85% of all cases of lung cancer, and ~45% of patients present with stage IIIA/B disease [1]. Besides the metastatic nature of this disease, drug resistance that emerges upon prolonged treatment with particular drug/s has been responsible for poor survival statistics, and the overall scenario emphasizes need for effective and well-tolerated treatment regimens. Even with the best currently-available treatment, lung cancer can only be cured at its earliest stage, and the 5-year survival rate is a low 5 percent. Although many traditional cytotoxics have been used as monotherapy in NSCLC, including vindesine, docetaxel, carboplatin, etoposide, ifosfamide, cyclophosphamide, vincristine, mitomycin and cisplatin [2], these drugs produce only small improvements, and several debilitating toxicities significantly compromise the quality of life and decrease survival. Thus, the need for development of more effective therapeutic strategies for NSCLC that offer improved pharmacological profiles and superior therapeutic indices is crucial. The mitotic spindle, a highly evolved elegant structure that orchestrates faithful chromosome segregation during cell division, is a pharmaceutically validated target for anticancer therapy [3,4]. 
Since dynamic microtubules that compose the mitotic spindle have a critical role in cell division, various microtubule inhibitors have been developed as successful anticancer drugs. Two major classes of microtubule-interfering agents are well recognized in the clinic today. They comprise the taxanes (represented by paclitaxel, docetaxel etc.) that overpolymerize and bundle microtubules, and the vinca alkaloids (typified by vinblastine, vincristine, vinflunine etc.) that depolymerize microtubules. Several of these microtubule depolymerizing agents have been widely employed for the treatment of NSCLC [5,6]. However, due to the extreme effects of these drugs on microtubules, critical physiological functions that microtubules perform, such as intracellular transport, are compromised (reviewed in [7]). In addition, these microtubule inhibitors act on both proliferating and postmitotic cells and thus exhibit microtubule-dependent side effects, including peripheral neuropathy [8,9]. The brominated noscapine analog, EM011, is more active than the parent noscapine, as reported by the 60-cell line anticancer screen conducted by the Developmental Therapeutics Program (DTP) at the National Institutes of Health [19]. EM011 retains the tubulin-binding capacity and preserves the non-toxic attributes of noscapine [13][14][15]17,18]. Earlier reports have shown that EM011 inhibited growth of pgp-and MRP-overexpressing human lymphoma xenografts implanted in nude mice [14,17]. In this study, we wished to evaluate whether or not EM011 that shows potent in vitro and in vivo anticancer activity in several cancer types, is useful for treating lung cancer. Our results demonstrate that EM011 inhibited proliferation of a variety of human lung cancer cells with an IC 50 ranging from 4-50 μM. A robust mitotic arrest through activation of the spindle assembly checkpoint was observed upon drug-treatment of A549 cells. Mitotically-arrested cells then exited mitosis without cytokinesis in a phenomenon called mitotic slippage into a pseudo G1-like interphase state and subsequently activated apoptotic cell-death pathways. However, a small percentage of these mitotically-slipped tetraploid cells entered another round of cell-cycle and thus accumulated massive DNA amounts that possibly trigger apoptosis due to genotoxic stress. Furthermore, some multinucleate cells that are perhaps 'apoptosis-reluctant' undergo aberrant cell divisions into several (more than 2) cells suggesting induction of aneuploidy. At the molecular level, EM011 downregulated survivin, an important member of the inhibitor of apoptosis (IAP) family of proteins. Abrogation of survivin by smallinterfering RNA led to an increased sensitivity of A549 cells towards EM011-induced cell death. These results suggest the potential usefulness of EM011 in the management of lung cancer and warrant further evaluation. In vitro cell proliferation assay Lung cancer cells were seeded in 96-well plates at a density of 5 × 10 3 cells/well followed by next day treatment with increasing gradient concentrations of EM011 ranging from 10 nm to 100 mM. After 48 hrs of drug treatment, cells were fixed with 50% trichloroacetic acid and stained with 0.4% sulforhodamine B (SRB) dissolved in 1% acetic acid. Cells were then washed with 1% acetic acid to remove the unbound dye. Essentially, the SRB assay measures cell density by quantitating colored SRB bound to cellular proteins fixed to the plates by tricholoroacetate. 
The protein-bound dye was extracted with 10 mM Tris-base to determine the absorbance at 564 nm wavelength [20,21]. The percentage of cell survival as a function of drug concentration was then plotted to determine IC 50 values (drug concentration needed to prevent cell proliferation by 50%). Cell-cycle analysis A549 cells were seeded in culture dishes and grown until 70% confluence. The medium was then replaced with new medium containing either vehicle (0.01% DMSO) or 25 μM EM011 for 6, 12, 18, 24, 36 and 48 hrs. After the incubation period, cells were centrifuged, washed twice with ice-cold PBS, and fixed in 70% ethanol. Tubes containing the cell pellets were stored at 4°C for at least 24 hrs. Cells were then centrifuged at 100 × g for 10 minutes, and the supernatant was discarded. Pellets were washed twice with PBS and stained with PI in the presence of RNase A for 45 minutes in dark. Samples were analyzed on a FACSCalibur flow-cytometer (Beckman Coulter, Inc., Fullerton, CA). Quantitation of mitotic cells by MPM-2 staining A549 cells were collected and fixed with 70% ethanol. Cells were treated with the blocking solution (PBS containing 1% Triton X-100 and 1% BSA) for 1 hr, followed by incubation with MPM-2 mouse monoclonal antibody (Upstate Biotechnology, Lake Placid, NY) in blocking solution for 1 hr. Cells were then incubated with Alexa 488-conjugated anti-mouse secondary antibody followed by PI for DNA staining, and were read flow-cytometrically. Western blot analysis Proteins were resolved by polyacrylamide gel-electrophoresis and transferred onto polyvinylidene difluoride membranes (Millipore). Membranes were blocked in Trisbuffered saline containing 0.2% Tween-20 and 5% fatfree dry milk and incubated first with primary antibodies and then with horseradish peroxidase-conjugated secondary antibodies. Specific proteins were visualized with enhanced chemiluminescence detection reagent according to the manufacturer's instructions (Pierce Biotechnology). Terminal deoxynucleotidyl-transferase-mediated dUTP nick-end labeling (TUNEL) assay for apoptosis DNA strand breaks were identified using the TUNEL assay as described [15]. Briefly, A549 cells treated with 25 μM EM011 for 72 hrs were washed with ice-cold PBS, fixed in 1% paraformaldehyde, and 3'-DNA ends were detected using the APO-BrdU TUNEL Assay Kit (Molecular Probes, Eugene, OR). 10 6 cells were incubated with 25 μM EM011 for 0, 12, 24, 48 and 72 hrs. Caspase-3 activity was measured by cleavage of the small synthetic substrate Z-DEVD-aminoluciferin (CaspaseGloTM 3/7 Assay System Kit, Promega, Madison, WI) that becomes luminogenic upon cleavage. The luminescent signal, which is directly proportional to the amount of caspase-3 activity, was measured in a luminescence plate reader. EM011 inhibits growth of a wide array of lung cancer cells Noscapinoids represent a new generation of anticancer agents that modulate microtubule dynamics but do not significantly alter the total polymer mass of tubulin. Earlier reports have shown that EM011, a brominated noscapine analog, has potent antiproliferative and proapoptotic activity in hormone-refractory breast cancer and drug-resistant lymphoma models [14,15,17]. To evaluate the efficacy of EM011 in lung cancer cells, we first examined the ability of EM011 to inhibit cellular proliferation in a comprehensive panel of lung cancer cells with variable but well-characterized genotypes. 
The panel included H322 M, H266, H266B, H23, H460, HOP-92, HOP-62, H1299, A549, H522, H157, and H1792 lung cancer cells. These 12 different lung cancer cell lines were treated with gradient concentrations of EM011 and the extent of cell proliferation was measured by the SRB assay, which is based on the stoichiometric binding of SRB dye to all cellular protein components [21]. As shown in Figure 1A, EM011 effectively suppressed cellular proliferation of lung cancer cells. The non-small cell lung cancer cell lines included in the study had well-characterized p53 status, namely, wild type (represented by H226, H460, A549), null (such as H1299) and mutant (H322 M, H23, H522, H157, H1792). Although the parent molecule, noscapine, has been previously shown to induce a p53-dependent apoptosis in colon cancer cells [22], these data show that the half-maximal growth inhibitory concentrations of EM011 did not correlate with the p53 status in lung cancer cells. For example, H1299 that lacks endogenous p53 showed similar sensitivity as A549 (wild type p53) or H522 (mutant p53) cells. The IC 50 values for most lung cancer lines studied were in the range of 4-10 μM. The relatively less-sensitive lung cancer cells included HOP-92 (IC 50 = ~12.5 μM), HOP-62 (IC 50 = ~19.5 μM), H460 (IC 50 = ~28 μM) and H1792 (IC 50 = ~50 μM). Since the A549 human epithelial cell line is widely-accepted as a standardized experimental model with biological properties of alveolar epithelial type II cells, we chose these cells to conduct further studies to gain insights into cellular and molecular mechanisms of EM011 action. Phase-contrast microscopic analysis of cell morphology showed that while vehicle-treated A549 cells proliferated normally, EM011 treatment impaired their proliferation capacity ( Figure 1B). Cells first appeared rounded-up (24 hrs treatment) followed by a fragmented morphology (72 hrs treatment) ( Figure 1B). EM011 transiently blocks cell-cycle progression in the G2/ M phase To investigate the precise mechanism responsible for EM011-mediated antiproliferative effects, we examined the cell-cycle distribution profile of EM011-treated A549 lung cancer cells over time. A flow-cytometric assay using EM011 suppresses proliferation of human lung cancer cells the DNA intercalator dye, propidium iodide (PI), was utilized to monitor cell-cycle progression on the basis of status of DNA amounts. Figure 2A shows the timedependent effects of EM011 on cell-cycle profile of A549 cells in a three-dimensional representation. The x-axis shows amount of DNA depicting different phases of the cell-cycle. While 2N and 4N DNA complements represent G0/G1 and G2/M cell populations respectively, S phase is characterized by variable DNA (between 2N and 4N) and sub-G1 population is usually indicative of degraded DNA, a hallmark of apoptosis. The y-axis represents the number of cells containing that amount of DNA and the z-axis shows the time of drug-exposure. 0 hr depicts cell-cycle profile of control cells. As we go along in time, EM011 treatment caused a significant inhibition of cell-cycle progression in A549 cells resulting in an accumulation of cells in the G2/M phase compared to control cells ( Figure 2B). The G2/M population began to sharply rise as early as 6 hrs (~65%) post-treatment, achieved a maximum at 12 hrs (~80%) and was still about ~60% at 24 hrs of EM011 exposure ( Figure 2B). This increased population of cells with 4N DNA perhaps correlated with concomitant losses from G0/G1 phases ( Figure 2B). 
Following this, a disappearance of the G2/M population and an emergence of a EM011 perturbs cell-cycle progression of A549 lung cancer cells characteristic hypodiploid DNA content peak (sub-G1) was observed beginning 24 hrs, indicative of apoptotic cells ( Figure 2B). The sub-G1 population at 36 hrs (~15%) increased to ~32% at 48 hrs post-treatment. Figure 2C is a bar-graphical representation that provides an overview of the percent cell population in various cell-cycle phases upon EM011 treatment over time. These results suggest that EM011-treated A549 cells arrested in the G2/M phase preceding cell death. EM011 treatment selectively accumulates cells in mitosis and not G2 phase Although the cell-cycle experiment using PI for DNA readout is extremely useful for an overview of the percent cell population in various cell-cycle phases, it cannot dissect out differences between G2 and M phases as both have 4N DNA amounts. Thus, we next examined the specific cellcycle phase (G2 or M) in which the cells arrested upon EM011 treatment using a mitosis-specific marker, MPM-2. The MPM-2 positive (mitotic) population increased tõ 48% (47.4 ± 2.3%) in 6 hrs and peaked to ~68% (67.6 ± 1.9%) at 12 hrs after which a decrease in the MPM-2 population was observed ( Figure 2D). The MPM-2 positive population was still as high as ~43% (43.2 ± 2.1%) at 24 hrs. However, the mitotic population declined sharply tõ 4% (3.8 ± 0.5%) at 36 hrs ( Figure 2E). EM011 perturbs spindle architecture and interferes with bipolar spindle formation We next visualized cellular microtubules and the spindle apparatus that is composed of microtubules in the absence and presence of EM011 to detect cellular events that were perturbed by drug action (Figure 3). The nuclear morphology of A549 cells was also examined over time of treatment. Immunofluorescence confocal microscopy was employed to examine drug effects-microtubules were A. Confocal immunomicrographs of microtubules (green) and DNA (red) in vehicle (0.01% DMSO) treated A549 cells display-ing hallmarks of a typical cell division process in a cell-cycle, in particular, interphase (a), metaphase (b), early anaphase (c), late anaphase (d), telophase (e), and cytokinesis (f, g) and cytokinesis (f, g). B. In contrast, EM011 treatment arrested A549 cells at mitosis, impaired bipolar spindle formation, induced mitotic slippage, aneuploidy and apoptosis. EM011 treatment did not affect the radial array network of microtubules until 6 hrs of exposure (0 hr, (a); 6 hrs (b)). Aberrantly arrested cells with multipolar spindles were seen at 12 hrs (c) and 24 hrs (d) post-treatment. Cells started to exit mitosis without cell division beginning at about 30 hrs post-treatment (e). Thereafter, at 36-40 hrs post-treatment, several aberrant phenotypes were observed suggesting variability in cellular fate upon drug treatment. Multinucleate cells with small fragments of DNA indicating apoptotic bodies suggested cell death from G1-like tetraploid state (f). Some cells showed signs of DNA fragmentation while in mitosis suggesting mitotic death (g, h). However, some cells were seen to undergo asymmetric chaotic cytokinesis with triple midbody (i) and perhaps, led to the generation of aneuploid cells (j, k). Occasionally, aberrant separation into more than two abnormal cells (even multinucleated) were seen (l, m). At 48 hrs of drug exposure, several fragmented DNA pieces and clusters of small apoptotic bodies were observed (n). 
stained green (FITC-labeled secondary antibody) and DNA was stained red using PI. Figure 3A represents confocal micrographs of A549 cells that were treated with vehicle (0.01% DMSO). Interphase cells showed normal radial arrays of microtubules ( Figure 3A, a). The doubling time of A549 cells was observed to be about 22-24 hrs. The mitotic population, a small percentage (~6-10%) of the total number of cells in a cell-cycle at any given time, displayed hallmark features of a typical mitotic process. Congression of chromosomes at the metaphase plate followed by anaphase onset, a characteristic telophase and cytokinesis were evident (Fig 3A, b-g). Although EM011treated cells at 0 and 6 hrs showed normal microtubule arrays ( Figure 3B a, b), mitotically-arrested cells were visible at 12 hrs post-treatment ( Figure 3B, c) and continued to accumulate until 24 hrs ( Figure 3B, d). The arrested cells did not show normal bipolar spindles, rather displayed aberrant multipolar spindles. Predominantly, the number of spindle poles observed was 3 or 4 ( Fig 3B, c, d). The mitotic block triggered by the spindle assembly checkpoint is not permanent, and cells can exit mitosis aberrantly by a process known as mitotic slippage (also known as adaptation) [23]. At 30 hrs post-treatment, there was an emergence of G1-like interphase cells that were multinucleated. Most likely, these multinucleated cells resulted from a mitotic exit, that is, cells slipped out of an abnormal multipolar mitosis without cell division ( Figure 3B, e). Upon mitotic slippage, cells enter pseudo G1-like interphase with tetraploid (4N) cells. We observed that these multinucleate cells continued to accumulate until 36-40 hrs. At 40 hrs post-treatment, several phenotypes were observed, perhaps arising from the variability in cellular fates within the same cell line. This was in conformity with recent reports that suggest the existence of a high degree of inter-and intra-line variability in cellular responses to chemotherapeutic drug treatment [24,25]. Multinucleated cells with small pieces of fragmented DNA, reminiscent of apoptotic bodies, were also seen at about 40 hrs post-treatment suggesting initiation of cell death directly from the G1-like multinucleate state ( Figure 3B, f). Perhaps, there was a small percentage of mitotic cells that were disintegrating, suggesting death directly from a prolonged mitotic arrest ( Figure 3B, g, h). Interestingly, at 40-48 hrs of drug exposure, some multinucleate A549 cells (10-15%) were seen to pursue abnormal cytokinesis with triple or quadruple midbodies ( Figure 3B, i). These aberrant cell divisions result into several (more than two) cells that are usually aneuploid ( Figure 3B, j, k). Asymmetric distribution of cytoplasm and chromatin material was observable showing separation into more than two abnormal cells (even multinucleate) suggesting high degree of aneuploidy caused by drug treatment (Figure 3B, l, m). Asymmetric cytokinesis following multipolar mitosis generated three or more aneuploid daughter cells that may perhaps be inviable. In addition, several fragmented DNA pieces and clusters of small apoptotic bodies were seen at 48 hrs post-treatment ( Figure 3B, n). EM011 treatment induced mitotic exit after the transient mitotic block We next quantitated the cell population that slipped out of mitosis without cytokinesis using dual color-flow cytometry (MPM-2 and PI). 
There was a significant drop in the percentage of MPM-2 positive cells from ~43% (44 ± 3%) at 24 hrs to ~4% (4.2 ± 0.5%) at 36 hrs of EM011 treatment, indicating that cells had slipped out of mitosis (Figure 4A). After mitotic exit, this population of cells then appears as the MPM-2 negative tetraploid (4N) population that increased from ~30% (30.4 ± 2.2%) at 24 hrs to ~50% (49.6 ± 3.2%) at 36 hrs (Figure 4A). These flow-cytometric data correlated with confocal microscopic observations, lending support to the fact that cells with aberrant multipolar spindles stall transiently in mitosis, followed by decondensation of chromatin material and an exit from the mitotic phase. Since cells have failed to successfully progress through mitosis to execute cytokinesis, they have 4N DNA amounts and can be seen as huge multinucleated cells. Figure 4B depicts representative cells from both states, mitotically-arrested (left panel) and pseudo G1-like multinucleate state (right panel).
Figure 4 A. Quantitative flow-cytometric representation of mitotic exit using two-color staining (MPM-2 and PI).
EM011 treatment alters expression of cell-cycle and apoptosis regulatory molecules EM011 activates spindle assembly checkpoint in A549 cells Most antimitotics are known to activate the spindle assembly checkpoint [23]. BubR1, a component of the mitotic checkpoint, is phosphorylated during spindle checkpoint activation [26]. Immunoblotting data showed that EM011 activated the mitotic checkpoint as detected by phosphorylation of BubR1 at 24 hrs of drug treatment (Figure 5A). Intriguingly, the BubR1 signal faded at 48 hrs and reappeared at 72 hrs post-treatment (Figure 5A). Probing with the MPM-2 antibody, a mitosis-specific marker that recognizes phosphorylation of Ser/Thr-Pro epitopes, also showed re-emergence of positive MPM-2 signals at 72 hrs post-treatment (Figure 5A). The BubR1 phosphorylation at 24 hrs indicated the initial activation of the spindle assembly checkpoint that accompanied drug-induced mitotic arrest. Since mitotic slippage occurs after prolonged mitotic arrest (beyond 24 hrs), it is reasonable to speculate that the absence of MPM-2 signals indicates exit from mitosis at 48 hrs. However, the absence of BubR1 signals at 48 hrs is intriguing because mitotic slippage from aberrant mitosis usually takes place by overriding an activated checkpoint [23,26]. Both microscopic and flow-cytometric data show that A549 cells exited mitosis after a prolonged mitotic arrest upon activation of the spindle checkpoint. To investigate the re-emergence of BubR1 and MPM-2 signals, A549 cells were drug-treated for 72 hrs and stained using PI and MPM-2, followed by dual-color flow-cytometric analyses (Figure 5B). The emergence of an MPM-2 positive cell population (4.9 ± 0.6%) with 8N DNA amounts suggested that a sub-population of 'mitotically slipped-out' cells perhaps continue the cell-cycle by entering another round of DNA replication and mitosis. Parallel immunostaining experiments revealed that 72 hrs drug-treated cells showed some highly multipolar chaotic mitotic figures (Figure 5C, left) as well as excessively huge G1-like multinucleated/multilobed cells (Figure 5C, right). It is noteworthy that BubR1 and MPM-2 signals reappear at 72 hrs, and at this point of time the cells are much larger in size compared with the first mitosis, which is consistent with cell growth without cell division.
Figure 5 EM011 induces mitotic arrest by the activation of the spindle checkpoint.
We speculate that the massive cells with excessively huge DNA contents perhaps eventually trigger apoptosis due to genotoxic stress.
EM011 downregulates survivin expression in A549 cells Survivin, an antiapoptotic member of the inhibitor of apoptosis (IAP) family, is known to block apoptosis by inhibiting caspases and antagonizing mitochondria-dependent apoptosis [27]. A decrease in survivin levels plays an important role in human NSCLC and, in agreement with this notion, several studies have concluded that there is a strong correlation between increased survivin levels and progression of human lung cancer [28][29][30]. Survivin is expressed at high levels in many types of cancers, but not in normal tissues from the same organs [31]. We thus asked if EM011 alters survivin levels as part of its anti-proliferative and pro-apoptotic action. To this end, the expression levels of survivin were examined upon EM011 treatment over time. 25 μM EM011 caused a decline in survivin levels as early as 24 hrs and significantly low levels were seen at 72 hrs post treatment (Figure 6A). In contrast, H1792 cells, with a high IC 50 value (~50 μM) for EM011, did not show a change in survivin levels over time of drug treatment (see Additional File 1), suggesting that survivin may play a role in conferring resistance to EM011-induced apoptosis.
Figure 6 A. Immunoblot analysis of A549 cells treated with EM011 for 0, 24, 48 and 72 hrs. After the indicated times, cells were lysed and total protein was extracted, separated by SDS-PAGE, electrotransferred onto polyvinylidene difluoride membrane, and subjected to immunoblotting with the indicated primary antibodies followed by incubation with horseradish peroxidase-conjugated secondary antibodies. β-actin was used as a loading control. B. Knock-down of survivin using survivin siRNA enhances the apoptotic response of A549 cells to EM011 treatment. A549 cells were transfected with a plasmid encoding survivin siRNA and were then subjected to EM011 exposure for 0, 24, 48 and 72 hrs.
Knock-down of endogenous survivin by survivin siRNA sensitizes A549 cells to EM011-induced apoptosis If survivin is important to EM011-induced apoptosis, knock-down of endogenous survivin expression should render A549 cells more susceptible to EM011 treatment. Thus, to determine whether downregulation of survivin contributes to EM011-induced apoptosis, we used a plasmid construct encoding survivin-specific siRNA to selectively knock down endogenous survivin in A549 cells. After the knock-down, cells were drug-treated for 24, 48 and 72 hrs. Downregulation of survivin sensitized A549 cells to EM011-induced apoptosis, whereas control siRNA had no effect. Figure 6B shows representative cell-cycle profiles indicating that abrogation of survivin led to an increased sensitivity of lung cancer cells towards apoptosis upon drug treatment. This is evident from the differences in percent sub-G1 populations in cells that were EM011-treated for 48 or 72 hrs after survivin knock-down (~40% at 48 hrs and ~57% at 72 hrs) versus cells subjected to control siRNA transfections (~30% at 48 hrs and ~45% at 72 hrs) prior to EM011 exposure for the matched time points (Figure 6D). Immunoblots showing the knockdown efficiency of survivin siRNA at 24 hrs post-transfection, compared to survivin levels in cells treated with control siRNA for the same time, are depicted in Figure 6C.
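The sub-G1 percentages quoted above come from gating DNA-content histograms of PI-stained cells; the following is an illustrative Python sketch (not the authors' analysis pipeline) of that kind of gating, with simulated fluorescence values and hypothetical gate boundaries. Real analyses set gates on calibrated control samples.

# Illustrative sketch only: extracting sub-G1 and other cell-cycle fractions from
# propidium-iodide DNA-content measurements with simple gates.
import numpy as np

rng = np.random.default_rng(1)

# Simulated PI fluorescence (arbitrary units): 2N peak near 100, 4N peak near 200,
# plus a degraded-DNA (sub-G1) tail, standing in for one acquired sample.
events = np.concatenate([
    rng.normal(100, 8, 4000),    # G0/G1 (2N)
    rng.normal(200, 12, 2500),   # G2/M (4N)
    rng.uniform(110, 190, 1500), # S phase (between 2N and 4N)
    rng.uniform(20, 70, 2000),   # sub-G1 (fragmented DNA, apoptotic)
])

gates = {"sub-G1": (0, 75), "G0/G1": (75, 125), "S": (125, 175), "G2/M": (175, 250)}
for phase, (lo, hi) in gates.items():
    frac = np.mean((events >= lo) & (events < hi)) * 100
    print(f"{phase}: {frac:.1f}% of events")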
Overexpression of survivin renders A549 cells resistant to EM011induced apoptosis To further confirm the role of survivin in EM011-induced apoptosis, we overexpressed survivin in A549 cells using a survivin encoding plasmid construct. Ectopic overexpression of survivin protected cells against EM011-induced apoptosis, whereas, control plasmid (empty vector, EV) had no effects. Figure 6E depicts cell-cycle profiles showing in pairs, the control plasmid (EV) and survivin plasmid transfected (T) cells, which were drug-treated for 24, 48 and 72 hrs post-transfection. There was a reduction in percent sub-G1 population in cells that were transfected with the survivin plasmid (~27% at 48 hrs) compared to when cells were transfected with a control plasmid construct (~35% at 48 hrs). The protective effects of survivin overexpression were also visible at 72 hrs of transfection (~32% sub-G1 population) compared to control plasmid (~50% sub-G1 population) ( Figure 6G). However, among the control and survivin plasmid transfected cells, differences in sub-G1 population at 24 hrs post drug-treatment were not statistically significant. Immunoblots showing ectopic overexpression efficiency of the survivin plasmid at 24 hrs post-transfection compared to survivin levels in cells treated with empty pcDNA3 vector are depicted in Figure 6F. EM011 decreases Bcl2/BAX ratio in A549 cells It has been proposed that prolonged mitotic arrest stimulates the phosphorylation of Bcl2, thereby resulting in its inactivation [32][33][34]. Bcl2 in the unphosphorylated form complexes with BAX, and thus its phosphorylation releases BAX from the Bcl2-BAX complex [35][36][37]. Unbound BAX translocates from cytosol to the mitochondrial membrane to signal triggering of the downstream apoptotic cascade, such as release of cytochrome c and activation of executionery caspases [35][36][37]. EM011 activates mitochondrially-mediated intrinsic apoptotic pathway in breast cancer and lymphoma cells [14,15,17]. To examine the time-dependent effects of EM011 on Bcl2 proteins in A549 cells, we analyzed changes in the expression levels of proapoptotic BAX and antiapoptotic Bcl2. EM011 induced mitotic arrest in A549 cells was accompanied by the hyperphosphorylation of Bcl2 and there occurred an increase of BAX protein levels in a timedependent manner ( Figure 6A). This led to a decrease in the antiapoptotic/proapoptotic (Bcl2/BAX) ratio as a function of time of treatment ( Figure 6A) suggesting involvement of Bcl2 family members in EM011-induced cell death. EM011 causes activation of caspase-3 and cleavage of its downstream target, PARP Upon cleavage by upstream proteases in an intracellular cascade, the activation of caspase-3 is considered as a hallmark of the apoptotic process. The levels of cleaved active subunits of executioner caspase-3 were evaluated by immunoblotting cell lysates following EM011 treatment for 0, 24, 48 and 72 hrs. EM011 caused a significant increase in activated caspase-3 following 72 hrs of EM011 exposure ( Figure 6A). To confirm the involvement of caspase-3, the active form of the cysteine protease was monitored using a small conserved modified peptide substrate that becomes luminogenic upon cleavage. EM011 treatment caused a time-dependent activation of caspase-3 in A549 cells ( Figure 6H). Next, we examined the activationmediated cleavage of caspase-3 substrate, poly(ADPribose) polymerase (PARP), which is a reliable marker of apoptosis. 
Utilizing their cysteine protease activity, caspases separate N-terminal DNA-binding domain of PARP from its C-terminal catalytic domain (89 kDa) [38]. A time-dependent increase in cleaved PARP was observed upon probing with a cleaved PARP specific antibody (Figure 6A). Overall, these results show activation of caspase-3 and PARP cleavage suggesting that EM011 induced apoptotic cell death in A549 cells. Figure 7A shows DAPI-stained micrographs with fragmented nuclei reminiscent of apoptotic bodies at 72 hrs post-treatment, emphasizing the incidence of apoptosis. EM011 triggers apoptosis as seen by an increase of TUNEL-positive cells The percentage of cells with condensed and fragmented nuclei increased with the time of drug treatment (data not shown). To further validate apoptosis, the increase in the concentration of 3'-DNA ends due to fragmentation was quantified using a flow-cytometry based TUNEL assay. EM011-treated A549 cells showed ~44% TUNEL-positive cells ( Figure 7B) at 72 hrs of exposure compared to control cells, suggesting extensive DNA cleavage. Conclusion These data show that EM011 is anti-proliferative and proapoptotic in lung cancer cells. The inhibition of cellular proliferation is perhaps due to induction of a robust transient mitotic arrest in A549 cells. This is followed by an abnormal exit of cells from mitosis without cytokinesis into a pseudo G1-like multinucleate state. These abnormally huge cells perhaps trigger activation of apoptotic cell death program that is mitochondrially-driven and executed through the activated caspase machinery by the cleavage of downstream targets such as PARP. The A549 cell death program is also mediated through downregulation of survivin, in that knock-down of survivin sensitized cells to undergo apoptosis whereas overexpression of survivin reduced EM011-induced apoptosis. This study identifies the potential usefulness of EM011, a non-toxic microtubule-modulating agent, in the management of lung cancer.
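As a closing illustration of how half-maximal inhibitory concentrations of the kind reported for the SRB assay above (IC50 values of roughly 4-50 μM across the panel) can be estimated from viability-versus-concentration data, here is a minimal Python sketch fitting a four-parameter logistic dose-response curve. The concentrations, viability values and parameter bounds are synthetic placeholders, not the study's measurements.

# Illustrative sketch only: estimating an IC50 from SRB-type viability data by
# fitting a four-parameter logistic dose-response curve.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: viability as a function of drug concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.01, 0.1, 1, 3, 10, 30, 100])            # μM, hypothetical
viability = np.array([99, 97, 88, 70, 38, 15, 6]) / 100    # fraction of control

p0 = [0.05, 1.0, 5.0, 1.0]  # initial guesses: bottom, top, IC50 (μM), Hill slope
params, _ = curve_fit(four_pl, conc, viability, p0=p0,
                      bounds=([0, 0.5, 0.01, 0.2], [0.5, 1.2, 500, 5]))
print(f"Estimated IC50 of about {params[2]:.1f} μM (Hill slope {params[3]:.2f})")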
2014-10-01T00:00:00.000Z
2009-10-30T00:00:00.000
{ "year": 2009, "sha1": "562dc4475db58b45da38ad1a949ac31f543f69af", "oa_license": "CCBY", "oa_url": "https://molecular-cancer.biomedcentral.com/track/pdf/10.1186/1476-4598-8-93", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "562dc4475db58b45da38ad1a949ac31f543f69af", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
248025428
pes2o/s2orc
v3-fos-license
EUS-FNA Diagnosis of a Metastatic Adult Granulosa Cell Tumor in the Stomach Abstract Granulosa cell tumors are uncommon ovarian neoplasms, predominantly of the adult type (AGCT). In this report, we present a rare case of a patient with metastatic AGCT to the stomach diagnosed with endoscopic ultrasound–guided fine-needle aspiration (EUS-FNA). A 61-year-old woman without a history of AGCT underwent both a vaginal and an abdominal ultrasound that showed a solid and cystic ovarian mass along with a solid mass in the gastric antral wall. Subsequently, an EUS-FNA was performed to assess the gastric lesion. Cytologic findings showed high cellularity, and the groups of neoplastic cells invaded the muscle layer of the stomach. Notably, these cells formed Call-Exner bodies, whereas some nuclei exhibited nuclear grooves. Immunohistochemistry was performed, revealing positivity for α-inhibin, calretinin, and CD56 in the neoplastic cells, whereas chromogranin, synaptophysin, CD117, and DOG1 were negative. The combination of clinical presentation, radiology, cytomorphology, and immunohistochemistry could facilitate the diagnosis of metastatic AGCT and the management of such patients. Granulosa cell tumors are uncommon ovarian neoplasms, comprising approximately 2% to 5% of all ovarian cancers. They belong to the family of sex cord stromal tumors and are predominantly adult granulosa cell tumors (AGCT; 95%), rather than the juvenile type (5%). Studies have shown that AGCT present clinically with symptoms and signs caused by the presence of an adnexal mass, including abdominal pain or swelling. [1][2][3] Because AGCT are hormonally active, they secrete high levels of estrogen, often resulting in abnormal vaginal bleeding, and they pose a higher risk of patients developing endometrial hyperplasia and cancer. 3,4 They are considered low-grade and indolent ovarian malignancies that grow slowly, and staging is their most important prognostic factor. Most patients are diagnosed at stage I, exhibiting a favorable prognosis. 1-3 Of interest, 5- and 10-year survival rates of patients with AGCT have been reported to be 98% and 84%, respectively. Therefore, these patients exhibit a much better prognosis than patients with other more common ovarian cancers, such as serous ovarian carcinomas. 5 However, patients with AGCT require a long-term follow-up because this neoplasm may behave unpredictably and exhibit aggressive behavior in the long term. Notably, AGCT can recur or metastasize even many years after their initial detection; this may happen in patients initially diagnosed at stage I as well.
Metastases of AGCT are mostly confined to the area of the pelvis and abdominal cavity (eg, peritoneum or omentum), yet more distant sites have also been reported, such as the liver, lung, and bones. 1,2,6 Fine-needle aspiration (FNA) is a modality that has been successfully utilized in the diagnosis of metastatic AGCT. 2,[7][8][9][10][11] In this report, we present a case of a patient with metastatic AGCT to the stomach diagnosed with endoscopic ultrasound-guided (EUS) FNA. This is the first cytomorphologic description in the literature of a metastasis to this site diagnosed with this procedure. Case Description A 61-year-old woman without a history of AGCT underwent both a vaginal and an abdominal ultrasound, which showed a solid and cystic ovarian mass and a solid mass in the gastric antral wall, respectively. Subsequently, an EUS-FNA was performed to assess the gastric lesion. The material received was solely used to prepare a cell block. Subsequent H&E-stained slides showed high cellularity. The neoplastic cells were mostly arranged in syncytial groups invading the muscle layer of the stomach and also exhibited a tendency to form rosettelike structures (FIGURE 1). Neoplastic nuclei showed a monotonous appearance with ovoid shape and minimal atypia, and some of them exhibited nuclear grooves. Cytoplasm was of moderate amount, and the cell borders were ill-defined. No necrosis was found. The combination of clinical history (presence of a solid and cystic ovarian mass) and cytomorphology raised the possibility of an AGCT metastasis to the stomach. In this situation, the rosette-like structures would represent Call-Exner bodies. In the latter, the neoplastic granulosa cells are arranged around small lumens containing eosinophilic material, as we saw in our patient (FIGURE 1). Immunohistochemistry was performed on the cell-block material. The neoplastic cells were positive for α-inhibin, calretinin, and CD56 (FIGURE 2). In contrast, they were negative for chromogranin, synaptophysin, CD117, and DOG1. Therefore, by combining the radiologic, cytomorphologic, and immunohistochemical findings, we developed a diagnosis of a metastatic AGCT to the stomach. Discussion Research has shown that AGCT are low-grade ovarian malignancies derived from the granulosa cells of the ovarian follicles. Although most exhibit an indolent behavior, long-term follow-up is required because a few can recur or metastasize even many years after the initial diagnosis. 1,2,6 Clinical history may be unavailable to the pathologist; thus, a diagnosis of metastatic AGCT can be difficult to make, especially when the specimen cellularity is inadequate for ancillary studies. 2,8 A few case series and reports describing cytologic diagnoses of metastatic AGCT have been published, where the latter has been found in sites such as the liver, lungs, bone, omentum, bowel, bladder, spleen, kidney, pleural and ascitic fluids, and lymph nodes. 2,7-11 A recent case series has effectively summarized the published literature on metastatic AGCT diagnosed with cytology. 2 A diagnosis of metastatic AGCT can be suspected when the cytomorphology is classic, including the presence of Call-Exner bodies and nuclei with grooves. 2 Immunochemistry can confirm this suspicion because AGCT cells will most likely be positive for α-inhibin, calretinin, and CD56. 12 Notably, the detection of the FOXL2 mutation (missense point mutation; 402C→G), by either immunohistochemistry or sequencing, is an accurate diagnostic biomarker and pathognomonic for AGCT. 
Furthermore, FOXL2 immunohistochemistry is more sensitive than α-inhibin and calretinin, besides being highly specific to highlight the presence of AGCT. 1,13,14 For our patient, we formed our differential diagnosis list based on the location of the mass inside the stomach wall and the low-grade cytomorphology of the neoplasm. Low-grade lesions growing in the submucosa/muscularis can include gastric neuroendocrine tumors (NET), gastrointestinal stromal tumors (GIST), leiomyomas, and schwannomas. Studies have shown that NET are composed of cells with "salt and pepper" nuclei, albeit without grooves, that are positive for chromogranin and synaptophysin with immunohistochemistry. 15 Whereas GIST are positive for DOG1 and CD117, leiomyomas and schwannomas exhibit spindle-shaped morphology and immunopositivity for desmin and S100, respectively. 16 Conclusion In conclusion, AGCT are malignant ovarian neoplasms with indolent behavior, yet they have an unpredictable malignant potential that prompts their long-term follow-up. The combination of clinical presentation, radiology, cytomorphology, and immunohistochemistry can facilitate the diagnosis of metastatic AGCT and the management of such patients.
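The immunohistochemical reasoning laid out in the Discussion can be summarized as a small marker-profile lookup; the sketch below is a simplification for illustration, not a diagnostic algorithm. It only restates the marker associations mentioned in the text and scores each entity in the differential against a reported staining pattern; real work-up weighs morphology, clinical context and additional markers such as FOXL2 for AGCT.

# Illustrative summary of the marker profiles discussed above for a low-grade
# lesion in the gastric wall. Profiles restate associations given in the text.
MARKER_PROFILES = {
    "Adult granulosa cell tumor": {"alpha-inhibin", "calretinin", "CD56"},
    "Neuroendocrine tumor":       {"chromogranin", "synaptophysin"},
    "GIST":                       {"DOG1", "CD117"},
    "Leiomyoma":                  {"desmin"},
    "Schwannoma":                 {"S100"},
}

def rank_entities(positive_markers, negative_markers):
    """Score each entity by markers that fit versus markers that argue against it."""
    scores = {}
    for entity, profile in MARKER_PROFILES.items():
        support = len(profile & positive_markers)
        against = len(profile & negative_markers)
        scores[entity] = support - against
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# The staining pattern reported for the present case:
positive = {"alpha-inhibin", "calretinin", "CD56"}
negative = {"chromogranin", "synaptophysin", "CD117", "DOG1"}
for entity, score in rank_entities(positive, negative):
    print(f"{entity}: score {score}")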
2022-04-09T06:17:37.223Z
2022-04-08T00:00:00.000
{ "year": 2022, "sha1": "3cff74efb7dda2087292527094d0473a7d988788", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1093/labmed/lmac024", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "559217e86eaebe046f10b991b629d483f1a30f91", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
211146080
pes2o/s2orc
v3-fos-license
The Picard group of the universal moduli stack of principal bundles on pointed smooth curves II In this paper, which is a sequel of arXiv:2002.07494, we investigate, for any reductive group $G$ over an algebraically closed field $k$, the Picard group of the universal moduli stack $\mathrm{Bun}_{G,g,n}$ of $G$-bundles over $n$-pointed smooth projective curves of genus $g$. In particular: we give new functorial presentations of the Picard group of $\mathrm{Bun}_{G,g,n}$; we study the restriction homomorphism onto the Picard group of the moduli stack of principal $G$-bundles over a fixed smooth curve; we determine the Picard group of the rigidification of $\mathrm{Bun}_{G,g,n}$ by the center of $G$ as well as the image of the obstruction homomorphism of the associated gerbe. As a consequence, we compute the divisor class group of the moduli space of semistable $G$-bundles over $n$-pointed smooth projective curves of genus $g$. Introduction The aim of this paper, which is a sequel of the paper [FVc], is to study the Picard group of the universal moduli stack of (principal) G-bundles Bun G,g,n , which parametrizes G-bundles, where G is a connected and smooth linear algebraic group k = k, over families of (connected, smooth and projective) k-curves of genus g ≥ 0 endowed with n ≥ 0 pairwise disjoint ordered sections. We refer the reader to [FVc] for the motivation behind this investigation as well as for its relationship with previous results in the literature. Recall (see Theorem 3.0.1) that the stack Bun G,g,n is an algebraic stack, locally of finite type and smooth over the moduli stack M g,n of n-marked curves of genus g and its connected components (which are integral and smooth over k) are in functorial bijection with the fundamental For a geometric interpretation of the weight homomorphism wt δ G and of the obstruction homomorphism obs δ G , see §5. In particular, Im(obs δ G ) ∼ = coker(wt δ G ) is an obstruction to the triviality of the Z (G)-gerbe ν δ G . Theorem D. (see Theorem 5.0.7) Assume that g ≥ 1. Let G be a reductive group and fix δ ∈ π 1 (G). For a definition of the homomorphism ev δ D(G) , see §2.2. In Section 7, we compute coker(ev δ D(G) ) for all reductive groups G such that g ss is a simple Lie algebra, together with its quotient coker( ev δ D(G) ) (see Definition/Lemma 2.2.8(ii)), which is an obstruction to the triviality of the Z (G)-gerbe ν δ G (C) : Bun δ G (C) → Bun δ G (C) := Bun δ G (C) Z (G), for any (C, p 1 , . . . , p n ) ∈ M g,n (k), as shown by Biswas-Hoffmann [BH12], see Theorem 5.0.1. The computation of the image of the obstruction homomoprhism obs δ G carried over in Theorem D will be a crucial ingredient in our upcoming work [FVa], where we will compute the (cohomological) Brauer groups of Bun δ 1.0.2. A curve is a connected, smooth and projective scheme of dimension one over k. The genus of a curve C is g(C) := dim H 0 (C, ω C ). A family of curves π : C → S is a proper and flat morphism of stacks whose geometric fibers are curves. If all the geometric fibers of π have the same genus g, then we say that π : C → S is a family of curves of genus g (or a family of curves with relative genus g) and we set g(C/S) := g. We will denote by ω π the relative canonical line bundle of π. Note that any family of curves π : C → S with S connected is a family of genus g curves for some g ≥ 0. 1.0.3. Given two integers g, n ≥ 0, we will denote by M g,n the stack (over k) whose fiber over a scheme S is the groupoid of families (π : C → S, σ = {σ 1 , . . . 
, σ n }) of n-pointed curves of genus g over S, i.e. π : C → S is a family of curves of genus g and {σ 1 , . . . , σ n } are (ordered) sections of π that are fiberwise disjoint. It is well known that the stack M g,n is an irreducible algebraic stack, smooth and separated over k, and of dimension 3g − 3 + n. Moreover, M g,n is a DM(=Deligne-Mumford) stack if and only if 3g − 3 + n > 0. We will denote by (π g,n = π : C g,n → M g,n , σ) the universal n-pointed curve over M g,n . 1.0.4. A linear algebraic group over k is a group scheme of finite type over k that can be realized as a closed algebraic subgroup of GL n , or equivalently it is an affine group scheme of finite type over k. We will be dealing almost always with linear algebraic groups that are smooth (which is always the case if char(k) = 0) and connected. Given a linear algebraic group G, a principal G-bundle over an algebraic stack S is a G-torsor over S, where G acts on the right. 1.0.5. In the paper, we introduce several groups and morphisms. To help the reader, we make a table of the main objects together with a reference to their definitions. Preliminaries 2.1. Reductive groups. In this subsection, we will collect some result on the structure of reductive groups, that will be used in what follows. A reductive group (over k = k) is a smooth and connected linear algebraic group (over k) which does not contain non-trivial connected normal unipotent algebraic subgroups. To any reductive group G, we can associate a cross-like diagram of reductive groups In the above diagram, the horizontal and vertical lines are short exact sequences of reductive groups, the morphisms D(G) ։ G ss and R(G) ։ G ab are central isogenies of, respectively, semisimple groups and tori with the same kernel which is equal to the finite multiplicative (algebraic) group Since the two semisimple groups D(G) and G ss are isogenous, they share the same simplyconnected cover, that we will denote by G sc , and the same adjoint quotient, that we will denote by G ad . Hence we have the following tower of central isogenies of semisimple groups The Lie algebra g of G splits as (2.1.4) g = g ab ⊕ g ss , where g ab is the abelian Lie algebra of the tori R(G) and G ab , whose dimension is called the abelian rank of G, and g ss is the semisimple Lie algebra of each of the semisimple groups in (2.1.3), whose rank is called the semisimple rank of G. The semisimple Lie algebra g ss decomposes as a direct sum of simple Lie algebras of classical type (i.e. type A n , B n , C n , D n , E 6 , E 7 , E 8 , F 4 or G 2 ). If G is a semisimple group such that its Lie algebra g = g ss is simple, then G is said to be almost-simple. Remark 2.1.1. It follows from the universal property of the maximal abelian quotient G ab and from the universal property of the universal cover G sc that the morphisms are covariantly functorial with respect to homomorphisms of reductive groups. On the other hand, the morphisms are not functorial with respect to arbitrary homomorphisms of reductive groups, e.g. the inclusion of a maximal torus T ֒→ G does not factor, in general, through R(G) or, equivalently, does not map to zero in G ss or G ad . Recall now that all maximal tori of G are conjugate and let us fix one such maximal torus, that we call T G . 
We will denote by B G a Borel subgroup of G that contains T G and by N (T G ) the normalizer of T G in G, so that (2.1.5) The maximal torus T G induces compatible maximal tori of every semisimple group appearing in (2.1.3), that we will call, respectively, T G sc , T D(G) , T G ss and T G ad . These tori fit into the following commutative diagram: , where the horizontal line is the canonical decomposition of Λ * (Z (G)) into its torsion subgroup and torsion-free quotient, the vertical line is exact, the lower left diagonal arrow is a finite inclusion of lattices, the upper right diagonal arrow is an inclusion of finite abelian groups. 2.2. Integral bilinear (even) symmetric forms on Λ(T G ). In this subsection, we will prove some results on (W G -invariant) integral bilinear (even) symmetric forms on the lattice Λ(T G ). Given a lattice Λ of rank r (i.e. Λ ∼ = Z r ), we denote the lattice of integral bilinear (resp. even) symmetric forms on Λ by Given b ∈ Bil s (Λ), we will denote by b Q : The above lattices (2.2.1), which are contravariantly functorial with respect to morphisms of lattices, can be described in terms of the dual lattice Λ * := Hom(Λ, Z) in the following way. Consider the following lattices (of rank r+1 2 ) where (Λ * ⊗ Λ * ) s is the subspace of symmetric tensors of Λ * ⊗ Λ * , i.e. tensors that are invariant under the involution χ ⊗ µ → µ ⊗ χ. We will denote the elements of Sym 2 (Λ * ) by χ ·µ : The lattices in (2.2.1) are isomorphic to the ones in (2.2.2) via the following isomorphisms In terms of the isomorphisms (2.2.3), the inclusion Bil s,ev (Λ) ⊂ Bil s (Λ) corresponds to the injective morphism Using the above basis, it follows that (2.2.6) coker(ψ) = (Z/2Z) r . We now come back to the setting of §2.1. Let G be a reductive group with maximal torus T G ⊂ G and consider the natural action of the Weyl group W G on Λ(T G ) and on Λ * (T G ). We now want to describe the lattices Proposition 2.2.1. With the above notation, we have an exact sequence of lattices is integral. Moreover, the exact sequence (2.2.7) is contravariant with respect to homomorphisms of reductive groups φ : H → G such that φ(T H ) ⊆ T G . The above notation Bil s,(ev) means that the result applies by putting Bil s everywhere or by putting Bil s,ev everywhere. In the proof of the above proposition, we will use the following Lemma 2.2.2. Let G be a reductive group with maximal torus T G ⊂ G and consider the natural action of the Weyl group W G on Λ(T G ) and on Λ * (T G ). Then we have isomorphisms Proof. The second isomorphism is proved in [FVc, Lemma 2.1.1]. The proof of the first isomorphism is similar. Proof of Proposition 2.2.1. Clearly, the morphism B * ab is injective and res D •B * ab = 0. In order to complete the proof, we will need the following and x is fixed by W G (because the action of W G is trivial on Λ(R(G))). Hence, Lemma 2.2.2 implies that b(x ⊗ −) is the pull-back of an integral functional on Λ(G ab ), or in other words that b(x ⊗ −) |Λ(T D(G) ) ≡ 0. Since this is true for any x ∈ Λ(R(G)), we get that b |Λ(R(G))⊗Λ(T D(G) ) ≡ 0. The last assertion follows from the fact that Λ(R(G)) Q = Λ(T G ) ab Q and Λ(T D(G) ) Q = Λ(T G ) ss Q . We now go back to proof of the Proposition. Let us first prove that the sequence (2.2.7) is exact in the middle, i.e. ker(res D ) ⊆ Im(B * ab ). Consider a form b ∈ Bil s,(ev) (Λ(T G )) W G such that res D (b) = 0. 
This assumption, together with the above claim, implies that b |Λ(T D(G) )⊗Λ(T G ) ≡ 0, which implies that b is the pull-back of an integral bilinear (resp. even) symmetric form on Λ(G ab ). Let us now prove that the morphism res D is well-defined, i.e. that res D (b) is integral on Λ(T D(G) ) ⊗ Λ(T G ss ) for every b ∈ Bil s,(ev) (Λ(T G )) W G . For any element y ∈ Λ(T D(G) ), consider the integral functional b(y ⊗ −) : Λ(T G ) → Z. By the claim, we have that b(y ⊗ −) |Λ(T R(G) ) ≡ 0, which implies that b(y ⊗ −) is the restriction of an integral functional on Λ(T G ss ). Since this is true for any y ∈ Λ(T D(G) ), we deduce that res D (b) is integral on Λ(T D(G) ) ⊗ Λ(T G ss ). In order to show that the sequence (2.2.7) is exact, it remains to prove that res D is surjective. Now consider the W G -equivariant short exact sequence of lattices (2.2.9) where Hom s,(ev) (Λ(T D(G) )⊗Λ(T G )+Λ(T G )⊗Λ(T D(G) ), Z) is the lattice of (resp. even) symmetric integral forms on Λ( and the last group is zero since W G is a finite group and Bil s,(ev) (Λ(G ab )) is torsion-free. By taking the long exact sequence in W G -cohomology associated to the exact sequence (2.2.9) and using the vanishing H 1 (W G , Bil s,(ev) (Λ(G ab )) = 0, we get a surjection Hence, the form b of (2.2.8) is the restriction of a form b ∈ Bil s,(ev) (Λ(T G )) W G . By construction we have that res D (b) = b, which concludes the proof of the surjectivity of res D . Finally, the (contravariant) functoriality of the exact sequence (2.2.7) follows from the fact that Bil s,(ev) (Λ(T G )) W G is functorial by [BH10a, Lemma 4.3.1] (and the discussion that follows) while Bil s,(ev) (Λ(G ab )) and the morphism B * ab are functorial by Remark 2.1.3. By combining the two exact sequences of Proposition 2.2.1, we get a new exact sequence Corollary 2.2.3. With the above notation, we have an exact sequence of lattices Moreover, the exact sequence (2.2.11) is contravariant with respect to homomorphisms of reductive groups φ : Proof. Consider the commutative diagram of abelian groups where the rows are exact and the columns are the obvious inclusions. The bottom row is the non-even version the exact sequence (2.2.7). By definition, we have that res −1 D (Bil s,ev (Λ(T D(G) )|Λ(T G ss )) W G ) = Bil s,D−ev (Λ(T G )) W G , and hence the top row is the required sequence (2.2.11). Corollary 2.2.4. The ranks of Bil s,(ev) (Λ(T G )) W G and of Bil s,D−ev (Λ(T G )) W G are equal to We conclude observing that we have finite index inclusions and that the last lattice has rank equal to the number of simple factors of g ss (see e.g. [FVc, Lemma 2.2.1]). We now define evaluation homomorphisms from the exact sequence (2.2.11) onto the vertical exact sequence in (2.1.17). Note that the notation in the above Definition/Lemma is coherent since if G is a torus then ev δ G = ev δ ab G ab and if G is semisimple then ev δ G = ev δ D(G) . In order to prove that the last two evaluation homomorphisms are well-defined, we will need the following Proof of Definition/Lemma 2.2.5. The fact that the evaluation homomorphism ev δ G (resp. ev δ D(G) ) is well-defined follows from the fact that any two lifts of δ (resp. of δ ss ) differ by an element e ∈ Λ(T G sc ), together with Lemma 2.2.6 which implies that b Q (e ⊗ −) is integral on Λ(T G ad ). The commutativity of the left square follows from the fact that if b ∈ Bil s (Λ(G ab )) then where we have used that any lift d ∈ Λ(T G ) of δ satisfies Λ ab (d) = δ ab . 
Next, observe that, by (2.1.10), any lift d ∈ Λ(T G ) of δ ∈ π 1 (G) decomposes as d = δ ab + d ss , where d ss := p 2 (d) ∈ Λ(T G ss ) has the property that its class in π 1 (G ss ) coincides with δ ss . Therefore, the commutativity of the right square follows since for any b ∈ Bil s,D−ev (Λ(T G )) W G we have that , where we have used that b Q (δ ab ⊗ −) |Λ(T D(G) ) = 0 by the claim in the proof of Proposition 2.2.1. Remark 2.2.7. The last evaluation homomorphism ev δ D(G) can be compared with the evaluation homomorphisms of the semisimple groups D(G) and G ss in the following way. First all, note that we have injective restriction homomorphisms Then we have that: (1) For any δ ∈ π 1 (G), the following diagram is commutative where δ ss is the image of δ in π 1 (G ss ). (2) For any δ ∈ π 1 (G) which is the image of a (necessarily unique) element δ D ∈ π 1 (D(G)) (which happens precisely when δ is a torsion element of π 1 (G), see (2.1.15)), then we have the following commutative diagram The homomorphism ev δ D(G) of Definition/Lemma 2.2.5 can be extended to a slightly larger lattice, as we now show. Definition/Lemma 2.2.8. Fix the same notation as above. Consider the lattice has an elementary 2-abelian cokernel. (ii) For any δ ∈ π 1 (G), the evaluation homomorphism ev δ where d ss ∈ Λ(T G ss ) is any lifting of δ ss ∈ π 1 (G ss ). Part (ii): the fact that ev δ G is well-defined follows from the fact that any two lifts of δ ss differ by an element e ∈ Λ(T G sc ), together with Lemma 2.2.6 which implies that b Q (e ⊗ −) is integral on Λ(T G ad ). The fact that ev δ D(G) • r G = ev δ D(G) is obvious from the definitions. 3. The universal moduli stack Bun G,g,n and its Picard group Let G be a reductive group over k = k. We denote by Bun G,g,n the universal moduli stack of G-bundles over n-marked curves of genus g. More precisely, for any scheme S, Bun G,g,n (S) is the groupoid of triples (C → S, σ, E), where (π : C → S, σ = {σ 1 , . . . , σ n }) is a family of n-pointed curves of genus g over S and E is a G-bundle on C. We will denote by (π : C G,g,n → Bun G,g,n , σ, E) the universal family of G-bundles. By definition, we have a forgetful surjective morphism onto the moduli stack M g,n of n-marked curves of genus g. Note that the universal n-marked curve (C G,g,n → Bun G,g,n , σ) over Bun G,g,n is the pull-back of the universal n-marked curve (C g,n → M g,n , σ) over M g,n . Any morphism of reductive groups φ : G → H determines a morphism of stacks over M g,n The fiber of Φ G,g,n over a n-pointed curve (C, p 1 , . . . , p n ) ∈ M g,n (k) is equal to the k-stack Bun G (C) of G-bundles on C, i.e. the stack over k whose S-points Bun G (C)(S) is the groupoid of G-bundles on C S := C × k S for any k-scheme S. For any morphism of reductive groups φ : G → H, the restriction of the morphism φ #,g,n to the fiber over (C, p 1 , . . . , p n ) ∈ M g,n (k) gives rise to a morphism φ # (C) : Bun G (C) → Bun H (C). We collect in the following theorem the geometric properties of Bun G,g,n and of the forgetful morphism Φ G,g,n . Theorem 3.0.1. Let G be a reductive group. (1) The morphism Φ G,g,n is locally of finite presentation, smooth, with affine and finitely presented relative diagonal. (2) There is a functorial decomposition into connected components Similarly, the fiber Bun G (C) of Φ G,g,n over (C, p 1 , . . . 
, p n ) ∈ M g,n (k) admits a functorial decomposition into connected components (3) For each δ ∈ π 1 (G), the stack Bun δ G,g,n is smooth and integral of relative dimension over M g,n equal to (g − 1) dim G. (4) Φ δ G : Bun δ G,g,n → M g,n is of finite type (or equivalently quasi-compact) for any (or equivalently for some) δ ∈ π 1 (G) if and only if G is a torus. (5) For any δ ∈ π 1 (G) the morphism Φ δ G : Bun δ G,g,n → M g,n is fpqc (i.e. faithfully flat and locally quasi-compact) and cohomologically flat in degree zero (i.e. the natural morphism 3.1. The Picard group of Bun δ G,g,n . The aim of this subsection is to recall the results on the Picard group of Bun δ G,g,n obtained in [FVc]. We will focus on the case g ≥ 1; the case g = 0 is easier to dealt with and it is completely described in [FVc, Thm. D]. Note that the Picard group of M g,n is well-known up to torsion (and completely known if char(k) = 2 by [FVb]) and the pull-back morphism G is fpqc and cohomologically flat in degree zero by Theorem 3.0.1(5). Therefore, we can focus our attention onto the relative Picard group ). A first source of line bundles on Bun G,g,n comes from the determinant of cohomology d π (−) and the Deligne pairing −, − π of line bundles on the universal curve π : C G,g,n → Bun G,g,n (see [ACG11, Chap. XIII, Sec. 4, 5] for the definition and main properties of d π (−) and −, − π ). To be more precise, any character χ : G → G m ∈ Λ * (G) := Hom(G, G m ) gives rise to a morphism of stacks χ # : Bun G,g,n → Bun Gm,g,n and, by pulling back via χ # the universal G m -bundle (i.e. line bundle) on the universal curve over Bun Gm,g,n , we get a line bundle L χ on C G,g,n . Then, using these line bundles L χ and the sections σ 1 , . . . , σ n of π, we define the following two types of line bundles, that we call tautological line bundles, on Bun G,g,n (and hence, by restriction, also on Bun δ G,g,n ) From the standard relations between the Deligne pairing and the determinant of cohomology, we deduce that See [FVc,Sec. 3.5] for more details. In the case of a torus G = T , the relative Picard group RPic(Bun δ T,g,n ) is generated by tautological line bundles and the following theorem also clarifies the dependence relations among the tautological line bundles. Theorem 3.1.1. [FVc, Thm. B] Assume that g ≥ 1. Let T be an algebraic torus and let d ∈ π 1 (T ). The relative Picard group RPic(Bun d T,g,n ) is a free abelian group of finite rank generated by the tautological line bundles and sitting in the following exact sequences where τ d T (called transgression map) and σ d T are defined by for any χ ∈ Λ * (T ) and ζ ∈ Z n , and ρ d T is the unique homomorphism such that for any χ ∈ Λ * (T ) and ζ ∈ Z n . Furthermore, the exact sequences (3.1.3) and (3.1.4) are contravariant with respect to homomorphisms of tori. Now consider the case of an arbitrary reductive group G. Note that any character of G factors through its maximal abelian quotient ab : G ։ G ab , i.e. the quotient of G by its derived subgroup. Hence, the tautological line bundles on Bun δ G,g,n are all pull-backs of line bundles via the morphism (induced by ab) ab # : Bun δ G,g,n → Bun δ ab G ab ,g,n where δ ab := π 1 (ab)(δ) ∈ π 1 (G ab ). Moreover, Theorem 3.1.1 implies that the subgroup of RPic(Bun δ G,g,n ) generated by the tautological line bundles coincides with the pull-back of RPic(Bun δ ab G ab ,g,n ) via ab # . 
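For the reader's convenience, we recall here the key instance of the "standard relations between the Deligne pairing and the determinant of cohomology" mentioned above; this is a classical identity (it can be found, for instance, in the references on the Deligne pairing cited above) and is recorded here only as an aid, not quoted from [FVc]: for line bundles $L$ and $M$ on the universal curve $\pi : \mathcal{C}_{G,g,n} \to \mathrm{Bun}_{G,g,n}$ one has
\[
\langle L, M \rangle_{\pi} \;\cong\; d_{\pi}(L \otimes M) \otimes d_{\pi}(L)^{-1} \otimes d_{\pi}(M)^{-1} \otimes d_{\pi}(\mathcal{O}),
\]
which in particular shows that, up to the classes $d_{\pi}(\mathcal{L}_{\chi})$ and $d_{\pi}(\mathcal{O})$, the Deligne pairings $\langle \mathcal{L}_{\chi}, \mathcal{L}_{\mu} \rangle_{\pi}$ are determined by determinants of cohomology.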
The next result says that, for an arbitrary reductive group G, the relative Picard group of Bun δ G,g,n is generated by the image of the pull-back ab * # together with the image of a functorial transgression map τ δ G (which coincides with the transgression map τ d T in Theorem 3.1.1 if G = T is a torus). Let G be a reductive group and let ab : G → G ab be its maximal abelian quotient. Choose a maximal torus ι : T G ֒→ G and let W G be the Weyl group of G. Fix δ ∈ π 1 (G) and denote by δ ab its image in π 1 (G ab ). (1) There exists a unique injective homomorphism (called transgression map for G) 1 (2) There is a push-out diagram of injective homomorphisms of abelian groups where Sym 2 Λ * ab is the homomorphism induced by the morphism of tori T G ι − → G ab − → G ab . Furthermore, the transgression homomorphism (3.1.5) and the diagram (3.1.6) are contravariant with respect to homomorphisms of reductive groups φ : Corollary 3.1.3. With the notation of Theorem 3.1.2, there is an exact sequence of lattices Furthermore, the above exact sequence (3.1.7) is contravariant with respect to homomorphisms of reductive groups φ : Proof. This follows from the push-out diagram (3.1.6) together with Proposition 2.2.1. For an analogue of the above exact sequence for g = 0, see (4.0.16). 3.2. Two alternative presentations of RPic(Bun δ G,g,n ). The aim of this subsection is to give two alternative presentations of RPic(Bun δ G,g,n ), for G a reductive group and g ≥ 1. We will freely use the notation §2.1 and §2.2 with respect to a fixed maximal tours ι : T G ֒→ G. The first presentation of RPic(Bun δ G,g,n ) is based on the following homomorphism. Definition/Lemma 3.2.1. Assume that g ≥ 1. Let G be a reductive group with maximal torus T G and Weyl group W G , and fix δ ∈ π 1 (G). There exists a well-defined homomorphism where the first isomorphism follows from (2.2.3), and the second injective homomorphism is the obvious inclusion. (ii) The composition γ δ G • ab * # is equal to the following composition where γ δ ab G ab is the unique homorphism such that Moreover, the homomorphism γ δ G is contravariant with respect to homomorphisms of reductive groups φ : Proof. The fact that γ δ ab G ab is a well-defined homomorphism has been shown in [FVc,Prop. 4 In order to show that there exists a unique homomorphism satisfying (i) and (ii), using that RPic(Bun δ G,g,n ) is the pushout (3.1.6), it is enough to show that Given χ, χ ′ ∈ Λ * (G ab ) and x, y ∈ Λ(T G ), we compute using the isomorphisms (2.2.3): . Hence, we conclude that the equality (3.2.1) holds and we are done. Finally, the (contravariant) functoriality of γ δ G follows from the functoriality of B * ab (see Corollary 2.2.3) and of α and γ δ ab G ab (which are obvious). Using the homomorphism γ δ G , we get the required new presentation of RPic(Bun δ G,g,n ). Theorem 3.2.2. Assume that g ≥ 1. Let G be a reductive group with maximal torus T G and Weyl group W G , and fix δ ∈ π 1 (G). Consider the following group There is an exact sequence where the morphism i δ G is defined as Moreover, the exact sequence (3.2.4) is contravariant with respect to homomorphisms of reductive groups φ : Proof. Consider the following diagram (3.2.6) Claim: The diagram (3.2.6) is commutative with exact rows. Indeed, the first row is exact by Corollary 3.1.3 while the second row is exact by Corollary 2.2.3. The commutativity of the left square follows from Definition/Lemma 3.2.1(ii). 
In order to prove the commutativity of the right square, using that RPic(Bun δ G,g,n ) is the pushout (3.1.6), it is enough to show that which, by Corollary 2.2.3 and Definition/Lemma 3.2.1(i), is equal to res D •γ δ G • τ δ G . By the above claim, we can apply the snake lemma to (3.2.6) and we obtain the two isomorphisms From the definition of γ δ ab G ab (see Definition/Lemma 3.2.1(ii)) together with (2.2.5), it follows that γ δ ab G ab is surjective. Therefore, the second isomorphism in (3.2.9) implies that also γ δ G is surjective. It remains to prove that the kernel morphism of γ δ G is equal to i δ G . Using the first isomorphism in (3.2.9) and the fact that i δ G = ab * # • i δ ab G ab by definition, it is enough to prove that (3.2.10) the kernel morphism of γ δ ab G ab is equal to i δ ab G ab . With the aim of proving (3.2.10), let us recall some results from [FVc]. Fix an isomorphism G ab ∼ = G r m which induces an isomorphism Λ * (G ab ) ∼ = Λ * (G r m ) = Z r . Denote by {e i } r i=1 the canonical basis of Z r and by {f j } n j=1 the canonical basis of Z n . By [FVc,Thm. 4.0.1(2)], the relative Picard group of Bun δ ab G ab ,g,n is freely generated by (e i , 0), (0, f j ) , for i = 1, . . . , r, and j = 1, . . . , n, L (e i , 0) = d π (L e i ), for i = 1, . . . , r. Take now an element M ∈ RPic(Bun δ ab G ab ,g,n ) and write it as for some unique a ij , b l ∈ Z, ζ k = (ζ k 1 , . . . , ζ k n ) ∈ Z n , with the property that a ii = 0 if g = 1. From the definition of γ δ ab G ab (see Definition/Lemma 3.2.1(ii)), we compute Hence, we have that In other words, M belongs to the kernel of γ δ ab G ab if and only if M has the following form where the second equality follows from [FVc,Rmk. 3.5.1]. This shows that i δ ab G ab is an injective homomorphism whose image is equal to the kernel of γ δ ab G ab , which proves (3.2.10). We now want to get a second presentation of RPic(Bun δ G,g,n ). With this aim, we introduce the following homomorphism. Definition/Lemma 3.2.3. Assume that g ≥ 1. Let G be a reductive group with maximal torus T G and Weyl group W G , and fix δ ∈ π 1 (G). There exists a well-defined (non functorial) homomorphism ω δ G : RPic(Bun δ G,g,n ) → Λ * (T G ) Λ * (T G ad ) uniquely determined by: 21 (i) The composition ω δ G • τ δ G is equal to the following composition where the first isomorphism is induced by (2.2.3) and ev δ G is the homomorphism in Definition/Lemma 2.2.5. (ii) The composition ω δ G • ab * # is equal to the following composition where |ζ| = i ζ i ∈ Z and similarly for |ζ ′ |, and Λ * ab is the homomorphism in (2.1.17). Remark 3.2.4. We remark that the homomorphism ω δ G is not equal to the composition where γ δ G is the homomorphism in Definition/Lemma 3.2.1 and τ δ G is the homomorphism in Definition/Lemma 2.2.5. More precisely, their compositions with the pull-back ab * # are different. By putting together the homomorphisms of Definition/Lemmas 3.2.1 and 3.2.3, we get the following homomorphism With the aim of describing its image, we give the following Definition 3.2.5. Let G be a reductive group with maximal torus T G and Weyl group W G , and fix δ ∈ π 1 (G). Denote by The group NS(Bun δ G,g,n ) is contravariant with respect to homomorphisms of reductive groups φ : H → G such that φ(T H ) ⊆ T G . Definition/Lemma 3.2.6. Let φ : H → G be a homomorphism of reductive groups, and choose maximal tori T G ⊆ G and T H ⊆ H in such a way that φ(T H ) ⊆ T G . Let ǫ ∈ π 1 (H) and set δ := π 1 (φ)(ǫ) ∈ π 1 (G). Pick a lift e ∈ Λ(T H ) of ǫ ∈ π 1 (H). 
Then there exists a well-defined homomorphism Moreover, if ψ : L → H is another homomorphism of reductive groups and we choose a maximal torus T L ⊆ L in such a way that ψ( Proof. Let us first consider the following two special cases: Special case I: φ : T ′ → T is a morphism of tori. Choose d ′ ∈ Λ(T ′ ) and set d : which is clearly a well-defined homomorphism. Moreover, the association φ → φ * ,NS is compatible with the composition of morphisms of tori. Special case II: φ = ι : T G → G is the inclusion of a maximal torus inside a reductive group G. Choose a lift d ∈ Λ(T G ) of δ ∈ π 1 (G). Pick an element ([χ], b) ∈ NS(Bun δ G,g,n ). Consider the following commutative diagram with surjective arrows (3.2.17) Λ * (T G ) The diagram (3.2.17) is a pull-back diagram since the kernels of the vertical surjections are both equal to Λ * (T G ad ) while the kernels of the horizontal surjections are both equal to Λ * (G ab ) and Λ * (T G ad ) ∩ Λ * (G ab ) = {0}. From this and condition (3.2.14), it follows that there exists a unique lift of [χ] ∈ Λ * (T G ) Λ * (T G ad ) , that we denote by χ b(d⊗−) ∈ Λ * (T G ), with the property that The definition (3.2.15) reduces in this special case to (3.2.19) ι * ,NS : NS(Bun δ G,g,n ) −→ NS(Bun d T G ,g,n ), which is a well-defined and injective homomorphism whose image is equal to We now go back to the general case. Denote the inclusions of the maximal tori in G and H by, respectively, ι G : T G ֒→ G and ι H : T H ֒→ H and set φ T := φ |T H : T H → T G . Set also d := Λ φ (e) ∈ Λ(T G ), which is a lift of δ ∈ π 1 (G). Consider the composition ) which is a well-defined homomorphism by the special cases already treated. Moreover, for any ([χ], b) ∈ NS(Bun δ G,g,n ) and for every x ∈ Λ(T D(H) ), we have that . From the expression (3.2.21), we conclude that φ * ,NS is given by the formula (3.2.15). Finally, the compatibility of the association φ → φ * ,NS with the composition of morphisms is due to the factorization (3.2.22) together with the Special Case I. The group NS(Bun δ G,g,n ) admits a functorial two-step filtration, that we describe in the following Proposition 3.2.7. Let G be a reductive group with maximal torus T G and Weyl group W G , and fix δ ∈ π 1 (G). We have the following commutative diagram, with exact rows and columns where the identification NS(Bun δ ab G ab ,g,n ) = Λ * (G ab )⊕ Bil s (Λ(G ab )) follows from Definition 3.2.5, i 1 is the inclusion of the first factor and p 2 is the projection onto the second factor, the right vertical column is (2.2.11), res NS G is the projection onto the second factor and res NS D := res D • res NS G . Moreover, the diagram (3.2.23) is contravariant with respect to homomorphisms of reductive groups φ : H → G such that φ(T H ) ⊆ T G . Proof. The commutativity of the right square of the diagram is clear, while the commutativity of the left square follows from the fact ab * ,NS = Λ * ab ⊕ B * ab , as it is easily deduced from Definition/Lemma 3.2.6. The exactness of the left and central columns is clear, while the exactness of the right column follows from Corollary 2.2.3. The exactness of the upper row follows from the definition of NS(Bun δ G,g,n ) together with the exactness of the column in (2.1.17). Finally, the functoriality of the diagram (3.2.23) follows straightforwardly from the definition of the pull-back morphism (3.2.15). Using the above homomorphisms ω δ G and γ δ G , we can now give the following new presentation of RPic(Bun δ G,g,n ). Theorem 3.2.8. Assume that g ≥ 1. 
Let G be a reductive group with maximal torus T G and Weyl group W G , and fix δ ∈ π 1 (G). Consider the following group (1) There is an exact sequence Moreover, the exact sequence (3.2.24) is contravariant with respect to homomorphisms of reductive groups φ : Moreover, if n = 0 then Remark 3.2.9. (i) For n = 0, the subgroup on the right hand side (3.2.25) is well-defined. Indeed, by (3.2.14), is well-defined for any x ∈ Λ(T G ) and it is equal to χ(x ab ) − b(δ ⊗ x ab ) , where x ab is the image of x in Λ(G ab ). (ii) If either n = 0 or g = n = 1 then H g,n = 0, which implies that the map ω δ G ⊕ γ δ G is injective. Proof. The theorem has been proved for a torus in [FVc,Prop. 4.3.1], and we are going to apply this result for G ab in order to prove the case of a general reductive group G. Let us first prove (1) by dividing the proof in two steps. 25 Step I: The image of is contained in NS(Bun δ G,g,n ). In order to prove this, it is enough to show, using that RPic(Bun δ G,g,n ) is the pushout (3.1.6), that and Im(Λ * ab ⊕ B * ab ) ⊆ NS(Bun δ G,g,n ) as observed in Proposition 3.2.7. Take now an element b ∈ Sym 2 Λ * (T G ) W G ∼ = Bil s,ev (Λ(T G )) W G . Using Definition/Lemma 3.2.3(i) and Definition/Lemma 3.2.1(i), we compute From Definition 3.2.5, it follows easily that the element (b(δ ⊗ −), b) belongs to NS(Bun δ G,g,n ), and this proves the second containment in (3.2.26). Step II: The kernel of ω δ G ⊕ γ δ G is equal to j δ G : Λ * (G ab ) ⊗ H g,n ֒→ RPic(Bun δ G,g,n ). Consider the following diagram (3.2.29) RPic Bun δ ab G ab ,g,n res NS D / / / / Bil s,ev (Λ(T D(G) )|Λ(T G ss )) W G whose rows are exact by Corollary 3.1.3 and Proposition 3.2.7, and whose commutativity follows from (3.2.27) and (3.2.28). By applying the snake lemma to (3.2.29), we get that By applying [FVc,Prop. 4.3.1] to G ab , we get that the kernel of ω δ ab G ab ⊕ γ δ ab G ab is equal to j δ ab G ab : Λ * (G ab ) ⊗ H g,n ֒→ RPic(Bun δ ab G ab ,g,n ). By combining this with (3.2.30) and the fact that j δ G = ab * # • j δ ab G ab , Step II follows. Finally, the functoriality of the morphism ω δ G ⊕γ δ G (and hence of the sequence (3.2.24)) follows straightforwardly from (3.2.27) and (3.2.28). 26 Assume now that n = 0 and call I δ G the subgroup of NS(Bun δ G,g,n ) defined on the right hand side of (3.2.25). By [FVc,Prop. 4.3.1], we know that Im(ω δ ab G ab ⊕ γ δ ab G ab ) = I δ ab G ab . Using this and (3.2.31), in order to prove that Im(ω δ G ⊕ γ δ G ) = I δ G , it is enough to prove that Furthermore, using that RPic(Bun δ G,g,n ) is the pushout (3.1.6), the inclusions (3.2.32) are equivalent to the following two conditions , which shows (3.2.34) and concludes the first assertion of part (2). The last assertion follows from (3.2.31) and the analogous result for G ab , see [FVc,Rmk. 4.3.2]. Before doing this, we need to recall the description of Pic(Bun δ G (C)) obtained in [BH10a]. Theorem 4.0.1. [BH10a, Thm. 5.3.1] Let C be a (irreducible, projective, smooth) curve of genus g ≥ 0 over k = k and denote by J C its Jacobian. Let G be a reductive group over k and fix δ ∈ π 1 (G). Then there exists a (contravariantly) functorial exact sequence of abelian groups subject to the following compatibility conditions (a) for some (equivalently any) lift d ss ∈ Λ(T G ) of the image δ ss of δ in π 1 (G ss ), the direct sum is the canonical map given by the addition on the abelian variety J C . The (contravariant) functoriality of the exact sequence is described in [BH10a, Thm. 5.3.1(iv)]. 
Note that if T is a torus, the group in (iii) is trivial and the conditions (a) and (b) are always satisfied. In particular, the projection on the first two factors give an isomorphism of groups NS(Bun δ T (C)) ∼ = Λ * (T ) ⊕ Hom s (Λ(T ) ⊗ Λ(T ), End(J C )). and hence the elements (l T , b T , 0) of NS(Bun δ T (C)) can been seen as pairs (l T , b T ) ∈ Λ * (T ) ⊕ Hom s (Λ(T ) ⊗ Λ(T ), End(J C )). For the rest of the paper, we adopt the latter presentation for the Neron-Severi group NS(Bun δ T (C)) for the torus case. In general, the Neron-Severi group NS(Bun δ G (C)) can be described as follows. Proposition 4.0.2. [BH10a, Prop. 5.2.11] With the above notation, there is an exact sequence and the image of p is equal to where Bil s,sc−ev (Λ(T D(G) )|Λ(T G ss )) W G is defined in Definition/Lemma 2.2.8 and d ss is some (equivalently any) lift to Λ(T G ss ) of δ ss ∈ π 1 (G ss ). We now describe the restriction homomorphism (4.0.1), using the description of RPic(Bun δ G,g,n ) of Theorem 3.2.8 and the description of Pic(Bun δ G (C)) of Theorem 4.0.1. Theorem 4.0.3. Assume that g ≥ 1 and let (C, p 1 , . . . , p n ) ∈ M g,n (k) be a geometric point. Let G be a reductive group and fix δ ∈ π 1 (G). Then the restriction homomorphism (4.0.1) sits into the following functorial commutative diagram with exact rows ) ). Proof. The theorem has been proved for a torus in [FVc,Prop. 4.3.3]; and, in order to prove the case of a general reductive group G, we are going to use this result for G ab and for a (fixed) maximal torus ι : T G ֒→ G. Observe that the two rows of (4.0.4) are exact by Theorems 3.2.8 and 4.0.1. Moreover, the two outer vertical arrows are well-defined: for res δ G (C) o it is clear; for res δ G (C) NS it follows from: (i) given a lift d ∈ Λ(T G ) of δ ∈ π 1 (G), we can choose a representative χ b(d⊗−) ∈ Λ * (T G ) of [χ] such that (χ b(d⊗−) ) |Λ(T D(G) ) = b(d, −) |Λ(T D(G) ) by (3.2.18) and this implies that is integral on Λ(T G ), namely it is the restriction of the chosen χ : Λ(T G ) → Z; (ii) the orthogonal direct sum is integral on Λ(T G ) ⊗ Λ(T G ), since it is the restriction of id J C •b by the claim in the proof of Proposition 2.2.1. Hence it remains to prove the commutativity of the two squares in (4.0.4). In order to prove the commutativity of the left square in (4.0.4), consider the following diagram RPic(Bun δ G,g,n ) Hom(π 1 (G ab ), J C (k)) In the above diagram, every simple subdiagram commutes: the curved triangles commutes by the definition of j δ G and of res δ G (C) o ; the left square commutes by [FVc,Prop. 4.3.3] applied to G ab ; the right square commutes by the obvious functoriality of the restriction homomorphism; the lower triangle commutes by the functoriality of the morphism j δ G (see Theorem 4.0.1). By using all the above commutativity results, we deduce that res δ G (C) • j δ G = j δ G (C) • res δ G (C) o , i.e. that the left square of (4.0.4) commutes. In order to prove the commutativity of the right square in (4.0.4), choose a lift d ∈ Λ(T G ) of δ ∈ π 1 (G) and consider the following diagram (4.0.6) RPic(Bun δ G,g,n ) where ι * ,NS is the morphism of Definition/Lemma 3.2.6 and ι * ,NS (C) is the morphism of [BH10a, Def. 5.2.5], and they are given by the following formulas where d ss is the image of d ∈ Λ(T G ) in Λ(T G ss ), which is well-defined by conditions (a) and (b) of Theorem 4.0.1. 
We have the following commutativity properties in the above diagram (4.0.6): , which follows from the functoriality of the restriction homomorphism; and the two results coincide by the fact that b(d ⊗ −) |Λ(T G sc ) = b(d ss ⊗ −) |Λ(T G sc ) by the claim in the proof of Proposition 2.2.1, together with condition (ii). which follows from the functoriality of the homomorphism ω δ G ⊕ γ δ G applied to the inclusion ι : T G ֒→ G (see Theorem 3.2.8(1)). Using the above commutativity relations together with a diagram chase in (4.0.6), we conclude that . Since ι * ,NS (C) is injective (using that id J C is injective since g ≥ 1), we can simplify ι * ,NS (C) from the above expression, and we get , which is exactly the commutativity of the right square in (4.0.4), and this concludes the proof. Proof. This follows from the snake lemma applied to the exact sequence (4.0.4), using that res δ G (C) NS is injective and the explicit description of res δ G (C) o . We now give an alternative description of the restriction homomorphism onto the Neron-Severi group (4.0.7) res δ G (C) : RPic(Bun δ G,g,n ) − −−− → NS(Bun δ G (C)), for any (C, p 1 , . . . , p n ) ∈ M g,n (k), using the description of RPic(Bun δ G,g,n ) in Corollary 3.1.3 and the description of NS(Bun δ G (C)) in Proposition 4.0.2. Theorem 4.0.5. Assume that g ≥ 1 and let (C, p 1 , . . . , p n ) ∈ M g,n (k) be a geometric point. Let G be a reductive group and fix δ ∈ π 1 (G). The homomorphism (4.0.7) sits into the following functorial commutative diagram with exact rows Proof. The rows of (4.0.8) are exact by Corollary 3.1.3 and Proposition 4.0.2. We need to show the commutativity of the diagram (4.0.8). The commutativity of the left square of (4.0.8) follows from the functoriality of the restriction homomorphism res δ G (C) and the functoriality of the homomorphism c δ G (C) (see Theorem 4.0.1). Consider now the following diagram (4.0.9) RPic(Bun δ G,g,n ) The commutativity of the right square of (4.0.8) follows from the following commutativity results: (a) The left triangle commutes because of the definition (4.0.7) of res δ G (C) and of the commutativity of the diagram (4.0.4). (b) The upper square commutes, i.e. θ δ G = res NS D •(ω δ G ⊕ γ δ G ). In order to prove this, it is enough to show, using that RPic(Bun δ G,g,n ) is the pushout (3.1.6), that Condition (4.0.10) holds since θ δ G • ab * # = 0 by (3.1.7) and res NS D •(ω δ G ⊕ γ δ G ) • ab * # = 0 which follows from the fact that (ω δ G ⊕ γ δ G ) • ab * # = ab * ,NS • (ω δ ab G ab ⊕ γ δ ab G ab ) (by the functoriality of the morphism ω δ G ⊕ γ δ G , see Theorem 3.2.8(1)) together with (3.2.23). Condition (4.0.11) follows from )⊗Λ(T D(G) ) by Corollary 3.1.3. (c) The lower square commutes because, using the definitions of the maps involved, we have Corollary 4.0.6. Keep the notation of Theorem 4.0.5 and assume furthermore that id J C : Z ∼ = − → End(J C ) is an isomorphism. Then there exists a canonical short exact sequence (4.0.12) 0 → coker(ω δ ab G ab ⊕ γ δ ab G ab ) → coker(res δ G (C)) → coker(r G ) → 0. In particular, if n > 0 then we have the canonical isomorphism Note that if k is uncountable and C is very general in M g (k), then id J C : Z ∼ = − → End(J C ) is an isomorphism, as it follows easily from [Koi76]. 
By Theorem 4.0.3, the morphism res δ ab G ab (C)) admits the following factorization res δ ab G ab (C) : RPic(Bun δ ab G ab ,g,n ) By the assumption that id J C is an isomorphism together with the explicit description of res δ ab G ab (C) NS in Theorem 4.0.3, we deduce that res δ ab G ab (C) NS is an isomorphism. Hence, we get a canonical isomorphism (4.0.14) coker(ω δ ab G ab ⊕ γ δ ab G ab ) ∼ = − → coker(res δ ab G ab (C))). By combining the short exact sequence (4.0.13) with the isomorphism (4.0.14), we conclude. We end this section by describing the restriction homomorphism (4.0.1) in genus 0. Using Theorem 4.0.1, it follows that if n > 0 then we have (4.0.15) for some (equivalently any) lift d ss ∈ Λ(T G ) of the image δ ss of δ in π 1 (G ss ). Moreover, using Proposition 4.0.2, we have the following exact sequence for n > 0 (4.0.16) 5. The rigidification Bun δ G,g,n Z (G) and its Picard group Since the center Z (G) of a reductive group G acts functorially on any G-bundle, we have that Z (G) sits functorially inside the automorphism group of any S-point (C → S, σ, E) of Bun δ G,g,n for any δ ∈ π 1 (G). Hence we can form the rigidification (5.0.1) ν δ G : Bun δ G,g,n → Bun δ G,g,n Z (G) := Bun δ G,g,n , which turns out to be a Z (G)-gerbe, i.e. a gerbe banded by Z (G). The aim of this section is to study the Picard group of Bun δ G,g,n and the class of the Z (G)-gerbe ν δ G . From the Leray spectral sequence E 2 p,q = H p (Bun δ G,g,n , R q (ν δ G ) * (G m )) ⇒ H p+q (Bun δ G,g,n , G m ), and using that (ν δ G ) * (G m ) = G m and that R 1 (ν δ G ) * (G m ) is the constant sheaf Λ * (Z (G)) = Hom(Z (G), G m ), we get the exact sequence (5.0.2) where the first homomorphism is given by the canonical action of Z (G) on every G-gerbe, and the second homomorphism is induced by the functor of groupoids L S : Bun δ G,g,n (S) → {Line bundles on S} determined by L. The homomorphism obs δ G (called the obstruction homomorphism) has the following geometric interpretation: given any character λ ∈ Hom(Z (G), G m ), the element obs δ G (λ) is the class in H 2 (Bun δ G,g,n , G m ) of the G m -gerbe λ * (ν δ G ) obtained by pushing forward the Z (G)-gerbe ν δ G along λ. If we take the fiber of (5.0.1) over a geometric point (C, p 1 , . . . , p n ) ∈ M g,n (k), we get the Z (G)-gerbe The Leray spectral sequence for G m relative to the Z (G)-gerbe gives the exact sequence (analogously to (5.0.2)) The weight homomorphism wt δ G (C) and its cokernel, which coincides with the image of obs δ G (C) by (5.0.3), have been determined by Biswas-Hoffmann [BH12,Prop. 7.2], as we now recall (in a form which is slightly different from [BH12]). (1) The weight homomorphism wt δ G (C) factors as where c δ G (C) is the homomorphism of Theorem 4.0.1 and d ss ∈ Λ(T G ss ) is any lift of the image δ ss of δ in π 1 (G ss ). (2) The homomorphism Λ * D of (2.1.17) induces an isomorphism where the identification NS(Bun δ ab G ab ,g,n ) = Bil s (Λ(G ab )) follows from Definition 5.0.3, the morphisms ab * ,NS and res NS D are the restrictions of, respectively, the morphisms ab * ,NS and res NS We can now give an explicit description of RPic(Bun δ G,g,n ), which, via the morphism (ν δ G ) * , can be identified with ker(ω δ G ) ⊆ RPic(Bun δ G,g,n ), see (5.0.12). Theorem 5.0.5. Assume that g ≥ 1. Let G be a reductive group with maximal torus T G and Weyl group W G , and fix δ ∈ π 1 (G). (1) There is an exact sequence where j δ G and γ δ G are uniquely determined by (2) The image of γ δ G is equal to (5.0.16) Remark 5.0.6. 
For n = 0, the divisibility condition in the right hand side (5.0.16) depends only on the image Remark 5.0.9. Assume that g ≥ 1. Let G = T be a torus and let d ∈ Λ(T ). Then clearly coker(ev d D(T ) ) = 0, which implies that coker(ω d T ) = 0 if n > 0. On the other hand, if n = 0, then, using the explicit basis of RPic(Bun d T,g,n ) given in [FVc,Thm. 4.0.1(2)], it is possible to check that , where div(d) is the divisibility of d in the lattice Λ(T ), with the convention that coker(γ d T ) = {0} if g = 1 and d = 0 (when the above expression for coker(γ d T ) is not well-defined). We end this section by describing the relative Picard group of the rigidifcation Bun δ G,g,n and the cokernel of the weight homomorphism wt δ G , in genus g = 0. Remark 5.0.10. Assume that g = 0 and n ≥ 1. Using (4.0.15), it can be proved that the weight homomorphism wt δ G is equal to the following composition Therefore, using the exact sequences (4.0.16) and (2.1.17) and the fact that ω δ G ab is an isomorphism, it follows that (for some, or equivalently any, lift d ss ∈ Λ(T G ) of the image δ ss of δ in π 1 (G ss )): (i) the homomorphism θ δ G induces an isomorphism RPic(Bun δ G,0,n ) (ii) the cokernel of ω δ G (and hence of the weight homomorphism wt δ G ) is equal to the cokernel of the homomorphism b ∈ Bil s,ev (Λ(T G sc )) W G : 6. The universal moduli space M ss G,g,n and its divisor class group In this section, we will describe the divisor class group of the universal moduli space M δ,ss G,g,n of semistable G-bundles (over n-marked smooth curves of genus g) in terms of the Picard group of Bun G,g,n . Before presenting the results, we need some preparation. (i) Let P → C be a G-bundle over a k-curve C. We say that P is (semi)stable if for any reduction F to any parabolic subgroup P ⊆ G, we have deg(ad(F)) < where ad(F ) := (F × p)/P is the adjoint bundle of F , i.e. the vector bundle on C induced by F via the adjoint representation P → GL(p). We say that P is regularly stable, if either G is a torus or P is stable and Aut(P ) = Z (G). with irreducible fibers, we deduce that the restriction homomorphisms (6.0.4) are surjective and their kernels are freely generated by the irreducible divisors (Φ δ G ) −1 (D) and π((Φ δ G ) −1 (D)), respectively. With this in mind, the theorem follows by repeating the argument of the previous case. Examples The aim of this section is to make explicit the results of this paper for the reductive groups G such that the semisimple factor g ss of the Lie algebra g of G (see (2.1.4)) is simple. We will therefore distinguish several cases according to the type of the simple Lie algebra g ss . For each of these cases, we will first compute the lattices of W G -symmetric bilinear forms appearing in Proposition 2.2.1 and Definition/Lemma 2.2.8 (7.0.1) Bil s,ev (Λ(T D(G) )|Λ(T G ss )) W G r G ֒→ Bil s,sc−ev (Λ(T D(G) )|Λ(T G ss )) W G ⊆ Bil s,ev (Λ(T G sc )) W G , which have rank one by Corollary 2.2.4. Then we will compute, for any δ ∈ π 1 (G), the cokernels of the morphisms ev δ D(G) : Bil s,ev (Λ(T D(G) )|Λ(T G ss )) W G −→ Λ * (Z (D(G))) = Λ * (T D(G) ) Λ * (T G ad ) , ev δ D(G) : Bil s,sc−ev (Λ(T D(G) )|Λ(T G ss )) W G −→ Λ * (Z (D(G))) = Λ * (T D(G) ) Λ * (T G ad ) . appearing in Definition/Lemmas 2.2.5 and 2.2.8. Note that, since ev δ D(G) is the restriction of ev δ D(G) , there is a surjection coker(ev δ D(G) ) ։ coker( ev δ D(G) ), whose kernel is either trivial or isomorphic to Z/2Z by Definition/Lemma 2.2.8(i) and the fact that the lattices (7.0.1) have rank one. 7.1. 
Type A n−1 (n ≥ 2). Let us first recall some properties of the root system A n−1 . It follows that group P (A n−1 )/Q(A n−1 ) is cyclic of order n and it is generated by ω 1 . The Weyl group W (A n−1 ) of A n−1 is equal to S n and it acts on the above lattices by permuting the coordinates of V (A n−1 ) ⊂ R n . A semisimple group H which is almost-simple of type A n−1 is isomorphic to SL n /µ r , for some (unique) r ∈ N such that r|n. In particular, H sc = SL n and H ad = PSL n . By choosing the standard maximal tours T H of H consisting of diagonal matrices, we get the canonical identifications It follows that the fundamental group of SL n /µ r is equal to (7.1.2) π 1 (H) = Λ(T SL n /µr ) Λ(T SLn ) = n r ω 1 ∼ = Z/rZ, while the character group of the center Z (SL n /µ r ) = µ n /µ r ∼ = µ n/r is equal to (7.1.3) Λ * (Z (SL n /µ r )) = Λ * (T SLn /µr ) Λ * (T PSLn ) = rω 1 ∼ = Z/ n r Z, From now on, we will consider the following Set-up: Let G be a reductive group such that D(G) = SL n /µ r and G ss = SL n /µ s , with 1 ≤ r|s|n. Equivalently, G is the product of a torus and one of the following reductive groups (see [BLS98,(2 7.3. Type D l (with l ≥ 3). Let us first recall some properties of the root system D l . Consider the vector space R l endowed with the standard scalar product (−, −) and with the canonical bases {ǫ 1 , . . . , ǫ l }. We will freely identify R l with its dual vector space by mean of the (restriction of the) standard scalar product (−, −). The root (resp. coroot) lattices and the weight (resp. coweight) lattices of D l are given by We set ω l := ǫ 1 + . . . + ǫ l 2 and ω l−1 := ǫ 1 + . . . + ǫ l−1 − ǫ l 2 . if l is even. The Weyl group of D l is equal to W (D l ) = (Z/2Z) l−1 ⋊ S l = {(ξ = (ξ 1 , . . . , ξ l ), σ) ∈ (Z/2Z) l ⋊ S l : i ξ i = 1}, and it acts on the above lattices in such a way that S l permutes the coordinates of R l while (Z/2Z) l−1 ≤ (Z/2Z) l changes the signs of all the coordinates. A semisimple group H which is almost-simple of type D l is isomorphic to either the (simplyconnected) spin group Spin 2l , or the orthogonal group SO 2l , or the (adjoint) projective orthogonal group PSO 2l or, if l is even, to the one of the two semisimple groups Ω ±1 2l := Spin 2l / ω l− 1 2 ± 1 2 . Note that the two groups Ω ±1 2l are (abstractly) isomorphic; the isomorphism is induced by the automorphism of the Dynkin diagram D l that exchanges the last two nodes. By choosing the standard maximal tours T H of H consisting of diagonal matrices, we get the canonical identifications
2020-02-19T02:01:21.758Z
2020-02-18T00:00:00.000
{ "year": 2020, "sha1": "295b1ccf3ebace76adfbddf1c9b103d067d8eeb6", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2009.06274", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8179a6698678cd7c4f3b8a037afebed27b68bbdc", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
132247777
pes2o/s2orc
v3-fos-license
Assessment of sustainability effects in the context of specific applications

Selection of the in-depth case studies

In an initial investigation, possible nanotechnology application contexts were considered and qualitatively evaluated. Studies on the life cycle aspects of nanotechnology were also analyzed. So far, only a handful of life cycle assessments (LCAs) on nanotechnologies have been completed. A summary of the studies of life cycle aspects that were identified is provided.

Our goal in the selection process was to cover the spectrum of nanotechnological applications (a variety of manufacturing methods and basic nanoscale structures) as broadly as possible and address a diverse selection of research interests. With this in mind, selection of the case studies was made according to the following categories and associated criteria:

1. Type and scope of environmental impact
- Anticipated eco-efficiency potential (high - low)
- Potential for possible risks and/or toxicity (high - low)
2. Extent of market proximity
- State of development (already on market - still in long-term development)
- Market relevance (high - low)
- Potential application spectrum (wide - narrow)
3. Type of innovation
- Degree of innovation (small - large)
- Production turnover or volume (high - low)

In selecting the actual application contexts we chose to focus on specific issues/topics. Out of the entire spectrum of nanotechnological applications, four case studies with anticipated eco-efficiency potential were specifically selected. Integrated technological problem-solving innovations were our focus (cf. Kemp 1997, Huber 2004). The possible risks and potential dangers of nanotechnology applications, specifically the issue of nanoparticles, were also analyzed and addressed. As a result of these deliberations, five application contexts with corresponding goals for the in-depth case studies were selected. The project results of these case studies follow.

Table 9.
Overview of the case studies investigated

Application context: Eco-efficient nanocoatings
Goal: Presentation of the eco-efficiency potential of nanocoatings in the form of a comparative ecological profile (nanocoating based on sol-gel technology as compared with waterborne, solventborne, and powder coat industrial coatings)

Application context: Nanotechnological process innovation in styrene production
Goal: Presentation of the eco-efficiency potential of nanotechnology in catalytic applications in the form of a comparative ecological profile (nanotube catalyst as compared with iron oxide based catalysts)

Application context: Nanotechnological innovation in the video display field
Goal: Assessment of the eco-efficiency potential in video display development by means of a qualitative comparison (organic LED displays and nanotube field emitter displays as compared with CRT, liquid-crystal, and plasma screens)

Application context: Nano-applications in the lighting industry
Goal: Presentation of the eco-efficiency potential of nano-applications in the lighting industry in the form of a comparative ecological profile (white LEDs and quantum dots as compared with incandescent lamps and compact fluorescents)

Application context: Potential risks of nanotechnological applications
Goal: Discussion of possible risks and hazards using titanium dioxide as an example; less a consideration of environmental impact

stabilizers for UV protection. Pigments used in undercoatings and primers also serve to cover surface irregularities (filler). Additives are also used to improve application and performance properties. The solvents keep the solid components of the coating dissolved or dispersed and maintain the applicable consistency and working properties of each coating; during the drying phase they evaporate as volatile substances.

Environmental impact of paint and surface coatings

A 1995 emissions study looked at the twelve business sectors most important for the application of industrial surface coatings - not including mass production surface coating operations in the automotive industry (Mink & Rzepka 1995). According to the study, 335,000 tons of solvent were being emitted annually. In comparison, the emissions from surface coating operations in automobile production are quite low, perhaps on the order of 30,000 tons annually, corresponding roughly to the emissions from the automotive paint repair industry. The paint lines in the automobile industry already demonstrate a very high level of environmental awareness. If the automobile industry figures are included, annual emissions are close to 370,000 tons. This makes up roughly 40% of total solvent emissions, or 15% of VOC emissions.

Specific aspects of the investigation and its scope

The study looked specifically at the surface treatment of light-alloy parts such as those increasingly being employed in automobile manufacturing. The surface treatment of a 1 m² aluminum automobile part with various clear lacquers served as the functional reference unit. The system boundaries included the entire life cycle of the coating product, including pretreatment of the surface (see Fig. 13). The individual life cycle stages in this case are:
- Procurement of raw materials
- Manufacture of primary materials (binder, solvent, etc.)
- Manufacture of surface coating
- Surface pretreatment
- Application (surface coating operation)
- Utilization phase
- Disposal/recycling

For the comparative profile, it should be noted that the surface pretreatment processing step could only be dealt with on a qualitative basis due to gaps in the data; the last phase, disposal/recycling, is in every case assumed to be identical and was not considered. The most relevant environmental impact criteria were expected in the surface coating production stages, including manufacture of raw materials, pretreatment, and application, as well as in the utilization stage.

Selection of variants

Criteria for the selection of the variants:
- Differentiation by deployment of the basic material forms (binder, etc.)
- Differentiation according to the type of surface pretreatment required
- Differentiation according to the method of application

For the environmental significance of the surface coating systems being considered, the following influential parameters were ascertained:
- Composition of the surface coating
- Required surface pretreatment
- Coating thickness

Using these parameters it was possible to derive four variants that currently reflect the state of the technology. The so-called nanocoating is treated as the fifth variant; however, unlike the other four variants, it has not yet been implemented in automotive production surface coating applications.

Variants 1 and 2: Single- and dual-component clearcoats (conventional clearcoats)

Among the clearcoat finishes under consideration, the single-component clearcoat (1K CC) has been in use the longest. It therefore brings with it a great deal of working knowledge and a greater degree of development. The dual-component clearcoat (2K CC) is principally used in applications demanding a greater degree of quality and durability. In the automotive branch, both coatings are increasingly being replaced by waterborne and powder coatings, as they are no longer able to do justice to the high standards of the Technical Guidelines on Air Quality Control. The solvent percentage in both coating systems is roughly 50%. Various (synthetic) resins are used as binders in both systems. The percentage of additives, however, is relatively small and consists mainly of flow control agents and light stabilizers.

Variant 3: Waterborne clearcoat

Waterborne clearcoat is the most frequently utilized surface coating process in the automotive industry. The amount of solvent used in waterborne clearcoats is higher than in conventional lacquers, but the primary solvent is water (39.8%), which fully evaporates during the drying phase and is therefore harmless to the environment. The use of water rather than volatile solvents as the liquid component also makes a difference in primary energy demand: at 62.9 MJ/kg it lies far below that of the conventional and powder coatings.

Variant 4: Powder clearcoat

Unlike all other surface coatings, the powder coat process utilizes no solvents or other liquid carriers and is therefore considered to be particularly safe for the environment. Significant efforts are presently being made to utilize powder coating in more and more application areas. The lack of a liquid carrier also brings with it some disadvantages: for example, dip or immersion application is, of course, impossible.

Variant 5: Nanoparticle clearcoat from the firm NTC

The newly developed nanoparticle coating differs in many respects from conventional coatings. The process is fundamentally different.
Like traditional liquid clearcoats, it consists of a binder, a liquid carrier, fillers, and additives. The binder, however, does not have the usual organic structure but is instead a so-called inorganic-organic hybrid polymer. The nanoparticle coating is manufactured by means of the sol-gel process. The sol-gel process has existed for a long time; however, through increased research and development activities in recent years it has gained in importance, and it is viewed as an especially promising field of nanotechnology. The nanoparticle coating can be applied using customary methods. During the drying phase of the "sol" (the coating in its liquid phase), the particles suspended in it join together to form the so-called "gel." The material is heated to a temperature of 160 °C; the liquid carrier fully evaporates from the layer and the particles bind together to form a stable polymer network (Van Ooij et al. 2002). Significant advantages of this process include a thinner coating providing the same functionality and the elimination of the chromate pretreatment, which is no longer necessary.

Description of the life cycle stages and base data utilized

The study "Comprehensive Assessment of Powder Coat Technology as Compared to Other Surface Coating Technologies" by Harsch and Schuckert (1996) provided the base data for further investigation of the life cycle stages; this study provides a life cycle assessment of the powder coating process as compared to other industrial surface coating technologies.

Production and manufacture of raw materials (surface coating components)

Chromic acid: Chrome ore is the raw material from which chrome(VI)-containing products for chromating are manufactured. South Africa, with between 30% (1992) and 42% (1996) of the market, is the largest worldwide producer of chrome ore, followed by Kazakhstan, Turkey, and India. In 1996, the six largest producers yielded roughly 86% of the approximately 12 million metric tons that were extracted worldwide. Chrome ore is chiefly used in iron and steel metallurgy to produce stainless steel having special properties. In the chemical industry, chrome ore is used to produce numerous compounds for application in diverse areas; among these is chrome(VI) oxide (chromium trioxide, CrO3), which is used for chromating. In 1993, according to the U.S. Bureau of Mines, roughly 77% of the chrome used in the OECD (Organization for Economic Cooperation and Development) countries was used in metallurgy, 9% in the fireproofing industry, and 14% in the chemical industry. The market segment belonging to the chemical industry - which includes the manufacture of chromic acid - is shrinking due to environmental protection concerns.

Binders

A large number of organic and synthetic resins are utilized as binders for industrial surface coatings. These binder compounds are utilized in various proportions and ratios, as needed, in all industrial surface coatings.
- Acrylic resins are synthesized from various primary products
- Epoxy resins are manufactured by means of a condensation process and serve to produce an especially durable and stable coating
- Polyurethane resins are produced from polyether and polyester and have outstanding surface characteristics
- Polyester resins are condensation products resulting from saturated monomers
- Alkyd resins are produced from polyvalent alcohols and polycarboxylic acids
- Melamine resins are synthesized from the source materials melamine and formaldehyde

Solvent

Organic substances are the most common solvents.
Before application, they serve to maintain the coating in a liquid state. Following application, the solvent evaporates and the coating becomes solid. Commonly used solvents include:
Diacetone alcohol
N-Methylpyrrolidone (NMP)
Aromatics
Butyl acetate
Butyl diglycol acetate
N-butanol
Secondary butyl alcohol (SBA)
Butyl (poly)glycol
Pigments, fillers, and other additives are also used, in very minimal quantities, and were not included in further calculations.
Liquid coating systems
Manufacture of the coating is chiefly a process of mixing together the necessary components. Losses at this stage are minimal and can therefore be ignored. In a premixer, the binder, liquid carrier or solvent, pigment, and fillers are mixed together and then ground. Any remaining additives are then added to the mixture in a let-down tank. After filtering, the mixture is packaged.
Powder coat systems
In the manufacture of powder coatings, all ingredients are first carefully weighed out and then fed into an extruder. The mixture is repeatedly powdered in several stages and finally conveyed to a packaging facility. Losses in powder coating manufacture are greater than with liquid coatings and amount to roughly 2-5%.
Nanocoat system
The manufacture of a nanocoating varies not so much in the processes, but rather in the binder that is utilized. As a rule, the binders used in industrial coatings have an organic structure, but inorganic binders are also utilized in certain applications. Their advantage lies in their hardness and chemical durability. Because of their serious drawbacks, however, including difficult application and brittleness after hardening, these coating materials do not have an extensive field of application. In nanotechnology-based coatings, the so-called inorganic-organic hybrid polymers come into application. These new binders are a mixture of organic and inorganic binders and bring together numerous advantages from both types of binder. As the nanocoating cures, the inorganic particles begin to form a glasslike network with cross-linked organic elements. The fundamental chemical reaction behind the manufacture of nanocoatings is based on the sol-gel process. This process frequently utilizes silicon-organic compounds, the so-called silanes. The synthesis of these binders is achieved by the hydrolysis of alkoxysilanes; in this case study, the product Dynasylan® Glymo from Degussa was utilized. This part of the reaction leads to the formation of the inorganic part of the binder. At the same time, the formation of the organic part of the binder takes place: organic side chains on the silane compounds undergo reaction to form organic chains. As this inorganic-organic network forms, the coating hardens. The finish of the coating, however, remains as it was before hardening. The binder is available as a low-viscosity colloidal suspension whose particles have a diameter of 40-50 nm. The solvent, at this point, contains unreacted silane, silanol, and partially formed polysiloxane. This colloidal system is chemically in the so-called sol state; after application and hardening of the coating, it passes into the gel state. The entire process is therefore referred to as the sol-gel process (Wagner, n.d.).
Base data and assumptions: Summary life cycle assessment data from the study by Harsch and Schuckert (1996) provided the base data for the four existing surface coating technologies.
Since no quantified data exist for the primary materials in the nanocoating, specifically for the silane that is used, the data from the dual-component coating were also used for the nanocoating and multiplied by a "safety factor" of 1.5. The base data utilized for each 1 kg of applicable coating are provided in the appendix (Table 43).
Surface pretreatment
As a rule, before application of the surface coating, steel and aluminum components must receive a series of successive surface treatments to protect against corrosion. This is achieved through the application of a chemical conversion coating. Chromating is the usual corrosion prevention process, particularly for aluminum. The pretreatment occurs by means of a dip or spray process utilizing chrome(VI)-containing products. The properties of the resulting conversion layer on aluminum are dependent upon the temperature of the bath, the treatment time, and the solution chemistry of the chromate bath. An overview of the various bath chemistries is given in Table 12.
Phosphating
Phosphating is commonly used as a pretreatment for steel and iron, but it is an alternative to chromating for the pretreatment of aluminum as well. Treating aluminum with a dilute phosphoric acid solution produces a thin film of aluminum phosphate. Compared to chromating and anodizing, phosphating is certainly more expensive and more complicated, but it results in a more corrosion-resistant surface, which ensures a better bond for surface coatings. Phosphating solutions for aluminum generally have a composition similar to those used for treating steel. The main components include metal hydrogen phosphates, oxidizing agents, and complex fluorides (Brock et al. 1998).
Anodizing
The anodizing process is comparable to chromating in the number of baths and in complexity, but it offers significant advantages with respect to environmental impact (chrome-free, fluoride-free). The chrome(VI)- and fluoride-containing solutions used in chromating are replaced by conventional sulphuric acid. This leads to a much higher quality of water and waste discharge. Furthermore, the resulting conversion coating is more corrosion-resistant than in the chromate process. With nanocoatings, no extensive pretreatment is required; only the very first stage, a mild alkaline rinse, is necessary.
Base data and assumptions: Because appropriate, quantifiable data for the given pretreatment methods are not available, it is not possible to include this processing step in further environmental impact calculations. However, the comparison already makes clear that nanocoatings offer unique environmental advantages.
The surface-coating process (application)
Following pretreatment, the nano-clearcoat (the particular focus of this study) is applied in the same manner as a single-component clearcoat. The main difference from the other coating systems is that only a tenth of the usual coating material is needed. Because the application process for nanocoatings is otherwise the same, we rely heavily on Harsch and Schuckert (1996). The comparison of the various coating systems by application process is based on design data for new facilities of the same production capacity. The general steps of application are shown in Table 15. From this it can be seen that application generally consists of three steps: interior, exterior shell, and manual touch-up. The application process is influenced by certain variables that affect the quality and quantity of the output: the formulation of the coating, the coating system, the energy source, and the production capacity.
For the purposes of this study, average values for these variables were used. The application base data are available in the appendix (Table 44).
Use phase, disposal, and recycling
In the use phase, the various environmental impacts of the variants studied, resulting from the various coating quantities applied, are determined. A useful life of 200,000 km was assumed for the automobile. The consumption reduction rule used by Harsch and Schuckert (1996) was likewise utilized. The so-called consumption reduction rule says that, given the weight and consumption of a specific automobile type, a 10% reduction of weight will result in a 2.5-6% reduction in consumption. In the range between 90 and 100% of the original total weight, it is assumed that weight saved and fuel saved are proportional (a numerical sketch of this rule follows below). For the baseline consumption, an automobile type of average fuel consumption was selected from the GEMIS 4.1 database. It is assumed that the disposal/end-of-life phase will not significantly vary between the variants, and it is therefore not included in the assessment.
Life cycle inventory analysis
In the life cycle inventory analysis, the material and energy relationships between the coating systems being evaluated and the environment are noted, i.e. inputs from the environment and outputs into the environment are recorded. The goal of the life cycle inventory analysis is to establish a data inventory based on functional equivalents for the selected variants. Since quantitative data for the pretreatment are not available and the disposal/recycling life-cycle stage was not considered, the inventories for the various variants cover the coating process, including raw material production and application, as well as the use phase. The following tables depict the calculated inputs and outputs for the variants studied. The total primary energy consumption is primarily determined by the energy requirement of the application and includes not only the actual coating process but also energy expenditures for drying, etc. The differences between the variants studied are minimal here. The roughly 35% lower total primary energy consumption of the nanocoating is due to the reduced quantity of coating material. Moreover, during the use phase, the reduced mass also leads to savings in fuel consumption. With the VOC emissions, the advantages of the nanocoating are also quite evident, particularly in the manufacture and use life cycle phases. The VOC emissions of the nanocoating are ca. 65% lower than those of the other variants. The results for the other emission values in Table 19 are similar. The reduction in waste generation also reflects the environmental advantages of the nanocoating.
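To make the consumption reduction rule concrete, the following minimal Python sketch applies it to the coating-mass difference between a conventional clearcoat and a nanocoating over the assumed useful life of 200,000 km. The vehicle mass, baseline consumption, coating masses, and the proportionality factor of 0.4 are illustrative assumptions, not values from the study.

    # Minimal sketch of the consumption reduction rule (Python).
    # All inputs below are illustrative assumptions, not study data.
    VEHICLE_MASS_KG = 1200.0         # assumed average vehicle mass
    BASE_CONSUMPTION_L_100KM = 7.0   # assumed average fuel consumption
    LIFETIME_KM = 200_000            # useful life assumed in the study
    ELASTICITY = 0.4                 # rule: 10% less weight -> 2.5-6% less fuel;
                                     # 0.4 picks a mid-range proportionality factor

    def fuel_saved_litres(mass_saved_kg: float) -> float:
        """Lifetime fuel saved for a small mass reduction, assuming fuel saved
        is proportional to weight saved (the 90-100% weight range of the rule)."""
        weight_fraction = mass_saved_kg / VEHICLE_MASS_KG
        lifetime_fuel = BASE_CONSUMPTION_L_100KM * LIFETIME_KM / 100.0
        return ELASTICITY * weight_fraction * lifetime_fuel

    # Example: a nanocoating needing a tenth of a 3 kg conventional clearcoat.
    print(f"{fuel_saved_litres(3.0 - 0.3):.1f} L saved")  # ~12.6 L

With these assumed inputs, a few kilograms of avoided coating mass translate into roughly a dozen litres of fuel over the vehicle's life, which illustrates why the use phase contributes to the nanocoating's advantage.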
Life cycle impact assessment
To complete a life cycle impact assessment, it is necessary to have access to emissions data to which specific environmental impacts can be allocated. With the data available, it is only constructive to present an analysis for the impact categories greenhouse effect, acidification, and eutrophication.
Greenhouse effect
The energy in the sunlight that strikes the surface of the earth in the course of the day is stored as thermal energy and then released at night as infrared radiation. A part of this infrared radiation is absorbed by trace gases in the troposphere (0-10 km) and reflected back to earth. This natural greenhouse effect is essential; without it, the earth's surface would cool to inhospitable below-zero temperatures. The greenhouse effect we speak of as an environmental problem refers to the additional warming of the surface of the earth that is due to the increase in trace gases and the appearance of new greenhouse gases in the troposphere, for example HFCs (fluorocarbons). The most important greenhouse gases are carbon dioxide, methane, ozone, HFCs, and nitrous oxide; roughly 50% of these emissions arise from energy consumption, 20% from the chemical industry, 15% from agriculture, and another 15% from the destruction of rain forests. The greenhouse effect covers a wide range of effects that result from the warming of the earth's atmosphere. Among these are not only the rising mean sea level, but also the increase in extreme weather conditions such as hurricanes, storm floods, catastrophic droughts, etc. Changes in the composition and the range of flora and fauna are also already being observed.
Acidification
Acidification is a collective term referring to several effects. The phenomenon can primarily be traced back to sulfur dioxide and nitrogen oxide emissions from the burning of fossil fuels in power plants and, increasingly, in motorized transport. In addition, ammonia, hydrogen chloride, and hydrogen fluoride emissions also contribute to acid rain. Sulfur dioxide and nitrogen oxide emissions react with atmospheric oxygen and water to produce sulfuric and nitric acid. In the life cycle impact assessment, acidification potentials (AP) were generated using coefficients from a study by the Center for Environmental Science (CML) in the Netherlands (Heijungs 1992). Substances that only contribute to acid rain after oxidation (e.g. ammonia) or hydrolysis (e.g. SO2) are likewise included. In the CML models, only air emissions are considered; water emissions do not enter into the calculations.
Eutrophication
Eutrophication describes the excessive enrichment of water bodies with chemical nutrients. The anthropogenic contributions of nitrogen compounds (e.g. nitrates) from excessive applications of fertilizers, as well as phosphorus compounds (e.g. phosphates) from detergents or agricultural runoff, lead to overfertilization of waters. In addition to these two groups of compounds, the COD (chemical oxygen demand) is enlisted as a measure for calculating organic pollutants. A consequence of the excessive nutrient enrichment is the appearance of vast algae growths. Dying algae decompose under a high degree of oxygen consumption and therefore lead to a shortage of oxygen in the water body. Decomposition and decay processes are the result and produce toxic substances such as hydrogen sulfide, which in turn lead to fish die-offs. This means that the increased nutrients in our waters will do long-lasting and in part irreparable damage to a fragile ecological structure. For the life cycle impact assessment, the following factors were explored.
Quantitative analysis of the life cycle impact assessment
The greenhouse effect is addressed in the investigation through emissions of carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O); of these, the carbon dioxide emissions from the use of fossil fuel energy sources make up the greatest portion. Much as with total primary energy consumption, the nanocoating comes out roughly one third better than the other coating variants. Acidification is addressed through emissions of nitrogen oxides, sulfur dioxide, ammonia, hydrochloric acid, and hydrogen fluoride, whereby the last three substances play a lesser role in the scenarios being looked at (the characterization step itself is sketched below).
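For readers unfamiliar with the characterization step, the following minimal Python sketch shows how an emission inventory is aggregated into category indicators. The equivalence factors are typical CML-style literature values (CO2-, SO2-, and phosphate-equivalents) given for illustration only; they are not necessarily the coefficients used in this study, and the inventory shown is hypothetical.

    # Minimal sketch of the characterization step (Python); the equivalence
    # factors are illustrative CML-style literature values, and the
    # inventory below is hypothetical.
    GWP = {"CO2": 1.0, "CH4": 21.0, "N2O": 310.0}                       # kg CO2-eq per kg
    AP = {"SO2": 1.0, "NOx": 0.7, "NH3": 1.88, "HCl": 0.88, "HF": 1.6}  # kg SO2-eq per kg
    EP = {"NOx": 0.13, "NH3": 0.35, "COD": 0.022}                       # kg PO4-eq per kg

    def characterize(inventory, factors):
        # Multiply each emission (kg per functional unit) by its factor and sum.
        return sum(m * factors.get(s, 0.0) for s, m in inventory.items())

    inventory = {"CO2": 12.0, "CH4": 0.03, "NOx": 0.02, "SO2": 0.015}
    print("GWP:", characterize(inventory, GWP), "kg CO2-eq")
    print("AP: ", characterize(inventory, AP), "kg SO2-eq")
    print("EP: ", characterize(inventory, EP), "kg PO4-eq")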
In addition to the phosphorus compounds in detergents, nitrogen oxide emissions and the organic pollutants captured by the chemical oxygen demand (COD) also contribute to eutrophication. In our case, nitrogen oxide emissions are primarily responsible. In this impact category as well, the nanocoating makes its advantage clear, ranking 40% better than the other variants.
Qualitative aspects of the impact assessment
A generally accepted quantitative process for representing ecological and human toxicity in a life cycle analysis does not exist. Therefore, at this point we take a brief qualitative look at individual substances that demonstrate an impact on ecological and human toxicity. Owing to the methodology, no locally or temporally differentiated statements could be made in the impact assessment. Those substances that are of global significance beyond their source of emission should be recorded. Furthermore, substances without an effect threshold should be included. The goal should be to minimize or replace such substances to the extent possible. Carbon monoxide (classified as mutagenic) has long been a problem for atmospheric pollution, particularly emissions in the transport sector. The successful reductions in CO achieved through the use of catalysts have made possible a tremendous reduction in emissions, such that the Federal Environment Agency, in a publication on technological options for reducing the impact of transport, concluded that carbon monoxide no longer represents an air quality problem (UBA 1999).
Case-study Summary
This case study makes impressively clear that, in the area of surface coatings, the utilization of nanotechnology-based coatings offers very great eco-efficiency potentials with respect to all emissions and environmental considerations studied. Beyond this, the further advantages of simplified pretreatment were at least qualitatively shown. The minimal thickness necessary for such coatings leads to greater efficiency in the use of resources; advantages in the usage phase can particularly be expected in the transport sector in the course of the trend to lightweight construction. In addition to the automotive industry, this potential would have an even greater effect on the aviation and rail industries. A further potential for optimization can be found in the reduction of the solvent content in nanocoating applications.
Contents, goals and methods of the case study
Catalytic processes are among the earliest applications of nanotechnology. In principle, continuing gains in the efficiency and cost-effectiveness of catalytic processes can be achieved through the use of ever-smaller nanoparticles. Our understanding of catalysis is radically changing: the development of catalysts once relied upon empirical methods and values based on experience, but today the application of nanoanalytic methods, such as scanning tunneling microscopy (STM), makes it possible to reveal in detail the mechanisms of catalytic reactions. In the course of this case study, we investigate an example of the ecological impact of nanotechnology-based catalytic applications. As a specific example, we look at the application of a nanostructured catalyst utilizing nanotubes to the production of styrene. The ecological impact is evaluated by means of a comparative life cycle assessment (LCA) of the specific processing stages as well as the entire styrene product life cycle.
Investigation of the ecological aspects is made by means of comparisons to existing chemical processes for styrene synthesis.
Overview: Catalytic processes and nanotechnology
Catalysts are involved in the production of a great number of articles in everyday use. Catalysts are utilized in refining oil, setting free the energy in batteries and fuel cells, producing medicines and agrochemicals, plastics and paints, and in quite a number of environmental applications, for example the three-way catalytic converter in the automobile. The generally accepted definition of a catalyst comes from Wilhelm Ostwald, who was awarded the Nobel prize in 1909 for his work on catalysis: "A catalyst is an agent which increases the rate of a chemical reaction without being consumed by it and without altering the final state of the thermodynamic balance of this reaction." This definition holds true for all catalysts; they differ in functionality, but not in their effect. There are three different types of catalysts, each with its own distinguishing characteristics:
1. Heterogeneous catalysts
2. Homogeneous catalysts
3. Biocatalysts
Heterogeneous catalysts are those that exist in a different phase than the reactants. In homogeneous catalysis, the catalyst is in the same phase as the reactants. This has the distinct advantage that a greater number of active catalyst molecules are available, whereas in the case of a heterogeneous catalyst, only the surface molecules are active. Biocatalysts, also called enzymes, are the most widespread catalysts. Without them life would not be possible, as almost all processes in nature are controlled by biocatalysts. Catalysis has far-reaching importance for the chemical industries: 90% of all chemical production processes are based on catalysis. More than 80% of the output of the chemical industry is achieved by means of processes that occur in the presence of catalysts. With a worldwide catalyst market presently estimated at about 12 billion US dollars, the value of the resulting products ranges, according to various estimates, from 1.2 to 6 trillion dollars. The German share of catalyst production on the world market, about 4%, is not commensurate, however, with the significant role the chemical industry plays in the German economy (Herrmann 2000).
Fig. 21. Areas of application for catalysts
Catalysts are among the oldest applications of nanotechnology. The catalytic converter, for example, has been a nanotechnology application from its very beginning. Nanotechnological progress in the development of new catalysts is already being achieved through improvements in the production of nano-scale particles. The reason for this is that the catalytic process is enhanced by these nanoparticles, which have a smaller diameter: catalytic reactions take place on catalytically reactive surfaces, and the smaller the diameter, the greater the specific surface area in relation to volume. This yields significant advantages: the size of the catalyst can be reduced while retaining the same reactive surface area; likewise, the number of reactive atoms on the surface can be increased for the same given mass (see the sketch below). Along with progress in the production of nanoparticles, developments in the field of nanoporous materials and nanostructured surfaces are also beneficial for catalysis. This was shown in a study on technological development in catalytic converters for automobiles (Steinfeldt et al. 2003); the catalytic effect of the platinum group metal (PGM) particles in such converters relies upon this surface-to-volume-ratio effect.
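The surface-to-volume argument can be made concrete with a small Python sketch, assuming ideal spherical particles and, purely for illustration, the density of platinum; real supported catalysts deviate from this idealization.

    # Specific surface area of ideal spherical particles (Python); platinum
    # density is used purely for illustration.
    RHO = 21450.0  # kg/m^3, density of platinum

    def surface_m2_per_g(diameter_nm: float) -> float:
        # For a sphere, area/volume = 6/d, hence area/mass = 6/(rho*d).
        return 6.0 / (RHO * diameter_nm * 1e-9) / 1000.0

    for d in (100, 10, 5, 2):
        print(f"{d:>4} nm: {surface_m2_per_g(d):6.1f} m^2 per gram")
    # Halving the particle diameter doubles the reactive area per gram.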
This surface-to-volume effect, along with continual improvements in the thermally stable bonding of these particles in the converter and reductions in the aging process, reduces the amount of PGM required and likewise makes it possible to meet ever stricter international emissions standards. Environmental relief is thus obtained through a reduction of pollutants in automotive exhaust emissions and through the reduced environmental impact achieved by "saving" PGM, the production of which is very expensive. Depending on the type of emission being considered, the eco-efficiency potential ranges from 10-40% (Steinfeldt et al. 2003). Our understanding of catalysis is also changing radically. The development of catalysts once relied solely on empirical methods and experiential values. Nanoanalytical methods such as scanning tunneling microscopy enable us to investigate the mechanisms of catalytic reactions in ever-greater detail and to better understand and model them.
Focus of the investigation: styrene synthesis
Styrene production is considered to be one of the ten most important petrochemical processes, and styrene is one of the most important base chemicals in the chemical industry. Numerous important synthetic polymer materials, the chief one being polystyrene, are made from the monomer styrene. Alongside ethylene and vinyl chloride, it is one of the most important monomers.
Styrene: Economic impact and market data
Worldwide demand for styrene is estimated to be more than 20 million tons and growing by 5% annually (Rohden 2001). In 2001 the styrene industry faced a 1.8% lapse in demand, but by the following year demand had again increased by 5.1% (Childre 2003). An overview of the development of styrene production is provided in Table 21 (source: CEFIC 2006) and Table 22 (source: VCI 2003). Styrene is used solely to produce polymeric products, above all polystyrene. Other applications include the styrene-acrylonitrile copolymer (SAN), the terpolymer of acrylonitrile, butadiene, and styrene (ABS), styrene-butadiene (SBR) rubbers, and unsaturated polyester resins. However, the proportion of polystyrene being used in manufacturing is decreasing: in 1975, polystyrene accounted for 60% of the product groups worldwide, but today it is only about 50% (ISEF 2001). As with all other petrochemical derivatives, styrene is very much dependent upon crude oil prices and thus is subject to major price fluctuations.
Description of styrene production
Styrene is produced in a multi-stage process that can be traced back all the way to the procurement of the raw material. The production of styrene is based on intermediate products obtained from petrochemical resources, and the entire process therefore stretches from the exploration, extraction, and refining of crude oil and gas to the synthesis of styrene in the refinery. Fig. 22 provides an overview of the entire production process. Naphtha and other petroleum products are produced by atmospheric distillation under normal pressure and temperatures ranging from 350 to 370°C. Subsequently, long-chain saturated hydrocarbons of the naphtha are split by means of steam cracking into low-molecular compounds such as butane (C4H10), benzene (in small amounts), and ethylene (C2H4). Ethylene, propene (propylene), butane, and other products are likewise obtained from natural gas by cracking. Ethylbenzene is produced catalytically by alkylation of benzene with ethene (using AlCl3 or silica gel) under pressure.
This is a liquid-phase ethylation at an ethylene-benzene ratio of about 0.6:1, a benzene conversion of 52-55%, a temperature of 85-95°C, and atmospheric or slightly elevated pressure. Styrene (C6H5C2H3) is produced from ethylbenzene (C6H5C2H5) by means of a catalyst. In a further processing step, the styrene is converted by polymerization to polystyrene, thus forming the base material for the production of numerous synthetic materials.
Scope of investigation and accessibility of data
The scope of this case study is not specifically the entire styrene product life cycle, but rather the stages of processing in the production of styrene. However, it does make reference to the entire styrene product life cycle, i.e. the stages of raw material extraction, pre-production, and production of styrene are looked at in order to be able to assess the eco-efficiency potential within the overall picture. The functional reference quantity for the comparison is 1 kg of styrene. Comprehensive current data on material and energy flows in the production of styrene are available from the Association of Plastics Manufacturers in Europe (APME). This includes summary life cycle assessment data for the classic production of styrene, i.e. data on all stages, from crude oil and natural gas extraction and processing to the actual production of styrene (APME 1999). Moreover, APME also provides life cycle assessment data on the intermediate products benzene and ethylene. Between these intermediate products and the product styrene are two more stages, the ethylbenzene process and the styrene process, with the consequence that no clear differentiation is possible by means of this database. There are further life cycle assessment data available in the Gabi materials database (Gabi 4 Datenbank 1999b), likewise for the overall styrene production process, but also for the ethylbenzene process (from a production plant in the Netherlands, Gabi 4 Datenbank 1999a). Unfortunately, comparison of all available data sets did not reveal sufficient congruence for the emission data specific to the styrene process. This may be due to differences in calculation procedures or different data sources. Therefore only a differentiated evaluation of the energy consumption in styrene production was possible, as well as specific estimates of individual material flows (heavy metals). No quantified process data are available for the alternative styrene process based on a carbon-nanotube catalyst. However, assumptions about energy consumption can be derived from the description of the technology.
Selection of the variants
The production of styrene is a chemical process whose efficiency very much depends on the utilization of suitable catalysts. In contrast to the technologically established use of iron oxide based catalysts, a newly developed catalyst based on nanostructured carbon tubes, so-called nanotubes, is now available.
Variant 1: Classical styrene synthesis using an iron oxide catalyst
Styrene is synthesized in an industrial process that has been known for roughly 60 years: the dehydrogenation of ethylbenzene. The dehydrogenation of ethylbenzene to produce styrene is a reversible, endothermic equilibrium reaction (Lieb & Hildebrand 1982):
C6H5C2H5 ⇌ C6H5C2H3 + H2, ΔH(600°C) = +124.9 kJ/mol (1)
The dehydrogenation of ethylbenzene to produce styrene takes place at temperatures of 600°C in an ethylbenzene-steam mixture and is assisted by potassium-promoted iron oxide catalysts.
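A short side calculation in Python, using only the reaction enthalpy above and the standard molar mass of styrene, shows where the figure of about 1.2 MJ per kilogram of styrene quoted later comes from:

    # From enthalpy per mole to energy per kilogram of styrene (Python).
    DELTA_H = 124.9      # kJ/mol, dehydrogenation enthalpy at 600 degrees C
    M_STYRENE = 104.15   # g/mol, molar mass of styrene (C6H5C2H3)
    print(f"{DELTA_H / M_STYRENE:.2f} MJ/kg styrene")  # ~1.20 MJ/kg
    # (kJ/mol divided by g/mol equals kJ/g, i.e. MJ/kg.)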
Along with this main reaction, other reactions take place; these are listed in the appendix (Fig. 53). Two technologically different processes are used in the industry; they differ primarily in the way heat for the endothermic reaction is applied (Lieb & Hildebrand 1982; Schoen 2002). In more than 75% of the styrene production plants operating worldwide, dehydrogenation is carried out adiabatically. Multi-stage reactors or reactor beds in series are used.
a. The adiabatic process developed by Dow Chemical Company: The necessary heat for the reaction is applied in the form of superheated steam (approx. 830°C), which is added to the ethylbenzene vapor before the catalyst. The mass ratio of ethylbenzene to superheated steam is between 1.5:1 and 2:1. The temperature of the reactive mixture as it enters the reactor is approx. 650°C, and 570°C as it leaves. The styrene output after this first stage is still relatively low; therefore the reactive mixture is reheated to 640°C in a second (and, if required, a third) stage and is dehydrogenated once more. The production capacity of such a plant is presently about 500,000 t/year.
b. The isothermal process: In this procedure, a constant temperature is maintained in tubular reactors that are heated by circulating flue gas or molten salt. The feed temperature of the reactive mixture is about 600°C and can be kept nearly constant during the reaction with the catalyst layer. In this way, the amount of superheated steam needed can be kept to about half of that used in the adiabatic process. Newly built dehydrogenation furnaces have a per-unit capacity of up to 150,000 t/year.
The addition of steam is an inherent disadvantage of both procedures, as the energy for producing superheated steam can only be minimally recovered. Another weak point is the reversibility of the dehydrogenation reaction, which inhibits maximum styrene output. The maximum styrene conversion is therefore limited to 40-50% in the first pass, even in modern plants. However, to ensure sufficient selectivity (and as few side reactions as possible), a multi-stage plant brings the conversion rate only up to 65-70%. This means that 30-35% of the original ethylbenzene passes through the reactors unprocessed and must be separated and recovered in an energy-consuming procedure. A purity of more than 99.8% is required to make the styrene usable for subsequent processes such as polymerisation. Separation of ethylbenzene and other byproducts from styrene, a process complicated by the tendency of styrene to polymerize and the mere 9°C difference in boiling point between styrene and ethylbenzene, is extremely difficult and very cost-intensive.
Aspects of catalyst implementation
The catalyst consists primarily of iron oxides (80%), potassium (10%), and chromium oxide, as well as other selectivity-increasing heavy metal promoters (such as Cr, Ca, Al, V, W, or Li). The styrene catalyst is classified as an SLP catalyst (supported liquid phase). These catalysts consist of a porous solid carrier, which can be catalytically inert or active. On the carrier are found the catalytically reactive active components (promoters) in the form of a fused material or in a liquefied state (Hagen 1999). The potassium-promoted iron oxide catalyst is gradually deactivated by use and must be replaced every two to three years. Considering the vast size of the reactors used in the industrial process, this replacement can be assumed to be an extremely cost-intensive process.
Besides the already mentioned causes of deactivation of the catalyst (blocking and coke deposits), another three reasons for gradual failure of the catalyst are given (Maximova 2002):
- Loss or redistribution of the potassium promoters
- Major changes in the oxidation state of the iron oxide
- Physical deterioration of the catalyst
One can give rise to the other, and all take place simultaneously; the deactivation of the iron oxide catalyst can therefore be viewed as an extraordinarily complex process.
Variant 2: Styrene synthesis with carbon nanotube catalyst
By being able to get a nanometer-scale "look" at events taking place during catalytic styrene synthesis, scientists at the Fritz-Haber-Institute of the Max-Planck-Society were able to record the actual sequence of events in styrene synthesis. New research suggests that a coke layer is constantly present during styrene synthesis, even on the active catalyst surface; coke gasification and coke formation are in equilibrium. Therefore it is assumed that coke is not simply a black layer that promotes deactivation, but rather that there are different types of coke with different properties (Ketteler 2002). These investigations suggest that the carbon film that forms contains the real catalytically active species and that the potassium iron oxide film is only necessary for the formation of this active carbon species, i.e. it serves only as a "co-catalyst." Moreover, it has been demonstrated that various carbon species (carbon black, graphite, "nanobulbs," and "nanofilaments") all show excellent activity and output (MPG 2002). The enterprise Nanoscape AG was founded in 2001 with the goal of technically implementing the results of this fundamental research. Its aim is to develop a new method of technical styrene synthesis using a nanotube-based catalytic process. Following the successful production of styrene on a small scale, this is now being expanded as part of an EU project to a reactor with a 100 g catalyst volume. This will be followed by a correspondingly larger pilot plant. A new nanostructured catalyst consisting of multi-wall nanotubes will be used. This not only permits increases in styrene output, it also changes the procedure from an energy-intensive endothermic process to a more energy-efficient exothermic process. Additionally, the new catalyst makes it possible to run the reaction by adding air instead of steam. Moreover, at the same conversion rate, selectivity can be increased and the process temperature lowered by 200°C, which significantly lowers the specific expenditure of energy. The plant schematic makes clear that this process is also characterised by a simpler plant structure as compared to traditional styrene production. The advantages of the new styrene production process on the basis of nanotube catalysts, and the associated ecological effects, can be summarized as follows:
- Change from an endothermic to an exothermic reaction: reduction of the specific energy consumption by (at least) 1.2 MJ/kg styrene (assuming ΔH = 124.9 kJ/mol); the exothermic heat release can technologically be kept low, and waste heat from the reactor could moreover be used for other processes, such as preheating of the reaction gas, heating of tubes, etc.
- Reduction of the reaction temperature by about 200°C, from 600°C to 400°C: reduction of the specific energy consumption.
- Change of the reactive medium from superheated steam to nitrogen/oxygen or air: reduction of the specific energy consumption, since the production and processing of steam is very energy-intensive; reduction of plant costs and of the requirements on reactor construction and heat exchangers (e.g. process water separation is eliminated completely); tube dimensions are reduced due to the lower temperature level and the different corrosion characteristics of the reactive media.
- Higher selectivity at the same conversion rate: reduction of the specific energy consumption through less distillation and recycling.
- Use of a carbon nanotube catalyst: replacement of heavy metals, hence no heavy metal contamination; easier catalyst management (assumption).
- Catalyst production: higher (energy) expenditures for the production of the nanotube catalyst are to be expected with the CVD process, although the technical requirements for multi-wall nanotubes cannot be compared to those of single-wall nanotubes.
Detailed life cycle assessment data for the alternative styrene synthesis are not available. However, on the basis of the description of the technology, the following deductions about the energy consumption of this process can be made in order to assess the energy demand at this process stage (source: Mestl 2004 and authors' own data):
1. The change of reaction type results in a reduction of the specific energy consumption by 1.2 MJ/kg styrene.
2. The reduction of the reaction temperature by about 200°C, from 600°C to 400°C, effects a 25% energy saving.
3. With the change of reaction medium from superheated steam to nitrogen/oxygen or air, as well as the higher selectivity at the same conversion rate, a further 5% saving of energy can be assumed.
Life cycle inventory analysis
In the life cycle inventory analysis, the material and energy relationships of the various styrene production processes are noted, with a view to possible environmental impacts, i.e. inputs from the environment and outputs into the environment are recorded. The goal of the life cycle inventory analysis is to establish a data inventory based on functional equivalents for the selected variants. Life cycle assessment data are available in summary form for traditional styrene production. The data encompass all processes, from crude oil and natural gas extraction and processing to styrene production. Intermediate processes as well as energy and transport processes are also included (APME 1999). As an example, gross energy demands are represented in the following. Additional LCA data are available in the appendix (Table 45ff). For further discussion there arises the question as to what portion of these material and energy flows, and of the associated total environmental impact, is to be attributed to the styrene process under investigation. As already discussed, using the available data it is only possible to make a precise differentiation with respect to the energy requirements of styrene production. This reveals that the feedstock proportion, i.e. the energy content of the material, makes up the greatest share (56%) of the gross energy demand. The direct energy demand for styrene production makes up 44% of the gross energy demand, at 37.04 MJ/kg. The actual styrene process requires about 6.4 MJ/kg of the 37.04 MJ/kg. This corresponds to a 17% share of the energy demand for styrene production.
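The following minimal Python sketch reproduces the deduction above, assuming that the 25% and 5% savings both refer to the 6.4 MJ/kg of the styrene process step; under this reading, the three effects add up to the "almost 50%" process-level saving and the 8-9% share of the direct energy demand reported below.

    # Reproducing the energy-saving deduction (Python), assuming the 25% and
    # 5% savings both refer to the 6.4 MJ/kg of the styrene process step.
    PROCESS = 6.4    # MJ/kg styrene, classic styrene process step
    DIRECT = 37.04   # MJ/kg styrene, direct energy demand of production

    saving = 1.2 + 0.25 * PROCESS + 0.05 * PROCESS  # the three effects above
    print(f"process-level saving: {saving / PROCESS:.0%}")          # ~49%
    print(f"share of direct energy demand: {saving / DIRECT:.1%}")  # ~8.4%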
It is this share that the higher efficiency of the alternative styrene production process can influence. The alternative process based on a nanotube catalyst already yields at this stage a potential energy saving of almost 50%.
Fig. 26. Comparison of the energy demands of the styrene production processes (source: authors' calculations, based on APME 1999; Gabi 4 Datenbank 1999a, 1999b)
With respect to the total energy requirement for styrene production, this would result in an 8-9% increase in energy efficiency. Another advantage of the alternative styrene production process is the replacement of heavy metals, which would otherwise be present in the catalyst. Heavy metal emissions into water from styrene production would be reduced by about 75%. However, in real-world applications, care must also be taken with the new nanotube catalysis method to minimize, to the extent possible, emissions of carbon nanotubes. The nanotubes should be firmly bonded to the carrier structure of the catalyst so as to avoid "tear-off" of the nanotubes from the carrier surface. Moreover, nanotube emissions should be minimized by means of sealed-plant technology. A discussion of possible risks and hazards associated with nanotube emissions takes place in the specific case study with a focus on risk potential.
Life cycle impact assessment
For the life cycle impact assessment, it is necessary to have access to emissions data which can be associated with specific environmental impacts. Inasmuch as the available data only allowed for a reasonable calculation of the energy demand, no additional impact assessment was possible.
Impediments to the introduction of nanotechnology to the marketplace
In the context of this case study, it makes sense to discuss the subject of impediments to the launch of nanotechnology. It is not necessarily a given that new nanotechnological processes, for example for styrene synthesis, that demonstrate environmental as well as economic advantages will succeed in the marketplace. Within the context of this study, a more in-depth interview was conducted to look at the market implementation of this nanotechnological solution. The results make clear that a number of styrene producers who have been introduced to this process do not deny its potential advantages. Since it is a procedure which so far has only been realized on a laboratory scale, upscaling to a larger-sized plant is necessary to prove the economic feasibility of the nanotechnological process. This requires considerable investment in further development, with all the risks that such development entails. The financing for this is not yet settled. At the same time, we find plant manufacturers and operators using the traditional processes who do not plan any re-investment and who point to their already extensive experience with the existing technology. In the end, it is these sunk costs, i.e. unrecoverable past expenditures (already invested capital as well as plant production experience) of the facility operators, that may finally prevent implementation of the new process.
Case-study Summary
In the course of this case study, we investigated an example of the ecological impact of nanotechnology-based catalytic applications. The use of a nanostructured catalyst based on nanotubes for the chemical process of styrene synthesis served as the specific example. Implementation of the new catalyst would greatly increase energy efficiency (by almost 50%) at the process level.
This improvement in efficiency results from two specific effects: first, it is possible to replace the former endothermic reaction with an exothermic one; second, the reaction temperature can be lowered considerably, the reaction medium altered, and the plant power input minimized. With regard to the overall styrene production life cycle, this would mean an increase in efficiency of about 8-9%. Furthermore, the new catalysis would make possible considerable reductions in heavy metal emissions during the product life cycle. Investigations into potential risks associated with the utilization of nanotubes must continue and be accounted for in facilities planning.
Case study 3: Nano-innovations in displays
4.4.1 Contents, goals, and methods
The display market is a dynamic and immensely growing market due to the importance of display screens in the fields of information and communication technology. The present sales volume of 26. Several different display technologies are competing in the market. In the past decade, display technology research and development has intensified with respect to the application of nanomaterials. Formerly, CRT displays were the most common in use; these are increasingly being squeezed out of the market by new display technologies. The new display technologies offer many advantages: improved technical specifications (e.g., reduced radiation emissions) and easier handling (e.g., smaller size and less weight). So far, only a slightly weaker performance in some respects (viewing angle, brightness, etc.) and, above all, a higher price have kept the new technologies from spreading even faster. In the course of the case study, a comparative investigation of conventional and nanotechnology-based display technologies in the form of a life cycle assessment profile was carried out. Possible eco-efficiency potentials were quantified to the extent possible.
Subjects of the investigation
The detailed investigations in this case study are limited to the following display technologies:
Cathode-ray tube (CRT)
Liquid-crystal display (LCD)
Organic light-emitting display (OLED)
Plasma display panel (PDP)
Carbon nanotube-based field-emitter display (CNT FED)
The CRT, the conventional and currently most widely used display technology, and the likewise well-established LCD face two new nanotechnological rivals: the OLED and the CNT FED. These are still primarily in research and development, but some product applications using OLEDs are already on the market. Plasma displays are already available commercially in various configurations and are being promoted as the best solution for large displays and information read-outs. In exactly this market segment, FEDs are said to have great potential; comparisons are thus made to plasma technology.
Variant 1: the cathode-ray tube (CRT)
The cathode-ray tube is the oldest and best-known device for generating moving images. A CRT monitor consists of an evacuated (10⁻⁶ to 10⁻⁷ torr) glass bulb plus a heated cathode (voltage about 25 kV), also known as the hot cathode or electron gun (Abrams et al. 2003). When heated, the electrons of the negatively charged cathode begin to oscillate and are then emitted from it. Between the cathode and the anode exists an accelerating potential of several kV. Due to this voltage difference, the electrons are accelerated in the direction of the anode and generate a point of light when they strike a phosphor coating on the side of the glass vessel (Tannas 1985).
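As a side calculation, the following Python sketch estimates how fast the electrons strike the phosphor, assuming the roughly 25 kV accelerating voltage quoted above; a relativistic formula is used because 25 keV is a noticeable fraction of the electron rest energy.

    # Relativistic speed of a 25 keV electron (Python).
    import math
    E_KEV = 25.0        # kinetic energy gained from a 25 kV potential
    REST_KEV = 511.0    # electron rest energy
    C = 2.998e8         # speed of light, m/s

    gamma = 1.0 + E_KEV / REST_KEV
    v = C * math.sqrt(1.0 - 1.0 / gamma**2)
    print(f"{v:.2e} m/s ({v / C:.2f} c)")  # ~9.0e7 m/s, about 0.30 c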
Color is generated by three individual electron beams, which strike differently coated phosphor layers through a hole or slotted mask, thus generating red, green, and blue light. The grid regulates the intensity of the electron beam, and thus the brightness of the resulting light spot, and is controlled by the video brightness signal. The electron beam passes through an electromagnetic field (the deflection yoke) and is thus able to reach every point of the screen. By means of focusing and magnetic coils and a shadow (slot) mask, the electron beam is concentrated and directed to the corresponding pixel (see Lohmann 1997; Blankenbach 1999; and others).
Fig. 28. Schematic diagram of a CRT display
Due to their construction, CRTs are very heavy as compared to other display technologies, require a lot of space, and are characterized by high energy consumption. Image quality is also negatively affected by screen burn-in, which occurs when a fixed image is displayed over a longer period of time. The advantage they offer over other display technologies is their considerably lower price.
Variant 2: the liquid-crystal display (LCD)
Liquid-crystal displays are passive, i.e., transmissive, displays requiring a light source placed behind the screen. The crystals function like a valve, either allowing light to pass or blocking it. This function is based on the anisotropic properties of liquid crystals: their liquid-crystalline aggregate state combines the molecular orientation of the solid, crystalline phase with the mobility of the liquid state. A liquid-crystal display consists of a system of two glass plates with a metallized lattice of conductors; positioned between them are the liquid crystals. In addition, a polarizer filter is positioned in front of and behind the glass plates, respectively, such that their orientations are crossed. If a light beam is sent through the first polarizer, only one polarization component remains. Unless a voltage is applied, the light is then rotated in its plane of polarization by the liquid crystal and thus can pass through the second polarizer filter, resulting in a bright pixel. When a voltage is applied, the light is no longer rotated and cannot pass through the second polarizer filter; the pixel remains dark.
Variant 3: the plasma display (PDP)
Plasma display panels (PDPs) consist of a system of two glass plates, each with a metallized lattice of conductors; between them is a mixture of inert gases such as argon or neon. An electric field is created at the crossover points by applying a voltage to the grid of conductors. This field raises the gas atoms to a higher energy level; when returning to the original level, the absorbed energy is emitted in the form of photons. Each pixel is directly generated "on site" by its own light source. A large pixel matrix is situated between the flat glass panels. Each pixel consists of three cells, each filled with inert gas. As a rule, neon and xenon are the main components; some manufacturers also add helium. The amount of gas used is very small and the pressure is also minimal. Charged electrons create tiny gas discharges which cause short-term changes of the aggregate state from gas to plasma. The resulting ultraviolet radiation generates, depending on the coating of the rear and lateral sides of the cell, red, green, or blue light via the phosphors. This phosphorescent light is visible as a pixel through the front panel.
The process of light generation is similar to that of the fluorescent tube, but on a much smaller scale, with the result that the energy efficiency of the discharge is only about six percent (Jüstel et al. 2000). Its advantages include outstanding image quality (fully distortion- and flicker-free, high resolution), a wide viewing angle, and immunity to electromagnetic interference (Blankenbach 1999). Disadvantages of PDP technology include, above all, high energy consumption, weight, and difficulties in achieving high brightness and strong contrast at the same time (see Deschamps 2000 and others).
Variant 4: the OLED
OLEDs (organic light-emitting diodes) differ from the usual LED displays through the use of organic emitter materials. The principal functionality of an OLED, like that of the inorganic LED, is based on injection electroluminescence: positive and negative charge carriers, which are injected at the respective electrodes, are brought to radiative recombination in an emitting layer. A DC voltage of only a few volts is sufficient for injection electroluminescence. A significant advantage of OLEDs is their independence from the substrate material (Scott et al. 2000; Steuber 2000). Organic light-emitting diodes were invented by C. W. Tang and S. A. Van Slyke (1987) of Kodak, who were the first to discover that organic semiconductors of the p- and n-type could be combined, in a way similar to the formation of p-n junctions in crystalline semiconductors, to make diodes. Moreover, the polymers generate light by recombination of holes and electrons in a very efficient manner, similar to gallium arsenide and other III-V semiconductors. In contrast to the manufacture of III-V LEDs, where crystalline perfection is necessary, organic semiconductors can be vacuum-deposited as amorphous layers. Two OLED structures are depicted in Fig. 30. By means of such a permutation of layers, a bright emission of green light with a current efficiency of 10-12 cd/A and an energy efficiency of roughly 4 lm/W at 2,000 cd/m² was achieved and utilized in products of the Pioneer Corporation. T. A. Ali, A. P. Ghosh, and W. E. Howard of IBM and eMagin Corporation have modified this structure to integrate OLEDs on a silicon chip with the intention of using it for micro-displays in headsets. They start with a high-work-function metal anode and end with a transparent cathode and an ITO layer. Transparency of the metal cathode is achieved by its negligible thickness of only 10 nm. They also modified the active light-emitting layer: by using diphenylene-vinylene (DPV) as a blue-green emitter and doping with a red colorant, they were able to generate white light (Ali et al. 1999). The worldwide technological development of OLEDs is very dynamic; in Germany, a group at the Institute of Applied Photophysics at TU Dresden has succeeded in lowering the operating voltage for OLEDs to 3 V and improving the performance efficiency by p-doping of the hole transport layer (HTL) (Blochwitz 2001). Prototypes of OLED displays up to 40 inches have already been shown by companies such as Samsung (Samsung 2005) and LG Philips (Pressetext 2004). Driving this development are the anticipated production cost savings over other displays.
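As a plausibility check on the quoted figures: for a Lambertian emitter, the luminous efficacy in lm/W is approximately pi times the current efficiency in cd/A divided by the drive voltage. The following Python sketch assumes a drive voltage of about 9 V, which is not stated in the sources and is used here only to show that the cd/A and lm/W figures are mutually consistent:

    # Consistency check of the quoted OLED figures (Python), assuming
    # Lambertian emission and an assumed drive voltage of about 9 V.
    import math
    CD_PER_A = 11.0   # mid-range of the quoted 10-12 cd/A
    VOLTAGE = 9.0     # assumed; not stated in the sources

    print(f"{math.pi * CD_PER_A / VOLTAGE:.1f} lm/W")  # ~3.8 lm/W vs. ~4 lm/W quoted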
Variant 5: CNT FED
The basic components of a field-emitter display are the rear plate with the cathode layer, a vacuum gap, and the front plate with an ITO layer on its interior side that serves as the anode. The cathode layer on the rear plate (emitter layer) is one of the essential components of FE displays. The type, characteristics, and nature of the emitters are decisive for image quality, energy consumption during use, and service life. Several emitter materials are known. Farthest along are the research and development of microtips made of molybdenum and tungsten and of emitter layers made of carbon nanotubes. Because of their unusual properties, the feasibility of using carbon nanotubes as field emitters was investigated shortly after their discovery. General requirements for field emitters include the following:
Stability at high current density
High conductivity
Low energy loss
High chemical stability
These requirements are well met by nanotubes. With their very high length-to-diameter ratio, high current density at low voltage, and high thermal and chemical stability, nanotubes were predestined to be well-suited materials for field emitters. Moreover, field emitter layers can be more easily produced using nanotubes than microtips and can be manufactured at a vacuum of 10⁻⁸ torr, as compared to 10⁻¹⁰ torr for tungsten and molybdenum (Baughman et al. 2002). This permits more cost-effective manufacture of CNT FEDs than of microtip FEDs (Information Society Technologies 2003). The most intricate component of a CNT FED is the cathode layer. It includes a glass substrate upon which a layer of conductor paths is imprinted. Above this layer the "cold cathode" is situated, an emitter layer consisting of nanotubes. Above that there is another electrode layer, the conductive paths of which lie crosswise to the first one. In this way the second coordinate of the pixels is defined; the pixels are situated at each of the junctions of the first and second layers. Electrons from the emitter layer can pass through the gaps etched into the second layer, through the vacuum, to the anode layer. Between the cathode and anode layers is a vacuum whose gap distance (as a rule, less than 1 mm) is defined by so-called spacers. Opposite each pixel of the cathode layer is a phosphor area of similar size above a transparent conductor material, which functions as the anode. The anode layer is situated on a glass substrate. The rear electrode contact is made by means of a metal layer on the glass substrate (Burden 2001).
Fig. 31. Schematic layout and enlarged image of a CNT FED
Field-emitter flat screens function much like cathode-ray tubes, with many simultaneously emitting cold-field electron sources per pixel, and achieve the same performance as the conventional CRT with respect to brightness, color reproduction, viewing angle, and rendering speed. During operation, a voltage of 1-7 kV is applied at the anode, depending on the size of the vacuum gap and the requirements of the display. Thus a difference in potential (voltage difference) is generated between anode and cathode, effecting the emission of electrons from the (nanotube) emitter layer and their acceleration across the vacuum gap. Current flow, electron energy, as well as the type and quality of the phosphor determine the color and brightness of the pixels. A significant advantage of field-emitter displays is that the application of an electric field produces a "cold" electron emission from the tip, with the result that they consume much less energy than the traditional cathode-ray tube, whose cathode must be considerably heated before it emits electrons.
Scope of investigation and availability of data
The scope (system boundaries) of the comparative assessments should cover the entire life cycle of the respective display technologies.
The individual life cycle stages include:
Raw material procurement, pre-production
Display manufacture
Use phase
Disposal/recycling
For the CRT and LCD assessments, we were able to rely on a very extensive American life cycle assessment study. The study, commissioned by the EPA Design for the Environment program, was carried out by Socolof, Overly, Kincaid, and Geibig of the University of Tennessee, Center for Clean Products and Clean Technologies, and published in December 2001 (Socolof et al. 2001). It is notable for the cooperation of all major American and Asian manufacturers of CRTs and LCDs. The study took as its functional unit the life cycle of a computer monitor. No quantitative life cycle assessment data are available for the other variants examined. For these technologies, qualitative descriptions of the manufacturing processes were carried out in order to arrive at a basis for making assumptions for the comparative assessments. With respect to the particularly relevant use phase, quantified data could be provided for all technologies. Data for the disposal/recycling phase are likewise only available for the first two variants. Following the example of the existing life cycle assessment, the life cycles of a 15-inch LCD monitor and a 17-inch CRT monitor were selected as the functional unit for the comparative life cycle assessment profile. Particularly for the use phase, the existing data were recalculated for this display size. For all other variants, the same service life as current CRT and LCD devices is assumed. This specifically includes the assumption that the OLED's current problem with the long-term stability of the organic luminescent material will be solved.
Description of the life cycle stages
Numerous raw materials are used in the manufacture of displays. In the following, the essential production steps and raw materials utilized for each of the display technologies considered are described and summarized. Detailed illustrations can be found in the chapter on life cycle stages. Production of a CRT monitor includes manufacture of the two main components, the panel glass and the glass funnel; the glass contains a fair amount of lead. The individual luminescent phosphors are then applied to the panel glass and sealed with a protective coating. A pre-assembled shadow mask or aperture grille is then attached. The two glass components are fitted together to form the glass flask or picture tube, into whose neck the cathode is then fused. Subsequently, the air is pumped out and the tube is sealed. After manufacture of the tube, other components such as the deflection unit are added in the final assembly (see Fig. 34). The following section describes the usual steps in the manufacture of an LCD display. One should be aware that various methods can be used, particularly in applying the liquid crystal layer. First, the rear glass plate with the TFT layer and the front glass plate with the color filters are produced. After that, the ITO layer is sputtered or printed, followed by application of the hard layer and tempering. Then the polyimide (PI) layers are printed, and the cured PI layer is rubbed to enable subsequent orientation of the LC molecules. After that, spacers are sprayed on, followed by seal deposition and curing and the application of external contacts for later wiring. During the subsequent cell assembly, the two glass plates are aligned and assembled to complete a panel.
After curing in the hot press oven, the liquid crystal fluid filling is added to the panels and polarizer filters are applied (Crystec Technology Trading GmbH 2003) (see Fig. 35). The manufacture of a plasma display begins with the fabrication of two glass plates, usually 3 mm thick, which are subsequently cleaned. Metal electrodes are applied in rows to the front plate. The next step is the preparation of the black matrix. The entire front plate is covered with a transparent dielectric ceramic layer, which cures at a temperature of almost 600°C. The front plate is then covered with a thin layer of magnesium oxide (MgO). On the rear plate, electrodes are also applied (in columns, forming a matrix with the first set) and likewise coated with a dielectric ceramic layer. After curing of the ceramic layer, a magnesium oxide layer is applied. Subsequently, the so-called barrier ribs are formed on the parallel electrode surfaces and the phosphors are deposited into the resulting channels. The rear plate is then fired before the two plates are finally assembled into one panel and fused together. The air is then evacuated from the inner space and replaced by a gas mixture (mostly helium and xenon) at a pressure of about 500 torr (Deschamps 2000) (see Fig. 36). The manufacture of OLEDs begins with the application of the transparent ITO layer to a glass substrate. In the following steps, all additional organic and metallic layers are applied by means of thermal evaporation. The organic materials require a temperature of 300-500°C for evaporation, but silver requires a temperature of 1,200°C. The layers are applied in a precisely followed series of complex process steps within a vacuum. Subsequently, the front plate is mounted and the entire assembly is sealed air-tight with a form of epoxy resin. Presently, only one production plant capable of series production of the first OLEDs is in operation, at SK Display Corporation, a joint venture of Kodak and Sanyo (Webelsiep 2003). A prototype of an in-line OLED production system, funded by the Federal Ministry of Education and Research (BMBF), exists at the Fraunhofer IPMS in Dresden (IPMS 2003). The plant consists of eleven process modules, in which 300 x 400 mm samples are coated. The substrates move vertically through the deposition chambers. Up to twelve line sources are available for deposition of the organic layer systems. Additionally, two PVD and inorganic evaporation sources are integrated into each of the electrode deposition systems (see Fig. 37). The following two illustrations make clear the reduction in manufacturing complexity of the OLED as compared to the LCD (described above). From the diagrams it can be seen that the OLED can be manufactured much more efficiently than the LCD because, e.g., no backlight is involved. This also makes it possible to reduce the depth of the screen from 5.5 mm in the LCD to 1.8 mm in the OLED. The materials required for a typical OLED are:

- Glass (0.7 mm)
- 100 nm ITO
- 100 nm organic material
- 100 nm Mg:Ag (for the contacts)
- Glass (0.7 mm)

For a 17" display with a viewable area of 918 cm², this results in an organic materials consumption of ca. 370 mg per display (including waste). The annual production of 1 million 17" displays would require roughly 0.4 t of organic materials. After production of the rear glass plate of the CNT FED, metal catalysts are applied to the pixel areas of the glass substrate by means of various procedures, including sputtering, lithography, microcontact printing, and ink jet printing.
The carefully controlled growth of the CNTs on the catalysts takes place in a precisely moderated CVD process at low temperatures (less than 500°C and lasting only a few minutes) (NEDO). On the front plate, which also consists of glass, an ITO layer is applied as an anode to the interior face and is then coated with a phosphor layer. Finally, the two plates are combined to form a panel and bonded together (see Fig. 38). The advantages of the new nanotechnology-based display technologies and their associated environmental impact may be summarized as follows. Detailed life cycle assessment data for the production and prior processes are only available for the CRT and LCD. For the other display technologies examined, the necessary processes could only be qualitatively described, as no material or energy data were available. A differentiated comparative assessment is not possible on the basis of these data. However, based on descriptions of the technology, the following assumptions could be made regarding energy consumption in pre-production and production for the three variants, in order to at least be able to estimate energy consumption:

- PDP: The same energy consumption as for the LCD is assumed.
- OLED: On the basis of the simpler assembly, lower energy consumption can be assumed (variant A: 10% energy reduction, variant B: 30% energy reduction).
- CNT FED: On the basis of the simpler assembly, lower energy consumption can be estimated for pre-production (assumption: as with OLED variant A, 10% energy reduction). The energy expenditure for manufacturing - even if one assumes application of the most demanding production technology - is also likely to be less than for the LCD; however, in order to allow a safety margin for nanotube production, the same expenditure as for the LCD is assumed.

Use phase

As important as image quality and usability are from a technical point of view, because of the long use phase, electrical energy consumption is the decisive parameter for an environmental assessment of display technology. To be able to compare the display technologies, one common energy consumption value for one common display size is needed. In the case of the CRT and LCD displays, this was the 15" monitor size. The energy consumption data available in the literature are for various display sizes, which makes comparisons difficult at best. Cambridge University's Department of Engineering assumes an energy consumption of 50-70 W for a 38" CNT FED with an image quality comparable to a CRT or plasma display (Amaratunga 2003). Samsung estimates 150 W energy consumption for a 42" CNT FED (Samsung 2003). Futaba produces an 8" full-color display with a brightness of 1,000 cd/m², which consumes only 7 W. According to Mounier (2002), a 10" FED display will consume only 2 W. A new 20" OLED display by IDTech (IBM) with 1280 x 768 pixels uses 25 W at a brightness of 300-500 cd/m². In comparison, the energy consumption of a 19" LCD display is 40 W (VDI-Nachrichten 2003). The energy performance data used in the case study are listed in the following table and were taken from a display technologies road map, details of which can be found in the appendix (see Table 51). Data for the year 2005 were converted to the respective display size in the case study as necessary.

Disposal / Recycling

Hard data for this life cycle stage for the CRT and LCD displays are found in the life cycle assessment by Socolof et al. (2001). No quantitative data are available for the other display technologies.
One can see from the assessment data for the CRT and LCD that this phase plays a minor role in the overall assessment, and it is therefore not quantitatively addressed in the rest of the process.

Life cycle inventory analysis

In the life cycle inventory analysis, the material and energy relationships between the display technology being studied and the environment are recorded, i.e., the input flows from the environment and the output flows that are returned to the environment are noted. The goal of the life cycle inventory analysis is to establish a data inventory based on functional equivalents for the selected variants. Complete quantitative life cycle inventory data are available for the CRT and LCD. On this basis, and using the assumptions described below, an assessment of the energy consumption of all display technologies for the pre-production, production, and use phases was made. A simple comparison of the total mass of the major components of a 17" CRT and a 15" LCD monitor makes obvious the considerable difference in mass between traditional tube technology and flat-screen technology - and that includes the other displays. A look at the bottom line for material and energy consumption over the entire service life underscores the advantages of flat-screen technology. As an example, the total energy consumption of a CRT display is 7.3 times higher than the total energy consumption of the respective LCD display, due to the increased amount of energy needed for glass production. Detailed data regarding material, chemical substances, and energy totals from the life cycle assessment utilized can be found in the appendix (Table 52ff). The environmental advantages of the LCD versus the CRT become clear, particularly in the comparative list of environmental categories. With the exception of two environmental impact categories, the LCD shows better results than the CRT. What further potential savings or improvements can these other display technologies offer - particularly those based on nanotechnology? This is a question for further discussion. Inasmuch as quantitative analyses with respect to substance inputs are not available, eco-efficiency potentials in the area of energy consumption can only be estimated. To do so, existing CRT and LCD data, as well as assumptions derived from them, for the product life cycle stages of pre-production, manufacturing, and use were used to arrive at energy consumption values. The energy consumption chart makes clear that the OLED variants, due to lower energy consumption in the use and production phases, fare particularly well when compared to current LCD displays. Energy efficiency increases, and thus an improved eco-efficiency of ca. 20% for the OLED 10% variant and 35% for the OLED 30% variant over the entire product life cycle, are possible in this field. Of course, this is dependent on R&D success in achieving the envisaged material and energy efficiencies, as well as solving the problem of the long-term stability of the luminescent substances. Under these assumptions, the CNT FED display would also score significantly better as compared to the LCD due to its greater efficiency in the use phase. These eco-efficiency potentials are very much dependent on the CNT FED production process being made as efficient as LCD production. With respect to difficult-to-assess substances, the study by Socolof et al. (2001) looks at lead, mercury, and liquid crystals.
A considerable amount of lead is present in the glass used in the CRT, as well as in the frit, although the lead in the glass is firmly bound in the glass matrix. Mercury is required in small quantities for the LCD backlight (3.99 mg). The mercury is vaporized by application of a voltage and generates ultraviolet light. During the process of manufacturing the backlight there is a risk of mercury emissions due to lamp breakage, mercury leakage, and waste. The risk can be minimized by process optimization and protective measures, but cannot be fully eliminated. Moreover, mercury emissions are also a component of the production of electric energy, as was already discussed in the case study on lighting applications. For the LCD display, an electrical energy production contribution of 3.22 mg mercury was assigned; for the CRT, due to its higher power consumption, 7.75 mg. Similar mercury emissions will occur with the other display technologies. Due to the lack of sufficient information, no final assessment of the toxicity of the liquid crystals could be made. Toxicological tests done by liquid-crystal manufacturers showed, e.g., that in 95.6% (562 out of 588) of the liquid crystals tested, a potential toxic hazard did not exist. In 99.9% (614 out of 615) of the liquid crystals tested, it could be shown that no carcinogenic risk exists. The German Federal Environment Agency also came to the conclusion that liquid-crystal substances from the company Merck represent a very low risk and that no special requirements are necessary for the disposal of LCDs (Merck KGaA 2000). Instead of liquid crystals, other organic materials are used in OLEDs. Potential risks from extremely dangerous substances in OLEDs are unlikely. The safety data sheets provided to us by the Fraunhofer IPMS regarding some of the substances used in OLEDs do not indicate any potential hazards. The carbon nanotubes used as field emitters in CNT FEDs are grown on the substrate in a carefully controlled CVD process; the result is tightly sealed in the product. Therefore the potential for the release of carbon nanotubes appears to be very low.

Life cycle impact assessment

To complete the impact assessment, it is necessary to have access to emissions data which can be allocated to specific environmental impacts. Since the underlying figures are estimates, and since a direct relationship exists between energy consumption and the relevant emissions, such an assessment would simply mirror the energy results in the categories greenhouse effect, acidification, and eutrophication and would not provide any new knowledge. We therefore do not present it.

Case-study Summary

The objective of this case study was an investigation of the eco-efficiency potential of new nanotechnological products currently being developed for the display industry. For this purpose, OLED and CNT FED displays were compared with conventional CRT, LCD, and plasma displays. Due to the differing stages of development of the technologies under investigation, the resulting eco-efficiency potential assessments come with a certain degree of uncertainty. In the overall product life cycle, the manufacturing phase is responsible for an ever increasing share of the environmental impacts. The successful implementation in mass production of the material and energy efficiency increases offered by OLEDs will make it possible to realize significant eco-efficiency potentials. At the very least, a 20% savings in energy as compared to LCDs over the entire product life cycle should be possible.
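How phase-wise reductions add up to such a life-cycle figure can be illustrated with a small calculation. The sketch below is purely illustrative: the phase shares and the use-phase advantage are hypothetical placeholder values chosen for the example (the actual figures come from the Socolof et al. data and the road map in the appendix, which are not reproduced here); only the 10%/30% production-energy reductions correspond to the assumptions stated above.

```python
# Illustrative only: how per-phase energy reductions combine into a life-cycle
# saving. The phase shares and the use-phase reduction are hypothetical
# placeholder values, NOT the figures of the underlying study.

def life_cycle_saving(shares, reductions):
    """Weighted sum of per-phase energy reductions.

    shares     -- fraction of total life-cycle energy per phase (sums to 1)
    reductions -- fractional energy reduction per phase vs. the reference (LCD)
    """
    return sum(shares[phase] * reductions[phase] for phase in shares)

# Hypothetical split of the LCD's life-cycle energy (placeholder values).
shares = {"pre-production/production": 0.40, "use": 0.60}

for label, production_cut in [("OLED variant A (10%)", 0.10),
                              ("OLED variant B (30%)", 0.30)]:
    # Hypothetical use-phase advantage of the OLED (placeholder value).
    reductions = {"pre-production/production": production_cut, "use": 0.35}
    print(f"{label}: life-cycle energy saving ≈ {life_cycle_saving(shares, reductions):.0%}")
```

With shares and a use-phase advantage of roughly this order, totals in the 20-35% range cited above emerge (here about 25% and 33%); the point of the sketch is only the weighted-sum arithmetic, not the specific numbers.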
Likewise, development of the eco-efficiency potential of the CNT FED will become possible once the manufacturing process, particularly the highly complex production of nanotubes for field emitters, becomes as efficient as current processes. Risk potentials from these new technologies are unlikely.

4.5 Case study 4: Nano-applications in the lighting industry

4.5.1 Contents, goals, and methods

Light in its many applications is a large consumer of energy. In Germany, about ten percent of the electrical energy consumed is used for lighting (ZVEI 2003). In this case study we investigate the ecological potential of new nanotechnology-based solutions for lighting. The German agenda "Optical Technologies for the 21st Century" identifies light-emitting diodes as an efficient and environmentally favorable light source and is promoting them in the course of its "Optical Technologies" development program under the title "Nanolux - white light-emitting diodes for lighting." Specifically, this case study investigates white LEDs (light-emitting diodes) and compares them with conventional light sources (the incandescent lamp and the compact fluorescent lamp). Evaluation of the ecological relevance is carried out by means of a comparative life cycle assessment. Quantum dots and their potential for improvements in efficiency in the lighting industry are also qualitatively addressed.

Introduction

Light can be defined as electromagnetic waves with frequencies in the visible range, perceivable as having a particular brightness and color. Waves of other frequencies have names that primarily characterize their use, but no color. Many of our present-day light sources are thermal radiators. These include the Sun, candles, incandescent lamps, and tungsten-halogen lamps. The luminous color of the object is dependent upon its temperature; i.e., such light sources generate light as a secondary product of heating up. The other group of light sources generates light by electric radiation, luminescence, or crystal radiation. This group includes discharge-type lamps (including compact fluorescents) and the even more advanced light-emitting diodes, which are rapidly and continually being developed in order to make them competitive on the mass market. The following diagram provides a survey of the most important lamp types today. Other discharge-type lamps include the low-pressure sodium-vapor lamp, ultraviolet, sodium-xenon, xenon, and reflector lamps, various automotive lamps, etc.

Subject of the investigation

The case study looked at the use of light sources for illumination. For this purpose, three different types were compared: the two traditional light sources, the incandescent lamp and the energy-saving lamp, and the white LED based on nanoscale layers.

Variant 1: the incandescent lamp

The incandescent lamp, with a luminous efficiency of ca. 15 lm/W (WKO 2003), is the most widely used electric light source. It can be arbitrarily switched on and off and is used in all fields of interior and exterior lighting. The average service life is roughly 1,000-1,500 hours. The incandescent filament, a double-coiled wire between two lead-in electrodes inside a glass bulb, begins to glow when an electrical current is applied. A gas mixture inside the glass bulb prevents rapid vaporization of the filament. Tungsten, with its high melting point of roughly 3,400°C, is used for the filament. When the current passes through the bulb, the filament is rapidly heated up to a temperature of about 2,600°C. This causes bright light to be radiated.
The electrical energy is converted into light very inefficiently - 90-95% of the energy is converted into undesired heat. The incandescent lamp therefore has a very poor energy efficiency; only a small percentage of the energy is converted into visible light. The energy efficiency can be enhanced by increasing the filament temperature; this requires filling the glass bulb with a halogen mixture to maintain the service life of the lamp. This further development is called the tungsten-halogen lamp. In halogen-filled lamps, the tungsten wire may reach a temperature of roughly 3,000°C.

Variant 2: the energy-saving lamp (compact fluorescent)

Saving energy and resources is a guiding principle of our time. Using discharge-type lamps allows energy to be applied much more efficiently for lighting than with conventional incandescent lamps. They include the common fluorescent tube lamp, often found in the home and workplace. In particular, energy-saving lamps (compact fluorescents) - a miniature, enhanced form of the fluorescent lamp - make four to five times better use of electrical energy than do incandescent lamps. They are more expensive than normal incandescents, but convert up to 25% of the applied energy into light and have a luminous efficiency of about 60 lm/W (WKO 2003). Moreover, compact fluorescent lamps have a service life of 8,000 to 14,000 hours (KEVAG 2003). A gas mixture containing, e.g., mercury vapor is a major component of discharge-type lamps. When, as a consequence of an applied voltage, a current begins to flow, electrons move from one electrode to the other. The electrons collide with the mercury atoms in the gas mixture inside the tube. These collisions cause energy in the form of ultraviolet light to be released. The ultraviolet light is absorbed by the phosphor coating (e.g., metallic salts) on the inner surface of the glass tube. The phosphors are thus stimulated and emit visible light. A discharge-type lamp requires a ballast, which provides the necessary high voltage for ignition and also the normal operating voltage.

Variant 3: the (white) light-emitting diode

Recently the use of white light-emitting diodes (LEDs) has come into discussion as an alternative to conventional light sources such as incandescents and fluorescent tubes; this is because it is generally assumed that LEDs can produce light more efficiently. Light generation by means of light-emitting diodes is based on semiconductor lighting technology. This makes it possible, for the first time, to generate a cold light without a significant heat component. The technical problem is that although the internal quantum efficiency of an LED is very high, only a small portion of the light can be decoupled from the component. Internally, up to 90% of the energy can be transformed into visible light, but only about 20-30% (and this number is increasing) is externally available (CJ-Light GmbH 2003). Light-emitting diodes have a very long service life. The service life of the LED is characterized by the number of failed components during a given period and by the decrease of luminous flux relative to the original value (to 50%). A complete failure of all components during the service life is practically impossible when operating norms are observed. Service life, efficiency, and light color of the LED depend very much on temperature, with the consequence that the thermal balance must be carefully observed when using LEDs. Additionally, the light-emitting diode can be destroyed by application of too high a voltage.
Additional electronic components in the form of a ballast are necessary for LED systems and modules. The maximum service life of 100,000 hours can be achieved in red, yellow, and orange LEDs under normal ambient temperatures and at 50% of the maximum allowed current recommended by the manufacturer. The service life of white light-emitting diodes is greatest at an ambient temperature 30 K below the specified value and at 50% of the original current rating (FGL 2003). White LEDs have an average service life of about 15,000 hours. At the core of light-emitting diodes are semiconductor crystals that generate light when a current is sent through them. Inside the crystals is an n-conducting region having a surplus of electrons (negative charge) and a p-conducting region with a deficit of electrons (positive charge). Between those is a transition area (called the p-n junction or depletion zone). By applying a voltage, the electrons in the n-region gain enough energy to overcome the depletion zone. As soon as these electrons arrive at the p-region, they unite with the positive charges. Energy is released, which is discharged in the form of electromagnetic radiation. The semiconductor material determines the color of the light emitted. The colors red, green, yellow, and blue can be generated. The materials are produced on the basis of aluminum indium gallium phosphide or aluminum gallium arsenide (AlInGaP or AlGaAs) for red and yellow LEDs, and (indium) gallium nitride (InGaN or GaN) for green and blue diodes (FGL 2003). There are two ways to produce white LEDs: by the additive mixture of colors or by luminescence conversion. In the first case, a white LED is produced by combining chips emitting light of different colors into one LED. The various emitters (e.g., red, blue, and green) are placed together so tightly in the device that from a sufficient distance they cannot be distinguished by the human eye and appear as one color, thus producing the impression of white light. This method is also suitable for producing many other colors of light. LEDs producing colored light in this manner (including white) are also called multi-LEDs. The white light produced by multi-LEDs, however, is not as convincing as that produced by means of luminescence conversion. The color rendition quality of the multi-LED is poorer, and the differing brightness and operating conditions of the various LED chips make this a complicated and therefore expensive solution. Generation of white light by luminescence conversion is achieved using a combination of a blue or ultraviolet LED and a luminescence dye. When current is flowing, part of the short-wave light is absorbed by the dye, exciting it to emit light. Yellow-orange (long-wave, low-energy) light is emitted. The superimposition of the various spectral colors is perceived as white light. Only with the development of white light LEDs did the devices become interesting for illumination purposes. The first white light LED was developed by the Japanese firm Nichia in 1995 and was based on the blue LED, also developed by them; it has been commercially manufactured since 1997. At the same time as Nichia, the Fraunhofer Institute for Applied Solid-State Physics (IAF) developed a white light LED as well as - in close cooperation with Osram OS - a manufacturing process. The know-how was transferred there and production began in summer 1998. Agilent (Lumileds) likewise began mass production of white LEDs in summer 1998.
General Electric (GELcore) and Toyoda Gosei have been in production since 1999. White LEDs are now offered by almost all major manufacturers. Today LEDs are increasingly used for the most varied lighting purposes. Table 34 lists possible applications of LEDs according to their properties.

Scope of investigation and availability of data

The scope (system boundaries) of the comparative assessments includes the entire life cycle of the light sources. The individual life cycle stages include:

- Raw material procurement, pre-production
- Manufacture of light sources
- Use phase
- Disposal/recycling

Data from the most recently available research study were used for the assessment of incandescent and energy-saving lamps. The study, authorized by the Federal Energy Office of Switzerland, was completed by Mani and published in August 1994. Mani worked closely with the manufacturer OSRAM to collect the required data. Specifically, the data on raw materials procurement and light source manufacturing were taken from this study. For the third variant, white LEDs, quantitative data were only available for the production of the 0.35 g semiconductor chip, the core of the LED. The data on LED chip production, which included pre-production and raw materials procurement data, were taken from the GABI database and are for 1999. No detailed assessment data were available for the other components of the white LED (the housing, series resistor, fluorescent material, etc.). This must be seen in perspective, however, as the greatest share of the environmental impact results from the use phase. In the comparative assessment of incandescent and energy-saving lamps (Mani 1994), a 75-watt incandescent lamp manufactured by OSRAM (variant 1) was compared to a 15-watt energy-saving Dulux EL lamp with integrated electronic ballast, also by OSRAM (variant 2a). As the performance parameters of energy-saving lamps have regularly been improved (particularly with regard to service life), the case study also considers the current energy-saving lamp Dulux EL Longlife (variant 2b); however, the production data are based on the former Dulux EL. These conventional lamps are compared to two white LEDs, the "white LED of today" (variant 3a) and the "white LED of tomorrow" (variant 3b), each having a power rating of 1 watt. These two variants make possible an assessment of currently available LEDs (assumed luminous efficiency: 18 lm/W) as well as the potential of future efficiency enhancements (assumption: 65 lm/W). A defined quantity of light was established for the assessment profile. The average quantity of light emitted by the Dulux EL energy-saving lamp from Mani, 6.579 million lumen-hours (lmh), was established as the reference value and functional unit for the profile. This value is arrived at by multiplying the service life of the Dulux EL by its mean luminous flux. The resulting technical data for the five variants used in the case study can be found in Table 35.

Raw material procurement and production

A great number of raw materials are used in the production of light sources. In the following, the essential production steps for the three light sources are shown, with special reference to the production of LEDs. The manufacture of incandescent lamps consists of four major production steps: production of the glass envelope, production of the tungsten filament and its support structure, production of several small components, and final assembly of all components.
The production of compact fluorescent lamps also consists of four major steps: production of the glass envelope, production of the luminescent material, production of the electrode structure and various small components, and final assembly of the components.

Fig. 44. Stages in production of the compact fluorescent lamp

The production of light-emitting diodes generally follows a different procedure than that described above and is depicted in the following chart. The wafer-processing and subsequent steps are explained below.

Fig. 45. Stages of production for the LED (manufacture of monocrystalline raw materials - seed crystals lowered into the melt and withdrawn under rotation, Czochralski process or floating-zone process; wafer production and milling - slicing of the crystal into thin wafers; application of the luminescent coating by the epitaxial process; lithographic application of gold-alloy contacts and backside wafer coating; production of the p-n junctions and bonding; assembly of components - bonding/alloying of the chip to the lead frames, connection of the upper contact with the two electrodes, cutting of the wafer into semiconductor chips)

The core of a light-emitting diode is the LED chip (consisting of elements from the third and fifth groups of the periodic table), which is produced in the wafer-processing stage. After the monocrystalline base material is cut into individual wafer-like discs, various semiconductor layers are applied to the wafers using the epitaxial process. This refers to the controlled epitaxial growth of a substance on a monocrystalline base, the substrate. A key technology in this process, Metal-Organic Chemical Vapor Deposition (MOCVD), has proven to be the best method for the manufacture of optoelectronic and electronic semiconductor layers in modern light sources such as LEDs and lasers, high-output solar cells, and high-frequency and power electronics. Aixtron AG, which builds MOCVD plants worldwide, is the market leader in this field. In Metal-Organic Chemical Vapor Deposition, nanolayers are grown from gaseous, metal-organic substances on wafer slices. Pre-reactions of the elements to be deposited are avoided by conveying them into the reactor either in hydride form or bound to associated organic molecules (metal-organics). These substances are also called "precursors." The source substances are transported by means of a carrier gas (hydrogen, nitrogen) across a substrate surface in the reactor, which is heated to 350°C to 1,200°C, depending on the material system. In this procedure, the precursor molecules dissociate thermally (pyrolysis) on the substrate surface (Dadgar 2003). The core of an MOCVD plant is the Planetary Reactor. Geometrically arranged wafers revolve like the planets on a likewise rotating carrier disk. In this way the wafers are evenly exposed to the process gases. Growth of the layers on the wafer is determined by the quantity and composition of the process gases, temperature, pressure, and time. Up to 50 different layers are needed to produce functional chips; these layers are applied to the wafer during a 5-6 hour production "run." At this point the MOCVD process is complete and the coated wafers leave the reactor. The crystalline deposition of the materials can be very precisely controlled, thus allowing the controlled deposition of the various materials one above the other to create the desired layer properties.
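To give a feeling for why such a run takes several hours, the following sketch simply totals the growth time of a hypothetical layer stack. The layer list, thicknesses, growth rates, and overhead are invented placeholder values, not the recipe or process data of any actual LED or reactor; only the order of magnitude of the result is meant to match the 5-6 hour run mentioned above.

```python
# Hypothetical MOCVD layer stack, for illustration only: layer thicknesses (nm)
# and growth rates (nm per minute) are placeholder values, not a real recipe.
layer_stack = [
    ("buffer/base layer",          2000, 25.0),
    ("n-doped layer",              1500, 15.0),
    ("active (luminescent) layers",  50,  1.0),  # grown very slowly for precision
    ("electron blocking layer",      20,  1.0),
    ("p-doped layer",               200,  5.0),
]

total_minutes = 0.0
for name, thickness_nm, rate_nm_per_min in layer_stack:
    minutes = thickness_nm / rate_nm_per_min
    total_minutes += minutes
    print(f"{name:30s} {thickness_nm:6.0f} nm  ->  {minutes:6.1f} min")

overhead_minutes = 60.0   # placeholder for heat-up, stabilization, cool-down
print(f"pure growth time: {total_minutes / 60:.1f} h")
print(f"total run time (with overhead): {(total_minutes + overhead_minutes) / 60:.1f} h")
```

With these placeholder figures the run comes out at roughly five to six hours; the slow, precisely controlled growth of the thin active layers is what dominates neither the time nor the thickness alone, but the trade-off between the two.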
A modern blue LED, for example, consists of a great number of different materials such as gallium nitride, used as the base material; indium gallium nitride, for the luminescent layer; and aluminum gallium nitride to enhance efficiency. Silicon and magnesium are used for the n- and p-regions (Krost & Dadgar 2002). In the MOCVD process, by manipulating gas valve switching times and the gas flow, ultrathin layers and abrupt boundary layers can furthermore be produced. The process parameters are easily scaled up to large reactors for mass production. The process can be universally implemented, as a metal-organic precursor exists for each chemical element (Grahn 2003).

Fig. 46. Schematic representation of the chip structure of an AlInGaP LED

After the wafers leave the reactor, they are characterized (contacting of the p-n junctions). Thereafter electrical contact points consisting of a gold alloy are applied by means of a lithographic photoresist process, and an additional layer of metal is applied to the backside of the wafer. To avoid energy losses in the gold alloy, it undergoes an annealing process that lowers its resistance to a fraction of the original value. In the next steps the wafer is attached to an adhesive film, enclosed in a tension ring, and cut with a diamond saw into individual segments or dies 0.35 mm by 0.35 mm in size. The functionality of the dies is then checked (Aixtron AG 1999). In the case of blue- or ultraviolet-emitting diodes used in the manufacture of white LEDs, the diode chip in the reflector is covered with a drop of luminescent dye. Thereafter the various components of the light-emitting diode are assembled. LEDs come in a great number of structural configurations. Diverse metal/glass - or, more commonly, epoxy or plastic - housings are used. The latter can easily be produced in suitable numbers for mass production.

T-type LED: This form is the original structural shape of the light-emitting diode. It consists of the LED chip, the contacts, gold or aluminum connecting wire, and the plastic housing. After fixing the chip to the reflector on the metal lead, the contact to the second lead is established using a thin gold wire. Finally this is all molded into a stable unit with epoxy resin or other plastic material. The optical characteristics of the LED are determined by the reflector geometry, the shape of the plastic housing, and the position of the chip inside the housing (Vossloh Schwabe Deutschland GmbH 2003).

SMD LED: There are also the LED soldered to the back of the printed circuit board and the SMD (Surface Mounted Device), an extremely miniaturized form of the LED. Unlike the standard LED, the SMD LED has no metallic reflector. There are many different SMD structural shapes and sizes, varying according to application.

COB LED: Tightly packed, effectively heat-dissipating solutions can be achieved using chip-on-board (COB) technology. The raw chips are directly installed on the circuit board. The chip connections and the board are connected with gold wire. The chip is covered and protected with a drop of epoxy resin. This technology allows more light to be radiated from a smaller surface. LED modules are standardized or customized LEDs that consist of several LED-bearing circuit boards. These are readily used by manufacturers of display and signal light technology. The boards to which the diodes are attached can have many possible shapes.
By varying the dimensions, mounted components, and wiring, the housing shapes can be varied with relative freedom (Haller 2003). Furthermore, the first LED modules which could someday replace conventional lamps already exist: the socket-mount LED module. It combines the LED, the necessary electronics, and the standard base (e.g., E27) to form a complete lamp that may be used in an existing luminaire housing.

Use phase

Incandescent and energy-saving lamps are used everywhere: in business and industry as well as in the home; however, the incandescent lamp is used much more commonly in the private household. More common in the business place is the energy-efficient fluorescent tube. For economic reasons already discussed, white LEDs are still used mostly in special applications. In order to nonetheless assess and compare potential environmental impacts for the entire field, white LEDs are compared in this case study to conventional light sources. The use phase is of major consequence in the life cycle assessment profile, as light sources are a significant consumer of electrical energy in this phase. The energy consumption is based upon the production of a specific reference light quantity (RLQ), which serves as an equivalence value. The specific energy consumptions of the variants are derived by multiplying power input, service life, and the number of lamps required. Emission factors for the current electrical power mix in Germany, as taken from the GEMIS 4.1 database and listed in the appendix (Table 56), form the basis for the calculations of environmental impacts in the use phase (GEMIS 4.1 2003).

Disposal

There are no legal regulations governing the disposal of incandescent lamps, as the environmental impact of almost all substances they contain is negligible due to their nature and the amounts involved. Therefore almost all incandescent lamps are disposed of in the household waste. The disposal of compact fluorescents, however, is regulated. In 1996, the German Waste Avoidance, Recycling and Disposal Act came into effect. It places discharge-type lamps (in the case of disposal and recycling) into the category of waste requiring special attention. The lamps must be collected separately and classified as hazardous waste or be recycled. The regulations governing the disposal of electrical and electronic devices changed in 2002. Following modifications to the European Waste Catalogue in January 2002, almost all electronic and electrical devices are now classified as waste requiring special handling. They may not be disposed of in household or ordinary waste. The fluorescent tube and the light-emitting diode are also listed in this catalogue. They are classified as materials for special handling and supervision - the energy-saving lamp because of its mercury content and the LED because of possible quantities of arsenic, gallium, phosphorus, or compounds containing them (Senatsverwaltung für Stadtentwicklung Berlin 2003). Newer developments include the EU directives on waste electrical and electronic devices, in effect since February 2003. Directive 2002/96/EG regulates the disposal of electronic and electrical devices, whereas directive 2002/95/EG limits the use of certain hazardous substances in these devices. The European directives stipulate that the listed products (including fluorescent tubes and LEDs) must be returned to and accepted by the manufacturers and properly disposed of.
Additionally, directive 2002/95/EG contains several exceptions with regard to the avoidance of certain hazardous substances in products. For example, it is explicitly mentioned that no more than 5 mg of mercury will be allowed in compact fluorescent lamps in the future. The directives are to be incorporated into national law by August 2004; draft regulations for the disposal of electrical and electronic devices, based upon the Waste Avoidance, Recycling and Disposal Act, already exist. Detailed data for this life cycle stage are included in the environmental assessment by Mani (1994) for the incandescent lamp and the energy-saving lamp. No quantitative data are available for LEDs. As it is clear from the assessment data for the incandescent lamp and the energy-saving lamp that this stage only plays a minor role in the overall investigation, the question of disposal will not be considered quantitatively, but only discussed with respect to problematic substances (mercury).

Life cycle inventory analysis

In the life cycle inventory analysis, the material and energy relationships between the lighting system being studied and the environment are recorded, i.e., the input flows from the environment and the output flows that are returned to the environment are noted. The goal is to establish a data inventory based upon functional equivalents for the selected variants. Since quantitative data for disposal are not available, calculations can only be made for manufacturing, inclusive of raw materials procurement, and for the use phase. The total absolute material and primary energy amounts needed for the manufacture of the quantity of lamps required to generate the set reference light quantity (RLQ) of 6.579 million lumen-hours are listed in the table below. The total material flows for the five lighting variants can be found in Table 57 in the appendix. In looking at the material quantities for the first three variants, consideration must be given to the packaging. The ratio of materials required to product quantity is 2.1-2.6:1. The large material quantities for LED chip production in proportion to product quantity reflect the fact that semiconductor technologies require considerable volumes of raw materials such as ore and stone and auxiliary materials in order to produce a small, highly complex quantity of product. Here the ratio is 314:1. Moreover, an assessment of a complete LED lighting system must also consider materials for the housing, ballast resistor, packaging, etc. The situation is different for the primary energy requirement. The primary energy requirement for LED chip production is lower than that for conventional light sources. The difference must again be put into perspective, as the missing system components of the LED lighting system must be included. Production of the circuit board for the ballast of the Dulux EL15 energy-saving lamp alone consumes 17.3 MJ of the indicated 32.6 MJ per reference light quantity. It can therefore be assumed that the primary energy requirement of an LED lighting system is comparable to that of conventional lighting sources. If energy consumption in the use phase is compared to that of the production phase, it becomes obvious that, depending on the variant, 97-99% of the energy is consumed during the use phase, with the result that the deviation caused by the incomplete consideration of the LED lighting system can be viewed as minimal.
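A minimal sketch of the use-phase calculation described above follows: the number of lamps needed to deliver the reference light quantity and the electrical energy this consumes, per variant. The power ratings, luminous efficiencies, and service lives are the approximate values quoted in this case study (Table 35 contains the exact figures used in the assessment), so the output should be read as an order-of-magnitude check, not as the study's results.

```python
# Order-of-magnitude check of lamps required and use-phase energy per
# reference light quantity, using the approximate figures quoted in the text.

RLQ_LMH = 6.579e6  # reference light quantity in lumen-hours

variants = {
    # name:                     (power in W, luminous efficiency in lm/W, service life in h)
    "incandescent 75 W":        (75.0, 15.0,  1_000),
    "Dulux EL 15 W":            (15.0, 60.0,  8_000),
    "white LED today (1 W)":    ( 1.0, 18.0, 15_000),
    "white LED tomorrow (1 W)": ( 1.0, 65.0, 15_000),
}

for name, (power_w, efficiency_lm_per_w, life_h) in variants.items():
    flux_lm = power_w * efficiency_lm_per_w              # luminous flux of one lamp
    lamps = RLQ_LMH / (flux_lm * life_h)                 # lamps needed for the RLQ
    energy_kwh = (RLQ_LMH / efficiency_lm_per_w) / 1000  # use-phase electricity for the RLQ
    print(f"{name:26s} lamps ≈ {lamps:5.1f}   use-phase energy ≈ {energy_kwh:6.0f} kWh")
```

With these rounded inputs the incandescent lamp comes out at roughly 440 kWh per RLQ, the energy-saving lamp at about 110 kWh, the current white LED at about 370 kWh, and the future white LED at about 100 kWh; the roughly 3:1 ratio between the current white LED and the energy-saving lamp, and the LED's slight advantage over the incandescent lamp, correspond to the relations discussed below. (The Dulux EL comes out at slightly less than one lamp per RLQ only because the rounded figures used here differ slightly from the exact Table 35 values from which the RLQ was derived.)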
The central measure for the environmental assessment of light sources used for illumination is energy consumption during the use phase and the associated emissions. Energy consumption for raw materials procurement and manufacture of the light sources is minimal. Moreover, it becomes very clear that the current white LED is at a disadvantage by a factor of 3 as compared to the energy-saving lamp. Only if the future scenario for the white LED comes to pass, i.e., a luminous efficiency above roughly 65 lm/W is achieved, will its energy consumption become comparable to that of energy-saving lamps. This fact is also generally confirmed by the calculated emissions, which accumulate as relative quantities. Here again, emissions resulting from power consumption during the use phase dominate. These emission quantities are shown in brief in the table below and in detail in Table 58 in the appendix. The total emission quantities expressed in the bar chart also make clear that the white LED of today is better than the conventional incandescent lamp, but at a disadvantage by a factor of three when compared to the energy-saving lamp. Only if the future scenario for the white LED is realized will the emissions become comparable to those of energy-saving lamps. With respect to critical substances used in the light sources, mercury (found in the energy-saving lamp) and arsenic (used in the manufacturing process for the white LED) in particular must be considered. Technological improvements in recent years have made it possible to significantly reduce the proportion of mercury contained in fluorescent tubes. The energy-saving lamp Dulux EL by OSRAM contained as much as 10 mg of mercury in 1994; currently the compact fluorescent by OSRAM contains roughly 4 mg of mercury; this represents its emission potential in the case of release caused by improper disposal. If lamps break at the waste disposal site, mercury can escape directly into the environment. Mercury and numerous mercury compounds are volatile and highly poisonous, which is one of the main reasons for disposing of these lamps as hazardous waste or recycling them. Furthermore, mercury emissions in the production phase and the use phase must be considered in the overall assessment, as emissions also take place, for example, during power generation. The (potential) mercury emissions mostly arise during the use phase. Only in energy-saving lamps are considerable emission quantities added by the mercury contained in the product itself. Here it is clear that, with respect to total quantity, current energy-saving lamps are still better than current white LEDs. Only in the "white LED of tomorrow" scenario would a significant avoidance of mercury emissions be possible. No quantitative data for arsenic and its compounds were available for evaluation. The most common materials for the production of LED chips are aluminum indium gallium phosphide and aluminum gallium arsenide (AlInGaP and AlGaAs) for red and yellow LEDs, and indium gallium nitride (InGaN and GaN) for green and blue diodes (FGL 2003 and others). Since the production of white LEDs requires either blue, green, and red diodes or else blue-emitting diodes, these substances might also be present in white LEDs. Gallium arsenide, like gallium nitride and gallium phosphide, belongs to the group III/V semiconductors. With respect to handling and disposal, arsenic and its compounds are of the greatest significance.
The toxicity of arsenic and its compounds varies greatly, but the substances used in the semiconductor industry tend to represent a certain hazard potential. The carcinogenicity, mutagenicity, and reproductive toxicity of many arsenic compounds is indisputable (BLU 2002). As a semiconductor material, gallium arsenide is not poisonous. But in the presence of oxygen and water, an ultra-thin, very toxic layer may form on the surface of the material, which could cause environmental damage at a conventional landfill disposal site. Furthermore, an extremely poisonous gas is produced in the manufacture of gallium arsenide: arsine (arsenic hydride), chemical formula AsH3, which is used to guarantee the purity of the semiconductor material. Even minimal concentrations of a few arsine molecules per million gas particles in the air can cause severe health damage or even be lethal. The arsenic-hydrogen compound is highly toxic: it blocks nerve receptors and can impede the transport of oxygen in the body. Therefore, light-emitting diodes also require special disposal treatment and supervision due to the possibility that they may contain arsenic, gallium, and phosphorus compounds. According to the European directives, the listed products must be accepted in return by the manufacturers and properly disposed of. A recycling procedure for LEDs does not yet exist (Grezcmiel 2001). Researchers at the University of Marburg found an alternative to arsine years ago. The alternative substance is less volatile and has a much lower vapor pressure. It is considerably less harmful for the environment than arsine. The substance is of interest for the semiconductor industry, because the much lower hazard risk reduces the costs of a semiconductor production facility. Waste quantities are also much lower (Thimm 1999).

Life cycle impact assessment

To complete the impact assessment, it is necessary to have access to emissions data which can be allocated to specific environmental impacts. Since the data show a direct relationship between energy consumption and the relevant emissions, such an assessment would simply mirror the energy results in the categories greenhouse effect, acidification, and eutrophication and would not reveal any new knowledge; we therefore refrain from making this presentation.

Definition and background physics

Quantum physics describes particles by means of wave functions. This particularly applies to electrons, as the smallest stable particles with a rest mass. When the geometric dimensions of a material approach the extent of the electron's wave function, quantum mechanical effects can be anticipated, leading to interesting new properties. This holds true for nanoparticles below about 20 nm in size (typically 1-5 nm). They are at the borderline between the individual molecule and the extended (bulk) crystal. The movement of the electrons is constrained by the minuteness of the nanoparticles. Since the electrical and optical properties of solid bodies are determined by their electrons, a discretization of energetic states and a broadened band gap between the valence band (completely filled with electrons) and the conduction band (not completely filled, so electrons are mobile) can be observed with decreasing material size (Haase & Kömpe 2003). The resulting new properties can only be explained with the help of quantum physics. Such nanoparticles are therefore called quantum dots and are more or less zero-dimensional.
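The size dependence of the band gap described above can be made concrete with the simplest effective-mass ("particle in a sphere") estimate, often written as the Brus formula. The sketch below uses typical literature parameter values for CdSe as assumptions; it is only meant to show the trend (smaller dot, larger gap, shorter emission wavelength), not to reproduce measured data.

```python
import math

# Effective-mass ("Brus") estimate of the band gap of a spherical quantum dot.
# The CdSe parameter values below are typical literature figures, used here
# purely as assumptions for illustration.
HBAR = 1.0546e-34      # reduced Planck constant, J*s
M_E  = 9.109e-31       # electron rest mass, kg
Q    = 1.602e-19       # elementary charge, C
EPS0 = 8.854e-12       # vacuum permittivity, F/m

E_GAP_BULK_EV  = 1.74  # bulk CdSe band gap, eV
M_EFF_ELECTRON = 0.13  # effective electron mass, in units of M_E
M_EFF_HOLE     = 0.45  # effective hole mass, in units of M_E
EPS_R          = 10.6  # relative permittivity of CdSe

def gap_ev(radius_nm):
    """Band gap of a dot of given radius: bulk gap + confinement - Coulomb term."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * math.pi**2 / (2 * r**2)) * (
        1 / (M_EFF_ELECTRON * M_E) + 1 / (M_EFF_HOLE * M_E))   # joules
    coulomb = 1.8 * Q**2 / (4 * math.pi * EPS0 * EPS_R * r)    # electron-hole attraction
    return E_GAP_BULK_EV + (confinement - coulomb) / Q

for radius_nm in (1.5, 2.0, 3.0, 5.0):
    e = gap_ev(radius_nm)
    wavelength_nm = 1239.8 / e   # photon wavelength from E = hc / lambda
    print(f"radius {radius_nm:3.1f} nm: gap ≈ {e:4.2f} eV, emission ≈ {wavelength_nm:3.0f} nm")
```

In this idealized model a 1.5 nm-radius dot emits in the violet/ultraviolet, a 2 nm dot blue-green, a 3 nm dot orange, and a 5 nm dot red - the size-tunability of the emission color that the following paragraphs describe qualitatively.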
The electrons feel "squished" by the restricted particle boundaries, and thus the distance (band gap) between the ground state and the excited state increases with the decreasing size of the particles. If an electron drops from the excited state into the ground state, light is emitted; its wavelength and energy are dependent on the band gap. It is therefore possible to generate wavelengths ranging from the UV through the visible spectrum up to the infrared (350-2300 nm) using quantum dots of the same composition (Evident Technologies 2003). The optical properties are determined by the size of the quantum dots. This is what makes the quantum dot so interesting for lighting technology. Earlier it was only possible to generate light of different wavelengths by combining different substances. Quantum dots even permit using substances for light generation that were previously unsuitable (Bertram & Weller 2002). Additionally, more than 50% of the atoms lie at the surface due to the small size of the quantum dots, thus permitting a precise tuning of the light-emitting properties and suggesting that the emission of several colors from a single dot may be possible (SNL 2003). Quantum dot crystals used in combination with other crystals or phosphors can emit any desired color. Quantum dots can be produced technically by means of epitaxial growth, for example in vacuum precipitation processes, or by chemical methods from colloidal solutions (Rubahn 2002).

Applications of quantum dots

By irradiating a solid body with light of a suitable wavelength, electrons can be raised from the valence band to the conduction band. The energy input must be at least as great as the band gap. After a short time, the electron falls back to the valence band, with the surplus energy often being emitted in the form of light. The energy or the color of this emitted fluorescent light corresponds to the band gap energy. Several methods are being applied for using quantum dots to produce light.

Direct-charge injection: In direct-charge injection, electrons in the quantum dots are brought into the excited state by the transport of electrons or holes (electron vacancies) and the resulting collision processes. Research on this method is taking place at the Massachusetts Institute of Technology (MIT, Cambridge) and elsewhere; in 2003 they succeeded in making a major step forward in development. The problem in direct-charge injection is to stimulate as many of the electrons flowing through the carrier material as possible to electron excitation in the quantum dots and thus to light production. MIT researchers were able to augment the efficiency of direct-charge injection by a factor of 25 by placing CdSe dots (cadmium selenide) between two organic layers (Riebeek 2003). With this technology an efficiency increase of up to 100% (all injected charges generate light) may be possible, more than with any other light source. This would raise the energy efficiency of light sources to a new dimension. The colors being emitted are two to three times as pure as those of ordinary OLEDs (Riebeek 2003). After further development, this technology, an extension of OLED display technology, will revolutionize flat-screen technology. The durability and stability of quantum dots are positive factors in this respect.

Stimulation by UV light: Researchers at the Department of Energy's (DOE) Sandia National Laboratories, Albuquerque, have chosen a different approach. They developed the first white light-emitting device using quantum dots (SNL 2003).
In contrast to direct-charge injection, stimulation is effected by UV light (380-420 nm wavelength) rather than a current. The quantum dots are covered and encapsulated with suitable organic molecules such that they emit light in the visible spectrum when stimulated by ultraviolet light. The required UV light is produced by traditional LED technology. Therefore the efficiency cannot be higher than that of blue-light LEDs. However, it is higher than in traditional white-light LEDs, as their phosphor materials show poor absorption at the blue wavelength and thus low efficiency. Optical backscatter losses of traditional phosphor materials reduce the efficiency by 50%. This can be overcome with the new technology. In 2004, researchers tried to increase the concentration of quantum dots in the capsules to enhance light efficiency and to understand the behavior of quantum dots in higher concentrations. So far, researchers at MIT and Sandia have mainly used quantum dots of the semiconductor materials cadmium sulfide and cadmium selenide, which contain the poisonous heavy metal cadmium and are therefore not appropriate for mass production. Harmless alternatives, however, are being investigated, including nanocrystalline silicon and germanium with a surface of light-emitting manganese ions. The Nanoelectronics Research Centre (NRC) at the University of Glasgow has already produced Si-SiGe quantum-dot-based LEDs, which, according to the researchers, would be ideal as light sources in optical circuits, because they can be directly integrated into the Si chips (Tang et al. 2003). In addition to the direct generation of light, quantum dots are also used as markers in fluorescence microscopy of biological and medical preparations. They are tied to biomolecules (e.g., DNA fragments) and stimulated to emit light of various colors when irradiated with UV or blue light. Compared to current dyes, quantum dots are more stable and fade less. This allows more detailed investigation of a preparation, and the differentiation of details becomes easier. By using quantum dots, different parts of the preparation can be colored differently, thus allowing the recognition and determination of various parts of the tissue simultaneously (Bertram & Weller 2002). The use of quantum dots promises faster, more flexible, and less expensive tests and immediate biological analyses and patient diagnoses. However, they must be prepared with sufficient precision if they are to be used in medicine and research. Components containing quantum dots also promise improvements in the development of lasers, detectors, optoelectronic switches, and memory elements (e.g., optical high-density memories). Nano-electronics also has great hopes for the application of quantum dots. Due to their high quantization energy, nanoparticles are capable of storing discrete amounts of electrons (charge carriers) (Bertram & Weller 2002). Attempts to use quantum dots as memory devices are already being made by several firms. According to Bertram and Weller (2002), a combination of quantum dots as memory cells with a molecular switch leading to a braid of nanowires would be the first step on the way to the nanocomputer. Generally it can be stated that research work around the world on quantum dot technology has recently greatly intensified.

Summary and prospects for the implementation of quantum dots

Quantum dots are nanoscale particles at the boundary between molecule and solid body.
Their composition and tiny size are responsible for their extraordinary optical properties, which may be adjusted to meet specific requirements by changing the size and the chemical composition (of the surface). Under stimulation they can be induced to emit light. In contrast to other light sources, they generate an extremely pure and bright light and can at the same time emit an entire spectrum of colors upon stimulation at a single wavelength. It is anticipated that quantum dot technology will have a firm place in display technology in the long term, especially in combination with OLEDs; however, its development is not yet as far along as that of the OLED. The use of quantum dots will further enhance the efficiency of light sources, generate more brilliant colors, and reduce the number of necessary manufacturing steps in display manufacture. These are economic and environmentally relevant advantages in favor of quantum dots and the use of their optical properties in lighting technology. However, an industrial application is not to be expected in the near future.

Case-study Summary

The objective of this case study was an investigation of the eco-efficiency potential of new nanotechnological products in the lighting industry. For this purpose, white LEDs were compared to conventional light sources including the incandescent lamp and the energy-saving lamp (compact fluorescent), and the future potential of quantum dots was considered. The case study demonstrates very clearly that energy consumption during the use phase is by far the most significant factor in the environmental assessment of light sources for lighting purposes. The current white LED compares favorably with the classic incandescent lamp, but is at a disadvantage by a factor of three when compared to the energy-saving lamp. Only at luminous efficiency values above approx. 65 lm/W, values which were assumed for a future white LED, will it be able to compete with the energy-saving lamp with respect to environmental impact. This was confirmed by Arpad Bergh, president of the US Optoelectronics Industry Development Association (interview by Siemens NewsDesk 2003). He makes the assumption that the luminous efficiency of the white LED must be increased to 85-100 lm/W before it will become interesting for everyday illumination purposes. It can be assumed that the use of quantum dots in the future will make possible further increases in the energy efficiency of light sources. Quantum dot technology is expected to have a firm place in display technology over the long term, particularly in combination with OLEDs. Actual application of quantum dots in commercial products, however, is still some years away.

Case study 5: The risk potential of nano-scale structures

The following section presents and discusses the potential risks of nanotechnological applications. The discussion specifically focuses on nanoparticles, as nanoparticles are already being produced and applied on a large scale and thus form the basis for numerous other applications; furthermore, relatively broad discussions on the risk potential of nanoparticles are already taking place. The chapter consists of four parts. The first part looks at those risks that can already be expected on the basis of the properties of intentionally manufactured nano-scale products, agents, and materials; furthermore, a literature survey on the problematic effects of selected substances and structures for humans and the environment is presented.
The second part, taking as its example the nanoscalar titanium oxide found in suntan lotions, discusses the problem of equating substances of micro- and nano-size. The third part analyses the life cycle of nanoparticles. Finally, we compile the results of the study and illuminate possible consequences of these findings. In the course of the case study, a written survey was conducted and experts in the field of nanoparticle toxicology from Great Britain, Germany, and the United States were polled. The results are noted in the relevant sections of this chapter. A detailed presentation of the survey results can be found in the appendix.

Potential risks of nanotechnology

Development of the potential of nanotechnology is only at its starting point, with relatively simple applications and products already entering or at the threshold of the market, many of which involve the application of nanoparticles. One example is nanoscalar carbon black, added to automobile tires to improve abrasion resistance, which has already been in use for a very long time. In many cases, the examples discussed can also be viewed as a continuation of lines of decades-long development. Many of these examples still compete with older traditional solutions. In such cases, nano-scale innovations are thus only replacing or enhancing already existing conventional solutions. The novelty of these examples is for the most part only to be found in the nanoscalar size of the particles. Nonetheless, the question arises as to the extent to which the drive into the area of nanotechnology entails new types of effects. Do nanoscalar materials have new (or do they enhance already known) properties that could be detrimental to the environment and/or human health? Such new and/or enhanced effects are normally to be expected. After all, it is such altered or enhanced properties that make nanomaterials interesting for production purposes. However, questions concerning new (or enhanced) properties or effects and possible negative side effects and consequences have so far rarely been systematically investigated or formulated. The neglect of side effects is partly the result of institutional delay. Established procedures for regulating hazardous substances have not yet been adapted to address these new issues. The non-governmental organization ETC Group (ETC Group 2002), for example, notes that the addition of considerable quantities of titanium dioxide nanoparticles to high-sun-protection-factor suntan lotions required no new investigations in the USA. Conventional titanium dioxide had already been tested and approved; new applications of titanium dioxide nanoparticles were considered to be equivalent to pre-nanotechnology applications with respect to environmental and health impacts (see ETC Group 2002). Whether this equivalency is justified cannot be settled here, but in light of the altered properties found at the nano-level, it should at least be addressed. This is of particular importance, knowing as little as we do, so far, about the environmental and health aspects of nanoparticles. Those investigations that have been conducted on the effects of ultra-fine particles resulting from combustion processes are alarming. However, it should be noted that the common properties of nano-scale systems are predominantly due to the scale of the materials and products. This means, on the one hand, that in nanotechnology's role as a cross-sectional technology, problems may occur that are primarily "contingent on the technology."
On the other hand, other problems may arise from a specific combination of circumstances within the context of specific applications. The technologies as well as the application contexts may greatly diverge, and likewise the potential risks to environment and health that need to be addressed. For example, nanotechnological procedures and products and their impact in the field of biotechnology will differ fundamentally from those in areas such as electronics and thin-film technology. Moreover, different impacts on the various environmental compartments can also be expected.

Technology-specific characteristics of nanoparticles as possible hazard sources

An initial preliminary technical characterization of nanoparticles and nanostructured surfaces reveals the following lowest common denominators. It can be stated that:
- We are dealing with structures in the nano-dimensional realm
- Properties of the molecules are altered in this dimension (thus leading to desired effects and behavior)
- In particular, the ratio of surface to volume changes

Additionally, one must specifically mention that substances such as fullerenes and nanotubes do not generally exist in the natural environment. The impact of such new materials on environment and health is difficult to foresee. In addition to the properties of nanoscalar materials already mentioned, the following aspects are of particular importance:
- Mobility in environmental compartments and inside the organism
- Reactivity and reactive specificity
- Bioaccumulation
- Persistency
- Non-occurrence in nature
- Pulmonary intrusion
- Water solubility, liposolubility
- Carrier and piggyback effects
- Agglomeration, dispersion
- Other properties

As science learns more about conditions of implementation and application contexts, other aspects must also be considered, for example:
- Nature of application and quantities
- Contained / non-contained (open/closed) applications
- Aspects of the life cycle such as raw material expenditures, recyclability …

In an attempt to characterize nanomaterials in a standardized manner, we can make the following statements: Nanotechnological systems and products, particularly nanoparticles, for the most part currently represent the further development of known processes and products. The key difference is that it is increasingly possible to shape things at the atomic and molecular level. This means several scientific and technical disciplines can contribute to the production of nano-scale building blocks and components. Industrially produced nanomaterials cannot be viewed as a uniform substance or materials group, thus making the assessment of potential risks and hazards even more difficult. There is, in fact, an enormous range of structures and materials in the nanoscalar dimension. Their distinguishing characteristics:
- A large number of chemical elements and compounds are utilized
- They exist at the nano-scale level in various sizes and have various surface structures

As yet, no particularly worrisome group of substances to be singled out for investigation has been identified. Consequently, there can be no simple answer to the question as to whether nanoparticles are safe and what impact they will have on human life and the environment. The differences in size, shape, surface, chemical composition, and biopersistency require that each nanomaterial be individually investigated with respect to possible environmental and health hazards. Very similar compounds may have very different effects.
It is also known from toxicity research that certain substances may be comparatively harmless as long as they are applied to the skin or taken orally, but can be extremely toxic when inhaled (Colvin 2003a, Hoet 2004). The question as to whether and to what extent general statements on hazards caused by specific substance groups or structures can already be made will be dealt with later in this paper.

Nanoparticles

Nanoparticles can be classified as belonging to the transitional area between the realm of individual atoms and molecules and that of larger ensembles. Many of the particularly interesting new properties of nanoparticles are due to the altered ratio of surface to volume. Because of their incredibly small size, nanoparticles have an enormous surface relative to their volume. They are, therefore, very reactive, reacting comparatively quickly and aggressively, in part even spontaneously, with their environment. The reason for their reactivity is found in the surface of the nanoparticle. Due to the altered surface-to-volume ratio, the surface of the nanoparticle has many more free electrons, which, due to their position on the surface, are capable of reacting with the environment. Thus, nano-scale materials and substances are more reactive than structures having a smaller relative surface area. For this reason, existing, known substances that can now be produced in nano-sizes can suddenly have properties and effects very different from those of their larger counterparts. Some experimental results have already given rise to the question as to whether nanoparticles, because of their tiny size, pulmonary intrusiveness, and low biodegradability alone, may provoke certain toxic effects almost independently of their specific composition. Other findings have contradicted this thesis. Asked whether the effects of nanoparticles are rather the result of their size, or their chemical composition, or the nature of the surface, the experts surveyed in the course of this project responded: negative effects are due to size as well as chemical composition and surface structure. A weighting of the three aspects is currently not possible. Likewise, dosage, coating, shape, distribution in the tissue, distribution of charge or electrical potential between molecules, and degree of agglomeration may also have an influence. Effects are also determined by the condition of the human body: condition of the immune system, intake path (skin, respiratory, direct injection, etc.), and general state of health. One survey participant held the opinion that there might be differences between the effects of individual free nanoparticles and those bound in, for example, a composite material. Nanoparticles are also capable of adsorbing molecular contaminants. Adsorption can assist foreign matter in gaining access to parts of the body and the cell from which it otherwise would be blocked. Transport of such biological contamination could be a greater risk for biological systems than nanoparticles as such, scientists say (New Scientist 2003).

Behavior of nanoparticles released into the environment

The problem of assessing nanoparticle effects is further complicated by the fact that a possible negative impact also depends on the agglomeration behavior of nanoparticles. It is known from the behavior of ultra-fine particles in combustion processes that they already begin to attach to each other shortly after combustion to form larger groups of particles (agglomerates).
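The surface-to-volume argument above can be made concrete with simple sphere geometry: the specific surface area of an ideal spherical particle scales as one over its diameter, so shrinking a particle by a factor of ten multiplies the surface available per gram of material by ten. The sketch below uses a typical handbook density for titanium dioxide purely for illustration; agglomeration, discussed in the following paragraphs, reduces the surface that is actually accessible.

```python
# Specific surface area (surface per unit mass) of ideal spherical particles:
# SSA = 6 / (density * diameter). The density is an illustrative handbook
# value for titanium dioxide (~4.2 g/cm^3); the point is the 1/d scaling,
# not the exact numbers.
DENSITY_TIO2 = 4230.0  # kg/m^3 (illustrative)

def specific_surface_area_m2_per_g(diameter_nm: float,
                                   density_kg_m3: float = DENSITY_TIO2) -> float:
    """Surface area per gram of material for monodisperse spheres."""
    d_m = diameter_nm * 1e-9
    return 6.0 / (density_kg_m3 * d_m) / 1000.0  # convert m^2/kg to m^2/g

for d in (1000.0, 200.0, 20.0, 5.0):  # micron/pigment grade down to nano-scale
    print(f"{d:6.0f} nm particles: ~{specific_surface_area_m2_per_g(d):7.1f} m^2/g")
```

Under these assumptions, a 20 nm particle offers on the order of ten times more surface per gram than a 200 nm pigment particle of the same material, which is one way to picture why reactivity, adsorption capacity and possible biological activity can change so markedly at the nano-scale.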
Oberdörster points out that agglomerated particles are neither more nor less problematic than other particle forms, but that individual nanoparticles can cause severe problems. It is likewise known that nano-scale titanium dioxide agglomerates more easily in aqueous than in lipophilic solutions. So far, it has not been possible to make generalized assumptions about the behavior of nanoparticles in differing media. This statement is also confirmed by our expert survey. General statements of the kind "All fullerenes agglomerate" cannot be made. The statements given on common behaviors are far less general and were all qualified:
- Nanoparticles can definitely form aggregates in gaseous and liquid media. Aggregates spread less evenly than individual nanoparticles.
- Airborne particles agglomerate homogeneously within ten seconds if the concentration is relatively high (10⁷–10⁸ particles/cm³). Lower concentrations need a considerably longer period of time.
- Agglomeration behavior depends on the surface coating, chemical reactivity, and electrical potential.
- If larger agglomerates are formed through the accretion of further atoms and molecules, the total surface area and thus the reactivity of the agglomerate may again change.
- The larger the surface, the greater the possibility that substances will aggregate. Agglomeration is also determined by the surface reactivity.
- Aggregated particles may de-aggregate again in the human body.
- Toxicity of particles may deteriorate during their life span.

Ultra-fine particles and nanoparticles

The presently limited knowledge of expected and possible effects, and concerns about the negative effects of nanoparticles, is based partly on analogies to the knowledge on ultra-fine particles. Ultra-fine particles (PM0.1) are the same size as nanoparticles; however, they are not industrially manufactured, but are the result of combustion processes. Ultra-fine particles have an average diameter of less than 0.1 micrometer (µm). Epidemiological investigations of larger particles (diameter less than 10 micrometers) suggest with comparatively strong evidence that chronic exposure to particles in the air has a harmful effect on the cardiovascular system (Dockery et al. 1993, Kunzli et al. 2000). It is presently under discussion whether ultra-fine particles automatically cause greater damage than do larger particles because of their greater surface. Only a few studies on ultra-fine particles exist, but there are indications that such particles are more dangerous than larger ones (Howard 2004b). Indications of negative effects of ultra-fine particles were also found in in-vitro investigations. Diabaté (2002) writes: "By means of in-vitro tests it was shown that ultra-fine synthetic particles have a stronger cytotoxic effect than larger particles of the same chemical composition. Flue dust from a waste incineration plant containing nano-scale particles increased the release of pro-inflammatory cytokines in lipopolysaccharide-stimulated macrophages and inhibited the formation of NO radicals. In co-cultures of macrophages and pulmonary epithelial cells it was shown that they release more cytokines than the total of the respective individual cell cultures." These indications may suggest that nanoparticles, which are in part even smaller, could in similar concentrations a priori likewise be harmful to the cardiovascular system. Since ultra-fine particles are combustion byproducts, equating the effects of the two types of particles can only be done to a limited extent.
Nanoparticles may cause additional and/or other problems (Colvin 2003a, Kreyling et al. 2004). The above statements were confirmed by the results of the survey. According to the experts, substantial empirical data exist on the toxic impact of ultra-fine particles in the context of air pollution caused by particles of natural and combustion origin. The effects of ultra-fine particles on the pulmonary system, the cardiovascular system, and human blood are well documented. Claims about possibly harmful effects of inhaled particles from combustion processes, however, do not allow any direct conclusions about industrially manufactured nanoparticles. An example of a possible similarity between both classes of particles is the capability of ultra-fine particles to bypass the body's defense mechanisms, penetrate cells, navigate through the body, and cause inflammatory reactions. Findings derived from particle toxicology thus point to concerns which should not be disregarded. On the other hand, it is necessary to develop new methods of investigation and assessment that are suitable for industrially produced nanoparticles.

Impact on health and environment

The following summarizes conclusions that suggest some possible effects. Results from relevant studies and the more or less substantiated conjectures of researchers in this field are also presented. These statements are meant to be representative only; they could be supplemented by more examples. Even so, no general conclusions about nanotechnology can be drawn from them. What does become clear, however, is that this is a field with a high degree of uncertainty and a large number of unknowns, whose systematic assessment has not yet begun.

Effects on the human body

Nanoparticles are very mobile: in animal subjects they have entered the liver, the brain, and even the fetus. Howard (University of Liverpool, England) reports a possible migration of nanoparticles into the fetus. His analysis was not yet available at the time of publication, but he was confident that evidence to support his initial conclusions would soon be available (Howard 2004a). Howard assumes that there is a natural transport path for nanoparticles, which they use to enter the body and move within it. He assumes that nanoparticles pass through the caveolae (openings) of the cell membrane. The openings have a width between 40 and 200 nm and seem to play a role in the transport of macromolecules and proteins. Caveolae are big enough to transport nanoparticles (Howard 2004b). Nothing is known about possible effects in the body. One should add that the particles investigated by Howard were specifically prepared to penetrate such membranes. Oberdörster (University of Rochester, New York) traced the distribution of carbon particles of 35 nm diameter after applying them to the nasal mucous membrane of rats. One day later the nanoparticles were found in the brain. The ends of the olfactory nerves had absorbed the particles and transported them into the bulbus olfactorius. Concentrations increased until the experiment was stopped seven days later. It is unclear what effects the particles have on the brain (Oberdörster 2004). Oberdörster reports in the context of the survey that the same effect appears with other substances (30 nm virus, 50 nm colloidal gold, 30 nm Mn oxide) and also if applied to primates. Mn oxide caused reactions in rats that suggested inflammation at a concentration of 450 µg/m³ and twelve days' exposure. In a somewhat older study, Oberdörster et al.
reported carbon, between 20 and 29 nm in size, being found in the livers of rats after six hours of inhalation (Oberdörster et al. 2002). The particles entered the blood circulation via the lungs. Oberdörster et al. emphasize that translocation into the blood and the organs may vary with other nanoparticles. Nanoparticles may enter the body through the human lungs because they are too small to be filtered or to be destroyed by the protective mechanisms of the respiratory system (pulmonary mucus and macrophages). The pulmonary mucus that lines the lungs like a carpet is unable to filter fine and ultra-fine particles from the air or to transport them upwards. In the alveoli that lie beneath, where the gaseous interchange between lungs and blood occurs, macrophages absorb undesirable substances. Macrophages do not recognize structures/substances smaller than 70 nm as matter extraneous to the body. The problem of nanoparticle translocation may become aggravated if, as planned, nanoparticles are added to the human blood for medical treatment in the future, as Donaldson (Napier University, Edinburgh) commented in the survey. However, medical applications would be subjected to much stricter regulations and testing than the particles in industrial production that are being considered here. There is likewise the suspicion that nanotubes could be harmful to the human organism. Lam introduced 0.1 and 0.5 mg carbon nanotubes into mice via the windpipe. After seven days all test animals had developed granulomas and in some cases inflammations, which after 90 days had worsened (Lam 2003). Warheit obtained comparable results in similar investigations with rats, but the granulomas did not continue to worsen after 30 days (Warheit et al. 2004). The two studies differ not only in their laboratory animals, but also in the nanotubes used. Warheit used "laser-evaporated nanotubes," Lam used "HiPco and carbon-arc nanotubes." The differing degrees of intensity of the inflammations could be caused by the differing nanotubes, suggesting a varying impact of similar substances. Furthermore, Warheit reports that in his test 15% of the rats died from a mechanical blockage of the upper respiratory tract due to agglomerated nanotubes. This suggests the necessity of integrating the behavior of released particles into the assessment of possible negative effects. In the project survey, Kreyling (GSF - Forschungszentrum für Umwelt und Gesundheit) suggested that nanotubes might also be carcinogenic. It is known from the toxicology of mineral fibers that incorporation (especially inhalation) of biopersistent fibers of 20 µm length and above greatly increases the risk of cancer. Because of their length of more than 50 µm and their extraordinary stability, it cannot be ruled out that nanotubes, first of all, are biopersistent (they cannot be dissolved or broken down in concentrated acids), and secondly, increase the risk of cancer after inhalation. So far, the impact of nano-scale titanium dioxide when inhaled is also unclear. This is, first of all, a concern for the labor force producing nano-scale titanium dioxide. Colvin points out that titanium dioxide may cause inflammation in the lungs (Colvin 2003a). Rehn et al. administered a coated and an uncoated form of ultra-fine titanium dioxide (diameter < 100 nm) in single doses to rats in workplace-relevant dosages (0.15, 0.3, 0.6, and 1.2 mg). They could not, however, prove inflammation and assume that both forms of titanium dioxide are inactive in the lungs (Rehn et al. 2003).
On the basis of another test arrangement, Bermudez et al. found contradictory evidence. In an inhalation experiment, rats, mice and hamsters were exposed to airborne ultra-fine titanium dioxide particles (e.g. 10 mg/m³) for 13 weeks. In rats and mice they found indications of inflammation of the pulmonary tissue, which disappeared with decreasing exposure. In both species, the self-purification mechanism of the lungs appeared to be overloaded by such concentrations, which was visible in the delayed clearance of the particles. Rats showed more severe symptoms of inflammation than mice, and hamsters showed none, which was interpreted to be a sign of their higher capacity for pulmonary particle clearance (Bermudez et al. 2004). Oberdörster reported in the survey that nano-scale titanium dioxide causes pulmonary cancer in rats. High doses were applied in the test, and it is not clear whether human beings are exposed to such high doses. Pulmonary cancer in rats was the result of two years' enforced inhalation of nano-scale titanium dioxide in high concentrations (10 mg/m³ and 250 mg/m³). Whether these results can be transferred to humans is therefore questionable. In summary, one can state that there is the suspicion of a problematic impact of nanoparticles. At the same time, the experts responding to our survey confirmed that the investigations were carried out under laboratory conditions, administering substances at mostly very high dosages to laboratory animals. Exposure of humans to equally high concentrations is hardly imaginable except in the case of severe industrial or transport accidents. A current overview of this analysis and the unanswered questions is provided by Oberdörster et al. (2005) and IOM (2005).

Impact on the environment

In addition to the above-mentioned experiments, which attempt to investigate the effects of nanomaterials on humans by means of animal testing, there are a few investigations on the environmental toxicity of nanoparticles. Generally, it must be noted that the behavior of nano-scale materials and substances in the environment needs to be studied (Tomson et al. 2003). Research on the behavior of nanomaterials in different environmental compartments and conditions is only just beginning. Researchers at Rice University are investigating the effects of nanomaterials in the soil (biopersistency, dissolution, biodegradation, aggregation, adsorption into the environmental matrix) and aquatic environments (dissolution and suspension in aqueous media, sedimentation) as well as the effects of bio-accumulation (earthworms and aquatic animals) (Tomson et al. 2003). Preliminary results of these investigations show:
- Aggregation of nanoparticles may differ in various aqueous media.
- Adsorption of foreign matter on the surface of nanoparticles is very high.
- Adsorption/desorption of organic compounds in nanoparticles could be long-term.
- Nanomaterials in natural aquatic environments could considerably change the behavior and mobility of pollutants.

Brumfiel (2003) reports that researchers at Rice University have investigated the behavior of buckyballs. They dissolved buckyballs in water, which was poured onto the ground. The buckyballs behaved with very different consequences in the ground. When buckyballs agglomerated and formed particles of micrometer size, they were absorbed by the soil like any other organic substance. When they spread without agglomeration, it was observed that water formed a kind of protective shell around the buckyballs.
In this way the buckyballs may be able to pass through the soil without being absorbed and thus represent a hazard to groundwater. Moreover, there are indications that nanomaterials could enter the food chain. Brumfiel reports that nanoparticles might be ingested by earthworms (Brumfiel 2003). There are concerns in research on the possible consequences of nano-scale structures for the environment with respect to the ability of nanoparticles and microparticles to take up heavy metals, radionuclides, and other hazardous substances as foreign matter. It is notable that nanoparticles would react with heavy metals and be able to transport them. Wiesner (Rice University) undertakes research on the behavior of nanomaterials in water and has made the following statements: "Nanomaterials can move with great speeds through aquifers and soil … nanomaterials provide a large and active surface for sorbing smaller contaminants, such as cadmium and organics. Thus, like naturally occurring colloids they could provide an avenue for rapid and long-range transport of waste in underground water" (see Colvin 2002). One international research project concluded that nanoparticles can form in the intermixing of mining effluents and running fresh water. The nanoparticles pick up highly concentrated toxic heavy metals found in mining operations run-off. Here, too, the scientists assume that the particles might transport the poisonous metals further down the river due to chemical bonding (see Vista Verde 2002). At the same time that scientists succeeded in making nano-scale particles water-soluble in order to be able to administer a medication more or less directly to the desired location, they made it possible for these particles to move freely in the groundwater. Eva Oberdörster (Southern Methodist University, Dallas) reported in the context of the survey on her latest investigation into the impact of fullerenes on largemouth bass and water fleas. Although the fullerenes were toxic for water fleas, the fish did not die and showed no indication of disease. However, there were indications of cerebral damage and possible inflammation; furthermore, the fullerenes produced strong reactions in the fish, although the dosage applied in the investigation (1 ppm) would rarely appear in practice unless as the result of an accident, the researcher reported. High concentrations of nanoparticles with unknown consequences could occur when they are used in suntan lotions or cosmetic products that are washed off human bodies into water. At the same time, the behavior of the nanoparticles must be taken into consideration. Presumably, individual nanoparticles would show different characteristics than agglomerated ones when inhaled or released in aqueous form. Additionally, Oberdörster reported that not all artificially produced nanomaterials had the same effect. Her investigations revealed, for example, that fullerenes are toxic for E. coli bacteria. Coated single-walled nanotubes, however, are not toxic for E. coli. Similar to the situation in the health field, it must be stated that although systematization of the field has progressed, little detailed knowledge is available with regard to actual effects in the environment. The IOM (2005:40) states that "the assessment of the environmental impact of nanomaterials, as for any other materials, will have to focus on residence time in the environment, toxicity (acute and long-term), bioaccumulation potential and persistency in living systems. Little is known about [any] of these." Both Oberdörster et al.
(2005) and US-EPA (2005) systematize the contexts and underscore that nothing is so far known about the potential impact of nanoparticles in the environment. In a summary of research results compiled under US Nanotechnology Initiative funding, Dunphy Guzman et al. (2006) conclude with regard to exposure, environmental fate, and transport that: "it is unknown if engineered nanoparticles, especially those coated to reduce aggregation, will behave similarly (compared to incidental nanoparticle aerosols), the exposure assessment studies … have focused on worker exposure, but exposure of the ecosystem and the public to nanoparticles, from either manufacturing or the use and disposal of nanoparticle-based products, needs to be quantified … Transport studies to date have been limited to aerosol transport in the atmosphere and transport studies in porous media. However, each ecosystem component must be considered: soil, sediment, oceans, surface waters, groundwater, and the atmosphere." Regarding toxicity, a number of studies are listed that suggest negative effects. "Many believe that surface coatings have the potential to greatly alter the toxicity, solubility, reactivity, bioavailability, and the catalytic properties of underlying nanoparticles, thus minimizing their health and environmental impacts. Unfortunately, these coatings may not persist indefinitely after release of the underlying nanoparticle into the environment" (p.1405). Studies of CdSe quantum dots with a surface coating do indicate that the coating prevents the cytotoxicity of the quantum dots; however, the stability of the coatings is unclear. Finally, the authors also point to the potential global impact, particularly the atmospheric impact, referring to studies which show that nanoparticles are key components in many biogeochemical processes. Summarizing the current status of knowledge, it must be stated that with respect to the possible effects of nanoparticles even the most recent surveys tend to look like catalogues of questions and that little verified knowledge exists. This also holds true for problem studies such as those undertaken by E. Oberdörster (IOM points out that the effects found by E. Oberdörster are not necessarily consequences of nanoparticles) as well as for "all-clear" messages, which likewise are frequently based on single studies.

Nanoscalar titanium dioxide in suntan lotions

The use of nano-scale titanium dioxide in suntan lotions is addressed in the following case study. Titanium dioxide is increasingly used as an effective protective substance against ultraviolet radiation in several cosmetic products. Nano-scale titanium dioxide was chosen for this case study essentially for two reasons. First, because it involves a non-contained application on a large scale. Secondly, because this application of the substance highlights the problem of assessing the effects of new nanomaterials. The inclusion of nano-scale titanium dioxide in suntan lotions is scientifically disputed. Titanium dioxide in its micro-size has a long tradition as a component in suntan lotions. Only more recently have manufacturers been producing and adding nano-scale titanium dioxide to suntan lotions. As we know now, a nano-scale substance can have completely different characteristics (chemical, optical, etc.) from its micro-scale counterpart. Nano-scale titanium dioxide is transparent, while micro-scale titanium dioxide is white and is therefore used as a pigment, for example, in wall paint.
Nevertheless, nano-scale titanium dioxide was classified as equal to micro-scale titanium dioxide regarding effects by the respective regulatory authorities in the USA (Food and Drug Administration 1999) and the EU (SCCNFP 2003). The classification and regulation of nano-scale titanium dioxide and its impact on the human organism can serve as an example for the discussion processes concerned with nanoparticles and their hazards and risks. Nano-scale titanium dioxide was also chosen for this case study because it is at the center of the arguments brought forward by ETC Group against nanotechnology. The NGO is demanding a worldwide moratorium on nanotechnology research. Using the example of nano-scale titanium dioxide, the NGO underscores the repeatedly expressed considerations of scientists with respect to possible hazards and the lack of knowledge about the environmental behavior of nanoparticles. At the same time, the NGO criticizes the regulatory equation of macro-scale and nano-scale titanium dioxide, which does not sufficiently consider their differing behavioral characteristics. Titanium dioxide belongs to the group of physical light-protective filters (also called inorganic or mineral filters). These are metal oxides, which filter the UV light predominantly by reflection and diffusion. Major representatives are TiO2 and ZnO. They are biologically and chemically stable and very rarely cause irritations and phototoxic or photo-allergic reactions. With their high capacity to absorb ultraviolet radiation, nano-scale titanium dioxide and zinc oxide are ideal for use in cosmetics.

Hazards of nano-scale titanium dioxide

The cosmetics industry defends its use of nano-scale titanium dioxide in suntan lotions with its own investigations demonstrating the innocuousness of titanium dioxide. The safety of nano-scale titanium dioxide was also confirmed by the Scientific Committee on Cosmetic Products and Non-food Products Intended for Consumers (SCCNFP), an EU body, in October 2000. The committee decided that, along with its conventional counterpart, nano-scale titanium dioxide could also be included in the list of approved UV filters: "The SCCNFP is of the opinion that titanium dioxide is safe for use in cosmetic products at a maximum concentration of 25% in order to protect the skin from certain harmful effects of UV radiation. This opinion concerns crystalline titanium dioxide, whether or not subjected to various treatments (coating, doping, etc.), irrespective of particle size, provided only that such treatments do not compromise the safety of the product" (Europäische Kommission 2004). The SCCNFP decision was, however, based solely on industry studies not accessible to the public. In the application of suntan lotions containing nano-scale titanium dioxide, the question arises as to whether it can reach living cells, enter them, and either damage them or migrate even further into the body. Moreover, it is not known whether titanium dioxide particles can transport other undesirable substances into the body. Colvin assumes that the risk from titanium dioxide is much lower than that of sunbathing itself (Colvin 2003a). Scientific investigations on coated nano-scale titanium dioxide report that, when applied to the skin, the majority of the titanium dioxide remains localized in the upper layers of the skin (stratum corneum). The concentration of titanium dioxide in the upper layers of the stratum corneum decreases drastically over time, as the skin regenerates (exfoliation).
In the deeper layers of the skin, however, the concentration of titanium dioxide particles decreases much more slowly over time (Rickmeyer 2002). Pflücker reports that coated nano-scale titanium dioxide can be found in hair follicles, which are found in the deeper layers of the skin. However, there are no indications of their movement from the follicles into living cells (Pflücker et al. 2001). This confirms the results of earlier investigations (Lademann et al. 1999). Rickmeyer reports that only very low concentrations of coated titanium dioxide were found in the tissue surrounding the follicles. Contact with living cells "exists only in an evanescently minimum scope" (Rickmeyer 2002). Bennat/Müller-Groymann report that nano-scale titanium dioxide in fatty solutions can penetrate the skin better than in aqueous solutions and do not rule out the possibility of penetration of particles into deeper layers of the skin (Bennat/Müller-Groymann 2000). However, Tinkle et al. report that beryllium particles of one-half to one micrometer in size rubbed into the skin have reached the epidermis and, in rarer cases, the deeper dermis, thus bringing them into contact with living cells (Tinkle et al. 2003). Admittedly, this result does not say anything about the impact of titanium dioxide on the skin, but it makes clear that, in addition to particle size, substance class must also be considered in judging possible effects. The safety of nano-scale titanium dioxide is also being called into question elsewhere. A report by the Royal Academy of Engineering, for example, suggests that with the reduction of particle size the number of free radicals on the surface of titanium dioxide increases, possibly leading to skin damage (Royal Academy of Engineering 2003). This view is backed by investigations by Dunford et al. and Uchino et al., who report that when titanium dioxide is exposed to sunlight, in vitro as well as with human cells, the cell's DNA is damaged by photocatalysis (Dunford et al. 1997, Uchino et al. 2002, Colvin 2003a). Rickmeyer assumes that these photocatalytic effects may be reduced drastically by coating, but cannot be entirely avoided (Rickmeyer 2002). Moreover, it is known that nano-scale titanium dioxide causes cell death in the fetuses of Syrian hamsters (Rahman et al. 2002). Butz (2006) reports on the results of the Nanoderm research project promoted by the EU, which is specifically investigating the absorption of titanium dioxide by the skin. His conclusion is that, normally, penetration is restricted to the stratum corneum disjunctum; occasionally Ti is found in the s.c. compactum; Ti is rarely detected in the stratum granulosum; Ti is detected in the stratum spinosum in most cases; and Ti spots in the dermis are identified as contaminations. Moreover, Butz states: "To our surprise, the particle shape had no influence, it appears that TiO2 particles are mechanically rubbed into the horny layer / hair follicles / furrows without diffusive transport and thus far there is very limited exposure to vital tissue, but there are open questions: particles in the 1-2 nm range might behave like small macromolecules and penetrate; transglandular pathway clearance from follicles (+ glands?)" All in all, these research results suggest that nano-structured titanium dioxide as used in the cosmetics industry does not penetrate into the deeper layers when used on healthy skin and that possible negative effects on the cells do not represent a problem of the first priority.
Regarding potential effects within the individual environmental compartments, however, little is known.

Expert survey

In the course of the expert survey, a number of new, as yet unconsidered, aspects emerged with respect to this case study. Asked about possible negative effects of nano-scale titanium dioxide, the experts gave the following answers: The majority was of the opinion that there are no indications of the penetration of nano-scale titanium dioxide through all layers of the skin. Only one expert was generally more skeptical about the problem, but without providing any new information. They all agreed on the following problem: No investigations have looked at application to wounds or to persons with dermal allergies. And there are no investigations on application to inflamed and (sun)burnt skin. In both cases there is the possibility of nano-scale titanium dioxide coming into contact with blood and/or living cells. Studies on the persistence of particles in the body, particularly in babies and children, are required. Therefore, precaution is still necessary. One expert noted: Nano-scale titanium dioxide can easily penetrate the skin when coated with electrophilic, oily molecules.

Summary

The case study on the use of nano-scale titanium dioxide in suntan lotions illustrates the problems associated with future applications of nanoparticles. A number of experts question the full safety of the application. At the same time, a number of important investigations on the effects of nano-scale titanium dioxide in the human body are lacking, although the main use, i.e. application to healthy skin, appears to be unproblematic according to our latest knowledge. However, the fact that this is a non-contained, environmentally open application could cause a number of problems. We do not know how the structures/substances behave in nature after being washed off or leaving the skin in the course of natural skin exfoliation. In light of already occurring problems with suntan lotion residues on lake and sea shores, these questions should not be neglected. Another problem is that nano-scale titanium dioxide is already being produced in large quantities, yet no knowledge is available on potential hazards in the production plants. The question of the effect of coatings on nano-scale titanium dioxide also remains unanswered. Assuming they really prevent catalytic effects, it is not known how long the coatings remain stable and how they react in surroundings other than on the human skin. The EU's equal treatment of micro-scale and nano-scale titanium dioxide is at the very least problematic in its lack of consideration of possible consequential environmental effects. Furthermore, publication of the related industry studies would increase trust in the EU decision. At the same time, it is necessary to pursue unanswered questions regarding the safety of particles. With this in mind, the initial avoidance of non-contained applications, a reasonable measure based on the precautionary principle, should only be discontinued when, as in the case of nano titanium dioxide, essential, fundamental knowledge is available.

Life cycle analysis of nanomaterials

As already remarked, possible harm by nanoparticles may occur especially in environmentally open applications. A look at the life cycle of nanomaterials reveals several points where such a release may take place:
1. Nanomaterials production processes vary greatly.
Industrially manufactured nanomaterials are not generally produced by combustion processes (with the exception of CVD/DVD and flame-assisted deposition), but mainly in liquid or closed gas-phase reactors. Therefore, direct exposure to nanomaterials ought to be limited. Exposure in the workplace and research laboratory and possible exposure of man and environment due to accidents are all problematic.
2. Products: Most nanoparticles are contained or immobilized in products. Examples include nanotubes in video monitors or particles in paint coatings. The probability of release is presumably low.
3. Disposal and recycling: Behavior during disposal and recycling has not yet been extensively investigated, but release of individual nanoparticles is presumed to be limited. However, one must recognize that only very preliminary knowledge exists concerning this problem.

In the course of the survey, most experts pointed out possible risks that occur during the life cycle. Exposure at the workplace, accidents during manufacture and transport, and environmental pollution via industrial wastes are mentioned as possible sources. In the case of possible contaminations due to accidents, it is entirely unclear, so far, what impact the individual nanomaterials have on affected ecosystems and humans. The problems will become worse if, in the future, additional consumer goods are produced without thorough testing, or even with no testing at all, as is the case in the cosmetics industry. Closely connected to the question of the effects of accidents, waste, and disposal is the issue of biopersistency. It is necessary for the effectiveness of some nano-applications, such as those that must release their effects in a specific part of the body. However, sufficient knowledge for an assessment of long-term effects is not yet available. One expert remarked in this context that a great deal of future production of nanomaterials will take place under cleanroom conditions, which will reduce the exposure of production staff. Presently, it is absolutely unclear whether and to what extent workers in plants or researchers in laboratories are exposed to particles. Industrial accidents still represent a potential risk. Likewise, one expert pointed out that due to the currently still small volume of nanoparticles being produced, the overall risk of exposures in the workplace is rather small. In the field of medical applications, the experts assumed that existing test procedures would reveal potential risks before mass production takes place. On the basis of the differing approval procedures in specific fields, the question arose as to whether the same would be guaranteed, for example, in the field of consumer goods. In summary, one can conclude that:
- most manufacturing takes place in contained systems
- particles are often securely integrated in products, and thus risks are limited
- the problem of disposal and recycling is unsolved
- intended and unintended release is problematic

These statements probably must be amended for new and modified production systems.

Discussion

The behavior of nanoparticles differs from that of structures at the macro-level; this also holds true for identical substances. Some of the studies discussed above document surprising behavior. As shown, there are numerous reasons for concern as well as clear indications of toxic effects of nanomaterials, particularly nanoparticles, on the environment and human health.
Nanotubes and buckyballs could be of special significance in this respect. The knowledge we have is all preliminary, in part even contradictory, and deals with only a fraction of all possible effects. At the same time, the transferability of the knowledge gained so far appears to be low. The survey of the experts revealed that current knowledge is much too incomplete to form the basis for a comprehensive risk assessment or to implement evidence-based risk management. For this reason, measures based on the precautionary principle are required. Asked whether general statements can be made about the toxicity of nanoparticles, the experts' responses were mainly negative. It is too early to classify nanomaterials into groups and categories that could be characterized with respect to their negative effects. Classification is only possible when the specific nanoparticle properties responsible for toxic effects are identified. The statement that smaller particles are, as a rule, more toxic than larger ones because their surface-to-volume ratio is greater was made repeatedly. Admittedly, the chemical composition and physical structure must always be considered. One expert even limited this rule to those particles having low solubility. One expert proposed a classification according to:
- Biopersistency
- Biologically accessible surface area
- Aggregation behavior in various environments

This type of classification seems to encompass the greatest known potential hazards and thus, with the inclusion of mobility in environmental compartments and in the human body, could become a guideline for future research on risks. In addition to the uncertainty about possible negative effects, it can also be stated that the occurrence of negative effects due to manufactured particles presently appears to be relatively rare. As a consequence, continued research on the behavior and potential effects of nanoparticles in the case of a release is required. Since general statements are not currently possible, there is likewise an urgent demand for classification. Several experts remarked in the course of the survey that the possibilities offered by nano-scale structures and systems are enormous. To suspend development efforts already now on the basis of a profound suspicion of possible toxicity would be far out of proportion in relation to the expected gains from nanotechnology. Instead, attention should be given now to establishing scientifically based criteria for the evaluation of risk and for risk management. Since research on the risks of nanoparticles is still in its infancy, only very general advice for handling nanomaterials can be given:
- Differences in size, surface structure, and chemical composition require that the possible effects on human health and the environment be investigated for each individual nanomaterial. There is a great need for classification.
- Non-contained applications of some nanoparticles and nanostructures should be avoided until potential environmental effects have been addressed. This holds true for the majority of nanoparticles at present.
- The behavior of nanomaterials released during disposal should also be investigated.
- Biodegradability and increased tendencies for agglomeration could be ways to minimize or avoid environmental hazards (ecotoxicity).

In this context, the development of a strategic research and development concept is important. It should provide the necessary means and also maintain a balance between technological development and understanding of effects.
This would mean a better intercoupling of both research directions. One expert proposed a comprehensive concept for the investigation and assessment of possible hazards: "In my opinion, the most urgent need for action with respect to regulation is in the development of a new, strategic concept for the toxicological evaluation not only of the sites and organs where absorption occurs, but also the secondary target organs. This concept should be based on modern, genomic, proteomic and toxiconomic investigations with high-throughput technology. This concept would not only allow comprehensive evaluation and specific regulation of a new product, but would provide the manufacturer, in connection with a suitably equipped toxicological laboratory, with a prompt risk assessment. Such a strategic concept should be developed in interaction between research, manufacturers, and the regulatory authorities. In this way Germany could extend its leading position in the field of sustainable development of new nanoproducts." Basically, it seems important, on the one hand, to understand the basics of the effect of nanoparticles; on the other hand, more research must be carried out in those areas where production quantities are foreseeable. References from the experts as to contamination resulting from accidents in production and transport underscore the urgency. A further research focus can be found in answers given to the question as to whether certain nanoparticles or structures are intended for use in open applications; if so, their potential impact must be more closely investigated beforehand.
Efficacy, safety and clinical outcome associated with statin use for primary prevention in Korean patients with low-density lipoprotein cholesterol level ≥ 190 mg/dL: A retrospective cohort study

Background: Although the current guideline recommends the use of high-intensity statin to reduce the low-density lipoprotein cholesterol (LDL-C) level by 50% in patients with a baseline value of ≥ 190 mg/dL, direct application of this recommendation to Asian populations is still questionable. This study was performed to investigate the statin response of LDL-C in Korean patients with LDL-C ≥ 190 mg/dL.

Methods: A total of 1,075 Korean patients (age 60.7 ± 12.2 years, women 68%) with baseline LDL-C ≥ 190 mg/dL and without cardiovascular disease were retrospectively reviewed. Lipid profiles at 6 months, side effects and clinical outcomes during the follow-up period after statin treatment were assessed according to statin intensity.

Results: Most of the patients (76.3%) were treated with moderate-intensity statins, 11.4% with high-intensity statins, and 12.3% with a statin + ezetimibe. The reductions in LDL-C percentage at 6 months were 48.0%, 56.0% and 53.3% in patients treated with moderate-intensity statins, high-intensity statins and statin + ezetimibe, respectively (P < 0.001). Side effects requiring dose reduction, medication switch or drug interruption were observed in 1.3%, 4.9% and 2.3% of patients treated with moderate-intensity statin, high-intensity statin and statin + ezetimibe, respectively (P = 0.024). During the median follow-up duration of 815 days (interquartile range, 408–1,361 days), the incidences of cardiovascular events were not different among the 3 groups (log-rank P = 0.823).

Conclusions: Compared to high-intensity statin, moderate-intensity statin was effective enough to reach the LDL-C target goal, without an increase in cardiovascular risk and with fewer side effects, in Korean patients with LDL-C ≥ 190 mg/dL.

Introduction

Low-density lipoprotein cholesterol (LDL-C) is a well-established risk factor for cardiovascular disease (CVD), and the clinical benefit of lowering LDL-C with statins has been confirmed [1][2][3][4]. The current guideline recommends the use of high-intensity statin to reduce the LDL-C level by 50% in patients with a baseline value of ≥ 190 mg/dL [5]; however, direct application of this recommendation to Asian populations is still questionable. Several Asian studies have shown that less aggressive LDL-C-lowering therapy can be sufficient to produce a substantial and beneficial risk reduction for the prevention of CVD [6][7][8][9][10][11][12][13]. More specifically, it has been reported that LDL-C reduction with rosuvastatin 10 mg in Chinese patients was significantly greater than in Western people (-52.8% versus -40.9% to -49.7%) [14]. The same study also showed that to achieve an LDL-C reduction > 40%, Westerners required atorvastatin 80 mg or rosuvastatin 20 mg, while Asians only needed atorvastatin 19 mg and rosuvastatin 14 mg [14]. However, the range of LDL-C levels of the subjects in those studies showing statin effects in Asians was vast. Data indicating whether Asian patients with LDL-C ≥ 190 mg/dL also achieve benefits similar to those of Western patients at lower statin doses have scarcely been reported [7]. This study was performed to investigate the statin response of LDL-C in Korean patients with baseline LDL-C ≥ 190 mg/dL. Clinical outcome according to statin type was also assessed.
Study population

This single-center retrospective cohort study was performed at Boramae Medical Center, a general hospital in Seoul, South Korea. Between January 2010 and March 2016, a total of 4,016 subjects having LDL-C ≥ 190 mg/dL at baseline, without cholesterol-lowering medications and without documented cardiovascular disease, were identified using a computerized record inquiry system. In order to rule out the possibility of a secondary cause of hypercholesterolemia, 905 subjects with triglyceride ≥ 200 mg/dL were excluded. In addition, 1,396 subjects were also excluded due to thyroid or chronic liver disease, the use of lipid-lowering medications other than statin (including fibrate and omega-3 fatty acid), or follow-up loss at 6 months after statin treatment. Therefore, the remaining 1,075 subjects were included in this study, and their medical records were retrospectively reviewed. All study subjects received a statin or a statin combined with ezetimibe. The study protocol was reviewed by the Institutional Review Board (IRB) of Boramae Medical Center (Seoul, Korea) (IRB number, 26-2016-181). Informed consent was waived by the IRB due to the retrospective design and the routine nature of the information collected. Diabetes mellitus was defined as a fasting plasma glucose level > 126 mg/dL. A subject who smoked regularly within the previous 12 months was considered a current smoker. Ischemic heart disease included acute myocardial infarction and coronary revascularization. Stroke was defined as a neurologic deficit with evidence of infarction or hemorrhage by brain imaging. Laboratory data obtained from venous blood after overnight fasting were also available, such as white blood cell count, hemoglobin, glucose, glycated hemoglobin (HbA1c), total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), aspartate aminotransferase (AST), alanine aminotransferase (ALT), creatinine and C-reactive protein. Estimated glomerular filtration rate (GFR) was calculated using the Modification of Diet in Renal Disease (MDRD) Study equation.

Assessment of side effects

Side effects of statin medications were reviewed. Side effects were attributed to statins when they met the following conditions: 1) occurrence during statin use, 2) improvement when statins were reduced or stopped, and 3) no other clear causes to explain the side effects.

Clinical outcomes

The primary study end-point was a composite of all-cause death, non-fatal myocardial infarction and non-fatal ischemic stroke. Myocardial infarction was defined as an elevation in cardiac troponin values with at least 1 value above the 99th percentile upper reference limit, together with symptoms of myocardial ischemia, new ischemic electrocardiography changes, development of pathologic Q waves, or imaging evidence of myocardial infarction. Ischemic stroke was defined based on a focal neurologic deficit lasting more than 24 hours with evidence of infarction of brain tissue on imaging. Clinical follow-up was done every 3 to 6 months and whenever a clinical event took place. If clinical follow-up could not be done for more than 6 months, we obtained data on death from the Ministry of the Interior and Safety of Korea. All events were identified by a physician in charge and confirmed by the principal investigator.

Statistical analysis

All numeric data are expressed as mean ± standard deviation for continuous variables and as percentages for discrete variables. Study subjects were divided into 3 groups according to the statin therapy (moderate-intensity statin vs.
high-intensity statin vs. statin plus ezetimibe), and the differences in clinical characteristics and laboratory findings among the 3 groups were compared using analysis of variance (ANOVA) for continuous variables and the chi-square test for discrete variables. Comparisons of variables between pre- and post-treatment values were made using paired t tests. Survival and event-free survival rates among the groups were compared using Kaplan-Meier survival analysis with the log-rank test. A P value of < 0.05 indicated statistical significance. All statistical tests were performed using SPSS for Windows version 22 (IBM Co., Armonk, NY, USA). (These comparisons and the MDRD calculation are sketched in code after the Results below.)

Results

Names and doses of the initial statins used in the study patients are listed in Table 1. Among the total study patients (n = 1,075), the mean age was 60.7 ± 12.2 years, 68% were female and baseline LDL-C was 207 ± 21 mg/dL. The majority of patients (76.3%) were taking moderate-intensity statins; only 150 (11.4%) and 145 (12.3%) patients were taking high-intensity statins and statin + ezetimibe, respectively. Table 2 shows the baseline clinical characteristics of the study patients according to statin type. Most of the clinical parameters, such as age, sex, body mass index and cardiovascular risk factors, were not different among the 3 groups, except for a higher prevalence of hypertension in the statin + ezetimibe group. In the cholesterol profiles, TC (275 ± 24 mg/dL in the moderate-intensity statin group; 291 ± 37 mg/dL in the high-intensity statin group; 284 ± 40 mg/dL in the statin + ezetimibe group; P < 0.001) and LDL-C (205 ± 16 mg/dL in the moderate-intensity statin group; 217 ± 29 mg/dL in the high-intensity statin group; 211 ± 33 mg/dL in the statin + ezetimibe group; P < 0.001) were higher in the high-intensity and statin + ezetimibe groups compared to the moderate-intensity statin group. HDL-C, TG, HDL-C/LDL-C and TG/HDL-C were not different among the 3 groups. Glucose profiles as well as hepatic and renal functions were also similar among the 3 groups. Changes in cholesterol profiles after 6 months of statin treatment are demonstrated in Table 3 and Fig 1. The levels of total cholesterol and LDL-C were significantly decreased after statin treatment in all 3 groups (205 ± 16 mg/dL to 106 ± 31 mg/dL [P < 0.001] in the moderate-intensity statin group; 217 ± 29 mg/dL to 95 ± 31 mg/dL [P < 0.001] in the high-intensity statin group; 211 ± 33 mg/dL to 99 ± 34 mg/dL [P < 0.001] in the statin + ezetimibe group). The percentage reductions in LDL-C at 6 months were 48.0%, 56.0% and 53.3% in patients treated with moderate-intensity statins, high-intensity statins and statin + ezetimibe, respectively (P < 0.001 for each). The HDL-C level did not change in any of the 3 groups. The triglyceride level was significantly decreased in the moderate- and high-intensity statin groups, but not in the statin + ezetimibe group. The total cholesterol/HDL-C […] Side effects requiring dose reduction, medication switch or drug interruption were observed in 1.3%, 4.9% and 2.3% of patients in the moderate-intensity statin, high-intensity statin and statin + ezetimibe groups, respectively (Table 4). The incidence of the relatively well-known statin side effects, muscle symptoms and liver enzyme elevation, was also very low and was not different among the 3 groups (0.4% in the moderate-intensity statin group, 0.8% in the high-intensity statin group and 1.5% in the statin + ezetimibe group; P = 0.238). The incidence of all side effects associated with statin use was higher in the high-intensity statin group (P = 0.024) (Fig 3).
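The group comparisons and derived quantities described in the Methods lend themselves to a short computational illustration. The following sketch is not from the paper: the per-patient arrays and counts are hypothetical, scipy is assumed to be available, and the MDRD coefficients shown are one common (non-IDMS) 4-variable form, since the paper does not state which version was used.

    import numpy as np
    from scipy import stats

    # Hypothetical 6-month LDL-C values (mg/dL) for the three groups.
    ldl_moderate = np.array([110.0, 98.0, 121.0, 104.0, 99.0])
    ldl_high = np.array([92.0, 88.0, 101.0, 95.0, 90.0])
    ldl_combo = np.array([97.0, 93.0, 108.0, 99.0, 96.0])

    # One-way ANOVA for a continuous variable across the 3 groups.
    f_stat, p_anova = stats.f_oneway(ldl_moderate, ldl_high, ldl_combo)

    # Chi-square test for a discrete variable: illustrative counts of
    # patients with and without side effects in each group.
    table = np.array([[11, 809], [6, 117], [3, 129]])
    chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

    # Paired t test for pre- versus post-treatment LDL-C in one group,
    # and the percentage reduction reported in the Results.
    pre = np.array([205.0, 210.0, 198.0, 215.0, 202.0])
    post = np.array([106.0, 111.0, 99.0, 118.0, 101.0])
    t_stat, p_paired = stats.ttest_rel(pre, post)
    pct_reduction = 100.0 * (pre - post) / pre

    # One common 4-variable MDRD estimate of GFR (mL/min/1.73 m^2);
    # these particular coefficients are an assumption, not from the paper.
    def mdrd_gfr(creatinine_mg_dl, age_years, female):
        gfr = 186.0 * creatinine_mg_dl ** -1.154 * age_years ** -0.203
        return gfr * 0.742 if female else gfr

The event-free survival comparison reported in the next paragraph rests on Kaplan-Meier estimates; a hand-rolled estimator on hypothetical follow-up data (times in days, event = 1 for a composite event, 0 for censoring) looks like this:

    def kaplan_meier(times, events):
        # Return (event time, survival estimate after that time) pairs.
        times = np.asarray(times, dtype=float)
        events = np.asarray(events, dtype=int)
        surv, out = 1.0, []
        for t in np.unique(times[events == 1]):
            d = np.sum((times == t) & (events == 1))  # events at time t
            n = np.sum(times >= t)                    # still at risk at t
            surv *= 1.0 - d / n
            out.append((t, surv))
        return out

    km = kaplan_meier([120, 300, 408, 815, 900, 1200, 1361],
                      [0, 1, 0, 0, 1, 0, 0])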
During the median clinical follow-up duration of 815 days (interquartile range, 408-1,361 days), there were 41 cases of composite clinical events (3.8%): 30 cases of death, 2 cases of non-fatal myocardial infarction, and 9 cases of non-fatal ischemic stroke. The event-free survival rate (log-rank P = 0.823) and survival rate (log-rank P = 0.884) were similar among the 3 groups (Fig 4).

Discussion

In this study, we compared the efficacy and safety among 3 groups with different LDL-C-lowering therapies (moderate-intensity statin, high-intensity statin and statin + ezetimibe) in Korean patients with LDL-C ≥ 190 mg/dL for primary prevention. The main findings of this study are as follows: 1) although the LDL-C reduction was greatest with high-intensity statins (56.0% reduction of LDL-C at 6 months), moderate-intensity statins also showed a strong LDL-C-lowering effect comparable to that of high-intensity statins (48.0% reduction of LDL-C at 6 months), 2) the overall incidence of side effects related to statin use was higher in the high-intensity group and 3) clinical outcomes were similar among the 3 groups.

Comparisons with previous studies

It has been well established that patients with LDL-C ≥ 190 mg/dL have a significantly increased incidence of cardiovascular events and mortality [15][16][17][18], and that LDL-C-lowering therapy with statins can effectively reduce their cardiovascular risk [15][16][17][18]. Based on this evidence, the recent guideline recommends the use of a high-intensity statin for an LDL-C reduction of ≥ 50% from baseline in this high-risk population [5]. However, the existing studies referenced in the guidelines are all based on Western data [15][16][17][18]. Therefore, it may be unreasonable to apply these guidelines directly to Asians, whose genetic background, body size and living environment are different from those of Europeans or North Americans. Indeed, there have been reports indicating that the efficacy and safety of statins in Asians differ from those in Europeans or North Americans. In those studies, it has generally been suggested that Asians can obtain sufficient effects with lower-strength statins than Europeans or North Americans [6][7][8][9][10][11][12][13]. In a study involving 6 centers in Asia, 10 mg of atorvastatin used for 8 weeks lowered LDL-C by 43% compared to baseline [13]. In a large study of 51,321 patients with hypercholesterolemia conducted in Japan, 5 mg of simvastatin lowered LDL-C by 29% during the 6-year follow-up period, which is as effective as the 20-mg dose used in Western countries [12]. The Management of Elevated Cholesterol in the Primary Prevention of Adult Japanese (MEGA) study, the first trial to evaluate the clinical outcomes of statin therapy in Asians, showed that low-intensity pravastatin 10-20 mg reduced cardiovascular risk by 33% [11], which was similar to the results of Western studies using higher statin doses [19,20]. With this background, the question naturally arises as to whether high-intensity statins should be used for Asians with LDL-C ≥ 190 mg/dL. Patients with LDL-C ≥ 190 mg/dL are at increased risk of atherosclerotic cardiovascular events [5]. The ACC/AHA guidelines defined patients with LDL-C ≥ 190 mg/dL as one of four groups requiring high-intensity statin treatment [5]. Because this has great clinical significance, it is essential to address this issue for Asians with LDL-C ≥ 190 mg/dL.
However, to the best of our knowledge, only 1 Asian study has addressed this issue. Kim et al. conducted a study retrospectively reviewing the medical records of 179 Korean patients with LDL-C ≥ 190 mg/dL, and showed that LDL-C reduction rates did not differ between the moderate- and high-intensity statin groups [7]. In contrast to that study, our results showed that high-intensity statins were more potent in reducing LDL-C than moderate-intensity statins. This may be because statistical significance could be achieved by enrolling a much larger number of patients. An additional strength of our study is that it analyzed the incidence of clinical events. On the other hand, there are some Asian studies indicating that greater benefits can be obtained by increasing the statin dosage [21,22]. Lipid profiles significantly improved when 10 mg of rosuvastatin was increased to 20 mg in patients with heterozygous familial hypercholesterolemia in Japan [22]. In a study of patients with documented cardiovascular disease in 5 Asian countries, the LDL-C target was reached in 72% of patients when simvastatin 20 mg was used, whereas it was reached in 94% when an 80-mg dose was used [21]. These studies imply that high-intensity statin use may be necessary in some patients with severe hypercholesterolemia. A well-designed randomized controlled study is needed to draw clearer conclusions by comparing the efficacy and side effects of high-intensity and moderate-intensity statins in patients with LDL-C ≥ 190 mg/dL.

Clinical implications

Since the beneficial effects of statins outweigh their side effects, high-intensity statins are actively recommended for high-risk patients [5,23]. However, in the case of similar efficacy, the lower-intensity statin should be selected to reduce the incidence of side effects and increase patient adherence. Asian studies have reported that moderate-intensity statins in Asians are as effective as the higher-intensity regimens used in Europeans or North Americans, even in high-risk patients [6][7][8][9][10][11][12][13]. Our study of Koreans also showed that although moderate-intensity statin therapy had a less potent LDL-C-lowering effect than high-intensity statin therapy, the difference in the degree of LDL-C lowering was not clinically significant (56% versus 48%). Even with moderate-intensity statin therapy, it is possible to lower LDL-C by nearly the 50% from baseline suggested in the guidelines. Also, compared to high-intensity statins, moderate-intensity statins were as effective in preventing cardiovascular events and had fewer side effects in high-risk patients with LDL-C ≥ 190 mg/dL. Even in severe hypercholesterolemia (LDL-C ≥ 190 mg/dL), Asians do not necessarily need to start with a high-intensity statin. Instead, it can be recommended to initially use a moderate-intensity statin and then select the appropriate statin intensity according to the degree of LDL-C reduction and side effects in this high-risk population. However, there is one caveat in interpreting our findings, particularly concerning the occurrence of clinical events. In our study, the incidence of composite events, including death, myocardial infarction, and stroke, was very low at 3.8% during the median follow-up period of 2.23 years. In Korean patients with LDL-C ≥ 190 mg/dL, moderate-intensity statin therapy may be effective and safe, at least over this relatively short clinical follow-up period.
A prospective study with long-term follow-up is needed to secure the justification for moderate-intensity statin use in patients with LDL-C ≥ 190 mg/dL. Our study also showed that the statin + ezetimibe and moderate-intensity statin groups produced effects similar to those of the high-intensity statin group, with fewer side effects. Although there was no statistical difference, the statin + ezetimibe group showed a greater LDL-C reduction than the moderate-intensity statin group. Based on these results, it is also worth considering a combination of a statin and ezetimibe instead of a high-intensity statin in patients with LDL-C ≥ 190 mg/dL.

Study limitations

The current study has several limitations. First, although efforts were made to obtain accurate information on the occurrence of adverse events or clinical events related to statin use, some events may have been missed in this retrospective study. Second, despite the relatively large number of enrolled patients, there is a possibility that the number of clinical events was too small to reach a statistical difference among the 3 groups. There is a risk of type 2 error because this is a retrospective study and a power analysis was not performed. The reasons for the low incidence of cardiovascular events in our study may be that high-risk patients with a history of previous cardiovascular disease were excluded, and that LDL-C was lowered by ≥ 48% by statins in all study patients. Third, there may have been changes in statin types or doses during clinical follow-up in some patients, but this was not reflected in the study. Fourth, given that a higher baseline LDL-C level is associated with a better statin response [24], the LDL-C-lowering effect might have been enhanced in the high-intensity statin group due to its higher baseline LDL-C levels compared to the moderate-intensity statin group. Fifth, because of the unavoidable limitations of a retrospective study, it was not possible to standardize treatment strategies among doctors. Lastly, since the patients analyzed in this study are all Koreans, it is difficult to apply our results to other ethnic groups. Despite these shortcomings, our study has several strengths. Our study sample size is relatively large. We also provide Asian data on patients with LDL-C ≥ 190 mg/dL, which have scarcely been reported. Another advantage of our study is that it presents outcome data on cardiovascular events as well as side effects associated with statin use.

Conclusions

Compared to high-intensity statin therapy, moderate-intensity statin therapy was less potent in reducing LDL-C, but was effective in reaching the LDL-C target in Korean patients with LDL-C ≥ 190 mg/dL. Moderate-intensity statin therapy was as effective as high-intensity statin therapy in preventing cardiovascular events. Also, moderate-intensity statin therapy had fewer side effects than high-intensity statin therapy. Therefore, starting with a moderate-intensity statin can be an option in this high-risk population. Further prospective studies with large sample sizes are needed to confirm our findings.
Identification of three regions essential for interaction between a σ-like factor and core RNA polymerase

The cyclic interactions that occur between the subunits of the yeast mitochondrial RNA polymerase can serve as a simple model for the more complex enzymes in prokaryotes and the eukaryotic nucleus. We have used two-hybrid and fusion protein constructs to analyze the requirements for interaction between the single-subunit core polymerase (Rpo41p) and the σ-like promoter specificity factor (Mtf1p). We were unable to define any protein truncations that retained the ability to interact, indicating that multiple regions encompassing the entire length of the proteins are involved in interactions. We found that 9 of 15 nonfunctional (petite) point mutations in Mtf1p isolated in a plasmid shuffle strategy had lost the ability to interact. Some of the noninteracting mutations are temperature-sensitive petite (ts petite); this phenotype correlates with a precipitous drop in mitochondrial transcript abundance when cells are shifted to the nonpermissive temperature. One temperature-sensitive mutant demonstrated a striking pH dependence for core binding in vitro, consistent with the physical properties of the amino acid substitution. The noninteracting mutations fall into three widely spaced clusters of amino acids. Two of the clusters are in regions with amino acid sequence similarity to conserved regions 2 and 3 of σ factors and related proteins; these regions have been implicated in core binding by both prokaryotic and eukaryotic σ-like factors. By modeling the location of the mutations using the partial structure of Escherichia coli σ70, we find that two of the clusters are potentially juxtaposed in the three-dimensional structure. Our results demonstrate that interactions between σ-like specificity factors and core RNA polymerases require multiple regions from both components of the holoenzymes.

[Displaced Methods fragment: strain yJH71 was derived from yJH64 transformed with pJH119 bearing a functional MTF1 gene; the transformants were sporulated to generate mtf1::hisG (pJH119) haploid progeny, whose genotype was tested on both YPG and 5-FOA media. All strain constructions were confirmed using a 1-kb EcoRI fragment of pJH117 (pJJ517) as an MTF1 probe.]

The complex RNA polymerases of eukaryotes and prokaryotes require auxiliary factors specific for the initiation phase of transcription. These factors associate with the core polymerases to form holoenzymes competent for promoter recognition, selective DNA binding, and opening of the double-stranded DNA at the start site of transcription. Shortly after initiation, the factors are released as the RNA polymerase makes the transition into its elongating form. The factors associated with eukaryotic nuclear RNA polymerase II (Pol II) before initiation, and released shortly after transcription is initiated, include TFIIB, TFIID, TFIIE, TFIIF, and TFIIH (Conaway and Conaway 1993; Zawel and Reinberg 1995). In bacterial cells, members of the σ family of factors carry out most of the functions of the many eukaryotic nuclear factors (for review, see Helmann 1994). The interaction of a σ factor with the core polymerase alters the conformation of both the polymerase and the factor to expose amino acids critical for promoter recognition, and to allow the loading of the polymerase onto the DNA (Dombroski et al. 1993; Polyakov et al. 1995).
Although much has been learned about how σ factors and other sequence-specific DNA-binding factors interact with DNA, relatively little is known about the interactions of these factors with the subunits of core polymerases and how these interactions influence conformational changes in both components of the holoenzyme. In this work, we have used the yeast mitochondrial RNA polymerase (mtRNAP) as a simple model to examine the interaction between a promoter specificity factor and a core polymerase. The core mtRNAP is a single polypeptide encoded by the nuclear gene RPO41. Rpo41p shares nine regions of amino acid sequence similarity with the single-subunit RNA polymerases of the T7 and T3 bacteriophage (Masters et al. 1987; Jang and Jaehning 1994). These regions include the amino acids known to be required for structure and function of the catalytic domain of the phage polymerases (Delarue et al. 1990; Sousa et al. 1993). However, unlike the phage polymerases, which function independently, Rpo41p requires a specificity factor encoded by the nuclear MTF1 gene. Mtf1p has only limited amino acid similarity to σ factors (Jang and Jaehning 1991), but functions in many ways like σ in that it is required for promoter recognition and initiation of transcription. Although Mtf1p does not bind on its own to its simple nine-base promoter (consensus ATATAAGTA; Osinga et al. 1982), it interacts with the core polymerase in solution to create a holoenzyme capable of promoter recognition (Mangus et al. 1994). Mtf1p is released after a short transcript has been synthesized and is then available for interaction with a new core subunit, also reminiscent of σ factors (Mangus et al. 1994). The mitochondrial RNA polymerase therefore undergoes the same cycle of interactions as do the more complicated prokaryotic and eukaryotic nuclear enzymes, but requires only two polypeptides rather than the four to more than 30 used in the more complex systems. In this work we have investigated the requirements for interaction between Mtf1p and Rpo41p. A comprehensive deletion analysis of both proteins failed to identify a simple interaction region, indicating that several regions of both proteins may be involved in the protein-protein interactions. We have demonstrated that this is the case for Mtf1p by identifying three regions required for interactions with Rpo41p. Two of these regions are similar to regions of σ-like factors shown to have a role in interactions with the prokaryotic and eukaryotic core polymerases. This analysis establishes the yeast mtRNAP as a useful model for the analysis of protein-protein interactions during the transcription cycle, and demonstrates that core polymerase/accessory factor interactions involve complex binding surfaces on both components of the holoenzyme.

The interaction between Mtf1p and Rpo41p can be detected in two-hybrid constructs

The analysis of interactions between σ-like factors and core polymerases is complicated by the number of polypeptides in most core RNA polymerases. With the two-component mitochondrial RNA polymerase it is possible to use the powerful technique of two-hybrid analysis (Bartel et al. 1993; Phizicky and Fields 1995) to determine the regions and/or specific amino acids of the Rpo41p core and the Mtf1p σ-like specificity factor necessary for protein-protein interactions.
Although two-hybrid analyses of proteins that normally interact in the cytoplasm have been successful in the nuclear environment required for the technique, it was critical that we establish that the relatively weak interactions between the mtRNAP subunits (Mangus et al. 1994) could be detected in two-hybrid constructs. Initially, the full-length polymerase subunits were tested for interaction in this assay. Mtf1p was fused to the LexA DNA-binding domain, whereas Rpo41p was fused to the Vp16 transcriptional activator, in vectors described by Hollenberg et al. (1995; Materials and Methods). β-Galactosidase activity was undetectable with the fusion constructs on their own (except for a weak positive signal with the LexA:Mtf1p construct), or in combination with the unfused vector constructs. High levels of β-galactosidase were only observed when the LexA:Mtf1p construct was present together with the VP16:Rpo41p construct (see below).

Deletions of Rpo41p and Mtf1p fail to identify a discrete interaction region

Two-hybrid analyses have been used to delineate small regions of proteins necessary and sufficient for protein-protein interactions (Bartel et al. 1993; Phizicky and Fields 1995). Therefore, we asked if deletion constructs could be used to define the region of interaction between the two proteins. As shown in Figure 1, several deletion constructs were made for both Mtf1p (Fig. 1A) and Rpo41p (Fig. 1B) in two-hybrid vector backbones. The six Mtf1p deletions included three carboxy-terminal deletions, one amino-terminal deletion, and two internal deletions. The two internal deletions removed amino acids conserved with σ regions 2.1 and 2.2. All constructs were expressed in yeast cells and produced proteins of the predicted sizes (Fig. 2A; data not shown). Furthermore, the deletion constructs were expressed at levels equivalent to the full-length product (Fig. 2A). However, none of the Mtf1p deletion constructs showed any interaction with the core polymerase in this assay, as measured by β-galactosidase activity (data not shown). These results suggest that it is not a single domain but multiple regions of the folded structure of Mtf1p whose contact with Rpo41p is required to produce a stable interaction. A similar analysis was conducted with Rpo41p. Rpo41p is larger than its phage relatives and contains an amino-terminal extension of ∼300 amino acids that has no similarity to the phage polymerases (Masters et al. 1987). Because the phage polymerases do not require a specificity factor, we have speculated that the amino-terminal extension may have a role in the interaction with the specificity factor (Jaehning 1993). To determine if this sequence alone is capable of interacting with Mtf1p, we fused the amino-terminal sequence (amino acids 1-311) of Rpo41p to the Vp16 activation region. The amino-terminal fragment showed no interaction with Mtf1p by this analysis (data not shown). However, this fusion construct was not detected in vivo by Western blot analysis. Four additional Rpo41p constructs (Fig. 1B) were tested for interaction with Mtf1p. Each of the constructs accumulated in yeast cells (Fig. 2B), but none was capable of interaction with Mtf1p (data not shown). Two constructs (Rpo41p1-597 and Rpo41p1-918) contain the amino-terminal extension, but are unable to mediate interaction with Mtf1p. The carboxy-terminal polymerase region, Rpo41p311-1351, is also incapable of interacting with Mtf1p.
Mtf1p may therefore interact with amino acids in both the amino-terminal extension and the polymerase domain.

Isolation of point mutations in Mtf1p that confer a petite phenotype

Because deletions in Mtf1p and Rpo41p failed to identify discrete regions sufficient for interaction, we turned to the analysis of point mutations to identify specific residues required for the interaction. Although MTF1 is not essential for yeast cell growth, it is required for the stable replication and transmission of the mitochondrial genome (Lisowsky and Michaelis 1988). Yeast strains lacking functional Mtf1p rapidly lose their full-length mitochondrial DNA, producing a petite phenotype that cannot be complemented by the subsequent expression of MTF1 (Jang and Jaehning 1991). Isolation of nonfunctional MTF1 mutations therefore requires the use of the plasmid shuffle technique, in which a plasmid bearing a wild-type copy of the gene is used to cover a chromosomal mutation (Sikorski and Boeke 1991). A yeast strain was created for this procedure as outlined in Figure 3. For mutagenesis, MTF1 was amplified using low-fidelity PCR conditions to create random mutations throughout the gene (Materials and Methods). The mutant fragments were transformed into the recipient strain, which was then grown on 5-fluoro-orotic acid (5-FOA) to select for cells that had lost the wild-type MTF1 plasmid with the URA3 marker. The mtf1 mutants were subsequently tested for mitochondrial function by growth on a nonfermentable carbon source. Strains that were unable to grow on glycerol plates (petite), or unable to grow on glycerol plates at 37°C (temperature-sensitive petite, ts petite), were collected for further analysis. From 10,000 primary transformants, 22 petite and ts petite mutants were identified and sequenced to determine the mutation responsible for the petite phenotype. Six of the mutants contained nonsense codons, whereas the other 17 mutants contained either single (6), double (9), or triple (2) missense mutations (Table 1). Missense mutations were identified between codons 40 and 247. To increase the pool of single missense mutations, we reconstructed many of the mutations as single point mutations using site-directed mutagenesis. Oligonucleotides based on the sequenced mutations were designed for mutagenesis of the wild-type sequence (Materials and Methods). The isolated single mutations were retested using the plasmid shuffle to determine the effect of the single mutations in vivo. Nine of the 10 reconstructed mutations were either petite or ts petite when present as single mutations (Table 1). Although not all of the identified mutations were reconstructed, we did isolate a substantial pool of defective Mtf1p mutations. In total, 15 petite or ts petite point mutations were identified.

[Figure 1 legend: Rpo41p deletions were fused to the Vp16 activator region and tested for interaction with a full-length LexA:Mtf1p construct. Strains, vectors, and protocols are described in Materials and Methods. Numbers refer to amino acid positions retained in the constructs. Shaded boxes in the full-length maps of MTF1 and RPO41 refer to regions conserved between σ factors and Mtf1p (Jang and Jaehning 1991) or between Rpo41p and T7 RNA polymerase (Masters et al. 1987; Jaehning 1993).]

Transcription of all classes of mitochondrial genes is reduced in ts mtf1 mutants

It was possible that some of the defects in Mtf1p function could be attributable to a failure to recognize one or more of the promoters in the mitochondrial genome.
We therefore analyzed the patterns of mitochondrial transcription at the permissive (30°C) and nonpermissive (36°C) temperatures for several ts petite mtf1 mutant strains (Fig. 4; data not shown). Strains of yeast bearing the wild-type or temperature-sensitive (L53H and I154T) alleles of MTF1 were grown in minimal glucose media at the permissive temperature and then shifted to the nonpermissive temperature. Samples of yeast were collected at intervals and total RNA was isolated, fractionated, and hybridized with oligonucleotide probes specific for the mitochondrial 14S rRNA, the mRNAs for the COB and OLI1 genes, and the tRNAs for Glu, Ser1, Trp, Thr(ACN), f-Met, and Phe, as well as the cytoplasmic 18S rRNA as a normalization control. These genes represent 7 of the 12 known mitochondrial transcription units and include several of the different promoter sequence variations (Dieckmann and Staples 1994). Some of the hybridization analyses are shown in Figure 4A and the quantitation of these data are presented in Figure 4B. Three conclusions can be drawn from the results in Figure 4. First, mitochondrial gene expression is reduced significantly in the ts mutants relative to wild-type expression, even at the permissive temperature (Fig. 4A). Because the strains bearing the mutant alleles of Mtf1p grow almost as well as wild type on glycerol medium, mitochondrial function can apparently be maintained with only 10%-20% of the wild-type level of mitochondrial transcripts. Second, there is a rapid decrease in mitochondrial RNA abundance in both the wild-type and mutant strains after the shift to the nonpermissive temperature, but the mutants do not recover and their RNA levels decrease to almost undetectable levels (Fig. 4B). Finally, the abundance of all mitochondrial transcripts that we analyzed was reduced to a similar extent (Fig. 4A,B; data not shown). These mtf1 mutations therefore affect the transcription of all mitochondrial genes and are not specifically defective for recognition of a subset of mitochondrial promoters. Under the conditions used in these assays, petites do not start to accumulate in the mutant strains until about 24 hr after the shift to the nonpermissive temperature (data not shown). During this period the mutants maintained levels of mitochondrial DNA similar to those found in the wild-type strain (data not shown). Therefore, the petite phenotype caused by the mtf1 mutations is caused by a defect in transcription of mitochondrial genes, not by a rapid loss of mitochondrial DNA. This defect could be caused by failure to interact with the core polymerase, inability to recognize or bind to the mitochondrial promoter, or loss of function in other steps in initiation. In the following sections we have tested these mutations for the first of these possible defects.

Some of the mtf1 mutations no longer interact with Rpo41p in the two-hybrid assay

We introduced each of the point mutations into the two-hybrid LexA fusion vector (Materials and Methods). Unlike the deletion mutations described above, most (10 of 15) of the mutants retained the ability to interact with Rpo41p, as determined by qualitative production of β-galactosidase in filter assays (data not shown). However, five of the 15 mutations (L53H, V135A, I154T, S218R, and D225G; see Fig. 5) were negative for β-galactosidase production, indicating that they had lost the ability to interact with Rpo41p.
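The quantitative β-galactosidase measurements that follow are expressed in Miller units (Miller 1972). As a minimal sketch (the readings below are illustrative, not from the paper), the standard conversion from permeabilized-cell assay readings is:

    def miller_units(a420, a550, od600, minutes, volume_ml):
        # 1000 * (A420 - 1.75 * A550), normalized to reaction time,
        # culture volume assayed, and culture density (OD600).
        return 1000.0 * (a420 - 1.75 * a550) / (minutes * volume_ml * od600)

    activity = miller_units(a420=0.85, a550=0.05, od600=0.60,
                            minutes=20.0, volume_ml=0.1)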
The inability of these mutants to interact was not attributable to lowered or abolished expression of the fusion constructs, as all of the noninteracting mutants are expressed at levels equivalent to wild type (Fig. 2C). Because many of the petite MTF1 mutants were isolated as temperature-sensitive mutations, we repeated the filter assays after growing cells at 37°C (data not shown). Two additional mutations (Y42C and K157E) fail to interact at the nonpermissive temperature. When the qualitative assays were followed by quantitative measurement of β-galactosidase activity, we found that many of the petite mutations associate with Rpo41p at levels that are indistinguishable from the wild-type protein (Fig. 5). However, consistent with the filter assays, the noninteracting mutants L53H, V135A, I154T, S218R, and D225G produce no β-galactosidase activity above background levels. We also identified two mutants that have intermediate levels of interaction. Mutant I221K generates <50% and mutant H44P produces <25% of the β-galactosidase activity of the wild-type construct. The two temperature-sensitive mutations Y42C and K157E produce β-galactosidase at levels that are equivalent to the wild-type protein at their permissive temperatures (30°C and 23°C, respectively), but are reduced severely in interaction at their nonpermissive temperatures (36°C and 30°C, respectively). In all, nine mtf1 mutations are fully, partially, or conditionally defective for interaction with the core polymerase.

[Figure 3 legend: Plasmid shuffle isolation of PCR-generated mtf1 mutants. Plasmids, strains, and protocols are described in Materials and Methods. A haploid strain (yJH71) containing a single functional copy of MTF1 on a plasmid was transformed with plasmids bearing mutant mtf1 genes on a LEU2-selectable plasmid. The transformants were plated onto glucose medium containing 5-FOA to select for the loss of the URA3 plasmid bearing the wild-type copy of MTF1. The mutant alleles were then tested for Mtf1p function by plating the cells on a nonfermentable carbon source (YPG). Petite and ts petite mutants were identified.]

[Figure 4 legend: Wild-type (wt) and ts petite strains bearing mtf1 mutations L53H and I154T were grown in minimal glucose medium at 30°C and shifted to 37°C. Total RNA was harvested from cells immediately before and at the indicated time intervals after the shift to the nonpermissive temperature. (A) Blots of the isolated RNAs were hybridized with labeled 14S rRNA, COB, or tRNA Thr(ACN) oligonucleotide probes to analyze the abundance of mitochondrial transcripts. (B) Hybridization to the blots was quantitated by PhosphorImager analysis to directly compare levels of transcripts. The blots were hybridized to an 18S rRNA oligonucleotide probe to normalize for loading and transfer.]

[Table 1 legend: Strains, plasmids, and protocols are described in Materials and Methods. Twenty-two petite or ts petite mtf1 mutants were identified by a plasmid shuffle technique (Materials and Methods). Mutants containing nonsense, missense, and multiple missense mutations were identified. Many of the multiple missense mutations were isolated so that single mutations (shown in bold) could be analyzed separately.]

Confirmation of the two-hybrid analyses with biochemical association assays

To confirm the two-hybrid data we used a biochemical affinity assay (Mangus et al.
1994) to demonstrate that Rpo41p and Mtf1p interact in solution in the absence of DNA. For this assay the mtf1 mutants were fused to glutathione S-transferase (GST), and mutant and wild-type Mtf1p-GST fusions were bound to a glutathione agarose column (Materials and Methods). A whole-cell yeast extract containing Rpo41p was loaded onto the fusion protein columns, which were subsequently washed and step-eluted to release any Rpo41p that had bound to the Mtf1p fusions on the columns (Materials and Methods). Rpo41p that was bound to and eluted from the columns was detected with an anti-Rpo41p antibody (Mangus et al. 1994). As shown in Figure 6, the column chromatography results confirm the two-hybrid observations. As controls, the wild-type fusion and two interacting mutants, S81N and E114V (Fig. 6; data not shown), were tested for interaction. Rpo41p bound efficiently to all three fusion constructs. Consistent with the two-hybrid results, Rpo41p showed little ability to bind to the noninteracting mutants V135A and S218R (Fig. 6). The noninteracting mutants L53H, I154T, and D225G also show no Rpo41p binding in this assay (data not shown). Mutants H44P (Fig. 6) and I221K (data not shown) show intermediate levels of interaction with Rpo41p in this assay, consistent with the intermediate interaction observed in the two-hybrid assay (note the β-galactosidase units in parentheses in Fig. 6). Additionally, mutant K157E, which is temperature-sensitive for interaction in the two-hybrid system, exhibited intermediate binding in this assay (data not shown).

[Figure 5 legend: Interaction between the Mtf1p mutants and Rpo41p was determined for cells grown at 30°C (shaded bar). Activity for ts mutants Y42C and K157E is also shown for cells grown at 37°C (hatched bar) or 23°C (crosshatched bar), respectively, to demonstrate the temperature-sensitive nature of the interaction. β-Galactosidase activity is expressed in Miller units (Miller 1972).]

In contrast to the results with the other conditional and partially defective mutants, the Y42C fusion protein did not interact at all with Rpo41p under the conditions of this assay (Fig. 7, top). The buffers used in all of the binding assays shown in Figure 6 contain Tris and are buffered to a pH of 7.9 at room temperature, which is pH 8.3 at the 4°C temperature used for the binding assay. Although the pH of the yeast mitochondrion has not been reported, it seemed likely that the in vivo pH experienced by the Y42C mutation in the nucleus for the two-hybrid assay, or in the mitochondrion for the complementation assay, could be significantly lower than pH 8.3. In addition, the change from tyrosine to cysteine alters the residue from hydrophobic to potentially charged; the pK of cysteine is in the range of 8-9 depending on context (Walsh 1979). We therefore repeated the binding assay at pH 7.3 to determine if partial deprotonation of the cysteine residue had reduced binding to Rpo41p at the higher pH. As shown in Figure 7 (bottom), at pH 7.3 the Y42C mutant interacts with Rpo41p at levels equivalent to wild type. Furthermore, when this mutant is isolated at the nonbinding pH (pH 8.3) it still retains the ability to interact with Rpo41p when it is shifted to pH 7.3 (data not shown). Binding of the wild-type and other mutant Mtf1p constructs is indistinguishable at the two different pHs (data not shown). The simplest explanation of these data is that the partial negative charge on the deprotonated cysteine residue directly reduces the affinity of the mutant Mtf1p for Rpo41p.
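The size of this pH effect can be checked with the Henderson-Hasselbalch relation. A minimal back-of-the-envelope sketch, assuming a cysteine thiol pKa of 8.5 (within the context-dependent 8-9 range cited above); these numbers are a check on the argument, not data from the paper:

    def fraction_thiolate(ph, pka=8.5):
        # Henderson-Hasselbalch: fraction of side chains deprotonated
        # (negatively charged thiolate) at a given pH.
        return 1.0 / (1.0 + 10.0 ** (pka - ph))

    low, high = fraction_thiolate(7.3), fraction_thiolate(8.3)
    # roughly 0.06 at pH 7.3 versus roughly 0.39 at pH 8.3

Under this assumption, only about 6% of the mutant cysteines carry a negative charge at pH 7.3, versus roughly 40% at pH 8.3, consistent with binding being restored at the lower pH.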
The reversible nature of the interaction further supports the conclusion that these point mutants are not unfolded or unstable proteins; they have simply lost the ability to bind to Rpo41p at levels detectable in our assays or sufficient for transcription in vivo.

Discussion

In this work, we have further confirmed the functional homology of the mitochondrial RNA polymerase specificity factor, Mtf1p, with the large family of prokaryotic and eukaryotic nuclear σ-like factors. Despite the limited amino acid sequence similarity between many of the members of this family, there are many shared functions (for review, see Helmann and Chamberlin 1988; Helmann 1994), including roles in suppression of nonspecific interactions with DNA, selective promoter sequence recognition, promoter melting, and, as described in this work, interactions with the core RNA polymerase. The mutational strategy (PCR mutagenesis and plasmid shuffle screening) used in our studies resulted in the identification of a large number of petite and ts petite mutations useful for delineating the functional regions of Mtf1p. As shown in Figure 8, these mutations span much of the length of the MTF1 gene. The mutations that affect interactions with Rpo41p are shown above the linear map of the gene; the nine mutations fall in three discrete clusters that we have designated A, B, and C. Below the map are the six petite mutations that retain the ability to interact with Rpo41p. In several cases, the two classes of mutants are closely apposed. Additional mutations in Mtf1p have been identified using a site-directed mutagenesis approach (Shadel and Clayton 1995). Although only three of the 14 mutations created in that study resulted in a petite phenotype, the positions of the nonfunctional alterations confirmed that the regions of similarity with σ factors were essential for Mtf1p function. Our collection of 15 point mutations serves to further establish the importance of these conserved regions. Ten of the 15 petite mutations lie in the conserved regions identified previously (Fig. 8). The five mutations not localized to the conserved regions lie in the areas between regions 2.1/2.2 and 2.3/2.4, and between regions 2 and 3 (Fig. 8). Although we did not identify point mutations in the region carboxy-terminal to conserved region 3, there are likely to be additional essential residues in this part of the protein based on the deletion analysis reported by Shadel and Clayton (1995). They found that an additional 30 or more amino acids carboxy-terminal of region 3 were required for full function of Mtf1p. One of our point mutations (Y42C) is at the same position identified as important by Shadel and Clayton (1995); another (L53H) is immediately adjacent to a mutation identified in that study (D52A). The fact that the adjacent ts petite mutant Y54F still interacts with the core (Fig. 5) means that we cannot predict whether the D52A mutation is defective for interaction or for another step in the transcription reaction. This region of the protein therefore appears to have two distinct functions closely interdigitated in the folded structure. It is interesting that both mutations at amino acid 42 (Y42C and Y42R) result in a ts phenotype in vivo (Table 1; Shadel and Clayton 1995). Because we have shown that the Y42C mutation affects core interactions, this is probably also the defect for the Y42R mutation.
In an in vitro binding assay we can correct the defect of the Y42C mutation by lowering the pH to create the uncharged form of cysteine (Fig. 7). The fact that both mutations are ts in vivo may therefore reflect alterations in the pH or ionic environment of the mitochondrion that occur at elevated temperature. Under these altered conditions, a charged residue (cysteine or arginine) at position 42 does not support interactions with the core. We found that the temperature shift dramatically decreases mitochondrial transcription even when wild-type Mtf1p is present. When Mtf1p is replaced by a ts noninteracting mutation (I154T or L53H), the abundance of all classes of transcripts drops rapidly at the elevated temperature and does not recover to levels that support mitochondrial function. The fact that the reduction was similar for all the promoters we examined is consistent with the idea that the subunits of the holoenzyme could no longer interact in the altered conditions of the mitochondrion at the nonpermissive temperature. We have not confirmed directly that RNA synthesis shuts off at the nonpermissive temperature. However, our observation that mitochondrial DNA levels do not change during the several cell generations represented by the extended time course of the experiment indicates that the decrease in transcript abundance is not attributable to a loss of template. This observation also calls into question the hypothesized role of the mtRNAP in replication of mitochondrial DNA (Clayton 1991). If, as we have shown, the mutated enzyme is nonfunctional for the synthesis of all classes of transcripts, how can it still be active for primer synthesis? This skepticism is consistent with the work of Fangman et al. (1990), who showed that mitochondrial DNA can be replicated in the absence of functional Rpo41p.

Comparison to the core interaction regions of σ factors: cluster A mutations/conserved region 2

Previous studies of the core interaction regions of bacterial σ factors have demonstrated clearly that amino acids in conserved regions 2.1 and 2.2 are of critical importance (Shuler et al. 1995; Tintut and Gralla 1995; Joo et al. 1997). The amino acid sequences of these regions of Escherichia coli σ70 and σ32, and Bacillus subtilis σE, are shown in Figure 9A aligned with Mtf1p. Also included are a region of the human RAP30 subunit of RNA Pol II factor TFIIF shown to be protected when in a complex with the bacterial core RNA polymerase (McCracken and Greenblatt 1991), and a short region of E. coli σ54 demonstrated to be important for core interactions (Tintut and Gralla 1995). Although no point mutations of σ70 have been reported that abolish core interactions, Lesley and Burgess (1989) identified several deletions that decreased core binding. A deletion that encompassed region 2.1 had the most deleterious effect, and a synthetic peptide spanning this region (indicated by the broken line in Fig. 9A) was found to bind to core polymerase (Lesley and Burgess 1989). However, this peptide also bound with similar affinities to the holoenzyme form of E. coli RNA polymerase and to σ70 (Lesley and Burgess 1989). Severinova et al. (1996) have extended these observations using a tryptic fragment of σ70. They found that a fragment encompassing part of region 1 and most of conserved region 2 (amino acids 114-448) bound to the core polymerase. Binding by this fragment is specific, as full-length σ70 was able to compete with the fragment for core binding. However, the fragment binds core with an affinity 30-fold lower than full-length σ70, suggesting that other regions also make important contributions to core binding. Recent work has focused on regions 2.1 and 2.2 of σ factors to identify individual residues critical for core interactions. Shuler et al. (1995) screened site-directed mutations in regions 2.1 and 2.2 of B. subtilis σE and identified two point mutations (highlighted in Fig. 9A) that interfered with core interactions. Tintut and Gralla (1995) noted that although σ54 and σ70 share very little amino acid sequence similarity, a short motif can be found in both proteins. Mutagenesis of this region of σ54 resulted in the identification of the residues highlighted in Figure 9A as critical for core interactions (Tintut and Gralla 1995). Joo et al. (1997) found that a mutation in region 2.2 of σ32 reduces affinity for the core polymerase. Note that the mutations identified in these screens (shown in green) flank the highlighted mutations in cluster A of Mtf1p (shown in red), and in one case identify the same position in the alignment. Although some of the noninteracting mutations do overlap the original synthetic peptide of Lesley and Burgess (1989), it is clear from this comparison that core interaction requires many amino acids in regions 2.1 and 2.2 in addition to those included in the peptide. This conclusion is consistent with predictions made by Gribskov and Burgess (1986) and Helmann and Chamberlin (1988) based on the high level of amino acid sequence conservation in these regions of the bacterial σ factors.

[Figure 9 legend (start truncated): … 175-193 (Tintut and Gralla 1995)] are shown. A 30-amino-acid peptide from σ70 with affinity for core polymerase (Lesley and Burgess 1989) is shown by a broken line above the σ70 sequence. The positions of mutations that reduce interaction with core polymerase are shown in green for σE, σ32, and σ54, and in red for Mtf1p. Colored shading is used to indicate similarity between the amino acid sequences. (B) The locations of the noninteracting mutations shown in A are highlighted on the structure of region 2 of σ70 as determined by Malhotra et al. (1996) using InsightII (Biosym, San Diego, CA). The Mtf1p noninteracting cluster A mutations are shown in red. Positions required for interaction in σE, σ32, and σ54 are shown in green. Note that the carboxyl terminus of region 2.4 (marked with an arrow) is brought close to the position of the noninteracting mutations in regions 2.1 and 2.2 in the three-dimensional structure.]

The bacterial σ factors and the eukaryotic nuclear factor RAP30 (McCracken and Greenblatt 1991) all form complexes with the bacterial core polymerase. As shown in Figure 9A, the amino acid sequence of this region of the different proteins has not been highly conserved. It is possible, however, that structural elements of the interaction region are similar. We have used the recently reported structure of a portion of σ70 (Malhotra et al. 1996) to model the position of the cluster A noninteracting mutations from Mtf1p as well as the region 2.1 and 2.2 mutations of σ32, σE, and σ54. As shown in Figure 9B, the highlighted positions define a structural domain including the helical region of 2.1 and a turn or bend connecting this element to helical region 2.2. It is of course difficult to predict the structure of these other σ-like factors accurately, especially as there are some single amino acid insertions and deletions in the region to be modeled (Fig. 9A).
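The juxtaposition argument above (the region 2.4 terminus brought near the 2.1/2.2 domain) is the kind of claim that can be checked numerically on deposited coordinates. A minimal sketch, with hedges: the file name, chain, and residue numbers below are illustrative placeholders, and PDB entry 1SIG is assumed here to be the Malhotra et al. (1996) σ70 fragment structure:

    import math

    def ca_xyz(pdb_path, chain, resseq):
        # Return the C-alpha coordinates of one residue from a PDB file,
        # using the fixed ATOM record columns of the PDB format.
        with open(pdb_path) as fh:
            for line in fh:
                if (line.startswith("ATOM") and line[12:16].strip() == "CA"
                        and line[21] == chain and int(line[22:26]) == resseq):
                    return (float(line[30:38]), float(line[38:46]),
                            float(line[46:54]))
        raise ValueError("residue not found")

    a = ca_xyz("1sig.pdb", "A", 390)  # e.g., a region 2.1/2.2 position
    b = ca_xyz("1sig.pdb", "A", 448)  # e.g., near the region 2.4 terminus
    print(math.dist(a, b))            # C-alpha separation in angstroms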
It is, however, of interest to note the location of the tyrosine residue at an accessible position near the beginning of the turn. We have shown that in Mtf1p, the ionization state of this residue in the Y42C mutation is critical for interaction. Most σ factors have a bulky hydrophobic residue in this position (Lonetto et al. 1992). The fact that σ54 and RAP30 do not share this residue (Fig. 9A) indicates that the determinants for binding are more complex than simple interactions between single amino acids.

Cluster B mutations

The reported structure of σ70 only extends to the carboxyl terminus of region 2 (Malhotra et al. 1996), so the additional noninteracting mutations in clusters B and C (Fig. 8) cannot be modeled relative to the cluster A mutations in region 2. However, as indicated by the arrow in Figure 9B, the folded structure does bring the carboxyl terminus of the region 2.4 helix into close juxtaposition with the region 2.1/2.2 interaction domain. This means that the noninteracting mutations that we identified in cluster B are probably very close to those in cluster A in the folded structure. Although there is no obvious amino acid sequence similarity between this region of Mtf1p and the σ factors (Jang and Jaehning 1991), common structures may exist. The amino acid differences may be important for the selective interactions with different types of core polymerases. Because none of the original deletion mutations tested by Lesley and Burgess (1989) selectively removed this part of the protein between regions 2 and 3, this question has not yet been addressed.

Cluster C mutations/conserved region 3

The third cluster of mutations identified in our screen lies in conserved region 3. All of these mutations (S218R, I221K, and D225G) are interposed very closely with petite mutations that retain the ability to interact with the core (Q219R and L228S). The spacing between these mutations indicates that they may define two important surfaces (potentially faces of a helical or sheet structure), one of which is critical for core interactions and the other required for another essential function. There is precedent for amino acids in region 3 also having a role in core interactions. Lesley and Burgess (1989) described one deletion that removed region 3 and reduced core interactions by a factor of 5 in vitro. In addition, Zhou et al. (1992) reported that a small deletion in σ32 significantly reduced the affinity for the core polymerase. Combined with the fact that we were unable to define deletions of either Rpo41p or Mtf1p that retained the ability to interact in vivo, all of these observations strongly support a model of a complex interaction surface created by distant regions in the amino acid sequence brought together in the folded structure. Because our mutagenesis was not exhaustive, it is possible that even more than the three regions we have identified are important for interactions.

Interaction regions in the core polymerase

There is currently no information on the particular residues or regions of any core polymerase required for σ factor interactions. σ70 has been shown to make contacts with all of the subunits of the core [β, β′, and α (Coggins et al. 1977; McMahan and Burgess 1994; Greiner et al. 1996)]. It is probable that the eukaryotic nuclear σ-like factors also make contacts with more than one subunit of the core. The RAP30 and RAP74 subunits of TFIIF each contain core interaction regions, in support of this idea (McCracken and Greenblatt 1991; Wang and Burton 1995).
Although the analysis of the interaction elements of the single-subunit Rpo41p core should be simpler than for the multisubunit enzymes, our analysis of deletion mutations in Rpo41p (Fig. 2) supports the idea that these interaction elements will encompass several widely spaced regions in the RPO41 gene. It will be interesting to ultimately determine if these elements share any amino acid sequence or structural similarity with elements in the multisubunit prokaryotic and eukaryotic core polymerases. These studies could help to elucidate the origins of this unusual RNA polymerase. Although there are as yet no identified homologs of the yeast MTF1 gene, it appears that most eukaryotes do possess an Rpo41p-type mitochondrial core polymerase (Cermakian et al. 1996; Chen et al. 1996; Tiranti et al. 1997). The analysis of the interaction regions in the single-polypeptide enzymes could therefore eventually explain the alterations that caused the phage-related proteins to lose the ability to recognize a promoter on their own and to substitute a required accessory factor. The conservation of structures and functions among the RNA polymerases of phage, bacteria, and the eukaryotic nucleus and mitochondrion will allow observations of the relatively simple mitochondrial RNA polymerase to guide further experiments with the multisubunit as well as the single-polypeptide enzymes.

Materials and methods

Media and genetic methods

Standard media, such as YP medium containing 2% of either glucose (YPD) or glycerol (YPG), synthetic complete medium (SC) lacking the appropriate amino acids, and sporulation medium, were prepared as described by Guthrie and Fink (1991). 5-FOA medium was prepared by adding 5-FOA to synthetic medium at a concentration of 500 mg/liter (Sikorski and Boeke 1991). Yeast cells were transformed using the lithium acetate method (Ito et al. 1983). Mating, sporulation, and dissection were carried out by standard methods (Guthrie and Fink 1991).

Two-hybrid plasmid constructs and assays

MTF1 was cloned as a 1.0-kb EcoRI fragment from pJJ525 (Mangus et al. 1994) into the corresponding site of pBTM116 (Bartel et al. 1993) to create plasmid pJJ832. Insert orientation was confirmed by restriction enzyme analysis. For the mutant mtf1 constructions, pJJ832 was digested with NsiI and religated to produce an internal deletion of 101 bp. The resulting plasmid was digested with either AflII and MscI or AflII and PstI. The fragments were treated with shrimp alkaline phosphatase. mtf1 mutations created by site-directed mutagenesis in pBluescript SK(+) were digested with AflII and PstI (vector sequence) and ligated to pJJ832. mtf1 mutants isolated as single point mutations (I221K, L53H, I154T, and S218R) were digested with AflII and MscI and ligated to the corresponding digest of pJJ832. The resulting plasmids were tested by restriction analysis to confirm the replacement of the mtf1 deletion with the point mutation constructs. Mutants Y42C and K157E were first cloned into pBluescript SK(+), then the AflII to PstI fragments were cloned into pJJ832.

Two-hybrid deletion constructs

For the carboxy-terminal deletions, plasmid pBTM116 was digested with EcoRI and SmaI. The vector was dephosphorylated with shrimp alkaline phosphatase (U.S. Biochemical), then separated by electrophoresis on an agarose gel and extracted from the gel using Qiaex beads (Qiagen). The vector was ligated to the following MTF1 fragments of pJJ525. For the Mtf1p1-224 construct, the EcoRI to BamHI fragment of MTF1 from pJJ525 was used as the insert.
The EcoRI to BglII fragment of this same vector was used for the Mtf1p1-296 construct. Finally, the EcoRI to XmnI fragment was used to clone the Mtf1p1-315 construct. Inserts were prepared as follows. pJJ525 was digested with either BamHI, BglII, or XmnI. Blunt ends were then created by filling in the overhang with Klenow (New England Biolabs), followed by digestion with EcoRI. Fragments were separated by electrophoresis through 0.8% agarose gels and purified using Qiaex beads (Qiagen). For the amino-terminal deletion (Mtf1p52-341), a ScaI to PstI (partial ScaI digest) fragment from pJJ525 was inserted into the SmaI and PstI sites of pBTM116. Vector and insert were prepared as described above. The PCR fragments were cloned into TA cloning vectors (pGEMT, Promega; pCRII, Invitrogen) and then digested with the appropriate enzymes for cloning. Insert fragments as well as vector fragments were separated by electrophoresis and purified as described above. The Rpo41p1-1351 insert was prepared by the amplification of an RPO41 plasmid using the NOTI-5′ and NOTI-3′ primers. The insert was created by a NotI digest and cloned into the corresponding site of pVP16 (Hollenberg et al. 1995). The Rpo41p1-311 insert was prepared using the BAMHI-5′ and 3′-311 primers. The insert was created by a BamHI digestion and ligated into the BamHI site of pVP16. Primers 5′-318 and NOTI-3′ were used to amplify the Rpo41p318-1351 insert. The insert was cloned into the BamHI and NotI sites of pVP16 after digestion with the same enzymes. The Rpo41p1-918 insert was created by digesting the Rpo41p1-1351 clone with NotI and MscI. The fragment was ligated to pVP16 that had been digested with EcoRI, filled in with Klenow to make blunt ends, then digested with NotI. The Rpo41p318-597 insert was created with the 5′-318 and 3′-597 primers. The insert was cloned into the BamHI site of pVP16. The Rpo41p1-597 fragment was amplified with the BAMHI-5′ and 3′-597 primers. The insert was created by partial digestion with BamHI (there is a BamHI site in the Rpo41p sequence) and ligated to the corresponding site of pVP16. Two-hybrid constructs containing the LexA and VP16 fusions were transformed into yeast strains AMR70 and L40, respectively (Hollenberg et al. 1995). To test the constructs for interactions, the haploid strains were mated and diploids were selected on Ura−, Trp−, Leu− medium. The strains were tested initially for β-galactosidase production by filter lift assays (Breeden and Nasmyth 1985), and for growth on plates lacking histidine and containing 5 mM 3-aminotriazole (Phizicky and Fields 1995). For quantitative assays, strains were grown to mid-log phase in selective medium and β-galactosidase activity was assayed in permeabilized cells (Miller 1972).

Plasmid and strain construction for the MTF1 plasmid shuffle

Plasmids pJH118, pJH121, and pJH124 were constructed by inserting a 1.5-kb EcoRI fragment containing the promoter and entire coding sequence of MTF1 (Mangus et al. 1994) into the EcoRI site of pUC18, pUC7, and pBluescript SK(+). Plasmids pJH119 and pJH142 were made by cloning this EcoRI fragment into the URA3+ vector YCplac33 and the LEU2+ vector YCplac111 (Gietz and Sugino 1988), respectively. To construct plasmid pJH133, a BglII-BamHI fragment bearing a 3.8-kb hisG-URA3-hisG cassette was isolated from plasmid pNKY51 (Alani et al. 1987) and inserted into the BglII site of plasmid pJH121, disrupting the MTF1-coding sequence. Uracil-containing single-stranded DNA was produced from plasmid pJH124 in E.
coli strain CJ236 [dut ung (Kunkel et al. 1987)]. Each mutagenic oligonucleotide was phosphorylated with T4 polynucleotide kinase and used separately in in vitro mutagenesis reactions. The resulting mutated DNAs were used to transform E. coli strain NM522 [dut+ ung+ (Kunkel et al. 1987)] to select for the mutated plasmids. Each mtf1 mutation was confirmed by sequencing and subcloned into the EcoRI site of YCplac111 to give individual constructs and retested in yJH71 as noted above. RNA isolation and analysis Yeast were grown to mid-log phase, collected by centrifugation and frozen as pellets in liquid nitrogen. Yeast total RNA was isolated and purified as described by Elder et al. (1983). Yeast total RNA (10-20 µg) was electrophoresed through agarose gels containing formaldehyde and blotted by capillary action to Zetaprobe nylon membranes (BioRad). Oligonucleotide probes specific for mitochondrial 14S, COB, and tRNA Thr and cytoplasmic 18S RNAs (Ulery et al. 1994) were end-labeled with [γ-32P]ATP (Amersham) and hybridized at 40°C in 0.5 M sodium phosphate buffer (pH 7.5) and 7% SDS. An oligonucleotide specific for 18S rRNA was used to normalize for differences in RNA samples on the blot. Signal intensity was quantitated with a PhosphorImager (Molecular Dynamics). Western blots Yeast cell extracts were prepared by growing cells to an OD595 of 0.6. Cells (10 ml) were harvested and resuspended in 1 ml of ice-cold Z buffer (60 mM Na2HPO4, 40 mM NaH2PO4, 10 mM KCl, 1 mM MgSO4, and 50 mM β-mercaptoethanol at pH 7.0). Cells were pelleted in a microfuge and resuspended in 200 µl of ice-cold Z buffer. Glass beads (300 µl) were added and the cells were vortexed for 5 min at 4°C. The extract was cleared by a 5-min centrifugation (maximum speed) and the supernatant was collected. Protein concentrations were determined by the Bradford method (Bradford 1976) using BSA as a standard. Total protein (50-100 µg) was separated by SDS-PAGE and blotted to Immobilon-P (Millipore). Rpo41p was detected by polyclonal antibodies as described previously (Mangus et al. 1994). Two-hybrid Mtf1p constructs were detected using anti-LexA antibody generously provided by Dr. Roger Brent (Harvard University, Cambridge, MA). Hybridization conditions were as described by Harlow and Lane (1988). Detection was by chemiluminescence using ECL kits from Amersham Corp. GST-fusion plasmid constructs and GST-Mtf1p affinity chromatography The internal AflII-BglII fragment from the mtf1 mutants was used to replace the wild-type sequence of MTF1 in pGEX-1 [plasmid pJJ526 (Mangus et al. 1994)]. pJJ526 was digested with NsiI and religated to produce a deletion of 101 bp. The resulting plasmid was digested with AflII and BglII followed by dephosphorylation of ends by shrimp alkaline phosphatase (U.S. Biochemical). The vector was isolated by gel electrophoresis and ligated to AflII-BglII fragments from the MTF1 mutants. The resulting plasmids were tested by restriction enzyme analysis to confirm insertion of the full-length mutant fragments. Interaction studies with GST-Mtf1p constructs were performed as described previously (Mangus et al. 1994) with the following adjustments. Proteins were eluted with a 5-column volume step of T(500) buffer. The pH of the T(50) and T(500) solutions was 7.9 at 25°C and 8.3 at 4°C.
When chromatography was performed at pH 7.3, MOPS buffer was used instead of Tris-HCl in the solutions.
2018-04-03T05:42:01.306Z
1997-11-01T00:00:00.000
{ "year": 1997, "sha1": "23f75ee5dcac988d425f98e2f2f9d2a6808d7913", "oa_license": null, "oa_url": "http://genesdev.cshlp.org/content/11/21/2897.full.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "0e608b6110eba588e00333dc81f680ccfe2a77ba", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
266222840
pes2o/s2orc
v3-fos-license
Retinal morphological and functional response to Idebenone therapy in Leber hereditary optic neuropathy Dear Editor, We read with interest the letter [1] from Josef Finsterer & Sounira Mehri about our article regarding the morphological and functional response to Idebenone therapy in patients with Leber hereditary optic neuropathy (LHON), where two pediatric patients, genetically confirmed, were periodically followed up over a period of one year after we initiated the treatment [2]. We thank them for the interest shown in this topic and we want to answer the questions they have formulated regarding our study. The first concern was the fact that the heteroplasmy rates of the causative mitochondrial deoxyribonucleic acid (mtDNA) were not provided in our study. Based on the symptoms, the ophthalmological examination, and the functional and morphological investigations, the two patients were suspected of LHON; they were then genetically tested and confirmed. Because they fulfilled all the inclusion criteria, they were included in the National LHON Program and were eligible to receive treatment with Idebenone. We considered that, being symptomatic carriers, our patients did not require this determination. It is already known that clinically manifesting carriers of one of the primary LHON mutations (mtND1: m.3460G>A, mtND4: m.11778G>A, mtND6: m.14484T>C) usually present a homoplasmic state of that variant [3,4]. Regarding the mtDNA copy number, Giordano et al. observed that symptomatic carriers have a decreased mtDNA copy number [5]. Baglivo et al. correlated, in their study, the amount of mtDNA with treatment response and hypothesized that for the m.3460G>A variant a protective role of mitochondrial biogenesis could be considered, since for the m.11778G>A variant there was no correlation between the mtDNA copy number and disease penetrance [6]. In our study, we obtained a good response to Idebenone in the patient with the m.3460G>A mutation and no response in the patient with the m.11778G>A mutation. Nevertheless, in Romania, neither the determination of heteroplasmy rates nor the mtDNA copy number is performed routinely in public or private health systems. A concern was about the patients' adherence to treatment and whether we ensured the correctness of the treatment. The Idebenone therapy was initiated in the hospital, in the presence of a parent for each patient. We explained and monitored the treatment intake for the first three days and then the patients continued at home. Both the patients and their parents confirmed the correct intake of Idebenone, 300 mg, three times/day. Another question was related to the presence of other signs and symptoms, especially neurological and cardiological, that could indicate LHON plus syndrome. As written in the article, brain magnetic resonance imaging was normal for both patients. Also, the patients were sent for complete neurological and cardiological examination before Idebenone initiation, and both were normal, so we excluded a LHON plus syndrome.
Regarding the spontaneous recovery in the patient with the m.3460G>A variant in mtND1, we considered the improvement in visual acuity and in all the functional tests to be due to Idebenone therapy, because from diagnosis until treatment initiation there were 15 months in which the visual acuity and the functional and morphological investigations showed a severe progression of the disease. Within three months after we started the treatment, we obtained a good result, which continued to improve during the treatment. There are reported cases of LHON patients with spontaneous recovery of visual function, but especially with the m.14484T>C mutation [7][8][9]. On the other hand, patients with the m.3460G>A mutation not only have a 4% chance of spontaneous recovery [10], but they also have a poor prognosis without treatment, and a good response to Idebenone therapy [11,12]. In conclusion, our study focused on the retinal changes due to Idebenone therapy in our two patients with LHON. We concluded that Idebenone may improve the retinal function, with no effect on the morphological tests, a fact that is
2023-12-16T05:06:52.914Z
2023-09-30T00:00:00.000
{ "year": 2023, "sha1": "d3a5f1d8abc80f385b818226c32872f549976535", "oa_license": "CCBYNCSA", "oa_url": "https://rjme.ro/RJME/resources/files/640323443444.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d3a5f1d8abc80f385b818226c32872f549976535", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
237393096
pes2o/s2orc
v3-fos-license
DeepConsensus: Gap-Aware Sequence Transformers for Sequence Correction Pacific BioScience (PacBio) circular consensus sequencing (CCS) generates long (10-25 kb), accurate “HiFi” reads by combining serial observations of a DNA molecule into a consensus sequence. The standard approach to consensus generation uses a hidden Markov model (pbccs). Here, we introduce DeepConsensus, which uses a unique alignment-based loss to train a gap-aware transformer-encoder (GATE) for sequence correction. Compared to pbccs, DeepConsensus reduces read errors in the same dataset by 42%. This increases the yield of PacBio HiFi reads at Q20 by 9%, at Q30 by 27%, and at Q40 by 90%. With two SMRT Cells of HG003, reads from DeepConsensus improve hifiasm assembly contiguity (NG50 4.9Mb to 17.2Mb), increase gene completeness (94% to 97%), reduce false gene duplication rate (1.1% to 0.5%), improve assembly base accuracy (Q43 to Q45), and also reduce variant calling errors by 24%. Introduction Modern genome sequencing samples the genome in small, error-prone fragments called reads. At the read level, the higher error of single molecule observations is mitigated by consensus observations. In Illumina data, the consensus is spatial, through clusters of amplified molecules 1 . Pacific Biosciences (PacBio) uses repeated sequencing of a circular molecule to build consensus across time 2 . The accuracy of these approaches, and the manner they fail, ultimately limits the read lengths of these methods and the analyzable regions of the genome 3,4 Recent breakthroughs in PacBio throughput have enabled highly accurate (99.8%) long reads (>10 kb), called HiFi reads 5 to set new standards in variant calling accuracy 6 and the first telomere-to-telomere human assembly 7 . The remaining sequencing errors are strongly concentrated in homopolymers 3,8 , and the need to manage these errors constrains the minimum number of passes required for acceptable accuracy, and therefore the yield and quality of PacBio sequencing. The existing algorithm for consensus generation from HiFi sequencing data uses a hidden Markov model to create a draft consensus sequence, which is iteratively polished 9 . The underlying process of removing errors using an alignment of reads is also used in genome assembly 10 , and in assembly polishing methods like Racon 11 , Pilon 12 , and PEPPER-Margin-DeepVariant 13 . All of these methods correct from a given alignment to a reference or contig. These methods use statistical heuristics for the correction model itself, except for PEPPER-Margin-DeepVariant. To improve consensus generation of HiFi sequencing data, we introduce a deep learning-based approach leveraging a transformer 14 architecture. Transformers have gained rapid adoption in natural language processing 15 and computer vision 16 . In biology, transformers have been applied to Multiple Sequence Alignment (MSA) of protein sequences 17 and dramatically improved AlphaFold2's protein structure prediction 18 . We present DeepConsensus, an encoder-only transformer model that uses an MSA of the PacBio subread bases and a draft consensus from the current production method (pbccs). DeepConsensus incorporates auxiliary base calling features to predict the full sequence in a window (by default 100bp). Since insertion and deletion (INDEL) errors are the dominant class of error in this data, we train the model with a novel alignment-based loss function inspired by differentiable dynamic programming 19 . 
This gap-aware transformer-encoder (GATE) approach more accurately represents misalignment errors in the training process. DeepConsensus reduces errors in PacBio HiFi reads by 41.9% compared to pbccs in human sequence data. We stratify performance across mismatches, homopolymer insertions and deletions, and non-homopolymer insertions and deletions, and DeepConsensus improves accuracy in each category. DeepConsensus increases the yield of reads at 99% accuracy by 8.7%, at 99.9% accuracy by 26.7%, and at 99.99% accuracy by 90.9%. We demonstrate that using reads from DeepConsensus improves the contiguity, completeness, and correctness of genome assembly when compared to assemblies generated using pbccs reads. Similarly, we demonstrate improved accuracy of variant calling when using DeepConsensus reads. Finally, we demonstrate that improvements in accuracy allow for longer PacBio read lengths while retaining acceptable read accuracy, enabling improvements in contiguity of genome assembly and increasing the experimental design options for PacBio sequencing. Results Overview of DeepConsensus Figure 1. Overview of DeepConsensus. This figure illustrates the DeepConsensus workflow. Subreads are combined with a CCS read and divided into 100 bp partitions. Each partition is converted to a tensor object containing the pulse width, interpulse duration, signal-to-noise (SN) ratios, and strand information. These tensors can then be used during training or inference using an encoder-only transformer. The trained model produces a polished segment which is stitched together to produce a polished read. An overview of the DeepConsensus algorithm is shown in Figure 1. PacBio CCS sequencing produces a set of subreads which are processed by pbccs to produce a consensus (CCS) read. Subreads are combined with the CCS read, and divided into 100 bp partitions. Each partition is then transformed into a tensor to be used as input to the DeepConsensus model for training or inference. The tensor contains additional information beyond the sequence extracted from each subread. This includes the pulse width (PW) and interpulse duration (IPD). These are raw values provided by the basecaller that are used to call bases. Additionally, DeepConsensus incorporates the signal-to-noise ratio for each nucleotide, and strand information. For training, we use a custom loss function that considers the alignment between the label and predicted sequence. For inference, the outputs for each 100bp partition in the full sequence are stitched together to produce the polished read. Figure 2. (a) Read quality for the intersection of pbccs and DeepConsensus (HG002 chr20 11kb) reads; each light green dot corresponds to a single read, and dark green dots represent reads that were perfect matches to the reference. (b) Observed read accuracy across the number of available subreads for DeepConsensus and pbccs. DeepConsensus increases HiFi accuracy and yield We first evaluated the performance of DeepConsensus (v0.1) by aligning polished 11kb chr20 reads from HG002 against a high-quality diploid assembly 20 . HiFi reads output from pbccs were processed similarly, and we used a custom script to calculate a phred-scaled read accuracy score for each read (see Methods). When examining the intersection of reads to assess relative improvement, we observe accuracy improvements are distributed across the full range of pbccs scores (Figure 2A). We observe an average read quality of 28.94 for DeepConsensus and 26.6 for pbccs, which corresponds to an average read quality improvement of 2.34 points.
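As a concrete illustration of the phred-scaled read qualities compared here (the identity-based formula is given under Analysis Methods below), the following small Python sketch converts alignment identity into a quality score; the example identities are made-up values, not measurements from the paper.

```python
import math

def phred_from_identity(identity: float, cap: float = 60.0) -> float:
    """Phred-scaled read quality Q = -10 * log10(1 - identity).

    Reads with identity == 1.0 have no measurable error, so they are
    capped here rather than reported as infinite (the paper tracks
    these separately as 'perfect match' reads).
    """
    if identity >= 1.0:
        return cap
    return -10.0 * math.log10(1.0 - identity)

# Illustrative values only: identity 0.999 -> Q30, 0.9978 -> ~Q26.6
for identity in (0.999, 0.9978, 0.9960):
    print(f"identity={identity:.4f}  Q={phred_from_identity(identity):.1f}")
```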
We also examined read accuracy by the number of subreads used to generate each HiFi read and observe improvements for all subread bins ( Figure 2B). Sequencing errors can be classified by type (SNP, Indel) and according to their sequence context (homopolymer, non-homopolymer). Homopolymer indels have previously been characterized as the largest contributors to PacBio HiFi error rates 5 . We used bamConcordance 5 to examine the improvements for each error class. Notably, DeepConsensus reduces errors across all error classes, including significant reductions in homopolymer indels and a 70.39% reduction in non-homopolymer insertions (Table 1). We next asked how improvements in read accuracy contribute to increases in sequencing yield. DeepConsensus and pbccs are both configured to output reads with a predicted Q > 20. We compared the total yield and yields at Q thresholds of 20, 30, 40, and perfect match. We observed that DeepConsensus increases sequencing yield across all quality bins ( Table 2). In addition to producing a polished sequence, our model also outputs predicted base qualities. To evaluate the improvements achieved in de novo assembly with the increased yield (Supplementary figure 2-5, Supplementary table 1) and higher quality reads from DeepConsensus, we generated phased assemblies of four human genome samples using the hifiasm 21 assembler. We generated assemblies with reads from two SMRT Cells (HG003, HG004, HG006, HG007) and three SMRT Cells (HG003, HG004, HG006). To assess the contiguity, we derived the contig N50, NG50 and genome coverage against GRCh38 using QUAST 22 . In Figure 3, we show the improvements in assembly quality and contiguity as a result of increased yield and quality of reads from DeepConsensus. With reads from two SMRT Cells, we see that the NG50 of the assemblies with DeepConsensus reads (17.23Mb, 12.37Mb, 31.54Mb, 8.48Mb) are on average 3x higher than assembly NG50 with pbccs reads (4.91Mb, 3.72Mb, 18.55Mb, 1.94Mb) (Figure 3a, Supplementary table 2). We evaluated the correctness of the assembly using YAK 21 , which overlaps the assembly with k-mers observed in short-read sequencing. The YAK estimated quality of the assemblies with DeepConsensus reads achieve Q44 on average compared to Q42 with assemblies using pbccs reads (Figure 3b, Supplementary table 3). We also used dipcall 23 to derive the small variants from the assembly and compared the small variants against Genome-In-a-Bottle (GIAB) truth sets 24 of the associated sample. We observe the assemblies derived from DeepConsensus reads have on average 43% fewer total errors (false positives and false negatives) compared to the assemblies derived from pbccs reads (Supplementary table 4, 5). To evaluate the gene completeness of the assemblies, we used asmgene 25 with the Ensembl homo sapiens cDNA sequences as input and GRCh38 as the reference sequence. We observe that the assemblies generated with pbccs have a 2-fold higher false duplication rate (average 540 false duplications) compared to the assemblies generated with DeepConsensus (average 231 false duplications) (Supplementary table 6, 7). 
Similarly, in assemblies generated with three SMRT Cells, we see that the contig NG50 of the assemblies with DeepConsensus reads (55Mb, 41Mb, 51Mb) are on average 1.3x higher than contig NG50 with pbccs reads (33Mb, 36Mb, 41Mb) (Supplementary table 2 In summary, we observe consistent improvements in contiguity, correctness, and completeness in assemblies generated with reads from DeepConsensus, using either two or three SMRT Cells. Using DeepConsensus reads improves variant calling accuracy Use of longer reads improves yield, assembly, and variant calling With higher consensus accuracy for HiFi reads, the number of passes can be reduced while maintaining accuracy (Figure 2b), potentially allowing for sequencing of longer insert sizes while preserving the quality of downstream analyses. To test this, we sequenced a HG002 sample with 15kb and 24kb insert sizes, each with two SMRT Cells on Sequel II System using Chemistry 2.2. We generated DeepConsensus reads for the 15kb and 24kb insert size ( Figure 5a, Supplementary table 9). Details on the library preparation protocol for 24kb reads are provided in the online methods. In Figure 5, we show the improvements in genome assembly and variant calling we achieve with 24kb reads compared to 15kb reads of HG002 sample. The hifiasm assembly with 24kb reads achieves higher contig NG50 of 34.05Mb compared to 24.81Mb with 15kb reads, though the assembly quality is higher with 15kb (Q51.7) reads than with 24kb (Q50.8) (Supplementary In summary, the increased accuracy of DeepConsensus expands the window of experimental choices. This allows researchers to consider using longer reads for applications which disproportionately benefit, such as assembly of genomes with high duplication rates, difficult to assemble regions such as the MHC, phasing across a long gene or amplicon, or variant detection in hard-to-map regions. Discussion The correction of errors in sequencing data is fundamental to both the generation of initial data from a sequencer and to downstream analyses which assemble, map, and analyze genomes [28][29][30] . We introduce a transformer-based consensus generation method, reducing errors in PacBio HiFi reads by 42% and increasing yield of 99.9% accurate reads by 27%. We show that with existing downstream methods, the improved reads result in better assembly contiguity, completeness, and accuracy, as well as more accurate variant calling. The problem of correcting errors from a MSA of repeated sequencing is a single example of a broader category of problems that analyze the alignment of similar sequences. The most similar adjacent applications are error correction of Unique Molecular Identifiers 31 , as well as error correction of Oxford Nanopore Duplex reads. Genome assembly polishing, which uses alignments of sequences from many molecules, is a similar application 11,13,32 . DeepConsensus models could be trained for these applications with minimal changes to its architecture. The gap-aware loss function used in the GATE approach could have utility to broader MSA-related problems. For example, related work by Rao 2021 17 demonstrated improved prediction performance across multiple tasks, including contact maps and secondary structure, and Avsec et al. 2021 33 used a long-range Enformer to predict gene expression. These applications could potentially benefit from the incorporation of alignment-based loss used in DeepConsensus, or the DeepConsensus framework could be applied to similar problem areas. 
DeepConsensus presents opportunities to alter experimental design to better leverage its improvements to accuracy. We demonstrate that DeepConsensus allows for longer read lengths while maintaining a high standard of read accuracy and yield. Certain applications, such as assembling difficult genome regions, may disproportionately benefit from use of longer reads. Additionally, because DeepConsensus learns its error model directly from training data, it allows a tighter coupling between library preparation, instrument iteration and informatics. DeepConsensus could be trained on data from a modified procedure or additional data stream to more accurately estimate the potential advantage of the new method, decreasing the chance that the modification's advantages might not be apparent due to optimization of the informatics to the older approach. The improvements we demonstrate to assembly and variant calling use unmodified downstream tools (hifiasm), or tools with unmodified heuristics that use an adapted model (DeepVariant). Further iterating on the heuristics in these methods may allow them to take additional advantage of the DeepConsensus error profile, or better use its higher yield of longer reads. Future improvements to DeepConsensus include training with an expanded dataset that includes additional samples and chemistries, since our current training datasets only include Sequel II data from a few SMRT Cells. Assessing DeepConsensus on non-human species and, if needed, supplementing training data with diverse species is an area of active development. There are substantial opportunities for improvements by refining the attention strategy, for example AlphaFold2 uses a modified axial attention 34 , or by leveraging efficiency improvements to the transformer self-attention layer to consider wider sequence contexts [35][36][37] . Investigating the trade-offs between model size and accuracy could also enable faster versions which preserve high accuracy. These and other improvements will enable DeepConsensus to help scientists realize the potential yield and quality of their sequencing instruments and projects. Dataset preparation For all SMRT Cells, we ran pbccs on the subreads to generate CCS sequences. pbccs generates a prediction for the overall read quality for each CCS read, and reads below Q20 are filtered out of the final HiFi read set. For dataset generation, we did not apply any filtering based on read quality for the CCS reads, and reads of all qualities were included for training and inference. To generate labels for each set of subreads, the CCS sequence predicted by pbccs was mapped to the HG002 diploid assembly. The coordinates of the primary alignment were used to extract the label sequence from the HG002 diploid assembly. Subreads and labels were aligned to the corresponding CCS sequence. The CIGAR string from this alignment was used to match bases across the subreads and assign a label for each position. Subreads were broken up into 100bp windows, and the corresponding label for each window was extracted from the full label sequence. In some cases, the label was longer than the subreads due to bases for which there was no support in the subreads. Each subread base has associated pulse width (PW) and interpulse duration (IPD) values, and each set of subreads has four signal to noise ratio (SN) values, one for each of the four canonical bases. PW and IPD values were capped at 9, and SN values were rounded to the closest integer and capped at 15.
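A rough sketch of the feature capping and windowing described above is given below; the array layout, function names, and handling of the final partition are illustrative assumptions rather than the actual DeepConsensus preprocessing code.

```python
import numpy as np

MAX_PW_IPD = 9   # pulse width / interpulse duration cap (from the text)
MAX_SN = 15      # signal-to-noise cap (from the text)
WINDOW = 100     # partition size in bases

def cap_features(pw: np.ndarray, ipd: np.ndarray, sn: np.ndarray):
    """Clip auxiliary features to the ranges described above."""
    pw = np.minimum(pw, MAX_PW_IPD)
    ipd = np.minimum(ipd, MAX_PW_IPD)
    sn = np.minimum(np.rint(sn).astype(int), MAX_SN)
    return pw, ipd, sn

def split_windows(ccs_aligned_subreads: np.ndarray, window: int = WINDOW):
    """Split subreads (already aligned to CCS coordinates) into 100 bp partitions.

    `ccs_aligned_subreads` is a hypothetical (num_subreads, ccs_length) array;
    the final partition may be shorter than `window`.
    """
    length = ccs_aligned_subreads.shape[1]
    return [ccs_aligned_subreads[:, start:start + window]
            for start in range(0, length, window)]
```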
Model and training The Transformer has emerged as the primary architecture for language understanding and generation tasks 14,15 . It uses self-attention to efficiently capture long and short-range interactions between words, crucial for understanding text. In recent work, this capability has been successfully leveraged to improve modeling of protein sequences 38 . We train a six-layer encoder-only transformer model with a hidden dimension of 560 and 2 attention heads in each self-attention layer. The inner dimension of the feedforward network in each encoder layer is 2048. The model considers 100 bases at a time from the full subreads, and the input at each position contains subread sequences and auxiliary features. The maximum number of subreads considered is 20. Auxiliary features include the pulse width (PW) and interpulse duration (IPD) measured by the basecaller, the signal to noise ratio (SN) for the sequencing reaction, the strand of each subread, and the sequence of the CCS read as predicted by pbccs. Each feature type is embedded using a separate set of learned embeddings, which are trained jointly with the model. An embedding size of two is used for the subread strand, and all other embeddings are of size eight. We used positional encodings that were a mix of sampled sines and cosines, as defined in the Transformer 14 . For training, the Adam optimizer was used with a learning rate of 1e-4, and input, attention, and ReLU dropout values were set to 0.1. Our implementation builds off the one provided in the TensorFlow Model Garden. For some examples, there exists a base in the label for which there is no evidence in any of the subreads. The predicted sequence for such examples would be longer than the input sequence length. The transformer encoder block outputs an encoding for each input token. In natural language applications, variable-length prediction is implemented using a decoder block, which is not constrained in the number of outputs. For consensus generation, we did not use a decoder block due to computational constraints. To allow for variable-length prediction using only the encoder, we add a fixed number of padding tokens to the input sequence for each window. This allows the model to predict sequences longer than the subread sequences by replacing some of the padding tokens with additional bases. The outputs from the encoder block are independently decoded using a shared feedforward layer with softmax activation. At each position, we predict a distribution over the vocabulary, which consists of the four canonical bases, A, C, T, G, and an additional token to represent alignment gaps or padding, which we denote as $. For training, we used chromosomes 1-19 from PacBio Sequel II sequencing of HG002, an extensively characterized genome curated by Genome in a Bottle 39 . Chromosomes 21 and 22 were used for tuning model parameters, and chromosomes 19 and 20 were held out entirely during training and used for final assessment. For additional full holdouts, we use PacBio Sequel II sequencing of HG003, HG004, HG006, HG007. Models were trained for 50 epochs on 128 TPU v3 cores with a batch size of 256 for each core. Five models were trained with the production settings, and we chose the checkpoint with the lowest loss on the tuning data. A custom gap-aware alignment loss was used, which is described in more detail in the following section. We call the combination of the gap-aware loss with the transformer-encoder architecture GATE (gap-aware transformer-encoder).
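For orientation, the following is a minimal sketch of an encoder with the hyperparameters quoted above (six layers, model dimension 560, two attention heads, feed-forward dimension 2048, dropout 0.1, a five-token output vocabulary). It is written in PyTorch purely for brevity, whereas the paper's implementation builds on the TensorFlow Model Garden; it also omits the separate per-feature embeddings, positional encodings, and padding-token mechanism, so treat it as an illustrative skeleton rather than the DeepConsensus model.

```python
import torch
import torch.nn as nn

VOCAB = 5  # A, C, G, T and the gap/padding token '$'

class ConsensusEncoder(nn.Module):
    """Encoder-only consensus model with the hyperparameters quoted above."""

    def __init__(self, d_model=560, nhead=2, ffn=2048, layers=6, dropout=0.1):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, dim_feedforward=ffn,
            dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.head = nn.Linear(d_model, VOCAB)  # shared per-position decoder

    def forward(self, features):          # features: (batch, positions, 560)
        encoded = self.encoder(features)
        return self.head(encoded)         # per-position logits over 5 tokens

# Example: a 100 bp window plus extra padding slots (the slot count is an assumption).
logits = ConsensusEncoder()(torch.randn(2, 120, 560))
```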
Loss Function Given an input MSA consisting of subreads, a consensus read, and auxiliary features, the output of the transformer is a sequence of probability distributions p_1, ..., p_n over the 5-letter alphabet {A, C, G, T, $}, which is compared against the correct nucleotide sequence y_1, ..., y_m (the lengths n of the transformer output and m of the correct nucleotide sequence may differ due to possible insertion or deletion errors in the consensus read). If we know that a given position i (1 ≤ i ≤ n) of the transformer output should predict the nucleotide at position j (1 ≤ j ≤ m) of the true sequence, then it is natural to use the cross-entropy loss ℓ(i, j) = −log p_i(y_j) to assess how good the prediction is. However, we need to choose which position of the output predicts which position of the label. For that purpose, we formally define an alignment of length k as an increasing subset of positions in both sequences, π = {1 ≤ π(1,1) < π(1,2) < ... < π(1,k) ≤ n, 1 ≤ π(2,1) < π(2,2) < ... < π(2,k) ≤ m}, such that position π(1,l) in the output predicts position π(2,l) in the label, for l = 1, ..., k. Given such an alignment, positions of the output not in the alignment correspond to insertion errors, and ideally the prediction in those positions should be $ so that they are removed from the prediction at test time. For those positions, we therefore use the cross-entropy loss ℓ(i, $). On the other hand, positions of the label not in the alignment correspond to deletion errors, i.e., nucleotides in the correct sequence that are missed in the MSA. For those errors, we consider a fixed error penalty γ > 0, which is a parameter to be tuned. In total, given an alignment π, the total loss is defined as the sum of cross-entropy losses over aligned positions and insertion/deletion losses: L_π = Σ_{l=1..k} ℓ(π(1,l), π(2,l)) + Σ_{i ∉ π(1)} ℓ(i, $) + γ · |{j ∉ π(2)}|. This loss depends on the arbitrary alignment π, which ideally should be chosen as a function of the output and the label so that the total loss is small. We therefore finally define the alignment loss as a (smooth) minimum over π: L_ϵ = −ϵ log(Σ_π exp(−L_π/ϵ)), where ϵ ≥ 0 is a parameter to control how suboptimal alignments contribute to the loss. At the limit ϵ = 0, we simply keep the best alignment, L_0 = min_π L_π, and taking ϵ > 0 allows us to create a smoother loss function that better aligns the output and the label. This loss is a particular case of the losses studied previously 19 , and we follow their approach to derive an efficient implementation that computes the loss and its gradient using differentiable dynamic programming, with a specific wavefront formulation to accelerate the computation on GPUs and TPUs. Output FASTQ generation DeepConsensus predictions for each 100bp window are joined together and $ tokens are removed to produce the final sequence that is output to FASTQ. Predicted base quality scores are generated from the output distribution at each position. The raw quality score for each base, q_i, is computed as a phred-scaled score of the predicted error probability, q_i = −10 log10(1 − p_i(b_i)), where p_i is the output distribution at position i and b_i is the predicted base. Each raw quality score is rounded to the closest integer, and reads with an overall predicted quality above 20 were written to the final output FASTQ, along with the corresponding quality string. Analysis Methods Assessing read accuracy HG002 11kb predictions were mapped to a high-quality HG002 diploid assembly 20 . For each primary alignment, the calculate_identity_metrics.py script was used to compute identity, which is defined below. The read identity values are used to compute the concordance read qualities, which are computed as phred-scaled scores of the identity: Q = −10 log10(1 − identity). Reads with identity scores of 1 are separately categorized as having a 'perfect match.' Subread counts were determined using the np tag (number of full-length passes).
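The hard-minimum (ϵ = 0) case of the alignment loss defined above can be written as a small dynamic program; the NumPy sketch below is an illustrative simplification (function and variable names are assumptions, and the quadratic-time formulation is used for clarity), not the smooth, differentiable wavefront implementation used for training on TPUs.

```python
import numpy as np

GAP = 4  # index of the '$' token in the 5-letter output distribution

def alignment_loss(log_probs: np.ndarray, label: np.ndarray, gamma: float) -> float:
    """Hard-min (epsilon = 0) version of the gap-aware alignment loss.

    log_probs: (n, 5) per-position log-probabilities output by the model.
    label:     (m,) integer-encoded true sequence (values 0-3).
    gamma:     fixed penalty for a deleted (unpredicted) label base.

    D[i, j] = best total loss using the first i output positions and the
    first j label positions; the moves are match (cross-entropy on the true
    base), insertion (predict '$'), and deletion (pay gamma).
    """
    n, m = log_probs.shape[0], label.shape[0]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for j in range(1, m + 1):
        D[0, j] = D[0, j - 1] + gamma                  # all label bases deleted
    for i in range(1, n + 1):
        D[i, 0] = D[i - 1, 0] - log_probs[i - 1, GAP]  # all outputs are insertions
        for j in range(1, m + 1):
            match = D[i - 1, j - 1] - log_probs[i - 1, label[j - 1]]
            insert = D[i - 1, j] - log_probs[i - 1, GAP]
            delete = D[i, j - 1] + gamma
            D[i, j] = min(match, insert, delete)
    return float(D[n, m])
```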
The np tag was extracted from the consensus reads BAM output by pbccs . We also use the bamConcordance tool, which reports the concordance between a read and a reference sequence along with error counts for each read. Error counts are broken down into five categories: mismatches, homopolymer insertions and deletions, and non-homopolymer insertions and deletions. We use the bamConcordance output to assess the quality of reads and calculate the percentage error reduction across different categories. Generating phased diploid assemblies with hifiasm We used hifiasm version 0.15.3-r339 to generate phased assemblies. We used the default hifiasm parameters which have duplication purging on for the phased assemblies. We converted the primary assembly graph to get the primary assembly sequence and each of the haplotype graphs to generate the assembly sequences for each haplotype. Detailed execution parameters and commands are provided in the supplementary notes. Reference free assembly quality estimation with YAK We used YAK version 0.1-r62-dirty to derive estimated quality of the assemblies. For each sample, we generated a k-mer set with k=31 from Illumina short-reads of the same sample. Then we ran YAK to determine the quality of each haplotype sequence we generated during the hifiasm assembly generation process. YAK reports a Q value for assemblies which is a Phred-scale contig base error-rate derived by comparing 31-mers in contigs and 31-mers in the short reads of the same sample. We report the balanced_qv value reported by YAK as the estimated quality value of the assembly. The parameters and commands used are provided in the supplementary notes. Assembly-based small variant calling assessment using dipcall We used dipcall version 0.3 to derive small variants from the phased assemblies. Dipcall aligns the contigs to a reference sequence and derives a set of variants from the contig to reference alignment. We then compared the derived small variants against the Genome-In-a-Bottle truth set of the associated sample. For all male samples we used -x hs38.PAR.bed parameter as suggested in the documentation of dipcall. To assess the small variants derived from the HG002, HG003 and HG004 sample we used GRCh38 as reference and GIAB_v4.2.1 as the truth set for small variants. For HG005, HG006, HG007 samples, we used GRCh37 and GIAB_v3.3.2 as the truth sets. All truth sets are the latest available truth set from GIAB for the associated sample. We used hap.py to assess the quality of the variant calls. Commands and parameters used to run dipcall are provided in the supplementary notes. Gene completeness assessment with asmgene We used asmgene version v2.21 to determine the gene completeness of the assemblies. First, we aligned the Ensembl cDNA sequences release 102 to the GRCh38 reference genome using minimap2 (v2.21) and found 35374 single-copy genes and 1253 multi-copy genes in the reference. Then for each sample, we aligned the sample cDNA sequences to each of the haplotype sequence of the assemblies and derived how many single-copy genes remained single copy (full_sgl reported by asmgene) and how many were duplicated (full_dup reported by asmgene). Similarly, we reported how many multi-copy genes remained multi-copy in the assembly (dup_cnt reported by asmgene). We derived the following metrics to assess the gene completeness of the assemblies: Assembly statistics with QUAST We used QUAST version v5.0.2 to derive assembly N50, NG50, total assembly size and genome completeness of the assembly. 
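The N50 and NG50 statistics reported here reduce to a short computation (their definitions are spelled out in the QUAST paragraph below); the sketch that follows uses toy contig lengths, not values from the assemblies in this study.

```python
def nx50(contig_lengths, target_length=None):
    """Length of the shortest contig at which 50% of `target_length` is covered.

    With target_length=None this is N50 (50% of the assembly itself);
    passing an estimated genome size (e.g. ~3.27 Gb for GRCh38) gives NG50.
    """
    lengths = sorted(contig_lengths, reverse=True)
    target = 0.5 * (target_length if target_length else sum(lengths))
    covered = 0
    for length in lengths:
        covered += length
        if covered >= target:
            return length
    return 0  # the assembly does not reach 50% of the target length

# Toy example only (four contigs totalling 3.2 Gb).
contigs = [1_700_000_000, 900_000_000, 400_000_000, 200_000_000]
print(nx50(contigs))                     # N50 of the toy assembly
print(nx50(contigs, 3_272_116_950))      # NG50 against the GRCh38 genome size
```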
QUAST is a reference-based assembly evaluation method that uses a reference sequence of the same sample or a related genome to determine the quality of the assembly. For our analysis, we used GRCh38 as the reference for each assembly. We used N50, which is the sequence length of the shortest contig at 50% of the total assembly length, to determine the contiguity of the assembly. NG50 is the sequence length of the shortest contig at 50% of the estimated genome length. For our human genome assemblies, we used GRCh38 as the reference sequence, so we used 3,272,116,950 bp (~3.2 Gb) as the estimated genome length to derive NG50. We only report N50, NG50, total assembly length and genome completeness against GRCh38 from the QUAST report. Detailed parameters and commands are provided in the supplementary notes. Variant Calling DeepVariant performs variant calling in three stages: make_examples, call_variants, and postprocess_variants. The make_examples stage identifies candidate variants and generates input matrices containing pileup information. call_variants runs the input matrices through a neural network model, and postprocess_variants converts the neural network outputs to a variant call and outputs a VCF. We
2021-09-03T13:15:32.515Z
2021-08-31T00:00:00.000
{ "year": 2021, "sha1": "4dda0b2878e9a1d0664f3e4a49dac47064ab7ad9", "oa_license": "CCBY", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2021/08/31/2021.08.31.458403.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "4dda0b2878e9a1d0664f3e4a49dac47064ab7ad9", "s2fieldsofstudy": [ "Computer Science", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
270100579
pes2o/s2orc
v3-fos-license
Advancing Livestock Technology: Intelligent Systemization for Enhanced Productivity, Welfare, and Sustainability The livestock industry is undergoing significant transformation with the integration of intelligent technologies aimed at enhancing productivity, welfare, and sustainability. This review explores the latest advancements in intelligent systemization (IS), including real-time monitoring, machine learning (ML), and the Internet of Things (IoT), and their impacts on livestock farming. The aim of this study is to provide a comprehensive overview of how these technologies can address industry challenges by improving animal health, optimizing resource use, and promoting sustainable practices. The methods involve an extensive review of the current literature and case studies on intelligent monitoring, data analytics, automation in feeding and climate control, and renewable energy integration. The results indicate that IS enhances livestock well-being through real-time health monitoring and early disease detection, optimizes feeding efficiency, and reduces operational costs through automation. Furthermore, these technologies contribute to environmental sustainability by minimizing waste and reducing the ecological footprint of livestock farming. This study highlights the transformative potential of intelligent technologies in creating a more efficient, humane, and sustainable livestock industry. Introduction In recent years, the agricultural sector has undergone profound transformation driven by technological advancements that have changed traditional farming practices, and one key area that has witnessed significant progress is livestock technology. The integration of cutting-edge technologies has not only improved the efficiency of livestock production, but also paved the way for sustainable and more humane practices [1].
Advanced livestock technology comprises a wide range of innovations, including precision farming, data analytics, genetic engineering, and intelligent systemization [2].These advancements aim to address the challenges faced by the livestock industry, such as feeding a growing global population, minimizing environmental impact, and ensuring animal welfare.Precision farming within the realm of livestock technology involves the strategic use of information technology, satellite positioning data, and sensor technologies [3].These tools aid farmers in optimizing the management of their resources, from feed distribution to monitoring animal health, with the purpose of maximizing productivity while minimizing waste and environmental impact.The advent of big data analytics has ushered in a new era for livestock farming.Data analytics tools process vast amounts of information collected from various sources, such as animal wearables, environmental sensors, and health records [4].This wealth of data enables farmers to make informed decisions, identify patterns, and implement strategies that enhance their overall productivity and efficiency.These advanced technologies leverage genetic engineering to enhance desirable traits in animals [5].As previously reported, through selective breeding and genetic modifications, farmers can develop livestock with improved disease resistance, higher reproduction rates, and optimized feed conversion rates [6].This not only contributes to better animal welfare, but also enhances the economic viability of livestock farming.Moreover, at the core of advanced livestock technology is the concept of intelligent systemization, which involves the integration of artificial intelligence (AI), machine learning (ML), and automation into various aspects of livestock production, with the main purposes of IS being to analyze data in real-time, enable proactive decision making, the optimization of resource utilization, and streamlining overall farm management [7,8].These will lead to more sustainable practices, which are the key in the advancement of livestock technology.From mitigating environmental impacts to promoting responsible resource management, technology plays a crucial role in fostering sustainable practices, such as the precision application of fertilizers, reduced water consumption, and the integration of renewable energy sources into power farm operations [9].These tools also can help with the early detection of diseases, stress, or nutritional deficiencies, which allows farmers to intervene promptly, reducing the need for antibiotics and enhancing overall animal welfare.At the same time, robotics and automation have found their way into livestock farms, transforming labor-intensive tasks [10].Automated feeding systems (AFS), robotic milking machines, and AI-powered sorting systems contribute to increased operational efficiency, which helps farmers to focus on strategic decision making and animal care while routine tasks are handled seamlessly by technological solutions.Furthermore, the integration of smart infrastructure, which includes connected devices and Internet of Things (IoT) applications, helps to facilitate the real-time monitoring of environmental conditions, such as temperature, humidity, and air quality [2,9,11].Smart infrastructure enhances the overall living conditions for livestock, contributing to healthier and more comfortable environments.Also, livestock farmers are increasingly leveraging collaborative platforms that connect them with industry 
experts, researchers, and fellow farmers.These platforms facilitate the sharing of best practices, research findings, and innovative solutions [12][13][14].The collaborative nature of these platforms accelerates the adoption of advanced technologies and fosters a community-driven approach to livestock farming. The aim of this paper is to delve into the recent advancements and practical applications of intelligent technologies in livestock farming.The paper focuses on how these technologies can be systematized to improve productivity, animal welfare, and sustainability.By examining various intelligent tools and methodologies, the paper provides insights into the challenges and opportunities within the industry.The goal is to offer a comprehensive understanding of how IS can transform livestock farming into a more efficient and sustainable sector. Importance of Intelligent Systemization in Livestock Production Intelligent systemization (IS) plays an important role in shaping the future of livestock production.This involves the integration of AI, ML, and automation to optimize various aspects of farming operations.The importance of IS in livestock production can be understood through its impact on several key areas, as presented in Table 1.To facilitate effective predictions and decision making, techniques such as data analytics and ML are required.ML models have recently been utilized to forecast various key variables relevant to decision making, such as sales and feed performance [15].Consequently, the acquisition and analysis of data play pivotal roles in gaining a competitive edge in the agricultural sector.As reported by Lee and Shin [16], the accessibility and availability of well-organized data are essential for the efficacy of ML models.Thus, exploring the complete process of data analytics is imperative for the practical application of enhanced data analytics techniques, as shown by others [17].Data-driven decision processes in animal farms involve the use of real-time data in diverse aspects of livestock management.Utilizing data for performance recording and animal identification is vital for farmers, as the automated data acquisition of animal identity and trait measurements provides crucial insights into animal health, productivity, and efficiency, in line with ICAR guidelines for ruminants [18].Adopting datadriven decision making revolutionizes livestock management by offering unprecedented insights into farm operations, thus enhancing efficiency and productivity.Through data analytics and technology, farmers and livestock managers can optimize resources and increase farm sustainability.The primary benefits of using data-driven decisions in livestock management include capital optimization, market demands prediction, environmental sustainability, preventive health care, and improved reproduction methods [19]. 
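As a minimal, hypothetical illustration of the kind of ML forecasting of feed performance mentioned above, the sketch below fits a regressor to synthetic farm records; the feature set, the generated data, and the choice of model are all assumptions for demonstration only, not a method taken from the cited studies.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical records: [mean barn temperature (C), humidity (%), feed offered (kg/day)]
X = rng.normal(loc=[18.0, 65.0, 7.5], scale=[4.0, 10.0, 1.0], size=(500, 3))
# Synthetic daily weight gain (kg) with a mild temperature penalty plus noise
y = 0.9 + 0.05 * X[:, 2] - 0.01 * np.abs(X[:, 0] - 18.0) + rng.normal(0, 0.05, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out records:", round(model.score(X_test, y_test), 3))
```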
Since the 1970s, precision livestock farming (PLF) has evolved significantly, from electronic milk meters for cows to behavior-based estrus detection and rumination activity monitoring.PLF integrates information technology, data science, and animal husbandry [18,19].Research in PLF is driven by perspectives from animal science, veterinary medicine, computer science, agricultural engineering, and environmental science.The demand for animal-derived products is rising due to population growth and increased affluence, necessitating the optimization of animal feed intake, disease mitigation, and welfare improvement [20,21].Collaboration among farmers, feed manufacturers, and stakeholders is crucial for achieving optimized animal-based production, focusing on catering to individual animal needs, known as PLF.This transition involves collecting and analyzing comprehensive data on animal growth, production outcomes, disease prevalence, behavior, and environmental conditions.This data-driven approach aligns with the broader trend of precision farming observed in various agricultural disciplines [22].Intelligent automation has revolutionized livestock feeding through automated feeding systems (AFSs), which dispense precise amounts of feed at optimal times, ensuring that each animal receives the necessary nutrients for growth and productivity.This reduces wastage, enhances cost efficiency, and optimizes resources.Although AFSs are popular in the United States, they are relatively new in European countries [23].These systems alleviate the workload of livestock farmers, saving time and enhancing flexibility, allowing them to focus more on organizational duties [24].Hansen et al. [25] found that automatic milking systems (AMSs) improve farmers' well-being by reducing barn operation responsibilities.Lovarelli et al. [26] highlighted the potential of automatic systems in precision livestock farming (PLF) to enhance economic viability.Bragaglio et al. [27] compared traditional dairy farms to precision-agriculture-based farms, noting that AMSs and AFSs reduced energy consumption and environmental impact.Additionally, Rodrigues et al. [28] reported that administering a total mixed ration with an AFS can decrease ammonia (NH3-N) emissions from dairy cows' waste compared to conventional feeding, though there was no significant difference in greenhouse gas (GHG) emissions. Enhanced animal welfare is another significant benefit of IS, which monitors and responds to individual needs.Automated climate control systems, for instance, can adjust temperature and ventilation based on real-time data (such as the number of animals, NH3 levels, and dust particles), ensuring optimal conditions [29].This not only improves livestock well-being, but also enhances the quality of final products.An overview of different ISs used in agriculture and livestock farming for decision making is presented in Table 2.By analyzing complex datasets, managers can gain a comprehensive overview, allowing them to optimize the functional flow of their operations, as mentioned by Wang et al. [30]. 
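A toy sketch of the rule-based side of such automated climate control (ventilation responding to temperature, NH3, and dust readings) is shown below; the set-points and the control formula are illustrative assumptions, not recommendations drawn from the cited studies.

```python
from dataclasses import dataclass

@dataclass
class BarnReading:
    temperature_c: float
    nh3_ppm: float
    dust_mg_m3: float

# Illustrative set-points only, not values from the reviewed literature.
TEMP_MAX_C = 24.0
NH3_MAX_PPM = 20.0
DUST_MAX_MG_M3 = 2.0

def ventilation_level(reading: BarnReading) -> float:
    """Return a ventilation duty cycle in [0.2, 1.0] from barn sensor readings."""
    level = 0.2  # baseline air exchange
    if reading.temperature_c > TEMP_MAX_C:
        level += 0.1 * (reading.temperature_c - TEMP_MAX_C)
    if reading.nh3_ppm > NH3_MAX_PPM:
        level += 0.3
    if reading.dust_mg_m3 > DUST_MAX_MG_M3:
        level += 0.2
    return min(level, 1.0)

print(ventilation_level(BarnReading(27.5, 24.0, 1.1)))  # hot barn, high NH3 -> ~0.85
```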
The integration of data analysis with new technologies has significantly boosted herd productivity in the past decade [31].Platforms like HerdX democratize data analysis access, accelerating technology adoption in animal farming.Genetic editing, notably CRISPR-Cas, and omics technologies have transformed animal breeding [31].Whole-genome sequencing and genomic information integration have become pivotal.Third-generation sequencing techniques have spurred technological breakthroughs in omics, aiding in gene function discovery, accurate genetic variation detection, and precise large-scale sequencing data analysis [32].In transcriptomics, they offer a detailed understanding of gene expression related to production, reproduction, and well-being. Data integration into a Data Hub enables continuous monitoring and informed decision making [33].Farm applications utilize advanced analytics, integration, and real-time predictive tools like Dairy Brain for enhanced decision making.Data-driven decision making is crucial for addressing agriculture's environmental impact [34].Online platforms such as CAP'2ER, CoolFarmTool, Farm Carbon Toolkit, and AGRECALC democratize carbon footprint calculations, aiding users in understanding and reducing emissions [34].These tools offer systematic methods for calculating carbon footprints, raising awareness, and facilitating emission reduction actions, allowing users to explore cost-efficient mitigation measures. Remote monitoring and management IoT, cloud computing, satellite imagery, drones, multispectral cameras. The ability to remotely monitor and manage livestock operations is a game-changer for farmers.IS enables real-time surveillance of farms, allowing farmers to address issues promptly and efficiently.This is particularly beneficial for large-scale operations where physical presence may be challenging [43,44] Smart farming Knowledge base, multi-agent technology Smart farming involves different stages like collection of information on the farm, field, culture, data analysis, and decision making and implementation of decisions-agrotechnical operation [45,46] The integration of AI, ML, and automation in livestock farming can drive significant improvements in productivity, animal welfare, and overall sustainability (Figure 1).Enhanced productivity is achieved through precision feeding, where AI systems analyze data to determine the optimal feeding regimes for each animal, improving growth rates and feed efficiency.Health monitoring benefits from ML algorithms that predict and detect diseases early by analyzing patterns in behavior, temperature, and movement, reducing illness and associated productivity losses [35].Additionally, automated reproductive management systems monitor cycles and detect optimal breeding times, improving reproductive success rates.Improved animal welfare results from behavioral monitoring, where AI tracks and analyzes animal behavior to identify signs of stress, discomfort, or illness, allowing for early interventions.Automated systems also maintain the optimal living conditions, such as temperature, humidity, and ventilation, reducing stress and promoting better health [23].Furthermore, automation reduces the need for frequent human-animal interactions, minimizing stress and creating a more stable environment.Sustainability is enhanced through AI-driven systems that optimize resource use, such as water, feed, and energy, reducing waste and lowering the environmental footprint of farming operations [41,42].Improved feed efficiency and 
health management also help to reduce methane emissions from livestock, contributing to lower greenhouse gas emissions. Comprehensive data collection and analysis enable more informed decision making, promoting practices that are both economically and environmentally sustainable [15,16]. Intelligent Systematization in Livestock Growth Analysis IS in livestock growth analysis represents a pioneering approach at the intersection of agriculture, data science, and AI, as reported by Pivoto [47]. As the global demand for food continues to rise alongside the complexities of livestock management, there is an imperative need for innovative methodologies to optimize production efficiency, ensure animal welfare, and sustainably meet consumer demands. Livestock growth analysis traditionally relies on empirical observations and manual data collection, which can be time-consuming, error-prone, and limited in scope. However, with the advent of IS, powered by advanced computational techniques such as ML, data mining, and predictive analytics, a paradigm shift is underway. IS, rooted in advanced computational techniques and a data-driven approach, is fundamental for monitoring and holds immense potential to positively impact both productivity and quality within various sectors, including agriculture and livestock farming [48,49]. Firstly, in terms of productivity, IS enables farmers to optimize their resource allocation and management practices [50]. Secondly, IS allows for the precise monitoring of environmental conditions, such as temperature, humidity, and air quality [51], enabling farmers to create optimal conditions for animal growth and productivity. Furthermore, IS facilitates automation and process optimization, reducing manual labor and operational inefficiencies [52]. Through the integration of sensors, actuators, and IoT devices, tasks such as feeding, watering, and monitoring can be automated, freeing up valuable time and resources for other critical activities. This increased efficiency not only boosts productivity, but also reduces costs and enhances overall farm profitability.
Utilizing Advanced Technologies for Animal Growth Monitoring IS gathers the vast amounts of data generated within livestock farming operations, encompassing factors such as animal health records, genetic profiles, environmental conditions, and feed composition.By integrating these diverse datasets and employing sophisticated algorithms, it enables real-time monitoring, analysis, and decision making, fostering a holistic understanding of livestock growth dynamics.One of the primary goals of IS in livestock growth analysis is to optimize resource allocation and management practices.As reported by others, through predictive modeling and optimization algorithms, farmers can anticipate growth trajectories, identify potential health issues, and optimize feed formulations tailored to individual animals or specific groups [15,53].This personalized approach not only enhances production efficiency, but also minimizes waste and environmental impact.Moreover, IS empowers farmers with actionable insights to enhance animal welfare and mitigate risks [54].By detecting early signs of disease or stress through data patterns and behavioral analysis, interventions can be implemented proactively, leading to improved growth, health outcomes, and reduced mortality rates.Furthermore, IS facilitates continuous learning and adaptation within livestock farming systems [55,56].By leveraging historical data and feedback loops, algorithms can refine their predictive accuracy over time, adapting to changing environmental conditions, market dynamics, and regulatory requirements.The adoption of IS in livestock growth analysis is not without challenges.Data privacy, the interoperability of systems, and the need for domain expertise in both agriculture and data science require careful consideration.However, the potential benefits, including increased productivity, sustainability, and profitability, will probably far outweigh these challenges. 
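One way the growth-trajectory prediction described above can be realised is by fitting a standard growth function to routine weight records and extrapolating forward. The sketch below fits a Gompertz curve with SciPy; the ages, weights, and starting values are synthetic and purely illustrative.

import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, w_max, b, k):
    """Gompertz growth curve: asymptotic weight w_max, shape b, rate k."""
    return w_max * np.exp(-b * np.exp(-k * t))

age_days = np.array([0, 30, 60, 90, 120, 150, 180])
weight_kg = np.array([40, 62, 95, 135, 175, 210, 238])   # synthetic records

params, _ = curve_fit(gompertz, age_days, weight_kg, p0=[300.0, 2.0, 0.02])
w_max, b, k = params
print(f"Projected weight at day 240: {gompertz(240, *params):.0f} kg "
      f"(estimated asymptote ~{w_max:.0f} kg)")

A flagged gap between an animal's observed weight and its fitted trajectory is then a simple, interpretable trigger for closer inspection or a ration change.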
Data-Driven Approaches in Analyzing Livestock Development In analyzing livestock development, data-driven approaches indicate a new era in agriculture, leveraging the vast amounts of data generated within livestock farming operations to optimize productivity, welfare, and sustainability [57,58].These approaches collect diverse datasets, encompassing factors such as animal health records, genetic profiles, environmental conditions, and feed composition, fostering a comprehensive understanding of livestock development dynamics.One of the key advantages of data-driven approaches is their ability to provide actionable insights for optimizing livestock development strategies.By analyzing historical data and identifying patterns, trends, and correlations, farmers can make informed decisions regarding breeding programs, nutrition management, disease prevention, and growth optimizations [59].This personalized approach not only maximizes productivity, but also enhances animal welfare and minimizes environmental impact.Furthermore, data-driven approaches facilitate the early detection and mitigation of risks to livestock development.By leveraging predictive analytics and anomaly detection algorithms, farmers can identify potential health issues, environmental stressors, or management inefficiencies before they escalate, enabling timely interventions to minimize losses and optimize outcomes [60].Moreover, data-driven approaches foster continuous improvement and innovation within livestock development systems.By analyzing performance metrics, feedback loops, and experimental data, farmers can refine their practices, adapt to changing conditions, and optimize resource allocation over time [61].This iterative process of learning and adaptation enables farmers to stay ahead of challenges and capitalize on emerging opportunities in the livestock industry [62].However, the adoption of data-driven approaches in analyzing livestock development is not without challenges.Integrating AI in livestock farming faces several key challenges, including ensuring data quality, as the accuracy, consistency, and reliability of collected data are essential for effective AI applications [63].Another issue is the interoperability of systems, as different technologies need to work seamlessly together despite varying standards and protocols [64].Additionally, there is a need for skilled personnel with expertise in both agriculture and data science to effectively implement and manage AI technologies [65].Additionally, concerns regarding data privacy, security, and regulatory compliance require careful consideration to ensure the ethical and responsible use of data [66].Overall, as mentioned above, the power of data and advanced analytics hold the promise of maximizing productivity, enhancing animal welfare, and ensuring the long-term viability of livestock farming operations in an increasingly complex and interconnected world. Intelligent Systematization Impacts on Productivity and Quality In terms of quality, IS enables farmers to implement precise and targeted interventions to improve the health, genetics, and overall well-being of livestock.By analyzing vast amounts of data, including genetic profiles, health records, and environmental factors, farmers can identify and address potential issues before they impact product quality.For example, recently, it was reported by Hassan et al. 
[67] that by monitoring animal behavior and health indicators in real time, farmers can detect signs of stress or disease early on, allowing for timely interventions and preventing the deterioration of product quality.A recent study conducted by Pak et al. [68] concluded that digital livestock systems have the potential to enhance the well-being and behavior of laying hens, while also positively impacting egg cholesterol and fatty acid compositions when compared to traditional livestock methods.The same author later demonstrated that the use of a digital livestock system can improve the growth performance of swine compared to both a conventional livestock system without a probiotic mixture and a conventional system with a probiotic mixture [69].Other authors reported that the implementation of a smart poultry feeding system demonstrated significant improvements in egg production, egg quality parameters, blood parameters, immune cell growth, and cecal microflora balance compared to conventional feeding systems [70].They also concluded that this technology, which integrates information and communications technology for the remote control management of livestock production environments, offers promising potential for enhancing the productivity and welfare of laying hens.Further, IS facilitates traceability and quality assurance throughout the production process.By recording and analyzing data at every stage, from breeding and feeding to processing and distribution, farmers can track the origins of products, monitor their quality attributes, and ensure compliance with regulatory standards and consumer expectations [71].This transparency not only enhances consumer trust, but also enables farmers to command premium prices for high-quality products.The adoption of IS in agriculture and livestock farming has the potential to revolutionize productivity and quality across the entire value chain.Moreover, in the context of greenhouse gas emissions (GHG), Balafoutis et al. [72] stated that it is imperative to prioritize further research into quantifying the effects of IS and precision agriculture technologies on reducing GHG emissions, as well as their impact on productivity and income.They reported that the existing evidence suggests that these systems significantly contribute to mitigating climate change while enhancing production efficiency in terms of yield and economic outcomes.Recently, Di Vaio et al. [73] reported that digital technologies such as AI, ML, IoT, Cloud, and Blockchain are crucial in achieving supply chain traceability in the post-COVID era, as a pivotal method for safeguarding consumers and enhancing agricultural production quality.Therefore, it is essential to investigate these aspects thoroughly. Automated Facilities in Livestock Management In the context of modern agriculture, one notable advancement is the implementation of automated facilities in livestock management, which comprise a wide array of applications (monitoring animal health and behavior to optimizing feed distribution and waste management).These systems are typically deployed in various agricultural settings, including dairy farms, poultry farms, and swine production facilities, helping farmers to streamline operations, improve productivity, and enhance overall animal welfare. 
One of the key functionalities of automated systems in livestock management is environmental sensing and control [74].These systems employ a network of sensors to monitor crucial environmental parameters such as temperature, humidity, air quality, and lighting conditions within livestock facilities.For instance, in a poultry farm, sensors can detect fluctuations in temperature and adjust ventilation systems accordingly to maintain the optimal conditions for bird growth and health, as reported by others [68,69].Moreover, automated systems can regulate feed and water delivery based on real-time environmental data, ensuring that animals always have access to sufficient nutrients and hydration.By optimizing environmental conditions, these systems help to mitigate stress levels among livestock, reduce the risk of disease outbreaks, and enhance overall production efficiency and management.Another significant advantage of automated facilities in livestock management is the reduction in manual labor and the consequent efficiency gains.Traditionally, tasks such as milking, egg collection, feeding, watering, and waste removal require con-siderable time and labor investments from farm workers, as reported by Bhoj et al. [75].However, with the integration of automated systems, many of these repetitive tasks can be mechanized and controlled remotely through centralized management platforms.For example, AFSs utilize programmable feeders equipped with sensors to dispense precise amounts of feed at scheduled intervals, eliminating the need for manual feeding by farm workers.Similarly, automated watering systems ensure a continuous supply of clean water to livestock, reducing the labor associated with monitoring and refilling water troughs.Furthermore, automated waste management systems, such as robotic manure scrapers and composting units, streamline the process of waste removal and recycling, minimizing the labor required for manual cleaning and disposal, as shown in a case study from [76].However, a significant concern arises with the adoption of automated facilities in livestock management such as the potential displacement of traditional manual labor jobs and the consequent need for a shift in the required skill sets for agricultural workers.Nevertheless, by automating these labor-intensive tasks, farmers can efficiently allocate resources, concentrate on higher-value activities, and attain enhanced operational scalability.The implementation of automated facilities in livestock management offers multiple benefits, ranging from environmental sensing and control to labor reduction and efficiency gains.By supporting and implementing advanced technologies, farmers can optimize animal welfare, improve production outcomes, and sustainably meet the growing demand for high-quality animal products in a rapidly evolving agricultural landscape. Maintenance and Management In the complex world of livestock operations, maintenance and management are indispensable pillars that ensure the smooth functioning, efficiency, and sustainability of agricultural enterprises, as reported by Stoliarchuk et al. [77].From the upkeep of equipment and infrastructure to the management of resources and personnel, effective maintenance and management practices are essential for optimizing productivity, minimizing risks, and safeguarding the well-being of both animals and farmers. 
Maintenance encompasses a wide range of activities aimed at preserving the functionality and longevity of the equipment, facilities, and infrastructure used in livestock operations.This includes routine inspections, repairs, and preventive maintenance tasks to identify and address potential issues before they escalate into costly problems.Additionally, maintenance involves activities such as remaking, redesigning, and rethinking processes to improve efficiency and sustainability [78].For example, the regular servicing of tractors, feeding equipment, and ventilation systems helps to prevent breakdown and ensures uninterrupted operation, minimizing downtime and maximizing productivity.Furthermore, proactive maintenance strategies, such as predictive maintenance, leverage data analytics and sensor technologies to anticipate equipment failures and schedule maintenance activities accordingly.By monitoring equipment performance metrics and detecting early signs of wear or malfunction, farmers can take preemptive action to prevent costly breakdowns and extend the lifespan of their assets.This not only reduces maintenance costs, but also enhances operational reliability and efficiency. Effective management practices are equally critical for the success of livestock operations, encompassing a wide range of responsibilities, including animal health and welfare, resource management, financial planning, and regulatory compliance [79].Skilled managers must balance competing priorities and make informed decisions to optimize resource allocation, mitigate risks, and achieve operational objectives.For example, in the realm of animal health management, effective disease prevention and control strategies are paramount for safeguarding the health and welfare of livestock.This may involve implementing biosecurity measures, vaccination programs, and quarantine protocols to prevent the spread of infectious diseases and minimize the risk of outbreaks.Additionally, sound financial management practices, such as budgeting, cost analysis, and revenue forecasting, are essential for ensuring the long-term viability and sustainability of livestock operations.Moreover, effective management practices extend beyond the farm gate to encompass broader issues such as environmental stewardship, community relations, and market access [80].Sustainable management practices aim to minimize the environmental footprint of livestock operations, reduce resource consumption, and promote biodiversity conservation.By adopting sustainable practices such as rotational grazing, manure management, and energy-efficient technologies, farmers can minimize their impact on the environment while enhancing the resilience and profitability of their operations. All in all, both maintenance and management are integral components of successful livestock operations, ensuring the efficient operation, productivity, and sustainability of agricultural enterprises.By implementing proactive maintenance strategies and adopting sound management practices, farmers can optimize resource utilization, minimize risks, and achieve their operational objectives in a rapidly evolving agricultural landscape. 
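A minimal version of the predictive-maintenance idea described above, watching an equipment health metric drift away from its recent baseline, might look like the following sketch. The vibration readings and the three-sigma threshold are invented for illustration only.

import statistics

def needs_maintenance(history, latest, z_threshold=3.0):
    """Flag `latest` if it sits more than z_threshold standard deviations above the baseline."""
    mean = statistics.mean(history)
    std = statistics.stdev(history)
    if std == 0:
        return latest != mean
    return (latest - mean) / std > z_threshold

vibration_mm_s = [2.1, 2.0, 2.2, 2.1, 2.3, 2.2, 2.1, 2.4]   # recent baseline window
print(needs_maintenance(vibration_mm_s, latest=3.9))          # True -> schedule a service visit

In practice such a rule would be one input among several (operating hours, temperature, current draw), but even a univariate check of this kind can help prevent an unplanned ventilation or feeding-line outage.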
Importance of Accurate Data in Livestock Management In the context of livestock management, accurate data serve as the cornerstone upon which informed decisions are made and operational efficiency is achieved [81].Livestock operations encompass a multitude of variables, including PLF, animal health, welfare, productivity metrics, and environmental conditions.Without precise data collection and analysis, farmers are left navigating these complexities blindfolded, risking inefficiencies, suboptimal outcomes, and even potential harm to their animals.Accurate data enable farmers to monitor and evaluate various aspects of their livestock operations with precision [82].Technologies such as IoT sensors, RFID tagging, and automated data collection systems allow for the continuous monitoring of key parameters, including animal behavior, health indicators, feed intake, and environmental factors such as temperature and humidity, as reported recently [83,84].Through data collection in real time, farmers gain invaluable insights into the well-being of their animals and the overall performance of their operations.Moreover, accurate data facilitate proactive decision making and preventive maintenance strategies.These aspects are very important, because by analyzing historical data trends and patterns, farmers can identify potential issues before they escalate into costly problems [85].For example, by tracking changes in animal behavior or health indicators, farmers can detect early signs of disease or distress and intervene promptly, thus minimizing the risk of disease outbreaks and reducing the need for expensive treatments.Furthermore, precise data enable farmers to optimize resource utilization and operational efficiency.By monitoring production metrics such as milk yield, weight gain, or egg production in real time, farmers can adjust feeding regimens, breeding programs, and other management practices to maximize productivity while minimizing waste.Additionally, accurate data on environmental conditions allow farmers to optimize housing conditions, ventilation systems, and other infrastructure elements to ensure the comfort and well-being of their animals. 
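As a concrete, hedged example of turning accurate environmental data into a housing adjustment, the sketch below implements a simple proportional ventilation rule. It is not modelled on any specific commercial controller; the sensor interface, setpoints, and gains are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Reading:
    temperature_c: float
    humidity_pct: float

def ventilation_level(reading: Reading,
                      temp_setpoint: float = 22.0,
                      humidity_setpoint: float = 65.0) -> float:
    """Return a fan duty cycle in [0, 1] from simple proportional rules."""
    temp_error = max(0.0, reading.temperature_c - temp_setpoint)
    humidity_error = max(0.0, reading.humidity_pct - humidity_setpoint)
    # Each degree above the temperature setpoint adds 10% duty;
    # each percentage point of excess humidity adds 1%.
    duty = 0.2 + 0.10 * temp_error + 0.01 * humidity_error
    return min(1.0, duty)

# A warm, humid afternoon reading from a (hypothetical) house sensor:
print(ventilation_level(Reading(temperature_c=27.5, humidity_pct=78.0)))   # ~0.88

Real deployments typically add hysteresis and minimum ventilation rates, but the core idea, setpoints driven by reliable sensor data, is the one the preceding paragraph describes.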
Real-Time Monitoring and Decision Support Real-time monitoring technologies, enabled by IoT devices, wireless sensors, and cloud computing, provide farmers with instant access to critical information about their livestock and operations, regardless of their location.These technologies allow farmers to monitor key parameters such as animal health, behavior, and environmental conditions continuously [86].For example, wearable devices equipped with sensors can track vital signs such as heart rate, body temperature, and activity levels, providing real-time insights into the health status of individual animals.Automated monitoring systems can also track feed consumption, water intake, and other production metrics, allowing farmers to detect deviations from normal patterns and take corrective action as needed [15].Moreover, real-time monitoring systems enable farmers to respond swiftly to changing conditions and emerging issues.Alerts and notifications can be configured to notify farmers of potential problems, such as equipment failures, power outages, or adverse weather conditions, allowing them to intervene promptly and mitigate the impact on their operations.In some cases, advanced algorithms and AI can analyze the data collected by monitoring systems, providing farmers with actionable insights and predictive analytics to inform their decision making [87].By leveraging real-time monitoring technologies and decision support systems, farmers can optimize operational efficiency, improve animal welfare, and enhance overall productivity.These technologies enable farmers to make data-driven decisions, minimize risks, and seize opportunities in a rapidly evolving agricultural landscape, ultimately driving greater profitability and sustainability in the livestock industry. 
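The alerting behaviour described above can be illustrated with a per-animal baseline check. The hypothetical sketch below flags a sharp drop in daily activity relative to an animal's own rolling average, the kind of deviation that often precedes visible illness; all numbers are placeholders.

from collections import deque

class ActivityMonitor:
    def __init__(self, window: int = 7, drop_fraction: float = 0.30):
        self.readings = deque(maxlen=window)
        self.drop_fraction = drop_fraction

    def update(self, steps_today: int) -> bool:
        """Return True when today's activity falls well below the rolling baseline."""
        alert = False
        if len(self.readings) == self.readings.maxlen:
            baseline = sum(self.readings) / len(self.readings)
            alert = steps_today < (1.0 - self.drop_fraction) * baseline
        self.readings.append(steps_today)
        return alert

monitor = ActivityMonitor()
for day, steps in enumerate([5200, 5400, 5100, 5300, 5500, 5250, 5350, 3400], start=1):
    if monitor.update(steps):
        print(f"Day {day}: activity well below this animal's baseline - notify the stockperson")

Such a rule is deliberately simple; in a production system the threshold would be tuned per species and sensor, and the notification routed through whatever messaging channel the farm already uses.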
Welfare Livestock Farming Welfare livestock farming (WLF) represents a significant departure from conventional farming practices, placing a strong emphasis on the well-being and humane treatment of animals.At its core, this approach seeks to redefine the relationship between humans and livestock, recognizing animals as sentient beings deserving of compassion, dignity, and respect.Unlike traditional farming methods that prioritize maximum production at the expense of animal welfare, WLF adopts a holistic approach that considers the physical, mental, and emotional needs of animals.This paradigm shift is fueled by growing awareness among consumers, activists, and industry stakeholders regarding the ethical implications of intensive farming practices [88,89].Central to WLF is the concept of providing animals with environments and conditions that allow them to express their natural behaviors and lead fulfilling lives.This entails moving away from confinement systems towards more spacious and enriched living environments.For example, instead of cramped cages, pigs may roam in open-air pastures, and chickens may have access to outdoor spaces for foraging and dust bathing.Moreover, WLF prioritizes the implementation of humane handling and management practices throughout the entire lifecycle of animals, from birth to slaughter [90,91].This includes minimizing stress during transportation, ensuring prompt access to veterinary care, and employing humane slaughter methods that prioritize a quick and painless death.Furthermore, WLF integrates advancements in technology and innovation to enhance animal welfare outcomes.From precision feeding systems that tailor nutrition to individual animal needs to monitoring devices that track health parameters in real time, technology plays a vital role in optimizing animal care and management.Ultimately, WLF represents a shift towards a more ethical, sustainable, and compassionate approach to animal agriculture.By prioritizing the well-being of animals and acknowledging their inherent value, this farming model not only meets the demands of consumers who seek ethically produced food, but also fosters a more harmonious relationship between humans and animals.As society continues to evolve, WLF stands as a beacon of progress, demonstrating that agriculture can thrive while respecting the dignity and welfare of all living beings. Integrating Technology for Animal Welfare In the agriculture context, the integration of technology has sparked a revolution, transforming traditional farming practices into efficient and sustainable systems.One significant aspect of this transformation is evident in WLF, where technology is used not only for maximizing productivity, but also for enhancing the welfare of animals.This paradigm shift reveals a commitment to the compassionate and ethical treatment of livestock while meeting the demands of a growing global population.The application of technology in WLF encompasses various aspects, from advanced monitoring systems to precision feeding techniques [92,93].For instance, sensors and monitoring devices are utilized to track vital parameters such as temperature, humidity, and even the behavioral patterns of animals.This data-driven approach allows farmers to detect signs of distress or illness early, enabling timely interventions and medical care.Moreover, AFSs ensure that animals receive the appropriate nutrition tailored and designed to their specific needs, promoting their overall health and well-being [90]. 
Ethical Considerations in Livestock Technology As technology continues to permeate every aspect of livestock farming, ethical considerations take center stage in the discourse surrounding WLF.Central to this discussion is the acknowledgment of animals as sentient beings deserving of dignity and respect.Ethical frameworks guide the development and implementation of technological solutions, emphasizing the importance of minimizing stress and suffering among livestock [94].One of the primary ethical concerns addressed by livestock technology is reductions in antibiotics, confinement, and overcrowding, as reported by Rollin [95].Innovations such as robotic herding and automated handling systems mitigate the need for intensive confinement, allowing animals to exhibit their natural behaviors and access open spaces.Additionally, advancements in transportation technology ensure that animals are transported humanely, minimizing discomfort and distress during transit. Achieving Sustainable and Humane Practices WLF represents a harmonious convergence of sustainability and humane practices, where the welfare of animals is intricately linked to environmental stewardship and social responsibility.Sustainable agriculture principles guide the management of resources, ensuring the long-term viability of farming operations without compromising the well-being of current and future generations [96].Technological innovations play an important role in promoting sustainability within WLF.For instance, precision farming techniques optimize resource utilization, minimizing the ecological footprint of livestock operations [97].This shows that, by integrating adequate technology, addressing ethical considerations, and prioritizing sustainability, this innovative farming model not only enhances the welfare of livestock, but also fosters resilience in the face of global challenges.Obviously, as we continue to evolve towards a more enlightened relationship with the natural world, WLF stands as a beacon of progress, demonstrating that agriculture can thrive while respecting the intrinsic value of all living beings. 
Addressing Livestock Challenges Addressing livestock challenges involves a comprehensive approach that extends beyond disease control, environmental impact reduction, and energy efficiency.It encompasses a wide range of innovative strategies and practices aimed at addressing the various issues confronting the livestock industry, from animal welfare concerns to economic sustainability and social responsibility.Central to this approach is the recognition of the interconnectedness between livestock farming and broader societal and environmental systems.Innovations in animal welfare science are driving advancements in housing systems, handling techniques, and management practices that prioritize the physical and psychological well-being of animals [98].From enriched environments that allow for natural behaviors to low-stress handling methods that minimize fear and anxiety, these innovations are reshaping the way animals are raised and cared for on farms.Moreover, addressing livestock challenges involves engaging with stakeholders across the agricultural value chain, including farmers, researchers, policymakers, and consumers.Collaborative initiatives are being launched to promote knowledge sharing, capacity building, and technology transfer, fostering a culture of innovation and continuous improvement within the livestock industry.Furthermore, efforts to address social and economic challenges in livestock farming are gaining traction, with a growing emphasis on equity, diversity, and inclusion [99].Initiatives aimed at improving the livelihoods of smallholder farmers, empowering marginalized communities, and promoting fair labor practices are integral to creating a more just and sustainable food system.Additionally, addressing livestock challenges involves leveraging emerging technologies such as AI, blockchain, and big data analytics to enhance traceability, transparency, and efficiency within the livestock supply chain.These technologies enable the real-time monitoring of key performance indicators, facilitate data-driven decision making, and empower consumers to make informed choices about the products they purchase, seeking to create a more resilient, equitable, and sustainable food system that meets the needs of present and future generations. Disease Control Strategies One of the foremost challenges in livestock farming is the management and prevention of diseases that can have devastating consequences on animal welfare, economic stability, food security, and public health [100].Addressing this livestock challenge involves the implementation of proactive disease control strategies that minimize the risk of outbreaks and mitigate their impact when they occur.Innovative approaches to disease control include vaccination programs, biosecurity measures, and genetic selection for disease resistance.Advancements in biotechnology have led to the development of novel vaccines and diagnostic tools that target specific pathogens, providing farmers with more precise and effective disease management options [101].Additionally, it was reported by Drewe et al. 
[102] that the adoption of data-driven approaches, such as predictive modeling by using ML and epidemiological surveillance, enables the early detection of disease threats and facilitates rapid response measures.Furthermore, biosecurity protocols are being enhanced to minimize the risk of disease introduction and transmission within and between farms [103,104].From strict hygiene practices to controlled access measures, these biosecurity measures help to create barriers against pathogens, safeguarding the health and well-being of livestock populations. Reduction of Environmental Impact Livestock farming is often associated with environmental challenges, including air and water pollution, habitat destruction, and greenhouse gas emissions.Addressing these challenges requires innovative solutions that prioritize environmental conservation and sustainability.Technological innovations in waste management, such as anaerobic digestion systems and nutrient recovery technologies, offer sustainable alternatives for managing animal waste while reducing environmental impacts [105].The same authors reported that these systems facilitate the conversion of organic waste into biogas for energy production and nutrient-rich fertilizers for soil enrichment, contributing to circular economy principles.Furthermore, precision farming techniques, such as GPS-guided equipment and remote sensing technologies, optimize land use and resource management, minimizing environmental footprint while maximizing productivity [106,107].By enabling the targeted application of inputs, these technologies reduce the environmental impact of livestock production, conserving natural resources and preserving ecosystem integrity.Moreover, strategies to optimize feed efficiency and reduce methane emissions from enteric fermentation contribute to lowering the carbon footprint of livestock production. Energy Efficiency in Livestock Production The energy-intensive nature of livestock production also poses a challenge to sustainability and economic viability [108].As reported by the authors, these livestock challenges involve embracing energy-efficient practices and technologies to minimize energy consumption and maximize resource utilization.Energy consumption is a significant contributor to the environmental footprint of livestock production, encompassing both direct energy use on farms and indirect energy inputs associated with feed production and transportation [109,110].Addressing energy efficiency challenges requires a multifaceted approach that integrates renewable energy sources, improves operational efficiency, and promotes resource conservation.The adoption of renewable energy technologies, such as solar panels and wind turbines, enables farms to generate clean, sustainable energy onsite, reducing reliance on fossil fuels and lowering greenhouse gas emissions.Additionally, energyefficient practices, such as improved insulation, LED lighting, and automated ventilation systems, optimize energy use and reduce operational costs [111].Moreover, efforts to enhance energy efficiency extend beyond on-farm operations to include supply chain management and transportation logistics.Strategies such as the local sourcing of feed ingredients, optimizing transportation routes, and implementing energy-efficient transport vehicles help to minimize energy consumption and the carbon emissions associated with livestock production. 
Future Directions and Challenges The future of livestock technology is filled with opportunities to boost productivity, animal welfare, and sustainability in agriculture. Intelligent monitoring, like sensors and IoT devices paired with advanced analytics, provides real-time insights into animal health and environmental conditions. Precision livestock farming, powered by automation and AI, shows potential in optimizing feeding, health monitoring, and breeding. Nutrition and feed efficiency research, especially on alternative sources and precision feeding, is crucial for reducing environmental impacts while maintaining animal health. Advancements in health management, including early detection tools and improved vaccines, contribute to healthier livestock. Sustainability remains a key focus, with initiatives to cut emissions and develop innovative waste solutions. AI-driven behavioral analysis and precise housing designs support animal welfare, aligning farming with ethical standards. Supply chain optimization through blockchain and smart logistics boosts transparency and efficiency. Industry collaboration and farmer education are crucial for technology adoption and maximizing benefits. Challenges like data privacy, technology accessibility, regulations, and costs must be addressed. Ethical dilemmas, such as balancing productivity with animal welfare, interdisciplinary collaboration, and climate resilience, require thoughtful solutions. In conclusion, advancing livestock technology demands a holistic approach. By tackling challenges and embracing emerging opportunities, we can pave the way for a sustainable, efficient, and ethically responsible future in livestock farming.

Figure 1. Graphical representation of the improvements of integrating artificial intelligence (AI), machine learning (ML), and automation to optimize various aspects of farming operations.

Table 1. The main methods and outcomes of using intelligent systems in livestock production.

Table 2. Intelligent systematization technology used in livestock production for decision making. ... farmers to monitor and manage factors such as water usage, greenhouse gas emissions, and waste disposal more effectively. By optimizing resource utilization and adopting sustainable practices, livestock producers can contribute to environmental conservation [38,39]. Supply chain optimization (AI, ML, blockchain technology): integration with logistics, processing, and distribution systems ensures that the road from farm to table is efficient and transparent. This not only reduces waste, but also enhances the overall quality and safety of livestock products [40][41][42].
2024-05-30T15:08:01.377Z
2024-05-28T00:00:00.000
{ "year": 2024, "sha1": "d3741977c385fe049a184c5cf6e3f69accad619c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2624-7402/6/2/84/pdf?version=1716884147", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "6461f2e69793fd6cc0b0adebd7a584fe9b2e1e6a", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science", "Computer Science", "Engineering" ], "extfieldsofstudy": [] }
245898266
pes2o/s2orc
v3-fos-license
Public Health Policy Pillars for the Sustainable Elimination of Zoonotic Schistosomiasis Schistosomiasis is a parasitic disease acquired through contact with contaminated freshwater. The definitive hosts are terrestrial mammals, including humans, with some Schistosoma species crossing the animal-human boundary through zoonotic transmission. An estimated 12 million people live at risk of zoonotic schistosomiasis caused by Schistosoma japonicum and Schistosoma mekongi, largely in the World Health Organization’s Western Pacific Region and in Indonesia. Mathematical models have played a vital role in our understanding of the biology, transmission, and impact of intervention strategies, however, these have mostly focused on non-zoonotic Schistosoma species. Whilst these non-zoonotic-based models capture some aspects of zoonotic schistosomiasis transmission dynamics, the commonly-used frameworks are yet to adequately capture the complex epi-ecology of multi-host zoonotic transmission. However, overcoming these knowledge gaps goes beyond transmission dynamics modelling. To improve model utility and enhance zoonotic schistosomiasis control programmes, we highlight three pillars that we believe are vital to sustainable interventions at the implementation (community) and policy-level, and discuss the pillars in the context of a One-Health approach, recognising the interconnection between humans, animals and their shared environment. These pillars are: (1) human and animal epi-ecological understanding; (2) economic considerations (such as treatment costs and animal losses); and (3) sociological understanding, including inter- and intra-human and animal interactions. These pillars must be built on a strong foundation of trust, support and commitment of stakeholders and involved institutions. INTRODUCTION Neglected Tropical Diseases (NTDs) predominantly affect communities in low-and middle-income countries and impose a significant human, economic and social burden, thus perpetuating a cycle of poverty. Schistosomiasis is caused by infection with parasitic worms of the genus Schistosoma. An estimated 240 million people are infected 1 and the disease is classified by the World Health Organization (WHO) as an NTD. The main Schistosoma species responsible for human disease are Schistosoma haematobium, causing urogenital disease, and S. mansoni, S. japonicum, and S. mekongi causing intestinal disease. They differ in geographical distribution and host species they infect. Schistosoma haematobium and S. mansoni mainly infect humans as the definitive host. Schistosoma japonicum (found in the Philippines, China and Indonesia), and S. mekongi (Cambodia and Laos) use human and non-human mammals as definitive hosts, driving zoonotic transmission. Over 12 million people are estimated at risk of zoonotic infection in Asia with three million requiring treatment. Though vital to transmission, the number of animals at risk is generally not reported 2 (1). The life-cycle of zoonotic schistosomes is maintained by human and animal contact with contaminated freshwater sources, where the intermediate hosts (speciesspecific freshwater snails) are present, and where the access to safe water and sanitation is limited ( Figure 1). Infections can occur through recreational, habitual and employment activities of humans, and watering or grazing of animals. Human intestinal schistosomiasis symptoms range from abdominal pain, diarrhoea, blood in the stool, to liver and spleen enlargement, cancers and death (2). 
There is a dearth of data regarding the clinical impact of S. japonicum and S. mekongi in animals. In January 2021, schistosomiasis was targeted for elimination as a public-health problem (EPHP) globally in the WHO's Road map for neglected tropical diseases 2021-2030 (3). This is achieved when the proportion of heavy intensity infections (over 400 eggs per gram of stool by Kato-Katz diagnostic) is reduced below 1% (3). The cornerstone of schistosomiasis control in endemic areas is preventive chemotherapy with the anthelmintic praziquantel. However, drug treatment alone is likely insufficient to reach elimination goals since only a few infected individuals can maintain the transmission cycle (4). Significant gains towards S. japonicum elimination have been achieved with integrated intervention approaches (including preventive and selective chemotherapy, mollusciciding, health-education, sanitation and environmental improvement) in countries such as China. Although transmission was interrupted in some provinces for over 10 years, other provinces remained endemic, and reemergence has recently been observed (1,(5)(6)(7). Progress towards elimination seems to be slowing, with transmission still ongoing in many regions, and zoonotic schistosomiasis remains a publichealth problem, particularly in the Philippines (8). Efforts have been largely concentrated on the more common non-zoonotic schistosomiasis. Data needs have been reviewed for various NTDs (9) and models have been successfully used to inform control strategies (10,11). Nevertheless, zoonotic schistosomes pose a unique challenge to achieving EPHP as multiple definitive host species contribute to transmission dynamics (12). The complexity of zoonotic schistosomiasis must be captured by these models if they are to continue playing a major role in public-health policy. Data on animal reservoirs need to be collected along with data on humans to adequately calibrate the models. Historical and current modelling approaches to capture the dynamics of schistosomes and zoonotic transmission are reviewed elsewhere (8). Here we discuss the challenges of collecting and analysing data and developing models to inform control programmes, and how a One-Health approach, recognising the interconnection between people, animals, and their shared environment (13), can improve them. Our vision builds on three cross-cutting pillars that are vital to sustainable public-health policy: (1) understanding epidemiology; (2) economic considerations; and (3) accounting for sociological aspects. All pillars must stand on a strong foundation of trust and support, requiring involvement and commitment of stakeholders and other involved institutions (14). We illustrate the three pillars framework for building a sustainable control programme in the context of zoonotic schistosomiasis EPHP, where animal reservoirs' contribution to transmission accompanies all pillars. Nevertheless, this framework can be applied to improve public-health policy across diseases. PILLAR I: EPIDEMIOLOGY This first pillar encompasses our understanding of the disease in its geographical context, including environmental drivers, host heterogeneity, parasite interactions and intervention efficacy, derived from a combination of epidemiological data (parasitological, surveys, etc.) and analyses. Prior to data collection, mathematical modelling can inform on optimal study sample sizes, sampling frequencies, and target participant groups necessary to accurately estimate parameters (15). 
Next, in collaboration with stakeholders, optimal and accepted epidemiological and biological data collection can be undertaken (16)(17)(18). To complete the understanding of disease transmission dynamics and to predict impacts of interventions, mathematical models calibrated to, and informed by, available data provide quantitative evidence, capable of informing both clinical trials and the final implementation of large-scale interventions, thus making these critical aspects of disease control more ethical and effective in the long-term. Model parameters and assumptions are based on field data, guided by stakeholder knowledge, in turn determining model quality (9). Mathematical models have informed intervention programmes and played a vital role in informing the WHO's roadmap for NTDs 2021-2030 (11,19). Model development and the coordination of necessary data collection should be conducted in consultation with stakeholders, who are ideally placed to understand and advise on local situations, expectations of modelled outputs, and optimal methods for data collection (14). Ensuring that modelling results are interpreted correctly, and that strengths and limitations of the models are recognised, requires communication at several structural levels. Introductory and follow-up meetings with institutions involved in policy decision-making, and discussions with stakeholders at community, district and national level (14) will result in the strongest interventions. Analyses focused exclusively on this first pillar can identify the most effective strategies to achieve a target health outcome and determine timelines for achieving them (20)(21)(22)(23). It is widely acknowledged that to provide relevant insights, data quantity and quality need to be improved (24). Specific data needs for some NTDs, including schistosomiasis, are reviewed elsewhere (9). The need for community-wide data is recognised, in particular, age-infection intensity profiles from adults, hindered because most studies focus on school-aged children. Low sensitivity and questionable specificities of available diagnostic tools (e.g. Kato-Katz and antigen-based diagnostics) also pose a challenge to data quality (25,26). This becomes a greater problem as infection intensities are reduced after decades of preventive chemotherapy, further reducing the proportion of infections detected when egg-based diagnostics are the norm. Regarding interventions, if not applied yet, the lack of data can be overcome by making suggestions either based on data from different locations, or based on current interventions and expertise assumptions from stakeholders, increasing the uncertainty of modelled results. Many infection and population characteristics which influence disease transmission, such as present prevalence, age-intensity distributions, community- and age-contact rates and transmission rates, vary spatially. Therefore, studies mapping the spatial distribution in endemic areas need to be performed (27)(28)(29)(30)(31)(32). Furthermore, some biological aspects of infection and hosts cannot be, or are difficult to be, measured. For example, it is still not known what the dominant mechanism is behind the age-intensity distribution: age-dependent water contact rate, acquired immunity, worm density dependence, concomitant immunity or (likely) some combination. These unknown aspects must be further explored (33)(34)(35)(36)(37). Zoonotic transmission makes data collection and analyses more complicated.
FIGURE 1 | Life cycle of S. japonicum and S. mekongi. An infected definitive host, a mammal, passes schistosome eggs through the faeces into freshwater. The eggs may hatch into free-swimming larval miracidia that infect intermediate hosts, freshwater snails. After multiplication and development, snails shed free-swimming cercariae daily. The cercariae penetrate the skin of a mammal which comes in contact with water. Within the mammal, the cercariae shed their forked tail to form schistosomulae which mature and become worms. Paired male and female adult worms copulate and migrate to the mesenteric venules of the bowel and/or rectum where they lay thousands of eggs a day. Some of the eggs get back into the water through faeces and start the life cycle again, while some eggs get trapped in the organs causing disease symptoms.

When human and animal data are collected independently, it becomes harder to consolidate and unify, hindering calibration and evaluation of multiple-host models. As highlighted by the WHO, providing centralised data access will facilitate understanding, expediting analyses and consequently the decision-making process (3). An example of this is the Pan-African Rabies Control Network, which established a platform for centralised rabies data collection and analysis, with an option for open data sharing (38,39). We see opportunity in extending such a framework to other zoonotic diseases across a wider geographic area. Mapping the spatial distribution can help to identify locations and groups where efforts should be concentrated to prevent infection, improve interventions and increase coordinated communication across veterinary and human health teams (40). In communities where zoonotic schistosomiasis is endemic, livestock can be a major asset, residing closely with owners. However, this also fosters an environment where domestic animals can be responsible for a considerable amount of transmission to humans (4,41). Infection dynamics in these systems with multiple hosts differ significantly from single-host systems, and are more complex to model with greater data requirements. Transmission rates vary across definitive host species, which can be due to differing behaviour like water contact, driving diversity in exposure and contamination rates. Similarly, each definitive host species will have different epidemiological characteristics in terms of recovery, birth, and mortality rates. Capturing definitive host heterogeneities in disease dynamic processes translates to more accurately predicting higher prevalences and intensities of infection (42,43). Additionally, estimating the contribution of each definitive host species to parasite transmission is vital (13,19). The best combination of interventions is expected to vary spatially according to animal host species' densities. For example, in China, attention had initially been exclusively on bovines, because historically, most of the research was conducted in lake areas where bovines were ubiquitous (41,(44)(45)(46). It is known today that rodent species are the main animal hosts of S. japonicum in mountainous provinces, whilst bovines drive infection around the lakes (4, 47-49). To understand how to manage multi-host zoonotic systems, it is crucial to determine prevalence in animals and identify location-specific dominant animal reservoirs (50).
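One standard way to quantify each definitive host's contribution, alluded to above, is through a next-generation matrix: R0 for the full host-snail-host cycle is its dominant eigenvalue, and recomputing it with a host species removed shows whether the remaining hosts can maintain transmission on their own. The sketch below (Python with NumPy) is purely illustrative and is not taken from the cited models; both the two-host structure and all parameter values are hypothetical placeholders.

import numpy as np

# k[i]: contribution of definitive host i (0 = human, 1 = bovine) to snail infection;
# m[i]: contribution of infected snails back to host i. Hypothetical values.
k = np.array([0.5, 2.0])
m = np.array([0.8, 0.7])

def r0(hosts):
    """R0 over one full host -> snail -> host cycle for the given host subset.
    For this structure the dominant eigenvalue of the next-generation matrix
    reduces to the square root of the summed host-specific products."""
    return float(np.sqrt(sum(k[i] * m[i] for i in hosts)))

print("R0, both hosts  :", round(r0([0, 1]), 2))   # ~1.34
print("R0, humans only :", round(r0([0]), 2))      # ~0.63
print("R0, bovines only:", round(r0([1]), 2))      # ~1.18

With these invented numbers, bovines alone keep R0 above one, which is the qualitative situation in which human-only treatment cannot interrupt transmission, echoing the point made above for lake regions.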
Quantifying each host's contribution to transmission enables identification of maintenance and essential hosts, and the predicted impacts of control strategies targeting these hosts (51)(52)(53)(54)(55). Human-only treatment for schistosomiasis is insufficient to achieve EPHP in some settings because transmission is maintained by untreated reservoirs. Alternatively, interventions focused on the main reservoir predict success in reducing transmission to humans (4,46,49). Other work has explored the possible impact of a range of animal- and environment-focused controls (46,55,56). Different modelling approaches for the dynamics of intestinal helminths, including schistosomes and zoonotic transmission, are reviewed elsewhere (8).

PILLAR II: ECONOMICS The second pillar accounts for the economic implications of, in this case, different interventions to achieve EPHP. Primarily, a centralised data collection and analysis platform similar to that described above could reduce costs, as similar data needs across NTDs facilitate cross-cutting data collection and treatment activities (9). Furthermore, for an effective intervention strategy to be feasible, it must be affordable to individuals, governments and/or donors in addition to being biologically effective. This includes the generally high upfront start-up investment, as well as recurrent maintenance costs. A useful economic evaluation approach is cost-effectiveness analysis, where costs and non-monetary health effects of different control interventions can be compared (57). Results are expressed as additional costs per unit of improved health outcome, such as reduction in transmission rate, prevalence/incidence, or deaths (58). The communication of results to policy-makers is key, as sometimes the most effective intervention for reducing disease might not be cost-effective (14). Cost-effectiveness analysis has already been used for numerous NTDs and can be integrated into detailed infection transmission models. In the context of zoonotic diseases such as S. japonicum and S. mekongi, it should be leveraged to explore the costs and effectiveness of animal-based and other combined interventions (59). The costs to be considered include expected resources used for implementing and eventually maintaining an intervention, including the net savings to patients and healthcare providers due to reducing the disease burden (57). Savings comprise out-of-pocket and health system expenses, travel costs of care-seeking and opportunity costs of ill health, such as reduced patient productivity or school and work attendance, which are unfortunately often overlooked. In zoonotic schistosomiasis, additional costs are incurred due to animal death or illness, which adds time and cost to replace animals, as well as reduced livestock productivity (60). One challenge remains regarding the appropriate metric to use for health outcomes that is generalizable. Disability-adjusted life-years (DALYs) are widely used to measure disease burden, with one DALY representing the loss of one life-year lived in optimal health, thereby translating both disease mortality (the years of life lost, YLL), and morbidity (the years lived with disability, YLD) into a single metric (61,62). This enables comparison across studies, settings, and interventions targeting the same or different diseases (58). However, DALYs face some limitations, as they frequently disregard the infection-associated mental health burden or the need to adjust for co-morbidities.
To estimate morbidity, DALYs rely on general estimates of disability weights, most of them estimated by an expert medical panel instead of a preference-based valuation method, raising universality concerns (63). This leads to the health impacts of infection being underestimated. Lastly, DALYs are unsuitable when evaluating zoonotic diseases because they disregard effectiveness resulting from improving animals' and owners' quality of life and well-being due to averted animal morbidity/mortality. Efforts have been made to quantify the zoonosis burden on humans and animals simultaneously: zoonosis disability-adjusted life-years (zDALYs) (64). These include an additional component called animal loss equivalents, which converts expected livestock production and local per capita income losses to the equivalent number of human YLD. Nevertheless, this metric has been rarely used in cost-effectiveness analyses of zoonotic diseases (65,66), and only once for schistosomiasis (67). Most economic evaluations of schistosomiasis interventions have focused on chemotherapy with praziquantel (68)(69)(70). The WHO NTD roadmap has highlighted the benefits, including financial, of cross-cutting interventions (3). However, the use of different effectiveness measures (i.e. health outcomes) for evaluation of new control interventions hampers comparison within and between NTDs (71,72). Standardising the use of a common metric across economic analyses will enable cost-effectiveness comparisons across multiple NTDs and beyond. Such metrics should be extendable, as appropriate to zoonotic diseases. Programmes that consider the first and second pillars together are more sustainable, as knowledge of the most effective interventions from an epidemiological perspective can be supplemented by evaluations of their economic impact and the time horizon for cost benefits.
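For reference, the burden and cost-effectiveness quantities discussed in this pillar are commonly written as below; the notation is ours, and the zDALY/animal loss equivalent (ALE) form follows our reading of reference (64), so it should be taken as a hedged summary rather than the formal definition.

\[
\mathrm{ICER} = \frac{C_{\text{intervention}} - C_{\text{comparator}}}{E_{\text{intervention}} - E_{\text{comparator}}},
\qquad
\mathrm{DALY} = \mathrm{YLL} + \mathrm{YLD},
\]
\[
\mathrm{zDALY} = \mathrm{YLL} + \mathrm{YLD} + \mathrm{ALE},
\qquad
\mathrm{ALE} = \frac{\text{monetary value of zoonotic livestock losses}}{\text{local per capita annual income}}.
\]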
Existing studies focused on WaSH interventions to control schistosomiasis have been reviewed (84), and new studies have been performed (85) or are currently ongoing (86). Interventions that include participatory processes are more likely to be successful as people are more interested and perceive responsibility/ownership for the outcomes of the control programmes (3,87,88). Pre-intervention consultations with communities may help identify challenges in intervention design that can be addressed. For example, lack of clarity of ownership and maintenance obligations were reported as reasons to limit or halt usage of unmaintained boreholes (89). Quantification of preferences for different WaSH interventions can be obtained using discrete choice experiments (DCE), which enable the comparison of preferences for WaSH and health-education interventions, considering monetary and non-monetary costs (90). Health-education interventions can help people understand the transmission cycle, including animal contribution, and increase their perception of the disease threat, thus improving the adherence to WaSH and other interventions (90). Interventions informed by DCEs in areas endemic for zoonotic schistosomiasis will need to include preferences for animal-related interventions. The outcome of a strategy built on these pillars will provide more robust, sustainable policy recommendations.

DISCUSSION Sustainability has been discussed in many fields, and always encompasses three pillars: environmental, economic, and social. Here, we have linked these three pillars in the context of NTDs, for the purpose of informing sustainable global-health policy. This is of particular importance for zoonotic NTDs (e.g. schistosomiasis caused by S. japonicum and S. mekongi), where the interconnection between people, animals, and their shared environment should be recognised, emphasising the importance of a One-Health approach. For the first pillar, epidemiology, we focus on definitive host heterogeneity and quantifying their contribution to overall transmission, enabling us to inform optimal intervention strategies and timelines to programmatic targets. Epidemiological understanding will be strengthened through access to improved cross-disease data. The second pillar, economics, can inform interventions by accounting for costs to all stakeholders involved and the time horizon for cost benefits. Challenges remain in establishing a standardised metric for cost-effectiveness evaluation that is appropriate for zoonotic diseases. The third, and last, pillar, sociology, considers the impact of human behaviour, particularly adherence, on programmatic success. A strong foundation of stakeholder engagement and commitment is needed from scientists across disciplines, politicians, and public-health experts. Only by combining these three pillars with the commitment of all involved can the best strategies to achieve the desired target health outcomes be developed, whilst considering costs, so that they will be championed by relevant stakeholders and approved by local communities. The modelling community may use these recommendations to make model predictions more accurate, assisting decision-makers to design sustainable control programmes to reach the WHO's ambitious 2030 goal.

AUTHOR CONTRIBUTIONS EJ and JP conceived the study. EJ, JC, and OK wrote the original draft. All authors critically reviewed and edited the manuscript. JP provided overall supervision of the project.
2022-01-13T16:21:00.981Z
2022-01-10T00:00:00.000
{ "year": 2022, "sha1": "565c1452c8084749e86ba8625b7f316cdd5d4bfe", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fitd.2022.826501/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "4735948f96837c91eb6aeb8b14655d5d86a1abc2", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Political Science" ], "extfieldsofstudy": [] }
54080543
pes2o/s2orc
v3-fos-license
The binary fraction of planetary nebula central stars: the promise of VPHAS+ The majority of planetary nebulae (PNe) are not spherical, and current single-star models cannot adequately explain all the morphologies we observe. This has led to the Binary Hypothesis, which states that PNe are preferentially formed by binary systems. This hypothesis can be corroborated or disproved by comparing the estimated binary fraction of all PNe central stars (CS) to that of the supposed progenitor population. One way to quantify the rate of CS binarity is to detect near infra-red (IR) excess indicative of a low-mass main sequence companion. In this paper, a sample of known PNe within data release 2 of the ongoing VPHAS+ is investigated. We give details of the method used to calibrate VPHAS+ photometry, and present the expected colours of CS and main sequence stars within the survey. Objects were scrutinized to remove PN mimics from our sample and identify true CS. Within our final sample of 7 CS, 6 had not previously been identified or confirmed. We detected an $i$ band excess indicative of a low-mass companion star in 3 CS, including one known binary, leading us to conclude that VPHAS+ provides the precise photometry required for the IR excess method presented here, and will likely improve as the survey completes and the calibration process is finalised. Given the promising results from this trial sample, the entire VPHAS+ catalogue should be used to study PNe and extend the IR excess-tested CS sample. INTRODUCTION The precise origin of the shaping of planetary nebulae (PNe) remains uncertain. Observations find that approximately 80% of PNe are non-spherical, with the majority of these exhibiting slight ellipticity or localised asymmetries. These mildly aspherical morphologies can be explained by interaction of the PN wind with the interstellar medium (ISM), or via the Generalised Interacting Stellar Wind (GISW) model (e.g. Kwok et al. 1978, Balick & Frank 2002), which invokes a dense equatorial torus to funnel the stellar winds. However, it is unclear how such an equatorial torus could form, and there is no single-star model that generates the required rotation rates and magnetic fields to form highly elliptical or bipolar PN morphologies (García-Segura et al. 2014). A second model of PN formation requires an orbiting binary companion to shape the central star's (CS) ejecta via the gravitational interaction or via the generation of rotation and magnetic fields, allowing the formation of a wider range of aspherical morphologies. Given the prevalence of aspherical PNe, this second model has been expanded into the Binary Hypothesis (De Marco 2009), which states that observable PNe are preferentially formed by binary systems. It is generally accepted that CS binarity is an important factor in the shaping of PNe (e.g., Miszalski et al. 2009, Hillwig et al. 2016), but there is evidence that binary evolution also impacts other physical properties of the PN. For example, Frew & Parker (2007) suggested that post-common envelope (CE) PNe, where a companion has spiralled in due to interaction with the asymptotic giant branch (AGB) star's ejecta to form a very close binary, have, on average, lower ionised masses. Frew et al. (2016) found that known PNe with close binary CS have a restricted range of Hα surface brightnesses, and suggested that this is a physical rather than a selection effect as previously interpreted (Bond & Livio 1990). Corradi et al.
(2015) observed several post-CE binaries, and concluded that the large difference between the abundances calculated using the optical recombination lines and the collisionally excited lines, known as the abundance discrepancy factor, is related to binary stellar evolution. This has since been supported by other observations. Therefore, if the Binary Hypothesis is valid, there will be implications for any calculation which has assumed single-star PN formation, such as studies of stellar evolution, molecule formation, and ISM enrichment. One way to test the Binary Hypothesis is to compare the CS binary fraction to the binary fraction of the progenitor main sequence population, measured to be (50 ± 4)% by Raghavan et al. (2010). Alas, detecting CS binarity is not trivial. While detecting binaries with periods shorter than a few days is relatively easy (e.g., Hillwig et al. 2017), high-precision observations and careful reduction techniques are required for wider binaries (e.g., Hrivnak et al. 2017). This means that sample sizes are small, making statistical analysis of the total PN population difficult. Here, we use the infra-red (IR) excess method, described fully in Section 2, to search for binaries in PNe. This builds upon the work of De Marco et al. (2013) and Douchin et al. (2015), who attempted to constrain the CS binary fraction by searching for I and J band flux excess in samples of CS within the volume-limited sample of Frew (2008). Small number statistics made it impossible to accurately account for observational biases and constrain the total PN binary fraction, so Douchin et al. (2015) also used archival Sloan Digital Sky Survey (SDSS) data to search for z band excess. It was concluded that although SDSS photometry is intrinsically sufficiently precise to detect IR flux excess, other limitations such as the inability to subtract bright nebulae resulted in too high an uncertainty, on average, to extend the PN sample. Here, we examine a trial sample of PNe within the footprint of the VLT Survey Telescope (VST) Hα Survey of the Southern Galactic Plane and Bulge (VPHAS+) (Drew et al. 2014), to determine whether the new high-resolution images will allow us to identify new CS candidates, and whether the photometry is sufficiently precise to detect CS IR excess. If successful, the techniques developed in this work can be applied to all the known PNe within the survey footprint, giving a much larger sample from which the CS binary fraction can be determined. The specification of VPHAS+ and the calibration performed are summarised in Section 3. The sample of known PNe within VPHAS+ with identified CS or strong candidates is presented in Section 4. The results of the investigation into CS binarity are presented in Sections 5 and 6, with notes on individual objects presented in Section 7. We conclude in Section 8. IR EXCESS METHOD Although low-mass main sequence stars, the most common type of companion in binary CS, are relatively dim compared to the CS itself, they can be detected in a spatially unresolved binary system as the CS radiates primarily at blue and UV wavelengths, and the main sequence star radiates mostly at red and IR wavelengths. To detect binary CS in practice, observations in at least two blue bands and one IR band are required. In this work, the two blue bands used are the Sloan u and g bands, and the IR band is the Sloan i or 2MASS J band.
The J band is more sensitive to cool companions, but is not included in VPHAS+ data, so literature J band measurements are used where sufficiently precise values were available. The IR-excess detection method is summarised as follows: (i) We adopt a model stellar atmosphere of an appropriate effective temperature to represent a given single CS. The temperature is either obtained from the literature, a value adopted following a Zanstra analysis (see Pottasch (1983) for a description of the method) and other considerations, or is assumed to be 100kK. (ii) We compare the modelled to the observed u − g colour to determine the reddening, E(B − V ), of the CS. (iii) We de-redden all observed magnitudes using this E(B − V ) value. (iv) We compare the modelled g − i (and g − J where available) colour to the observed, de-reddened colour. Any excess in the observed de-reddened colour is interpreted as a red or IR excess, possibly indicative of a low mass main sequence companion. It should be noted that this method can only select binary candidates. Follow-up observations are required as IR excess can come from other sources, such as a non-standard stellar atmosphere, or simple line-of-sight superposition of stars. Another source of uncertainty can also come from nebula contamination, the effects of which are illustrated in Figure 1. The transmission windows of the Sloan u, g, r and i bands are shown, overlaid with a typical, un-reddened PN spectrum, that of NGC 3587 (Kwitter & Henry 2001). The u − g colour, rather than the u − r colour, is used to calculate E(B − V ) as it should be least affected by the presence of a possible low-mass companion. However, a strong [OIII] line that is not properly subtracted would make the u − g colour redder than it should be. This extra 'redness' would be attributed to interstellar reddening, erroneously increasing the calculated E(B − V ) value and causing any companion's contribution to the overall flux to be reduced in the de-reddening process. Nebula contamination can therefore lead to missed detections of binary companions (false negatives). To avoid this, large, dispersed, and faint PNe observed under good seeing conditions are required for this method, since accurate nebula light subtraction is almost impossible to achieve for bright and/or compact PNe. The greatest limitation of the IR excess detection method is the requirement for extremely precise CS photometry (and PN subtraction for bright PNe), as the flux emitted by a putative low-mass companion is very small both intrinsically and in comparison to the highly luminous CS. On the "horizontal track", the transition at constant luminosity between the AGB and white dwarf (WD) cooling track, CS are so luminous that they outshine all but the brightest main sequence companions. Figure 2 shows spectral energy distributions representative of CS on the horizontal track with a fixed luminosity of 3160 L⊙, a mass of 0.6 M⊙, at a distance of 1 kpc (no reddening), and temperatures of 100kK, 120kK and 140kK. Also shown is the spectrum of a K5V star, with mass, radius and temperature parameters from Boyajian et al. (2012). All objects are assumed to radiate as blackbodies. The dashed lines show the combined flux of the binary system, and the filter curves of the Sloan u, g, r and i bands, and 2MASS J band are overlaid. The relatively bright K5V star makes only a small contribution to the total flux, and is only discernible from the CS at the redder wavelengths.
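A minimal numerical sketch of steps (i)-(iv), and of the companion-contamination effect discussed around Figures 2-4, follows; the intrinsic CS colours, colour-excess coefficients and example magnitudes are hypothetical placeholders standing in for the tabulated TMAP and Cardelli values (Tables A1 and A2), not the numbers used in this work.

# Hypothetical intrinsic colours of a single hot CS and reddening coefficients;
# the real analysis reads these from Tables A1 and A2.
CS_UG, CS_GI = -0.55, -0.40    # assumed single-CS u-g and g-i colours (placeholders)
K_UG, K_GI = 0.93, 1.59        # assumed E(u-g)/E(B-V) and E(g-i)/E(B-V) (placeholders)

def i_band_excess(u_obs, g_obs, i_obs):
    """Steps (ii)-(iv): infer E(B-V) from u-g, de-redden, return the g-i excess."""
    ebv = ((u_obs - g_obs) - CS_UG) / K_UG
    gi_dereddened = (g_obs - i_obs) - K_GI * ebv
    return ebv, gi_dereddened - CS_GI

# A single CS reddened by E(B-V) ~ 0.3 (consistent with the placeholder colours above):
print(i_band_excess(17.73, 18.00, 17.92))    # excess ~ 0.00
# Add a companion that brightens g by 0.02 mag and i by 0.10 mag: the extra g flux
# inflates the inferred E(B-V), so only part of the injected excess is recovered.
print(i_band_excess(17.73, 17.98, 17.82))    # excess ~ +0.05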
Figure 3 shows the spectra of CS on the cooling track with temperatures of 100kK, 120kK and 140kK using mass and radius parameters from Vassiliadis & Wood (1994) and the spectra of a range of possible low-mass main sequence companions (Boyajian et al. 2012). As CS luminosity decreases with temperature on the cooling track, lower mass main sequence companion stars become distinguishable from the CS. These plots also allow us to identify another limitation of the IR excess detection method; a K5 star is blue enough that its flux contaminates the g band. Similar to the effect of nebula contamination, this g band contamination will lead to a larger-than-real E(B − V ), leading in turn to de-reddened colours that are too blue, resulting in a reduction of the red/IR flux excess. In this way, the K5 star may be entirely removed during the de-reddening process, making its detection impossible. It should also be noted that for spectral types earlier than approximately G0V, the analysis presented here simply does not work, as the G star's colours, reddened by interstellar dust, dominate and our assumption of the intrinsic CS colours fails. However, the complete breakdown of our technique serves in itself to indicate the presence of a bright companion, which can then be detected and confirmed with alternative techniques. Figure 4 allows us to visualise the combined effects of low companion luminosity and g band contamination. A grid of binary system colours was calculated for a CS on the white dwarf cooling track and a range of main sequence companion spectral types. The u − g colours of each CS-companion combination were compared to those of the CS only, and the difference attributed to reddening. The reddening value derived from this comparison was used to de-redden all the bands of the synthetic binary. The g − i colour of the binary system was then compared to that of the CS alone, and any difference attributed to an excess of flux in the i band. The calculated i band excess is shown as a contour plot. The bunched contours around an M0V companion spectral type show the quickly decreasing i band excess caused by the g band contamination. The precision photometry required for this technique can be appreciated; even under ideal conditions with a low temperature faint CS and a late M companion star, we only expect an i band excess of a few hundredths of a magnitude. Overview VPHAS+ is an ESO public survey covering the Galactic Plane south of the celestial equator at latitudes |b| < 5°, extending to |b| < 10° across the Galactic Bulge, providing photometry down to ∼20th magnitude in the Sloan u, g, r, and i bands, and in an additional Hα narrowband. The median seeing in g is 0.8 arcsec and ∼1.0 arcsec in u, sampled at 0.21 arcsec per pixel by OmegaCAM. The field of view is one square degree and is imaged onto a mosaic of 32 CCDs. Observations are split into red and blue observing blocks to ensure the more stringent u band observing conditions are met whilst allowing the survey to progress in a timely manner. The red block consists of r, i and Hα exposures. The blue block contains a second r band exposure, labelled r2, and exposures in the u and g bands. As well as being used to check the stability of VPHAS+ photometry, the repeated r band measurements provide an independent test for CS variability. In this work, we have used data from VPHAS+ data release 2 (DR2), which covers 24% of the total survey footprint.
To minimise the survey footprint gaps due to the spaces between the CCDs and reduce the impact of poor photometry at CCD edges, every VPHAS+ pointing comprises several offsets. Two of these offsets comprise an exposure in every filter; these are referred to as block A and block B. The third offset, referred to in this work as Block C, comprises a g and Hα exposure. Calibration The data used throughout this work was acquired from the Cambridge Astronomical Survey Unit (CASU; http://casu.ast.cam.ac.uk/), where a pipeline checks and processes raw VPHAS+ data; see Drew et al. (2014) for details. The pipeline produces a single-band catalogue in each filter for every CCD in a VPHAS+ pointing. The catalogues contain flux counts for objects using a range of apertures. A summary of the radii of these apertures is provided in Table 1, along with the fraction of the point spread function (PSF) inside this aperture for a seeing of 0.8 arcseconds. 3.2.1 Calibrating the g, r and i bands As a by-product of pipeline illumination correction, VPHAS+ magnitudes are referred to the photometric scale of the American Association of Variable Star Observers (AAVSO) Photometric All-Sky Survey (APASS), which provides magnitudes in the AB system. This procedure directly calibrates the Sloan g, r and i bands common to both APASS and VPHAS+, using two-dimensional functional fits spanning the full 32-CCD detector footprint. Although the VPHAS+ pipeline provides aperture corrections to move VPHAS+ magnitudes onto a standard scale, they were not used in this work as a global VPHAS+ calibration has not yet been finalised; regardless, the high-precision photometry required here merits its own calibration process. The remainder of this section describes how we calibrated photometry from the VPHAS+ single-band catalogues. To match in-common stars within a VPHAS+ CCD to APASS, VPHAS+ detected stellar objects falling within a 1.5 arcsecond radius of each APASS star with more than one observation were examined. The brightest VPHAS+ star in each instance was selected as the true match, reducing the number of mismatches due to inaccurate astrometry in APASS. Matches where the star was flagged as saturated, brighter than 13th magnitude or fainter than 16th magnitude in the VPHAS+ catalogue were then removed, leaving only those with the best photometry. This magnitude range is limited so as to avoid saturation in VPHAS+, and poor faint star photometry in APASS. For every object on this list of APASS-VPHAS+ matches, VPHAS+ AB magnitudes were calculated using each of the pipeline-generated aperture fluxes. A new customised aperture correction was then calculated to account both for any shift in overall zero point, and for flux not enclosed by the aperture. The aperture correction used is equal to the mean magnitude difference between APASS and VPHAS+ measurements for stars between 13th and 16th magnitudes, with the error equal to the standard deviation. Using this calibration method, VPHAS+ magnitudes measured using the 1.0 arcsecond radius aperture typically have systematic errors less than 0.01 magnitudes, and formal errors on single measurements of a ∼19th magnitude object are less than 0.016, 0.025, and 0.044 in the g, r and i bands, respectively. The uncertainty on i band measurements is greater than for the g and r bands as there is more noise associated with the photon count.
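A schematic version of this per-CCD correction is sketched below in Python; the function name, inputs and example numbers are invented for illustration, and the APASS-VPHAS+ cross-matching itself (1.5 arcsecond radius, brightest-match selection, removal of saturated objects) is assumed to have been done already.

import numpy as np

def aperture_correction(vphas_ab, apass_ab, bright=13.0, faint=16.0):
    """Mean APASS minus VPHAS+ magnitude difference for matched stars in the
    13-16 mag range; the standard deviation is taken as the error."""
    vphas_ab, apass_ab = np.asarray(vphas_ab), np.asarray(apass_ab)
    keep = (vphas_ab > bright) & (vphas_ab < faint)
    diff = apass_ab[keep] - vphas_ab[keep]
    return diff.mean(), diff.std()

corr, corr_err = aperture_correction([14.21, 15.02, 15.77], [14.26, 15.09, 15.84])
# Calibrated magnitude of any object measured with this CCD/aperture combination:
# m_cal = m_vphas + corr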
These formal errors, of course, decrease for objects with multiple measurements, and there are usually two or more detections available. Calibrating the u band The VPHAS+ u band was calibrated as described by Drew et al. (2014), by plotting (u − g) vs (g − r) colour-colour diagrams and overlaying synthetic stellar reddening tracks. These tracks are provided in the Vega magnitude system, so VPHAS+ magnitudes were first converted using m_Vega = m_AB − 2.5 log10(offset), where the offset is the ratio of the flux zero-points, Fzp, of the filter, in Janskys, for each magnitude system: offset = Fzp,AB/Fzp,Vega. The flux zero-point of the AB magnitude system is defined to be 3631.0 Jy. The zero-points of the VPHAS+ filters in the Vega system were obtained from the Spanish Virtual Observatory (SVO) website. The calculated Vega to AB corrections for VPHAS+ are shown in Table 2. It should be noted that the AB-Vega offset in the u band is almost invariably too large by up to 0.2 magnitudes, but this is corrected for by the calibration process. Vega magnitudes are used throughout the rest of this work. The calibrated single-band CCD catalogues were then combined to create a band-merged catalogue for each block in a VPHAS+ pointing, and (u − g) vs (g − r) colour-colour diagrams plotted using the calibrated g and r2 Vega magnitudes. The u band Vega magnitudes were adjusted until the maximum number of stars lay between the un-reddened main sequence and G0V reddening lines. This method of calibrating the u band results in larger systematic magnitude errors; typically 0.025 magnitudes using the 1.0 arcsecond aperture, with a typical total error on a single measurement of a ∼19th magnitude object of 0.048 magnitudes. Predicted colours of central stars in VPHAS+ The expected colours and magnitudes on the Vega system of both CS and main sequence stars when observed in VPHAS+ were calculated. First, the VPHAS+ bandpasses and an additional J band were constructed, because although the VPHAS+ bandpasses are based on those of SDSS, they are not identical, and OmegaCAM is more sensitive to blue wavelengths than the detector used by SDSS. The constructed bandpasses included CCD response and atmospheric extinction at Paranal (Patat et al. 2011). Spectra from the literature were then convolved with these bands, as described below. Colours of single central stars To calculate the expected colours and magnitudes of CS in VPHAS+, synthetic spectral energy distributions (SEDs) from the TMAP database were used: theoretical NLTE model stellar atmospheres calculated using the code TMAW (Werner & Dreizler 1999; Werner et al. 2003). The chosen models have a range of effective temperatures T_eff from 20kK to 170kK, with log(g) values in the range 4.0 to 8.0. Colours calculated for each of these models are listed in Table A1. Here, it can be seen that the colours are almost independent of the choice of surface gravity, and in the hotter models, almost independent of temperature. The effect of reddening on CS colours was then investigated. The spectrum of the T_eff = 100kK, log(g) = 7.0 TMAP model, chosen as a "typical" CS, was reddened over a range of E(B − V ) values using the reddening laws of Cardelli et al. (1989) and of Fitzpatrick & Massa (2007), both with R_V = 3.1. The colours of each model were calculated, and are listed in Table A2. Colours using a T_eff = 50kK, log(g) = 7.0 model were also calculated, but the differences were found to be negligible. Figure 5 shows the (u − g) vs.
(g − r) colour-colour plot of VPHAS+ pointing 1738, overlaid with the CS reddening lines derived from the two reddening laws, with tick marks indicating the values of E(B − V ). Also shown are the synthetic main sequence locus, and G0V and A3V reddening lines from Drew et al. (2014). As well as being used for calibrating the u band, these plots allow true CS to be identified; we would expect the observed colours of any true CS to lie close to the predicted CS reddening line. For this purpose, the choice of reddening law will not impact our results, as it is clear a true CS will lie far from the main cluster of stars in the upper parts of the plot. The second use of the predicted, reddened CS colours is to derive the value of E(B − V ) from an observed CS's u − g colour, as described in Section 2. From Table A2, we can see that the difference in the u − g colour predicted by each reddening law is very small, less than 0.04 for E(B − V ) < 2.0. There is a bigger impact on the g − r colour, but that is not used here. Therefore again, the choice of reddening law will have a negligible effect on our results, and we continue using the reddening law of Cardelli et al. (1989). Main sequence stellar colours In order to estimate the spectral type of a CS companion given an observed CS IR excess, the main sequence spectral flux library from Pickles (1998) was used to calculate the expected colours of main sequence stars in the VPHAS+ bands. These expected colours are presented in Table A3. The expected absolute V band magnitudes and V − J colours from De Marco et al. (2013), combined with the colours calculated in this work, were used to approximate the expected absolute magnitudes in the i and J bands (Mi and MJ, respectively). These are later used to give an indication of CS-companion spectral type. PN SAMPLE There are approximately 1500 currently known true or likely true PNe listed in the HASH database within the footprint of VPHAS+. Around 830 of these can be found within the currently completed sections of the survey (up to DR3). For this work, 90 true or likely true PNe within DR2 pointings fully completed before 30th September 2013 were examined to investigate whether VPHAS+ photometry is sufficiently precise to search for CS i band excess. If successful, the techniques developed in this work can be applied to the remaining PNe within the VPHAS+ footprint. Of the 90 PNe examined here, 20 have a known CS, or a visible candidate, that was detected in all the required u, g, and i bands during the VPHAS+ catalogue generation process. From this sample, the following CS were removed as the APASS data available was insufficient for calibration: PNG 000.6+03.1, PNG 001.5-02.8, PNG 002.3-01.7, PNG 002.9-03.0, PNG 018.8-01.6, PN H 1-45 and PNG 001.2+02.8. Abell 48 was removed from our i band excess investigation as it is a known Wolf-Rayet star (Todt et al. 2013, Frew et al. 2014), although its photometry is listed. Images of the remaining 12 PN and Abell 48 are shown in Figure 6. The VPHAS+ catalogue and calibration details are summarised in Table B1, along with the seeing and calculated magnitude corrections for each observation, and the observation dates and times. There are two CS entries for Hf 38, labelled CS 1 and CS 2, as in this paper we suggest a new CS candidate. There are also two CS listed for Sh 2-71, again labelled CS 1 and CS 2, as the true CS is debated. The CS candidates for Hf 38 and Sh 2-71 are labelled in Figures 6a and 6m respectively.
Removing central star impostors The 14 CS candidates for the 12 PN in our sample were investigated to determine whether they are the true, or likely true, CS. To be considered a true CS, the CS candidate had to be notably blue and well-centred in the PN. Table 3 summarises our findings and lists CS candidate coordinates in VPHAS+. A brief discussion of the rejected objects is provided below. The CS candidate identified here for PNG 003.4+01.4 is the only star within the PN visible in VPHAS+. However, this star is not well centred in the nebula and a faint star closer to the geometric centre is visible in the VVV image. At first glance, PNG 354.8+01.6 appears to be a round PN in the VPHAS+ Hα/r quotient image, centred on the star listed here. However, neither of these candidates has colours consistent with a CS in the VPHAS+ images of these objects. We therefore conclude that they are stars, as pointed out in the HASH database. In previous images of Hf 38, only the brighter, westerly star (labelled CS 1 throughout this work) was visible. Here, we present a superior resolution image in which we are now able to identify a candidate CS located close to the centre of the nebula, labelled throughout this work as CS 2. Plotting the observed colours of both these objects on a (u − g) vs. (g − r) colour-colour plot, shown in Figure 7, reveals that the colours of CS 2 lie close to the CS reddening line while those of CS 1 lie within the main group of stars. Hf 38 CS 1 was therefore rejected from our sample. The CS candidate chosen for PNG 344.4+01.8 was the only star with photometry available in VPHAS+. However, it is off-centre and has colours inconsistent with those of a CS. Inspection of the VVV image suggests that our CS candidate is in fact two, possibly three, stars unresolved in VPHAS+. The true CS therefore remains unidentified. Table 4 lists the photometry on the Vega system of the remaining 8 CS candidates in our sample, along with the best J band magnitudes from the literature. The photometry of rejected objects can be found in Table C1. This photometry was obtained by calibrating each of the VPHAS+ pointings containing an object as described in Section 3.2. A suitable aperture for each object was chosen by considering the seeing in each filter and the amount of crowding around the CS. VPHAS+ photometry Multiple measurements, available due to the VPHAS+ offset observing pattern, were combined provided our confidence in the calibration of each measurement did not differ significantly, and any changes in seeing were unlikely to increase nebula contamination of CS photometry. The C block g band exposure, taken on a different date to the A block exposure, for Sh 2-71 CS 2 was not used, as this object is a known variable (Kohoutek 1979). For the pointings in our sample, none of the red and blue observing blocks (containing the r and r2 measurements, respectively) were completed on the same night. This allows us to examine the stability and consistency of VPHAS+ photometry, although it should be noted that the r band magnitudes are not used in the IR excess method presented here. In general, we find that r and r2 measurements agree to better than 0.027 magnitudes, except the r and r2 measurements of Hf 38 CS 2 and PNG 288.2+00.4 (both in VPHAS+ pointing 1738), which differ by approximately 0.12 magnitudes.
We find this discrepancy is consistent with all stars of a similar magnitude within the pointing, leading us to believe that it is due to a difference in the calculated calibration parameters, likely due to the large difference in seeing (listed in Table B1) between the two measurements. The calibration of the r band is more consistent with APASS than that of r2, so is our preferred measurement. Distances The catalogue of Frew et al. (2016) was searched to find distance measurements to each of the PN in the sample. These are summarised in Table 5. For all other PN in the sample, distances were calculated using the same method. It should be emphasised that the distance is not used in the detection of IR excess, but is used to give an indication of companion spectral type to within a few subclasses. Interstellar reddening It is crucial that the E(B − V ) value determined using our method be uncontaminated and accurate, as explained in Section 2. First, E(B − V ) values were calculated by comparing the observed CS u − g colour to those in Table A2, which lists the expected colours of a 100kK CS reddened by different E(B − V ) values when viewed in the VPHAS+ bands using the reddening law of Cardelli et al. (1989). The errors listed are the formal errors derived from the observed u and g colours. This will be the main source of uncertainty on the calculated E(B − V ) values, given the extremely weak dependence of the expected CS u − g colour on temperature, as discussed in Section 3.3.1. Several other methods to determine the reddening were then used to check the accuracy of this value, and are discussed below. Spectra of the PN from Acker et al. (1992) were downloaded from the HASH website where available, and the nebula Balmer decrement calculated from the observed Hα/Hβ line intensity ratios. Other spectra available via HASH were not used as they are not flux calibrated. The Hα to Hβ line ratio was computed by calculating the area under Gaussian curves fitted to the lines. The literature was also searched to find calibrated Hβ fluxes and radio fluxes of the PN, and E(B − V ) calculated using the radio/Hβ ratio. This is useful, as the extinction derived from the Hα/Hβ ratio depends on the optical reddening law assumed, and this method does not (for a review of this subject see Nataf (2016)). The reddening map of Green (2015) was searched using distances listed in Table 5. While values from the reddening map are not accurate, large discrepancies from this value may indicate contamination of CS photometry. The E(B − V ) values calculated for each CS in our sample and values from the literature are listed in Table 6. A discussion of these values is provided below. Similar to previous results, e.g., Ruffle et al. (2004), we find that extinction values derived from radio data are lower than those from optical data. Table 6 also makes it clear that radio E(B − V ) values for Sh 2-71 decrease with frequency. Therefore, only E(B − V ) values derived from optical data were compared to that derived from the observed u − g colour. The E(B − V ) value for Hf 38 CS 2 derived from the u − g colour is larger than that derived by Tylenda et al. (1992), and that calculated in this work from the Balmer decrement using the spectrum from Acker et al. (1992). This is likely because of g band contamination by Hf 38 CS 1, and possibly the fainter red star to the south of CS 1, visible in Figure 6a.
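The Balmer-decrement estimate used above can be written compactly as follows; the Case-B intrinsic ratio of 2.86 and the quoted extinction-law coefficients at Hα and Hβ are standard approximate values used here purely for illustration, not necessarily the exact numbers adopted in this work.

import numpy as np

def ebv_from_balmer(halpha_over_hbeta, intrinsic=2.86, k_hbeta=3.61, k_halpha=2.53):
    """E(B-V) from the observed nebular Halpha/Hbeta flux ratio, assuming an
    intrinsic Case-B ratio and approximate Cardelli-type (R_V = 3.1) coefficients."""
    return 2.5 / (k_hbeta - k_halpha) * np.log10(halpha_over_hbeta / intrinsic)

print(ebv_from_balmer(4.5))   # an observed ratio of 4.5 corresponds to E(B-V) of roughly 0.46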
For NGC 6337, the colour E(B − V ) value is lower than that derived from the Balmer decrement calculated here using the spectrum from Acker et al. (1992), and by Tylenda et al. (1992) using their own spectrum. This is unexpected, as any contamination would act to increase this value. However, there is good agreement between the colour E(B − V ) values and that calculated by Frew (priv. comm.), suggesting the reddening calculated from the nebula is not representative of the reddening of the CS. There is a large discrepancy between the line of sight reddening and the E(B − V ) value calculated for PNG 242.6-04.4, suggesting that the PN is highly self-reddened, or that there is extensive CS g band contamination by a companion star. For PTB 25, the E(B − V ) value derived from the u − g colour is smaller than both values from Boumis et al. (2003), who calculated E(B − V ) from the Balmer decrement, and a second, larger value that accounted for higher extinction in the direction of the Galactic Bulge. The good agreement between the colour E(B − V ) value and that from Frew (priv. comm.) increases our confidence in the value. It is reasonable to assume that the E(B − V ) value of a PN derived from the CS colours will be similar to that derived from the nebula. Therefore, given the good agreement between the literature values and the E(B − V ) values derived from the u − g colour of Sh 2-71 CS 1, it seems unlikely that CS 2 is the true CS. Sh 2-71 CS 2 is therefore removed from our sample. Confirming the central stars To confirm that the objects in our sample are the true CS, the luminosity of the CS was estimated by scaling the 100kK and 150kK, log(g)=7.0, solar abundance TMAP models of CS atmospheres to match the VPHAS+ de-reddened CS g band magnitudes. The g band was used rather than the u band due to the larger uncertainty of the u band observations, and the r and i bands were not used as they are more prone to contamination by low-mass stellar companions. De-reddened magnitudes were calculated using the adopted E(B − V ) value for the PN from Table 6 and the observed magnitudes from Table 4. The total flux, F, of the scaled spectrum was then calculated, and the luminosity, L, obtained using the standard equation, L = 4πd²F, with distance values, d, from Frew et al. (2016), or calculated in a similar manner. The luminosities thus calculated, listed in Table 5, are only approximate. All estimated luminosities except those of PNG 242.6-04.4 and PNG 288.2+00.4 are consistent with those of a typical CS (Miller Bertolami 2016), suggesting the light in these two objects is dominated by that of a main sequence star. Table 5. Luminosity estimates of each of the CS in the sample using magnitudes de-reddened using the adopted E(B − V ) value and distance estimates from Frew et al. (2016). Distances for three PNe were calculated in a similar manner. Luminosities were calculated assuming a CS temperature of 100 and 150kK. CS temperatures Temperatures were estimated for PNe with spectra and a total Hβ flux observation in the literature using pyCloudy (Morisset 2013). These were calculated by assuming a spherically symmetric PN with typical abundances, and a grid of CS temperatures and luminosities from the cooling track parameters of Vassiliadis & Wood (1994). The models were reddened using the adopted E(B − V ) values for each PN, listed in Table 6.
The model CS temperature that reproduced the literature Hβ flux and Hβ/[OIII] line ratios for each PN is listed in Table 7, along with CS temperature measurements derived from nebula observations found in the literature. While not accurate, these pyCloudy models and a study of the spectra allow us to discount the lower stellar temperatures determined for NGC 6337 and Sh 2-71, and the high energy-balance temperature of Hf 38. HeII lines are observed in the spectra of Hf 38 and NGC 6337, meaning the CS temperature must be greater than ∼90kK. They are not observed in the spectrum of Sh 2-71, but that is a noisy spectrum, especially at shorter wavelengths, so it is likely the signal was simply too low. Based on these arguments, we assume the temperature shown in bold in Table 7 for these three CS, and all other CS are assumed to have a temperature of 100kK. We can see from Table A1 that the u − g and g − i colours of CS vary by ∼0.07 magnitudes over the 50kK-140kK temperature range, meaning that we can be confident that this assumption will have a minimal effect on the detection of IR excess. RESULTS The de-reddened colours of each of the objects believed to be true CS were calculated and compared to the predicted colours of a single CS, listed in Table A1, to search for i and J band excesses. Figure 8 shows the de-reddened g − i and g − J colours and associated errors of the CS in the sample, de-reddened using the adopted E(B − V ) value listed in Table 6. The black dashed lines show the predicted colour of a single star as a function of temperature. Log(g)=7.0 was assumed throughout and the CS assigned the same temperature have been shifted for clarity. Objects whose g − i (or g − J) colours are redder than that of a single star by more than can be justified by the uncertainty are binary candidates. Objects below the line have an unphysical colour, suggesting the object is not a true CS, E(B − V ) has been over-estimated due to contamination, or a brighter companion star is dominating the light. Table 6. E(B − V ) values calculated for each object using different methods. Reddenings from the reddening map of Green (2015) use the distance listed in Table 5. Table footnotes: Frew, priv. comm.; * not believed to be the true value, see Section 7. Table references: Gorny et al. (1997), Stanghellini et al. (1993), Shaw & Kaler (1989), Amnuel et al. (1985), Hillwig et al. (2006), Stasińska et al. (1997), Acker et al. (1992), Perek (1971), Kaler (1983). The E(B − V ) values, i band excess and J band excess, and an estimate of companion spectral type based on each excess for each PN in our sample are shown in Table 8. Companion spectral types were estimated by comparing the observed g − i and g − J excess to the predicted colours of main sequence stars in Table A3. The algorithm used preferentially chose the later spectral type in cases with a degenerate solution, as from Figure 4 we know this detection method can only detect spectral types later than G0V. Hf 38 The new, superior resolution image from VPHAS+ has allowed us to identify the true CS of Hf 38. The object labelled throughout this work as Hf 38 CS 2 has a luminosity and colours consistent with a CS, whereas the colours of the off-centre star suggest that it is a main sequence star. The E(B − V ) value calculated using the u − g colour of CS 2 is larger than those derived from other methods.
Given the close angular proximity of CS 1, CS 2 and the presence of a faint red star to the south of CS 2, it is likely that this is due to contamination. This object should be re-examined at a higher resolution to determine the true E(B − V ) value and search for i band excess. While it is likely that CS 1 is a foreground star, it is also possible that CS 1 and CS 2 are a binary system. Assuming the CS is at a distance of 2.25 kpc, the separation between the two stars is ∼5000 AU. On the main sequence, binary systems with this separation are observed, but are rare (Raghavan et al. 2010). We also present here the Hα/r quotient image of Hf 38 from VPHAS+ in Figure 9. There is some previously unseen nebulosity approximately 34 arcseconds north west of the CS, along the polar axis of the PN. (Caption fragment, Figure 8: the predicted colour of a single CS is shown as a black dashed line; objects above the prediction line exhibit an i or J band excess consistent with the presence of a low mass main sequence companion.) Again, assuming the CS is at a distance of 2.25 kpc, the distance from the CS to the nebulosity is around 74 000 AU (∼0.36 parsec). NGC 6337 NGC 6337 is a known close binary with an orbital period of 0.17 days (Hillwig et al. 2010), and is a pole-on elliptical PN (Corradi et al. 2000). This large, bright nebula is well-studied, with several distance and temperature measurements in the literature, and so provides a good test of the i band excess binary detection method used here. Hillwig et al. (2010) used time-resolved photometry and spectroscopy to model this binary system. They note the well-behaved brightness variability, also observable in the r and r2 VPHAS+ measurements presented here, and therefore assume that the secondary star's radius is not greater than its Roche lobe, putting an upper mass limit on the secondary. (Caption, Table 8: i and J band excesses of the CS sample, and an estimate of the spectral type of the companion needed to produce this excess. Upper and lower limits are shown in square brackets. CS magnitudes were de-reddened using the adopted E(B − V ) values.) They found the best agreement of their observations to their models using a CS temperature of >90kK and a main sequence companion with a mass of 0.35 M⊙, corresponding to a mid-M spectral type. However, the irradiation of the secondary star was not fully accounted for in their models. Given the substantial variability of the light in the system, the mid-K spectral type from the i band excess presented in this work is not counter to their findings. PNG 242.6-04.4 In the new, higher resolution VPHAS+ image we can see more clearly the 'S'-shaped north-south outflow from the star at the centre of the nebula. This star is a good CS candidate based on its location; however, on a colour-colour diagram it appears to be a main sequence star. Using the reddening map of Green (2015), at the distance of this PN, 6.6 ± 1.2 kpc, the sight-line has E(B − V ) = 0.47 (+0.033, −0.041). The E(B − V ) calculated for this PN using the CS u − g colour is 1.77, suggesting the PN is very self-reddened. We suggest that the CS examined in this work is the true CS, but the light is dominated by that of a bright companion main sequence star, meaning the E(B − V ) calculated in this work is not the true value as our assumption that the CS light dominates the u and g bands does not hold. A crude blackbody analysis of this CS was performed in an attempt to constrain its constituents.
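A sketch of what such a crude blackbody analysis can look like is given below; the filter effective wavelengths, temperatures and radii are rough illustrative values, no filter zero-points or reddening are applied, and the numbers are only meaningful relative to one another rather than as calibrated VPHAS+ colours.

import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16                              # cgs constants
EFF_WL_CM = {"u": 3.55e-5, "g": 4.75e-5, "r": 6.25e-5, "i": 7.65e-5}   # rough effective wavelengths

def planck_fnu(T, wl_cm):
    """Planck B_nu evaluated at the band's effective wavelength (a monochromatic shortcut)."""
    nu = C / wl_cm
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def binary_colours(T1, R1, T2, R2):
    """Colours of an unresolved pair of blackbodies with radii R1, R2 in cm;
    the common distance cancels out of the colours."""
    flux = {b: R1**2 * planck_fnu(T1, wl) + R2**2 * planck_fnu(T2, wl)
            for b, wl in EFF_WL_CM.items()}
    mag = {b: -2.5 * np.log10(f) for b, f in flux.items()}
    return {"u-g": mag["u"] - mag["g"], "g-r": mag["g"] - mag["r"], "g-i": mag["g"] - mag["i"]}

# A ~100 kK CS (R ~ 0.03 R_sun) alone, then with a K-dwarf-like companion (R ~ 0.7 R_sun);
# the companion noticeably reddens g-i while barely changing u-g, the signature searched for.
print(binary_colours(1.0e5, 2.1e9, 4.4e3, 0.0))
print(binary_colours(1.0e5, 2.1e9, 4.4e3, 4.9e10))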
This analysis was inconclusive; no combination of model binary systems adequately reproduced the observed magnitudes, and follow-up observations are required. PNG 288.2+00.4 Following inspection of the VPHAS+ and POPIPlaN images, we are confident that we have identified the true CS. However, the g − i colour of this CS lies below the expected value, the u − g and g − r colours are not definitively consistent with a true CS, and the estimated luminosity is very inconsistent with that of a true CS. We therefore suggest that there is considerable contamination, possibly due to a bright unresolved companion, causing the derived E(B − V ) value calculated using the CS colours to be too large. No other sources were available from which E(B − V ) could be calculated. PNG 293.4+00.1 This round, faint PN is ideal for the IR excess method presented here, and the blue star located in the centre of the PN is almost certainly the true CS. The CS has an i band excess consistent with a mid-M spectral type companion star. Unfortunately, there are no J band measurements in the literature with which to corroborate our detection. PTB 25 This PN was discovered by Boumis et al. (2003) and simultaneously catalogued in the preliminary results of the MASH catalogue (Parker et al. 2006). The field of this PN is very crowded, but a particularly blue star in the centre of the PN has been identified in VPHAS+ as the CS candidate. De-reddening using the E(B − V ) value derived from the u − g colour, an i band excess consistent with a mid-K spectral type star is detected. It remains unclear whether this is due to an unresolved binary companion, or contamination by neighbouring stars. Sh 2-71 The true CS of Sh 2-71 has been the subject of debate. Historically, the 13.5 magnitude star labelled as CS 2 throughout this paper has been considered the CS (cf. Močnik et al. 2015). However, Frew & Parker (2007) claim that the fainter star, labelled here as CS 1, is the true CS. This object is closer to the geometric centre of the PN, and has magnitudes more consistent with those of a typical CS. The E(B − V ) calculations presented in Table 8 support this, as the E(B − V ) value calculated using the u − g colour of CS 1 is a far closer match to those in the literature (derived from observations of the nebula) than that of CS 2. This similarity in E(B − V ) values strongly suggests that both the nebula and CS 1 are at the same distance and are related. Nonetheless, a crude blackbody analysis was performed using the observed VPHAS+ magnitudes of CS 2 to determine if it could possibly be the true CS. The analysis, performed assuming that CS 2 comprises a CS and a main sequence star, that it is at the distance of the nebula, 1.32 kpc (Frew et al. 2016), and that it has a line of sight reddening similar to the nebula of E(B − V ) = 0.86, suggests that the star is a 140kK CS with an A0 companion. This is similar to the result of Cuesta & Phillips (1993), who suggested the nebula hosts an ionising star with a temperature of 100kK and an A7V companion. A second analysis was also performed without the assumption that CS 2 contains a CS, allowing that it may be a foreground star superimposed on the nebula. Optimising over a grid of possible main sequence binaries, distances and reddenings, the best match to the observed magnitudes of CS 2 was a binary consisting of an F0V and an F5V star, at a distance of 800 pc with a line of sight reddening of E(B − V ) = 0.6.
Using the method of determining E(B − V ) from the u − g colour presented in this work, an observer would calculate the E(B − V ) of this system to be 1.54. This is less than the measured E(B − V ) = 1.91 calculated here, suggesting that if CS 2 is a foreground star, it is self-reddened. Sh 2-71 CS 2 is known to vary in brightness by at least 0.7 magnitudes (Kohoutek 1979) with a possible period of 68.132± days (Mikulášek et al. 2005). The VPHAS+ B and C block g band measurements of CS 2 were taken approximately 730.047 days apart, giving a 0.715 period offset between measurements. The 0.527 g band magnitude variation measured in this work is consistent with the known brightness variability. The r and r2 VPHAS+ measurements were taken only 3 nights apart, but a difference of 0.529 magnitudes is detected in Sh 2-71 CS 1, an object with no previously observed variability. This difference in r band measurements is far greater than for other stars in the CCD, leading us to believe it is real variability. CONCLUSION The goal of this work was to assess the potential of VPHAS+ to study PNe and their CS, and to determine if the photometry provided is sufficiently precise to detect CS IR excess indicative of a low mass main sequence companion. We conclude the following: (i) The high-resolution images provided by VPHAS+ allow the identification of previously unobserved PN features, such as the internal nebula structure of PNG 242.6-04.4 presented here. The u band images allow the identification of new CS candidates; a good example is that of PTB 25, which lies in a particularly crowded field. (ii) VPHAS+ has good seeing (with a median seeing of ∼0.8 arcseconds in g) and stable photometry, with the magnitudes of objects observed several times being consistent to within a few percent. These repeat measurements, taken as part of the multi-offset observing plan, also reduce the formal error on the magnitude measurements and allow the identification of CS variability. (iii) Using the calibration method presented here, the typical systematic errors between APASS and VPHAS+ photometry using the 1.0 arcsecond aperture in the g, r, and i bands are less than 0.01 magnitudes, with typical total formal errors on a single VPHAS+ ∼19th magnitude measurement of less than 0.016, 0.025, and 0.044 magnitudes, respectively. The uncertainty is greater in the i band as there is more noise associated with the photon count. These errors decrease for objects with multiple measurements and, for the targets in this work, were generally considerably lower. The u band calibration is generally less accurate, as there are no u band magnitudes in APASS to calibrate to. Here, the typical systematic uncertainty using the 1.0 arcsecond aperture is 0.025 magnitudes, resulting in a typical formal error on a single VPHAS+ ∼19th magnitude measurement of 0.048 magnitudes. It is hoped that as VPHAS+ completes (in 2019 at present rates of progress) and a final global calibration is put in place, APASS g, r and i data can be supplemented with that from other digital surveys such as PanSTARRS and Skymapper, supporting a more precise u calibration. This will also reduce the number of objects lost due to insufficient calibration data. In this work, 7 of the initial 20 CS candidates had to be rejected from our sample due to poor APASS coverage. (iv) Despite the current challenges in calibrating the u band, we find VPHAS+ photometry is sufficient for the highly precise IR excess analysis presented here.
We detect an i band excess in 3 of the 7 objects in our final sample. While follow-up observations are required to confirm CS binarity, excess flux was detected in the known close-binary NGC 6337 (Hillwig et al. 2006) in the i band, and in the J band using data from 2MASS, giving us confidence that our technique reliably detects CS companions. (v) We also present evidence that the light of 2 CS in our sample is dominated by that of an unresolved main sequence companion, an indication of CS binarity. The IR excess results of the trial sample of CS used in this work have not been used to estimate the CS binary fraction. Rather, the work presented here highlights the value of the blue sensitivity of VPHAS+ for studying CS, and has shown that examination of the entire VPHAS+ catalogue to search for IR excess is viable. Given the large number of known PNe within the VPHAS+ footprint, this will likely provide a statistically significant number of IR excess-tested CS from which the rate of CS binarity can be determined. This work also highlights the need for near IR photometry of CS, as the J band is more sensitive to low-mass binary companions. APPENDIX A: SYNTHETIC COLOURS IN VPHAS+ This appendix presents the expected colours of the central stars of PN and main sequence stars when observed using VPHAS+. The synthetic post-AGB atmospheres used are from TMAP, calculated with the code TMAW (Werner & Dreizler 1999; Werner et al. 2003) or the German Astrophysical Virtual Observatory grid calculations TheoSSA. Also shown are the colours of a T=100kK, log(g)=7.0 synthetic atmosphere when reddened over a range of E(B − V ) values using the reddening laws of Cardelli et al. (1989) and Fitzpatrick & Massa (2007) with R_V=3.1. The spectra used to calculate main sequence star colours are from Pickles (1998). APPENDIX B: OBSERVATIONAL DATA FROM VPHAS+ Reported here are the VPHAS+ observation times and VPHAS+ single-band catalogue information, downloaded from CASU, of the objects discussed in this work. APPENDIX C: REJECTED CS CANDIDATES Reported in this appendix is the VPHAS+ photometry of objects rejected from our CS sample, as we do not believe that they are the true CS. The C block g band exposure of Abell 48, and the A block measurements of PNG 344.4+01.8 in all bands, were not used, as the calibration to APASS was poor. Table A1. Synthetic, intrinsic colours of CS using VPHAS+ bands, calculated using the theoretical stellar atmosphere models of TMAP. Table A2. Synthetic VPHAS+ colours of a CS, calculated using a synthetic stellar atmosphere from TMAP with a temperature of 100kK and log(g) = 7.0, reddened using the reddening laws of Cardelli et al. (1989) and Fitzpatrick & Massa (2007). Table A3. Synthetic intrinsic main sequence star colours using spectra from Pickles (1998). MV values are from De Marco et al. (2013).
2017-12-08T17:38:07.000Z
2017-12-08T00:00:00.000
{ "year": 2017, "sha1": "d5299504d410880610c71b24614ff4bf42331868", "oa_license": null, "oa_url": "http://uhra.herts.ac.uk/bitstream/2299/20139/2/Barker_et_al_MNRAS_stx3240_published_version.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d5299504d410880610c71b24614ff4bf42331868", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
256036335
pes2o/s2orc
v3-fos-license
A graphic approach to gauge invariance induced identity All tree-level amplitudes in Einstein-Yang-Mills (EYM) theory and gravity (GR) can be expanded in terms of color ordered Yang-Mills (YM) ones whose coefficients are polynomial functions of Lorentz inner products and are constructed by a graphic rule. Once the gauge invariance condition of any graviton is imposed, the expansion of a tree level EYM or gravity amplitude induces a nontrivial identity between color ordered YM amplitudes. Being different from traditional Kleiss-Kuijf (KK) and Bern-Carrasco-Johansson (BCJ) relations, the gauge invariance induced identity involves polarizations in the coefficients. In this paper, we investigate the relationship between the gauge invariance induced identity and traditional BCJ relations. By proposing a refined graphic rule, we prove that all the gauge invariance induced identities for single trace tree-level EYM amplitudes can be precisely expanded in terms of traditional BCJ relations, without referring to any property of polarizations. When further considering the transversality of polarizations and momentum conservation, we prove that the gauge invariance induced identity for tree-level GR (or pure YM) amplitudes can also be expanded in terms of traditional BCJ relations for YM (or bi-scalar) amplitudes. As a byproduct, a graph-based BCJ relation is proposed and proved. Introduction Gauge invariance has been shown to provide a strong constraint on scattering amplitudes in recent years. It was not only used to solve amplitudes explicitly (see e.g., [1][2][3][4]) but also applied in understanding hidden structures of amplitudes. One interesting application is that a recursive expansion of single-trace Einstein-Yang-Mills (EYM) amplitudes can be determined by gauge invariance conditions of external gravitons together with a proper ansatz [5,6] (the approach based on the Cachazo-He-Yuan (CHY) formula [7][8][9][10] is provided in [11]). Applying the recursive expansion repeatedly, one expresses an arbitrary tree level single-trace EYM amplitude in terms of pure Yang-Mills (YM) ones whose coefficients are conveniently constructed by a graphic rule [12]. This expansion can be regarded as a generalization of the earlier studies on EYM amplitudes with few gravitons [13][14][15][16]. When the relation between gravity amplitudes and single-trace EYM amplitudes is further considered [5], we immediately expand any tree level gravity amplitude as a combination of Yang-Mills ones. It was shown that the recursive expansion for single-trace EYM amplitudes can be extended to all tree level multi-trace amplitudes via replacing some gravitons by gluon traces appropriately [17]. In the explicit form of the recursive expansion proposed in [5], gauge invariance conditions for all but one graviton (the so-called 'fiducial graviton') are naturally encoded into the strength tensors as well as amplitudes with fewer gravitons. Nevertheless, the gauge invariance for the fiducial graviton is not manifest and further implies a nontrivial identity between EYM amplitudes. This identity serves as a consistency condition, which guarantees locality, in the Britto-Cachazo-Feng-Witten (BCFW) recursion approach [18,19] to the expansion of EYM amplitudes [5].
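For orientation, the simplest instance of this expansion, quoted here from the few-graviton results cited above (in notation defined more generally in section 2), involves a single graviton h: A(1, 2, . . . , r ∥ h) = Σ_{σ ∈ {2,...,r−1} ⧢ {h}} (ϵ_h · Y_h) A(1, σ, r), where Y_h denotes the sum of the momenta of gluons preceding h in σ. Imposing gauge invariance, ϵ_h → k_h, forces the left-hand side to vanish and yields Σ_{σ ∈ {2,...,r−1} ⧢ {h}} (k_h · Y_h) A(1, σ, r) = 0, which is precisely the fundamental BCJ relation; the body of this paper establishes that the analogous identities with an arbitrary number of gravitons likewise reduce to combinations of BCJ relations.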
Equivalently, if we take the gauge invariance condition of the fiducial graviton in the pure YM expansion, we immediately obtain a nontrivial identity for color-ordered YM amplitudes whose coefficients are polynomial functions of Lorentz inner products and depend on both polarizations of gravitons and external momenta. Similar discussions on generating relations for YM amplitudes by imposing gauge invariance can be found in [1,3,6,14,17,20,21]. An application of the gauge invariance induced identity is the proof of the equivalence between distinct approaches [12,22,23] to scattering amplitudes in the nonlinear sigma model [21]. Apart from the gauge invariance induced identity, color-ordered Yang-Mills amplitudes have also been shown to satisfy Kleiss-Kuijf (KK) [24] and Bern-Carrasco-Johansson (BCJ) relations [25], which reduce the number of independent amplitudes to (n − 3)! (proofs can be found in [26][27][28][29]). A notable difference from the gauge invariance induced identity is that KK and BCJ relations do not include external polarizations in the coefficients. Thus it seems that the gauge invariance induced identity provides a new relation beyond KK and BCJ relations. However, direct evaluations of several examples [5,6,17] imply that the gauge invariance induced identity is just a combination of BCJ relations. Due to the complexity of the coefficients in the expansion of EYM amplitudes, the examples provided in [5,6,17] cannot be straightforwardly generalized to the arbitrary case. Hence there is still a lack of a systematic study on the full connection between the gauge invariance induced identity and BCJ relations. In this paper, we investigate the relationship between an arbitrary gauge invariance induced identity, which is derived from the expansion of tree level single-trace EYM amplitudes or pure GR amplitudes, and general BCJ relations. We propose a 'refined graphic rule' in which different types of Lorentz inner products ϵ · ϵ, ϵ · k and k · k, constructed from the half polarizations of gravitons ϵ^μ and the external momenta k^μ, are distinguished by different types of lines. Similar graphic rules have already been applied in the study of the CHY formula (see [30,31]). By expressing the coefficients in the gauge invariance induced identity according to the refined graphic rule and collecting those terms with the same structure of the lines corresponding to ϵ · ϵ and ϵ · k (such a structure is called a skeleton) in particular examples, we find that the gauge invariance induced identity can always be expressed by a combination of BCJ relations. To generalize our observations to the arbitrary case, more hidden structures of graphs should be revealed. We show that a skeleton of a graph corresponding to the gauge invariance induced identity always consists of no fewer than two maximally connected subgraphs (components) which are mutually disjoint. Any physical graph, which agrees with the refined graphic rule, can be reconstructed by (i) first connecting these components (via k · k lines) properly into a graph with only two disjoint blocks (called the final upper and lower blocks), and (ii) then connecting the two disjoint blocks by a k · k line. For any given configuration of the final upper and lower blocks, spurious graphs are also introduced for convenience. We prove that the spurious graphs belonging to different configurations of the two disjoint blocks always cancel in pairs. Hence the sum over all physical graphs containing a same skeleton can be given by summing over both physical and spurious graphs with the same skeleton.
Hence the sum over all physical graphs containing the same skeleton can be given by summing over both physical and spurious graphs with that skeleton. A critical observation is that the total contribution of all physical and spurious graphs containing the same configuration of the final upper and lower blocks induces a graph-based BCJ relation, which can be expanded in terms of traditional BCJ relations. As a consequence, the general gauge invariance induced identity from the expansion of tree-level single-trace EYM amplitudes is precisely expanded in terms of (traditional) BCJ relations. A similar discussion applies to the gauge invariance induced identity of tree-level GR amplitudes; the only notable difference is that the transversality of polarizations and momentum conservation must be taken into account in the pure gravity case. With the help of the language of CHY formulas [7-10], we further conclude that the gauge invariance induced identities of Yang-Mills-scalar (YMS) and pure YM amplitudes are combinations of BCJ relations for bi-scalar amplitudes.

The structure of this paper is as follows. In section 2, we review the expansion of tree-level single-trace EYM amplitudes, the gauge invariance induced identity, and the traditional BCJ relations, which will be useful in the remaining sections. The refined graphic rule for the gauge invariance induced identity derived from single-trace EYM amplitudes, as well as the main idea of this paper, is provided in section 3. In section 4, a direct evaluation of a typical example is presented. Inspired by this example (and the examples presented in the appendix), we discuss the structure of skeletons in section 5, the construction of all graphs for a given skeleton in section 6, and the graph-based BCJ relation in section 7. The gauge invariance identities induced from pure gravity and YM amplitudes are discussed in section 8. We summarize the paper and provide some further discussions in section 9. Conventions of notation, more examples, and the proof of the validity of the construction in section 6 are included in the appendix.

Expansion of tree-level single-trace EYM amplitudes, gauge invariance induced identity and traditional BCJ relations

In this section, we review the recursive expansion of single-trace EYM amplitudes, the graphic rule, the gauge invariance induced identity from single-trace EYM amplitudes, as well as the BCJ relations for Yang-Mills amplitudes. Throughout, the position of an element i in a permutation σ is denoted by σ^{-1}(i), and the shuffle product is denoted by ⧢.

Graphic rule for the pure YM expansion of tree-level single-trace EYM amplitudes

Applying the recursive expansion (2.1) repeatedly, we finally express the single-trace EYM amplitude A(1, 2, …, r ∥ H) in terms of pure YM amplitudes:

$$A(1,2,\dots,r\,\|\,\mathsf H)=\sum_{\sigma\in\{2,\dots,r-1\}\,⧢\,\mathrm{perms}\,\mathsf H}C(1,\sigma,r)\,A(1,\sigma,r).\qquad(2.4)$$

In the above equation, we sum over all permutations σ ∈ {2, …, r−1} ⧢ perms H, in which the relative order of the elements of {2, …, r−1} is preserved, and perms H runs over all permutations of the elements of H.
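The shuffle product used in eq. (2.4) interleaves two ordered sets while preserving the relative order inside each. A minimal Python sketch of this enumeration (the function names are ours, purely for illustration):

from itertools import permutations

def shuffles(a, b):
    # All interleavings of sequences a and b that preserve the
    # relative order of the elements inside each sequence.
    if not a:
        yield list(b); return
    if not b:
        yield list(a); return
    for rest in shuffles(a[1:], b):
        yield [a[0]] + rest
    for rest in shuffles(a, b[1:]):
        yield [b[0]] + rest

# All sigma in {2,...,r-1} shuffled with perms(H), for r = 4 and H = {h1}:
for h_perm in permutations(["h1"]):
    for sigma in shuffles(["2", "3"], list(h_perm)):
        print(sigma)   # [2,3,h1], [2,h1,3], [h1,2,3]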
The coefficient C(1,σ,r) for any permutation σ is given by

$$C(1,\sigma,r)=\sum_{F\in G(1,\sigma,r)}C[F],\qquad(2.5)$$

where G(1,σ,r) denotes the set of graphs F constructed by the following graphic rule:

(1) Define a reference order ρ of the gravitons; all s gravitons are then arranged into an ordered set R = {h_ρ(1), h_ρ(2), …, h_ρ(s)}. The position ρ^{-1}(l) of l (l ∈ H) in the ordered set R is called the weight of l. Apparently, h_ρ(s) is the highest-weight node in R.

(2) Pick the highest-weight element h_ρ(s) (the fiducial graviton for the first-step recursive expansion) from the ordered set R, an arbitrary gluon (root) l ∈ {1, 2, …, r−1}, as well as gravitons i_1, i_2, …, i_j ∈ H\{h_ρ(s)}. Considering each particle in the set {l, i_1, i_2, …, i_j, h_ρ(s)} as a node, we construct a chain CH = [h_ρ(s), i_j, …, i_1, l], which starts from the node h_ρ(s) and points towards the node l. The graviton h_ρ(s), the gluon l, and the gravitons i_1, i_2, …, i_j are referred to as the starting node, the ending node, and the internal nodes of this chain, respectively. The weight of a chain is defined as the weight of its starting node. The factor associated to this chain is

$$\epsilon_{h_{\rho(s)}}\cdot F_{i_j}\cdot F_{i_{j-1}}\cdots F_{i_1}\cdot k_l .$$

Remove the starting and internal nodes that have been used, and redefine the reference order accordingly: R → R′ = R \ {i_1, …, i_j, h_ρ(s)}.

(3) Pick l′ ∈ {1, 2, …, r−1} ∪ {i_1, i_2, …, i_j, h_ρ(s)}, the highest-weight element h_ρ′(s′) (the fiducial graviton for the second-step recursive expansion) in R′, as well as gravitons i′_1, …, i′_{j′} ∈ R′\{h_ρ′(s′)}; we define a chain CH′ = [h_ρ′(s′), i′_{j′}, i′_{j′−1}, …, i′_1, l′] starting from h_ρ′(s′) and ending at l′. This chain is associated with the factor

$$\epsilon_{h_{\rho'(s')}}\cdot F_{i'_{j'}}\cdots F_{i'_1}\cdot k_{l'} .$$

(4) Repeating the above steps until the ordered set R is empty, we obtain a graph F in which graviton trees are planted at the gluons (roots) {1, …, r−1}. For any given graph F, the product of the factors associated to all of its chains produces a term C[F] in the coefficient C(1,σ,r) of eq. (2.4). The final expression for C(1,σ,r) is thus given by summing over all possible graphs constructed by the above steps, i.e. eq. (2.5).

Gauge invariance induced identity from tree-level single-trace EYM amplitudes

Gauge invariance requires that an EYM amplitude vanish when the 'half' graviton polarization ε_{h_i} is replaced by the momentum k_{h_i}, for any h_i ∈ H. Consequently, the pure-YM expansion eq. (2.4) must become zero under the replacement ε_{h_i} → k_{h_i}. Since the half polarization ε_{h_i} is contained in the coefficients of eq. (2.4), the discussion splits into two cases:

• If h_i is the highest-weight element h_ρ(s) (the fiducial graviton for the first-step expansion) in the reference order R, it must be the starting node of some chain but cannot be an internal node of any chain. The gauge invariance condition for h_i is not manifest and implies the following nontrivial relation between pure Yang-Mills amplitudes:

$$\sum_{\sigma\in\{2,\dots,r-1\}\,⧢\,\mathrm{perms}\,\mathsf H}C(1,\sigma,r)\Big|_{\epsilon_{h_{\rho(s)}}\to k_{h_{\rho(s)}}}A(1,\sigma,r)=0,\qquad(2.9)$$

in which the coefficient C(1,σ,r) is obtained from eq. (2.5) by replacing ε_{h_ρ(s)} with k_{h_ρ(s)}. In other words, the chains led by h_ρ(s) take the form k_{h_ρ(s)}·F_{i_j}·…·F_{i_2}·F_{i_1}·k_l.

• If h_i is some graviton other than h_ρ(s), it can be either an internal node or the starting node of some chain. The former case vanishes automatically because of the antisymmetry of the strength tensor F^{μν}_{h_i}. The latter case is covered once the gauge invariance condition eq. (2.9) is already satisfied by amplitudes with fewer gravitons, because in this case (i) h_i serves as the fiducial graviton of some intermediate-step recursive expansion in the graphic rule, and (ii) the sum over all graphs containing the same chain structure produced by the preceding steps is proportional to the l.h.s. of the gauge invariance condition (2.9) (with h_ρ(s) → h_i) for fewer-graviton EYM amplitudes (elements on chains produced by the preceding steps are treated as gluons).

Since the gauge invariance condition for h_i ≠ h_ρ(s) is always satisfied once the identity eq. (2.9) holds for fewer gravitons, the discussion can be focused on the case h_i = h_ρ(s).

Traditional BCJ relations. In the conventions used here, the (general) BCJ relations for color-ordered YM amplitudes read

$$\sum_{\sigma\in\beta\,⧢\,\gamma}\;\prod_{i\in\beta}\big(k_i\cdot X_i(\sigma)\big)\,A(1,\sigma,r)=0,\qquad(2.10)$$

where γ is a fixed permutation of the remaining elements and X_i(σ) denotes the sum of the momenta of all elements (the element 1 included) appearing on the l.h.s. of i in the permutation σ (see the appendix). We use B(1 | β, γ | r) to denote the l.h.s. of eq. (2.10). (2.11)
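As a quick consistency check of our own (it matches Example-1 of section 7): for a single graviton, H = {h}, every graph consists of the single chain [h, l], the replacement ε_h → k_h turns its factor into k_h·k_l, and collecting all positions of h in σ gives

$$\sum_{\sigma\in\{2,\dots,r-1\}\,⧢\,\{h\}}\big(k_h\cdot Y_h(\sigma)\big)\,A(1,\sigma,r)=0,$$

where Y_h(σ) is the sum of the momenta of all elements to the left of h in σ (the element 1 included). This is precisely the fundamental BCJ relation, i.e. eq. (2.10) with β = {h}.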
Refined graphic rule and the main idea

In the previous section, we have shown that the gauge invariance condition for a single-trace EYM amplitude induces a nontrivial identity, eq. (2.9), for pure Yang-Mills amplitudes. The difference from BCJ relations is that the coefficients in eq. (2.9) contain not only Mandelstam variables k_a·k_b but also two other types of Lorentz inner products, ε_a·ε_b and ε_a·k_b, which involve the half polarizations ε^μ. This feature becomes manifest once all strength tensors F^{μν}_i on both types of chains, ε_a·F_{i_j}·…·F_{i_2}·F_{i_1}·k_l and k_{h_ρ(s)}·F_{i_j}·…·F_{i_2}·F_{i_1}·k_l, are expanded according to the definition (2.3). As shown by explicit examples in [5], the identity eq. (2.9) can in fact be expanded in terms of the BCJ relations eq. (2.10) without referring to any property of the external polarizations ε^μ_i. Nevertheless, this observation does not trivially extend to the general identity eq. (2.9), owing to the complexity of its coefficients, and the relationship between the gauge invariance induced identity eq. (2.9) and the BCJ relations eq. (2.10) has remained unclear. In this section, we propose a refined graphic rule and outline the main idea for studying this relationship, which will guide the general analysis of the coming sections.

Refined graphic rule for single-trace tree-level EYM amplitudes

In the graphs constructed by the rule of section 2, the three types of Lorentz inner products ε_a·ε_b, ε_a·k_b and k_a·k_b cannot be distinguished. To investigate the general relationship between the gauge invariance induced identity (2.9) and the BCJ relations eq. (2.10), we propose the following graphic rule, in which all strength tensors F^{μν}_a are expanded so that the three types of Lorentz inner products are represented by three distinct types of lines:

(1) Internal nodes. In the original graphic rule, each internal node stands for a strength tensor F^{μν}_a. When F^{μν}_a is expanded according to its definition, F^{μν}_a = k^μ_a ε^ν_a − ε^μ_a k^ν_a, the strength tensor becomes the sum of the two graphs in figure 1. As shown in figure 1 (a), we associate a plus sign with an arrow pointing in the direction of the root; an arrow pointing away from the root is associated with a minus sign (see figure 1 (b)).

(2) Starting nodes of chains. In the gauge invariance induced identity (2.9), each starting node of a chain is associated with either the 'half' polarization ε^μ_a of some element other than h_ρ(s), or the momentum k^μ_{h_ρ(s)} of the highest-weight element h_ρ(s) in the ordered set R. Thus two distinct types of starting nodes are required. Noting that all chains are directed towards roots, we obtain these two types of starting nodes, figure 2 (a) and (b), by removing (…·k) and (−1)(…·ε) from the internal nodes shown in figure 1 (a) and (b), respectively.

(3) Ending nodes of chains. Each internal or starting node of a chain, as well as any root (an element of the original gluon set {1, …, r}), can also be the ending node of another chain. The contraction of an ending node b of some chain with its neighbor on the same chain always has the form (…·k_b). Therefore, the ending node of a chain must be attached by a line of the form k_a·k_b or ε_a·k_b.

(4) Three types of lines between nodes. Contractions of Lorentz indices are represented by connecting together the lines associated to the nodes.
There are three distinct types of lines, shown in figure 3 (a) (type-1), (b) (type-2) and (c) (type-3), corresponding to the Lorentz inner products ε_a·ε_b, ε_a·k_b and k_a·k_b, respectively.

Figure 4. Examples of old-version graphs for the gauge invariance induced identity eq. (2.9). Here the node h_4 denotes k^μ_{h_4}, while the nodes h_1, h_2 and h_3 denote ε^μ_{h_1}, ε^μ_{h_2} and ε^μ_{h_3}, respectively. We color the element r red as a reminder that r always plays the role of the rightmost element in any permutation corresponding to a given graph. Only 1, …, r−1 can be roots.

With the above refinement, the coefficients C(1,σ,r)|_{ε_{h_ρ(s)}→k_{h_ρ(s)}} of the gauge invariance induced identity eq. (2.9) are written as sums over refined graphs, and eq. (2.9) becomes

$$\sum_{\sigma\in\{2,\dots,r-1\}\,⧢\,\mathrm{perms}\,\mathsf H}\Big[\sum_{F\in G(1,\sigma,r)}D[F]\Big]A(1,\sigma,r)=0,\qquad(3.1)$$

where G(1,σ,r) denotes the set of all refined graphs allowed by the permutation {1,σ,r} according to the refined graphic rule, and D[F] denotes the factor associated with a refined graph F. It is worth noting that a given graph F can be allowed by several different permutations.

Examples for the refined graphic rule

We now take the identity induced by the gauge invariance of the amplitude A(1, …, r ∥ h_1, h_2, h_3, h_4) as an example. We assume the reference order R = {h_1, h_2, h_3, h_4} and consider the gauge invariance induced identity eq. (3.1) with ε_{h_4} → k_{h_4}.

Example-1. For any given permutation σ ∈ {2, …, r−1} ⧢ {h_1} ⧢ {h_2} ⧢ {h_3} ⧢ {h_4}, there are old-version graphs of the form figure 4 (a). The refined graph for such a coefficient is shown in figure 5 (a): each chain led by h_1, h_2 or h_3 consists of a single type-2 line, while the chain led by h_4 is a type-3 line.

Example-2. According to the refined graphic rule, the corresponding coefficient is given by the sum of the two graphs figure 5 (b) and (c); their sum constitutes the total contribution of such a graph.

Example-3. According to the refined graphic rule, the coefficient for a given i ∈ {1, …, r−1} is provided by the sum of the four graphs figure 5 (d), (e), (f) and (g).

The main idea

Having established the refined graphic rule, we are now ready to study the relationship between the gauge invariance induced identity eq. (3.1) and the BCJ relations eq. (2.10). In the coming sections, we will prove that the identity eq. (3.1) can be expanded as a combination of BCJ relations. The main idea is the following.

Step-1. Factorization of the coefficients. For any graph F constructed by the refined graphic rule, the coefficient D[F] factorizes as a product of two factors P[F′] and K[F\F′], together with an overall sign (−1)^{N(F)}. Here the skeleton F′ is the subgraph obtained by deleting all type-3 lines from F; the factor P[F′] associated to the skeleton contains only factors of the forms (ε_a·ε_b) and (ε_a·k_b). We use F\F′ to denote the subgraph obtained by deleting all type-1 and type-2 lines from F (i.e. the complement of the skeleton); the factor K[F\F′] contains only Mandelstam variables (k_a·k_b). The overall sign (−1)^{N(F)} depends on the number N(F) of arrows pointing away from the direction of the roots (excluding the arrow attached to the highest-weight node h_ρ(s), since no minus sign is associated to the arrow of figure 2 (b)). For instance, the factor D[F] for the graph of figure 5 factorizes in exactly this way.
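To make the factorization concrete, here is a minimal example of our own. For the chain ε_{h_2}·F_{h_1}·k_1, the expansion via the definition (2.3) gives

$$\epsilon_{h_2}\cdot F_{h_1}\cdot k_1=(\epsilon_{h_2}\cdot k_{h_1})(\epsilon_{h_1}\cdot k_1)-(\epsilon_{h_2}\cdot\epsilon_{h_1})(k_{h_1}\cdot k_1).$$

The second term corresponds to a refined graph F with one type-1 line and one type-3 line, and it factorizes as

$$D[F]=(-1)^{N(F)}\,P[F']\,K[F\setminus F']=(-1)^{1}\,(\epsilon_{h_2}\cdot\epsilon_{h_1})\,(k_{h_1}\cdot k_1),$$

with the skeleton F′ being the single type-1 line and N(F) = 1 counting the one arrow pointing away from the root.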
With this factorization, the l.h.s. of the gauge invariance induced identity eq. (3.1) is expressed as

$$\sum_{\sigma}\sum_{F\in G(1,\sigma,r)}(-1)^{N(F)}\,P[F']\,K[F\setminus F']\,A(1,\sigma,r).\qquad(3.4)$$

Here we emphasize that a given skeleton F′ can belong to different permutations σ ∈ {2, …, r−1} ⧢ perms H.

Step-2. Collecting all terms corresponding to the graphs F ⊃ F′, for any given skeleton F′. When all graphs containing the same skeleton are collected together, the expression eq. (3.4) becomes

$$\sum_{F'}P[F']\Big[\sum_{F\supset F'}(-1)^{N(F)}K[F\setminus F']\sum_{\sigma_F}A(1,\sigma_F,r)\Big],\qquad(3.5)$$

where we sum over (i) all possible skeletons F′, (ii) all possible graphs F that are constructed by the refined graphic rule and satisfy F ⊃ F′ for the given skeleton F′, and (iii) all permutations σ_F for a given F.

Step-3. Finding the relationship between the terms associated with a skeleton and the l.h.s. of BCJ relations. For any given skeleton F′ in eq. (3.5), we will prove that the expression in the square brackets can be written in terms of the l.h.s. of BCJ relations. We first present direct evaluations of simple examples in the next section and extract common features, which are studied systematically in sections 5, 6 and 7.

Direct evaluations

In this section, we verify through an explicit example that the expression in the square brackets of eq. (3.5) can be expanded in terms of the l.h.s. of BCJ relations, and we extract some common features that will be useful for the general proof. More examples with |H| ≤ 3 are included in appendix B.

A typical example

When H = {h_1, h_2, h_3} and the reference order is chosen as R = {h_1, h_2, h_3}, the expression eq. (3.5) contains nine skeletons, all displayed in figure 6. We now consider the skeleton F′ = figure 6 (i) and calculate the expression in the square brackets of eq. (3.5) explicitly. We will find that the sum over all possible graphs F ⊃ F′ in the square brackets of eq. (3.5) can be written as a proper combination of BCJ relations.

The skeleton figure 6 (i) consists of three mutually disjoint components, which contain, respectively, all elements of {1, …, r}, the single node h_3, and the subgraph with the nodes h_1 and h_2. The factor P[F′] for figure 6 (i) is ε_{h_1}·ε_{h_2}, and all possible graphs F ⊃ F′ are collected in figure 7. The total contribution of the graphs figure 7 (a) and (b) is denoted by T_1. Collecting the coefficients (−1)^{N(F)} K[F\F′] for any given σ (see table 1) and summing over all possible σ, we arrive at the total contribution T_2 of the graphs figure 7 (d) and (e).

In order to reorganize T_1 + T_2 in terms of the l.h.s. of BCJ relations, we introduce so-called spurious graphs, figure 7 (c) and (f), which also contain the skeleton figure 6 (i) but are not genuine graphs constructed by the refined graphic rule. The two spurious graphs figure 7 (c) and (f) have the same structure with opposite signs, so they cancel against one another.

Table 1. Relative orders of h_1, h_2 and h_3 in the spurious graphs figure 7 (c) and (f); the relative orders for the graphs in the lower part of figure 7 and their factors are collected in the last four rows.

From this example (and the examples presented in appendix B), the following common features can be extracted.

(1). Structure of skeletons F′. As shown by the example in section 4.1 and the examples given in appendix B, each skeleton consists of at least two mutually disjoint components: the one containing the highest-weight graviton h_ρ(s) (the type-II component) and the one containing all elements of {1, …, r} (the type-III component). Other components (type-I components), each of which always involves a type-1 line, may also appear in a skeleton (see figure 6 (i)).

(2). The final upper and lower blocks.
For any given skeleton, one can properly connect all of its components into only two disjoint blocks: one block contains the highest-weight graviton h_ρ(s) and is called the final upper block U; the other contains the elements of {1, …, r} and is called the final lower block L. Apparently, if a skeleton has only two disjoint components (as in the examples of appendix B), the final upper and lower blocks are just the type-II and type-III components. For skeletons with at least three components (as in the example of section 4.1), one should properly connect the type-I components, via type-3 lines, towards either the type-II or the type-III component. After all type-I components have been treated, the skeleton becomes a graph with only two disjoint parts, which are the final upper and lower blocks (see figure 8).

(3). Physical and spurious graphs. For a given configuration of the final upper and lower blocks U, L, we need to connect U and L together by a type-3 line. If the original skeleton consists of only two components (as in the examples of appendix B), all graphs allowed by the refined graphic rule are obtained by connecting any node a ∈ U and any node b ∈ L\{r} via a type-3 line. If the original skeleton consists of at least three components, one must be more careful: a graph produced by connecting U and L\{r} with a type-3 line may be either physical (in agreement with the refined graphic rule, e.g. figure 7 (a), (b), (d) and (e)) or spurious (not allowed by the graphic rule, e.g. figure 7 (c) and (f)). However, one can associate suitable signs to the spurious graphs such that all spurious graphs cancel in pairs (as in the cancellation between figure 7 (c) and (f)). Thus the total contribution for a given configuration of the final upper and lower blocks can also be obtained by summing over all graphs produced by connecting any node a ∈ U and any node b ∈ L\{r} via a type-3 line.

(4). The sum of all (physical and spurious) graphs for a given configuration of the final upper and lower blocks. The sum of all (physical and spurious) graphs corresponding to a given configuration of final upper and lower blocks can be written as a combination of the l.h.s. of BCJ relations.

In the coming sections, we generalize the above observations to arbitrary cases. In section 5, we prove the general structure of skeletons observed in (1). In section 6, we provide the general rule for constructing all graphs corresponding to a given skeleton (as stated in points (2) and (3)): first construct all final upper and lower blocks U, L, then construct all physical and spurious graphs for each configuration of U, L. The contribution of all (physical and spurious) graphs for given U and L can be arranged as a proper combination of the l.h.s. of BCJ relations, thanks to a so-called graph-based BCJ relation, which is proved in section 7.

General structure of skeletons

To study the general pattern of a skeleton, recall that there are two kinds of chains in a graph (defined by the old-version graphic rule): (i) a chain led by any node a ∈ H\{h_ρ(s)} has the form ε_a·F_{i_j}·…·F_{i_1}·k_b (here b can be either a node on a higher-weight chain or an element of {1, …, r−1}); (ii) the chain led by the highest-weight element h_ρ(s) has the form k_{h_ρ(s)}·F_{i_j}·…·F_{i_1}·k_b in the expansion of the EYM amplitude. We study the structures of these two types of chains in turn.
(i) When all strength tensors F^{μν} are expanded according to the definition eq. (2.3), a chain ε_a·F_{i_j}·…·F_{i_1}·k_b (where b belongs to a higher-weight chain or is an element of {1, …, r−1}) becomes a sum of chains defined by the refined graphic rule. Each term in the sum has the general form of a product of segments built from type-1 and type-2 lines, joined by l type-3 lines; in the language of graphs, such a chain is characterized by figure 9 (a). If l = 0 (so that the only integer is n_{l+1} = n_1 ≥ 0), the chain has no type-3 line. If l > 0, the integers n_i, m_i counting the lines of the successive segments satisfy n_1, n_{l+1} ≥ 0, n_{i=2,…,l} > 0 and m_{i=1,…,l} > 0, and the chain has at least one type-3 line.

(ii) The chain led by h_ρ(s) (ending at b ∈ {1, …, r−1}) can likewise be expanded as a sum of chains defined by the refined graphic rule; each term has the general form characterized by figure 9 (b), with the integers satisfying n_i ≥ 0 and m_{i=1,…,l} > 0 (for l > 0). Since the arrow lines attached to the starting node h_ρ(s) and to the ending node b in figure 9 point in opposite directions, there must be at least one type-3 line in figure 9 (b).

In order to obtain the skeleton F′ of a graph F, we delete all type-3 lines from F. Both types of chains are then in general divided into disjoint sectors, which can be classified as follows.

• Type-I sector: a sector containing a type-1 line (see figure 10 (a)). This sector can also carry type-2 lines whose arrows point towards the two end nodes of the type-1 line. Both types of chains, figure 9 (a) and (b), can contain type-I sectors.

• Type-II sector: a sector containing only type-2 lines whose arrows point towards the starting node of the chain (see figure 10 (b)). Only the highest-weight chain, figure 9 (b), involves a type-II sector. The single node h_ρ(s) with no line (on this chain) attached to it (i.e. figure 9 (b) with n_0 = 0) is regarded as a special type-II sector.

• Type-III sector: a sector containing only type-2 lines whose arrows point towards the ending node of a chain (see figure 10 (c)). Both types of chains, figure 9 (a) and (b), contain type-III sectors. The single node b with no line (on this chain) attached to it (i.e. figure 9 (a) or (b) with n_{l+1} = 0) is a special type-III sector.

According to the refined graphic rule, the highest-weight chain in a graph must have the form of figure 9 (b). This chain contains at least two mutually disjoint sectors, a type-II sector and a type-III sector, and it can also contain type-I sectors. The type-III sector of the highest-weight chain figure 9 (b) must end at a root b ∈ {1, …, r−1}, while all nodes on this chain can be ending nodes of type-III sectors of other chains. For a chain of the form figure 9 (a), the type-III sector can end either at a node of a higher-weight chain or at a root b ∈ {1, …, r−1}, while any node on this chain can be the ending node of the type-III sector of a lower-weight chain. Putting all these sectors together, we conclude that a general skeleton is composed of the types of components listed below.

Figure 11. Here both A and B are type-I components. The kernel of A is given by the nodes h_4 and h_8 with a type-1 line between them (i.e. the factor ε_{h_4}·ε_{h_8}), while the kernel of B is given by h_2 and h_9 with a type-1 line between them (i.e. ε_{h_2}·ε_{h_9}). On each side of the component A, the arrows of the type-2 lines point towards the corresponding end node of the kernel. The component C is the type-II component. The set R = {1, …, r} itself is the type-III component (although only the elements 1, …, r−1 can serve as roots in the graphic rule, we include the last element r in the component R for convenience; a more general type-III component can involve type-2 lines whose arrows point towards the roots).
• Type-I component: a component containing neither the highest-weight graviton h_ρ(s) nor any element of {1, …, r} (see the components A and B in figure 11 for example). It always has a type-1 line in it and may also have type-2 lines pointing towards the end nodes of the type-1 line. A type-I component can be reconstructed by connecting a type-1 line between two separate parts, each containing only type-2 lines. We define the part to which the highest-weight node of a type-I component belongs as the top side; the other part is called the bottom side. The type-1 line together with its two end nodes is defined as the kernel of the type-I component.

• Type-II component: the component containing the highest-weight graviton h_ρ(s) (see the component C in figure 11 for example). This component consists of the type-II sector of the highest-weight chain with possible type-III sectors (belonging to other chains) attached to it. Apparently, a type-II component involves only type-2 lines whose arrows point towards the node h_ρ(s).

• Type-III component: the component containing the set {1, …, r} (see the component R in figure 11 for example). This component is obtained by attaching possible type-III sectors to roots b ∈ {1, …, r−1}. Thus a type-III component contains type-2 lines whose arrows point towards the root set.

All three types of components can be regarded as connected subgraphs in which tree structures carry only type-2 lines pointing towards (i) the kernel (for type-I components), (ii) the highest-weight node h_ρ(s) (for the type-II component), or (iii) the roots (for the type-III component). If the highest-weight chain contains only one type-3 line in the original graph F, while the other chains have no type-3 line at all, the skeleton of F consists of only two mutually disjoint components, the type-II and the type-III components (see figure 6 (a)-(h) for example). With the general structure of skeletons at hand, we construct all graphs for an arbitrary skeleton in the next section.

Constructing all physical and spurious graphs for a given skeleton

As observed in section 4, all graphs corresponding to a given skeleton can be obtained in two steps: (i) first, by attaching all type-I components to either the type-II or the type-III component via type-3 lines in a proper way, one constructs the final upper and lower blocks; (ii) second, for a given configuration of the final upper and lower blocks U, L, one obtains a (physical or spurious) graph by connecting two arbitrary nodes a ∈ U and b ∈ L\{r} via a type-3 line (a toy enumeration of this step is sketched below). In this section, we generalize this observation to the arbitrary case. We first show how to construct the final upper and lower blocks for a given skeleton; all physical and spurious graphs are then obtained by connecting a type-3 line between the final upper and lower blocks. We show that all spurious graphs cancel in pairs. In appendix C, we further prove that the physical graphs constructed in this section precisely match the graphs obtained directly from the refined graphic rule. With these discussions, the expression in the square brackets of eq. (3.5) can be rearranged so that it is expressed as a combination of so-called graph-based BCJ relations, which will in turn be proved to be combinations of traditional BCJ relations.
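As a toy illustration of step (ii) (the labels and node sets below are ours, purely illustrative): each admissible type-3 line corresponds to one pair (a, b) with a ∈ U and b ∈ L\{r}, so the candidate graphs for a fixed configuration can be enumerated mechanically.

from itertools import product

upper = ["h1", "h4"]                # nodes of a final upper block (hypothetical)
lower = ["1", "2", "h3", "r"]       # nodes of a final lower block (hypothetical)

# One candidate graph per type-3 line k_a . k_b, with b != r:
candidates = [(a, b) for a, b in product(upper, lower) if b != "r"]
print(candidates)   # 6 pairs; each still has to pass the physical/spurious test below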
The construction of the final upper and lower blocks for a given skeleton

Assume that a skeleton is composed of N+2 components. As analyzed in section 5, there must be one type-II and one type-III component, denoted by C^II and C^III respectively, so the number of type-I components is N. To construct the final upper and lower blocks, we connect all N type-I components C^I_i, via N type-3 lines, towards either the type-II or the type-III component, so that the skeleton becomes a graph with only two disjoint maximally connected subgraphs U and L. More specifically, the final upper and lower blocks for a given skeleton are constructed as follows.

Step-1. Define the upper and lower blocks U and L as the type-II component C^II (the component containing the highest-weight node h_ρ(s)) and the type-III component C^III (the component containing the elements {1, …, r−1}), respectively. Define the reference order of all type-I components, R_C ≡ {C^I_1, C^I_2, …, C^I_N}, by the relative order of their highest-weight nodes in R = {h_ρ(1), h_ρ(2), …, h_ρ(s)}, which was already introduced in the (refined) graphic rule. Pick out the highest-weight type-I component C^I_N as well as some other type-I components C^I_{a_1}, C^I_{a_2}, …, C^I_{a_i} (not necessarily in the relative order of R_C), then construct a chain of components which starts from C^I_N and ends at either the upper block U or the lower block L; correspondingly, we get two possible structures. Redefine R_C → R_C \ {C^I_N, C^I_{a_1}, …, C^I_{a_i}}, and redefine U and L as the newly obtained upper and lower blocks, respectively.

Step-2. Pick out the highest-weight component remaining in the redefined R_C, together with some other components of R_C, and construct a chain of components towards either the upper block U or the lower block L obtained in the previous step; then redefine R_C, U and L as before.

Step-3. Repeat the above steps until the ordered set R_C becomes empty. In each step, we construct a chain led by the highest-weight component in the redefined R_C, towards either the upper block U or the lower block L obtained in the previous step, and then redefine the upper and lower blocks as the newly constructed maximally connected subgraphs containing the highest-weight node h_ρ(s) and the roots {1, …, r−1}, respectively.

After the above steps, graphs with only two disjoint connected blocks (the final upper and lower blocks U and L) are produced. A simple example was already given in figure 8; a more involved example is the construction of all possible configurations of the final upper and lower blocks for the skeleton of figure 11.

Chains of components. Picking nodes x_j, y_j from the two sides of C^I_{a_j} (j = 1, …, i), a node x_{i+1} ∈ C^I_N, and a node y_0 in the target block (C^II, say), a chain of components which starts from C^I_N, passes through C^I_{a_i}, …, C^I_{a_2}, C^I_{a_1} and ends at C^II is defined by connecting each pair (x_{j+1}, y_j) (for j = i, i−1, …, 1, 0) via a type-3 line. Chains of components have two important features: (1) all the kernels of the starting and internal components must be passed through by the chain (i.e. for a given j = 1, …, i, the two nodes y_j, x_j must come from opposite sides of the component C^I_{a_j}); (2) a chain must start from the top side of its starting component (i.e. x_{i+1} must belong to the bottom side of C^I_N).

Physical and spurious graphs for a given configuration of the final upper and lower blocks

For a given skeleton F′ and a given configuration of the final upper and lower blocks U and L, a graph is obtained by connecting two arbitrary nodes, belonging to U and to L\{r} respectively, via a type-3 line. In general, such a graph may not be allowed by the refined graphic rule (e.g. the spurious graphs figure 7 (c) and (f)).
Therefore, we should analyze whether such a graph is physical or spurious. This can be done by noting the following constraints from the refined graphic rule:

• Assume we have two components C^I and C, where C^I is a type-I component while C is either the type-II component or a type-I component whose weight is higher than that of C^I. The structures shown in figure 12 (a) and (b), in which the chain (of nodes) led by the highest-weight node in C passes through only a single side (the top or the bottom side) of the component C^I (via two type-3 lines), are forbidden: if only the top (or bottom) side of C^I were passed through, there would exist an internal node on that chain attached by two conflicting arrows (for example the node h_8 in figure 12 (c)), a structure forbidden by the definition of the strength tensor F^{μν}. Therefore, only the graphs figure 12 (d) and (e), in which the higher-weight chain passes through both sides of C^I, are allowed (a more explicit example is figure 12 (f)). In other words, if a chain of higher weight passes through a type-I component C^I, the kernel of C^I (e.g. ε_{h_4}·ε_{h_8} for figure 12 (f)) must lie on this chain.

• A chain, say CH_1, starting at the highest-weight node of a type-I component C^I must pass through both sides of C^I (equivalently, through the kernel of C^I). If not (as shown in figure 13 (a), or the more explicit example figure 13 (c)), there would exist another chain, say CH_2, ending at some node a on the top side of C^I that lies on CH_1. The node a would then serve as the ending node of CH_2 while only being able to supply ε_a to CH_2, which conflicts with the refined graphic rule (an ending node must be attached through its momentum).

Recalling that a chain of components (which reflects the chain led by the highest-weight node in those components), as defined in section 6.1, always starts at the top side of its starting component, the structure of figure 13 (a) is automatically avoided when the final upper and lower blocks are connected by a type-3 line. Hence only the nonphysical structures of figure 12 (a), (b) may appear. If a graph obtained by connecting the final upper and lower blocks via a type-3 line does not contain any structure of figure 12 (a) or (b), it is physical; otherwise, it is spurious. As an example, this criterion selects the physical graphs for the skeletons considered above.

Figure 12. Assume that C is the type-II component or a type-I component with a higher weight than C^I. The graphs (a) and (b), in which a higher-weight chain passes through only one side of C^I, are not allowed by the refined graphic rule, because a chain cannot contain conflicting arrow lines towards the root (e.g. graph (c)). Thus if a chain passes through a component, it must pass through both sides of that component; in other words, the kernel of the component C^I must be on that chain (see graphs (d) and (e) as well as the explicit example (f)).

Figure 13. If there is a chain starting at the highest-weight node of a type-I component C^I, this chain must contain the kernel of C^I. In other words, the bottom-side end node of the kernel must be nearer to the root than the top-side one. Thus graph (b) (a more concrete example is (d)) is allowed, but graph (a) (a more concrete example is (c)) is not.

In appendix D, we prove that the physical graphs corresponding to all possible configurations of the final upper and lower blocks of a skeleton F′ precisely reproduce all graphs F ⊃ F′ (in eq. (3.5)) defined by the refined graphic rule.
Figure 14. A typical spurious graph. There are m spurious structures C^I_{a_1}, …, C^I_{a_m} on the path from the highest-weight node h_ρ(s) towards a root. The chains CH^I_1, …, CH^I_{l−1} (CH^I_{l+1}, …, CH^I_m) and the structures attached to them must belong to the final lower (upper) block. The lowest-weight chain CH^I_l among the CH^I_i (i = 1, …, m) can belong to either the final upper block (with sign (−1)^{m−l+1}) or the final lower one (with sign (−1)^{m−l}). Thus two spurious graphs with the same configuration but opposite signs are obtained from two distinct configurations of the final upper and lower blocks.

In the remainder of this subsection, we associate signs to the spurious graphs in such a way that all spurious graphs for a given skeleton cancel out.

Signs of spurious graphs

Spurious graphs may appear when we connect the final upper and lower blocks into a connected graph by a type-3 line, because the chain led by the highest-weight node h_ρ(s) ∈ C^II is only produced at this stage and may contain the structures of figure 12 (a) or (b). To show that spurious graphs from distinct configurations always cancel in pairs, assume for convenience that the path from h_ρ(s) ∈ C^II towards a root passes through the spurious components C^I_{a_i} (i = 1, …, m) in the order C^I_{a_m}, …, C^I_{a_2}, C^I_{a_1}. The chains containing the components C^I_{a_1}, C^I_{a_2}, …, C^I_{a_m} are correspondingly denoted by CH^I_1, CH^I_2, …, CH^I_m and their weights by W^I_1, W^I_2, …, W^I_m; the lowest-weight chain among them is assumed to be CH^I_l. This typical spurious graph is shown in figure 14. The following properties of the chains CH^I_1, …, CH^I_m in a given spurious graph must hold:

(i) The weights must decrease towards CH^I_l from both sides,

$$W^I_1>W^I_2>\dots>W^I_l,\qquad W^I_m>W^I_{m-1}>\dots>W^I_l,$$

because we always connect chains to the upper or lower block in descending order of their weights (as in section 6.1).

(ii) The chains CH^I_1, …, CH^I_{l−1} (CH^I_{l+1}, …, CH^I_m), together with the structures attached to them, belong to the final lower (upper) block (see figure 14). If not, say the chain CH^I_i for some i < l belonged to the final upper block, there would be at least one component (namely C^I_{a_l}) with only a single side on CH^I_i before the upper and lower blocks are connected, in conflict with the construction of section 6.1.

According to the construction process of section 6.1, the lowest-weight chain CH^I_l (and the structure attached to it) among CH^I_1, …, CH^I_m can be connected, via a type-3 line, with either the upper or the lower block defined in the previous steps (see figure 14). Thus the chain CH^I_l (and the structure attached to it) in a given spurious graph can belong to either the final upper block or the final lower block. Correspondingly, the signs of these two spurious graphs, which have the same structure but distinct configurations of the final upper and lower blocks, are (−1)^{m−l+1} and (−1)^{m−l}. As a result, all spurious graphs for a given skeleton must cancel out in pairs.
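Indeed, for each such pair the two signs are opposite regardless of m and l:

$$(-1)^{m-l+1}+(-1)^{m-l}=(-1)^{m-l}\big((-1)+1\big)=0,$$

so the pairwise cancellation holds configuration by configuration.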
The sum over all physical and spurious graphs

Now we are ready to rearrange the expression in the square brackets of eq. (3.5) into a form suitable for studying the relationship between the gauge invariance induced identity eq. (2.9) and BCJ relations. For any skeleton F′, the expression in the square brackets of eq. (3.5) reads

$$I\equiv\sum_{F\supset F'}(-1)^{N(F)}K[F\setminus F']\sum_{\sigma_F}A(1,\sigma_F,r).$$

The sum over all (physical) graphs F containing the skeleton F′ can be organized as the following two summations:

(i) Sum over all possible configurations U ⊕ L of the final upper and lower blocks, constructed as in section 6.1.

(ii) For a given configuration U ⊕ L, sum over all possible physical and spurious graphs F ∈ (U, L), which are obtained by connecting two nodes in the distinct blocks via a type-3 line (recall that the spurious graphs cancel in pairs). The kinematic factor supplied in this step is the k·k factor of the type-3 line between the two blocks, denoted by K[F\U⊕L].

To sum up, I can be expressed as

$$I=\sum_{U\oplus L}I[U,L],\qquad(6.5)$$

where, for a given configuration U ⊕ L,

$$I[U,L]\equiv\sum_{F\in(U,L)}(-1)^{N(F)}\,K[F\setminus F']\sum_{\sigma_F}A(1,\sigma_F,r).\qquad(6.6)$$

Comments on the expression eq. (6.6)

Permutations σ_F. From the graphic rule, one sees that the permutations σ_F for a given graph F ∈ (U, L) do not depend on the types of the lines; they only depend on the relative positions of the nodes in the graph. Therefore, in the following we replace all types of lines in F by dashed lines without arrows when considering the permutations σ_F established by a graph F. For each graph F, the possible permutations σ_F are collected as follows:

• The relative order of the roots is always (σ_F)^{-1}(1) < (σ_F)^{-1}(2) < … < (σ_F)^{-1}(r−1). We set σ^{-1}(r) = ∞ to enforce that the element r is always the last one in the permutation σ_F.

• For tree structures planted at the roots {1, 2, …, r−1}: if two elements a, b ∈ H lie on the same path, which starts from an element of H and ends at an arbitrary root l ∈ {1, 2, …, r−1}, and a is nearer to the root l than b, then (σ_F)^{-1}(a) < (σ_F)^{-1}(b).

• If a node a ∈ H in a connected tree structure T (shown in figure 15 (a)) is nearer to the root than all other nodes of T, and a is attached to M connected sub-tree structures (branches) T_1, T_2, …, T_M (M ≥ 2), whose nodes nearest to a are a_1, a_2, …, a_M correspondingly, then the collection of all possible relative orders of the nodes of T is expressed recursively by

$$T|_a=\big\{a,\;T_1|_{a_1}⧢\,T_2|_{a_2}⧢\cdots⧢\,T_M|_{a_M}\big\}.$$

Rearrangement of eq. (6.6). Since a graph F ∈ (U, L) is obtained by connecting a node a in the final upper block U and a node b in the final lower block L via a type-3 line, the factor K[F\U⊕L] is just k_a·k_b, and the sum over all F ∈ (U, L) becomes a sum over all choices of the nodes a ∈ U and b ∈ L. The permutations σ_F can be understood as follows: (i) Suppose that the tree structures planted at the roots l_1, l_2, …, l_M ∈ {1, …, r−1} in a given (physical or spurious) graph F are T_1, T_2, …, T_M, with nearest-to-root elements a_1, a_2, …, a_M correspondingly (see figure 16). The final upper block U (connected to the node b ∈ L), once all its lines are replaced by dashed lines, must belong to some connected tree structure T_i attached to a root l_i; the permutations σ_F then satisfy the condition (6.10). (ii) Any permutation σ_F satisfying eq. (6.10) can be obtained by shuffling ζ ∈ U|_a (a relative order of the elements of U) with γ (a relative order of the elements of L). Thus all permutations σ_F in eq. (6.6) satisfy

$$\sigma_F\in\big\{1,\;\gamma⧢\zeta,\;r\big\}\Big|_{(\sigma_F)^{-1}(b)<(\sigma_F)^{-1}(a)},\qquad(6.12)$$

for a given γ satisfying eq. (6.11) and a given ζ ∈ U|_a. With these observations in hand, we re-express eq. (6.6) in the form of eq. (6.13), where all γ satisfy eq. (6.11) and ζ ∈ U|_a is the relative order of the final upper block when a is regarded as its nearest-to-root node. For a given U and a given node a ∈ U, N(U) and S(U) are denoted by N(U_a) and S(U_a); note that N(L) is independent of the choice of b in the final lower block L. Since the choices of the nodes a and b are independent of each other, the contributions of a and b in eq. (6.13) factorize, and the factor (−1)^{N(U_a)+S(U_a)+1} in eq. (6.13) depends only on the choice of the node a ∈ U (the node nearest to the root in U) under the first summation in the braces of eq. (6.13).
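The relative orders U|_a and the shuffles ζ ⧢ γ used above can be enumerated mechanically. A minimal Python sketch (the tree is encoded as an adjacency dict; all names are ours, for illustration only):

from itertools import product

def shuffles(a, b):
    # All interleavings of a and b preserving each internal order.
    if not a:
        yield list(b); return
    if not b:
        yield list(a); return
    for rest in shuffles(a[1:], b):
        yield [a[0]] + rest
    for rest in shuffles(a, b[1:]):
        yield [b[0]] + rest

def multi_shuffle(seqs):
    # Shuffle product of several sequences.
    if len(seqs) == 1:
        yield list(seqs[0]); return
    for tail in multi_shuffle(seqs[1:]):
        yield from shuffles(seqs[0], tail)

def rel_orders(tree, a, parent=None):
    # All relative orders T|_a with node a nearest to the root:
    # a comes first, followed by any shuffle of the branch orders.
    branches = [c for c in tree[a] if c != parent]
    if not branches:
        yield [a]; return
    for choice in product(*[list(rel_orders(tree, c, a)) for c in branches]):
        for merged in multi_shuffle(list(choice)):
            yield [a] + merged

# Example: the two-node tree {h1 - h2} gives T|_{h1} = [h1, h2].
tree = {"h1": ["h2"], "h2": ["h1"]}
print(list(rel_orders(tree, "h1")))   # [['h1', 'h2']]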
As shown in the example of section 4, when we collect the coefficients for any given permutation σ ∈ ζ ⧢ γ, the expression in the square brackets of eq. (6.13) becomes

$$\sum_{\sigma\in\zeta⧢\gamma}\big(k_a\cdot Y_a(\sigma)\big)\,A\big(1,\sigma,r\big).\qquad(6.14)$$

Here Y_a(σ) is the sum of the momenta of all elements of γ (the element 1 included) appearing on the l.h.s. of the node a in σ. The factor (−1)^{N(U_a)+S(U_a)+1} in eq. (6.13) depends on the choice of a. In fact, the factors (−1)^{N(U_a)+S(U_a)+1} associated with the graphs corresponding to a = c_1 and a = c_2, where c_1, c_2 ∈ U are two adjacent nodes connected by an arbitrary type of line, always differ by a factor (−1). Replacing the expression in the square brackets of eq. (6.13) by eq. (6.14) and keeping track of these relative signs, we rewrite eq. (6.13) as

$$\sum_{\gamma}\Big[\sum_{a\in U}f_a\sum_{\zeta\in U|_a}\sum_{\sigma\in\{1,\,\zeta⧢\gamma,\,r\}}\big(k_a\cdot Y_a(\sigma)\big)\,A(1,\sigma,r)\Big],\qquad(6.15)$$

where the overall factor (−1)^{N(L)+N(U_c)+S(U_c)+1}, for a fixed c ∈ U in eq. (6.13), has been extracted. The relative signs f_a for the other choices of a are then fully fixed, because the factors for any two adjacent choices of a must differ by a factor (−1). In the next section, we prove that for any given γ the expression in the brackets of eq. (6.15) must vanish, because it can always be written as a combination of BCJ relations.

Graph-based BCJ relation as a combination of traditional BCJ relations

In this section, we introduce the following graph-based BCJ relation:

$$\sum_{a\in T}f_a\sum_{\zeta\in T|_a}\sum_{\sigma\in\{1,\,\zeta⧢\gamma,\,r\}}\big(k_a\cdot Y_a(\sigma)\big)\,A(1,\sigma,r)=0.\qquad(7.1)$$

Here γ is an arbitrary permutation of the elements of {2, …, r−1} not in T, and T is an arbitrary connected tree graph. We use T|_a to denote the relative orders between the nodes of T when the node a is the leftmost one (see section 6.3). For a given graph T, the factor f_a is a relative sign depending on the choice of a, fixed as follows: (i) choose an arbitrary node c and set f_c = 1; (ii) for any two adjacent nodes c_1 and c_2, we have f_{c_1} = −f_{c_2}. In the following, we use B^c_G(1 | T, γ | r) to denote the l.h.s. of eq. (7.1) with the choice f_c = 1 (c ∈ T). We will prove that B^c_G(1 | T, γ | r), and hence eq. (6.15) and eq. (6.5), is a combination of the l.h.s. of the traditional BCJ relations eq. (2.10) (see eq. (2.11)). As a result, the gauge invariance induced identity eq. (2.9) can always be expanded in terms of traditional BCJ relations.

Examples

We now present several examples of the graph-based BCJ relation eq. (7.1).

Example-1. The simplest example is a tree graph T consisting of a single node h_1. The l.h.s. of eq. (7.1) is then nothing but the l.h.s. of the fundamental BCJ relation:

$$B^{h_1}_G(1\,|\,\{h_1\},\gamma\,|\,r)=\sum_{\sigma\in\{h_1\}⧢\gamma}\big(k_{h_1}\cdot Y_{h_1}(\sigma)\big)\,A(1,\sigma,r)=B(1\,|\,\{h_1\},\gamma\,|\,r).\qquad(7.2)$$

Example-2. The next simplest example is a tree T consisting of two nodes h_1 and h_2 with one (dashed) line between them. The l.h.s. of eq. (7.1) in this case reads

$$B^{h_1}_G(1\,|\,T,\gamma\,|\,r)=\sum_{\sigma\in\{h_1,h_2\}⧢\gamma}\big(k_{h_1}\cdot Y_{h_1}(\sigma)\big)A(1,\sigma,r)-\sum_{\sigma\in\{h_2,h_1\}⧢\gamma}\big(k_{h_2}\cdot Y_{h_2}(\sigma)\big)A(1,\sigma,r).\qquad(7.3)$$

Notice that the last term can be replaced by means of eq. (7.4). The second term of eq. (7.4), together with the first term of eq. (7.3), produces the l.h.s. of the traditional BCJ relation eq. (2.10) with β = {h_1, h_2}, i.e. B(1 | {h_1, h_2}, γ | r). Thus eq. (7.3) is finally given by eq. (7.5), a combination of the l.h.s. of traditional BCJ relations. It is worth pointing out that this expression is not unique: exchanging the roles of h_1 and h_2 in eq. (7.5) yields another equivalent expansion, eq. (7.6).
Example-3. Now we consider a graph T with three nodes. The l.h.s. of eq. (7.1) for this graph reads eq. (7.7). The expression in the square brackets of eq. (7.7) can be organized as a shuffle sum, where h_2 ≺ h_1 for a given permutation ξ means that ξ^{-1}(h_2) < ξ^{-1}(h_1). When the first equality of eq. (7.5) from Example-2 is applied, the first and third terms on the last line of the resulting expression are just a combination of the l.h.s. of BCJ relations with β = {h_2, h_3} and β = {h_3}, respectively. The sum of the second term on the last line and the first term of eq. (7.7) is nothing but the l.h.s. of the traditional BCJ relation eq. (2.10) with β = {h_1, h_2, h_3}. Hence eq. (7.7) is finally expanded into two parts, of which B^{(1)}_G has all nodes of T in the β set. As in Example-2, B^{h_1}_G(1 | T, γ | r) can also be expanded in other equivalent forms. For example, consider B^{h_2}_G(1 | T, γ | r), which is expressed by eq. (7.7) together with a minus sign. We keep the first term (the term in which h_2 is taken as the leftmost node of the graph T) in the square brackets of eq. (7.7) and rewrite the other two terms according to eq. (7.2); the resulting combination, eq. (7.12), in which h_2 is in the α set, is a combination of the l.h.s. of graph-based BCJ relations with one-node subtrees, and hence (by eq. (7.2)) also a combination of traditional BCJ relations.

Example-4. The first example of graphs with nontrivial chains is the star graph with four nodes. The l.h.s. of the graph-based BCJ relation eq. (7.1) for this example reads eq. (7.13). According to eq. (7.12), the last term is given by eq. (7.14). From eq. (7.13), we know that the second term of eq. (7.14) is expressed in terms of the l.h.s. of BCJ relations in which h_4 belongs to the α set. Since h_1 is also in the α set (of the traditional BCJ expansion) of that term, the double-shuffle combination of B^{(2)}_G(1, {h_4} ⧢_1 {{h_1} ⧢_2 γ}, r)|_{h_4 ≺ h_1} is already a combination of the l.h.s. of BCJ relations in which both h_4 and h_1 belong to the α set and satisfy h_4 ≺ h_1. The first term of eq. (7.15) reads eq. (7.16). The sum of the last term of eq. (7.16) and the first term of eq. (7.14) is a combination of the l.h.s. of traditional BCJ relations with all four nodes in the β set. The sum of the first term of eq. (7.16) and the last term of eq. (7.15) defines a combination of the l.h.s. of traditional BCJ relations in which h_1 is an element of the α set. The combination of amplitudes can also be expressed in another way, starting from eq. (7.20): applying eq. (7.2), we rewrite the last three terms of eq. (7.20); the sum of their last terms and the first term of eq. (7.19) then gives rise to the alternative expansion.
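The relative signs f_a used in these examples follow from the adjacency rule f_{c_1} = −f_{c_2} alone: on a tree, this fixes f_a = (−1)^{d(a,c)}, with d the graph distance to the reference node c. A minimal sketch (hypothetical labels, ours):

from collections import deque

def signs(tree, c):
    # Assign f_a = (-1)**dist(a, c) on a tree, so that adjacent
    # nodes always carry opposite signs and f_c = +1.
    f, queue = {c: 1}, deque([c])
    while queue:
        x = queue.popleft()
        for y in tree[x]:
            if y not in f:
                f[y] = -f[x]
                queue.append(y)
    return f

# Star graph with center h1 and legs h2, h3, h4 (as in Example-4):
star = {"h1": ["h2", "h3", "h4"], "h2": ["h1"], "h3": ["h1"], "h4": ["h1"]}
print(signs(star, "h1"))   # {'h1': 1, 'h2': -1, 'h3': -1, 'h4': -1}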
The general proof

In general, the l.h.s. of the graph-based BCJ relation eq. (7.1) for an arbitrary tree T can be expanded in terms of the l.h.s. of the BCJ relations eq. (2.10). Specifically, we pick out an arbitrary node a ∈ T and assume that there are N subtrees attached to a, say T^1_a, T^2_a, …, T^N_a, with a_1, a_2, …, a_N the corresponding nodes adjacent to a in T^1_a, T^2_a, …, T^N_a. We will prove that the l.h.s. of eq. (7.1) can be expanded as eq. (7.26). The first term of eq. (7.26) is a combination in which all nodes of T are in the β set; the second term is a proper combination of relations of the form eq. (7.1) in which the node a is always in the α set, and its explicit expression can be determined recursively. In the following, we assume that eq. (7.26) holds for all subtrees T^1_a, T^2_a, …, T^N_a, and prove eq. (7.26) by induction.

To prove eq. (7.26), we first show that B^a_G(1 | T, γ | r) can be expressed through the B^a_G's associated with the subtrees T^1_a, …, T^N_a attached to the node a. Let us spell this out for the example T = figure 18. By definition, B^a_G(1 | T = figure 18, γ | r) can be written as eq. (7.28), where a_1, a_2 and a_3 are the nearest-to-a nodes belonging to the substructures T^1_a, T^2_a and T^3_a, respectively. The total contribution from the nodes of T^1_a (i.e. the contributions with i = 1 in the second term of eq. (7.28)) is given by eq. (7.29); the relative orders T^1_a|_i in eq. (7.29) are those established by the subgraph T^1_a when i ∈ T^1_a is regarded as the leftmost element. For fixed permutations ζ^(2), ζ^(3) and a given permutation ξ ∈ {a, ζ^(2) ⧢ ζ^(3)} ⧢ γ, the terms in eq. (7.29) can be collected, so that the sum of all contributions from T^1_a in eq. (7.29) takes the form of a graph-based combination for the subtree T^1_a. Following similar steps, the sums of the contributions from T^2_a and T^3_a are obtained by the label replacements 1 ↔ 2 and 1 ↔ 3. The expression eq. (7.28) is then organized into terms associated with the subtree structures.

The above discussion generalizes naturally to an arbitrary tree structure T. For any B^a_G, we have eq. (7.34), where the sum over multi-shuffle permutations is abbreviated by ⧢. According to the inductive assumption, each term in the square brackets for a given i can be expressed as in eq. (7.40). Therefore:

• The sum of all the last terms of eq. (7.35) over i = 1, …, N, together with the first term of eq. (7.34), gives eq. (7.41), which is just B^{(1)}_G in eq. (7.26).

• The sum of the first terms of eq. (7.35) and eq. (7.36) over i = 1, …, N defines eq. (7.42), where ξ^(i) is summed over all permutations satisfying eq. (7.37). In the square brackets of eq. (7.42), the first term is precisely the l.h.s. of a BCJ relation eq. (2.10) in which a is in the α set. According to the inductive assumption, the second term is a combination of BCJ relations in which a_i is in the α set, and since a ∈ ξ^(i), a must also be in the α set. Thus the second term of eq. (7.42) is a combination of the l.h.s. of BCJ relations with a_i ≺ a.

All together, we have proven that the l.h.s. of the graph-based BCJ relation is a combination of the l.h.s. of traditional BCJ relations.

Gauge invariance induced identities from GR and pure YM amplitudes

The gauge invariance condition for any graviton a states that the GR amplitude M(1, 2, …, n) must vanish under the replacement ε_a → k_a. If the graviton a belongs to {2, …, n−1}, it can be an element of either A or H for a given split in eq. (8.1). In the latter case, a is treated as a graviton of an EYM amplitude, whose gauge invariance induced identity has been studied in the previous sections. In the former case, the gauge invariance is already encoded in the coefficients W(1,σ,n), owing to the antisymmetry of the strength tensors. Therefore, only the identity induced by the gauge invariance condition for a = 1 or n requires further study.
In the following, we review the old-version graphic rule for the construction of the BCJ numerators n_{1|ζ|n} in eq. (8.4). We then expand these graphs in terms of refined graphs and prove that the gauge invariance induced identity

$$\sum_{\zeta\in S_{n-2}}n_{1|\zeta|n}\Big|_{\epsilon_1\to k_1}A(1,\zeta,n)=0\qquad(8.5)$$

is a combination of BCJ relations. The identity induced by the condition ε_n → k_n can be studied in a similar way. Once all GR, EYM and YM amplitudes are replaced by YM, YMS (Yang-Mills-scalar) and BS (bi-scalar) amplitudes, respectively, the discussion immediately extends to the identities (for BS amplitudes) induced by the gauge invariance of YM amplitudes.

In the old-version rule, the first chain is CH = [n, i_j, i_{j−1}, …, i_1, 1], led by the node n and ending at the root 1; the factor corresponding to this chain is

$$\epsilon_n\cdot F_{i_j}\cdot F_{i_{j-1}}\cdots F_{i_1}\cdot k_1 .$$

Redefine the reference order by R → R′ = R \ {i_1, …, i_j, n} ≡ {ρ′(1), …, ρ′(n′)}. (Although the interpretation of the rule given here differs from the version in [12], the two essentially provide the same construction.)

Step-4. Repeat the above step: in each step, define a chain which starts from the highest-weight element of the newly defined R′ and ends at any node of the previously constructed chains except the node n (the root 1 is included). Redefine R′ by removing the starting and internal nodes used in the current step. The process terminates when R′ is empty. All chains together form a connected tree graph with root 1. Multiplying the factors of all chains and summing over all possible graphs for the permutation ζ, we finally obtain the BCJ numerator n_{1|ζ|n}.

Understanding the gauge invariance induced identity eq. (8.5)

When the condition ε_1 → k_1 is imposed and all strength tensors F^{μν}_i are expanded explicitly according to the definition eq. (2.3), the three types of chains (see figure 3) become distinguished, and the old-version graphic rule turns into a refined rule. The coefficients n_{1|ζ|n}|_{ε_1→k_1} in the identity eq. (8.5) are then given by

$$n_{1|\zeta|n}\Big|_{\epsilon_1\to k_1}=\sum_{F\in G(1,\zeta,n)}D[F],$$

where G(1,ζ,n) is the set of physical graphs defined by the refined graphic rule, we sum over all possible graphs F ∈ G(1,ζ,n), and the factor D[F] denotes the coefficient of a given graph constructed by the refined rule. The identity eq. (8.5) then becomes

$$\sum_{\zeta\in S_{n-2}}\Big[\sum_{F\in G(1,\zeta,n)}D[F]\Big]A(1,\zeta,n)=0.\qquad(8.11)$$

Before exhibiting the relationship between the identity eq. (8.11) and the BCJ relations eq. (2.10), we point out the following features of the graphs F in eq. (8.11):

(i) For any permutation 1, ζ, n, the nodes 1 and n are always the leftmost and the rightmost nodes, respectively. The node 1 always serves as the root of a graph F, and the node n cannot be the ending node of any other chain.

(ii) Since ε_1 is replaced by k_1, the node 1 must be attached by lines with arrows pointing towards it. Hence only type-2 lines (with arrows pointing towards 1) and type-3 lines can be attached to 1.

(iii) Every starting node of a chain is associated with a polarization; hence each of them starts a type-1 line, or a type-2 line pointing towards the direction of the root.

(iv) In contrast to the graphs for the identity induced from EYM amplitudes, graphs that involve no type-3 line at all are allowed; in such a graph, all lines are type-2 lines pointing towards the root 1.
To study eq. (8.11), we classify all graphs into two categories, according to whether the node n is attached by a type-1 line or by a type-2 line. The l.h.s. of eq. (8.11) is then written as the sum of T_1 and T_2:

$$T_1=\sum_{\zeta}\Big[\sum_{F\in G_1(1,\zeta,n)}D[F]\Big]A(1,\zeta,n),\qquad T_2=\sum_{\zeta}\Big[\sum_{F\in G_2(1,\zeta,n)}D[F]\Big]A(1,\zeta,n),\qquad(8.12)$$

where G_1(1,ζ,n) and G_2(1,ζ,n) denote the sets of graphs in which n is attached by a type-1 line and by a type-2 line (pointing towards the direction of the root), respectively. We study these two parts separately.

The T_1 part. If the node n is attached by a type-1 line, the chain CH = [n, i_j, i_{j−1}, …, i_1, 1] contributing to T_1 (see eq. (8.12)) can be described by figure 9 (b), with the nodes b and h_ρ(s) there replaced, respectively, by the node 1 and by the node i_j that is further connected by the type-1 line starting at the node n. According to the discussion in section 5, such a chain must involve at least one type-3 line. As a result, the skeleton (the graph obtained by removing all type-3 lines) of a graph contributing to T_1 must be disconnected. Following a discussion similar to that of section 5, we find that a given skeleton is in general composed of three types of components (as in the EYM case):

• Type-I component: a component containing neither 1 nor n and involving a type-1 line (see figure 19 (a)). In general, such a component can carry type-2 lines whose arrows point towards the ends of the type-1 line (i.e. the kernel).

• Type-II component: the component containing the node n (see figure 19 (b)). Components of this type also have a kernel: the type-1 line with the two ends n and a (a ≠ n, 1). The difference from type-I components is that all possible type-2 lines must lie on the bottom side (the side containing the node a) of this component. The node n is the starting node of the kernel (the type-1 line) of this component, while it cannot be the ending node of any chain; thus no line can end at n.

• Type-III component: the component containing the root 1 (see figure 19 (c)). Such a component involves only type-2 lines whose arrows point towards the root 1.

A typical skeleton for graphs contributing to T_1 with n = 5 is presented in figure 20 (a).

Figure 20. Graphs (a) and (b) are typical skeletons (with three components) for graphs contributing to T_1 and T_2 (see eq. (8.12)) with n = 5, respectively. Here the reference order is chosen as R = {1, 2, 3, 4, 5}. In graph (a), the type-1 line with its two ends 2 and 3, the type-1 line with its two ends 4 and 5, and the single node a are respectively type-I, type-II and type-III components. In graph (b), the type-2 line with its two ends 1 and 2 (the arrow pointing to 1), the type-1 line with its ends 3 and 4, and the single node 5 are respectively type-III, type-I and type-II components.

Having defined these three types of components, we can follow all the discussions of section 6 and appendix D. The only additional point requiring care is that the node n of the type-II component cannot be connected with nodes of other components via type-3 lines, and it must be the rightmost element in any permutation. For any configuration of the final upper and lower blocks, the sum of all physical and spurious graphs is proportional to the l.h.s. of a graph-based BCJ relation. For example, the sum of the graphs figure 21 (a) and (b) provides (−1)^2 (ε_2·ε_3)(ε_4·ε_5)(k_1·k_2) B^4_G(1 | {2,3} | 5), while the sum of the graphs (c) and (d) contributes (ε_2·ε_3)(ε_4·ε_5)(k_2·k_4) B^3_G(1 | ∅ | 5). Therefore, we conclude that the T_1 part can always be written as a combination of the l.h.s. of graph-based BCJ relations eq. (7.1), which have been proven to be combinations of the traditional BCJ relations eq. (2.10).

The T_2 part. Graphs contributing to T_2 involve the node n attached by a type-2 line (pointing towards the direction of the root). To expand T_2 in terms of BCJ relations, we adjust the definition of the skeleton: it is now the graph obtained by removing both the type-3 lines and the type-2 line attached to the node n. Any skeleton under this definition is a disconnected graph. The definitions of the type-I and type-III components are the same as in the T_1 case, while the type-II component is defined as the single node n. Along the lines of section 6, we define the upper and lower blocks as the maximally connected subgraphs containing the node n and the node 1, respectively; at the beginning, the upper and lower blocks are the type-II and the type-III components. Following the discussion of section 6.2, we attach a chain of components to either the upper or the lower block and redefine these blocks at each step. In this case, we must require that the node n can only be connected, via a type-2 line (with the arrow pointing towards the root), by the first chain that is attached to the upper block. When all components have been used, we obtain the final upper and lower blocks.
Graphs contributing to T 2 involve the node n which is connected by a type-2 line (pointing towards the direction of root). To expand T 2 by BCJ relations, we should adjust the definition of skeleton by those graphs which are obtained by removing both type-3 lines and the type-2 line attached to the node n. Any skeleton under this definition is a disconnected graph. The definition of type-I and type-III components are same with those in the T 1 case. The type-II component is defined by the single node n. Along the line pointed in section 6, we define the upper and lower blocks by the maximally connected graphs containing the node n and 1 respectively. At the beginning, the upper and lower blocks are correspondingly the type-II and the type-III components. According to disucssions in section 6.2, we attach a chain of components to either the upper or the lower block and redefine these blocks in each step. In this case, we should require that the node n can only be connected by the first chain that is attached to the upper block, via a type-2 line (with the arrows pointing towards root). When all components are used, we JHEP05(2019)012 (f), we should connect a node b in the final lower block with n via a type-2 line whose arrow points to a. After this step, a graph is a spurious one (for example figure 22 (b), (d) and (f)) if it contains spurious components (whose only a single side is passed through by the path from n to 1), otherwise, it is a physical graph (for example figure 22 (a) (2) If the final upper block consists of only the node n, n can be connected to any node b in the final lower block via a type-2 line n · k a (see figure 22 (c), (d) and (e), (f) for distinct configurations of the final lower block). The permutations established by graphs with all choices of b are same. Thus we can extract a total factor a∈{1,2,...,n−1} n · k a = n · (−k n ) = 0, (8.13) where the transversality of polarization and momentum conservation have been applied. Hence this part must vanish. JHEP05(2019)012 To sum up, the identity eq. (8.5) which is induced by the gauge invariance of gravity can always been expressed as a combination of BCJ relations. Conclusions and further discussions In this paper, a graphic approach to the relationship between gauge invariance induced identity (where coefficients are polynomials of Lorentz inner products · , · k and k · k) and BCJ relations (where coefficients are functions of only Mandelstam variables k · k) is provided. By establishing a refined graphic rule, the three types of Lorentz inner products · , · k and k · k are represented by type-1, -2 and -3 lines correspondingly. To find out the relationship between the identity induced from the gauge invariance of tree level single-trace EYM amplitude and BCJ relations, we collected terms containing the same skeleton (the subgraph obtained by deleting all type-3 lines). We further proved that the sum of physical graphs for a given skeleton can be given by (a) summing over all possible configurations of final upper and lower blocks; (b) for a given configuration, summing over all possible (physical and spurious) graphs constructed by connecting a type-3 line between the final upper and lower blocks. Each summation in (b) is in fact proportional to a graph-based BCJ relation which was further proved to be a combination of traditional BCJ relations. 
Following a similar discussion, we proved that the identity induced by the gauge invariance of tree level GR (YM) amplitudes could also be expanded in terms of traditional BCJ relations. Although, the gauge invariance identity and standard BCJ relations have quite different forms and origins, they can be related when an amplitude is written as the double copy form. As pointed in [25], BCJ relation among Yang-Mills amplitudes can be considered as a result of the Jacobi identities satisfied by BCJ numerators. On another hand, once we write a full color-dressed EYM amplitude or a GR amplitude into a double copy form, (according to discussions in [2]) the gauge invariance conditions for 'half polarizations' living in one copy numerators is ensured by the Jacobi identities for the other copy of numerators, which further result in the BCJ relations for color-ordered amplitudes. Thus the results in this paper can be regarded as a realization of the relationship between the gauge invariance and BCJ duality at amplitude level. We close this by pointing several interesting topics that deserve further study: • First, how to extend the discussions to the identities induced from multi-trace EYM amplitudes? Two types of recursive expansions of tree level multi-trace EYM amplitudes were established in [17]. Correspondingly, there are two identities (mentioned as type-I and II in [17]) respectively induced by the gauge invariance condition of the fiducial graviton and the cyclic symmetry of a gluon trace. The type-II identity for amplitudes with two gluon traces and no graviton can be understood as the graphbased BCJ relation eq. (7.1) in which the tree graph T is a simple chain. However, this observation cannot be straightforwardly generalized to amplitudes with an arbitrary number of traces and gravitons. Thus the relationship between identities induced from arbitrary multi-trace EYM amplitudes still deserves further consideration. JHEP05(2019)012 • Second, how to calculate helicity amplitudes in four dimensions by the refined graphic rule? In four dimensions, the tree level single-trace EYM and GR amplitudes with maximally-helicity-violating (MHV) configurations were shown to be proportional to (Hodges) determinants (see [33] for GR and [34] for EYM) which can be further expanded by graphs [34][35][36]. Such brief formulas cannot be trivially extended to arbitrary helicity configurations. It is worth studying the relationship between the graphic expansion of the Hodges determinants [33] and the refined graphic expansion of amplitudes in the current paper. This may provide hints for finding more simple formulas of helicity amplitudes in four dimensions. • Third, it will be very interesting if one could find out the full relationship between the refined graphic rule and the expansion of CHY formula. Graphic expansions of CHY formula were already discussed in e.g., [30,31,37,38]. In the work [31], a graphic expansion which is similar with the refined graphic rule has been proposed. It seems that there should be a very close relation between the expansion in [31] and the refined graphic rule in this paper. • Last but not least, as shown in [26,27], BCJ relation can be derived from monodromy of string amplitudes. It will be interesting if one can incorporate gauge invariance with string monodromy. JHEP05(2019)012 X i (σ σ σ) for σ σ σ ∈ α α α ¡ β β β and i ∈ β β β: the sum of all momenta of elements in both β β β and α α α (including the element 1) that appear on the l.h.s. of i in permutation σ σ σ. 
Type-I sector of a chain: A sector containing a type-1 line and possible type-2 lines whose arrows point to the two ends of the type-1 line Type-II sector of a chain (the chain led by the highest weight node h ρ(s) ): A sector only containing type-2 lines whose arrows point to the direction of the starting node h ρ(s) of the chain. JHEP05(2019)012 Type-III sector of a chain: A sector only containing type-2 lines whose arrows point to the direction of the ending node of the chain Type-I component C I : component containing a type-1 line and possible type-2 lines whose arrows pointing to the direction of the two ends of the type-1 line. Kernel where the first line in the above expression comes from the last term of eq. (B.4), the second line gets contributions from both the second and the last terms of eq. (B.4), the third line gets contributions of all the three terms of eq. (B.4). Apparently, all the three cases can be uniformly written as k h 2 · X h 2 (σ σ σ) where X h 2 (σ σ σ) is the sum of all elements (including the element 1) appearing on the l.h.s. of h 2 in the permutation σ σ σ. Therefore eq. (B.4) can be reorganized as which can be further arranged as Relative orders of h 1 , h 2 and h 3 Table 2. Consider the graphs in figure 26, which contain the skeleton F = figure 6 (a). According to the refined graphic rule, the factor (−1) N (F ) for all the three graphs in figure 26 is (−1) 0 = 1. For fixed l, l and a fixed permutation β β β satisfying eq. (B.11), the coefficients (−1) N (F ) K [F \F ] for each permutation σ σ σ ∈ β β β ¡ {h 3 } are collected. For example, if the relative order of elements of figure 26 contribute to this permutation. The sum of figure 26 (a) for all m satisfying σ σ σ −1 (m) < σ σ σ −1 (h 3 ) provides k h3 · Y h3 (σ σ σ), while graphs figure 26 (b) and (c) contribute k h3 · k h1 and k h3 · k h2 , respectively. Thus the total factor is k h3 · Y h3 (σ σ σ) + k h3 · k h1 + k h3 · k h2 , as shown by the first row. for each permutation σ σ σ ∈ β β β ¡ {h 3 }. As shown by table 2, coefficients for any permutation σ σ σ ∈ β β β ¡ {h 3 } can be uniformly written as k h 3 · X h 3 (σ σ σ). Thus the expression in the square brackets of eq. (3.5) for the skeleton F = figure 6 (a) (with given l and l ) is expressed by where we summed over all β β β satisfy eq. (B.11). Apparently, the expression in the square brackets in the above equation is the l.h.s. of a fundamental BCJ relation. Relative orders of h 1 , h 2 and h 3 Graphs F ⊃ F Sum of coefficients (−1) Fig. 27 (a), (c) (see table 3). Then sum over all possible σ σ σ ∈ β β β ¡ {h 1 } ¡ {h 3 } and all possible β β β satisfying eq. (B.13). From table 3, we find that the coefficients in the first three rows and the last three rows can be respectively written as −k h 1 · X h 1 (σ σ σ) and k h 3 · X h 3 (σ σ σ). Thus the expression in the square brackets of eq. (3.5) for the skeleton figure 6 (c) (l ∈ {1, . . . , r − 1}) is given by in which the first summation is taken over all β β β satisfying eq. (B.13). The first term in the above expression can be rewritten as Relative orders of h 1 , h 2 and h 3 Graphs Since the sum of the last term on the second line of the above equation and the second term of eq. (B.14) is just a combination of l.h.s. of BCJ relations eq. (B.14) is finally arranged as (3) The skeleton F = figure 6 (e) gives a factor P [F ] = ( h 1 · k h 3 )( h 2 · k h 1 ) and consists of two disjoint components. All possible graphs satisfying F ⊃ F are displayed in figure 28. 
When expressing the last two terms respectively by D General rules for the construction of physical graphs All (physical) graphs F s.t. F ⊃ F in eq. (3.5) should be directly derived from the refined graphic rule. Now we show the general construction rule for all F (s.t. F ⊃ F ), which is based the on refined graphic rule. We further prove that all the physical graphs obtained in section 6 (by connecting final upper and lower blocks) precisely match all the graphs produced by the rule introduced in this section. Further noting that all spurious graphs cancel out, the expression in the square brackets of eq. (3.5) is thus equivalent with eq. (6.5). As pointed in section 5, each skeleton F in eq. (3.5) at least contains a type-II component C II and a type-III component C III . Moreover, N type-I components C I 1 , C I 2 , . . . , C I N may also be involved. To construct all physical graphs for a skeleton with N + 2 components, we should connect these components together via N + 1 type-3 lines in a proper way. By a direct analysis of the refined graphic rule in section 3, this can be achieved by the following rule. The construction rule Step-1. Assuming that the reference order of all type-I and type-II components is given by R C = {C I 1 , C I 2 , . . . , C I N , C II }, we pick out the type-II component C II (the highest-weight component) as well as arbitrary type-I components C I a 1 , C I a 2 , . . . , C I a i (not necessary to preserve the relative order in R C ) and construct a chain CH of components, whose starting component is C II and internal components are C I a 1 , C I a 2 , . . . , C I a i , towards C III : CH = C II , C I a i t (or b) − C I a i b (or t) , . . . , C I a 1 t (or b) − C I a 1 b (or t) , C III . (D.1) On this chain, two adjacent regions separated by commas (e.g. C II and C I a i t (or b) ) are connected via a type-3 line. This chain is naturally physical because the structures figure 12 (a) and (b) are avoided. Redefine the reference order by R C → R C ≡ R C \ C II , C I a 1 , C I a 2 , . . . , C I a i . Step-2. After the redefinition in the previous step, the ordered set R C = R C only consists of type-I components. Now we pick out the highest-weight component, say C I a i as well as arbitrary type-I components C I a 1 JHEP05(2019)012 , . . . , C I a i from R C and construct a chain CH towards a component C ∈ C II , C I a 1 , C I a 2 , . . . , C I a i , C III , which has been used in the previous step, as follows: Step-3. Repeat the above steps. In each step, construct a chain, whose starting component is the highest-weight one in R C redefined by the previous step, towards an arbitrary component which has been used in the previous steps. Then redefine R C by removing the starting and internal components which have been used in this step. This procedure is terminated till the ordered set R C becomes empty. We obtain a physical graph containing the skeleton F . All graphs (all possible chain structures of components and all possible choices of the end nodes of each type-3 lines) together form the set of all physical graphs F for the skeleton F . By this rule, one can reproduce all the physical graphs figure The above construction rule naturally inherits the chain structures of the refined graphic rule. More specifically, the chains of components are constructed by keeping track of the chains led by the highest-weight nodes in them. 
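As a side illustration of the construction rule above, the following Python sketch enumerates the component-level chain structures implied by Steps 1-3. It is only a schematic abstraction under simplifying assumptions: components are treated as opaque labels ordered so that the highest-weight one sits last in the reference order, the choice of top/bottom ends of type-I components and of the specific nodes joined by type-3 lines is ignored, and the names CI1, CI2, CII, CIII are placeholders rather than notation taken from the paper.

from itertools import permutations

def ordered_subsets(items):
    # all ordered selections (including the empty one) drawn from 'items'
    for r in range(len(items) + 1):
        for p in permutations(items, r):
            yield list(p)

def chain_decompositions(ref_order, used, first=True):
    # ref_order: unused components in reference order (highest weight last)
    # used: components already placed on previously constructed chains
    if not ref_order:
        yield []
        return
    leader = ref_order[-1]          # the highest-weight unused component leads the next chain
    rest = ref_order[:-1]
    for internal in ordered_subsets(rest):
        # the first chain must end at CIII; later chains may end at any already used component
        for end in (["CIII"] if first else sorted(used)):
            chain = [leader] + internal + [end]
            new_used = used | set(chain)
            new_ref = [c for c in rest if c not in internal]
            for tail in chain_decompositions(new_ref, new_used, first=False):
                yield [chain] + tail

# toy skeleton: two type-I components plus the type-II and type-III ones
ref_order = ["CI1", "CI2", "CII"]   # CII carries the highest weight
for decomposition in chain_decompositions(ref_order, used={"CIII"}):
    print(decomposition)

In the full rule, each such component-level chain is further dressed by choosing which side (top or bottom) of every internal type-I component the chain enters and leaves and by choosing the nodes that the connecting type-3 lines attach to; the sketch only enumerates the coarse chain structures.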
In the following, we prove that there exists a one-to-one correspondence between all graphs F (F ⊃ F ) obtained by the above rule and all graphs constructed by connecting all possible configurations of the final upper and lower blocks corresponding to the same skeleton F . The equivalence between the construction rule and the approach in section 6 Any graph constructed by the rule given in the current section can be understood as follows. We define the upper block by U = C II , the lower block by L = C III and the reference order R I C ≡ R C \ C II = {C I 1 , C I 2 , . . . , C I N }. For a given graph, we find out the chain led by the highest-weight type-I component C I N in the reference order R I C . According to the construction rule, this chain can be ended at any component on the chain led by the type-II component C II because the weight of C II is higher than C I N . Since the chain that is led by C II starts from C II , ends at C III and can also have possible internal type-I components, the ending component of the chain led by C I N could be (i) U = C II (figure 32 (a)), (ii) L = C III (figure 32 (b)) or (iii) any internal type-I component of the chain led by C II (figure 32 (c) and (d)). For the case (i), we redefine the upper block by the chain U → U = C I N t − C I N b , . . . , U = C II that starts at C I N and ends at the upper block U = C II . For the case (ii), we redefine the lower block by the chain L → L = C I N t − C I N b , . . . , L = C III that starts from C I N and ends at the lower block L = C III . JHEP05(2019)012 For the case (iii), if the chain led by C I N (defined by the construction rule in the current section) has the structure shown by figure 32 (c) i.e. this chain ends at one side C I i b/t of a component belonging to the chain led by C II , while the opposite side C I i t/b is connected to the part containing C II via a type-3 line, we redefine the upper block by the chain (defined in section 6.1) U → U = C I N t − C I N b , . . . , C I i b − C I i t , . . . , U = C II . Contrarily, if the chain led by C I N has the form figure 32 (d), we redefine the lower block by the chain (defined in section 6.1) L → L = C I N t − C I N b , . . . , C I i t − C I i b , . . . , L = C III . After finding out the chain led by C I N defined in section 6.1, we remove the starting and internal components of this chain from the reference order R I C ≡ R C \ C II = {C I 1 , C I 2 , . . . , C I N }. Next, we find out the chain that is led by the highest-weight component in the redefined R I C by following the previous step but using the redefined R I C , U and L . Repeating these steps until the ordered set R I C becomes empty, we get graphs with only two disjoint maximally connected subgraphs: the final upper and lower blocks. Then the graph is finally given by connecting the final upper and lower blocks U and L via a type-3 line such that there is no unphysical structure like figure 12 (a), (b). This description of the physical graphs obtained by the construction rule precisely agrees with the construction of physical graphs in section 6. Conversely, any physical graph constructed according to the method in section 6 can also be obtained by the construction rule provided in this section. We define reference order R C ≡ {C I 1 , C I 2 , . . . , C I N , C II }. One can always find out a path from the highest-weight component C II in R C to C III . This path can be considered as the chain C II , . . . , C III which starts at C II and ends at C III . 
Redefine the ordered set R C by deleting the starting and internal components of this chain. In the same way, we find out the chain led by the highest-weight component of the reference order R C that is defined in the previous step and redefine R C again. Repeat these steps until R C becomes an empty set. Then we find that a physical graph constructed in section 6 can be obtained by the rule in this section. Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
2023-01-21T14:44:17.085Z
2019-05-01T00:00:00.000
{ "year": 2019, "sha1": "16e21e349dd3b8071f032a44b0213d31b060aa18", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP05(2019)012.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "16e21e349dd3b8071f032a44b0213d31b060aa18", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
112956717
pes2o/s2orc
v3-fos-license
Mechanical grading in PGI Tropea red onion post-harvest operations The growing interest expressed by consumers in food quality, and in the linkage of food products to their territory of origin, has led producers to adapt to the continuously rising demand for "typical products" and to look for new and more efficient production and marketing strategies. An emblematic case is the Tropea red onion which, as a typical product, plays an important role in the economic and rural development of the territory to which it is linked. The organoleptic features offered by the PGI-certified "Tropea Red Onion" (Calabria) must also be matched by the quality of the services that accompany its processing. The application of technology to post-harvest operations has certainly contributed to making all processing tasks faster and less tiring. The main problem related to the mechanization of Tropea red onion post-harvest operations lies in the removal of the various layers of the external tunic, which makes it impossible for optical or electronic graders to perform this task satisfactorily, since the sensors are not yet able to separate the "bulb" from its involucre. In this context, the present study assesses the productivity of three different machines used for grading round Tropea red onions and determines their work efficiency. The analysis highlighted the ability of the studied machines to ensure a high work capacity while maintaining a high level of precision during the grading process. Such precision reduces the labour required, speeds up the processing chain, increases the annual use of the machines and, consequently, allows processing cost savings. For a more profitable employment of such graders it is, however, necessary, on the one hand, to properly train the technicians responsible for managing the processing plants and, on the other hand, to rely on a technical assistance network able to serve users at short notice. 
Introduction Nowadays, the modern agri-food chain requires an increasingly advanced planning that aims to improve and increment productions qualitative level (Cavalli et al., 2011).The added value associated to the "territory" of some products, such as the Red Onion of Tropea (Calabria-Southern Italy), can therefore assume a great importance and contribute considerably to rural development and agricultural diversification (Gulisano et al., 2000).In addition, organoleptic features provided by this product that has PGI label (EC, 2008), should be connected as well to the quality of services that accompanies its processing and marketing (Fafani et al., 1997).Such scenario should lead the agri-food industries to strengthen their productive systems and modify the current production, distribution and marketing settings (Fabbris, 1989), that are still not suitable to supply markets of typical products.Consumers' expectations and consequently global markets require standardized and homogeneous products with quality and size tolerances tending to zero.Moreover, productions should be characterized by a continuity of supply that have a strict compliance with delivery time, as well as bargaining forms for consistent provisions programmable for the medium and long terms.Regardless intrinsic qualitative features that provides the red onion of Tropea (nutritional, organoleptic and sanitary ones), the optimization of some post-harvest operations such as those related to its grading can certainly contribute to make faster and less burdensome this processing phase.Diverse types of machines and plants are available to carry out this operation (Amirante et al., 1988;Baraldi et al., 1998;Peri, 1986;Oberti et al., 2001;Mignani, 2001).But, the main problem concerns the removal of the external layers of the "protection coat", which make difficult to employ optical or electronic graders in a satisfactory way.In this context, the present experimental study has been achieved in order to analyze one of the most important post-harvest processing phases, which is grading, and assess work efficiency of Tropea red onion mechanical graders, considering productive outputs and highlighting critic points and optimization possibilities of the employed machines. Materials and methods Experimental trials have been carried out in two firms, indicated in this paper as A and B, among the most advanced technologically within Tropea red onion production area.Processing chain in these firms is almost standardized.Grading, that is products sorting according to their size (Menesatti, 2000;Ortiz-Cañavate et al., 2002), begins with manual sorting of the bulbs and the subsequent "tailing" that consists in the removal of the dried leaves, also known as tail, at about one centimeter from the bulb.The process then, continues by placing the onions in the graders, to which follow weighing and packing operations.Grading in firm A is achieved by mean of two different machines: the first one (Figure 1) presents three processing lines, composed of opposite and divergent pairs of counter-rotating endless belts.The bulbs move on between the rollers, whose difference creates a growing gap through which the onions fall according to their size, inside of hoppers, from where they are then sent in apposite bins. The second machine, however, is a continuous cycle rollers type composed of a sequence of forced rotation bobbins pairs that create a gap equal to onions size between two consecutive pairs, thanks to the inclination they undergo (Figure 2). 
The bobbin system ensures the continuous and independent adjustment of the grading size in each separate section, thus covering the largest dimensional range, while the forced rotation of the bobbins ensures the ideal positioning that enables the grading of each single product. Firm B, however, employs a grade screen sizer (Figure 3): a conveyor belt carries the products to the sorting zone, where metallic cylinders with holes of different dimensions are set up. While vibrating, they grade the onions according to their size and then unload the processed products into the appropriate bins. The trials were conducted to determine the grading accuracy of the considered processing lines. A series of bulbs of different sizes was selected in order to verify the influence of product size on precision. The bulbs were numbered and the maximum diameter of the transversal section of each one was measured manually with a Vernier caliper. This diameter is commonly known as the maximum "equatorial diameter", although the largest transversal section of the bulb is rarely found in the median zone. Subsequently, the onions underwent a normal grading cycle in order to assess the machines' working efficiency. Working times and grader performances were also analyzed according to the C.I.O.S.T.A. classification requirements (Bolli et al., 1987), taking the starting of the machines as the beginning of the trials and the end of product unloading as the final point. Firm A As reported previously, this firm has two onion graders, the first with inclined belts and the second with rollers, both using the same four dimensional classes (grading issues).
Figure 2. Onion grader with rollers.
Graphic 2 reports the average size of the graded onion samples, together with the respective standard errors. It can be observed that onions belonging to the lower dimensional classes are hardly identified correctly by either grader, while onions belonging to the most frequent commercial classes, that is, equal to or larger than 70 mm, are graded more accurately regardless of grader type. According to Graphic 3, the highest grader efficiency is reached when the ratio between incorrectly graded onions and the total graded onions for each dimensional class is close to zero, which expresses the capability of the grader to recognize the highest number of onions belonging to a specific dimensional class. In general, a high accuracy was recorded for both graders, even if the most reliable one was the grader with inclined belts. It should be noted that for the extreme calibres there was some difficulty in recognizing the correct class; the highest accuracy was obtained for the 71-90 mm dimensional class. The working times measured during the trials for both graders of firm A are shown in Table 1. The reference timing unit corresponded to the number of minutes required to grade one hundred kilograms of product. The operative time could be calculated as well. 
These data, show that the lowest operative working time (TO), equal to 3.20 min/100 kg, was recorded for onion grader with inclined belts, however, the grader with rollers presented almost the double value (5.43 min/100 kg).Idle time has not been registered, showing so the good functioning of both graders and the ability of laborers charged of processing.From working time values, it was possible to obtain working capacity and productivity for the analyzed graders, whose values are reported in Table 2, taking into account the employment of three processing units for each machine.Operative working capacity of the grader with inclined belts according to TO had a value of 1,800 kg/h while operative working productivity (PO) had a value of 600 kilograms per hour per operator (kg/hop).For the grader with rollers, these parameters assumed respectively the values of 1,040 kg/h for working capacity and 349 kg/hop for operative working.Therefore, it clearly emerges that the grader with inclined belts is better than the other one in terms of both working capacity and productivity. Firm B In this firm, there are six dimensional classes for onions grading, divided as follows: 25/40, 40/50, 50/60, 60/70, 70/80, 80/100.Graphics 4 and 5 indicate that the greatest number of graded onions appertained to dimensional classes having the caliber between 60 and 80 mm.Furthermore, product size differences between the two firms can be observed, processed onions in firm B being smaller. Graphic 6 reports however the onions grade screen sizer efficiency.A high accuracy is observed even in this case.But differently from firm A, here, there are some difficulties for the recognition of the correct caliber of big size onions, however, the highest grading precision is obtained for the smallest dimensional class 25-50 mm. Table 3 show the recorded effective and operative working times during onions grading by mean of the screen sizer. These data, reveal that operative working time (TO) of the last grader, equal to 6.25 min/100 kg, is the highest one.Indeed, although still technologically valid, it represents a less recent model than the previous two graders.However, There were not idle times, highlighting once more the good functioning of the machine and the ability of laborers that accompany graders.From time values, it was therefore possible to obtain work capacity and productivity of the grade screen sizer, whose values are reported in Table 4, considering the employment of three working units.Operative working capacity (according to TO) had a value of 935 kg/h while, operative working productivity (PO) was equal to 311 kg/hop. 
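The working capacities and productivities quoted in Tables 2 and 4 follow from the measured operative times by a simple unit conversion, illustrated by the short Python sketch below. The figures it prints differ slightly from the tabulated ones (e.g., 1875 kg/h versus the reported 1,800 kg/h for the inclined-belt grader), presumably because the published values also account for rounding or additional time components; the sketch is only meant to show the arithmetic, with three operators per line as stated in the text.

def operative_capacity(minutes_per_100kg):
    # operative working capacity in kg/h from a time expressed in min/100 kg
    return 100.0 * 60.0 / minutes_per_100kg

def operative_productivity(capacity_kg_h, operators=3):
    # operative working productivity in kg per hour per operator
    return capacity_kg_h / operators

for label, to in [("inclined belts", 3.20), ("rollers", 5.43), ("screen sizer", 6.25)]:
    cap = operative_capacity(to)
    print(label, round(cap), "kg/h,", round(operative_productivity(cap)), "kg/h per operator")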
Conclusions The analysis conducted on onion post-harvest operations highlighted that the employed graders are able to ensure a high working capacity while maintaining notable accuracy during grading and weighing operations. Such precision reduces the manpower required and speeds up the processing chain, increasing the annual utilization of the machines and saving processing costs. It should be noted that the use of two processing lines in firm A is excessive, given that the grader with inclined belts alone is sufficient to perform this operation rapidly and accurately. The data obtained therefore support the managerial decision to replace the grader with rollers with additional graders with inclined belts. The first models of optical grading machines are beginning to be employed; however, they do not represent an easy solution for onion grading, because the external protective coats can strongly interfere with the reception and transmission sensors. For these reasons, optical systems do not fully meet the accuracy thresholds declared by their manufacturers, although they are among the most advanced and accurate systems available. Recovering competitiveness requires the valorization of typical products, accompanied by the rigorous and complete application of high quality standards, excellent presentation and varied assortments. The introduction of graders such as those analyzed in this study can certainly contribute to the accomplishment of the above objectives, since they have features suitable for the optimization of post-harvest processes. Table 1. Graders' average working times in firm A. Accessory time; TAS: Accessory time for unloading; TAC: Accessory time for handling; TO: Operative working time; TME: Evitable idle time; TMI: Inevitable idle time.
2019-04-14T13:06:31.241Z
2013-09-08T00:00:00.000
{ "year": 2013, "sha1": "453e0b839fb7e68a5de9b2de6762e829f3826083", "oa_license": "CCBYNC", "oa_url": "https://www.agroengineering.org/index.php/jae/article/download/jae.2013.s2.e63/267", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "453e0b839fb7e68a5de9b2de6762e829f3826083", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Engineering" ] }
243958123
pes2o/s2orc
v3-fos-license
Ecological risk evaluation in bottom-surface sediments and sub-surface water in the subtropical Meghna estuarine system The assessment of elemental contamination is an emerging research topic worldwide. Metals become hazardous to the environment and to human health when their concentrations exceed tolerable levels. In this research, 12 elements (Mn, Ni, Cu, Zn, As, Se, Sr, Co, Pb, Fe, Rb, and Ti) were assessed by energy-dispersive X-ray fluorescence (EDXRF) in water and sediment samples from four different spawning grounds of Tenualosa ilisha at the confluence of the Meghna River in Bangladesh. A comparative analysis across the four sampling spots, i.e., Chandpur, Bhola, Sandwip, and Hatiya, was performed for the first time, and all applicable risk indices were evaluated. Several risk indices were calculated to determine the degree of sediment pollution for the 12 elements, e.g., the degree of contamination (Cd): 6.5-7.01, the modified degree of contamination (mCd): approximately 0.7, and the pollution load index (PLI): 0.45-0.51; all of these indices indicated low or baseline levels of pollution. According to the enrichment factor (EF) computation, slight enrichment of the examined metals, except for Pb and Zn, was found. In addition, the ecological risk factor (Er) followed the order Cu > Pb > Zn among the four stations, corresponding to a low pollution level. Moreover, based on the risk index (RI), a spatial gradient of metal accumulation was observed among the four spots: Bhola < Sandwip < Hatiya < Chandpur. Strong positive correlations (p < 0.05) between Ca and Fe, Ti and Fe, Ti and Mn, Mn and Fe, and Fe and Sr were observed, while Ca and Co were strongly negatively correlated (r = -0.60, p < 0.05). Cluster analysis yielded asymmetrical clusters among the sampling stations. This study recommends assessing heavy metal concentrations in biological samples, particularly in Hilsha fish. Introduction Trace metals are inorganic elements known as micronutrients. These nutrients are useful and necessary for a person in minute concentrations (Smith and Garg, 2017), but if their levels in the human body exceed the optimum, they become toxic. Zinc (Zn), copper (Cu), selenium (Se), chromium (Cr), cobalt (Co), iodine (I), manganese (Mn), and molybdenum (Mo) are the primary and vital trace elements that humans need in small amounts (Wada, 2004). For example, iron is an essential micronutrient for the human body because it transports oxygen in the blood. In addition, trace elements play vital roles in enzymatic activity, various metabolic processes, hemopoiesis, etc. Calcium helps to maintain bone and joint health, while potassium is crucial for the body's ionic balance. These trace metals are highly sensitive in their behaviour: extremely low or extremely high levels may affect humans. When concentrations exceed the required level, they can cause negative effects such as goitre, gout, liver/kidney dysfunction, growth retardation, hypothyroidism and other thyroid disorders, CNS disorders, etc. (Wada, 2004). Trace metals account for about 0.02% of total body weight, while major elements account for about 96% (Wada, 2004). Some are considered heavy metals, with densities of 3-7 g/cm3, but most researchers support the criterion of a specific gravity equal to or greater than 5 g/cm3 (Duffus, 2002). 
Some elements can be considered carcinogenic (e.g., Hg, Pb, Cd, Cr, As, etc.), and some are accounted as non-carcinogenic (e.g., Cu, Se, Mn, etc.). Besides, these elements are released into the environment by natural (e.g., rock, weathering, river erosion, volcanic eruptions, etc.) and human-made causes (e.g., industrial, agricultural, medicinal, domestic and atmospheric sources, mining, foundries and smelters, and other metal-based industrial operations) (Christophoridis et al., 2009;Jordao et al., 2002;Lim et al., 2016). Although heavy metal contamination happens naturally, a certain amount of metal remains on the earth's surface. Besides, metal contamination can be increased a considerable amount due to anthropogenic practices as well. Metals pose a risk to the atmosphere because of their characteristics as poisonous, non-biodegradable, and long-lasting. Additionally, it can be dangerous to our CNS and death in extreme cases. Not even the metals are lost after the cooking. Heavy metal causes degradation of the natural quality, deterioration of the ecosystem's health, and threatens near or near the average metal concentration level. These are therefore spread via the transportation phase throughout the world. Metals are presumably first transported to water, then adsorbed, and finally deposited in the sediment (Hakanson, 1980). Sediments, including metals and metalloids, have often been used as a concierge of chemical pollution. It is the primary sinker of all metals and or depositors (Joksimovic et al., 2011;Luoma and Bryan, 1981). For this reason, it is highly appreciated to examine the contamination status of the sediments of the Meghna river. But metal can come through different processes to the watery environment, such as upwelling, bioturbation, etc. However, a water sample was considered to collect to analyse trace and heavy metals in this research. Metals have been incorporated and transferred from water and sediment into the biological samples including fish, molluscs, crustaceans, and seaweed samples through bioaccumulation (i.e., it is the progressive accumulation of contaminants in an organism, such as heavy metals) and biomagnifications (i.e., it is a mechanism by which a contaminant (such as a heavy metal) raises its concentration as it passes across the food chain mostly in tissues) process. Furthermore, ecological risk could be increased, and human health risk might be accelerated by ingesting contaminated species. (Ali and Khan, 2019). But this research could be extensive and impressive if biological samples were also accounted for determining the metal transfer ratio from abiotic to biotic environment; this was outside the scope of the study. However, for understanding the state of aquatic health, water quality parameters were recorded. For example, pH indicates whether the water is acidic or alkaline. Aquaculturists suggest slightly base or neutral pH for aquaculture (6.5-8.5) (Towers, 2015). When the pH value falls from the neutral condition, ocean acidification occurs, and even coral bleaching has destroyed the calcified organisms' body cover. Dissolved oxygen (DO) is an essential quality for the living of aquatic species. The basic primary standard for DO for fish growth is five (5) ppm, except for catfishes. Catfish can survive with their accessory respiratory organs even at two (2) ppm DO, and even some lungfish can suck the oxygen out of the air (Towers, 2015). The recommended DO range is 5 ppm for tropical freshwater and marine fish (Mallya, 2007). 
Temperature is also an essential factor for fish production. It also influences the metabolism of the fish species. The tropical fish species average ambient temperature varied from 24-32 C (Towers, 2015). Salinity is the aspect where the effect of seawater in the environment is known. The saltiness of the brackish habitat varied between 5 and 20 ppt. But in some situations, the set of values might be changing from one area to another. The sea region's total salinity is 34.7 ppt, while the freshwater ecosystem is 0 ppt (Ridgway, 1969). Water alkalinity and hardness are considerations for determining water quality, with a typical value of 40-70 mg of calcium carbonate per litre (Towers, 2015). All the criteria of water quality usually have the standard level within the estuarine environment. The subject of estuarine pollution is of great concern to scientists, local stakeholders, and authorities. Contamination of sediments by heavy metals is a global necessity with a significant proportion in developing countries. The estuarine environment is often regarded as the productive zone in which air-sea gas exchange, nutrient mixing, and water turbulence will continue to be active (Kirk Cochran, 2014). The lower part of the Meghna River estuary is a popular area for Tenualosa ilisha (Hilsha, Bangladesh's national fish) breeding ground (Hossain et al., 2020a). Nearly three-fourths of the river and tributaries are related to that system of waterways. Thus, all contaminants may mix with this channel, estuary, and eventually discharge into the Bay of Bengal. Water ecological parameters for Hilsha fish are temperature (21-30 C), pH (6-7.5), salinity (0-30), and DO (4-6 ppm) (Hossain et al., 2019). Owing to this, it is a burning issue to determine the pollution status of the Meghna estuary. The national fish named Hilsha (Tenualosa ilisha) fish migration route is also in this river system. Hilsha is an anadromous fish, and it is migrated from sea area to freshwater for spawning. After hatching and passing childhood, they may again move to the marine area for further development from juvenile to adult. Consequently, Meghna estuarine system is the focal point of pollution assessment for protecting our Hilsha fish spawning and growth development. Hilsha production may deteriorate in some years in Bangladesh, where trace metal pollution might be the one reason. Further research is suggested to determine whether heavy metal pollution has any effect on Hilsha production in Bangladesh. This research collected the water and sediment samples (ecological indicators) from the four different spots in the Meghna estuary. Our main intention was to determine the condition of ecosystem health. However, the specific objectives of the research are given below-To find out the estuarine water quality parameters To evaluate the level of heavy metal effluence in sediment and surface water of Meghna estuary. To assess the sediment pollution indices To determine the ecological risk factor and index using the established equations To analyze the recorded data with international standards (e.g., sediment quality guidelines (SQG), USEPA, etc.) and other authors Site selection The study was conducted on the lower part of Meghna Estuary, connected to the northern Bay of Bengal. The GBM (Ganga, Brahmaputra, and Meghna) river system is the most extensive. Among the three rivers, the Meghna river is in the lower part and connected to the Bay of Bengal. 
Almost all the rivers and tributaries' of Bangladesh and even Indian rivers are connected to the Meghna river (Li et al., 2011). Besides, all the sediment and wastage falling have a final destination: rivers, including the Meghna river, ultimately to the Bay of Bengal. So, there is a chance to pollute the environment. Apart from these, the selected site has significant value for considering the spawning ground and habitat of the national fish of Bangladesh named Hilsha (Tenualosa ilisha). Considering the Hilsha migration path emphasizing spawning, four important spots and three sampling points from each site were ultimately chosen (12 sampling points in total) in the Meghna river named Chandpur, Bhola, Hatiya, and Sandwip. Besides, 1.5-2 km distance was maintained among the sampling points. The detail of the sampling area map is described in Figure 1. Water quality parameters' data Water multi-parameters were measured by hand-held digital meters (HANNA Test kits, Hanna instruments Ltd., Germany), and it was done in the sampling area instantly. First of all, a water sample was collected from the 12 stations by a plastic water bucket, and then 500 mL was poured into a beaker. After that, the probe of the digital meter was immersed and waited 5 min for stability, and finally, data were recorded. Similarly, twelve (12) water quality parameters including pH, temperature, DO, and alkalinity detected from the 12 sampling points where all the determined parameters were considered water health indicators. Sample collection, preparation, analysis and quality control measures Water samples were collected from 12 sampling points during the monsoon season in the year of 2017. A boat was used to reach our sampling points and used a plastic bottle for collecting the sub-surface sample water (below 5 cm from the surface for avoiding surfactants). Before this, sampling bottles were washed with de-ionized distilled water and finally rinsed with sample water. Furthermore, duplicate samples were collected from each point to avoid any potential error during the sampling process. After sample collection, 5% of HNO 3 (pH ¼ 2) was added as an inhibitor to prevent the water's heavy metals decay. After proper labelling of the sample bottle, the next step was to transport these to the laboratory with an icebox and stored them at 4 C for trace metal detection. Ekman grab sampler was used for collecting sediment samples from the selected locations. Duplicate samples were also collected to avoid the error of the sampling process. However, samples were stored in plastic bags and brought to the laboratory for further analysis. Here we have incorporated the sample analysis in brief. Because the raw data of this research has already been published in a "Data in Brief" journal where the methodology was explained in details, now this data article can be used as a supplementary file with this manuscript (Hossain et al., 2020a). However, before analyzing the water sample, sampled water was filtered by 0.45 μm filter paper (Sigma-Aldrich, USA). In the next step, 1 g C 6 H 10 O 5 (cellulose) was added and dried by a water bath (98 C). Evaporated samples were further dried using an IR lamp (70 C) for two hours until constant weight. The dry mass was then transferred to a carbide mortar for homogeneous mixing and ground to a powder. The fine powder was used to prepare a pellet by a pellet maker (Specac, UK). Finally, the pellet was kept on the XRF system and the element's concentration was measured as spectrum analysis. 
In terms of sediment samples, first of all, samples were kept in porcelain dishes individually. After that, the sample was dried until no moisture was present using an oven (70 C). Finally, a pellet was prepared and kept on an XRF system for metal detection like water sample analysis. For QA/QC, the standard reference material, i.e., Marine sediment, IAEA 433, was used and found precision level was within 94-106%. Basic contamination index To identify the contamination state of sediments, the contamination factor (CF) and the degree of contamination (CD) are employed. The CF and CD are estimated using Hakanson's postulated Eqs. (1) and (2) for Cf and Cd, accordingly (Hakanson, 1980). Where the element's baseline value is equal to the average value of the world's surface rock. Cu, Zn, Pb, Ti, Fe, K, Ca, Sr, Zr, and Rb, respectively, have average background concentrations of 45, 95, 20, 4600, 47200, 26600, 22100, 300, 160, and 140 in shale (Turekian and Wedepohl, 1961). The contamination factors were divided into four groups, with different values indicating varying levels of contamination, such as C f < 1 indicating low contamination and 32 > Cd indicating extremely high contamination, however the precise categories were discussed elsewhere (Hakanson, 1980;Hossain et al., 2021aHossain et al., , 2021b. Advanced tools and/or indices for the determination of the sediment contamination The formula no. 3 by Abrahim and Parker is being used to define mC d (Abrahim and Parker, 2008). mC d is also classified into various categories where mC d < 1.5 signifies nil to very low and mC d ! 32 indicate ultra-high degree of contamination respectively, however, the detailed category was described elsewhere (Abrahim and Parker, 2008;Hossain et al., 2021aHossain et al., , 2021b. According to Tomlinson (Tomlinson et al., 1980), Pollution Load Index (PLI) is defined as the nth root of the multiplications of the concentrations of the metals. Based on the calculative value of the PLI index following Eq. (4), the state of the sediment contamination can be comprehendible whether baseline or deteriorated (Hossain et al., 2021a(Hossain et al., , 2021b. Where CF is the contamination factor which is calculated by the following Eq. (1). The Geoaccumulation Index (I geo ) was used to calculate metals content in examined sediments by comparing them to undisturbed or crustal sediment (control) levels. After calculating Eq. (5), numerous Igeo values indicate different pollution status, for example, I geo < 0 signifies unpolluted and I geo > 5 indicates very strongly polluted (Muller, 1969). In total I geo values were classified into seven categories which is described in elsewhere (Hossain et al., 2021a(Hossain et al., , 2021b. Where C n is the measured concentration of the sediment for metal (n), B n is the geochemical background value of metal (n), and factor 1.5 is the possible variations of background data due to lithogenic impacts (Rabee et al., 2011). Because iron (Fe) is the fourth central element in the earth's crust and is rarely contaminated, it was utilized to calculate enrichment factor (EF) in this study. Because along with its noble qualities, Fe is the best-normalized metal. Furthermore, the geochemistry of Fe is nearly identical to that of all toxic elements, and it is equally dispersed throughout the sediments, among other things (Abolfazl and Ahmad, 2011). The number of experts were competent in normalizing metal pollution in river and coastal sediments by using Fe (Neto et al., 2000;Zhang et al., 2007). 
The EF is calculated by following Eq. (6) (Zoller et al., 1974). The elemental concentration in the sample is denoted by the letter M x . The Fe concentration in the sample is denoted by the letter Fe x . M ref stands for the elements content in the world's average shale, while Fe ref stands for the average Fe shale. Different EF values indicate different pollution status EF < 1 indicates no enrichment, and EF > 50 is extremely severe enrichment (Abolfazl and Ahmad, 2011;Hossain et al., 2021aHossain et al., , 2021bQiao et al., 2013). Ecological risk calculation from metal contamination in sediment Potential Ecological Risk (PER) is calculated to evaluate the hazard of an ecosystem's environment. Ecological risk factor and risk index were determined by Eqs. (7) and (8) respectively. Where E i r is ecological risk factor, C i f is contamination factor (see Eq. (1)), and T i r is the toxic response factor. The toxic response factor values are 5, 1, and 5 for Cu, Zn, and Pb, respectively (Hakanson, 1980). Moreover, the methods above are also developed by Hakanson (1980). However, different E i r and RI values represent different risk categories including E i r <40 for low, E i r !320 for very high risk and R I !320 for very high risk (Hakanson, 1980;Hossain et al., 2021aHossain et al., , 2021b. Results and discussions In this research, water and sediment samples were assessed to determine the elements, including trace metal, heavy metal, metalloid level of the Meghna estuarine ecosystem. These two samples are significant objects for determining ecosystem health. However, the findings of this study were compared with different international guidelines and the calculation of several pollution indices to assess the current degree of pollution. Water quality parameters', Meghna Estuary Essential physicochemical parameters were measured in the water sample of the studied area and found elsewhere (Hossain et al., 2020a). The water pH was recorded in twelve samples average 7.12 AE 0.27, whereas soil was 6.33 AE 0.41; both pH values are within the preferred range (6.5-8.5) for aquaculture. The temperature was found at 27.974 AE 2.937 degrees Celsius; it also matched with a standard value. Salinity was found in almost 0 (0.14 AE 0.10), meaning a freshwater environment. DO was found 5-9.5 mg/L in all samples due to present current flow, air-water gas exchange continually. In contrast, the study area's alkalinity and hardness were 83.75 AE 21.76 and 204.67 AE 98.28, respectively, which were crossed the standard limit. Trace metals concentration in the water sample In total eleven elements (Ca, Ti, Mn, Fe, Co, Ni, Cu, Zn, As, Se, and Sr) were determined in the water sample in 2017 during monsoon season (Jun-Aug) where has metal (Ca, Ti, Mn, Fe, Co, Ni, Cu, Zn, and Sr), metalloid (As), and non-metal (Se) and all elements have great significance. For instance, some elements are considered as an essential element in a certain amount for living species like Ca, Fe, and Zn; some are called as toxic elements including As, Co and Ni and some are considered as rare elements including Ti and Sr. However, all the studied elements are under the category of the aquatised cations. No anions or negatively charged elements were considered for this study. In this research, total elemental concentration was determined instead of speciation like total Fe was counted instead of Fe 2þ or Fe 3þ . 
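As an aside on the computations behind the indices reported below, the contamination and risk measures defined in Eqs. (1)-(8) reduce to a few lines of arithmetic once background values and toxic response factors are fixed. The following Python sketch illustrates them using the shale background values and toxic response factors quoted in the methods; the sample concentrations are placeholder numbers for illustration only (apart from the Cu value mentioned in the text), not measured data from a specific station.

import math

BACKGROUND = {"Cu": 45.0, "Zn": 95.0, "Pb": 20.0}    # average shale values quoted above (mg/kg)
TOXIC_RESPONSE = {"Cu": 5.0, "Zn": 1.0, "Pb": 5.0}   # toxic response factors used in Eq. (7)

def contamination_factors(sample):
    # Eq. (1): Cf = measured concentration / background concentration
    return {m: sample[m] / BACKGROUND[m] for m in sample}

def degree_of_contamination(cf):
    # Eq. (2): Cd = sum of the contamination factors
    return sum(cf.values())

def modified_degree(cf):
    # Eq. (3): mCd = Cd divided by the number of analysed elements
    return degree_of_contamination(cf) / len(cf)

def pollution_load_index(cf):
    # Eq. (4): PLI = n-th root of the product of the contamination factors
    return math.prod(cf.values()) ** (1.0 / len(cf))

def geoaccumulation_index(sample):
    # Eq. (5): Igeo = log2(Cn / (1.5 * Bn))
    return {m: math.log2(sample[m] / (1.5 * BACKGROUND[m])) for m in sample}

def ecological_risk(cf):
    # Eqs. (7)-(8): Er = Tr * Cf for each metal, RI = sum of Er
    er = {m: TOXIC_RESPONSE[m] * cf[m] for m in cf}
    return er, sum(er.values())

sample = {"Cu": 38.1, "Zn": 60.0, "Pb": 15.0}        # placeholder concentrations (mg/kg)
cf = contamination_factors(sample)
er, ri = ecological_risk(cf)
print(cf, degree_of_contamination(cf), modified_degree(cf), pollution_load_index(cf))
print(geoaccumulation_index(sample), er, ri)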
The element accumulation percentage was 1-12% in the water sample, whereas 88-99 % was in the sediment sample, as shown in Figure 2. Generally, element concentration was lower in water than sediment because sediment is the final deposition place. But this study found it very low in the water sample due to the monsoon season. During the rainy season, the freshwater influx added to the ecosystem, water might be diluted more, and metals deposited in the bottom sediment (Hossain et al., 2020c). All the element's concentration in the water sample is described in elsewhere (Hossain et al., 2020a). Bhola's sampling spots were polluted maximum among the four spots in Ca, Ti, Mn, and Fe but vice-versa in Chandpur stations. The second most polluted site was Sandwip considering the following metals, i.e., Co, Ni, Cu, and Sr. However, selenium (3.88-4.71 μg/mL) concentration in the water samples was more or less similar for all stations. However, all the elemental concentrations exceeded the international drinking water standard and local guidelines (Table 1). Besides, manganese (Mn) and arsenic (As) were found a high concentration compared to the mentioned literature shown in Table 1. In contrast, the rest of the elements contamination in the studied area was varied with other authors' research (Table 1). Trace elements concentration in the sediment sample Trace element was assessed in a sediment sample from four critical spots in the Meghna estuary and described in elsewhere (Hossain et al., 2020a). The sediment sample's average potassium and calcium concentration was 16750.83 mg/kg and 16984.17, where Bhola stations were mostly polluted than the other three spots. In terms of titanium, Chandpur stations were maximumly contaminated (2950.25 AE 59.79 mg/kg), whereas the minimum was found in the Sandwip station. Hatiya station was more enriched with Rb, Fe, and Sr comparison to others. Besides, no apparent significant data variation among the sampling spots for the following metals, i.e., Pb, Cu, and Zn. Our finding results for especially three metals (Pb, Cu, and Zn) were matched with the metal concentration data in a sediment sample from Krishna River, India (Ramesh et al., 1989). However, the findings of this research is compared with the world average shale (Turekian and Wedepohl, 1961) data and sediment quality guidelines (MacDonald et al., 2000) data, as shown in Table 2. All the studied metal concentration was still below the average shale, but Cu and Rb concentration almost touched the average shale. It is assumed that these two metal concentrations might exceed the world average after a few years. According to sediment quality guidelines (SQG), the concentration of copper (38.1 mg/kg) has already topped over the TEL, threshold effect concentration (31.6 mg/kg). Besides, for more clarification of pollution status of the study site, the several established sediment pollution indices and ecological risk indices were solved (Hakanson, 1980). Sediment pollution indices In this study, sediment pollution was assessed based on two categories viz i. for assessing the contamination status for each metal in all stations, ii. for vulnerability assessment by ecological risk factor and risk index calculation. 3.4.1. Degree of contamination assessment for all elements in the sediment sample Contamination factor (C f ) for all elements, i.e., Cu, Zn, Pb, Ti, Fe, K, Ca, Zr, Sr, and Rb, in the studied area, was assessed found less than 1 value. 
It indicated that all the low contamination in the region by studied elements. In contrast, only in rubidium trace metals in the sediment sample of Hatiya station were more significant than 1, moderate meaning contamination. Although the C f values for all metals were below 1, this C f value is close to 1 (range 0.7-0.99) for three metals, i.e., Cu, Sr, and Zn. Consequently, this study might suspect that the value of the three metals above' contamination factor might be become > 1 after few years. C d , m C d , and PLI indices were calculated for all metals for in-depth justification of the sediment pollution using trace metals concentration ( Table 3). The degree of contamination (Cd) value was ranged from 6.5 to 7.01 indicating low contamination status. Besides, a modified degree of contamination was at the minimum range. However, the PLI value was found around 0.50, which means a baseline level of pollution, i.e. it is not in a perfection stage. Many anthropogenic sources have remained in the experimental place (Abrahim and Parker, 2008). Apart from these, two essential pollution indices named geoaccumulation index (I geo ) and enrichment factor (EF) were also calculated. The I geo was found a negative value for all studied metals, which indicates low pollution status, as shown in Figure 3. In the enrichment factor (EF) estimation, minor enrichment (EF ¼ 1-2) was found for almost all experimented metals in all the stations (Table 4). In contrast, EF values for Zn and Pb for all stations were recorded below 1, meaning no enrichment. However, the lead had only minutely (EF ¼ 1.27) enriched in Chandpur station. Potential ecological risk estimation Estimating the ecological risk of heavy metal contamination was proposed as an analytical tool for water pollution control purposes (Rahman et al., 2014). Owing to this, the content of heavy metals in sediments is also increasing. The metals could affect ecological health through this process. Hakanson created a system for evaluating possible environmental risk indices for managing aquatic pollution (Rahman et al., 2014). Besides, the method would also use finding out whether lakes or rivers or oceans are polluted or not and should be given special attention (Hakanson, 1980). This research calculated the ecological risk factor for only three metals among the studied metals due to the toxic response factor's availability (Table 3). The order of potential ecological risk factor of heavy metal in bottom sediments of the Meghna estuary was Cu > Pb > Zn. The potential ecological risk factors E r for Cu, Pb, and Zn were less than 40, which belong to low ecological risk (Hakanson, 1980). Without this, risk index (RI) was also determined and found less than 140 values for all metals, indicating the light pollution area. Statistical analysis Multiple correlation analysis (Figure 4) on the elemental data of water found that there is a strong correlation exists between Ca and Fe (r ¼ 0.75, p < 0.001), Ti and Mn (0.74, p < 0.001), Ti and Fe (r ¼ 0.77, p < 0.001), Mn and Fe (r ¼ 0.82, p < 0.001) and Fe and Sr (r ¼ 0.77) ¼ , p < 0.001). The correlation was observed between Ca and Co (r ¼ -0.60, p < 0.05), Ti and As (r ¼ 0.64, p < 0.05), Co and Ni (r ¼ 0.67, p < 0.05), and Zn and As (r ¼ 0.64, r ¼ 0.05). Positive relationship (i.e., positive value of r) indicates increase one metal led to an increase of other metal while vice-versa for negative relationship (i.e., negative value of r). 
For sediment, a slightly different correlation pattern was found. A positive correlation was observed between Rb and Fe (r = 0.86, p < 0.001) and between Zr and Sr (r = 0.67, p < 0.05); the relationships among the other metal concentrations were not significantly correlated (Figure 4). Cluster analysis of the heavy metal concentrations in water identified two major clusters among the sampling locations (Figure 5): cluster 1 consists of stations 4, 7, 8, 10, 11, 12, 9, and 6, while the second cluster consists of stations 2, 3, and 5. Two major clusters were also found for the heavy metal concentrations at the different stations in sediments (Figure 5): cluster 1 consists of stations 6, 11, 12, 8, 9, 10, and 5, while cluster 2 consists of stations 7, 4, and 3. Interestingly, the station clusters found in water and in sediment are not symmetrical. Conclusion Trace metal assessment in water and sediment samples is a means of judging the pollution status of an ecosystem. In this work, 12 major elements, including toxic ones, were measured in both media, i.e., water and sediment, at critical locations in the subtropical Meghna estuary. Elemental concentrations were in the low to medium range across the sampling stations. Based on the pollution indices, i.e., Cd, mCd, and PLI, the study site is still at a low or baseline contamination level. In terms of the enrichment factor, all the elements show slight enrichment except Pb and Zn. Overall, according to the international guidelines and the calculated sediment pollution indices, the study site is moderately polluted for a few metals and shows low contamination for almost all of them. The primary pollution sources in the study area include agricultural runoff, household waste, industrial effluents, waste from water vehicles, and river runoff. According to the cluster analysis, the elements do not all share the same source, as reflected in the asymmetrical clusters. Nevertheless, this study strongly recommends sampling biota (fish, molluscs, crustaceans, etc.) for trace metal detection, and Hilsa fish should be prioritized in future research. Subrata Sarker: Analyzed and interpreted the data. Funding statement This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Data availability statement Data included in article/supplementary material/referenced in article. Declaration of interests statement The authors declare no conflict of interest. Additional information No additional information is available for this paper.
2021-11-11T16:17:17.920Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "fd918b3b7e24deaef69f4898f6707db08ecfc603", "oa_license": "CCBY", "oa_url": "http://www.cell.com/article/S2405844021024270/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "709ad8a15b38b6a55d5b78b583e2025b01b5594e", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
237364857
pes2o/s2orc
v3-fos-license
Programming ultrasensitive threshold response through chemomechanical instability The ultrasensitive threshold response is ubiquitous in biochemical systems. In contrast, achieving ultrasensitivity in synthetic molecular structures in a controllable way is challenging. Here, we propose a chemomechanical approach inspired by Michell’s instability to realize it. A sudden reconfiguration of topologically constrained rings results when the torsional stress inside reaches a critical value. We use DNA origami to construct molecular rings and then DNA intercalators to induce torsional stress. Michell’s instability is achieved successfully when the critical concentration of intercalators is applied. Both the critical point and sensitivity of this ultrasensitive threshold reconfiguration can be controlled by rationally designing the cross-sectional shape and mechanical properties of DNA rings. Supplementary Note 1. Double-stranded DNA (dsDNA) ring analysis To understand the mechanics of Michell's instability, a finite element analysis was performed for the idealized dsDNA ring which consists of 336 base-pairs (BPs) using SNUPI (Structured NUcleic-acids Programming Interface) 1 . The configuration of each BP was abstracted into a node and triad, and the connection of two successive BPs was substituted by Euler-Bernoulli beam finite element. In the model, the dsDNA was assumed to be intrinsically straight and have the regular geometry of B-form DNA (axial rise of 0.34 nm and helicity of 10.5 BP per turn) and mechanical properties (stretching rigidity (S) of 1100 pN, bending rigidity (B) of 230 pN nm 2 , and torsional rigidity (C) of 460 pN nm 2 ). Here, the coupling coefficients were ignored. The bending moment was applied at both ends of a stress-free straight dsDNA to construct the initial configuration of the dsDNA ring structure. After fixing one end, the torsional displacement (θ) was applied at the other end while its other degree of freedom was also fixed. As the θ is increased, the configuration of the dsDNA ring structure was calculated through nonlinear static analysis with only geometrical nonlinearity until self-contact of the structure occurs. In each incremental step, the stretching (πS), bending (πB), and torsional (πC) strain energies of a finite element were calculated as πS = SΔ 2 /2L, πB = Bφ 2 /2L, and πC = Cω 2 /2L, where L, Δ, φ, and ω indicate the axial length, the length change, the bending angle, and the twist angle, respectively. Its total strain energy was obtained by summing these energies. Supplementary Note 2. Finite element analysis of DNA nanostructures We carried out a finite element (FE) simulation for DNA nanostructures to predict their equilibrium shapes using SNUPI (Structured Nucleic acids Programming Interface), which would be provided in our recent work 1 . From caDNAno design files, 2 SNUPI obtains information on connectivity and mechanical perturbation among BPs, and constructs FE model, where a BP was abstracted into a node and two successive BPs were connected by a beam finite element. In the FE model, the DNA duplex was assumed to have the regular B-form DNA geometry (diameter of 2.25 nm, axial rise of 0.32 nm and helicity of 10.5 BP per turn) with S of 1100 pN, B of 230 pN nm 2 , C of 460 pN nm 2 and torsional-stretch coupling (g) of -180 pNnm. Note that we did not consider any sequence-dependent properties and assumed a nicked DNA duplex to be the same with a regular DNA duplex in terms of its geometrical and mechanical properties. 
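To make the strain-energy bookkeeping of Supplementary Note 1 concrete, the sketch below evaluates the stretching, bending, and torsional energies of a single beam element from the stated rigidities and sums them over an idealized ring. It is a minimal illustration only; the per-element twist used here is an invented example value, not a result of the paper.

```python
import math

# Strain energies of one beam element representing a BP step, following
# pi_S = S*d^2/(2L), pi_B = B*phi^2/(2L), pi_C = C*omega^2/(2L) from Note 1.
S = 1100.0   # stretching rigidity, pN
B = 230.0    # bending rigidity, pN nm^2
C = 460.0    # torsional rigidity, pN nm^2
L = 0.34     # axial rise per BP, nm

def element_strain_energy(delta_nm: float, phi_rad: float, omega_rad: float) -> dict:
    """Return stretching, bending, and torsional strain energies (pN nm) of one element."""
    return {
        "stretch": S * delta_nm ** 2 / (2 * L),
        "bend":    B * phi_rad ** 2 / (2 * L),
        "twist":   C * omega_rad ** 2 / (2 * L),
    }

# Example: a ring of 336 BPs bent uniformly into a circle, with a small applied twist.
n_bp = 336
phi_per_step = 2 * math.pi / n_bp     # uniform bending angle per element
omega_per_step = 0.01                 # invented twist per element, rad
total = {k: n_bp * v for k, v in
         element_strain_energy(0.0, phi_per_step, omega_per_step).items()}
print(total)  # summed strain energies of the ring, pN nm
```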
Crossovers were modeled by a beam finite element connecting two BPs that belong to two adjacent helices. Their mechanical stiffness was defined by multiplying the values for DNA by a scale factor (SF) of 1 for S, 0.12 for B, and 0.1 for C, respectively. In addition, to simulate the effect of the unwinding induced by EtBr binding on structural shape, the equilibrated configuration of the structure was obtained through nonlinear static analysis with increasing helicity of the dsDNA (BP/turn). In each incremental step, the total strain energy (π) was calculated by summing the strain energy of each finite element as π = Σ π^(e), with π^(e) = S(Δ^(e))²/2L^(e) + B(φ^(e))²/2L^(e) + C(ω^(e))²/2L^(e) + gΔ^(e)ω^(e)/2L^(e). Supplementary Note 3. Calculation of mechanical properties of DNA nanostructures Normal mode analysis (NMA) was performed at the straight configuration to compute the lowest 40 normal modes of the DNA nanostructures. Given the FE model for a DNA nanostructure under free boundary conditions, a generalized eigenvalue problem is considered, Kφ = λMφ (1), where K is the global stiffness matrix, M is the global mass matrix, and λ represents the eigenvalue, which is related to the natural frequency. Among the eigenvalues obtained, only the two eigenvalues for the first bending modes were selected to calculate the bending rigidity (EI). From the Euler-Lagrange equation for a beam that effectively represents the DNA nanostructure, we obtain the free-vibration equation EI ∂⁴w/∂x⁴ + µ ∂²w/∂t² = 0 (2), where w describes the lateral deflection of the beam, x represents the axial position, and µ is the mass per unit length of the beam. Solving this equation analytically gives the natural frequency of the first bending vibration (ω₁) in terms of the total mass m of the DNA nanostructure and the eigenvalue λ₁ of the first bending mode. The bending persistence length L_p is defined as L_p = EI/(k_B T), where k_B is the Boltzmann constant and T is the temperature, assumed to be 300 K; the L_p of the DNA nanostructure then follows from the NMA eigenvalues (Eq. (6)). In a three-dimensional structure there are always two first bending modes. Therefore, we defined the first bending mode with the smaller eigenvalue as the major bending mode (L_p,1) and the one with the larger eigenvalue as the minor bending mode (L_p,2). Using both bending modes, we also defined an effective bending persistence length (L_p,eff), which we compared with the value measured from AFM image analysis. Introducing the continuum assumption likewise provides the torsional rigidity of the DNA nanostructure from the eigenvalues obtained by NMA. In the governing torsional vibration equation, β is the wavenumber; under free-free end boundary conditions, the wavenumber satisfies a relation with the lowest mode number (Eqs. (10) and (11)). Accordingly, the torsional persistence length (L_t) was derived (Eq. (12)), where the ratio of the polar moment of inertia to the area (J_p/A) can be pre-determined from the cross-sectional geometry of the DNA nanostructure.
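The route from NMA eigenvalues to a bending persistence length described in Supplementary Note 3 can be sketched as follows. This is a minimal illustration assuming the standard free-free Euler-Bernoulli beam relation for the first bending mode (mode constant β₁L ≈ 4.730) and the definition L_p = EI/(k_B T); the toy stiffness and mass matrices, beam length, and mass per unit length merely stand in for the real SNUPI model and carry no physical meaning.

```python
import numpy as np
from scipy.linalg import eigh

# Toy global stiffness (K) and mass (M) matrices standing in for the FE model.
rng = np.random.default_rng(0)
A = rng.standard_normal((12, 12))
K = A @ A.T + 12 * np.eye(12)            # symmetric positive definite
M = np.diag(rng.uniform(0.5, 1.5, 12))   # lumped masses

# Generalized eigenvalue problem K v = lambda M v; eigenvalues are omega^2.
lam, vecs = eigh(K, M)
omega1 = np.sqrt(lam[0])   # toy case; a real free-free model has six rigid-body modes to skip first

# Free-free Euler-Bernoulli beam: omega_1 = (beta1*L)^2 * sqrt(EI / (mu * L^4)).
length = 100.0e-9          # beam length, m (assumed)
mu     = 1.0e-15           # mass per unit length, kg/m (assumed)
beta1L = 4.730             # first free-free bending mode constant
EI = omega1 ** 2 * mu * length ** 4 / beta1L ** 4

kB, T = 1.380649e-23, 300.0
Lp = EI / (kB * T)         # bending persistence length, m
print(f"EI = {EI:.3e} N m^2, Lp = {Lp * 1e9:.1f} nm")
```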
2021-09-01T06:18:31.511Z
2021-08-30T00:00:00.000
{ "year": 2021, "sha1": "44e3a89dd1cb2130639a3532ddbe0f32763f4871", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-021-25406-9.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3024628ada1bf00d318d82a1e4a858426f90bb50", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
3846678
pes2o/s2orc
v3-fos-license
Together Forever: Bacterial–Viral Interactions in Infection and Immunity Most viruses first encounter host cells at mucosal surfaces, which are typically colonized by a complex ecosystem of microbes collectively referred to as the microbiota. Recent studies demonstrate the microbiota plays an important role in mediating host–viral interactions and determining the outcomes of these encounters. This review outlines recently described examples of how bacteria and viruses impact each other particularly during infectious processes. Mechanistically, these effects can be broadly categorized as reflecting direct bacterial–viral interactions and/or involving microbial impacts upon innate and/or adaptive immunity. Introduction: Host-Pathogen Encounters Occur in a Complex Ecosystem Rigorous application of scientific methods in general, and Koch's postulates in particular, encourages one to think about and study infectious diseases in reductionist systems that focus on how a potentially pathogenic microbe interacts with host cells. However, in fact, most hosts, especially their mucosal surfaces, harbor complex ecosystems of diverse microbial communities of bacteria, fungi, and viruses, collectively referred to as the microbiota. The specific microbiota composition present during, and even prior to, encounter of potentially pathogenic organisms will often have an extensive influence on whether the pathogen causes a productive and/or symptomatic infection. To date, a large body of work has focused on how the bacterial portion of the microbiota influences an array of disease outcomes, including infectious disease. In comparison, studies or how commensal fungi and viruses might themselves directly influence disease and/or interact with bacteria to do so, has only begun to be appreciated. Herein, we will review some of the recent progress in this latter area, focusing on how bacteria and viruses can directly, and indirectly, influence each other. Many of the findings are summarized in Figure 1. We will focus largely on interactions involving the gut microbiota but will extend discussion into how this system can influence immune responses and infections in other locales. Facilitation of Enteric Viral Infection by Gut Bacteria The broad notion that the microbiota can influence susceptibility to infection has long been appreciated by observations in humans and animal models that antibiotic usage increases susceptibility to numerous bacterial infections, especially well documented for Clostridium difficile and Salmonella via process termed colonization resistance. Direct dramatic demonstrations of these phenomena include exquisite sensitivity to colitis and death upon challenge of germfree or antibiotictreated mice upon challenge with these bacterial pathogens. Moreover, it has long been appreciated that the microbiota promotes immune system development and hence, germfree mice would lack a robust immune system, especially in terms of mucosal immunity that plays a key role in many enteric viral infections. Based on these observations, it was presumed that microbiota ablation, achieved by antibiotic or germfree approaches, would result in increased susceptibility to infection of viruses, especially those that enter their hosts via the gastrointestinal tract. However, as outlined below, in contrast to this expectation, lack of a microbiota has been found to result in reduced infectivity for several such viruses including poliovirus, reovirus, rotavirus and mouse mammary tumor virus (MMTV). 
A range of different mechanisms have been proposed to account for such observations, which can be broadly categorized as primarily reflecting a direct impact of the gut bacteria on the virus or involving immune-mediated effects. Direct Impact of Microbiota on Virus Considering that the physical and chemical stability of virions in their between-host environment is often a major determinant of their infectivity, and that the gastrointestinal tract is a complex and potentially hostile environment for viruses, one might imagine that some viruses have evolved means of utilizing substances in that environment to aid their stability. Indeed, for at least one virus, poliovirus, which can invade the central nervous system and cause paralysis, this is the case. Poliovirus is transmitted via the fecal-oral route, where co-existing microbiota populations are high. Robinson et al.
demonstrated that antibiotic treatment before poliovirus inoculation decreased the susceptibility of mice to infection by reducing poliovirus replication. Further studies revealed that poliovirus particles can also bind bacterial peptidoglycan and lipopolysaccharide (LPS), which facilitates virus binding to the poliovirus receptor of the host. Such studies also demonstrated that antibiotic treatment was effective in limiting reovirus infection, suggesting that bacteria facilitate the replication and pathogenesis of reovirus, though the detailed mechanism remains unclear [1,2]. Moreover, such peptidoglycan- or LPS-viral particle complexes increased the resistance of poliovirus to environmental challenges, such as heat, that would otherwise destroy its infectivity. In contrast, a mutated poliovirus strain, VP1-T99K, which has reduced LPS-binding ability, showed relative instability when added to feces [3]. It seems very reasonable to envisage that analogous mechanisms are used by other viruses that transit the gastrointestinal tract, although direct evidence in support of other examples of this notion is not yet in hand. Dependent on Host Immune System The notion that intestinal viruses bind bacterial products present in the gut lumen has also been proposed to govern MMTV infection, albeit by a very distinct mechanism. MMTV is transmitted from mother to offspring via milk. Such vertical transmission is thought to rely upon activation of toll-like receptor 4 (TLR4), a pattern recognition receptor for bacterial lipopolysaccharide (LPS), which can bind MMTV in the intestine. In mice lacking TLR4, MMTV was no longer transmissible, in that a robust cytotoxic immune response efficiently cleared infected cells [4]. Related studies in antibiotic-treated or germ-free conditions found that vertical transmission to offspring was effectively abolished through the production of MMTV-specific antibody. Further studies into the mechanism underlying these observations found that binding of the MMTV-LPS complex activated the TLR4/MyD88 pathway and induced IL-10, resulting in immune tolerance to MMTV infection [5]. The extent to which other viruses might rely upon similar mechanisms will certainly require further study. Meanwhile, it has also been proposed that LPS-induced immune activation, rather than suppression, can likewise promote infection, in particular for Theiler's murine encephalomyelitis virus (TMEV), which primarily infects mouse strains and causes immune-mediated demyelinating disease. Specifically, Pullen et al. found that when challenging C57BL/6 mice, which are resistant to TMEV, co-administration of the virus with LPS induced many (about 50%) of the hosts to display signs of infection, with hypersensitivity and T cell proliferation. Treatment with IL-1β, which is induced by TLR4 activation, also facilitated TMEV infection. It was proposed that LPS induced a hyper-inflammatory condition in the central nervous system that enhanced the relative infectivity of TMEV in the host, albeit by a not-well-defined mechanism [6]. Indeed, direct effects of bacteria and/or their products on innate immunity may be a broad mechanism by which bacteria impact viral infection.
Norovirus (NV), which now accounts for about 90% of worldwide incidence of viral gastroenteritis, generally takes longer than 10 days to clear and viral shedding can last for weeks to months by most hosts and, in some cases, can result in persistent infections [7][8][9] Virgin and colleagues group demonstrated that gut bacteria are critical for persistence of murine NV (MNV) strain CR6 infection in the mouse intestine in that treatment of antibiotic prevented the chronic infection of MNV while replenishment of gut bacteria reversed such effect. In the absence of interferon-λ, Ifnlr1, Irf3, and STAT1, antibiotic-mediated bacterial ablation no longer prevented MNV infection indicating the protection was mediated by interferon-λ signaling pathway. Rather, in this scenario, while infection of MNV was reduced, the extra-intestinal spread of MNV was not associated with alterations in gut microbiota populations [10]. Another means by which bacteria enhance norovirus infection is via infecting and trafficking in immune cells. Specifically, a pioneering study from Jones et al. found that NV was able to infect both human and mouse B-lymphocytes in a microbiota-dependent manner [11]. Thus, absence of B-cells or ablation of microbiota reduced MNV infectivity in mice. Infection of human B-cells by NV required the presence of histo-blood group antigen (HBGA)-expressing bacteria. Such findings explained earlier observations that HBGAs impact susceptibility to NV. Specifically, Miura et al. had previously demonstrated that A-like substance (blood antigen type A like substance) of extracellular polymeric substances (EPS), which belongs to the HBGA, is critical for NV and bacteria. Cleavage of N-Acetylgalactosamine of EPS significantly reduced the binding of bacteria and NV. Mutation at W375A blocked the binding of NV strain Gl.1 with bacterial EPS [12]. Thus, NV has seemed to utilize a variety of mechanisms to interact with bacteria to promote its infection and persistence. Rotavirus: Most of the Above, and More While it is instructive to categorize bacterial modulation of viral infection as being dependent or independent on host immunity, some viruses can be impacted by bacteria by both of these mechanisms. The example with which we are most familiar, in large part because our research focusses on it, is rotavirus. Rotavirus (RV) is a non-enveloped dsRNA virus and is responsible for a major worldwide burden of severe dehydrating diarrhea, causing approximately 200,000 deaths annually, primarily in young children in developing countries [13]. RV primarily infects villous intestinal epithelial cells, which shed the virus into the lumen where it interacts with gut microbiota, which one might imagine might impact its ability to infect new cells with the host and or impact its transit via fecal matter to new hosts. Analogous to the case of poliovirus and reovirus, ablation of microbiota in mice, achieved by use of antibiotics or germfree approaches, reduces initial infectivity of RV. However, the underlying mechanism has remained elusive. It is tempting to imagine analogous mechanism to reovirus and poliovirus, but RV is an even more stable virus as exemplified by the combined need for UV radiation and psoralen to render it non-infectious. Moreover, incubation of RV with LPS did not alter its infectivity in vitro or in vivo. Nonetheless, we speculate some as yet undefined bacterial component may facilitate RV binding and or internalization. 
In any case, microbiota can also impede RV infection by impacting adaptive immunity. Specifically, microbiota ablation also results in more robust immune responses to RV as evidenced by numbers of anti-RV IgA+ positive plasma cells and fecal and serum anti RV-antibodies. This effect is presumed to be mediated by the microbiota broadly setting the tone of the adaptive immune system such that, in the absence of a microbiota in which the immune system is in a state of minimal activation, sensing of RV results in a readily-detectable signal of immune activation that serves as a robust "signal 2" that drives co-stimulation of antigen-specific lymphocytes in a milieu in which the reduced levels of bacterial antigens would reduce competition for RV antigens. Thus, at least in some scenarios, gut bacteria can promote RV infection by facilitating initial infection and impeding adaptive immunity [14]. However, the extent to which such events happen is very much dependent upon the specific microbiota composition of a particular host. Indeed, we have also observed that some colonies of our mice are highly resistant to RV infection and such resistance it can be transferred to new hosts via fecal microbial transplant. Such protection is independent of adaptive immunity in that can be observed in Rag1-deficient mice, which lack functional T-and B-cells. We are currently working to identify the specific microbes that mediate such protection against RV and define the underlying mechanism of action but submit that, in any event, this observation highlights the notion that the extent to which bacteria promote and/or impede viral infection will vary in different hosts in part based on microbiota composition. Bacterial Inhibition of Viral Infection Multiple counterpoints can be envisioned to aforementioned notion that viruses that infect via the gastrointestinal tract would have learned to use bacteria and/or their products to promote viral infection. One would be that the mammalian intestine in adapting to survive amidst the large microbial load it harbors, would have selected for bacteria that might, somehow, protect from various infections, including by viruses. Another would be that in the highly diverse complex intestinal environment, which microbes produce a far greater array of molecules than that produced solely by the host, that there are likely to be some products with some sort of antiviral activity. Indeed, while it is difficult to say which of these perspectives is most relevant, there are certainly several specific examples whereby bacteria and their products impede viral infection. Bacteria Can Directly Interact with Virus to Reduce Infectivity Bacteria have long been known to secrete a chemically diverse array of anti-microbial peptides, namely bacteriocins, that have broad and specific activity against various other bacteria. Such bacteriocins are presumed to play a central role in colonization resistance and may play a key role in determining microbiota composition. Given the diverse chemical mechanisms by which such products act, it seems both tempting and reasonable to presume that bacteria might also secrete proteins with direct anti-viral activity. There is indeed one well documented example of this notion. Specifically, Boyd et al. isolated an 11 kDa protein from a culture of Cyanobacterium Nostoc ellipsosporum, which was originally recovered from algae, termed cyanovirin-N (CV-N), that has direct antiviral activity against HIV [15] and influenza virus [16]. 
Native and recombinant CV-N exhibits a high-affinity interaction with HIV envelope protein glycoprotein 120 that interferes with HIV binding of CD4 and correlates with a marked reduction of HIV infectivity in cultured CEM-SS cells. In the case of influenza virus, CV-N appeared to bind hemagglutinin, which has long been appreciated to mediate cellular entry of this virus. While these studies were undertaken with the goal of screening mixtures of compounds to develop novel therapeutic agents to treat these infections, it does not seem to be a great stretch to imagine that such compounds might exist in bacteria-dense milieus such as the distal colon, where initial encounters of many human hosts with HIV occur. Hence, we submit is remains an important goal to screen human microbiota for antiviral products, especially against diseases transmitted via anal sexual contact. From our own work on RV described above, we note that the microbiotas that can confer protection against RV upon transplant, have direct anti-RV activity when incubated with this virus in vitro suggesting the existence of some antiviral agents in the gut microbiota. Another potential means by which bacteria might impede viral infection is to impact the cell surface molecules to which they bind. Indeed Varyukhina et al. demonstrated that Bacteroides thetaiotaomicron and Lactobacillus casei can modify cell surface glycoproteins that mediate RV binding. Specifically, they showed that these bacteria and cell-free supernatants therefrom, modified HT-29 cell surface glycan by blocking galactosylation to increase galactose levels and reduce RV infection in vitro. Consequently, the authors suggest that bacterial modification of cell-surface glycans might be used as a means for treating RV infection [17]. It is possible that this mechanism revealed in this in vitro study may underlie the protective effect reported for Lactobacillus rhamnosus GG (LGG) on RV-induced disease. Specifically, Guandalini et al. reported that administering this bacteria to children with RV-associated diarrhea significantly shortened the duration of diarrhea on average, as well as reduced long-term diarrhea cases [18]. Thus, while mechanisms need deeper study, bacterial-viral interactions may prove exploitable to ameliorate severe disease. Bacterial Impacts on Immune System Can Impede Viral Infection One long generally appreciated means by which bacteria can impact the outcome of infections in general, including viral infections, is by promoting immune development, especially in terms of mucosal immunity. Specifically, it has long been appreciated that raising mice in germfree conditions results in a near complete absence of gut-associated lymphoid tissue (GALT), which is necessary for robust production of mucosal antibodies that are known to be the primary correlate of protection against rotavirus, for example. More recently, some specific examples of how bacteria and their products impact viral infection via impacting innate and adaptive immunity have been worked out in some detail. Wang et al. demonstrated that an upper respiratory tract commensal bacterium, Staphylococcus aureus, could rescue the specific pathogen-free mice from death resulting from influenza infection and related inflammation. The mechanism that S. aureus uses to provide such protection relies on activation of TLR-2 signaling that drives recruitment of peripheral CCR2 (+) CD11b (+) double positive cells to alveolar and subsequent maturation of these cells into M2 macrophages. 
M2 macrophages then mediate host tolerance against lethal inflammatory responses upon influenza infection. This study points out the importance of commensal bacterial for priming the host into appropriate tolerance levels for facing viral infection [19]. Our work described that a bacterial product, namely flagellin could provide strong protection against rotavirus. In particular, we reported that systemic treatment of mice models with bacterial flagellin prevented and cured rotavirus (RV) infection [20]. Such protection was not absolutely restricted to RV infection but rather extended to some other enteric viral infections such as reovirus. Mechanistically, flagellin's antiviral infection relied on activation of both Toll-like receptor 5 (TLR-5) and NOD-like receptor C4 (NLRC-4) and downstream cytokines IL-22 and IL-18 and was independent of interferon (types I, II, and III), which mediate most pathways of antiviral immunity. While the mechanism by which IL-18 and IL-22 work in concert to clear and prevent RV infection remains under investigation, it seems to involve a combination of driving IL-18-induced apoptosis, preferentially of RV-infected cells, and IL-22-induced proliferation that together result in turnover of villus epithelial cells at a rate faster than the rate at which virus can infect new cells. While it may be possible to harness this mechanism to develop therapies to treat RV infection, especially in immune compromised hosts whom can develop chronic RV infections, the extent to which such systemic treatment with flagellin, or IL-18/22 mimics events that might occur during any infectious process is unclear but nonetheless highlights strong theoretical potential for one infection to impact another. Another compelling demonstration of the ability of commensal gut bacteria to broadly impact immune function during an infection can be found in work from Abt and colleagues. They reported that microbiota ablation via antibiotic treatment markedly delayed mouse clearance of lymphocytic choriomeningitis virus (LCMV) infection. Antibiotic treatment impaired both innate and adaptive innate antiviral responses wherein both CD8+ T cell response and IgG titer in the blood were attenuated in response to LCMV infection. Macrophages isolated from the mice that received antibiotic treatment showed impaired activation upon stimulation of type I and II interferons and consequently impairment of responses that would normally have limited viral infection. These data highlight an important role of commensal bacteria in keeping host immunity in a state ready to be activated to prevent viral infection [21]. Moreover, such setting of the immune system tone can also impact the effectiveness of vaccinations. For example, work from Oh and colleagues found that ablation of gut bacteria via antibiotic or germfree approaches markedly reduced anti-influenza antibodies in response to the trivalent influenza virus vaccine [22]. Such hyporesponsiveness was rescued by colonizing mice with flagellated but not aflagellate E. coli. Moreover, mice maintained with a microbiota but lacking TLR5 also exhibited reduced responsiveness to this vaccine thus together suggesting a reliance upon TLR5 by microbiota derived flagellin to maintain the immune system in a state ready to quickly respond vaccines and presumably viral infections [22]. 
While use of experimental approaches to fully, or near fully, deplete microbiota are useful for examining the general importance of having a commensal microbiota, subtler but potentially broad and important effects will depend upon the specific microbiota composition present. This concept can be seen in work from Moon et al. who reported a mechanism that level of IgA in the GI track was associated with a specific group of bacteria, possibly including Gram-negative fecal anaerobe Sutterella, that can be vertically and horizontally transmitted by co-housing or fecal microbiota transplantation [23]. Furthermore, this phenotype was induced by these low IgA associated bacteria, which can degrade fecal IgA and thus lower its levels. Thus, immune tone is not only influenced by microbiota composition but also how such microbiota interact with the mucosal immune system. Impact of Viruses on Bacterial Infection Although not as well understood, or extensively studied as its bacterial component, it is increasingly clear that the microbiome includes an extensive collection of bacteriophages and, to a less well characterized extent, eukaryotic viruses collectively referred to as the virome. Analogous to gut bacteria, the virome is thought to be vertically inherited but modifiable and have a major impact upon host phenotype [24,25]. A recent example of the potential ability of viruses to drive immune development in a manner analogous to bacteria can be seen in the work of Kernbauer and colleagues who found that murine norovirus (MNV) infection could correct many aspects of intestinal and immune phenotype associated with microbiota ablation. Specifically, they observed that the antibiotic treatment, as well as germ-free condition, resulted in the loss of commensal bacterial-induced intestinal abnormities, such as shrinkage in the villus, and an abnormal distribution and function of immune cell population, including over expansion of group 2 innate lymphoid cells (ILCs) in the gut. Addition of MNV to such mice restored a relatively normal gut morphology and immune cell populations and did so in a manner dependent upon type I Interferon. Such correction of immune phenotype by MNV corrected the severe inflammatory pathology in these mice that would have otherwise resulted upon challenge with bacterial pathogen Citrobacter rodentium [26] together demonstrating the broad ability of viruses to impact immune phenotype and, consequently, impact the outcome of a bacterial infection. Another striking example virome impacting the immune system is found in the work of Masopust and colleagues who observed that in contrast to mice maintained in modern vivaria, mice captured in the wild and mice purchased from pet stores displayed serologic positivity to many common viruses and had populations of naïve and memory immune cells in roughly equal proportions to that observed in humans. While these common viruses are not classically viewed as being commensal microbes, they might functionally meet this definition in that most hosts are frequently exposed to one or more such agents [27]. Specific examples of the ability of individual viruses to impact bacterial infection in mice with a normal microbiota and immune system can be seen in work from Virgin and colleagues. Barton et al. 
reported that mice latently infected with either murine γ-herpesvirus 68 (MHV68) or murine cytomegalovirus (CMV), which genetically resemble the human pathogens Epstein-Barr virus and human cytomegalovirus, are resistant to the bacterial pathogens Listeria monocytogenes and Yersinia pestis [28]. Such protection relies on the systemic activation of macrophages and upregulation of basal innate immune system regarding cytokines production, including the production of cytokine interferon-γ. These observations may reflect that latent infection by these viruses calibrate the host to an immune competent mode thus prime the host for combating the bacterial pathogen challenges [29]. Further studies revealed that the protective activity of MHV68 chronic infection could offset the lethal Listeria monocytogenes infection in immunodeficiency host Hoil-1 KO, IL-6 KO, Caspase-1KO and Caspase-1 and Caspase-11 DKO mice by upregulating basal level of innate immune responses, including expression of interferon-γ, suggesting multiple pathways mediate the protection [30]. In addition to impacting infections by individual pathogens, studies on MNV can also be viewed to indicate that viral infection can broadly impact responses to pathobionts and complex microbiotas in general in a manner that impacts disease. Specifically, Cadwell and colleagues observed that the extent to which some mice that are genetically prone to developing microbiota-dependent colitis actually develop disease that can be determined by their carriage of a persistent MNV infection. The immediate immune consequences of persistent MNV infection is to result in modest but chronic elevations in TNFα expression. While WT mice manage to deal with such TNFα expression, in mice with defects in autophagy such as Atg16L1-deficient mice, which cannot efficiently manage intracellular pathobionts, such TNFα expression drives chronic inflammation that manifests as severe colitis [31]. Such observations support the long-standing hypothesis that development of inflammatory bowel disease in patients with a genetic predisposition is triggered by a viral infection and suggest prospective studies should be performed in humans to examine this possibility. Perspectives Herein, we sought to review and discuss how viruses and bacteria might interact in complex environments and, consequently, influence health and disease. We have outlined several examples whereby bacteria and viruses can, directly and/or indirectly, influence each other in the context of both normal development and disease. We submit that some of these examples provide compelling evidence for the importance and potential power of such interactions. Moreover, some of these examples describe tractable models to decipher the molecular interactions that mediate these phenotypic effects. However, at present, the most reasonable overriding conclusion that can be reached is our understanding of these interactions has only begun to scratch the surface. Indeed, further studies in multiple directions are needed, including studies to identify specific examples of bacteria and viruses influencing each other in humans and development of animal and in vitro models that could dissect the molecular underpinnings of these observations. Conflicts of Interest: The authors declare no conflict of interest.
2018-04-03T02:51:50.404Z
2018-03-01T00:00:00.000
{ "year": 2018, "sha1": "95e505fcfb7df6d6f5b62daa350561f97b57f2ca", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1999-4915/10/3/122/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "95e505fcfb7df6d6f5b62daa350561f97b57f2ca", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
204941832
pes2o/s2orc
v3-fos-license
Axial anomaly and the delta_{LT} puzzle The axial anomaly contribution to the generalized longitudinal-transverse polarizability $\delta_{LT}^{}$ is calculated within the Regge approach. It is shown that the contribution from the exchange of the $a_1^{}(1260)$ Regge trajectory is nontrivial and might play a key role in explaining the large difference between the predictions of chiral perturbation theory and the experimental data for the neutron $\delta_{LT}^{}$. We also present the prediction for the proton $\delta_{LT}^{}$, which will be measured at the Thomas Jefferson National Accelerator Facility in the near future. The measurement of spin-dependent lepton-nucleon cross sections provides very important information about nucleon structure and is a very useful tool for examining the validity of various approaches to the description of QCD effects at large distances between quarks and gluons. One such successful approach is chiral perturbation theory (χPT), which provides a successful description of hadron physics at small momentum transfer and low energy. However, it was found that the recent predictions [1,2] of various versions of χPT show a strong deviation from the data for the generalized longitudinal-transverse polarizability (δ_LT) of the neutron at low Q² measured by the E94010 Collaboration of the Thomas Jefferson National Accelerator Facility (TJNAF) [3]. The generalized longitudinal-transverse polarizability can be evaluated from a combination of the spin-dependent structure functions g₁ and g₂, and is an ideal quantity to test models for hadron reactions at low Q². Therefore, such a large deviation is a serious challenge to the χPT approach to low-energy reactions, and finding the possible sources of this discrepancy is an important open question. In this paper, we suggest a possible way to resolve this puzzle based on the consideration of the axial anomaly contribution to δ_LT arising through the t-channel a₁ Regge-trajectory exchange in the spin-dependent forward scattering amplitude for γ*N → γ*N. We will show that this contribution is nontrivial and has a crucial role in bringing the χPT prediction for the neutron to the measured data. For further testing of this idea, we also present the prediction for the proton δ_LT in this approach, which can be examined by the planned experiment at TJNAF [4]. It is widely known that the axial anomaly [5,6] plays a very important role in hadron physics. Originating from the quark triangle diagram, the axial anomaly determines the π⁰ → γγ decay width and plays a crucial role in the "spin crisis." (For a review, see, for example, Ref. [7].) In the present paper, we calculate its contribution to δ_LT, which is induced by the t-channel exchange of the a₁(1260) Regge trajectory (Fig. 1). In the hadronic tensor of two electromagnetic currents, the spin-dependent part is given by Eq. (2), where q and p are the momenta of the virtual photon and the nucleon, respectively. In Eq. (2), x = Q²/(2p·q) with Q² = −q², and g₁(x, Q²) and g₂(x, Q²) are the spin-dependent nucleon structure functions. The spin vector of the nucleon S_μ is normalized as S² = −M_N², with M_N being the nucleon mass. The details can be found, for example, in Ref. [8]. In terms of these structure functions, the generalized longitudinal-transverse polarizability δ_LT can be written in the form of Eq. (3), where ν₀ is the threshold photon energy for π-meson production. Here, α = e²/4π is the electromagnetic fine structure constant.
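Because δ_LT is a weighted integral of g₁ + g₂ over the inelastic region, its numerical evaluation reduces to a one-dimensional quadrature once a model for the structure functions is supplied. The sketch below is a toy Python illustration using the form of the definition commonly quoted in the literature, δ_LT(Q²) = (16 α M² / Q⁶) ∫₀^{x₀} x² [g₁ + g₂] dx, with an entirely invented model for g₁ + g₂; it is not the paper's Eq. (16), and its numbers carry no physical significance.

```python
import numpy as np
from scipy.integrate import quad

ALPHA = 1.0 / 137.036   # fine structure constant
M_N   = 0.9383          # nucleon mass, GeV
M_PI  = 0.1396          # pion mass, GeV

def g1_plus_g2_toy(x: float, Q2: float) -> float:
    """Invented, smooth toy model for g1 + g2 (illustration only)."""
    return 0.05 * x ** 0.5 * (1.0 - x) ** 3 / (1.0 + Q2)

def delta_LT(Q2: float) -> float:
    """delta_LT(Q^2) = 16*alpha*M^2/Q^6 * integral_0^x0 of x^2 (g1+g2) dx."""
    nu0 = M_PI + (M_PI ** 2 + Q2) / (2.0 * M_N)   # pion-production threshold photon energy
    x0 = Q2 / (2.0 * M_N * nu0)                   # corresponding Bjorken-x limit
    integral, _ = quad(lambda x: x ** 2 * g1_plus_g2_toy(x, Q2), 0.0, x0)
    return 16.0 * ALPHA * M_N ** 2 / Q2 ** 3 * integral   # in GeV^-4; convert units as needed

for Q2 in (0.05, 0.1, 0.2, 0.3):
    print(f"Q^2 = {Q2:4.2f} GeV^2  ->  delta_LT(toy) = {delta_LT(Q2):.3e} GeV^-4")
```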
The spin-dependent part of the forward Compton amplitude is related to the structure functions g₁ and g₂, where ν is the photon energy. Since we are interested in the behavior of δ_LT at low Q², it is rather convenient to use the variable ν instead of the variable x defined for deep inelastic scattering; the generalized longitudinal-transverse polarizability δ_LT of Eq. (3) can then be rewritten accordingly (Eq. (6)). The contribution of the diagram in Fig. 1 to the spin-dependent forward Compton scattering amplitude involves g_{a1NN}, the coupling constant of the a₁ meson to the nucleon, the a₁-meson propagator P^{αβ}_{a1}(t, ν), and R_{μνα}(Q²), the triangle part of the diagram in Fig. 1, which is related to the axial anomaly. For the point-like a₁-quark vertex in the triangle graph we use the well-known formula of Ref. [7] (Eq. (8)). Here, N_c is the number of colors, e_q is the electric charge of the q-quark, g_{a1qq} is the coupling constant of the a₁ meson to the quark, and m_q is the constituent quark mass in the triangle diagram. In Eq. (8), the form factor F(Q², m_q²) is given by Eq. (9). Note that this form factor vanishes in the real photon limit, i.e., F(Q², m_q²) → 0 as Q² → 0. This feature is in agreement with the Landau-Yang theorem [10,11], which prohibits the decay of an axial-vector meson into two real photons. Therefore, the a₁-meson exchange does not contribute to the real-photon forward Compton scattering amplitude. As a result, the a₁ exchange does not alter, for example, the Gerasimov-Drell-Hearn sum rule. At high center-of-mass photon-nucleon energy, the a₁-meson propagator should be replaced by its Regge propagator [12], where A stands for the a₁ meson. Here, s ≈ 2p·q = 2M_N ν, t = (p − q)², and s₀ ≈ 4 GeV² [13,14]. The signature of the a₁ trajectory is σ_A = −1, and we assume that the a₁ trajectory is an ordinary, linear Regge trajectory with slope α′_A ≈ 0.9 GeV⁻². The intercept of the trajectory is then estimated as α_A(0) ≈ −0.36 for the a₁-meson mass M_A = 1.23 GeV [15]. By making use of this relation, the contribution of the a₁-trajectory exchange to g₁ + g₂ is obtained (Eq. (15)). Substituting Eq. (15) into Eq. (6) and then performing the integration over ν leads to the final result for the a₁ contribution to δ_LT (Eq. (16)), where z₀ = ν₀/M_N, and we used the constituent quark model relation of Eq. (17) [16]. The upper and the lower sign in Eq. (16) correspond to the neutron and the proton case, respectively. In Fig. 2, our results for the a₁-exchange contribution to the generalized longitudinal-transverse polarizability δ_LT of the nucleon are presented with the coupling constant obtained by using the assumption of axial-vector dominance [17], which gives the corresponding relation with g_A/g_V = 1.2694 ± 0.0028 and f_{a1} = (0.19 ± 0.03) GeV² [15]. This value is close to the estimate g_{a1pp} = 6.13–7.09 obtained from the nucleon-nucleon potential in Ref. [18]. Because we are using the constituent quark model relation in Eq. (17), we use the constituent quark mass in the form factor of Eq.
( 9) for consistency.In the present work, we use m q = 0.27 GeV that is supported by the study on hadron spectroscopy within Dyson-Schwinger equation approach [19]. 2 Shown in Fig. 2(a) are the results for the neutron δ n LT , while those for the proton δ p LT are given in Fig. 2(b).Here, the dashed lines are the predictions of the heavy baryon χPT of Ref. [1], which evidently overestimates the experimental data for δ n LT .The contributions from the a 1 exchange in Eq. ( 16) is given by the shaded areas because of the uncertainty of the coupling constants.As can be seen in Fig. 2(a), the a 1 -exchange contribution to δ n LT is large and negative, so, when combined with the χPT calculation, it can bring down the theoretical prediction for δ n LT closer to the measured data of the Jefferson Lab E94010 Collaboration [3] than the χPT calculation.By the filled area in Fig. 2(a), we give the result that is obtained by combining the χPT prediction of Ref. [1] and the present work.This evidently shows the nontrivial role of the axial anomaly in δ LT . In the case of the proton, the a 1 -exchange contribution to δ p LT has the opposite sign compared to the neutron case due to the isovector nature of the a 1 -meson as can be seen in Fig. 2(b).As a consequence, the contribution of the a 1 -exchange is added on the χPT prediction and the final result becomes larger.This is shown explicitly by the filled area in Fig. 2(b).Thus, measuring δ p LT , which is planned at the TJNAF [4], is an ideal tool to test the role of the axial anomaly to δ LT . However, different approaches of χPT give different predictions for δ LT at low Q 2 .As can be seen in Ref. [22], the predicted δ n LT of Ref. [2], which is based on a Lorentzinvariant formulation of χPT, has a very different Q 2 dependence.Namely, the predicted δ n LT and δ p LT of Ref. [2] decrease with Q 2 at low Q 2 region and then increase when Q 2 ≥ 0.05 ∼ 0.1 GeV 2 , while those of Ref. [1] monotonically decrease with Q 2 .This shows that δ LT is very sensitive to the approach to hadron reactions.In Fig. 3, we give the results as in Fig. 2 but with the calculation of Ref. [2].The results for δ n LT are shown in Fig. 3(a), while those for δ p LT are in Fig. 3(b).Since the contribution from the ∆ resonance is hard to control, the predictions of Lorentz-invariant χPT are given by the shaded areas with dashed lines in Fig. 3 following Ref.[2].This evidently shows that, although the two approaches of χPT give very different predictions on δ LT , both of them overestimate the measured δ LT of the neutron.Combining the a 1 -exchange with the prediction of Ref. [2] leads again to a better agreement with the measured data for δ n LT .Thus, as can be seen in Figs.2(a) and 3(a), the a 1 -exchange has a crucial role to bring the χPT calculations of Refs.[1,2] to the measured data for δ n LT , in particular, at low Q 2 region.However, we still overestimate the data at Q 2 = 0.26 GeV 2 for the both cases, and more elaborated examinations on the Q 2 > 0.2 GeV 2 region as well as on the χPT approaches are awaited. Finally, we make a comment on the contribution of the flavor singlet f 1 (1285) axial-vector meson exchange to δ LT .Because of the small coupling constant g f 1 N N ∼ 2.5 [17] and the constituent quark model relation g f 1 qq = g f 1 N N /3, the contribution of the f 1 -exchange is found to be an order of magnitude smaller than that of the a 1exchange.This allows us to safely ignore the f 1 -exchange contribution in the considered kinematical region. 
In summary, we calculated the contribution of the axial anomaly to the generalized longitudinal-transverse polarizability δ_LT through the exchange of the a₁ Regge trajectory. In spite of the large uncertainties in the a₁-trajectory exchange, we found that its contribution is nontrivial and large enough to counterbalance the discrepancy between the χPT predictions and the experimental data for the neutron, especially in the low-Q² region. To further test the role of the axial anomaly in the generalized longitudinal-transverse polarizability, we also presented the prediction for δ_LT of the proton, which will be measured at TJNAF in the near future. FIG. 2. (a) Contribution of the a₁ exchange to the generalized longitudinal-transverse polarizability of the neutron δ_LT^n as a function of Q² (shaded area). The dashed line is the result of heavy baryon χPT of Ref. [1]. The filled area is the sum of the prediction of Ref. [1] and the a₁-exchange contribution obtained in this work. The experimental data are from Ref. [3]. (b) Same for the generalized longitudinal-transverse polarizability of the proton δ_LT^p. FIG. 3. (a) The generalized longitudinal-transverse polarizability of the neutron δ_LT^n and (b) of the proton δ_LT^p. Same as Fig. 2 but with the result of Lorentz-invariant χPT of Ref. [2].
2012-01-07T11:27:02.000Z
2011-03-25T00:00:00.000
{ "year": 2011, "sha1": "46bcc35df43809f009ec89af41ee93691017c970", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1103.4892", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "46bcc35df43809f009ec89af41ee93691017c970", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
203364854
pes2o/s2orc
v3-fos-license
Management of Endovascular Intervention Complication in Spinal Vascular Malformation—Report of Two Cases Abstract Aim: Management of cases of spinal vascular malformation complicated by vessel injury during catheterization. Background: Vascular abnormalities of the spinal cord are a rare group of spinal cord diseases, and the complex angioarchitecture of these lesions makes their treatment by either surgical or radiological intervention technically demanding. Case description: We report two cases of spinal vascular malformation complicated by vessel injury during catheterization. In case 1, extravasation of embolic material and blood was documented on the digital subtraction angiogram (DSA) and the patient developed progressive sensorimotor deficits over the next 48 hours, whereas in case 2, the patient developed immediate postprocedural weakness in both lower limbs and postprocedural computed tomography (CT) showed glue material in the spinal canal. Both patients underwent decompressive laminectomy and showed improvement in motor power. Conclusion: Following intervention, if the patient develops weakness, it is generally thought to be due to ischemia and not due to extravasated material (glue plus hematoma). When imaging shows a mass effect due to this complication, the patient should undergo decompression rather than a pessimistic attitude of not doing anything being adopted because of supposed arterial ischemia and infarction. Introduction Vascular abnormalities of the spinal cord constitute a special and rare group of spinal cord diseases. The disabling natural history of these lesions, caused by hemorrhage, mass effect, venous congestion, and vascular steal, prompts early diagnosis and treatment. Since the first surgical intervention by Elsberg in 1914, the understanding of the pathophysiology and treatment modalities of spinal arteriovenous malformations (AVM) has made considerable advances. 1 Since the inception of spinal angiography over half a century ago by the pioneers Djindjian, Doppman, Di Chiro, and others, significant advances have been made. 2 Current scientific progress has brought better imaging techniques, better instruments, and better embolization materials to our disposal. Still, the complex angioarchitecture of these lesions makes their treatment by either surgical or radiological intervention treacherous and technically demanding. Though some cases are better managed with endovascular techniques, surgery may be required for complications arising out of endovascular procedures. Case Report 1 A 35-year-old lady presented with complaints of gradually progressive weakness of both lower limbs for 2 months. It was asymmetrical in onset, with tingling and numbness involving the right followed by the left lower limb and progressive weakness of the distal lower limbs, without any bladder or bowel involvement. On presentation, she had spastic paraparesis with power grade 4/5 and reduced sensation in all modalities from the L2 dermatomal level below. Imaging Spinal digital subtraction angiography (DSA) in the first case (Fig. 1) showed a perimedullary arteriovenous (AV) fistula, type 3, at the D10–D11 levels with multiple feeders from the posterior and anterior spinal arteries. The patient underwent endovascular intervention for the fistula. Treatment She underwent spinal N-butyl cyanoacrylate (NBCA) embolization for the fistula. During the procedure, there was inadvertent rupture of the radiculopial artery, and the perforated feeder was obliterated using 0.5 mL of 15% NBCA glue.
However, the check angiogram showed persistent contrast extravasation. Due to this, the procedure was abandoned. Magnetic resonance imaging (MRI) of the dorsolumbar spine (Fig. 2) was obtained, which showed a spinal subdural hematoma and an intramedullary hematoma, with compression of the cord and cord signal changes. Post procedure, she was conscious and obeying commands without any fresh sensorimotor deficits for the next 48 hours, except for severe back pain, which was treated with analgesia; hydration was maintained both orally and intravenously. She was then noticed to have paraplegia, and in view of the neurological deficits that developed over time and based on the MRI findings, she underwent D6-D12 decompressive laminectomy, evacuation of the subdural hematoma, and removal of the extravasated glue material causing compression of the cord. Postoperative Course She improved slowly over a period of 4 days. Her paresthesia in both lower limbs persisted, and her lower limb power was 4/5 distally and 2/5 proximally. Follow-up after 1 year showed that she had improved minimally. She preferred a wheelchair for mobility because of the proximal weakness. The follow-up angiogram showed a residual perimedullary arteriovenous fistula (AVF) (Fig. 3), and the MRI scan showed postoperative changes with no canal stenosis (Fig. 4). The patient was offered further surgical or radiological intervention. However, in view of the previous complication, she was not keen on further intervention. She was given medication for symptomatic relief of the paresthesia and advised to continue physiotherapy. Case Report 2 A 26-year-old male patient presented with a history of lumbar back pain for 1 year, gradually progressive and worse over the last 6 months. He had also developed difficulty in walking over the last 1 month, along with urinary urgency. On presentation, he had spastic paraparesis with power 3/5 proximally and 2/5 distally and reduced sensations below the D10 level. Imaging (spinal DSA) showed a spinal AVM at the D8-D9 levels with multiple feeders from D7-D11 (Fig. 5), with enlarged venous saccules extending intradurally on the left side of the spinal cord into the canal. The patient underwent embolization of the epidural AVM at the D10 level (Fig. 6) with NBCA; there was no complication during the procedure, but one occurred immediately post procedure. The patient developed weakness in both lower limbs, and MRI showed a glue cast in the spinal canal (Fig. 7). The patient underwent D9-D10 decompressive laminectomy (Fig. 8). Postoperatively, the patient improved to preoperative power in both lower limbs. Discussion Complications associated with Onyx embolization of spinal AVF have been discussed by Kim et al. 3 Similarly, in the treatment of an intracranial dural AV fistula, a patient developed cervical cord infarction due to occlusion of the vertebral artery. 4 Retrieval of the catheter, with both Onyx and NBCA, may be associated with a traction force on the radicular artery, as in our patient. The rupture of the radicular artery probably resulted in extravasation of glue material and a subdural hematoma. Part of the hematoma in the first patient was intramedullary and part was subdural; this is because the AVF is perimedullary in location. The usefulness of surgery was debated, since the differential diagnosis of the MRI finding was venous infarct. However, there would not have been any debate in a similar case of supratentorial AVF. Since the patient had developed weakness, and MRI showed the extravasated material and the hematoma, the decision was made to evacuate them.
The second patient presented 2 years after the first patient with a similar complication; he had developed rapid weakness intraprocedurally, and the CT scan showed extravasated embolic material in the spinal canal, hence he was taken for immediate decompressive laminectomy and removal of the offending material, resulting in immediate post-decompression improvement in the motor power of both lower limbs. Conclusion We suggest that the occlusion of spinal vascular malformations with NBCA/Onyx may be associated with a risk of radicular artery rupture, resulting in hematoma or extravasated material causing compression and neurological deficit. Following intervention, if the patient develops weakness, it is generally thought to be due to ischemia and not to extravasated material (glue + hematoma). When imaging shows a mass effect due to this complication, the patient should undergo decompression instead of taking the pessimistic attitude of doing nothing because of supposed arterial ischemia and infarct. Informed Patient Consent The patient has consented to submission of this case report to the journal.
2019-09-19T09:14:17.947Z
2019-06-01T00:00:00.000
{ "year": 2019, "sha1": "380194bd534c919bb1f8f2a7c2485a6801c72c58", "oa_license": null, "oa_url": "https://doi.org/10.5005/jp-journals-10039-1210", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c8e411af249411971337b089eacf696536fbee44", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221745952
pes2o/s2orc
v3-fos-license
Hematological detraining-related changes among elderly individuals with high blood pressure AIM: The aim of the present study was to compare the effects of detraining on physical performance, blood pressure, biologic and anthropometric variables of hypertensive elderly individuals, grouped by two levels of previous physical activity. METHODS: A total of 87 elderly individuals (70 to 93 years old) with systolic/diastolic blood pressure levels above 120/80 mmHg who participated during 18 non-consecutive months in 2 years in physical exercise programs offered in northern Portugal communities were included in the study. Tests were performed before and after three months of no exercise. Attendance to the exercise sessions, hematological markers, cardiorespiratory function, and anthropometric variables were assessed. The results were analyzed according to the fulfillment of the WHO recommendations on moderate physical activity (at least 150 minutes/week). RESULTS: Weight, total cholesterol, and glucose were influenced by the amount of physical activity performed previously to the detraining period. After the detraining period, the total cholesterol, glucose, insulin, and weight had significant differences influenced by the amount of physical activity previously performed (p<0.05). CONCLUSIONS: The number of minutes per week of aerobic and resistance exercise training over 18 non-consecutive months was not a significant determinant factor in the development of hypertension during the three months of detraining.
INTRODUCTION Physical inactivity is regarded as the greatest public health problem of the 21 st century 1 because of its high prevalence in the world population and associated problems. The negative effect of a sedentary lifestyle is associated with an increased risk of morbidity and worsening of many chronic diseases such as cardiovascular disease (CVD), congestive heart failure (CHF), stroke, osteoporosis, obesity, type 2 diabetes, some types of cancer and hypertension 2 . The many consequences of this problem led the World Health Organization (WHO) 3 in 2014 to set, within the 9 global non-communicable diseases (NCDs), targets for the year 2025, of which two were related to physical activity (PA). First, a relative reduction of 25% in premature mortality from cardiovascular diseases, cancers, and diabetes by 2025, in which PA plays an important role 4 . Second, in Target 3, a 10% relative reduction in the prevalence of insufficient physical activity. One of the most common health complications in which physical inactivity plays an important role is high blood pressure (HBP). A person is considered to have HBP when the systolic blood pressure (SBP) and diastolic blood pressure (DBP) are greater than 120 and 80 mmHg, respectively. A chronic or persistent increase of blood pressure (BP) can be classified as prehypertension (120-139/80-89 mm Hg) or arterial hypertension (AH) (≥140/90 mm Hg) 5 . Prehypertension is already associated with an increased risk of cardiovascular diseases (CVD) 6 . Although age, male sex, and family history are non-modifiable risk factors for CVD, there are also other relevant factors, such as unhealthy eating habits (e.g. excessive salt consumption), smoking, and sedentarism, which can be prevented. Indeed, WHO's aims for 2025 focused also on a 30% relative reduction in the mean intake of salt/sodium (Target 4) and a 25% relative reduction in the prevalence of increased blood pressure (Target 5). Scientific literature has reported the benefits of adapted physical exercise in patients with this health condition, demonstrating that a single session of aerobic exercise at an intensity of 50-100% of VO 2 max produces a lowering of 18-20 mm Hg in systolic and 7-9 mm Hg in diastolic blood pressure 7 . These changes remain for 12-16 h after exercise, and the maximum changes in blood pressure have been observed in individuals with mild (Stage I) hypertension. Therefore, the promotion of healthy habits is especially important at this stage due to the frequently observed benefits in both prehypertensive and hypertensive patients 7, 8 . However, the authors understand that there is little knowledge demonstrated on how these and other AH related variables change when regular physical exercise is stopped over a period of time. Hence, the importance of knowing the extent to which the effects of physical exercise are sustained over time after suspension of exercise, and whether such detraining poses a risk to individuals. Therefore, the aim of the present study was to compare the effects of detraining on physical performance, blood pressure, biologic and anthropometric variables of hypertensive elderly individuals, grouped by two levels of previous physical activity.
Participants The sample was taken from an already published study with a high number of participants 9 . A total of 87 elderly individuals aged between 70 and 93 years were recruited from several cities in the region of Minho in Portugal, where the city council offered physical exercise programs for the elderly in municipal sports centers. The inclusion criteria were: (a) Levels of SBP/DBP >120/80 mm Hg at the end of the exercise programs; (b) voluntary participation and signing of written informed consent. All individuals with BP lower than the aforementioned and those who had declined voluntary participation were excluded from the study. Procedure All potential participants who met the inclusion criteria were first informed about the study characteristics and protocol. Then, they were asked to sign a written informed consent prior to participation. The study was approved by the Scientific Council's Ethics Committee of the Polytechnic Institute of Viana do Castelo and was carried out in accordance with the Declaration of Helsinki Standards (World Medical Association, 2013), following the European Community guidelines for Good Clinical Practice (111/3976/88, July 1990). A general clinical interview and a fitness test were conducted, besides obtaining data on attendance to the exercise programs, as well as anthropometric and biochemical variables. In order to analyze detraining effects, variables were analyzed in the last month of training: June (T0) prior to finishing the exercise program, and after physical inactivity for three months, in October (T1), during the first month of activity. During this period, participants led a normal life and did not participate in said exercise programs. The researchers who analyzed the data were blinded to the amount of physical activity carried out by the subjects. Assessment The Six-Minute Walking Test (6MWT) was used to assess the physical condition in terms of cardiovascular function. This test is an amendment to the original Cooper's 12-Minute Walk-Run Test 10 and evaluates the maximum distance a person is able to walk in 6 minutes. It is widely used in people with cardiovascular or pulmonary diseases and it has been proven to be a good predictor of peak oxygen consumption (VO 2 max) and survival in patients with heart failure (r=0.64; p<0.001) 11 . Blood samples were obtained after a minimum of eight hours of fasting, with the subject in a sitting position, using standard clinical laboratory methods. Serum variables were measured with COBAS 711 and 6000 analyzers (Roche Diagnostics, Mannheim, Germany) and included glycated hemoglobin (HbA1c), cholesterol (HDL, LDL and total), triacylglycerol, blood glucose, insulin, and D-Vitamin levels. The anthropometric variables analyzed were weight (Tanita BC-418 Body Composition Analyser, Tanita Corporation, Tokyo, Japan), height (Seca 202 Measuring Rod, Seca Gmbh & Co, Hamburg, Germany), and Body Mass Index (BMI). Evaluation of SBP and DBP was done with an electronic sphygmomanometer (Omron M6, Omron Healthcare Group, Kyoto, Japan) following the recommendations from international guidelines. Exercise program The exercise program carried out comprised fitness exercises from Monday to Saturday, with 50 minutes/week sessions held in the mornings over 9 months of the year. The sessions were supervised by Physical Education graduates, wherein exercises had a (65%) aerobic component and included lower limb strengthening exercises (35%), performed at moderate intensity. The program was carried out over a two-year period, in 18 non-consecutive months. Statistical analysis For statistical analysis of the results, subjects were divided into two groups depending on whether the amount of workout in minutes/week was higher (G>150) or lower (G<150) than 150 minutes/week, which is a cut-off point based on the WHO recommendations 3 . Data analysis was divided into two parts. The first part consisted of a descriptive analysis of the study sample based on the frequency of physical exercise per week (<150 min/week; ≥150 min/week). The Kolmogorov-Smirnov test was used to verify the normality (p>0.005) of the variables studied. The Student's t-test was used for independent data, both at T0 and T1, in order to establish the presence or absence of differences between the two groups. Subsequently, an inferential covariance analysis (ANCOVA 2X2) was performed with the following covariates: age, sex, and education level. A p-value <0.05 was considered significant. The SPSS Statistical package 20.0 for Windows (IBM Corp., Armonk, N.Y., USA) software was used for statistical analysis.
RESULTS A total of 87 participants (69.4% female and 30.6% male) completed the follow-up phase carried out at the end of the exercise program, during which the average age of the sample was 78.5 ± 2.6 years. Start of the follow-up Table 1 shows the values of the physical variables, blood pressure, and complete blood count (CBC) at the start of the follow-up. Data are categorized by minutes of physical activity per week. No significant differences in SBP and DBP were observed between the two groups. However, the G>150 presented SBP values considered as high-normal (130-139 mm Hg) while the average values in G<150 exceeded 140 mm Hg, which is considered as Stage 1 hypertension. DBP values were optimal (<80 mm Hg) in both groups.
In the remaining variables, significant differences were seen only in weight, BMI, and the distance covered in the 6MWT, with the G>150 presenting lower BMI values and covering more distance in the 6MWT. As far as cardiovascular risk factors (CRF) are concerned, although total cholesterol in both groups was higher than 190 mg/dL, the LDL, HDL, and triglyceride levels did not reach values that indicate the presence of dyslipidemia. On the other hand, the fasting plasma glucose in both groups was higher than 102 mg/dL, values considered as CRF (102-125 mg/dL) 5 . Moreover, in the G<150 group, the BMI was higher than 30 kg/m 2 . A second evaluation was performed after the 3-month inactivity period (detraining). Significant differences were found between the groups: the total distance covered in the 6MWT and vitamin D levels were higher, while the total cholesterol, triglycerides, and insulin were lower in the G>150 as compared to the G<150 (Figure 1). Table 2 shows a comparative analysis of the two evaluation times (T0 and T1) for the two groups. Significant differences were found in the development of some variables between T0 and T1 in both groups. These differences were a lower weight gain (p = 0.010) and BMI increase (p = 0.002) in the G<150 versus the G>150, as well as a decrease in total cholesterol (p = 0.003) in the G>150 compared with an increase in the G<150, a further reduction in HDL-cholesterol levels (p = 0.023) in the G>150 versus the G<150, a slight increase in glucose (p = 0.046) in the G>150 versus the G<150, and a greater increase in insulin (p = 0.004) in the G>150 versus the G<150.
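The blood pressure, fasting glucose, and BMI cut-offs applied in the interpretation above can be written out compactly. The following Python lines are an illustrative sketch only, using the thresholds quoted in this paper; the function names and example values are assumptions, not study data.

def classify_bp(sbp_mm_hg, dbp_mm_hg):
    # Cut-offs quoted in the text: 120-139/80-89 mm Hg prehypertension, >=140/90 mm Hg arterial hypertension
    if sbp_mm_hg >= 140 or dbp_mm_hg >= 90:
        return "arterial hypertension"
    if sbp_mm_hg >= 120 or dbp_mm_hg >= 80:
        return "prehypertension"
    return "normal"

def fasting_glucose_is_crf(glucose_mg_dl):
    # 102-125 mg/dL is treated above as a cardiovascular risk factor
    return 102 <= glucose_mg_dl <= 125

def is_obesity_level_i(bmi_kg_m2):
    # BMI above 30 kg/m2 is read in the text as obesity level I
    return bmi_kg_m2 >= 30.0

print(classify_bp(142, 76), fasting_glucose_is_crf(110), is_obesity_level_i(30.4))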
Therefore, in order to analyze the residual effects of exercise on subjects that participated routinely in the programs, this study focused on the number of minutes per week, representative of the adherence to the exercise sessions. Thirdly, as indicated by Cornelissen and Smart 7 , there are very few studies that have actually carried out combined physical exercise programs, and our study is one such example. Moreover, recent research indicates that the effects of combined strength and cardiovascular endurance programs may produce better results than programs that focus only on a single physical capacity 12,13,15 . Participants were initially assessed after completing the exercise programs and significant differences were only found between groups in the 6MWT and in their weight. The G<150 presented a higher weight, BMI>30 kg/m 2 , considered as obesity level I, and its relationship with physical activity is unmistakable 4 . Literature also indicates that weight reduction is associated with a higher reduction in SBP and SDP values 7 . There was no significant difference between groups for SBP. In the G>150 group, average values did not reach the cut-off point established for Stage 1 hypertension (SBP/DBP ≥140 mm Hg); however, this did happen in the G<150 group. The effect that the more physical exercise the better the BP has been amply demonstrated in the literature, hence this situation is probably influenced by the previous physical activity levels 7 . Another important aspect worth mentioning is that exercise intensity has been identified as a determinant factor for reducing BP 7 . Therefore, the present study groups were engaged in exercise programs of similar intensity, duration, and type, the only difference being in the program carried out, the frequency per week, and consequently the accumulated physical exercise in minutes/week. This is why the present results are exempt from this influence. After a 3-month detraining period, no significant SBP/ DBP differences were found in any of the groups or in the analysis of interaction time/group. There is nevertheless an increasing trend in it, which is more pronounced in the G>150. The authors feel that this rebound effect of BP recovery conditioned by the amount of physical exercise performed has not been reported previously 7 . One of the explanations could be related to the increase in weight and BMI after the detraining period, which was more pronounced in the G>150 compared to the G<150. Considering that BMI has been positively associated with BP 16 , it is reasonable to think that the increase in BMI could have influenced the increase of BP in the G>150. Nevertheless, more research is needed in order to identify how this or other parameters, such as the previous levels of physical activity, could influence this rebound effect. In terms of the 6MWT, at baseline, the G>150 group covered more distance in the 6MWT, something expected due to the widely documented relationship between the test and the amount of physical activity undertaken 17,18 . The distance covered by both groups in this test was lower than that reported by Araújo et al. 19 in the Portuguese high BP population with an average age of 71 years. The differences in the results of the 6MWT may be because the average age in our sample was higher than that in Araújo et al. 19 (78 years vs. 71 years), and the average BMI value in our sample was higher. Both these factors have been established as influential in the results of the 6MWT 20 . 
After the three months of detraining, significant differences were again observed in terms of higher total distance covered in the test, but no significant differences were identified in the analysis of interaction time/group. This indicates a trend towards greater maintenance of physical activity-related functional capacity after a detraining period. These results are in line with those reported by other studies 21 , considering that health condition is a determining factor in the results of the 6MWT 22 and that the G<150 group had higher BP values. The relationship between the cumulative level of physical activity and its influence on other biochemical variables is well documented in the literature and is present in the remaining variables, where significant differences were observed in the case of low total cholesterol (significantly higher in the G<150), as well as in triglycerides 23 . A lower increase in glucose level was observed in the G>150 group compared with the G<150, and results are in line with other studies 24 . Given that there was no nutritional intervention and assuming that eating habits did not change in the participants, the explanation for the high D-vitamin concentration is perhaps because the baseline evaluation (start of the follow-up) was carried out before the summer, while the second evaluation was performed later, hence the higher exposure to UVB rays. The differences between the groups could be due to the fact that participants who did more activity were outdoors in sunlight during the three months of follow-up, and the vitamin D concentration can also be related to the level of physical activity 25 . Even though the findings from this study may be of relevance for the reasons outlined above, there are still a number of limitations that need to be considered. In the first place, we were unable to obtain data prior to the exercise program, which would have facilitated analysis of their development and better contextualization of the differences between both groups during the follow-up period. Secondly, no information was collected on whether participants smoked or not, and the amount of tobacco consumed during the study period. In conclusion, the number of minutes per week of aerobic and resistance exercise training over 18 non-consecutive months was not a significant determinant factor in the HBP development during the three months of detraining after a 2-year exercise program in the sample of 87 elderly HBP individuals aged 70 to 93 years. However, other variables such as weight, total cholesterol, and glucose may be influenced by the amount of physical activity undertaken and thus have a protective effect at a cardiovascular level in this population.
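For readers who wish to reproduce the type of group comparison described in the Statistical analysis subsection above, the following is a minimal, hypothetical sketch in Python (using pandas, SciPy, and statsmodels rather than the SPSS package actually used). The column names and the simulated values are assumptions introduced for illustration, not study data.

import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 87
df = pd.DataFrame({
    "minutes_week": rng.integers(60, 300, n),                    # hypothetical weekly exercise minutes
    "age": rng.integers(70, 94, n),
    "sex": rng.choice(["F", "M"], size=n, p=[0.694, 0.306]),
    "education": rng.choice(["primary", "secondary"], size=n),
    "chol_T0": rng.normal(200.0, 25.0, n),                       # total cholesterol before detraining
    "chol_T1": rng.normal(205.0, 25.0, n),                       # total cholesterol after 3 months of detraining
})
df["group"] = np.where(df["minutes_week"] >= 150, "G>150", "G<150")  # WHO 150 min/week cut-off
df["delta_chol"] = df["chol_T1"] - df["chol_T0"]

# Normality check on the detraining change (the study used the Kolmogorov-Smirnov test)
z = (df["delta_chol"] - df["delta_chol"].mean()) / df["delta_chol"].std()
print(stats.kstest(z, "norm"))

# Independent-samples t-test between the two activity groups at T0
g_high = df.loc[df["group"] == "G>150", "chol_T0"]
g_low = df.loc[df["group"] == "G<150", "chol_T0"]
print(stats.ttest_ind(g_high, g_low))

# Covariance analysis of the detraining change, adjusting for age, sex, and education level
model = smf.ols("delta_chol ~ C(group) + age + C(sex) + C(education)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))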
2020-09-17T13:06:18.941Z
2020-08-01T00:00:00.000
{ "year": 2020, "sha1": "0f4f3648f02ebd63d73c0f4b3c1e2c95194f376b", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/ramb/a/mkzKc9bMyRTNw4q5RXY7SXw/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "df1d1411d359a9334886793dfeeca405da469d91", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
42733582
pes2o/s2orc
v3-fos-license
Time dependent differences in gray matter volume post mild traumatic brain injury When the brain is subjected to excessive physical forces, including blunt impact, high-speed rotation, or blast overpressure waves, its tissue structure and function can be compromised, leading to traumatic brain injury (TBI). Based on the level of structural and functional damage, these injuries can be classified as mild, moderate, or severe, with mild TBI (mTBI) being by far the most common. Also known as concussion, mTBI frequently occurs in a wide variety of activities, including accidental falls, sports injuries, moving vehicle accidents, military training, and combat related events such as blast exposure. mTBI can lead to various cognitive, sensory and motor complaints like reduced memory, attention, and information processing speed, and emotional dysregulation (Carroll et al., 2004). Most individuals with mTBI will recover from these symptoms within 90 days post injury (Karr et al., 2014), but for some individuals, the symptoms may be protracted, persisting up to a year or longer (Satz et al., 1999). For a small minority of individuals, these cognitive and emotional symptoms are severe enough to significantly affect social and occupational functioning. In contrast to moderate and severe injuries, one of the defining features of an mTBI is the absence of detectible structural lesions on a standard clinical imaging scan. While individual lesions may not be present, there is emerging evidence that, as a group, patients with mTBI may actually be differentiated from non-injured controls based on brain volume data. For instance, previous studies have shown decreased gray matter volume (GMV) post mTBI, suggesting a loss of cortical neurons (List et al., 2015). Very few studies, however, have explored differences in GMV at different time intervals post mTBI and their relationship with neuropsychological performance. Such research is crucial to understanding the recovery process because the brain is not static and neuroplastic remodeling may continue for some time after an injury. Understanding this relationship can facilitate better-targeted intervention strategies to aid in rehabilitation following mTBI.
We recently reported findings suggesting that mTBI may not simply be associated with reduced cortical volume, but instead may show specific increases in gray matter volume (GMV) as well (Killgore et al., 2016). In that project we studied the cortical volume changes and their association with neuropsychological task performance at various time intervals up to a year following injury. We used a 3.0 Tesla magnetic resonance imaging scanner (Siemens Tim Trio, Erlangen, Germany) with a 32-channel head coil for our study. A T1 weighted 3D MPRAGE sequence (TR/TE/flip angle = 2.1 s, 2.3 ms, 12°) was used to acquire 176 sagittal slices (256 × 256 matrix) with a 1-mm slice thickness, yielding a voxel size of 1 × 1 × 1 mm 3 . The VBM8 toolbox in SPM8 (http://www.fil.ion.ucl.ac.uk/spm/software/spm8/) was used to process the T1-weighted structural images. All images were spatially realigned to the anterior-posterior commissure axis and then segmented into GM, WM, and CSF using VBM8. A custom DARTEL template was created using the segmented images and then the images were normalized to Montreal Neurological Institute (MNI) space. Images were then smoothed with a 10 mm full width at half maximum (FWHM) isotropic Gaussian kernel (an illustrative sketch of this smoothing step follows this passage). The study participants included 26 right-handed adults (age range 20-45 years, mean age 23.38 ± 5.23, 11 males, 15 females), with English as their primary language. All participants had a history of sports-related mTBI experienced within the 12 months prior to participation in this study (2 weeks [n = 2], 1 month [n = 6], 3 months [n = 5], 6 months [n = 10], 1 year [n = 3]). All of these participants sustained mTBI while engaging in sports activities such as rugby (n = 7), basketball (n = 3), softball (n = 1), ultimate frisbee (n = 1), soccer (n = 1), ice hockey (n = 2), lacrosse (n = 1), martial arts (n = 2), weight lifting/gym (n = 4) and track and field (n = 4). The participants were initially screened over the telephone for the details of their head injury and their medical and psychiatric history. Participants were ruled out for any serious chronic medical, neurological or psychiatric condition such as hypertension, diabetes, epilepsy, bipolar disorder, or attention deficit hyperactivity disorder. The only exception was depression and anxiety developing after the concussion. Also, they were required to provide official documentation of head injury signed by an impartial but professionally responsible witness to the head injury or its immediate consequences (e.g., physician, nurse, ambulance driver, medical records, neuropsychologist). Additionally, 12 healthy control participants (age range 20-43 years, mean age 25.00 ± 6.55, 4 males, 8 females), with no history of head injury or loss of consciousness, were recruited as a comparison group. On the day of the visit, the healthy and mTBI individuals underwent the same series of neuropsychological assessments and MRI sequences. Remarkably, in contrast to the general finding of reduced GMV following mTBI found in other studies, our results did not show such reductions, but instead showed that longer time since injury (TSI) was associated with increased GMV in two brain regions (see Figure 1), including the cortex of the right fusiform gyrus (RFG) and bilateral ventromedial prefrontal cortex (VMPFC). In other words, the cortex of these regions appeared to be larger among those whose injuries were most distal in time.
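As a purely illustrative aside, the sketch below shows how the 10 mm FWHM smoothing step mentioned above translates into a Gaussian sigma for 1 mm isotropic voxels; the toy array merely stands in for a segmented gray-matter map and is not study data (the actual processing was done with VBM8/SPM8).

import numpy as np
from scipy.ndimage import gaussian_filter

fwhm_mm = 10.0
voxel_mm = 1.0                                                          # 1 x 1 x 1 mm MPRAGE voxels, as above
sigma_vox = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm     # FWHM -> Gaussian sigma, in voxels

toy_gm_map = np.random.default_rng(0).random((121, 145, 121))           # placeholder volume, not study data
smoothed = gaussian_filter(toy_gm_map, sigma=sigma_vox)
print(round(sigma_vox, 3), smoothed.shape)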
Moreover, larger GMV was associated with better performance for visual motor, visual attention, and emotional functioning tasks, suggesting that greater cortical volume in specific regions was associated with better functional outcome. We speculate that these data point toward significant cortical remodeling occurring in the months following injury. To further evaluate that possibility, we divided our sample roughly in half so that we could compare those in the post-acute stage (0-99 days post-injury) to those in the chronic stage (100-365 days post-injury), and further compared them to a separate sample of healthy individuals with no reported history of head injury. Consistent with our hypothesis, the chronic group showed significantly greater GMV in both regions compared to the post-acute group, confirming that gray matter was increased with longer TSI. Moreover, the chronic group also showed significantly greater GMV compared to the healthy controls, suggesting that not only was the GMV returning to normal with greater TSI, it was actually exceeding the volume seen in healthy normals. Thus, for these individuals, the later stages of recovery were associated with exaggerated GMV in specific regions that are involved in regulating emotion as well as sustaining visual attention and information processing speed. We interpreted these findings as evidence of experience dependent cortical plasticity. In other words, we propose that for many individuals, mTBI leads to a host of subtle core cognitive impairments and emotional regulation deficits post-injury, which over time, lead the injured individual to draw upon these other cortical regions to compensate. For example, reduced frustration tolerance and emotional dysregulation are common experiences after mTBI and are not specific to a particular lesion site (Ryan and Warden, 2003). It is conceivable that individuals with these emotional difficulties may more routinely activate the ventromedial prefrontal cortical regions, which play an important role in emotional and visceromotor regulation, in an attempt to maintain emotional control. Similarly, many people experience slowed processing speed and attentional difficulties following a concussion (Levin et al., 1987). This may cause such individuals to draw more heavily upon regions such as the fusiform gyrus and other visual attention regions in order to compensate. With sustained and exaggerated use, it is conceivable that these highly exercised regions may begin to develop larger cortical volume through more extensive dendritic arborization. It is well established that repeated practice with certain motor or cognitive skills can lead to an increase in specific cortical regions supporting that skill (Quallo et al., 2009). The preliminary findings from our study are encouraging, suggesting that mTBI is not uniformly defined by decreased cortical volumes. On the contrary, regional increases in volume are possible within this population and these volume changes are associated with improved cognitive and emotional functioning. The fact that we identified specific regions of volume increases is remarkable given the fact that mTBI is an extremely heterogeneous injury, with multiple potential causes and diffuse locations of damage (Bigler, 2008). 
The fact that these areas of increased volume were consistent and focal suggests that they are likely independent of lesion location; rather, they likely reflect common pathways for compensation that are relatively independent of the site of impact or location of damage. Previous studies have shown that behavioral experience interacts with regenerative and degenerative changes in the brain to induce structural and motor plasticity (Kerr et al., 2011). Compensatory remodeling is one of the ways neuroplasticity works and may undergird the mechanisms behind rehabilitative training, which forms one of the mainstays of treatment post mTBI. On the basis of our findings we suggest that rehabilitative training might be even more beneficial if it can capitalize on this aspect of neuroplasticity. Perhaps by focusing rehabilitation efforts toward exercising existing compensatory skills that draw upon these regions (e.g., emotional regulation; regulating attention from distraction), patients can further develop the cortical volume of those regions and, over time, gain greater functional capacity. This would be encouraging and suggest that there is more that could be done for patients recovering from concussions than merely to "wait and see." Clearly this is speculative at this point, but further research should examine whether the cortical volume, structural and functional connectivity, and functional capacity of these same regions can be voluntarily enhanced in patients recovering from mTBI via focused training. Finally, it will be important for future work to focus efforts toward using functional neuroimaging. This will enable linkage between the cognitive tasks and deficits identified after an injury and the regions of increased gray matter volume identified in our study.
2018-04-03T05:41:38.411Z
2016-06-01T00:00:00.000
{ "year": 2016, "sha1": "ce497e3680dcb62fb591afea2026b165be508a21", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/1673-5374.184487", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "bb4a0e2aa42fc53ab6c1f103cdd8065033625c12", "s2fieldsofstudy": [ "Psychology", "Biology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
211723396
pes2o/s2orc
v3-fos-license
Synergistic Effect of 1,3,6-Trihydroxy-4,5,7-Trichloroxanthone in Combination with Doxorubicin on B-Cell Lymphoma Cells and Its Mechanism of Action Through Molecular Docking Highlights • This study investigated the potency of a xanthone derivative when used in combination with doxorubicin on a cancer cell line. • The xanthone-doxorubicin combination showed promising in vitro activity against lymphoma cells. • Potential mechanisms of action responsible for the ability of this xanthone derivative to reduce doxorubicin resistance were investigated. Introduction Malignant tumors result from complex structural and quantitative alterations at the molecular level, leading to altered cell behavior and uncontrolled cell proliferation. B-cell lymphoma is a lymphoid malignancy and a type of cancer of the immune system. Worldwide data from Global Burden of Cancer (GLOBOCAN) in 2008 indicate that approximately half of all newly diagnosed hematologic neoplasms are lymphomas. 1 Lymphoid neoplasms are the fourth most common cancer and the sixth leading cause of death in the United States. Approximately 136,000 Americans were diagnosed with lymphoma in 2016, 2 and its incidence has been increasing. The prevalence of lymphoid malignancies is lower among Asians compared with other ethnic groups in the United States and among foreign-born Asians compared with US-born Asians. 3 Developing treatment strategies for lymphoid malignancies is an ongoing effort, but relapses and treatment-related complications remain major obstacles. Thus, the development of new therapeutic strategies is a significant unmet need to improve the prognosis and quality of life of patients with malignant lymphomas. 4 Doxorubicin is a broad-spectrum antitumor antibiotic isolated from Streptomyces species and is widely used as an anticancer agent to treat different types of cancer, including lymphoid malignancies. 5 It is also widely accepted that using a single chemotherapeutic regimen is ineffective in producing the desired therapeutic effect in many cancers and, instead, causes the emergence of side effects and resistance. Half of the patients with B-cell lymphoma treated with anthracycline-based chemotherapy develop chemoresistance. 6 A new approach is, therefore, required to improve doxorubicin sensitivity and prevent chemoresistance; a possible approach is using a combination of doxorubicin and another compound, leading to a synergistic effect that inhibits the proliferation of lymphoma cells. Using such chemotherapeutic combinations is a rational strategy to improve response and tolerability and decrease resistance. 7 Drug combinations have been shown to be capable of lowering drug resistance (due to nonoverlapping mechanisms of action) and side effects (due to lower doses). 8 Utilizing multiple drugs with different mechanisms of action may involve either single or multiple targets and result in a synergistic effect, 9 thereby increasing the effectiveness of treatment. Proteins dysregulated in B-cell lymphoma and involved in the development of doxorubicin resistance include Raf isoforms, nuclear factor kappa B (NF κB)-inducing kinase (NIK), c-Jun N-terminal kinase (c-JNK), survivin, Bruton tyrosine kinase (Btk), Ak mouse thymoma (Akt), and cyclin-dependent kinases.
Activated Raf isoforms have been reported to increase p21 and c-Myelocytoma (c-Myc) levels, and this signal transduction pathway may be involved in doxorubicin resistance. Raf-1 expression increases Michigan Cancer Foundation-7 (MCF-7) cancer cells' resistance to doxorubicin 10 through the activation of p-glycoprotein, a member of the adenosine triphosphate-binding cassette transporter family that facilitates the efflux of a wide variety of anticancer drugs, including anthracyclines. 11 NIK is a major enzyme involved in the activation of NF κB, a transcription factor with tumor-promoting properties. 12 NF κB is constitutively activated in various lymphoid malignancies and plays a dominant role in the neoplastic transformation of B-lymphoid cells. Some key components of the NF κB pathway are affected in B-cell lymphoid malignancies, leading to uncontrollable cell behavior. 13 Survivin is a member of the inhibitor of apoptosis protein family, which is highly expressed in various hematologic malignancies, 14 including B-cell lymphoma. 15 Survivin TmSm inhibitors increase apoptosis, suppress cell proliferation, and increase the sensitivity of cancer cells to doxorubicin when used in combination. 16 Btk is a cytoplasmic tyrosine kinase; it plays an important role in B-cell maturation and is overexpressed in various B-cell malignancies. 15 The activation of this protein catalyzes phosphorylation and activates phospholipase C2, resulting in the activation of Ras/Raf/MEK/ERK and NF κB pathways. 17 Akt is an intracellular kinase that plays an important role in cell survival and proliferation and is highly expressed in B-cell lymphoma. 15 Increased Akt expression is associated with breast cancer cells' resistance to doxorubicin. 18 Doxorubicin is an anticancer agent with apoptosis-inducing activity. c-JNK plays an important role in caspase activation and apoptosis induction by doxorubicin. c-JNK is also involved in the degradation of Myeloid cell leukemia-1 (Mcl-1) through phosphorylation and ubiquitination, which is an important process in sensitizing breast cancer cells to antimicrotubular agents. Inhibiting c-JNK activation leads to high concentrations of Mcl-1, which protects cancer cells from apoptosis, causing cancer cells' resistance to doxorubicin. 19 Suppressing Mcl-1 expression can induce matrix metalloproteinase degradation and caspase activation. 20 A xanthone-derivative compound, 1,3,6-trihydroxy-4,5,7trichloroxanthone (TTX), is a novel synthetic xanthone synthesized based on quantitative structure-activity relationship analyses of many xanthone compounds. 21 This compound contains 3 hydroxyl groups and 3 chloro atoms, as shown in Figure 1 ; its synthesis and characterization have been recently reported. 22 A previous study showed that hydroxyl group substitutions on the xanthone scaffold increase cytotoxic activity compared with the parent compound. Increasing the number of hydroxyl groups did not linearly increase cytotoxic activity, suggesting that the position of hydroxyl groups might also determine xanthones' cytotoxic activity. 20 Xanthone compounds, both natural and synthetic, have been widely studied as anticancer agents, but mostly for single-agent treatment and not in combination. The present study provides novel information regarding the potential role of TTX as a cochemotherapeutic agent with doxorubicin; such a combination may be an effective strategy resulting in synergistic efficacy while minimizing resistance and side effects of doxorubicin. 
In addition, molecular docking of TTX on potential protein receptors involved in doxorubicin resistance was also performed to predict the mechanism(s) underlying the synergy. Cell lines and cultures Cells from the Raji B-cell lymphoma line and a normal cell line (Vero) were obtained from the Parasitology Laboratory of the Faculty of Medicine, Gadjah Mada University. The cells were cultivated in Iscove's modified Dulbecco's medium (Raji) or M199 medium (Vero) supplemented with 100 U/mL penicillin, 100 μL streptomycin, and 1% fungizone in a humidified incubator at 37 °C and 5% carbon dioxide. The study received ethics approval from the Medical and Health Research Committee of the Faculty of Medicine, Gadjah Mada University - Dr. Sardjito General Hospital, Yogyakarta, Indonesia. Compounds The compounds tested were doxorubicin and a chloro-substituted xanthone derivative, TTX. Doxorubicin was available as a stock solution at a concentration of 2000 μg/mL (equivalent to 3679.85 μM) and diluted using culture media as needed. A total of 10 mg TTX was dissolved in 100 μL 10% DMSO in water to form a stock solution with a concentration of 100,000 μg/mL (equivalent to 244,200 μM). The highest TTX concentration tested was 100 μg/mL (dilution of 1000 times using culture media, equivalent to 244.200 μM) so that the DMSO concentration in this solution was 0.01%. This solution was diluted to varying concentrations of 122.100, 61.050, 30.525, 15.263, 7.631, and 3.816 μM TTX using culture media, and the DMSO concentrations in subsequent dilutions were 0.005%, 0.0025%, 0.00125%, 0.00063%, 0.00031%, and 0.000156% in each TTX concentration tested (this dilution arithmetic is illustrated in the short sketch at the end of this passage). A previous study has shown that 10% DMSO is safe to use in some in vitro studies without causing cytotoxic effects in various types of cancer cell lines. When conducting a cytotoxicity test, the DMSO concentration in the highest concentration of the tested compound should not exceed 0.1%. 23 Dilutions of doxorubicin and TTX were used with a suitable culture medium for each cell line. TTX was used in combination with doxorubicin to treat Raji cells, and its potential mechanism of action was investigated through molecular docking. In vitro cytotoxicity assay The in vitro cytotoxic activity of TTX and doxorubicin against Raji and Vero cell lines was measured using MTT assay as described in a previous study, 24 with some modifications, to determine the half-maximal inhibitory concentration (IC 50 ) values of doxorubicin and TTX and their selectivity against Raji cancer cells. The IC 50 values thus obtained were used to determine the concentrations to be used in TTX-doxorubicin combinations, comprising ratios of one-half, three-eighths, one-fourth, and one-eighth of the IC 50 values. Raji or Vero cells were seeded at a density of 1 × 10 4 cells per well in 96-well microplates and incubated overnight. Raji cells were treated with 100 μL of 244.200, 122.100, 61.050, 30 28.749, or 13.374 μM), or 50 μL culture medium without cells as a medium control. The plates were incubated for 24 hours at 37 °C in a 5% carbon dioxide incubator, followed by the addition of 50 μL 10% MTT solution and reincubation for another 4 hours. Next, 100 μL 10% SDS in water was added to all the wells, and the plate was incubated overnight to dissolve the formazan crystals. The absorbance (ie, optical density) of the wells at a wavelength of 595 nm was determined using an ELISA microplate reader.
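The dilution arithmetic given in the Compounds subsection above (a 100,000 μg/mL TTX stock in 10% DMSO diluted 1000-fold to the highest tested concentration, with the DMSO percentage falling proportionally at each further two-fold dilution) can be checked with a short script. This is a hedged illustration only; the variable names are assumptions.

stock_ug_ml = 100_000.0   # TTX stock: 10 mg dissolved in 100 uL of 10% DMSO
stock_dmso_pct = 10.0
top_test_ug_ml = 100.0    # highest concentration tested

dilution_factor = stock_ug_ml / top_test_ug_ml       # 1000-fold dilution in culture medium
dmso_pct = stock_dmso_pct / dilution_factor          # 0.01% DMSO at the top concentration
print(dilution_factor, dmso_pct)

conc_ug_ml = top_test_ug_ml
for step in range(6):
    conc_ug_ml /= 2.0        # two-fold serial dilution
    dmso_pct /= 2.0          # DMSO percentage falls in proportion
    print(round(conc_ug_ml, 3), "ug/mL ->", dmso_pct, "% DMSO")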
Viable cells (%) were counted from the measured optical densities. IC 50 values represent the drug concentration in micromolar (μM) required to inhibit cell viability by 50%, calculated using probit analysis. The selectivity index was calculated from the ratio of the IC 50 against Vero cells to that against Raji cells. Selectivity index values > 3 were considered to indicate good selectivity. 25 Drug combinations The effects of different concentrations of TTX, doxorubicin, and their combinations on Raji cell growth were measured using MTT assay as described previously, with modifications. Raji cells were plated in a 96-well microplate at a density of 1 × 10 4 cells per well and incubated at 37 °C in 5% carbon dioxide for 24 hours. The cells were treated with one-half IC 50 , three-eighths IC 50 , one-fourth IC 50 , or one-eighth IC 50 of TTX, doxorubicin, or combinations thereof at a 50-μL total volume for 24 hours. The plate was incubated for another 4 hours at 37 °C in 5% carbon dioxide after adding 50 μL of 10% MTT to each well. Next, 100 μL SDS was added to the plate and incubated overnight to solubilize the MTT formazan crystals completely. The optical density of the wells at a wavelength of 595 nm was determined using an ELISA microplate reader. The percentage of viable cells was determined using the in vitro cytotoxicity assay method described above. Analysis of drug combinations Drug interactions between TTX and doxorubicin were determined using isobologram analysis 26 and denoted with the combination index. The combination index analysis was based on the median-effect principle and computed using the following formula. Combination index = D 1 /(Dx) 1 + D 2 /(Dx) 2 , where D 1 and (Dx) 1 are concentrations of TTX and doxorubicin, respectively, that inhibit cell growth to 50% of control when used alone, and D 2 and (Dx) 2 are concentrations of TTX and doxorubicin, respectively, that produce the same effect when used in combination. 8 Combination index values, calculated using CompuSyn software (ComboSyn Inc, Paramus, New Jersey), indicate the effects of drug combinations and are interpreted as shown in Table 1 (a short arithmetic sketch of these calculations follows this passage). Preparation for docking A molecular docking study of TTX on protein receptors was conducted based on a procedure described previously. 27 Three-dimensional crystal structures of protein receptors involved in doxorubicin resistance were retrieved from the Protein Data Bank (PDB) database (Research Collaboratory for Structural Bioinformatics/RCSB, USA). The downloaded protein structures were then prepared with YASARA ver. 10.1.8 (YASARA Biosciences GmbH, Vienna, Austria) (www.yasara.org) in the standard setting, and hydrogen atoms were added to the structures. The results were saved in the mol2 format for docking. The downloaded native ligands of protein receptors were prepared with MarvinSketch version 16.5.2.0 (ChemAxon, Budapest, Hungary) (http://www.chemaxon.com) by configuring them in a 2-dimensional format. They were found to be protonated at pKa 7.4, and 10 ligand conformations were built. The 10 conformers generated were then saved in the mol2 format for docking. TTX was used as an experimental ligand, and its 2-dimensional structure was prepared with MarvinSketch in the same way as the native ligand; the conformers were then saved in the mol2 format for docking. Molecular docking Molecular docking simulation was carried out using the Protein-Ligand ANT System (PLANTS). 28 The root mean square deviation (RMSD) and free energy of binding were used as docking parameters and measured using Yasara.
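The percentage-viability, selectivity-index, and combination-index calculations described above can be expressed compactly. The exact viability formula used by the authors is not reproduced in the text, so the sketch below uses the conventional background-corrected form as an assumption, and it reads the combination index in the usual median-effect sense (combination doses divided by the equally effective single-agent doses); all numeric inputs are placeholders.

def percent_viable(od_treated, od_control, od_blank):
    # Conventional MTT form (assumed): background-corrected treated OD relative to untreated control OD
    return (od_treated - od_blank) / (od_control - od_blank) * 100.0

def selectivity_index(ic50_vero, ic50_raji):
    # SI = IC50 against Vero cells / IC50 against Raji cells; SI > 3 is read as good selectivity
    return ic50_vero / ic50_raji

def combination_index(d1_combo, d1_alone_equieffective, d2_combo, d2_alone_equieffective):
    # CI = D1/(Dx)1 + D2/(Dx)2 in the median-effect (Chou-Talalay) sense; CI < 1 suggests synergy
    return d1_combo / d1_alone_equieffective + d2_combo / d2_alone_equieffective

print(round(percent_viable(0.42, 0.95, 0.08), 1))         # placeholder optical densities
print(round(selectivity_index(81.0, 15.948), 2))           # placeholder Vero IC50 against the TTX IC50 quoted in the text
print(combination_index(7.974, 15.948, 12.716, 25.432))    # arithmetic check with placeholder equieffective doses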
The docking process was considered valid and suitable for being reproduced if the RMSD value of the copy ligand after redocking was < 2 Å . 29 The best predictive binding position was selected based on the most electronegative free energy of binding. Interactions between the protein structure and ligands were visualized with PyMOL ver 1.7.5.0 (Schrödinger, New York, New York) ( http://www.pymol.org ). Additionally, the hydrogen bonds formed between protein receptors and ligands were compared with those of the native ligand. 30 Cytotoxicity assay The cytotoxicity of TTX and doxorubicin individually and in combination against Raji or Vero cells was determined using the MTT method. The results are shown in Table 2 . The Council of Scientific and Industrial Research classifies cytotoxic activities into inactive (mean IC 50 , > 50 μg/mL), weak (15 μg/mL < mean IC 50 , < 50 μg/mL), moderate (6.25 μg/mL < mean IC 50 , < 15 μg/mL), or potent (mean IC 50 , < 6.25 μg/mL). 31 According to these criteria, the cytotoxic activities of TTX 15.948 μM (equivalent to 6.53 μg/mL) and doxorubicin 25.432 μM (equivalent to 13.82 μg/mL) found in the present study are classified as moderate. The sensitivity index values of TTX and doxorubicin > 3 indicate their selectivity against Raji cells. 25 Combined effect of TTX and doxorubicin on Raji cells Cells were treated with various concentrations of TTX and doxorubicin for 24 hours to investigate the combined effect of TTX and doxorubicin on the viability of Raji cells. The IC 50 values obtained after single treatments of TTX or doxorubicin were used to determine their concentrations in TTX-doxorubicin combinations. The concentrations used were calculated as one-half, threeeighths, three-fourths, or one-eighth IC 50 of TTX or doxorubicin single treatments; thus, the TTX concentrations used were 7.974, 5.981, 3.986, and 1.993 μM, and the doxorubicin concentrations used were equivalent to 12.716, 9.537, 6.358, and 3.179 μM. The experiments consisted of 4 groups: a control group treated with medium alone, a group treated with TTX alone, a group treated with doxorubicin alone, and a group treated with a combination of TTX and doxorubicin. Cell viability was measured using MTT assay, and the data were converted into percentages of viable cells. The data confirmed that in individual treatments, all concentrations of either TTX ( Figure 2 ) or doxorubicin ( Figure 3 ) alone inhibited cell proliferation by < 50%. The cytotoxic activity of TTX or doxorubicin against Raji cells did not increase in direct proportion to the concentration. The percentage of viable cells at a TTX concentration of 1.993, 3.986, 5.981, and 7.974 μM was 71.04%, 71.57%, 69.65%, and 69.30%, respectively. In the doxorubicin-treated cells, the percentage of viable cells was 65.23%, 59.77%, 57.53%, and 54.65% at a concentration of 3.179, 6.358, 9.537, and 12.716 μM, respectively. There were no significant differences among the effects of the different drug concentrations ( P > 0.05), except in comparison with untreated cells ( P < 0.05). In contrast to the treatments with TTX or doxorubicin alone, their combinations at the 4 concentrations above inhibited cell proliferation significantly, as shown in Figure 4 . The percentage of viable cells at all concentrations was < 50%, ranging from 34.64% to 45.24%. The smallest number of viable cells was seen with the combination containing the highest concentrations of TTX and doxorubicin (7.974 and 12.716 μM, respectively). 
Furthermore, this combination suppressed Raji cell proliferation in a concentration-dependent manner. Combining TTX and doxorubicin enhanced their anticancer effect against Raji cells. In comparison with TTX or doxorubicin alone, this combination resulted in greater efficacy in inhibiting the growth of Raji cells ( Figure 5 ). This suggested a possible role of TTX in the doxorubicin-induced growth inhibition of B lymphoma cancer cells. Figure 5 shows the concentration-effect curves of TTX, doxorubicin, and the TTX-doxorubicin combination, and the combination index values of the TTX-doxorubicin combination obtained using CompuSyn calculations ( Table 3 ). These data indicated that all the tested concentrations had strong or very strong synergistic effects (combination index 0.1−0.3 and < 0.1, respectively). 8 Molecular docking of TTX on protein receptors involved in doxorubicin resistance Based on the synergistic effects of the TTX and doxorubicin combination against Raji cells, it is rational to explore the mechanism(s) underlying the effect of the TTX and doxorubicin combination in increasing cell sensitivity to doxorubicin or decreasing the risk of doxorubicin resistance progression. This was conducted using molecular docking with proteins involved in doxorubicin resistance. The ability of TTX to occupy the active site of these proteins would suggest its ability to decrease resistance to doxorubicin. The results of a molecular docking analysis of the native ligand and TTX against some protein receptors involved in doxorubicin resistance are shown in Table 4 . The ligand copies almost coincided with all the protein receptor binding sites, with all the RMSD values being less than 2 Å, and thus, they met the validity criteria for molecular docking. Among the protein receptors investigated, Raf-1 and c-JNK showed minimal free energy of binding and differences compared with that of the native ligand, suggesting these are the most likely targets mediating the mechanism of synergy. The native ligand's free energy of binding to Raf-1 was −83.22 kcal/mol, with 1 hydrogen bond with Cys 424 ( Figure 6 A). TTX occupied the binding site of Raf-1 with a free energy of binding of −79.37 kcal/mol and formed 4 hydrogen bonds with the amino acid residues Cys 424 , Lys 431 , Ser 427 , and Gly 426 ( Figure 6 B). Amino acid residues involved in the hydrogen bonds are shown in Table 5 . The native ligand's free energy of binding with c-JNK was −75.91 kcal/mol, with 3 hydrogen bonds formed with Met 111 (2 bonds) and Glu 109 ( Figure 7 A). The free energy of binding of TTX to the c-JNK receptor was −75.42 kcal/mol, and 4 hydrogen bonds were formed at Met 111 , Glu 109 , and Ser 34 (2 bonds) ( Figure 7 B). Amino acid residues involved in the hydrogen bonds are shown in Table 6 . The ability of TTX to occupy the active site of Raf-1 or c-JNK suggests the potential mechanism of action of its synergistic effect with doxorubicin. Discussion TTX is a xanthone derivative containing 3 hydroxyl groups and 3 chloro atoms. Xanthone compounds have been reported to show anticancer activity against several cancer cell lines, with IC 50 values varying from 1.81 μg/mL (P388 murine leukemia cells) to 91 μg/mL (HepG2 cells). 22 Hydroxyl substitutions in the xanthone scaffold are known to increase its cytotoxic activity. Increasing the number of hydroxyl groups does not linearly increase the cytotoxic activity of xanthones, suggesting that the position of the hydroxyl groups may also play a role in their inhibitory effect. 20
20 Modifying the hydroxyxanthone scaffold with halogen groups, especially chloro-substitutions, is predicted to increase cytotoxic activity. 21 The present study resulted in an important finding relevant to the potential of xanthone as an anticancer agent, especially in terms of a possible combination with an established anticancer agent. Doxorubicin was used as a positive control drug because it is a chemotherapeutic agent used to treat B-cell lymphoma 6 and Table 6 The atom components and amino acid residues involved in the hydrogen bonds formed between c-JNK protein and 1,3,6trihydroxy-4,5,7-trichloroxanthone (TTX). has a nucleus structure similar to that of xanthone compounds. 20 Doxorubicin is widely used as an anticancer agent to treat many cancer types, including B-cell lymphoma. This anticancer effect is due to its intercalation into DNA, leading to the inhibition of DNA synthesis and function. Doxorubicin is also a DNA topoisomerase II inhibitor by forming cleavable complexes with DNA and DNA topoisomerases II 5 and generating the formation of reactive oxygen species. 32 However, there is a significant concern with respect to resistance and adverse side effects. Combination anticancer therapy may decrease these risks due to nonoverlapping mechanisms of action and lower drug doses. 8 The cytotoxicity assay results in the present study show that TTX is a selective, moderately cytotoxic agent against Raji cells. Natural xanthone compounds normally exhibit in vitro anticancer activities through a variety of mechanisms in many cancer cell types, including the induction of apoptosis and cell cycle arrest, 33 stimulation of Bax proteins, and inhibition of Bcl-2 and NF κB, 34 as well as the inhibition of many cyclin proteins. 33 Apoptotic induction and cell cycle arrest by xanthone compounds are associated with an increase in reactive oxygen species levels, 35 which subsequently induce caspase activity, 36 leading to apoptosis. Increased Bax levels with the inhibition of Bcl-2 and NF κB cause the release of mitochondrial cytochrome c into the cytosol, 34 which leads to apoptosis. The induction of cell cycle arrest by natural xanthones may also be stimulated by a decrease in many cyclin proteins 33 and an increase in microtubule depolymerization, microtubule cytoskeleton disruption, and the phosphorylation of p38 and c-JNK. 37 Further investigations are required to ascertain whether the mechanisms of action of TTX in cancer cell lines are similar to those of natural xanthones. Previous studies have revealed that the cytotoxic activity of doxorubicin against many cancer cell lines varies among cancer cell types (between 0.190 and 0.892 μg/mL), and all are < 6.25 μg/mL (smaller than 11.50 μM). 38 , 39 In the present study, doxorubicin showed weaker cytotoxic activity against Raji cells (6.528 μg/mL or 25.432 μM) compared with those observed in previous studies ( < 11.5 μM), suggesting that Raji cells were less sensitive to doxorubicin in the present study. The study was continued using TTX-doxorubicin combinations to investigate xanthones' ability to increase cells' sensitivity to doxorubicin. Combinations of TTX and doxorubicin showed that TTX augmented the inhibitory effect of doxorubicin on the growth of Raji cells in vitro. The antiproliferative effect of combination treatment with TTX and doxorubicin depends on their individual concentrations, suggesting a synergistic effect. 
Synergy is inferred when the use of drug combinations at specific doses produces greater efficacy compared with the sum of the anticancer effects achieved by using the individual drugs at the same dose. 40 The combination index values computed using CompuSyn support this finding, in which the combination index values represent strong or very strong synergism (combination index, 0.1 −0.3 and < 0.1). CompuSyn was used because it generates higher-quality graphics ready for publication, provides more options and flexibility, and is able to handle data from large-scale drug combination studies. 9 Most studies of drug combinations are conducted in vitro because the experimental conditions can be easily defined, fixed, and standardized, and the concentrations can be maintained at relatively constant levels during experiments. It is also quick, accurate, and economical. 9 When investigating synergy in drug combination studies, the only prerequisite is the dose-effect curve for each drug alone, comprising the potency and shape of the dose-effect curve of each drug. Both of these parameters can be easily obtained from the median-effect equation using computer software such as Com-puSyn. 41 The molecular docking study's findings indicate that the synergistic effect of combining TTX and doxorubicin might be due to the ability of TTX to inhibit Raf-1 and c-JNK, the 2 proteins involved in doxorubicin resistance. The orientation of the TTX ligand to Raf-1 and c-JNK proteins indicate that TTX was capable of occupying the active site of Raf-1 and NIK receptors. The orientation of TTX in Pymol reveals that TTX is positioned deep in the binding pocket of Raf-1 and surrounded by the amino acid residues Ile 355 , Val 363 , Ala 373 , Trp 423 , and Phe 475 ( Figure 6 B). These hydrophobic residues are part of the active site of Raf-1 and play a role in its activity. 42 TTX docked at the active site of Raf-1 with minimal free energy of binding ( −79.37 kcal/mol), close to that of the native ligand ( −83.21 kcal/mol). The interaction between TTX and Raf-1 formed a hydrogen bond similar to that at Cys 424 . The hydrogen bond at Cys 424 residues plays a significant role in the inhibition of Raf-1 activity, 43 which suggests TTX has the potential to inhibit Raf-1 activity. Raf isoform is an enzyme serine/threonine kinase that acts as a transduction signal in a cascade initiated by growth factors or mitogens. Raf isoform activation is associated with the emergence of chemotherapy drug resistance in leukemia, and Raf-1 overexpression decreases B-cell lymphoma cells' sensitivity to doxorubicin. 10 Increased Raf-1 expression also plays a role in the low response of breast cancer cells to doxorubicin chemotherapy, along with increased Akt expression. However, under low Raf-1 expression conditions, the sensitivity of cancer cells to doxorubicin is higher. This indicates the role of Raf-1 in inhibiting the emergence of cancer cell resistance. 44 The effectiveness of using a combination of Raf inhibitors with doxorubicin is based on the role of the Raf/MEK/ERK pathway in regulating multidrug-resistance-1 gene promoter activity. The Raf/MEK/ERK pathway also appears to synergize with Bcl-2 in some hematologic malignancies. 45 The design and synthesis of pyrimidine derivative molecules as Pan-Raf inhibitors to overcome anticancer resistance has resulted in identifying several effective compounds. 
Among the structural modifications that increase the effectiveness of compounds as Pan-Raf inhibitors is the addition of halogen elements, especially the chloro (-Cl) group. Adding the -Cl group produces more effective compounds than other halogen groups, such as fluoride (-F). 46 This is in accordance with the structure of TTX, which has 3 -Cl groups next to 3 hydroxyl (-OH) groups as sites of substitution. As explained previously, adding the -OH group to the xanthone core increases the inhibitory activity of xanthone compounds on cancer cells. Incorporating the -Cl groups into the compound increased the effectiveness of TTX, especially in inhibiting Raf-1. The amino acid residues Met 111 , Glu 109 , and Ser 34 are part of the active site of the c-JNK-1 receptor and play roles in c-JNK activation. 47 Met 111 and Glu 109 are also important in binding to 3,6-dihydroxyflavones, which have been shown to be c-JNK inhibitors. 48 The TTX-c-JNK receptor complex forms 4 hydrogen bonds, more than the number of hydrogen bonds formed between the native ligand and the c-JNK receptor. These hydrogen bonds potentially explain why the free energy of binding of TTX is close to that of the native ligand. c-JNK activation is known to play a role in the cytotoxic mechanism of several anticancer agents. Under normal conditions, c-JNK can be activated by several stress stimuli. Research shows that the combined use of tetrathiomolybdate (a copper-chelating agent) at subcytotoxic levels with doxorubicin is effective in inhibiting the growth of doxorubicin-resistant endometrial cancer cell lines. Further studies have shown that combining these 2 agents increases cancer cells' sensitivity to doxorubicin, one route being the increase or activation of c-JNK. 49 c-JNK is an important determinant in assessing tumor sensitivity to anthracycline anticancer agents, such as doxorubicin. c-JNK activation stimulates Bax translocation, an apoptotic inducer, into mitochondria, which causes apoptosis. 50 The different numbers of hydrogen bonds formed in TTX-receptor binding and native ligand-receptor binding could explain the favorable free energy of binding of TTX, because the amino acid residues involved in the interaction determine the inhibitory activity of a protein receptor. 51 The hydrogen bond is the main interaction contributing to the free energy of binding of a drug-receptor complex. Furthermore, the hydrogen bonds in a ligand-receptor complex and their positions can predict the strength and catalytic activity of the complex. 52 A ligand-receptor complex's interactions become more stable as the free energy of binding becomes more negative. The proposed mechanism of how TTX increases the sensitivity of Raji cancer cells to doxorubicin is shown in Figure 8 . Our findings indicate that TTX increases cancer cells' sensitivity to doxorubicin by inhibiting Raf-1 and activating c-JNK. Raf-1, through its downstream targets, namely MEK and ERK, activates the transcription of multidrug-resistance-1 and Bcl-2 genes, which contribute to the resistance of cancer cells to doxorubicin; therefore, Raf-1 inhibition increases cancer cells' sensitivity to doxorubicin. c-JNK activation causes the translocation of Bax, a proapoptotic protein, from the cytoplasm to the mitochondria and inhibits Mcl-1, an antiapoptotic protein, thereby stimulating apoptosis. Inhibition of Mcl-1 also inhibits matrix metalloproteinases, so the invasion process can be inhibited.
Therefore, verifying the docking results required further investigation of downstream proteins. The synergistic effect of combining TTX and doxorubicin might also result from their different mechanisms of action in killing cancer cells (ie, their nonoverlapping mechanisms of action). 8 Doxorubicin is known as a topoisomerase II inhibitor and acts as an anticancer agent by inducing cell death through DNA damage. It is also capable of inducing DNA injury and lipid peroxidation through free radical formation. Doxorubicin delays G-actin polymerization and inhibits small actin filament elongation during polymerization. 40 Some natural xanthone compounds induce apoptosis and cell cycle arrest through a variety of mechanisms, such as microtubule depolymerization and microtubule cytoskeleton disruption. 37 These facts indicate that the mechanisms of action of doxorubicin and xanthone compounds may be interrelated and are not entirely distinctive. The actin filament could be a potential inhibitory target of the combined use of TTX and doxorubicin. Another possible effect of this synergism is the theory of doxorubicin toxicities, which is associated with the ability of xanthone compounds to inhibit the liver cytochrome (CYP) P450 enzyme. Xanthone-derivative compounds can inhibit the enzyme families CYP2C8, CYP2C9, CYP2B6, CYP2C19, CYP3A4, and CYP2D6. 53 The CYP3A4 enzyme family contributes to the largest composition of CYP enzymes, and plays an important role in doxorubicin metabolism. 32 The xanthone compounds are known to be CYP3A inhibitors, inhibiting doxorubicin metabolism and, consequently, increasing the concentration of doxorubicin in plasma. Increasing the doxorubicin concentration is believed to play a role in doxorubicin's cytotoxic activity against Raji cells, suggesting the synergistic effect of TTX and doxorubicin. Altogether, these results indicate TTX and doxorubicin have a synergistic effect on Raji cells, which is likely induced by the inhibitory effect of TTX on Raf-1, stimulating sensitivity to doxorubicin, which is, presumably, also due to the nonoverlapping mechanisms of action of TTX and doxorubicin. The potential use of a combination of TTX and doxorubicin may enhance anthracyclinebased therapy by overcoming anthracycline drug resistance and reducing their toxic side effects. The weakness of this study was the authors' inability to mask observers, but this matter was minimized by using an objective measuring tool (ie, an ELISA reader). Future Research The present study demonstrated the synergistic effects of TTX and doxorubicin on Raji cells with unknown mutant forms of the target proteins. However, the status of doxorubicin resistance in these cells was unknown. It would be very interesting to evalu-ate the cytotoxic effects of this combination on Raji cells with mutant target proteins or a demonstrated resistance to doxorubicin. Furthermore, it would be of interest to also test combinations of these drugs at concentrations below one-eighth IC 50 , which was the lowest concentration tested in the present study. Conclusions Combining TTX and doxorubicin was found to enhance the antiproliferative effect of doxorubicin, a standard chemotherapy drug used to treat lymphoid malignancies. The combination index values at the concentrations used ranged from 0.057 to 0.258, suggesting the TTX and doxorubicin combination had strong and very strong synergistic effects. These synergistic effects were seen at much lower doses than the IC 50 of doxorubicin used in monotherapy. 
The synergistic effects might be due to the inhibition of Raf-1 and c-JNK, as indicated by the molecular docking results. The free energies of binding and hydrogen bonding interactions between TTX and Raf-1 or c-JNK indicate that TTX docks and is positioned deeply in the binding sites of these proteins. In short, combined treatment with TTX and doxorubicin is stronger than the respective single-agent treatments against B-cell lymphoma (Raji cell line), indicating this combination is a potentially promising antilymphoma treatment suitable for further development.

Declaration of Competing Interest

The authors have indicated that they have no conflicts of interest regarding the content of this article.

Author Contributions

IM participated in the conception and design, the analysis and interpretation of the data, and drafting the article; EY participated in drafting the article and revising it critically for important intellectual content; SN participated in the analysis and interpretation of the data; JJ, SMH, and MM participated in revising the article critically for important intellectual content and approving the final version.
2020-02-06T09:10:28.638Z
2020-01-30T00:00:00.000
{ "year": 2020, "sha1": "d3b52c681eb4c459150e0160668c19bb95429304", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.curtheres.2020.100576", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5e4c3148008a9e75ee9132e490c30b15d0fb1a55", "s2fieldsofstudy": [ "Biology", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
67858298
pes2o/s2orc
v3-fos-license
Dosimetry of 177Lu-PSMA-617 after Mannitol Infusion and Glutamate Tablet Administration: Preliminary Results of EUDRACT/RSO 2016-002732-32 IRST Protocol Radio-ligand therapy (RLT) with177Lu-PSMA-617 is a promising option for patients with metastatic castration-resistant prostate-cancer (mCRPC). A prospective phase-II study (EUDRACT/RSO,2016-002732-32) on mCRPC is ongoing at IRST (Meldola, Italy). A total of 9 patients (median age: 68 y, range: 53–85) were enrolled for dosimetry evaluation of parotid glands (PGs), kidneys, red marrow (RM) and whole body (WB). Folic polyglutamate tablets were orally administered as PGs protectors and 500 mL of a 10% mannitol solution was intravenously infused to reduce kidney uptake. The whole body planar image (WBI) and blood samples were acquired at different times post infusion (1 h, 16–24 h, 36–48 h and 120 h). Dose calculation was performed with MIRD formalism (OLINDA/EXM software). The median effective half-life was 33.0 h (range: 25.6–60.7) for PGs, 31.4 h (12.2–80.6) for kidneys, 8.2 h (2.5–14.7) for RM and 40.1 h (31.6–79.7) for WB. The median doses were 0.48 mGy/MBq (range: 0.33–2.63) for PGs, 0.70 mGy/MBq (0.26–1.07) for kidneys, 0.044 mGy/MBq (0.023–0.067) for RM and 0.04 mGy/MBq (0.02–0.11) for WB. A comparison with previously published dosimetric data was performed and a significant difference was found for PGs while no significant difference was observed for the kidneys. For PGs, the possibility of reducing uptake by administering glutamate tablets during RLT seems feasible while further research is warranted for a more focused evaluation of the reduction in kidney uptake. Introduction The most frequent cancer in adult males is prostate cancer (PCa). Prognosis is dependent on the tumor stage and is poor in patients with metastatic disease (mPC) as it has a five-year survival of only 29% [1]. Limited treatment options are available for the subgroup of metastatic patients with castration-resistant disease (mCRPC). The currently available treatment options are taxane-based chemotherapies (e.g., docetaxel, cabazitaxel) and novel second-line hormone therapies (e.g., enzalutamide, abiterone), which are all associated with moderate survival and poor quality of life [2,3]. Radioligand therapy (RLT), which is based on a combination of a short-range energy radionuclide and a substrate with high specificity for cancer cell receptors, enables lesions to be treated with targeted radiation. 177 Lu is a short energy beta emitter with a maximum range in water of 1.9 mm and a half-life of 6.71 days. Prostate-specific membrane antigen (PSMA) is a protein overexpressed in 90-100% of local PCa lesions and metastatic disease (lymph node and bone). There is an even greater level of overexpression in high-grade mCRPC tumors [4]. In recent years, different RLT radiopharmaceuticals exploiting PSMA-targeting radioligands have been developed, among which the novel theragnostic 177 Lu-PSMA-617 [5] is considered to be one of the most promising, with high specificity for the tumor and moderate uptake in the whole body and organs at risk (OaR). Mild toxicity has mainly been observed in patients undergoing 177 Lu-PSMA, with around 10% experiencing adverse events [6]. However, the absorbed dose to the OaRs (kidneys, parotid glands and red marrow) [7][8][9] limits the maximum injectable activity, reducing the dose to the tumor and compromising therapeutic efficacy [6]. 
OaR drug protectors with high specificity for PSMA-ligand are thus needed to reduce off-target uptake in both parotid glands and kidneys. With regard to parotid glands, an external ice pack cooling strategy was used by van Kalmthout et al. with the aim of reducing hematic flow and therefore local uptake [10]. A reduction in 68 Ga-HBED-CC-PSMA-11 uptake in externally cooled salivary glands compared to non-cooled ones was observed in terms of maximum standard uptake values (SUV max ) in PET images (14.52% reduction, 11.07 ± 3.53 versus 12.95 ± 4.16; p-value = 0.02) [10]. Although the external cooling technique seems to be a promising tool to reduce PSMA uptake in PET imaging, there is still no evidence of a reduced dose in parotid glands after treatment with 177 Lu-PSMA-617. Similarly, in peptide radionuclide receptor therapy (PRRT) for neuroendocrine tumors, a mean reduction of 27% (range 9-53%) in kidney uptake was observed with the infusion of an amino acid solution [11,12]. The reduction further increased to 39% when the infusion was prolonged for 10 h and finally reached 65% when prolonged for 2 days after injection [13]. The same strategy was used for 177 Lu-DOTA-PSMA treatment [14]. Nevertheless, given the specific interaction of each ligand used as a vector for 177 Lu molecule, the kinetic uptake and process of fixation may vary among treatment methods [14]. Unlike PPRT with somatostatin analogs, an efficient pharmacological method for nephroprotection has still not yet been found for RLT with PSMA inhibitors [15]. In April 2017, a prospective protocol (EUDRACT/RSO 2016-002732-32) with 177 Lu-PSMA-617 therapy was activated at IRST (Meldola, Italy) for patients with mCRPC. The protocol includes a dosimetry objective to perform pharmacokinetic and absorbed dose evaluations to determine their biodistribution to OaRs. The treatment is delivered in association with drug protectors for parotid glands and kidneys with specificity for PSMA receptor. To preserve salivary glands, polyglutamate folates of plant origin are orally administrated to patients during treatment in the form of tablets. This protector is a substrate of PSMA and exploits the enzymatic activity of PSMA receptors and the release of glutamates. Consequently, the glutamates compete with the 177 Lu-PSMA-617 for the active sites of PSMA in competition. Moreover, ice packs are also positioned on the parotid glands. The protocol includes the infusion of a 10% mannitol solution as a kidney protector. This is because PSMA is mainly expressed in proximal tubules [16]. Acting mainly on this region as an osmotic diuretic drug, mannitol is a potential candidate for kidney protection [17]. The efficacy of mannitol has previously been demonstrated with 68 Ga-HBED-CC-PSMA-11 for imaging purposes, with a reduction in kidney uptake expressed in terms of SUV max (range: 7.4-24.3%) [18]. According to the promising results obtained for PET imaging, dosimetry studies in patients treated with 177 Lu-PSMA-617 are needed to confirm its role as a nephroprotector. The present work summarizes the preliminary results of the dosimetric evaluation of parotid glands and kidneys according to the EUDRACT/RSO, 2016-002732-32 protocol. For the sake of completeness, the preliminary dosimetric data for the whole body, liver and red marrow are also reported. Patients and Treatment Characteristics A total of 32 patients were enrolled in the protocol from April 2017 to March 2018. 
Dosimetry evaluation was performed on 9 patients (6 during the first cycle and 3 during the second). The main patient and treatment characteristics are summarized in Table 1. A variability of 10% in injected activity was accepted due to different measurement uncertainties. Regarding patient 8, based on clinical considerations (only 5 months short of his 75th birthday, excellent performance status, low tumor load and high weight of 94 kg), an exception to the administered activity was made in order to increase tumor uptake. This exception was communicated to the local Ethical Committee, underlining that the risk-benefit ratio was positive. All patients received renal and parotid gland protectors as per protocol indications. Figure 1 shows an example of ROI contouring on anterior whole body images (WBI). Only three acquisitions were performed for patient no. 7, while red marrow dosimetry was performed in 6 patients. For parotid glands, a wash-in and wash-out trend was observed for all patients and a bi-exponential curve fitting was used. A maximum uptake was observed around 16 h after infusion for all patients. A combined wash-in/wash-out phase (4 patients) and pure wash-outs (5 patients) were observed for kidneys. Bi- and mono-exponential fitting models were used. In the cases of combined wash-in and wash-out phases, a maximum uptake was observed 16-24 h post infusion. With regard to the whole body and blood sample data for red marrow dosimetry, a pure wash-out trend was observed, which was fitted with a bi-exponential curve. Blood activity had already decreased by one order of magnitude compared to the initial blood activity 16 h post infusion. The effective half-lives for all source organs are summarized in Table 2. A transient high uptake in intestinal loops was observed at different times between 16 h and 120 h after infusion, with an important overlap over the kidneys. During the contouring phase, the overlap of the high-uptake intestinal region with the kidneys was carefully avoided for each image. The counts of the partially-contoured kidney were then re-scaled to the whole kidney, assuming a uniform uptake between the overlapped and non-overlapped regions.

Dosimetry Results

The dosimetric results for our patient cohort are reported in Table 3 and Figure 2. The median values were 0.48 mGy/MBq (range 0.33-2.63) for parotid glands, 0.70 mGy/MBq (0.26-1.07) for kidneys, 0.13 mGy/MBq (0.05-0.53) for liver, 0.044 mGy/MBq (0.023-0.067) for red marrow and 0.04 mGy/MBq (0.02-0.11) for the whole body. Overall, homogeneity was observed among patients with the exception of the parotid glands. The outlier value observed for patient no. 2 caused a higher standard deviation and larger range.

Table 3. Results of the dosimetric study in terms of mGy/MBq (normalized to injected activity). The whole body and kidney models were used in OLINDA/EXM software, while the sphere model of unit density was used for parotid gland modeling. SD = standard deviation.
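To make the curve-fitting step concrete, the sketch below fits a mono-exponential wash-out to a planar time-activity curve and extracts the effective half-life and the time-integrated activity. The sample points are invented; the protocol itself used mono- or bi-exponential fits chosen per organ.

```python
import numpy as np
from scipy.optimize import curve_fit

# invented %IA(t) samples for one source organ: hours post infusion vs. % injected activity
t = np.array([1.0, 20.0, 40.0, 120.0])
ia = np.array([5.0, 3.4, 2.2, 0.4])

def mono_exp(t, a, lam):
    return a * np.exp(-lam * t)

popt, _ = curve_fit(mono_exp, t, ia, p0=(5.0, 0.02))
a, lam = popt
t_eff = np.log(2) / lam   # effective half-life [h]
tia = a / lam             # time-integrated activity, area under the fitted curve [%IA * h]
print(f"effective half-life = {t_eff:.1f} h, time-integrated activity = {tia:.0f} %IA*h")
```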
Comparison with Previous Studies Detailed dosimetric data were available for kidneys and parotid glands from the studies by Delker [8] and Kabasakal [9], while median dose values were reported in Baum's study [8]. Therefore, a graphical comparison (based on median and standard deviation data) was used for an overall comparison of the results (Figure 3), while a statistical comparison was performed for kidneys and parotid glands (Figure 4). Comparison with Previous Studies Detailed dosimetric data were available for kidneys and parotid glands from the studies by Delker [8] and Kabasakal [9], while median dose values were reported in Baum's study [8]. Therefore, a graphical comparison (based on median and standard deviation data) was used for an overall comparison of the results (Figure 3), while a statistical comparison was performed for kidneys and parotid glands (Figure 4). Discussion RLT with 177 Lu-PSMA-617 has shown encouraging results for the treatment of mCRPC as it has high uptake in disseminated lesions [6,20]. A reduction in disease burden is obtained after repeated treatment cycles (sometimes even after the first one) in the majority of patients [5,21]. However, high PSMA uptake in specific OaRs, such as salivary glands and kidneys, may impair treatment efficacy by limiting maximum injectable activity [4]. Although severe salivary gland toxicities are now seldom reported for 177 Lu-PSMA-617 treatment, such effects may occur very frequently when more advanced treatment techniques based on an α-particle (e.g., 225 Ac-PSMA-617) are used. Kratochwil et al. described severe xerostomia patients treated as a last curative treatment option with 225 Ac-PSMA-617 [22]. In more advanced treatment protocols where α-emitter PSMA-based drugs are used, parotid glands are the main OaRs and the administration of highly-specific organ protectors is an essential safety precaution [23,24]. With regard to renal toxicity, limited follow-up information is available for patients undergoing a last-chance treatment option due to their severely compromised baseline status. This may mask the onset of renal toxicity starting around 2 years after treatment [25]. Thus, when considering whether to start RLT earlier in an attempt to increase tumor control and overall survival, the potential for salivary gland and renal toxicity should be considered and attention should be paid to the OaR absorbed dose. In our protocol, we focused on organ protectors with specificity for PSMA receptors and aimed to reduce the 177 Lu-PSMA-617 uptake in these organs. The drug protection used for salivary glands (folic polyglutamate tablets; candies) is a substrate for PSMA receptors and the underlying strategy was to keep the PSMA enzyme active sites busy and thus, reduce the available binding sites for 177 Lu-PSMA-617 fixation after intravenous infusion. At the renal level, mannitol acts as an osmotic agent in the proximal tubule and thus, the fixation of 177 Lu-PSMA-617 at the proximal tubules may be decreased, reducing the kidney uptake. However, in this preliminary study, no significant difference was observed in terms of kidney absorbed dose, with a median value of 0.70 mGy/MBq (range = 0.26-1.07). At Johns Hopkins School of Medicine in Baltimore, Nedelcovych's group developed an OaR drug protector that is specific for PSMA called JHU-2545 [26], with a chemical and biological action that is similar to that of the drugs used in our study. 
A comparison of single-patient pre-therapy using 68 Ga-PSMA-11 with and without JHU-2545 showed a SUV max reduction of 41.8% in parotid glands and 31.4% in kidneys. In 2 patients, a 15-min pre-therapy drug administration revealed a reduced uptake of 26% for parotid glands (0. 38 . Although the number of patients was too small to evaluate the drug as an organ protector, Nedelcovych's results are consistent with our findings on parotid gland sparing and clearly demonstrate that highly specific PSMA organ protectors could be highly advantageous. As they are different from JHU-2545 that was developed in a laboratory and has yet to be validated, the drugs used in our study are commercially available, (relatively) inexpensive and ready for clinical use. Our study has a number of limitations. The patient cohort would need to be increased to provide more robust data and further confirm the role of organ protectors of the administered drugs. The generally poor performance status of patients enrolled in the treatment protocol also affected the number who were able to participate in the dosimetric protocol. However, this is also true for the other published studies, most of which carried out a dosimetric analysis on a patient cohort that is comparable with ours. Another factor affecting our analysis was organ overlap, such as high intestinal uptake or lesion overlap, which may compromise the obtained results. Although laxatives were administered to the majority of our patients before and shortly after treatment infusion (7/9), transient high intestinal uptake was still observed in post-infusion images. Laxative administration schemes (i.e., extension of drug administration 2-3 h after infusion) could be investigated to further reduce intestine uptake. The implementation of fully 3D dosimetry or hybrid techniques (i.e., combination of whole planar dosimetry and one 3D image for space distribution uptake evaluation) could improve the accuracy of absorbed dose evaluation for different organs, especially kidneys and target structures. After this, kidney absorbed dose evaluation could be more accurate using a hybrid approach. Despite the above limitations, our results are nevertheless encouraging. With regard to parotid glands, we only administered 2 tablets per treatment cycle. The optimum number of tablets and timing of administration requires a little 'fine-tuning' to improve efficacy. Given that the maximum uptake value was observed around 16 h post infusion, the further administration of candies before the maximum uptake time could reduce overall uptake in theory. Radiopharmaceutical Production National good preparation standards (NBP MN [30]) for pharmaceutical products were followed for 177 Lu-PSMA-617 production, as required by the European Association of Nuclear Medicine Treatment Procedure The study design included 2 patient cohorts. Patients who refused or were unfit to undergo treatment with docetaxel received 5.5 GBq per cycle of 177 Lu-PSMA-617, while patients previously treated with docetaxel (at least 3 cycles) received lower radiopharmaceutical levels ranging from 3.7 GBq to 4.2 GBq per cycle and 3.7-4.2 GBq of 177 Lu-PSMA-617 were also administered to patients > 75 years old, regardless of previous docetaxel administration. Patients underwent 4 cycles, which were repeated at intervals of 8-12 weeks. 
Up to 2 additional cycles were administered if there was no toxicity or evidence of disease progression and if, in the opinion of the investigator, further treatment could clinically benefit the patient. The radiopharmaceutical was slowly infused intravenously over 15-30 min in a dedicated room using a dedicated pump system (patent US 7,842,023 B2).

Renal and Salivary Gland Protection

To reduce salivary gland uptake, 2 folic polyglutamate tablets were orally administered to patients, combined with an ice pack placed at each side of the neck 30 min before and during infusion. To preserve kidney functionality, 500 mL of a 10% mannitol solution was infused before and after the 177 Lu-PSMA-617 injection: 250 mL 30 min before therapy and 250 mL one hour after therapy [18,31].

Image Acquisition and Analysis

The gamma emission of 177 Lu (113 and 208 keV, relative abundance of 6% and 11%, respectively) enabled us to monitor the radiopharmaceutical biodistribution during the therapeutic phase. Dosimetry evaluation was performed during the first or second treatment cycle. Planar whole body images (WBI) were acquired at 30-60 min, 16-24 h, 36-48 h and 120 h post infusion (Figure 1). Imaging was performed on a Discovery NM/CT 670 scanner (International General Electric, General Electric Medical System, Haifa, Israel). The dual-head gamma camera was equipped with 3/8"-thick NaI(Tl) crystals. Anterior and posterior views were acquired with a 7 cm/min scan speed, an energy window of 20% applied around the dominant photon peak at 208 keV and a medium-energy high resolution (MEHR) collimator. Two additional scatter energy windows at 175 keV (10% width) and 238 keV (10% width) were used to apply the triple-energy-window scatter correction to both posterior and anterior images. The first WBI was performed before bladder voiding because the total counts in this image were intended as a surrogate of the effective injected activity and were used to calculate the time-activity curves. The WBI was a 256 × 1024 pixel matrix with pixel dimensions of 2.21 × 2.21 mm. Body contouring (to maintain a fixed detector-to-patient distance during image acquisition) was not used between scans. For attenuation correction, a pre-infusion WBI transmission scan was performed in anterior projection with a sealed flood source ( 57 Co) providing transmission and blank images, using low-energy high resolution (LEHR) collimators. ROIs for different organs (i.e., kidneys, abdomen, parotid glands, liver) were identified on both transmission and blank images. Furthermore, the water-equivalent thickness was evaluated as t_eq = ln(I_blank / I_transmission) / µ(57Co), where I_transmission and I_blank were the average counts on the transmission and blank images, respectively, and µ(57Co) was the attenuation coefficient for 57 Co emissions. For activity quantification, ROIs were contoured on the first image for the whole body, kidneys, parotid glands and liver. Background regions for each ROI on both anterior and posterior images were also drawn close to the same body region, avoiding overlap with other structures experiencing uptake (i.e., bladder, intestine). Sequential images were registered in the cranio-caudal direction and ROIs were propagated to all images. If needed, manual adjustments were performed to reduce organ mismatch among sequential images. In the event of an overlap between the kidney and high intestinal uptake, the kidney contour was corrected on the single image to eliminate the intestinal uptake [32].
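A minimal sketch of the triple-energy-window scatter correction described above: scatter inside the 208 keV photopeak window is estimated from the two flanking windows, scaled by window width, and subtracted. The window widths are derived from the stated percentages; the count values are invented.

```python
def tew_primary_counts(c_peak, c_low, c_high,
                       w_peak=0.20 * 208, w_low=0.10 * 175, w_high=0.10 * 238):
    """Triple-energy-window scatter correction: estimate scatter in the photopeak
    from the two flanking windows (counts per keV, averaged) and subtract it."""
    scatter = (c_low / w_low + c_high / w_high) * w_peak / 2.0
    return max(c_peak - scatter, 0.0)

# illustrative counts in the 208 keV photopeak and in the 175 / 238 keV scatter windows
print(tew_primary_counts(c_peak=1.0e5, c_low=1.2e4, c_high=8.0e3))
```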
The source organ activity at a particular time-point was estimated by applying the conjugate projection method [33], in which the geometric mean of the anterior and posterior counts is combined with attenuation and decay corrections; here I_A and I_P were the mean counts per second [cps] in the ROI in the anterior and posterior views, respectively; µ(177Lu) was the attenuation correction factor for 177 Lu; τ was the mean 177 Lu half-life; and ∆t was the time difference between infusion and the WBI acquisition. For paired organs (kidneys and parotid glands), the mean value was calculated between the left and right organs and a single time-activity curve was obtained. After this, biological time-activity curves %IA(t) were calculated by normalizing the A_ROI values at each time-point to the total cps in the whole body ROI drawn in the first WBI image (A_WBI), which was considered as a reference for the total effective injected activity.

Blood Sample Acquisition and Analysis

Blood samples (2-cc volume) were collected before each WBI acquisition. The samples were analyzed with a High-Purity Germanium radiation detector (HPGe, ORTEC, Ametek, TN, USA) (24 h acquisition). The measured activity was corrected for decay and biological time-activity curves were calculated for the blood samples.

Dosimetric Analysis

The dose evaluation was performed according to the MIRD formalism [33][34][35] with OLINDA/EXM software (v 1.1, 2201 West End Ave, Nashville, TN, USA) [36]. Biological time-activity curves were fitted with mono- or bi-exponential curves, depending on the observed kinetic characteristics. Adult male OLINDA/EXM phantom organ models were used for the kidneys, liver and whole body. The sphere model was used for the parotid glands, assuming unit density composition (i.e., water) [37]. A WBI CT scan was used to evaluate the single organ weight for each patient and for phantom organ scaling (contouring performed with MIMvista, v 6.6.5, MIM Software, 25800 Science Park Drive, Suite 180, Cleveland, OH, USA). For red marrow dosimetry, a fast equilibrium in terms of uptake between blood and the RM extracellular fluid was assumed [38]. A bi-exponential curve model was used for the wash-out fitting. The total blood volume [cc] was evaluated from the single-patient height h [cm] and weight w [g] using the formula reported in [39]. After this, the blood mass was calculated with a mean blood density of 1.06 g/cc [39]. Finally, the red marrow mass was evaluated with a 0.224 blood/red marrow mass ratio for the standard adult male [36]. The red marrow model of the OLINDA/EXM software was used for the absorbed dose calculation. The remainder of the body was also considered. The dosimetric data reported in [8] and [9] were considered for the kidneys, parotid glands and whole body dosimetry comparison. Data comparison was performed in terms of both median difference (Mann-Whitney-Wilcoxon) and data distribution (Kolmogorov-Smirnov). R software (v 3.5.2, https://www.r-project.org/) was used for the statistical analyses and box plots were used for graphical data comparison.

Conclusions

Our results show that the treatment protocol is safe and that the organ protectors used could help to reduce out-of-target uptake. Further optimization of the drug quantity and administration scheme is needed to enhance organ preservation. The proposed drug protectors are safe, commercially available, inexpensive, well tolerated, non-invasive and easy to administer in clinical practice. Our data represent a promising starting point for reducing side-effects in view of more effective therapies, such as alpha-emitter-based radioligands.

Conflicts of Interest: The authors declare no conflicts of interest.
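The exact conjugate-view equation could not be recovered from the extracted text, so the sketch below shows a generic geometric-mean implementation with attenuation and physical-decay corrections, using the quantities defined above (I_A, I_P, µ, ∆t). The calibration factor, thickness and count rates are illustrative assumptions, not the protocol's actual equation or data.

```python
import numpy as np

LU177_HALF_LIFE_H = 6.71 * 24.0  # physical half-life of 177Lu [h]

def conjugate_view_activity(i_ant, i_post, mu, t_eq, dt_h, cal_factor=1.0):
    """Generic geometric-mean (conjugate-view) activity estimate for a planar ROI.

    i_ant, i_post : background-subtracted mean count rates [cps], anterior/posterior
    mu            : effective linear attenuation coefficient for 177Lu [1/cm]
    t_eq          : water-equivalent patient thickness at the ROI [cm]
    dt_h          : time between infusion and acquisition [h], used to back-correct decay
    cal_factor    : camera sensitivity [cps/MBq]; keep 1.0 for relative (%IA) curves
    """
    geometric_mean = np.sqrt(i_ant * i_post)
    attenuation = np.exp(mu * t_eq / 2.0)                        # mid-depth attenuation correction
    decay = np.exp(np.log(2) * dt_h / LU177_HALF_LIFE_H)         # remove physical decay
    return geometric_mean * attenuation * decay / cal_factor

print(conjugate_view_activity(i_ant=850.0, i_post=620.0, mu=0.11, t_eq=20.0, dt_h=24.0))
```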
Appendix A

The manual radiosynthesis of 177 Lu-DOTA-PSMA-617 can be divided into five steps:

• Dose Calibrator Response Verification: Prior to each production, the background and instrument response are measured using a certified source of 137 Cs (NuklearMedizin, Dresden, Germany) with known activity; the percentage of deviation between the measured value and the expected value is measured and recorded. The deviation must never be greater than 5%.

• Measure of the incoming 177 Lu activity: the arrival vial of 177 LuCl 3 is measured in the dose calibrator, after which a mixture is prepared containing a sufficient quantity of DOTA-PSMA-617 (calculated with a ratio of 0.9 µg/mCi) and a sufficient volume of the buffer solution to maintain the reaction pH at 5.0.
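The two quality-control calculations in these steps are simple arithmetic; a small sketch follows, assuming 1 mCi = 37 MBq (the 5.5 GBq example activity is the per-cycle activity quoted in the treatment procedure above).

```python
def peptide_mass_ug(activity_mbq, ratio_ug_per_mci=0.9):
    """DOTA-PSMA-617 mass needed for labeling, using the 0.9 ug/mCi ratio (1 mCi = 37 MBq)."""
    return ratio_ug_per_mci * activity_mbq / 37.0

def calibrator_ok(measured_mbq, certified_mbq, tolerance=0.05):
    """Dose-calibrator response check: relative deviation from the 137Cs certificate <= 5%."""
    return abs(measured_mbq - certified_mbq) / certified_mbq <= tolerance

print(round(peptide_mass_ug(5500.0), 1))   # ~133.8 ug for a 5.5 GBq preparation
print(calibrator_ok(101.2, 100.0))         # True (1.2% deviation)
```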
2019-02-11T12:21:22.514Z
2019-02-01T00:00:00.000
{ "year": 2019, "sha1": "a9071b2123da88562bf9cc0a8e1444f2620b58d3", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/24/3/621/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a9071b2123da88562bf9cc0a8e1444f2620b58d3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119599235
pes2o/s2orc
v3-fos-license
The circular law for random regular digraphs Let $\log^Cn\le d\le n/2$ for a sufficiently large constant $C>0$ and let $A_n$ denote the adjacency matrix of a uniform random $d$-regular directed graph on $n$ vertices. We prove that as $n$ tends to infinity, the empirical spectral distribution of $A_n$, suitably rescaled, is governed by the Circular Law. A key step is to obtain quantitative lower tail bounds for the smallest singular value of additive perturbations of $A_n$.

Convergence of ESDs and the Circular Law.

For an n × n matrix M with complex entries and eigenvalues λ 1 , . . . , λ n ∈ C (counted with multiplicity and labeled in some arbitrary fashion), denote the empirical spectral distribution (ESD) $\mu_M := \frac{1}{n}\sum_{i=1}^{n}\delta_{\lambda_i}$. We give the space of probability measures on C the vague topology. Thus, a sequence of random probability measures µ n over C converges to another measure µ in probability if for every f ∈ C c (C) and every ε > 0, $\mathbb{P}\big(\big|\int_{\mathbb{C}} f\,d\mu_n - \int_{\mathbb{C}} f\,d\mu\big| > \varepsilon\big) \to 0$, and µ n converges to µ almost surely if for every f ∈ C c (C), $\int_{\mathbb{C}} f\,d\mu_n \to \int_{\mathbb{C}} f\,d\mu$ almost surely. We say that µ n converges to µ in expectation if $\mathbb{E}\int_{\mathbb{C}} f\,d\mu_n \to \mathbb{E}\int_{\mathbb{C}} f\,d\mu$ for every f ∈ C c (C). A well-studied class of non-Hermitian random matrices is the iid matrix X n , which has iid centered entries of unit variance. A seminal result in the theory of non-Hermitian random matrices is the Circular Law for iid matrices, which was established in various forms over several decades. We denote by µ circ the normalized Lebesgue measure on the unit disk in C.

Theorem 1.1 (Strong Circular Law for iid matrices [TV10]). Fix a complex random variable ξ with zero mean and unit variance, and for each n ≥ 1 form an n × n random matrix X n = (ξ (n) ij ) with entries that are iid copies of ξ. Then the rescaled ESDs $\mu_{\frac{1}{\sqrt{n}}X_n}$ converge to µ circ almost surely.

The above strong form of the Circular Law is due to Tao and Vu, and is the culmination of the work of many authors. Previous works had obtained the Circular Law under additional assumptions on the atom variable ξ, or with convergence in probability or expectation rather than almost-sure convergence (the above result is called a "strong law" in analogy with the strong law of large numbers). The earliest result was by Ginibre, who established the Circular Law (with convergence in expectation) for the Ginibre ensemble, where the atom variable ξ is a standard complex Gaussian [Gin65] (see also [Meh67]); the harder case of real Gaussian entries was handled by Edelman [Ede97]. These results relied on explicit formulas available for Gaussian ensembles for the joint density of eigenvalues. Following influential work of Girko [Gir84], Bai was the first to rigorously establish the Circular Law for a general class of atom variables, assuming that ξ has bounded density and finite sixth moment [Bai97]. Following breakthrough work of Rudelson [Rud08], Tao-Vu [TV09] and Rudelson-Vershynin [RV08] on the smallest singular value for random matrices with independent entries, the assumptions on the atom variable were progressively relaxed in works of Götze-Tikhomirov [GT10], Pan-Zhou [PZ10], and Tao-Vu [TV08,TV10]. Theorem 1.1 is an instance of the universality phenomenon in random matrix theory, exhibiting an asymptotic behavior of the spectrum which is insensitive to all but a few details of the atom variable (in this case the first two moments).
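Theorem 1.1 is easy to probe numerically: the sketch below samples a Ginibre-type matrix, rescales it by $1/\sqrt{n}$, and checks that the eigenvalues fill the unit disk uniformly (under µ circ the fraction of eigenvalues in |z| ≤ r should approach r²). The matrix size and the Gaussian atom variable are choices made purely for illustration.

```python
import numpy as np

def esd_radial_check(n=1000, seed=0):
    """Sample an n x n matrix with iid standard Gaussian entries, rescale by 1/sqrt(n),
    and report how the empirical spectral distribution fills the unit disk."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, n)) / np.sqrt(n)
    eig = np.linalg.eigvals(X)
    inside = np.mean(np.abs(eig) <= 1.0)                 # should tend to 1
    half_disk = np.mean(np.abs(eig) <= 1 / np.sqrt(2))   # should tend to 1/2 (area law r^2)
    return inside, half_disk

print(esd_radial_check())
```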
In fact, it is a consequence of a more general "universality principle" established in [TV10], which states that if X n , X n are iid matrices generated from atom variables ξ and ξ , respectively, and M n is a deterministic matrix satisfying 1 n M n HS = O(1) (where M HS is the Hilbert-Schmidt norm), then µ Mn+Xn − µ Mn+X n → 0 in probability. (Almost sure convergence is also obtained under an additional technical assumption that we do not state here.) The Circular Law for general iid matrices can then be deduced from the universality principle (taking M n = 0) and the Circular Law for the Ginibre ensemble. The perturbations M n can also give rise to limiting measures different from µ circ . Since the work of Tao and Vu the Circular Law has been strengthened and extended in several directions. In a sequence of works, Bourgade, Yau and Yin [BYY14a,BYY14b,Yin14] have established the local Circular Law, showing that µ circ provides a good estimate for the number of eigenvalues of 1 √ n X n in a fixed small ball B(z, r) down to the optimal scale r ∼ n −1/2+ε for arbitrary fixed ε > 0, assuming an exponential decay condition for the tails of the atom variable ξ. A weaker local law was obtained by Tao and Vu in [TV15a] as part of their proof of universality for local eigenvalue statistics. We will informally say that a random matrix Y n (that is, a sequence of n × n random matrices (Y n ) n≥1 ) lies in the Circular Law universality class if, after rescaling, the ESDs µ Yn converge in probability to µ circ . Theorem 1.1 shows this class contains all iid matrices X n , but in recent years various works have shown it to be somewhat larger. In [GT10,TV08,Woo12,BR] it has been shown that the Circular Law is robust under sparsification, i.e. that matrices of the form Y n = A n • X n lie in the Circular Law universality class, where • denotes the Hadamard (entrywise) product, X n is an iid matrix, and A n is a 0-1 matrix of iid Bernoulli(p) variables, independent of X n , with p = o(n) and pn growing at some speed. In particular, Wood [Woo12] showed the Circular Law holds with convergence in probability if pn = n ε for any fixed ε ∈ (0, 1), while the recent work [BR] allows pn = ω(log 2 n) under higher moment assumptions on the atom variable. There has also been extensive work on non-Hermitian matrices with dependent entries. In [BCC12], Bordenave, Caputo and Chafaï showed the Circular Law class includes random Markov matrices obtained by normalizing the entries of a matrix with iid nonnegative entries of finite variance by the row sums. Nguyen and Vu obtained the Circular Law for random ±1 matrices with prescribed row sums |s| ≤ (1 − ε)n for some fixed ε ∈ (0, 1] [NV13]. Later, Nguyen proved the Circular Law for random doubly stochastic matrices (drawn uniformly from the Birkhoff polytope), which do not enjoy independence between rows or columns [Ngu14]. In [AC15], Adamczak and Chafaï showed that random real matrices having unconditional log-concave distribution obey the Circular Law, extending Edelman's result for real Gaussian matrices. Adamczak, Chafaï and Wolff proved the Circular Law for random matrices with exchangeable entries having finite moments of order 20 + ε (if not for the moment assumption this result would generalize the Circular Law for iid matrices) [ACW16]. In [Cooa], the author obtained the Circular Law for adjacency matrices of dense random regular digraphs with random edge weights, i.e. 
matrices of the form A n • X n with X n an iid matrix and A n a 0-1 matrix constrained to have all rows and columns sum to pn for some fixed p ∈ (0, 1). Such matrices A n , which are the focus of the present work, can be seen as a discrete version of the doubly stochastic matrices considered by Nguyen. The second moment hypothesis in Theorem 1.1 is sharp. Indeed, in [BCC11], Bordenave, Caputo and Chafaï established a different limiting law for matrices with iid entries lying in the domain of attraction of an α-stable distribution for α ∈ (0, 2). In [BCCP16] the same authors together with Piras have considered random stochastic matrices obtained by normalizing the entries of an iid heavy-tailed matrix with α ∈ (0, 1) by the row sums, proving convergence of the ESDs to deterministic measure supported on a compact disk (while they do not obtain an expression for the limiting density, simulations indicate that it is not the uniform measure on the disk). Finally, a natural question is whether the Circular Law extends to matrices with independent but non-identically distributed entries having finite second moment. If the entries all have unit variance then one can replace the assumption of identical distribution with some more general technical hypotheses; see [TV08], [BS10,p. 428]. Recently, the work [CHNR] studied the asymptotic ESDs for matrices of the form Y n = 1 √ n A n •X n , with X n an iid matrix having entries with finite (4 + ε)-th moment, and A n = (σ ij ) a fixed "profile" of standard deviations σ ij ∈ [0, 1]. In particular, it was shown that the Circular Law holds if the standard deviations σ ij are uniformly bounded and the variance profile ( 1 n σ 2 ij ) is doubly stochastic. Examples were also provided of variance profiles leading to limiting measures different from µ circ , though they are always compactly supported and rotationally symmetric. Another recent work [AEK16] has obtained a local law (analogous to the above-mentioned local Circular Law of Bourgade-Yau-Yin) for Y n as above, but under stronger assumptions: that the entries have smooth distribution and the variances are uniformly bounded above and below by positive constants. The Circular Law and its extensions have been applied to the stability analysis of complex dynamical systems, in theoretical ecology [May72] and neuroscience [SCS88]. In the latter work, an iid matrix was used to model the synaptic matrix for a large neural network. There has since been significant effort to extend the results of [SCS88] to various random matrix models incorporating additional structural features of natural neural networks such as the brain, using both rigorous and non-rigorous methods [RA06,ASS15,ARS15,AFM15]. However, a key feature that has not been covered by these works is sparsity. While the aforementioned works such as [Woo12,BR] can be used to extend the analysis of [SCS88] to sparse iid matrices, it would also be interesting to treat networks where each node has a specified valence. However, such constraints destroy the independence between entries, making the analysis of such models challenging. In the present work we make a first step in this direction by extending the Circular Law to adjacency matrices of random regular digraphs. For integers n ≥ 1 and d ∈ [n] denote A n,d = A ∈ {0, 1} n×n : which is the set of 0-1 adjacency matrices for d-regular directed graphs (digraphs) on n vertices, allowing self-loops. (Here and throughout, 1 = 1 n denotes the column vector of all 1s.) 
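For experimentation, a matrix with all row and column sums equal to d (self-loops allowed) can be built by superposing d random permutation matrices, re-drawing any permutation that would repeat an edge. This is only an illustrative sampler for d much smaller than n; it is not claimed to produce an exactly uniform element of A n,d, which is what the theorem below requires.

```python
import numpy as np

def regular_digraph_adjacency(n, d, rng=None, max_tries=5000):
    """0-1 matrix with every row and column sum equal to d (self-loops allowed).

    Built by superposing d random permutation matrices and re-drawing any permutation
    that would duplicate an existing edge.  Works well for d much smaller than n, and
    is NOT an exactly uniform sample from the set of d-regular digraph adjacency matrices."""
    rng = rng or np.random.default_rng()
    A = np.zeros((n, n), dtype=np.int8)
    rows = np.arange(n)
    for _ in range(d):
        for _attempt in range(max_tries):
            perm = rng.permutation(n)
            if not A[rows, perm].any():        # no edge would be repeated
                A[rows, perm] = 1
                break
        else:
            raise RuntimeError("could not place a collision-free permutation")
    return A

A = regular_digraph_adjacency(300, 6)
assert (A.sum(axis=0) == 6).all() and (A.sum(axis=1) == 6).all()
```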
Given A ∈ A n,d , we denote the normalized matrix (1.4) The main result of this paper is the following: Theorem 1.2 (Circular Law for random regular digraphs). Assume d = d(n) satisfies min(d, n − d) ≥ log C 0 n for a sufficiently large constant C 0 > 0. For each n ≥ 1 let A n be a uniform random element of A n,d . Then µĀ n → µ circ in probability. Remark 1.3. The proof shows we can take C 0 = 96, but we have not tried to optimize this constant. Various parts of the argument work for smaller degree, and for the interested reader we state the required range for d in the statements of the lemmas, sometimes indicating how the range might be improved by longer arguments. Remark 1.4. The methods in this paper can also be used to prove the Circular Law for random regular digraphs with random edge weights, extending the result of [Cooa] to the sparse setting (in fact this is somewhat easier than Theorem 1.2). Specifically, letting X n be an iid matrix as in Theorem 1.1 with entries having finite fourth moment, and putting Y n = 1 √ d A n • X n , it can be shown that µ Yn → µ circ in probability if min(d, n − d) ≥ log C 0 n. We do not include the proof in order to keep the article of reasonable length. Remark 1.5 (Reduction to d ≤ n/2). From (1.3) we have that 1 is an eigenvector of A n with eigenvalue d (this is the Perron-Frobenius eigenvalue). A routine calculation shows that if λ ∈ C \ {d − n} is an eigenvalue of A n with nonzero eigenvector v ∈ C n , then putting A n := 1 1 T −A n we have A n w = −λw, where Note that w(λ, v) = 0 only if λ = d and v ∈ 1 . Thus, each eigenvalue of −A n (counting multiplicity) is an eigenvalue of A n , with at most one exception, and conversely. In particular, µ A n and the reflected ESD µ −Ān = µĀ n (−·) differ in total variation distance by at most 2/n. Since µ circ is invariant under the refection λ → −λ, in the proof of Theorem 1.2 we may and will assume that d ≤ n/2, as we can replace A n with A n if necessary. We conjecture that Theorem 1.2 still holds if min(d, n − d) tends to infinity with n at any speed. For fixed degree we have the following well-known conjecture. with respect to Lebesgue measure. For some numerical evidence supporting this conjecture the reader is referred to [Cooa]. A proof of Conjecture 1.6 would require a significantly different approach than the one we take to prove Theorem 1.2. For instance, to prove Theorem 1.2 we will need to understand the asymptotic empirical distribution of singular values µ √ (Ān−z) * (Ān−z) for arbitrary fixed z ∈ C (see Section 7 for additional explanation). In the present work we do this by comparing with a Gaussian matrix, for which the asymptotics are wellunderstood. However, whenĀ n is replaced by A (d) n the conjectured asymptotic singular value distributions are different, and in particular we cannot compare with a Gaussian matrix or any other well-understood model. We mention that in the recent work [BCZ] with Basak and Zeitouni we established the Circular Law for the permutation model S d) under the assumption that d grows poly-logarithmically as in the present work. Parts of the proof in [BCZ] follow a significantly different approach from the present paper. 
In particular, in the present work the singular value distributions ofĀ n − z for z ∈ C are analyzed by first replacing A n with an iid Bernoulli matrix B n using a lower bound for the number of 0-1 matrices with constrained row and column sums, and then replacing the Bernoulli matrix with a Gaussian matrix using a Lindeberg exchange-type argument (see Section 9). Such a comparison is unavailable for the sum of permutation matrices. In [BCZ] we derive and analyze the Schwinger-Dyson loop equations for the Stieltjes transform of empirical singular value distributions, implementing a discrete analogue of techniques that were used in [GKZ11,BD13] for the unitary group. 1.2. The smallest singular value. A key challenge for proving convergence of the ESDs of non-normal random matrices is to deal with possible spectral instability for such matrices. This can be quantified in terms of the pseudospectrum. Recall that the ε-pseudospectrum of a matrix M n ∈ M n (C) is the set where Λ(M n ) is the set of eigenvalues of M n . (Here and throughout, · denote the operator norm when applied to matrices.) If the pseudospectrum of a matrix is much larger than the spectrum itself, then the ESD can vary wildly under perturbations of small norm. For a random matrix M n the pseudospectrum is a random subset of the complex plane, and we will need it to be small in the sense that for a.e. z ∈ C (1.6) for some ε ≥ exp −n o(1) . (Here the rate of convergence in the o(1) terms may depend on z.) Establishing (1.6) is a key step in all known approaches to proving the Circular Law for a given random matrix ensemble (M n ) n≥1 (except in the case of integrable models such as the Ginibre ensemble). See the survey [BC12] for additional discussion of the pseudospectrum and its role in proving the Circular Law for random matrices. Denote the singular values of a matrix M by s 1 (M ) ≥ · · · ≥ s n (M ) ≥ 0. We can alternatively express our goal (1.6) as showing that for a.e. z ∈ C, (1.7) Establishing (1.7) is an extension of the invertibility problem, which is to show P(s n (M n ) = 0) = P(det(M n ) = 0) = o(1). (1.8) The problem of proving (1.8), along with quantifying the rate of convergence, has received much attention for the case of random matrices with discrete distribution, such as matrices with iid uniform ±1 entries -see [Kom67,KKS95,TV07,BVW10]. The problem of proving (1.7) with M n = X n as in Theorem 1.1 (i.e. having iid entries with zero mean and unit variance) was addressed in works of Rudelson [Rud08], Tao-Vu [TV09,TV08] and Rudelson-Vershynin [RV08]. In particular, through new advances in the Littlewood-Offord theory from additive combinatorics, for certain random discrete matrices [TV09] established bounds of the form P(s n (X n ) ≤ n −A ) = O(n −B ) for arbitrary B > 0 and A = O B (1). This result was extended in [TV08] to allow general entry distributions with finite second moment and deterministic perturbations (such as scalar perturbations as in (1.7)). The work [RV08] obtained the optimal dependence A = B + 1/2 under a stronger subgaussian hypothesis for the entry distributions. Recently, this optimal bound was obtained for centered real iid matrices only assuming finite second moment [RT]. The invertibility problem for adjacency matrices of random regular digraphs A n as in Theorem 1.2 was first addressed in [Coo15], where it was shown that if min(d, n − d) ≥ Figure 1. 
Two possible configurations of directed edges passing from a pair of vertices {i_1, i_2} to a pair of vertices {j_1, j_2} in a digraph, where a dashed arrow indicates the absence of a directed edge. Note that there may be edges passing from {j_1, j_2} to {i_1, i_2}, or between i_1 and i_2 or j_1 and j_2. A simple switching replaces the configuration on the left with the one on the right and vice versa; for any other configuration the simple switching leaves the graph unchanged.

C log^2 n, then (1.9) holds for some absolute constants C, c > 0. The main difficulties in proving (1.9) over the case of, say, iid ±1 matrices are the lack of independence among entries and the sparsity of the matrix. The author introduced an approach based on a combination of strong graph regularity properties and the method of switchings. In its most basic form, one performs a simple switching on a regular digraph by replacing directed edges i_1 → j_1, i_2 → j_2 with edges i_1 → j_2, i_2 → j_1 when this is allowed (i.e. when this does not create parallel edges); see Figure 1. The switching preserves the degrees of all vertices. One can create coupled pairs (A_n, A_n') of random elements of A_{n,d} by first drawing A_n uniformly at random, and then applying several switchings at different 2 × 2 submatrices of A_n independently at random to form A_n'. Taking care to do this in a way that A_n' is also uniformly distributed, one can condition on A_n (perhaps restricted to a "good" event on which A_n enjoys certain graph regularity properties) and proceed using only the randomness of the independent switchings. In particular one gains access to tools of Littlewood-Offord theory. See [Coo15] for additional motivation of the switchings method for the invertibility problem. The switchings method has long been a popular tool for analyzing random regular graphs; for additional background see the survey [Wor99]. It has also recently been applied in the random matrix setting in [BKY17, BHKY15, BHY16] to prove local laws for the empirical spectral distribution and universality of local spectral statistics for undirected random regular graphs. Following the work [Coo15], it was shown in [LLT+17] that for C ≤ d ≤ cn/log^2 n we have for some absolute constants C, c > 0. Together with (1.9) this shows that A_n is invertible with probability 1 − o(1) as soon as min(d, n − d) = ω(1). The work [LLT+17] also follows the approach of using random switchings and graph regularity properties. The key new ingredients are finer regularity properties that apply for smaller degree, as well as an efficient averaging argument to improve the probability bound. (We make use of a variant of this averaging argument in the proof of Theorem 1.7 below.) A natural conjecture is that P(s_n(A_n) = 0) = o(1) for all 3 ≤ d ≤ n − 3, which mirrors a conjecture by Vu for adjacency matrices of undirected random regular graphs [Vu08]. In the present work we extend the approaches of [Coo15, LLT+17] to obtain lower tail bounds on the smallest singular value of Ā_n − z for arbitrary scalar shifts z ∈ C. It turns out that we can handle a more general class of perturbations; to describe them we need some notation. First, note that 1 is a left and right eigenvector of A_n with eigenvalue d. By a standard argument using the Cauchy-Schwarz inequality we have ‖A_n‖ ≤ d, so that in fact ‖A_n‖ = d. (1.11) We will be able to handle perturbations Z ∈ M_n(C) that also preserve the span of 1, and which have polynomially-bounded norm on 1^⊥.
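For concreteness, here is a minimal sketch of the simple switching of Figure 1 acting directly on adjacency matrices, together with a check that all row and column sums (equivalently, all in- and out-degrees) are preserved. The helper name and the circulant starting matrix are illustrative choices, not notation from the paper.

```python
import numpy as np

def simple_switching(A, i1, i2, j1, j2):
    """Apply the simple switching of Figure 1 in place on the 2x2 submatrix A[{i1,i2},{j1,j2}]:
    edges i1->j1, i2->j2 are exchanged with i1->j2, i2->j1 when the submatrix is one of the
    two allowed configurations; any other configuration is left unchanged."""
    rows, cols = [i1, i2], [j1, j2]
    config_a = np.eye(2, dtype=A.dtype)   # edges i1->j1, i2->j2 present, cross edges absent
    config_b = config_a[::-1]             # edges i1->j2, i2->j1 present
    block = A[np.ix_(rows, cols)]
    if (block == config_a).all():
        A[np.ix_(rows, cols)] = config_b
    elif (block == config_b).all():
        A[np.ix_(rows, cols)] = config_a

# sanity check on a small d-regular digraph (circulant construction)
n, d = 12, 4
A = sum(np.roll(np.eye(n, dtype=int), k, axis=1) for k in range(1, d + 1))
row_sums, col_sums = A.sum(axis=1).copy(), A.sum(axis=0).copy()

rng = np.random.default_rng(1)
for _ in range(500):
    i1, i2, j1, j2 = rng.choice(n, size=4, replace=False)
    simple_switching(A, i1, i2, j1, j2)

assert (A.sum(axis=1) == row_sums).all() and (A.sum(axis=0) == col_sums).all()
print("degrees preserved after 500 random simple switchings")
```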
For instance, Z could be the adjacency matrix of another regular digraph, either fixed or random and independent of A n . For a subspace W ⊂ C n we write (1.12) In the statement and proof of the following result we regard n as a sufficiently large fixed integer and give quantitative bounds. As such, we will generally suppress the subscript n from our matrices. Theorem 1.7 (The smallest singular value). Let 1 ≤ d ≤ n/2 and let A be a uniform random element of A n,d . Fix γ ≥ 1 and let Z be a deterministic n × n matrix with Z 1 ⊥ ≤ n γ and such that Z 1 = ζ 1, Z * 1 = ζ 1 for some ζ ∈ C with |d + ζ| ≥ n −10 . There exists Γ = O(γ log d n) such that for some absolute constant C 1 > 0. Remark 1.8. We note that (1.13) only gives a nontrivial bound if d ≥ C log 2C 1 n for some C = C(γ) sufficiently large. The proof shows we can take C 1 = 11/2, though there is certainly room for improvement. For instance, this exponent could be lowered using some refined graph expansion and discrepancy lemmas -see Remark 2.4. For our purposes of proving Theorem 1.2 we only need the following consequence (recall the notation (1.4)): Corollary 1.9. Assume log C 1 n ≤ d ≤ n/2 for a sufficiently large constant C 1 > 0, and let A be a uniform random element of A n,d . Fix z ∈ C. There exists Γ = o(log n) such that P s n Ā n − z ≤ n −Γ = o z (1). (1.14) Proof. We may assume n is sufficiently large depending on z. Up to perturbing Γ by a constant factor, it suffices to verify the matrix Z = −z d(1 − d/n) I n satisfies the conditions of Theorem 1.7. The condition Z 1 ⊥ ≤ n γ holds with γ = 0.51, say, when n is sufficiently large. Taking ζ = z d(1 − d/n) we have and the condition on ζ easily holds when n is sufficiently large. The result now follows from Theorem 1.7 and taking C 1 = 2C 1 + 1, say. Recently (a few months after this paper was first posted to arXiv) [LLT + ] obtained an improvement of Theorem 1.7 (for the case of scalar shifts), showing that for some constants C, c > 0 and any fixed z ∈ C with |z| ≤ d/6, if C ≤ d ≤ cn/(log n)(log log n) then s n (A n − z) ≥ n −6 with probability 1 − O(log 2 d/ √ d). 1.3. Overview of the paper. The first part of the paper (Sections 2-6) is devoted to the proof of Theorem 1.7. In Section 2 we recall some concentration inequalities for random regular digraphs from [Coo16] and use these to show that a random element of A n,d satisfies certain graph regularity properties with high probability. In Section 3 we describe the general approach to Theorem 1.7, which proceeds by partitioning the sphere S n−1 0 into sets whose elements have a similar level of "structure" (in a certain precise sense that we do not describe here), and separately controlling inf v∈S (A+Z)v for each part S of the partition. We then establish bounds on covering numbers for sets of highly structured vectors, and prove anti-concentration properties for unstructured vectors. In Section 4 we establish uniform control from below on (A + Z)v for "highly structured" vectors v, and in Section 5 we boost this to control for less structured vectors by an iterative argument. In Section 6 we obtain control over the remaining unstructured vectors. We mention that in each of Sections 4, 5 and 6 we make use of a different graph regularity property from Section 2, and all three sections use coupling arguments based on switchings. In the remainder of the paper we prove Theorem 1.2. 
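The following rough numerical companion to Theorem 1.7 and Corollary 1.9 estimates s_n(Ā − z) for a few shifts z. Two conveniences should be kept in mind: the sampler is a finite run of random simple switchings started from a circulant digraph, so it is only heuristically close to uniform on A_{n,d}, and the normalization Ā = A/√(d(1 − d/n)) is assumed. The observed smallest singular values should sit far above small polynomial thresholds such as n^{-2}, in line with the corollary.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_regular_digraph(n, d, n_switch=60000):
    """Heuristic sampler for A_{n,d}: circulant start followed by many random simple
    switchings (Figure 1). Only approximately uniform; for illustration."""
    A = sum(np.roll(np.eye(n, dtype=int), k, axis=1) for k in range(1, d + 1))
    for _ in range(n_switch):
        i1, i2 = rng.choice(n, size=2, replace=False)
        j1 = rng.choice(np.flatnonzero(A[i1]))   # an out-neighbour of i1
        j2 = rng.choice(np.flatnonzero(A[i2]))   # an out-neighbour of i2
        if j1 != j2 and A[i1, j2] == 0 and A[i2, j1] == 0:
            # switch i1->j1, i2->j2  to  i1->j2, i2->j1 (degrees are preserved)
            A[i1, j1], A[i1, j2], A[i2, j2], A[i2, j1] = 0, 1, 0, 1
    return A

n, d = 400, 30
A = sample_regular_digraph(n, d)
A_bar = A / np.sqrt(d * (1.0 - d / n))          # assumed normalization (1.4)

for z in (0.0, 0.5, 0.9 + 0.3j):
    s_min = np.linalg.svd(A_bar - z * np.eye(n), compute_uv=False)[-1]
    print(f"z = {z}:  s_n(A_bar - z) = {s_min:.4f}   (n^-2 = {n**-2:.1e})")
```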
In Section 7 we recall the approach to proving the Circular Law via the logarithmic potential, and give a highlevel proof of Theorem 1.2 using Theorem 1.7 and two propositions concerning the empirical singular value distributions for certain perturbations ofĀ n . In Sections 8 and 9 we prove these propositions by a two-step comparison approach, first comparing A n with a matrix B n having iid Bernoulli entries, and then comparing B n (suitably centered and rescaled) with an iid Gaussian matrix G n , for which the desired results are known. The comparison of singular value distributions for A n with those of B n is accomplished using a conditioning argument of Tran, Vu and Wang from [TVW13], together with a new estimate for the probability that B n lies in A n,d , proved in Appendix B. For the comparison between B n and G n we use the Lindeberg replacement strategy, through an invariance principle of Chatterjee (Theorem 9.4). In the appendix we prove Lemma 8.4, which gives a near-optimal estimate on local density of small singular values for perturbed Gaussian matrices. 1.4. Notation. C, c, c , c 0 , etc. denote unspecified constants whose value may change from line to line, understood to be absolute unless otherwise stated. f = O(g), f g and g f are synonymous and mean that |f | ≤ Cg for some absolute constant C < ∞. f g means f g and g f . f = o(g) and g = ω(f ) mean that f /g → 0 as n → ∞. We indicate dependence of implied constants with subscripts, e.g. f α g; by f = o α (g) we mean f /g → 0 when α is fixed and n → ∞, where the rate of convergence may depend on α. M n (C) denotes the set of n × n matrices with complex entries. For M = (m ij ) ∈ M n (C) it will sometimes be convenient to denote the (i, j)-th entry by M (i, j) = m ij . For i 1 , . . . i k , j 1 , . . . , j l ∈ [n] we write (1.15) If one of the sequences (i 1 , . . . , i k ), (j 1 , . . . , j l ) is replaced by an unordered set J ⊂ [n] then we interpret J as a sequence with the natural ordering inherited from [n]. We also write M (i 1 ,i 2 ) for the (n − 2) × n matrix obtained by removing rows i 1 and i 2 (assuming i 1 = i 2 ). We label the singular values of M in non-increasing order: In addition to our notation (1.1) for the empirical spectral distribution, we denote the empirical singular value distribution by (1.16) · denotes the Euclidean norm when applied to vectors and the n 2 → n 2 operator norm when applied to elements of M n (C). Other norms are indicated with subscripts; in particular, M HS denotes the Hilbert-Schmidt (or Frobenius) norm of a matrix M . We denote the (Euclidean) closed unit ball in C n by B n and the unit sphere by S n−1 . We write C J for the subspace of vectors supported on J ⊂ [n], and write B J , S J for the unit ball and sphere in this subspace. Given v ∈ C n and J ⊂ [n], v J denotes the projection of v to C J . 1 = 1 n denotes the n-dimensional vector with all components equal to one, and consequently 1 J denotes the vector with jth component equal to 1 for j ∈ J and 0 otherwise. We will frequently consider the unit sphere in 1 ⊥ , which we denote S n−1 0 := S n−1 ∩ 1 ⊥ = u ∈ C n : u = 1, u, 1 = 0 . (1.17) It will be conceptually helpful to associate a 0-1 n × n matrix A = (a ij ) to a directed graph Γ A = ([n], E A ), which we do in the natural way, i.e. E A = {(i, j) ∈ [n] 2 : a ij = 1}. Given a vertex i ∈ [n] we denote its set of out-neighbors by 1.5. Acknowledgement. The author thanks Terry Tao for his encouragement and support. 
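The two-step comparison strategy just outlined (regular digraph to iid Bernoulli to Gaussian) can be previewed numerically. In the sketch below, the regular digraph is drawn with a heuristic switching-based sampler (not exactly uniform), and the centering and rescaling (entry − p)/√(p(1 − p)) is assumed to match the definitions of Y_n and X_n given later in Section 8; at moderate sizes the singular value spectra of the three shifted, rescaled models are already close.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_regular_digraph(n, d, n_switch=60000):
    """Heuristic sampler for A_{n,d} (circulant start + random simple switchings); not exact."""
    A = sum(np.roll(np.eye(n, dtype=int), k, axis=1) for k in range(1, d + 1))
    for _ in range(n_switch):
        i1, i2 = rng.choice(n, size=2, replace=False)
        j1, j2 = rng.choice(np.flatnonzero(A[i1])), rng.choice(np.flatnonzero(A[i2]))
        if j1 != j2 and A[i1, j2] == 0 and A[i2, j1] == 0:
            A[i1, j1], A[i1, j2], A[i2, j2], A[i2, j1] = 0, 1, 0, 1
    return A

n, d, z = 400, 20, 0.5
p = d / n

A = sample_regular_digraph(n, d)
B = (rng.random((n, n)) < p).astype(float)      # iid Bernoulli(p) entries
G = rng.standard_normal((n, n))                 # iid standard Gaussian entries

Y = (A - p) / np.sqrt(p * (1 - p))              # centered/rescaled regular digraph (cf. Y_n)
X = (B - p) / np.sqrt(p * (1 - p))              # centered/rescaled Bernoulli matrix (cf. X_n)

def shifted_singular_values(M):
    return np.linalg.svd(M / np.sqrt(n) - z * np.eye(n), compute_uv=False)

for name, M in (("regular digraph Y", Y), ("Bernoulli X", X), ("Gaussian G", G)):
    q = np.quantile(shifted_singular_values(M), [0.1, 0.5, 0.9])
    print(f"{name:18s} singular value quantiles at z={z}: {np.round(q, 3)}")
```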
Graph regularity properties Recall the graph theoretic notation from Section 1.4. In this section we define three collections of "good" subsets of A n,d , namely A codeg (i 1 , i 2 ) , A disc (n 0 , δ) , and A exp (κ) whose elements are associated to digraphs enjoying certain graph regularity properties. We will show that for appropriate values of the parameters, each of these sets constitutes most of A n,d . The key tools to establish this are sharp tail bounds for codegrees and edge densities for random regular digraphs that were proved in [Coo16]. For A ∈ A n,d , the number of common out-neighbors |N A (i 1 ) ∩ N A (i 2 )| of a pair of vertices i 1 , i 2 ∈ [n] in the associated digraph is called the out-codegree of i 1 , i 2 . By a routine calculation, for a fixed pair {i 1 , i 2 } ⊂ [n] and A ∈ A n,d drawn uniformly at random we have In the proof of Theorem 1.7 we will want to restrict attention to those A ∈ A n,d whose out-codegree at a fixed pair of vertices is not too large. For distinct i 1 , i 2 ∈ [n] and K > 0 define the set of elements of A n,d having good codegrees (2.2) In particular, taking K to be a sufficiently large constant multiple of n/d, we have for some constant c > 0. For fixed sets I, J ⊂ [n] and A ∈ A n,d drawn uniformly at random, the expected number of directed edges passing from I to J in the digraph associated to A is In the random graphs literature, a graph for which the edge densities e A (I, J)/|I||J| (for all sufficiently large sets I, J) do not deviate too much from the overall density d/n = e A ([n], [n])/n 2 is said to satisfy a discrepancy property. For n 0 ∈ [n] and δ > 0 we define the set of elements of A n,d enjoying a discrepancy property: The following is an easy corollary of the main result in [Coo16]. Lemma 2.2 (Discrepancy property). Assume 1 ≤ d ≤ n/2. Let δ ∈ (0, 1). If (C/δ)nd −1/2 ≤ n 0 ≤ n for a sufficiently large constant C > 0, then for a uniform random element A ∈ A n,d we have Proof. By [Coo16, Theorem 1.5], there is a set G 0 ⊂ A n,d with P(A ∈ G 0 ) = 1 − n O(1) exp(−cδ min(d, δn)) such that for any fixed I, J ⊂ [n], Let n 0 be as in the statement of the lemma. Applying the union bound over choices of I, J with |I|, |J| > n 0 , where in the last bound we took the constant C in the lower bound on n 0 sufficiently large. Thus, as desired. Finally, we will need to show that most elements of A n,d satisfy a certain neighborhood expansion property. By the d-regularity constraint, for any I ⊂ [n] and A ⊂ A n,d we have It turns out that for random regular digraphs and |I| < n/d, this upper bound is not far from the truth. For κ ∈ (0, 1) we define the set of elements of A n,d enjoying the "good expansion property": (2.6) Lemma 2.3 (Expansion property). There are absolute constants C, c > 0 such that if C log n ≤ d ≤ n/2, then Remark 2.4. While the above will be sufficient for our purposes, we note that Litvak et al. obtained a stronger "log-free" version in [LLT + 17] (see Theorem 2.2 there), showing that with high probability one has |N A T (J)| ≥ κd|J| with κ arbitrarily close to one, uniformly over |J| ≤ cκn/d (in fact they allow κ → 1 at a certain rate with d). Moreover, their result holds for all d at least a sufficiently large constant. Using their result in place of Lemma 2.3 would lower the power of log by 2 in our assumption on d in Theorem 1.7. Proof. This is essentially a restatement of [Coo15, Corollary 3.7], taking the parameter γ there to be κ log n. 
While it was assumed there that d = ω(log n), the proof actually only assumes d ≥ C log n for a sufficiently large constant C > 0. By definition, for A ∈ A exp (κ) and a sufficiently small set J ⊂ [n], the number of rows of A whose support overlaps with J is within a factor κ of its maximum value d|J|. However, we will also need lower bounds on the number of rows whose overlap with J has cardinality within a specified range. For J ⊂ [n] and r ≥ 1 write . Then the following hold: (1) For all J ⊂ [n] such that |J| ≤ n/2κd, for all r ≥ 1. We turn to (2). Let J and r be as in the statement of the lemma. Let m ∈ ( n 4κd , n 2κd ] be an integer, put k = |J|/m , and let J 1 , . . . , J k be pairwise disjoint subsets of J of size m. Denote J = m l=1 J l , and note that |J | ≥ |J|/2. By our restriction to A exp (κ), for each l ∈ [m] we have Let B be the adjacency matrix of the bipartite graph with vertex parts U = N A T (J ), V = [k] which puts an edge at (i, l) when |N A (i) ∩ J l | ≥ 1. From (2.9) we have that the number of edges e B (U, V ) in this graph is bounded below by κdmk. On the other hand, Combining these bounds on e B (U, V ) and rearranging, we have where in the second inequality we applied our assumption on r. Now since km = |J | ≥ |J|/2 we conclude Partitioning the sphere In this section we begin the proof of Theorem 1.7. Throughout this section A denotes a uniform random element of A n,d . Structured and unstructured vectors. A well-known approach to bounding the probability a random matrix M is singular is to classify potential null vectors v = 0 as "structured" or "unstructured", and use different arguments to bound This approach goes back to the work of Komlós on iid Bernoulli matrices X = (ξ ij ), where an integer vector v is said to be structured if it is sparse, i.e. | supp(v)| ≤ n/10, say. The key observation is that unstructured vectors v enjoy good anti-concentration for random walks On the other hand, while structured vectors only have crude anti-concentration properties, the set of structured vectors has low entropy (i.e. cardinality), which allows one to obtain uniform control via the union bound. Later, in [TV07] Tao and Vu used more complicated classifications of potential null vectors v ∈ Z n \{0} by relating concentration properties of the random walks (3.1) to arithmetic structure in the components of v using tools from additive combinatorics (such as Freiman's theorem). This approach carries over to the problem of bounding the smallest singular value. From the variational formula (3. 2) The analogue of Komlós's argument for the invertibility problem was accomplished by Rudelson for a general class of iid matrices in [Rud08] (see also [RV08]). There a unit vector is said to be structured if it is close to a sparse vector. Specifically, for m ∈ [n] we denote the set of m-sparse vectors : v j = 0}, and for m ∈ [n], ρ ∈ (0, 1) we define the set of compressible vectors where we recall that B n denotes the closed unit ball in C n , so that E + ρB n denotes the closed ρ-neighborhood of a set E ⊂ C n . In [Rud08], (3.2) is applied with N = 2, taking S 1 to be the set of compressible vectors (with an appropriate choice of paramers) and S 2 the complementary set of "incompressible" vectors. As in the invertibility problem, incompressible vectors enjoy good anti-concentration properties for the associated random walks (3.1), while the set of compressible vectors has low metric entropy, which allows one to obtain uniform control using nets and the union bound. 
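Before these notions of structure are put to work, it is easy to check numerically the graph regularity statistics of Section 2 on a heuristically sampled regular digraph: codegrees against roughly d²/n, edge densities of large sets against the global density d/n, and neighbourhood expansion of small sets against d|J|. The sampler below (circulant start plus random simple switchings) is only approximately uniform, and all helper names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_regular_digraph(n, d, n_switch=60000):
    """Heuristic (approximately uniform) sampler for A_{n,d} via random simple switchings."""
    A = sum(np.roll(np.eye(n, dtype=int), k, axis=1) for k in range(1, d + 1))
    for _ in range(n_switch):
        i1, i2 = rng.choice(n, size=2, replace=False)
        j1 = rng.choice(np.flatnonzero(A[i1]))
        j2 = rng.choice(np.flatnonzero(A[i2]))
        if j1 != j2 and A[i1, j2] == 0 and A[i2, j1] == 0:
            A[i1, j1], A[i1, j2], A[i2, j2], A[i2, j1] = 0, 1, 0, 1
    return A

n, d = 400, 20
A = sample_regular_digraph(n, d)

# out-codegrees |N_A(i1) ∩ N_A(i2)| of random pairs, to be compared with roughly d^2/n
pairs = [rng.choice(n, size=2, replace=False) for _ in range(300)]
codeg = [int(A[i] @ A[j]) for i, j in pairs]
print(f"codegree: mean {np.mean(codeg):.2f}, max {max(codeg)}   (d^2/n = {d * d / n:.2f})")

# discrepancy: edge density between large random sets I, J versus the global density d/n
I = rng.choice(n, size=n // 3, replace=False)
J = rng.choice(n, size=n // 3, replace=False)
print(f"e_A(I,J)/(|I||J|) = {A[np.ix_(I, J)].mean():.4f}   (d/n = {d / n:.4f})")

# expansion: for a small set J, the number of rows whose support meets J, versus d|J|
J_small = rng.choice(n, size=n // (4 * d), replace=False)
print(f"rows meeting J: {(A[:, J_small].any(axis=1)).sum()}   (d|J| = {d * len(J_small)})")
```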
Later works of Tao-Vu [TV09, TV08] and Rudelson-Vershynin [RV08] used larger partitions based on arithmetic structural properties. In the present work, the distribution of A n calls for a different notion of structure than those discussed above. In the work [Coo15] on the invertibility problem for A n an integer vector was said to be structured if it had a large level set. Thus, for controlling the smallest singular value, we consider a unit vector u ∈ S n−1 to be structured if it is close to a vector with a large level set, and call such vectors "flat". Alternatively, non-flat vectors are those u ∈ S n−1 for which the empirical measure 1 n n i=1 δ u i of components enjoys some anti-concentration estimate; this perspective will be expanded upon in Section 3.3. Formally, for m ∈ [n] and ρ ∈ (0, 1), define the set of (m, ρ)-flat vectors We denote the mean-zero flat vectors by For non-integral x ≥ 0 we will sometimes abuse notation and write Sparse(x), Flat(x, ρ), etc. to mean Sparse( x ), Flat( x , ρ). Now we state our main proposition controlling the event that (A + Z)u is small for some structured vector u. For K ≥ 1 we denote the boundedness event (recall the notation (1.12)). From (1.11), our assumptions on Z and the triangle inequality, B(K) holds with probability one for K √ d = d + n γ . (Sharper bounds than this hold with high probability, but are not necessary for our purposes.) For much of the proof we will leave the parameter K generic. For K ≥ 1 and m ∈ [n], ρ ∈ (0, 1), Proposition 3.1 (Control on flat vectors). Assume log 4 n ≤ d ≤ n/2 and 1 ≤ K ≤ n γ 0 for some fixed γ 0 ≥ 1/2. There exists Γ 0 γ 0 log d n such that for all n sufficiently large depending on γ 0 , where c > 0 is an absolute constant. We briefly outline some of the ideas of the proof. As in prior works controlling invertibility over structured vectors, we will reduce to controlling the size of (A + Z)u for u ranging over a ρ-net for the set Flat 0 (m, ρ) -that is, a finite set Σ 0 (m, ρ) ⊂ Flat 0 (m, ρ) whose ρ-neighborhood contains Flat 0 (m, ρ). The restriction to B(K) allows us to argue Applying the union bound, In the next subsection we construct such ρ-nets of controlled cardinality. Our task is then to obtain a strong enough lower tail bound for (A + Z)u , holding uniformly over fixed u ∈ Σ 0 (m, ρ), to beat the cardinality of the net. It turns out that with no additional information on u we can only beat the cardinality of the net when m is fairly small (of size m d/ log n). However, once we have shown E K (m 0 , ρ 0 ) is small for some m 0 , ρ 0 , then when trying to control vectors in a net Σ 0 (m, ρ) for Flat 0 (m, ρ) with m > m 0 , we can restrict the net to the complement of Flat 0 (m 0 , ρ 0 ): This additional information that u / ∈ Flat 0 (m 0 , ρ 0 ) allows us to get an improved lower tail bound for (A + Z)u , which beats the cardinality of the net |Σ 1 (m, ρ)| for m (d/ log O(1) n)m 0 . Assuming d grows at an appropriate poly-logarithmic rate, we can iterate this argument along a sequence (m k , ρ k ) until m k n/ log 3 n as in (3.9). The values ρ k will degrade by a polynomial factor at each step, but this is acceptable for our purposes. We mention that this iterative approach is similar to arguments from [RZ16,Coob], and to a lesser extent [TV09,RV08]. 
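To make (m, ρ)-flatness concrete, the sketch below measures how far a unit vector is from Sparse(m) + C·1 by scanning a coarse grid of candidate levels λ; this gives a heuristic upper bound on the true distance, restricted to real vectors for simplicity, and the function name and quantile grid are illustrative choices. A vector with a large level set comes out nearly flat, while a generic (e.g. Gaussian) unit vector does not.

```python
import numpy as np

def flatness(u, m, candidate_lambdas=None):
    """Heuristic distance from u to Sparse(m) + C*1 (small value  <=>  u is (m, rho)-flat).
    For each candidate level lambda, the best m-sparse approximation of u - lambda*1 keeps
    its m largest entries; we return the smallest residual norm over the candidates."""
    n = len(u)
    if candidate_lambdas is None:
        candidate_lambdas = np.quantile(u, np.linspace(0, 1, 21))  # coarse grid of levels
    best = np.inf
    for lam in candidate_lambdas:
        residual = np.sort(np.abs(u - lam))[: n - m]   # entries not absorbed by the sparse part
        best = min(best, np.linalg.norm(residual))
    return best

rng = np.random.default_rng(5)
n, m = 1000, 50

# a vector with a large level set: most entries equal, a few spikes
u_flat = np.full(n, 1.0)
u_flat[:m] += rng.standard_normal(m)
u_flat /= np.linalg.norm(u_flat)

# a "spread" vector with no large level set
u_spread = rng.standard_normal(n)
u_spread /= np.linalg.norm(u_spread)

print("flatness of level-set vector :", flatness(u_flat, m))
print("flatness of Gaussian vector  :", flatness(u_spread, m))
```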
However, those works concerned matrices with independent entries; consequently, our proofs of lower tail bounds for (A + Z)u with fixed u are substantially different, making use of coupled pairs (A, A) formed by applying random switching operations, and relying heavily on the graph regularity properties from Section 2. We prove Proposition 3.1 in Sections 4 and 5. In the remainder of this section we develop some useful lemmas concerning flat and non-flat vectors. 3.2. Metric entropy of flat vectors. In this section we bound the metric entropy of the sets Flat 0 (m, ρ) -that is, we find efficient coverings of these sets by Euclidean balls. The following is a standard fact on the existence of such coverings of controlled cardinality, and is established by a well-known volumetric argument; see for instance [MS86], [Coob, Lemma 2.2]. Using this we can show: Lemma 3.3 (Metric entropy for flat vectors). Let 1 ≤ m ≤ n/10 and ρ ∈ (0, 1). There Proof. Note we may assume that ρ is smaller than any fixed constant. Consider an arbitrary element u ∈ Flat 0 (m, ρ). We may write We begin by crudely bounding v and |λ|. Suppose v is supported on J ⊂ [n], |J| = m. By the triangle inequality, v + λ 1 ≤ 1 + ρ, and by Pythagoras's theorem (3.10) Ignoring the first term gives by our bound on m and assuming ρ ≤ 1/2. Ignoring the second term in (3.10) and applying the triangle inequality and (3.11) yields (3.12) Denote v := 1 n n j=1 v j and similarly for w. Since u ∈ S n−1 0 , we have Rearranging and applying Cauchy-Schwarz gives Thus we may alternatively express In the remainder of the proof we will obtain a 3ρ-net for Flat 0 (m, ρ) from a ρ-net for the projection to 1 ⊥ of Sparse(m) ∩ 2B n . Then we will show that we can rescale the resulting vectors to lie in Flat 0 (m, ρ) to obtain a 6ρ-net. The result will then follow by replacing ρ with ρ/6. Anti-concentration properties of non-flat vectors. In this section we observe that the property of a unit vector u ∈ S n−1 not lying in Flat(m, ρ) implies an anticoncentration property for the empirical measure 1 n n i=1 δ u i of its components. We then show that we can find a large "bimodal" piece of the empirical measure for such a vector; specifically, we can find two well-separated subsets of the plane that each capture a large portion of the total measure. For v ∈ C n , λ ∈ C and ρ > 0 we write and define the concentration function the supremum is attained). We remark that Q v (ρ) is the classical Lévy concentration function for the empirical measure Lemma 3.5 (Existence of a large bimodal component). Remark 3.6. We will later apply the above lemma when studying random variables of the form W u,π = m j=1 ξ j (u j − u π(j) ), where ξ = (ξ 1 , . . . , ξ m ) is a sequence of iid Bernoulli(1/2) variables and j → π(j) is a pairing between 2m distinct elements of [n]. We will think of W u,π as a random walk on C with steps u j − u π(j) , and use (3.16) to argue that for certain u this walk takes many large steps and is thus unlikely to concentrate significantly in any small ball. At some point we will also consider a random walk whose steps are differences between averages of the components of u over sets of equal size, rather than differences between individual components, in which case we will need (3.17). Proof. We observe that for any ε > 0, for some universal constant c > 0. Indeed, letting λ ∈ C be such that Q u (ε) = 1 n |E u (λ, ε)|, we can cover the ball {w ∈ C : |w − λ| < ε} with O(1) balls of radius ε/2, and the claim follows from the pigeonhole principle. 
Write q k = Q u (2 −k ) and consider the non-increasing sequence {q k } k∈Z . Since all components of v lie in the unit disk we have q k = 1 for k < − 1 2 log 2 n. Let k 0 = min{k : q k < 1 − m/n}. Then k 0 ≥ − 1 2 log 2 n, and from Lemma 3.4 we have k 0 ≤ log 2 (1/ρ) . From (3.18), and (3.16) follows. For (3.17), let c 0 > 0 be a sufficiently small constant, and divide the complement of the ball B(λ 0 / √ n, 2 −k 0 / √ n) ⊂ C into 1/c 0 congruent angular sectors. By the pigeonhole principle one of which must contain at least c 0 m of the components of J 1 . Taking J 1 to be the set of corresponding indices, we can take c 0 smaller if necessary to ensure that for some open halfspace H ⊂ C, (3.17) now follows from the above lower bounds, the convexity of H, and the triangle inequality. Invertibility over very flat vectors In this section we prove the following lemma, which already implies Proposition 3.1 for d n/ log 2 n, but is weaker for smaller values of d. Lemma 4.1 (Very flat vectors). Let 1 ≤ d ≤ n/2 and 1 ≤ K ≤ n γ 0 for some fixed γ 0 ≥ 1/2. Recall the events (3.8) with A ∈ A n,d drawn uniformly at random. For all where c > 0 is an absolute constant. Here the key graph regularity property will be the control on codegrees enjoyed by elements of the set A codeg (i 1 , i 2 ) from (2.1). We need the following variant of a lemma of Rudelson Lemma 4.2 (Tensorization of anti-concentration). Let ζ 1 , . . . , ζ n be independent nonnegative random variables. Suppose that for some ε 0 , p 0 > 0 and all j ∈ [n], P(ζ j ≤ ε 0 ) ≤ p 0 . There are c 1 , p 1 ∈ (0, 1) depending only on p 0 such that Proof of Lemma 4.1. Let m be as in the statement of the lemma and denote First we consider an arbitrary fixed vector u ∈ Flat 0 (m, ρ). By definition, there exists λ ∈ C, v ∈ Sparse(m) and w ∈ ρB n such that Indeed, by the triangle inequality, On the other hand by the assumption u ∈ S n−1 0 and Cauchy-Schwarz, Combined with (4.4) this gives (4.3). It follows that there exists j 1 ∈ J with On the other hand, since j∈J c |w j | 2 ≤ w 2 ≤ ρ 2 it follows from the pigeonhole principle that there exists j 2 ∈ J c such that By the previous displays and the triangle inequality we have (4.6) Let A ∈ A n,d be drawn uniformly at random. We form a coupled matrix A ∈ A n,d as follows. Conditional on A, we fix some arbitrary bijection π : these sets are empty we simply set A = A). We do this in some measurable fashion with respect to the sigma algebra generated by A. We let ξ = (ξ i ) n i=1 ∈ {0, 1} n be a sequence of iid Bernoulli(1/2) indicators, independent of A, and form A by replacing the submatrix A (i,π(i))×(j 1 ,j 2 ) with We claim that A d = A. It is clear that for any realization of the signs ξ the replacements (4.7) do not affect the row and column sums, so A ∈ A n,d . Now note that A and A agree on all entries Conditional on any realization of the entries of A outside this set, from the constraints on row and column sums we have that the remaining entries of A are determined by the set N A T (j 1 ) \ N A T (j 2 ), and similarly the remaining entries of A are determined by One then notes that for any fixed realization of ξ, the set N A T (j 1 ) N A T (j 2 ) is also uniformly distributed over subsets of N A T (j 1 ) N A T (j 2 ) of the same cardinality. The claim then follows from the independence of ξ from A. Denote the rows of A + Z by R i . We have (4.9) For A such that A T ∈ A codeg (j 1 , j 2 ) we have , from (4.6) and (4.8) we have if c > 0 is a sufficiently small constant. 
From Lemma 4.2 it follows that Combined with (4.9) and Lemma 2.1 (applied to A T , which is also uniform over A n,d ) we conclude sup u∈Flat 0 (m,ρ) Let Σ 0 (m, ρ) ⊂ Flat 0 (m, ρ) be a ρ-net for Flat 0 (m, ρ) as in Lemma 3.3. On the ρ). Letting u ∈ Σ 0 (m, ρ) such that u − v ≤ ρ, by the triangle inequality, Thus, By our choice of ρ (adjusting the constant c) we have 2ρK √ d ≤ c d/m. Thus, we can apply the union bound and the estimate (4.11) to conclude where we have substituted the assumed bound on m and the expression for ρ. The claim follows. Incrementing control on structured vectors In this section we upgrade the control on very flat vectors from Lemma 4.1 to obtain Proposition 3.1 by an iterative argument. Note that this step is not necessary for large degrees n/ log 2 n d ≤ n/2. Whereas Lemma 4.1 was established by restricting attention to A ∈ A codeg (i 1 , i 2 ), i.e. restricting to the event that A has controlled codegrees, here we will use the expansion property enjoyed by elements of A exp (κ) (defined in (2.6)). As in the proof of Lemma 4.1 we will create a coupled matrix A using random switchings. This time the switchings will be applied across several columns rather than just two. While similar in spirit to the coupling from the previous section, the coupling used here requires some more care and notation to properly define. Then A ∈ A exp (κ/2). Proof. Fix an arbitrary set J ⊂ [n] with |J| ≤ n/2d. It suffices to show In the case that i = i l , by definition of the switching we must have i l ∈ N A T (J), and similarly if i = i l then i l ∈ N A T (J). Since the indices i 1 , i 1 , . . . , i k , i k are all distinct, we have . Depiction of the out-neighborhoods of vertices i, π(i) for the digraph corresponding to a matrix A ∈ G + (i) = G L,L (i, π(i)), where we only depict the directed edges from {i, π(i)} to L ∪ L . Note that J i = L A (i), the set of out-neighbors of i in L, while π(i) has no out-neighbors in L. The set In the formation of the coupled matrix F L,L (A) we apply Switch {i,π(i)},J i ,J i (see Figure 2), or not, depending on the value of ξ i ∈ {0, 1}, and we do this independently for each i ∈ I + (A). uniformly at random. For each i ∈ I − (A) we set J i (A) = L A (π(i)) and draw Furthermore, by the independence of ξ i from all other variables we may fix ξ i = 1 as the claim is immediate for ξ i = 0. Since i is fixed we will lighten notation by writing Fix an arbitrary element A 0 ∈ A n,d . Our aim is to show By symmetry we have P(A ∈ G + ) = P(A ∈ G − ). Thus, by subtracting a similar expression for P(A = A 0 ), it suffices to show Again by symmetry, it suffices to establish (5.9). Note that Thus, both sides of (5.9) are zero unless the event holds. Our aim is now to show Now notice that due to the constraints on column sums for elements of A n,d , restricting to the event E 0 ∧ {A ∈ G + } fixes all entries of A, A * except for those in {i, i } × L 0 , where and the same is true if we restrict to E 0 ∧ {A ∈ G − }. Thus, it suffices to show Under the conditioning, the submatrix A (i,i )×L 0 is determined by the random set is uniformly distributed over subsets of L 0 of fixed size r := |L A 0 |. Thus, it suffices to show that on E 0 ∧ {A ∈ G + }, L A is also uniformly distributed over subsets of L 0 of size r. From the definition (5.4), on E 0 ∧ {A ∈ G + } we have where, conditional on A, J is drawn uniformly from subsets of L A (i ) \ L A (i) = L 0 \ L A of size |L A (i)| =: s, and we note that s is fixed by the constraints on column sums. 
From the constraints on row sums we have |L A | = r − s. Also, again by uniformity of A ∈ A n,d , L A is uniformly distributed over subsets of L 0 of size r − s. Thus, on E 0 ∧ {A ∈ G + }, L A is generated by first selecting L A ⊂ L 0 of size r − s uniformly at random, and then adjoining a uniform random set J ⊂ L 0 \ L A of size s. It follows that L A is uniformly distributed over subsets of L 0 of size r, as desired. In Lemma 5.4 below we show that when A is drawn uniformly at random, with high probability the number of random switchings |I + (A) ∪ I − (A)| that are applied to form F L,L (A) is fairly large. To prove this we need a consequence of concentration of measure for the symmetric group, which will also be used in the proof of Lemma 6.3. Lemma 5.3. Let S, T be finite sets with m = |S| ≤ |T | and let σ : S → T be a uniform random injection. Let T 1 , . . . , T m ⊂ T be fixed subsets and set N σ = |{i ∈ S : σ(i) ∈ T i }|. We have E N σ = 1 |T | i∈S |T i |, and for any δ > 0, (5.14) In particular, if T 0 ⊂ T is fixed and U ⊂ T is a uniform random set of size m, then for any δ > 0, (note there exist such L, L if C is taken sufficiently large). Let A ∈ A n,d be uniform random, and let I + (A) be as in (5.2). We have for a sufficiently small constant c > 0. Proof. We will use Lemma 2.5 and the row-exchangeability of A. Let r ≥ 1 to be chosen later, and denote . (recall our notation from Lemma 2.5). Note that if i ∈ I is such that i ∈ I r (A) and π(i) ∈ I r (A; i), then i ∈ I + (A). Indeed, from π(i) ∈ I r (A; i) we have π(i) / ∈ N A T (L), i.e. L A (π(i)) = ∅, and |L A (π(i)) \ N A (i)| > r ≥ |L A (i)| ≥ 1. where σ is a uniform random permutation of (I r (A) ∩ I) c , independent of A under the conditioning. We condition on A and proceed using only the randomness of σ. Note that the restriction of σ −1 •π to I r (A)∩I is a uniform random injection into (I r (A)∩I) c . We have Lemma 5.5 (Incrementing flatness). There is a constant c 0 > 0 such that the following holds. Fix γ 0 ≥ 1/2 and let 1 ≤ K ≤ n γ 0 . Let κ ∈ (0, 1) and assume 1/(c 0 κ) 2 ≤ d ≤ c 0 κn. Let and let ρ , m satisfy Let A ∈ A n,d be drawn uniformly at random, and recall the events (3.8). We have From (5.6) and the fact that the switching mappings are their own inverses it follows that For the right hand side, letting c > 0 a sufficiently small constant and applying the union bound, By our assumed bounds on d and m and taking c 0 , c sufficiently small we can apply Lemma 5.4 to bound the second term above by O(exp (−c κdm)). Thus, taking c 0 ≤ c , to obtain (5.31) it suffices to show Condition on A satisfying |I + (A)| > c 0 κdm, and consider an arbitrary element i ∈ I + (A). Writing R i , R i for the ith rows of A + Z and A + Z, respectively, we have From (5.32) it follows that By independence of the components of ξ we can apply Lemma 4.2 to obtain that if c 0 is sufficiently small, Undoing the conditioning on A we obtain (5.33), and hence (5.31). By Lemma 3.3 we may fix a ρ -net Σ 0 = Σ 0 (m , ρ ) ⊂ Flat 0 (m , ρ ) for Flat 0 (m , ρ ) with |Σ 0 | ≤ O(n/m ρ 2 ) m . By definition, on the event E K (m , ρ ) we have A ∈ C(u, ρ K √ d) for some u ∈ Flat 0 (m , ρ ). Arguing as in the proof of Lemma 4.1 it follows that A ∈ C(u , 2ρ K √ d) for some u ∈ Σ 0 . Thus, by our assumption ρ < ρ/2, Now by the assumed bound (5.27) on ρ /ρ, we have 2ρ K √ d ≤ c 0 ρ κdm/n for all n sufficiently large, and hence for any u ∈ S n−1 . Combining (5.35) with (5.31) we have Proof of Proposition 3.1. 
We may and will assume throughout that n is sufficiently large depending on γ 0 . Since Lemma 4.1 already implies Proposition 3.1 when d n/ log 2 n, we may assume d ≤ n log 2 n . (5.36) In the sequel, we will frequently apply the observation that the events E K (m, ρ) are monotone increasing in the parameters m and ρ. Let m > c 0 n/d to be specified later. By the union bound and monotonicity of E K (m, ρ) in m and ρ, For each 1 ≤ k ≤ k * , since ρ k ≥ ρ k * +1 we can also apply Lemma 5.5 to bound Assembling the bounds (5.41), (5.38), (5.42), (5.43) and applying Lemma 2.3 we conclude Since ρ k * +2 = n −10γ 0 (k * +2) = n −O(γ 0 log d n) by (5.40), the claim follows after adjusting c. Invertibility over non-flat vectors In this section we conclude the proof of Theorem 1.7. Throughout this section A and Z are as in the theorem statement. 6.1. An averaging argument. The aim of this subsection is to establish Lemma 6.2 below, which essentially reduces our task to bounding the probability that the difference between two rows of A + Z has small inner product with a fixed unit vector. The statement and proof were inspired by arguments in [LLT + 17] for the invertibility problem, which in turn are an intricate refinement of a basic averaging argument that goes back to Komlós in his work on the invertibility of random ±1 matrices with iid entries [Kom]. Definition 6.1 (Good overlap events). For i 1 , i 2 ∈ [n], ≥ 1 and ρ, t > 0 we define O i 1 ,i 2 ( , ρ, t) to be the event that there exists u ∈ S n−1 0 and disjoint sets L 1 , L 2 ⊂ N A (i 1 ) N A (i 2 ) such that the following hold: Recall our notation A (i 1 ,i 2 ) for the matrix obtained by removing the rows with indices i 1 , i 2 , and write A (i 1 ,i 2 ) for the sigma algebra it generates. Crucially, we note that O i 1 ,i 2 ( , ρ, t) is a A (i 1 ,i 2 ) -measurable event. Indeed, conditioning on A (i 1 ,i 2 ) fixes R i 1 + R i 2 and N A (i 1 ) N A (i 2 ) by the constraint on column sums. Informally, O i 1 ,i 2 ( , ρ, τ ) is the event that there is a unit vector which is "almost normal" to the span of the n − 1 vectors {R i : i / ∈ {i 1 , i 2 }} ∪ {R i 1 + R i 2 }, and which also has "high variation" on N A (i 1 ) N A (i 2 ). The key property of u (i 1 ,i 2 ) , L 1 (i 1 , i 2 ), and L 2 (i 1 , i 2 ) is that they are fixed upon conditioning on A (i 1 ,i 2 ) . We will eventually be able to restrict to these events with parameters ≥ d/ log O(1) n and ρ, t ≥ n −O(Γ) . For m ≥ 1 and ρ, t > 0 we define the good event Recall also the set A disc (n 0 , δ) ⊂ A n,d from (2.4). Lemma 6.2 (Good overlap on average). Assume 1 ≤ d ≤ n/2. Let 1 ≤ m ≤ c 1 n for a sufficiently small constant c 1 > 0, and put = md/(8n). Then for all ρ > 0 and 0 < t ≤ n −10 we have Proof. Suppose the event on the left hand side of (6.2) holds. Let u, v ∈ S n−1 be the eigenvectors of (A + Z) * (A + Z), (A + Z)(A + Z) * , respectively, associated to the eigenvalue s n (A + Z) 2 . By our hypotheses and (1.11) we have Then since it follows that u and 1 are associated to distinct eigenvalues of (A + Z) * (A + Z) and hence u ⊥ 1; we similarly have v ⊥ 1. Thus, we have located u, v ∈ S n−1 0 such that Furthermore, by our restriction to G(m, ρ, t) we have that u, v ∈ S n−1 0 \ Flat(m, ρ). Our next step is to find a large set I * (u, v) of pairs of row indices (i 1 , i 2 ) for which O i 1 ,i 2 ( , ρ/2, t) holds, and for which |v i 1 − v i 2 | is reasonably large. We begin with the former -that is, counting pairs of row indices that are "good" for u. 
By Lemma 3.5 there are disjoint sets J 1 , J 2 ⊂ [n] with |J 1 | ≥ m, |J 2 | n − m such that Let us take the constant c 1 sufficiently small that |J 2 | ≥ m. For α = 1, 2 put By our restriction to A disc (m/8, 1/2) we have |I 0 α (u)| ≥ n − m/8 for α = 1, 2. Indeed, if this were not the case we would have |I 0 a contradiction. Now for α ∈ {1, 2} and i 1 ∈ [n] put For i 1 ∈ I 0 α (u) we have where in the last inequality we applied our assumption d ≤ n/2. Thus, using our restriction to A disc (m/8, 1/2) and arguing by contradiction as above, we find that |I α (u; i 1 )| ≥ n − m/8 for α ∈ {1, 2} and any i 1 ∈ I 0 α (u). Setting Setting I(u) := I 1 (u) ∩ I 2 (u), Now for each α = 1, 2 and (i 1 , i 2 ) ∈ I(u), since Furthermore, for each (i 1 , i 2 ) ∈ [n] 2 , Thus, u, L 1 (i 1 , i 2 ), L 2 (i 1 , i 2 ) satisfy the conditions on u, L 1 , L 2 in Definition 6.1, and it follows that O i 1 ,i 2 ( , ρ/2, t) holds for each (i 1 , i 2 ) ∈ I(u). Now we count pairs of row indices that are "good" with respect to v. By Lemma 3.4 and the fact that v ∈ S n−1 \ Flat(m, ρ) we have that for any i 1 ∈ [n], from (6.6) and (6.4) we have (6.8) Fix (i 1 , i 2 ) ∈ I * (u, v). By several applications of the fact that O i 1 ,i 2 ( , ρ/2, t) holds and the Cauchy-Schwarz inequality, Rearranging we have In summary, we have shown that on the event The bound (6.2) now follows by taking expectations on each side and rearranging. 6.2. Anti-concentration for row-pair random walks. The aim of this subsection is to prove the following: Lemma 6.3. Assume 1 ≤ d ≤ n/2. Let i 1 , i 2 ∈ [n] distinct, and suppose the event O i 1 ,i 2 ( , ρ, t) holds for some ≥ 1, ρ, t > 0. Then for all r ≥ 0, We will need the following standard anti-concentration bound of Berry-Esséen-type; see for instance [Coob,Lemma 2.7] (the condition there of κ-controlled second moment is easily verified to hold with κ = 1 for a Rademacher variable). Lemma 6.4 (Berry-Esséen-type small ball inequality). Let v ∈ C n be a fixed nonzero vector and let ξ 1 , . . . , ξ n be iid Rademacher variables. Then for r ≥ 0, Proof of Lemma 6.3. By symmetry we may take (i 1 , i 2 ) = (1, 2). We condition on a realization of A (1,2) satisfying the event O 1,2 ( , ρ, t) for the remainder of the proof. For ease of notation we will write u, L 1 , L 2 instead of u (1,2) , L 1 (1, 2), L 2 (1, 2). Note that conditioning on A (1,2) fixes u, L 1 , L 2 . Let r ≥ 0 be arbitrary. Our aim is to show (6.10) We now construct a coupled matrix A ∈ A n,d on an augmented probability space. Under the conditioning on A (1,2) , by the constraint on column sums the only remaining randomness is in the submatrix A (1,2)×N A (1) N A (2) . We create A by resampling a sequence of 2× 2 submatrices of this submatrix uniformly and independently. Specifically, we fix a bijection π : L 1 → L 2 in some arbitrary (but measurable) fashion under the conditioning on A (1,2) , and let ξ = (ξ j ) n j=1 be iid Rademacher variables independent of A. Conditional on A, for each j ∈ L 1 such that A (1,2)×(j,π(j)) ∈ 1 0 0 1 , 0 1 1 0 (6.11) we put A (1,2)×(j,π(j)) = 1 0 0 1 1(ξ j = +1) + 0 1 1 0 1(ξ j = −1); if (6.11) does not hold then we set A (1,2)×(j,π(j)) = A (1,2)×(j,π(j)) . Finally, we set A(i, j) = is generated by first sampling A (1,2)×N A (1) N A (2) uniformly under the conditioning on A (1,2) , and then independently and uniformly resampling the 2 × 2 submatrices A (1,2)×(j,π(j)) . It readily A (1,2) . 
Denoting the first two rows of A + Z by R 1 , R 2 , by replacing A with A in (6.10) it suffices to show (6.12) Define L 1 = L 1 (A) := j ∈ L 1 : (6.11) holds . (6.13) Using this set we can express the dot products R 1 · u, R 2 · u in a manner which exposes the dependence on the Rademacher variables ξ j : where z 1 (A), z 2 (A) ∈ C denote quantities that depend only on A. Subtracting we obtain where ∂ j (u) := u j − u π(j) and z(A) depends only on A. Since π(j) ∈ L 2 for each Our first task is to show that |L 1 | with high probability in the randomness of the first two rows of A. The conditioning on A (1,2) has fixed all entries but the submatrix A (1,2)×N A (1) N A (2) , and has also fixed m := |N A (1) \ N A (2)|. Moreover, the submatrix A (1,2)×N A (1) N A (2) is determined by the set N A (1)\N A (2), which is uniformly distributed over subsets of N A (1) N A (2) of size m. Define We have |L 1 | ≥ |L 2 |; indeed, L 2 is the image under π of the elements of j ∈ L 1 for which the first alternative in (6.11) holds. Now we obtain a high probability lower bound on |L 2 | by two applications of the bound (5.15) in Lemma 5.3. Applying (5.15) with T = N A (1) N A (2), U = N A (1) \ N A (2), T 0 = L 1 and δ = 1/2 (say) gives Next, conditional on L 1 we can apply (5.15) again with T = (N A (1) N A (2)) \ L 1 , U = (N A (2) \ N A (1)) \ L 1 , T 0 = π(L 1 ) and δ = 1/2. With these choices we have and we obtain for a sufficiently small constant c > 0. Combining this bound with (6.17) and the lower bound |L 1 | ≥ |L 2 | we obtain as desired. We henceforth condition on a realization of A satisfying |L 1 | ≥ c . Now it suffices to show At this point the only randomness is in the Rademacher variables ξ j . Next we locate a large subset of L 1 on which the discrete derivatives ∂ j (u) have roughly the same size. For k ≥ −1 define L (k) = {j ∈ L 1 : 2 −(k+1) < |∂ j (u)| ≤ 2 −k }. From (6.15) we have so by the pigeonhole principle we have for some k. Now define v ∈ C n to have components v j = ∂ j (u) 1(j ∈ L * ). From (6.15) and the fact that the components of v vary by a factor at most 2 on L * we have and v ∞ ≤ 2 v |L * | 1/2 log(n/ρ) 1/2 v . (6.23) From the expression (6.14), we condition on the variables (ξ j ) j / ∈L * and apply Lemma 6.4 to get Undoing the conditioning on (ξ j ) j / ∈L * gives (6.19) as desired. 6.3. Conclusion of proof of Theorem 1.7. Fix γ ≥ 1 and let C 1 , Γ > 0 to be chosen sufficiently large. We may and will assume that n is sufficiently large depending on γ. We may also assume d ≥ log 2C 1 n (6.24) as the desired bound holds trivially otherwise. From our hypotheses, (1.11) and the triangle inequality with probability 1. Thus, we may restrict to the event B(n γ+1/2 ). Set m = cn γ log 3 n (6.25) for a sufficiently small constant c > 0. With this choice of m and the lower bound (6.24), taking C 1 sufficiently large (C 1 ≥ 4 will do), we can apply Lemma 2.2 to bound (6.26) Furthermore, recalling the events (6.1), by applying Proposition 3.1 to A + Z and (A + Z) * and taking the union bound we have for some Γ 0 γ log d n, as long as We assume henceforth that Γ ≥ Γ 0 , so that (6.27) holds. Applying Lemma 6.2 (taking Γ ≥ 10.5) followed by Lemma 6.3, (1 + n −Γ+2Γ 0 +1 ) log n + n m e −cmd/n . The proof of Theorem 1.7 is complete. Proof of Theorem 1.2: reduction to asymptotics for singular value distributions We turn now to the proof of Theorem 1.2. 
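The reduction carried out in this section ultimately rests on the elementary identity (1/n) Σ_i log|λ_i(M) − z| = ∫ log(s) dν_{M−z}(s), that is, |det(M − z)| is simultaneously the product of the distances of the eigenvalues of M to z and the product of the singular values of M − z. The short check below verifies this numerically on a test matrix; it is this identity that converts control of singular value distributions into control of the ESD.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300
M = rng.standard_normal((n, n)) / np.sqrt(n)   # any test matrix will do

for z in (0.2, 0.5 + 0.4j, 1.5):
    via_eigenvalues = np.mean(np.log(np.abs(np.linalg.eigvals(M) - z)))
    s = np.linalg.svd(M - z * np.eye(n), compute_uv=False)
    via_singular_values = np.mean(np.log(s))
    print(f"z = {z}:  {via_eigenvalues:.6f}  vs  {via_singular_values:.6f}")
```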
We begin by recalling a lemma concerning the logarithmic potential, which allows us to convert the question of convergence of the ESDs µĀ n to questions about empirical singular value distributions. For a Borel probability measure µ over C integrating log | · | in a neighborhood of infinity, the logarithmic potential U µ : Lemma 7.1. For each n ≥ 1 let M n be a random n × n matrix with complex entries. Suppose that for a.e. z ∈ C, (1) there exists a probability measure ν z on R + such that ν Mn−z → ν z in probability; (2) the measures ν Mn−z uniformly integrate the function s → log(s) in probability, i.e. for every ε > 0 there exists T < ∞ such that Then µ Mn converges in probability to the unique probability measure µ on C satisfying for all z ∈ C. (We note that [BC12] uses the weak topology on the space of measures -defined in terms of bounded continuous test functions -rather than the vague topology used in this article. However, under the uniform integrability assumption (7.2) the assumption (1) above is equivalent to weak convergence in probability. ) We introduce the centered and rescaled matrix where we recall our notation p = d/n. The following two propositions, along with Corollary 1.9, are the main ingredients for establishing the conditions (1) and (2) in Lemma 7.1, and hence for proving Theorem 1.2. The proofs are deferred to later sections. Proposition 7.2 (Weak convergence of singular value distributions). Assume log 4 n ≤ d ≤ n/2. For each z ∈ C there exists a probability measure ν z on R + such that ν 1 √ n Yn−z → ν z in probability. Moreover, the family {ν z } z∈C satisfies the relation (7.3) with µ = µ circ . Proposition 7.3 (Anti-concentration of the spectrum). Assume log 4 n ≤ d ≤ n/2. There are absolute constants C, c > 0 such that with probability 1 − O(e −n ), for all η ∈ (0, 1], Remark 7.4. From the existence of the weak limit ν z provided by Proposition 7.2 we obtain ν 1 √ n Yn−z (I) = O z (|I|) + o(1) for any fixed interval I ⊂ R + . Proposition 7.3 gives a quantitative improvement of this bound for intervals near zero, providing nontrivial estimates down to the "mesoscopic scale" η ∼ d −1/48 . Recently there have been major advances establishing quantitative versions of the Kesten-McKay [BHY16] and semicircle [BKY17] laws for undirected random regular graphs of (large) fixed degree or growing degree, proving that these limiting laws are a good approximation for the finite n ESDs on intervals I at the near-optimal scale |I| ∼ n ε−1 . In particular, we expect that the arguments in [BKY17] could be used to obtain a similar local law for ν 1 √ n Yn−z . Now we conclude the proof of Theorem 1.2 on Propositions 7.2 and 7.3. We let C 0 > 0 to be taken sufficiently large. For the duration of the proof we abbreviate ν n,z := νĀ n−z . For any z ∈ C, 1 √ n Y n − z andĀ n − z differ by a matrix of rank one. By a standard eigenvalue interlacing bound it follows that their singular value distributions are close in Kolmogorov distance, specifically: | log(s)|dν n,z (s) > ε ≤ ε. (7.7) First we address the singularity of log at infinity. We claim that for any ε > 0 and z ∈ C there exists T = T (ε, z) > 0 such that ∞ e T log(s) dν n,z (x) ≤ ε/2 (7.8) almost surely for all n ≥ 1. Indeed, note that for any fixed z ∈ C, ∞ 0 s 2 dν n,z (s) = 1 n tr 1 with probability one, where in the fourth line we used our assumption d ≤ n/2 (see Remark 1.5), and in the final line we used that n i,j=1 a ij = dn for any element A = (a ij ) ∈ A n,d . 
Thus, with probability one, for all T sufficiently large and for all n, We can now take T sufficiently large depending on ε to make the right hand side smaller than ε/4 for all n sufficiently large, giving The right hand side is smaller than ε/2 for all sufficiently large n. We take T larger if necessary to make the integral zero for all other values of n, which yields (7.12). Finally, taking T ≥ T(ε, z) completes the proof of Theorem 1.2, assuming Propositions 7.2 and 7.3. The remainder of the paper is organized as follows. In Section 8.1 we recall the approach to studying empirical singular value distributions via Hermitization and the resolvent method. In Section 8.2 we introduce two iid models to be compared with Y_n, a Bernoulli matrix X_n and a Gaussian matrix G_n, and state lemmas giving quantitative comparisons between the singular value distributions of Y_n and X_n, and between X_n and G_n. In the remainder of Section 8 we use these comparison lemmas to prove Propositions 7.2 and 7.3. The comparison lemmas are proved in Section 9. In the appendix we prove a bound on the local density of small singular values for perturbed Gaussian matrices.

8. The comparison strategy
8.1. Hermitization and the Stieltjes transform. To prove Propositions 7.2 and 7.3 we will use a popular linearization trick and Stieltjes transform techniques, which we now briefly outline; see [BC12] for additional background and motivation. For an n × n matrix M and z ∈ C we define the 2n × 2n matrix We will frequently apply the bound which is immediate from (8.3) as the imaginary part of w is a lower bound on the distance from w to the spectrum of H_z(M) in the complex plane. Denote the Stieltjes transform of a probability measure µ on R by If H is an n × n Hermitian matrix with real eigenvalues λ_1, ..., λ_n, the Stieltjes transform of its empirical spectral distribution is given by In particular, g_{z,w}(M) = m_{µ_{H_z(M)}}(w). (8.8) The Stieltjes transforms w → g_{z,w}(Y_n) will be a key tool in the proofs of Propositions 7.2 and 7.3. Indeed, it is a standard fact in random matrix theory that weak convergence of the ESDs µ_{H_z(Y_n)} follows from pointwise convergence of their Stieltjes transforms. Furthermore, a key advantage of considering Stieltjes transforms over moments is that the former provide good quantitative control of ESDs of Hermitian matrices at short scales, which will be key for obtaining Proposition 7.3. For more discussion of the Stieltjes transform method in Hermitian random matrix theory we refer to the books [AGZ10, Tao12] and the review article [BGK].

8.2. Comparison with Gaussian and Bernoulli ensembles. For each n ≥ 1 we let G_n denote an n × n matrix with iid standard real Gaussian entries. We also let B_n = (b_{ij}^{(n)}) be an n × n matrix with independent Bernoulli(p)-distributed 0-1 entries, and set X_n := (p(1 − p))^{−1/2}(B_n − p 1 1^T), (8.9) where we recall p := d/n ≤ 1/2. We denote the entries of X_n by ξ_{ij}^{(n)}. As with A_n, Y_n, we will often suppress the dependence on n and write G, X, ξ_{ij}. Note that the variables ξ_{ij} are iid centered variables with unit variance. We further note that for any q > 2 we have the moment bound The following two lemmas give quantitative comparisons between the ESDs of H_z(Y) and H_z(X), and of H_z(X) and H_z(G). The proofs are deferred to Sections 9.1 and 9.2. Lemma 8.1 (Comparison with iid Bernoulli). Let z ∈ C and let f : R → R be an L-Lipschitz function supported on a compact interval I ⊂ R. Let X, Y be n × n matrices as in (8.9), (7.4), respectively. Assume log^4 n ≤ d ≤ n/2. For any ε > 0, for some constant c > 0.
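As a quick numerical check of the Hermitization recalled in Section 8.1: assuming H_z(M) is the usual 2n × 2n block form with M − z and its conjugate transpose in the off-diagonal blocks, its spectrum is {±s_i(M − z)}, and g_{z,w}(M) is then the Stieltjes transform of the symmetrized singular value distribution, as in (8.8). The sketch below verifies both points on a small example; the helper names are illustrative.

```python
import numpy as np

def hermitization(M, z):
    """Assumed block form of H_z(M): [[0, M - z], [(M - z)^*, 0]], a 2n x 2n Hermitian matrix."""
    n = M.shape[0]
    S = M - z * np.eye(n)
    return np.block([[np.zeros((n, n)), S], [S.conj().T, np.zeros((n, n))]])

def stieltjes(H, w):
    """m_{mu_H}(w) = (1/N) tr (H - w)^{-1} for an N x N Hermitian matrix H and w in C^+."""
    N = H.shape[0]
    return np.trace(np.linalg.inv(H - w * np.eye(N))) / N

rng = np.random.default_rng(7)
n = 150
M = rng.standard_normal((n, n)) / np.sqrt(n)
z, w = 0.4 + 0.2j, 0.1 + 0.05j

H = hermitization(M, z)
eig = np.sort(np.linalg.eigvalsh(H))
s = np.linalg.svd(M - z * np.eye(n), compute_uv=False)
sym = np.sort(np.concatenate([s, -s]))          # the symmetrized singular values +/- s_i(M - z)

print("max |eig(H_z(M)) - (+/- singular values)| =", np.max(np.abs(eig - sym)))
print("g_{z,w}(M) =", stieltjes(H, w))
```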
Lemma 8.2 (From iid Bernoulli to iid Gaussian). Let z ∈ C and w ∈ C + . We have where the implied constant is absolute. In the remainder of this section we use the above lemmas to establish Propositions 7.2 and 7.3. 8.3. Proof of Proposition 7.2. Our starting point is the following, which gives the desired limiting behavior for the Gaussian matrices G n in place of Y n . We will then use Lemmas 8.1 and 8.2 to transfer this limiting property to Y n . Lemma 8.3 (Convergence of singular value distributions, Gaussian case). For each z ∈ C there exists a probability measure ν z on R + such that ν 1 √ n Gn−z → ν z in probability and in expectation. Moreover, the family {ν z } z∈C satisfies the relation (7.3) with µ = µ circ . Proof. The existence of the measures ν z follows as a special case of a result of Dozier and Silverstein [DS07] (which allows more general entry distributions and more general shifts than −z I n ). See [PZ10,Lemma 3] for the verification that (7.3) holds with µ = µ circ . Proof of Proposition 7.2. By Lemma 8.3 it suffices to show ν 1 √ n Yn−z − E ν 1 √ n Gn−z converges in probability to the zero measure, i.e. to show that for any f ∈ C c (R) and any ε > 0, where here and in the remainder of the proof we implicitly allow quantities o(1) to tend to zero at a rate depending on f and ε. Fix f ∈ C c (R) and ε > 0. From Lemma 8.2 and our assumption that d grows to infinity with n it follows that (8.14) There exists a compactly supported Lipschitz function f ε with support and Lipschitz constant depending only on f and ε such that f − f ε ∞ ≤ ε/4. Now it suffices to show (8.15) But the above is immediate from Lemma 8.1 and our assumption d ≥ log 4 n. 8.4. Proof of Proposition 7.3. As in the proof of Proposition 7.2, our starting point is a result for Gaussian matrices. The proof is deferred to Appendix A. Lemma 8.4. There are constants C, c > 0 such that the following holds. Let 1 ≤ k ≤ n and let M ∈ M n (C) be a deterministic matrix. Except with probability O(n 2 e −ck ), for all k ≤ j ≤ n − 1 we have s n−j ( 1 √ n G + M ) ≥ cj/n. Remark 8.5. We will apply the lemma with k = √ n, but we note that it gives a nontrivial result down to much smaller scales: namely, with high probability, all but the smallest k singular values are within a constant of their expected values, as long as k grows faster than log n. Proof of Proposition 7.3. It will be more convenient to work with µ Hz(Y ) , the symmetrized probability measure for ν 1 . Let f η be the piece-wise linear η −1 -Lipschitz function which is equal to 1 on [−η, η] and zero outside of (−2η, 2η). We have 1 I ≤ f η pointwise, and by Lemma 8.1 (taking ε = C d −1/12 log 1/4 n for a sufficiently large constant C > 0), except with probability O(η exp(−cd 2/3 n log n)) = O(ηe −n ) (adjusting the constant c to absorb the prefactor d 1/12 / log 1/4 n). The error term on the right hand side above is where in the last line we used the assumption η ≥ d −1/48 ≥ n −1 to bound (nη) −1 = O(1). Now we use Lemma 8.4 to bound the first term. First, by Fubini's theorem, Let constants C, c be as in Lemma 8.4, let k ≥ 1 to be chosen later, and let G k denote the event that s n−j ( 1 √ n G − z) ≥ cj/n for all k ≤ j ≤ n − 1. On G k the integrand in (8.21) is bounded by Inserting this estimate in (8.21), we have From the deterministic bound g z,iη (G) ≤ 1/η and Lemma 8.4 (taking M = −z I n ) we conclude Taking k = √ n, by our assumption on η the last three terms are of lower order, which yields (8.18) and hence (8.17). 
(By optimizing η to balance the first and last terms above, one sees that we actually showed the stronger bound E f η dµ Hz(X) η + d −1/10 .) Now consider the events Denoting m * = 1 48 log 2 d , from the union bound, where we in the second and third lines we used that the events B m (t) are monotone in the parameter t. Applying (8.17), Fix η ∈ (0, 1] arbitrarily and suppose B does not hold. Let m be the integer such that 2 −(m+1) < η ≤ 2 −m . We have The result follows after adjusting the constant C. 9. Proofs of comparison lemmas 9.1. Proof of Lemma 8.1. Recall that the matrix B = B n has iid Bernoulli(p) entries, with p = d/n. Here we follow a strategy that was used by Tran, Vu and Wang in [TVW13] to prove a local semicircle law for adjacency matrices of random (undirected) regular graphs of growing degree. The idea is to use sharp concentration estimates for linear eigenvalue statistics of Hermitian random matrices together with a lower bound on the the probability that the iid Bernoulli matrix B n lies in A n,d . For the former we have the next lemma, which is easily obtained from the arguments of Guionnet and Zeitouni in [GZ00]: Lemma 9.1 (Concentration of linear statistics). Let H = (h ij ) n i,j=1 be a Hermitian random matrix with entries on and above the diagonal jointly independent and uniformly bounded by K/ √ n for some K < ∞. Let f : R → R be an L-Lipschitz function supported on a compact interval |I| ⊂ R, and let H 0 be an arbitrary n × n deterministic Hermitian matrix. For any ε > 0, for some constants C, c > 0. The following is established in Appendix B, following an argument of Shamir and Upfal for undirected d-regular graphs [SU84]. Lemma 9.2. Assume log 4 n ≤ d ≤ n/2. Then n/ log n by Canfield and McKay [CM05]. For the proof of Theorem 1.2 it is only important that we have a bound of the form P(B ∈ A n,d ) ≥ exp (−o(nd)). In Appendix B we actually prove a slightly stronger version of Lemma B.1 allowing d to grow as slowly as log 1+ε n. Proof of Lemma 8.1. Recall that A denotes a uniform random element of A n,d , and B denotes an n × n matrix with independent Bernoulli(p) entries, where p = d/n. For fixed B 0 ∈ {0, 1} n×n we denote In this notation we have X = M (B) and Y = M (A) (see (8.9), (7.4)). For f ∈ C c (R), z ∈ C and ε > 0 we denote the corresponding "bad set" of 0-1 matrices Our aim is to show that for any L-Lipschitz function f : R → R supported on a compact interval I and any ε > 0, Fix such f and ε. We can apply Lemma 9.1 (with n replaced by 2n), taking H z (X) for H, 0 z z 0 ⊗ I n for H 0 , and K = 1/ √ p to obtain Now notice that conditional on the event {B ∈ A n,d }, B is uniformly distributed over A n,d . Thus, , and (9.5) follows from (9.6) and Lemma B.1. 9.2. Proof of Lemma 8.2. Here we make use of the Lindeberg replacement strategy, which was introduced to random matrix theory by Chatterjee [Cha05,Cha06], who used it to prove the semicircle law for random symmetric matrices with exchangeable entries above the diagonal. It has since become a widely used tool in universality theory for random matrices, most notably with its use by Tao, Vu and others to establish universality of local eigenvalue statistics for various models; see e.g. [TaV14,EY12,TV15b] and references therein. In particular we will apply the following invariance principle: Theorem 9.4 (cf. [Cha05, Theorem 1.1 and Corollary 1.2]). 
Let X and W be independent random vectors in R N with independent components having finite third moment and satisfying E X i = E W i and E X 2 (9.7) Let f ∈ C 3 (R N → R), and denote |∂ r i f (x)| 3/r . (9.8) With Theorem 9.4 in hand, the proof of Lemma 8.2 boils down to estimating the partial derivatives of the resolvent R z,w (M ) from (8.3), viewed as a function of M . The proof is similar an argument that was sketched in the appendix of [TV10], and subsequently applied in the sparse setting by Wood [Woo12], and to matrices with exchangeable entries by Adamczak, Chafaï and Wolff in [ACW16] (who used a more general invariance principle from [Cha06] for exchangeable sequences). Here we will follow similar lines to the above-mentioned works; the only difference is that we will need to quantify errors in order to obtain the bound in Proposition 7.3. The aforementioned works all obtained estimates like Proposition 7.3 by a different geometric argument, also introduced in [TV10]. Adapting that argument to the setting of random regular digraphs appears to be of comparable difficulty to the proof of Theorem 1.7 due to the dependencies among entries. Instead, we have opted to make our comparisons in Lemmas 8.1 and 8.2 quantitative, and apply the geometric argument from [TV10] in the simpler Gaussian setting (see the proof of Lemma 8.4). One obtains the same bound for the imaginary parts by the same lines. Appendix A. Proof of Lemma 8.4 In this appendix we establish the estimate of Lemma 8.4 for the local density of small singular values of a perturbed real Gaussian matrix. The argument is a (by now standard) application of an approach introduced in [TV10]. In fact, a weaker version of the lemma (but still sufficient for our purposes) follows directly from [TV10, Lemma 6.7], which applies to any matrix with iid standardized entries with finite second moment. However, the argument is simpler in the Gaussian case and gives a stronger bound, so we include the proof below. Fix M ∈ M n (C) and denote G = G + √ nM . By the union bound it suffices to show that for some constants C, c > 0, for all 1 ≤ k ≤ n − 1. Fix such a k; since the desired bound is trivial for small values of k we may assume k is larger than any fixed constant. Let R i denote the ith row of G. Put m = n − k/2 , and for each i ∈ [m] set V i = span(R j : j ∈ [m] \ {i}). We claim Indeed, letting G be the m × n matrix obtained by removing the last k/2 rows from G, by the Cauchy interlacing law we have for a suitable constant c > 0. Also, where here and in the following we mean dimension over C. By rotational symmetry of the distribution of G i we may assume V i is spanned by the last dim V i standard basis vectors in C n . Then we have In particular, E dist(G i , V i ) 2 = n − dim V i ≥ n − m ≥ k/2. (A.8) (A.5) now follows from a standard concentration inequality for Gaussian measure (see for instance [Led01]). Lemma B.1 follows from the above by setting α = 1/3 and ε = 1. To prove Lemma B.1 we follow an argument of Shamir and Upfal from [SU84], who established estimates on the probability that an (undirected) Erdős-Rényi graph is a d-regular graph. The heart of the proof is to show that with high probability, the matrix B contains a d-regular factor of slightly smaller density. Recall that a d-regular factor in B = (b ij ) is an element B = (b ij ) ∈ A n,d such that b ij = 0 ⇒ b ij = 0 (usually this terminology is applied to the associated graphs/digraphs rather than adjacency matrices). Lemma B.2. 
Let n ≥ 1, p ∈ (0, 1), and let B ∈ {0, 1} n have iid Bernoulli(p) entries. Let 1 2 ≥ δ ≥ C max log n pn 1/2 , 1 (pn) 1/3 (B.1) for a sufficiently large constant C > 0, and put d = (1 − δ)pn (we assume δ is such that d is an integer, and n and p are such that the range for δ is nonempty). Then B contains a d-regular factor except with probability at most exp −cδ 2 pn for some constant c > 0. In the proof of Lemma B.2 we will write First we will control the event that X T < d|T | for some T of size less than (1 − δ/2)n. For fixed T and i, since E deg T (i) = p|T |, from Bernstein's inequality we have P(i ∈ L T ) ≤ exp −cδ 2 p|T | . Remark B.4. The second lower bound in (B.1) could likely be improved, or removed entirely, by separately controlling X T for sets T of "intermediate" size. However, we do not pursue such an improvement as it not necessary for the purposes of this work. Let α ∈ (0, 1/2). From Bernstein's inequality, the probability that a given row or column of B has support of size differing from p n by at least d 1−α is at most 2 exp −cd 1−2α for some constant c > 0. By the union bound we have that all rows and columns have supports of size p n + O(d 1−α ) with probability at least 1/2, say, for all n sufficiently large depending on ε. By Lemma B.2, B contains a d-regular factor with probability at least 3/4. Denoting the intersection of these events by G, we have P(G) ≥ 1/4. Identifying G with a subset of {0, 1} n×n , we see that every element of G can be obtained by taking an appropriate element of A n,d and adding at most
2017-08-07T20:06:01.000Z
2017-03-16T00:00:00.000
{ "year": 2019, "sha1": "878c9e2cec74c95147d4e73d7ef87dbba1776f7f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1703.05839", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "878c9e2cec74c95147d4e73d7ef87dbba1776f7f", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
140028368
pes2o/s2orc
v3-fos-license
Multiaxial fatigue study on steel transversal attachments under constant amplitude proportional and non-proportional loadings In this study, the multiaxial fatigue strength of full-scale transversal attachment is assessed and compared to original experimental results and others found in the literature. Mild strength S235JR steel is used and an exploratory investigation on the use of high strength S690QL steel and the effect of nonproportional loading is presented. The study focuses on non-load carrying fillet welds as commonly used in bridge design and more generally between main girders and struts. The experimental program includes 33 uniaxial and multiaxial fatigue tests and was partially carried out on a new multiaxial setup that allows proportional and non-proportional tests in a typical welded detail. The fatigue life is then compared with estimations obtained from local approaches with the help of 3D finite element models. The multiaxial fatigue life assessment with some of the well-known local approaches is shown to be suited to the analysis under multiaxial stress states. The accuracy of each models and approaches is compared to the experimental values considering all the previously cited parameters. Introduction In bridge design, welded joints submitted to multiaxial stresses interaction are generally fillet welds that can be classified in two types, i.e. load carrying and non-load carrying fillet welds as shown in Fig. 1. Fig. 1. Multiaxial normal-shear stress interaction in a typical bridge detail according to Baptista [1] Detail (a) is relatively well reported in literature. It is a cruciform joint (CJ) characterized by the fact that the main tensile or bending efforts are carried by the weld, leading to a higher variability in the critical location under multiaxial stresses, i.e. the weld throat or toe, depending mainly on the size and penetration of the fillet weld. For standard fillet weld, failure usually initiates at the weld root with cracks developing into the throat section [2,3]. On the contrary, detail (b) represents the so-called transversal attachment (TA), generally seen between for example main girders and secondary elements. It consists in a continuous plate subjected to a main tensile or bending stress, with transversal shear carrying fillet welds on its surface. Indeed, transversal attachments are usually loaded in shear, meaning that the weld toe on the main plate will experience an additional shear stress interacting with the main normal stress. No fatigue tests on this type of detail under multiaxial proportional or nonproportional loads were found in the literature, for any steel grade. Both details are considered in this study. Experimental campaign The experimental program was mainly carried out on a new multiaxial setup that allows for proportional and nonproportional tests. It includes 33 fatigue tests under uniaxial and multiaxial stress on details CJ (a) and TA (b) according to the matrix of experiments in Table 1. The two details are considered, depending on whether the tests are carried out under uniaxial normal stresses or multiaxial stresses. Effects related to the use of high strength steel are explored on the multiaxial TA detail. Six specimens were tested under uniaxial normal stress on detail CJ and combined to data acquired in a literature review previously done by one of the authors (Baptista [1] in the Appendix A4) to allow for the FAT classification of the detail. 
The classification in shear, due to some limitation of the setup (see 1.2.2), is only based on a literature data review (Baptista [1] in the Appendix A6). A total of 27 tests were done on detail (b) with 17 specimens tested proportionally and 10 nonproportionally with a 180° out-of-phase shift (the influence of the phase shift will be studied in a subsequent task). Specimens failed at the weld toe, both under uniaxial, multiaxial, proportional and non-proportional stress state. In order to be consistent with usual bridge state of stress in specific details and because of setup limitations, a unique stress ratio = max ⁄ of 0.1 is used except in the case of multiaxial non-proportional loading, where an initially unaccounted additional tensile stress (see 2.1) increased the R ratio to 0.6 . In order to emphasize the detrimental influence of the shear stress on the normal uniaxial fatigue life, the shear to normal stress ratio λ is comprised between 0.32, which correspond to a more realistic bridge loading situation, and 1.1. Experimental setup All the S235JR mild steel specimens were produced by C. Baptista, who is a former welder, at the EPFL-ICOM laboratory with commercial flat bars, whereas all S690QL high strength quenched and tempered steel specimens where fabricated by a steel fabricator. The specimen geometry consists of two orthogonally welded plates loaded in their planes as shown in Fig. 2. Fig. 2. Transversal attachment (TA) specimen geometry The longitudinal plate is continuous and has a section of 100*12mm while the second plate is a rigid and not continuous plate with a section of 150*20mm. To ensure crack to initiate at the weld toe on the main longitudinal plate and nor at the weld root neither on the longitudinal material, full penetration welds and a smooth reduction of the thickness to 10mm over 200mm of the plate at the location of interest are applied. To avoid weld defects at the longitudinal plate lower edge, vertical welds length is 80mm, leaving a 10mm distance to the plate edges (see Fig. 2 and Fig. 7). The welding was made using either a MAG 136 (for S235) or a MAG 138 (for S690) with welldefined parameters available upon request to the authors. The TA setup is based on two orthogonal jacks allowing the application of forces in two directions. The TA specimen is fixed horizontally to a column on one side and to the horizontal jack on the other side. Due to this support condition, a small horizontal displacement is occurring at the middle point and a larger one on the loaded side. To ensure the application of the defined normal stress range at the weld toe and to avoid any interaction between the two orthogonal jacks, the TA specimen is simply supported vertically on its bottom side by two linear roller bearing sliders while the vertical load is applied by the vertical jack above the specimen. Due to the support conditions and plate slenderness, only tensile loads can be applied horizontally to avoid any effects of elastic buckling or slipping at bolted connections. Vertically, in order to maintain contact with the roller bearing sliders, only compressive loads can be applied. Experiments under normal stresses Six specimens were tested under uniaxial normal stresses. A summary of the results is shown in Table 2. Different stress levels were applied during the tests to allow the definition of the S-N slope and therefore the FAT classification of the detail. The observed cracks are typical of mode I. 
Their pattern is similar among all the tested specimens, showing multiple semi-elliptical cracks that have initiated at the weld toe, with no preferential initiation point, and that have coalesced and propagated in the longitudinal plate perpendicularly. To ensure this behaviour and that initiation do not start specifically at the weld toe on any of the four longitudinal plate edges, these specific areas were smoothly grinded to avoid any notch or unwanted stress concentration effect. These results under uniaxial normal stress, together with the existing database, were already presented [1] and are shown in Fig. 4 along with their mean S-N curve. As the total number of tests is limited, a fixed slope m=3, is considered, leading to a detail mean strength of ∆ 2•10 6 = 120 . The database contains results for specimens with toe-to-toe length smaller than = 80 that failed at the weld toe on the main plate and with 0. Procedure for the statistical analysis follows the IIW recommendations [4] as well as Schneider and Maddox [5], see [1] Appendix A for details. Finally, whether or not data from the literature are considered, the detail is classified as FAT80 with a confidence level = 95%. Results are in good agreement with values given in the IIW [4] and EC3 [6] Data under shear stresses One limitation of the setup in its actual design is that it is impossible to create a pure nominal shear stress because of the space between the two roller bearing supports inherent to their geometry (see Fig. 7). Roller bearings are installed as close as possible from the centre axis of the vertical plate, and therefore of the vertical jack, but a bending effect is still present due to the 75mm distance imposed by the bearings design, which induces an additional normal stress due to the vertical load as shown in Fig. 7. However, a minimum span of the toe-to-toe distance is necessary in order to avoid a direct load transfer from the vertical plate to the roller bearing without having full transfer of the shear stress passing thru the weld toe. As for the uniaxial test data, existing shear fatigue results were collected from the literature and presented previously in [1] (Appendix A.6). Relevant shear fatigue tests found in the literature were mainly done on tube-toplate specimens and a few on welded plates. An applied stress ratio of generally = −1 was applied and only a few tests were done with 0. The re-analysis is given in Fig. 5 and shows that the mean strength is defined by ∆ 2•10 6 = 132 with a fixed slope m=5, the derived category being FAT98. Experiments under multiaxial stresses A total of 27 specimens were tested under various multiaxial states of stress. Twenty-one tests were done on S235JR steel, including 11 tests under proportional and 10 tests under non-proportional loadings. Six tests were done on S690QL steel specimens under proportional loadings. In order to assess the detrimental or beneficial effects of these various situations, results are first shown in the usual nominal normal stress range to number of cycles domain and compared to the uniaxial normal stress results. As a first approach, nominal stresses are defined in a two-dimensional plane at the expected and observed location of crack initiation, i.e. at the bottom of the weld toe, as shown in Fig. 7. Applied load signals and nominal stresses output are presented in Fig. 9. Tests were carried at frequencies between 3 and 4Hz depending on the loading situation. 
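As a rough illustration of the fixed-slope evaluation behind the FAT classifications discussed above, the sketch below fits log N against log Δσ with the slope held fixed (m = 3 for normal stress, m = 5 for shear) and derives a mean and a lower-bound strength at 2·10^6 cycles. It is only a schematic reading of the IIW-style procedure, not the authors' implementation: the factor k applied to the standard deviation and the data points in the usage example are placeholder assumptions.

```python
import numpy as np

def sn_fixed_slope(delta_sigma, cycles, m=3.0, k=2.0):
    """Fit log10(N) = logC - m*log10(S) with the slope m fixed, then derive
    the mean and a characteristic fatigue strength at 2e6 cycles.

    delta_sigma : applied nominal stress ranges [MPa]
    cycles      : observed fatigue lives [cycles]
    m           : fixed S-N slope (3 for normal stress, 5 for shear in the text)
    k           : number of standard deviations for the lower-bound curve
                  (placeholder; the IIW procedure ties this factor to sample size)
    """
    logS = np.log10(np.asarray(delta_sigma, dtype=float))
    logN = np.log10(np.asarray(cycles, dtype=float))
    # With m fixed, the only free parameter is the intercept logC.
    logC = np.mean(logN + m * logS)
    resid = logN - (logC - m * logS)
    std = np.std(resid, ddof=1)
    # Strength at 2 million cycles from the mean curve and the lower-bound curve.
    mean_2e6 = 10 ** ((logC - np.log10(2e6)) / m)
    char_2e6 = 10 ** ((logC - k * std - np.log10(2e6)) / m)
    return mean_2e6, char_2e6

# Hypothetical uniaxial data points, only to illustrate the call:
mean_s, char_s = sn_fixed_slope([180, 160, 140, 120], [3.1e5, 4.5e5, 7.0e5, 1.2e6])
print(f"mean strength at 2e6 cycles: {mean_s:.0f} MPa, characteristic: {char_s:.0f} MPa")
```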
Among all specimens, cracks initiated and propagated in the lower part of the weld toe section into the longitudinal plate, i.e. the same section and crack plane as under uniaxial normal stresses. Multiple semi-elliptical cracks initiated in this area and coalesced into a single one with a high / ratio. The bigger the shear ratio, the more the crack pattern tends to a semi-circular shape due mainly to the Mode II. Exceptions to mention are the specimens P16 and P17 where the crack initiated from a small notch on the lower edge of the main plate at the symmetry axis of the vertical plate. This area being subjected to the maximum principal stress when under multiaxial loads, any welding notch or imperfection needs to be carefully eliminated. Unlike the uniaxial case, crack surfaces are rough and presents small branches that are typical of Mode II/III cracks see Fig. 6. The bigger the ratio between shear and normal stress, the more the observable Mode II and III. These tests also show the anisotropic nature of the multiaxial fatigue propagation phenomenon in welded plate details. Most of the crack propagation driving force is lost in friction so these modes have higher lives than their corresponding Mode I cracks. The shear friction may however be severely reduced under the presence of a normal stress. The idea of factorising the shear damage with the normal stress influence can be found in most of the important strainbased criteria, e.g. Findley [7], which however are initiation criteria. In propagation and at macroscopic level, we believe this phenomenon to be the main reason for the lowest fatigue lives of the proportional load case. The normal stress influence, acting in phase with the shear, greatly reduces friction between the crack surfaces, increasing the damaging character of the shear forces. In bridge and other structures design, fatigue under shear and its relative failure modes are generally neglected in plate details because of the relative lack of tests on these types of details. It is effectively easier to obtain mode II and III in tubular specimens from pure torsion tests. In this regard, the new multiaxial setup presented in this paper showed to be well suited. The first conclusion that can be drawn is that multiaxial loadings are definitely more damaging than uniaxial loading situations when looking only at the uniaxial stresses. Results in Fig. 8 shows the detrimental effect on the fatigue life of an additional cyclic shear stress acting in combination to the cyclic normal stress. As for uniaxial tests, linear regressions are done with a fixed slope of 3 due to the limited number of specimens tested at only a few different levels of stresses. The linear regression of the proportional results gives a mean strength of ∆ 2•10 6 , = 82 . The nonproportional load case significantly reduces uniaxial fatigue life with regard to the obtained results expressed in terms of nominal stresses. The corresponding mean fatigue strength is ∆ 2•10 6 , = 57 . This conclusion is in accordance to what has been observed in multiaxial tests on tubular specimens where the nonproportional load case has been reported as more damaging in ductile material because of grain dislocations that occur when the principal stress plane angle varies during the loading cycle. The difference may also come from the use of a very high ratio of 1.10, but on the other hand, a very low stress amplitude puts the tests very close to the theoretical fatigue limit and thus the fatigue life should increase. 
This is effectively seen with both specimens NP9 and NP10 whom have reached the fatigue life of 10 7 cycles before being stopped and considered as run-outs. An important remark is that these tests were carried out with a shift angle of 180°. A 90° phase shift should be more damaging but this influence will only be studied in a subsequent task. It is not possible to draw a general conclusion on the use of high strength steel due to the very low number of specimens tested to date. However, if specimens P16 and P17 are neglected due to their different initiation location, the mean fatigue strength is equal to ∆ 2•10 6 , = 80 for S235JR specimens and ∆ 2•10 6 , = 90 for S690QL specimens indicating a possible improvement in the fatigue strength for the high strength steel. Multiaxial fatigue approaches for fatigue life estimation Models and criteria presented in this paper can be separated in three groups, i.e. equivalent stresses, equivalent stresses on the critical plane and interaction equations approaches. Most of them were initially used or defined to assess the fatigue life up to crack initiation, to the so-called technical crack size i.e. from 0.01 to 1mm in general. To study the total fatigue life, these models should be combined with propagation approaches based on fracture mechanics. Several authors [8,9] concluded that even for welded joints an average factor of 0.5 is observed between the time to initiate a 1 mm deep crack and through-thickness cracking, for both as-welded and weld machined and ground [13]. Modelling and stress definition with the effective notch stress approach The effective notch stress is part of the various notch stress approaches as explained by Radaj et al. [10]. The fatigue notch factor defining the effective stress concentration is determined with the help of a microstructural notch support hypothesis. The maximum notch stress is determined by an averaging over a material-characteristic small-length, area or volume at the notch, see for example Peterson [11], Neuber [12][13][14], Taylor [15]. Olivier et al. [16,17] confirmed the validity of the notch stress approach according to Radaj, for several weld configurations, including the scatter due to different notch values and the effect of stress ratio. They also concluded that even if a higher strength steel offers a better local fatigue strength on smooth specimens, it is also more sensitive to notch sharpness. Thus, in its application, even if the approach was proposed for any notch radius, one generally use the theoretical worst case, leading to the popular fictitious notch radius = 1 . This approach is used to model any notch at the expected crack location, weld toe or root, with a fictitious transition radius of 1mm. The numerical FE model is developed in Abaqus 6.16 (Dassault Systèmes) and a submodelling approach is used to model the transition radius. Hexahedral quadratic elements with reduced integration (C3D20R) are used in both global and sub-models. The IIW [4] recommends to discretize the radius by minimum 4 quadratic elements or 6 linear elements. Six elements are used in the model. One important feature of the setup is that it is not purely symmetrical, implying that the FE model cannot be simplified and flexible supports have to be taken into account. The linear roller bearings and the column are modelled with spring elements supporting the edges of the main horizontal plate. 
Calibration of the spring elements in the FE model is made with the help of measured displacements on the specimens during several quasi-static uniaxial and multiaxial loading situations. As shown in Fig. 10, the behaviour of the spring elements proved to be relatively linear. The allowable value for structural steels used in bridge and ship welded construction according to the IIW is FAT225/3 (i.e. 225 MPa at 2 million cycles and slope m = 3, with 97,7% survival probability, R ≤ 0.5). Furthermore, the FAT160/5 proposed by Sonsino [18] for the assessment of welded joints loaded in pure torsion/shear is chosen as the shear fatigue strength. Equivalent stress approaches If, in general, most of the multiaxial fatigue situations can be simplified into a two-dimensional system, the studied geometry and its relative stress state intricacy at the location of interest implies to consider a threedimensional stress tensor when using local approaches. The principal stresses (PS) is used first in the analysis. Secondly, the von Mises criterion is used. Indeed, Radaj et al. [10] stated that one should use the Von Mises equivalent stress combined with the effective notch stress in case of ductile material instead of the maximum principal normal stress that is more suited to brittle material as cast steel. Findley critical plane approach Critical plane approaches are all based on the research of the one or multiple planes that maximizes a specific damage parameter. The critical plane search method (CPS) used in this paper is similar to the minimum circumscribed circle concept proposed by Dang Van [19] and Papadopoulos [20,21]. This new CPS method proposed by Karpanan [22] can be used for all stress based proportional and non-proportional loading fatigue analysis and all strain-based fatigue analysis methods. The stress tensor in each material point is needed but in this paper two points (nodes) on the FE models are analysed for sake of simplification, i.e. the points subjected to the maximum principal stress and to the maximum von Mises equivalent stress. Pedersen [23] has recently shown that, when using the effective notch stress approach, for CA proportional loadings, Findley's [7] simple critical plane criterion leads to the safest predictions. The Findley criterion suggest that the maximum normal stress , acting in combination with the shear amplitude ∆ 2 ⁄ on a specific plane has a detrimental effect on the allowable alternating shear stress. The criterion is defined on the plane subjected to the maximum value of this combination, where is a material constant related to materials' sensitivity to normal stresses and is directly related to the materials' shear fatigue strength. For ductile materials, usually varies between 0.2 and 0.3. A value = 0.3 is used in the present paper. Findley's criterion is here directly compared to the shear fatigue strength FAT160/5. The multiaxial comparison value may have the value of 1,0 or 0,5 depending on the loading state (proportional or non-proportional) and the ductility of the material. In the EC3 [6], the criterion is based on a linear damage sum D according to Miner's rule eq.(3). Unlike other models presented, the EC3 and IIW equivalent stresses are dependent of previously defined shear and normal fatigue strengths and comparison value or damage sum. Multiaxial fatigue criteria results The equivalent stress is calculated for each test and then plotted in SN diagrams for each criterion presented previously. 
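Since the Findley criterion described above is defined by a search over candidate planes for the largest combination of shear stress amplitude and maximum normal stress, a minimal critical-plane search can be sketched as follows. This is an illustrative implementation only: it assumes a discretised grid of plane normals and a maximum-chord estimate of the shear amplitude (a simplification of the minimum circumscribed circle used by the CPS method cited above), and the function name, plane grid, and example load history are not taken from the paper.

```python
import numpy as np
from itertools import combinations

def findley_parameter(stress_history, k=0.3, n_theta=36, n_phi=18):
    """Search candidate planes for the maximum of  delta_tau/2 + k * sigma_n_max.

    stress_history : array of shape (T, 3, 3), symmetric stress tensors over one cycle
    k              : normal-stress sensitivity (0.3 as used in the text)
    """
    best = -np.inf
    S = np.asarray(stress_history, dtype=float)
    for theta in np.linspace(0.0, np.pi, n_theta, endpoint=False):
        for phi in np.linspace(0.0, np.pi, n_phi, endpoint=False):
            n = np.array([np.sin(phi) * np.cos(theta),
                          np.sin(phi) * np.sin(theta),
                          np.cos(phi)])
            traction = S @ n                           # (T, 3) traction vectors on the plane
            sigma_n = traction @ n                     # (T,)  normal stress history
            tau_vec = traction - np.outer(sigma_n, n)  # (T, 3) shear vectors in the plane
            # Shear amplitude taken as half of the largest chord traced by the shear vector
            # (a simplification of the minimum circumscribed circle construction).
            chords = [np.linalg.norm(a - b) for a, b in combinations(tau_vec, 2)]
            tau_amp = 0.5 * max(chords) if chords else 0.0
            best = max(best, tau_amp + k * sigma_n.max())
    return best

# Illustrative proportional history: normal and shear stress varying in phase (MPa), R = 0.1
t = np.linspace(0, 2 * np.pi, 16, endpoint=False)
hist = np.zeros((t.size, 3, 3))
hist[:, 0, 0] = 100 * (0.55 + 0.45 * np.sin(t))
hist[:, 0, 1] = hist[:, 1, 0] = 50 * (0.55 + 0.45 * np.sin(t))
print(findley_parameter(hist))
```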
However, in order to analyse the results and present them in a clear and brief manner, Table 4 gives the most important outcomes of the tests data expressed with the help of the five different criteria. A graphical example is given in Fig. 12, showing data plotted in terms of Findley's equivalent stresses defined with the effective notch stress approach. The notch stress has the ability to regroup the different load cases in a single scatter band but all criteria used in this paper lead to a large scatter when considering non-proportional tests except for the Findley critical plane approach, which showed good results. It is interesting to note that the characteristic curve for the uniaxial case with principal stresses is FAT227/3 which is in good agreement with the FAT225/3 from the IIW. In general, the IIW and EC3 criteria give the larger scatter. Table 4 also gives the probability of non-conservative prediction calculated with the help of a life ratio as employed by Bruun & Härkegard [24] and Pedersen [23], where ∆ , , is the equivalent strength at a specific number of cycles for each tests and ∆ , is the design shear or normal fatigue strength at the same number of cycles. A Weibull distribution is then fitted to the life ratios for each situation. The resulting scale and shape parameters are then used to compute the nonconservative prediction , i.e. the probability that tests results have life ratios < 1.0. In the case of uniaxial and multiaxial proportional load cases, both the maximum principal stress and von Mises give results in the range of 12 to 20% but the principal stress gives overall better results. Again, the combination of Findley's equivalent stress with the effective notch stress method is the safest with a of 1.56%. In the case of non-proportional loadings, all criteria overestimate the fatigue lives, giving a which tends to 0 this is mainly due to the very low strange range level resulting from the out-of-phase angle of 180°. In general, the IIW and EC3 gives surprisingly non-conservative values, with an advantage for the IIW that has better results even in the non-proportional case certainly due to the use of = 0.5 in this case. It is important to mention that all these conclusions are mostly indicative because of the very limited number of tests that are analyzed. Conclusions A new multiaxial setup was presented that allowed proportional and non-proportional multiaxial tests in a typical welded plate detail. Mode II/III cracks were obtained under multiaxial loadings tests. The results on a transversal attachments loaded both in normal stress and shear stress clearly showed the detrimental effect of the multiaxial loadings. The local 1mm effective notch stress approach was shown suitable to the analysis under multiaxial stress states. Combined to Findley's criterion and compared to the shear fatigue strength curve FAT160/5 or combined to the maximum principal stress range and compared to the normal fatigue strength curve FAT225/3, it proved to be both safe and the most accurate. Effects of the use of high strength steel was explored and even if results are scarce, one may expect an improvement in the total fatigue life under multiaxial loadings. Among parameters to be considered in the future to improve the models: • The influence of transverse residual stresses and their relaxation. Better knowledge would improve initiation life prediction with for example the Manson-Coffin relation using Neuber's notch rule and multiaxial strain based models e.g. Fatemie-Socie [25].
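Returning to the life-ratio statistics discussed above, the probability of a non-conservative prediction (a Weibull distribution fitted to the life ratios, then evaluated at a ratio of 1.0) can be sketched as below. The fitting call and the example ratios are assumptions for illustration only, not the values or software used in the study.

```python
import numpy as np
from scipy import stats

def prob_nonconservative(life_ratios):
    """Fit a two-parameter Weibull to the life ratios and return P(ratio < 1),
    i.e. the estimated probability of a non-conservative prediction."""
    ratios = np.asarray(life_ratios, dtype=float)
    shape, loc, scale = stats.weibull_min.fit(ratios, floc=0.0)  # two-parameter fit
    return stats.weibull_min.cdf(1.0, shape, loc=loc, scale=scale)

# Hypothetical equivalent-strength / design-strength ratios for a set of tests:
print(prob_nonconservative([1.4, 1.2, 1.6, 1.1, 1.3, 1.5, 1.25]))
```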
2019-01-23T00:30:55.464Z
2018-06-01T00:00:00.000
{ "year": 2018, "sha1": "129569af7fc490e272c00e658e0ad5bdc79afc41", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2018/24/matecconf_fatigue2018_16007.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "129569af7fc490e272c00e658e0ad5bdc79afc41", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
253534490
pes2o/s2orc
v3-fos-license
Physico-Mechanical, Thermal, Morphological, and Aging Characteristics of Green Hybrid Composites Prepared from Wool-Sisal and Wool-Palf with Natural Rubber In the reported study, two composites, namely sisal-wool hybrid composite (SWHC) and pineapple leaf fibre(PALF)-wool hybrid composite (PWHC) were prepared by mixing natural rubber with equal quantities of wool with sisal/PALF in a two-roll mixing mill. The mixture was subjected to curing at 150 °C inside a 2 mm thick mold, according to the curing time provided by the MDR. The physico-mechanical properties of the composite viz., the tensile strength, elongation, modulus, areal density, relative density, and hardness were determined and compared in addition to the solvent diffusion and thermal degradation properties. The hybrid composite samples were subjected to accelerated aging, owing to temperature, UV radiation, and soil burial tests. The cross-sectional images of the composites were compared with a scanning electron microscopic analysis at different magnifications. A Fourier transform infrared spectroscopic analysis was conducted on the hybrid composite to determine the possible chemical interaction of the fibres with the natural rubber matrix. Introduction Green composites are expected to be the next generation of sustainable composite materials, and both academia and industry are interested in them [1]. These materials are made from natural resources that are renewable, recyclable, and biodegradable. Green composites are typically made by combining natural resins with plant and animal fibres. Natural fibres are demonstrating that they are a more ecologically friendly, cost-effective, and a lighter alternative to synthetic fibres [2]. Bio-resins, which are derived from protein, starch, and vegetable oils, have been created as an alternative to petroleum-based polymers. Compared to synthetic fibres, natural plant-based fibres have a number of clear advantages, such as a reasonable price, good mechanical properties, thermal and acoustic insulation, and can degrade naturally. The short natural fibre reinforced rubber composites have been found to possess a good dimensional stability and high green strength [3]. Sisal is a commercially valuable fibre, extracted from Agave sisalana leaves. It is primarily utilized in the production of carpets, insulating panels, and maritime ropes and is commercially grown in Brazil, Tanzania, Kenya, and Madagascar. It has tremendous tensile strength and is quite robust. Many research attempts have been reported for the use of sisal fibre in composites [4]. Pineapple is one of the most popular fruits extensively grown in Costa Rica, the Philippines, Brazil, Thailand, China, and India. The pineapple leaf fibre (PALF) is extracted from the leftover leaves of the pineapple plant. Of all of the natural fibres derived from plant leaves, PALF has the largest proportion of cellulose content and the lowest microfibrillar angle, which results in an exceptionally good tensile strength. PALF isused for a variety of purposes, such as the creation of textiles [5], paper [6], and Preparation of the SWHC and PWHC The wool, sisal, and PALF fibre were chopped in to 1.5 cm length. The NR was adequately masticated in a two-roll mixing mill (300 × 500 cm) for two minutes. The masticated rubber, the vulcanizing agents, and the fibreswere combined, as mentioned in Table 1. Just before adding sulphur, the wool fibre was added to the NR polymer matrix, along with the sisal fibre for the SWHC and withthe PALF for the PWHC. 
Care was taken to preserve the compound flow direction so that the majority of the fibres followed the same flow path. In order to ensure an equal distribution of the fibres in the polymer matrix, the samples were milled for 10 min [18]. Table 1. List of rubber compounding ingredients [19,20]. Using a Moving Die Rheometer (Rheometer MDR 2000, Alpha Technology), the curing properties of the SWHC and the PWHC were studied in accordance with the ASTM D5289 method at 150 °C. The composites were vulcanized at 150 °C inside a 2 mm thick mold at 100 bar pressure, according to the curing time provided by the MDR (5 min of curing for both composites; see the t90 values in Table 2). Following the vulcanization, the hybrid composite samples were removed from the mold and cooled. The samples were pre-conditioned at 25 °C and 65% RH before further analysis. For each set of composites, five replicas were prepared. The cure rate index, a measure of the curing rate based on the difference between t90 (optimum cure time) and ts2 (incipient scorch time), was calculated using the formula [21]

Cure rate index = 100/(t90 − ts2)

Analysis of the Physico-Mechanical Properties of the Composites

A universal testing machine (Tinius Olsen H50KT) was employed for the determination of the tensile and tear strength of the SWHC and PWHC. The samples were analyzed in accordance with the ASTM D412 and ASTM D624 standards, respectively. Three samples were tested for each composite and the average result was calculated. The moisture content of the hybrid composites was determined in accordance with ASTM D2495-07. The hardness of the SWHC and PWHC was assessed with the aid of a Shore-A hardness tester (Presto), following the ASTM D-2240 guidelines. The areal density of the composites was calculated using the following formula.

Areal density (g/m²) = [weight of the sample (g) / area of the sample (cm²)] × 10000 (2)

The relative density of the SWHC and PWHC in water was calculated according to ASTM D792, using the equation below.

Relative density = weight in air / (weight in air − weight in water) × density of water (3)

The SWHC and PWHC were analyzed for their solvent diffusion properties using water and toluene. Three replicas from each composite were cut in a round disc shape (2 cm diameter). Prior to dipping in the solvent, both specimens underwent preconditioning (25 °C and 65% RH) and were weighed. The hybrid composite samples were dipped in their respective solvents and removed from the solvents at predefined intervals. The specimens were gently hand pressed between blotting paper, to remove the surplus solvent, and weighed. The procedure was repeated until a swelling equilibrium was reached [18]. The mole% solvent uptake of the composite samples was calculated using Equation (4), where Qt represents the solvent's mole% uptake at a certain time t. Further, to investigate the diffusion properties of the SWHC and PWHC with water and toluene, a graph of Qt vs. √t was generated.

Qt (mol%) = [(weight of solvent absorbed at time t / molar mass of the solvent) / initial weight of the sample] × 100 (4)

The crosslink density associated with the composites immersed in toluene is calculated using the following set of equations [18].

ν = 1/(2Mc) (5)

Mc = −ρp Vs φ^(1/3) / [ln(1 − φ) + φ + χφ²] (6)

χ = β + Vs (δs − δp)² / (RT) (8)

The crosslinking density of the material is given by Equation (5). Equation (6) is used to compute the molar mass between the crosslinks, "Mc". Equation (7) is used to compute "φ", which is the volume fraction of rubber at the swelling equilibrium. "ρp" stands for the polymer density, "ρs" for the solvent density, and "Vs" for the molar volume of each solvent. Equation (8) can be used to calculate "χ", which is the interaction parameter between the polymer and the solvent.
Equation (6) would be used to derive "Mc", using the values of "φ" and "χ", as determined by Equations (7) and (8), respectively. In Equation (4), "β" "δs", and "δp" stand in for the lattice constant (zero for polymers), the solubility parameter of the solvent, and the solubility parameter of the polymer, respectively. "R" is the universal gas constant and "T" is the temperature. For all testing, five replicas were made and the average value was taken. FTIR, SEM, TGA, and the Aging Analysis of the Composites With a Perkin Elmer Spectrum-2 spectrometer, the FT-IR spectra of theSWHC and PWHC were recorded across a range of 4000 cm −1 to 400 cm −1 , using the attenuated total reflection (ATR). The spectrum was obtained after 24 consecutive scans. The JEOL-JSM-6390 scanning electron microscope was used to examine the surface morphological properties of the hybrid composites. The samples were sputter coated with gold-palladium to prevent electron beams from having any charge effects during the examination. The images were captured at various magnifications at a 20 kV accelerating voltage. The thermogravimetric analysis was carried out, using TA instruments (SDT Q600) in an inert atmosphere at temperatures ranging from 25 • C to 700 • C. The heating rate, 10 • C/min and a DTA sensitivity of 0.001 • C were maintained throughout the analysis. The composite samples were subjected to accelerated aging to temperature (ASTMD 573-04), UV, and biodegradation as per the standard methods reported in our previous studies [22]. Results and Discussion 3.1. Cure Characteristics Figure 1 shows the cure characteristics of the SWHC and PWHC. It is apparent from the figure that both composite mixtures followed almost the same pattern in the MDR curve. It can be clearly understood from the graph that the time required for the initiation of the crosslinking is nearly the same for the SWHC and PWHC. There is an initial decrease in torque that was noted in both the SWHC and PWHC, because of the softening of the rubber polymer matrix, when subjected to heat. When the crosslinking was initiated, with respect to time, the torque increased to a maximum, where the crosslinking was the highest and then showed a slight reduction, and then became almost constant. The corresponding torque and cure time values are shown in Table 2. PWHC were recorded across a range of 4000 cm to 400 cm , using the attenuated total reflection (ATR). The spectrum was obtained after 24 consecutive scans. The JEOL-JSM-6390 scanning electron microscope was used to examine the surface morphological properties of the hybrid composites. The samples were sputter coated with gold-palladium to prevent electron beams from having any charge effects during the examination. The images were captured at various magnifications at a 20kV accelerating voltage. The thermogravimetric analysis was carried out, using TA instruments (SDT Q600) in an inert atmosphere at temperatures ranging from 25°C to 700°C. The heating rate, 10°C/min and a DTA sensitivity of 0.001°C were maintained throughout the analysis. The composite samples were subjected to accelerated aging to temperature (ASTMD 573-04), UV, and biodegradation as per the standard methods reported in our previous studies [22]. Figure 1 shows the cure characteristics of the SWHC and PWHC. It is apparent from the figure that both composite mixtures followed almost the same pattern in the MDR curve. 
It can be clearly understood from the graph that the time required for the initiation of the crosslinking is nearly the same for the SWHC and PWHC. There is an initial decrease in torque that was noted in both the SWHC and PWHC, because of the softening of the rubber polymer matrix, when subjected to heat. When the crosslinking was initiated, with respect to time, the torque increased to a maximum, where the crosslinking was the highest and then showed a slight reduction, and then became almost constant. The corresponding torque and cure time values are shown in Table 2. In fibre reinforced rubber composites, the maximum torque is an indication of the extent of the crosslinking and the stiffness, while the minimum torque indicates the fibre content present in it [23]. The maximum torque for the PWHC (48.68) is marginally higher than that for the SWHC (38.36). The value of ts 2 (insipient scorch time) is same for both composites, which indicate that the vulcanization in both composites begins at the same time and the similar value of t 90 shows that the vulcanization proceeds to a completion at the similar time, for the composites. Cure Characteristics Regardless of the same values for t 90 and ts 2 in both the SWHC and PWHC, the maximum torque for the PWHC was found to be high. This indicates that the crosslinking associated with the vulcanization was increased by the addition of PALF in the wool-NR matrix, in comparison to the addition of the sisal fibre. The cure rate index is almost same for both composites, which indicates that the rate of curing is almost the same for the SWHC and PWHC [24].Since the rubber content is lower in the NR hybrid composite, the cure curve declined after reaching t 90 . This might be due to the over curing of the composites. Similar observations have been reported elsewhere [25]. Tensile and Tear Properties The stress-strain curve of the composite samples is depicted in Figure 2. The curves indicate that the PWHCs could withstand more stress in comparison with SWHC, meanwhile the latter possess a higher elongation. The corresponding tensile strength data is shown in the Table 3. Moisture Absorption and the Hardness Properties In comparison with the synthetic fibre composites, the moisture content of the SWHC and the PWHC was found to be in a higher range ( Table 4). The higher moisture content of the developed hybrid composites may be due to the high inclusion (100 phr) of hydrophilic natural fibres. In comparison with the SWHC, the PWHC has a marginally lower moisture content (5.90%), perhaps due to the fact that sisal fibres absorb more moisture than the PALF [27]. In the context of composites, the high moisture content of the natural fibre is a major concern to researchers. Sisal and PALF are lignocellulosic, and the presence of hemicelluloses causes a high moisture uptake. The presence of moisture in the natural fibre reinforced composites causes a weaker interaction between the fibres and the matrix [28]. Rubber is a soft polymer. The inclusion of natural fibres, such as wool, sisal, and PALF, in large quantity significantly increase the hardness of the composites. A good network formation of a natural fibre inside of the soft rubber polymer matrix may be the reason behind it, as a result, the Shore A hardness increased. The hardness of the PWHC was 91.56 and that of the SWHC was 91.3. 
Though there exists a considerable difference in the mechanical properties, hardness (91), areal density (2700g/m²), and relative densi- It is evident from Table 3 that the tensile strength for the PWHC (11.14 MPa) is almost double thanthat for the SWHC (6.09 MPa). The higher tensile strength of the PWHC, in comparison with the SWHC may be due to the following reasons. (i) The higher tensile strength of the PALF (38.5 g/tex) than sisal fibre (30.9 g/tex), (ii) the dense packing of the fibre, the lack of voids and a better interfacial adhesion of the PALF in the rubber matrix, as observed from the SEM images, (iii) the better transfer of stress between the NR and PALF, compared to that between the NR and sisal fibre, as indicated by the tear analysis (Table 3). Thus, Young's modulus associated with the PWHC is higher than that of the SWHC. The lower value of Young's modulus and the elongation (%) indicates that the PWHC has a much better resistance to elastic deformation, compared to the SWHC. The tear strength of a polymer indicates the ability of the polymer to withstand tearing or cracking, when they are subjected to an external force. The tear strength of the PWHC (85.0 N/mm) is significantly higher than that for the SWHC (45.9 N/mm). This is because the transfer of stress in the PALF incorporated composite, is better than that in the sisal fibre incorporated composite. The lower tear strength is also an indication of a poor interaction between the fibre and the polymer matrix [26]. The SWHC possessed a higher elongation at break (5.42%) than the PWHC (4.49%). Moisture Absorption and the Hardness Properties In comparison with the synthetic fibre composites, the moisture content of the SWHC and the PWHC was found to be in a higher range ( Table 4). The higher moisture content of the developed hybrid composites may be due to the high inclusion (100 phr) of hydrophilic natural fibres. In comparison with the SWHC, the PWHC has a marginally lower moisture content (5.90%), perhaps due to the fact that sisal fibres absorb more moisture than the PALF [27]. In the context of composites, the high moisture content of the natural fibre is a major concern to researchers. Sisal and PALF are lignocellulosic, and the presence of hemicelluloses causes a high moisture uptake. The presence of moisture in the natural fibre reinforced composites causes a weaker interaction between the fibres and the matrix [28]. Rubber is a soft polymer. The inclusion of natural fibres, such as wool, sisal, and PALF, in large quantity significantly increase the hardness of the composites. A good network formation of a natural fibre inside of the soft rubber polymer matrix may be the reason behind it, as a result, the Shore A hardness increased. The hardness of the PWHC was 91.56 and that of the SWHC was 91.3. Though there exists a considerable difference in the mechanical properties, hardness (91), areal density (2700 g/m 2 ), and relative densities (1.11 g/cm 3 ) of both, the PWHC and SWHC were found to be almost the same. It can be seen from Table 4 that the relative density of both hybrid composites is almost same. Figure 3a displays the FTIR spectra of sisal, PALF, and wool fibre. The sharp peak for wool fibre, seen at 1640 cm −1 , is due to the amide I group present in the wool protein, while the peak at 3272 cm −1 denotes the amide N-H stretching vibration. The bending vibration of the C-N-H bond corresponds to the peaks at 1520 cm −1 [29][30][31]. 
The FTIR spectra show similar peaks for both the PALF and sisal, as they are lignocellulosic fibres. In the case of the sisal fibre and PALF, the broad peak at 3332 cm−1 corresponds to the -OH stretching vibrations from cellulose, while the symmetric and asymmetric stretching of the CH2 groups is indicated by the peak at 2886 cm−1 [32]. The peaks at 1732 cm−1 can be attributed to the stretching vibration of C=O groups in hemicellulose, and the peaks between 1627 and 1606 cm−1 indicate the aromatic C=C stretching vibrations in lignin [33,34]. It can also be observed that the intensity of the peak at 1606 cm−1 is higher for the sisal fibre compared to PALF, indicating that the lignin content is higher for the sisal. The C-O/C-C group stretching is indicated by the peaks at 1017 cm−1 [35]. It can be clearly observed from Figure 3b that the SWHC, PWHC, and the vulcanized rubber show almost the same peaks. The peaks observed at 2920 cm−1 and 2848 cm−1 indicate the symmetrical stretching of the -CH3 and -CH2- bonds, respectively. The characteristic in-plane bending of the amide II group in the wool protein is denoted by the peak at 1536 cm−1. The sharp peaks observed at 1452 cm−1 and 1370 cm−1 indicate the deformation of the -CH3 bonds, while the out-of-plane bending of the C=C-H group is shown by the peak at 830 cm−1 [36]. All of these peaks are characteristic of the vulcanized rubber sample. It is observed from the spectra that the peaks corresponding to the wool, PALF, and sisal are not prominent and are masked by the natural rubber. No shift in the peaks, even after the addition of the fibres, indicates that there is no chemical interaction between the fibres and the polymer matrix, leading to the conclusion that the interaction may be physical, involving van der Waals forces or hydrogen bonds.

SEM Analysis

Figure 4a-d shows the SEM images of the SWHC and Figure 4e-h shows those of the PWHC. It is visible from the images that, in both composites, a good network of fibres is present throughout the polymer matrix. It can also be seen from the images that the wool fibre (with scales) is distributed evenly with the sisal fibre (without scales) in the SWHC and with PALF (without scales) in the PWHC. Most of the fibres are in a uniform direction inside the rubber matrix.
The absence of large voids in the composites indicates that the hybrid composites were properly prepared, without the entrapment of air inside them. However, on comparing the cross-sectional SEM images of the SWHC and PWHC (Figure 4d,h), it is found that the number of voids is comparatively higher in the SWHC than in the PWHC. This indicates a good adhesion between the PALF and the NR matrix. Fibre pull-out from the matrix is visible in Figure 4b,f, and the images suggest that in the PWHC (Figure 4h) the voids formed at the root of the pulled-out fibres are relatively small compared to those in the SWHC. The higher interfacial adhesion between the natural fibres and the polymer matrix keeps the fibres intact with the matrix, which can also be the reason for the higher tensile strength shown by the PWHC (11.14 MPa) compared to the SWHC (6.09 MPa) [37]. It has been reported that the formation of large voids is an indication of poor interfacial adhesion between the fibres and the matrix [13]. It is inferred from the SEM analysis that the failure mechanisms in these composites were fibre pull-out, fibre fracture, and interfacial debonding.

Thermogravimetric Analysis

Figure 5a shows the TG curves of the SWHC and PWHC. Due to the similarity in their chemical nature and composition, both composites followed almost the same pattern. As discussed regarding the physical properties of the composites, the composites have a moisture content of 6-7%; the minor weight loss at 110 °C may therefore be due to the removal of moisture from the composites. The TGA shows a major weight reduction between 250 °C and 400 °C, and both composites showed a weight loss of 84.14% until a constant weight was reached at a temperature of 442.5 °C. The thermal degradation of the composites may be explained based on the degradation of the individual components. In the case of the wool fibre, the thermal degradation takes place in three steps [38]. The first stage of degradation takes place between 100 °C and 135 °C and is attributed to the loss of moisture content [39]. During the second step, the maximum weight loss occurs between 218 °C and 390 °C, due to the breakdown of the microfibril-matrix structure and the disulfide linkages [38]. In the third step, the various peptide bonds present in the wool are broken at around 390-500 °C. Above 500 °C, the char oxidation reactions dominate [31,40]. Being lignocellulosic in nature, for both the sisal and PALF, after the removal of moisture at 110 °C, the second weight loss corresponds to the degradation of hemicellulose, which starts at about 190 °C. Cellulose then starts degrading from 290 °C up to 360 °C, while lignin degradation starts at about 280 °C and continues even above 500 °C [32]. For the vulcanized rubber, the degradation begins at about 200 °C and is completed at about 475 °C, with the maximum weight loss obtained at 358 °C, which may be attributed to the oxidation of the rubber [41]. It is also inferred from the data that the incorporation of wool, sisal, and PALF slightly increases the thermal stability of the vulcanized rubber. The degradation process shows one corresponding weight-loss peak in the DTG curves (Figure 5b), which corresponds to a single turn in the TG curves and is caused by thermal scission of the C-C chain bonds in the natural rubber matrix [42]. The DTG curves show that at 361.67 °C both composites exhibit an equal rate of weight loss (0.8373%/°C) with respect to temperature. The results indicate that the SWHC and PWHC possess a similar range of thermal stability, which may be due to the fact that both sisal fibre and PALF are plant fibres with an almost similar chemical structure and properties.

Solvent Diffusion

The uptake of toluene and water by the SWHC and PWHC via diffusion was analyzed and plotted as Qt (mole% uptake of solvent) against √t (min). The process of diffusion is a kinetic parameter and is related to the nature of the polymer, the nature of the fillers added, its free volume, the extent of crosslinking, etc. [18]. It is apparent from Figure 6a that the rate of diffusion and the quantity of toluene absorbed are higher in the SWHC than in the PWHC. The low absorption and diffusion of toluene in the PWHC may be due to the dense packing of the PALF inside the rubber matrix, which restricts the diffusion of the aromatic solvent [43]. This is also supported by the SEM images, which showed a higher packing density, better adhesion, and a lower void content in the PWHC than in the SWHC. At the time of saturation, the SWHC showed a weight gain of 184.65%, while it was 95.85% for the PWHC.
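The mole% uptake values reported above follow the usual sorption-kinetics bookkeeping: the mass of solvent absorbed at each time point is converted to moles and normalised by the initial sample mass. The short sketch below illustrates this calculation and the Qt versus √t plot; the time points, sample masses, and normalisation convention are illustrative assumptions, not data from this study.

```python
# Illustrative sketch: mole% solvent uptake Q_t plotted against sqrt(t).
# All numbers below are made-up placeholders, not measurements from this work.
import numpy as np
import matplotlib.pyplot as plt

MOLAR_MASS_TOLUENE = 92.14  # g/mol

def mole_percent_uptake(wet_mass_g, dry_mass_g, molar_mass_g_per_mol):
    """Q_t = 100 * (moles of solvent absorbed) / (initial dry sample mass in g)."""
    moles_absorbed = (wet_mass_g - dry_mass_g) / molar_mass_g_per_mol
    return 100.0 * moles_absorbed / dry_mass_g

t_min = np.array([0, 15, 30, 60, 120, 240, 480, 1440])               # immersion time (min)
mass_g = np.array([1.00, 1.32, 1.48, 1.66, 1.79, 1.84, 1.85, 1.85])  # swollen sample mass (g)

qt = mole_percent_uptake(mass_g, mass_g[0], MOLAR_MASS_TOLUENE)

plt.plot(np.sqrt(t_min), qt, marker="o")
plt.xlabel("sqrt(t) (min^1/2)")
plt.ylabel("Q_t (mol %)")
plt.title("Toluene uptake (illustrative)")
plt.show()
```

The same bookkeeping applies to the water-uptake curves discussed below, with the molar mass of water used in place of that of toluene.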
Interestingly, it can also be seen from Figure 6b that the mole% uptake of water does not differ much between the SWHC and PWHC, although the SWHC curve shows a marginally higher uptake of water. Both composites showed similar rates of diffusion up to saturation. The water may enter the composite through small cracks and pores, generating diffusion pathways. Both the sisal fibre and PALF are hygroscopic and allow the diffusion of water through them, whereas the matrix is hydrophobic. At the time of saturation, the SWHC gained 14.99% and the PWHC gained 14.39% in weight, respectively. These high water absorption values, though common in natural fibre-reinforced composites, are not a desirable quality for the composites. It can also be observed from Table 5 that both composites possess a similar crosslink density. The crosslink density is defined as the density of chains or segments that connect two infinite sections of the polymer network [44]. The value of Mc, the molar mass between crosslinks, is so high that it can be considered an indication of the greater crosslinking of the networks present in the composites [44].

Accelerated Thermal and UV Aging

Composites of NR are susceptible to degradation by heat, UV radiation, ozone, humidity, etc. [45]. The PWHC and SWHC (Figure 7) were subjected to accelerated thermal and UV degradation, and the changes in their mechanical properties were analyzed. In long-chain macromolecules with complicated crosslinked structures, the application of heat, as well as radiation, can cause scissions not only to the main chain but also to the side chains, which may lead to a loss of weight and the emission of gases with low molecular weights. As a result, exposure to heat or radiation can cause changes to the chemical structure of the composites, such as chain scission, crosslink formation, and breakage [46]. It can be observed from Figure 8 that there is a slight increase in the tensile strength and Young's modulus after thermal aging for both the SWHC and PWHC. This increase might be due to the formation of new crosslinks when the vulcanized NR is subjected to heating [47].
When exposed to prolonged UV radiation, the tensile strength increased for both the SWHC and PWHC, although there was a significant reduction in the Young's modulus of the material. The trend shown by the un-aged, thermally aged, and UV-aged samples was similar for both the PWHC and SWHC, in that there was an increase in their tensile strengths. In the case of the elongation at break (%), the two composites showed different trends: for the SWHC, the elongation increased after UV aging with respect to the un-aged samples, whereas for the PWHC it remained stagnant. The Young's modulus of both the PWHC and SWHC showed a similar trend, namely an increase in the modulus after thermal aging and a decrease in the modulus after UV aging.

Biodegradation

Biodegradation is a process in which a compound decomposes due to the enzymes or chemicals secreted by bacteria or fungi present in soil. Both the SWHC and PWHC were subjected to accelerated biodegradation for 60 days through a soil burial test, as mentioned earlier. Once the stated period was over, the reduction in weight for the SWHC was found to be 2.43%, while it was 2.34% for the PWHC. It can be seen from Table 6 that both composites showed an almost equal loss of weight. The biological decomposition appears to have occurred at a very slow rate, which may be due to the following reasons: (1) the vulcanized rubber is poorly biodegradable, owing to its high degree of crosslinking; (2) though wool, sisal, and PALF are natural fibres that degrade over time, the lignin present in the PALF and sisal masks and protects the cellulose and hemicellulose from rapid degradation by microorganisms, due to its aromatic and crosslinked structure [13]; and (3) the extensive packing of fibres in the matrix can slow down the degradation process, since the tight network prevents the excessive growth of microorganisms [48]. Nevertheless, the high content of natural fibres in the developed composites increased the absorption of moisture, which eventually led to the growth of microorganisms and caused the mass degradation of the composites.

Conclusions

Hybrid green composites, SWHC (sisal fibre + coarse wool fibre + NR) and PWHC (PALF + coarse wool fibre + NR), were fabricated and compared for their morphological, physical, mechanical, and aging properties. In comparison with the SWHC, the PWHC showed a higher tensile strength and modulus, a higher tear strength, a lower moisture absorption, and a lower mole% uptake (diffusion) of toluene and water. The PWHC also showed a higher torque during the cure analysis.
The results obtained from the FTIR spectra provided no valid evidence for any chemical interaction between the polymer matrix and the natural fibres in either the SWHC or the PWHC. An analysis of the SEM images showed that the PWHC had a better packing of fibres and, thereby, an increased interfacial adhesion between the fibres and the polymer matrix. The thermal degradation characteristics of the two composites were nearly identical. Both hybrid composites degraded slowly when subjected to the soil burial test. It is concluded that the newly developed hybrid composites can be regarded as good substitutes for non-biodegradable composites and can be considered as potential materials for packaging and household applications.
Ember: No-Code Context Enrichment via Similarity-Based Keyless Joins

Structured data, or data that adheres to a pre-defined schema, can suffer from fragmented context: information describing a single entity can be scattered across multiple datasets or tables tailored for specific business needs, with no explicit linking keys (e.g., primary key-foreign key relationships or heuristic functions). Context enrichment, or rebuilding fragmented context, using keyless joins is an implicit or explicit step in machine learning (ML) pipelines over structured data sources. This process is tedious, domain-specific, and lacks support in now-prevalent no-code ML systems that let users create ML pipelines using just input data and high-level configuration files. In response, we propose Ember, a system that abstracts and automates keyless joins to generalize context enrichment. Our key insight is that Ember can enable a general keyless join operator by constructing an index populated with task-specific embeddings. Ember learns these embeddings by leveraging Transformer-based representation learning techniques. We describe our core architectural principles and operators when developing Ember, and empirically demonstrate that Ember allows users to develop no-code pipelines for five domains, including search, recommendation, and question answering, and can exceed alternatives by up to 39% recall, with as little as a single line configuration change.

INTRODUCTION

Machine learning (ML) systems that extract structural and semantic context from unstructured datasets have revolutionized domains such as computer vision [34] and natural language processing [18,49]. Unfortunately, applying these systems to structured and semi-structured data repositories that consist of datasets with pre-defined schemas is challenging, as their context is often fragmented: they frequently scatter information regarding a data record across domain-specific datasets with unique schemas. For instance, in Figure 1B, information regarding Asics shoes is scattered across three catalogs with unique schemas. These datasets adhere to fixed schemas that are optimized for task-specific querying, and often lack explicit linking keys, such as primary key-foreign key (KFK) relationships.

Figure 1: An end-to-end task requiring context enrichment. Predicting the rating of and recommending a new product (A82) requires relating the Asics products (highlighted in dark gray) via a keyless join (top). This process is manual due to data heterogeneity; we aim to automate it (bottom).

This constrains users to a single view of an entity that is specialized for a specific business need. Associating these fragmented data contexts, a process we denote as context enrichment, is critical in enabling ML-powered applications over such datasets, yet it is a heavily manual endeavor due to task and dataset heterogeneity. Engineers develop solutions for context enrichment tailored to their task, such as similarity-based blocking in data integration [47], retriever models in question answering [67], or retrieval functions in search [53] (see Section 2).
Constructing these independent solutions is repetitive, time-consuming, and results in a complicated landscape of overlapping, domain-specific methods. For instance, consider the scenario depicted in Figure 1: An e-commerce company has a proprietary product catalog, and aggregates product information from several external vendors to perform market analysis. Each vendor uses a unique product catalog, each with unique product representations (i.e., schema) ranging from free-form text to tabular product descriptions; products may overlap and evolve ( Figure 1B). Given normalized tables containing user and rating data, an engineer wishes to estimate the rating for a candidate new product (A82) and identify users to recommend the product to ( Figure 1A). The engineer must first perform context enrichment by joining information across tables to extract features that capture similarities between the new (A82) and existing (A80, P8) products. They can then estimate the product rating, and recommend the new product to users based on how they rated related products. The classic data management approach is to denormalize datasets using KFK joins. This fails to solve the problem due to two reasons. First, not all tables can be joined when relying on only KFK relationships (e.g., there is no KFK relationship between Catalog B and Catalogs A or C). Second, even when KFK relationships exist, as between Catalogs A and C (ITEM), relying on only KFK semantics fails to capture the similarity between A82 and A80. Alternatively, the engineer can rely on similarity-based join techniques, as in data blocking [47], built to avoid exhaustive, pairwise comparison of potentially joinable records. However, as we show in Section 8.3, the optimal choice of join operator to maximize recall of relevant records is task-dependent, and may not scale at query time when new records arrive. The engineer must first note that the Description column in Catalog A relates to the Brand and Model columns in Catalog B and the Size, Make, and Color columns in Catalog C. They can then select a custom join based on table properties: for matching primarily short, structured data records, they may want to join based on Jaccard similarity, whereas BM25 may be better suited for purely textual records (see Table 3). As database semantics do not natively support these keyless joins that require similarity-based indexing, joining arbitrary catalogs remains heavily manual-even large companies rely on vendors to categorize listings, which results in duplicate listings. 1 To counter this manual process, we draw inspiration from recent no-code ML systems such as Ludwig [42], H20.ai [4], and Data Robot [3] that are rapidly shifting practitioners towards higher-level configuration-based abstractions for developing ML applications. Despite their success, these systems leave context enrichment as a user-performed data preparation step. 2 In this paper, we evaluate how to bring no-code semantics to context enrichment. No out-of-the-box query interface surfaces the relatedness between A80, A82, and P8 (all Asics, two GT-1000, and two blue), and links these records to the Ratings table with minimal intervention. The challenge in developing a no-code context enrichment system to enable this is to construct an architecture that is simultaneously: (1) General: Applicable to a wide variety of tasks and domains. 
Our key insight to enable such a system is to simplify context enrichment by abstracting an interface for a new class of join: a learned keyless join that operates over record-level similarity. Just as traditional database joins provide an abstraction layer for combining structured data sources given KFK relationships, we formalize keyless joins as an abstraction layer for context enrichment. We then propose Ember: a no-code context enrichment framework that implements a keyless join abstraction layer. Ember creates an index populated with task-specific embeddings that can be quickly retrieved at query time, and can operate over arbitrary semistructured datasets with unique but fixed schema. To provide generality, Ember relies on Transformers [58] as building blocks for embedding generation, as they have demonstrated success across textual, semi-structured, and structured workloads [18,40,49,57,64]. To provide extensibility, Ember is composed of a modular, three step architecture with configurable operators. To provide ease of use, Ember can be configured using a simple json-based configuration file, and provides a default configuration that works well across five tasks with a single line change per task. As input, users provide Ember with: (1) a base data source, (2) an auxiliary data source, (3) a set of examples of related records across the sources. For each record in the base data source, Ember returns related records from the auxiliary data source as characterized by the examples, which can be post-processed for downstream tasks (e.g., concatenated or averaged). We present Ember's three-step modular architecture that enables index construction via Transformer-based embeddings to facilitate related record retrieval: preprocessing, representation learning, and joining ( Figure 2). Preprocessing. Ember transforms records from different data sources to a common representation, which allows us to apply the same methods across tasks. By default, Ember uses operators that convert each record to a natural language sentence for input to a Transformer-based encoder, which we optionally pretrain using self-supervision. We demonstrate that depending on the data vocabulary, encoder pretraining to bootstrap the pipeline with domain knowledge can increase recall of relevant records by up to 20.5%. Representation Learning. Ember tunes the preprocessed representations to identify contextually similar records as defined by supervised examples. Learning this representation is task-dependent: we show that a natural approach of using pretrained Transformerbased embeddings with no fine-tuning performs up to three orders of magnitude worse than a fine-tuned approach, often returning less than 10% of relevant records. Pretraining in the first step can increase recall by over 30%, but this is still insufficient. Thus, Ember relies on operators that use supervised examples of related records and a contrastive triplet loss [60] to learn a representation that encourages similar records to be close while pushing away dissimilar records in the induced embedding space. We evaluate how many labeled examples Ember needs to match the performance of using all provided examples, and find that at times, 1% of the labels is sufficient, and that operators for hard negative sampling improve performance by up to 30% recall, further improving ease of use. Joining. Ember quickly retrieves related records using the tuned representations to perform the keyless join. 
Ember populates an index with the embeddings learned to capture record similarity, and uses optimized maximum inner product search (MIPS) to identify the nearest neighbors of a given record. This procedure allows Ember to capture one-to-one and one-to-many join semantics across a variety of downstream applications with low query-time overhead. We demonstrate that Ember meets or exceeds the performance of eight similarity-based join baselines including BM25 [53], Jaccard-similarity-based joins, Levenshtein distance-based joins, and AutoFuzzyJoin [37] with respect to recall and query runtime. Although conventional wisdom in data management often says to rely on domain-specific pipelines for context enrichment, we introduce no-code ML semantics to context enrichment and empirically show how a single system can generalize to heterogeneous downstream tasks. We report on our experiences in deploying Ember across five tasks: fuzzy joining, entity matching, question answering, search, and recommendation. We demonstrate that Ember generalizes to each, requires as little as a single line configuration change, and extends to downstream similarity-based analyses for search, entity matching, and recommendation as in Figure 1. In summary, we present the following contributions in this paper: • We propose keyless joins with a join specification to serve as an abstraction layer for context enrichment. To our knowledge, this is the first work to generalize similarity-based tasks across data integration, natural language processing, search, and recommendation as a data management problem. • We design and develop Ember, the first no-code framework for context enrichment that implements keyless joins, and provides an API for extending and optimizing keyless joins. • We empirically demonstrate that Ember generalizes to five workloads by meeting or exceeding the recall of baselines while using 6 or fewer configuration line changes, and evaluate the modular design of Ember's default architecture. CONTEXT ENRICHMENT In this section, we define context enrichment, and provide example workloads that can be framed as a form of context enrichment. Problem Statement Structured data, or data that adheres to a fixed schema formatted in rows and columns, suffers from context fragmentation. We broadly define structured data to include semi-structured and textual data sources that may be stored and keyed in data management systems, such as content from Wikipedia articles or web search results. Unlike unstructured data, structured data follows a scattered format that is efficient to store, query, and manipulate for specific business needs. This may be in the form of an e-commerce company storing datasets cataloging products and user reviews as in Figure 1. To use this fragmented data for ML tasks, practitioners implicitly or explicitly join these dataset to construct discriminative features. We refer to this joining process as context enrichment, which we define as follows: Given a base dataset in tabular form, 0 , context enrichment aligns 0 with context derived from auxiliary data sources, = { 1 , ..., }, to solve a task . We focus on a text-based, two-dataset case (| | = 1), but propose means for multidataset extension Sections 5-7, and defer their evaluation to future work. We explore context enrichment in regimes with abundant and limited labeled relevant pairs to learn source alignment. We represent each dataset as an n i ×d i matrix of n i data points (or records) and d i columns. 
We denote as the ℎ column in dataset . We denote as the ℎ row in dataset . Columns are numeric or textual, and we refer to the domain of as D i . Our goal is to enrich, or identify all context for, each 0 in 0 : auxiliary records ( ≠ 0) that are related to 0 , as per task . Motivating Applications Context enrichment is a key component for applications across a broad array of domains. Thus, a context enrichment system would reduce the developer time and effort needed to construct domainspecific pipelines. We now describe a subset of these domains, which we evaluate in Section 8. Additional applicable domains include entity linkage or disambiguation [46,55], nearest neighbor machine translation [28], and nearest neighbor language modeling [29]. Entity Matching and Deduplication. Entity matching identifies data points that refer to the same real-world entity across two different collections of entity mentions [33]. An entity denotes any distinct real-world object such as a person or organization, while its entity mention is a reference to this entity in a structured dataset or a text span. Entity deduplication identifies entities that are found multiple times in a single data source, thus is a special case of entity matching where both collections of entity mentions are the same. We frame entity matching as a context enrichment problem where 0 and 1 represent the two collections of entity mentions. Context enrichment aligns entities 1 in the auxiliary dataset 1 with each entity 0 in the base dataset 0 . In entity deduplication, 0 = 1 . The downstream task is a binary classifier over the retrieved relevant records, as the aim is to identify matching entities. Fuzzy Joining. A fuzzy join identifies data point pairs that are similar to one another across two database tables, where similar records may be identified with respect to a similarity function (e.g., cosine similarity, Jaccard similarity, edit distance) and threshold [15,59] defined over a subset of key columns. Though related to entity matching, fuzzy joins can be viewed as a primitive used to efficiently mine and block pairs that are similar across the two data sources. We frame fuzzy joining as a context enrichment problem where 0 and 1 represent the two database tables. We let 0 and 1 denote the set of key columns used for joining. Records 0 and 1 are joined if their values in columns 0 and 1 are similar, to some threshold quantity. This is equivalent to a generic context enrichment task with a limited number of features used. The downstream task is a top-k query, as the aim is to identify similar entities. Recommendation. A recommender system predicts the rating that a user would give an item, typically for use in content recommendation or filtering [41]. We also consider the broader problem of uncovering the global rating for an item, as in the e-commerce example in Figure 1, as being a form of recommendation problemnamely, the global score may be predicted by aggregating the predicted per-user results obtained via classic recommender systems. We frame such recommendation workloads as a context enrichment problem where 0 represents all information regarding the entity to be ranked (e.g., a base product table, or new candidate products), and 1 represents the information of the rankers, or other auxiliary data (e.g., catalogs of previously ranked products). Context enrichment aligns rankers 1 in the auxiliary dataset 1 with those that are related to each entity to be ranked 0 in the base dataset 0 . 
The downstream task is to return the top-1 query, or to perform aggregation or train an ML model over the returned top-k entries. Search. An enterprise search engine returns information relevant to a user query issued over internal databases, documentation, files, or web pages [26,44]. A general web retrieval system displays information relevant to a given search query [12,21]. Both rely on information retrieval techniques and relevance-based ranking over an underlying document corpus as building blocks to develop complex, personalized pipelines, or for retrospective analyses [10]. We frame retrieval and ranking in search as a context enrichment problem where 0 represents the set of user queries, and 1 represents the underlying document corpus. Context enrichment aligns documents 1 in the auxiliary dataset 1 that map to each query 0 in the base dataset 0 . The downstream task is to return the top-k documents for each query, sorted by their query relatedness. Question Answering. Question answering systems answer natural language questions given a corresponding passage (e.g., in reading comprehension tasks) or existing knowledge source (e.g., in open domain question answering) [50,52]. Multi-hop question answering generalizes this setting, where the system must traverse a series of passages to uncover the answer [63]. For instance, the question "what color is the ocean?" may be provided the statements "the sky is blue" and "the ocean is the same color as the sky. " We frame the retriever component [67] of a classic retriever-reader model in open domain question answering as a context enrichment problem. Our task is to identify the candidate text spans in the provided passages that contain the question's answer (the retriever component), and a downstream reader task can formulate the answer to the question. We let dataset 0 represent the set of questions, and 1 represent the provided passages or knowledge source, split into spans (e.g., based on sentences, or fixed word windows). Context enrichment aligns spans 1 in the auxiliary dataset 1 that contain the answer to each question 0 in the base dataset 0 . Multi-hop question answering requires an extra round of context enrichment for each hop required. Each round selects a different passage's spans as the base table 0 ; to identify context relevant to the question, the user can traverse the related spans extracted from each passage. The downstream task is to learn a reader model to answer questions using the enriched data sources. LEARNED KEYLESS JOINS FOR CONTEXT ENRICHMENT Although the applications in Section 2 implicitly or explicitly perform the same context enrichment, many state-of-the-art systems for these tasks re-implement and rediscover primitives across domains spanning machine learning, databases, information retrieval, and natural language processing [29,30,38,56]. We propose learned keyless joins as an abstraction to unify context enrichment across these tasks and provide a vehicle to develop a no-code system with out-of-the-box recall exceeding naïve baselines. In this section, we define keyless joins, provide a keyless join API specification, and introduce Transformer models that enable general keyless joins. Keyless Joins: Definition and Objective Context enrichment requires a similarity-based join that operates at the record-, not schema-, level to retrieve related records without primary key-foreign key (KFK) relationships. We propose learned keyless joins as an abstraction to enable this functionality. 
The goal of a keyless join is to quantify the relatedness of records across different datasets to identify records referring to similar entities. A keyless join must learn a common embedding space $X$ for all records across the base dataset $D_0$ and the auxiliary sources $A = \{D_1, \ldots, D_m\}$ that reflects entity similarity. For each $D_i$, a keyless join learns a function $F_i : D_i \rightarrow X$ that maps elements of $D_i$ to the space $X$. We denote each transformed data point $F_i(d_i^r)$ as $x_i^r$, where $d_i^r$ is the $r$th record of $D_i$. This mapping must be optimized such that related values map to similar feature vectors in $X$ and unrelated values map to distant feature vectors in $X$: $\mathrm{sim}(x_i^r, x_j^s) > \mathrm{sim}(x_i^r, x_k^t)$ implies that the $r$th entity in the $i$th dataset is more closely related to the $s$th entity in the $j$th dataset than to the $t$th entity in the $k$th dataset. For instance, if we define similarity with respect to an $\ell_p$-norm, the above condition is equivalent to optimizing for $\lVert x_i^r - x_j^s \rVert_p < \lVert x_i^r - x_k^t \rVert_p$.

Join Specification for Context Enrichment

We propose a minimal join specification for our applications using keyless joins as a building block (Listing 1). Given a pair of data sources to join (base_table_ref, aux_table_ref), and examples of similar records across them (supervision), users first specify the join type: an inner join to only return enriched records, or an outer join (left, right, full) to return enriched and unenriched records. Users then specify the join size: how many records from one data source join to a single record in the other data source, to indicate one-to-one, one-to-many, or many-to-many semantics. As output, joined tuples (matches) between the two tables are the most similar records across them, as learned using the keyless join objective. Results are ranked in order of greatest entity similarity, and the top k results are returned based on join size. For instance, an entity matching application is written as follows to retrieve a single matching record between each data source:

A search application is written as follows to retrieve 10 documents for each search query, else return the unenriched query:

query_corpus LEFT KEYLESS JOIN document_corpus LEFT SIZE 1 RIGHT SIZE 10 USING relevant_docs_for_query;

A recommendation application is written as follows to retrieve 10 items for each user, and 20 users who like each item:

user_database INNER KEYLESS JOIN product_database LEFT SIZE 20 RIGHT SIZE 10 USING relevant_docs_for_query;

In the remainder of the paper, we describe our prototype system that implements this keyless join abstraction layer for enrichment.

Background: Transformer-Based Encoders

Our main tools for creating a representation optimized for the objective in Section 3.1 are Transformers [58], as they have demonstrated success across a range of structured, semi-structured, and unstructured domains [18,40,49,57,64]. They consist of an encoder-decoder architecture, where the Transformer first encodes the input to an intermediate representation, and then decodes this representation to produce the end output. Stacking encoder (e.g., BERT [18]) or decoder (e.g., GPT-2 [49]) modules allows us to learn high-quality word embeddings that can be used for a wide array of downstream applications. We focus on BERT-based embeddings. BERT embeddings used for a downstream task are trained in a two-step procedure. The first step is self-supervised pretraining using a masked language model (MLM) objective: a random subset of tokens is masked, and the model must predict each masked token given the surrounding text as context. Pretraining is performed with general purpose corpora such as Wikipedia. The second step is task-specific fine-tuning.
Additional layers are appended to the final layer of the pretrained model based on the downstream task, and all model parameters are updated given small amounts of downstream supervision. Rather than relying solely on fine-tuning, an additional MLM pretraining step can be performed prior to fine-tuning to introduce additional domain knowledge. EMBER We develop Ember, 3 an open-source system for no-code context enrichment. Ember implements a keyless join abstraction layer that meets the specification in Section 3.2. Ember first represents input 3 https://github.com/sahaana/ember records using transformer-based embeddings directly optimized for the condition in Section 3.1. Ember then populates a reusable index with these embeddings based on the join type, and configures index retrieval based on the join sizes. We now provide an overview of Ember's usage, API and architecture; data types and operators are in Table 1, with an architecture overview in Figure 3. Usage As input, Ember requires the join specification parameters: a base data source 0 ("left"), auxiliary data sources = { 1 , ..., } ("right"), labeled examples of related data points that represent similar entities, join type, and join sizes. Labeled examples are provided in one of two supervision forms: pairs of related records, one from each table, or a triple where a record from 0 is linked to a related and unrelated record from each . That is, unrelated examples are optional, but related ones are required. Recall that we focus on the = 1 case, but describe extensions in Sections 5-7. As output, Ember retrieves the data points enriched based on the join type and join sizes as a list of tuples. Users configure Ember using a json configuration file that exposes the join specification and lower-level Ember-specific parameters (see Section 8.5). API and Architecture Keyless joins learn a representation that quantifies data point relatedness, and retrieve these points regardless of the input schema. We propose a modular system with three architectural elements to enable this: data preprocessing, representation learning, and data joining. Ember consists of dataflow operators that transform inputs into the formats required for each step (see Table 1). Ember represents input data records as records in keyvalue pairs. Supervision is represented via labelers. An Ember pipeline consists of preparers, encoders, samplers, losses, and retrievers. Ember uses preparers to transform records into sentences. Ember uses encoders to transform sentences into embeddings that are optimized per a provided loss; samplers mine negative examples if the provided supervision only provides examples of related records. The trained embeddings are stored in an index, and are retrieved using a retriever. We provide an API over these operators in the form of a customizable, high-level configuration, as described in Section 8.6. To enable functionality beyond that described, users must implement additional preparers, encoders, samplers, or retrievers. Users configure Ember using pre-defined or custom operators, or use the following pre-configured default: Preprocessing ( Figure 3A, §5). Ember ingests any text or numeric data source with a pre-defined schema, and converts its records to a common representation. By default, Ember converts records to sentences using sentence preparer modules, which are fed as input to the pipeline's encoders. Ember optionally pretrains the encoders in this step using self-supervision. Representation Learning ( Figure 3B, §6). 
Ember learns a mapping for each input data source's records such that the transformed data is clustered by relatedness. To learn this mapping, Ember's encoders are fine-tuned with the input supervision (in the form of a labeler) and loss. Ember applies the learned mapping to each of the sentences to generate embeddings passed to the final step. Joining ( Figure 3C §7). Ember populates an index with the learned embeddings using Faiss [27]. A keyless join can be completed by issuing a similarity search query given an input record (transformed to an embedding) against this index. The dataset that is indexed is determined by the join type. Ember uses a k-NN retriever module to retrieve as many records specified by the join sizes. PREPROCESSING In this section, we describe the first step of Ember's pipeline: preprocessing. Users provide base and auxiliary datasets with examples of related records across the two. Ember first processes the input datasets so all records are represented in a comparable format. Ember then pretrains the encoders for the representation learning step. We describe these phases, and then multi-dataset extension. Data Preparing ( Figure 3A.1). This phase converts each record into a format that can be used for downstream Transformerbased encoders regardless of schema. Motivated by recent work in representing structured data as sentences [57,64], Ember's default pipeline uses a sentence preparer to convert input records into sentences ( ) = . Optional Pretraining ( Figure 3A.2). Our default BERT-based encoder is trained over natural language corpora (Wikipedia and BookCorpus [8,68]). Structured data is often domain-specific, and is of a different distribution than natural language. Thus, we find that bootstrapping the pipeline's encoders via additional pretraining can improve performance by up to 2.08×. Ember provides a pretraining configuration option that is enabled by default. Users can pretrain the pipeline's encoders via BM25-based selfsupervision (out-of-the-box) or by developing weak supervisionbased labelers (custom-built). We add a standard Masked Language Modeling (MLM) head to each pipeline encoder, and pretrain it following the original BERT pretraining procedure that fits a reconstruction loss. To encourage the spread of contextual information across the two tables, we concatenate one sentence from each table to one other as pretraining input as " [SEP] " (Figure 4). We select sentence pairs that are likely to share information in an unsupervised manner via BM25, a bag-of-words relevance ranking function [53], though any domain-specific unsupervised similarity join can be used, such as from Magellan [32] or AutoFJ [37]. We evaluate BM25-based MLM pretraining in Section 8.4. During development, we explored learning conditional representations of each Figure 4: Examples of the data preparing and pretraining phases of the preprocessing architecture step. Multi-Dataset Extension. Depending on the encoder configuration used in the multi-dataset case, the BM25-based pretraining step will be applied to all sentences across all datasets at once (anchored from the base table), or each dataset pairwise. REPRESENTATION LEARNING In this section, we describe the second step of Ember's pipeline: representation learning. Given the sentences and optionally pretrained encoders from the first step, Ember fine-tunes the encoders such that embeddings from related records are close in embedding space. These embeddings are then passed to the final pipeline step. 
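For concreteness, the record-to-sentence conversion performed by the sentence preparer in the preceding preprocessing step can be sketched in a few lines. The serialization format below (column name followed by its value, with missing values skipped) is an assumed convention for illustration, not Ember's documented output.

```python
# Illustrative sentence preparer: serialize a structured record into a single
# text "sentence" for a Transformer encoder. The exact format is an assumption.
from typing import Any, Dict

def prepare_sentence(record: Dict[str, Any]) -> str:
    parts = []
    for column, value in record.items():
        if value is None or value == "":
            continue  # assumed convention: drop missing values
        parts.append(f"{column} {value}.")
    return " ".join(parts)

print(prepare_sentence({"Brand": "Asics", "Model": "GT-1000", "Color": "blue"}))
# -> "Brand Asics. Model GT-1000. Color blue."
```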
Users can choose how many encoders to configure, the encoder architecture and output dimension, and how to train the encoders, which we now detail prior to describing multi-dataset extension.

Encoder configuration. Ember provides a choice of training independent encoders $E_i$ for each data source, or using a single encoder $E_0$ common to all data sources. In all of our scenarios, using a single, common encoder performs best, sometimes by over an order of magnitude, and is set as Ember's default configuration. In our tasks, we found that using separate encoders perturbs the representations such that exact matches no longer share a representation; thus, the encoders must relearn correspondences, and often fail to. However, to extend Ember to non-textual data (e.g., images, video, text-image for image captioning or joint sentiment analysis, or text-audio tasks), users must leverage source-specific encoders, so we provide them as a configurable option.

Encoder architecture. Ember lets users configure each encoder's base architecture and the size of the learned embeddings. Users can choose BERT base or DistilBERT base as the core architecture; Ember's default is the 40% smaller and 60% faster DistilBERT model. We use models from HuggingFace Transformers [61]; thus, integrating new architectures requires few additional lines of code. We remove the MLM head used in the preprocessing step for optional pretraining, and replace it with a fully connected layer that transforms BERT's default 768-dimensional output to a user-specified embedding output dimension. The output of the fully connected layer is an embedding of that dimension for each input token. Users can choose one of two types of aggregation methods that return a single output of that dimension per input sentence: averaging all of the embeddings, or using the embedding of the leading CLS token that BERT appends to its input data (Figure 4). Ember defaults to CLS-based aggregation with a 200-dimensional output.

Encoder training. The encoders' goal is to learn a representation (embedding) for the data sources such that sentences that refer to similar entities are grouped in the underlying embedding space. Recall that to perform a keyless join under an $\ell_p$-norm, each encoder must learn a function that maps elements of $D_i$ to the space $X$, such that $\lVert x_i^r - x_j^s \rVert_p < \lVert x_i^r - x_k^t \rVert_p$ when the $r$th entity in the $i$th dataset is more closely related to the $s$th entity in the $j$th dataset than to the $t$th entity in the $k$th dataset. We directly optimize for this objective function by training encoders using a contrastive, triplet loss together with user-provided supervision. Given an anchor record $d_{0a}$ from $D_0$, and records $d_{1p}$ and $d_{1n}$ in $D_1$ that are related and unrelated to $d_{0a}$, respectively, let $F_0(d_{0a}) = x_{0a}$, $F_1(d_{1p}) = x_{1p}$, and $F_1(d_{1n}) = x_{1n}$ be their embeddings following the composition of the sentence preparer and encoder operators ($F_i = E_i \circ P$, with $P$ the sentence preparer). We minimize the triplet loss, defined as:

$L(x_{0a}, x_{1p}, x_{1n}) = \max\left(\lVert x_{0a} - x_{1p} \rVert_2 - \lVert x_{0a} - x_{1n} \rVert_2 + \alpha,\ 0\right)$

where $\alpha$ is a hyperparameter controlling the margin between related and unrelated embeddings. Users could use an alternative loss function: a cosine similarity loss would provide representations that similarly encourage related points to be close and unrelated points to be far, and a standard binary cross-entropy loss may be used as well, where the final layer in the trained encoder would be the end embeddings. However, they do not explicitly optimize for relative distances between related and unrelated records. Users often have examples of related pairs, but no unrelated examples to form triples.
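The training objective above maps directly onto standard deep learning tooling. The sketch below shows one way to fine-tune a shared DistilBERT encoder with CLS-token pooling, a 200-dimensional projection, and a triplet margin loss using the HuggingFace Transformers and PyTorch APIs; it is a minimal illustration under those assumptions, not Ember's actual implementation.

```python
# Illustrative sketch: one triplet-loss fine-tuning step with a shared encoder.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class RecordEncoder(nn.Module):
    def __init__(self, name="distilbert-base-uncased", dim=200):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(name)
        self.proj = nn.Linear(self.backbone.config.hidden_size, dim)

    def forward(self, texts, tokenizer, device="cpu"):
        batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(device)
        hidden = self.backbone(**batch).last_hidden_state   # (batch, tokens, 768)
        return self.proj(hidden[:, 0])                       # CLS-token embedding -> (batch, 200)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoder = RecordEncoder()
loss_fn = nn.TripletMarginLoss(margin=1.0)                   # triplet objective with l2 distance
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-5)

# Illustrative (anchor, related, unrelated) sentences prepared from records.
anchor = ["Brand Asics. Model GT-1000. Color blue."]
positive = ["Asics GT-1000 running shoe, blue."]
negative = ["Puma R78 sneaker, white."]

optimizer.zero_grad()
loss = loss_fn(encoder(anchor, tokenizer),
               encoder(positive, tokenizer),
               encoder(negative, tokenizer))
loss.backward()
optimizer.step()
```

When only related pairs are available, the unrelated record in each triple would be supplied by the sampler operators described next.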
In our tasks, only the search workload provides triples, while the rest provide lists of related pairs, which we represent with a labeler. As a result, we provide negative sampler operators to convert labelers that operate over pairs to the triples required by the triplet loss. If the user-provided labeler already contains a related and unrelated example for a record, we form a triple using these examples. If a record does not contain a related and unrelated example, for each record 0 in 0 with a labeled related record, we use either a random sampler or a stratified hard negative sampler. The random sampler selects a record at random (that is not a supervised related pair) as an example unrelated record. The stratified sampler provides tiers for different degrees of relatedness for the negative examples-a user can specify a region of hard examples to sample from, given prior domain knowledge. We provide a default stratified sampler with a single tier of relatedness defined using BM25 or Jaccard similarity, though any unsupervised join method can be used (including Aut-oFJ [37]). We show that a hard negative sampler improves recall by up to 30% compared to a random sampler in Section 8.4. Multi-Dataset Extension. The encoder configuration and architecture are unchanged in the multi-dataset case. There are two options for the encoder training procedure. The first scenario follows the context enrichment problem definition-where a base table 0 must be augmented with several auxiliary data sources . In this case, each encoder can be trained pairwise with 0 and as described above. In the case where multiple data sources must be aligned in sequence, a user can successively apply Ember over a data DAG created using their prior knowledge-effectively creating several, sequential context enrichment problems as described in Section 2.2 for multi-hop question answering. Else, Ember can induce a DAG and sequentially traverse the space of possible keyless joins based on the cardinality of each dataset, in ascending order. JOINING In this section, we describe the last step of Ember's pipeline: joining. Given the embeddings output by the trained encoders, Ember executes the keyless join by identifying related points across the input datasets, and processing them for downstream use. Ember indexes the learned embeddings and queries this index to find candidate related records. The join sizes determine the number of returned results per query, and we also allow users to configure a similarity threshold for each candidate match. We now detail the indexing and retrieval procedure, post-processing, and multi-dataset extension. Indexing and Query Retrieval. Given two records, Ember computes the similarity between their embeddings to determine if the records are related. Many traditional solutions to our motivating applications perform such pairwise checks across all possible pairs either naïvely or with blocking [43,45,47] to identify related records. However, the choice of operator is both domain-specific, and scales quadratically at query time with the size of the input datasets and blocking mechanisms used (see Section 8.3). We eliminate the need for pairwise checking by indexing our embeddings, which are optimized for clustering related records, and rely on efficient libraries for maximum inner product search (MIPS) [27]. For a LEFT or RIGHT OUTER JOIN, Ember constructs an index over the base ( 0 ) or auxiliary ( 1 ) datasets, respectively. 
Ember then queries the index with each embedding of the remaining dataset, and returns the record and embedding corresponding to the most similar records using a retriever operator. For a FULL OUTER or INNER JOIN, Ember may jointly index and query both datasets to identify related entries in either direction. By default, Ember only indexes the larger dataset to reduce runtime-an optimization we evaluate in Section 8.4 that improves query runtime by up to 2.81×. Ember's default configuration is an INNER JOIN. Post-Processing. The user-provided join size configures the retriever to return the top-records with the closest embeddings in the indexed dataset. Ember additionally supports threshold-based retrieval. The former is useful for applications such as search, where a downstream task may be to display the top-search results to a user in sorted order. The latter is useful for applications where there may not be related records for each record in the base dataset. Ember's default retriever is configured for a 1-to-10 join. If the task is not to simply return related records, users can construct pipelines on top of Ember's outputs, relying on keylessjoin-based context enrichment as a key primitive. Examples of such workloads include the recommendation example from Figure 1, which we simulate as a case study in Section 8.5, open domain question answering, data augmentation, or applications that involve humans in the loop to verify matches or drive business needs. Multi-Dataset Extension. The joining step may vary in the multidataset case in two ways. In a first scenario, each auxiliary data source is indexed, and the query retrieval phase will query each of these indexes and return candidate related data points for each pairwise interaction. If multiple data sources must be sequentially aligned and a DAG can be specified over this sequence, a user can chain context enrichment subroutines by querying the next index with the records retrieved by the previous subroutine; we provide a toy example of this scenario in Section 8.5 under Recommendation. EVALUATION In this section, we demonstrate that Ember and its operators are: (1) General: Ember enables context enrichment across five domains while meeting or exceeding similarity-join baseline recall and query runtime performance (Section 8.3). (2) Extensible: Ember provides a modular architecture, where each component affects performance (Section 8.4). Ember enables task-specific pipelines for various similarity-based queries, and provides task performance that can be finetuned by state-of-the-art systems (Section 8.5). (3) Low Effort: Ember requires no more than five configuration changes (Table 2) from its default, and does not always require large amounts of hand-labeled examples (Section 8.6). Evaluation Metric and Applications Context Enrichment Evaluation Metric. Context enrichment identifies related records across a base and auxiliary dataset. We can view related records between the two datasets as forming edges in a bipartite graph: each record in each dataset represents a vertex in the graph. Under this framing, a context enrichment system must retrieve all of the outgoing edges (i.e., related records) for each record in the base dataset. This is equivalent to maximizing recordlevel recall, or the fraction of records for which we recover all related records. We choose to define recall at the record-level, rather than the edge-level, as we view context enrichment systems as being repeatedly queried for new incoming data once instantiated. 
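To make the indexing, retrieval, and metric concrete, the following is a small illustrative sketch that builds an exact inner-product index with Faiss, retrieves the top-k auxiliary records for every base record, and scores record-level recall as defined above. The embedding matrices and ground-truth edge sets are random placeholders; this is not Ember's evaluation code.

```python
# Illustrative sketch: MIPS retrieval with Faiss plus record-level recall@k scoring.
import numpy as np
import faiss

dim, k = 200, 10
aux = np.random.rand(10_000, dim).astype("float32")   # auxiliary-table embeddings
base = np.random.rand(1_000, dim).astype("float32")   # base-table embeddings (queries)

index = faiss.IndexFlatIP(dim)          # exact maximum inner product search
index.add(aux)
_, topk = index.search(base, k)         # (1000, k) auxiliary row ids per base record

# edges[i] = set of auxiliary rows truly related to base record i (placeholder labels).
edges = {i: {i % len(aux)} for i in range(len(base))}

covered = sum(edges[i] <= set(topk[i]) for i in range(len(base)))
print(f"record-level recall@{k}: {covered / len(base):.3f}")
```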
A naïve means to optimize for recall is to return all auxiliary records as being related to each record in the base dataset. However, precision drops sharply as more records are retrieved. Thus, our evaluation metric is recall@k, for small k (i.e., join size). Applications. We evaluate Ember against workloads from five application domains: fuzzy joining, entity matching, search, question answering, and recommendation (summarized in Table 2). We make all our datasets publicly available post-Ember processing. 8.1.1 Fuzzy Join (FJ). We build two workloads using a dataset and generation procedure from a 2019 scalable fuzzy join VLDB paper [15]. The first dataset consists of the Title, Year, and Genre columns from IMDb [5]. The second dataset is generated by perturbing each row in the first by applying a combination of token insertion, token deletion, and token replacement. The task is to join each perturbed row with the row that generated it. We generate two dataset versions: 5 perturbations per row (IMDb) and 15 perturbations per row (IMDb-hard), up to 25% of the record length. As we focus on generalizability more than scalability, to form a dataset, we randomly sample 10,000 movies and generate 5 perturbed rows for each. We hold out 20% of records as the test set; no records from the same unperturbed record are in both the train and test sets. Entity Matching (EM). We use all 13 benchmark datasets [1] released with DeepMatcher [43], spanning structured (EM-S), textual (EM-T), and dirty (EM-D) entity matching. The base and auxiliary datasets always share the same schema. In EM-T, all data records are raw text entries or descriptions. In EM-S, each data record is drawn from a table following a pre-defined schema, where text-based column values are restricted in length. In EM-D, records are similar to EM-S, but some column values are injected into the incorrect column. The task is to label a record pair (one from each dataset) as representing the same entity or not. Train, validation, and test supervision are lists of unrelated and related pairs; we only use the related pairs for training Ember, as we identified mislabeled entries (false negatives) when using Ember to explore results. Search (S). We use the MS MARCO passage retrieval benchmark [10]. MS MARCO consists of a collection of passages from web pages that were gathered by sampling and anonymizing Bing logs from real queries [10]. The task is to rank the passage(s) that are relevant to a given query as highly as possible. Supervision is a set of 397M triples, with 1 relevant and up to 999 irrelevant passages for most queries. Irrelevant passages are retrieved via BM25. We report results over the publicly available labeled development set. Question Answering (QA). We modify the Stanford Question Answering Dataset (SQuAD) [50]. SQuAD consists of Wikipedia passages and questions corresponding to each passage. The task is to identify the beginning of the text span containing the answer to each question. As described in Section 2, a retriever module, used in retriever-reader models for QA [67], performs context enrichment. We modify SQuAD by splitting each passage at the sentence level, and combining these sentences to form a new dataset. The modified task is to recover the sentence containing the answer. Recommendation (R). We construct a workload using IMDb and Wikipedia to mimic the e-commerce example from Figure 1.
For the first dataset, we denormalize four IMDb tables using KFK joins: movie information (title.basics), principal cast/crew for each movie (title.principals) and their information (name.basics), and movie ratings (title.ratings) [5]. For the second dataset, we extract the summary paragraphs of the Wikipedia entry for each IMDb movie by querying the latest Wikipedia snapshot; we extract 47,813 overlapping records [8]. We remove the IMDb ID that provides a KFK relationship to induce a need for keyless joins. We construct this workload to enable two applications. In Application A, we show that Ember can join datasets with dramatically different schema. In Application B, we show how to estimate the rating for movies in the test set given ratings for the train set by performing similarity-based analyses enabled by Ember. Supervision is provided as exact matches and movie ratings with an 80-20 train-test set split. Experimental Setup Baselines. We first evaluate all workloads compared to seven similarity-based joins with respect to recall@1 and recall@10. Our baselines are joins using Levenshtein distance, four variations of Jaccard similarity, BM25, Auto-FuzzyJoin [37], and a pretrained embedding-based approach. We evaluate downstream EM and search workloads with respect to previously reported state-of-the-art and benchmark solutions [6,20,38,43,45]. The remainder of our workloads were constructed to isolate context enrichment from the downstream task, and do not have standard baselines. Ember Default Configuration. Tasks use a sentence preparer, and perform 20 epochs of self-supervised pretraining. Ember uses a single DistilBERT base encoder trained with a triplet loss and stratified hard negative sampler. Self-supervision and the sampler use BM25 to identify similar sentences to concatenate from each data source, and to mark as hard negatives, respectively. The output embedding size is 200, and models are trained with a batch size of 8 using an Adam optimizer [31] with an initial learning rate of 1e-5. The default join sizes are 1 and 10. Results are five-trial averages. Generalizability We show that Ember's recall and runtime meet or outperform those of the following similarity-join baselines in nearly all tasks: Levenshtein-Distance (LD). LD is the number of character edits needed to convert one string to another. This join returns the closest records with respect to single-character edits over provided key columns. We filter and only return results under a 30-edit threshold. Jaccard-Similarity, Specified Key (JK-WS, JK-2G). The Jaccard similarity between sets A and B is J(A, B) = |A ∩ B| / |A ∪ B|. Defining a Jaccard-similarity-based join over textual inputs requires tokenizer selection. We consider a whitespace tokenizer (WS) and a 2-gram tokenizer (2G) to capture different granularities. JK-WS and JK-2G return the closest records with respect to Jaccard similarity over provided key columns using a WS or 2G tokenizer. We set a filtering threshold to return results with at least 0.3 Jaccard similarity. Jaccard-Similarity, Unspecified Key (J-WS, J-2G). J-WS and J-2G return the closest records with respect to Jaccard similarity using a WS or 2G tokenizer, after a sentence preparer. We set a filtering threshold to return results with over 0.3 Jaccard similarity. BM25 (BM25). BM25 is a bag-of-words ranking function used in retrieval [53]. This join returns the closest records with respect to the Okapi BM25 score using default parameters k1=1.5 and b=0.75. Pretrained-Embedding (BERT).
BERT generates embeddings for each prepared sentence via a pretrained DistilBERT base model, and returns the closest records based on the ℓ2 distance between them. Auto-FuzzyJoin (AutoFJ). AutoFJ automatically identifies join functions for unsupervised similarity joins [37] by assuming one input is a "reference table" with few or no duplicates, which does not always hold in context enrichment. If duplicates are present, or records in the reference table are not sufficiently spread, precision estimation may break down (Ember trivially accounts for these scenarios). AutoFJ optimizes for a precision target, and does not expose any join size semantics. We find a single result is typically returned per record, so we only consider recall@1 at a low precision target of 0.5. We report results only for workloads that completed within 1.5 days on our machines using all cores. As AutoFJ exhaustively computes record similarities for each considered join configuration, this means we omit results for MS MARCO, EM-T Company, EM-S BeerAdvo-RateBeer, and several others consisting of large text spans, or where blocking may not be as effective. Users must manually align matching columns, which do not always exist in our tasks (e.g., IMDb-wiki) and thus ignore information: as input, we pass the output of a sentence preparer if column names are not identical. For multi-column datasets, we provide a plausible key column if the method requires one. In FJ, EM, and R, this is a title or name column. 8.3.1 Retrieval Quality. We show that Ember is competitive with or exceeds the best performing alternatives with respect to Recall@1 and Recall@10 (Table 3). In both tables, we highlight the methods that are within 1% of the top performing method. No method dominates the others across the different workloads, while BERT underperformed all alternatives. Broadly, character-based joins (LD, JK-2G, J-2G) tend to perform well in scenarios where a join key exists but may be perturbed (FJ, EM-S, EM-D); word-based joins (J-WS, BM25) tend to perform well in scenarios with word-level join keys (EM-S), or where a join key does not exist but common phrases still link the two datasets (R, EM-D). Neither performs well otherwise (S, EM-T). Ember most consistently outperforms or is comparable with alternatives across the tasks: a learned approach is necessary for generalization. The exception is the synthetic FJ workload that is, by construction, suited for character-based joins. We do not report AutoFJ in Table 3 due to incompleteness. AutoFJ is tailored for EM and FJ: many-to-one joins with identical schemas. Of those completed, Ember is sometimes outperformed by up to 8% recall@1 (Amazon-Google), but is otherwise comparable to or far exceeds AutoFJ when considering recall@10. AutoFJ's recall is 37.9% for QA, and 38.5% for R. As AutoFJ over the Wikipedia summary (task R) did not terminate, we only use the title column. 8.3.2 Query Runtime. Using optimized MIPS routines improved query runtime performance by up to two orders of magnitude. We note that this reflects Ember joining time, not pretraining and representation learning, which may require several hours. As the two FJ datasets are synthetic and follow identical generation methods, we consider just IMDb-hard.
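To make the runtime comparison concrete, a naïve pairwise similarity join of the kind the J-WS baseline performs (whitespace Jaccard with the 0.3 threshold mentioned above) can be sketched as follows; its cost grows with the product of the two dataset sizes, which is exactly what indexed MIPS retrieval avoids. The function and data layout are illustrative assumptions, not the baseline implementations used in the paper.

def jws_join(base, aux, threshold=0.3, k=1):
    # naive O(|base| * |aux|) join over prepared sentences
    def jaccard(a, b):
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / max(len(ta | tb), 1)
    results = {}
    for i, b in enumerate(base):
        scored = sorted(((jaccard(b, a), j) for j, a in enumerate(aux)), reverse=True)
        results[i] = [j for s, j in scored[:k] if s >= threshold]
    return results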
For our largest dataset, MS MARCO (S) with 8.8M auxiliary records, Ember indexes and retrieves results from the entire corpus, but each baseline only considers each of the up to 1000 relevant records provided by the MS MARCO dataset. Extensibility: Architecture Lesion We demonstrate modularity and justify our architectural defaults by evaluating Ember's performance when modifying components as follows, each of which requires a single-line configuration change (all reported percentages are relative unless explicitly specified): (1) Replacing MLM pretraining and representation learning (RL) with a pretrained encoder. (2) Removing representation learning. (3) Replacing the fine-tuned Transformer model with a frozen pretrained embedding and a learned dense layer. (4) Removing MLM pretraining. (5) Using random negatives instead of hard negative sampling. (6) Using an encoder for each dataset versus a single encoder. (7) Removing the joining index-query optimization. 8.4.1 Pretrained encoder (-mlm, rl). We remove MLM pretraining and RL, and use a pretrained DistilBERT base encoder, i.e., the same as BERT in Table 3. We report our results in Figure 5 normalized by Ember's default. Ember's performance dramatically declines, sometimes by three orders of magnitude (e.g., recall dropped to 0.04 in R). It is only feasible when the contents of the datasets to be joined are similar, as in certain EM-S and EM-D tasks, and FJ-IMDb. Removing Representation Learning (-rl). We remove representation learning (RL) and only pretrain an encoder using the BM25-based MLM procedure from Section 5. We report our results in Figure 5. Ember's performance declines by up to an order of magnitude, though it improves performance compared to -mlm, rl on tasks whose datasets do not match the original natural-text Wikipedia corpus (e.g., EM-S, FJ). Primarily textual datasets do not see as large an improvement over -mlm, rl, and for QA, where the corpus is derived from Wikipedia, -rl performs slightly worse. Remove Transformer fine-tuning (-ft). We replace RL with fixed MLM-pretrained embeddings followed by a learned fully-connected layer (i.e., we do not fine-tune the entire BERT-based encoder, just the final output layer). We report our results in Figure 5. Without end-to-end Transformer fine-tuning, Ember slightly outperforms using just a pretrained encoder (-rl) at times. We primarily observe benefits in text-heavy workloads, where the pretrained encoder provides meaningful representations (QA, S, R, EM-T) without relying on positional information from Transformers. While we do not perform an exhaustive comparison of pretrained embeddings or non-Transformer-based architectures, this shows that Ember's architecture is a strong out-of-the-box baseline. Remove MLM Pretraining (-mlm). We eliminate MLM pretraining from the pipeline, and report our results in Figure 5 as -mlm. Ember meets or exceeds this setting in all but QA, though by a smaller margin than the previous experiments (up to 20.5% in IMDb-hard). MLM pretraining is not effective for QA as the corpus is drawn from Wikipedia (one of the two corpora BERT was trained with) and both base and auxiliary datasets consist of the same, primarily textual vocabulary. In such cases, users may wish to disable MLM pretraining, although performance is not significantly impacted. In contrast, of the non-EM tasks, MLM pretraining is most helpful for FJ IMDb-hard, where random perturbations result in many records with words that are not in the original vocabulary. 8.4.5 No Negative Sampling (-NS).
We replace hard negative sampling with random sampling of unrelated records, and report our results in Figure 5. Hard negative sampling tends to improve performance by a similar margin as MLM pretraining, and only negatively impacted the EM-T Company dataset (by up to 8.72% absolute recall). We find larger improvements when the join condition is more ambiguous than recovering a potentially obfuscated join key, as in tasks S (30% R@1), QA (15% R@1), and EM-D (15% R@1). We use BM25-based sampling for all but FJ and QA, where we have prior knowledge, and develop custom samplers based on Jaccard similarity and sentence origin, respectively. This improved performance over the BM25-based sampler by 1% absolute recall for FJ, and 8.7% and 5.6% absolute recall@1 and recall@10, respectively, for QA. 8.4.6 Encoder Configuration (te). We use two encoders, one for each dataset, instead of our default single-encoder configuration, and report our results in Figure 5. We find that using two encoders performs up to two orders of magnitude worse than using a single encoder, especially for strictly structured datasets. We observe that through the course of the encoder training procedure, the performance of using two identically initialized encoders often degrades: inspecting the resulting embeddings even when running Ember over two identical tables shows that the exact same terms diverge from one another in the training process. However, we still provide the option to use a distinct encoder for each data source for future extension to non-text data, such as audio or images. 8.4.7 Index Optimization. For INNER and FULL OUTER JOINs, we optimize for join execution time by indexing the larger dataset, which reduces the number of queries made by the system. Due to encoder training, this reflects a small fraction of the end-to-end pipeline execution time at our data sizes, but can be a substantial cost at scale. We evaluate this optimization by running the joining step while indexing the larger dataset, and then the smaller dataset, for all tasks. On average, this optimization reduces join time by 1.76×, and up to 2.81×. However, this improvement is only meaningful in the MS MARCO workload, saving 2.7 minutes. Extensibility: End-to-End Workloads We show how to extend and use Ember in an end-to-end context. Entity Matching. Recent deep-learning-based EM systems focus on the matching phase of the two-part (i.e., blocking and matching) end-to-end EM pipeline [20,38,43]: given a pair of candidate records, these systems must identify if the pair corresponds to the same entity. For end-to-end EM, Ember must generate candidate blocks with a low rate of false negatives such that it can be efficiently followed by downstream matchers; we verify this in Table 4 (R@10). Perhaps surprisingly, we find that treating Ember's results from a top-1 query as a binary classifier achieves performance that is comparable to or better than previous, custom-built state-of-the-art with respect to F1 score. We believe incorporating general operators for data augmentation and domain knowledge integration to enable the custom advances presented in the current state-of-the-art EM system, Ditto, may allow Ember to entirely bridge this gap [38]. [45], and the current state-of-the-art is 0.439 [19].
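The 0.439 figure just quoted appears to be an MRR@10 score on MS MARCO, the same metric used for ColBERT below; for reference, MRR@10 can be computed over ranked retrieval output as in this generic sketch (hypothetical data structures, not Ember's code).

def mrr_at_10(ranked, gold):
    # ranked: dict query id -> ranked list of passage ids; gold: dict query id -> set of relevant ids
    total = 0.0
    for q, rel in gold.items():
        for rank, pid in enumerate(ranked.get(q, [])[:10], start=1):
            if pid in rel:
                total += 1.0 / rank  # reciprocal rank of the first relevant passage in the top 10
                break
    return total / max(len(gold), 1)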
By developing an additional joining operator, Ember can implement ColBERT [30], a previous state-of-the-art method that achieves 0.384 MRR@10 in MS MARCO document ranking and operates on larger input passages: we must remove the pooling step from Ember's encoder (a one-line config change), and develop a retriever that indexes and retrieves bags of embeddings for each record. Recommendation. In Application B of task R, we estimate the IMDb ratings of movies in the test set given a training dataset and Wikipedia corpus. We report the mean squared error (MSE) between the rating estimates and the true rating. The task is easy: predicting the average training rating returns 1.19 MSE, and a gradient-boosted decision tree (GBDT) over the joined data returns 0.82 MSE. We aim to show extensibility while meeting GBDT performance. There are two approaches to enable this analysis with Ember by reusing the representation learned for the IMDb-wiki task: similarity defined in a single hop in terms of the labeled training data, or by first going through the Wikipedia data in two hops.
"data_dir": "IMDb-wiki",
"join_type": "INNER",   # or "LEFT", "RIGHT", "FULL"
"left_size": 1,
"right_size": 1,
Listing 2: Core configuration lines annotated with options. In the first method, users configure Ember to index labeled IMDb training data (with ratings) and retrieve records related to the test data (without ratings) via an INNER JOIN. Users then post-process the output by averaging the labels of the returned records. In the second method, users require two LEFT JOINs that index the Wikipedia training data against the IMDb test dataset, and the IMDb train dataset against the Wikipedia training data. Following a two-hop methodology, users first retrieve the closest Wikipedia summary to each test IMDb record, and then retrieve the closest labeled IMDb instances to each previously retrieved Wikipedia summary, which is post-processed via averaging. Both approaches reuse the pretrained encoder from the IMDb-wiki task, and require at most 6 configuration changes and a 7-line post-processing module. We report results when aggregating with a join size of 1, 10, 20, and 30; both approaches improve performance as the number of neighbors increases, until a plateau is reached. The two-hop approach attains MSE of 1.69, 0.92, 0.89, and 0.89 for join size 1, 10, 20, and 30, respectively, while the one-hop approach performs better with 1.59, 0.89, 0.86, and 0.85. Low Development Effort We describe Ember's configuration to show the low number of config changes we made, and comment on the supervision required. 8.6.1 Configuration. Perhaps surprisingly, we found that exposing only join specification parameters (Listing 2) provides a strong out-of-the-box baseline. We require input datasets and supervision to follow a fixed naming convention, under which obtaining the results in Table 3 relies on a default set of configuration options, where only the data directory must be changed. In Table 2, we list how many config changes are required for the best results from Figure 5. We generate each result in Figure 5 by altering the following unsurfaced options: number of encoders, encoder initialization, encoder fine-tuning (boolean), fraction of supervision, negative sampler. Users can additionally specify lower-level hyperparameters, which we fix across our experiments: epochs, batch size, embedding size, pooling type, tokenizer, learning rate, loss function parameters. We next examine how much labeled supervision is needed to obtain the results in Table 3 and Figure 5.
In MS MARCO, we use 2.5M of the 397M provided labeled triples, which is just 0.63% of the total. For the remaining datasets, we evaluate how many labeled examples are required to match the performance of our reported results. We found that 9 of the 17 datasets required all of the provided examples. The remaining datasets required 54.4% of the labeled relevant data on average to meet Recall@1 performance, and 44.5% for Recall@10. In the best case, for EM-S DBLP-ACM, only 30% and 1% of the data is required to achieve the same Recall@1 and Recall@10, respectively. RELATED WORK Similarity Joins. Similarity-based, or fuzzy, joins often focus on the unsupervised setting [15,59,65]. State-of-the-art systems such as AutoFJ [37] are tailored for the case where tables can be joined using exact keys or shared phrases that do not differ greatly in distribution (e.g., our EM tasks), not complex context enrichment. Ember generalizes to arbitrary notions of similarity and join semantics at the expense of supervised pairs, and can also leverage unsupervised methods for negative sampling or MLM pretraining. Automated Machine Learning. Automated Machine Learning (AML) systems aim to empower users to develop high-performance ML pipelines with minimal intervention or manual tuning. They support various types of data preprocessing, feature engineering, model training and monitoring modules. Examples of AML tools include Ludwig [42], Overton [51], Google Cloud AutoML [2], and H2O.ai [4]. However, these platforms do not focus on context enrichment, leaving it as an exercise for users to perform prior to data ingestion. Relational Data Augmentation. Relational data augmentation systems seek to find new features for a downstream predictive task by deciding whether or not to perform standard database joins across a base table and several auxiliary tables [16,35,54]. Similar to context enrichment, these systems aim to augment a base table for a downstream ML workload. However, they assume the existence of KFK relationships that simply must be uncovered. Data Discovery. Data discovery systems find datasets that may be joinable with or related to a base dataset, and uncover relationships between different datasets using dataset schema, samples, and metadata [11,13,14,17,22-24]. These systems typically surface KFK relationships, and do not tune for downstream ML workloads. NLP and Data Management. A recurring aim in data management is to issue natural language commands to interface with structured data [25,36]. Recent work in Neural Databases [57] aims to replace standard databases with Transformer-powered schemaless data stores that are updated and queried with natural language commands. Related to this work are systems that leverage advances in NLP to provide domain-specific functionality, such as converting text to SQL [39,66], correlating structured and unstructured data [64], enabling interpretable ML over structured data [9], or automating data preparation [56]. We focus on the broader problem of context enrichment of entities for downstream tasks. To our knowledge, this has not been formally addressed in the literature, which often assumes entity information has been resolved [57]. CONCLUSION We demonstrate how seemingly unrelated tasks spanning data integration, search, and recommendation can all be viewed as instantiations of context enrichment.
We propose keyless joins as a unifying abstraction that can power a system for general context enrichment, which allows us to view context enrichment as a data management problem. Consequently, we developed and applied Ember, a first-of-its-kind system that performs no-code context enrichment via keyless joins. We evaluate how developing a keyless join enrichment layer empowers a single system to generalize to five downstream applications, with no ML code written by the user.
2021-06-04T01:15:36.369Z
2021-06-02T00:00:00.000
{ "year": 2021, "sha1": "fe935caed47ef090a306d6d09240f76adc43a420", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "fe935caed47ef090a306d6d09240f76adc43a420", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
254801538
pes2o/s2orc
v3-fos-license
NBAS deficiency due to biallelic c.2809C > G variant presenting with recurrent acute liver failure with severe hyperammonemia, acquired microcephaly and progressive brain atrophy Biallelic pathogenic variants in the neuroblastoma amplified sequence (NBAS) gene were first identified in 2015 as a cause of fever-triggered recurrent acute liver failure (RALF). Since then, some patients with NBAS deficiency presenting with neurologic features, including a motor delay, intellectual disability, muscular hypotonia and a mild brain atrophy, have been reported. Here, we describe a case of a pediatric patient diagnosed with NBAS deficiency due to a homozygous c.2809C > G, p.(Pro937Ala) variant presenting with RALF with severe hyperammonemia, acquired microcephaly and progressive brain atrophy. Findings not previously reported in the literature include severe hyperammonemia during an ALF episode, and neurologic features in the form of acquired progressive microcephaly with brain atrophy. The latter raises the hypothesis of a primary neurologic phenotype in NBAS deficiency. Introduction In 2015, biallelic pathogenic variants in the neuroblastoma amplified sequence (NBAS) gene were identified as a cause of fever-triggered recurrent acute liver failure (RALF) (Haack et al. 2015). Staufner et al. (2020) have recently reported results of an international, multicenter study involving 110 patients, including novel and previously published patients. This study shed new light on the pathomechanism and expression of NBAS deficiency. Not only the liver but also other organs and systems, including the skeletal, immunological and central nervous system, could be affected. Regarding neurologic features, a motor delay, intellectual disability, and muscular hypotonia were observed in some patients (Balasubramanian et al. 2017; Capo-Chichi et al. 2015; Haack et al. 2015; Li et al. 2017; Staufner et al. 2016, 2020; Suzuki et al. 2020). MRI of the brain was normal in the majority of patients, while a mild brain atrophy was reported in some of them. Hepatic encephalopathy with hyperammonemia was also transiently observed in some patients secondarily during ALF. However, the natural history of NBAS deficiency, especially regarding the neurologic phenotype, is not known. Here, we describe a case of a pediatric patient diagnosed with NBAS deficiency due to a homozygous c.2809C > G, p.(Pro937Ala) variant presenting with recurrent acute liver failure with severe hyperammonemia, acquired microcephaly and progressive brain atrophy. Case report The patient was the third child of nonconsanguineous Polish parents, born at 39 weeks of gestation by a spontaneous delivery with a birth weight of 3,000 g and head circumference of 33 cm (10-25 pc). At 2 months of age, head circumference was 37 cm (10 pc). Psychomotor development was normal until 10 months of age. At this age, she developed a respiratory tract infection with high fever. Recurrent vomiting and increasing apathy within 2 days after the onset of fever were observed and the child was admitted to hospital. At baseline, laboratory results revealed the presence of anemia (Hgb 6.2 g/dl), thrombocytopenia (PLT 60 × 10 3 /uL), elevated serum transaminases (ALT 7725 IU/L, AST 3024 IU/L), hyperbilirubinemia (total serum bilirubin 1.8 mg/dl, direct serum bilirubin not tested), coagulopathy (INR 3.4) and hyperammonemia (serum ammonia 572 umol/l). She was transferred to the pediatric intensive care unit because of hepatic encephalopathy and acute kidney injury.
The greatest degree of elevation of serum transaminases (ALT 7259 IU/L and AST 12,246 IU/L) was noted on the 2nd day of hospitalization. Severe coagulopathy (INR 9.0) and increasing parameters of cholestasis, reaching up to total serum bilirubin 6.0 mg/dl with direct serum bilirubin 5.1 mg/dl, were noted on the 3rd day of hospitalization. Serum ammonia concentrations were elevated (range 572-662 umol/L) during the first six days of hospitalization (no ammonia scavengers were administered). Full recovery of liver function was noted on the 20th day of hospitalization. Brain MR at that time revealed the presence of extensive symmetrical atrophic cortico-subcortical changes of both hemispheres of the brain (Fig. 1). The patient was discharged home on the 70th day of hospitalization because the ALF was complicated by sepsis. At the age of 3 years, the patient developed a second episode of respiratory tract infection with fever. She was hospitalized in our Institute within 6 days after the onset of fever due to apathy and weakness; no antipyretics had been administered on an outpatient basis because of a suspicion of Reye syndrome. The patient's head circumference was 44.5 cm (below the 3rd pc), height was 94.5 cm (25th pc) and body weight was 12.5 kg (10th pc). Neurological examination revealed psychomotor retardation and muscle hypotonia. At baseline, laboratory results revealed the presence of elevated serum transaminases (AST 3300 IU/L, AST 1520 IU/L), cholestasis (total serum bilirubin 2.0 mg/dl and direct serum bilirubin 1.0 mg/dl), and coagulopathy (INR 1.8). A normal serum ammonia concentration was noted. The patient received a parenteral 10% glucose infusion throughout the hospitalization.
Fig. 1 Upper row: MR brain examination at the age of 11 months, axial T2-weighted images. Hypointensity of the pons and cerebellar peduncles (a), posterior limbs of the internal capsules (b), corpus callosum (not shown), and perirolandic cortex (c) is visible, indicating myelination. Diffuse hyperintensity of the cerebral white matter of both hemispheres is seen. Mild enlargement of the ventricular system, enlargement of the Sylvian fissures, subarachnoid spaces along the convexities and the anterior interhemispheric falx is demonstrated. Bottom row: MR brain examination at the age of 3 years, axial T2-weighted images. Hyperintense signal of the hilum of the dentate nuclei (d) and posterior limbs of the internal capsules (e) suggests degeneration of myelin. Progression of the atrophy of the cerebral hemispheres with severe cortical and subcortical atrophy (e, f).
The follow-up brain MR examination showed progression of the atrophy of the cerebral hemispheres with severe cortical and subcortical atrophy (Fig. 1). Full recovery of liver function was noted on the 8th day of hospitalization. All known causes of pediatric ALF, including inborn errors of metabolism, were ruled out. The TruSightOne NGS panel containing 5000 genes (including 33 genes associated with acute liver failure development) was performed. A homozygous NBAS (RefSeq: NM_015909.3, NP_056993.2) c.2809C > G, p.(Pro937Ala) variant was found. The variant was confirmed by Sanger sequencing to be inherited by biparental transmission. During a control visit in the Outpatient Clinic, at 4 years of age, she was able to sit independently, walk with help, and speak simple words. Ophthalmologic examination revealed no abnormalities; a visual evoked potentials test is planned. Normal serum concentrations of immunoglobulins G, A, and M were noted.
Echocardiography revealed normal structure of the heart. Discussion In this study, we reported an additional patient with fever-triggered recurrent ALF associated with NBAS deficiency. Previously unreported findings include severe hyperammonemia during an ALF episode, and neurologic features in the form of acquired microcephaly and progressive brain atrophy. Staufner et al. (2020) have recently published results of an international, multicenter study defining clinical subgroups and genotype-phenotype correlations in NBAS-associated disease across 110 patients. Three subgroups related to the affected region of the NBAS protein, which differed significantly regarding main clinical features, were explored, including Sec39 (predominant liver phenotype with recurrent acute liver failure), C-terminal (multisystemic phenotype with the presence of short stature, skeletal dysplasia, immunological abnormalities, Pelger-Huët anomaly and optic nerve atrophy), and β-propeller (combined phenotype with RALF and a multisystemic phenotype). The c.2809C > G (p.Pro937Ala) variant identified in our patient was located in the region coding for the Sec39 domain; thus, NBAS deficiency should have a predominant liver phenotype. However, the presence of an acquired microcephaly and progressive brain atrophy questions this hypothesis. Motor and cognitive development was normal in most of the reported patients with NBAS deficiency (Balasubramanian et al. 2017; Capo-Chichi et al. 2015; Haack et al. 2015; Li et al. 2017; Staufner et al. 2016, 2020; Suzuki et al. 2020). Neurologic features, including a mild hyperammonemia, encephalopathy and mild MRI abnormalities (non-progressive brain atrophy with normal results of 1 H-MRS studies), observed in some reported patients could be explained as sequelae of ALF; however, a primary neurological phenotype cannot be ruled out (Balasubramanian et al. 2017; Capo-Chichi et al. 2015; Haack et al. 2015; Li et al. 2017; Staufner et al. 2016, 2020; Suzuki et al. 2020). The presence of an acquired and progressive microcephaly with brain atrophy in our patient raises the hypothesis of a primary neurologic phenotype in NBAS deficiency. On the other hand, brain atrophy could be secondary to acute hepatic hyperammonemic encephalopathy (HE); however, this phenomenon is reversible in the course of HE, while the progression on brain MR suggests primary central nervous system (CNS) involvement. Liver involvement in NBAS deficiency is characterized by recurrent episodes of acute liver failure (liver crises) with normalization of liver function between crises (Balasubramanian et al. 2017; Capo-Chichi et al. 2015; Haack et al. 2015; Li et al. 2017; Staufner et al. 2016, 2020; Suzuki et al. 2020). Based on our case report, we could speculate that the CNS involvement is independent of the liver dysfunction. An early and consistent administration of antipyretics together with anabolic energy management (high glucose and lipids) proved to be highly beneficial for liver function recovery, but it seems to have no effect on the course of CNS involvement. The long-term outcome in NBAS deficiency is unknown. Suzuki et al. have recently reported the oldest patient (34 years old), showing a progressive course of disease (Suzuki et al. 2020). The patient presented with RALF until early childhood, epilepsy diagnosed at 12 years of age, progressive intellectual disability and liver fibrosis with portal hypertension at 20 years of age.
Given the similarities between the clinical presentation of NBAS deficiency and Reye syndrome (defined as an acute non-inflammatory encephalopathy with acute liver failure), we could speculate about the classification of NBAS deficiency as a Reye-like syndrome (Schrör 2007). It is obvious that a primary genetic background as the cause of Reye syndrome was not identified or even considered in the 1960s and 1970s, when the majority of cases were reported (Schrör 2007). Nowadays, such a genetic background is an explanation for recurrent Reye-like symptoms triggered by infections/fever and should be considered. Thus, every patient presenting with Reye-like symptoms should undergo molecular analysis, especially including the NBAS gene. Funding This study was partly supported by CMHI grant S245/17. Data availability All data generated or analyzed during this study are included in this article. Declarations Ethics approval Ethical approval was obtained from the Children's Memorial Health Institute Bioethical Committee, Warsaw, Poland. Conflicts of interest All authors declare no conflict of interest. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2022-12-18T15:04:41.756Z
2021-08-24T00:00:00.000
{ "year": 2021, "sha1": "232299d0d1a1e3826549d60e4eb47dc45800a8cd", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11011-021-00827-z.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "232299d0d1a1e3826549d60e4eb47dc45800a8cd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
55844669
pes2o/s2orc
v3-fos-license
Nonenzymatic Glucose Biosensors Based on Silver Nanoparticles Deposited on TiO2 In the present research, a nonenzymatic glucose biosensor was fabricated by depositing Ag nanoparticles (Ag-NPs) using an in situ chemical reduction method on TiO2 nanotubes which were synthesized by an anodic oxidation process. The structure, morphology, and mechanical behavior of the electrode were examined by scanning electron microscopy and nanoindentation. It was found that Ag-NPs remained both inside and outside of the TiO2 nanotubes, whose length and diameter were about 1.2 μm and 120 nm. The composite was constructed as an electrode of a nonenzymatic biosensor for glucose oxidation. The electrocatalytic properties of the prepared electrodes for glucose oxidation were investigated by cyclic voltammetry (CV) and differential pulse voltammetry (DPV). Compared with the bare TiO2 and Ag-fresh TiO2 nanotubes, the Ag-TiO2/(500 °C) nanotube exhibited the best electrochemical properties in the cyclic voltammetry (CV) results. Differential pulse voltammetry (DPV) results showed that, at +0.03 V, the sensitivity of the electrode to glucose oxidation was 3.69 mA cm−2 mM−1, with a linear range from 20 mM to 190 mM and a detection limit of 24 μM (signal-to-noise ratio of 3). In addition, the nonenzymatic glucose sensor exhibited excellent selectivity, stability, and repeatability. Introduction Blood glucose detection is of great significance in food processing, clinical medicine, and biology [1-3]. In particular, accurate glucose determination for diabetics can be very effective for the detection and treatment of diabetes mellitus [4]. The glucose oxidase (GOD) analytical method is the first and most commonly used way for glucose detection in clinical trials [5,6]. However, GOD is an enzyme which is easily inactivated and denatured, resulting in poor stability and repeatability of enzymatic electrodes [7,8]. Nonenzymatic glucose biosensors can avoid that problem because of the direct catalyzed oxidation of glucose on the surface of the electrode [9-11]. In order to increase specific surface areas and enhance mass transport ability, these electrodes could use nanomaterials as substrates, like carbon nanotubes (CNTs) [12], nanowires [13], mesoporous structures [14], and so forth, and be decorated with metal nanoparticles such as Pt [15], Ni [16], Ag [17], and Cu [18]. TiO2 nanotubes have received more and more attention in chemical reaction and biosensor fields because of their well-aligned nanostructure, large surface area, thermal stability, chemical inertness, and nontoxicity [19,20]. Recently, silver-doped TiO2 nanocomposite structures have attracted much attention not only because TiO2 is a promising material with desirable electronic properties, but also because Ag displays some unique activities in chemical and biological sensing compared with other noble metals such as Ru and Pt [19]. However, the electrocatalytic activity of Ag-doped TiO2 nanotubes has not been intensively reported.
Many approaches have been developed to fabricate different sizes of Ag-doped TiO2 nanotubes, such as sol-gel, "wet" chemical, and ceramic methods. However, drying, heating, or annealing at high temperatures is involved in the preparation process [21,22]. In order to modify the TiO2 nanotubes with Ag nanoparticles, a polyol method at low temperature has been developed. In the polyol process, a polyol such as diethylene glycol [23] or glycerol [24] is used both as solvent and as reducing agent during the reaction, which is conducive to the formation of nanostructures. The Ag+ ions from silver nitrate (AgNO3) can be reduced to metallic Ag. The Ag nanoparticles were deposited successfully on the surface of TiO2. Moreover, the specific surface area is hugely increased by silver nanoparticles deposited on TiO2 nanotubes. Thus, TiO2 nanotubes decorated with silver nanoparticles possess distinct advantages for nonenzymatic glucose sensors. In this work, Ag nanoparticles were deposited successfully on the surface of TiO2 by a polyol process. Scanning electron microscopy (SEM) and nanoindentation were used for characterization. Furthermore, the electrocatalytic activity of the Ag-TiO2 electrode was also evaluated by cyclic voltammetry (CV) and differential pulse voltammetry (DPV). A scanning electron microscope (FE-SEM, Hitachi S-4800) was used to determine the morphology and composition of the samples. Nanoindentation (NanoIndenter G200, USA) was used to determine the mechanical properties. All measurements were conducted at room temperature. Synthesis of TiO2 Nanotube Arrays on Ti Substrate. Before use, the titanium foils were polished with abrasive paper of 400#, 600#, and 800# until the surfaces were smooth and free of scratches, then ultrasonically cleaned in alcohol and double-distilled water for 10 min, respectively, and finally dried in air. Then, titanium sheets were used as the substrate electrode with Pt electrodes as the cathode. Samples were anodized in water/glycerol (1:1 vol.%) mixtures containing 0.3 M NH4F at a potential of 30 V for 3 h. Finally, the as-prepared TiO2 nanotube electrode was annealed at 500 °C (10 °C/min) under a nitrogen atmosphere for 2 h. Preparation of Ag Nanoparticles on TiO2 Nanotubes. Ag nanoparticles on TiO2 nanotubes were prepared by the following process. Firstly, 1 mL AgNO3 (0.04 M) was mixed with 10 mL PVP (0.01 M) and stirred in an ice bath after adding 50 mL of water to the solution. Then, 1 mL NaBH4 (0.06 M) was dropped quickly into the solution, whose colour turned brown, and stirring was continued for 10 min. At last, the above-prepared TiO2 nanotubes (500 °C) were entirely immersed in this electrolyte under static conditions for 7 h, subsequently rinsed with double-distilled water and air-dried, and then Ag-TiO2/(500 °C) was obtained. Characterization of Electrocatalytic Properties of the Ag-TiO2 Nanotube Electrode. The electrochemical measurements were carried out with a conventional three-electrode system. The prepared electrode (TiO2, Ag-TiO2, or Ag-TiO2/(500 °C)) was used as the working electrode, with a platinum electrode as the auxiliary electrode and a saturated calomel electrode (SCE) as the reference electrode in all cases. Results and Discussion 3.1. Morphology, Composition, and Structure Analysis. The morphologies of the as-formed TiO2 nanotubes, Ag-fresh TiO2, and Ag-TiO2/(500 °C) are depicted in Figure 1.
Figure 1(a) clearly showed the prepared vertically aligned TiO2 nanotube arrays with a diameter of 120 nm and a length of 1.2 μm. It is apparent from Figure 1(b) that ordered and evenly distributed Ag nanoparticles with an average diameter of 20 nm are formed preferentially on the exterior mouths of the amorphous TiO2 nanotubes. After the nanotubes were annealed at 500 °C, the Ag nanoparticles were deposited on them under the same method (see Figure 1(c)). Some Ag nanoparticles were dispersed on the pore openings and showed a denser distribution than that in Figure 1(b), while some were deposited into the nanotubes, as indicated by the arrow. It can be presumed that the nanotubes annealed at 500 °C facilitate Ag formation on the tube surface, and the electrocatalytic properties were also improved, as discussed later. Nanoindentation Properties. In recent years, nanoindentation technology has been widely used to measure the hardness and elastic modulus of medical materials, especially in implant devices [25]. A description of the nanomechanical characterization of the samples (TiO2, Ag-fresh TiO2 and Ag-TiO2/(500 °C) nanotubes) is provided. As observed in Figure 2, the displacement of the Ag-TiO2/(500 °C) nanotubes was the shortest and that of the TiO2 nanotubes was the longest when the same load was applied. It is easy to see that the Ag-TiO2/(500 °C) nanotubes were the hardest, and the reason could be ascribed to two aspects. Compared with the non-annealed sample, the higher hardness observed for the Ag-TiO2/(500 °C) nanotubes can be due to the higher hardness of the anatase and rutile phases in samples annealed at 500 °C. The hardness results obtained for the non-annealed sample and the annealed sample were similar to those stated in earlier studies [26]. In addition, it may be ascribed to the deposition of Ag nanoparticles. Both the inside and outside of the nanotubes were attached with Ag-NPs, which densified empty space, resulting in a fixed enhancement of hardness. Electrocatalytic Oxidation of Glucose on the Prepared Electrodes. CVs were used to investigate the catalytic activities of the Ag-TiO2/(500 °C) electrode. Figure 3 showed CVs of the as-formed TiO2 nanotube, Ag-fresh TiO2 nanotube, and Ag-TiO2/(500 °C) nanotube electrodes in the presence and absence of 0.5 M glucose supported by 0.1 M neutral PBS, respectively. Almost no current increase of the TiO2 and Ag-TiO2 nanotubes upon glucose addition was observed in Figures 3(a) and 3(b), which demonstrated that the TiO2 nanotube and Ag-TiO2 nanotube electrodes exhibited no electrocatalytic oxidation activity towards glucose, while the Ag-TiO2/(500 °C) electrode displayed a pair of redox peaks with the anodic and cathodic peak potentials positioned at +0.03 V and −0.3 V in Figure 3(c), which can be ascribed to the oxidation of glucose, indicating that the Ag-TiO2/(500 °C) electrode possessed strong catalytic activity towards glucose. There were two possible explanations for this result. The first possibility was the relatively weak adherence of Ag nanoparticles onto the non-annealed nanotubular surfaces. The other possibility was that a small amount of silver titanates was obtained by depositing Ag nanoparticles on the TiO2/500 °C nanotubes, which indicated that the Ag can not only be deposited but also be doped into the TiO2 nanotubes. The crystal structure distortion increased the state density of the Ag-TiO2/500 °C coating, facilitating the migration of charge carriers, and enhanced the catalytic activity [19].
Figure 4 showed CVs of the Ag-TiO2/(500 °C) nanotube electrode in 0.5 M glucose supported by 0.1 M neutral PBS, 0.1 M H2SO4, and 0.1 M NaOH. Compared with the CVs in 0.1 M H2SO4 and 0.1 M NaOH, the CVs of the Ag-TiO2/(500 °C) nanotube electrode in PBS exhibited two oxidation peaks related to the oxidation of glucose, which indicated excellent electrochemical behavior. Chloride is a usual interfering ion which is preferentially adsorbed on the surface of electrodes, resulting in poisoning and loss of electrocatalytic properties. As observed in Figure 5, in the blank PBS solution, the CVs were characterized by hydrogen adsorption/desorption peaks at negative potentials and a flat double-layer region at positive potentials. After 0.5 M glucose was added, an obvious redox peak appeared from −0.3 V to +0.03 V, proving excellent electrocatalytic oxidation ability, whereas in the presence of chloride ions, when Cl− ions were dropped into the solution, the redox peak vanished and the oxidation of glucose was suppressed, which could be attributed to the preferential adsorption of Cl− compared with glucose. According to the adsorption theory proposed by Wang et al. [27], it could be deduced that when there were no Cl− ions in the solution, glucose molecules would first adsorb on the surface of the Ag-TiO2/(500 °C) nanotube electrode, forming glucose intermediates. As the potential was scanned gradually from −0.3 V to +0.03 V, the intermediates were oxidized, leading to a current increase. When the potential exceeded +0.03 V, the oxidized intermediates were adsorbed on the surface, covering the active sites of the electrode and decreasing the current. When large amounts of chloride ions were present, the poisoned Ag-TiO2/(500 °C) nanotube electrode showed no electrocatalytic properties. 3.4. Amperometric Performance of the Ag-TiO2/(500 °C) Nanotube Electrode for Glucose Oxidation. Differential pulse voltammetry (DPV) was used to determine the sensor outputs at different glucose concentrations. Figure 6 presented the relationship between currents and varying concentrations of glucose. The prepared electrode exhibited linearity for glucose sensing that ranged from 20 mM to 190 mM with a correlation coefficient of 0.9993. The electrode sensitivity calculated from the slope of the calibration curve was 3.69 mA cm−2 mM−1, with a detection limit of 24 μM. Ascorbic acid (AA), uric acid (UA), sucrose, fructose, dopamine, ethanol, and acetaminophen (AAP) are commonly interfering biomolecules which coexist with glucose in human blood. To evaluate the selectivity of the Ag-TiO2/(500 °C) nanotube electrode, the current responses to ethanol, fructose, and AAP were examined. As shown in Figure 7, it was observed that the response signals of ethanol, fructose, and AAP were negligible for glucose determination. The good selectivity of the nonenzymatic sensor was related to the proper working potential used. The reproducibility and stability of the response current of the Ag-TiO2/(500 °C) nanotube electrode were studied. The amperometric response of 10 different Ag-TiO2/(500 °C) nanotube electrodes to 1.0 mM glucose was tested independently. As shown in Figure 8, the relative standard deviation (RSD) was 1.5% for 10 successive measurements, revealing that the preparation method was acceptably reproducible. The long-term stability was explored by measuring a glucose solution, with the electrode stored at room temperature between measurements. Figure 9 showed that the response current maintained about 92% of the initial value after 30 days, demonstrating the good stability of the Ag-TiO2/(500 °C) nanotube electrode based nonenzymatic glucose biosensor.
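For illustration only, the sensitivity and detection limit quoted above are typically derived from a calibration curve in the following way; the data points, electrode area, and blank noise level in this sketch are hypothetical assumptions and are not the authors' measurements.

import numpy as np

# hypothetical calibration points: glucose concentration (mM) vs. peak current (mA)
conc = np.array([20, 40, 60, 80, 100, 120, 140, 160, 180])
curr = np.array([0.08, 0.15, 0.23, 0.30, 0.37, 0.45, 0.52, 0.59, 0.67])

slope, intercept = np.polyfit(conc, curr, 1)   # linear fit of current vs. concentration (mA/mM)
area = 0.5                                     # assumed electrode area in cm^2
sensitivity = slope / area                     # sensitivity in mA cm^-2 mM^-1
noise_sd = 0.001                               # assumed standard deviation of the blank signal (mA)
lod = 3 * noise_sd / slope                     # detection limit at a signal-to-noise ratio of 3 (mM)
print(sensitivity, lod)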
Conclusions In this work, a simple and effective way to prepare Ag nanoparticles deposited on the surface of TiO2 nanotubes was developed. The Ag deposition and annealing processes made the TiO2 nanotubes harder, which is conducive to application under demanding conditions. The electrochemical results demonstrated that the prepared Ag-TiO2/(500 °C) nanotube electrode possessed excellent electrocatalytic performance. The constructed nonenzymatic glucose biosensor exhibited good selectivity, stability, and reproducibility. Because of the simple preparation method and good catalytic performance, such a material has potential application in catalysis and sensor areas.
Figure 5: CVs of the Ag-TiO2(500 °C) nanotube electrode in blank solution, in 0.5 M glucose with the presence of 0.12 M NaCl, and in 0.5 M glucose without 0.12 M NaCl, supported by 0.1 M neutral PBS.
All reagents, including sodium borohydride (NaBH4), silver nitrate (AgNO3), Na2HPO4, NaH2PO4, H3PO4, poly(vinylpyrrolidone) (PVP), and KCl, were of analytical grade and used as received. A 0.1 M phosphate buffer solution (PBS) prepared using Na2HPO4 and NaH2PO4 was employed as the supporting electrolyte. The desired pH of the solution was adjusted with 0.1 M NaOH or 0.1 M H3PO4. All aqueous solutions were prepared with reagent-grade chemicals and double-distilled water.
2018-12-10T06:36:29.569Z
2016-06-19T00:00:00.000
{ "year": 2016, "sha1": "40d47b5a6aa12e9ddf7153f28a4628c00fe30251", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/jnt/2016/9454830.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "40d47b5a6aa12e9ddf7153f28a4628c00fe30251", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
7710336
pes2o/s2orc
v3-fos-license
A school intervention for mental health literacy in adolescents: effects of a non-randomized cluster controlled trial Background "Mental health for everyone" is a school program for mental health literacy and prevention aimed at secondary schools (13-15 yrs). The main aim was to investigate whether mental health literacy could be improved by a 3-day universal education programme by: a) improving naming of symptom profiles of mental disorder, b) reducing prejudiced beliefs, and c) improving knowledge about where to seek help for mental health problems. A secondary aim was to investigate whether adolescent sex and age influenced the above mentioned variables. A third aim was to investigate whether prejudiced beliefs influenced knowledge about available help. Method This non-randomized cluster controlled trial included 1070 adolescents (53.9% boys, M age = 14 yrs) from three schools in a Norwegian town. One school (n = 520) received the intervention, and two schools (n = 550) formed the control group. Pre-test and follow-up were three months apart. Linear mixed models and generalized estimating equations models were employed for analysis. Results Mental health literacy improved contingent on the intervention, and there was a shift towards suggesting primary health care as a place to seek help. Those with more prejudiced beliefs did not suggest places to seek help for mental health problems. Generally, girls and older adolescents recognized symptom profiles better and had lower levels of prejudiced beliefs. Conclusions A low-cost general school program may improve mental health literacy in adolescents. Gender-specific programs and attention to the age and maturity of the students should be considered when mental health literacy programmes are designed and tried out. Prejudice should be addressed before imparting information about mental health issues. Background Public knowledge and beliefs about mental disorders, also termed mental health literacy, may be crucial to the early recognition of mental health problems, and to the seeking and acceptance of mental health care [1,2]. Most mental disorders show their first signs, and increase in prevalence, from childhood through adolescence [3]. This is true for anxiety, depression, schizophrenia [3] and eating disorders [4]. Universal health promotion/prevention programmes are aimed at all members of a population or cohort, in contrast to targeted programmes, which are aimed at specific risk groups or sub-groups [5]. The school is the obvious arena for universal programmes [6,7]. A considerable research literature exists on health promotion programmes aimed at schools, mainly from English-language countries [5]. Norwegian health authorities have recommended a series of programmes for use in schools [8]. The efficacy of some programmes in Norwegian samples has been studied [9,10]. The purpose of the present study was to examine the possible effect of the universal school programme "Mental health for everyone" [11]. Mental health literacy Jorm [1,2] defines mental health literacy as: a) recognition of mental disorder, knowledge and beliefs about b) risk factors and causes, c) self-help interventions and d) professional help available, e) attitudes which facilitate recognition and appropriate help-seeking, and finally f) knowledge about how to seek mental health information.
According to this definition, there should be a direct connection between knowledge about and attitudes towards mental health problems, and the ability both to recognise symptoms of mental disorder and to seek help appropriately. Recognition of mental disorder Recognition of mental disorder has been operationalized [12] as the ability to identify and name a mental disorder based on a written case vignette. Studies have reported conflicting results for the identification of disorders from such vignettes. Jorm and colleagues [12] found that 40% of an adult Australian sample identified depression and 30% identified schizophrenia. Lauber and colleagues [13] found that 40% of a Swiss sample aged 16-76 identified depression and 75% identified schizophrenia. The highest level of case recognition for depression (81%) was reported in rural residents in Queensland, Australia by Bartlett and colleagues [14]. In contrast, Suhail [15] found that only 20% of Pakistani people were able to identify depression, and even fewer (5%) were able to identify psychosis from a written vignette, and furthermore that level of education was a predictor of correct identification. In young Australian respondents, the ability to identify depression (50%) has been reported to be better than the ability to identify psychosis (25%) [16]. US adolescents had problems recognising both a depression vignette (42% correct) and an anxiety vignette (28% correct) as a mental health problem [17]. Women of all age groups are generally better than males at identifying depression [16,18]. Consequently, the ability to recognise a mental disorder seems to vary with gender, age, country, and level of education. Only a few studies report symptom knowledge in adolescents. Consequently, more research is needed on this age group. Prejudiced beliefs Prejudice, or stigma, is a complex construct, and can theoretically be broken down into cognitive, affective and behavioural domains. The cognitive part refers to stereotyped knowledge and beliefs, the affective domain refers to negative affects like aversion, embarrassment, shame, fear and aggression, and the behavioural aspect refers to avoidance and isolation of, and to discrimination against, the object of prejudice [19]. Sartorius [20] suggests that the stigma attached to mental illness is the main obstacle to the provision of care for mental health problems. In a review of the literature, Rüsch and colleagues [21] suggested that both self-stigma and fear of stigma are barriers against using health services. Self-stigma refers to perceiving one's symptoms as signs of being weak, stupid, lazy or even evil, rather than understanding that one has a mental health problem that could be helped by health professionals. In a study of Norwegian high-school students, young men had higher levels of stigmatizing attitudes towards mental health problems, compared to young women [9]. In a Canadian study of adults, stigmatizing attitudes about depressed individuals correlated negatively with identification of a depressive case vignette [22]. According to Thornicroft and colleagues [23], stigma refers to problems of ignorance, prejudice and discrimination, in the way that lack of knowledge and prejudiced attitudes towards people with mental illness result in discrimination. Kroger and Marcia [24] connect stereotypes to the earlier and immature developmental stages of identity formation in adolescence; thus age, or cognitive and emotional maturity, may also be a crucial factor in studying prejudice.
Based on theories for understanding mental health stigma, Corrigan and colleagues [25] suggest three levels of programs for stigma change: public programs, programs targeted at specific groups, and individualized programs for coping with self-stigma. The Norwegian health care system for mental health In Norway, adolescents have easy access to the primary health care system through the school nurse, and all have a designated general practitioner (GP). All schools have school counsellors. The educational psychological services are only available through referral, and deal mainly with educational problems. Some communities, like the town in which the present study was performed, have low-threshold psychological counselling services for youth [26]. For more severe emotional and behavioural problems there are specialist mental health services available, to which you have to be referred by your GP. Thus, the GP, the school nurse, the school counsellor and low-threshold youth counselling services represent the primary level of care in Norway. The GP and the school counsellor are the "gatekeepers" to the specialist mental health services, and to the educational psychological services, respectively. Primary health care is free of charge, except for the GP, where a small fee is charged. Out-patient specialist mental health care is free of charge for citizens below age 18. Health service use has been reported for the total cohort of North Norwegian adolescents aged 15-16: 25% of these had visited the school nurse, approximately 50% had visited their GP, and nearly 6% had seen a psychologist or a psychiatrist (specialist services) in the preceding year [27]. Turi and colleagues [27] also found that girls utilised the school nurse and GP more than boys, while no gender difference was found for the use of specialist services. The relationship between mental health knowledge, prejudice and help-seeking Recognition of symptoms and lack of fear of stigma may be crucial to adequate help-seeking for mental health issues. Stigmatization, self-stigmatization, embarrassment and shame are often seen in connection with mental disorders. After a mental health teaching programme, adolescents showed more understanding and empathy, and used fewer negative expressions to describe mental health problems [28], demonstrating a path between knowledge and prejudice. Depressed subjects reported feeling embarrassed about seeking professional help for themselves, and had negative expectancies about how other people would react to them [29]. Depressed subjects reported a lower probability of seeking help from professional sources, compared to non-depressed subjects, probably at least partly because of self-stigmatization [29]. Thus sufferers from mental health problems may hinder themselves from proper help-seeking. Knowledge should be vital to help-seeking. If you do not recognise the symptoms and understand that they are signs of a health problem, the probability of consulting health professionals is low. Olsson and Kennedy [17] found that those who recognized a disorder as a mental health problem had a much higher probability of suggesting help-seeking for the problem than those who did not identify the health problem. Gender differences in these aspects of mental health literacy have also been demonstrated.
In their study of young Australians, Cotton and colleagues [16] found that roughly half of both women and men suggested seeing a doctor or a specialist for symptoms of depression, while two thirds of the young women, but only half of the young men, suggested seeing a doctor or a specialist for the treatment of psychosis. Burns and Rapee [18] found that girls were more likely to suggest seeing a counsellor for depressive symptoms, while both genders endorsed other health services equally. School programmes for mental health literacy Several universal school programmes for early prevention of behavioural and emotional problems are available. These programmes are more or less based on psycho-educational and experiential principles. Some of the programmes available in Norway are translated and adapted, while others were developed in Norway, like the program presented here, "Mental health for everyone" [11], aimed at students in secondary schools, 13-15 year olds. Mental health for everyone The universal mental health promotion program "Mental health for everyone" is available online in Norwegian, from a governmental website, free of charge [11]. Both an English-language and an Arabic-language translation are available from the Norwegian Council of Mental Health. The program is based on Antonovsky's theory of salutogenesis [30]. Rooted in positive psychology, the salutogenetic perspective aims at health promotion through empowerment [31]. The basic principle of empowerment health education [32] is to engage participants in group experiences and group dialogues, in order to stimulate control and beliefs in the ability to change their own behaviour. Meta-analyses of the effectiveness of health promotion programmes have confirmed that programmes focusing on positive mental health, and engaging students in practical tasks and activities, are more likely to succeed than programmes dominated by lectures and the delivery of knowledge [5]. The aims of "Mental health for everyone" are fourfold: 1) To contribute to the prevention of mental disease, 2) To challenge the attitudes and prejudices against mental health problems and the mentally ill, 3) To contribute to openness and confidence about mental health issues, 4) To impart knowledge about mental health services and availability of help for mental health problems. The available material for teachers [11] includes three packages with student tasks and video material, one for each grade (8th to 10th) of Norwegian secondary school. Each package has a basic theme: For the 8th grade the themes are Self-awareness and Identity; for the 9th, Being different, and Loneliness; and for the 10th grade, Fear of the Unknown. The pedagogy of "Mental health for everyone" was chosen and organized so as to actively engage and include all students. Consequently, the themes and tasks are meant to be varied and engaging, and to capture the attention of the students throughout three consecutive school days. The pedagogy includes individual tasks, group tasks and plenary sessions, and illustrative video material is included. When implementing the school package for the first time, schools are advised to use the 8th grade programme for all levels, since the programmes build upon each other, and this was done in the present study. The theme Self-awareness and Identity for the 8th grade has three subsidiary themes: Well-being, mental health problems, and mental disorders.
For these themes the teachers are free to choose among a variety of tasks, whichever they find most suitable for their class. All tasks are relatively short, and are intended to catch attention and inspire reflection on the chosen theme. Examples of individual student tasks for the theme identity and well-being are: 1) to make a play-list of her or his favourite music, or 2) to bring dispensable items symbolizing favourite themes (e.g. movie or concert tickets, postcards) to school and use them for a collage representing him or herself. An example of a plenary task is to arrange a catwalk where the commentator has been secretly instructed to comment only on the positive personal and inner qualities of the "model", and not on looks; and the audience has been instructed to cheer, no matter what happens. After the show the class discussion is aimed at the experience of valuing inner rather than surface qualities. Lectures on the most common or well-known mental disorders are given for each level in the programme, and in the 8th grade anxiety, depression, eating disorders and schizophrenia are presented. In consecutive years other common disorders, like ADHD, bipolar disorder, psychosomatic disorders and problems with self-harm and suicide, are presented. Each year, practical information about where and how to find help is included for all levels. However, the main focus is on positive mental health, and only one lesson in the three-day programme is aimed at information about specific mental disorders. Aims The primary aim of this study was to investigate whether adolescent mental health literacy could be improved by means of a universal education programme by: a) improving naming of symptom profiles of mental disorder, b) reducing prejudiced beliefs about mental illness, and c) improving knowledge about where to seek help for mental health problems. A second aim was to investigate whether adolescent sex and age influence mental health literacy. The third aim was to investigate whether prejudiced beliefs about mental illness affected knowledge about available help. Socio-economic and cultural setting of the study Norway is an egalitarian country with rather high standards of living and with small socio-economic differences. The Norwegian compulsory school system is financed and regulated by the authorities, and only 2% of Norwegian children and adolescents attend private schools. Compulsory primary school covers grades 1st (age 6) through 7th (age 12), and secondary school covers grades 8th (age 13) through 10th (age 15). The study was conducted in 2005 in a North Norwegian town with a population of 70,000. North Norway has a multi-ethnic population with a Norwegian majority, an indigenous Sami minority of about 10%, and other ethnic groups representing around 5-10% of the population. The Sami population is well integrated, and has education and living standards equal to those of the majority population. In a recent population-based study from North Norway, no differences in socio-economic status or internalisation symptoms were found between indigenous Sami and ethnic Norwegian adolescents [33]. The participating schools were situated in residential and suburban areas. These schools scored slightly above the national average on standardized academic tests when such results became available in 2008 [34]. Design The study was a cluster controlled trial, since randomization was not possible at the individual level. The design is presented in Figure 1.
The entire school was assigned to either the intervention or the control group. A pre-test was performed by questionnaire in both control and intervention schools. The three-day intervention followed immediately after the pre-test. The follow-up was performed in both intervention and control schools two months after the pre-test. Thus, two measurements were performed in all schools. Procedure Recruitment Four secondary schools with a total of 1500 students were invited. One school with approximately 400 students declined to participate. The remaining three schools (1100 students) agreed to participate in the study. The largest school gave consent for the school to participate as an intervention school. The two remaining schools were assigned to the control group. The researchers had to yield to the fixed school schedules, and the principals of the schools decided when the data collection and intervention could be performed. The data were collected three months earlier in the school year in the intervention school than in the control schools. Briefing of teachers prior to the intervention Three of the authors (CB, LIJ and YA) performed the data collection, and also briefed the teachers at the intervention school one month prior to the intervention. Before the briefing, the teachers received the manual for "Mental health for everyone", the teacher's guide, and written information about the four mental disorders: depression, anxiety, eating disorders and schizophrenia. The aim of the meeting was to present the school package, to answer any questions, and to inform about the research project. The participants were also given one of the tasks in the manual, with the purpose of giving the teachers a "hands-on" experience with the programme and of inspiring discussion. Emphasis was given to the importance of respect for the personal integrity of the students when approaching the theme of mental health. The teachers were informed about the structure of the mental health care system and where and how to seek help. Teachers at each grade subsequently met in groups with the authors to plan for the three-day intervention. Implementation and mental health literacy lectures The major part of the three school days was spent engaging the students in tasks and activities chosen from the manual by their class teachers. The three co-authors were at the disposal of the teachers for imparting information about mental disorders, mental health problems and the available mental health services. For each class, one school lesson was dedicated to information about the clinical picture, epidemiology and treatment of anxiety, depression, eating disorders and schizophrenia. In 17 of the 23 classes, the teachers asked the authors to lecture about the mental disorders and the available help system. Thus in almost ¾ of the classes this part of the curriculum was taught in a more or less uniform manner by the three co-authors, who were graduate students in clinical psychology. In the remaining ¼ the class teachers taught these subjects after being briefed by the same three authors. Measurements and anonymity At the intervention school the pre-test was performed at the beginning of January, on the morning of the same day the intervention started. The follow-up was performed in March, two months after the intervention. Teachers distributed and collected the questionnaires during the same school lesson.
In school B, the pre-test was performed in April, and the follow-up was performed in June. In school C the corresponding data were collected in June (end of academic year) and August (beginning of the next academic year). Consequently, the data collection for both the pre-test and the follow-up in intervention school A was performed 3 and 5 months earlier in the year than in control schools B and C, respectively. In all schools the interval between pre-test and follow-up was 2 months. The authors administered the questionnaires in the control schools. For identification and anonymization, students were given ID-numbers. The list linking names and numbers was kept by the teacher. The ID-number followed the pupil through the pre-test and the follow-up. After the follow-up, the teachers were instructed to destroy the list linking names and numbers, to safeguard the anonymity of the respondents. Ethics The research project was approved by the Regional committee for ethics in medicine in Health region Northern-Norway. Informed consent was obtained by sending an information letter to all parents and students, informing them about the study and that participation was voluntary. The majority of the students were minors (below age 16) according to Norwegian health law. The response procedure guaranteed total anonymity for the individual respondents. Informed consent was considered appropriate for this study by the ethics committee. Respondents were informed that participation was voluntary and that they could refrain from answering any question. Instruments A 66-item questionnaire, of which 7 questions were open-ended, was employed. Demographic variables collected were age, sex, school and grade, and a constructed ID-number. Measurement of mental health literacy Symptom profile recognition. Four symptom profiles were presented in the questionnaire, and participants were asked open-ended questions to name the disorder (correct answers in parentheses): "Name the disorder characterized by lack of energy, problems concentrating, lack of initiative, and by sadness and withdrawal from social activities." (Depression), "Name the disorder characterized by delusions, disturbed thoughts, and strange sensory experiences." (Psychosis/schizophrenia), "Name the disorder characterized by a feeling of uneasiness, feeling of panic, of fear and rapid heart-beat." (Anxiety), and, finally, "Name the disorder characterized by fear of weight gain, by low food intake and extremely low weight." (Anorexia nervosa/eating disorder). No response alternatives were offered. For each correct answer, 1 point was given, and the mean of the four answers constituted the scale Knowledge about mental disorders, with a minimum value of 0 and a maximum value of 1. The internal consistency of the scale was fairly high, Cronbach's alpha = .80. Prejudiced beliefs. Items measuring prejudiced beliefs towards mental illness were constructed by the authors in collaboration with the Norwegian Council for Mental Health. The scale consisted of four statements, considered to be representative of commonly encountered prejudiced beliefs. The statements were rated on a 5-point Likert scale ranging from "disagree completely" (score 0) to "totally agree" (score 4). The first two statements: "All who have a mental illness should be committed to a mental hospital." and "All schizophrenics are violent."
were inspired by a debate in Norway, following a tragedy when a newly discharged psychiatric patient stabbed and killed a fellow passenger and wounded several others on a tram in Oslo. The last two statements: "Those who become mentally ill are weak people." and "You must really be in trouble if you see a psychologist", were inspired by comments from adolescents participating in a pilot of the present study [35]. Exploratory factor analysis for these four items was performed, and only one principal component emerged (Eigenvalue 1.8). The four items loaded fairly equally on the component (.61-.71). The mean score of these four items was calculated, constituting the scale Prejudiced beliefs, ranging from 0.0 (no prejudiced beliefs) to 4.0 (maximum prejudiced beliefs). In the present sample Cronbach's α for the scale was .56 at pre-test and .76 at follow-up. Although the internal consistency at pre-test was questionable, the α at follow-up was acceptable. Furthermore, the one principal component solution for these four items indicates that they constitute a shared construct. Since prejudice is a complex construct, as described in the introduction, a very high internal consistency was not to be expected. Adding more items with similar content would not necessarily have guaranteed an increased α either. This was demonstrated by Andersson and colleagues [8], who adopted this scale. They added two items: "It is difficult to talk to people with mental health problems" and "ADHD is caused by bad manners", and obtained a similar one principal component solution, similar loadings, and an α of .78 in a representative sample of Norwegian 16-18-year-old high-school students. Knowledge about where to seek help for mental health problems. The questionnaire included open-ended questions about the mental health care system. The answers to the item: "There are places where you can seek help for mental health problems. Write down the places that you know of." were employed in the present study. No response alternatives were given, so the adolescents had to formulate their own answers. Answers were sorted into four categories. All who left this question unanswered, or who answered "I don't know", were placed in the category "No answer given". All who mentioned parents, siblings, other family members, friends, self-help, the Internet, or other non-professional sources of help, were placed in the category "Home, self-help, internet". All who mentioned the GP, school nurse, school counsellor or low-threshold counselling service, were placed in the category "Primary care". Finally, the category "Specialist care" included all answers mentioning health care requiring a referral from the primary health care system: the Child and Adolescent Mental Health Out-patient Clinic or Hospital, psychiatrists, psychologists or educational psychologists. Treatment of missing data For cross-sectional analyses at pre-test and post-test, data from all students who returned their questionnaire were included. Missing data when calculating scale scores were treated as follows: On the Symptom profile recognition and Prejudiced beliefs scales, only one missing item out of four was allowed, and mean scores were calculated based on the items that were answered. Statistical analyses IBM SPSS version 19.0.0 was employed for all statistical analyses. Simple between-group differences were examined using chi-square tests for dichotomous variables and analysis of variance (ANOVA) for continuous variables.
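As a concrete illustration of the scale scoring and the missing-item rule described above, the following is a minimal sketch in Python rather than the SPSS syntax actually used; the synthetic data and all column names (e.g. dep_correct, prej_1) are hypothetical assumptions, not taken from the study.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical pre-test data: four 0/1 recognition items and four 0-4 prejudice items
recognition_items = ["dep_correct", "scz_correct", "anx_correct", "eat_correct"]
prejudice_items = ["prej_1", "prej_2", "prej_3", "prej_4"]
df = pd.DataFrame(rng.integers(0, 2, (10, 4)), columns=recognition_items)
for item in prejudice_items:
    df[item] = rng.integers(0, 5, 10).astype(float)
df.loc[0, "prej_4"] = np.nan  # one respondent skipped a single prejudice item

def scale_mean(data, items, max_missing=1):
    """Mean of the answered items; missing if more than max_missing items are unanswered."""
    n_missing = data[items].isna().sum(axis=1)
    return data[items].mean(axis=1, skipna=True).where(n_missing <= max_missing)

def cronbach_alpha(data, items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    complete = data[items].dropna()
    k = len(items)
    return k / (k - 1) * (1 - complete.var(ddof=1).sum() / complete.sum(axis=1).var(ddof=1))

df["recognition"] = scale_mean(df, recognition_items)   # 0-1 scale
df["prejudice"] = scale_mean(df, prejudice_items)       # 0-4 scale
print(cronbach_alpha(df, prejudice_items))

Under this rule, a respondent who skips one prejudice item still receives a scale score based on the three answered items, while a respondent skipping two or more items does not, matching the one-missing-item allowance described above.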
Analyses of change (pre-test to follow-up) were conducted using linear mixed model regression analyses for continuous outcome variables, and generalized estimating equation (GEE) regression models for dichotomous outcome variables. A logit link function and a binomial distribution were specified for the latter. Robust estimation of standard errors (Huber-White correction) was used. Participants and attrition The recruitment and attrition of participants is presented according to the CONSORT guidelines in Figure 2, and the distribution of those who returned usable data on any measurement is presented in Table 1. Out of 4 schools (n = 1500), three schools (73% of students) agreed to participate (n = 1100). A total of 1070 students (97%) out of the eligible 1100 participated in the pre-test, the follow-up, or both (Figure 2). All 520 students at the largest school (A) received the intervention, while 550 students from the two smaller schools (B and C) formed the control group. Out of the 1070 that returned usable data, the participation rate was significantly higher in the intervention group (90%) than in the control group (85%) (χ2(1 df) = 7.68, p < .005). There were more boys (55%) than girls (45%) in both the intervention and the control group. All students present at school on the day of the pre-test were included. The pre-test was done in the first lesson, followed by the 3-day intervention for those assigned to the intervention group. Thus, there was no attrition from the intervention group from pre-test to intervention. At follow-up, usable data were obtained from 889 (83.1%) of the 1070 students. The participation rate at follow-up was nearly equal in the intervention (83.5%) and control (82.7%) group. At follow-up 476 (53.5%) boys and 413 (46.5%) girls participated in the study. Out of the 1070 that participated at either pre-test or follow-up, or both, 834 (77.9%) returned complete data on both occasions, while 236 (22.1%) returned complete data on only one occasion. The participation rate for both measurements was relatively equal in both groups, with 399 (76.7%) returning complete data at both pre-test and follow-up in the intervention group, and 445 (80.9%) in the control group (NS). No statistically significant differences on outcome variables were found between those who had complete data at pre-test and missing data at follow-up, but there was a tendency for those who had missing data at follow-up to have more prejudiced beliefs (F = 3.521, p = .06). On the other hand, those who did not participate at pre-test, but who returned valid data at follow-up, had less symptom profile knowledge (F = 16.15, p < .0001) and more prejudiced beliefs (F = 8.27, p = .004) compared to those who returned valid data on both occasions. Sample characteristics In the intervention group, the age range was 12-16 years (M = 14.06, SD = .85), while the age range in the control group was 13-17 years (M = 14.29, SD = .82). The control group was significantly older than the intervention group by 3 months (ANOVA, F = 17.08, p < .001). This age difference reflects that the data were collected later in the school year in the control schools than in the intervention school. Table 2 shows the numbers of valid and missing cases for the outcome variables at pre-test and follow-up. At pre-test 87% of the sample returned usable data, while 83% delivered usable data at follow-up. Identification of symptom profiles First, correct identification of the four symptom profiles was studied (Table 3).
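Before turning to those results, the change analyses described above (a linear mixed model for the continuous scale scores, and a GEE with a logit link for dichotomous outcomes) can be illustrated with the minimal sketch below. It uses Python's statsmodels rather than the SPSS procedures actually employed, and the long-format data frame, variable names and synthetic values are purely hypothetical.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
long_df = pd.DataFrame({
    "student_id": np.repeat(np.arange(n), 2),
    "time": np.tile([0, 1], n),                       # 0 = pre-test, 1 = follow-up
    "sex": np.repeat(rng.integers(0, 2, n), 2),       # 0 = girl, 1 = boy
    "grade": np.repeat(rng.integers(8, 11, n), 2),
    "group": np.repeat(rng.integers(0, 2, n), 2),     # 0 = control, 1 = intervention
})
long_df["recognition"] = rng.uniform(0, 1, 2 * n)     # 0-1 scale score
long_df["primary_care"] = rng.integers(0, 2, 2 * n)   # mentioned primary care (0/1)

# Continuous outcome: linear mixed model with a random intercept per student;
# the group:time coefficient corresponds to the intervention effect.
mixed = smf.mixedlm("recognition ~ sex + grade + group * time",
                    data=long_df, groups=long_df["student_id"]).fit()
print(mixed.summary())

# Dichotomous outcome: GEE with a binomial family (logit link); statsmodels
# reports robust (Huber-White sandwich) standard errors by default.
gee = smf.gee("primary_care ~ sex + grade + group * time",
              groups="student_id", data=long_df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())

As in the models reported below, a significant group x time term, rather than the group main effect alone, is what would indicate an intervention effect.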
At pre-test eating disorder was identified by more than half of the students in both the control group and the intervention group. The second most recognized disorders at pre-test were depression and schizophrenia, identified by one fifth to one third of the students. Anxiety disorder was only identified by 12-13% of students at pre-test. Correct naming of schizophrenia, depression and eating disorder was significantly more common in the intervention group at pre-test, while the anxiety disorder profile was equally unrecognized in the intervention and control group. Table 3 also shows symptom profile recognition at follow-up. For all four disorders, the proportions of correct profile recognition remained the same at pre-test and follow-up in the control group. For all symptom profiles the differences between intervention group and control group were larger at follow-up. This is shown by higher proportions for correct identification in the intervention group, and by a substantial increase in the χ2 statistics for the differences between intervention and control group. Mean scores for correct symptom profile recognition were calculated. To study the possible impact of the intervention on symptom profile recognition, mixed model analyses were performed (Table 4 and Table 5). For all models, the variables sex (girl = 0, boy = 1), age, school (A, B or C), grade (8th, 9th or 10th), group (intervention vs control), time (pre-test vs post-test) and group x time (the interaction between group status and time) were entered as independent variables. In Table 4, descriptive statistics of mean scores for symptom profile recognition at pre-test and follow-up, as calculated by generalized mixed model analysis, are presented. In the control group a mean score of .31 reflects that the control school adolescents recognized slightly less than one third of the symptom profiles correctly at both pre-test and follow-up. The score .38 reflects that the intervention group identified somewhat more than one third of the profiles correctly at pre-test. Table 4 also shows that there was no change from pre-test to follow-up in the control group, while the mean recognition of symptom profiles nearly doubled in the intervention group (p < .0001). The model for change over time in symptom profile recognition is presented in Table 5. School was controlled for, but showed no effect, and the variable was thus removed from the model. The independent variable group shows the main effect of group membership, the time variable shows the effect of time passing between pre-test and follow-up, while the interaction term group x time can be interpreted as the effect of the intervention. Both the effects of group and group x time were statistically significant, and the interaction term had the strongest beta value. This means that the intervention had an independent and stronger effect than the initial group differences on the outcome variable. No significant effect of age was found. However, since grade and age are highly correlated, the moderate, but significant, effect of increasing school grade from 8th to 10th should be read as an age effect. Sex had the second strongest effect on symptom profile recognition, in that girls recognized symptom profiles better than boys. Prejudiced beliefs As shown in Table 2, the observed mean score for prejudiced beliefs was 2.3 (SD .77) at pre-test and 2.2 (SD .85) at follow-up.
When prejudice is measured on a 5-point Likert scale from disagree completely (0) to agree completely (4) with a prejudiced statement, a score of 2 means that the respondents neither completely reject nor completely accept the prejudiced statements, while a score above 2 means that they tend to agree. To study the possible impact of the intervention on prejudiced beliefs, mixed model analyses were performed (Table 6 and Table 7). For all models, the variables sex (girl = 0, boy = 1), age, school (A, B or C), grade (8th, 9th or 10th), group (intervention vs control), time (pre-test vs follow-up) and group x time (the effect of the intervention) were entered as independent variables. Table 6 shows that there was a small decline in prejudiced beliefs in the control group (p < .02), and a marked decline from pre-test to follow-up in the intervention group (p < .0001). The model for change over time in prejudiced beliefs is presented in Table 7. The independent variables entered in the model were the same as for symptom recognition. School was controlled for, but showed no effect, and the variable was thus removed from the model. Group, time and group x time all contributed significantly to the reduction in prejudiced beliefs from pre-test to follow-up, meaning that the difference between the control and intervention group, the passing of time, and exposure to the intervention all contributed to change in prejudiced beliefs. As for the previous model, the effect of age was not significant, but was probably covered by the effect of increasing grade. Prejudice decreased with increasing school grade. Finally, sex contributed separately to prejudiced beliefs. Boys held more prejudiced beliefs than girls. Places to seek help for mental health problems In Table 8, the distribution of mentioned places to seek help for mental health problems is presented. At pre-test more students in the intervention group mentioned specialist psychological or psychiatric health care, and more in the control group left the open-ended question unanswered, i.e. no place mentioned. Otherwise the groups were equal at pre-test. At follow-up, the weight of answers in the intervention group seems to have shifted from specialist health care at pre-test towards more mentioning of primary health care at follow-up. At follow-up there were also slightly more in the intervention group who mentioned home, self-help, internet as places to turn to for help, compared to the control group. The distribution of answers in the control group remained unchanged from pre-test to post-test. In Table 9, results from generalized estimating equation models for the spontaneous mentioning of places to seek help are presented. The four categories were treated as mutually exclusive, hence yielding four analyses. The categories were: 1) No places mentioned, 2) home, self-help, internet (e.g., parents, siblings, other family, friends, self-help, the Internet) as a place to seek help, 3) primary health care (e.g., school nurse, GP, school counsellor, low-threshold community counselling service), and 4) specialist health care (i.e. clinical psychologist, psychiatrist, child and adolescent mental health service). Each category was compared with all the others. The variables of main interest were group (intervention vs control group), time (pre-test vs follow-up), and the group x time interaction (effect of intervention). A significant interaction would support a differential change over time, and possibly confirm a hypothesis of intervention effects.
The following variables were additionally included as covariates (but removed if not significant): school, grade, age, gender and prejudiced beliefs. The means referred to below are means calculated in post-hoc tests, and correspond to the proportions of the sample answering in the four categories presented in Table 8 (M intervention = mean proportion in intervention group, M control = mean proportion in control group, M pre = mean proportion at pre-test, M follow-up = mean proportion at follow-up). No places mentioned (leaving blank the question: "There are places where you can seek help for mental health problems. Write down the places you know of"): The group factor was significant (M intervention = .18 and M control = .24), but the interaction term was not. Thus, the intervention had no effect on this category, meaning that those who gave no answer at pre-test continued to not answer this question after the intervention. The covariate prejudiced beliefs was significant, indicating that respondents with higher scores on prejudice more often chose the response category no places mentioned. Home, self-help, internet: The time x group interaction was significant, indicating that change was contingent on the intervention. Post-hoc tests indicated no change from pre to post in the control group (M = .08 at both time points), while significantly more subjects mentioned this category following the intervention (M pre = .05 to M follow-up = .10, p < .001). None of the covariates affected the estimates. Primary health care: Again, the time x group interaction was significant. There was no significant change in the control group, while the intervention group mentioned primary health care more often at follow-up. The estimates were adjusted for the following significant covariates: older age, being female and having less prejudiced beliefs increased the probability of mentioning primary health care, such as the GP, school nurse, school counsellor or other counselling services, as places to turn to for help. Specialist health care: Both group and the group x time interaction were significant, hence indicating an intervention effect. Again, the change in the control group was not significant (M pre = .42, M follow-up = .41), while the intervention group mentioned this category significantly less often following the intervention (M pre = .54, M follow-up = .35, p < .001). Gender was the only significant covariate, indicating that boys mentioned specialist health care more often than girls. Discussion The main finding in this study was that mental health literacy in terms of symptom profile identification, prejudiced beliefs, and knowledge about where to seek help, changed after a three-day universal school intervention aimed at secondary school students. Furthermore, the present study demonstrated that prejudiced beliefs might function as a buffer against gaining knowledge about the mental health care system. Younger students and males were more prejudiced and had less knowledge. Effects of intervention upon mental health literacy Identification of symptom profiles changed contingent on the intervention, and most probably on the information delivered during the three-day school programme. Knowledge increased substantially for anxiety, depression and schizophrenia, and moderately for anorexia. However, anxiety and depression are by far the most prevalent disorders among adolescents [3]. Our data show a need for knowledge about internalising disorders among adolescents, and that this need can relatively easily be met.
Still, adolescents also need to know more about schizophrenia and other psychoses, not least because sufferers from psychoses are more often the victims of public stigma than sufferers from internalisation disorders. Anxiety disorder, one of the most prevalent mental disorders among adolescents, was the least known, identified by only one in ten adolescents. Depression, the most prevalent mental disorder in the general population, was only identified by one third of the adolescents. Interestingly, the symptoms of psychosis (schizophrenia) were almost as easily identified as those of depression. Anorexia nervosa was recognized by a majority of the adolescents. Both depression and schizophrenia were identified by a smaller proportion in our sample of Norwegian adolescents than by young adult Australians [16]. One reason for this may be that in our study only a list of cues was presented, and not a more detailed case vignette, as in the Australian study. Another reason may be that the Australian sample was older, and thus probably more experienced and knowledgeable, than the youths in the present study. A third explanation may be that the level of mental health literacy is higher in Australia than in Norway, due to information campaigns aimed at mental health literacy and early detection of mental disorder [36]. It is however noteworthy that symptom recognition increased equally for the depression and the schizophrenia profile after the intervention. Anxiety disorders and anorexia nervosa were the least and most recognized profiles in our study. A similarly low identification of anxiety disorder was reported in a US sample [17]. It is a paradox that one of the most prevalent disorders in youths, anxiety, is so poorly recognized, while the rare disorder anorexia is easily recognized. The explanation may be found in youth culture and media focus. Anorexia is a visible disorder, attracting much media attention, and can be associated with actors, pop stars, and athletes. On the other hand, few films or TV series with an adolescent audience have a leading character suffering from an anxiety disorder. Knowledge about symptoms and mental health issues in general may be seen as cognitive components of mental health literacy, while prejudiced beliefs in addition have a negative affective component. The relationship between these cognitive and affective components of the mental health literacy construct is not necessarily linear. Interestingly, this study has demonstrated that prejudiced beliefs did decline in those who were exposed to a three-day universal school intervention. Whether the decline is related to the information given during the intervention, or to the different individual, group and plenary assignments in the three-day intervention, is a question left for future research. However, this study demonstrates that adolescents' prejudiced beliefs about mental health problems can be changed. The present study demonstrated that there was a shift from mentioning the specialist health care system as a place to seek help, towards mentioning the primary health care system as the preferred place to seek help. There was also a tendency towards increased mentioning of help at home, or self-help, after the intervention.
Since the primary health care system is the gatekeeper to the specialist health care system in Norway, these shifts are adequate, since they are in the direction of a more effective path to receiving help. Furthermore, Norwegian youths are dependent on their parents to receive specialist health care, since they are minors in health care until they are 16 years old. Thus seeking help at home is adequate behaviour for the youngest subjects. The impact of gender in mental health literacy Gender had a significant impact on both symptom profile recognition and prejudiced beliefs, and, to a lesser degree, on knowledge about places to seek help. The sex difference in recognition of depression reflected the difference found in young Australians [16]. Girls were also better than boys at identifying anxiety disorders and anorexia nervosa. As in Australian young adults [16], no gender difference was found in the identification of schizophrenia in Norwegian youths. We have no cues in the data to explain these gender differences, so what follows is pure speculation. However, anxiety, depression and anorexia nervosa are more prevalent in women, while there is no sex difference in the prevalence of schizophrenia [37]. Perhaps young girls and boys have "gendered" awareness of the disorders they themselves are at risk for. Being a girl predicted a lower level of prejudiced beliefs about mental disorders in our sample. Previous research has also demonstrated that females are less prejudiced towards mental illness [9,38]. Finally, gender had less impact on knowledge about where to seek help. Only the mentioning of primary health care was influenced by the respondent being female. We know from a previous study of the North Norwegian youth population [27] that girls utilize the primary health care system more and that the specialist mental health services are equally utilized by both sexes. The importance of age Some small, but important, age differences were found in the present study. For symptom profile recognition, 10th graders (mean age 15 years) knew more than 8th graders (mean age 13), and the age difference in knowledge increased after the intervention. Whether this could be read as an interaction effect was, however, not confirmed. However, this may indicate that the younger adolescents are not yet cognitively ready to absorb and understand specific knowledge about symptom profiles and syndromes. Furthermore, lower grade students held more prejudiced beliefs than higher grade students. Prejudiced and stereotypical beliefs may be viewed as correlates of identity development statuses in adolescence, in accordance with Kroger and Marcia's [39] theories, and stereotypes are indicators of less mature identity statuses. Finally, we found that increasing age was a predictor of mentioning primary health care as a place to find help. Since the older adolescents in this study were reaching the age for being autonomous users of health services, according to Norwegian legislation, this finding indicates that increased awareness of easily available health care follows increasing age in our sample. Prejudice as a buffer against knowledge According to Thornicroft and colleagues [23, p 192], "The main challenge in the future is to identify which interventions will produce behaviour change to reduce discrimination against people with mental illness." Earlier studies of school programmes aimed at reducing stigma have remained inconclusive [40].
In the present study, we have identified a direct path between prejudiced beliefs and the lack of knowledge about places to seek help for mental health problems. Endorsing more prejudiced beliefs was directly related to not mentioning places for help. Furthermore, the school intervention had no effect in changing "No places mentioned" into suggesting a place to seek help. Perhaps prejudice worked as a kind of "insulation" or buffer against gaining knowledge? In the present study we have demonstrated that youths with prejudiced beliefs about mental health problems were difficult to reach with this specific universal mental health school program, since many of them remained ignorant about the help system after the intervention. We do not know from our data whether the lack of knowledge would remain if the subjects themselves experienced a need for help. However, Barney and colleagues [29] demonstrated that subjects with depressive symptoms reported feelings of embarrassment about seeking professional help, and negative beliefs about other people's reactions to them; and that this self-stigmatization was related to a lower probability of help-seeking from professional sources. It is reasonable to fear that prejudice may serve as a hindrance both against seeking help for oneself, and against recognizing problems and seeking adequate help for those who may be the future dependants of the prejudiced person. Thus it is doubly important to fight stigma, both self-stigma and stigma aimed at the mentally ill. Strengths and limitations The strengths of this study were the sample size, the high adherence of participants at pre-test and follow-up, and that the mental health literacy construct has been operationalized into measurable units with good face validity. The written symptom profiles in the present study were very short, only presenting a list of symptoms. Other studies have made vignettes in the form of short stories [12,17]. There is a variety of form, detail and quality of vignettes for case identification in current research, and there is a need for standardisation and validation of this method for measuring mental health literacy. The measurement of prejudiced beliefs is complicated, since the construct is complex. The internal consistency of the prejudiced belief scale was marginal at pre-test. However, the stability of the scale improved at post-test. Furthermore, a factor analysis demonstrated that a single component explained the four items well. The content of the scale had high face validity, since it clearly contained utterances of prejudiced beliefs about mental health issues. Furthermore, the scale gave valuable information and seemed to be a key variable in the study of mental health literacy in the present sample. As lower reliability implies lower statistical power, the true intervention effect on prejudice in the population would be expected to be higher than the effect estimated here, and an improved internal consistency would be expected to reveal a larger effect. Categorizing answers to open-ended questions is challenging. In classifying suggestions of places to find help, three categories were rather clear-cut: "no answer", "primary health care" and "specialist health care". The fourth and smallest category, "home, self-help, internet", covers help-seeking from non-professional sources, or from written sources, and is a somewhat heterogeneous "other answers" category. Splitting it into smaller, more homogeneous categories would have cost statistical power.
Another issue is that we measured health service knowledge through the mentioning of places, and did not ask about help-seeking behaviour in terms of actual use. There were significant differences in mental health literacy between the intervention and the control group at pre-test, which may offer an alternative hypothesis as to the origin of change from pre-test to follow-up. However, we employed statistical methods that could separate the effect of group differences from the intervention. We offer two possible explanations for the baseline differences. Firstly, there could be socio-economic differences between the uptake area of the intervention school and the two control schools. Regrettably, we did not ask for parents' length of education, which could have been used to control for differences in socio-economic status between the intervention and control group. We did however obtain information about school average academic scores on national standardized tests three years after the intervention, and all schools scored slightly above average and had rather similar scores, so we have no indication of major academic differences between the three schools. Secondly, since the students had received information about the research project, since their teachers had been primed, and since the school schedule announced the three-day intervention beforehand, students at the intervention school knew that they were about to engage in the three-day educational programme when they completed the pre-test. Thus, it is possible that this may have resulted in priming effects. However, despite these baseline differences, the intervention seems to have had an independent effect. An alternative design was also considered, splitting all three schools into an intervention and a control group. However, since the implementation of the programme required that three consecutive school days were assigned to mental health issues, we suspected that a considerable leakage of information would occur from the intervention group to the control group, since friendships and extra-curricular activities happen across school class boundaries. It has been shown that to be successful, universal programmes should be implemented in the entire school, by changing the curriculum of all classes [5]. A possible confounding variable for the results about identification of symptom profiles may be that the specific information about mental disorders was taught in a uniform manner by three graduate students of clinical psychology to the majority of classes. The remaining class teachers considered themselves knowledgeable about the specific mental disorders and performed this lecture themselves, after briefing. We did, however, not register teacher vs researcher as a variable, or any other information about the teaching skills or programme adherence of the teachers, and could thus not control for these possible confounders. It has been shown that teacher-related factors, such as teaching skills and programme adherence, are predictors of success for school programmes for mental health [5]. Thus giving the teachers the choice to teach the specific knowledge about mental health themselves, or having it delivered by an expert, may perhaps have ensured that these issues were taught in an optimal manner in all classes. Most schools have school nurses, psychologists and counsellors with special competence in mental health issues. These should be actively involved in universal mental health promotion programmes.
One should, however, not overrate the possible impact of this single lesson, since the class teachers were in charge of the remaining mental health promoting activities during the three consecutive school days. The follow-up was performed three months after the intervention, so long-term effects and possible decay of the programme's effect are not reported here. Implications We found systematic sex effects for mental health literacy, and gender-specific programmes for enhancing mental health literacy in male adolescents should thus be considered. An equally important implication is that older adolescents had less prejudice and more knowledge. Thus perhaps the programme should be revised and made more age-adequate for the youngest students? School programmes should focus on enhancing knowledge about the common mental disorders, like anxiety disorders and depression, since we found that youths knew little about the most prevalent disorders. However, it is still important to impart balanced information about schizophrenia and other psychoses, since sufferers from these conditions are often victims of prejudiced attitudes. Adolescents with prejudiced beliefs about mental disorders learnt less from this universal school intervention. For school programmes to be effective, prejudices should first be challenged, before imparting general information about mental health. Since younger adolescents are in need of knowledge about the health care system, to be able to take responsibility for their own health, it seems important that preventive programs impart information about the system and where to seek help. Health services should also be aware of adolescents' need for information about available help and should aim information about availability at these young users. Our findings speak for well-balanced information programmes about mental disorders among youths, challenging prejudice against mental disorders and focusing more on anxiety and depression. Conclusions The low-cost universal school program had an effect on recognition of mental disorders, prejudice, and knowledge about where to seek help, and consequently on the mental health literacy of adolescents. Prejudice appears to be closely linked to knowledge about places to seek help. These findings shed light upon the relationship between prejudice, stigma and possible self-stigma that hinder adequate help-seeking, both for oneself and possibly for others.
2017-04-20T17:48:20.565Z
2013-09-23T00:00:00.000
{ "year": 2013, "sha1": "fd31d141568e1787709d44f8998f16b63f149400", "oa_license": "CCBY", "oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/1471-2458-13-873", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "423d0b1165bdd1c93aacc87e6a731acf2527b501", "s2fieldsofstudy": [ "Education", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
270328837
pes2o/s2orc
v3-fos-license
Effect of Self-Directed Learning Approach on Academic Outcomes of ALS Learners in Digos City: The purpose of this quasi-experimental research study was to ascertain the effect of self-directed learning, an educational technique in which learners choose what and how they will learn under the teacher's guidance. The general idea is that learners take responsibility for their own learning as ALS learners for the 2022–2023 school year, and this can be done either individually or in groups. At this stage in the research, the teacher is experiencing challenges in facilitating curriculum delivery and management, and the study also considers how she strategizes to overcome these difficulties and her insights on mitigating them for a better learning process among learners. The method used was quasi-experimental research, with fifty (50) ALS learner participants whose responses and identities were treated with utmost confidentiality. Findings revealed that a self-directed learning approach should be used as a strategy to further improve the academic performance of ALS learners and to make teaching meaningful. If learners participate in the learning process by experiencing the unfolding of the lesson, they will appreciate the concept being developed, and therefore will learn. INTRODUCTION Self-directed learning is an educational process in which students, with guidance from the teacher, decide what and how they will learn. It may be done individually or in group learning, but the general idea is that learners take ownership of their learning. Self-directed learning is a learning technique that pushes people to take responsibility for their own learning. Become familiar with the definition of self-directed learning, how to implement the instructional technique, and its different components and benefits. Self-directed learning, in the broadest sense, is the process through which an individual takes responsibility for their own learning. This is broken down into: assessing one's needs and readiness for learning, identifying learning goals, engaging in the learning process, and self-evaluation. In short, it is a comprehensive approach intended to empower individuals to take ownership of their own development rather than leaving it to others. Within the setting of distance learning, it gives you the tools you need to identify the right course for you, based on your desired outcomes. Moreover, it will help you study in a structured manner and ensure that your study sessions are productive and positive. You will also be able to objectively evaluate how your studies are progressing and identify where you can improve or need support. Learning alone through self-directed learning is certainly not a simple undertaking. It takes trying out new study strategies, knowing how you learn, and the motivation to keep going. While this all sounds simple on paper, it is important to note people's general attitude towards learning. For some people, it has been a long time since they last picked up a book, let alone a textbook.
It would make sense if most people stopped learning in earnest after college or school. It is good to focus on one's learning anew, because once you dig into what learning really is, you realize that school learning was not the most ideal. Self-directed learning can be a powerful tool to help you continue your lifelong learning. We will examine what it is, along with a few tips to help. In the Division of Digos City, self-directed learning is practised even as the ALS program experiences a situation in which learners do not attend classes regularly. They prioritize working over attending ALS classes. The researcher, being an ALS teacher, would like to investigate her initiative of bringing ALS learners into their classes, even away from the classroom, by introducing the self-directed learning program. Hence, this study. This section deals with the further readings and research of the researcher, taken from books, magazines, journals and the internet. These related works of literature further validate and support the study as they provide direction and significance. Self-direction in adult learning has been a subject of increasing interest and investigation by scholars and practitioners of adult education since the early 1900s. Various educators have addressed it with different terms, such as self-education, andragogy, self-directed learning, independent study, self-planned learning, adults' learning projects, lifelong learning and autodidaxy. However, each of these terms underscores the deliberate responsibility of the individual learner in the learning process (Guglielmino et al, 2005). Probably the best-known definition of self-directed learning is that given by M. Knowles (1975). An interesting model for SDL is that proposed by Brockett and Hiemstra (1991), named the Personal Responsibility Orientation (PRO) Model. A significant component of this model is the notion of personal responsibility, which these two authors characterize as individuals taking ownership of their own thoughts and actions. Only by accepting responsibility for one's own learning is it possible to take a proactive approach to the learning process. These assumptions are largely drawn from humanism, including viewing the learner as free, autonomous, and able to make choices leading to self-actualization. In other words, people cannot be autonomous if they are not aware of, and accountable for, their choices, actions and outcomes. As written by Brockett and Hiemstra (1991), self-direction is not a panacea for all problems associated with adult learning; however, if being able to take control of one's own destiny is a positive goal of adult education (and we believe it is), then a role for educators of adults is to help learners become increasingly able to take charge of their own learning. Another helpful self-directed learning instructional model is that proposed by Grow (1991, 1994), designated the Staged Self-Directed Learning (SSDL) model, which outlines how teachers can promote self-directed learning in their students (Merriam et al, 2007). Self-directed learning promotes and supports learning to learn and lifelong learning.
Through SDL, adults can acquire new metacognitive skills about their learning efforts (Kasworm, 2011). Moreover, the ability to learn on one's own is absolutely essential in a world that keeps changing and generating new data and knowledge every day. The findings of a large and meticulous study (Lyman and Varian, 2003; as cited in Guglielmino, 2013) at the University of California, Berkeley, showed that newly stored information nearly doubled between 1999 and 2002, growing at an estimated 30% each year. Google CEO Eric Schmidt made an announcement in 2010 that dwarfed its impact when he detailed the remarkable acceleration of the creation of new information. In the past, education was viewed as preparation for the whole of one's life. In the late 1940s, a person could expect to graduate from high school with 75% of the knowledge needed to remain successfully employed until retirement. Fifty years later, that figure had decreased to 2% (Barth, 1997). Just as learning in one's youth is no longer adequate preparation for life, initial training or learning is not an adequate preparation for maintaining competence in a job or a profession. Consequently, in the current era of constant change and of new information being added every day, it is not realistically possible to have tutors constantly guiding us through our learning. SDL therefore becomes absolutely vital as a tool of development for individuals in the 21st century. In adult education, the concept of self-directed learning has great significance. The term emerged in the field of adult education during the 1970s and is still a widely used term in the field. Annual symposia dedicated to the advancement of self-directed learning have been held by the International Society for Self-Directed Learning since 1986, and the society also publishes an international journal of self-directed learning.

A term of later origin is self-regulation, which some authors use at times interchangeably with self-direction. The principle of self-direction can be dated as far back as Britain in the 1800s, where terms like self-help, self-improvement, and self-education were used ([1], p. 46). However, there are clear reasons to date the scholarly study of self-directed learning back to the beginning of the 1960s. In 1961, Cyril Houle published his book The Inquiring Mind [2]. Self-directed learning is clearly a multi-faceted concept that should not be approached from a single perspective.

According to Kerka [11], the greatest misconception may lie in trying to capture the essence of self-directed learning in a single definition. Van der Walt [12] likewise points to the stated confusion regarding this concept, which has led to communication difficulties about the subject of self-directed learning. Van der Walt concludes that researchers in the field of self-directed learning have two options. One is to continue the stated confusion by defining their own interpretation of the concept, or, as a second option, they can depart in their research from the original definition of self-directed learning given by Knowles and his colleagues ([12], p.
16). In the following, some ideas of the self-directed learning concept are presented. Self-directed learning involves individuals taking the initiative and responsibility for their own learning. Learners are free to set goals and to define what is worth learning. Self-directed learning can take place both within and outside formal educational institutions.

When teachers are involved, they should be facilitators of learning, not transmitters. What is common to most conceptualisations, according to Post [13], is the notion of some personal control over either or both the planning (goals) and the management (support) of the learning experience. Post [14] likewise emphasises that the ultimate goal of self-directed learning is not necessarily fully independent learning, since it is a matter of degree. Self-directed learning does not depend solely on the opportunity but also on the ability to make learning decisions. Hence, according to Post, in a formal learning situation it should be viewed as a collaborative process between the teacher and the learner. Seen from a critical perspective, reducing self-direction to a matter of external control is insufficient; we live interdependently, and knowledge is determined ([14], p. 141). Brookfield [15] also criticises self-directed learning for neglecting social context by focusing on the individual, isolated learner, and stresses the social construction of knowledge and the social context of learning. Merriam and Caffarella [16] call for a broader recognition of the interdependent and collaborative aspects of self-directed learning. O'Donnell [17] goes the furthest in stressing the collective over the individual aspect when he presents a rationale for what he calls "selves-directed learning" (p. 251). Post [13] claims that the individual does not construct meaning in isolation.

II. METHOD
This section discusses the research method, the research design, the place and time, the research instruments, test construction and validation, scaling, the data gathering procedure and the data analysis. This study uses a quasi-experimental research design, specifically a non-equivalent control group pretest-posttest design. The non-equivalent design is a good design when the researcher has access to only one group for experimentation (Vockel 1983). The researcher opted to use this design because the subjects of the study are intact groups of learners. This design is represented as follows. This study will be conducted in an ALS Community Learning Center within the Division of Digos City. The subjects of this study will be the 50 ALS learners: 25 from Section A, which will be the control group, and 25 from Section B, which will be the experimental group. The composition of these two sections is homogeneous; the learners from Sections A and B have identical grades. This study uses non-random assignment of subjects, where all learners of the two Sections A and B are included as subjects of the study. Since it is pandemic time, the experiment will be conducted according to the mechanics of the Distance Learning modality. This study will use the new-normal learning approach.
It is blended learning, in which the teacher provides modules and at the same time meets the learners face to face while adhering to the protocols of the Inter-Agency Task Force (IATF). The researcher needs to meet the learners with the permission of their parents during the face-to-face sessions. One group of ALS learners is given classroom instruction in the usual way, while the other uses self-directed modules. The pre- and post-performance test consists of a 25-item test that will ultimately determine the learning interest of the research subjects. The pretest will be administered to all subjects prior to the treatment. The pretest will be very useful for assessing the learning interest of the ALS learners. On the other hand, a post-test will be administered to measure the effect of the treatment. To determine the learning interest of the ALS learners, the following continuum will be used based on the DepEd rating system.

At the start of the data gathering procedure, the researcher will draft letters seeking permission to conduct this research study, to be sent to Dr. Melanie P. Estacio, CESO VI, the Schools Division Superintendent, and to the ALS Focal Person in the Division of Digos City. While the letters seeking permission are being sent to the DepEd Schools Division Superintendent and the ALS Focal Person concerned, the researcher will construct a questionnaire and have it validated by experts, preferably experts in the field of the study. After permission has been granted to conduct this study in the Digos City ALS learning center and after the research questionnaire has been thoroughly examined by the expert validators, the researcher will administer the pretest to both the control and experimental classes and eventually begin her experiment in the experimental class. After three weeks of experimentation, the researcher will administer the posttest to the two sections. The scores of the subjects will be submitted to the statistician for statistical computation, after which the researcher will carry out analysis and interpretation of the data gathered.

III. RESULTS AND DISCUSSIONS
The results of the study indicate that the use of a self-directed learning approach positively affects the academic outcomes of ALS learners. The pre-test scores of both the control and experimental groups were found to be at the Beginning level, indicating a similar starting point for the two groups. However, the post-test scores of the experimental group were significantly higher than those of the control group, showing a greater improvement in academic performance. Table 1 presents the pre-test scores of the ALS learners. The control group had a mean score of 12.96, while the experimental group had a mean score of 12.81. Both groups were rated at the Beginning level, suggesting a similar level of knowledge and skills prior to the intervention. Table 2 shows the post-test scores of the ALS learners. The control group had a mean score of 20.96, while the experimental group had a mean score of 20.78. The two groups were rated at the Approaching Proficiency level, indicating an improvement in academic performance. Nevertheless, the experimental group had a marginally higher mean score, suggesting a greater improvement compared with the control group.
The findings of this study are consistent with the idea that the use of technology, such as self-directed learning modules, can enhance the learning experience and improve academic outcomes. The use of self-directed learning modules allows learners to engage in the learning process at their own pace and explore practical situations in a supported environment. This approach encourages learners to develop insight, try out different approaches, and gain a deeper understanding of the subject matter.

The results also indicate a significant difference between the pre-test and post-test scores of the ALS learners. The t-test analysis revealed a significant difference with a p-value of 0.0211, indicating that the self-directed learning approach significantly affected the academic outcomes of the learners. Based on the findings of this study, it can be concluded that the use of a self-directed learning approach can effectively improve the academic outcomes of ALS learners. The experimental group, which received the self-directed learning intervention, showed a greater improvement in academic performance compared with the control group.

These findings have important implications for educational institutions and policymakers. They highlight the importance of integrating self-directed learning approaches into the curriculum to enhance the learning experience and improve academic outcomes. By giving learners the opportunity to engage in self-directed learning, teachers can empower them to take ownership of their learning and develop essential skills for lifelong learning. In conclusion, the results of this study support the use of a self-directed learning approach to enhance the academic outcomes of ALS learners. The findings suggest that the use of self-directed learning modules can lead to a significant improvement in academic performance. It is recommended that educational institutions consider integrating self-directed learning approaches into their teaching strategies to promote effective learning and further improve student outcomes.

Table 1 Pre-Test Scores of Learners. Table 2 Post-Test Scores of Learners.
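The group comparisons reported above lend themselves to a quick illustration. The sketch below is a minimal example, assuming hypothetical placeholder score vectors generated only to mirror the reported group means (the raw 25-item scores are not published); it is the shape of the analysis, not the numbers, that is meant to be informative.

```python
# Minimal sketch of the pre-/post-test comparisons described above,
# on hypothetical placeholder scores (the raw data are not published).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 25-item test scores for the 25 learners in each group;
# the means are chosen only to mirror the reported group means.
pre_experimental = np.clip(rng.normal(12.81, 2.5, 25), 0, 25).round()
post_experimental = np.clip(rng.normal(20.78, 2.5, 25), 0, 25).round()
post_control = np.clip(rng.normal(20.96, 2.5, 25), 0, 25).round()

# Paired t-test: pre- vs post-test within the experimental group.
t_paired, p_paired = stats.ttest_rel(post_experimental, pre_experimental)
print(f"paired t = {t_paired:.3f}, p = {p_paired:.4f}")

# Independent t-test: post-test scores of experimental vs control group.
t_ind, p_ind = stats.ttest_ind(post_experimental, post_control)
print(f"independent t = {t_ind:.3f}, p = {p_ind:.4f}")
```

Substituting the study's actual scores for the placeholders would reproduce the paired and between-group comparisons described in the results.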
2024-06-08T15:04:47.432Z
2024-06-06T00:00:00.000
{ "year": 2024, "sha1": "4779dfcfc8aa7ffabe9a1f90761fd5cd24bbc168", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.38124/ijisrt/ijisrt24may2101", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "4d2611136a470d5bfeb872beb220342259e481f7", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
237918424
pes2o/s2orc
v3-fos-license
Low-intensity, Long-wavelength Red Light Slows the Progression of Myopia in Children: an Eastern China-based Cohort

Myopia is prevalent worldwide, particularly in East and Southeast Asia. Recent studies have suggested that the spectral composition of ambient lighting influences refractive development, especially in humans. We aimed to determine the effect of 650-nm single-wavelength red light on the inhibition of myopia progression in children. In this retrospective cohort study, 105 myopic children (spherical equivalent refractive error [SER], -6.75 to -1.00 dioptres (D)) aged from 4 to 14 years old were retrospectively reviewed. Subjects were treated with 650-nm, low-intensity, single-wavelength red light twice a day for 3 minutes each session, with at least a 4-hour interval between sessions. IOL Master was utilized to measure the axial length (AL) and corneal curvature. Choroidal images were assessed using enhanced depth imaging optical coherence tomography (EDI-OCT) and converted to binary images by the Niblack method to determine the luminal area (LA) and stromal area. The mean SER was -3.09 ± 1.74 D at baseline and -2.87 ± 1.89 D at 9 months, and significant changes occurred over time (P = 0.019). The AL changed by -0.06 ± 0.19 mm over the 9 months of treatment, compared with an increase of 0.21 ± 0.15 mm pretreatment (P < 0.001). The subfoveal choroidal thickness (SFChT) had changed by 45.32 ± 30.88 μm at the 9-month examination (P < 0.001). Repetitive exposure to 650-nm, low-intensity, single-wavelength red light effectively slowed the progression of myopia and reduced axial growth after short treatment durations. These results require further validation in a longitudinal study, as well as further research in animal models.

Introduction
Myopia, a refractive condition associated with visual impairment and vision loss, is the most common eye disorder worldwide. [1][2][3] In recent decades, the prevalence of myopia in children and adolescents has been dramatically increasing; the onset age has decreased, while the severity of myopia has rapidly increased. 4,5 It is estimated that by 2050, 49.8% of the global population will have myopia, and 9.8% will have high myopia. 1 During the coronavirus disease 2019 (COVID-19) pandemic, the myopia rate in primary and middle school students increased by 11.7% in six months in China. In particular, the results of a survey released by the Chinese Ministry of Education at the end of August 2020 highlighted the problem of myopia. Currently, there is no effective intervention to prevent the progression of myopia. Young adults can wear orthokeratology (OK) contact lenses or take the muscarinic antagonist atropine in hospital-based interventions, which have had only limited success, although low-dose atropine eyedrops have shown promise. [6][7][8][9] The long-term use of atropine is associated with side effects such as photophobia, rebound myopia and drug resistance. In this context, exploring new methods to prevent and control myopia in young people has become a top priority. Thus, we focused on low-intensity, long-wavelength red light therapy as a new method to help restrict the progression of myopia by stimulating longer-wavelength-sensitive (LWS) cones, 10 improving mitochondrial complex activity, 11 producing slower axial elongation and preventing the normal decrease in refraction. [12][13][14][15][16] Recent studies have suggested that spending time outdoors may prevent the development of myopia.
17,18 The protective association appears to be related to the higher ambient illuminances and the spectral composition of ambient light rather than to engaging in sport activities, because the time spent engaged in an indoor sport activity is not associated with a lower likelihood of myopia. 17 Because both luminance and chromatic signals can influence emmetropization, alterations in the spectral composition of ambient lighting could influence refractive development in different ways. 13 The spectral composition of indoor lighting produced by tungsten or light-emitting diodes (LEDs) is different from that of sunlight, which may play a role in the inhibitory effect on myopia. 19 Some previous studies suggested that reducing potential chromatic cues associated with longitudinal chromatic aberration (LCA) by restricting the spectral composition of ambient light interferes with emmetropization and supports the hypothesis that chromatic cues may normally contribute to the regulation of refractive development. 13 The effects of the spectral composition of light on refractive development have been studied in a number of animal models. 10,12,14,19-24 It is important to determine how and the extent to which the wavelength composition of ambient lighting influences refractive development, especially in humans, because it may be possible to manipulate the spectral characteristics of ambient lighting in ways that could have a therapeutic benefit. The purpose of this study was to assess the efficacy of 650-nm, low-intensity, single-wavelength red light therapy, similar to ambient outdoor sunlight, for preventing the progression of myopia and inhibiting the excessive elongation of axial length (AL) in school children.

Results
A total of 105 children (49 girls and 56 boys) with a mean age of 8.97 ± 2.36 years whose myopia progression and axial elongation were accelerated were enrolled in the study. The mean SER was -3.09 ± 1.74 D at baseline and was -3.02 ± 1.65, -2.90 ± 1.44 and -2.87 ± 1.89 D at 3 months, 6 months, and 9 months, respectively. The mean corneal curvature was 42.89 ± 1.90, 42.90 ± 1.91, 42.94 ± 1.81 and 42.86 ± 1.96 D at baseline and at the various timepoints up to the 9-month follow-up, respectively, with no significant differences (P = 0.174). During the same period, the mean AL slightly decreased over time from 24.81 ± 1.31 mm to 24.74 ± 1.29, 24.72 ± 1.34 and 24.75 ± 1.30 mm after repeated red-light irradiation, and the changes were statistically significant (Table 1 and Table 2). Briefly, there was a steady increase in the SER and AL before red light therapy, while the opposite trend was observed after treatment (Fig. 1). The corneal curvature slightly increased at first and then slightly decreased, but there was no significant difference. The changes in school-age children were slightly greater than those of preschool children (4-7 years old) (-0.30 ± 0.15, 0.06 ± 0.11, 0.13 ± 0.10 and 0.20 ± 0.14 D, P = 0.034), and the difference was significant (P = 0.021). Correspondingly, for the 8-14 years old group, the increases in the AL were 0.21 ± 0.13, -0.08 ± 0.11, -0.10 ± 0.15 and -0.06 ± 0.18 mm (P < 0.001) at baseline, 3 months, 6 months, and 9 months, respectively, which were slightly greater than those in the preschool subjects (4-7 years old) (0.22 ± 0.18, -0.06 ± 0.12, -0.06 ± 0.19 and -0.02 ± 0.22 mm, P < 0.001), and the differences were significant (P = 0.05).
The corneal curvature decreased by -0.00 ± 0.51, -0.02 ± 0.34, -0.03 ± 0.46 and -0.14 ± 0.60 D at each sampling point, but the changes were not statistically significant (P = 0.174). The disparity between the four subgroups was also not statistically significant (for the age groups, P = 0.832; for the AL groups, P = 0.494). Abbreviations: SER, spherical equivalent refractive error; CC, corneal curvature; AL, axial length. a One-way ANOVA; b two-way ANOVA (with LSD post hoc multiple comparison tests).

The mean SFChT was 223.38 ± 69.81 μm at baseline and significantly increased to 268.71 ± 73.59 μm at 9 months after repeated narrow-band, long-wavelength light irradiation (P < 0.001). The baseline TCA was 0.43 ± 0.12 mm²; the LA was 0.28 ± 0.08 mm² and the stromal area was 0.15 ± 0.04 mm², which yielded a mean CVI of 0.65 ± 0.02. Nine months after the treatment, both the TCA (0.48 ± 0.13 mm²) and the LA (0.33 ± 0.09 mm²) were significantly higher than those at baseline (both P < 0.001, Table 3). The stromal area (0.14 ± 0.04 mm²) was significantly lower than the baseline stromal area (P < 0.001, Table 3). The CVI was 0.69 ± 0.04 at 9 months after the treatment, which was not significantly different from the baseline CVI (P = 0.433). To understand the relationship between changes in parameters and baseline factors, Pearson's correlation coefficient was used. A scatter plot of the change in the AL at 9 months against age is shown in Fig. 3(A). At the 9-month visit, the change in the AL was found to be significantly associated with age at enrolment (R = -0.229, P = 0.002). We also found a significant correlation between the change in AL and the baseline AL in the study (R = -0.34, P < 0.001) (Fig. 3(B)). According to univariate linear regression analysis, the decrease in AL had a positive relationship with age at enrolment and baseline AL; the decrease in AL was larger in individuals who were older and who had a longer baseline AL.

Discussion
It is well established that spending time outdoors may prevent the development of myopia, although the underlying mechanism is unknown. 6,17,18,29 The dominant theory is that exposure to brighter sunlight outside stimulates a release of dopamine in the retina, and dopamine can promote slower, normal growth of the eye, thereby leading to a lower risk of myopia. 30 Recently, some novel findings have supported the idea that the differences in the spectral composition of lighting in indoor and outdoor environments may contribute to the higher prevalence of myopia in children who spend less time outdoors. 13 The chromaticity signals from LCA can promote the normal rate of eye growth and development of ocular refraction, 31 which may help to explain why outdoor activity has a protective effect against myopia and to highlight the possible effects of artificial light in the increasing prevalence of childhood myopia. Therefore, the manipulation of the chromaticity of light to which the eyes are exposed may be valuable for the management or prevention of childhood myopia. To date, we know relatively little about how variations in the wavelength composition of ambient lighting affect refractive development in humans. Our goal was to determine whether environments dominated by long-wavelength red light inhibit the development of myopia.
The study programme used red light with a wavelength of 650 nm, a component of natural light that is beneficial to the human body, to replace natural light, irradiate the retina at a safe power for an effective time, and stimulate the retina to produce and release more dopamine, thereby effectively controlling axial eye growth and preventing the occurrence and progression of myopia. In the present study, children who received repeated 650-nm red light irradiation developed relative hyperopic refractions, and significant changes occurred over time (P = 0.019, Table 2). Corresponding to refractive development, the treated eyes of all children exhibited statistically significantly slower axial eye growth than before treatment. Our results showed that low-intensity, single-wavelength red light irradiation can effectively control AL elongation and slow myopia progression (Table 2, Fig. 2). Interestingly, the decreases in both the SER and AL in the older group (8-14 years) were greater than those in the younger group (4-7 years), which is consistent with previous studies. The decreases in the SER and AL in the longer AL group (≥ 24 mm) were greater than those in the shorter group (< 24 mm). Notably, no significant differences in corneal curvature were found pretreatment and posttreatment. These results suggest that the slowed myopia progression was mainly due to AL shortening and not corneal curvature flattening. Regardless of the exact mechanism that was responsible for the hyperopic shifts found in the present study, the results suggest that exposure to long-wavelength lighting may, at least under certain circumstances, be beneficial for reducing myopia progression. The results also support the emerging view that the eye can utilize chromatic cues associated with LCA to regulate ocular growth. 13,31 Whether certain spectral ranges of light can more potently delay and inhibit myopia than others is a topic of ongoing research. The effects of the spectral composition of light on refractive development have been studied in a number of animal models. When infant rhesus monkeys wore spectacles with red filters or were raised in red light, they became consistently more hyperopic. 13,14 Similarly, light from red LEDs also acted as an inhibitory stimulus for axial eye growth in tree shrews, even in adolescent animals. 10,12 In contrast, upon their return to short-wavelength light, tree shrews tended to become more myopic. Nonetheless, previous studies on the effect of various spectral compositions of light on refractive development have yielded inconsistent findings. In contrast to monkeys and tree shrews, chickens, 24 guinea pigs 23 and fish 20 became more myopic under red light and more hyperopic under blue light, which was interpreted as an attempt by the eye to compensate for LCA. Why manipulations of the spectral composition have opposite effects in chicks, guinea pigs, and fish compared to tree shrews and rhesus monkeys is an important open question. The disparity of the results of previous studies highlights that our understanding of how chromatic cues influence eye growth is not complete. There are, however, similarities between the results of our study and previous studies in monkeys and tree shrews, namely, that long-wavelength red light acted as an inhibitory stimulus for axial eye growth and induced a hyperopic shift in children.
The choroid, a highly vascularized layer located between the retina and sclera, plays a crucial role in relaying signals derived from the retina to the sclera, producing mediators that regulate scleral metabolism during visual cue-modulated ocular development, further affecting extracellular matrix (ECM) remodelling in the sclera, and playing an active role in emmetropization or in the pathogenesis of myopia. 32,33 It has been determined that there is a direct correspondence between changes in choroid thickness and choroidal blood flow in both animal models and humans. Decreases in choroidal blood flow may result in reduced levels of oxygen and nutrient supply to the neighbouring avascular sclera. 33 Such modulation affects myopia development in experimental animal models and humans. 34,35 In this study, 650-nm red light therapy increased the choroidal thickness, possibly in response to an improvement in scleral blood perfusion through the choroid and reduced scleral hypoxia. Previous studies have demonstrated that narrow-band, long-wavelength red light can thicken the choroid and restore the elasticity of scleral fibres. 33 When the choroid is exposed to 650-nm red light, the warm effect of the red light will open the neck-like stenosis at the opening of the small arteries of the choroidal lobules, increase the blood flow into the lobules, increase the microcirculation blood volume, increase the oxygen permeability of the choroidal blood vessels and the oxygen absorption capacity, and thicken the choroid. Additionally, the increase in choroid thickness can move the retina towards the focal plane of the eye (choroidal accommodation) and change the AL. 32 Various studies 36,37 on the association between the AL and choroidal thickness and the association between choroidal thickness and choroidal components in adults have been described with the help of binarization techniques applied to EDI-OCT images. 27,38,39 In myopic children, subfoveal choroidal thinning with longer ALs was found to be associated with a reduction in the LA. 28 Changes in the LA may directly influence choroidal thickness, as blood vessels represent the main component of the choroid, which might be a helpful signal for inhibiting myopia development. 28 Studies using chicks have shown that myopia resulted in smaller vessel diameters and lower blood vessel densities. 40,41 After short treatment durations (9 months), the SFChT, TCA and LA all significantly increased; however, the stromal area slightly decreased, with no significant difference compared to that at baseline. Univariate linear regression analyses showed that the changes in the AL were significantly associated with age and baseline AL. In conclusion, the results demonstrate that repeated 650-nm, low-intensity, single-wavelength red light effectively reduced myopia progression, inhibited axial elongation and increased the choroidal thickness, possibly in response to an improvement in scleral blood perfusion through the choroid and reduced scleral hypoxia. These findings require further exploration in a longitudinal study, as well as further research in animal models.

Subjects
This study was a retrospective case series.

Study Procedures
All the children wore spectacles and underwent repeated 650-nm, low-intensity, single-wavelength red light treatment twice a day for three minutes each time. Among the subjects, one child had anisometropia, and only the left eye was treated with red light because the right eye was emmetropic.
We retrospectively reviewed patient medical records to extract the age, SER, AL, corneal curvature, treatment modality and follow-up data. Researchers performed detailed ophthalmological examinations before treatment (baseline) and at each follow-up. Cycloplegic subjective refraction was conducted by experienced optometrists with three drops of 1% Tropicamide (Xing Qi Ophthalmic Co., Ltd, Shenyang, China) at five-minute intervals. The SER was calculated as the spherical power plus half of the cylindrical power. Ocular biometrics, including corneal curvature and AL, were measured using a noncontact biometer (IOL Master; Carl Zeiss Meditec AG, Jena, Germany). Spectral domain optical coherence tomography (SD-OCT) was performed using a Heidelberg Spectralis instrument (Spectralis HRA + OCT, Heidelberg Engineering, Heidelberg, Germany), and the enhanced depth imaging (EDI) mode was used to enhance the visibility of the choroid. Subfoveal choroidal thickness (SFChT) was measured by two independent observers experienced in analysing OCT images, using the Heidelberg linear measurement tool at the foveal location of the horizontal line scan. We defined the thinnest part of the macula in the image as the fovea. The SFChT was measured from the outermost part of the retinal pigment epithelium to the inner layer of the choroidoscleral interface.

Image Analysis
After recording the enhanced depth imaging optical coherence tomography (EDI-OCT) images, a high-quality image was displayed on a computer screen and independently evaluated by two trained examiners (QW and HC). When the two examiners determined that the choroidal image was eligible, the image was used for the following analyses. Binarization of the subfoveal choroidal area in the OCT image was done by a modified Niblack method. Briefly, the OCT image was analyzed with ImageJ (version 1.47; provided in the public domain by the National Institutes of Health, Bethesda, MD, USA; http://imagej.nih.gov/ij/). 25 The examined area was selected to be 1500 μm wide, with margins 750 μm nasal and 750 μm temporal to the fovea. It extended vertically from the RPE to the choroid-scleral interface, and the choroidal area was determined manually with the ImageJ ROI manager (Fig. 1). Three choroidal vessels with lumens larger than 100 μm were randomly selected using the Oval Selection Tool on the ImageJ tool bar, and the average reflectivity of these areas was determined. [25][26][27] The average brightness of the luminal area was set as the minimum value to minimize the noise in the OCT image. The image was converted to 8 bits and adjusted using the Niblack Auto Local Threshold. Then the binarized image was converted to an RGB image again, and the luminal area was determined using the Threshold Tool. After adjusting the Set Scale parameters by entering the pixel information of the acquired OCT image before the measurements, the total choroidal area (TCA), luminal area (LA), and stromal area were automatically calculated. 28 The light pixels were defined as the interstitial choroid or choroidal stroma, and the dark pixels were defined as the LA. The choroidal vascularity index (CVI) was defined as the ratio of LA to TCA.
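The ImageJ-based binarization described above can be approximated with open-source tooling. The following is a minimal sketch, not the study's pipeline: the file name, the rectangular region of interest and the pixel spacing are hypothetical placeholders, and scikit-image's Niblack threshold stands in for ImageJ's modified Niblack auto local threshold.

```python
# Minimal sketch of a Niblack-based binarization to estimate the
# choroidal vascularity index (CVI = luminal area / total choroidal area).
import numpy as np
from skimage import io, color
from skimage.filters import threshold_niblack

# Hypothetical EDI-OCT B-scan; the file name is a placeholder.
img = io.imread("edi_oct_bscan.png")
gray = color.rgb2gray(img[..., :3]) if img.ndim == 3 else img.astype(float)

# Stand-in for the manually delineated subfoveal choroidal region of interest
# (in the study this was drawn with the ImageJ ROI manager).
roi = np.zeros(gray.shape, dtype=bool)
roi[200:320, 100:480] = True  # placeholder coordinates

# Local Niblack threshold; dark pixels inside the ROI are treated as vessel lumen.
thresh = threshold_niblack(gray, window_size=51, k=0.2)
lumen = (gray < thresh) & roi

pixel_area_mm2 = 0.0039 * 0.0039  # hypothetical pixel spacing (mm per pixel, squared)
total_choroidal_area = roi.sum() * pixel_area_mm2   # TCA
luminal_area = lumen.sum() * pixel_area_mm2          # LA
stromal_area = total_choroidal_area - luminal_area   # stroma = TCA - LA
cvi = luminal_area / total_choroidal_area            # CVI = LA / TCA
print(f"TCA = {total_choroidal_area:.3f} mm^2, LA = {luminal_area:.3f} mm^2, CVI = {cvi:.3f}")
```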
Statistical Analyses
All statistical analyses were performed using SPSS version 23.0 (SPSS Inc., Chicago, IL, USA). All values were presented as the mean ± SD unless otherwise stated. Paired t-tests or Wilcoxon signed-rank tests were used to assess the differences between the baseline and the end follow-up visit for the SFChT, TCA, LA, stromal area and CVI. Changes in SER, AL, and corneal curvature between baseline and each follow-up visit in different subgroups were analysed by repeated-measures ANOVA. Associations of the demographic factors and ocular parameters were determined through univariate linear regression analyses. A P value < 0.05 was considered to be statistically significant.

Figure legend: Changes in the biometric measurements of the treated eyes are shown. There was an increase in spherical equivalent refractive error (SER) (A), corneal curvature (CC) (B) and axial length (AL) (C) before red light therapy, and the trend was reversed after treatment, except for the corneal curvature.
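As a minimal illustration of the analyses listed above, the sketch below pairs a baseline-versus-9-month comparison with a simple univariate regression of the change in AL on age at enrolment; all arrays are hypothetical placeholders rather than the study data, and only the structure of the calls is intended to be informative.

```python
# Minimal sketch of the statistical analyses described above,
# on hypothetical placeholder data (not the study's measurements).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 105

# Paired comparison: subfoveal choroidal thickness (um) at baseline vs 9 months.
sfcht_baseline = rng.normal(223, 70, n)
sfcht_9months = sfcht_baseline + rng.normal(45, 31, n)
t_stat, p_val = stats.ttest_rel(sfcht_9months, sfcht_baseline)
print(f"paired t = {t_stat:.2f}, p = {p_val:.3g}")

# Univariate regression: 9-month change in AL (mm) against age at enrolment (years).
age = rng.uniform(4, 14, n)
delta_al = -0.01 * age + rng.normal(0, 0.15, n)
slope, intercept, r, p, se = stats.linregress(age, delta_al)
print(f"slope = {slope:.4f} mm/year, r = {r:.3f}, p = {p:.3g}")
```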
2021-09-01T15:09:38.358Z
2021-06-25T00:00:00.000
{ "year": 2022, "sha1": "9ceba10148536647b3ceb9279a1c5612a99042f6", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-640156/latest.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "383d58671827c6417bba3c04b79660ba9e6f9462", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Geography", "Medicine" ] }
198817791
pes2o/s2orc
v3-fos-license
Adjusted macroeconomic indicators to account for ecosystem degradation: an illustrative example

ABSTRACT
Adjusting macroeconomic indicators to account for the depletion and degradation of natural capital has long been viewed as a way to show the linkages between nature and the economy and foster more sustainable paths of development. Here, we show how the System of Environmental Economic Accounting (SEEA) Experimental Ecosystem Accounting (EEA) can be used to develop environmentally adjusted macroeconomic indicators. More specifically, we show how an enlargement of the production boundary to include the consideration of ecosystems as institutional units allows the development of a net value added metric that reflects depreciation and degradation of the natural capital assets. This measurement could be useful for policymaking: the linkage of ecosystem services to economic activities allows us to account for loss, repair and substitution of natural capital, and thus helps us to formulate and monitor environmental policies.

Introduction
The aim of "environmentally" adjusting macroeconomic indicators, by consistently presenting information in accounts from which indicators can be derived (Vardon et al. 2018), inspired the pioneers of the System of Environmental Economic Accounting (SEEA) more than 25 years ago. In fact, a group of experts from the United Nations Statistical Office attempted to build an integrated system to calculate a "green gross domestic product (GDP)" by subtracting estimates for depletion and degradation (Bartelmus et al., 1991). The initial version of the SEEA (United Nations 1993) was modified through a continuous enhancement process (United Nations, European Commission, International Monetary Fund, Organisation for Economic Co-operation, World Bank 2003; United Nations, European Union, Food and Agriculture Organization of the United Nations, International Monetary Fund, Organisation for Economic Co-operation, World Bank 2014a) and, although correcting macroeconomic indicators was (and is) not the only purpose of the SEEA, it remains among the possibilities offered by this integrated system. The development of environmentally-adjusted macroeconomic indicators can follow an accounting path through the SEEA, or it can follow an alternative path based on economic theory. Both approaches present advantages and drawbacks. The accounting approach offers room to develop analytical modules that transparently show environmental changes and the causes of these changes in an economic accounting format. However, the estimates presented in the initial version of the SEEA (United Nations 1993) were too rough; moreover, the SEEA Central Framework (United Nations, European Union, Food and Agriculture Organization of the United Nations, International Monetary Fund, Organisation for Economic Co-operation, World Bank 2014a) presents the methodological approach to estimate natural resource depletion, but not degradation. The green net national product (NNP), based on the Weitzman model (1976), is probably the best-known indicator stemming from the economic theory path; it aims to "correct" NNP for the depreciation and depletion of natural resources (Hartwick 1990; Dasgupta et al., 1995).
One disadvantage of this approach is that it embeds depletion and degradation into highly aggregated indicators and thus its effectiveness in supporting the choices of policymakers is strongly affected by how accurately the whole model (including the chosen environmental indicators) reflects policymakers' specific goals and policy options. Both approaches raise a few issues: on the one hand, we need to measure degradation and leave the accounting tool open to a variety of policy uses and policy options; on the other hand, we need to keep depletion and degradation accounts connected to macroeconomic analysis. Over the last two decades, progress has been made to overcome most of the drawbacks of both approaches. In this paper, we focus on the accounting approach and propose a way to meet the foreseen challenges; specifically, how to provide measurements that contribute to assessing degradation and consistently account for it. SEEA Experimental Ecosystem Accounting (SEEA EEA) represents an excellent starting point to address the issues of how to account for ecosystem services in a way that remains coherent with the core System of National Accounts (SNA) structure. It provides a picture of the actual situation; without this basis, no progress can be made in accounting for ecosystem services (United Nations, 2017; United Nations, European Union, Food and Agriculture Organization of the United Nations, International Monetary Fund, Organisation for Economic Co-operation, World Bank 2014b). As explicitly stated in the SEEA EEA (2014), the need to combine ecosystem accounts with national accounts is very important, considering that, on the one hand, some ecosystem services are becoming scarce and, on the other hand, there are no policy instruments to manage this scarcity. Transactions, assets and liabilities related to ecosystem services exist and need to be appropriately recorded in the existing framework of the SNA. The inclusion of ecosystem services by SEEA EEA represents an enlargement of the SNA production boundary (Obst 2015). Thanks to the enlarged production boundary, it is possible to move forward and report information concerning degradation that reflects an ecological perspective (La Notte et al., 2019a). Reflecting an ecological perspective implies considering ecosystems not only as input providers, but as institutional sectors (as well as economic sectors). In the SNA, the supply and use tables offer a detailed picture of the economy by providing: (i) the elements of the production process, (ii) the use of the goods and services (products), and (iii) the income generated through that production (European Communities, International Monetary Fund, Organisation for Economic Co-operation and Development, United Nations, World Bank 2009). Supply and use tables describe how the supply of different kinds of goods and services originates from domestic industries and imports, and how this supply is allocated between various intermediate or final uses, including exports. It is possible to explore interdependencies among different sectors and throughout the supply chain, e.g., linkage between the providers of raw materials, those who transform them into manufactured or semi-manufactured goods, and those who consume those goods for further processing or who deliver to final consumers. This set of information allows policymakers in general, and sectorial analysts in particular, to study the current situation clearly and to make appropriate evaluations. The same principle should apply to ecosystems.
As already stated in business (Natural Capital Coalition 2016) and finance (Natural Capital Declaration, 2015), the interest of institutional sectors in the environment is not only in terms of marketing and reputation, but mostly in terms of impacts and dependencies: knowledge gaps must be filled in order to provide decision-makers with appropriate sets of information. Thus, it is important to consider interdependencies among ecosystem types and economic sectors. By providing complementary information on supply and use for ecosystem services, we can complete the accounting mosaic and provide measurements of not only production and consumption, but also the excess of yearly use for those ecosystem services characterised by regeneration and absorption rates (La Notte et al. 2019a). These elements ultimately translate the capacity of ecosystems to meet human demand. By considering ecosystem types as full accounting units, in the same way we consider institutional sectors, we further empower the notion of "enlarged production boundary." When considering an "enlarged production boundary," additional information needs to be inserted into the sequence of accounts. In this paper, we are going to first describe the accounting framework to be applied (section 2), and then show how it would work through an illustrative example (section 3). In the discussion (section 4), practical issues and implications will be addressed.

Method: the enlarged accounting framework
The production boundary in the SNA comprises those activities "under the control and responsibility of an institutional unit that uses inputs of labour, capital, and goods and services to produce outputs of goods or services" (European Communities, International Monetary Fund, Organisation for Economic Co-operation and Development, United Nations, World Bank 2009, paragraph 6.24). It excludes: (i) all processes that take place in nature without human intervention by economic agents, and (ii) all those services that are not the result of a consensual transaction between economic agents (European Communities, International Monetary Fund, Organisation for Economic Co-operation and Development, United Nations, World Bank 2009, paragraph 3.91). Following the guidelines in the SEEA Central Framework (United Nations, European Union, Food and Agriculture Organization of the United Nations, International Monetary Fund, Organisation for Economic Co-operation, World Bank 2014a), SEEA EEA (United Nations, European Union, Food and Agriculture Organization of the United Nations, International Monetary Fund, Organisation for Economic Co-operation, World Bank 2014b) and SEEA EEA Technical Recommendations (United Nations, 2017), the overall accounting framework comprises the following types of accounts:
• supply and use tables in physical and monetary terms;
• asset accounts for individual environmental assets/ecosystem types in physical and monetary terms, showing the stock of environmental assets at the beginning and the end of each accounting period and the changes in the stock;
• a sequence of economic accounts highlighting depletion-adjusted economic aggregates; and
• functional accounts recording transactions and other information about economic activities undertaken for environmental purposes.
Here, we focus attention on supply and use tables for ecosystem services, together with the sequence of accounts.
In the SNA, supply and use tables record all flows of products in an economy between different economic units, in monetary terms, with the objective of describing the structure of an economy and the level of economic activity. In SEEA EEA, supply and use tables record the actual flow of ecosystem services provided by different ecosystem types to economic sectors and households. Actual flow is generated by the interaction between what the ecosystem is able to offer (ecosystem service potential) and the human demand for it (La Notte et al. 2019a). It is important to separate (i) degradation as a decline in the condition of ecosystem assets as a result of economic and other human activity, from (ii) natural losses as a decline in the condition of ecosystem assets as a result of natural events (UN 2017). This is an important distinction because it is unjust to accuse an entity of degradation and hold it accountable when in fact the reduction in stock is due to natural events (such as a large storm). To distinguish between degradation and natural loss, we need to determine whether the decline in ecosystem asset condition is a result of economic or human activity or a result of a natural event. Some ecosystem services are characterised by regeneration and absorption rates that can be exceeded, such as provisioning services whose overuse may generate depletion (e.g., overharvesting timber) and sink-related services whose overuse may generate degradation (e.g., overwhelming the capacity of the ecosystem to provide water purification services). The ability to complement the official recording of ecosystem service actual flow with the ecosystem service potential flow allows us to assess whether overuse (measured as the difference between potential and actual flows) is taking place and to what extent. Any negative mismatch can be interpreted as wear and tear on natural capital. Readers should be aware that for provisioning services, the ecosystem service can be considered as a "production factor" (i.e., ecosystem contribution as natural inputs), and thus disentangled from the final SNA product (Vallecillo, La Notte, and Kakoulaki et al. 2019). This approach guarantees there is no double counting. Gross domestic product (GDP) is obtained through the framework of the production accounts. It is the sum of gross value added (GVA) over all industries or sectors plus taxes on products and minus subsidies on products. GVA is defined as output (at basic prices) minus intermediate consumption (at purchaser prices); it is the balancing item of the national accounts' production account. Net value added (NVA) is the outcome of GVA minus consumption of fixed capital (CFC). CFC reflects the decline in the value of the fixed assets of enterprises, governments and owners of dwellings. It includes normal wear and tear, foreseeable ageing (obsolescence) and a normal rate of accidental damage; it excludes unforeseen obsolescence, major catastrophes and the depletion of natural resources. In this section, we will use "depreciation of natural capital" to refer specifically to the notion of CFC in national accounts applied to ecosystem capacity (La Notte et al. 2019a). Taking advantage of the background set out in La Notte and Dalmazzone (2018) on the expansion from the SNA supply and use table to the SEEA Central Framework (SEEA CF) and then SEEA EEA supply and use tables, we proceed to show how it would be conceptually possible to integrate the complementary information reported in supply and use tables into the sequence of accounts.
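For reference, the aggregates just defined obey the following identities; the last line restates the adjustment explored in this paper, with ecosystem depreciation measured from the negative mismatch between potential and actual ecosystem service flows:

$$\text{GVA} = \text{output (basic prices)} - \text{intermediate consumption (purchaser prices)}$$
$$\text{GDP} = \textstyle\sum_i \text{GVA}_i + \text{taxes on products} - \text{subsidies on products}$$
$$\text{NVA} = \text{GVA} - \text{CFC}$$
$$\text{NVA}_{\text{adjusted}} = \text{GVA} - \text{CFC} - \text{ecosystem depreciation}$$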
Figure 1 shows the main fields that should be compiled in supply and use tables. Figure 1 refers to "complementary" tables; the differences between complementary and official supply and use tables are as follows:
• official supply tables report actual flows; complementary information is provided by assessing the potential flows, only for those ecosystem services characterised by regeneration (provisioning) and absorption (sink-related) rates;
• official use tables do not consider the difference between potential and actual flow, which in La Notte, Vallecillo, and Maes (2019b) is defined as the mismatch account.
When reporting the complementary use table, a specific meaning of SNA "accumulation" is highlighted: consumption of fixed capital. This specification is made to facilitate interpretation of the mismatch account (when negative): depreciation of the natural capital. In the same way that manmade capital (e.g., machinery, constructions) deteriorates due to consumption year after year, natural capital deteriorates when overused year after year. In Figure 2, this concept is summarised as "changes in fixed capital," to align both core SNA sectors and the satellite ecosystem type sector needed to calculate NVA. Degradation is generally related to a change in ecosystem assets (UN, 2017). In La Notte, Vallecillo, and Maes (2019b), capacity is defined as the "virtual stock" of ecosystem services. This "virtual stock" is cross-sectional with respect to individual ecosystem assets because it is calculated for individual ecosystem services. Thanks to this function (Figure 3), it is possible to connect the changes recorded in ecosystem services to changes occurring in ecosystem types, and thus in ecosystem assets. Individual ecosystem services can be summed up in order to consider the overall change in the ecosystem asset. The graphical simplification shown in a tabular format in Figure 2 is applied in the illustrative example presented in the next paragraph. The reader should be aware that at this stage the concept is presented in an extremely simplified version: the accounting procedure would require complex refinement of each step with much more detailed information. The whole example should be interpreted as a first block at the start of an in-depth discussion.

Expected results: an illustrative example of applying the accounting framework
Here, we present an illustrative example of the integration of satellite ecosystem services accounts with the current structure of national accounts. We compare different hypotheses to check whether and how GDP could be adjusted to record natural capital depreciation. Let us consider two sectors (agriculture and forestry, manufacturing), three ecosystem units (cropland, woodland and forest, inland waters) providing three different ecosystem services (pollination, timber provision, and water purification), and the household sector. For water purification, the annual flow that does not exceed the absorption rate is 50 currency units; for timber provision (biomass growth), the annual flow that does not exceed the regeneration rate is 60 currency units. For crop pollination, overuse of the service cannot take place since the yearly flow of this service depends on initial conditions (e.g., extent of land cover) that do not vary as a consequence of actual use.
We consider two situations:
• Situation 1: agriculture and forestry sectors do not manage their resources according to sustainable practices, i.e., (i) agriculture emits more tons of nitrogen (70 currency units) than inland waters can remove, and (ii) forestry harvests at a rate (70 currency units) higher than the annual increment in biomass regeneration;
• Situation 2: agriculture and forestry sectors manage their resources according to sustainable practices, i.e., (i) agriculture reduces nitrogen emissions to the level inland waters are able to remove, and (ii) forestry harvests at a rate that is equal to the net annual increment of wood biomass.
Table 1 reflects the "No ecosystem service" approach, i.e., the traditional accounts as reported by the core SNA, where neither row items related to ecosystem services, nor column items related to ecosystem types, are included. In this simplified example, the recording ignores all other inputs and potentially relevant flows (e.g., labour costs, retail margins). The sum of value added (VA) is 128 currency units. Prices are hypothesized as constant, because the driver of change is the biophysical assessment and no room is given to inflation. In Table 1, no difference is measurable between Situation 1 and Situation 2. In Table 2, the SEEA EEA extends this recording to include the flow of the timber provision service from "Forest and other wooded land" (reported in Table 2 as "Forest") to the forestry sector, the pollination service from "Cropland" to the agricultural sector, and the water purification service from "Inland waters" to water companies. In the current version of the SEEA EEA we record the actual flow, which for wood biomass growth and water purification will be higher in Situation 1 and lower in Situation 2. The main effect, in comparing with "no ecosystem services" and between Situation 1 and Situation 2, is to partition the VA of economic sectors between agriculture and forestry, and the ecosystem units. The overall VA is unchanged (at 128 currency units) because the interchange between intermediate sectors redistributes the overall amount. Please note that, in this example, water purification is allocated to the agricultural sector because Situation 2 (sustainable practices) targets the polluters, e.g., through the agri-environmental payments (within EU rural development programmes) that compensate farmers who reduce the use of fertilisers. The underlying hypothesis is that there are no changes in agricultural production because farmers adopt different management practices, supported by agri-environmental payments. The water purification service mediates part of the pollution emissions in each catchment; what is not mediated flows into downstream catchments until it reaches the sea: spatially explicit assessments allow this kind of measurement and analysis. In Table 3, the SEEA EEA reports the outcomes described by considering ecosystem types as institutional sectors and thus measuring and reporting changes in their regeneration and absorption rates. Here we not only consider the actual flow, but also the potential flow, which in the case of sustainable practices will not exceed the regeneration rate (annual increment of wood biomass) for timber provision, or the absorption rate (pollutant emission sustainability threshold) for water purification.
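A minimal sketch of the arithmetic implied by this recording, using the illustrative currency units introduced above (the thresholds of 50 and 60 units and the Situation 1 flows of 70 units each), is given below; the resulting totals anticipate those reported in the next paragraph.

```python
# Minimal sketch, in the paper's hypothetical currency units, of how the
# mismatch between potential and actual flows becomes an ecosystem
# depreciation adjustment under Situation 1 (unsustainable practices).
unadjusted_va = 128  # sum of value added in the core SNA recording (Table 1)

# ecosystem service -> (potential flow, actual flow under Situation 1)
# Crop pollination is omitted because, as noted above, it cannot be overused.
services = {
    "water purification (inland waters)": (50, 70),
    "timber provision (forest)": (60, 70),
}

ecosystem_depreciation = sum(
    max(actual - potential, 0) for potential, actual in services.values()
)

adjusted_va = unadjusted_va - ecosystem_depreciation
print(f"ecosystem depreciation = {ecosystem_depreciation} currency units")  # 30
print(f"adjusted sum of value added = {adjusted_va} currency units")        # 98
```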
By recording overuse of ecosystem services, we can clearly see that the sum of VA is much lower (98 currency units) under unsustainable practices than under sustainable practices (128 currency units). In terms of aggregated economic indicators, this additional information would allow us to calculate a sustainability-adjusted macroeconomic indicator. In terms of establishing and monitoring environmental policies, Table 3 could provide the complementary information needed to assess whether improvements are likely to be generated by the planned actions and sustainability targets. This framework would alter neither the official SNA nor the current SEEA EEA, because all the measurements are "added" to complement and not to modify standardised structures. Thanks to the information provided by external satellite accounts, we could estimate the ecosystem performance in terms of VA when an unsustainable production pattern is implemented (-30 currency units). Unsustainable practices affect the capacity of the ecosystem to provide the same amount of services that (in this case) support the SNA "harvest": the negative values that we read in the "ecosystem depreciation" column quantify degradation, which will affect the future generation of ecosystem service flows.

Discussion and conclusions
The tracking of patterns of unsustainable use of ecosystem services has been demonstrated at the microeconomic level in physical and monetary terms (Ogilvy 2015; Ogilvy et al. 2018). This paper targets the same objective at the macroeconomic level by attributing the source of unsustainability, quantifying the producer surplus gained through the unsustainable patterns of use, and thus influencing the direction of future policy actions. As pointed out by Bartelmus (2015), the difference between an accounting system and a set of statistics and indicators is that the former requires a systemic approach with strict accounting conventions and rules, while the latter can be presented without aggregation in relatively loose frameworks. The three characteristics that Bartelmus highlights for an accounting system are: comprehensiveness, consistency and integration. Integration in particular requires "recording all consequences of a transaction in all accounts and balance sheets" (Bartelmus 2015, 293). By looking specifically at integration, the illustrative example attempts to show how economic production and consumption can cause overuse of natural capital, which risks affecting the sustainability of economic performance in the medium and long term. Ecosystems are considered as natural capital and the overuse of their source and sink functions is assessed as natural capital consumption, i.e., the SNA concept of produced capital consumption can be extended to include natural capital as "an additional production cost" (Bartelmus 2015). The value of unpriced ecosystem services is part of the comparatively higher producer surplus (profits) of enterprises that benefit from them. If these services were priced on markets, the national accounts would record their purchase as capital formation and current production cost. In the example, the flow of pollination and biomass growth into agriculture is an additional production input that the sector receives for free from ecosystems. Also, water purification is a service provided for free by ecosystems to clean the negative externalities mainly generated by the use of chemical fertilisers and manure.
On this specific point, Bartelmus (2014, 2015) also states that degradation costs (in our example, water purification) should be allocated to the economic actor who generated them, referring to the policy-oriented polluter pays principle in the 1992 Rio Declaration (United Nations 1994, Principle 16). The fact that the primary sector does not have to pay for those services increases its profit: if it had to pay for them, then its profit would be negative (−55 for Situation 1 and −25 for Situation 2). When there is overuse of these services (Table 3), depreciation of ecosystem capital occurs. The possibility of valuing ecosystem services accounts in monetary terms allows us to deal with the overuse of ecosystem source and sink functions by treating them as ecosystem depreciation, and thus to propose an environmentally-adjusted NVA (e.g., please refer to the water purification application in La Notte et al. 2019a). Weber (2011) applies the same depreciation concept, but treats the ecosystem as a whole (holistic view) rather than considering ecosystem services individually. However, the valuation in monetary terms presents challenges of its own, i.e., valuing non-market ecosystem services by using techniques that are consistent with the exchange-value principles adopted in the SNA (please refer to chapter 6 in UN 2017).

This measurement could be useful for policymaking: the linkage of ecosystem services to economic activities allows us to account for loss, repair and substitution of natural capital (Bartelmus 2015), and thus helps us formulate and monitor environmental policies. The idea proposed in this paper is only a starting point: it needs further development and testing. Practitioners should be aware that the suggested approach could contribute to the assessment of degradation, but it is not meant as a substitute for the assessment of ecosystem condition: ecosystem accounts and ecosystem services accounts proposed within the SEEA EEA (UN, 2014b; UN, 2017) work well when they work together, since degradation can occur even with sustainable use patterns (e.g., fire, storms, drought).

Disclosure statement

No potential conflict of interest was reported by the authors.
The Impact of Middle Turbinate Concha Bullosa on the Severity of Inferior Turbinate Hypertrophy in Patients with a Deviated Nasal Septum

BACKGROUND AND PURPOSE: Inferior turbinate hypertrophy and concha bullosa often occur opposite the direction of nasal septal deviation. The objective of this retrospective study was to determine whether a concha bullosa impacts inferior turbinate hypertrophy in patients who have nasal septal deviation.

MATERIALS AND METHODS: The electronic medical record was used to identify sinus CT scans exhibiting nasal septal deviation for 100 adult subjects without and 100 subjects with unilateral middle turbinate concha bullosa. Exclusion criteria included previous sinonasal surgery, tumor, sinusitis, septal perforation, and craniofacial trauma. Nasal septal deviation was characterized in the coronal plane by distance from the midline (severity) and height from the nasal floor. Measurement differences between sides for inferior turbinate width (overall and bone), medial mucosa, and distance to the lateral nasal wall were calculated as inferior turbinate hypertrophy indicators.

RESULTS: The cohorts with and without concha bullosa were similarly matched for age, sex, and nasal septal deviation severity, though nasal septal deviation height was greater in the cohort with concha bullosa than in the cohort without concha bullosa (19.1 ± 4.3 mm versus 13.5 ± 4.1 mm, P < .001). Compensatory inferior turbinate hypertrophy was significantly greater in the cohort without concha bullosa than in the cohort with it as measured by side-to-side differences in turbinate overall width, bone width, and distance to the lateral nasal wall (P < .01), but not the medial mucosa. Multiple linear regression analyses found nasal septal deviation severity and height to be significant predictors of inferior turbinate hypertrophy with positive and negative relationships, respectively (P < .001).

CONCLUSIONS: Inferior turbinate hypertrophy is directly proportional to nasal septal deviation severity and inversely proportional to nasal septal deviation height. The effect of a concha bullosa on inferior turbinate hypertrophy is primarily mediated through influence on septal morphology, because the nasal septal deviation apex tends to be positioned more superior from the nasal floor in these patients.

Nasal airway obstruction is a challenging problem that can arise from multiple etiologies, which include structural abnormalities such as nasal septal deformity and turbinate hypertrophy. Inferior turbinate hypertrophy (ITH) has received much attention in the literature in the debate over optimal surgical management of nasal obstruction. 1 Although the term "hypertrophy" is most accurately reserved for the overall enlargement of an organ because of increasing cell size, its use is widely accepted in the setting of turbinate enlargement secondary to greater thickness of soft-tissue and/or bone components. [2][3][4][5] Although limited normative data has been published on inferior turbinate size by using CT, ITH remains a clinical diagnosis. 6 ITH has been commonly described as occurring contralateral to the direction of nasal septal deviation (NSD) or, alternatively phrased, along the concave side of the septum. 2,3,[7][8][9][10][11][12][13][14] Because of this association, it has been speculated that ITH is compensatory, to create physiologically favorable nasal airflow turbulence and to protect the mucosa from excessive drying and crusting with increased air flow.
In other words, the inferior turbinate may have progressively enlarged to fill the void in the nasal cavity created by the shifted midline with the undesirable result of a smaller-than-expected cross-sectional area for air passage. 2 Using septoplasty to correct NSD without addressing the ITH may have the unintended consequence of worsening symptomatic nasal obstruction. 7

Concha bullosa is an anatomic variant of ethmoid air cell development in which pneumatization most commonly extends into the middle turbinate. This can be limited to the vertical lamella, extend into the bulbous portion, or extensively involve the vertical lamella and bulbous segment of the middle turbinate. 15 If one allowed some outlier data, the prevalence is likely in the range of 21%-53%. [15][16][17][18][19][20][21][22][23][24][25][26][27] Some of the reported variability can be attributed to differences in the populations being evaluated, the type of evaluation (ie, CT versus surgery), and the definition of concha bullosa (ie, whether to include small lamellar types). Similar to ITH, a preponderance of published reports supports a strong association between the presence of concha bullosa and NSD, in which the nasal septum typically bows toward the contralateral side and may increasingly do so when middle turbinate pneumatization is greatest. 23,24,[27][28][29][30] Moreover, in bilateral cases, the nasal septum is usually near midline when the conchae bullosa are balanced in size but usually deviates away from an asymmetrically enlarged dominant concha bullosa. When one controls for the shape and severity of a deviated nasal septum, it has yet to be determined whether a concha bullosa significantly influences the presence of ipsilateral ITH. Logically, an interaction may exist between the structures because the concha bullosa and ITH both commonly develop along the concave side of a deviated nasal septum within a secondarily widened nasal cavity. The objective of this study was to assess patients with NSD on CT in an attempt to identify whether the presence or absence of a concha bullosa influences ipsilateral ITH.

MATERIALS AND METHODS

This retrospective study, which is compliant with the Health Insurance Portability and Accountability Act, was approved by the institutional review board at the authors' institution, and the need for informed consent was waived. The radiology information management system was used to identify patients who underwent noncontrast sinus CT between January 1, 2011, and July 1, 2014. All sinus CT scans were acquired with a 64-detector scanner (LightSpeed VCT or Discovery CT750 HD; GE Healthcare, Milwaukee, Wisconsin), and the same CT protocol was used for all studies (120 kV[peak], 180 mA, 0.5-second rotation time, 0.531 pitch, and 0.625-mm section collimation). No topical intranasal vasoconstrictors were administered at the time of imaging. The sinus CT scans and corresponding electronic medical records were evaluated in consecutive reverse-chronologic fashion to determine study eligibility. We specifically excluded patients with a Lund-Mackay score greater than zero, prior sinonasal surgery, CT or clinical findings of sinonasal polyposis, a history of head and neck tumor or irradiation, nasal septal perforation, and a documented history of craniofacial trauma. Inclusion required that patients were at least 18 years of age at the time of imaging and that the sinus CT was of diagnostic quality.
Additionally, all patients were required to have unilateral NSD without a minimum threshold for severity. Subjects with S-shaped or other complex bidirectional nasal septal deformities were excluded. In total, we enrolled 200 patients with NSD: 100 with a unilateral middle turbinate concha bullosa (CB+) and 100 without a concha bullosa (CB−). As previously published, concha bullosa was defined as >50% pneumatization of the vertical height of the middle turbinate, thereby excluding very small conchae bullosa or pneumatization of the vertical lamella only. 24

Image assessment was performed by a board-certified neuroradiologist by using a PACS. The following measurements were performed on 1.25-mm coronal reformations that were rendered in a bone algorithm and viewed at window level and width of 450 HU and 2500 HU, respectively:

Concha Bullosa. The maximum transverse width and craniocaudal length of the middle turbinate concha bullosa (CB+ group only).

NSD. Using the image on which the NSD was most severe, we drew a line from the crista galli to the nasal crest to define the midline. An orthogonal measurement was taken from the midline to the apex of maximal nasal septal deviation (NSD severity). The vertical distance from the apex to the floor of the nasal cavity was measured parallel to midline (NSD height), and the direction of septal deviation was recorded (Fig 1).

Inferior Turbinate. Because no standard definition exists for ITH on CT, 4 measurements were acquired to document the width of the inferior turbinate and the degree to which it projected into the nasal cavity. 1) Lateral offset (Fig 2A) represents the maximum transverse distance from the most medial aspect of the inferior turbinate bone to the lateral nasal wall. 2) Width (Fig 2A) was determined by the maximum transverse width of the pendulous portion of the inferior turbinate inclusive of soft tissue and bone. 3) Bone width (Fig 2B) represents the maximum transverse width of the inferior turbinate bone. 4) Medial mucosa width (Fig 2B) was a transverse measurement at the point of maximal soft-tissue thickness along the medial aspect of the inferior turbinate.

For consistency, these measurements were all performed by the same neuroradiologist at the level of the ostiomeatal complex on the posterior-most coronal image on which the primary maxillary sinus ostium was visible. As an indicator of ITH for each patient, side-to-side differences (Δ) in inferior turbinate measurements were calculated by subtracting the side ipsilateral to the apex of the NSD from the contralateral side. In other words, a positive value for this difference would support the hypothesis that the inferior turbinate on the concave side of the nasal septum was larger than the one on the opposite side. To assess the potentially confounding influence of vasocongestion related to the normal nasal cycle, we recorded the maximal mucosal thickness along the inferior aspect of the middle turbinate for the concave and convex sides of the deviated nasal septum. Patient characteristics and side-to-side differences for the inferior turbinate measurements were compared between the presence and absence of the concha bullosa by using the 2-sample t test for continuous variables and the χ2 test for categoric variables.
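To make the Δ indicators concrete, the sketch below shows one way the side-to-side differences and the between-cohort comparison could be computed from a per-patient table of measurements. The column names and data layout are hypothetical assumptions, not the authors' actual analysis code; the statistical calls simply mirror the tests named above (2-sample t test and χ2 test).

```python
# Hypothetical sketch of the Δ (side-to-side difference) indicators and the CB+/CB- comparison.
# Column names such as "width_concave" are assumptions about the data layout, not the paper's.
import pandas as pd
from scipy import stats

MEASURES = ["lateral_offset", "width", "bone_width", "medial_mucosa_width"]

def delta_indicators(df: pd.DataFrame) -> pd.DataFrame:
    """Concave-side minus convex-side measurement per patient (positive values suggest ITH)."""
    out = pd.DataFrame(index=df.index)
    for m in MEASURES:
        out[f"delta_{m}"] = df[f"{m}_concave"] - df[f"{m}_convex"]
    return out

def compare_cohorts(deltas: pd.DataFrame, has_cb: pd.Series) -> dict[str, float]:
    """2-sample t test of each Δ indicator between the CB+ and CB- cohorts; returns P values."""
    return {
        col: stats.ttest_ind(deltas.loc[has_cb, col], deltas.loc[~has_cb, col]).pvalue
        for col in deltas.columns
    }

# A categoric variable such as sex would instead be compared with a chi-square test, e.g.:
# stats.chi2_contingency(pd.crosstab(df["sex"], has_cb))
```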
Pearson correlation coefficients were also used to assess the strength and direction of linear relationships for the inferior turbinate measurement differences relative to NSD severity and NSD height for all 200 subjects, in addition to being stratified by CB+ and CB− groups. Multiple linear regression models were conducted for predicting the side-to-side differences in inferior turbinate measurements, controlling for CB+, NSD severity, NSD height, and potential interaction terms. A paired t test was used to compare the middle turbinate mucosa thickness on the concave side of the deviated nasal septum with that on the convex side. Potential correlations between the concha bullosa width, concha bullosa length, NSD severity, and NSD height were examined by using Pearson correlation coefficients, and simple linear regressions were used for modeling the relationships. All analyses were performed with SAS 9.4 (SAS Institute, Cary, North Carolina). All hypothesis tests were 2-sided, and statistical significance was defined as P < .05.

RESULTS

The CB+ and CB− cohorts were similarly matched in age, sex, and NSD severity (Table 1). The mean (SD) transverse width and craniocaudal length of the middle turbinate conchae bullosa were 7.5 mm (2.2 mm) and 15.5 mm (3.8 mm), respectively, in the CB+ group, and the NSD height was significantly greater in the presence of a contralateral middle turbinate concha bullosa (mean, 19.1 ± 4.3 mm versus 13.5 ± 4.1 mm; P < .001). Concha bullosa width showed statistically significant moderate correlations for NSD height (Pearson r = 0.30, P < .01) and NSD severity (Pearson r = 0.20, P = .04). However, concha bullosa height was not significantly correlated with NSD height or NSD severity. Because concha bullosa height and width were highly correlated (Pearson r = 0.53, P < .001), simple linear regression was performed by using only concha bullosa width as the predictor. There were statistically significant positive relationships between NSD height and concha bullosa width (β coefficient = 0.61, standard error = 0.20, P < .01) and NSD severity and concha bullosa width (β coefficient = 0.19, standard error = 0.09, P < .04).

Bivariate analysis was initially undertaken to evaluate the indicators of ITH related to concha bullosa status. No significant relationships were identified on the basis of age or sex. As seen in Table 1, the values for Δlateral offset, Δwidth, and Δbone width were greater in the CB− group compared with CB+, but the results for Δmedial mucosa width did not reach significance. However, this comparison does not correct for the potentially confounding influence of NSD severity and NSD height. Pearson correlation coefficients were examined to determine the strength and direction of potential relationships between the side-to-side differences in inferior turbinate measurements and NSD severity and NSD height (Table 2). When we evaluated the data in aggregate and divided into CB+ and CB− groups, Δlateral offset, Δwidth, and Δbone width showed strong potential as indicators of ITH as correlated to NSD severity and NSD height, while the level of strength for Δmedial mucosa width was again not as strong. Regression models were constructed with Δlateral offset, Δwidth, Δbone width, and Δmedial mucosa width as the dependent variables, respectively, while NSD severity, NSD height, CB+, and appropriate statistical interaction terms were the independent variables. No statistically significant interactions were identified, so these terms were removed.
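The regression step described above can be sketched as follows. This is not the authors' SAS code; it is a hypothetical statsmodels equivalent in which the variable names (nsd_severity, nsd_height, cb_present) and the interaction structure are assumptions based on the description in the text.

```python
# Hypothetical sketch of one of the multiple linear regression models described above,
# using statsmodels rather than the SAS procedures actually employed in the study.
import pandas as pd
import statsmodels.formula.api as smf

def fit_ith_model(df: pd.DataFrame, outcome: str = "delta_width"):
    """Regress a Δ inferior turbinate indicator on NSD severity, NSD height, CB status,
    and their interactions; non-significant interaction terms can then be dropped."""
    formula = (f"{outcome} ~ nsd_severity + nsd_height + cb_present"
               " + nsd_severity:cb_present + nsd_height:cb_present")
    return smf.ols(formula, data=df).fit()

# Example usage, assuming a DataFrame `df` holding one row per patient:
# model = fit_ith_model(df, outcome="delta_lateral_offset")
# print(model.summary())
```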
Because CB+ was highly associated with NSD height (P < .001), these variables essentially conveyed the same information so that both could not achieve significance within the same model. Because the models containing NSD height and NSD severity had the best overall statistical significance, CB+ was removed from the regression models. The regression models for Δlateral offset, Δwidth, and Δbone width reached significance (P < .001), with all of these variables showing an inverse relationship with NSD height and a positive relationship with NSD severity (Table 3). The model for Δmedial mucosa width approached, but did not reach, statistical significance (P = .06). In other words, as the nasal septum further deviates, the inferior turbinate on the concave side of the septum becomes asymmetrically enlarged, but the degree of enlargement is abated as the apex of the NSD moves farther away from the floor of the nasal cavity. This relative increase in inferior turbinate size can be explained with this model as greater projection of the turbinate bone more medially into the nasal cavity (Δlateral offset) and an increase in the width of the pendulous portion of the inferior turbinate (Δwidth), with the latter driven by thickening of bone more than mucosa.

DISCUSSION

Concordant with previous reports, the current study supports the association between NSD and contralateral ITH. 2,3,7-14 NSD severity and NSD height best predicted the severity of ITH without significant contribution from the presence or absence of the concha bullosa. Although some of the prior reports did not objectively measure the severity of NSD, 1 study found that inferior turbinate bone thickness on the side opposite the NSD positively correlated with NSD severity as measured by septal angle and volume. 13 In contrast, Akoglu et al 9 attempted to associate the angle of the deviated septum with the cross-sectional areas for hypertrophied inferior turbinate bone, mucosa, and overall size, but no significant correlation was found. This may be because a septal angle eliminates some useful information. When one measures the angle from the region of the crista galli, severe NSD centered closer to the floor of the nasal cavity and a milder NSD positioned more superiorly can yield the same septal angle but have a different impact on septal morphology and surrounding structures. Therefore, NSD in the current study was characterized by 2 variables, NSD severity and NSD height. The elimination of the concha bullosa from the regression model does not mean that it is irrelevant, because the bivariate analysis clearly showed significant associations between the concha bullosa and the indicators of ITH (ie, side-to-side measurement differences in inferior turbinate bone width, overall width, and the degree of intranasal projection). Instead, it merely indicates that the presence of a concha bullosa did not provide additional statistical significance in a multiple regression model because it presumably conveys much of the same information as the parameters of NSD. On the basis of the current results, the severity of ITH correlates directly with NSD severity and inversely with NSD height. This correlation effectively accounts for the observation that the apex of maximal NSD tends to be located more superiorly in the presence of a unilateral concha bullosa. Prior studies have documented the presence of ITH in NSD in a variety of different ways.
Some of the earliest work on ITH used acoustic rhinometry to indirectly evaluate the extent of ITH by estimating the cross-sectional area of the nasal cavity as a function of the distance from the nostrils. 2,7 Because of an incomplete response following topical nasal decongestant application, it was concluded that ITH must result from combined mucosal and skeletal hypertrophy. CT has been used as an alternative form of in vivo assessment for ITH associated with NSD by acquiring different measurements of bone and soft-tissue components of the inferior turbinate and comparing these results internally with the contralateral side or externally with a control population. 8,9,[11][12][13] In general, these CT data support the findings of acoustic rhinometry that compensatory ITH arises from increased bone and mucosal thickness. Additionally, as measured by distance or angle, the inferior turbinate on the concave side of a deviated septum projects farther medially into the nasal cavity. 11,14 Histopathologically, Berger et al 3 compared resected inferior turbinate specimens from patients undergoing surgery for NSD with ITH against freshly harvested postmortem specimens. The conchal bone in the ITH group showed a 2-fold increase in thickness, which accounted for approximately 75% of the difference in overall turbinate thickness compared with the cadaveric controls, with no significant difference in bone type (lamellar versus compact). The mucosal contribution to ITH was much less, though the appropriateness of comparing surgically resected turbinates with postmortem specimens has been questioned. 5 When viewed in aggregate, this previously published work supports compensatory ITH arising from both bone and mucosal thickening. The tendency of the bone findings to be slightly more reproducible across studies may relate to the inherent variability in mucosal thickening introduced by normal mucosal cycling. In the current study, the lack of a difference in middle turbinate mucosal thickness along the concave and convex sides of the deviated nasal septum argues against the presence of a systematic bias from the nasal cycle. However, the multivariable model for Δmedial mucosal width showed that there was no statistically significant difference (P = .06), and this might be attributable to the added variability introduced by the nasal cycle. In contrast, the 3 ITH variables that included osseous structures were all highly significant.

The choice of representative measurements in the current study was grounded in the previously published body of literature. 2,3,7-9,11-14 Inferior turbinate width (total and bone only) and the distance that the inferior turbinate projects into the nasal cavity were selected as appropriate representations of ITH. Because the medial mucosal layer of the inferior turbinate tends to be the widest, as it contains the thickest lamina propria, it was also chosen for measurement. 3 We chose the coronal image used for assessment through the level of the maxillary sinus ostium to include the concha bullosa, while noting that the contributions of bone and mucosa to ITH have been validated for the middle third of the inferior turbinate. 8,9,12 The precise mechanism underlying the development of compensatory ITH in NSD remains unclear, but there is evidence to suggest a long-standing acquired process.
Aslan et al 12 stratified adult (mean age, 40.2 ± 12.4 years) and pediatric (mean age, 10.9 ± 3.8 years) cohorts into those who had NSD versus those with a straight or nearly straight septum and calculated interturbinate ratios to determine relative size differences in bone and soft-tissue components of the inferior turbinates (ie, an increased ratio suggested ITH). For bone more than soft-tissue structures, the adults with NSD had significantly higher interturbinate ratios compared with the adults with a straight septum, thereby indicating ITH. In contrast, the interturbinate ratios did not significantly differ between the pediatric groups on the basis of the presence of NSD, and the adults with NSD had significantly higher interturbinate ratios than the children with NSD. The results were interpreted as ITH being an acquired compensatory process in NSD rather than a congenital abnormality. A separate study compared patients with NSD and stratified them as to whether the NSD was thought posttraumatic or congenital. 14 Inferior turbinate measurements were compared between the convex and concave sides of the septum to delineate ITH. In the congenital group, the bone of the inferior turbinates on the concave side of the septum projected more medially into the nasal cavity on the basis of the distance and angle relative to the lateral nasal wall. The authors concluded that the conchal bone plays a much greater role in ITH in congenital NSD, underscoring the much longer time needed to acquire osseous versus soft-tissue changes. The notion that the mucosal component of compensatory ITH is more dynamic is also supported by a study that evaluated patients who underwent septoplasty without a turbinate operation and then underwent repeat CT at least 1 year postoperatively. 10 On average, the medial mucosa of the hypertrophied inferior turbinate on the concave side of the septum preoperatively became thinner by approximately 1 mm after the septal deformity was corrected. This finding was presumed to represent the mucosal response to narrowing of the adjacent air channel caused by moving the septum back to midline. In contrast, the turbinate skeletal structure was unchanged.

While long-standing NSD appears to lead to the development of ITH, the precise relationship between NSD and concha bullosa continues to be debated. A significant body of evidence not only associates NSD and concha bullosa but also indicates that the NSD is typically directed away from the concha bullosa when unilateral or the dominant concha bullosa when bilateral. 16,23,24,28,29 The severity of NSD tends to be greater in larger or more extensively pneumatized conchae bullosa; conversely, the prevalence of a concha bullosa correlates positively with the severity of NSD. 23,24,28,29 The current results further strengthen the intimate relationship between concha bullosa and NSD by demonstrating that the apex of maximum NSD is positioned more superiorly when a unilateral concha bullosa is present and that the severity of NSD increases in direct proportion to concha bullosa width. However, causation remains uncertain. Because a number of studies have documented a preserved air channel between the medial aspect of the concha bullosa and the nasal septum, it is unlikely that an enlarging concha bullosa directly pushes the septum. 16,23,24
It has been previously suggested that concha bullosa and NSD represent 2 incidental and potentially unrelated developmental anomalies that tend to appear concomitantly or that a concha bullosa develops to fill in vacant space created by a preexisting NSD, termed the "e vacuo" hypothesis. 31 However, a study comparing dizygotic and monozygotic twins found that the intrapair similarities were virtually identical for the presence of a deviated nasal septum (23% versus 25%), but monozygotic twins had an intrapair similarity for concha bullosa of 70% compared with 25% for dizygotic twins, suggesting a genetic influence in the presence of a concha bullosa. 32 Thus, a high probability of congenital coexistence of NSD and concha bullosa seems questionable. In addition, the concha bullosa would be more apt to precede NSD because of a stronger genetic link, thereby contradicting the e vacuo hypothesis. Additional observations have further disputed these prevailing hypotheses. 23 Not all individuals with septal deviation have concha bullosa, while most cases with a large or dominant concha bullosa have septal deviation. Moreover, there are instances of medium-to-large bilateral conchae bullosa in the setting of a straight nasal septum. Consequently, it is difficult to establish a purely congenital association between the 2 anatomic variants, and the compensatory growth of a concha bullosa to fill in the space vacated by a deviated nasal septum also seems implausible. In fact, Sazgar et al 23 recently hypothesized that it is the NSD that is more likely to be compensatory for a concha bullosa by synthesizing this idea with the prevailing literature and substantiating it with a discussion of fluid dynamics.

The limitations of the current study are primarily related to the generalizability of the results regarding the relationship of NSD, concha bullosa, and ITH. In an attempt to eliminate confounding variables, the strict inclusion criteria eliminated patients with active inflammatory sinus disease, posttraumatic sinonasal deformity, bilateral conchae bullosa, and small lamellar conchae bullosa. Moreover, the study population consisted of patients with a clear unilateral pattern of septal deflection, and it has been recognized that different patterns of concomitant turbinal hyperplasia and concha bullosa vary on the basis of NSD morphology. 33 Last, ITH and NSD were exclusively defined by CT appearance with no correlate as to the presence of clinically significant nasal obstruction. Because the study population was limited to subjects with clear sinuses on CT, the overwhelming majority never underwent a thorough rhinologic evaluation, thereby limiting the ability to make meaningful imaging-clinical correlations. In reality, symptomatic nasal obstruction from ITH and NSD is best determined clinically. The CT-based measurements performed in this study were a tool to identify anatomic features that contribute to the development of ITH, not a recommended form of clinical assessment.

CONCLUSIONS

The degree of compensatory ITH increases in proportion to the severity of NSD and decreases as the NSD apex moves farther superiorly from the nasal floor. Although the presence of a unilateral middle turbinate concha bullosa is associated with less severe ITH, this effect is primarily attributed to the higher apex of the NSD seen with concha bullosa, as opposed to an independent relationship between concha bullosa and ITH, both of which are commonly found along the concave side of a deviated septum.